Compare commits

..

95 Commits

Author SHA1 Message Date
Hanefi Onaldi 11db00990f
Bump Citus version to 10.0.8 2023-04-26 16:05:20 +03:00
Hanefi Onaldi bd5544fdc5
Add missing entry for 10.0.8 2023-04-26 16:05:20 +03:00
Hanefi Onaldi e5e50570b3
Add changelog entries for 10.0.8 2023-04-26 13:31:50 +03:00
aykut-bozkurt 758cda1394 fix single tuple result memory leak (#6724)
We should not forget to free the PGresult when we receive a single-tuple result
from an internal backend.
Single-tuple results are normally freed by our ReceiveResults for the
`tupleDescriptor != NULL` flow, but not for those with `tupleDescriptor
== NULL`. See PR #6722 for details.

DESCRIPTION: Fixes a memory leak issue with query results that return a
single row.
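
A minimal sketch of the fix, assuming a libpq result-draining loop (the function
and variable names are illustrative, not the actual ReceiveResults code):

```c
#include <stdbool.h>
#include <libpq-fe.h>

/*
 * Hypothetical sketch: every PGresult handed out by libpq must be freed
 * with PQclear(), even when no tuple descriptor is given and the rows
 * are simply discarded.
 */
static void
DrainResults(PGconn *connection, bool buildTuples)
{
	PGresult *result = PQgetResult(connection);

	while (result != NULL)
	{
		if (buildTuples)
		{
			/* ... convert the rows into tuples using the tuple descriptor ... */
		}

		/* previously leaked in the tupleDescriptor == NULL flow */
		PQclear(result);

		result = PQgetResult(connection);
	}
}
```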

(cherry picked from commit 9e69dd0e7f)
2023-02-17 14:41:30 +03:00
Halil Ozan Akgul 8b4bab14a2 Fixes the bug where undistribute can drop Citus extension
(cherry picked from commit b255706189)

 Conflicts:
	src/backend/distributed/commands/alter_table.c
	src/include/distributed/metadata/dependency.h
2022-06-13 16:36:59 +03:00
jeff-davis 115f2c124a Columnar: fix wraparound bug. (#5962)
columnar_vacuum_rel() now advances relfrozenxid.

Fixes #5958.

(cherry picked from commit 74ce210f8b)
2022-05-27 09:33:18 -07:00
Onur Tirtir 4b7af5aaaf Fix coordinator/worker query targetlists for agg. that we cannot push-down (#5679)
Previously, we were wrapping targetlist nodes with Vars that reference
the result of the worker query whenever the node itself was not a `Const`
or a `Param`. However, we should not do that unless the node itself is
a `Var` node or contains a `Var` within it (e.g.: `OpExpr(Var(column_a) > 2)`).
Otherwise, when the worker query returns an empty result set, the combine
query execution would crash since the `Var` would point to an empty
tuple slot, which the node-executor methods cannot handle.
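
A minimal sketch of the changed condition, assuming PostgreSQL's
`contain_var_clause()` is used for the containment check (the helper name is
illustrative, not the actual Citus planner code):

```c
#include "postgres.h"
#include "nodes/primnodes.h"
#include "optimizer/optimizer.h"	/* contain_var_clause(); optimizer/var.h before PG 12 */

/*
 * Hypothetical sketch: only wrap a target entry with a Var pointing into
 * the worker query result if the expression actually references a column,
 * i.e. it is a Var itself or contains one, e.g. OpExpr(Var(column_a) > 2).
 */
static bool
ShouldWrapWithWorkerVar(Node *targetExpr)
{
	return IsA(targetExpr, Var) || contain_var_clause(targetExpr);
}
```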

(cherry picked from commit 79442df1b7)
2022-02-07 11:40:39 +03:00
Onur Tirtir 6b87b3ea27 Skip deleting options if columnar.options is already dropped (#5458)
Drop extension might cascade to columnar.options before dropping a
columnar table. In that case, we were getting the error below when opening
columnar.options to delete records for the columnar table that we are
about to drop: "ERROR:  could not open relation with OID 0".

I somehow reproduced this bug easily when upgrading pg, which is why I added
the test to after_pg_upgrade_schedule.
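
A minimal sketch of the guard, assuming a lookup through PostgreSQL's
`get_relname_relid()` (the helper and the namespace argument are illustrative,
not the actual Citus code):

```c
#include "postgres.h"
#include "utils/lsyscache.h"	/* get_relname_relid() */

/*
 * Hypothetical sketch: before opening columnar.options to delete a row,
 * check that the catalog table still exists; DROP EXTENSION may already
 * have cascaded to it, in which case there is nothing left to clean up.
 */
static bool
ColumnarOptionsCatalogExists(Oid columnarNamespaceId)
{
	return OidIsValid(get_relname_relid("options", columnarNamespaceId));
}
```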

(cherry picked from commit 25024b776e)

 Conflicts:
	src/test/regress/after_pg_upgrade_schedule
	src/test/regress/expected/upgrade_columnar_after.out
	src/test/regress/sql/upgrade_columnar_after.sql
2021-11-12 15:17:59 +03:00
Hanefi Onaldi d13b989cff
Bump Citus version to 10.0.6 2021-11-12 14:14:33 +03:00
Hanefi Onaldi 0f62f1a93a
Add changelog entries for 10.0.6
(cherry picked from commit 45549d20a6)
2021-11-12 14:12:16 +03:00
Sait Talha Nisanci 83585e32f9 Adjust tests for release-10.0 2021-11-08 12:51:23 +03:00
Sait Talha Nisanci 0f1f55c287 Fix missing from entry
(cherry picked from commit a0e0759f73)
2021-11-08 12:39:23 +03:00
Onder Kalaci 3e8348c29e Deparse/parse the local cached queries
With local query caching, we try to avoid the deparse/parse stages as the
operation is too costly.

However, we can do the deparse/parse operations once per cached query, right
before we put the plan into the cache. With that, we avoid edge
cases like (4239) or (5038).

In a sense, we are making local plan caching behave similarly to non-cached
local/remote queries, by forcing the query to be deparsed once.

(cherry picked from commit 69ca943e58)
2021-11-08 12:38:11 +03:00
Nils Dijk 537618aaed
reinstate optimization that got unintentionally broken in 366461ccdb (#5418)
DESCRIPTION: Reinstate optimisation for uniform shard interval ranges

During a refactor introduced in #4132 the following change was made, which made the optimisation in `CalculateUniformHashRangeIndex` unreachable: 
366461ccdb (diff-565a339ed3c78bc5a0d4ffeb4e91032150b1dffbeeff59cd3e65981d20b998c7L319-R319)

This PR reinstates the path to the optimisation!
2021-11-05 13:09:25 +01:00
Onur Tirtir 6c989830d2 Add CheckCitusVersion() calls to columnarAM (#5308)
Considering all code paths through which we might interact with a columnar table,
add `CheckCitusVersion` calls to the tableAM callbacks (see the sketch after this list):
- initializing a table scan (`columnar_beginscan` & `columnar_index_fetch_begin`)
- setting a new filenode for a relation (storage initialization or a table rewrite)
- truncating the storage
- inserting tuples (single and multi)

Also add a `CheckCitusVersion` call to:
- the drop hook (`ColumnarTableDropHook`)
- the `alter_columnar_table_set` & `alter_columnar_table_reset` UDFs
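
A minimal sketch of the pattern with a deliberately simplified callback
signature (the real tableAM callbacks take more arguments, and the prototype
below is an assumption; in Citus it lives in the distributed metadata headers):

```c
#include "postgres.h"
#include "utils/rel.h"

/* assumed prototype; declared in Citus' own headers */
extern bool CheckCitusVersion(int elevel);

/*
 * Hypothetical sketch: each columnar tableAM entry point first verifies
 * that the loaded Citus library matches the installed extension version
 * before touching any columnar metadata.
 */
static void
columnar_callback_sketch(Relation relation)
{
	CheckCitusVersion(ERROR);

	(void) relation;
	/* ... the actual table access method work follows ... */
}
```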
(cherry picked from commit f8b1ff7214)

 Conflicts:
	src/backend/columnar/cstore_tableam.c
	src/test/regress/expected/multi_extension.out
	src/test/regress/sql/multi_extension.sql

 Note: Not applying multi_extension.sql/out changes to 10.0 since the
       previous Citus version (9.5) does not have columnarAM.
2021-09-20 17:45:27 +03:00
Marco Slot 259511746e Small fix to PG12 compatibility 2021-09-10 14:21:03 +02:00
Marco Slot bd245b5fbb Avoid switch to superuser in worker_merge_files_into_table 2021-09-10 13:25:52 +02:00
Marco Slot 25c71fb3d0 Add worker_append_table_to_shard permissions tests 2021-09-10 13:25:52 +02:00
Marco Slot 28a503fad9 Perform copy command as regular user in worker_append_table_to_shard 2021-09-10 13:25:52 +02:00
Onur Tirtir 30b46975b8 Not read heaptuple after closing pg_rewrite (#5255)
(cherry picked from commit cc49e63222)
2021-09-08 16:02:05 +03:00
Hanefi Onaldi 5f5e5ef471
Bump Citus version to 10.0.5 2021-08-17 07:45:37 +03:00
Hanefi Onaldi 5a1036e361
Add changelog entries for 10.0.5
(cherry picked from commit 167a023770)
2021-08-16 17:38:54 +03:00
Onder Kalaci 6de2a09d79 Guard against hard WaitEventSet errors
In short, add wrappers around Postgres' AddWaitEventToSet() and
ModifyWaitEvent().

AddWaitEventToSet()/ModifyWaitEvent*() may throw hard errors. For
example, when the underlying socket for a connection is closed by
the remote server and this is already reflected by the OS, but
Citus hasn't had a chance to get this information yet. In that case,
if the replication factor is >1, Citus can fail over to other nodes
for executing the query. Even if the replication factor is 1, Citus
can give much nicer errors.

So CitusAddWaitEventSetToSet()/CitusModifyWaitEvent() simply put
AddWaitEventToSet()/ModifyWaitEvent() into a PG_TRY/PG_CATCH block
in order to catch any hard errors, and return this information to
the caller.
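
A minimal sketch of the wrapper, assuming the standard PostgreSQL latch API
(the function name is illustrative, not the actual Citus wrapper):

```c
#include "postgres.h"
#include "storage/latch.h"

/*
 * Hypothetical sketch: AddWaitEventToSet() can throw a hard ERROR, e.g.
 * for a socket the OS has already closed, so run it inside PG_TRY/PG_CATCH
 * and report failure to the caller instead of aborting the execution.
 */
static bool
TryAddWaitEventToSet(WaitEventSet *waitEventSet, uint32 events,
					 pgsocket sock, Latch *latch, void *userData)
{
	bool success = true;

	PG_TRY();
	{
		AddWaitEventToSet(waitEventSet, events, sock, latch, userData);
	}
	PG_CATCH();
	{
		/* clear the error state; the caller decides whether to fail over */
		FlushErrorState();
		success = false;
	}
	PG_END_TRY();

	return success;
}
```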
2021-08-10 09:38:09 +02:00
Onder Kalaci d485003807 Adjust the tests to earlier versions
- Drop PRIMARY KEY for Citus 10 compatibility
- Drop columnar for PG 12
- Do not start/stop metadata sync as stop is not implemented in 10.1
- PG 11 parallel query changes explain outputs
2021-08-06 16:38:01 +02:00
Onder Kalaci 32124efd83 Dropped columns do not diverge distribution column for partitioned tables
Before this commit, creating a partition after a DROP COLUMN
on the parent (for a column positioned before the distribution key) was
leading the partition to have the wrong distribution column.
2021-08-06 13:42:06 +02:00
naisila c84d1d9e70 Fix master_update_table_statistics scripts for 9.5 2021-08-03 16:45:46 +03:00
naisila b46f8874d3 Fix master_update_table_statistics scripts for 9.4 2021-08-03 16:45:46 +03:00
Hanefi Onaldi 1492bd1e8b
Bump Citus to 10.0.4 2021-07-14 16:03:45 +03:00
Hanefi Onaldi 4082fab0c9
Add changelog entry for 10.0.4
(cherry picked from commit 45b72c204d)
2021-07-14 15:49:44 +03:00
Hanefi Onaldi 4ca544200c
Use ONLY keywords on PG11 deparser 2021-07-13 17:27:34 +03:00
Marco Slot e58b78f1e8
Fix FROM ONLY queries on partitioned tables
(cherry picked from commit 4b49cb112f)
2021-07-13 17:27:33 +03:00
jeff-davis f526eec6a8 Columnar: use clause Vars for chunk group filtering. (#4856)
* Columnar: use clause Vars for chunk group filtering.

This solves #4780 and also provides a cleaner separation between chunk
group filtering and projection pushdown.

* Columnar: sort and deduplicate Vars pulled from clauses.

* Columnar: cleanup variable names.

* Columnar: remove alternate test output.

* Columnar: do not recurse when looking for whereClauseVars.

Co-authored-by: Jeff Davis <jefdavi@microsoft.com>
(cherry picked from commit 063e673038)
2021-07-13 12:01:57 -07:00
SaitTalhaNisanci 5759233f15
Warm up connection params hash (#4872)
ConnParams (AuthInfo and PoolInfo) gets a snapshot, which will block the
remote connections to localhost, and the release of the snapshot will in
turn be blocked. This leads to a deadlock.

We warm up the conn params hash before starting a new transaction so
that the entries will already be there when we start a new transaction.
Hence GetConnParams will not get a snapshot.

(cherry picked from commit b453563e88)
2021-07-13 11:30:15 +03:00
Hanefi Onaldi 6640c76bde
Switch to sequential mode on long partition names
This commit adds support for long partition names for distributed tables:
- ALTER TABLE dist_table ATTACH PARTITION ..
- CREATE TABLE .. PARTITION OF dist_table ..

Note: create_distributed_table UDF does not support long table and
partition names, and is not covered in this commit

(cherry picked from commit 9919fbe3f8)
2021-07-13 08:06:58 +03:00
Sait Talha Nisanci 11d5d21fd8
Call LockPlacementCleanup in RemoveOldShardPlacementForNodeGroup 2021-07-13 05:30:51 +03:00
SaitTalhaNisanci 4fbed90505
Fix data-race with concurrent calls of DropMarkedShards (#4909)
* Fix problems with concurrent calls of DropMarkedShards

When trying to enable `citus.defer_drop_after_shard_move` by default, it
turned out that DropMarkedShards was not safe to call concurrently.
This could especially cause big problems when also moving shards at the
same time. During tests it was possible to trigger a state where a shard
that was moved would not be available on any of the nodes anymore after
the move.

Currently DropMarkedShards is only called in production by the
maintenance daemon. Since this is only a single process, triggering such
a race is currently impossible in production settings. In future changes
we will want to call DropMarkedShards from other places too, though.

* Add some isolation tests

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
(cherry picked from commit 93c2dcf3d2)
2021-07-13 05:30:51 +03:00
Ahmet Gedemenli 7214673a9f Fix test output for cherry-picked commits for 10.0 2021-07-12 16:42:15 +03:00
Ahmet Gedemenli 79a274e226 Fix relname null bug when parallel execution
(cherry picked from commit 69d39c0e8b)
2021-07-12 16:42:15 +03:00
Ahmet Gedemenli dd2dfac198 Remove function GenerateNewTargetEntriesForSortClauses
(cherry picked from commit 9638933d9d)
2021-07-12 16:42:15 +03:00
Sait Talha Nisanci 3bcfadf2f1 update cluster test
(cherry picked from commit 3218e34be9)
2021-07-12 12:18:05 +03:00
Sait Talha Nisanci f5a7858ab9 Not consider old placements when disabling or removing a node
(cherry picked from commit 73c58b6160)
2021-07-12 11:50:39 +03:00
Hanefi Onaldi d7b90e0804
Remove public schema dependency for 10.0 upgrades
This commit contains a subset of the changes that should be cherry
picked to 10.0 releases.

(cherry picked from commit 8e9cc229ff)
2021-07-09 11:55:32 +03:00
Nils Dijk 74985a0977 fix 9.5-2 upgrade script to adhere to idempotency 2021-07-08 12:25:26 +02:00
Nils Dijk 57a52b01a2 Add test for idempotency of citus_prepare_pg_upgrade 2021-07-08 12:25:26 +02:00
Onur Tirtir c24088e12f Fix lower boundary calculation when pruning range dist table shards (#5082)
This happens only when we have a "<" or "<=" filter on the distribution
column of a range-distributed table and that filter falls in between
two shards.

When the filter falls in between two shards:

  If the filter is ">" or ">=", then UpperShardBoundary was
  returning "upperBoundIndex - 1", where upperBoundIndex is the
  exclusive shard index used during the binary search.
  This is expected since upperBoundIndex is an exclusive
  index.

  If the filter is "<" or "<=", then LowerShardBoundary was
  returning "lowerBoundIndex + 1", where lowerBoundIndex is the
  inclusive shard index used during the binary search.
  On the other hand, since lowerBoundIndex is an inclusive
  index, we should just return lowerBoundIndex instead of
  doing "+ 1". Before this commit, we were missing the leftmost
  shard in such queries.
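
A minimal sketch of the boundary logic (plain C over an array of shard max
values, not the actual Citus shard-pruning code), showing why the inclusive
lowerBoundIndex must be returned without the "+ 1":

```c
/*
 * Hypothetical sketch: binary search for the first shard whose max value
 * is >= the filter value.  lowerBoundIndex is inclusive, so it is returned
 * as-is; adding 1 would skip the leftmost matching shard whenever the
 * filter value falls in between two shards.
 */
static int
LowerShardBoundarySketch(const int *shardMaxValues, int shardCount, int filterValue)
{
	int lowerBoundIndex = 0;
	int upperBoundIndex = shardCount;	/* exclusive */

	while (lowerBoundIndex < upperBoundIndex)
	{
		int middleIndex = lowerBoundIndex + (upperBoundIndex - lowerBoundIndex) / 2;

		if (shardMaxValues[middleIndex] < filterValue)
		{
			lowerBoundIndex = middleIndex + 1;
		}
		else
		{
			upperBoundIndex = middleIndex;
		}
	}

	return lowerBoundIndex;	/* inclusive index, no "+ 1" */
}
```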

* Remove useless conditional branches

The branch that we deleted from UpperShardBoundary was obviously useless.

The other one in LowerShardBoundary became useless after we removed the "+ 1"
from there.

This is indeed another proof of what & how we are fixing with this PR.

* Improve comments and add more

* Add some tests for upper bound calculation too

(cherry picked from commit b118d4188e)
2021-07-07 13:13:50 +03:00
Nils Dijk 9a2227c70d Bump use of new sql function 2021-07-05 16:14:52 +02:00
Marco Slot 826ac1b099 Fix PG upgrade scripts for 10.0 2021-07-05 16:14:52 +02:00
Marco Slot d9514fa697 Fix PG upgrade scripts for 9.5 2021-07-05 16:14:52 +02:00
Marco Slot 2f27325b15 Fix PG upgrade scripts for 9.4 2021-07-05 16:14:52 +02:00
Jelte Fennema f41b5060f0 Avoid two race conditions in the rebalance progress monitor (#5050)
The first and main issue was that we were putting absolute pointers into
shared memory for the `steps` field of the `ProgressMonitorData`. This
pointer was being overwritten every time a process requested the monitor
steps, which is the only reason why this even worked in the first place.

To quote a part of a relevant stack overflow answer:

> First of all, putting absolute pointers in shared memory segments is
> terrible terible idea - those pointers would only be valid in the
> process that filled in their values. Shared memory segments are not
> guaranteed to attach at the same virtual address in every process.
> On the contrary - they attach where the system deems it possible when
> `shmaddr == NULL` is specified on call to `shmat()`

Source: https://stackoverflow.com/a/10781921/2570866

In this case a race condition occurred when a second process overwrote
the pointer in between the first process's write and read of the steps
field.

This issue is fixed by not storing the pointer in shared memory anymore.
Instead we now calculate its position every time we need it.
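
A minimal sketch of the idea (the struct and function names are illustrative,
not the actual ProgressMonitorData layout): store only fixed-size data in the
segment and recompute the steps address in each process.

```c
/*
 * Hypothetical sketch: the shared memory segment holds a fixed-size header
 * followed immediately by the steps array.  No absolute pointer is stored;
 * every process derives the steps address from its own mapping, which is
 * valid wherever the segment happens to be attached in that process.
 */
typedef struct MonitorHeaderSketch
{
	int stepCount;
	/* step entries follow right after the header */
} MonitorHeaderSketch;

static inline void *
MonitorStepsSketch(MonitorHeaderSketch *header)
{
	return (char *) header + sizeof(MonitorHeaderSketch);
}
```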

The second race condition I have not been able to trigger, but I found
it while investigating this. This issue was that we published the handle
of the shared memory segment, before we initialized the data in the
steps. This means that during initialization of the data, a call to
`get_rebalance_progress()` could read partial data in an unsynchronized
manner.

(cherry picked from commit ca00b63272)
2021-06-21 16:42:10 +02:00
Nils Dijk 823ede78ab
Feature: localhost guc (#4836)
DESCRIPTION: introduce `citus.local_hostname` GUC for connections to the current node

Citus once in a while needs to connect to itself for some system operations. This used to be hardcoded to `localhost`. The hardcoded hostname causes some issues, for example in environments where `sslmode=verify-full` is required. It is not always desirable or even feasible to get `localhost` as an alt name on the certificate.

By introducing a GUC to use when connecting to the current instance, the user has more control over which network path is used and which hostname is required to be present in the server certificate.
2021-06-01 13:18:15 +02:00
Ahmet Gedemenli 2ea3618f22 Add test for public shard not found issue
(cherry picked from commit 48a6a5b128)
2021-06-01 10:50:26 +03:00
Ahmet Gedemenli 88825b89a1 Fix tests for public schema
(cherry picked from commit d530d79d73)
2021-06-01 10:50:26 +03:00
Ahmet Gedemenli a216c6b62c Remove redundant if statement for schema name
(cherry picked from commit 840c879572)
2021-06-01 10:50:26 +03:00
Sait Talha Nisanci fcb932268a Bump version to 10.0.3 2021-03-17 18:02:01 +03:00
Sait Talha Nisanci 1200c8fd1c Update CHANGELOG for 10.0.3
(cherry picked from commit 92130ae2a2)
2021-03-17 18:01:57 +03:00
Önder Kalacı 0237d826d5 Make sure that single task local executions start coordinated transaction (#4831)
With https://github.com/citusdata/citus/pull/4806 we enabled
2PC for any non-read-only local task. However, if the execution
is a single task, enabling 2PC (CoordinatedTransactionShouldUse2PC)
hits an assertion as we are not in a coordinated transaction.

There is no downside to using a coordinated transaction for single-task
local queries.
2021-03-17 14:56:28 +03:00
Ahmet Gedemenli e54b253713 Add udf citus_get_active_worker_nodes
(cherry picked from commit 5e5db9eefa)
2021-03-17 14:56:28 +03:00
Marco Slot 61efc87c53 Replace MAX_PUT_COPY_DATA_BUFFER_SIZE by citus.remote_copy_flush_threshold GUC
(cherry picked from commit fbc2147e11)
2021-03-17 07:35:46 +03:00
Marco Slot f5608c2769 Add GUC to set maximum connection lifetime
(cherry picked from commit 1646fca445)
2021-03-17 07:35:46 +03:00
Marco Slot ecf0f2fdbf Remove unnecessary AtEOXact_Files call
(cherry picked from commit 6c5d263b7a)
2021-03-16 10:01:14 +03:00
Onder Kalaci 0a09551dab Rename use -> shouldUse
Because setting the flag doesn't necessarily mean that we'll
use 2PC. If connections are read-only, we will not use 2PC.
In other words, we'll use 2PC only for connections that modified
any placements.

(cherry picked from commit e65e72130d)
2021-03-16 10:01:14 +03:00
Onder Kalaci 0805ef9c79 Do not trigger 2PC for reads on local execution
Before this commit, Citus used 2PC no matter what kind of
local query execution happened.

For example, if the coordinator has shards (and the workers as well),
even a simple SELECT query could start 2PC:
```SQL

WITH cte_1 AS (SELECT * FROM test LIMIT 10) SELECT count(*) FROM cte_1;
```

In this query, the local execution of the shards (and also intermediate
result reads) triggers the 2PC.

To prevent that, Citus now distinguishes local reads and local writes.
And, Citus switches to 2PC only if a modification happens. This may
still lead to unnecessary 2PCs when there is a local modification
and remote SELECTs only. Though, we handle that separately
via #4587.

(cherry picked from commit 6a7ed7b309)
2021-03-16 10:01:14 +03:00
Naisila Puka a6435b7f6b Fix upgrade and downgrade paths for master/citus_update_table_statistics (#4805)
(cherry picked from commit 71a9f45513)
2021-03-16 10:01:09 +03:00
Marco Slot f13cf336f2 Add tests for modifying CTE and SELECT without FROM
(cherry picked from commit 9c0d7f5c26)
2021-03-16 09:44:00 +03:00
Marco Slot 46e316881b Fixes a crash in queries with a modifying CTE and a SELECT without FROM
(cherry picked from commit 58f85f55c0)
2021-03-16 09:43:24 +03:00
Onur Tirtir 18ab327c6c Add tests for concurrent index deadlock issue (#4775)
(cherry picked from commit 9728ce1167)
2021-03-16 09:42:21 +03:00
Hadi Moshayedi 61a89c69cd Populate DATABASEOID cache before CREATE INDEX CONCURRENTLY
(cherry picked from commit affe38eac6)
2021-03-16 09:41:19 +03:00
Marco Slot ad9469b351 Try to return earlier in idempotent master_add_node
(cherry picked from commit f25de6a0e3)
2021-03-16 09:40:43 +03:00
Onder Kalaci 4121788848 Pass pointer of AttributeEquivalenceClass instead of pointer of pointer
AttributeEquivalenceClass seems to be unnecessarily used with multiple
pointers. Just use a single pointer for ease of reading.

(cherry picked from commit 54ee96470e)
2021-03-16 09:40:07 +03:00
Onder Kalaci e9bf5fa235 Prevent infinite recursion for queries that involve UNION ALL and JOIN
With this commit, we make sure to prevent infinite recursion for queries
in the format: [subquery with a UNION ALL] JOIN [table or subquery]

Also fixes a bug where we pushed down a UNION ALL below a JOIN even if the
UNION ALL was not safe to push down.

(cherry picked from commit d1cd198655)
2021-03-16 09:39:59 +03:00
Naisila Puka 18c7a3c188 Skip 2PC for readonly connections in a transaction (#4587)
* Skip 2PC for readonly connections in a transaction

* Use ConnectionModifiedPlacement() function

* Remove the second check of ConnectionModifiedPlacement()

* Add order by to prevent flaky output

* Test using pg_dist_transaction

(cherry picked from commit 196064836c)
2021-03-16 09:31:18 +03:00
Halil Ozan Akgül 85a87af11c Update CHANGELOG for 10.0.2
(cherry picked from commit c2a9706203)

 Conflicts:
	CHANGELOG.md
2021-03-03 17:26:26 +03:00
Hanefi Onaldi 115fa950d3 Do not use security flags by default (#4770)
(cherry picked from commit 697bbbd3c6)
2021-03-03 13:20:05 +03:00
Naisila Puka 445291d94b Reimplement citus_update_table_statistics to detect dist. deadlocks (#4752)
* Reimplement citus_update_table_statistics

* Update stats for the given table not colocation group

* Add tests for reimplemented citus_update_table_statistics

* Use coordinated transaction, merge with citus_shard_sizes functions

* Update the old master_update_table_statistics as well

(cherry picked from commit 2f30614fe3)
2021-03-03 11:41:31 +03:00
Hanefi Onaldi 28f1c2129d Add security flags in configure scripts (#4760)
(cherry picked from commit f87107eb6b)
2021-03-03 11:41:00 +03:00
Marco Slot 205b8ec70a Normalize the ConvertTable notices
(cherry picked from commit dca615c5aa)
2021-03-03 11:40:38 +03:00
Halil Ozan Akgul 6fa25d73be Bump version to 10.0.2 2021-03-01 17:04:24 +03:00
SaitTalhaNisanci bfb1ca6d0d Use translated vars in postgres 13 as well (#4746)
* Use translated vars in postgres 13 as well

Postgres 13 removed translated vars, so we had special logic
for PG 13. However, it had a bug, so now we copy the translated vars
before Postgres deletes them. This also simplifies the logic.
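
A minimal sketch of the copy, assuming PostgreSQL's `copyObject()` over the
append rel's `translated_vars` list (the helper name is illustrative, not the
actual Citus code):

```c
#include "postgres.h"
#include "nodes/pathnodes.h"	/* AppendRelInfo; nodes/relation.h before PG 12 */

/*
 * Hypothetical sketch: keep our own copy of the translated Vars so the
 * information is still available after Postgres 13 discards the original
 * list during planning.
 */
static List *
CopyTranslatedVarsSketch(AppendRelInfo *appendRelInfo)
{
	return copyObject(appendRelInfo->translated_vars);
}
```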

* fix rtoffset with pg >= 13

(cherry picked from commit feee25dfbd)
2021-03-01 15:18:32 +03:00
Halil Ozan Akgul b355f0d9a2 Adds GRANT for public to citus_tables
(cherry picked from commit 5c5cb200f7)
2021-03-01 15:15:34 +03:00
Önder Kalacı fdcb6ead43 Prevent cross join without any target list entries (#4750)
/*
 * The physical planner assumes that all worker queries would have
 * target list entries based on the fact that at least the column
 * on the JOINs have to be on the target list. However, there is
 * an exception to that if there is a cartesian product join and
 * there are no additional target list entries belonging to one side
 * of the JOIN. Once we support cartesian product joins, we should
 * remove this error.
 */
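
A minimal sketch of such a guard (the flag, list, and error wording are
hypothetical, not the actual Citus check):

```c
#include "postgres.h"
#include "nodes/pg_list.h"

/*
 * Hypothetical sketch: refuse to plan a cartesian product join when one
 * side of the worker query would end up without any target list entries,
 * since the physical planner relies on their presence.
 */
static void
ErrorIfEmptySideInCartesianProduct(bool isCartesianProduct, List *sideTargetList)
{
	if (isCartesianProduct && sideTargetList == NIL)
	{
		ereport(ERROR, (errmsg("cannot handle cross join without target list "
							   "entries on one side of the join")));
	}
}
```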

(cherry picked from commit 0fe26a216c)
2021-03-01 15:13:26 +03:00
Onur Tirtir 3fcb011b67 Grant read access for columnar metadata tables to unprivileged user
(cherry picked from commit 54ac924bef)
2021-03-01 15:02:57 +03:00
Halil Ozan Akgul 8228815b38 Add 10.0-2 schema version
(cherry-picked from dcc0207605)
2021-03-01 14:58:41 +03:00
Onur Tirtir 270234c7ff Ensure table owner when using alter_columnar_table_set/alter_columnar_table_reset (#4748)
(cherry picked from commit 5ed954844c)
2021-03-01 14:38:19 +03:00
Naisila Puka 3131d3e3c5 Preserve colocation with procedures in alter_distributed_table (#4743)
(cherry picked from commit 5ebd4eac7f)
2021-03-01 14:36:52 +03:00
Hanefi Onaldi a7f9dfc3f0 Fix flaky test
(cherry picked from commit 5aff18b573)
2021-03-01 13:18:22 +03:00
Hanefi Onaldi 049cd55346 Remove length limitations for table renames
(cherry picked from commit 9a792ef841)
2021-03-01 13:18:05 +03:00
Hanefi Onaldi 27ecb5cde2 Failing long table name tests
(cherry picked from commit 7bebeb872d)
2021-03-01 13:17:48 +03:00
Naisila Puka fc08ec203f Fix insert query with CTEs/sublinks/subqueries etc (#4700)
* Fix insert query with CTE

* Add more cases with deferred pruning but false fast path

* Add more tests

* Better readability with if statements

(cherry picked from commit dbb88f6f8b)
2021-03-01 12:16:40 +03:00
Hadi Moshayedi 495470d291 Fix alignment issue in DatumToBytea
(cherry picked from commit 2fca5ff3b5)
2021-03-01 12:07:46 +03:00
SaitTalhaNisanci 39a142b4d9 Use PROCESS_UTILITY_QUERY in utility calls
When we use PROCESS_UTILITY_TOPLEVEL, it causes some problems when
combined with other extensions such as pg_audit. With this commit, we use
PROCESS_UTILITY_QUERY in the codebase to fix those problems.
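
A minimal sketch of the call pattern, assuming the PostgreSQL 13 ProcessUtility
signature (earlier versions take a completionTag string instead of a
QueryCompletion); this is not the actual Citus call site:

```c
#include "postgres.h"
#include "tcop/dest.h"
#include "tcop/utility.h"

/*
 * Hypothetical sketch: when executing a utility statement internally,
 * pass PROCESS_UTILITY_QUERY rather than PROCESS_UTILITY_TOPLEVEL so
 * extensions such as pg_audit do not treat it as a top-level client
 * statement.
 */
static void
ExecuteUtilitySketch(PlannedStmt *plannedStatement, const char *queryString)
{
	QueryCompletion queryCompletion;

	InitializeQueryCompletion(&queryCompletion);
	ProcessUtility(plannedStatement, queryString, PROCESS_UTILITY_QUERY,
				   NULL, NULL, None_Receiver, &queryCompletion);
}
```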

(cherry picked from commit dcf54eaf2a)
2021-03-01 11:49:44 +03:00
Onur Tirtir ca4b529751 Bump version to 10.0.1 2021-02-19 12:05:56 +03:00
Onur Tirtir e48f5d804d Update CHANGELOG for 10.0.1
(cherry picked from commit 9031a22e20)

 Conflicts:
	CHANGELOG.md
2021-02-19 12:05:49 +03:00
Marco Slot 85e2c6b523 Rewrite time_partitions join clause to avoid smallint[] operator
(cherry picked from commit 972a8bc0b7)
2021-02-19 11:25:00 +03:00
Onur Tirtir 2a390b4c1d Bump Citus to 10.0.0 2021-02-16 14:39:24 +03:00
2533 changed files with 93063 additions and 410315 deletions

674
.circleci/config.yml Normal file

@ -0,0 +1,674 @@
version: 2.1
orbs:
codecov: codecov/codecov@1.1.1
azure-cli: circleci/azure-cli@1.0.0
jobs:
build:
description: Build the citus extension
parameters:
pg_major:
description: postgres major version building citus for
type: integer
image:
description: docker image to use for the build
type: string
default: citus/extbuilder
image_tag:
description: tag to use for the docker image
type: string
docker:
- image: '<< parameters.image >>:<< parameters.image_tag >>'
steps:
- checkout
- run:
name: 'Configure, Build, and Install'
command: |
./ci/build-citus.sh
- persist_to_workspace:
root: .
paths:
- build-<< parameters.pg_major >>/*
- install-<<parameters.pg_major >>.tar
check-style:
docker:
- image: 'citus/stylechecker:latest'
steps:
- checkout
- run:
name: 'Check Style'
command: citus_indent --check
- run:
name: 'Fix whitespace'
command: ci/editorconfig.sh
- run:
name: 'Check if whitespace fixing changed anything, install editorconfig if it did'
command: git diff --exit-code
- run:
name: 'Remove useless declarations'
command: ci/remove_useless_declarations.sh
- run:
name: 'Check if changed'
command: git diff --cached --exit-code
- run:
name: 'Normalize test output'
command: ci/normalize_expected.sh
- run:
name: 'Check if changed'
command: git diff --exit-code
- run:
name: 'Check for C-style comments in migration files'
command: ci/disallow_c_comments_in_migrations.sh
- run:
name: 'Check if changed'
command: git diff --exit-code
- run:
name: 'Check for lengths of changelog entries'
command: ci/disallow_long_changelog_entries.sh
- run:
name: 'Check for banned C API usage'
command: ci/banned.h.sh
- run:
name: 'Check for tests missing in schedules'
command: ci/check_all_tests_are_run.sh
- run:
name: 'Check if all CI scripts are actually run'
command: ci/check_all_ci_scripts_are_run.sh
check-sql-snapshots:
docker:
- image: 'citus/extbuilder:latest'
steps:
- checkout
- run:
name: 'Check Snapshots'
command: ci/check_sql_snapshots.sh
test-pg-upgrade:
description: Runs postgres upgrade tests
parameters:
old_pg_major:
description: 'postgres major version to use before the upgrade'
type: integer
new_pg_major:
description: 'postgres major version to upgrade to'
type: integer
image:
description: 'docker image to use as for the tests'
type: string
default: citus/pgupgradetester
image_tag:
description: 'docker image tag to use'
type: string
default: 11-12-13
docker:
- image: '<< parameters.image >>:<< parameters.image_tag >>'
working_directory: /home/circleci/project
steps:
- checkout
- attach_workspace:
at: .
- run:
name: 'Install Extension'
command: |
tar xfv "${CIRCLE_WORKING_DIRECTORY}/install-<< parameters.old_pg_major >>.tar" --directory /
tar xfv "${CIRCLE_WORKING_DIRECTORY}/install-<< parameters.new_pg_major >>.tar" --directory /
- run:
name: 'Configure'
command: |
chown -R circleci .
gosu circleci ./configure
- run:
name: 'Enable core dumps'
command: |
ulimit -c unlimited
- run:
name: 'Install and test postgres upgrade'
command: |
gosu circleci \
make -C src/test/regress \
check-pg-upgrade \
old-bindir=/usr/lib/postgresql/<< parameters.old_pg_major >>/bin \
new-bindir=/usr/lib/postgresql/<< parameters.new_pg_major >>/bin
no_output_timeout: 2m
- run:
name: 'Regressions'
command: |
if [ -f "src/test/regress/regression.diffs" ]; then
cat src/test/regress/regression.diffs
exit 1
fi
when: on_fail
- run:
name: 'Copy coredumps'
command: |
mkdir -p /tmp/core_dumps
if ls core.* 1> /dev/null 2>&1; then
cp core.* /tmp/core_dumps
fi
when: on_fail
- store_artifacts:
name: 'Save regressions'
path: src/test/regress/regression.diffs
when: on_fail
- store_artifacts:
name: 'Save core dumps'
path: /tmp/core_dumps
when: on_fail
- codecov/upload:
flags: 'test_<< parameters.old_pg_major >>_<< parameters.new_pg_major >>,upgrade'
test-citus-upgrade:
description: Runs citus upgrade tests
parameters:
pg_major:
description: "postgres major version"
type: integer
image:
description: 'docker image to use as for the tests'
type: string
default: citus/citusupgradetester
image_tag:
description: 'docker image tag to use'
type: string
docker:
- image: '<< parameters.image >>:<< parameters.image_tag >>'
working_directory: /home/circleci/project
steps:
- checkout
- attach_workspace:
at: .
- run:
name: 'Configure'
command: |
chown -R circleci .
gosu circleci ./configure
- run:
name: 'Enable core dumps'
command: |
ulimit -c unlimited
- run:
name: 'Install and test citus upgrade'
command: |
# run make check-citus-upgrade for all citus versions
# the image has ${CITUS_VERSIONS} set with all verions it contains the binaries of
for citus_version in ${CITUS_VERSIONS}; do \
gosu circleci \
make -C src/test/regress \
check-citus-upgrade \
bindir=/usr/lib/postgresql/${PG_MAJOR}/bin \
citus-pre-tar=/install-pg11-citus${citus_version}.tar \
citus-post-tar=/home/circleci/project/install-$PG_MAJOR.tar; \
done;
# run make check-citus-upgrade-mixed for all citus versions
# the image has ${CITUS_VERSIONS} set with all verions it contains the binaries of
for citus_version in ${CITUS_VERSIONS}; do \
gosu circleci \
make -C src/test/regress \
check-citus-upgrade-mixed \
bindir=/usr/lib/postgresql/${PG_MAJOR}/bin \
citus-pre-tar=/install-pg11-citus${citus_version}.tar \
citus-post-tar=/home/circleci/project/install-$PG_MAJOR.tar; \
done;
no_output_timeout: 2m
- run:
name: 'Regressions'
command: |
if [ -f "src/test/regress/regression.diffs" ]; then
cat src/test/regress/regression.diffs
exit 1
fi
when: on_fail
- run:
name: 'Copy coredumps'
command: |
mkdir -p /tmp/core_dumps
if ls core.* 1> /dev/null 2>&1; then
cp core.* /tmp/core_dumps
fi
when: on_fail
- store_artifacts:
name: 'Save regressions'
path: src/test/regress/regression.diffs
when: on_fail
- store_artifacts:
name: 'Save core dumps'
path: /tmp/core_dumps
when: on_fail
- codecov/upload:
flags: 'test_<< parameters.pg_major >>,upgrade'
test-citus:
description: Runs the common tests of citus
parameters:
pg_major:
description: "postgres major version"
type: integer
image:
description: 'docker image to use as for the tests'
type: string
default: citus/exttester
image_tag:
description: 'docker image tag to use'
type: string
make:
description: "make target"
type: string
docker:
- image: '<< parameters.image >>:<< parameters.image_tag >>'
working_directory: /home/circleci/project
steps:
- checkout
- attach_workspace:
at: .
- run:
name: 'Install Extension'
command: |
tar xfv "${CIRCLE_WORKING_DIRECTORY}/install-${PG_MAJOR}.tar" --directory /
- run:
name: 'Configure'
command: |
chown -R circleci .
gosu circleci ./configure
- run:
name: 'Enable core dumps'
command: |
ulimit -c unlimited
- run:
name: 'Run Test'
command: |
gosu circleci make -C src/test/regress << parameters.make >>
no_output_timeout: 2m
- run:
name: 'Regressions'
command: |
if [ -f "src/test/regress/regression.diffs" ]; then
cat src/test/regress/regression.diffs
exit 1
fi
when: on_fail
- run:
name: 'Copy coredumps'
command: |
mkdir -p /tmp/core_dumps
if ls core.* 1> /dev/null 2>&1; then
cp core.* /tmp/core_dumps
fi
when: on_fail
- store_artifacts:
name: 'Save regressions'
path: src/test/regress/regression.diffs
when: on_fail
- store_artifacts:
name: 'Save core dumps'
path: /tmp/core_dumps
when: on_fail
- codecov/upload:
flags: 'test_<< parameters.pg_major >>,<< parameters.make >>'
when: always
tap-test-citus:
description: Runs tap tests for citus
parameters:
pg_major:
description: "postgres major version"
type: integer
image:
description: 'docker image to use as for the tests'
type: string
default: citus/exttester
image_tag:
description: 'docker image tag to use'
type: string
suite:
description: 'name of the tap test suite to run'
type: string
make:
description: "make target"
type: string
default: installcheck
docker:
- image: '<< parameters.image >>:<< parameters.image_tag >>'
working_directory: /home/circleci/project
steps:
- checkout
- attach_workspace:
at: .
- run:
name: 'Install Extension'
command: |
tar xfv "${CIRCLE_WORKING_DIRECTORY}/install-${PG_MAJOR}.tar" --directory /
- run:
name: 'Configure'
command: |
chown -R circleci .
gosu circleci ./configure
- run:
name: 'Enable core dumps'
command: |
ulimit -c unlimited
- run:
name: 'Run Test'
command: |
gosu circleci make -C src/test/<< parameters.suite >> << parameters.make >>
no_output_timeout: 2m
- run:
name: 'Copy coredumps'
command: |
mkdir -p /tmp/core_dumps
if ls core.* 1> /dev/null 2>&1; then
cp core.* /tmp/core_dumps
fi
when: on_fail
- store_artifacts:
name: 'Save tap logs'
path: /home/circleci/project/src/test/<< parameters.suite >>/tmp_check/log
when: on_fail
- store_artifacts:
name: 'Save core dumps'
path: /tmp/core_dumps
when: on_fail
- codecov/upload:
flags: 'test_<< parameters.pg_major >>,tap_<< parameters.suite >>_<< parameters.make >>'
when: always
check-merge-to-enterprise:
docker:
- image: citus/extbuilder:13.0
working_directory: /home/circleci/project
steps:
- checkout
- run:
command: |
ci/check_enterprise_merge.sh
ch_benchmark:
docker:
- image: buildpack-deps:stretch
working_directory: /home/circleci/project
steps:
- checkout
- azure-cli/install
- azure-cli/login-with-service-principal
- run:
command: |
cd ./src/test/hammerdb
sh run_hammerdb.sh citusbot_ch_benchmark_rg
name: install dependencies and run ch_benchmark tests
no_output_timeout: 20m
tpcc_benchmark:
docker:
- image: buildpack-deps:stretch
working_directory: /home/circleci/project
steps:
- checkout
- azure-cli/install
- azure-cli/login-with-service-principal
- run:
command: |
cd ./src/test/hammerdb
sh run_hammerdb.sh citusbot_tpcc_benchmark_rg
name: install dependencies and run ch_benchmark tests
no_output_timeout: 20m
workflows:
version: 2
build_and_test:
jobs:
- check-merge-to-enterprise:
filters:
branches:
ignore:
- /release-[0-9]+\.[0-9]+.*/ # match with releaseX.Y.*
- build:
name: build-11
pg_major: 11
image_tag: '11.9'
- build:
name: build-12
pg_major: 12
image_tag: '12.4'
- build:
name: build-13
pg_major: 13
image_tag: '13.0'
- check-style
- check-sql-snapshots
- test-citus:
name: 'test-11_check-multi'
pg_major: 11
image_tag: '11.9'
make: check-multi
requires: [build-11]
- test-citus:
name: 'test-11_check-mx'
pg_major: 11
image_tag: '11.9'
make: check-multi-mx
requires: [build-11]
- test-citus:
name: 'test-11_check-vanilla'
pg_major: 11
image_tag: '11.9'
make: check-vanilla
requires: [build-11]
- test-citus:
name: 'test-11_check-isolation'
pg_major: 11
image_tag: '11.9'
make: check-isolation
requires: [build-11]
- test-citus:
name: 'test-11_check-worker'
pg_major: 11
image_tag: '11.9'
make: check-worker
requires: [build-11]
- test-citus:
name: 'test-11_check-operations'
pg_major: 11
image_tag: '11.9'
make: check-operations
requires: [build-11]
- test-citus:
name: 'test-11_check-follower-cluster'
pg_major: 11
image_tag: '11.9'
make: check-follower-cluster
requires: [build-11]
- test-citus:
name: 'test-11_check-failure'
pg_major: 11
image: citus/failtester
image_tag: '11.9'
make: check-failure
requires: [build-11]
- test-citus:
name: 'test-12_check-multi'
pg_major: 12
image_tag: '12.4'
make: check-multi
requires: [build-12]
- test-citus:
name: 'test-12_check-mx'
pg_major: 12
image_tag: '12.4'
make: check-multi-mx
requires: [build-12]
- test-citus:
name: 'test-12_check-vanilla'
pg_major: 12
image_tag: '12.4'
make: check-vanilla
requires: [build-12]
- test-citus:
name: 'test-12_check-isolation'
pg_major: 12
image_tag: '12.4'
make: check-isolation
requires: [build-12]
- test-citus:
name: 'test-12_check-worker'
pg_major: 12
image_tag: '12.4'
make: check-worker
requires: [build-12]
- test-citus:
name: 'test-12_check-operations'
pg_major: 12
image_tag: '12.4'
make: check-operations
requires: [build-12]
- test-citus:
name: 'test-12_check-follower-cluster'
pg_major: 12
image_tag: '12.4'
make: check-follower-cluster
requires: [build-12]
- test-citus:
name: 'test-12_check-columnar'
pg_major: 12
image_tag: '12.4'
make: check-columnar
requires: [build-12]
- test-citus:
name: 'test-12_check-columnar-isolation'
pg_major: 12
image_tag: '12.4'
make: check-columnar-isolation
requires: [build-12]
- tap-test-citus:
name: 'test_12_tap-recovery'
pg_major: 12
image_tag: '12.4'
suite: recovery
requires: [build-12]
- tap-test-citus:
name: 'test-12_tap-columnar-freezing'
pg_major: 12
image_tag: '12.4'
suite: columnar_freezing
requires: [build-12]
- test-citus:
name: 'test-12_check-failure'
pg_major: 12
image: citus/failtester
image_tag: '12.4'
make: check-failure
requires: [build-12]
- test-citus:
name: 'test-13_check-multi'
pg_major: 13
image_tag: '13.0'
make: check-multi
requires: [build-13]
- test-citus:
name: 'test-13_check-mx'
pg_major: 13
image_tag: '13.0'
make: check-multi-mx
requires: [build-13]
- test-citus:
name: 'test-13_check-vanilla'
pg_major: 13
image_tag: '13.0'
make: check-vanilla
requires: [build-13]
- test-citus:
name: 'test-13_check-isolation'
pg_major: 13
image_tag: '13.0'
make: check-isolation
requires: [build-13]
- test-citus:
name: 'test-13_check-worker'
pg_major: 13
image_tag: '13.0'
make: check-worker
requires: [build-13]
- test-citus:
name: 'test-13_check-operations'
pg_major: 13
image_tag: '13.0'
make: check-operations
requires: [build-13]
- test-citus:
name: 'test-13_check-follower-cluster'
pg_major: 13
image_tag: '13.0'
make: check-follower-cluster
requires: [build-13]
- test-citus:
name: 'test-13_check-columnar'
pg_major: 13
image_tag: '13.0'
make: check-columnar
requires: [build-13]
- test-citus:
name: 'test-13_check-columnar-isolation'
pg_major: 13
image_tag: '13.0'
make: check-columnar-isolation
requires: [build-13]
- tap-test-citus:
name: 'test_13_tap-recovery'
pg_major: 13
image_tag: '13.0'
suite: recovery
requires: [build-13]
- tap-test-citus:
name: 'test-13_tap-columnar-freezing'
pg_major: 13
image_tag: '13.0'
suite: columnar_freezing
requires: [build-13]
- test-citus:
name: 'test-13_check-failure'
pg_major: 13
image: citus/failtester
image_tag: '13.0'
make: check-failure
requires: [build-13]
- test-pg-upgrade:
name: 'test-11-12_check-pg-upgrade'
old_pg_major: 11
new_pg_major: 12
image_tag: 11-12-13
requires: [build-11,build-12]
- test-pg-upgrade:
name: 'test-12-13_check-pg-upgrade'
old_pg_major: 12
new_pg_major: 13
image_tag: 11-12-13
requires: [build-12,build-13]
- test-citus-upgrade:
name: test-11_check-citus-upgrade
pg_major: 11
image_tag: '11.9'
requires: [build-11]
- ch_benchmark:
requires: [build-13]
filters:
branches:
only:
- /ch_benchmark\/.*/ # match with ch_benchmark/ prefix
- tpcc_benchmark:
requires: [build-13]
filters:
branches:
only:
- /tpcc_benchmark\/.*/ # match with tpcc_benchmark/ prefix


@ -1,7 +0,0 @@
exclude_patterns:
- "src/backend/distributed/utils/citus_outfuncs.c"
- "src/backend/distributed/deparser/ruleutils_*.c"
- "src/include/distributed/citus_nodes.h"
- "src/backend/distributed/safeclib"
- "src/backend/columnar/safeclib"
- "**/vendor/"


@ -1,33 +0,0 @@
# gdbpg.py contains scripts to nicely print the postgres datastructures
# while in a gdb session. Since the vscode debugger is based on gdb this
# actually also works when debugging with vscode. Providing nice tools
# to understand the internal datastructures we are working with.
source /root/gdbpg.py
# when debugging postgres it is convenient to _always_ have a breakpoint
# trigger when an error is logged. Because .gdbinit is sourced before gdb
# is fully attached and has the sources loaded. To make sure the breakpoint
# is added when the library is loaded we temporary set the breakpoint pending
# to on. After we have added out breakpoint we revert back to the default
# configuration for breakpoint pending.
# The breakpoint is hard to read, but at entry of the function we don't have
# the level loaded in elevel. Instead we hardcode the location where the
# level of the current error is stored. Also gdb doesn't understand the
# ERROR symbol so we hardcode this to the value of ERROR. It is very unlikely
# this value will ever change in postgres, but if it does we might need to
# find a way to conditionally load the correct breakpoint.
set breakpoint pending on
break elog.c:errfinish if errordata[errordata_stack_depth].elevel == 21
set breakpoint pending auto
echo \n
echo ----------------------------------------------------------------------------------\n
echo when attaching to a postgres backend a breakpoint will be set on elog.c:errfinish \n
echo it will only break on errors being raised in postgres \n
echo \n
echo to disable this breakpoint from vscode run `-exec disable 1` in the debug console \n
echo this assumes it's the first breakpoint loaded as it is loaded from .gdbinit \n
echo this can be verified with `-exec info break`, enabling can be done with \n
echo `-exec enable 1` \n
echo ----------------------------------------------------------------------------------\n
echo \n


@ -1 +0,0 @@
postgresql-*.tar.bz2


@ -1,7 +0,0 @@
\timing on
\pset linestyle unicode
\pset border 2
\setenv PAGER 'pspg --no-mouse -bX --no-commandbar --no-topbar'
\set HISTSIZE 100000
\set PROMPT1 '\n%[%033[1m%]%M %n@%/:%> (PID: %p)%R%[%033[0m%]%# '
\set PROMPT2 ' '


@ -1,12 +0,0 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
docopt = "*"
[dev-packages]
[requires]
python_version = "3.9"

28
.devcontainer/.vscode/Pipfile.lock generated vendored

@ -1,28 +0,0 @@
{
"_meta": {
"hash": {
"sha256": "6956a6700ead5804aa56bd597c93bb4a13f208d2d49d3b5399365fd240ca0797"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.9"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.org/simple",
"verify_ssl": true
}
]
},
"default": {
"docopt": {
"hashes": [
"sha256:49b3a825280bd66b3aa83585ef59c4a8c82f2c8a522dbe754a8bc8d08c85c491"
],
"index": "pypi",
"version": "==0.6.2"
}
},
"develop": {}
}


@ -1,84 +0,0 @@
#! /usr/bin/env pipenv-shebang
"""Generate C/C++ properties file for VSCode.
Uses pgenv to iterate postgres versions and generate
a C/C++ properties file for VSCode containing the
include paths for the postgres headers.
Usage:
generate_c_cpp_properties-json.py <target_path>
generate_c_cpp_properties-json.py (-h | --help)
generate_c_cpp_properties-json.py --version
Options:
-h --help Show this screen.
--version Show version.
"""
import json
import subprocess
from docopt import docopt
def main(args):
target_path = args['<target_path>']
output = subprocess.check_output(['pgenv', 'versions'])
# typical output is:
# 14.8 pgsql-14.8
# * 15.3 pgsql-15.3
# 16beta2 pgsql-16beta2
# where the line marked with a * is the currently active version
#
# we are only interested in the first word of each line, which is the version number
# thus we strip the whitespace and the * from the line and split it into words
# and take the first word
versions = [line.strip('* ').split()[0] for line in output.decode('utf-8').splitlines()]
# create the list of configurations per version
configurations = []
for version in versions:
configurations.append(generate_configuration(version))
# create the json file
c_cpp_properties = {
"configurations": configurations,
"version": 4
}
# write the c_cpp_properties.json file
with open(target_path, 'w') as f:
json.dump(c_cpp_properties, f, indent=4)
def generate_configuration(version):
"""Returns a configuration for the given postgres version.
>>> generate_configuration('14.8')
{
"name": "Citus Development Configuration - Postgres 14.8",
"includePath": [
"/usr/local/include",
"/home/citus/.pgenv/src/postgresql-14.8/src/**",
"${workspaceFolder}/**",
"${workspaceFolder}/src/include/",
],
"configurationProvider": "ms-vscode.makefile-tools"
}
"""
return {
"name": f"Citus Development Configuration - Postgres {version}",
"includePath": [
"/usr/local/include",
f"/home/citus/.pgenv/src/postgresql-{version}/src/**",
"${workspaceFolder}/**",
"${workspaceFolder}/src/include/",
],
"configurationProvider": "ms-vscode.makefile-tools"
}
if __name__ == '__main__':
arguments = docopt(__doc__, version='0.1.0')
main(arguments)


@ -1,40 +0,0 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Attach Citus (devcontainer)",
"type": "cppdbg",
"request": "attach",
"processId": "${command:pickProcess}",
"program": "/home/citus/.pgenv/pgsql/bin/postgres",
"additionalSOLibSearchPath": "/home/citus/.pgenv/pgsql/lib",
"setupCommands": [
{
"text": "handle SIGUSR1 noprint nostop pass",
"description": "let gdb not stop when SIGUSR1 is sent to process",
"ignoreFailures": true
}
],
},
{
"name": "Open core file",
"type": "cppdbg",
"request": "launch",
"program": "/home/citus/.pgenv/pgsql/bin/postgres",
"coreDumpPath": "${input:corefile}",
"cwd": "${workspaceFolder}",
"MIMode": "gdb",
}
],
"inputs": [
{
"id": "corefile",
"type": "command",
"command": "extension.commandvariable.file.pickFile",
"args": {
"dialogTitle": "Select core file",
"include": "**/core*",
},
},
],
}


@ -1,222 +0,0 @@
FROM ubuntu:22.04 AS base
# environment is to make python pass an interactive shell, probably not the best timezone given a wide variety of colleagues
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# install build tools
RUN apt update && apt install -y \
bison \
bzip2 \
cpanminus \
curl \
docbook-xml \
docbook-xsl \
flex \
gcc \
git \
libcurl4-gnutls-dev \
libicu-dev \
libkrb5-dev \
liblz4-dev \
libpam0g-dev \
libreadline-dev \
libselinux1-dev \
libssl-dev \
libxml2-utils \
libxslt-dev \
libzstd-dev \
locales \
make \
perl \
pkg-config \
python3 \
python3-pip \
software-properties-common \
sudo \
uuid-dev \
valgrind \
xsltproc \
zlib1g-dev \
&& add-apt-repository ppa:deadsnakes/ppa -y \
&& apt install -y \
python3.9-full \
# software properties pulls in pkexec, which makes the debugger unusable in vscode
&& apt purge -y \
software-properties-common \
&& apt autoremove -y \
&& apt clean
RUN sudo pip3 install pipenv pipenv-shebang
RUN cpanm install IPC::Run
RUN locale-gen en_US.UTF-8
# add the citus user to sudoers and allow all sudoers to login without a password prompt
RUN useradd -ms /bin/bash citus \
&& usermod -aG sudo citus \
&& echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
WORKDIR /home/citus
USER citus
# run all make commands with the number of cores available
RUN echo "export MAKEFLAGS=\"-j \$(nproc)\"" >> "/home/citus/.bashrc"
RUN git clone --branch v1.3.2 --depth 1 https://github.com/theory/pgenv.git .pgenv
COPY --chown=citus:citus pgenv/config/ .pgenv/config/
ENV PATH="/home/citus/.pgenv/bin:${PATH}"
ENV PATH="/home/citus/.pgenv/pgsql/bin:${PATH}"
USER citus
# build postgres versions separately for effective parrallelism and caching of already built versions when changing only certain versions
FROM base AS pg15
RUN MAKEFLAGS="-j $(nproc)" pgenv build 15.13
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
# create a staging directory with all files we want to copy from our pgenv build
# we will copy the contents of the staged folder into the final image at once
RUN mkdir .pgenv-staging/
RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS pg16
RUN MAKEFLAGS="-j $(nproc)" pgenv build 16.9
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
# create a staging directory with all files we want to copy from our pgenv build
# we will copy the contents of the staged folder into the final image at once
RUN mkdir .pgenv-staging/
RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS pg17
RUN MAKEFLAGS="-j $(nproc)" pgenv build 17.5
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
# create a staging directory with all files we want to copy from our pgenv build
# we will copy the contents of the staged folder into the final image at once
RUN mkdir .pgenv-staging/
RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS uncrustify-builder
RUN sudo apt update && sudo apt install -y cmake tree
WORKDIR /uncrustify
RUN curl -L https://github.com/uncrustify/uncrustify/archive/uncrustify-0.68.1.tar.gz | tar xz
WORKDIR /uncrustify/uncrustify-uncrustify-0.68.1/
RUN mkdir build
WORKDIR /uncrustify/uncrustify-uncrustify-0.68.1/build/
RUN cmake ..
RUN MAKEFLAGS="-j $(nproc)" make -s
RUN make install DESTDIR=/uncrustify
# builder for all pipenv's to get them contained in a single layer
FROM base AS pipenv
WORKDIR /workspaces/citus/
# tools to sync pgenv with vscode
COPY --chown=citus:citus .vscode/Pipfile .vscode/Pipfile.lock .devcontainer/.vscode/
RUN ( cd .devcontainer/.vscode && pipenv install )
# environment to run our failure tests
COPY --chown=citus:citus src/ src/
RUN ( cd src/test/regress && pipenv install )
# assemble the final container by copying over the artifacts from separately build containers
FROM base AS devcontainer
LABEL org.opencontainers.image.source=https://github.com/citusdata/citus
LABEL org.opencontainers.image.description="Development container for the Citus project"
LABEL org.opencontainers.image.licenses=AGPL-3.0-only
RUN yes | sudo unminimize
# install developer productivity tools
RUN sudo apt update \
&& sudo apt install -y \
autoconf2.69 \
bash-completion \
fswatch \
gdb \
htop \
libdbd-pg-perl \
libdbi-perl \
lsof \
man \
net-tools \
psmisc \
pspg \
tree \
vim \
&& sudo apt clean
# Since gdb will run in the context of the root user when debugging citus we will need to both
# download the gdbpg.py script as the root user, into their home directory, as well as add .gdbinit
# as a file owned by root
# This will make that as soon as the debugger attaches to a postgres backend (or frankly any other process)
# the gdbpg.py script will be sourced and the developer can direcly use it.
RUN sudo curl -o /root/gdbpg.py https://raw.githubusercontent.com/tvesely/gdbpg/6065eee7872457785f830925eac665aa535caf62/gdbpg.py
COPY --chown=root:root .gdbinit /root/
# install developer dependencies in the global environment
RUN --mount=type=bind,source=requirements.txt,target=requirements.txt pip install -r requirements.txt
# for persistent bash history across devcontainers we need to have
# a) a directory to store the history in
# b) a prompt command to append the history to the file
# c) specify the history file to store the history in
# b and c are done in the .bashrc to make it persistent across shells only
RUN sudo install -d -o citus -g citus /commandhistory \
&& echo "export PROMPT_COMMAND='history -a' && export HISTFILE=/commandhistory/.bash_history" >> "/home/citus/.bashrc"
# install citus-dev
RUN git clone --branch develop https://github.com/citusdata/tools.git citus-tools \
&& ( cd citus-tools/citus_dev && pipenv install ) \
&& mkdir -p ~/.local/bin \
&& ln -s /home/citus/citus-tools/citus_dev/citus_dev-pipenv .local/bin/citus_dev \
&& sudo make -C citus-tools/uncrustify install bindir=/usr/local/bin pkgsysconfdir=/usr/local/etc/ \
&& mkdir -p ~/.local/share/bash-completion/completions/ \
&& ln -s ~/citus-tools/citus_dev/bash_completion ~/.local/share/bash-completion/completions/citus_dev
# TODO some LC_ALL errors, possibly solved by locale-gen
RUN git clone https://github.com/so-fancy/diff-so-fancy.git \
&& mkdir -p ~/.local/bin \
&& ln -s /home/citus/diff-so-fancy/diff-so-fancy .local/bin/
COPY --link --from=uncrustify-builder /uncrustify/usr/ /usr/
COPY --link --from=pg15 /home/citus/.pgenv-staging/ /home/citus/.pgenv/
COPY --link --from=pg16 /home/citus/.pgenv-staging/ /home/citus/.pgenv/
COPY --link --from=pg17 /home/citus/.pgenv-staging/ /home/citus/.pgenv/
COPY --link --from=pipenv /home/citus/.local/share/virtualenvs/ /home/citus/.local/share/virtualenvs/
# place to run your cluster with citus_dev
VOLUME /data
RUN sudo mkdir /data \
&& sudo chown citus:citus /data
COPY --chown=citus:citus .psqlrc .
# with the copy linking of layers github actions seem to misbehave with the ownership of the
# directories leading upto the link, hence a small patch layer to have to right ownerships set
RUN sudo chown --from=root:root citus:citus -R ~
# sets default pg version
RUN pgenv switch 17.5
# make connecting to the coordinator easy
ENV PGPORT=9700


@ -1,11 +0,0 @@
init: ../.vscode/c_cpp_properties.json ../.vscode/launch.json
../.vscode:
mkdir -p ../.vscode
../.vscode/launch.json: ../.vscode .vscode/launch.json
cp .vscode/launch.json ../.vscode/launch.json
../.vscode/c_cpp_properties.json: ../.vscode
./.vscode/generate_c_cpp_properties-json.py ../.vscode/c_cpp_properties.json


@ -1,37 +0,0 @@
{
"image": "ghcr.io/citusdata/citus-devcontainer:main",
"runArgs": [
"--cap-add=SYS_PTRACE",
"--ulimit=core=-1",
],
"forwardPorts": [
9700
],
"customizations": {
"vscode": {
"extensions": [
"eamodio.gitlens",
"GitHub.copilot-chat",
"GitHub.copilot",
"github.vscode-github-actions",
"github.vscode-pull-request-github",
"ms-vscode.cpptools-extension-pack",
"ms-vsliveshare.vsliveshare",
"rioj7.command-variable",
],
"settings": {
"files.exclude": {
"**/*.o": true,
"**/.deps/": true,
}
},
}
},
"mounts": [
"type=volume,target=/data",
"source=citus-bashhistory,target=/commandhistory,type=volume",
],
"updateContentCommand": "./configure",
"postCreateCommand": "make -C .devcontainer/",
}


@ -1,15 +0,0 @@
PGENV_MAKE_OPTIONS=(-s)
PGENV_CONFIGURE_OPTIONS=(
--enable-debug
--enable-depend
--enable-cassert
--enable-tap-tests
'CFLAGS=-ggdb -Og -g3 -fno-omit-frame-pointer -DUSE_VALGRIND'
--with-openssl
--with-libxml
--with-libxslt
--with-uuid=e2fs
--with-icu
--with-lz4
)


@ -1,9 +0,0 @@
black==23.11.0
click==8.1.7
isort==5.12.0
mypy-extensions==1.0.0
packaging==23.2
pathspec==0.11.2
platformdirs==4.0.0
tomli==2.0.1
typing_extensions==4.8.0


@ -1,28 +0,0 @@
[[source]]
name = "pypi"
url = "https://pypi.python.org/simple"
verify_ssl = true
[packages]
mitmproxy = {editable = true, ref = "main", git = "https://github.com/citusdata/mitmproxy.git"}
construct = "*"
docopt = "==0.6.2"
cryptography = ">=41.0.4"
pytest = "*"
psycopg = "*"
filelock = "*"
pytest-asyncio = "*"
pytest-timeout = "*"
pytest-xdist = "*"
pytest-repeat = "*"
pyyaml = "*"
werkzeug = "==2.3.7"
[dev-packages]
black = "*"
isort = "*"
flake8 = "*"
flake8-bugbear = "*"
[requires]
python_version = "3.9"

File diff suppressed because it is too large.


@ -17,7 +17,13 @@ trim_trailing_whitespace = true
insert_final_newline = unset
trim_trailing_whitespace = unset
[*.{sql,sh,py,toml}]
# Don't change test/regress/output directory, this needs to be a separate rule
# for some reason
[/src/test/regress/output/**]
insert_final_newline = unset
trim_trailing_whitespace = unset
[*.{sql,sh}]
indent_style = space
indent_size = 4
tab_width = 4


@ -1,7 +0,0 @@
[flake8]
# E203 is ignored for black
extend-ignore = E203
# black will truncate to 88 characters usually, but long string literals it
# might keep. That's fine in most cases unless it gets really excessive.
max-line-length = 150
exclude = .git,__pycache__,vendor,tmp_*

8
.gitattributes vendored

@ -16,6 +16,7 @@ README.* conflict-marker-size=32
# Test output files that contain extra whitespace
*.out -whitespace
src/test/regress/output/*.source -whitespace
# These files are maintained or generated elsewhere. We take them as is.
configure -whitespace
@ -25,9 +26,10 @@ configure -whitespace
# except these exceptions...
src/backend/distributed/utils/citus_outfuncs.c -citus-style
src/backend/distributed/deparser/ruleutils_15.c -citus-style
src/backend/distributed/deparser/ruleutils_16.c -citus-style
src/backend/distributed/deparser/ruleutils_17.c -citus-style
src/backend/distributed/utils/pg11_snprintf.c -citus-style
src/backend/distributed/deparser/ruleutils_11.c -citus-style
src/backend/distributed/deparser/ruleutils_12.c -citus-style
src/backend/distributed/deparser/ruleutils_13.c -citus-style
src/backend/distributed/commands/index_pg_source.c -citus-style
src/include/distributed/citus_nodes.h -citus-style


@ -1,23 +0,0 @@
name: 'Parallelization matrix'
inputs:
count:
required: false
default: 32
outputs:
json:
value: ${{ steps.generate_matrix.outputs.json }}
runs:
using: "composite"
steps:
- name: Generate parallelization matrix
id: generate_matrix
shell: bash
run: |-
json_array="{\"include\": ["
for ((i = 1; i <= ${{ inputs.count }}; i++)); do
json_array+="{\"id\":\"$i\"},"
done
json_array=${json_array%,}
json_array+=" ]}"
echo "json=$json_array" >> "$GITHUB_OUTPUT"
echo "json=$json_array"


@ -1,38 +0,0 @@
name: save_logs_and_results
inputs:
folder:
required: false
default: "log"
runs:
using: composite
steps:
- uses: actions/upload-artifact@v4.6.0
name: Upload logs
with:
name: ${{ inputs.folder }}
if-no-files-found: ignore
path: |
src/test/**/proxy.output
src/test/**/results/
src/test/**/tmp_check/master/log
src/test/**/tmp_check/worker.57638/log
src/test/**/tmp_check/worker.57637/log
src/test/**/*.diffs
src/test/**/out/ddls.sql
src/test/**/out/queries.sql
src/test/**/logfile_*
/tmp/pg_upgrade_newData_logs
- name: Publish regression.diffs
run: |-
diffs="$(find src/test/regress -name "*.diffs" -exec cat {} \;)"
if ! [ -z "$diffs" ]; then
echo '```diff' >> $GITHUB_STEP_SUMMARY
echo -E "$diffs" >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo -E $diffs
fi
shell: bash
- name: Print stack traces
run: "./ci/print_stack_trace.sh"
if: failure()
shell: bash


@ -1,35 +0,0 @@
name: setup_extension
inputs:
pg_major:
required: false
skip_installation:
required: false
default: false
type: boolean
runs:
using: composite
steps:
- name: Expose $PG_MAJOR to Github Env
run: |-
if [ -z "${{ inputs.pg_major }}" ]; then
echo "PG_MAJOR=${PG_MAJOR}" >> $GITHUB_ENV
else
echo "PG_MAJOR=${{ inputs.pg_major }}" >> $GITHUB_ENV
fi
shell: bash
- uses: actions/download-artifact@v4.1.8
with:
name: build-${{ env.PG_MAJOR }}
- name: Install Extension
if: ${{ inputs.skip_installation == 'false' }}
run: tar xfv "install-$PG_MAJOR.tar" --directory /
shell: bash
- name: Configure
run: |-
chown -R circleci .
git config --global --add safe.directory ${GITHUB_WORKSPACE}
gosu circleci ./configure --without-pg-version-check
shell: bash
- name: Enable core dumps
run: ulimit -c unlimited
shell: bash


@ -1,27 +0,0 @@
name: coverage
inputs:
flags:
required: false
codecov_token:
required: true
runs:
using: composite
steps:
- uses: codecov/codecov-action@v3
with:
flags: ${{ inputs.flags }}
token: ${{ inputs.codecov_token }}
verbose: true
gcov: true
- name: Create codeclimate coverage
run: |-
lcov --directory . --capture --output-file lcov.info
lcov --remove lcov.info -o lcov.info '/usr/*'
sed "s=^SF:$PWD/=SF:=g" -i lcov.info # relative pats are required by codeclimate
mkdir -p /tmp/codeclimate
cc-test-reporter format-coverage -t lcov -o /tmp/codeclimate/${{ inputs.flags }}.json lcov.info
shell: bash
- uses: actions/upload-artifact@v4.6.0
with:
path: "/tmp/codeclimate/*.json"
name: codeclimate-${{ inputs.flags }}


@ -1,3 +0,0 @@
base:
- ".* warning: ignoring old recipe for target [`']check'"
- ".* warning: overriding recipe for target [`']check'"


@ -1,51 +0,0 @@
#!/bin/bash
set -ex
# Function to get the OS version
get_rpm_os_version() {
if [[ -f /etc/centos-release ]]; then
cat /etc/centos-release | awk '{print $4}'
elif [[ -f /etc/oracle-release ]]; then
cat /etc/oracle-release | awk '{print $5}'
else
echo "Unknown"
fi
}
package_type=${1}
# Since $HOME is set in GH_Actions as /github/home, pyenv fails to create virtualenvs.
# For this script, we set $HOME to /root and then set it back to /github/home.
GITHUB_HOME="${HOME}"
export HOME="/root"
eval "$(pyenv init -)"
pyenv versions
pyenv virtualenv ${PACKAGING_PYTHON_VERSION} packaging_env
pyenv activate packaging_env
git clone -b v0.8.27 --depth=1 https://github.com/citusdata/tools.git tools
python3 -m pip install -r tools/packaging_automation/requirements.txt
echo "Package type: ${package_type}"
echo "OS version: $(get_rpm_os_version)"
# For RHEL 7, we need to install urllib3<2 due to the execution error below:
# ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl'
# module is compiled with 'OpenSSL 1.0.2k-fips 26 Jan 2017'.
# See: https://github.com/urllib3/urllib3/issues/2168
if [[ ${package_type} == "rpm" && $(get_rpm_os_version) == 7* ]]; then
python3 -m pip uninstall -y urllib3
python3 -m pip install 'urllib3<2'
fi
python3 -m tools.packaging_automation.validate_build_output --output_file output.log \
--ignore_file .github/packaging/packaging_ignore.yml \
--package_type ${package_type}
pyenv deactivate
# Set $HOME back to /github/home
export HOME=${GITHUB_HOME}
# Print the output to the console


@ -1,545 +0,0 @@
name: Build & Test
run-name: Build & Test - ${{ github.event.pull_request.title || github.ref_name }}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
inputs:
skip_test_flakyness:
required: false
default: false
type: boolean
push:
branches:
- "main"
- "release-*"
pull_request:
types: [opened, reopened,synchronize]
merge_group:
jobs:
# Since GHA does not interpolate env variables in matrix context, we need to
# define them in a separate job and use them in other jobs.
params:
runs-on: ubuntu-latest
name: Initialize parameters
outputs:
build_image_name: "ghcr.io/citusdata/extbuilder"
test_image_name: "ghcr.io/citusdata/exttester"
citusupgrade_image_name: "ghcr.io/citusdata/citusupgradetester"
fail_test_image_name: "ghcr.io/citusdata/failtester"
pgupgrade_image_name: "ghcr.io/citusdata/pgupgradetester"
style_checker_image_name: "ghcr.io/citusdata/stylechecker"
style_checker_tools_version: "0.8.18"
sql_snapshot_pg_version: "17.5"
image_suffix: "-dev-d28f316"
pg15_version: '{ "major": "15", "full": "15.13" }'
pg16_version: '{ "major": "16", "full": "16.9" }'
pg17_version: '{ "major": "17", "full": "17.5" }'
upgrade_pg_versions: "15.13-16.9-17.5"
steps:
# Since GHA jobs need at least one step we use a noop step here.
- name: Set up parameters
run: echo 'noop'
check-sql-snapshots:
needs: params
runs-on: ubuntu-latest
container:
image: ${{ needs.params.outputs.build_image_name }}:${{ needs.params.outputs.sql_snapshot_pg_version }}${{ needs.params.outputs.image_suffix }}
options: --user root
steps:
- uses: actions/checkout@v4
- name: Check Snapshots
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
ci/check_sql_snapshots.sh
check-style:
needs: params
runs-on: ubuntu-latest
container:
image: ${{ needs.params.outputs.style_checker_image_name }}:${{ needs.params.outputs.style_checker_tools_version }}${{ needs.params.outputs.image_suffix }}
steps:
- name: Check Snapshots
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Check C Style
run: citus_indent --check
- name: Check Python style
run: black --check .
- name: Check Python import order
run: isort --check .
- name: Check Python lints
run: flake8 .
- name: Fix whitespace
run: ci/editorconfig.sh && git diff --exit-code
- name: Remove useless declarations
run: ci/remove_useless_declarations.sh && git diff --cached --exit-code
- name: Sort and group includes
run: ci/sort_and_group_includes.sh && git diff --exit-code
- name: Normalize test output
run: ci/normalize_expected.sh && git diff --exit-code
- name: Check for C-style comments in migration files
run: ci/disallow_c_comments_in_migrations.sh && git diff --exit-code
- name: 'Check for comments that start with # character in spec files'
run: ci/disallow_hash_comments_in_spec_files.sh && git diff --exit-code
- name: Check for gitignore entries for source files
run: ci/fix_gitignore.sh && git diff --exit-code
- name: Check for lengths of changelog entries
run: ci/disallow_long_changelog_entries.sh
- name: Check for banned C API usage
run: ci/banned.h.sh
- name: Check for tests missing in schedules
run: ci/check_all_tests_are_run.sh
- name: Check if all CI scripts are actually run
run: ci/check_all_ci_scripts_are_run.sh
- name: Check if all GUCs are sorted alphabetically
run: ci/check_gucs_are_alphabetically_sorted.sh
- name: Check for missing downgrade scripts
run: ci/check_migration_files.sh
build:
needs: params
name: Build for PG${{ fromJson(matrix.pg_version).major }}
strategy:
fail-fast: false
matrix:
image_name:
- ${{ needs.params.outputs.build_image_name }}
image_suffix:
- ${{ needs.params.outputs.image_suffix}}
pg_version:
- ${{ needs.params.outputs.pg15_version }}
- ${{ needs.params.outputs.pg16_version }}
- ${{ needs.params.outputs.pg17_version }}
runs-on: ubuntu-latest
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ matrix.image_suffix }}"
options: --user root
steps:
- uses: actions/checkout@v4
- name: Expose $PG_MAJOR to Github Env
run: echo "PG_MAJOR=${PG_MAJOR}" >> $GITHUB_ENV
shell: bash
- name: Build
run: "./ci/build-citus.sh"
shell: bash
- uses: actions/upload-artifact@v4.6.0
with:
name: build-${{ env.PG_MAJOR }}
path: |-
./build-${{ env.PG_MAJOR }}/*
./install-${{ env.PG_MAJOR }}.tar
test-citus:
name: PG${{ fromJson(matrix.pg_version).major }} - ${{ matrix.make }}
strategy:
fail-fast: false
matrix:
suite:
- regress
image_name:
- ${{ needs.params.outputs.test_image_name }}
pg_version:
- ${{ needs.params.outputs.pg15_version }}
- ${{ needs.params.outputs.pg16_version }}
- ${{ needs.params.outputs.pg17_version }}
make:
- check-split
- check-multi
- check-multi-1
- check-multi-mx
- check-vanilla
- check-isolation
- check-operations
- check-follower-cluster
- check-columnar
- check-columnar-isolation
- check-enterprise
- check-enterprise-isolation
- check-enterprise-isolation-logicalrep-1
- check-enterprise-isolation-logicalrep-2
- check-enterprise-isolation-logicalrep-3
include:
- make: check-failure
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-failure
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-failure
pg_version: ${{ needs.params.outputs.pg17_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-enterprise-failure
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-enterprise-failure
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-enterprise-failure
pg_version: ${{ needs.params.outputs.pg17_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-pytest
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-pytest
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-pytest
pg_version: ${{ needs.params.outputs.pg17_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: installcheck
suite: cdc
image_name: ${{ needs.params.outputs.test_image_name }}
pg_version: ${{ needs.params.outputs.pg15_version }}
- make: installcheck
suite: cdc
image_name: ${{ needs.params.outputs.test_image_name }}
pg_version: ${{ needs.params.outputs.pg16_version }}
- make: installcheck
suite: cdc
image_name: ${{ needs.params.outputs.test_image_name }}
pg_version: ${{ needs.params.outputs.pg17_version }}
- make: check-query-generator
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-query-generator
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-query-generator
pg_version: ${{ needs.params.outputs.pg17_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
runs-on: ubuntu-latest
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root --dns=8.8.8.8
# Because GitHub creates a default network for each job, we need to use
# --dns= to get DNS settings similar to our other CI systems and local
# machines. Otherwise, we may see different results.
needs:
- params
- build
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
- name: Run Test
run: gosu circleci make -C src/test/${{ matrix.suite }} ${{ matrix.make }}
timeout-minutes: 20
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: ${{ fromJson(matrix.pg_version).major }}_${{ matrix.make }}
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.PG_MAJOR }}_${{ matrix.suite }}_${{ matrix.make }}
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-arbitrary-configs:
name: PG${{ fromJson(matrix.pg_version).major }} - check-arbitrary-configs-${{ matrix.parallel }}
runs-on: ["self-hosted", "1ES.Pool=1es-gha-citusdata-pool"]
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
strategy:
fail-fast: false
matrix:
image_name:
- ${{ needs.params.outputs.fail_test_image_name }}
pg_version:
- ${{ needs.params.outputs.pg15_version }}
- ${{ needs.params.outputs.pg16_version }}
- ${{ needs.params.outputs.pg17_version }}
parallel: [0,1,2,3,4,5] # workaround for running 6 parallel jobs
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
- name: Test arbitrary configs
run: |-
# we use parallel jobs to split the tests into 6 parts and run them in parallel
# the script below extracts the tests for the current job
N=6 # Total number of jobs (see matrix.parallel)
X=${{ matrix.parallel }} # Current job number
TESTS=$(src/test/regress/citus_tests/print_test_names.py |
tr '\n' ',' | awk -v N="$N" -v X="$X" -F, '{
split("", parts)
for (i = 1; i <= NF; i++) {
parts[i % N] = parts[i % N] $i ","
}
print substr(parts[X], 1, length(parts[X])-1)
}')
echo $TESTS
gosu circleci \
make -C src/test/regress \
check-arbitrary-configs parallel=4 CONFIGS=$TESTS
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: ${{ env.PG_MAJOR }}_arbitrary_configs_${{ matrix.parallel }}
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.PG_MAJOR }}_arbitrary_configs_${{ matrix.parallel }}
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-pg-upgrade:
name: PG${{ matrix.old_pg_major }}-PG${{ matrix.new_pg_major }} - check-pg-upgrade
runs-on: ubuntu-latest
container:
image: "${{ needs.params.outputs.pgupgrade_image_name }}:${{ needs.params.outputs.upgrade_pg_versions }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
strategy:
fail-fast: false
matrix:
include:
- old_pg_major: 15
new_pg_major: 16
- old_pg_major: 16
new_pg_major: 17
- old_pg_major: 15
new_pg_major: 17
env:
old_pg_major: ${{ matrix.old_pg_major }}
new_pg_major: ${{ matrix.new_pg_major }}
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
with:
pg_major: "${{ env.old_pg_major }}"
- uses: "./.github/actions/setup_extension"
with:
pg_major: "${{ env.new_pg_major }}"
- name: Install and test postgres upgrade
run: |-
gosu circleci \
make -C src/test/regress \
check-pg-upgrade \
old-bindir=/usr/lib/postgresql/${{ env.old_pg_major }}/bin \
new-bindir=/usr/lib/postgresql/${{ env.new_pg_major }}/bin
- name: Copy pg_upgrade logs for newData dir
run: |-
mkdir -p /tmp/pg_upgrade_newData_logs
if ls src/test/regress/tmp_upgrade/newData/*.log 1> /dev/null 2>&1; then
cp src/test/regress/tmp_upgrade/newData/*.log /tmp/pg_upgrade_newData_logs
fi
if: failure()
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: ${{ env.old_pg_major }}_${{ env.new_pg_major }}_upgrade
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.old_pg_major }}_${{ env.new_pg_major }}_upgrade
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-citus-upgrade:
name: PG${{ fromJson(needs.params.outputs.pg15_version).major }} - check-citus-upgrade
runs-on: ubuntu-latest
container:
image: "${{ needs.params.outputs.citusupgrade_image_name }}:${{ fromJson(needs.params.outputs.pg15_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
with:
skip_installation: true
- name: Install and test citus upgrade
run: |-
# run make check-citus-upgrade for all citus versions
# the image has ${CITUS_VERSIONS} set to all the versions it contains the binaries of
for citus_version in ${CITUS_VERSIONS}; do \
gosu circleci \
make -C src/test/regress \
check-citus-upgrade \
bindir=/usr/lib/postgresql/${PG_MAJOR}/bin \
citus-old-version=${citus_version} \
citus-pre-tar=/install-pg${PG_MAJOR}-citus${citus_version}.tar \
citus-post-tar=${GITHUB_WORKSPACE}/install-$PG_MAJOR.tar; \
done;
# run make check-citus-upgrade-mixed for all citus versions
# the image has ${CITUS_VERSIONS} set to all the versions it contains the binaries of
for citus_version in ${CITUS_VERSIONS}; do \
gosu circleci \
make -C src/test/regress \
check-citus-upgrade-mixed \
citus-old-version=${citus_version} \
bindir=/usr/lib/postgresql/${PG_MAJOR}/bin \
citus-pre-tar=/install-pg${PG_MAJOR}-citus${citus_version}.tar \
citus-post-tar=${GITHUB_WORKSPACE}/install-$PG_MAJOR.tar; \
done;
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: ${{ env.PG_MAJOR }}_citus_upgrade
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.PG_MAJOR }}_citus_upgrade
codecov_token: ${{ secrets.CODECOV_TOKEN }}
upload-coverage:
if: always()
env:
CC_TEST_REPORTER_ID: ${{ secrets.CC_TEST_REPORTER_ID }}
runs-on: ubuntu-latest
container:
image: ${{ needs.params.outputs.test_image_name }}:${{ fromJson(needs.params.outputs.pg17_version).full }}${{ needs.params.outputs.image_suffix }}
needs:
- params
- test-citus
- test-arbitrary-configs
- test-citus-upgrade
- test-pg-upgrade
steps:
- uses: actions/download-artifact@v4.1.8
with:
pattern: codeclimate*
path: codeclimate
merge-multiple: true
- name: Upload coverage results to Code Climate
run: |-
cc-test-reporter sum-coverage codeclimate/*.json -o total.json
cc-test-reporter upload-coverage -i total.json
ch_benchmark:
name: CH Benchmark
if: startsWith(github.ref, 'refs/heads/ch_benchmark/')
runs-on: ubuntu-latest
needs:
- build
steps:
- uses: actions/checkout@v4
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: install dependencies and run ch_benchmark tests
uses: azure/CLI@v1
with:
inlineScript: |
cd ./src/test/hammerdb
chmod +x run_hammerdb.sh
run_hammerdb.sh citusbot_ch_benchmark_rg
tpcc_benchmark:
name: TPCC Benchmark
if: startsWith(github.ref, 'refs/heads/tpcc_benchmark/')
runs-on: ubuntu-latest
needs:
- build
steps:
- uses: actions/checkout@v4
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: install dependencies and run tpcc_benchmark tests
uses: azure/CLI@v1
with:
inlineScript: |
cd ./src/test/hammerdb
chmod +x run_hammerdb.sh
run_hammerdb.sh citusbot_tpcc_benchmark_rg
prepare_parallelization_matrix_32:
name: Prepare parallelization matrix
if: ${{ needs.test-flakyness-pre.outputs.tests != ''}}
needs: test-flakyness-pre
runs-on: ubuntu-latest
outputs:
json: ${{ steps.parallelization.outputs.json }}
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/parallelization"
id: parallelization
with:
count: 32
test-flakyness-pre:
name: Detect regression tests that need to be run
if: ${{ !inputs.skip_test_flakyness }}
runs-on: ubuntu-latest
needs: build
outputs:
tests: ${{ steps.detect-regression-tests.outputs.tests }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Detect regression tests that need to be run
id: detect-regression-tests
run: |-
detected_changes=$(git diff origin/main... --name-only --diff-filter=AM | (grep 'src/test/regress/sql/.*\.sql\|src/test/regress/spec/.*\.spec\|src/test/regress/citus_tests/test/test_.*\.py' || true))
tests=${detected_changes}
# split the tests to be skipped --today we only skip upgrade tests
skipped_tests=""
not_skipped_tests=""
for test in $tests; do
if [[ $test =~ ^src/test/regress/sql/upgrade_ ]]; then
skipped_tests="$skipped_tests $test"
else
not_skipped_tests="$not_skipped_tests $test"
fi
done
if [ ! -z "$skipped_tests" ]; then
echo "Skipped tests " $skipped_tests
fi
if [ -z "$not_skipped_tests" ]; then
echo "Not detected any tests that flaky test detection should run"
else
echo "Detected tests " $not_skipped_tests
fi
echo 'tests<<EOF' >> $GITHUB_OUTPUT
echo "$not_skipped_tests" >> "$GITHUB_OUTPUT"
echo 'EOF' >> $GITHUB_OUTPUT
test-flakyness:
if: ${{ needs.test-flakyness-pre.outputs.tests != ''}}
name: Test flakyness
runs-on: ubuntu-latest
container:
image: ${{ needs.params.outputs.fail_test_image_name }}:${{ fromJson(needs.params.outputs.pg17_version).full }}${{ needs.params.outputs.image_suffix }}
options: --user root
env:
runs: 8
needs:
- params
- build
- test-flakyness-pre
- prepare_parallelization_matrix_32
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.prepare_parallelization_matrix_32.outputs.json) }}
steps:
- uses: actions/checkout@v4
- uses: actions/download-artifact@v4.1.8
- uses: "./.github/actions/setup_extension"
- name: Run minimal tests
run: |-
tests="${{ needs.test-flakyness-pre.outputs.tests }}"
tests_array=($tests)
for test in "${tests_array[@]}"
do
test_name=$(echo "$test" | sed -r "s/.+\/(.+)\..+/\1/")
gosu circleci src/test/regress/citus_tests/run_test.py $test_name --repeat ${{ env.runs }} --use-whole-schedule-line
done
shell: bash
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: test_flakyness_parallel_${{ matrix.id }}


@ -1,79 +0,0 @@
name: "CodeQL"
on:
schedule:
- cron: '59 23 * * 6'
workflow_dispatch:
jobs:
analyze:
name: Analyze
runs-on: ubuntu-22.04
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'cpp', 'python']
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
- name: Install package dependencies
run: |
# Create the file repository configuration:
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main 15" > /etc/apt/sources.list.d/pgdg.list'
# Import the repository signing key:
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install -y --no-install-recommends \
autotools-dev \
build-essential \
ca-certificates \
curl \
debhelper \
devscripts \
fakeroot \
flex \
libcurl4-openssl-dev \
libdistro-info-perl \
libedit-dev \
libfile-fcntllock-perl \
libicu-dev \
libkrb5-dev \
liblz4-1 \
liblz4-dev \
libpam0g-dev \
libreadline-dev \
libselinux1-dev \
libssl-dev \
libxslt-dev \
libzstd-dev \
libzstd1 \
lintian \
postgresql-server-dev-15 \
postgresql-server-dev-all \
python3-pip \
python3-setuptools \
wget \
zlib1g-dev
- name: Configure, Build and Install Citus
if: matrix.language == 'cpp'
run: |
./configure
make -sj8
sudo make install-all
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3


@ -1,54 +0,0 @@
name: "Build devcontainer"
# Since building containers can be quite time consuming and can take up some storage,
# there is no need to finish a build for a tag if new changes are concurrently being made.
# This cancels any previous builds for the same tag, and only the latest one will be kept.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
push:
paths:
- ".devcontainer/**"
workflow_dispatch:
jobs:
docker:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
attestations: write
id-token: write
steps:
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
ghcr.io/citusdata/citus-devcontainer
tags: |
type=ref,event=branch
type=sha
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: 'Login to GitHub Container Registry'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{github.actor}}
password: ${{secrets.GITHUB_TOKEN}}
-
name: Build and push
uses: docker/build-push-action@v5
with:
context: "{{defaultContext}}:.devcontainer"
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max


@ -1,79 +0,0 @@
name: Flaky test debugging
run-name: Flaky test debugging - ${{ inputs.flaky_test }} (${{ inputs.flaky_test_runs_per_job }}x${{ inputs.flaky_test_parallel_jobs }})
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
inputs:
flaky_test:
required: true
type: string
description: Test to run
flaky_test_runs_per_job:
required: false
default: 8
type: number
description: Number of times to run the test
flaky_test_parallel_jobs:
required: false
default: 32
type: number
description: Number of parallel jobs to run
jobs:
build:
name: Build Citus
runs-on: ubuntu-latest
container:
image: ${{ vars.build_image_name }}:${{ vars.pg15_version }}${{ vars.image_suffix }}
options: --user root
steps:
- uses: actions/checkout@v4
- name: Configure, Build, and Install
run: |
echo "PG_MAJOR=${PG_MAJOR}" >> $GITHUB_ENV
./ci/build-citus.sh
shell: bash
- uses: actions/upload-artifact@v4.6.0
with:
name: build-${{ env.PG_MAJOR }}
path: |-
./build-${{ env.PG_MAJOR }}/*
./install-${{ env.PG_MAJOR }}.tar
prepare_parallelization_matrix:
name: Prepare parallelization matrix
runs-on: ubuntu-latest
outputs:
json: ${{ steps.parallelization.outputs.json }}
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/parallelization"
id: parallelization
with:
count: ${{ inputs.flaky_test_parallel_jobs }}
test_flakyness:
name: Test flakyness
runs-on: ubuntu-latest
container:
image: ${{ vars.fail_test_image_name }}:${{ vars.pg15_version }}${{ vars.image_suffix }}
options: --user root
needs:
[build, prepare_parallelization_matrix]
env:
test: "${{ inputs.flaky_test }}"
runs: "${{ inputs.flaky_test_runs_per_job }}"
skip: false
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.prepare_parallelization_matrix.outputs.json) }}
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
- name: Run minimal tests
run: |-
gosu circleci src/test/regress/citus_tests/run_test.py ${{ env.test }} --repeat ${{ env.runs }} --use-whole-schedule-line
shell: bash
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: check_flakyness_parallel_${{ matrix.id }}


@ -1,177 +0,0 @@
name: Build tests in packaging images
on:
pull_request:
types: [opened, reopened,synchronize]
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
get_postgres_versions_from_file:
runs-on: ubuntu-latest
outputs:
pg_versions: ${{ steps.get-postgres-versions.outputs.pg_versions }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Get Postgres Versions
id: get-postgres-versions
run: |
set -euxo pipefail
# Postgres versions are stored in .github/workflows/build_and_test.yml
# file in json strings with major and full keys.
# The command below extracts the versions and gets the unique values.
pg_versions=$(cat .github/workflows/build_and_test.yml | grep -oE '"major": "[0-9]+", "full": "[0-9.]+"' | sed -E 's/"major": "([0-9]+)", "full": "([0-9.]+)"/\1/g' | sort | uniq | tr '\n', ',')
pg_versions_array="[ ${pg_versions} ]"
echo "Supported PG Versions: ${pg_versions_array}"
# Below line is needed to set the output variable to be used in the next job
echo "pg_versions=${pg_versions_array}" >> $GITHUB_OUTPUT
shell: bash
rpm_build_tests:
name: rpm_build_tests
needs: get_postgres_versions_from_file
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
# We use separate images for different Postgres versions in rpm based distros.
# For this reason, we need to use a "matrix" to generate names of
# rpm images, e.g. citus/packaging:centos-7-pg12
packaging_docker_image:
- oraclelinux-8
- almalinux-8
- almalinux-9
POSTGRES_VERSION: ${{ fromJson(needs.get_postgres_versions_from_file.outputs.pg_versions) }}
container:
image: citus/packaging:${{ matrix.packaging_docker_image }}-pg${{ matrix.POSTGRES_VERSION }}
options: --user root
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set Postgres and python parameters for rpm based distros
run: |
echo "/usr/pgsql-${{ matrix.POSTGRES_VERSION }}/bin" >> $GITHUB_PATH
echo "/root/.pyenv/bin:$PATH" >> $GITHUB_PATH
echo "PACKAGING_PYTHON_VERSION=3.8.16" >> $GITHUB_ENV
- name: Configure
run: |
echo "Current Shell:$0"
echo "GCC Version: $(gcc --version)"
./configure 2>&1 | tee output.log
- name: Make clean
run: |
make clean
- name: Make
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
make CFLAGS="-Wno-missing-braces" -sj$(cat /proc/cpuinfo | grep "core id" | wc -l) 2>&1 | tee -a output.log
# Check the exit code of the make command
make_exit_code=${PIPESTATUS[0]}
# If the make command returned a non-zero exit code, exit with the same code
if [[ $make_exit_code -ne 0 ]]; then
echo "make command failed with exit code $make_exit_code"
exit $make_exit_code
fi
- name: Make install
run: |
make CFLAGS="-Wno-missing-braces" install 2>&1 | tee -a output.log
- name: Validate output
env:
POSTGRES_VERSION: ${{ matrix.POSTGRES_VERSION }}
PACKAGING_DOCKER_IMAGE: ${{ matrix.packaging_docker_image }}
run: |
echo "Postgres version: ${POSTGRES_VERSION}"
./.github/packaging/validate_build_output.sh "rpm"
deb_build_tests:
name: deb_build_tests
needs: get_postgres_versions_from_file
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
# On deb based distros, we use the same docker image for
# builds based on different Postgres versions because deb
# based images include all postgres installations.
# For this reason, we have multiple runs --which is 3 today--
# for each deb based image and we use POSTGRES_VERSION to set
# PG_CONFIG variable in each of those runs.
packaging_docker_image:
- debian-bookworm-all
- debian-bullseye-all
- ubuntu-focal-all
- ubuntu-jammy-all
POSTGRES_VERSION: ${{ fromJson(needs.get_postgres_versions_from_file.outputs.pg_versions) }}
container:
image: citus/packaging:${{ matrix.packaging_docker_image }}
options: --user root
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set pg_config path and python parameters for deb based distros
run: |
echo "PG_CONFIG=/usr/lib/postgresql/${{ matrix.POSTGRES_VERSION }}/bin/pg_config" >> $GITHUB_ENV
echo "/root/.pyenv/bin:$PATH" >> $GITHUB_PATH
echo "PACKAGING_PYTHON_VERSION=3.8.16" >> $GITHUB_ENV
- name: Configure
run: |
echo "Current Shell:$0"
echo "GCC Version: $(gcc --version)"
./configure 2>&1 | tee output.log
- name: Make clean
run: |
make clean
- name: Make
shell: bash
run: |
set -e
git config --global --add safe.directory ${GITHUB_WORKSPACE}
make -sj$(cat /proc/cpuinfo | grep "core id" | wc -l) 2>&1 | tee -a output.log
# Check the exit code of the make command
make_exit_code=${PIPESTATUS[0]}
# If the make command returned a non-zero exit code, exit with the same code
if [[ $make_exit_code -ne 0 ]]; then
echo "make command failed with exit code $make_exit_code"
exit $make_exit_code
fi
- name: Make install
run: |
make install 2>&1 | tee -a output.log
- name: Validate output
env:
POSTGRES_VERSION: ${{ matrix.POSTGRES_VERSION }}
PACKAGING_DOCKER_IMAGE: ${{ matrix.packaging_docker_image }}
run: |
echo "Postgres version: ${POSTGRES_VERSION}"
./.github/packaging/validate_build_output.sh "deb"

14
.gitignore vendored

@ -38,23 +38,9 @@ lib*.pc
/Makefile.global
/src/Makefile.custom
/compile_commands.json
/src/backend/distributed/cdc/build-cdc-*/*
/src/test/cdc/tmp_check/*
# temporary files vim creates
*.swp
# vscode
.vscode/*
# output from diff normalization that shouldn't be commited
*.unmodified
*.modified
# style related temporary outputs
*.uncrustify
.venv
# added output when modifying check_gucs_are_alphabetically_sorted.sh
guc.out

File diff suppressed because it is too large


@ -1,9 +0,0 @@
# Microsoft Open Source Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
Resources:
- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns


@ -11,65 +11,8 @@ sign a Contributor License Agreement (CLA). For an explanation of
why we ask this as well as instructions for how to proceed, see the
[Microsoft CLA](https://cla.opensource.microsoft.com/).
### Devcontainer / Github Codespaces
The easiest way to start contributing is via our devcontainer. This container works both locally in Visual Studio Code with Docker Desktop / Docker for Mac and in [Github Codespaces](https://github.com/features/codespaces). To open the project in vscode you will need the [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers). For codespaces you will need to [create a new codespace](https://codespace.new/citusdata/citus).
With the extension installed, you can run the following from the command palette to get started:
```
> Dev Containers: Clone Repository in Container Volume...
```
In the subsequent popup, paste the URL of the repo and hit enter.
```
https://github.com/citusdata/citus
```
This will create an isolated Workspace in vscode, complete with all tools required to build, test and run the Citus extension. We keep this container up to date with the supported postgres versions as well as the exact versions of tooling we use.
To get started quickly, we suggest splitting your terminal once so you have two shells: the left one in `/workspaces/citus`, the right one changed to `/data`. The left terminal is used to interact with the project, the right one with a testing cluster.
To install Citus from source, run `make install -s` in the first terminal. Once installed, you can start a Citus cluster in the second terminal via `citus_dev make citus`. The cluster runs in the background and can be interacted with via `citus_dev`; run `citus_dev --help` to get an overview of the available commands.
With the Citus cluster running, you can connect to the coordinator in the first terminal via `psql -p9700`. Because the coordinator is the most common entrypoint, the `PGPORT` environment variable is set accordingly, so a plain `psql` will connect directly to the coordinator.
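Put together, a typical first session in the devcontainer looks roughly like the sketch below; it only repeats the commands mentioned above, with the two terminals shown as comments.
```bash
# left terminal: build and install Citus from source
cd /workspaces/citus
make install -s

# right terminal: create and start a development cluster under /data
cd /data
citus_dev make citus

# left terminal again: connect to the coordinator
# (PGPORT is preset to the coordinator port, so a plain `psql` works too)
psql -p 9700
```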
### Debugging in VS Code
1. Start Debugging: Press F5 in VS Code to start debugging. When prompted, you'll need to attach the debugger to the appropriate PostgreSQL process.
2. Identify the Process: If you're running a psql command, take note of the PID that appears in your psql prompt. For example:
```
[local] citus@citus:9700 (PID: 5436)=#
```
This PID (5436 in this case) indicates the process that you should attach the debugger to.
3. If you are uncertain about which process to attach, you can list all running PostgreSQL processes using the following command (an alternative, query-based sketch follows this list):
```
ps aux | grep postgres
```
Look for the process associated with the PID you noted. For example:
```
citus 5436 0.0 0.0 0 0 ? S 14:00 0:00 postgres: citus citus
```
4. Attach the Debugger: Once you've identified the correct PID, select that process when prompted in VS Code to attach the debugger. You should now be able to debug the PostgreSQL session tied to the psql command.
5. Set Breakpoints and Debug: With the debugger attached, you can set breakpoints within the code. This allows you to step through the code execution, inspect variables, and fully debug the PostgreSQL instance running in your container.
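If you are still unsure which backend to attach to, you can also ask PostgreSQL itself. The sketch below is not Citus-specific; it only uses the standard `pg_stat_activity` view and assumes the coordinator from the devcontainer setup listening on port 9700.
```bash
# list active client backends with their PIDs, users and state,
# so you can pick the right process to attach the debugger to
psql -p 9700 -c "SELECT pid, usename, application_name, state
                 FROM pg_stat_activity
                 WHERE backend_type = 'client backend';"
```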
### Getting and building
[PostgreSQL documentation](https://www.postgresql.org/support/versioning/) has a
section on upgrade policy.
We always recommend that all users run the latest available minor release [for PostgreSQL] for whatever major version is in use.
We expect Citus users to honor this recommendation and use the latest available
PostgreSQL minor release. Failure to do so may result in failures in our test
suite. There are some known improvements in PG test architecture such as
[this commit](https://github.com/postgres/postgres/commit/3f323956128ff8589ce4d3a14e8b950837831803)
that are missing in earlier minor versions.
#### Mac
1. Install Xcode
@ -87,19 +30,9 @@ that are missing in earlier minor versions.
cd citus
./configure
# If you have already installed the project, you need to clean it first
make clean
make
make install
# Optionally, you might instead want to use `make install-all`
# since `multi_extension` regression test would fail due to missing downgrade scripts.
cd src/test/regress
pip install pipenv
pipenv --rm
pipenv install
pipenv shell
make check
```
@ -114,10 +47,10 @@ that are missing in earlier minor versions.
sudo apt-key add -
sudo apt-get update
sudo apt-get install -y postgresql-server-dev-14 postgresql-14 \
sudo apt-get install -y postgresql-server-dev-11 postgresql-11 \
autoconf flex git libcurl4-gnutls-dev libicu-dev \
libkrb5-dev liblz4-dev libpam0g-dev libreadline-dev \
libselinux1-dev libssl-dev libxslt1-dev libzstd-dev \
libselinux1-dev libssl-dev libxslt-dev libzstd-dev \
make uuid-dev
```
@ -127,19 +60,9 @@ that are missing in earlier minor versions.
git clone https://github.com/citusdata/citus.git
cd citus
./configure
# If you have already installed the project previously, you need to clean it first
make clean
make
sudo make install
# Optionally, you might instead want to use `sudo make install-all`
# since `multi_extension` regression test would fail due to missing downgrade scripts.
cd src/test/regress
pip install pipenv
pipenv --rm
pipenv install
pipenv shell
make check
```
@ -171,33 +94,59 @@ that are missing in earlier minor versions.
```bash
sudo yum update -y
sudo yum groupinstall -y 'Development Tools'
sudo yum install -y postgresql14-devel postgresql14-server \
sudo yum install -y postgresql11-devel postgresql11-server \
git libcurl-devel libxml2-devel libxslt-devel \
libzstd-devel llvm-toolset-7-clang llvm5.0 lz4-devel \
openssl-devel pam-devel readline-devel
git clone https://github.com/citusdata/citus.git
cd citus
PG_CONFIG=/usr/pgsql-14/bin/pg_config ./configure
# If you have already installed the project previously, you need to clean it first
make clean
PG_CONFIG=/usr/pgsql-11/bin/pg_config ./configure
make
sudo make install
# Optionally, you might instead want to use `sudo make install-all`
# since `multi_extension` regression test would fail due to missing downgrade scripts.
cd src/test/regress
pip install pipenv
pipenv --rm
pipenv install
pipenv shell
make check
```
### Following our coding conventions
Our coding conventions are documented in [STYLEGUIDE.md](STYLEGUIDE.md).
CircleCI will automatically reject any PRs which do not follow our coding
conventions. The easiest way to ensure your PR adheres to those conventions is
to use the [citus_indent](https://github.com/citusdata/tools/tree/develop/uncrustify)
tool. This tool uses `uncrustify` under the hood.
```bash
# Uncrustify changes the way it formats code every release a bit. To make sure
# everyone formats consistently we use version 0.68.1:
curl -L https://github.com/uncrustify/uncrustify/archive/uncrustify-0.68.1.tar.gz | tar xz
cd uncrustify-uncrustify-0.68.1/
mkdir build
cd build
cmake ..
make -j5
sudo make install
cd ../..
git clone https://github.com/citusdata/tools.git
cd tools
make uncrustify/.install
```
Once you've done that, you can run the `make reindent` command from the top
directory to recursively check and correct the style of any source files in the
current directory. Under the hood, `make reindent` will run `citus_indent` and
some other style corrections for you.
You can also run the following in the directory of this repository to
automatically format all the files that you have changed before committing:
```bash
cat > .git/hooks/pre-commit << __EOF__
#!/bin/bash
citus_indent --check --diff || { citus_indent --diff; exit 1; }
__EOF__
chmod +x .git/hooks/pre-commit
```
### Making SQL changes
@ -234,50 +183,3 @@ style `#include` statements like this:
Any other SQL you can put directly in the main sql file, e.g.
`src/backend/distributed/sql/citus--8.3-1--9.0-1.sql`.
### Backporting a commit to a release branch
1. Check out the release branch that you want to backport to `git checkout release-11.3`
2. Make sure you have the latest changes `git pull`
3. Create a new release branch with a unique name `git checkout -b release-11.3-<yourname>`
4. Cherry-pick the commit that you want to backport `git cherry-pick -x <sha>` (the `-x` is important)
5. Push the branch `git push`
6. Wait for tests to pass
7. If the cherry-pick required non-trivial merge conflicts, create a PR and ask
for a review.
8. After the tests pass on CI, fast-forward the release branch `git push origin release-11.3-<yourname>:release-11.3` (the full sequence is sketched below)
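Taken together, the backport flow for this example (`release-11.3`) can be sketched as one shell sequence. The branch name and commit SHA are placeholders from the steps above, not real values.
```bash
git checkout release-11.3                 # 1. check out the release branch
git pull                                  # 2. make sure it is up to date
git checkout -b release-11.3-<yourname>   # 3. create a uniquely named branch
git cherry-pick -x <sha>                  # 4. backport the commit (-x records its origin)
git push                                  # 5. push the branch and wait for CI
# 6-8. once CI is green (and any non-trivial conflicts were reviewed in a PR),
# fast-forward the release branch:
git push origin release-11.3-<yourname>:release-11.3
```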
### Running tests
See [`src/test/regress/README.md`](https://github.com/citusdata/citus/blob/master/src/test/regress/README.md)
### Documentation
User-facing documentation is published on [docs.citusdata.com](https://docs.citusdata.com/). When adding a new feature, function, or setting, you can open a pull request or issue against the [Citus docs repo](https://github.com/citusdata/citus_docs/).
Detailed descriptions of the implementation for Citus developers are provided in the [Citus Technical Documentation](src/backend/distributed/README.md). It is currently a single file for ease of searching. Please update the documentation if you make any changes that affect the design or add major new features.
# Making a pull request ready for reviews
Asking for help and asking for reviews are two different things. When you're asking for help, you're asking for someone to help you with something that you're not expected to know.
But when you're asking for a review, you're asking for someone to review your work and provide feedback. So, when you're asking for a review, you're expected to make sure that:
* Your changes don't perform **unnecessary line additions / deletions / style changes on unrelated files / lines**.
* All CI jobs are **passing**, including **style checks** and **flaky test detection jobs**. Note that if you're an external contributor, you don't have to wait for CI jobs to run (and finish) because they don't get automatically triggered for external contributors.
* Your PR has the necessary amount of **tests** and they're passing.
* You separated the work into **separate PRs** as much as possible, e.g., a prerequisite bugfix, a refactoring, etc.
* Your PR doesn't introduce a typo or something that you can easily fix yourself.
* After all CI jobs pass, the code-coverage measurement job (CodeCov as of today) kicks in. That's why it's important to get the **tests passing** first. At that point, you're expected to check the **CodeCov annotations** shown in the **Files Changed** tab and make sure that it doesn't complain about any lines that are not covered. For example, it's ok if CodeCov complains about an `ereport()` call that you put for an "unexpected-but-better-than-crashing" case, but it's not ok if it complains about an uncovered `if` branch that you added.
* And finally, perform a **self-review** to make sure that:
* Code and code-comments reflects the idea **without requiring an extra explanation** via a chat message / email / PR comment.
This is important because we don't expect developers to reach out to the author or read the whole discussion in the PR to understand the idea behind a commit merged into the `main` branch.
* PR description is clear enough.
* If-and-only-if you're **introducing a user facing change / bugfix**, your PR has a line that starts with `DESCRIPTION: <Present simple tense word that starts with a capital letter, e.g., Adds support for / Fixes / Disallows>`.
* **Commit messages** are clear enough if the commits are doing logically different things.


@ -1,43 +0,0 @@
# Devcontainer
## Coredumps
When postgres/citus crashes, there is the option to create a coredump. This is useful for debugging the issue. Coredumps are enabled in the devcontainer by default. However, not all environments are configured correctly out of the box. The most important configuration that is not standardized is the `core_pattern`. The configuration can be verified from the container; however, you cannot change it from inside the container, as the filesystem containing this setting is mounted read-only while inside the container.
To verify if corefiles are written run the following command in a terminal. This shows the filename pattern with which the corefile will be written.
```bash
cat /proc/sys/kernel/core_pattern
```
This should be configured with a relative path or simply a filename, such as `core`. When your environment shows an absolute path you will need to change this setting. How to change this setting depends highly on the underlying system, as the setting needs to be changed on the kernel of the host running the container.
You can put any pattern in `/proc/sys/kernel/core_pattern` as you see fit, e.g. you can add the PID to the core pattern in one of two ways (a short example follows the list):
- You either include `%p` in the core_pattern. This gets substituted with the PID of the crashing process.
- Alternatively you could set `/proc/sys/kernel/core_uses_pid` to `1` in the same way as you set `core_pattern`. This will append the PID to the corefile if `%p` is not explicitly contained in the core_pattern.
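On a host where you are allowed to edit these files (as root on the host, not inside the container), either option looks roughly like this; a minimal sketch using the standard kernel interfaces mentioned above, not something specific to the devcontainer:
```bash
# option 1: embed the PID in the pattern itself via %p
echo "core.%p" | sudo tee /proc/sys/kernel/core_pattern

# option 2: keep a plain pattern and let the kernel append the PID
echo "core" | sudo tee /proc/sys/kernel/core_pattern
echo 1 | sudo tee /proc/sys/kernel/core_uses_pid
```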
When a coredump is written you can use the debug/launch configuration `Open core file`, which is preconfigured in the devcontainer. This will open a file prompt that lists all coredumps found in your workspace. When you want to debug coredumps from `citus_dev` that are run in your `/data` directory, you can add the data directory to your workspace. In the command palette of vscode you can run `>Workspace: Add Folder to Workspace...` and select the `/data` directory. This will allow you to open the coredumps from the `/data` directory in the `Open core file` debug configuration.
### Windows (docker desktop)
When running in docker desktop on windows you will most likely need to change this setting. The linux guest in WSL2 that runs your container is the `docker-desktop` environment. The easiest way to get onto the host, where you can change this setting, is to open a powershell window and verify you have the docker-desktop environment listed.
```powershell
wsl --list
```
Among others this should list both `docker-desktop` and `docker-desktop-data`. You can then open a shell in the `docker-desktop` environment.
```powershell
wsl -d docker-desktop
```
Inside this shell you can verify that you have the right environment by running
```bash
cat /proc/sys/kernel/core_pattern
```
This should show the same configuration as the one you see inside the devcontainer. You can then change the setting by running the following command.
This will change the setting for the current session. If you want to make the change permanent you will need to add this to a startup script.
```bash
echo "core" > /proc/sys/kernel/core_pattern
```


@ -13,16 +13,10 @@ include Makefile.global
all: extension
# build columnar only
columnar:
$(MAKE) -C src/backend/columnar all
# build extension
extension: $(citus_top_builddir)/src/include/citus_version.h columnar
extension: $(citus_top_builddir)/src/include/citus_version.h
$(MAKE) -C src/backend/distributed/ all
install-columnar: columnar
$(MAKE) -C src/backend/columnar install
install-extension: extension install-columnar
install-extension: extension
$(MAKE) -C src/backend/distributed/ install
install-headers: extension
$(MKDIR_P) '$(DESTDIR)$(includedir_server)/distributed/'
@ -33,35 +27,27 @@ install-headers: extension
clean-extension:
$(MAKE) -C src/backend/distributed/ clean
$(MAKE) -C src/backend/columnar/ clean
clean-full:
$(MAKE) -C src/backend/distributed/ clean-full
.PHONY: extension install-extension clean-extension clean-full
# Add to generic targets
install: install-extension install-headers
install-downgrades:
$(MAKE) -C src/backend/distributed/ install-downgrades
install-all: install-headers
$(MAKE) -C src/backend/columnar/ install-all
$(MAKE) -C src/backend/distributed/ install-all
# Add to generic targets
install: install-extension install-headers
clean: clean-extension
# apply or check style
reindent:
${citus_abs_top_srcdir}/ci/fix_style.sh
check-style:
black . --check --quiet
isort . --check --quiet
flake8
cd ${citus_abs_top_srcdir} && citus_indent --quiet --check
.PHONY: reindent check-style
# depend on install-all so that downgrade scripts are installed as well
check: all install-all
# explicitly does not use $(MAKE) to avoid parallelism
make -C src/test/regress check
# depend on install for now
check: all install
$(MAKE) -C src/test/regress check-full
.PHONY: all check clean install install-downgrades install-all


@ -64,8 +64,8 @@ $(citus_top_builddir)/Makefile.global: $(citus_abs_top_srcdir)/configure $(citus
$(citus_top_builddir)/config.status: $(citus_abs_top_srcdir)/configure $(citus_abs_top_srcdir)/src/backend/distributed/citus.control
cd @abs_top_builddir@ && ./config.status --recheck && ./config.status
# Regenerate configure if configure.ac changed
$(citus_abs_top_srcdir)/configure: $(citus_abs_top_srcdir)/configure.ac
# Regenerate configure if configure.in changed
$(citus_abs_top_srcdir)/configure: $(citus_abs_top_srcdir)/configure.in
cd ${citus_abs_top_srcdir} && ./autogen.sh
# If specified via configure, replace the default compiler. Normally
@ -93,5 +93,7 @@ endif
override CPPFLAGS := @CPPFLAGS@ @CITUS_CPPFLAGS@ -I '${citus_abs_top_srcdir}/src/include' -I'${citus_top_builddir}/src/include' $(CPPFLAGS)
override LDFLAGS += @LDFLAGS@ @CITUS_LDFLAGS@
HAS_TABLEAM:=@HAS_TABLEAM@
# optional file with user defined, additional, rules
-include ${citus_abs_srcdir}/src/Makefile.custom

638
README.md

@ -1,496 +1,154 @@
| **<br/>The Citus database is 100% open source.<br/><img width=1000/><br/>Learn what's new in the [Citus 13.0 release blog](https://www.citusdata.com/blog/2025/02/06/distribute-postgresql-17-with-citus-13/) and the [Citus Updates page](https://www.citusdata.com/updates/).<br/><br/>**|
|---|
<br/>
![Citus Banner](images/citus-readme-banner.png)
![Citus Banner](/github-banner.png)
[![Slack Status](http://slack.citusdata.com/badge.svg)](https://slack.citusdata.com)
[![Latest Docs](https://img.shields.io/badge/docs-latest-brightgreen.svg)](https://docs.citusdata.com/)
[![Stack Overflow](https://img.shields.io/badge/Stack%20Overflow-%20-545353?logo=Stack%20Overflow)](https://stackoverflow.com/questions/tagged/citus)
[![Slack](https://cituscdn.azureedge.net/images/social/slack-badge.svg)](https://slack.citusdata.com/)
[![Code Coverage](https://codecov.io/gh/citusdata/citus/branch/master/graph/badge.svg)](https://app.codecov.io/gh/citusdata/citus)
[![Twitter](https://img.shields.io/twitter/follow/citusdata.svg?label=Follow%20@citusdata)](https://twitter.com/intent/follow?screen_name=citusdata)
[![Citus Deb Packages](https://img.shields.io/badge/deb-packagecloud.io-844fec.svg)](https://packagecloud.io/app/citusdata/community/search?q=&filter=debs)
[![Citus Rpm Packages](https://img.shields.io/badge/rpm-packagecloud.io-844fec.svg)](https://packagecloud.io/app/citusdata/community/search?q=&filter=rpms)
## What is Citus?
Citus is a [PostgreSQL extension](https://www.citusdata.com/blog/2017/10/25/what-it-means-to-be-a-postgresql-extension/) that transforms Postgres into a distributed database—so you can achieve high performance at any scale.
With Citus, you extend your PostgreSQL database with new superpowers:
- **Distributed tables** are sharded across a cluster of PostgreSQL nodes to combine their CPU, memory, storage and I/O capacity.
- **Reference tables** are replicated to all nodes for joins and foreign keys from distributed tables and maximum read performance.
- **Distributed query engine** routes and parallelizes SELECT, DML, and other operations on distributed tables across the cluster.
- **Columnar storage** compresses data, speeds up scans, and supports fast projections, both on regular and distributed tables.
- **Query from any node** enables you to utilize the full capacity of your cluster for distributed queries.
You can use these Citus superpowers to make your Postgres database scale-out ready on a single Citus node. Or you can build a large cluster capable of handling **high transaction throughputs**, especially in **multi-tenant apps**, run **fast analytical queries**, and process large amounts of **time series** or **IoT data** for **real-time analytics**. When your data size and volume grow, you can easily add more worker nodes to the cluster and rebalance the shards.
Our [SIGMOD '21](https://2021.sigmod.org/) paper [Citus: Distributed PostgreSQL for Data-Intensive Applications](https://doi.org/10.1145/3448016.3457551) gives a more detailed look into what Citus is, how it works, and why it works that way.
![Citus scales out from a single node](images/citus-scale-out.png)
Since Citus is an extension to Postgres, you can use Citus with the latest Postgres versions. And Citus works seamlessly with the PostgreSQL tools and extensions you are already familiar with.
- [Why Citus?](#why-citus)
- [Getting Started](#getting-started)
- [Using Citus](#using-citus)
- [Schema-based sharding](#schema-based-sharding)
- [Setting up with High Availability](#setting-up-with-high-availability)
- [Documentation](#documentation)
- [Architecture](#architecture)
- [When to Use Citus](#when-to-use-citus)
- [Need Help?](#need-help)
- [Contributing](#contributing)
- [Stay Connected](#stay-connected)
## Why Citus?
Developers choose Citus for two reasons:
1. Your application is outgrowing a single PostgreSQL node
If the size and volume of your data increases over time, you may start seeing any number of performance and scalability problems on a single PostgreSQL node. For example: High CPU utilization and I/O wait times slow down your queries, SQL queries return out of memory errors, autovacuum cannot keep up and increases table bloat, etc.
With Citus you can distribute and optionally compress your tables to always have enough memory, CPU, and I/O capacity to achieve high performance at scale. The distributed query engine can efficiently route transactions across the cluster, while parallelizing analytical queries and batch operations across all cores. Moreover, you can still use the PostgreSQL features and tools you know and love.
2. PostgreSQL can do things other systems can't
There are many data processing systems that are built to scale out, but few have as many powerful capabilities as PostgreSQL, including: Advanced joins and subqueries, user-defined functions, update/delete/upsert, constraints and foreign keys, powerful extensions (e.g. PostGIS, HyperLogLog), many types of indexes, time-partitioning, and sophisticated JSON support.
Citus makes PostgreSQL's most powerful capabilities work at any scale, allowing you to handle complex data-intensive workloads on a single database system.
## Getting Started
The quickest way to get started with Citus is to use the [Azure Cosmos DB for PostgreSQL](https://learn.microsoft.com/azure/cosmos-db/postgresql/quickstart-create-portal) managed service in the cloud—or [set up Citus locally](https://docs.citusdata.com/en/stable/installation/single_node.html).
### Citus Managed Service on Azure
You can get a fully-managed Citus cluster in minutes through the [Azure Cosmos DB for PostgreSQL portal](https://azure.microsoft.com/products/cosmos-db/). Azure will manage your backups, high availability through auto-failover, software updates, monitoring, and more for all of your servers. To get started with Citus on Azure, use the [Azure Cosmos DB for PostgreSQL Quickstart](https://learn.microsoft.com/azure/cosmos-db/postgresql/quickstart-create-portal).
### Running Citus using Docker
The smallest possible Citus cluster is a single PostgreSQL node with the Citus extension, which means you can try out Citus by running a single Docker container.
```bash
# run PostgreSQL with Citus on port 5500
docker run -d --name citus -p 5500:5432 -e POSTGRES_PASSWORD=mypassword citusdata/citus
# connect using psql within the Docker container
docker exec -it citus psql -U postgres
# or, connect using local psql
psql -U postgres -d postgres -h localhost -p 5500
```
### Install Citus locally
If you already have a local PostgreSQL installation, the easiest way to install Citus is to use our packaging repo
Install packages on Ubuntu / Debian:
```bash
curl https://install.citusdata.com/community/deb.sh > add-citus-repo.sh
sudo bash add-citus-repo.sh
sudo apt-get -y install postgresql-17-citus-13.0
```
Install packages on Red Hat:
```bash
curl https://install.citusdata.com/community/rpm.sh > add-citus-repo.sh
sudo bash add-citus-repo.sh
sudo yum install -y citus130_17
```
To add Citus to your local PostgreSQL database, add the following to `postgresql.conf`:
```
shared_preload_libraries = 'citus'
```
After restarting PostgreSQL, connect using `psql` and run:
```sql
CREATE EXTENSION citus;
```
You're now ready to get started and use Citus tables on a single node.
### Install Citus on multiple nodes
If you want to set up a multi-node cluster, you can also set up additional PostgreSQL nodes with the Citus extension and add them to form a Citus cluster:
```sql
-- before adding the first worker node, tell future worker nodes how to reach the coordinator
SELECT citus_set_coordinator_host('10.0.0.1', 5432);
-- add worker nodes
SELECT citus_add_node('10.0.0.2', 5432);
SELECT citus_add_node('10.0.0.3', 5432);
-- rebalance the shards over the new worker nodes
SELECT rebalance_table_shards();
```
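You can verify that the nodes were added by querying the Citus metadata on the coordinator. A minimal sketch (node names below assume the example IP addresses above):
```sql
-- list the coordinator and worker nodes that Citus knows about
SELECT nodeid, nodename, nodeport, isactive
FROM pg_dist_node;
```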
For more details, see our [documentation on how to set up a multi-node Citus cluster](https://docs.citusdata.com/en/stable/installation/multi_node.html) on various operating systems.
## Using Citus
Once you have your Citus cluster, you can start creating distributed tables and reference tables, and using columnar storage.
### Creating Distributed Tables
The `create_distributed_table` UDF will transparently shard your table locally or across the worker nodes:
```sql
CREATE TABLE events (
device_id bigint,
event_id bigserial,
event_time timestamptz default now(),
data jsonb not null,
PRIMARY KEY (device_id, event_id)
);
-- distribute the events table across shards placed locally or on the worker nodes
SELECT create_distributed_table('events', 'device_id');
```
After this operation, queries for a specific device ID will be efficiently routed to a single worker node, while queries across device IDs will be parallelized across the cluster.
```sql
-- insert some events
INSERT INTO events (device_id, data)
SELECT s % 100, ('{"measurement":'||random()||'}')::jsonb FROM generate_series(1,1000000) s;
-- get the last 3 events for device 1, routed to a single node
SELECT * FROM events WHERE device_id = 1 ORDER BY event_time DESC, event_id DESC LIMIT 3;
┌───────────┬──────────┬───────────────────────────────┬───────────────────────────────────────┐
│ device_id │ event_id │ event_time │ data │
├───────────┼──────────┼───────────────────────────────┼───────────────────────────────────────┤
│ 1 │ 1999901 │ 2021-03-04 16:00:31.189963+00 │ {"measurement": 0.88722643925054} │
│ 1 │ 1999801 │ 2021-03-04 16:00:31.189963+00 │ {"measurement": 0.6512231304621992} │
│ 1 │ 1999701 │ 2021-03-04 16:00:31.189963+00 │ {"measurement": 0.019368766051897524} │
└───────────┴──────────┴───────────────────────────────┴───────────────────────────────────────┘
(3 rows)
Time: 4.588 ms
-- explain plan for a query that is parallelized across shards, which shows the plan for
-- a query on one of the shards and how the aggregation across shards is done
EXPLAIN (VERBOSE ON) SELECT count(*) FROM events;
┌────────────────────────────────────────────────────────────────────────────────────┐
│ QUERY PLAN │
├────────────────────────────────────────────────────────────────────────────────────┤
│ Aggregate │
│ Output: COALESCE((pg_catalog.sum(remote_scan.count))::bigint, '0'::bigint) │
│ -> Custom Scan (Citus Adaptive) │
│ ... │
│ -> Task │
│ Query: SELECT count(*) AS count FROM events_102008 events WHERE true │
│ Node: host=localhost port=5432 dbname=postgres │
│ -> Aggregate │
│ -> Seq Scan on public.events_102008 events │
└────────────────────────────────────────────────────────────────────────────────────┘
```
### Creating Distributed Tables with Co-location
Distributed tables that have the same distribution column can be co-located to enable high performance distributed joins and foreign keys between distributed tables.
By default, distributed tables will be co-located based on the type of the distribution column, but you can define co-location explicitly with the `colocate_with` argument in `create_distributed_table`.
```sql
CREATE TABLE devices (
device_id bigint primary key,
device_name text,
device_type_id int
);
CREATE INDEX ON devices (device_type_id);
-- co-locate the devices table with the events table
SELECT create_distributed_table('devices', 'device_id', colocate_with := 'events');
-- insert device metadata
INSERT INTO devices (device_id, device_name, device_type_id)
SELECT s, 'device-'||s, 55 FROM generate_series(0, 99) s;
-- optionally: make sure the application can only insert events for a known device
ALTER TABLE events ADD CONSTRAINT device_id_fk
FOREIGN KEY (device_id) REFERENCES devices (device_id);
-- get the average measurement across all devices of type 55, parallelized across shards
SELECT avg((data->>'measurement')::double precision)
FROM events JOIN devices USING (device_id)
WHERE device_type_id = 55;
┌────────────────────┐
│ avg │
├────────────────────┤
│ 0.5000191877513974 │
└────────────────────┘
(1 row)
Time: 209.961 ms
```
Co-location also helps you scale [INSERT..SELECT](https://docs.citusdata.com/en/stable/articles/aggregation.html), [stored procedures](https://www.citusdata.com/blog/2020/11/21/making-postgres-stored-procedures-9x-faster-in-citus/), and [distributed transactions](https://www.citusdata.com/blog/2017/06/02/scaling-complex-sql-transactions/).
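For example, a rollup maintained with `INSERT INTO .. SELECT` between two tables that are co-located on `device_id` can typically be pushed down and executed in parallel on the workers. A minimal sketch, using a hypothetical `events_daily` rollup table alongside the `events` table from above:
```sql
CREATE TABLE events_daily (
  device_id bigint,
  day date,
  event_count bigint,
  PRIMARY KEY (device_id, day)
);
SELECT create_distributed_table('events_daily', 'device_id', colocate_with := 'events');

-- source and target share the distribution column and are co-located,
-- so the INSERT..SELECT runs in parallel on the worker nodes
INSERT INTO events_daily (device_id, day, event_count)
SELECT device_id, event_time::date, count(*)
FROM events
GROUP BY device_id, event_time::date
ON CONFLICT (device_id, day) DO UPDATE SET event_count = EXCLUDED.event_count;
```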
### Distributing Tables without interrupting the application
Some applications start out on plain Postgres and decide to distribute tables later on, while the application is still using those tables. In that case, you want to avoid downtime for both reads and writes. The `create_distributed_table` command blocks writes (e.g., DML commands) on the table until the command is finished. With the `create_distributed_table_concurrently` command, by contrast, your application can continue to read and write the data even while the table is being distributed.
```sql
CREATE TABLE device_logs (
device_id bigint primary key,
log text
);
-- insert device logs
INSERT INTO device_logs (device_id, log)
SELECT s, 'device log:'||s FROM generate_series(0, 99) s;
-- convert device_logs into a distributed table without interrupting the application
SELECT create_distributed_table_concurrently('device_logs', 'device_id', colocate_with := 'devices');
-- get the count of the logs, parallelized across shards
SELECT count(*) FROM device_logs;
┌───────┐
│ count │
├───────┤
│ 100 │
└───────┘
(1 row)
Time: 48.734 ms
```
### Creating Reference Tables
When you need fast joins or foreign keys that do not include the distribution column, you can use `create_reference_table` to replicate a table across all nodes in the cluster.
```sql
CREATE TABLE device_types (
device_type_id int primary key,
device_type_name text not null unique
);
-- replicate the table across all nodes to enable foreign keys and joins on any column
SELECT create_reference_table('device_types');
-- insert a device type
INSERT INTO device_types (device_type_id, device_type_name) VALUES (55, 'laptop');
-- optionally: make sure the application can only insert devices with known types
ALTER TABLE devices ADD CONSTRAINT device_type_fk
FOREIGN KEY (device_type_id) REFERENCES device_types (device_type_id);
-- get the last 3 events for devices whose type name starts with laptop, parallelized across shards
SELECT device_id, event_time, data->>'measurement' AS value, device_name, device_type_name
FROM events JOIN devices USING (device_id) JOIN device_types USING (device_type_id)
WHERE device_type_name LIKE 'laptop%' ORDER BY event_time DESC LIMIT 3;
┌───────────┬───────────────────────────────┬─────────────────────┬─────────────┬──────────────────┐
│ device_id │ event_time │ value │ device_name │ device_type_name │
├───────────┼───────────────────────────────┼─────────────────────┼─────────────┼──────────────────┤
│ 60 │ 2021-03-04 16:00:31.189963+00 │ 0.28902084163415864 │ device-60 │ laptop │
│ 8 │ 2021-03-04 16:00:31.189963+00 │ 0.8723803076285073 │ device-8 │ laptop │
│ 20 │ 2021-03-04 16:00:31.189963+00 │ 0.8177634801548557 │ device-20 │ laptop │
└───────────┴───────────────────────────────┴─────────────────────┴─────────────┴──────────────────┘
(3 rows)
Time: 146.063 ms
```
Reference tables enable you to scale out complex data models and take full advantage of relational database features.
### Creating Tables with Columnar Storage
To use columnar storage in your PostgreSQL database, all you need to do is add `USING columnar` to your `CREATE TABLE` statements and your data will be automatically compressed using the columnar access method.
```sql
CREATE TABLE events_columnar (
device_id bigint,
event_id bigserial,
event_time timestamptz default now(),
data jsonb not null
)
USING columnar;
-- insert some data
INSERT INTO events_columnar (device_id, data)
SELECT d, '{"hello":"columnar"}' FROM generate_series(1,10000000) d;
-- create a row-based table to compare
CREATE TABLE events_row AS SELECT * FROM events_columnar;
-- see the huge size difference!
\d+
List of relations
┌────────┬──────────────────────────────┬──────────┬───────┬─────────────┬────────────┬─────────────┐
│ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │
├────────┼──────────────────────────────┼──────────┼───────┼─────────────┼────────────┼─────────────┤
│ public │ events_columnar │ table │ marco │ permanent │ 25 MB │ │
│ public │ events_row │ table │ marco │ permanent │ 651 MB │ │
└────────┴──────────────────────────────┴──────────┴───────┴─────────────┴────────────┴─────────────┘
(2 rows)
```
You can use columnar storage by itself, or in a distributed table to combine the benefits of compression and the distributed query engine.
When using columnar storage, you should only load data in batches using `COPY` or `INSERT..SELECT` to achieve good compression. Updates, deletes, and foreign keys are currently unsupported on columnar tables. However, you can use partitioned tables in which newer partitions use row-based storage, and older partitions are compressed using columnar storage.
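As a rough illustration of that pattern, here is a minimal sketch using standard Postgres range partitioning with an illustrative `measurements` table (table and partition names are hypothetical): the current partition stays row-based, while an older, append-only partition uses the columnar access method:
```sql
CREATE TABLE measurements (
  device_id bigint,
  event_time timestamptz NOT NULL,
  data jsonb
) PARTITION BY RANGE (event_time);

-- the recent partition stays row-based (heap), so it can still receive updates and deletes
CREATE TABLE measurements_2021_03 PARTITION OF measurements
  FOR VALUES FROM ('2021-03-01') TO ('2021-04-01');

-- the older, append-only partition is compressed with columnar storage
CREATE TABLE measurements_2021_02 PARTITION OF measurements
  FOR VALUES FROM ('2021-02-01') TO ('2021-03-01')
  USING columnar;
```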
To learn more about columnar storage, check out the [columnar storage README](https://github.com/citusdata/citus/blob/master/src/backend/columnar/README.md).
## Schema-based sharding
Available since Citus 12.0, [schema-based sharding](https://docs.citusdata.com/en/stable/get_started/concepts.html#schema-based-sharding) is the shared database, separate schema model: the schema becomes the logical shard within the database. Multi-tenant apps can use a schema per tenant to easily shard along the tenant dimension. Query changes are not required and the application usually only needs a small modification to set the proper search_path when switching tenants. Schema-based sharding is an ideal solution for microservices, and for ISVs deploying applications that cannot undergo the changes required to onboard row-based sharding.
### Creating distributed schemas
You can turn an existing schema into a distributed schema by calling `citus_schema_distribute`:
```sql
SELECT citus_schema_distribute('user_service');
```
Alternatively, you can set `citus.enable_schema_based_sharding` to have all newly created schemas be automatically converted into distributed schemas:
```sql
SET citus.enable_schema_based_sharding TO ON;
CREATE SCHEMA AUTHORIZATION user_service;
CREATE SCHEMA AUTHORIZATION time_service;
CREATE SCHEMA AUTHORIZATION ping_service;
```
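To check which schemas ended up distributed, you can query the schema metadata; a small sketch, assuming the `citus_schemas` view that ships with Citus 12.0 and above:
```sql
SELECT schema_name, schema_size, schema_owner
FROM citus_schemas;
```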
### Running queries
Queries will be properly routed to schemas based on `search_path` or by explicitly using the schema name in the query.
For [microservices](https://docs.citusdata.com/en/stable/get_started/tutorial_microservices.html) you would create a USER per service matching the schema name, so that the default `search_path` contains the schema name. Once connected, the user's queries are automatically routed and no changes to the microservice are required.
```sql
CREATE USER user_service;
CREATE SCHEMA AUTHORIZATION user_service;
```
For typical multi-tenant applications, you would set the search path to the tenant schema name in your application:
```sql
SET search_path = tenant_name, public;
```
## Setting up with High Availability
One of the most popular high availability solutions for PostgreSQL, [Patroni 3.0](https://github.com/zalando/patroni), has [first-class support for Citus 10.0 and above](https://patroni.readthedocs.io/en/latest/citus.html#citus). Additionally, Citus 11.2 ships with improvements for smoother node switchover in Patroni.
An example of patronictl list output for the Citus cluster:
```bash
postgres@coord1:~$ patronictl list demo
```
```text
+ Citus cluster: demo ----------+--------------+---------+----+-----------+
| Group | Member  | Host        | Role         | State   | TL | Lag in MB |
+-------+---------+-------------+--------------+---------+----+-----------+
|     0 | coord1  | 172.27.0.10 | Replica      | running |  1 |         0 |
|     0 | coord2  | 172.27.0.6  | Sync Standby | running |  1 |         0 |
|     0 | coord3  | 172.27.0.4  | Leader       | running |  1 |           |
|     1 | work1-1 | 172.27.0.8  | Sync Standby | running |  1 |         0 |
|     1 | work1-2 | 172.27.0.2  | Leader       | running |  1 |           |
|     2 | work2-1 | 172.27.0.5  | Sync Standby | running |  1 |         0 |
|     2 | work2-2 | 172.27.0.7  | Leader       | running |  1 |           |
+-------+---------+-------------+--------------+---------+----+-----------+
```
## Documentation
If you're ready to get started with Citus or want to know more, we recommend reading the [Citus open source documentation](https://docs.citusdata.com/en/stable/). Or, if you are using Citus on Azure, then the [Azure Cosmos DB for PostgreSQL documentation](https://learn.microsoft.com/azure/cosmos-db/postgresql/introduction) is the place to start.
Our Citus docs contain comprehensive use case guides on how to build a [multi-tenant SaaS application](https://docs.citusdata.com/en/stable/use_cases/multi_tenant.html), a [real-time analytics dashboard](https://docs.citusdata.com/en/stable/use_cases/realtime_analytics.html), or work with [time series data](https://docs.citusdata.com/en/stable/use_cases/timeseries.html).
## Architecture
A Citus database cluster grows from a single PostgreSQL node into a cluster by adding worker nodes. In a Citus cluster, the original node to which the application connects is referred to as the coordinator node. The Citus coordinator contains both the metadata of distributed tables and reference tables, as well as regular (local) tables, sequences, and other database objects (e.g. foreign tables).
Data in distributed tables is stored in “shards”, which are actually just regular PostgreSQL tables on the worker nodes. When querying a distributed table on the coordinator node, Citus will send regular SQL queries to the worker nodes. That way, all the usual PostgreSQL optimizations and extensions can automatically be used with Citus.
![Citus architecture](images/citus-architecture.png)
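Because shards are regular tables, you can inspect where they live by querying the distributed metadata on the coordinator. A small sketch, assuming the `events` table from the earlier examples:
```sql
-- show a few shard placements of the distributed events table
SELECT shardid, nodename, nodeport
FROM pg_dist_shard
JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'events'::regclass
ORDER BY shardid
LIMIT 5;
```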
When you send a query in which all (co-located) distributed tables have the same filter on the distribution column, Citus will automatically detect that and send the whole query to the worker node that stores the data. That way, arbitrarily complex queries are supported with minimal routing overhead, which is especially useful for scaling transactional workloads. If queries do not have a specific filter, each shard is queried in parallel, which is especially useful in analytical workloads. The Citus distributed executor is adaptive and is designed to handle both query types at the same time on the same system under high concurrency, which enables large-scale mixed workloads.
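For example, a query that filters on the distribution column is planned as a single task on one node. A sketch of the (abbreviated) plan, again assuming the `events` table from above; the exact plan text varies by Citus version:
```sql
EXPLAIN SELECT count(*) FROM events WHERE device_id = 1;
--  Custom Scan (Citus Adaptive)
--    Task Count: 1
--    ...
```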
The schema and metadata of distributed tables and reference tables are automatically synchronized to all the nodes in the cluster. That way, you can connect to any node to run distributed queries. Schema changes and cluster administration still need to go through the coordinator.
Detailed descriptions of the implementation for Citus developers are provided in the [Citus Technical Documentation](src/backend/distributed/README.md).
## When to use Citus
Citus is uniquely capable of scaling both analytical and transactional workloads with up to petabytes of data. Use cases in which Citus is commonly used:
- **[Customer-facing analytics dashboards](http://docs.citusdata.com/en/stable/use_cases/realtime_analytics.html)**:
Citus enables you to build analytics dashboards that simultaneously ingest and process large amounts of data in the database and give sub-second response times even with a large number of concurrent users.
The advanced parallel, distributed query engine in Citus combined with PostgreSQL features such as [array types](https://www.postgresql.org/docs/current/arrays.html), [JSONB](https://www.postgresql.org/docs/current/datatype-json.html), [lateral joins](https://heap.io/blog/engineering/postgresqls-powerful-new-join-type-lateral), and extensions like [HyperLogLog](https://github.com/citusdata/postgresql-hll) and [TopN](https://github.com/citusdata/postgresql-topn) allow you to build responsive analytics dashboards no matter how many customers or how much data you have.
Example real-time analytics users: [Algolia](https://www.citusdata.com/customers/algolia)
- **[Time series data](http://docs.citusdata.com/en/stable/use_cases/timeseries.html)**:
Citus enables you to process and analyze very large amounts of time series data. The biggest Citus clusters store well over a petabyte of time series data and ingest terabytes per day.
Citus integrates seamlessly with [Postgres table partitioning](https://www.postgresql.org/docs/current/ddl-partitioning.html) and has [built-in functions for partitioning by time](https://www.citusdata.com/blog/2021/10/22/how-to-scale-postgres-for-time-series-data-with-citus/), which can speed up queries and writes on time series tables. You can take advantage of Citus's parallel, distributed query engine for fast analytical queries, and use the built-in *columnar storage* to compress old partitions.
Example users: [MixRank](https://www.citusdata.com/customers/mixrank)
- **[Software-as-a-service (SaaS) applications](http://docs.citusdata.com/en/stable/use_cases/multi_tenant.html)**:
SaaS and other multi-tenant applications need to be able to scale their database as the number of tenants/customers grows. Citus enables you to transparently shard a complex data model by the tenant dimension, so your database can grow along with your business.
By distributing tables along a tenant ID column and co-locating data for the same tenant, Citus can horizontally scale complex (tenant-scoped) queries, transactions, and foreign key graphs. Reference tables and distributed DDL commands make database management a breeze compared to manual sharding. On top of that, you have a built-in distributed query engine for doing cross-tenant analytics inside the database.
Example multi-tenant SaaS users: [Salesloft](https://fivetran.com/case-studies/replicating-sharded-databases-a-case-study-of-salesloft-citus-data-and-fivetran), [ConvertFlow](https://www.citusdata.com/customers/convertflow)
- **[Microservices](https://docs.citusdata.com/en/stable/get_started/tutorial_microservices.html)**: Citus supports schema-based sharding, which allows distributing regular database schemas across many machines. This sharding methodology fits nicely with a typical microservices architecture, where storage is fully owned by the service and hence can't share the same schema definition with other tenants. Citus allows distributing horizontally scalable state across services, solving one of the [main problems](https://stackoverflow.blog/2020/11/23/the-macro-problem-with-microservices/) of microservices.
- **Geospatial**:
Because of the powerful [PostGIS](https://postgis.net/) extension to Postgres that adds support for geographic objects into Postgres, many people run spatial/GIS applications on top of Postgres. And since spatial location information has become part of our daily life, well, there are more geospatial applications than ever. When your Postgres database needs to scale out to handle an increased workload, Citus is a good fit.
Example geospatial users: [Helsinki Regional Transportation Authority (HSL)](https://customers.microsoft.com/story/845146-transit-authority-improves-traffic-monitoring-with-azure-database-for-postgresql-hyperscale), [MobilityDB](https://www.citusdata.com/blog/2020/11/09/analyzing-gps-trajectories-at-scale-with-postgres-mobilitydb/).
## Need Help?
- **Slack**: Ask questions in our Citus community [Slack channel](https://slack.citusdata.com).
- **GitHub issues**: Please submit issues via [GitHub issues](https://github.com/citusdata/citus/issues).
- **Documentation**: Our [Citus docs](https://docs.citusdata.com) have a wealth of resources, including sections on [query performance tuning](https://docs.citusdata.com/en/stable/performance/performance_tuning.html), [useful diagnostic queries](https://docs.citusdata.com/en/stable/admin_guide/diagnostic_queries.html), and [common error messages](https://docs.citusdata.com/en/stable/reference/common_errors.html).
- **Docs issues**: You can also submit documentation issues via [GitHub issues for our Citus docs](https://github.com/citusdata/citus_docs/issues).
- **Updates & Release Notes**: Learn about what's new in each Citus version on the [Citus Updates page](https://www.citusdata.com/updates/).
## Contributing
Citus is built on and of open source, and we welcome your contributions. The [CONTRIBUTING.md](CONTRIBUTING.md) file explains how to get started developing the Citus extension itself and our code quality guidelines.
## Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Stay Connected
- **Twitter**: Follow us [@citusdata](https://twitter.com/citusdata) to track the latest posts & updates on what's happening.
- **Citus Blog**: Read our popular [Citus Open Source Blog](https://www.citusdata.com/blog/) for posts about PostgreSQL and Citus.
- **Citus Newsletter**: Subscribe to our monthly technical [Citus Newsletter](https://www.citusdata.com/join-newsletter) to get a curated collection of our favorite posts, videos, docs, talks, & other Postgres goodies.
- **Slack**: Our [Citus Public slack](https://slack.citusdata.com/) is a good way to stay connected, not just with us but with other Citus users.
- **Sister Blog**: Read the PostgreSQL posts on the [Azure Cosmos DB for PostgreSQL blog](https://devblogs.microsoft.com/cosmosdb/category/postgresql/) about our managed service on Azure.
- **Videos**: Check out this [YouTube playlist](https://www.youtube.com/playlist?list=PLixnExCn6lRq261O0iwo4ClYxHpM9qfVy) of some of our favorite Citus videos and demos. If you want to deep dive into how Citus extends PostgreSQL, you might want to check out Marco Slot's talk at Carnegie Mellon titled [Citus: Distributed PostgreSQL as an Extension](https://youtu.be/X-aAgXJZRqM) that was part of Andy Pavlo's Vaccination Database Talks series at CMUDB.
- **Our other Postgres projects**: Our team also works on other awesome PostgreSQL open source extensions & projects, including: [pg_cron](https://github.com/citusdata/pg_cron), [HyperLogLog](https://github.com/citusdata/postgresql-hll), [TopN](https://github.com/citusdata/postgresql-topn), [pg_auto_failover](https://github.com/citusdata/pg_auto_failover), [activerecord-multi-tenant](https://github.com/citusdata/activerecord-multi-tenant), and [django-multitenant](https://github.com/citusdata/django-multitenant).
[![Circleci Status](https://circleci.com/gh/citusdata/citus.svg?style=svg)](https://circleci.com/gh/citusdata/citus.svg?style=svg)
[![Code Coverage](https://codecov.io/gh/citusdata/citus/branch/master/graph/badge.svg)](https://codecov.io/gh/citusdata/citus/branch/master/graph/badge.svg)
### What is Citus?
* **Open-source** PostgreSQL extension (not a fork)
* **Built to scale out** across multiple nodes
* **Distributed** engine for query parallelization
* **Database** designed to scale out multi-tenant applications, real-time analytics dashboards, and high-throughput transactional workloads
Citus is an open source extension to Postgres that distributes your data and your queries across multiple nodes. Because Citus is an extension to Postgres, and not a fork, Citus gives developers and enterprises a scale-out database while keeping the power and familiarity of a relational database. As an extension, Citus supports new PostgreSQL releases, and allows you to benefit from new features while maintaining compatibility with existing PostgreSQL tools.
Citus serves many use cases. Three common ones are:
1. [Multi-tenant & SaaS applications](https://www.citusdata.com/blog/2016/10/03/designing-your-saas-database-for-high-scalability):
Most B2B applications already have the notion of a tenant /
customer / account built into their data model. Citus allows you to scale out your
transactional relational database to 100K+ tenants with minimal changes to your
application.
2. [Real-time analytics](https://www.citusdata.com/blog/2017/12/27/real-time-analytics-dashboards-with-citus/):
Citus enables ingesting large volumes of data and running
analytical queries on that data in human real-time. Example applications include analytic
dashboards with sub-second response times and exploratory queries on unfolding events.
3. High-throughput transactional workloads:
By distributing your workload across a database cluster, Citus ensures low latency and high performance even with a large number of concurrent users and high volumes of transactions.
To learn more, visit [citusdata.com](https://www.citusdata.com) and join
the [Citus slack](https://slack.citusdata.com/) to
stay on top of the latest developments.
### Getting started with Citus
The fastest way to get up and running is to deploy Citus in the cloud. You can also set up a local Citus database cluster with Docker.
#### Hyperscale (Citus) on Azure Database for PostgreSQL
Hyperscale (Citus) is a deployment option on Azure Database for PostgreSQL, a fully-managed database as a service. Hyperscale (Citus) employs the Citus open source extension so you can scale out across multiple nodes. To get started with Hyperscale (Citus), [learn more](https://www.citusdata.com/product/hyperscale-citus/) on the Citus website or use the [Hyperscale (Citus) Quickstart](https://docs.microsoft.com/en-us/azure/postgresql/quickstart-create-hyperscale-portal) in the Azure docs.
#### Citus Cloud
Citus Cloud runs on top of AWS as a fully managed database as a service. You can provision a Citus Cloud account at [https://console.citusdata.com](https://console.citusdata.com/users/sign_up) and get started with just a few clicks.
#### Local Citus Cluster
If you're looking to get started locally, you can follow these steps to get up and running.
1. Install Docker Community Edition and Docker Compose
* Mac:
1. [Download](https://www.docker.com/community-edition#/download) and install Docker.
2. Start Docker by clicking on the application's icon.
* Linux:
```bash
curl -sSL https://get.docker.com/ | sh
sudo usermod -aG docker $USER && exec sg docker newgrp `id -gn`
sudo systemctl start docker
sudo curl -sSL https://github.com/docker/compose/releases/download/1.11.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
The above version of Docker Compose is sufficient for running Citus, or you can install the [latest version](https://github.com/docker/compose/releases/latest).
2. Pull and start the Docker images
```bash
curl -sSLO https://raw.githubusercontent.com/citusdata/docker/master/docker-compose.yml
docker-compose -p citus up -d
```
3. Connect to the master database
```bash
docker exec -it citus_master psql -U postgres
```
4. Follow the [first tutorial][tutorial] instructions
5. To shut the cluster down, run
```bash
docker-compose -p citus down
```
### Talk to Contributors and Learn More
<table class="tg">
<col width="45%">
<col width="65%">
<tr>
<td>Documentation</td>
<td>Try the <a
href="https://docs.citusdata.com/en/stable/tutorials/multi-tenant-tutorial.html">Citus
tutorial</a> for a hands-on introduction or <br/>the <a
href="https://docs.citusdata.com">documentation</a> for
a more comprehensive reference.</td>
</tr>
<tr>
<td>Slack</td>
<td>Chat with us in our community <a
href="https://slack.citusdata.com">Slack channel</a>.</td>
</tr>
<tr>
<td>Github Issues</td>
<td>We track specific bug reports and feature requests on our <a
href="https://github.com/citusdata/citus/issues">project
issues</a>.</td>
</tr>
<tr>
<td>Twitter</td>
<td>Follow <a href="https://twitter.com/citusdata">@citusdata</a>
for general updates and PostgreSQL scaling tips.</td>
</tr>
<tr>
<td>Citus Blog</td>
<td>Read our <a href="https://www.citusdata.com/blog/">Citus Data Blog</a>
for posts on Postgres, Citus, and scaling your database.</td>
</tr>
</table>
### Contributing
Citus is built on and of open source, and we welcome your contributions.
The [CONTRIBUTING.md](CONTRIBUTING.md) file explains how to get started
developing the Citus extension itself and our code quality guidelines.
### Who is Using Citus?
Citus is deployed in production by many customers, ranging from
technology start-ups to large enterprises. Here are some examples:
* [Algolia](https://www.algolia.com/) uses Citus to provide real-time analytics for over 1B searches per day. For faster insights, they also use TopN and HLL extensions. [User Story](https://www.citusdata.com/customers/algolia)
* [Heap](https://heapanalytics.com/) uses Citus to run dynamic
funnel, segmentation, and cohort queries across billions of users
and has more than 700B events in their Citus database cluster. [Watch Video](https://www.youtube.com/watch?v=NVl9_6J1G60&list=PLixnExCn6lRpP10ZlpJwx6AuU3XIgNWpL)
* [Pex](https://pex.com/) uses Citus to ingest 80B data points per day and analyze that data in real-time. They use a 20+ node cluster on Google Cloud. [User Story](https://www.citusdata.com/customers/pex)
* [MixRank](https://mixrank.com/) uses Citus to efficiently collect
and analyze vast amounts of data to allow inside B2B sales teams
to find new customers. [User Story](https://www.citusdata.com/customers/mixrank)
* [Agari](https://www.agari.com/) uses Citus to secure more than
85 percent of U.S. consumer emails on two 6-8 TB clusters. [User
Story](https://www.citusdata.com/customers/agari)
* [Copper (formerly ProsperWorks)](https://copper.com/) powers a cloud CRM service with Citus. [User Story](https://www.citusdata.com/customers/copper)
You can read more user stories about how they employ Citus to scale Postgres for both multi-tenant SaaS applications as well as real-time analytics dashboards [here](https://www.citusdata.com/customers/).
___
Copyright © Citus Data, Inc.
[faq]: https://www.citusdata.com/frequently-asked-questions
[tutorial]: https://docs.citusdata.com/en/stable/tutorials/multi-tenant-tutorial.html
@ -1,41 +0,0 @@
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->
@ -1,160 +0,0 @@
# Coding style
The existing code-style in our code-base is not super consistent. There are multiple reasons for that. One big reason is because our code-base is relatively old and our standards have changed over time. The second big reason is that our style-guide is different from style-guide of Postgres and some code is copied from Postgres source code and is slightly modified. The below rules are for new code. If you're changing existing code that uses a different style, use your best judgement to decide if you use the rules here or if you match the existing style.
## Using citus_indent
CI pipeline will automatically reject any PRs which do not follow our coding
conventions. The easiest way to ensure your PR adheres to those conventions is
to use the [citus_indent](https://github.com/citusdata/tools/tree/develop/uncrustify)
tool. This tool uses `uncrustify` under the hood.
```bash
# Uncrustify changes the way it formats code every release a bit. To make sure
# everyone formats consistently we use version 0.68.1:
curl -L https://github.com/uncrustify/uncrustify/archive/uncrustify-0.68.1.tar.gz | tar xz
cd uncrustify-uncrustify-0.68.1/
mkdir build
cd build
cmake ..
make -j5
sudo make install
cd ../..
git clone https://github.com/citusdata/tools.git
cd tools
make uncrustify/.install
```
Once you've done that, you can run the `make reindent` command from the top
directory to recursively check and correct the style of any source files in the
current directory. Under the hood, `make reindent` will run `citus_indent` and
some other style corrections for you.
You can also run the following in the directory of this repository to
automatically format all the files that you have changed before committing:
```bash
cat > .git/hooks/pre-commit << __EOF__
#!/bin/bash
citus_indent --check --diff || { citus_indent --diff; exit 1; }
__EOF__
chmod +x .git/hooks/pre-commit
```
## Other rules we follow that citus_indent does not enforce
* We almost always use **CamelCase**, when naming functions, variables etc., **not snake_case**.
* We also have the habits of using a **lowerCamelCase** for some variables named from their type or from their function name, as shown in the examples:
```c
bool IsCitusExtensionLoaded = false;
bool
IsAlterTableRenameStmt(RenameStmt *renameStmt)
{
AlterTableCmd *alterTableCommand = NULL;
..
..
bool isAlterTableRenameStmt = false;
..
}
```
* We **start functions with a comment**:
```c
/*
* MyNiceFunction <something in present simple tense, e.g., processes / returns / checks / takes X as input / does Y> ..
* <some more nice words> ..
* <some more nice words> ..
*/
<static?> <return type>
MyNiceFunction(..)
{
..
..
}
```
* `#includes` needs to be sorted based on below ordering and then alphabetically and we should not include what we don't need in a file:
* System includes (eg. #include<...>)
* Postgres.h (eg. #include "postgres.h")
* Toplevel imports from postgres, not contained in a directory (eg. #include "miscadmin.h")
* General postgres includes (eg . #include "nodes/...")
* Toplevel citus includes, not contained in a directory (eg. #include "citus_verion.h")
* Columnar includes (eg. #include "columnar/...")
* Distributed includes (eg. #include "distributed/...")
* Comments:
```c
/* single line comments start with a lower-case */
/*
* We start multi-line comments with a capital letter
* and keep adding a star to the beginning of each line
* until we close the comment with a star and a slash.
*/
```
* Order of function implementations and their declarations in a file:
We define static functions after the functions that call them. For example:
```c
#include<..>
#include<..>
..
..
typedef struct
{
..
..
} MyNiceStruct;
..
..
PG_FUNCTION_INFO_V1(my_nice_udf1);
PG_FUNCTION_INFO_V1(my_nice_udf2);
..
..
// .. somewhere on top of the file …
static void MyNiceStaticlyDeclaredFunction1(…);
static void MyNiceStaticlyDeclaredFunction2(…);
..
..
void
MyNiceFunctionExternedViaHeaderFile(..)
{
..
..
MyNiceStaticlyDeclaredFunction1(..);
..
..
MyNiceStaticlyDeclaredFunction2(..);
..
}
..
..
// we define this first because it's called by MyNiceFunctionExternedViaHeaderFile()
// before MyNiceStaticlyDeclaredFunction2()
static void
MyNiceStaticlyDeclaredFunction1(…)
{
}
..
..
// then we define this
static void
MyNiceStaticlyDeclaredFunction2(…)
{
}
```
@ -1,6 +1,6 @@
#!/bin/bash
#
# autogen.sh converts configure.ac to configure and creates
# autogen.sh converts configure.in to configure and creates
# citus_config.h.in. The resuting resulting files are checked into
# the SCM, to avoid everyone needing autoconf installed.
@ -283,14 +283,6 @@ actually run in CI. This is most commonly forgotten for newly added CI tests
that the developer only ran locally. It also checks that all CI scripts have a
section in this `README.md` file and that they include `ci/ci_helpers.sh`.
## `check_migration_files.sh`
A branch that touches a set of upgrade scripts is also expected to touch
corresponding downgrade scripts as well. If this script fails, read the output
and make sure you update the downgrade scripts in the printed list. If you
really don't need a downgrade to run any SQL. You can write a comment in the
file explaining why a downgrade step is not necessary.
## `disallow_c_comments_in_migrations.sh`
We do not use C-style comments in migration files as the stripped
@ -301,18 +293,6 @@ Instead use SQL type comments, i.e:
```
See [#3115](https://github.com/citusdata/citus/pull/3115) for more info.
## `disallow_hash_comments_in_spec_files.sh`
We do not use comments starting with # in spec files because it creates errors
from C preprocessor that expects directives after this character.
Instead use C type comments, i.e:
```
// this is a single line comment
/*
* this is a multi line comment
*/
```
## `disallow_long_changelog_entries.sh`
@ -366,37 +346,3 @@ foo = 2
#endif
```
This was deemed to be error prone and not worth the effort.
## `fix_gitignore.sh`
This script checks and fixes issues with `.gitignore` rules:
1. Makes sure we do not commit any generated files that should be ignored. If there is an
ignored file in the git tree, the user is expected to review the files that are removed
from the git tree and commit them.
## `check_gucs_are_alphabetically_sorted.sh`
This script checks the order of the GUCs defined in `shared_library_init.c`.
To solve this failure, please check `shared_library_init.c` and make sure that the GUC
definitions are in alphabetical order.
## `print_stack_trace.sh`
This script prints stack traces for failed tests, if they left core files.
## `sort_and_group_includes.sh`
This script checks and fixes issues with include grouping and sorting in C files.
Includes are grouped in the following groups:
- System includes (eg. `#include <math>`)
- Postgres.h include (eg. `#include "postgres.h"`)
- Toplevel postgres includes (includes not in a directory eg. `#include "miscadmin.h`)
- Postgres includes in a directory (eg. `#include "catalog/pg_type.h"`)
- Toplevel citus includes (includes not in a directory eg. `#include "pg_version_constants.h"`)
- Columnar includes (eg. `#include "columnar/columnar.h"`)
- Distributed includes (eg. `#include "distributed/maintenanced.h"`)
Within every group the include lines are sorted alphabetically.
@ -15,6 +15,9 @@ PG_MAJOR=${PG_MAJOR:?please provide the postgres major version}
codename=${VERSION#*(}
codename=${codename%)*}
# get project from argument
project="${CIRCLE_PROJECT_REPONAME}"
# we'll do everything with absolute paths
basedir="$(pwd)"
@ -25,12 +28,12 @@ build_ext() {
pg_major="$1"
builddir="${basedir}/build-${pg_major}"
echo "Beginning build for PostgreSQL ${pg_major}..." >&2
echo "Beginning build of ${project} for PostgreSQL ${pg_major}..." >&2
# do everything in a subdirectory to avoid clutter in current directory
mkdir -p "${builddir}" && cd "${builddir}"
CFLAGS=-Werror "${basedir}/configure" PG_CONFIG="/usr/lib/postgresql/${pg_major}/bin/pg_config" --enable-coverage --with-security-flags
CFLAGS=-Werror "${basedir}/configure" PG_CONFIG="/usr/lib/postgresql/${pg_major}/bin/pg_config" --enable-coverage
installdir="${builddir}/install"
make -j$(nproc) && mkdir -p "${installdir}" && { make DESTDIR="${installdir}" install-all || make DESTDIR="${installdir}" install ; }
@ -14,8 +14,8 @@ ci_scripts=$(
grep -v -E '^(ci_helpers.sh|fix_style.sh)$'
)
for script in $ci_scripts; do
if ! grep "\\bci/$script\\b" -r .github > /dev/null; then
echo "ERROR: CI script with name \"$script\" is not actually used in .github folder"
if ! grep "\\bci/$script\\b" .circleci/config.yml > /dev/null; then
echo "ERROR: CI script with name \"$script\" is not actually used in .circleci/config.yml"
exit 1
fi
if ! grep "^## \`$script\`\$" ci/README.md > /dev/null; then
@ -7,12 +7,13 @@ source ci/ci_helpers.sh
cd src/test/regress
# 1. Find all *.sql and *.spec files in the sql, and spec directories
# 1. Find all *.sql *.spec and *.source files in the sql, spec and input
# directories
# 2. Strip the extension and the directory
# 3. Ignore names that end with .include, those files are meant to be in an C
# preprocessor #include statement. They should not be in schedules.
test_names=$(
find sql spec -iname "*.sql" -o -iname "*.spec" |
find sql spec input -iname "*.sql" -o -iname "*.spec" -o -iname "*.source" |
sed -E 's#^\w+/([^/]+)\.[^.]+$#\1#g' |
grep -v '.include$'
)
ci/check_enterprise_merge.sh Executable file
@ -0,0 +1,88 @@
#!/bin/bash
# Testing this script locally requires you to set the following environment
# variables:
# CIRCLE_BRANCH, GIT_USERNAME and GIT_TOKEN
# fail if trying to reference a variable that is not set.
set -u
# exit immediately if a command fails
set -e
# Fail on pipe failures
set -o pipefail
PR_BRANCH="${CIRCLE_BRANCH}"
ENTERPRISE_REMOTE="https://${GIT_USERNAME}:${GIT_TOKEN}@github.com/citusdata/citus-enterprise"
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# List executed commands. This is done so debugging this script is easier when
# it fails. It's explicitly done after git remote add so username and password
# are not shown in CI output (even though it's also filtered out by CircleCI)
set -x
check_compile () {
echo "INFO: checking if merged code can be compiled"
./configure --without-libcurl
make -j10
}
# Clone current git repo (which should be community) to a temporary working
# directory and go there
GIT_DIR_ROOT="$(git rev-parse --show-toplevel)"
TMP_GIT_DIR="$(mktemp --directory -t citus-merge-check.XXXXXXXXX)"
git clone "$GIT_DIR_ROOT" "$TMP_GIT_DIR"
cd "$TMP_GIT_DIR"
# Fails in CI without this
git config user.email "citus-bot@microsoft.com"
git config user.name "citus bot"
# Disable "set -x" temporarily, because $ENTERPRISE_REMOTE contains passwords
{ set +x ; } 2> /dev/null
git remote add enterprise "$ENTERPRISE_REMOTE"
set -x
git remote set-url --push enterprise no-pushing
# Fetch enterprise-master
git fetch enterprise enterprise-master
git checkout "enterprise/enterprise-master"
if git merge --no-commit "origin/$PR_BRANCH"; then
echo "INFO: community PR branch could be merged into enterprise-master"
# check that we can compile after the merge
if check_compile; then
exit 0
fi
echo "WARN: Failed to compile after community PR branch was merged into enterprise"
fi
# undo partial merge
git merge --abort
if ! git fetch enterprise "$PR_BRANCH" ; then
echo "ERROR: enterprise/$PR_BRANCH was not found and community PR branch could not be merged into enterprise-master"
exit 1
fi
# Show the top commit of the enterprise PR branch to make debugging easier
git log -n 1 "enterprise/$PR_BRANCH"
# Check that this branch contains the top commit of the current community PR
# branch. If it does not it means it's not up to date with the current PR, so
# the enterprise branch should be updated.
if ! git merge-base --is-ancestor "origin/$PR_BRANCH" "enterprise/$PR_BRANCH" ; then
echo "ERROR: enterprise/$PR_BRANCH is not up to date with community PR branch"
exit 1
fi
# Now check if we can merge the enterprise PR into enterprise-master without
# issues.
git merge --no-commit "enterprise/$PR_BRANCH"
# check that we can compile after the merge
check_compile
@ -1,25 +0,0 @@
#!/bin/bash
set -euo pipefail
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# Find the line that exactly matches "RegisterCitusConfigVariables(void)" in
# shared_library_init.c. grep command returns something like
# "934:RegisterCitusConfigVariables(void)" and we extract the line number
# with cut.
RegisterCitusConfigVariables_begin_linenumber=$(grep -n "^RegisterCitusConfigVariables(void)$" src/backend/distributed/shared_library_init.c | cut -d: -f1)
# Consider the lines starting from $RegisterCitusConfigVariables_begin_linenumber,
# grep the first line that starts with "}" and extract the line number with cut
# as in the previous step.
RegisterCitusConfigVariables_length=$(tail -n +$RegisterCitusConfigVariables_begin_linenumber src/backend/distributed/shared_library_init.c | grep -n -m 1 "^}$" | cut -d: -f1)
# extract the function definition of RegisterCitusConfigVariables into a temp file
tail -n +$RegisterCitusConfigVariables_begin_linenumber src/backend/distributed/shared_library_init.c | head -n $(($RegisterCitusConfigVariables_length)) > RegisterCitusConfigVariables_func_def.out
# extract citus gucs in the form of <tab><tab>"citus.X"
grep -P "^[\t][\t]\"citus\.[a-zA-Z_0-9]+\"" RegisterCitusConfigVariables_func_def.out > gucs.out
LC_COLLATE=C sort -c gucs.out
rm gucs.out
rm RegisterCitusConfigVariables_func_def.out
@ -1,33 +0,0 @@
#! /bin/bash
set -euo pipefail
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# This file checks for the existence of downgrade scripts for every upgrade script that is changed in the branch.
# create list of migration files for upgrades
upgrade_files=$(git diff --name-only origin/main | { grep "src/backend/distributed/sql/citus--.*sql" || exit 0 ; })
downgrade_files=$(git diff --name-only origin/main | { grep "src/backend/distributed/sql/downgrades/citus--.*sql" || exit 0 ; })
ret_value=0
for file in $upgrade_files
do
# There should always be 2 matches, and no need to avoid splitting here
# shellcheck disable=SC2207
versions=($(grep --only-matching --extended-regexp "[0-9]+\.[0-9]+[-.][0-9]+" <<< "$file"))
from_version=${versions[0]};
to_version=${versions[1]};
downgrade_migration_file="src/backend/distributed/sql/downgrades/citus--$to_version--$from_version.sql"
# check for the existence of migration scripts
if [[ $(grep --line-regexp --count "$downgrade_migration_file" <<< "$downgrade_files") == 0 ]]
then
echo "$file is updated, but $downgrade_migration_file is not updated in branch"
ret_value=1
fi
done
exit $ret_value;
@ -1,10 +1,6 @@
#! /bin/bash
set -euo pipefail
# make ** match all directories and subdirectories
shopt -s globstar
# shellcheck disable=SC1091
source ci/ci_helpers.sh
@ -16,17 +12,17 @@ source ci/ci_helpers.sh
# and reusing them if needed. GNU sed unfortunately does not support lookaround assertions.
# /* -> --
find src/backend/{distributed,columnar}/sql/**/*.sql -print0 | xargs -0 sed -i 's#/\*#--#g'
find src/backend/distributed/sql/*.sql -print0 | xargs -0 sed -i 's#/\*#--#g'
# */ -> `` (empty string)
# remove all whitespaces immediately before the match
find src/backend/{distributed,columnar}/sql/**/*.sql -print0 | xargs -0 sed -i 's#\s*\*/\s*##g'
find src/backend/distributed/sql/*.sql -print0 | xargs -0 sed -i 's#\s*\*/\s*##g'
# * -> --
# keep the indentation
# allow only whitespaces before the match
find src/backend/{distributed,columnar}/sql/**/*.sql -print0 | xargs -0 sed -i 's#^\(\s*\) \*#\1--#g'
find src/backend/distributed/sql/*.sql -print0 | xargs -0 sed -i 's#^\(\s*\) \*#\1--#g'
# // -> --
# do not touch http:// or similar by allowing only whitespaces before //
find src/backend/{distributed,columnar}/sql/**/*.sql -print0 | xargs -0 sed -i 's#^\(\s*\)//#\1--#g'
find src/backend/distributed/sql/*.sql -print0 | xargs -0 sed -i 's#^\(\s*\)//#\1--#g'
@ -1,12 +0,0 @@
#! /bin/bash
set -euo pipefail
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# We do not use comments starting with # in spec files because it creates warnings from
# preprocessor that expects directives after this character.
# `# ` -> `-- `
find src/test/regress/spec/*.spec -print0 | xargs -0 sed -i 's!# !// !g'
@ -1,19 +0,0 @@
#! /bin/bash
set -euo pipefail
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# Remove all the ignored files from git tree, and error out
# find all ignored files in git tree, and use quotation marks to prevent word splitting on filenames with spaces in them
# NOTE: Option --cached is needed to avoid a bug in git ls-files command.
ignored_lines_in_git_tree=$(git ls-files --ignored --cached --exclude-standard | sed 's/.*/"&"/')
if [[ -n $ignored_lines_in_git_tree ]]
then
echo "Ignored files should not be in git tree!"
echo "${ignored_lines_in_git_tree}"
echo "Removing these files from git tree, please review and commit"
echo "$ignored_lines_in_git_tree" | xargs git rm -r --cached
exit 1
fi
@ -9,14 +9,8 @@ cidir="${0%/*}"
cd ${cidir}/..
citus_indent . --quiet
black . --quiet
isort . --quiet
ci/editorconfig.sh
ci/remove_useless_declarations.sh
ci/disallow_c_comments_in_migrations.sh
ci/disallow_hash_comments_in_spec_files.sh
ci/disallow_long_changelog_entries.sh
ci/normalize_expected.sh
ci/fix_gitignore.sh
ci/print_stack_trace.sh
ci/sort_and_group_includes.sh
@ -1,157 +0,0 @@
#!/usr/bin/env python3
"""
easy command line to run against all citus-style checked files:
$ git ls-files \
| git check-attr --stdin citus-style \
| grep 'citus-style: set' \
| awk '{print $1}' \
| cut -d':' -f1 \
| xargs -n1 ./ci/include_grouping.py
"""
import collections
import os
import sys
def main(args):
if len(args) < 2:
print("Usage: include_grouping.py <file>")
return
file = args[1]
if not os.path.isfile(file):
sys.exit(f"File '{file}' does not exist")
with open(file, "r") as in_file:
with open(file + ".tmp", "w") as out_file:
includes = []
skipped_lines = []
# This calls print_sorted_includes on a set of consecutive #include lines.
# This implicitly keeps separation of any #include lines that are contained in
# an #ifdef, because it will order the #include lines inside and after the
# #ifdef completely separately.
for line in in_file:
# if a line starts with #include we don't want to print it yet, instead we
# want to collect all consecutive #include lines
if line.startswith("#include"):
includes.append(line)
skipped_lines = []
continue
# if we have collected any #include lines, we want to print them sorted
# before printing the current line. However, if the current line is empty
# we want to perform a lookahead to see if the next line is an #include.
# To maintain any separation between #include lines and their subsequent
# lines we keep track of all lines we have skipped inbetween.
if len(includes) > 0:
if len(line.strip()) == 0:
skipped_lines.append(line)
continue
# we have includes that need to be grouped before printing the current
# line.
print_sorted_includes(includes, file=out_file)
includes = []
# print any skipped lines
print("".join(skipped_lines), end="", file=out_file)
skipped_lines = []
print(line, end="", file=out_file)
# move out_file to file
os.rename(file + ".tmp", file)
def print_sorted_includes(includes, file=sys.stdout):
default_group_key = 1
groups = collections.defaultdict(set)
# define the groups that we separate correctly. The matchers are tested in the order
# of their priority field. The first matcher that matches the include is used to
# assign the include to a group.
# The groups are printed in the order of their group_key.
matchers = [
{
"name": "system includes",
"matcher": lambda x: x.startswith("<"),
"group_key": -2,
"priority": 0,
},
{
"name": "toplevel postgres includes",
"matcher": lambda x: "/" not in x,
"group_key": 0,
"priority": 9,
},
{
"name": "postgres.h",
"matcher": lambda x: x.strip() in ['"postgres.h"'],
"group_key": -1,
"priority": -1,
},
{
"name": "toplevel citus inlcudes",
"matcher": lambda x: x.strip()
in [
'"citus_version.h"',
'"pg_version_compat.h"',
'"pg_version_constants.h"',
],
"group_key": 3,
"priority": 0,
},
{
"name": "columnar includes",
"matcher": lambda x: x.startswith('"columnar/'),
"group_key": 4,
"priority": 1,
},
{
"name": "distributed includes",
"matcher": lambda x: x.startswith('"distributed/'),
"group_key": 5,
"priority": 1,
},
]
matchers.sort(key=lambda x: x["priority"])
# throughout our codebase we have some includes where either postgres or citus
# includes are wrongfully included with the syntax for system includes. Before we
# try to match those we will change the <> to "" to make them match our system. This
# will also rewrite the include to the correct syntax.
common_system_include_error_prefixes = ["<nodes/", "<distributed/"]
# assign every include to a group
for include in includes:
# extract the group key from the include
include_content = include.split(" ")[1]
# fix common system includes which are secretly postgres or citus includes
for common_prefix in common_system_include_error_prefixes:
if include_content.startswith(common_prefix):
include_content = '"' + include_content.strip()[1:-1] + '"'
include = include.split(" ")[0] + " " + include_content + "\n"
break
group_key = default_group_key
for matcher in matchers:
if matcher["matcher"](include_content):
group_key = matcher["group_key"]
break
groups[group_key].add(include)
# iterate over all groups in the natural order of its keys
for i, group in enumerate(sorted(groups.items())):
if i > 0:
print(file=file)
includes = group[1]
print("".join(sorted(includes)), end="", file=file)
if __name__ == "__main__":
main(sys.argv)
@ -1,25 +0,0 @@
#!/bin/bash
set -euo pipefail
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# find all core files
core_files=( $(find . -type f -regex .*core.*\d*.*postgres) )
if [ ${#core_files[@]} -gt 0 ]; then
# print stack traces for the core files
for core_file in "${core_files[@]}"
do
# set print frame-arguments all: show all scalars + structures in the frame
# set print pretty on: show structures in indented mode
# set print addr off: do not show pointer address
# thread apply all bt full: show stack traces for all threads
gdb --batch \
-ex "set print frame-arguments all" \
-ex "set print pretty on" \
-ex "set print addr off" \
-ex "thread apply all bt full" \
postgres "${core_file}"
done
fi
@ -1,12 +0,0 @@
#!/bin/bash
set -euo pipefail
# shellcheck disable=SC1091
source ci/ci_helpers.sh
git ls-files \
| git check-attr --stdin citus-style \
| grep 'citus-style: set' \
| awk '{print $1}' \
| cut -d':' -f1 \
| xargs -n1 ./ci/include_grouping.py
@ -10,7 +10,7 @@
# argument (other than "yes/no"), etc.
#
# The point of this implementation is to reduce code size and
# redundancy in configure.ac and to improve robustness and consistency
# redundancy in configure.in and to improve robustness and consistency
# in the option evaluation code.

configure (vendored; 128 changed lines)

@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for Citus 13.2devel.
# Generated by GNU Autoconf 2.69 for Citus 10.0.8.
#
#
# Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc.
@ -579,8 +579,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='Citus'
PACKAGE_TARNAME='citus'
PACKAGE_VERSION='13.2devel'
PACKAGE_STRING='Citus 13.2devel'
PACKAGE_VERSION='10.0.8'
PACKAGE_STRING='Citus 10.0.8'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
@ -622,6 +622,7 @@ ac_includes_default="\
ac_subst_vars='LTLIBOBJS
LIBOBJS
HAS_TABLEAM
HAS_DOTGIT
POSTGRES_BUILDDIR
POSTGRES_SRCDIR
@ -644,7 +645,6 @@ LDFLAGS
CFLAGS
CC
vpath_build
with_pg_version_check
PATH
PG_CONFIG
FLEX
@ -693,7 +693,6 @@ ac_subst_files=''
ac_user_opts='
enable_option_checking
with_extra_version
with_pg_version_check
enable_coverage
with_libcurl
with_reports_hostname
@ -1262,7 +1261,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures Citus 13.2devel to adapt to many kinds of systems.
\`configure' configures Citus 10.0.8 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@ -1324,7 +1323,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of Citus 13.2devel:";;
short | recursive ) echo "Configuration of Citus 10.0.8:";;
esac
cat <<\_ACEOF
@ -1339,8 +1338,6 @@ Optional Packages:
--without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no)
--with-extra-version=STRING
append STRING to version
--without-pg-version-check
do not check postgres version during configure
--without-libcurl do not use libcurl for anonymous statistics
collection
--with-reports-hostname=HOSTNAME
@ -1429,7 +1426,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
Citus configure 13.2devel
Citus configure 10.0.8
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@ -1912,7 +1909,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by Citus $as_me 13.2devel, which was
It was created by Citus $as_me 10.0.8, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@ -2559,36 +2556,7 @@ if test -z "$version_num"; then
as_fn_error $? "Could not detect PostgreSQL version from pg_config." "$LINENO" 5
fi
# Check whether --with-pg-version-check was given.
if test "${with_pg_version_check+set}" = set; then :
withval=$with_pg_version_check;
case $withval in
yes)
:
;;
no)
:
;;
*)
as_fn_error $? "no argument expected for --with-pg-version-check option" "$LINENO" 5
;;
esac
else
with_pg_version_check=yes
fi
if test "$with_pg_version_check" = no; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: building against PostgreSQL $version_num (skipped compatibility check)" >&5
$as_echo "$as_me: building against PostgreSQL $version_num (skipped compatibility check)" >&6;}
elif test "$version_num" != '15' -a "$version_num" != '16' -a "$version_num" != '17'; then
if test "$version_num" != '11' -a "$version_num" != '12' -a "$version_num" != '13'; then
as_fn_error $? "Citus is not compatible with the detected PostgreSQL version ${version_num}." "$LINENO" 5
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: building against PostgreSQL $version_num" >&5
@ -4565,9 +4533,21 @@ cat >>confdefs.h <<_ACEOF
_ACEOF
#
# LZ4
#
if test "$version_num" != '11'; then
HAS_TABLEAM=yes
$as_echo "#define HAS_TABLEAM 1" >>confdefs.h
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: postgres version does not support table access methods" >&5
$as_echo "$as_me: postgres version does not support table access methods" >&6;}
fi;
# Require lz4 & zstd only if we are compiling columnar
if test "$HAS_TABLEAM" == 'yes'; then
#
# LZ4
#
@ -4576,9 +4556,7 @@ if test "${with_lz4+set}" = set; then :
withval=$with_lz4;
case $withval in
yes)
$as_echo "#define HAVE_CITUS_LIBLZ4 1" >>confdefs.h
:
;;
no)
:
@ -4591,15 +4569,13 @@ $as_echo "#define HAVE_CITUS_LIBLZ4 1" >>confdefs.h
else
with_lz4=yes
$as_echo "#define HAVE_CITUS_LIBLZ4 1" >>confdefs.h
fi
if test "$with_lz4" = yes; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for LZ4_compress_default in -llz4" >&5
if test "$with_lz4" = yes; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for LZ4_compress_default in -llz4" >&5
$as_echo_n "checking for LZ4_compress_default in -llz4... " >&6; }
if ${ac_cv_lib_lz4_LZ4_compress_default+:} false; then :
$as_echo_n "(cached) " >&6
@ -4644,27 +4620,27 @@ _ACEOF
else
as_fn_error $? "lz4 library not found
If you have lz4 installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-lz4 to disable lz4 support." "$LINENO" 5
If you have lz4 installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-lz4 to disable zlib support." "$LINENO" 5
fi
ac_fn_c_check_header_mongrel "$LINENO" "lz4.h" "ac_cv_header_lz4_h" "$ac_includes_default"
ac_fn_c_check_header_mongrel "$LINENO" "lz4.h" "ac_cv_header_lz4_h" "$ac_includes_default"
if test "x$ac_cv_header_lz4_h" = xyes; then :
else
as_fn_error $? "lz4 header not found
If you have lz4 already installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-lz4 to disable lz4 support." "$LINENO" 5
If you have lz4 already installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-lz4 to disable lz4 support." "$LINENO" 5
fi
fi
fi
#
# ZSTD
#
#
# ZSTD
#
@ -4691,8 +4667,8 @@ fi
if test "$with_zstd" = yes; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for ZSTD_decompress in -lzstd" >&5
if test "$with_zstd" = yes; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for ZSTD_decompress in -lzstd" >&5
$as_echo_n "checking for ZSTD_decompress in -lzstd... " >&6; }
if ${ac_cv_lib_zstd_ZSTD_decompress+:} false; then :
$as_echo_n "(cached) " >&6
@ -4737,23 +4713,25 @@ _ACEOF
else
as_fn_error $? "zstd library not found
If you have zstd installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-zstd to disable zstd support." "$LINENO" 5
If you have zstd installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-zstd to disable zlib support." "$LINENO" 5
fi
ac_fn_c_check_header_mongrel "$LINENO" "zstd.h" "ac_cv_header_zstd_h" "$ac_includes_default"
ac_fn_c_check_header_mongrel "$LINENO" "zstd.h" "ac_cv_header_zstd_h" "$ac_includes_default"
if test "x$ac_cv_header_zstd_h" = xyes; then :
else
as_fn_error $? "zstd header not found
If you have zstd already installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-zstd to disable zstd support." "$LINENO" 5
If you have zstd already installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-zstd to disable zlib support." "$LINENO" 5
fi
fi
fi
fi # test "$HAS_TABLEAM" == 'yes'
@ -4881,6 +4859,8 @@ POSTGRES_BUILDDIR="$POSTGRES_BUILDDIR"
HAS_DOTGIT="$HAS_DOTGIT"
HAS_TABLEAM="$HAS_TABLEAM"
ac_config_files="$ac_config_files Makefile.global"
@ -5393,7 +5373,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by Citus $as_me 13.2devel, which was
This file was extended by Citus $as_me 10.0.8, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@ -5455,7 +5435,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
Citus config.status 13.2devel
Citus config.status 10.0.8
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"


@ -5,7 +5,7 @@
# everyone needing autoconf installed, the resulting files are checked
# into the SCM.
AC_INIT([Citus], [13.2devel])
AC_INIT([Citus], [10.0.8])
AC_COPYRIGHT([Copyright (c) Citus Data, Inc.])
# we'll need sed and awk for some of the version commands
@ -74,13 +74,7 @@ if test -z "$version_num"; then
AC_MSG_ERROR([Could not detect PostgreSQL version from pg_config.])
fi
PGAC_ARG_BOOL(with, pg-version-check, yes,
[do not check postgres version during configure])
AC_SUBST(with_pg_version_check)
if test "$with_pg_version_check" = no; then
AC_MSG_NOTICE([building against PostgreSQL $version_num (skipped compatibility check)])
elif test "$version_num" != '15' -a "$version_num" != '16' -a "$version_num" != '17'; then
if test "$version_num" != '11' -a "$version_num" != '12' -a "$version_num" != '13'; then
AC_MSG_ERROR([Citus is not compatible with the detected PostgreSQL version ${version_num}.])
else
AC_MSG_NOTICE([building against PostgreSQL $version_num])
@ -222,44 +216,54 @@ PGAC_ARG_REQ(with, reports-hostname, [HOSTNAME],
AC_DEFINE_UNQUOTED(REPORTS_BASE_URL, "$REPORTS_BASE_URL",
[Base URL for statistics collection and update checks])
#
# LZ4
#
PGAC_ARG_BOOL(with, lz4, yes,
[do not use lz4],
[AC_DEFINE([HAVE_CITUS_LIBLZ4], 1, [Define to 1 to build with lz4 support. (--with-lz4)])])
AC_SUBST(with_lz4)
if test "$version_num" != '11'; then
HAS_TABLEAM=yes
AC_DEFINE([HAS_TABLEAM], 1, [Define to 1 to build with table access method support, pg12 and up])
else
AC_MSG_NOTICE([postgres version does not support table access methods])
fi;
if test "$with_lz4" = yes; then
AC_CHECK_LIB(lz4, LZ4_compress_default, [],
[AC_MSG_ERROR([lz4 library not found
If you have lz4 installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-lz4 to disable lz4 support.])])
AC_CHECK_HEADER(lz4.h, [], [AC_MSG_ERROR([lz4 header not found
If you have lz4 already installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-lz4 to disable lz4 support.])])
fi
# Require lz4 & zstd only if we are compiling columnar
if test "$HAS_TABLEAM" == 'yes'; then
#
# LZ4
#
PGAC_ARG_BOOL(with, lz4, yes,
[do not use lz4])
AC_SUBST(with_lz4)
#
# ZSTD
#
PGAC_ARG_BOOL(with, zstd, yes,
[do not use zstd])
AC_SUBST(with_zstd)
if test "$with_lz4" = yes; then
AC_CHECK_LIB(lz4, LZ4_compress_default, [],
[AC_MSG_ERROR([lz4 library not found
If you have lz4 installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-lz4 to disable zlib support.])])
AC_CHECK_HEADER(lz4.h, [], [AC_MSG_ERROR([lz4 header not found
If you have lz4 already installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-lz4 to disable lz4 support.])])
fi
if test "$with_zstd" = yes; then
AC_CHECK_LIB(zstd, ZSTD_decompress, [],
[AC_MSG_ERROR([zstd library not found
If you have zstd installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-zstd to disable zstd support.])])
AC_CHECK_HEADER(zstd.h, [], [AC_MSG_ERROR([zstd header not found
If you have zstd already installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-zstd to disable zstd support.])])
fi
#
# ZSTD
#
PGAC_ARG_BOOL(with, zstd, yes,
[do not use zstd])
AC_SUBST(with_zstd)
if test "$with_zstd" = yes; then
AC_CHECK_LIB(zstd, ZSTD_decompress, [],
[AC_MSG_ERROR([zstd library not found
If you have zstd installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-zstd to disable zlib support.])])
AC_CHECK_HEADER(zstd.h, [], [AC_MSG_ERROR([zstd header not found
If you have zstd already installed, see config.log for details on the
failure. It is possible the compiler isn't looking in the proper directory.
Use --without-zstd to disable zlib support.])])
fi
fi # test "$HAS_TABLEAM" == 'yes'
PGAC_ARG_BOOL(with, security-flags, no,
@ -296,6 +300,7 @@ AC_SUBST(CITUS_LDFLAGS, "$LIBS $CITUS_LDFLAGS")
AC_SUBST(POSTGRES_SRCDIR, "$POSTGRES_SRCDIR")
AC_SUBST(POSTGRES_BUILDDIR, "$POSTGRES_BUILDDIR")
AC_SUBST(HAS_DOTGIT, "$HAS_DOTGIT")
AC_SUBST(HAS_TABLEAM, "$HAS_TABLEAM")
AC_CONFIG_FILES([Makefile.global])
AC_CONFIG_HEADERS([src/include/citus_config.h] [src/include/citus_version.h])

github-banner.png (new binary file, 4.0 KiB; contents not shown)

(Eleven other binary image files in this comparison are shown only as size placeholders — 95, 94, 22, 18, 22, 102, 29, 69, 111, 12 and 168 KiB; their filenames and contents are not rendered in this view.)


@ -1,40 +0,0 @@
[tool.isort]
profile = 'black'
[tool.black]
include = '(src/test/regress/bin/diff-filter|\.pyi?|\.ipynb)$'
[tool.pytest.ini_options]
addopts = [
"--import-mode=importlib",
"--showlocals",
"--tb=short",
]
pythonpath = 'src/test/regress/citus_tests'
asyncio_mode = 'auto'
# Make test discovery quicker from the root dir of the repo
testpaths = ['src/test/regress/citus_tests/test']
# Make test discovery quicker from other directories than root directory
norecursedirs = [
'*.egg',
'.*',
'build',
'venv',
'ci',
'vendor',
'backend',
'bin',
'include',
'tmp_*',
'results',
'expected',
'sql',
'spec',
'data',
'__pycache__',
]
# Don't find files with test at the end such as run_test.py
python_files = ['test_*.py']


@ -16,6 +16,7 @@ README.* conflict-marker-size=32
# Test output files that contain extra whitespace
*.out -whitespace
src/test/regress/output/*.source -whitespace
# These files are maintained or generated elsewhere. We take them as is.
configure -whitespace


@ -1,3 +1,68 @@
# The directory used to store columnar sql files after pre-processing them
# with 'cpp' in build-time, see src/backend/columnar/Makefile.
/build/
# =====
# = C =
# =====
# Object files
*.o
*.ko
*.obj
*.elf
*.bc
# Libraries
*.lib
*.a
# Shared objects (inc. Windows DLLs)
*.dll
*.so
*.so.*
*.dylib
# Executables
*.exe
*.app
*.i*86
*.x86_64
*.hex
# ========
# = Gcov =
# ========
# gcc coverage testing tool files
*.gcno
*.gcda
*.gcov
# ====================
# = Project-Specific =
# ====================
/data/*.cstore
/data/*.footer
/sql/*block_filtering.sql
/sql/*copyto.sql
/sql/*create.sql
/sql/*data_types.sql
/sql/*load.sql
/expected/*block_filtering.out
/expected/*copyto.out
/expected/*create.out
/expected/*data_types.out
/expected/*load.out
/results/*
/.deps/*
/regression.diffs
/regression.out
.vscode
*.pb-c.*
# ignore files that could be created by circleci automation
files.lst
install-*.tar
install-*/


@ -1,60 +0,0 @@
citus_subdir = src/backend/columnar
citus_top_builddir = ../../..
safestringlib_srcdir = $(citus_abs_top_srcdir)/vendor/safestringlib
SUBDIRS = . safeclib
SUBDIRS +=
ENSURE_SUBDIRS_EXIST := $(shell mkdir -p $(SUBDIRS))
OBJS += \
$(patsubst $(citus_abs_srcdir)/%.c,%.o,$(foreach dir,$(SUBDIRS), $(sort $(wildcard $(citus_abs_srcdir)/$(dir)/*.c))))
MODULE_big = citus_columnar
EXTENSION = citus_columnar
template_sql_files = $(patsubst $(citus_abs_srcdir)/%,%,$(wildcard $(citus_abs_srcdir)/sql/*.sql))
template_downgrade_sql_files = $(patsubst $(citus_abs_srcdir)/sql/downgrades/%,%,$(wildcard $(citus_abs_srcdir)/sql/downgrades/*.sql))
generated_sql_files = $(patsubst %,$(citus_abs_srcdir)/build/%,$(template_sql_files))
generated_downgrade_sql_files += $(patsubst %,$(citus_abs_srcdir)/build/sql/%,$(template_downgrade_sql_files))
DATA_built = $(generated_sql_files)
PG_CPPFLAGS += -I$(libpq_srcdir) -I$(safestringlib_srcdir)/include
include $(citus_top_builddir)/Makefile.global
SQL_DEPDIR=.deps/sql
SQL_BUILDDIR=build/sql
$(generated_sql_files): $(citus_abs_srcdir)/build/%: %
@mkdir -p $(citus_abs_srcdir)/$(SQL_DEPDIR) $(citus_abs_srcdir)/$(SQL_BUILDDIR)
@# -MF is used to store dependency files(.Po) in another directory for separation
@# -MT is used to change the target of the rule emitted by dependency generation.
@# -P is used to inhibit generation of linemarkers in the output from the preprocessor.
@# -undef is used to not predefine any system-specific or GCC-specific macros.
@# `man cpp` for further information
cd $(citus_abs_srcdir) && cpp -undef -w -P -MMD -MP -MF$(SQL_DEPDIR)/$(*F).Po -MT$@ $< > $@
$(generated_downgrade_sql_files): $(citus_abs_srcdir)/build/sql/%: sql/downgrades/%
@mkdir -p $(citus_abs_srcdir)/$(SQL_DEPDIR) $(citus_abs_srcdir)/$(SQL_BUILDDIR)
@# -MF is used to store dependency files(.Po) in another directory for separation
@# -MT is used to change the target of the rule emitted by dependency generation.
@# -P is used to inhibit generation of linemarkers in the output from the preprocessor.
@# -undef is used to not predefine any system-specific or GCC-specific macros.
@# `man cpp` for further information
cd $(citus_abs_srcdir) && cpp -undef -w -P -MMD -MP -MF$(SQL_DEPDIR)/$(*F).Po -MT$@ $< > $@
.PHONY: install install-downgrades install-all
cleanup-before-install:
rm -f $(DESTDIR)$(datadir)/$(datamoduledir)/citus_columnar.control
rm -f $(DESTDIR)$(datadir)/$(datamoduledir)/columnar--*
rm -f $(DESTDIR)$(datadir)/$(datamoduledir)/citus_columnar--*
install: cleanup-before-install
# install and install-downgrades should be run sequentially
install-all: install
$(MAKE) install-downgrades
install-downgrades: $(generated_downgrade_sql_files)
$(INSTALL_DATA) $(generated_downgrade_sql_files) '$(DESTDIR)$(datadir)/$(datamoduledir)/'


@ -41,7 +41,7 @@ Benefits of Citus Columnar over cstore_fdw:
* Append-only (no ``UPDATE``/``DELETE`` support)
* No space reclamation (e.g. rolled-back transactions may still
consume disk space)
* No bitmap index scans
* No index support, index scans, or bitmap index scans
* No tidscans
* No sample scans
* No TOAST support (large values supported inline)
@ -52,11 +52,13 @@ Benefits of Citus Columnar over cstore_fdw:
... FOR UPDATE``)
* No support for serializable isolation level
* Support for PostgreSQL server versions 12+ only
* No support for foreign keys
* No support for foreign keys, unique constraints, or exclusion
constraints
* No support for logical decoding
* No support for intra-node parallel scans
* No support for ``AFTER ... FOR EACH ROW`` triggers
* No `UNLOGGED` columnar tables
* No `TEMPORARY` columnar tables
Future iterations will incrementally lift the limitations listed above.
@ -89,25 +91,38 @@ data.
Set options using:
```sql
ALTER TABLE my_columnar_table SET
(columnar.compression = none, columnar.stripe_row_limit = 10000);
alter_columnar_table_set(
relid REGCLASS,
chunk_group_row_limit INT4 DEFAULT NULL,
stripe_row_limit INT4 DEFAULT NULL,
compression NAME DEFAULT NULL,
compression_level INT4)
```
For example:
```sql
SELECT alter_columnar_table_set(
'my_columnar_table',
compression => 'none',
stripe_row_limit => 10000);
```
The following options are available:
* **columnar.compression**: `[none|pglz|zstd|lz4|lz4hc]` - set the compression type
* **compression**: `[none|pglz|zstd|lz4|lz4hc]` - set the compression type
for _newly-inserted_ data. Existing data will not be
recompressed/decompressed. The default value is `zstd` (if support
has been compiled in).
* **columnar.compression_level**: ``<integer>`` - Sets compression level. Valid
* **compression_level**: ``<integer>`` - Sets compression level. Valid
settings are from 1 through 19. If the compression method does not
support the level chosen, the closest level will be selected
instead.
* **columnar.stripe_row_limit**: ``<integer>`` - the maximum number of rows per
* **stripe_row_limit**: ``<integer>`` - the maximum number of rows per
stripe for _newly-inserted_ data. Existing stripes of data will not
be changed and may have more rows than this maximum value. The
default value is `150000`.
* **columnar.chunk_group_row_limit**: ``<integer>`` - the maximum number of rows per
* **chunk_group_row_limit**: ``<integer>`` - the maximum number of rows per
chunk for _newly-inserted_ data. Existing chunks of data will not be
changed and may have more rows than this maximum value. The default
value is `10000`.
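
For instance, the row-limit and compression options can be combined in a single call (illustrative; `my_columnar_table` is a placeholder table name, and the parameter names follow the `alter_columnar_table_set` signature shown above):

```sql
SELECT alter_columnar_table_set(
    'my_columnar_table',
    chunk_group_row_limit => 5000,
    compression => 'zstd',
    compression_level => 5);
```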
@ -173,14 +188,10 @@ operations that are supported on row tables but not columnar
data to be updated only affects row tables (e.g. ``UPDATE parent SET
i = i + 1 WHERE n = 300``).
Note that Citus Columnar supports `btree` and `hash `indexes (and
the constraints requiring them) but does not support `gist`, `gin`,
`spgist` and `brin` indexes.
For this reason, if some partitions are columnar and if the index is
not supported by Citus Columnar, then it's impossible to create indexes
on the partitioned (parent) table directly. In that case, you need to
create the index on the individual row partitions. Similarly for the
constraints that require indexes, e.g.:
Because columnar tables do not support indexes, it's impossible to
create indexes on the partitioned table if some partitions are
columnar. Instead, you must create indexes on the individual row
partitions. Similarly for constraints that require indexes, e.g.:
```sql
CREATE INDEX p2_ts_idx ON p2 (ts);
@ -233,14 +244,16 @@ CREATE TABLE perf_columnar(LIKE perf_row) USING COLUMNAR;
## Data
```sql
CREATE OR REPLACE FUNCTION random_words(n INT4) RETURNS TEXT LANGUAGE sql AS $$
WITH words(w) AS (
SELECT ARRAY['zero','one','two','three','four','five','six','seven','eight','nine','ten']
),
random (word) AS (
SELECT w[(random()*array_length(w, 1))::int] FROM generate_series(1, $1) AS i, words
)
SELECT string_agg(word, ' ') FROM random;
CREATE OR REPLACE FUNCTION random_words(n INT4) RETURNS TEXT LANGUAGE plpython2u AS $$
import random
t = ''
words = ['zero','one','two','three','four','five','six','seven','eight','nine','ten']
for i in xrange(0,n):
if (i != 0):
t += ' '
r = random.randint(0,len(words)-1)
t += words[r]
return t
$$;
```
@ -266,7 +279,7 @@ INSERT INTO perf_columnar SELECT * FROM perf_row;
=> SELECT pg_total_relation_size('perf_row')::numeric/pg_total_relation_size('perf_columnar') AS compression_ratio;
compression_ratio
--------------------
5.3958044063457513
5.4080768380134124
(1 row)
```
@ -274,12 +287,32 @@ The overall compression ratio of columnar table, versus the same data
stored with row storage, is **5.4X**.
```
=> VACUUM VERBOSE perf_row;
INFO: vacuuming "public.perf_row"
INFO: "perf_row": found 0 removable, 10 nonremovable row versions in 1 out of 5769231 pages
DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 3110
There were 0 unused item identifiers.
Skipped 0 pages due to buffer pins, 5769230 frozen pages.
0 pages are entirely empty.
CPU: user: 0.10 s, system: 0.05 s, elapsed: 0.26 s.
INFO: vacuuming "pg_toast.pg_toast_17133"
INFO: index "pg_toast_17133_index" now contains 0 row versions in 1 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO: "pg_toast_17133": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages
DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 3110
There were 0 unused item identifiers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
=> VACUUM VERBOSE perf_columnar;
INFO: statistics for "perf_columnar":
storage id: 10000000000
total file size: 8761368576, total data size: 8734266196
compression rate: 5.01x
total row count: 75000000, stripe count: 500, average rows per stripe: 150000
storage id: 10000000020
total file size: 8741486592, total data size: 8714771176
compression rate: 4.96x
total row count: 75000000, stripe count: 501, average rows per stripe: 149700
chunk count: 60000, containing data for dropped columns: 0, zstd compressed: 60000
```
@ -289,13 +322,8 @@ not account for the metadata savings of the columnar format.
## System
* Azure VM: Standard D2s v3 (2 vcpus, 8 GiB memory)
* Linux (ubuntu 18.04)
* Data Drive: Standard HDD (512GB, 500 IOPS Max, 60 MB/s Max)
* PostgreSQL 13 (``--with-llvm``, ``--with-python``)
* ``shared_buffers = 128MB``
* ``max_parallel_workers_per_gather = 0``
* ``jit = on``
* 16GB physical memory
* 128MB PG shared buffers
Note: because this was run on a system with enough physical memory to
hold a substantial fraction of the table, the IO benefits of columnar
@ -306,16 +334,11 @@ is substantially increased.
```sql
-- OFFSET 1000 so that no rows are returned, and we collect only timings
SELECT vendor_id, SUM(quantity) FROM perf_row GROUP BY vendor_id OFFSET 1000;
SELECT vendor_id, SUM(quantity) FROM perf_row GROUP BY vendor_id OFFSET 1000;
SELECT vendor_id, SUM(quantity) FROM perf_row GROUP BY vendor_id OFFSET 1000;
SELECT vendor_id, SUM(quantity) FROM perf_columnar GROUP BY vendor_id OFFSET 1000;
SELECT vendor_id, SUM(quantity) FROM perf_columnar GROUP BY vendor_id OFFSET 1000;
SELECT vendor_id, SUM(quantity) FROM perf_columnar GROUP BY vendor_id OFFSET 1000;
```
Timing (median of three runs):
* row: 436s
* columnar: 16s
* speedup: **27X**
* row: 201700ms
* columnar: 14202ms
* speedup: **14X**


@ -1,6 +0,0 @@
# Columnar extension
comment = 'Citus Columnar extension'
default_version = '12.2-1'
module_pathname = '$libdir/citus_columnar'
relocatable = false
schema = pg_catalog

File diff suppressed because it is too large.


@ -1,165 +0,0 @@
/*-------------------------------------------------------------------------
*
* columnar_debug.c
*
* Helper functions to debug column store.
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "funcapi.h"
#include "miscadmin.h"
#include "access/nbtree.h"
#include "access/table.h"
#include "catalog/pg_am.h"
#include "catalog/pg_type.h"
#include "storage/fd.h"
#include "storage/smgr.h"
#include "utils/guc.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "utils/tuplestore.h"
#include "pg_version_compat.h"
#include "pg_version_constants.h"
#include "columnar/columnar.h"
#include "columnar/columnar_storage.h"
#include "columnar/columnar_version_compat.h"
static void MemoryContextTotals(MemoryContext context, MemoryContextCounters *counters);
PG_FUNCTION_INFO_V1(columnar_store_memory_stats);
PG_FUNCTION_INFO_V1(columnar_storage_info);
/*
* columnar_store_memory_stats returns a record of 3 values: size of
* TopMemoryContext, TopTransactionContext, and Write State context.
*/
Datum
columnar_store_memory_stats(PG_FUNCTION_ARGS)
{
const int resultColumnCount = 3;
TupleDesc tupleDescriptor = CreateTemplateTupleDesc(resultColumnCount);
TupleDescInitEntry(tupleDescriptor, (AttrNumber) 1, "TopMemoryContext",
INT8OID, -1, 0);
TupleDescInitEntry(tupleDescriptor, (AttrNumber) 2, "TopTransactionContext",
INT8OID, -1, 0);
TupleDescInitEntry(tupleDescriptor, (AttrNumber) 3, "WriteStateContext",
INT8OID, -1, 0);
tupleDescriptor = BlessTupleDesc(tupleDescriptor);
MemoryContextCounters transactionCounters = { 0 };
MemoryContextCounters topCounters = { 0 };
MemoryContextCounters writeStateCounters = { 0 };
MemoryContextTotals(TopTransactionContext, &transactionCounters);
MemoryContextTotals(TopMemoryContext, &topCounters);
MemoryContextTotals(GetWriteContextForDebug(), &writeStateCounters);
bool nulls[3] = { false };
Datum values[3] = {
Int64GetDatum(topCounters.totalspace),
Int64GetDatum(transactionCounters.totalspace),
Int64GetDatum(writeStateCounters.totalspace)
};
HeapTuple tuple = heap_form_tuple(tupleDescriptor, values, nulls);
PG_RETURN_DATUM(HeapTupleGetDatum(tuple));
}
/*
* columnar_storage_info - UDF to return internal storage info for a columnar relation.
*
* DDL:
* CREATE OR REPLACE FUNCTION columnar_storage_info(
* rel regclass,
* version_major OUT int4,
* version_minor OUT int4,
* storage_id OUT int8,
* reserved_stripe_id OUT int8,
* reserved_row_number OUT int8,
* reserved_offset OUT int8)
* STRICT
* LANGUAGE c AS 'MODULE_PATHNAME', 'columnar_storage_info';
*/
Datum
columnar_storage_info(PG_FUNCTION_ARGS)
{
#define STORAGE_INFO_NATTS 6
Oid relid = PG_GETARG_OID(0);
TupleDesc tupdesc;
/* Build a tuple descriptor for our result type */
if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
{
elog(ERROR, "return type must be a row type");
}
if (tupdesc->natts != STORAGE_INFO_NATTS)
{
elog(ERROR, "return type must have %d columns", STORAGE_INFO_NATTS);
}
Relation rel = table_open(relid, AccessShareLock);
if (!IsColumnarTableAmTable(relid))
{
ereport(ERROR, (errmsg("table \"%s\" is not a columnar table",
RelationGetRelationName(rel))));
}
Datum values[STORAGE_INFO_NATTS] = { 0 };
bool nulls[STORAGE_INFO_NATTS] = { 0 };
/*
* Pass force = true so that we can inspect metapages that are not the
* current version.
*
* NB: ensure the order and number of attributes correspond to DDL
* declaration.
*/
values[0] = Int32GetDatum(ColumnarStorageGetVersionMajor(rel, true));
values[1] = Int32GetDatum(ColumnarStorageGetVersionMinor(rel, true));
values[2] = Int64GetDatum(ColumnarStorageGetStorageId(rel, true));
values[3] = Int64GetDatum(ColumnarStorageGetReservedStripeId(rel, true));
values[4] = Int64GetDatum(ColumnarStorageGetReservedRowNumber(rel, true));
values[5] = Int64GetDatum(ColumnarStorageGetReservedOffset(rel, true));
/* release lock */
table_close(rel, AccessShareLock);
HeapTuple tuple = heap_form_tuple(tupdesc, values, nulls);
PG_RETURN_DATUM(HeapTupleGetDatum(tuple));
}
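/*
 * Illustrative usage (assuming the SQL-level function is declared as in the
 * DDL comment above):
 *
 *   SELECT * FROM columnar_storage_info('my_columnar_table'::regclass);
 *
 * returns one row with version_major, version_minor, storage_id and the
 * reserved stripe id, row number and offset for that relation.
 */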
/*
* MemoryContextTotals adds stats of the given memory context and its
* subtree to the given counters.
*/
static void
MemoryContextTotals(MemoryContext context, MemoryContextCounters *counters)
{
if (context == NULL)
{
return;
}
MemoryContext child;
for (child = context->firstchild; child != NULL; child = child->nextchild)
{
MemoryContextTotals(child, counters);
}
context->methods->stats(context, NULL, NULL, counters, true);
}

File diff suppressed because it is too large.


@ -1,866 +0,0 @@
/*-------------------------------------------------------------------------
*
* columnar_storage.c
*
* Copyright (c) Citus Data, Inc.
*
* Low-level storage layer for columnar.
* - Translates columnar read/write operations on logical offsets into operations on pages/blocks.
* - Emits WAL.
* - Reads/writes the columnar metapage.
* - Reserves data offsets, stripe numbers, and row offsets.
* - Truncation.
*
* Higher-level columnar operations deal with logical offsets and large
* contiguous buffers of data that need to be stored. But the buffer manager
* and WAL depend on formatted pages with headers, so these large buffers need
* to be written across many pages. This module translates the contiguous
* buffers into individual block reads/writes, and performs WAL when
* necessary.
*
* Storage layout: a metapage in block 0, followed by an empty page in block
* 1, followed by logical data starting at the first byte after the page
* header in block 2 (having logical offset ColumnarFirstLogicalOffset). (XXX:
* Block 1 is left empty for no particular reason. Reconsider?). A columnar
* table should always have at least 2 blocks.
*
* Reservation is done with a relation extension lock, and designed for
* concurrency, so the callers only need an ordinary lock on the
* relation. Initializing the metapage or truncating the relation require that
* the caller holds an AccessExclusiveLock. (XXX: New reservations of data are
* aligned onto a new page for no particular reason. Reconsider?).
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "miscadmin.h"
#include "safe_lib.h"
#include "access/generic_xlog.h"
#include "catalog/storage.h"
#include "storage/bufmgr.h"
#include "storage/lmgr.h"
#include "pg_version_compat.h"
#include "columnar/columnar.h"
#include "columnar/columnar_storage.h"
/*
* Content of the first page in main fork, which stores metadata at file
* level.
*/
typedef struct ColumnarMetapage
{
/*
* Store version of file format used, so we can detect files from
* previous versions if we change file format.
*/
uint32 versionMajor;
uint32 versionMinor;
/*
* Each of the metadata table rows are identified by a storageId.
* We store it also in the main fork so we can link metadata rows
* with data files.
*/
uint64 storageId;
uint64 reservedStripeId; /* first unused stripe id */
uint64 reservedRowNumber; /* first unused row number */
uint64 reservedOffset; /* first unused byte offset */
/*
* Flag set to true in the init fork. After an unlogged table reset (due
* to a crash), the init fork will be copied over the main fork. When
* trying to read an unlogged table, if this flag is set to true, we must
* clear the metadata for the table (because the actual data is gone,
* too), and clear the flag. We can cross-check that the table is
* UNLOGGED, and that the main fork is at the minimum size (no actual
* data).
*
* XXX: Not used yet; reserved field for later support for UNLOGGED.
*/
bool unloggedReset;
} ColumnarMetapage;
/* represents a "physical" block+offset address */
typedef struct PhysicalAddr
{
BlockNumber blockno;
uint32 offset;
} PhysicalAddr;
#define COLUMNAR_METAPAGE_BLOCKNO 0
#define COLUMNAR_EMPTY_BLOCKNO 1
#define COLUMNAR_INVALID_STRIPE_ID 0
#define COLUMNAR_FIRST_STRIPE_ID 1
#define OLD_METAPAGE_VERSION_HINT "Use \"VACUUM\" to upgrade the columnar table format " \
"version or run \"ALTER EXTENSION citus UPDATE\"."
/* only for testing purposes */
PG_FUNCTION_INFO_V1(test_columnar_storage_write_new_page);
/*
* Map logical offsets to a physical page and offset where the data is kept.
*/
static inline PhysicalAddr
LogicalToPhysical(uint64 logicalOffset)
{
PhysicalAddr addr;
addr.blockno = logicalOffset / COLUMNAR_BYTES_PER_PAGE;
addr.offset = SizeOfPageHeaderData + (logicalOffset % COLUMNAR_BYTES_PER_PAGE);
return addr;
}
/*
* Map a physical page and offset address to a logical address.
*/
static inline uint64
PhysicalToLogical(PhysicalAddr addr)
{
return COLUMNAR_BYTES_PER_PAGE * addr.blockno + addr.offset - SizeOfPageHeaderData;
}
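/*
 * Worked example (assuming COLUMNAR_BYTES_PER_PAGE is BLCKSZ minus
 * SizeOfPageHeaderData, i.e. 8168 with the default 8 kB pages and 24-byte
 * page headers): logical offset 20000 maps to blockno = 20000 / 8168 = 2 and
 * offset = 24 + (20000 % 8168) = 3688, and PhysicalToLogical({2, 3688})
 * yields 2 * 8168 + 3688 - 24 = 20000 again. Logical data therefore never
 * overlaps the page header of any block.
 */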
static void ColumnarOverwriteMetapage(Relation relation,
ColumnarMetapage columnarMetapage);
static ColumnarMetapage ColumnarMetapageRead(Relation rel, bool force);
static void ReadFromBlock(Relation rel, BlockNumber blockno, uint32 offset,
char *buf, uint32 len, bool force);
static void WriteToBlock(Relation rel, BlockNumber blockno, uint32 offset,
char *buf, uint32 len, bool clear);
static uint64 AlignReservation(uint64 prevReservation);
static bool ColumnarMetapageIsCurrent(ColumnarMetapage *metapage);
static bool ColumnarMetapageIsOlder(ColumnarMetapage *metapage);
static bool ColumnarMetapageIsNewer(ColumnarMetapage *metapage);
static void ColumnarMetapageCheckVersion(Relation rel, ColumnarMetapage *metapage);
/*
* ColumnarStorageInit - initialize a new metapage in an empty relation
* with the given storageId.
*
* Caller must hold AccessExclusiveLock on the relation.
*/
void
ColumnarStorageInit(SMgrRelation srel, uint64 storageId)
{
BlockNumber nblocks = smgrnblocks(srel, MAIN_FORKNUM);
if (nblocks > 0)
{
elog(ERROR,
"attempted to initialize metapage, but %d pages already exist",
nblocks);
}
/* create two pages */
#if PG_VERSION_NUM >= PG_VERSION_16
PGIOAlignedBlock block;
#else
PGAlignedBlock block;
#endif
Page page = block.data;
/* write metapage */
PageInit(page, BLCKSZ, 0);
PageHeader phdr = (PageHeader) page;
ColumnarMetapage metapage = { 0 };
metapage.storageId = storageId;
metapage.versionMajor = COLUMNAR_VERSION_MAJOR;
metapage.versionMinor = COLUMNAR_VERSION_MINOR;
metapage.reservedStripeId = COLUMNAR_FIRST_STRIPE_ID;
metapage.reservedRowNumber = COLUMNAR_FIRST_ROW_NUMBER;
metapage.reservedOffset = ColumnarFirstLogicalOffset;
metapage.unloggedReset = false;
memcpy_s(page + phdr->pd_lower, phdr->pd_upper - phdr->pd_lower,
(char *) &metapage, sizeof(ColumnarMetapage));
phdr->pd_lower += sizeof(ColumnarMetapage);
log_newpage(RelationPhysicalIdentifierBackend_compat(&srel), MAIN_FORKNUM,
COLUMNAR_METAPAGE_BLOCKNO, page, true);
PageSetChecksumInplace(page, COLUMNAR_METAPAGE_BLOCKNO);
smgrextend(srel, MAIN_FORKNUM, COLUMNAR_METAPAGE_BLOCKNO, page, true);
/* write empty page */
PageInit(page, BLCKSZ, 0);
log_newpage(RelationPhysicalIdentifierBackend_compat(&srel), MAIN_FORKNUM,
COLUMNAR_EMPTY_BLOCKNO, page, true);
PageSetChecksumInplace(page, COLUMNAR_EMPTY_BLOCKNO);
smgrextend(srel, MAIN_FORKNUM, COLUMNAR_EMPTY_BLOCKNO, page, true);
/*
* An immediate sync is required even if we xlog'd the page, because the
* write did not go through shared_buffers and therefore a concurrent
* checkpoint may have moved the redo pointer past our xlog record.
*/
smgrimmedsync(srel, MAIN_FORKNUM);
}
/*
* ColumnarStorageUpdateCurrent - update the metapage to the current
* version. No effect if the version already matches. If 'upgrade' is true,
* throw an error if metapage version is newer; if 'upgrade' is false, it's a
* downgrade, so throw an error if the metapage version is older.
*
* NB: caller must ensure that metapage already exists, which might not be the
* case on 10.0.
*/
void
ColumnarStorageUpdateCurrent(Relation rel, bool upgrade, uint64 reservedStripeId,
uint64 reservedRowNumber, uint64 reservedOffset)
{
LockRelationForExtension(rel, ExclusiveLock);
ColumnarMetapage metapage = ColumnarMetapageRead(rel, true);
if (ColumnarMetapageIsCurrent(&metapage))
{
/* nothing to do */
return;
}
if (upgrade && ColumnarMetapageIsNewer(&metapage))
{
elog(ERROR, "found newer columnar metapage while upgrading");
}
if (!upgrade && ColumnarMetapageIsOlder(&metapage))
{
elog(ERROR, "found older columnar metapage while downgrading");
}
metapage.versionMajor = COLUMNAR_VERSION_MAJOR;
metapage.versionMinor = COLUMNAR_VERSION_MINOR;
/* storageId remains the same */
metapage.reservedStripeId = reservedStripeId;
metapage.reservedRowNumber = reservedRowNumber;
metapage.reservedOffset = reservedOffset;
ColumnarOverwriteMetapage(rel, metapage);
UnlockRelationForExtension(rel, ExclusiveLock);
}
/*
* ColumnarStorageGetVersionMajor - return major version from the metapage.
*
* Throw an error if the metapage is not the current version, unless
* 'force' is true.
*/
uint64
ColumnarStorageGetVersionMajor(Relation rel, bool force)
{
ColumnarMetapage metapage = ColumnarMetapageRead(rel, force);
return metapage.versionMajor;
}
/*
* ColumnarStorageGetVersionMinor - return minor version from the metapage.
*
* Throw an error if the metapage is not the current version, unless
* 'force' is true.
*/
uint64
ColumnarStorageGetVersionMinor(Relation rel, bool force)
{
ColumnarMetapage metapage = ColumnarMetapageRead(rel, force);
return metapage.versionMinor;
}
/*
* ColumnarStorageGetStorageId - return storage ID from the metapage.
*
* Throw an error if the metapage is not the current version, unless
* 'force' is true.
*/
uint64
ColumnarStorageGetStorageId(Relation rel, bool force)
{
ColumnarMetapage metapage = ColumnarMetapageRead(rel, force);
return metapage.storageId;
}
/*
* ColumnarStorageGetReservedStripeId - return reserved stripe ID from the
* metapage.
*
* Throw an error if the metapage is not the current version, unless
* 'force' is true.
*/
uint64
ColumnarStorageGetReservedStripeId(Relation rel, bool force)
{
ColumnarMetapage metapage = ColumnarMetapageRead(rel, force);
return metapage.reservedStripeId;
}
/*
* ColumnarStorageGetReservedRowNumber - return reserved row number from the
* metapage.
*
* Throw an error if the metapage is not the current version, unless
* 'force' is true.
*/
uint64
ColumnarStorageGetReservedRowNumber(Relation rel, bool force)
{
ColumnarMetapage metapage = ColumnarMetapageRead(rel, force);
return metapage.reservedRowNumber;
}
/*
* ColumnarStorageGetReservedOffset - return reserved offset from the metapage.
*
* Throw an error if the metapage is not the current version, unless
* 'force' is true.
*/
uint64
ColumnarStorageGetReservedOffset(Relation rel, bool force)
{
ColumnarMetapage metapage = ColumnarMetapageRead(rel, force);
return metapage.reservedOffset;
}
/*
* ColumnarStorageIsCurrent - return true if metapage exists and is the
* current version.
*/
bool
ColumnarStorageIsCurrent(Relation rel)
{
BlockNumber nblocks = smgrnblocks(RelationGetSmgr(rel), MAIN_FORKNUM);
if (nblocks < 2)
{
return false;
}
ColumnarMetapage metapage = ColumnarMetapageRead(rel, true);
return ColumnarMetapageIsCurrent(&metapage);
}
/*
* ColumnarStorageReserveRowNumber returns reservedRowNumber and advances
* it for next row number reservation.
*/
uint64
ColumnarStorageReserveRowNumber(Relation rel, uint64 nrows)
{
LockRelationForExtension(rel, ExclusiveLock);
ColumnarMetapage metapage = ColumnarMetapageRead(rel, false);
uint64 firstRowNumber = metapage.reservedRowNumber;
metapage.reservedRowNumber += nrows;
ColumnarOverwriteMetapage(rel, metapage);
UnlockRelationForExtension(rel, ExclusiveLock);
return firstRowNumber;
}
/*
* ColumnarStorageReserveStripeId returns stripeId and advances it for next
* stripeId reservation.
* Note that this function doesn't handle row number reservation.
* See ColumnarStorageReserveRowNumber function.
*/
uint64
ColumnarStorageReserveStripeId(Relation rel)
{
LockRelationForExtension(rel, ExclusiveLock);
ColumnarMetapage metapage = ColumnarMetapageRead(rel, false);
uint64 stripeId = metapage.reservedStripeId;
metapage.reservedStripeId++;
ColumnarOverwriteMetapage(rel, metapage);
UnlockRelationForExtension(rel, ExclusiveLock);
return stripeId;
}
/*
* ColumnarStorageReserveData - reserve logical data offsets for writing.
*/
uint64
ColumnarStorageReserveData(Relation rel, uint64 amount)
{
if (amount == 0)
{
return ColumnarInvalidLogicalOffset;
}
LockRelationForExtension(rel, ExclusiveLock);
ColumnarMetapage metapage = ColumnarMetapageRead(rel, false);
uint64 alignedReservation = AlignReservation(metapage.reservedOffset);
uint64 nextReservation = alignedReservation + amount;
metapage.reservedOffset = nextReservation;
/* write new reservation */
ColumnarOverwriteMetapage(rel, metapage);
/* last used PhysicalAddr of new reservation */
PhysicalAddr final = LogicalToPhysical(nextReservation - 1);
/* extend with new pages */
BlockNumber nblocks = smgrnblocks(RelationGetSmgr(rel), MAIN_FORKNUM);
while (nblocks <= final.blockno)
{
Buffer newBuffer = ReadBuffer(rel, P_NEW);
Assert(BufferGetBlockNumber(newBuffer) == nblocks);
ReleaseBuffer(newBuffer);
nblocks++;
}
UnlockRelationForExtension(rel, ExclusiveLock);
return alignedReservation;
}
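/*
 * Example (assuming 8168 usable bytes per page as in the mapping example
 * above): if the metapage currently has reservedOffset = 20000, reserving
 * 100 bytes first aligns the reservation to the start of block 3 (logical
 * offset 3 * 8168 = 24504), sets reservedOffset to 24604, extends the
 * relation through block 3 if needed, and returns 24504 as the caller's
 * write position.
 */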
/*
* ColumnarStorageRead - map the logical offset to a block and offset, then
* read the buffer from multiple blocks if necessary.
*/
void
ColumnarStorageRead(Relation rel, uint64 logicalOffset, char *data, uint32 amount)
{
/* if there's no work to do, succeed even with invalid offset */
if (amount == 0)
{
return;
}
if (!ColumnarLogicalOffsetIsValid(logicalOffset))
{
elog(ERROR,
"attempted columnar read on relation %d from invalid logical offset: "
UINT64_FORMAT,
rel->rd_id, logicalOffset);
}
uint64 read = 0;
while (read < amount)
{
PhysicalAddr addr = LogicalToPhysical(logicalOffset + read);
uint32 to_read = Min(amount - read, BLCKSZ - addr.offset);
ReadFromBlock(rel, addr.blockno, addr.offset, data + read, to_read,
false);
read += to_read;
}
}
/*
* ColumnarStorageWrite - map the logical offset to a block and offset, then
* write the buffer across multiple blocks if necessary.
*/
void
ColumnarStorageWrite(Relation rel, uint64 logicalOffset, char *data, uint32 amount)
{
/* if there's no work to do, succeed even with invalid offset */
if (amount == 0)
{
return;
}
if (!ColumnarLogicalOffsetIsValid(logicalOffset))
{
elog(ERROR,
"attempted columnar write on relation %d to invalid logical offset: "
UINT64_FORMAT,
rel->rd_id, logicalOffset);
}
uint64 written = 0;
while (written < amount)
{
PhysicalAddr addr = LogicalToPhysical(logicalOffset + written);
uint64 to_write = Min(amount - written, BLCKSZ - addr.offset);
WriteToBlock(rel, addr.blockno, addr.offset, data + written, to_write,
false);
written += to_write;
}
}
/*
* ColumnarStorageTruncate - truncate the columnar storage such that
* newDataReservation will be the first unused logical offset available. Free
* pages at the end of the relation.
*
* Caller must hold AccessExclusiveLock on the relation.
*
* Returns true if pages were truncated; false otherwise.
*/
bool
ColumnarStorageTruncate(Relation rel, uint64 newDataReservation)
{
if (!ColumnarLogicalOffsetIsValid(newDataReservation))
{
elog(ERROR,
"attempted to truncate relation %d to invalid logical offset: " UINT64_FORMAT,
rel->rd_id, newDataReservation);
}
BlockNumber old_rel_pages = smgrnblocks(RelationGetSmgr(rel), MAIN_FORKNUM);
if (old_rel_pages == 0)
{
/* nothing to do */
return false;
}
LockRelationForExtension(rel, ExclusiveLock);
ColumnarMetapage metapage = ColumnarMetapageRead(rel, false);
if (metapage.reservedOffset < newDataReservation)
{
elog(ERROR,
"attempted to truncate relation %d to offset " UINT64_FORMAT \
" which is higher than existing offset " UINT64_FORMAT,
rel->rd_id, newDataReservation, metapage.reservedOffset);
}
if (metapage.reservedOffset == newDataReservation)
{
/* nothing to do */
UnlockRelationForExtension(rel, ExclusiveLock);
return false;
}
metapage.reservedOffset = newDataReservation;
/* write new reservation */
ColumnarOverwriteMetapage(rel, metapage);
UnlockRelationForExtension(rel, ExclusiveLock);
PhysicalAddr final = LogicalToPhysical(newDataReservation - 1);
BlockNumber new_rel_pages = final.blockno + 1;
Assert(new_rel_pages <= old_rel_pages);
/*
* Truncate the storage. Note that RelationTruncate() takes care of
* Write Ahead Logging.
*/
if (new_rel_pages < old_rel_pages)
{
RelationTruncate(rel, new_rel_pages);
return true;
}
return false;
}
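/*
 * Example (same page-size assumptions as above): truncating back to
 * ColumnarFirstLogicalOffset (logical offset 16336 under those assumptions,
 * i.e. the first data byte in block 2) gives final.blockno = 1 and
 * new_rel_pages = 2, so only the metapage in block 0 and the empty page in
 * block 1 are kept.
 */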
/*
* ColumnarOverwriteMetapage writes given columnarMetapage back to metapage
* for given relation.
*/
static void
ColumnarOverwriteMetapage(Relation relation, ColumnarMetapage columnarMetapage)
{
/* clear metapage because we are overwriting */
bool clear = true;
WriteToBlock(relation, COLUMNAR_METAPAGE_BLOCKNO, SizeOfPageHeaderData,
(char *) &columnarMetapage, sizeof(ColumnarMetapage), clear);
}
/*
* ColumnarMetapageRead - read the current contents of the metapage. Error if
* it does not exist. Throw an error if the metapage is not the current
* version, unless 'force' is true.
*
* NB: it's safe to read a different version of a metapage because we
* guarantee that fields will only be added and existing fields will never be
* changed. However, it's important that we don't depend on new fields being
* set properly when we read an old metapage; an old metapage should only be
* read for the purposes of upgrading or error checking.
*/
static ColumnarMetapage
ColumnarMetapageRead(Relation rel, bool force)
{
BlockNumber nblocks = smgrnblocks(RelationGetSmgr(rel), MAIN_FORKNUM);
if (nblocks == 0)
{
/*
* We only expect this to happen when upgrading citus.so. This is because,
* in current version of columnar, we immediately create the metapage
* for columnar tables, i.e right after creating the table.
* However in older versions, we were creating metapages lazily, i.e
* when ingesting data to columnar table.
*/
ereport(ERROR, (errmsg("columnar metapage for relation \"%s\" does not exist",
RelationGetRelationName(rel)),
errhint(OLD_METAPAGE_VERSION_HINT)));
}
/*
* Regardless of "force" parameter, always force read metapage block.
* We will check metapage version in ColumnarMetapageCheckVersion
* depending on "force".
*/
bool forceReadBlock = true;
ColumnarMetapage metapage;
ReadFromBlock(rel, COLUMNAR_METAPAGE_BLOCKNO, SizeOfPageHeaderData,
(char *) &metapage, sizeof(ColumnarMetapage), forceReadBlock);
if (!force)
{
ColumnarMetapageCheckVersion(rel, &metapage);
}
return metapage;
}
/*
* ReadFromBlock - read bytes from a page at the given offset. If 'force' is
* true, don't check pd_lower; useful when reading a metapage of unknown
* version.
*/
static void
ReadFromBlock(Relation rel, BlockNumber blockno, uint32 offset, char *buf,
uint32 len, bool force)
{
Buffer buffer = ReadBuffer(rel, blockno);
LockBuffer(buffer, BUFFER_LOCK_SHARE);
Page page = BufferGetPage(buffer);
PageHeader phdr = (PageHeader) page;
if (BLCKSZ < offset + len || (!force && (phdr->pd_lower < offset + len)))
{
elog(ERROR,
"attempt to read columnar data of length %d from offset %d of block %d of relation %d",
len, offset, blockno, rel->rd_id);
}
memcpy_s(buf, len, page + offset, len);
UnlockReleaseBuffer(buffer);
}
/*
* WriteToBlock - append data to a block, initializing if necessary, and emit
* WAL. If 'clear' is true, always clear the data on the page and reinitialize
* it first, and offset must be SizeOfPageHeaderData. Otherwise, offset must
* be equal to pd_lower and pd_lower will be set to the end of the written
* data.
*/
static void
WriteToBlock(Relation rel, BlockNumber blockno, uint32 offset, char *buf,
uint32 len, bool clear)
{
Buffer buffer = ReadBuffer(rel, blockno);
GenericXLogState *state = GenericXLogStart(rel);
LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
Page page = GenericXLogRegisterBuffer(state, buffer, GENERIC_XLOG_FULL_IMAGE);
PageHeader phdr = (PageHeader) page;
if (PageIsNew(page) || clear)
{
PageInit(page, BLCKSZ, 0);
}
if (phdr->pd_lower < offset || phdr->pd_upper - offset < len)
{
elog(ERROR,
"attempt to write columnar data of length %d to offset %d of block %d of relation %d",
len, offset, blockno, rel->rd_id);
}
/*
* After a transaction has been rolled-back, we might be
* over-writing the rolledback write, so phdr->pd_lower can be
* different from addr.offset.
*
* We reset pd_lower to reset the rolledback write.
*
* Given that we always align page reservation to the next page as of
* 10.2, having such a disk page is only possible if a write operation
* failed in an older version of columnar, but now user attempts writing
* to that table in version >= 10.2.
*/
if (phdr->pd_lower > offset)
{
ereport(DEBUG4, (errmsg("overwriting page %u", blockno),
errdetail("This can happen after a roll-back.")));
phdr->pd_lower = offset;
}
memcpy_s(page + phdr->pd_lower, phdr->pd_upper - phdr->pd_lower, buf, len);
phdr->pd_lower += len;
GenericXLogFinish(state);
UnlockReleaseBuffer(buffer);
}
/*
* AlignReservation - given an unused logical byte offset, align it so that it
* falls at the start of a page.
*
* XXX: Reconsider whether we want/need to do this at all.
*/
static uint64
AlignReservation(uint64 prevReservation)
{
PhysicalAddr prevAddr = LogicalToPhysical(prevReservation);
uint64 alignedReservation = prevReservation;
if (prevAddr.offset != SizeOfPageHeaderData)
{
/* not aligned; align on beginning of next page */
PhysicalAddr initial = { 0 };
initial.blockno = prevAddr.blockno + 1;
initial.offset = SizeOfPageHeaderData;
alignedReservation = PhysicalToLogical(initial);
}
Assert(alignedReservation >= prevReservation);
return alignedReservation;
}
/*
* ColumnarMetapageIsCurrent - is the metapage at the latest version?
*/
static bool
ColumnarMetapageIsCurrent(ColumnarMetapage *metapage)
{
return (metapage->versionMajor == COLUMNAR_VERSION_MAJOR &&
metapage->versionMinor == COLUMNAR_VERSION_MINOR);
}
/*
* ColumnarMetapageIsOlder - is the metapage older than the current version?
*/
static bool
ColumnarMetapageIsOlder(ColumnarMetapage *metapage)
{
return (metapage->versionMajor < COLUMNAR_VERSION_MAJOR ||
(metapage->versionMajor == COLUMNAR_VERSION_MAJOR &&
(int) metapage->versionMinor < (int) COLUMNAR_VERSION_MINOR));
}
/*
* ColumnarMetapageIsNewer - is the metapage newer than the current version?
*/
static bool
ColumnarMetapageIsNewer(ColumnarMetapage *metapage)
{
return (metapage->versionMajor > COLUMNAR_VERSION_MAJOR ||
(metapage->versionMajor == COLUMNAR_VERSION_MAJOR &&
metapage->versionMinor > COLUMNAR_VERSION_MINOR));
}
/*
* ColumnarMetapageCheckVersion - throw an error if accessing old
* version of metapage.
*/
static void
ColumnarMetapageCheckVersion(Relation rel, ColumnarMetapage *metapage)
{
if (!ColumnarMetapageIsCurrent(metapage))
{
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg(
"attempted to access relation \"%s\", which uses an older columnar format",
RelationGetRelationName(rel)),
errdetail(
"Columnar format version %d.%d is required, \"%s\" has version %d.%d.",
COLUMNAR_VERSION_MAJOR, COLUMNAR_VERSION_MINOR,
RelationGetRelationName(rel),
metapage->versionMajor, metapage->versionMinor),
errhint(OLD_METAPAGE_VERSION_HINT)));
}
}
/*
* test_columnar_storage_write_new_page is a UDF only used for testing
* purposes. It could make more sense to define this in columnar_debug.c,
* but the storage layer doesn't expose ColumnarMetapage to any other files,
* so we define it here.
*/
Datum
test_columnar_storage_write_new_page(PG_FUNCTION_ARGS)
{
Oid relationId = PG_GETARG_OID(0);
Relation relation = relation_open(relationId, AccessShareLock);
/*
* Allocate a new page, write some data to there, and set reserved offset
* to the start of that page. That way, for a subsequent write operation,
* storage layer would try to overwrite the page that we allocated here.
*/
uint64 newPageOffset = ColumnarStorageGetReservedOffset(relation, false);
ColumnarStorageReserveData(relation, 100);
ColumnarStorageWrite(relation, newPageOffset, "foo_bar", 8);
ColumnarMetapage metapage = ColumnarMetapageRead(relation, false);
metapage.reservedOffset = newPageOffset;
ColumnarOverwriteMetapage(relation, metapage);
relation_close(relation, AccessShareLock);
PG_RETURN_VOID();
}

File diff suppressed because it is too large.


@ -11,20 +11,17 @@
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include <sys/stat.h>
#include <unistd.h>
#include "postgres.h"
#include "miscadmin.h"
#include "utils/guc.h"
#include "utils/rel.h"
#include "citus_version.h"
#include "columnar/columnar.h"
#include "columnar/columnar_tableam.h"
/* Default values for option parameters */
#define DEFAULT_STRIPE_ROW_COUNT 150000
@ -32,7 +29,7 @@
#if HAVE_LIBZSTD
#define DEFAULT_COMPRESSION_TYPE COMPRESSION_ZSTD
#elif HAVE_CITUS_LIBLZ4
#elif HAVE_LIBLZ4
#define DEFAULT_COMPRESSION_TYPE COMPRESSION_LZ4
#else
#define DEFAULT_COMPRESSION_TYPE COMPRESSION_PG_LZ
@ -47,7 +44,7 @@ static const struct config_enum_entry columnar_compression_options[] =
{
{ "none", COMPRESSION_NONE, false },
{ "pglz", COMPRESSION_PG_LZ, false },
#if HAVE_CITUS_LIBLZ4
#if HAVE_LIBLZ4
{ "lz4", COMPRESSION_LZ4, false },
#endif
#if HAVE_LIBZSTD
@ -56,14 +53,6 @@ static const struct config_enum_entry columnar_compression_options[] =
{ NULL, 0, false }
};
void
columnar_init(void)
{
columnar_init_gucs();
columnar_tableam_init();
}
void
columnar_init_gucs()
{


@ -13,22 +13,14 @@
*/
#include "postgres.h"
#include "common/pg_lzcompress.h"
#include "lib/stringinfo.h"
#include "citus_version.h"
#include "pg_version_constants.h"
#include "columnar/columnar.h"
#include "common/pg_lzcompress.h"
#include "columnar/columnar_compression.h"
#if HAVE_CITUS_LIBLZ4
#if HAVE_LIBLZ4
#include <lz4.h>
#endif
#if PG_VERSION_NUM >= PG_VERSION_16
#include "varatt.h"
#endif
#if HAVE_LIBZSTD
#include <zstd.h>
#endif
@ -69,7 +61,7 @@ CompressBuffer(StringInfo inputBuffer,
{
switch (compressionType)
{
#if HAVE_CITUS_LIBLZ4
#if HAVE_LIBLZ4
case COMPRESSION_LZ4:
{
int maximumLength = LZ4_compressBound(inputBuffer->len);
@ -176,7 +168,7 @@ DecompressBuffer(StringInfo buffer,
return buffer;
}
#if HAVE_CITUS_LIBLZ4
#if HAVE_LIBLZ4
case COMPRESSION_LZ4:
{
StringInfo decompressedBuffer = makeStringInfo();
@ -232,8 +224,10 @@ DecompressBuffer(StringInfo buffer,
case COMPRESSION_PG_LZ:
{
StringInfo decompressedBuffer = NULL;
uint32 compressedDataSize = VARSIZE(buffer->data) - COLUMNAR_COMPRESS_HDRSZ;
uint32 decompressedDataSize = COLUMNAR_COMPRESS_RAWSIZE(buffer->data);
int32 decompressedByteCount = 0;
if (compressedDataSize + COLUMNAR_COMPRESS_HDRSZ != buffer->len)
{
@ -244,11 +238,17 @@ DecompressBuffer(StringInfo buffer,
char *decompressedData = palloc0(decompressedDataSize);
int32 decompressedByteCount = pglz_decompress(COLUMNAR_COMPRESS_RAWDATA(
buffer->data),
compressedDataSize,
decompressedData,
decompressedDataSize, true);
#if PG_VERSION_NUM >= 120000
decompressedByteCount = pglz_decompress(COLUMNAR_COMPRESS_RAWDATA(
buffer->data),
compressedDataSize, decompressedData,
decompressedDataSize, true);
#else
decompressedByteCount = pglz_decompress(COLUMNAR_COMPRESS_RAWDATA(
buffer->data),
compressedDataSize, decompressedData,
decompressedDataSize);
#endif
if (decompressedByteCount < 0)
{
@ -256,7 +256,7 @@ DecompressBuffer(StringInfo buffer,
errdetail("compressed data is corrupted")));
}
StringInfo decompressedBuffer = palloc0(sizeof(StringInfoData));
decompressedBuffer = palloc0(sizeof(StringInfoData));
decompressedBuffer->data = decompressedData;
decompressedBuffer->len = decompressedDataSize;
decompressedBuffer->maxlen = decompressedDataSize;

View File

@ -0,0 +1,500 @@
/*-------------------------------------------------------------------------
*
* columnar_customscan.c
*
* This file contains the implementation of a postgres custom scan that
* we use to push down the projections into the table access methods.
*
* $Id$
*
*-------------------------------------------------------------------------
*/
#include "citus_version.h"
#if HAS_TABLEAM
#include "postgres.h"
#include "access/skey.h"
#include "nodes/extensible.h"
#include "nodes/pg_list.h"
#include "nodes/plannodes.h"
#include "optimizer/optimizer.h"
#include "optimizer/pathnode.h"
#include "optimizer/paths.h"
#include "optimizer/restrictinfo.h"
#include "utils/relcache.h"
#include "columnar/columnar.h"
#include "columnar/columnar_customscan.h"
#include "columnar/columnar_tableam.h"
typedef struct ColumnarScanPath
{
CustomPath custom_path;
/* place for local state during planning */
} ColumnarScanPath;
typedef struct ColumnarScanScan
{
CustomScan custom_scan;
/* place for local state during execution */
} ColumnarScanScan;
typedef struct ColumnarScanState
{
CustomScanState custom_scanstate;
List *qual;
} ColumnarScanState;
static void ColumnarSetRelPathlistHook(PlannerInfo *root, RelOptInfo *rel, Index rti,
RangeTblEntry *rte);
static Path * CreateColumnarScanPath(PlannerInfo *root, RelOptInfo *rel,
RangeTblEntry *rte);
static Cost ColumnarScanCost(RangeTblEntry *rte);
static Plan * ColumnarScanPath_PlanCustomPath(PlannerInfo *root,
RelOptInfo *rel,
struct CustomPath *best_path,
List *tlist,
List *clauses,
List *custom_plans);
static Node * ColumnarScan_CreateCustomScanState(CustomScan *cscan);
static void ColumnarScan_BeginCustomScan(CustomScanState *node, EState *estate, int
eflags);
static TupleTableSlot * ColumnarScan_ExecCustomScan(CustomScanState *node);
static void ColumnarScan_EndCustomScan(CustomScanState *node);
static void ColumnarScan_ReScanCustomScan(CustomScanState *node);
static void ColumnarScan_ExplainCustomScan(CustomScanState *node, List *ancestors,
ExplainState *es);
/* saved hook value in case of unload */
static set_rel_pathlist_hook_type PreviousSetRelPathlistHook = NULL;
static bool EnableColumnarCustomScan = true;
static bool EnableColumnarQualPushdown = true;
const struct CustomPathMethods ColumnarScanPathMethods = {
.CustomName = "ColumnarScan",
.PlanCustomPath = ColumnarScanPath_PlanCustomPath,
};
const struct CustomScanMethods ColumnarScanScanMethods = {
.CustomName = "ColumnarScan",
.CreateCustomScanState = ColumnarScan_CreateCustomScanState,
};
const struct CustomExecMethods ColumnarExecuteMethods = {
.CustomName = "ColumnarScan",
.BeginCustomScan = ColumnarScan_BeginCustomScan,
.ExecCustomScan = ColumnarScan_ExecCustomScan,
.EndCustomScan = ColumnarScan_EndCustomScan,
.ReScanCustomScan = ColumnarScan_ReScanCustomScan,
.ExplainCustomScan = ColumnarScan_ExplainCustomScan,
};
/*
* columnar_customscan_init installs the hook required to intercept the postgres planner and
* provide extra paths for columnar tables
*/
void
columnar_customscan_init()
{
PreviousSetRelPathlistHook = set_rel_pathlist_hook;
set_rel_pathlist_hook = ColumnarSetRelPathlistHook;
/* register customscan specific GUC's */
DefineCustomBoolVariable(
"columnar.enable_custom_scan",
gettext_noop("Enables the use of a custom scan to push projections and quals "
"into the storage layer."),
NULL,
&EnableColumnarCustomScan,
true,
PGC_USERSET,
GUC_NO_SHOW_ALL,
NULL, NULL, NULL);
DefineCustomBoolVariable(
"columnar.enable_qual_pushdown",
gettext_noop("Enables qual pushdown into columnar. This has no effect unless "
"columnar.enable_custom_scan is true."),
NULL,
&EnableColumnarQualPushdown,
true,
PGC_USERSET,
GUC_NO_SHOW_ALL,
NULL, NULL, NULL);
RegisterCustomScanMethods(&ColumnarScanScanMethods);
}
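Both GUCs registered here are user-settable, so the custom scan and qual pushdown can be toggled per session. A minimal sketch of how the effect can be observed; the table and column names are illustrative and the exact plan text may differ.

SET columnar.enable_custom_scan TO on;
EXPLAIN (COSTS OFF) SELECT a FROM columnar_events WHERE a > 10;
-- expected: a Custom Scan (ColumnarScan) node on columnar_events

SET columnar.enable_custom_scan TO off;
EXPLAIN (COSTS OFF) SELECT a FROM columnar_events WHERE a > 10;
-- expected: a plain sequential scan over the columnar table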
static void
clear_paths(RelOptInfo *rel)
{
rel->pathlist = NIL;
rel->partial_pathlist = NIL;
rel->cheapest_startup_path = NULL;
rel->cheapest_total_path = NULL;
rel->cheapest_unique_path = NULL;
rel->cheapest_parameterized_paths = NIL;
}
static void
ColumnarSetRelPathlistHook(PlannerInfo *root, RelOptInfo *rel, Index rti,
RangeTblEntry *rte)
{
/* call into previous hook if assigned */
if (PreviousSetRelPathlistHook)
{
PreviousSetRelPathlistHook(root, rel, rti, rte);
}
if (!OidIsValid(rte->relid) || rte->rtekind != RTE_RELATION || rte->inh)
{
/* some calls to the pathlist hook don't have a valid relation set. Do nothing */
return;
}
/*
* Check whether the relation this pathlist hook is invoked for is a columnar table.
* If it is, insert an extra path that pushes the projection down into the scan of
* the table to minimize the data read.
*/
Relation relation = RelationIdGetRelation(rte->relid);
if (relation->rd_tableam == GetColumnarTableAmRoutine())
{
if (rte->tablesample != NULL)
{
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("sample scans not supported on columnar tables")));
}
/* columnar doesn't support parallel paths */
rel->partial_pathlist = NIL;
if (EnableColumnarCustomScan)
{
Path *customPath = CreateColumnarScanPath(root, rel, rte);
ereport(DEBUG1, (errmsg("pathlist hook for columnar table am")));
/* we propose a new path that will be the only path for scanning this relation */
clear_paths(rel);
add_path(rel, customPath);
}
}
RelationClose(relation);
}
static Path *
CreateColumnarScanPath(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
{
ColumnarScanPath *cspath = (ColumnarScanPath *) newNode(sizeof(ColumnarScanPath),
T_CustomPath);
/*
* populate custom path information
*/
CustomPath *cpath = &cspath->custom_path;
cpath->methods = &ColumnarScanPathMethods;
/*
* populate generic path information
*/
Path *path = &cpath->path;
path->pathtype = T_CustomScan;
path->parent = rel;
path->pathtarget = rel->reltarget;
/* columnar scans are not parallel-aware, but they are parallel-safe */
path->parallel_safe = rel->consider_parallel;
/*
* We don't support pushing join clauses into the quals of a seqscan, but
* it could still have required parameterization due to LATERAL refs in
* its tlist.
*/
path->param_info = get_baserel_parampathinfo(root, rel,
rel->lateral_relids);
/*
* Add cost estimates for a columnar table scan, row count is the rows estimated by
* postgres' planner.
*/
path->rows = rel->rows;
path->startup_cost = 0;
path->total_cost = path->startup_cost + ColumnarScanCost(rte);
return (Path *) cspath;
}
/*
* ColumnarScanCost calculates the cost of scanning the columnar table. The cost is
* estimated from the stripe metadata: based on the columns to be read, we estimate
* how many pages need to be read.
*/
static Cost
ColumnarScanCost(RangeTblEntry *rte)
{
Relation rel = RelationIdGetRelation(rte->relid);
List *stripeList = StripesForRelfilenode(rel->rd_node);
RelationClose(rel);
uint32 maxColumnCount = 0;
uint64 totalStripeSize = 0;
ListCell *stripeMetadataCell = NULL;
rel = NULL;
foreach(stripeMetadataCell, stripeList)
{
StripeMetadata *stripeMetadata = (StripeMetadata *) lfirst(stripeMetadataCell);
totalStripeSize += stripeMetadata->dataLength;
maxColumnCount = Max(maxColumnCount, stripeMetadata->columnCount);
}
{
Bitmapset *attr_needed = rte->selectedCols;
double numberOfColumnsRead = bms_num_members(attr_needed);
double selectionRatio = 0;
/*
* When the table has no stripes, maxColumnCount stays zero. To prevent a division
* by zero turning into a NaN we keep the ratio at zero. This results in a cost of
* 0 for scanning the table, which is reasonable for an empty table.
*/
if (maxColumnCount != 0)
{
selectionRatio = numberOfColumnsRead / (double) maxColumnCount;
}
Cost scanCost = (double) totalStripeSize / BLCKSZ * selectionRatio;
return scanCost;
}
}
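In short, the estimate is the total stripe size in pages scaled by the fraction of columns actually read. The same arithmetic can be reproduced by hand from the stripe metadata; the query below is only an illustration of the formula for a hypothetical table reading 2 of its columns, and it assumes the columnar.stripe view defined elsewhere in this changeset rather than any planner interface.

-- rough scan cost: sum(data_length) / BLCKSZ * (columns_read / max column_count)
SELECT sum(data_length) / current_setting('block_size')::numeric
       * (2.0 / max(column_count)) AS estimated_scan_cost
FROM columnar.stripe
WHERE relation = 'columnar_events'::regclass;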
static Plan *
ColumnarScanPath_PlanCustomPath(PlannerInfo *root,
RelOptInfo *rel,
struct CustomPath *best_path,
List *tlist,
List *clauses,
List *custom_plans)
{
ColumnarScanScan *plan = (ColumnarScanScan *) newNode(sizeof(ColumnarScanScan),
T_CustomScan);
CustomScan *cscan = &plan->custom_scan;
cscan->methods = &ColumnarScanScanMethods;
/* Reduce RestrictInfo list to bare expressions; ignore pseudoconstants */
clauses = extract_actual_clauses(clauses, false);
cscan->scan.plan.targetlist = list_copy(tlist);
cscan->scan.plan.qual = clauses;
cscan->scan.scanrelid = best_path->path.parent->relid;
return (Plan *) plan;
}
static Node *
ColumnarScan_CreateCustomScanState(CustomScan *cscan)
{
ColumnarScanState *columnarScanState = (ColumnarScanState *) newNode(
sizeof(ColumnarScanState), T_CustomScanState);
CustomScanState *cscanstate = &columnarScanState->custom_scanstate;
cscanstate->methods = &ColumnarExecuteMethods;
if (EnableColumnarQualPushdown)
{
columnarScanState->qual = cscan->scan.plan.qual;
}
return (Node *) cscanstate;
}
static void
ColumnarScan_BeginCustomScan(CustomScanState *cscanstate, EState *estate, int eflags)
{
/* scan slot is already initialized */
}
static Bitmapset *
ColumnarAttrNeeded(ScanState *ss)
{
TupleTableSlot *slot = ss->ss_ScanTupleSlot;
int natts = slot->tts_tupleDescriptor->natts;
Bitmapset *attr_needed = NULL;
Plan *plan = ss->ps.plan;
int flags = PVC_RECURSE_AGGREGATES |
PVC_RECURSE_WINDOWFUNCS | PVC_RECURSE_PLACEHOLDERS;
List *vars = list_concat(pull_var_clause((Node *) plan->targetlist, flags),
pull_var_clause((Node *) plan->qual, flags));
ListCell *lc;
foreach(lc, vars)
{
Var *var = lfirst(lc);
if (var->varattno < 0)
{
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg(
"UPDATE and CTID scans not supported for ColumnarScan")));
}
if (var->varattno == 0)
{
elog(DEBUG1, "Need attribute: all");
/* all attributes are required, we don't need to add more, so break */
attr_needed = bms_add_range(attr_needed, 0, natts - 1);
break;
}
elog(DEBUG1, "Need attribute: %d", var->varattno);
attr_needed = bms_add_member(attr_needed, var->varattno - 1);
}
return attr_needed;
}
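The practical effect is that only columns referenced by the target list or quals are fetched from the stripes, and system columns such as ctid are rejected. An illustrative sketch; the table name is hypothetical.

-- only columns a and b have to be read from each stripe
SELECT a FROM columnar_events WHERE b > 10;

-- negative varattnos (system columns) are rejected by ColumnarAttrNeeded
SELECT ctid FROM columnar_events;
-- ERROR:  UPDATE and CTID scans not supported for ColumnarScan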
static TupleTableSlot *
ColumnarScanNext(ColumnarScanState *columnarScanState)
{
CustomScanState *node = (CustomScanState *) columnarScanState;
/*
* get information from the estate and scan state
*/
TableScanDesc scandesc = node->ss.ss_currentScanDesc;
EState *estate = node->ss.ps.state;
ScanDirection direction = estate->es_direction;
TupleTableSlot *slot = node->ss.ss_ScanTupleSlot;
if (scandesc == NULL)
{
/* the columnar access method does not use the flags, they are specific to heap */
uint32 flags = 0;
Bitmapset *attr_needed = ColumnarAttrNeeded(&node->ss);
/*
* We reach here if the scan is not parallel, or if we're serially
* executing a scan that was planned to be parallel.
*/
scandesc = columnar_beginscan_extended(node->ss.ss_currentRelation,
estate->es_snapshot,
0, NULL, NULL, flags, attr_needed,
columnarScanState->qual);
bms_free(attr_needed);
node->ss.ss_currentScanDesc = scandesc;
}
/*
* get the next tuple from the table
*/
if (table_scan_getnextslot(scandesc, direction, slot))
{
return slot;
}
return NULL;
}
/*
* ColumnarScanRecheck -- scan routine to recheck a tuple in EvalPlanQual
*/
static bool
ColumnarScanRecheck(ColumnarScanState *node, TupleTableSlot *slot)
{
return true;
}
static TupleTableSlot *
ColumnarScan_ExecCustomScan(CustomScanState *node)
{
return ExecScan(&node->ss,
(ExecScanAccessMtd) ColumnarScanNext,
(ExecScanRecheckMtd) ColumnarScanRecheck);
}
static void
ColumnarScan_EndCustomScan(CustomScanState *node)
{
/*
* get information from node
*/
TableScanDesc scanDesc = node->ss.ss_currentScanDesc;
/*
* Free the exprcontext
*/
ExecFreeExprContext(&node->ss.ps);
/*
* clean out the tuple table
*/
if (node->ss.ps.ps_ResultTupleSlot)
{
ExecClearTuple(node->ss.ps.ps_ResultTupleSlot);
}
ExecClearTuple(node->ss.ss_ScanTupleSlot);
/*
* close heap scan
*/
if (scanDesc != NULL)
{
table_endscan(scanDesc);
}
}
static void
ColumnarScan_ReScanCustomScan(CustomScanState *node)
{
TableScanDesc scanDesc = node->ss.ss_currentScanDesc;
if (scanDesc != NULL)
{
table_rescan(node->ss.ss_currentScanDesc, NULL);
}
}
static void
ColumnarScan_ExplainCustomScan(CustomScanState *node, List *ancestors,
ExplainState *es)
{
TableScanDesc scanDesc = node->ss.ss_currentScanDesc;
if (scanDesc != NULL)
{
int64 chunkGroupsFiltered = ColumnarScanChunkGroupsFiltered(scanDesc);
ExplainPropertyInteger("Columnar Chunk Groups Removed by Filter", NULL,
chunkGroupsFiltered, es);
}
}
#endif /* HAS_TABLEAM */
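The counter surfaced by ColumnarScan_ExplainCustomScan appears in EXPLAIN ANALYZE output, which is the easiest way to check that quals are being pushed into the storage layer. A sketch with illustrative table name, numbers and plan text only:

EXPLAIN (ANALYZE ON, COSTS OFF, TIMING OFF)
SELECT count(*) FROM columnar_events WHERE a > 9000;
-- Custom Scan (ColumnarScan) on columnar_events
--   Filter: (a > 9000)
--   Columnar Chunk Groups Removed by Filter: 9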

View File

@ -0,0 +1,99 @@
/*-------------------------------------------------------------------------
*
* columnar_debug.c
*
* Helper functions to debug column store.
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "pg_config.h"
#include "access/nbtree.h"
#include "catalog/pg_am.h"
#include "catalog/pg_type.h"
#include "distributed/pg_version_constants.h"
#include "distributed/tuplestore.h"
#include "miscadmin.h"
#include "storage/fd.h"
#include "storage/smgr.h"
#include "utils/guc.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "utils/tuplestore.h"
#include "columnar/columnar.h"
#include "columnar/columnar_version_compat.h"
static void MemoryContextTotals(MemoryContext context, MemoryContextCounters *counters);
PG_FUNCTION_INFO_V1(column_store_memory_stats);
/*
* column_store_memory_stats returns a record of 3 values: size of
* TopMemoryContext, TopTransactionContext, and Write State context.
*/
Datum
column_store_memory_stats(PG_FUNCTION_ARGS)
{
TupleDesc tupleDescriptor = NULL;
const int resultColumnCount = 3;
#if PG_VERSION_NUM >= PG_VERSION_12
tupleDescriptor = CreateTemplateTupleDesc(resultColumnCount);
#else
tupleDescriptor = CreateTemplateTupleDesc(resultColumnCount, false);
#endif
TupleDescInitEntry(tupleDescriptor, (AttrNumber) 1, "TopMemoryContext",
INT8OID, -1, 0);
TupleDescInitEntry(tupleDescriptor, (AttrNumber) 2, "TopTransactionContext",
INT8OID, -1, 0);
TupleDescInitEntry(tupleDescriptor, (AttrNumber) 3, "WriteStateContext",
INT8OID, -1, 0);
MemoryContextCounters transactionCounters = { 0 };
MemoryContextCounters topCounters = { 0 };
MemoryContextCounters writeStateCounters = { 0 };
MemoryContextTotals(TopTransactionContext, &transactionCounters);
MemoryContextTotals(TopMemoryContext, &topCounters);
MemoryContextTotals(GetWriteContextForDebug(), &writeStateCounters);
bool nulls[3] = { false };
Datum values[3] = {
Int64GetDatum(topCounters.totalspace),
Int64GetDatum(transactionCounters.totalspace),
Int64GetDatum(writeStateCounters.totalspace)
};
Tuplestorestate *tupleStore = SetupTuplestore(fcinfo, &tupleDescriptor);
tuplestore_putvalues(tupleStore, tupleDescriptor, values, nulls);
tuplestore_donestoring(tupleStore);
PG_RETURN_DATUM(0);
}
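Assuming the matching CREATE FUNCTION declaration (with three bigint output columns) is installed by the extension or test scripts, which are not shown here, the UDF can be used to watch write-state memory while loading data:

-- sizes of TopMemoryContext, TopTransactionContext and the write state context, in bytes
SELECT * FROM column_store_memory_stats();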
/*
* MemoryContextTotals adds stats of the given memory context and its
* subtree to the given counters.
*/
static void
MemoryContextTotals(MemoryContext context, MemoryContextCounters *counters)
{
if (context == NULL)
{
return;
}
MemoryContext child;
for (child = context->firstchild; child != NULL; child = child->nextchild)
{
MemoryContextTotals(child, counters);
}
context->methods->stats(context, NULL, NULL, counters);
}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -16,43 +16,32 @@
#include "postgres.h"
#include "miscadmin.h"
#include "safe_lib.h"
#include "access/heapam.h"
#include "access/nbtree.h"
#include "catalog/pg_am.h"
#include "miscadmin.h"
#include "storage/fd.h"
#include "storage/smgr.h"
#include "utils/guc.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "pg_version_compat.h"
#include "pg_version_constants.h"
#include "utils/relfilenodemap.h"
#include "columnar/columnar.h"
#include "columnar/columnar_storage.h"
#include "columnar/columnar_version_compat.h"
#if PG_VERSION_NUM >= PG_VERSION_16
#include "storage/relfilelocator.h"
#include "utils/relfilenumbermap.h"
#else
#include "utils/relfilenodemap.h"
#endif
struct ColumnarWriteState
struct TableWriteState
{
TupleDesc tupleDescriptor;
FmgrInfo **comparisonFunctionArray;
RelFileLocator relfilelocator;
RelFileNode relfilenode;
MemoryContext stripeWriteContext;
MemoryContext perTupleContext;
StripeBuffers *stripeBuffers;
StripeSkipList *stripeSkipList;
EmptyStripeReservation *emptyStripeReservation;
ColumnarOptions options;
ChunkData *chunkData;
@ -73,12 +62,12 @@ static StripeBuffers * CreateEmptyStripeBuffers(uint32 stripeMaxRowCount,
static StripeSkipList * CreateEmptyStripeSkipList(uint32 stripeMaxRowCount,
uint32 chunkRowCount,
uint32 columnCount);
static void FlushStripe(ColumnarWriteState *writeState);
static void FlushStripe(TableWriteState *writeState);
static StringInfo SerializeBoolArray(bool *boolArray, uint32 boolArrayLength);
static void SerializeSingleDatum(StringInfo datumBuffer, Datum datum,
bool datumTypeByValue, int datumTypeLength,
char datumTypeAlign);
static void SerializeChunkData(ColumnarWriteState *writeState, uint32 chunkIndex,
static void SerializeChunkData(TableWriteState *writeState, uint32 chunkIndex,
uint32 rowCount);
static void UpdateChunkSkipNodeMinMax(ColumnChunkSkipNode *chunkSkipNode,
Datum columnValue, bool columnTypeByValue,
@ -92,8 +81,8 @@ static StringInfo CopyStringInfo(StringInfo sourceString);
* handle. This handle should be used for adding the row values and finishing the
* data load operation.
*/
ColumnarWriteState *
ColumnarBeginWrite(RelFileLocator relfilelocator,
TableWriteState *
ColumnarBeginWrite(RelFileNode relfilenode,
ColumnarOptions options,
TupleDesc tupleDescriptor)
{
@ -127,19 +116,18 @@ ColumnarBeginWrite(RelFileLocator relfilelocator,
ALLOCSET_DEFAULT_SIZES);
bool *columnMaskArray = palloc(columnCount * sizeof(bool));
memset(columnMaskArray, true, columnCount * sizeof(bool));
memset(columnMaskArray, true, columnCount);
ChunkData *chunkData = CreateEmptyChunkData(columnCount, columnMaskArray,
options.chunkRowCount);
ColumnarWriteState *writeState = palloc0(sizeof(ColumnarWriteState));
writeState->relfilelocator = relfilelocator;
TableWriteState *writeState = palloc0(sizeof(TableWriteState));
writeState->relfilenode = relfilenode;
writeState->options = options;
writeState->tupleDescriptor = CreateTupleDescCopy(tupleDescriptor);
writeState->comparisonFunctionArray = comparisonFunctionArray;
writeState->stripeBuffers = NULL;
writeState->stripeSkipList = NULL;
writeState->emptyStripeReservation = NULL;
writeState->stripeWriteContext = stripeWriteContext;
writeState->chunkData = chunkData;
writeState->compressionBuffer = NULL;
@ -158,11 +146,9 @@ ColumnarBeginWrite(RelFileLocator relfilelocator,
* corresponding skip nodes. Then, whole chunk data is compressed at every
* rowChunkCount insertion. Then, if row count exceeds stripeMaxRowCount, we flush
* the stripe, and add its metadata to the table footer.
*
* Returns the "row number" assigned to written row.
*/
uint64
ColumnarWriteRow(ColumnarWriteState *writeState, Datum *columnValues, bool *columnNulls)
void
ColumnarWriteRow(TableWriteState *writeState, Datum *columnValues, bool *columnNulls)
{
uint32 columnIndex = 0;
StripeBuffers *stripeBuffers = writeState->stripeBuffers;
@ -183,16 +169,6 @@ ColumnarWriteRow(ColumnarWriteState *writeState, Datum *columnValues, bool *colu
writeState->stripeSkipList = stripeSkipList;
writeState->compressionBuffer = makeStringInfo();
Oid relationId = RelidByRelfilenumber(RelationTablespace_compat(
writeState->relfilelocator),
RelationPhysicalIdentifierNumber_compat(
writeState->relfilelocator));
Relation relation = relation_open(relationId, NoLock);
writeState->emptyStripeReservation =
ReserveEmptyStripe(relation, columnCount, chunkRowCount,
options->stripeRowCount);
relation_close(relation, NoLock);
/*
* serializedValueBuffer lives in stripe write memory context so it needs to be
* initialized when the stripe is created.
@ -249,8 +225,6 @@ ColumnarWriteRow(ColumnarWriteState *writeState, Datum *columnValues, bool *colu
SerializeChunkData(writeState, chunkIndex, chunkRowCount);
}
uint64 writtenRowNumber = writeState->emptyStripeReservation->stripeFirstRowNumber +
stripeBuffers->rowCount;
stripeBuffers->rowCount++;
if (stripeBuffers->rowCount >= options->stripeRowCount)
{
@ -258,8 +232,6 @@ ColumnarWriteRow(ColumnarWriteState *writeState, Datum *columnValues, bool *colu
}
MemoryContextSwitchTo(oldContext);
return writtenRowNumber;
}
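Chunk and stripe boundaries therefore follow directly from the table's options: every chunkRowCount rows a chunk group is serialized and compressed, and once stripeMaxRowCount rows accumulate the stripe is flushed. A rough way to observe this from SQL, assuming the alter_columnar_table_set UDF and the columnar.stripe metadata defined elsewhere in this changeset, plus an illustrative two-column table:

SELECT alter_columnar_table_set('columnar_events',
                                chunk_group_row_limit => 1000,
                                stripe_row_limit => 10000);
INSERT INTO columnar_events SELECT g, g % 7 FROM generate_series(1, 25000) g;

-- expect stripes of 10000, 10000 and 5000 rows, split into 1000-row chunk groups
SELECT stripe_num, row_count, chunk_group_count
FROM columnar.stripe
WHERE relation = 'columnar_events'::regclass;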
@ -268,7 +240,7 @@ ColumnarWriteRow(ColumnarWriteState *writeState, Datum *columnValues, bool *colu
* stripe, we flush it.
*/
void
ColumnarEndWrite(ColumnarWriteState *writeState)
ColumnarEndWrite(TableWriteState *writeState)
{
ColumnarFlushPendingWrites(writeState);
@ -280,7 +252,7 @@ ColumnarEndWrite(ColumnarWriteState *writeState)
void
ColumnarFlushPendingWrites(ColumnarWriteState *writeState)
ColumnarFlushPendingWrites(TableWriteState *writeState)
{
StripeBuffers *stripeBuffers = writeState->stripeBuffers;
if (stripeBuffers != NULL)
@ -305,7 +277,7 @@ ColumnarFlushPendingWrites(ColumnarWriteState *writeState)
* Return per-tuple context for columnar write operation.
*/
MemoryContext
ColumnarWritePerTupleContext(ColumnarWriteState *state)
ColumnarWritePerTupleContext(TableWriteState *state)
{
return state->perTupleContext;
}
@ -379,6 +351,80 @@ CreateEmptyStripeSkipList(uint32 stripeMaxRowCount, uint32 chunkRowCount,
}
void
WriteToSmgr(Relation rel, uint64 logicalOffset, char *data, uint32 dataLength)
{
uint64 remaining = dataLength;
Buffer buffer;
while (remaining > 0)
{
SmgrAddr addr = logical_to_smgr(logicalOffset);
RelationOpenSmgr(rel);
BlockNumber nblocks PG_USED_FOR_ASSERTS_ONLY =
smgrnblocks(rel->rd_smgr, MAIN_FORKNUM);
Assert(addr.blockno < nblocks);
RelationCloseSmgr(rel);
buffer = ReadBuffer(rel, addr.blockno);
LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
Page page = BufferGetPage(buffer);
PageHeader phdr = (PageHeader) page;
if (PageIsNew(page))
{
PageInit(page, BLCKSZ, 0);
}
/*
* After a transaction has been rolled back, we might be overwriting the
* rolled-back write, so phdr->pd_lower can be different from addr.offset.
*
* We reset pd_lower to discard the rolled-back write.
*/
if (phdr->pd_lower > addr.offset)
{
ereport(DEBUG1, (errmsg("over-writing page %u", addr.blockno),
errdetail("This can happen after a roll-back.")));
phdr->pd_lower = addr.offset;
}
Assert(phdr->pd_lower == addr.offset);
START_CRIT_SECTION();
uint64 to_write = Min(phdr->pd_upper - phdr->pd_lower, remaining);
memcpy_s(page + phdr->pd_lower, phdr->pd_upper - phdr->pd_lower, data, to_write);
phdr->pd_lower += to_write;
MarkBufferDirty(buffer);
if (RelationNeedsWAL(rel))
{
XLogBeginInsert();
/*
* Since columnar mostly writes whole pages, we force a full image of the
* buffer to be included in the WAL record.
*/
XLogRegisterBuffer(0, buffer, REGBUF_FORCE_IMAGE);
XLogRecPtr recptr = XLogInsert(RM_GENERIC_ID, 0);
PageSetLSN(page, recptr);
}
END_CRIT_SECTION();
UnlockReleaseBuffer(buffer);
data += to_write;
remaining -= to_write;
logicalOffset += to_write;
}
}
/*
* FlushStripe flushes current stripe data into the file. The function first ensures
* the last data chunk for each column is properly serialized and compressed. Then,
@ -386,8 +432,9 @@ CreateEmptyStripeSkipList(uint32 stripeMaxRowCount, uint32 chunkRowCount,
* flushes the skip list, data, and footer buffers to the file.
*/
static void
FlushStripe(ColumnarWriteState *writeState)
FlushStripe(TableWriteState *writeState)
{
StripeMetadata stripeMetadata = { 0 };
uint32 columnIndex = 0;
uint32 chunkIndex = 0;
StripeBuffers *stripeBuffers = writeState->stripeBuffers;
@ -404,10 +451,8 @@ FlushStripe(ColumnarWriteState *writeState)
elog(DEBUG1, "Flushing Stripe of size %d", stripeBuffers->rowCount);
Oid relationId = RelidByRelfilenumber(RelationTablespace_compat(
writeState->relfilelocator),
RelationPhysicalIdentifierNumber_compat(
writeState->relfilelocator));
Oid relationId = RelidByRelfilenode(writeState->relfilenode.spcNode,
writeState->relfilenode.relNode);
Relation relation = relation_open(relationId, NoLock);
/*
@ -455,11 +500,11 @@ FlushStripe(ColumnarWriteState *writeState)
}
}
StripeMetadata *stripeMetadata =
CompleteStripeReservation(relation, writeState->emptyStripeReservation->stripeId,
stripeSize, stripeRowCount, chunkCount);
stripeMetadata = ReserveStripe(relation, stripeSize,
stripeRowCount, columnCount, chunkCount,
chunkRowCount);
uint64 currentFileOffset = stripeMetadata->fileOffset;
uint64 currentFileOffset = stripeMetadata.fileOffset;
/*
* Each stripe has only one section:
@ -482,8 +527,8 @@ FlushStripe(ColumnarWriteState *writeState)
columnBuffers->chunkBuffersArray[chunkIndex];
StringInfo existsBuffer = chunkBuffers->existsBuffer;
ColumnarStorageWrite(relation, currentFileOffset,
existsBuffer->data, existsBuffer->len);
WriteToSmgr(relation, currentFileOffset,
existsBuffer->data, existsBuffer->len);
currentFileOffset += existsBuffer->len;
}
@ -493,17 +538,17 @@ FlushStripe(ColumnarWriteState *writeState)
columnBuffers->chunkBuffersArray[chunkIndex];
StringInfo valueBuffer = chunkBuffers->valueBuffer;
ColumnarStorageWrite(relation, currentFileOffset,
valueBuffer->data, valueBuffer->len);
WriteToSmgr(relation, currentFileOffset,
valueBuffer->data, valueBuffer->len);
currentFileOffset += valueBuffer->len;
}
}
SaveChunkGroups(writeState->relfilelocator,
stripeMetadata->id,
SaveChunkGroups(writeState->relfilenode,
stripeMetadata.id,
writeState->chunkGroupRowCounts);
SaveStripeSkipList(writeState->relfilelocator,
stripeMetadata->id,
SaveStripeSkipList(writeState->relfilenode,
stripeMetadata.id,
stripeSkipList, tupleDescriptor);
writeState->chunkGroupRowCounts = NIL;
@ -520,7 +565,7 @@ static StringInfo
SerializeBoolArray(bool *boolArray, uint32 boolArrayLength)
{
uint32 boolArrayIndex = 0;
uint32 byteCount = ((boolArrayLength * sizeof(bool)) + (8 - sizeof(bool))) / 8;
uint32 byteCount = (boolArrayLength + 7) / 8;
StringInfo boolArrayBuffer = makeStringInfo();
enlargeStringInfo(boolArrayBuffer, byteCount);
@ -544,9 +589,6 @@ SerializeBoolArray(bool *boolArray, uint32 boolArrayLength)
/*
* SerializeSingleDatum serializes the given datum value and appends it to the
* provided string info buffer.
*
* Since we don't want to limit datum buffer size to RSIZE_MAX unnecessarily,
* we use memcpy instead of memcpy_s in several places in this function.
*/
static void
SerializeSingleDatum(StringInfo datumBuffer, Datum datum, bool datumTypeByValue,
@ -568,13 +610,15 @@ SerializeSingleDatum(StringInfo datumBuffer, Datum datum, bool datumTypeByValue,
}
else
{
memcpy(currentDatumDataPointer, DatumGetPointer(datum), datumTypeLength); /* IGNORE-BANNED */
memcpy_s(currentDatumDataPointer, datumBuffer->maxlen - datumBuffer->len,
DatumGetPointer(datum), datumTypeLength);
}
}
else
{
Assert(!datumTypeByValue);
memcpy(currentDatumDataPointer, DatumGetPointer(datum), datumLength); /* IGNORE-BANNED */
memcpy_s(currentDatumDataPointer, datumBuffer->maxlen - datumBuffer->len,
DatumGetPointer(datum), datumLength);
}
datumBuffer->len += datumLengthAligned;
@ -586,7 +630,7 @@ SerializeSingleDatum(StringInfo datumBuffer, Datum datum, bool datumTypeByValue,
* compression type for every column.
*/
static void
SerializeChunkData(ColumnarWriteState *writeState, uint32 chunkIndex, uint32 rowCount)
SerializeChunkData(TableWriteState *writeState, uint32 chunkIndex, uint32 rowCount)
{
uint32 columnIndex = 0;
StripeBuffers *stripeBuffers = writeState->stripeBuffers;
@ -728,12 +772,7 @@ DatumCopy(Datum datum, bool datumTypeByValue, int datumTypeLength)
{
uint32 datumLength = att_addlength_datum(0, datumTypeLength, datum);
char *datumData = palloc0(datumLength);
/*
* We use IGNORE-BANNED here since we don't want to limit datum size to
* RSIZE_MAX unnecessarily.
*/
memcpy(datumData, DatumGetPointer(datum), datumLength); /* IGNORE-BANNED */
memcpy_s(datumData, datumLength, DatumGetPointer(datum), datumLength);
datumCopy = PointerGetDatum(datumData);
}
@ -756,12 +795,8 @@ CopyStringInfo(StringInfo sourceString)
targetString->data = palloc0(sourceString->len);
targetString->len = sourceString->len;
targetString->maxlen = sourceString->len;
/*
* We use IGNORE-BANNED here since we don't want to limit string
* buffer size to RSIZE_MAX unnecessarily.
*/
memcpy(targetString->data, sourceString->data, sourceString->len); /* IGNORE-BANNED */
memcpy_s(targetString->data, sourceString->len,
sourceString->data, sourceString->len);
}
return targetString;
@ -769,7 +804,7 @@ CopyStringInfo(StringInfo sourceString)
bool
ContainsPendingWrites(ColumnarWriteState *state)
ContainsPendingWrites(TableWriteState *state)
{
return state->stripeBuffers != NULL && state->stripeBuffers->rowCount != 0;
}

View File

@ -18,15 +18,26 @@
#include "citus_version.h"
#include "columnar/columnar.h"
#include "columnar/mod.h"
#ifdef HAS_TABLEAM
#include "columnar/columnar_tableam.h"
PG_MODULE_MAGIC;
void _PG_init(void);
#endif
void
_PG_init(void)
columnar_init(void)
{
columnar_init();
columnar_init_gucs();
#ifdef HAS_TABLEAM
columnar_tableam_init();
#endif
}
void
columnar_fini(void)
{
#if HAS_TABLEAM
columnar_tableam_finish();
#endif
}

View File

@ -1 +0,0 @@
../../../vendor/safestringlib/safeclib/

View File

@ -1,32 +0,0 @@
-- add columnar objects back
ALTER EXTENSION citus_columnar ADD SCHEMA columnar;
ALTER EXTENSION citus_columnar ADD SCHEMA columnar_internal;
ALTER EXTENSION citus_columnar ADD SEQUENCE columnar_internal.storageid_seq;
ALTER EXTENSION citus_columnar ADD TABLE columnar_internal.options;
ALTER EXTENSION citus_columnar ADD TABLE columnar_internal.stripe;
ALTER EXTENSION citus_columnar ADD TABLE columnar_internal.chunk_group;
ALTER EXTENSION citus_columnar ADD TABLE columnar_internal.chunk;
ALTER EXTENSION citus_columnar ADD FUNCTION columnar_internal.columnar_handler;
ALTER EXTENSION citus_columnar ADD ACCESS METHOD columnar;
ALTER EXTENSION citus_columnar ADD FUNCTION pg_catalog.alter_columnar_table_set;
ALTER EXTENSION citus_columnar ADD FUNCTION pg_catalog.alter_columnar_table_reset;
ALTER EXTENSION citus_columnar ADD FUNCTION citus_internal.upgrade_columnar_storage;
ALTER EXTENSION citus_columnar ADD FUNCTION citus_internal.downgrade_columnar_storage;
ALTER EXTENSION citus_columnar ADD FUNCTION citus_internal.columnar_ensure_am_depends_catalog;
ALTER EXTENSION citus_columnar ADD FUNCTION columnar.get_storage_id;
ALTER EXTENSION citus_columnar ADD VIEW columnar.storage;
ALTER EXTENSION citus_columnar ADD VIEW columnar.options;
ALTER EXTENSION citus_columnar ADD VIEW columnar.stripe;
ALTER EXTENSION citus_columnar ADD VIEW columnar.chunk_group;
ALTER EXTENSION citus_columnar ADD VIEW columnar.chunk;
-- move citus_internal functions to columnar_internal
ALTER FUNCTION citus_internal.upgrade_columnar_storage(regclass) SET SCHEMA columnar_internal;
ALTER FUNCTION citus_internal.downgrade_columnar_storage(regclass) SET SCHEMA columnar_internal;
ALTER FUNCTION citus_internal.columnar_ensure_am_depends_catalog() SET SCHEMA columnar_internal;

View File

@ -1 +0,0 @@
-- fake sql file 'Y'

View File

@ -1,19 +0,0 @@
-- citus_columnar--11.1-1--11.2-1
#include "udfs/columnar_ensure_am_depends_catalog/11.2-1.sql"
DELETE FROM pg_depend
WHERE classid = 'pg_am'::regclass::oid
AND objid IN (select oid from pg_am where amname = 'columnar')
AND objsubid = 0
AND refclassid = 'pg_class'::regclass::oid
AND refobjid IN (
'columnar_internal.stripe_first_row_number_idx'::regclass::oid,
'columnar_internal.chunk_group_pkey'::regclass::oid,
'columnar_internal.chunk_pkey'::regclass::oid,
'columnar_internal.options_pkey'::regclass::oid,
'columnar_internal.stripe_first_row_number_idx'::regclass::oid,
'columnar_internal.stripe_pkey'::regclass::oid
)
AND refobjsubid = 0
AND deptype = 'n';
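After this cleanup, the only normal ('n') dependency edges left from the columnar access method should be the ones recreated by columnar_ensure_am_depends_catalog. A hedged verification query:

-- list the pg_class objects the columnar access method still depends on
SELECT refobjid::regclass AS depends_on, deptype
FROM pg_depend
WHERE classid = 'pg_am'::regclass
  AND objid = (SELECT oid FROM pg_am WHERE amname = 'columnar')
  AND refclassid = 'pg_class'::regclass;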

View File

@ -1,435 +0,0 @@
-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "CREATE EXTENSION citus_columnar" to load this file. \quit
-- columnar--9.5-1--10.0-1.sql
CREATE SCHEMA IF NOT EXISTS columnar;
SET search_path TO columnar;
CREATE SEQUENCE IF NOT EXISTS storageid_seq MINVALUE 10000000000 NO CYCLE;
CREATE TABLE IF NOT EXISTS options (
regclass regclass NOT NULL PRIMARY KEY,
chunk_group_row_limit int NOT NULL,
stripe_row_limit int NOT NULL,
compression_level int NOT NULL,
compression name NOT NULL
) WITH (user_catalog_table = true);
COMMENT ON TABLE options IS 'columnar table specific options, maintained by alter_columnar_table_set';
CREATE TABLE IF NOT EXISTS stripe (
storage_id bigint NOT NULL,
stripe_num bigint NOT NULL,
file_offset bigint NOT NULL,
data_length bigint NOT NULL,
column_count int NOT NULL,
chunk_row_count int NOT NULL,
row_count bigint NOT NULL,
chunk_group_count int NOT NULL,
first_row_number bigint NOT NULL,
PRIMARY KEY (storage_id, stripe_num),
CONSTRAINT stripe_first_row_number_idx UNIQUE (storage_id, first_row_number)
) WITH (user_catalog_table = true);
COMMENT ON TABLE stripe IS 'Columnar per stripe metadata';
CREATE TABLE IF NOT EXISTS chunk_group (
storage_id bigint NOT NULL,
stripe_num bigint NOT NULL,
chunk_group_num int NOT NULL,
row_count bigint NOT NULL,
PRIMARY KEY (storage_id, stripe_num, chunk_group_num)
);
COMMENT ON TABLE chunk_group IS 'Columnar chunk group metadata';
CREATE TABLE IF NOT EXISTS chunk (
storage_id bigint NOT NULL,
stripe_num bigint NOT NULL,
attr_num int NOT NULL,
chunk_group_num int NOT NULL,
minimum_value bytea,
maximum_value bytea,
value_stream_offset bigint NOT NULL,
value_stream_length bigint NOT NULL,
exists_stream_offset bigint NOT NULL,
exists_stream_length bigint NOT NULL,
value_compression_type int NOT NULL,
value_compression_level int NOT NULL,
value_decompressed_length bigint NOT NULL,
value_count bigint NOT NULL,
PRIMARY KEY (storage_id, stripe_num, attr_num, chunk_group_num)
) WITH (user_catalog_table = true);
COMMENT ON TABLE chunk IS 'Columnar per chunk metadata';
DO $proc$
BEGIN
-- From version 12 and up we have support for table AMs. If installed on pg11 we can't
-- create the objects here. Instead we rely on citus_finish_pg_upgrade being called by
-- the user to add the missing objects.
IF substring(current_Setting('server_version'), '\d+')::int >= 12 THEN
EXECUTE $$
--#include "udfs/columnar_handler/10.0-1.sql"
CREATE OR REPLACE FUNCTION columnar.columnar_handler(internal)
RETURNS table_am_handler
LANGUAGE C
AS 'MODULE_PATHNAME', 'columnar_handler';
COMMENT ON FUNCTION columnar.columnar_handler(internal)
IS 'internal function returning the handler for columnar tables';
-- Postgres 11.8 does not support the table AM syntax and, while parsing the upgrade
-- file, errors out on the unknown syntax.
-- Normally this section would not execute on postgres 11 anyway. To make it pass on
-- 11.8 we wrap the statement in a plpgsql block together with an EXECUTE. This is
-- valid syntax on 11.8 and will execute correctly on 12.
DO $create_table_am$
BEGIN
EXECUTE 'CREATE ACCESS METHOD columnar TYPE TABLE HANDLER columnar.columnar_handler';
END $create_table_am$;
--#include "udfs/alter_columnar_table_set/10.0-1.sql"
CREATE OR REPLACE FUNCTION pg_catalog.alter_columnar_table_set(
table_name regclass,
chunk_group_row_limit int DEFAULT NULL,
stripe_row_limit int DEFAULT NULL,
compression name DEFAULT null,
compression_level int DEFAULT NULL)
RETURNS void
LANGUAGE C
AS 'MODULE_PATHNAME', 'alter_columnar_table_set';
COMMENT ON FUNCTION pg_catalog.alter_columnar_table_set(
table_name regclass,
chunk_group_row_limit int,
stripe_row_limit int,
compression name,
compression_level int)
IS 'set one or more options on a columnar table, when set to NULL no change is made';
--#include "udfs/alter_columnar_table_reset/10.0-1.sql"
CREATE OR REPLACE FUNCTION pg_catalog.alter_columnar_table_reset(
table_name regclass,
chunk_group_row_limit bool DEFAULT false,
stripe_row_limit bool DEFAULT false,
compression bool DEFAULT false,
compression_level bool DEFAULT false)
RETURNS void
LANGUAGE C
AS 'MODULE_PATHNAME', 'alter_columnar_table_reset';
COMMENT ON FUNCTION pg_catalog.alter_columnar_table_reset(
table_name regclass,
chunk_group_row_limit bool,
stripe_row_limit bool,
compression bool,
compression_level bool)
IS 'reset one or more options on a columnar table to the system defaults';
$$;
END IF;
END$proc$;
-- (this function being dropped in 10.0.3)->#include "udfs/columnar_ensure_objects_exist/10.0-1.sql"
RESET search_path;
-- columnar--10.0.-1 --10.0.2
GRANT USAGE ON SCHEMA columnar TO PUBLIC;
GRANT SELECT ON ALL TABLES IN SCHEMA columnar TO PUBLIC;
-- columnar--10.0-3--10.1-1.sql
-- Drop foreign keys between columnar metadata tables.
-- columnar--10.1-1--10.2-1.sql
-- For a proper mapping between tid & (stripe, row_num), add a new column to
-- columnar.stripe and define a BTREE index on this column.
-- Also include storage_id column for per-relation scans.
-- Populate first_row_number column of columnar.stripe table.
--
-- For simplicity, we calculate the MAX(row_count) value across all the stripes
-- of all the columnar tables and then use it to populate the first_row_number
-- column. This introduces some gaps, but we are okay with that since
-- it's already the case with regular INSERT/COPY's.
DO $$
DECLARE
max_row_count bigint;
-- this should be equal to columnar_storage.h/COLUMNAR_FIRST_ROW_NUMBER
COLUMNAR_FIRST_ROW_NUMBER constant bigint := 1;
BEGIN
SELECT MAX(row_count) INTO max_row_count FROM columnar.stripe;
UPDATE columnar.stripe SET first_row_number = COLUMNAR_FIRST_ROW_NUMBER +
(stripe_num - 1) * max_row_count;
END;
$$;
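As a worked example of the backfill above: if the largest stripe of any columnar table holds 150000 rows, a table whose stripes are numbered 1, 2, 3 ends up with first_row_number values 1, 150001 and 300001, leaving gaps whenever an individual stripe holds fewer rows. The result can be inspected afterwards with a query along these lines:

SELECT storage_id, stripe_num, row_count, first_row_number
FROM columnar.stripe
ORDER BY storage_id, stripe_num;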
-- columnar--10.2-1--10.2-2.sql
-- revoke read access for columnar.chunk from unprivileged
-- user as it contains chunk min/max values
REVOKE SELECT ON columnar.chunk FROM PUBLIC;
-- columnar--10.2-2--10.2-3.sql
-- Since stripe_first_row_number_idx is required to scan a columnar table, we
-- need to make sure that it is created before doing anything with columnar
-- tables during pg upgrades.
--
-- However, a plain btree index is not a dependency of a table, so pg_upgrade
-- cannot guarantee that stripe_first_row_number_idx gets created when
-- creating columnar.stripe, unless we make it a unique "constraint".
--
-- To do that, drop stripe_first_row_number_idx and create a unique
-- constraint with the same name to keep the code change at minimum.
-- columnar--10.2-3--10.2-4.sql
-- columnar--11.0-2--11.1-1.sql
CREATE OR REPLACE FUNCTION pg_catalog.alter_columnar_table_set(
table_name regclass,
chunk_group_row_limit int DEFAULT NULL,
stripe_row_limit int DEFAULT NULL,
compression name DEFAULT null,
compression_level int DEFAULT NULL)
RETURNS void
LANGUAGE plpgsql AS
$alter_columnar_table_set$
declare
noop BOOLEAN := true;
cmd TEXT := 'ALTER TABLE ' || table_name::text || ' SET (';
begin
if (chunk_group_row_limit is not null) then
if (not noop) then cmd := cmd || ', '; end if;
cmd := cmd || 'columnar.chunk_group_row_limit=' || chunk_group_row_limit;
noop := false;
end if;
if (stripe_row_limit is not null) then
if (not noop) then cmd := cmd || ', '; end if;
cmd := cmd || 'columnar.stripe_row_limit=' || stripe_row_limit;
noop := false;
end if;
if (compression is not null) then
if (not noop) then cmd := cmd || ', '; end if;
cmd := cmd || 'columnar.compression=' || compression;
noop := false;
end if;
if (compression_level is not null) then
if (not noop) then cmd := cmd || ', '; end if;
cmd := cmd || 'columnar.compression_level=' || compression_level;
noop := false;
end if;
cmd := cmd || ')';
if (not noop) then
execute cmd;
end if;
return;
end;
$alter_columnar_table_set$;
COMMENT ON FUNCTION pg_catalog.alter_columnar_table_set(
table_name regclass,
chunk_group_row_limit int,
stripe_row_limit int,
compression name,
compression_level int)
IS 'set one or more options on a columnar table, when set to NULL no change is made';
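Because unspecified arguments default to NULL and NULL means "leave unchanged", the function can be called with any subset of options; alter_columnar_table_reset, defined just below, undoes individual settings. A usage sketch with an illustrative table name:

-- change only the compression method; row limits stay as they are
SELECT alter_columnar_table_set('columnar_events', compression => 'zstd');

-- change several options at once
SELECT alter_columnar_table_set('columnar_events',
                                stripe_row_limit => 100000,
                                compression_level => 5);

-- put compression back to its default
SELECT alter_columnar_table_reset('columnar_events', compression => true);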
CREATE OR REPLACE FUNCTION pg_catalog.alter_columnar_table_reset(
table_name regclass,
chunk_group_row_limit bool DEFAULT false,
stripe_row_limit bool DEFAULT false,
compression bool DEFAULT false,
compression_level bool DEFAULT false)
RETURNS void
LANGUAGE plpgsql AS
$alter_columnar_table_reset$
declare
noop BOOLEAN := true;
cmd TEXT := 'ALTER TABLE ' || table_name::text || ' RESET (';
begin
if (chunk_group_row_limit) then
if (not noop) then cmd := cmd || ', '; end if;
cmd := cmd || 'columnar.chunk_group_row_limit';
noop := false;
end if;
if (stripe_row_limit) then
if (not noop) then cmd := cmd || ', '; end if;
cmd := cmd || 'columnar.stripe_row_limit';
noop := false;
end if;
if (compression) then
if (not noop) then cmd := cmd || ', '; end if;
cmd := cmd || 'columnar.compression';
noop := false;
end if;
if (compression_level) then
if (not noop) then cmd := cmd || ', '; end if;
cmd := cmd || 'columnar.compression_level';
noop := false;
end if;
cmd := cmd || ')';
if (not noop) then
execute cmd;
end if;
return;
end;
$alter_columnar_table_reset$;
COMMENT ON FUNCTION pg_catalog.alter_columnar_table_reset(
table_name regclass,
chunk_group_row_limit bool,
stripe_row_limit bool,
compression bool,
compression_level bool)
IS 'reset one or more options on a columnar table to the system defaults';
-- rename columnar schema to columnar_internal and tighten security
REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA columnar FROM PUBLIC;
ALTER SCHEMA columnar RENAME TO columnar_internal;
REVOKE ALL PRIVILEGES ON SCHEMA columnar_internal FROM PUBLIC;
-- create columnar schema with public usage privileges
CREATE SCHEMA columnar;
GRANT USAGE ON SCHEMA columnar TO PUBLIC;
--#include "udfs/upgrade_columnar_storage/10.2-1.sql"
CREATE OR REPLACE FUNCTION columnar_internal.upgrade_columnar_storage(rel regclass)
RETURNS VOID
STRICT
LANGUAGE c AS 'MODULE_PATHNAME', $$upgrade_columnar_storage$$;
COMMENT ON FUNCTION columnar_internal.upgrade_columnar_storage(regclass)
IS 'function to upgrade the columnar storage, if necessary';
--#include "udfs/downgrade_columnar_storage/10.2-1.sql"
CREATE OR REPLACE FUNCTION columnar_internal.downgrade_columnar_storage(rel regclass)
RETURNS VOID
STRICT
LANGUAGE c AS 'MODULE_PATHNAME', $$downgrade_columnar_storage$$;
COMMENT ON FUNCTION columnar_internal.downgrade_columnar_storage(regclass)
IS 'function to downgrade the columnar storage, if necessary';
-- update UDF to account for columnar_internal schema
CREATE OR REPLACE FUNCTION columnar_internal.columnar_ensure_am_depends_catalog()
RETURNS void
LANGUAGE plpgsql
SET search_path = pg_catalog
AS $func$
BEGIN
INSERT INTO pg_depend
WITH columnar_schema_members(relid) AS (
SELECT pg_class.oid AS relid FROM pg_class
WHERE relnamespace =
COALESCE(
(SELECT pg_namespace.oid FROM pg_namespace WHERE nspname = 'columnar_internal'),
(SELECT pg_namespace.oid FROM pg_namespace WHERE nspname = 'columnar')
)
AND relname IN ('chunk',
'chunk_group',
'chunk_group_pkey',
'chunk_pkey',
'options',
'options_pkey',
'storageid_seq',
'stripe',
'stripe_first_row_number_idx',
'stripe_pkey')
)
SELECT -- Define a dependency edge from "columnar table access method" ..
'pg_am'::regclass::oid as classid,
(select oid from pg_am where amname = 'columnar') as objid,
0 as objsubid,
-- ... to each object that is registered to pg_class and that lives
-- in "columnar" schema. That contains catalog tables, indexes
-- created on them and the sequences created in "columnar" schema.
--
-- Given the possibility of user might have created their own objects
-- in columnar schema, we explicitly specify list of objects that we
-- are interested in.
'pg_class'::regclass::oid as refclassid,
columnar_schema_members.relid as refobjid,
0 as refobjsubid,
'n' as deptype
FROM columnar_schema_members
-- Avoid inserting duplicate entries into pg_depend.
EXCEPT TABLE pg_depend;
END;
$func$;
COMMENT ON FUNCTION columnar_internal.columnar_ensure_am_depends_catalog()
IS 'internal function responsible for creating dependencies from columnar '
'table access method to the rel objects in columnar schema';
SELECT columnar_internal.columnar_ensure_am_depends_catalog();
-- add utility function
CREATE FUNCTION columnar.get_storage_id(regclass) RETURNS bigint
LANGUAGE C STRICT
AS 'citus_columnar', $$columnar_relation_storageid$$;
-- create views for columnar table information
CREATE VIEW columnar.storage WITH (security_barrier) AS
SELECT c.oid::regclass AS relation,
columnar.get_storage_id(c.oid) AS storage_id
FROM pg_class c, pg_am am
WHERE c.relam = am.oid AND am.amname = 'columnar'
AND pg_has_role(c.relowner, 'USAGE');
COMMENT ON VIEW columnar.storage IS 'Columnar relation ID to storage ID mapping.';
GRANT SELECT ON columnar.storage TO PUBLIC;
CREATE VIEW columnar.options WITH (security_barrier) AS
SELECT regclass AS relation, chunk_group_row_limit,
stripe_row_limit, compression, compression_level
FROM columnar_internal.options o, pg_class c
WHERE o.regclass = c.oid
AND pg_has_role(c.relowner, 'USAGE');
COMMENT ON VIEW columnar.options
IS 'Columnar options for tables on which the current user has ownership privileges.';
GRANT SELECT ON columnar.options TO PUBLIC;
CREATE VIEW columnar.stripe WITH (security_barrier) AS
SELECT relation, storage.storage_id, stripe_num, file_offset, data_length,
column_count, chunk_row_count, row_count, chunk_group_count, first_row_number
FROM columnar_internal.stripe stripe, columnar.storage storage
WHERE stripe.storage_id = storage.storage_id;
COMMENT ON VIEW columnar.stripe
IS 'Columnar stripe information for tables on which the current user has ownership privileges.';
GRANT SELECT ON columnar.stripe TO PUBLIC;
CREATE VIEW columnar.chunk_group WITH (security_barrier) AS
SELECT relation, storage.storage_id, stripe_num, chunk_group_num, row_count
FROM columnar_internal.chunk_group cg, columnar.storage storage
WHERE cg.storage_id = storage.storage_id;
COMMENT ON VIEW columnar.chunk_group
IS 'Columnar chunk group information for tables on which the current user has ownership privileges.';
GRANT SELECT ON columnar.chunk_group TO PUBLIC;
CREATE VIEW columnar.chunk WITH (security_barrier) AS
SELECT relation, storage.storage_id, stripe_num, attr_num, chunk_group_num,
minimum_value, maximum_value, value_stream_offset, value_stream_length,
exists_stream_offset, exists_stream_length, value_compression_type,
value_compression_level, value_decompressed_length, value_count
FROM columnar_internal.chunk chunk, columnar.storage storage
WHERE chunk.storage_id = storage.storage_id;
COMMENT ON VIEW columnar.chunk
IS 'Columnar chunk information for tables on which the current user has ownership privileges.';
GRANT SELECT ON columnar.chunk TO PUBLIC;
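Together these views give a table owner a read-only picture of the physical layout of their columnar tables. For example, a sketch that summarizes stripes per relation (names are illustrative):

SELECT relation,
       count(*) AS stripes,
       sum(row_count) AS total_rows,
       sum(data_length) AS total_bytes,
       sum(chunk_group_count) AS chunk_groups
FROM columnar.stripe
GROUP BY relation
ORDER BY total_bytes DESC;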

View File

@ -1 +0,0 @@
-- citus_columnar--11.2-1--11.3-1

View File

@ -1 +0,0 @@
-- citus_columnar--11.3-1--12.2-1

View File

@ -1,4 +1,4 @@
-- columnar--10.0-1--10.0-2.sql
/* columnar--10.0-1--10.0-2.sql */
-- grant read access for columnar metadata tables to unprivileged user
GRANT USAGE ON SCHEMA columnar TO PUBLIC;

View File

@ -1,22 +0,0 @@
-- columnar--10.0-3--10.1-1.sql
-- Drop foreign keys between columnar metadata tables.
-- Postgres assigns different names to those foreign keys in PG11, so act accordingly.
DO $proc$
BEGIN
IF substring(current_Setting('server_version'), '\d+')::int >= 12 THEN
EXECUTE $$
ALTER TABLE columnar.chunk DROP CONSTRAINT chunk_storage_id_stripe_num_chunk_group_num_fkey;
ALTER TABLE columnar.chunk_group DROP CONSTRAINT chunk_group_storage_id_stripe_num_fkey;
$$;
ELSE
EXECUTE $$
ALTER TABLE columnar.chunk DROP CONSTRAINT chunk_storage_id_fkey;
ALTER TABLE columnar.chunk_group DROP CONSTRAINT chunk_group_storage_id_fkey;
$$;
END IF;
END$proc$;
-- since we dropped pg11 support, we don't need to worry about missing
-- columnar objects when upgrading postgres
DROP FUNCTION citus_internal.columnar_ensure_objects_exist();

View File

@ -1,32 +0,0 @@
-- columnar--10.1-1--10.2-1.sql
-- For a proper mapping between tid & (stripe, row_num), add a new column to
-- columnar.stripe and define a BTREE index on this column.
-- Also include storage_id column for per-relation scans.
ALTER TABLE columnar.stripe ADD COLUMN first_row_number bigint;
CREATE INDEX stripe_first_row_number_idx ON columnar.stripe USING BTREE(storage_id, first_row_number);
-- Populate first_row_number column of columnar.stripe table.
--
-- For simplicity, we calculate the MAX(row_count) value across all the stripes
-- of all the columnar tables and then use it to populate the first_row_number
-- column. This introduces some gaps, but we are okay with that since
-- it's already the case with regular INSERT/COPY's.
DO $$
DECLARE
max_row_count bigint;
-- this should be equal to columnar_storage.h/COLUMNAR_FIRST_ROW_NUMBER
COLUMNAR_FIRST_ROW_NUMBER constant bigint := 1;
BEGIN
SELECT MAX(row_count) INTO max_row_count FROM columnar.stripe;
UPDATE columnar.stripe SET first_row_number = COLUMNAR_FIRST_ROW_NUMBER +
(stripe_num - 1) * max_row_count;
END;
$$;
#include "udfs/upgrade_columnar_storage/10.2-1.sql"
#include "udfs/downgrade_columnar_storage/10.2-1.sql"
-- upgrade storage for all columnar relations
PERFORM citus_internal.upgrade_columnar_storage(c.oid) FROM pg_class c, pg_am a
WHERE c.relam = a.oid AND amname = 'columnar';

Some files were not shown because too many files have changed in this diff