Commit Graph

6517 Commits (af0de8b19a920a50048a17eeaa73e14bed4f87dd)

Author SHA1 Message Date
onderkalaci af0de8b19a PG commit 376dc820531bafcbf105fff74c5b14c23d9950af 2023-04-26 22:54:13 +03:00
onderkalaci 3a773ee18f Not implemented, see PG commit e3ce2de09d814f8770b2e3b3c152b7671bcdb83f 2023-04-26 22:54:13 +03:00
onderkalaci 45259d2b87 Again, permission check refactor, be careful with details, PG commit a61b1f74823c9c4f79c95226a461f1e7a367764b 2023-04-26 22:54:13 +03:00
onderkalaci 5205411e4d PG commit 60684dd834a222fefedd49b19d1f0a6189c1632e 2023-04-26 22:54:13 +03:00
onderkalaci c56c564e80 Needs more attention, PG commit 858e776c84f48841e7e16fba7b690b76e54f3675 2023-04-26 22:54:13 +03:00
onderkalaci 70554b3a7b Minor PG refactor 320f92b744b44f961e5d56f5f21de003e8027a7f 2023-04-26 22:54:13 +03:00
onderkalaci c5f8094639 Remove AssertArg PG: b1099eca8f38ff5cfaf0901bb91cb6a22f909bc6 2023-04-26 22:54:13 +03:00
onderkalaci d536ee5832 pg_clean_ascii 45b1a67a0fcb3f1588df596431871de4c93cb76f 2023-04-26 22:54:13 +03:00
onderkalaci 723d80be82 guc 2023-04-26 22:54:13 +03:00
onderkalaci ecbf368dc0 guc vars 3057465acfbea2f3dd7a914a1478064022c6eecd 2023-04-26 22:54:13 +03:00
onderkalaci 036a96ce68 Placeholder commit for enabling columnar compile 2023-04-26 22:54:13 +03:00
onderkalaci 158b003366 Use RelFileLocator in PG 16 AND vacuum together 2023-04-26 22:54:13 +03:00
onderkalaci 3660b8fb0e PG commit 11470f544e3729c60fab890145b2e839cbc8905e 2023-04-26 22:54:13 +03:00
onderkalaci 2ee6d4cae4 tuplesort, PG commit: d37aa3d35832afde94e100c4d2a9618b3eb76472 2023-04-26 22:54:13 +03:00
onderkalaci 93f487d6d9 refactor ownership changed afbfc02983f86c4d71825efa6befd547fe81a926 2023-04-26 22:54:13 +03:00
onderkalaci eca67443de Use selectedCols from perminfo
Corresponding PG commit a61b1f74823c9c4f79c95226a461f1e7a367764b
2023-04-26 20:22:24 +03:00
onderkalaci 524bfc44e0 drop support for Abs, use fabs
PG commit 357cfefb09115292cfb98d504199e6df8201c957
2023-04-26 20:22:24 +03:00
onderkalaci 65c88fe854 Use RelFileLocator in PG 16
This is PG commit b0a55e43299c4ea2a9a8c757f9c26352407d0ccc
2023-04-26 20:22:24 +03:00
onderkalaci 890deac272 new header for varatt.h 2023-04-26 18:32:51 +03:00
onderkalaci 909ffe4613 enable configure 2023-04-26 18:32:51 +03:00
Hanefi Onaldi 5fc5931506
Skip some versions on changelog (#6882)
We had 10.1.5, 10.0.7, and 9.5.11 in the changelog, but those versions
are already used in the enterprise repository. This commit skips those
versions and uses 10.1.6, 10.0.8, and 9.5.12 instead to prevent clashes.
2023-04-26 12:05:27 +03:00
Hanefi Onaldi 15152eac94
Add changelog entries for backport releases (#6869)
We plan to have a series of backport releases. This PR contains separate
commits for each patch version for the 11.2 to 9.5 major versions. We plan
to cherry-pick each commit to the relevant release branches, hence the
need for separate commits for each version.
2023-04-25 13:21:08 +03:00
Hanefi Onaldi f7fd0dbae7
Add changelog entries for 11.2.1 2023-04-25 13:06:59 +03:00
Hanefi Onaldi c36adc8426
Add changelog entries for 11.1.6 2023-04-25 13:06:01 +03:00
Hanefi Onaldi 214bc39a5a
Add changelog entries for 11.0.8 2023-04-25 13:05:44 +03:00
Hanefi Onaldi 65f957d345
Add changelog entries for 10.2.9 2023-04-25 13:05:20 +03:00
Hanefi Onaldi db77cb084b
Add changelog entries for 10.1.5 2023-04-25 13:04:58 +03:00
Hanefi Onaldi 61c7cc0a96
Add changelog entries for 10.0.7 2023-04-25 13:04:27 +03:00
Hanefi Onaldi da71b74f1d
Add changelog entries for 9.5.11 2023-04-25 13:03:23 +03:00
Jelte Fennema a5f4fece13
Fix running PG upgrade tests with run_test.py (#6829)
In #6814 we started using the Python test runner for upgrade tests in
run_test.py, instead of the Perl-based one. This had a problem though:
not all tests in minimal_schedule can be run with the Python runner.
This adds a separate minimal schedule for the pg_upgrade tests which
doesn't include the tests that break with the Python runner.

This PR also fixes various other issues that came up while testing
the upgrade tests.
2023-04-24 15:54:32 +02:00
aykut-bozkurt a6a7271e63
Query generator test tool (#6686)
- The query generator is used to create queries allowed by the grammar, which is documented at `query_generator/query_gen.py` (currently contains only joins).
- This PR adds a CI test which uses the query generator to compare the results of generated queries executed on Citus tables and on local (undistributed) tables. It fails if there is an unexpected error in the results. The error can be related to Citus, the query generator, or even Postgres.
- The tool is configured by the file `query_generator/config/config.yaml`, which limits the number of tables in generated queries and sets many table-related parameters (e.g. row count).
- The run time of the CI task can be configured from the config file. By default, we run 250 queries with a maximum of 40 tables inside each query.
2023-04-23 20:28:26 +03:00
aykut-bozkurt 08e2820c67
skip restriction clause if it contains placeholdervar (#6857)
`PlaceHolderVar` is not relevant when processing a restriction clause;
otherwise, `pull_var_clause_default` would throw an error. PG creates the
restriction on the physical `Var` that the `PlaceHolderVar` points to
anyway, so it is safe to skip this restriction.
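
As a rough sketch of the idea (illustrative helper names, not the actual
Citus code), a clause containing a `PlaceHolderVar` can be detected with a
standard expression tree walker and then skipped:

```c
#include "postgres.h"

#include "nodes/nodeFuncs.h"
#include "nodes/pathnodes.h"

/* Returns true if the given expression contains a PlaceHolderVar anywhere. */
static bool
ContainsPlaceHolderVarWalker(Node *node, void *context)
{
	if (node == NULL)
	{
		return false;
	}
	if (IsA(node, PlaceHolderVar))
	{
		return true;
	}
	return expression_tree_walker(node, ContainsPlaceHolderVarWalker, context);
}

/* Hypothetical caller: skip restriction clauses that contain a PlaceHolderVar. */
static bool
ShouldSkipRestrictionClause(Node *clause)
{
	return ContainsPlaceHolderVarWalker(clause, NULL);
}
```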

DESCRIPTION: Fixes a bug related to a WHERE clause list which contains a
placeholder.

Fixes https://github.com/citusdata/citus/issues/6758
2023-04-17 18:14:01 +03:00
Emel Şimşek 2675a68218
Make coordinator always in metadata by default in regression tests. (#6847)
DESCRIPTION: Changes the regression test setups adding the coordinator
to metadata by default.

When creating a Citus cluster, the coordinator can be added to metadata
explicitly by running the `citus_set_coordinator_host` function. Adding the
coordinator to metadata allows creating Citus managed local tables.
Other Citus functionality is expected to be unaffected.

This change adds the coordinator to metadata by default when creating
test clusters in regression tests.

There are 3 ways to run commands in a SQL file (or a schedule, which is a
sequence of SQL files) with Citus regression tests. Below is how this PR
adds the coordinator to metadata for each.

1. `make <schedule_name>`
Changed the SQL files (sql/multi_cluster_management.sql and
sql/minimal_cluster_management.sql) which set up the test clusters so
that they call `citus_set_coordinator_host`. This ensures any following
tests will have the coordinator in metadata by default.
 
2. `citus_tests/run_test.py <sql_file_name>`
Changed the Python code that sets up the cluster to always call
`citus_set_coordinator_host`.
For the upgrade tests, a version check is included to make sure the
`citus_set_coordinator_host` function is available for a given version.

3. `make check-arbitrary-configs`
Changed the Python code that sets up the cluster to always call
`citus_set_coordinator_host`.

#6864 will be used to track the remaining work, which is to change the
tests where the coordinator is added/removed as a node.
2023-04-17 14:14:37 +03:00
Gokhan Gulbiz 8782ea1582
Ensure partitionKeyValue and colocationId are set for proper tenant stats gathering (#6834)
This PR updates the tenant stats implementation to set partitionKeyValue
and colocationId in ExecuteLocalTaskListExtended, in addition to
LocallyExecuteTaskPlan. This ensures that tenant stats can be properly
gathered regardless of the code path taken. The changes were initially
made while testing stored procedure calls for tenant stats.
2023-04-17 09:35:26 +03:00
Onur Tirtir f87a2d02b0
Move the common logic related to creating a Citus table down to CreateCitusTable (#6836)
.. rather than having it in user-facing functions. That way, we
can use the same logic for creating Citus tables from other places
too.

This would be useful for creating tenant tables via a simple function
call in the utility hook, for schema-based sharding purposes.
2023-04-14 16:13:39 +03:00
aykut-bozkurt 3286ec59e9
fix 3 flaky tests in failure schedule (#6846)
Fixed 3 flaky tests in the failure schedule which caused flakiness in other
tests due to changed node and group sequence ids during node addition
and removal.
2023-04-13 13:13:28 +03:00
Halil Ozan Akgül 9ba70696f7
Add CPU usage to citus_stat_tenants (#6844)
This PR adds CPU usage to the `citus_stat_tenants` monitor.
CPU usage is tracked in periods, similar to query counts.
2023-04-12 16:23:00 +03:00
Emel Şimşek e7a25d82c9
When creating a HTAB we need to use HASH_COMPARE flag in order to set a user defined comparison function. (#6845)
DESCRIPTION: Fixes memory errors, caught by valgrind, of type
"conditional jump or move depends on uninitialized value"

When running Citus tests under Postgres with valgrind, the test cases
calling into the `NonBlockingShardSplit` function produce valgrind errors
of type "conditional jump or move depends on uninitialized value".

The issue is caused by creating an HTAB in the wrong way. The HASH_COMPARE
flag should have been used when creating an HTAB with a user-defined
comparison function. In the absence of the HASH_COMPARE flag, the HTAB
falls back to the built-in string comparison function. However, valgrind
somehow discovers that the match function is not assigned to the
user-defined function as intended.
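
For context, here is a minimal sketch (with made-up key/entry types, not the
actual Citus code) of creating a dynahash table with a user-defined
comparison function; the important part is passing `HASH_COMPARE` so that
the `match` field is actually honored:

```c
#include "postgres.h"

#include "utils/hsearch.h"

/* Illustrative key/entry types, not the actual Citus structs. */
typedef struct DemoKey
{
	int32 nodeId;
	int32 groupId;
} DemoKey;

typedef struct DemoEntry
{
	DemoKey key;
	int64 counter;
} DemoEntry;

static uint32
DemoKeyHash(const void *key, Size keysize)
{
	const DemoKey *demoKey = (const DemoKey *) key;

	return (uint32) (demoKey->nodeId * 31 + demoKey->groupId);
}

static int
DemoKeyCompare(const void *a, const void *b, Size keysize)
{
	return memcmp(a, b, sizeof(DemoKey));
}

static HTAB *
CreateDemoHash(void)
{
	HASHCTL info;

	memset(&info, 0, sizeof(info));
	info.keysize = sizeof(DemoKey);
	info.entrysize = sizeof(DemoEntry);
	info.hash = DemoKeyHash;     /* user-defined hash function */
	info.match = DemoKeyCompare; /* user-defined comparison function */

	/*
	 * Without HASH_COMPARE in the flags, dynahash ignores info.match and
	 * uses its default comparison instead.
	 */
	return hash_create("demo hash", 32, &info,
					   HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
}
```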

Fixes #6835
2023-04-11 21:24:33 +03:00
Halil Ozan Akgül 8b50e95dc8
Fix citus_stat_tenants period updating bug (#6833)
Fixes the bug that causes the citus_stat_tenants periods to be updated
incorrectly.

`TimestampDifferenceExceeds` expects the difference in milliseconds, but
it was given in microseconds; this is fixed.
`tenantStats->lastQueryTime` was also updated during monitoring; now it's
updated only when there are tenant queries.
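
A minimal sketch of the corrected check (function and constant names here
are illustrative, not the actual Citus code):

```c
#include "postgres.h"

#include "utils/timestamp.h"

/* Illustrative period length; in Citus this comes from a GUC. */
#define STAT_TENANTS_PERIOD_MS 60000

static bool
CurrentPeriodHasEnded(TimestampTz periodStart)
{
	TimestampTz now = GetCurrentTimestamp();

	/*
	 * The third argument is a threshold in milliseconds. Passing a value
	 * expressed in microseconds here makes the threshold 1000x larger than
	 * intended, which is what broke the period updates.
	 */
	return TimestampDifferenceExceeds(periodStart, now, STAT_TENANTS_PERIOD_MS);
}
```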
2023-04-11 17:40:07 +03:00
aykut-bozkurt a20f7e1a55
fixes update propagation bug when `citus_set_coordinator_host` is called more than once (#6837)
DESCRIPTION: Fixes update propagation bug when
`citus_set_coordinator_host` is called more than once.

Fixes https://github.com/citusdata/citus/issues/6731.
2023-04-11 11:27:16 +03:00
rajeshkt78 1713246e1b
Add build-cdc-* temporary directories to .gitignore (#6841)
The CDC decoder build creates different versions of the CDC base decoders
during the build. Since the source files are copied to temporary
directories, they show up in git status as files to be added. So these
directories and a temporary CDC TAP test directory (tmpcheck) are added
to the .gitignore file.
2023-04-10 15:40:20 +05:30
Onur Tirtir 0194657c5d
Bump Citus to 12.0devel (#6840) 2023-04-10 12:05:18 +03:00
rajeshkt78 29c8d9633a
Makefile changes to build CDC in builddir for pgoutput and wal2json. (#6827)
DESCRIPTION:

Makefile changes to build different versions of the CDC decoder for different base decoders like pgoutput and wal2json with the same name, and copy them to the $packagelib/cdc_decoders dir. This lets the user use logical replication slots normally with pgoutput without being aware of the CDC decoder.

1) Changed src/backend/distributed/cdc/Makefile to set up a build directory
for CDC in the build-cdc-$(DECODER) dir, copy the source files (.c, .h, and
Makefile.decoder) to the build dir, and build it for each base decoder.

2) Copy pgoutput.so and wal2json.so into the above build dir and
install them in the PG packagelibdir/citus_decoders directory.

3) Added a test case, 016_cdc_wal2json.pl, for testing the wal2json decoder
using the pg_recv_logical_changes function.
2023-04-06 17:03:12 +05:30
Naisila Puka 84f2d8685a
Adds control for background task executors involving a node (#6771)
DESCRIPTION: Adds control for background task executors involving a node

### Background and motivation

Nonblocking concurrent task execution via background workers was
introduced in [#6459](https://github.com/citusdata/citus/pull/6459), and
concurrent shard moves in the background rebalancer were introduced in
[#6756](https://github.com/citusdata/citus/pull/6756) - with a hard
dependency that limits execution to 1 shard move per node. As we know, a shard
move consists of a shard moving from a source node to a target node. The
hard dependency was used because the background task runner didn't have
an option to limit the parallel shard moves per node.

With the motivation of controlling the number of concurrent shard
moves that involve a particular node, either as source or target, this
PR introduces a general new GUC
citus.max_background_task_executors_per_node to be used in the
background task runner infrastructure. So, why do we even want to
control and limit the concurrency? Well, it's all about resource
availability: because the moves involve the same nodes, extra
parallelism won’t make the rebalance complete faster if some resource is
already maxed out (usually cpu or disk). Or, if the cluster is being
used in a production setting, the moves might compete for resources with
production queries much more than if they had been executed
sequentially.
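
As a hedged illustration of what registering such a GUC typically looks like
in a Postgres extension (the variable name, defaults, and context below are
assumptions, not the exact Citus code):

```c
#include "postgres.h"

#include "utils/guc.h"

/* Illustrative backing variable; default/min/max below are assumptions. */
static int MaxBackgroundTaskExecutorsPerNode = 1;

void
RegisterMaxBackgroundTaskExecutorsPerNodeGuc(void)
{
	DefineCustomIntVariable(
		"citus.max_background_task_executors_per_node",
		gettext_noop("Limits the number of parallel background task executors "
					 "that may involve a particular node."),
		NULL,
		&MaxBackgroundTaskExecutorsPerNode,
		1,          /* boot value (assumed) */
		1,          /* min value */
		1024,       /* max value (assumed) */
		PGC_SIGHUP, /* assumed context; allows changing it without a restart */
		0,          /* flags */
		NULL, NULL, NULL);
}
```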

### How does it work?

A new column named nodes_involved is added to the catalog table that
keeps track of the scheduled background tasks,
pg_dist_background_task. It is of type integer[] - to store a list
of node ids. It is NULL by default - the column will be filled by the
rebalancer, but we may not care about the nodes involved in other uses
of the background task runner.

Table "pg_catalog.pg_dist_background_task"

     Column     |           Type           
============================================
 job_id         | bigint
 task_id        | bigint
 owner          | regrole
 pid            | integer
 status         | citus_task_status
 command        | text
 retry_count    | integer
 not_before     | timestamp with time zone
 message        | text
+nodes_involved | integer[]

A hashtable named ParallelTasksPerNode keeps track of the number of
parallel running background tasks per node. An entry in the hashtable is
as follows:

ParallelTasksPerNodeEntry
{
	node_id // The node is used as the hash table key 
	counter // Number of concurrent background tasks that involve node node_id
                // The counter limit is citus.max_background_task_executors_per_node
}

When the background task runner assigns a runnable task to a new
executor, it increments the counter for each of the nodes involved with
that runnable task. The limit of each counter is
citus.max_background_task_executors_per_node. If the limit is reached
for any of the nodes involved, this runnable task is skipped. And then,
later, when the running task finishes, the background task runner
decrements the counter for each of the nodes involved with the done
task. The following functions take care of these increment-decrement
steps:

IncrementParallelTaskCountForNodesInvolved(task)
DecrementParallelTaskCountForNodesInvolved(task)
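
A rough sketch of the increment side, assuming a hypothetical entry layout
and helper name (not the exact Citus code):

```c
#include "postgres.h"

#include "utils/hsearch.h"

/* Assumed entry layout, keyed by node id as described above. */
typedef struct ParallelTasksPerNodeEntry
{
	int32 nodeId;  /* hash key */
	int32 counter; /* number of running background tasks involving this node */
} ParallelTasksPerNodeEntry;

extern HTAB *ParallelTasksPerNode;            /* created elsewhere */
extern int MaxBackgroundTaskExecutorsPerNode; /* the new GUC */

/*
 * Returns false if any involved node is already at the limit, in which case
 * the runnable task is skipped; otherwise bumps the counter for every node.
 */
static bool
TryIncrementTaskCountForNodes(const int32 *nodeIds, int nodeCount)
{
	for (int i = 0; i < nodeCount; i++)
	{
		bool found = false;
		ParallelTasksPerNodeEntry *entry =
			hash_search(ParallelTasksPerNode, &nodeIds[i], HASH_ENTER, &found);

		if (!found)
		{
			entry->counter = 0;
		}

		if (entry->counter >= MaxBackgroundTaskExecutorsPerNode)
		{
			return false;
		}
	}

	for (int i = 0; i < nodeCount; i++)
	{
		ParallelTasksPerNodeEntry *entry =
			hash_search(ParallelTasksPerNode, &nodeIds[i], HASH_FIND, NULL);

		entry->counter++;
	}

	return true;
}
```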

citus.max_background_task_executors_per_node can be changed on the
fly. In the background rebalancer, we simply give {source_node,
target_node} as the nodesInvolved input to the
ScheduleBackgroundTask function. The rest is taken care of by the
general background task runner infrastructure explained above. Check
background_task_queue_monitor.sql and
background_rebalance_parallel.sql tests for detailed examples.

#### Note

This PR also adds a hard node dependency if a node is first being used
as a source for a move, and then later as a target. The reason this
should be a hard dependency is that the first move might make space for
the second move. So, we could run out of disk space (or at least
overload the node) if we move the second shard to it before the first
one is moved away.

Fixes https://github.com/citusdata/citus/issues/6716
2023-04-06 14:12:39 +03:00
Gokhan Gulbiz fa00fc6e3e
Add upgrade/downgrade paths between v11.2.2 and v11.3.1 (#6820)

Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
2023-04-06 12:46:09 +03:00
Ahmet Gedemenli 83a2cfbfcf
Move cleanup record test to upgrade schedule (#6794)
DESCRIPTION: Move cleanup record test to upgrade schedule
2023-04-06 11:42:49 +03:00
Naisila Puka fc479bfa49
Fixes flakiness in multi_metadata_sync test (#6824)
Fixes flakiness in multi_metadata_sync test


https://app.circleci.com/pipelines/github/citusdata/citus/31863/workflows/ea937480-a4cc-4646-815c-bb2634361d98/jobs/1074457
```diff
SELECT
 	logicalrelid, repmodel
 FROM
 	pg_dist_partition
 WHERE
 	logicalrelid = 'mx_test_schema_1.mx_table_1'::regclass
 	OR logicalrelid = 'mx_test_schema_2.mx_table_2'::regclass;
         logicalrelid         | repmodel 
 -----------------------------+----------
- mx_test_schema_1.mx_table_1 | s
  mx_test_schema_2.mx_table_2 | s
+ mx_test_schema_1.mx_table_1 | s
 (2 rows)
```
This is a simple issue of missing `ORDER BY` clauses. I went ahead and
added some other missing ones in the same file as well. Also, I replaced
existing `ORDER BY logicalrelid` with `ORDER BY logicalrelid::text`, in
order to compare names, not OIDs.
2023-04-06 11:19:32 +03:00
Halil Ozan Akgül 52ad2d08c7
Multi tenant monitoring (#6725)
DESCRIPTION: Adds views that monitor statistics on tenant usages

This PR adds the `citus_stats_tenants` view that monitors the tenants on the
cluster.

`citus_stats_tenants` shows the node id, colocation id, tenant
attribute, read count in this period and last period, and query count in
this period and last period for each tenant.
The tenant attribute is currently the tenant's distribution column value;
later, when schema-based sharding is introduced, this meaning might
change.
A period is a time bucket the queries are counted by. Read and query
counts for this period can increase until the current period ends. After
that, those counts are moved to the last period's counts, which cannot
change. The period length can be set using 'citus.stats_tenants_period'.

`SELECT` queries are counted as _read_ queries; `INSERT`, `UPDATE` and
`DELETE` queries are counted as _write_ queries. So in the view, read
counts are `SELECT` counts and query counts are `SELECT`, `INSERT`,
`UPDATE` and `DELETE` counts.

The data is stored in shared memory, in a struct named
`MultiTenantMonitor`.

`citus_stats_tenants` shows the data from local tenants.

`citus_stats_tenants` shows up to `citus.stats_tenant_limit` tenants.
The tenants are scored based on the number of queries they run and the
recency of those queries. Every query run increases the score of the
tenant by `ONE_QUERY_SCORE`, and after every period ends the scores are
halved. Halving is done lazily.
To retain information longer, the monitor keeps up to 3 times
`citus.stats_tenant_limit` tenants. When the tenant count hits `3 *
citus.stats_tenant_limit`, the last `citus.stats_tenant_limit` tenants are
removed. To see all stored tenants you can use
`citus_stats_tenants(return_all_tenants := true)`.
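
A toy sketch of the lazy-halving idea (the struct and constant here are
hypothetical, not the actual Citus implementation): scores are halved only
when the tenant is next touched, once per period that has elapsed since the
last update:

```c
#include "postgres.h"

/* Assumed score bump per query, for illustration only. */
#define ONE_QUERY_SCORE 1000000000

/* Hypothetical per-tenant bookkeeping. */
typedef struct TenantScore
{
	int64 score;
	int64 lastScorePeriodId;
} TenantScore;

static void
RecordTenantQuery(TenantScore *tenant, int64 currentPeriodId)
{
	int64 periodsPassed = currentPeriodId - tenant->lastScorePeriodId;

	/* Lazy halving: halve once for every period that ended since last update. */
	if (periodsPassed > 0)
	{
		tenant->score >>= Min(periodsPassed, 62);
		tenant->lastScorePeriodId = currentPeriodId;
	}

	tenant->score += ONE_QUERY_SCORE;
}
```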

- [x] Create collector view that gets data from all nodes. #6761 
- [x] Add monitoring log #6762 
- [x] Create enable/disable GUC #6769 
- [x] Parse the annotation string correctly #6796 
- [x] Add local queries and prepared statements #6797
- [x] Rename to citus_stat_statements #6821 
- [x] Run pgbench
- [x] Fix role permissions #6812

---------

Co-authored-by: Gokhan Gulbiz <ggulbiz@gmail.com>
Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2023-04-05 17:44:17 +03:00
Jelte Fennema d04d32b314
In run_test.py actually return worker_count (#6830)
Fixes a small mistake that was missed in the refactor of run_test.py
that was done in #6816.
2023-04-05 16:38:57 +03:00
Naisila Puka eda3cc418a
Fixes flakiness in multi_cluster_management test (#6825)
Fixes flakiness in multi_cluster_management test

https://app.circleci.com/pipelines/github/citusdata/citus/31816/workflows/2f455a30-1c0b-4b21-9831-f7cf2169df5a/jobs/1071444
```diff
SELECT public.wait_until_metadata_sync();
+WARNING:  waiting for metadata sync timed out
  wait_until_metadata_sync 
 --------------------------
  
 (1 row)
```
The default timeout value is 15000. I increased it to 60000.
2023-04-05 15:50:22 +03:00