Commit Graph

2243 Commits (955eabfcd6b2e23b221ccd613f6a0107e70489f1)

Author SHA1 Message Date
Philip Dubé 4e22f02997 Fix various typos due to zealous repetition 2021-03-04 19:28:15 +00:00
Onur Tirtir 1bb7a0a268
Fix chunk_group_consistency regression test view (#4765) 2021-03-04 12:20:25 +03:00
Onur Tirtir 9728ce1167
Add tests for concurrent index deadlock issue (#4775) 2021-03-04 11:56:54 +03:00
Onder Kalaci d1cd198655 Prevent infinite recursion for queries that involve UNION ALL and JOIN
With this commit, we make sure to prevent infinite recursion for queries
in the format: [subquery with a UNION ALL] JOIN [table or subquery]

Also fixes a bug where we pushed down a UNION ALL below a JOIN even if the
UNION ALL was not safe to push down.
2021-03-03 12:27:26 +01:00
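
A minimal sketch of the query shape the commit above targets, with hypothetical distributed tables t1 and t2; after the fix, planning such a query no longer recurses infinitely, and the UNION ALL is only pushed below the JOIN when that is safe:

    -- [subquery with a UNION ALL] JOIN [table or subquery]
    SELECT sub.a
    FROM (
        SELECT a FROM t1
        UNION ALL
        SELECT a FROM t2
    ) AS sub
    JOIN t1 USING (a);
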
Naisila Puka 2f30614fe3
Reimplement citus_update_table_statistics to detect dist. deadlocks (#4752)
* Reimplement citus_update_table_statistics

* Update stats for the given table not colocation group

* Add tests for reimplemented citus_update_table_statistics

* Use coordinated transaction, merge with citus_shard_sizes functions

* Update the old master_update_table_statistics as well
2021-03-03 04:12:30 +03:00
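
As a usage sketch (the table name events is a placeholder for an existing distributed table), the reimplemented UDF refreshes shard size statistics for the given table only, not its whole colocation group, and the old name is kept as an alias:

    -- refresh shard size statistics for one table
    SELECT citus_update_table_statistics('events');
    -- legacy name, updated by the same commit
    SELECT master_update_table_statistics('events');
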
Marco Slot dca615c5aa Normalize the ConvertTable notices 2021-03-01 10:36:12 +01:00
jeff-davis 9da9bd3dfd
Columnar: rename files and tests. (#4751)
* Columnar: rename files and tests.

* Columnar: Rename Table*State to Columnar*State.
2021-03-01 08:34:24 -08:00
SaitTalhaNisanci feee25dfbd
Use translated vars in postgres 13 as well (#4746)
* Use translated vars in postgres 13 as well

Postgres 13 removed translated vars, so we had special logic for pg 13.
However, it had a bug, so now we copy the translated vars before Postgres
deletes them. This also simplifies the logic.

* fix rtoffset with pg >= 13
2021-02-26 19:41:29 +03:00
Halil Ozan Akgul 5c5cb200f7 Adds GRANT for public to citus_tables 2021-02-26 16:24:33 +03:00
Önder Kalacı 0fe26a216c
Prevent cross join without any target list entries (#4750)
/*
 * The physical planner assumes that all worker queries would have
 * target list entries based on the fact that at least the columns
 * used in the JOINs have to be on the target list. However, there is
 * an exception to that if there is a cartesian product join and
 * there are no additional target list entries belonging to one side
 * of the JOIN. Once we support cartesian product joins, we should
 * remove this error.
 */
2021-02-26 11:04:21 +01:00
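
A hedged illustration of the query shape that this check now rejects; r and s are placeholder distributed tables, and no target list entry belongs to the s side of the cartesian product:

    -- cartesian product join where one side contributes no target list entries;
    -- the physical planner cannot build worker queries for this shape yet
    SELECT r.x
    FROM r CROSS JOIN s;
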
Onur Tirtir 54ac924bef Grant read access for columnar metadata tables to unprivileged user 2021-02-26 12:31:09 +03:00
Onur Tirtir dcc0207605 Add 10.0-2 schema version 2021-02-26 12:31:09 +03:00
Onur Tirtir 5ed954844c
Ensure table owner when using alter_columnar_table_set/alter_columnar_table_reset (#4748) 2021-02-26 12:27:51 +03:00
Naisila Puka 5ebd4eac7f
Preserve colocation with procedures in alter_distributed_table (#4743) 2021-02-25 19:52:47 +03:00
Hanefi Onaldi 5aff18b573 Fix flaky test 2021-02-24 17:09:08 +03:00
Hanefi Onaldi 9a792ef841 Remove length limitations for table renames 2021-02-24 03:35:27 +03:00
Hanefi Onaldi 7bebeb872d Failing long table name tests 2021-02-24 03:35:27 +03:00
Naisila Puka dbb88f6f8b
Fix insert query with CTEs/sublinks/subqueries etc (#4700)
* Fix insert query with CTE

* Add more cases with deferred pruning but false fast path

* Add more tests

* Better readability with if statements
2021-02-23 18:00:47 +03:00
Naisila Puka 105bb580e1
Add columnar regression tests (#4727)
* Add cursor tests for columnar tables

* Add columnar tests for data types w/out comp. operators

* Add more prepared statements with columnar tables

* Add constraint tests for columnar tables

* Add row level security, detach partition and rename columnar tests

* Add some ORDER BYs
2021-02-23 14:16:38 +03:00
Ahmet Gedemenli 1f345f65b4 Support dropping local table indexes along with a distributed index 2021-02-18 13:30:12 +03:00
Onur Tirtir 676d9a9726 Bump Citus to 10.1devel 2021-02-17 11:54:33 +03:00
Onur Tirtir d61fd6e478
Decide changing sequence dependencies on MX nodes according to resulting relation (#4713)
When executing alter_table / undistribute_table udf's, we should not try
to change sequence dependencies on MX workers if the new table wouldn't
require syncing metadata.

Previously, we were checking that for the input table. But in some cases, the
fact that the input table requires syncing metadata doesn't imply the same
for the resulting table (e.g., when undistributing a Citus table).

Moreover, doing that gave an unexpected error when undistributing
a Citus table, so this commit actually fixes that.
2021-02-15 19:20:26 +03:00
SaitTalhaNisanci bcbd24f8de
Only consider pseudo constants for shortcuts (#4712)
It seems that we need to consider only pseudo constants while taking some
shortcuts in planning. For example, there could be a false clause that still
contributes to the result, in which case it is not a pseudo constant.
2021-02-15 18:39:37 +03:00
SaitTalhaNisanci 0f1ce7a913
Not skip relation in conversion if it doesn't have RelationRestriction (#4685)
We would exclude tables without relationRestriction from conversion
candidates in local-distributed table joins. This could leave a leftover
local table which should have been converted to a subquery.

Ideally I would expect that in each call to CreateDistributedPlan we
would pass a new plan id, but that seems like a bigger change.
2021-02-12 12:33:55 +03:00
Hadi Moshayedi e690d8b79b Move stripe.chunk_count to last position 2021-02-11 17:00:44 -08:00
Jeff Davis 1f1c3c362b Columnar: rename chunk_num -> chunk_group_num. 2021-02-11 09:27:00 -08:00
Onder Kalaci f297c96ec5 Add regression tests for COPY into colocated intermediate results
To add the tests without too much data, make the copy switchover
configurable.
2021-02-11 15:41:06 +01:00
Onder Kalaci 5d5a357487 Do not re-use connections for intermediate results
/*
 * Colocated intermediate results are just files and not required to use
 * the same connections with their co-located shards. So, we are free to
 * use any connection we can get.
 *
 * Also, the current connection re-use logic does not know how to handle
 * intermediate results, as the intermediate results always truncate the
 * existing files. That's why we use one connection per intermediate
 * result.
 */
2021-02-11 15:41:06 +01:00
Ahmet Gedemenli c8e83d1f26 Fix dropping fkey when distributing table 2021-02-11 15:48:35 +03:00
SaitTalhaNisanci 847b79078f
Not consider subplans in restriction list (#4679)
* Not consider subplans in restriction list

* Not consider sublink, alternative subplan in restrictions
2021-02-11 15:04:07 +03:00
Onur Tirtir ec7ab68f3b Test adding local table with long name to metadata 2021-02-10 18:05:04 +03:00
Onur Tirtir 9f619a85d6
Fix EXPLAIN ANALYZE exec when query returns no cols (#4672)
We do not include a dummy column if the original task didn't return any
columns.
Otherwise, the number of columns that the original task returned wouldn't
match the number of columns returned by worker_save_query_explain_analyze.
2021-02-10 17:59:47 +03:00
Hadi Moshayedi 52297804ae Fix zero column tables 2021-02-09 23:05:11 -08:00
Hadi Moshayedi 2d09c76b76 Rename storageid to storage_id 2021-02-09 19:57:04 -08:00
Hadi Moshayedi 8270b598b6 Rename stripeid, chunkid, and attnum 2021-02-09 19:50:50 -08:00
Hadi Moshayedi be90c20457 Fix write path for zero column tables 2021-02-09 14:14:06 -08:00
Hadi Moshayedi c8d61a31e2 Columnar: chunk_group metadata table 2021-02-09 14:11:58 -08:00
Onder Kalaci c804c9aa21 Allow local execution for intermediate results in COPY
When COPY is used for copying into co-located files, it was
not allowed to use local execution. The primary reason was
Citus treating co-located intermediate results as co-located
shards, and COPY into the distributed table was done via
"format result". And, local execution of such COPY commands
was not implemented.

With this change, we implement support for local execution with
"format result". To do that, we use the buffer for every file
on shardState->copyOutState, similar to how local copy on
shards is implemented. In fact, the logic is similar to
local copy on shards, but instead of writing to the shards,
Citus writes the results to a file.

The logic relies on LOCAL_COPY_FLUSH_THRESHOLD, and flushes
only when the size exceeds the threshold. But, unlike local
copy on shards, in this case we write the headers and footers
just once.
2021-02-09 15:00:06 +01:00
SaitTalhaNisanci e96da4886f
Sort results in citus_shards and give raw size (#4649)
* Sort results in citus_shards and give raw size

Sort results so that it is consistent and also similar to citus_tables.

Use raw size in the output so that doing operations on the size is
easier.

* Change column ordering
2021-02-08 15:29:42 +03:00
Hadi Moshayedi 3e6b54b964 Normalize isolation_metadata_sync_deadlock 2021-02-06 15:59:28 -08:00
Hadi Moshayedi eff8cffaf3
Columnar: improve naming of limit config variables. (#4653)
* Rename chunk_row_count to chunk_group_row_limit

* Rename stripe_row_count to stripe_row_limit

* Undo couple of renames
2021-02-06 09:04:04 -08:00
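
After this rename, the session-level configuration variables follow the new names as well; a small sketch of adjusting the limits for newly written columnar data (values are illustrative only):

    -- defaults applied to stripes/chunk groups written after the change
    SET columnar.stripe_row_limit = 100000;
    SET columnar.chunk_group_row_limit = 5000;
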
Jeff Davis b1882d4400 Columnar: Call nextval_internal instead of DirectFunctionCall. 2021-02-06 01:45:30 -08:00
Hadi Moshayedi 4e53314e3f Make isolation_metadata_sync_deadlock more resilient 2021-02-06 01:05:24 -08:00
Hadi Moshayedi 0a9fd91d8f Use 'Chunk Groups' in EXPLAIN ANALYZE of columnar scan 2021-02-05 10:58:01 -08:00
Hadi Moshayedi 1d311b0709 Columnar: don't double count chunks filtered 2021-02-05 10:58:01 -08:00
Ahmet Gedemenli 5dd2a3da03 Convert RelabelTypes into CollateExprs in get_rule_expr function 2021-02-05 12:06:46 +03:00
Ahmet Gedemenli 503171d2f2
Merge branch 'master' into rename-master-parameter-for-dist-stat-activity 2021-02-04 15:37:13 +03:00
Ahmet Gedemenli 2443b20b2c Rename master to distributed for worker stat activity 2021-02-04 12:20:06 +03:00
Onder Kalaci fc9a23792c COPY uses adaptive connection management on local node
With #4338, the executor is smart enough to failover to
local node if there is not enough space in max_connections
for remote connections.

For COPY, the logic is different. With #4034, we made COPY
work with the adaptive connection management slightly
differently. The cause of the difference is that COPY doesn't
know which placements are going to be accessed hence requires
to get connections up-front.

Similarly, COPY decides to use local execution up-front.

With this commit, we change the logic for COPY on local nodes:

Try to reserve a connection to the local host. This follows the
same logic (e.g., citus.local_shared_pool_size) as the executor,
because COPY also relies on TryToIncrementSharedConnectionCounter().
If reservation to the local node fails, switch to local execution.
Apart from this, if local execution is disabled, we follow the
exact same logic as multi-node Citus. It means that if we are
out of connections, we'd give an error.
2021-02-04 09:45:07 +01:00
Ahmet Gedemenli 34840ddc5c Rename master to citus for dist stat activity cols 2021-02-04 11:12:23 +03:00
Hadi Moshayedi 5fde617229 Columnar: disallow CREATE INDEX CONCURRENTLY 2021-02-03 12:10:00 -08:00
Jeff Davis 4043731c41 Columnar: fix inheritance planning. 2021-02-03 10:41:21 -08:00
Sait Talha Nisanci 24e60b44a1 Consider coordinator in intermediate result optimization
It seems that we were not considering the case where the coordinator was
added to the cluster as a worker in the optimization of intermediate
results.

This could lead to errors when the coordinator was added as a worker.
2021-02-03 20:02:03 +03:00
Onur Tirtir c0f2817b70
Disallow using alter_table udfs with tables having any identity cols (#4635)
pg_get_tableschemadef_string doesn't know how to deparse identity
columns, so we cannot reflect those columns when creating the table
from scratch. For this reason, we don't allow using alter_table udfs
with tables having any identity cols.
2021-02-03 19:33:54 +03:00
Onur Tirtir 3a403090fd
Disallow adding local table with identity column to metadata (#4633)
pg_get_tableschemadef_string doesn't know how to deparse identity
columns, so we cannot reflect those columns when creating the shell
relation.
For this reason, we don't allow adding local tables having identity columns
to metadata.
2021-02-03 19:05:17 +03:00
Onur Tirtir 5efb742f8a
Skip copying GENERATED ALWAYS AS STORED cols in ReplaceTable (#4616)
Postgres doesn't allow inserting into columns having GENERATED ALWAYS
AS (...) STORED expressions.
For this reason, when executing undistribute_table or an alter_* udf,
we should skip copying such columns.
This is not a problem since Postgres would generate such columns anyway.
2021-02-03 17:55:16 +03:00
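
A sketch of why skipping is safe, using a hypothetical table: the GENERATED ... STORED column is recomputed by Postgres in the recreated table, so it must not be copied explicitly:

    CREATE TABLE items (
        id bigint PRIMARY KEY,
        price numeric,
        -- regenerated automatically; direct INSERTs into this column are rejected
        price_with_tax numeric GENERATED ALWAYS AS (price * 1.18) STORED
    );
    SELECT create_distributed_table('items', 'id');
    -- ReplaceTable skips price_with_tax while copying rows
    SELECT undistribute_table('items');
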
jeff-davis e03246dd45
Columnar: mark custom scan path parallel_safe. (#4619)
Enables an overall plan to be parallel (e.g. over a partition
hierarchy), even though an individual ColumnarScan is not
parallel-aware.

Co-authored-by: Jeff Davis <jefdavi@microsoft.com>
2021-02-02 11:56:00 -08:00
jeff-davis e195af7e72
Columnar: always disable parallel paths. (#4617)
Previously, if columnar.enable_custom_scan was false, parallel paths
could remain, leading to an unexpected error.

Also, ensure that cheapest_parameterized_paths is cleared if a custom
scan is used.

Co-authored-by: Jeff Davis <jefdavi@microsoft.com>
2021-02-02 11:37:42 -08:00
Onur Tirtir 912d829757 Skip GENERATED AS ALWAYS STORED cols when processing cols owning sequences
When finding columns owning sequences, we shouldn't rely on atthasdef
since it might be true when the column has a GENERATED ALWAYS AS (...)
STORED expression.
2021-02-02 18:17:42 +03:00
Onur Tirtir c8a48c6eee
Not try to sync metadata for local tables (#4625) 2021-02-02 15:12:12 +03:00
Hadi Moshayedi bcb162976f Fix #4608 2021-02-01 16:23:16 -08:00
Hadi Moshayedi f5b1e49b79 Columnar: Fix lateral joins 2021-02-01 11:59:36 -08:00
Hadi Moshayedi ef927688fa Columnar: Fix ALTER TABLE ... ADD COLUMN. 2021-02-01 11:40:17 -08:00
Brian Bergeron 1253eeb9ff
Don't propagate ALTER ROLE SET when scoped to a different database (#4471)
Co-authored-by: brberger <brberger@microsoft.com>
2021-02-01 15:49:26 +03:00
Hanefi Önaldı cab17afce9 Introduce UDFs for fixing partitioned table constraint names 2021-01-29 17:32:20 +03:00
Hanefi Önaldı 92cf49b7e9 Limit shardId in partitioned table constraint names to only CHECK 2021-01-29 17:29:53 +03:00
SaitTalhaNisanci 738825cc38
Fix partition column index issue (#4591)
* Fix partition column index issue

We send column names to worker_hash/range_partition_table methods, and
in these methods we check the column name index from tuple descriptor.
Then this index is used to decide the bucket that the current row will
be sent for the repartition.

This becomes a problem when there are duplicate column names in the
tupleDescriptor. Then we can choose the wrong index. Hence the
partitioned data will be sent to the wrong workers. Then the result could
miss some data because workers might contain different ranges of data.

An example:
TupleDescriptor contains "trip_id", "car_id", "car_id" for one table.
It contains only "car_id" for the other table. And assuming that the
tables will be partitioned by car_id, it is not certain what should be
used for deciding the bucket number for the first table. Assuming value
2 goes to bucket 2 and value 3 goes to bucket 3, it is not certain which
bucket the row "1 2 3" (trip_id, car_id, car_id) will go to.

As a solution we send the index of partition column in targetList
instead of the column name.

The old API is kept so that things still work while workers are being
upgraded (though it will have the same bug).

* Use the same method so that backporting is easier
2021-01-29 14:40:40 +03:00
SaitTalhaNisanci 1ba399f5ca
Fix a flaky behaviour in shared_connection_stats (#4596)
With the previous query, we were not pushing down the pg_sleep, hence the
number of connections to a worker could be different from run to run.
2021-01-28 18:42:49 +03:00
Onder Kalaci c7ea46067f Add regression tests 2021-01-28 12:45:57 +01:00
Onur Tirtir bb5962ee79
Early error out when creating citus local from a temp table (#4592) 2021-01-28 14:18:06 +03:00
Halil Ozan Akgul 913aa91449 Adds error message to AlterTableSetAccessMethod for PG versions below 12 2021-01-28 11:32:02 +03:00
jeff-davis 15297cab49
Columnar: add GUC to control qual pushdown. (#4586) 2021-01-27 09:57:40 -08:00
Nils Dijk 07d3b4fd04
fix NaN cost estimate on empty columnar tables (#4593)
Fixing a division by zero in the cost calculations for scanning a columnar table.

Due to how the columns in a columnar table are counted, an empty table would result in a division by zero. Instead, this patch keeps the column selection ratio at zero when this happens, resulting in an accurate cost of zero pages to scan such a columnar table.

fixes #4589
2021-01-27 17:32:17 +01:00
Onur Tirtir b20615cbbe
Advise dropping foreign key in addition to create_reference_table hint (#4590) 2021-01-27 17:59:06 +03:00
Onur Tirtir 8151c4b443 Merge remote-tracking branch 'origin/master' into rename-create_citus_local_table 2021-01-27 17:08:58 +03:00
Ahmet Gedemenli b2c1bbddd4
Merge branch 'master' into fix-dropping-mat-views-when-alter-table 2021-01-27 16:33:10 +03:00
Ahmet Gedemenli 35043c56f1 Fix dropping materialized views while doing alter table 2021-01-27 16:32:09 +03:00
Onur Tirtir dfcdccd0e7 Rename udf in regression tests (as per prev commit) 2021-01-27 15:52:37 +03:00
Onur Tirtir 1a4482a37c Get rid of the sql dir for new udf 2021-01-27 15:52:37 +03:00
Onur Tirtir 2f30be823e Rename create_citus_local_table to citus_add_local_table_to_metadata
For simplicity in the downgrade test in multi_extension, we didn't
actually remove the create_citus_local_table udf.
2021-01-27 15:52:36 +03:00
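
A usage sketch with a hypothetical local table, assuming the coordinator is already in the metadata; new code should call the new name, while create_citus_local_table remains only for the downgrade test:

    CREATE TABLE local_events (id bigint, payload text);
    SELECT citus_add_local_table_to_metadata('local_events');
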
Onur Tirtir c06fcc26e5 Hide notice messages when implicitly undistributing citus local tables 2021-01-27 13:42:06 +03:00
Onur Tirtir cacb76d2c6
Not mention citus local tables in error messages (#4579) 2021-01-27 12:36:53 +03:00
Naisila Puka 94bc2703bc
Make undistribute_table() and citus_create_local_table() work with columnar (#4563)
* Make undistribute_table() and citus_create_local_table() work with columnar

* Rename and use LocallyExecuteUtilityTask for UDF check

* Remove 'local' references in ExecuteUtilityCommand
2021-01-27 01:17:20 +03:00
Halil Ozan Akgul bafa692fc1 Adds error messages with names of indexes that will be dropped 2021-01-26 18:18:26 +03:00
Ahmet Gedemenli e99f052904 Fix index renaming when creating citus local tables 2021-01-26 15:52:48 +03:00
Jeff Davis d62e54dc09 Columnar: optimize write path. 2021-01-25 11:47:21 -08:00
Hadi Moshayedi 639952ffa8 Read chunk row count from catalog tables 2021-01-25 08:53:52 -08:00
Onur Tirtir 215d6630c3 Update foreign_key_to_reference_table so that test output doesn't change 2021-01-25 11:03:39 +03:00
Onur Tirtir b5ea033a0b Convert postgres tables to citus local when creating reference table having fkeys 2021-01-25 11:02:50 +03:00
Jeff Davis 53f7b019d5 Columnar: clean up old references to cstore. 2021-01-22 11:08:36 -08:00
Onur Tirtir 941c8fbf32
Automatically undistribute citus local tables when no more fkeys with reference tables (#4538) 2021-01-22 18:15:41 +03:00
Marco Slot 03328e9679 Rename citus_tables column names to be query-friendly 2021-01-21 18:58:30 +01:00
Ahmet Gedemenli 3ac30ef9d8
Merge branch 'master' into remove-deprecated-gucs-udfs 2021-01-22 13:06:13 +03:00
Ahmet Gedemenli 76354ff563
Merge branch 'master' into remove-deprecated-gucs-udfs 2021-01-22 12:47:06 +03:00
Ahmet Gedemenli 887b67953b
Merge branch 'master' into fix-bug-create-citus-local-table-with-stats 2021-01-22 12:46:47 +03:00
Hadi Moshayedi ff38996645 More meaningful columnar metadata table names 2021-01-21 21:29:07 -08:00
jeff-davis 0b5551faaf
Columnar: add explain info for chunk filtering (#4554)
Co-authored-by: Jeff Davis <jefdavi@microsoft.com>
2021-01-21 15:04:42 -08:00
jeff-davis 0581df23f4
Add columnar test for json (#4553)
Co-authored-by: Jeff Davis <jefdavi@microsoft.com>
2021-01-21 14:36:38 -08:00
Önder Kalacı 9b39b25390
Prevent citus local table creation via remote execution (#4540)
/*
 * Creating Citus local tables relies on functions that access
 * shards locally (e.g., ExecuteAndLogDDLCommand()). As long as
 * we don't teach those functions to access shards remotely, we
 * cannot relax this check.
*/
2021-01-21 11:26:45 +03:00
Onur Tirtir 433062e5d2
Add fkeys between citus local and reference tables in some tests (#4546) 2021-01-20 19:30:20 +03:00
Ahmet Gedemenli 89a6fe83f7 Switch to update_distributed_table_colocation for tests 2021-01-20 17:30:06 +03:00
Ahmet Gedemenli ceb6b503c0 Remove unused UDF mark_tables_colocated 2021-01-20 17:29:23 +03:00
Ahmet Gedemenli 2fa060a32d Fix bug creating citus local table with stats 2021-01-20 17:17:13 +03:00
Halil Ozan Akgul 434f5af030 Adds same access method check 2021-01-20 15:18:03 +03:00
Hadi Moshayedi 8a5b6a43fc Normalize citus_local_tables 2021-01-19 15:56:42 -08:00
Hadi Moshayedi 0e0fd6599a Faster logical replication tests.
Logical replication status can take wal_receiver_status_interval
seconds to get updated. The default is 10s, which means tests in
which logical replication is used can take a long time to finish.
We reduce it to 1 second to speed these tests up.

The logical replication apply launcher launches workers every
wal_retrieve_retry_interval, so if we have many consecutive shard moves
with logical replication, they will be throttled by this
parameter. The default is 5s; we reduce it to 1s so we finish tests
faster.
2021-01-19 07:48:47 -08:00
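
The two settings mentioned above, sketched as they would be lowered for a test cluster (the 1-second values come from the commit message; this is test tuning, not a production recommendation):

    ALTER SYSTEM SET wal_receiver_status_interval = '1s';
    ALTER SYSTEM SET wal_retrieve_retry_interval = '1s';
    SELECT pg_reload_conf();
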
Hadi Moshayedi bc01c795a2 Reland #4419 2021-01-19 07:48:47 -08:00
SaitTalhaNisanci 745ffbc691
Separate schedules for mixed mode and normal mode in upgrade (#4420) 2021-01-19 14:08:11 +03:00
Halil Ozan Akgul 27c2bd1599 Moves creation of ALTER INDEX STATISTICS commands next to index commands 2021-01-18 16:55:53 +03:00
Naisila Puka 7124a7715d
Skip 'already exists' in CREATE TABLE IF NOT EXISTS PARTITION OF (#4507)
* Just skip 'already exists' in CT IF NOT EXISTS PARTITION OF

* Generalize to tables that are not already distributed partitions
2021-01-18 15:56:02 +03:00
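
A sketch of the statement shape this fixes, with a hypothetical partitioned distributed table; re-running the IF NOT EXISTS command now skips the existing partition instead of erroring:

    CREATE TABLE events (event_time timestamptz, payload text)
        PARTITION BY RANGE (event_time);
    SELECT create_distributed_table('events', 'payload');
    -- running this twice hits the 'already exists' path the second time
    CREATE TABLE IF NOT EXISTS events_2021_01
        PARTITION OF events
        FOR VALUES FROM ('2021-01-01') TO ('2021-02-01');
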
Onur Tirtir f1ecbc3a53
Fix segfault when adding/dropping fkey from ref to citus local via remote exec (#4528) 2021-01-17 20:43:33 +03:00
Onur Tirtir 5a3e8a6e24
Skip postgres tables for UndistributeTable(cascadeViaFKeys) (#4530)
The reason behind skipping postgres tables is that we support
foreign keys between postgres tables and reference tables
(without converting postgres tables to citus local tables)
when enable_local_reference_table_foreign_keys is false or
when the coordinator is not added to the metadata.
2021-01-17 20:32:30 +03:00
Ahmet Gedemenli 107097ee28 Fix assert failure when creating statistics 2021-01-15 19:36:58 +03:00
Onur Tirtir 7dddfa2d0b
Not invalidate fkey cache if citus not installed (#4521) 2021-01-15 18:31:43 +03:00
Onder Kalaci 30d0a65f40 Adds citus.enable_local_reference_table_foreign_keys
When enabled, any foreign keys between local tables and reference
tables are supported by converting the local table to a citus local
table.

When the coordinator is not in the metadata, the logic is disabled
as foreign keys are not allowed in this configuration.
2021-01-15 18:04:52 +03:00
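
A behavioural sketch with placeholder tables, assuming the coordinator has been added to the metadata; with the GUC enabled, defining the foreign key implicitly converts the local table to a citus local table:

    SET citus.enable_local_reference_table_foreign_keys TO on;
    CREATE TABLE countries (code text PRIMARY KEY);
    SELECT create_reference_table('countries');
    CREATE TABLE cities (name text, country_code text);
    -- converts cities to a citus local table behind the scenes
    ALTER TABLE cities ADD FOREIGN KEY (country_code) REFERENCES countries (code);
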
Onur Tirtir e718d24868 Add support for CREATE TABLE commands defining foreign keys 2021-01-15 17:46:06 +03:00
Ahmet Gedemenli 9a100bcdb9 Remove unused GUCs
Remove deprecated variables

Remove GUC citus.sslmode

Remove GUC citus.expire_cached_shards

Remove GUC citus.task_tracker_delay

Remove GUC citus.max_assign_task_batch_size

Remove GUC citus.max_tracked_tasks_per_node

Remove GUC citus.max_running_tasks_per_node

Remove GUC citus.large_table_shard_count

Remove GUC citus.max_task_string_size

Remove GUC citus.binary_master_copy_format
2021-01-15 13:30:45 +03:00
Onur Tirtir 787ed643dd
Undistribute table when cascade_via_foreign_keys=true even if rel has no fkeys (#4516)
If the relation is not involved in any foreign key relationships, the
foreign key graph would not return any relations for the given
relationId, as expected.

But even if that's the case, we should still undistribute the table
itself.
2021-01-15 12:45:44 +03:00
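
Usage sketch (orders is a placeholder distributed table): even when the table has no foreign keys at all, the cascading variant now still undistributes the table itself:

    SELECT undistribute_table('orders', cascade_via_foreign_keys => true);
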
Onur Tirtir 36b418982f Add support for ALTER TABLE commands defining foreign keys 2021-01-14 17:12:00 +03:00
Nils Dijk a655ef27bc
Test columnar recovery (#4485)
DESCRIPTION: Add tests to verify crash recovery for columnar tables

Based on the Postgres TAP tooling we add a new test suite to the array of test suites for citus. It is modelled after `src/test/recovery` in the postgres project and takes the same place in our repository. It uses the perl modules defined in the postgres project to control the postgres nodes.

The tests we add here focus on crash recovery. Our follower tests should cover the streaming replication behaviour.

It is hooked to our CI for both postgres 12 and postgres 13. We omit the recovery tests for postgres 11 as we do not have support for the columnar table access method.
2021-01-14 14:58:29 +01:00
Marco Slot b840e97cd6 Add a alter_old_partitions_set_access_method UDF 2021-01-14 10:44:14 +01:00
Ahmet Gedemenli 9b56ad48cb Recreate invalidation functions for Citus10
Fix multi_create_table

Add schema name to altered functions

Recreate invalidation functions when downgrading
2021-01-13 23:18:07 +03:00
jeff-davis ec319faa43
Only allow columnar tables with permanent storage (#4492). (#4495)
Co-authored-by: Jeff Davis <jefdavi@microsoft.com>
2021-01-13 10:37:34 -08:00
jeff-davis b49beda4c3
Stronger check for triggers on columnar tables (#4493). (#4494)
* Stronger check for triggers on columnar tables (#4493).

Previously, we used a simple ProcessUtility_hook. Change to use an
object_access_hook instead.

* Replace alter_table_set_access_method test on partition with foreign key

Co-authored-by: Jeff Davis <jefdavi@microsoft.com>
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2021-01-13 10:30:53 -08:00
Marco Slot de6aaaa648 Expand support for subqueries in target list through recursive planning 2021-01-13 17:26:09 +01:00
Onur Tirtir ccbc3de535 Enable reference/distributed table creation from citus local tables 2021-01-13 17:14:26 +03:00
Halil Ozan Akgul 2be14cce2e Adds alter_distributed_table and alter_table_set_access_method UDFs 2021-01-13 16:02:39 +03:00
SaitTalhaNisanci 724d56f949
Add citus shard helper view (#4361)
With citus shard helper view, we can easily see:
- where each shard is, which node, which port
- what kind of table it belongs to
- its size

With such a view, we can see shards that have a size bigger than some
value, which could be useful. Debugging in production can also be
easier with this view.

Fetch shards in one go per node

The previous implementation was slow because it would do a lot of round
trips, one per shard to be exact. Hence it is improved so that we fetch
all the shard_name, shard-size pairs per node in one go.

Construct shards_names, sizes query on coordinator
2021-01-13 13:58:47 +03:00
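
A couple of hedged example queries against the new view; the column names below reflect the view as it later stabilized (a later commit reorders columns and switches to raw sizes), so treat them as assumptions:

    -- where every shard lives and how large it is
    SELECT * FROM citus_shards;
    -- shards larger than roughly 100 MB, useful when hunting for skew
    SELECT shard_name, nodename, nodeport, shard_size
    FROM citus_shards
    WHERE shard_size > 100 * 1024 * 1024;
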
Önder Kalacı 7e0826a06b
Make sure that materialized views that contains only (#4499)
Make sure that materialized views that contain only intermediate results work fine.
2021-01-13 13:17:43 +03:00
Ahmet Gedemenli 436c9d9d79
Remove the word 'master' from Citus UDFs (#4472)
* Replace master_add_node with citus_add_node

* Replace master_activate_node with citus_activate_node

* Replace master_add_inactive_node with citus_add_inactive_node

* Use master udfs in old scripts

* Replace master_add_secondary_node with citus_add_secondary_node

* Replace master_disable_node with citus_disable_node

* Replace master_drain_node with citus_drain_node

* Replace master_remove_node with citus_remove_node

* Replace master_set_node_property with citus_set_node_property

* Replace master_unmark_object_distributed with citus_unmark_object_distributed

* Replace master_update_node with citus_update_node

* Replace master_update_shard_statistics with citus_update_shard_statistics

* Replace master_update_table_statistics with citus_update_table_statistics

* Rename master_conninfo_cache_invalidate to citus_conninfo_cache_invalidate

Rename master_dist_local_group_cache_invalidate to citus_dist_local_group_cache_invalidate

* Replace master_copy_shard_placement with citus_copy_shard_placement

* Replace master_move_shard_placement with citus_move_shard_placement

* Rename master_dist_node_cache_invalidate to citus_dist_node_cache_invalidate

* Rename master_dist_object_cache_invalidate to citus_dist_object_cache_invalidate

* Rename master_dist_partition_cache_invalidate to citus_dist_partition_cache_invalidate

* Rename master_dist_placement_cache_invalidate to citus_dist_placement_cache_invalidate

* Rename master_dist_shard_cache_invalidate to citus_dist_shard_cache_invalidate

* Drop master_modify_multiple_shards

* Rename master_drop_all_shards to citus_drop_all_shards

* Drop master_create_distributed_table

* Drop master_create_worker_shards

* Revert old function definitions

* Add missing revoke statement for citus_disable_node
2021-01-13 12:10:43 +03:00
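
The renames are mechanical; a small sketch with placeholder host names and a hypothetical shard id, showing the new names next to a retained old one:

    -- new names
    SELECT citus_add_node('worker-1', 5432);
    SELECT citus_move_shard_placement(102008, 'worker-1', 5432, 'worker-2', 5432);
    -- the reverted old definitions keep calls like this working
    SELECT master_add_node('worker-1', 5432);
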
Onur Tirtir 2ef5879bcc
Fix error thrown for foreign keys from citus local to dist tables (#4490) 2021-01-13 10:15:12 +03:00
Onur Tirtir dd55ab394e
Disallow cascade_via_foreign_keys if any partition rel has non-inherited fkeys (#4487) 2021-01-11 21:50:09 +03:00
Naisila Puka 7b05777682
Add ALTER TABLE .. SET LOGGED/UNLOGGED support (#4486) 2021-01-11 20:39:06 +03:00
Marco Slot d900a7336e Automatically add placeholder record for coordinator 2021-01-08 15:09:53 +01:00
Marco Slot 597533b1ff Add citus_set_coordinator_host 2021-01-08 13:36:26 +01:00
Marco Slot e7f13978b5 Add a view for simple (time) partitions and their access methods 2021-01-08 11:28:15 +01:00
Onur Tirtir 5289785da4
Add cascade_via_foreign_keys option to create_citus_local_table (#4462) 2021-01-08 15:13:26 +03:00
Marco Slot 011283122b Add the shard rebalancer implementation 2021-01-07 16:51:55 +01:00
Onur Tirtir d9a3e26f20
Fix flaky test in multi_foreign_key_relation_graph (#4476)
CREATE TABLE does not invalidate the foreign key graph but some other
ddl commands do.

Previously, as we ran multi_foreign_key & multi_foreign_key_relation_graph
in parallel, it was possible that multi_foreign_key invalidated the foreign key
graph via some ddl commands, making the create table test in
multi_foreign_key_relation_graph flaky.

So we un-parallelize those two tests.
2021-01-07 16:19:11 +03:00
Onur Tirtir f3801143fb Add cascade option to undistribute_table 2021-01-07 15:41:49 +03:00
Marco Slot 47c1b19174 Revert "Do metadata sync in a separate background worker."
This reverts commit 4df723cf9b.
2021-01-07 10:30:04 +01:00
Marco Slot d9f175532b Revert "Trigger metadata sync at transaction commit"
This reverts commit a2c73bef27.
2021-01-07 10:30:00 +01:00
Marco Slot 5de3337b2f Support local execution for INSERT..SELECT with re-partitioning 2021-01-06 16:15:53 +01:00
Naisila Puka bcfc0aa4e9
Rethrow original concurrent index creation failure message (#4469)
* Rethrow original concurrent index creation failure message

* Alter test outputs for concurrent index creation

* Detect duplicate table failure in concurrent index creation

* Add test for conc. index creation w/out duplicates
2021-01-06 15:27:13 +03:00
Ahmet Gedemenli 1f36ff7c17
Prevent deadlock for long named partitioned index creation on single node (#4461)
* Prevent deadlock for long named partitioned index creation on single node

* Create IsSingleNodeCluster function

* Use both local and sequential execution
2021-01-05 13:39:13 +03:00
Ahmet Gedemenli f27649754b
Add alter index set statistics support (#4455)
* Add alter index set statistics support

* Use attNum instead of attName
2021-01-05 13:23:11 +03:00
Onur Tirtir 87e5276bdd
Fix fkey graph test for self reference (#4450) 2020-12-28 12:47:39 +03:00
Halil Ozan Akgül a8626d1944
Fixes the table used in the error message (#4449) 2020-12-25 16:48:50 +03:00
Naisila Puka 04aeb6938b
Merge branch 'master' into issue4237 2020-12-25 12:36:40 +03:00
Hadi Moshayedi a2c73bef27 Trigger metadata sync at transaction commit 2020-12-24 08:28:38 -08:00
Hadi Moshayedi 4df723cf9b Do metadata sync in a separate background worker. 2020-12-24 08:25:55 -08:00
Naisila Puka 0bb2c991f9
Merge branch 'master' into issue4237 2020-12-24 18:05:27 +03:00
Ahmet Gedemenli 5af585269a Add separate pg13 test for stats targets 2020-12-24 18:01:25 +03:00
naisila 59a81491e8 Add test for master_create_empty_shard on coordinator 2020-12-24 17:59:40 +03:00
Ahmet Gedemenli d4bc17f6f0 Propagate statistics with altered targets 2020-12-24 17:10:12 +03:00
Ahmet Gedemenli f7c70f9a63 Propagate alter stats target 2020-12-24 17:10:12 +03:00
Ahmet Gedemenli 5a1607b6c0 Propagate alter stats schema 2020-12-24 17:10:12 +03:00
Ahmet Gedemenli bdce4a7e67 Propagate rename statistics 2020-12-24 17:10:12 +03:00
Onur Tirtir 5ed9197041
Implement infra to get foreign key connected relations (#4439)
On top of our foreign key graph, implement the infrastructure to get
the list of relations that are connected to the input relation via the
foreign key graph.
We need this to support cascading create_citus_local_table &
undistribute_table operations.

Also add regression tests to see what our foreign key graph is able to
capture currently.
2020-12-24 16:42:40 +03:00
Onur Tirtir 57e7defa3c
Support CREATE INDEX commands without index name on citus tables (#4273) 2020-12-23 23:15:39 +03:00
Marco Slot e3dcc278e0 Remove upgrade_to_reference_table UDF 2020-12-23 00:40:14 +01:00
jeff-davis 90d63cb792
Add columnar pg_dump test. (#4433) 2020-12-22 15:57:35 -08:00
Ahmet Gedemenli 874fa1fc09 Propagate Drop Statistics 2020-12-22 18:34:46 +03:00
Marco Slot 321cc784c7 Collapse Citus 7.* scripts into Citus 8.0-1 2020-12-21 22:55:51 +01:00
Hadi Moshayedi dde0323b57
Columnar: enable zstd & lz4 compilation by default (#4402)
* Columnar: enable zstd & lz4 compilation by default

* Make zstd & lz4 tests more consistent

* Don't require lz4 & zstd for postgres 11

Co-authored-by: Nils Dijk <nils@citusdata.com>
2020-12-21 12:11:58 -08:00
Onur Tirtir cceaf31e4c
Add some more tests with views to test recursive planning on views (#4427)
(cherry picked from commit 51f422f3c6)
2020-12-21 11:53:37 +03:00
jeff-davis 49281202af
Add simple follower test for columnar. (#4432) 2020-12-18 13:59:20 -08:00
jeff-davis 3e0f1aaaab
Prevent inserting into logically-replicated columnar table. (#4429) 2020-12-18 12:29:30 -08:00
Marco Slot f2056e553f
Expose partition column of subqueries in optimizer (#4355)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2020-12-18 20:32:52 +01:00
SaitTalhaNisanci 145112f3a0
Fix attribute numbers in subquery conversions (#4426)
An attribute number in a subquery RTE and in a relation RTE means different
things. In a relation, the attribute number points to the column number
in the table definition, including dropped columns; however, in a
subquery, it means the index in the target list. When we convert a
relation RTE to a subquery RTE we should either correct all the relevant
attribute numbers or just add a dummy column for the dropped
columns. We choose the latter in this commit because it is practically
too fragile to update all the vars in a query.

Another thing this commit fixes: in case a join restriction
clause list contains a false clause, we should just return a false
clause instead of the whole list, because the whole list will contain
restrictions from other RTEs as well and this breaks the query. This
can be seen from the output changes, which are now much simpler.

Also, instead of adding single tests for dropped columns, we chose to
run all the mixed queries with tables that have dropped columns. This
revealed some bugs already, which are fixed in this commit.
2020-12-18 20:25:41 +03:00
Nils Dijk a748729998
rework ci 2020-12-18 18:04:45 +01:00
Ahmet Gedemenli 770d3da1ca Add dependencies for stat schemas 2020-12-18 17:04:13 +03:00
Ahmet Gedemenli 6c0465566a Propagate create statistics 2020-12-17 20:38:36 +03:00
Marco Slot 1e2518f83c
Add tests for router queries with catalog tables (#4422)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2020-12-17 15:07:50 +01:00
Marco Slot 100e5d3196 Address review feedback 2020-12-15 15:23:38 +01:00
Marco Slot 23dccd8941 Add some new tests for complex correlated subqueries in WHERE 2020-12-15 14:17:16 +01:00
Marco Slot 707a6554b1 Support co-located/recurring correlated subqueries 2020-12-15 14:17:16 +01:00
Sait Talha Nisanci 181a7e1d36 Skip dropped columns 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 7951273f74 Refactor WrapRteRelationIntoSubquery 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 0e53aa5d3b Add more tests 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci c4d3927956 Not allow local table updates with remote citus local tables 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci f5dd5379b2 Add more tests 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci f7c1509fed Not check if the query is routable for converting
It seems that there are only very few cases where that is useful, and
for now we prefer not having that check. This means that we might
perform some unnecessary checks, but that should be rare and not
performance critical.
2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 1d82972ff4 Increase the performance with a trick
Instead of sending NULLs over the network, we now convert the subqueries
in the form of:

SELECT t.a, NULL, NULL FROM (SELECT a FROM table)t;

And we recursively plan the inner part so that we don't send the NULLs
over the network. We still need the NULLs in the outer subquery because we
currently don't have an easy way of updating all the necessary places in
the query.

Add some documentation for how the conversion is done
2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 3aed6c3ad0 Rename containsOnlyLocalTable as isLocalTableModification
Update error message in Modify View
2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 13c43d5744 Improve table conversion logic in dist-local joins 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 5618f3a3fc Use BaseRestrictInfo for finding equality columns
Baseinfo also has pushed down filters etc, so it makes more sense to use
BaseRestrictInfo to determine what columns have constant equality
filters.

Also RteIdentity is used for removing conversion candidates instead of
rteIndex.
2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 69992d58f9 Add broken local-dist table modifications tests
It seems that most of the updates were broken; we weren't aware of it
because there wasn't any data in the tables. They are broken mostly
because local tables do not have a shard id and some code paths should
be updated with that information; currently, when there is an invalid
shard id, it is assumed to be pruned.

Consider local tables in router planner

In case there is a local table, the shard id will not be valid and there
are some checks that rely on the shard id. We should skip these in case of
local tables, which is handled with a dummy placement.

Add citus local table dist table join tests

add local-dist table mixed joins tests
2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 2a44029aaf Simplify ContainsTableToBeConvertedToSubquery
AllDataLocallyAccessible and ContainsLocalTableSubqueryJoin are removed.
We can possibly remove ModifiesLocalTableWithRemoteCitusLocalTable as
well. This removal has a side effect: now, even when all the data
is locally available, we could still wrap a relation into a subquery. That
should probably be resolved in the router planner itself.

Add more tests
2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 26d9f0b457 Use auto mode in tests and fix debug message 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 3bd53a24a3 Support update on postgres table from citus local table 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 4b6611460a Support foreign table joins as well 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 7e9204eba9 Update vars in quals while wrapping RTE to subquery
When we wrap an RTE in a subquery we update the variables' varnos to
1; however, we should also update the varnos of vars in quals.

Also some other small code quality improvements are done.
2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 0689f2ac1a Recursively plan distributed tables only if all have unique filters
The previous algorithm was not consistent and could convert different
RTEs based on the table order in the query. Now we convert local tables
if there is a distributed table which doesn't have a unique index. So if
there are 4 tables, local1, local2, dist1, dist2_with_pkey then we will
convert local1 and local2 in `auto` mode. Converting a distributed table
is not that logical because as there is a distributed table without a
unique index, we will need to convert the local tables anyway. So
converting the distributed table with pkey is redundant.
2020-12-15 18:17:10 +03:00
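
A sketch of the behaviour with the hypothetical tables named in the message (local1, local2, dist1, dist2_with_pkey), assuming the policy GUC introduced in this series is citus.local_table_join_policy:

    SET citus.local_table_join_policy TO 'auto';
    -- local1 and local2 get wrapped into subqueries, because dist1 has no
    -- unique index that would guarantee at most one matching row
    SELECT *
    FROM local1
    JOIN local2 USING (a)
    JOIN dist1 USING (a)
    JOIN dist2_with_pkey USING (a);
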
Sait Talha Nisanci a008fc611c Support materialized view joins as well 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 5f46abffd9 Update check multi tests 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 3fe3c55023 Use ShouldConvertLocalTableJoinsToSubqueries
Remove FillLocalAndDistributedRTECandidates and use
ShouldConvertLocalTableJoinsToSubqueries, which simplifies things as we
rely on a single function to decide whether we should continue
converting RTE to subquery.
2020-12-15 18:17:10 +03:00
Sait Talha Nisanci eebcd995b3 Add some more tests 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 5693cabc41 Not convert an already routable plannable query
We should not recursively plan an already router-plannable query. An
example of this is (SELECT * FROM local JOIN (SELECT * FROM dist) d1
USING(a));

So we let the recursive planner do all of its work and at the end we
convert the final query to handle unsupported joins. While doing each
conversion, we check if it is router plannable; if so, we stop.

Only consider range table entries that are in jointree

If a range table is not in the jointree then there is no point in
considering it, because we are trying to convert range table entries to
subqueries for the join use case.
2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 2ff65f3630 Enable partitioned distributed tables in local-dist table joins 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 44953579cf Enable citus-local distributed table joins
Check equality in quals

We want to recursively plan distributed tables only if they have an
equality filter on a unique column. So '>' and '<' operators will not
trigger recursive planning of distributed tables in local-distributed
table joins.

Recursively plan distributed table only if the filter is constant

If the filter is not a constant then the join might return multiple rows
and there is a chance that the distributed table will return a huge amount
of data. Hence, if the filter is not constant, we choose to recursively plan the
local table.
2020-12-15 18:17:10 +03:00
Sait Talha Nisanci f3d55448b3 Choose distributed table if it has a unique index in filter
When doing local-distributed table joins we convert one of them to
a subquery. The current policy is that we convert the distributed table to
a subquery if it has a unique index on a column (a primary key also counts
as a unique index).
2020-12-15 18:17:10 +03:00
Onder Kalaci f0aef67ed2 Update existing regression tests 2020-12-15 18:17:10 +03:00
Onder Kalaci 3f4952cc2b Pushdown projections when relations are recursively planned
This is important to limit the data transfer size.
2020-12-15 18:17:10 +03:00
Onder Kalaci 945193555b add basic regression tests 2020-12-15 18:17:10 +03:00
Onder Kalaci 594e001f3b Add filter pushdown regression tests
Also handle WHERE false
2020-12-15 18:17:10 +03:00
Onder Kalaci 82a4830c7d Adjust the existing regression tests 2020-12-15 18:17:10 +03:00
Marco Slot f2538a456f Support co-located/recurring sublinks in the target list 2020-12-13 15:45:24 +01:00
Marco Slot 8e8adcd92a Harden citus_tables against node failure 2020-12-13 15:10:40 +01:00
Hadi Moshayedi 4dd22cc4e4 Columnar: Fix ANALYZE for large number of rows. 2020-12-10 09:52:33 -08:00
Hadi Moshayedi b3dac5e9d1 Columnar: set default compression as zstd if available 2020-12-09 14:32:08 -08:00
Hadi Moshayedi 4668fe51a6 Columnar: Make compression level configurable 2020-12-09 08:48:50 -08:00
Hadi Moshayedi f5a4a4bc74 Columnar: Support zstd compression 2020-12-09 08:30:55 -08:00
Hadi Moshayedi 3f81ee26fd Columnar: Support LZ4 compression 2020-12-09 08:29:07 -08:00
jeff-davis 260a02180b
Add tests for unsupported columnar storage features (#4397)
Add negative tests:
 * Deletes
 * Sample scan
 * Special columns
 * Tuple locks
 * Indexes
2020-12-09 00:08:45 -08:00
Jeff Davis c91e5b052b more test fixups 2020-12-07 13:43:27 -08:00
Jeff Davis 7169ba21c4 more test fixes 2020-12-07 13:36:46 -08:00
Jeff Davis e26fdeb706 fixup tests some more 2020-12-07 13:22:16 -08:00
Jeff Davis 5b3c32eb38 fixup tests 2020-12-07 13:18:22 -08:00
Jeff Davis 068af7f38e fixup upgrade tests 2020-12-07 13:11:51 -08:00
Jeff Davis 3758e83850 Rename cstore->columnar in SQL objects and errors. 2020-12-07 13:01:53 -08:00
Jeff Davis ad919ff220 Tests for UPDATE and error message improvement.
UPDATEs on partitioned tables that affect only row partitions should
succeed; the rest should fail.

Also rename CStoreScan to ColumnarScan to make the error message more
relevant.
2020-12-07 11:25:30 -08:00
Ahmet Gedemenli 936775e8e3 Delete transactions when removing node
With this commit, we delete entries in pg_dist_transaction
for the primary nodes that are removed by `master_remove_node`.
2020-12-07 11:35:20 +03:00
Hadi Moshayedi 01da2a1c73 Columnar: track decompressed length in metadata 2020-12-04 09:09:39 -08:00
Onder Kalaci bd9827aed9 Add regression tests with different data types
We typically do not test Citus with these uncommon
data types. We already have these tests for ADF
integration; add them to the regression tests as well.
2020-12-04 10:25:00 +03:00
Hadi Moshayedi 4a9aebaa7b Columnar: rename block to chunk 2020-12-03 08:50:19 -08:00
Hadi Moshayedi 24bfd368a9 Columnar: Fix VACUUM for empty tables 2020-12-03 08:46:09 -08:00
Marco Slot c9b658daea Add a public.citus_tables view 2020-12-03 17:31:40 +01:00
Marco Slot 4098d33acb Allow citus size functions on replicated tables 2020-12-03 16:33:24 +01:00
Marco Slot c69ea2512a Fix flappy failure test 2020-12-03 13:54:02 +01:00
Onder Kalaci c546ec5e78 Local node connection management
When Citus needs to parallelize queries on the local node (e.g., the node
executing the distributed query and the shards are the same), we need to
be mindful about the connection management. The reason is that the client
backends that are running distributed queries are competing with the client
backends that Citus initiates to parallelize the queries in order to get
a slot on the max_connections.

In that regard, we implemented a "failover" mechanism where if the distributed
queries cannot get a connection, the execution failovers the tasks to the local
execution.

The failover logic is as follows:

- Ask the connection manager if it is OK to get a connection
	- If yes, we are good.
	- If no, we fail the workerPool and the failure triggers
	  the failover of the tasks to local execution queue

The decision to get a connection is made as follows:

/*
 * For local nodes, solely relying on citus.max_shared_pool_size or
 * max_connections might not be sufficient. The former gives us
 * a preview of the future (e.g., we let the new connections to establish,
 * but they are not established yet). The latter gives us the close to
 * precise view of the past (e.g., the active number of client backends).
 *
 * Overall, we want to limit both of the metrics. The former limit typically
 * kicks in under regular loads, where the load of the database increases at
 * a reasonable pace. The latter limit typically kicks in when the database
 * is issued lots of concurrent sessions at the same time, such as benchmarks.
 */
2020-12-03 14:16:13 +03:00
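
The two limits contrasted in the comment can be inspected directly; a minimal sketch (names only, no recommended values):

    SHOW citus.max_shared_pool_size;  -- forward-looking limit Citus enforces itself
    SHOW max_connections;             -- bounds the active client backends Postgres allows
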
Hadi Moshayedi c2f60b6422
Columnar: pg_upgrade support (#4354) 2020-12-02 08:46:59 -08:00
Ahmet Gedemenli 5242dcfe99 Add tests for propagating alter schema rename 2020-12-02 15:18:26 +03:00
Nils Dijk 6f9c040f76
DESCRIPTION: Propagate columnar table settings for distributed tables
When distributing a columnar table, as well as changing options on a distributed columnar table, this patch will forward the settings from the coordinator to the workers.

For propagating options changes on an already distributed table this change is pretty straightforward. Before applying the change in options locally we create a `DDLJob` that contains a call to `alter_columnar_table_set(...)` for every shard placement, with all settings of the current table. This goes both for setting an option as well as resetting one, which resets the values to the defaults configured on the coordinator. This has the effect that the coordinator is authoritative on the settings and makes sure the shards have the same settings set as the table on the coordinator.

When a columnar table is distributed it uses the `TableDDLCommand` infrastructure to create a new kind of `TableDDLCommand`. This new type, called a `TableDDLCommandFunction`, contains a context and two function pointers to execute. One function returns the command as applied on the table; the second function returns the sql command to apply to a shard with a given shard id. The schema name is ignored as it will use the fully qualified name of the shard in the same schema as the base table.
2020-12-02 13:02:42 +01:00
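
A sketch of the round trip described above, with a hypothetical table; changing or resetting an option on the coordinator re-applies the coordinator's settings on every shard placement via alter_columnar_table_set:

    CREATE TABLE measurements (ts timestamptz, value double precision)
        USING columnar;
    SELECT create_distributed_table('measurements', 'value');
    -- forwarded to all shard placements as a DDLJob
    SELECT alter_columnar_table_set('measurements',
                                    compression => 'zstd',
                                    stripe_row_limit => 150000);
    -- resets back to the defaults configured on the coordinator
    SELECT alter_columnar_table_reset('measurements', compression => true);
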
Halil Ozan Akgül ef0914a7f8
Adds ORDER BY to flaky test (#4305)
Co-authored-by: Önder Kalacı <onder@citusdata.com>
2020-12-02 14:24:05 +03:00
Onder Kalaci f7e1aa3f22 Multi-row INSERTs use local execution when placements are local
Multi-row execution already uses sequential execution. When shards
are local, using local execution is profitable as it avoids
an extra connection establishment to the local node.
2020-12-01 21:37:59 +03:00
Marco Slot 04cffdd925 Run master_copy_shard_placement separately 2020-11-30 20:34:03 +01:00
Marco Slot 48caca4084 Improve regression test settings 2020-11-30 20:34:03 +01:00
Ahmet Gedemenli 8e5f0487eb Add order by for flaky test 2020-12-01 10:54:52 +03:00
Ahmet Gedemenli 67761897ab Add test for citus table size func in transaction with modification
Add test for citus_relation_size
2020-12-01 10:38:15 +03:00
Hadi Moshayedi feecb7b423
Columnar: few fixes (#4371)
* Columnar: fix a memory issue

* Columnar: no need for deferred triggers

* Columnar: relax memory growth constraints
2020-11-30 18:09:43 -08:00
Hadi Moshayedi a94e8c9cda
Associate column store metadata with storage id (#4347) 2020-11-30 18:01:43 -08:00
Marco Slot ecbc1ab008 Run subquery_prepared_statements by itself 2020-11-30 08:53:06 +01:00
Sait Talha Nisanci 8b0aed521f Isolate join test
The join test gets "too many clients" errors too frequently, hence we should
not run anything concurrently with it. Hopefully this will fix the
flakiness of the test.
2020-12-01 00:00:17 +03:00
SaitTalhaNisanci c31a8df380
Call 6 times not 7 in subquery_prepared_statements (#4357) 2020-11-30 21:20:51 +03:00
SaitTalhaNisanci 8c3dd6338e
Run pg12 and pg13 separately (#4352)
It seems that sometimes we get `too many clients` errors with this set
of parallel tests, hence two of them are separated.
2020-11-30 19:32:49 +03:00
Hadi Moshayedi 7f43804dae
Normalize VACUUM VERBOSE output (#4353)
This is to avoid flaky changes like the following in test outputs:

-CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
+CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.02 s.
2020-11-27 12:07:25 -08:00
Nils Dijk 383e334023
refactor options to their own table linked to the regclass (#4346)
Columnar options were by accident linked to the relfilenode instead of the regclass/relation oid. This PR moves everything related to columnar options to their own catalog table.
2020-11-27 11:22:08 -08:00
Onder Kalaci 629ecc3dee Add the infrastructure to count the number of client backends
Considering the adaptive connection management
improvements that we plan to roll out soon, it is
very helpful to know the number of active client
backends.

We are doing this addition to simplify the adaptive connection
management for single node Citus. In single node Citus, both the
client backends and Citus parallel queries would compete to get
slots on Postgres' `max_connections` on the same Citus database.

With adaptive connection management, we have the counters for
Citus parallel queries. That helps us to adaptively decide
on the remote executions pool size (e.g., throttle connections
if necessary).

However, we do not have any counters for the total number of
client backends on the database. For single node Citus, we
should consider all the client backends, not only the remote
connections that Citus does.

Of course Postgres internally knows how many client
backends are active. However, to get that number Postgres
iterates over all the backends. For example, see [pg_stat_get_db_numbackends](8e90ec5580/src/backend/utils/adt/pgstatfuncs.c (L1240))
where Postgres iterates over all the backends.

For our purposes, we need this information on every connection
establishment. That's why we cannot afford to do this kind of
iteration.
2020-11-25 19:19:24 +01:00
Ahmet Gedemenli a64dc8a72b Fixes a bug preventing INSERT SELECT .. ON CONFLICT with a constraint name on local shards
Separate search relation shard function

Add tests
2020-11-25 15:10:46 +03:00
Onder Kalaci 7accbff3f6 Do not cache all the distributed table metadata during CitusTableTypeIdList()
CitusTableTypeIdList() function iterates on all the entries of pg_dist_partition
and loads all the metadata in to the cache. This can be quite memory intensive
especially when there are lots of distributed tables.

When partitioned tables are used, it is common to have many distributed tables
given that each partition also becomes a distributed table.

CitusTableTypeIdList() is used on every CREATE TABLE .. PARTITION OF.. command
as well. It means that, anytime a partition is created, Citus loads all the
metadata to the cache. Note that Citus typically only loads the accessed table's
metadata to the cache.
2020-11-24 17:44:06 +01:00
Önder Kalacı c760cd3470
Move local execution after remote execution (#4301)
* Move local execution after the remote execution

Before this commit, when both local and remote tasks
exist, the executor was starting the execution with
local execution. There are no strict requirements on
this.

Especially considering the adaptive connection management
improvements that we plan to roll out soon, moving the local
execution after the remote execution makes more sense.

The adaptive connection management for single node Citus
would look roughly as follows:

   - Try to connect back to the coordinator for running
     parallel queries.
        - If succeeds, go on and execute tasks in parallel
        - If fails, fallback to the local execution

So, we'll use local execution as a fallback mechanism. And
moving it after the remote execution allows us to implement
such further scenarios.
2020-11-24 13:43:38 +01:00
Hadi Moshayedi 40b52ab757 Fix memory leaks in column store 2020-11-23 11:26:12 -08:00
Jeff Davis ba6ec610e2 address review comment 2020-11-20 10:03:12 -08:00
Jeff Davis 8cee2b092b remove columnar FDW code 2020-11-20 10:03:12 -08:00
Onder Kalaci c433c66f2b Do not execute subplans multiple times with cursors
Before this commit, we let AdaptiveExecutorPreExecutorRun()
be effective multiple times on every FETCH on cursors.
That does not affect the correctness of the query results,
but adds significant overhead.
2020-11-20 10:43:56 +01:00
Hadi Moshayedi b182a95389 Fix ALTER COLUMN ... SET TYPE for columnar 2020-11-19 15:36:45 -08:00
Jeff Davis cef1d0e915 fixup test output 2020-11-19 12:45:52 -08:00
Jeff Davis 91015deb9d rename UDFs also 2020-11-19 12:27:40 -08:00
Jeff Davis a2b698a766 rename cstore_tableam -> columnar 2020-11-19 12:15:51 -08:00
SaitTalhaNisanci 9c44911226
Improve error messages in shard pruning (#4324) 2020-11-18 17:16:06 +03:00
Hadi Moshayedi 2747fd80ff Add prepared materialized view tests for columnar 2020-11-17 20:13:20 -08:00
Hadi Moshayedi 6711340ea6 Add prepared xact & stmt tests for columnar 2020-11-17 20:00:57 -08:00
Hadi Moshayedi 97cba2d5b6 Implements write state management for tuple inserts.
The TableAM API doesn't allow us to pass around a state variable across all of the tuple inserts belonging to the same command. We require this in the columnar store, since we batch them, and when we have enough rows we flush them as stripes.

To do that, we keep a (relfilenode) -> stack of (subxact id, TableWriteState) global mapping.

**Inserts**

Whenever we want to insert a tuple, we look up the relation's relfilenode in this mapping. If the top of the stack matches the current subtransaction, we use the existing TableWriteState. Otherwise, we allocate a new TableWriteState and push it on top of the stack.

**(Sub)Transaction Commit/Aborts**

When the subtransaction or transaction is committed, we flush and pop all entries matching the current SubTransactionId.

When the subtransaction or transaction is aborted, we pop all entries matching the current SubTransactionId and discard them without flushing.

**Reads**

Since we might have unwritten rows which need to be read by a table scan, we flush write states on SELECTs. Since flushing the write state of upper transactions in a subtransaction would cause metadata to be written in the wrong subtransaction, we ERROR out if any of the upper subtransactions have unflushed rows.

**Table Drops**

We record in which subtransaction the table was dropped. When committing a subtransaction in which the table was dropped, we propagate the drop to the upper transaction. When aborting a subtransaction in which the table was dropped, we mark the table as not deleted.
2020-11-17 12:07:16 -08:00
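A small SQL sketch of the behavior described above, assuming the columnar table access method is available and using a hypothetical table named items:

```sql
-- hypothetical example of the write state life cycle
CREATE TABLE items (i int) USING columnar;

BEGIN;
SAVEPOINT sp1;
INSERT INTO items VALUES (1);   -- a TableWriteState is pushed for this subtransaction
RELEASE SAVEPOINT sp1;          -- subtransaction commit: matching entries are flushed

SAVEPOINT sp2;
INSERT INTO items VALUES (2);
ROLLBACK TO SAVEPOINT sp2;      -- subtransaction abort: entries are discarded, not flushed

SELECT count(*) FROM items;     -- reads flush any pending write state first
COMMIT;
```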
Nils Dijk 725f4a37d0
change configure to not have options 2020-11-17 19:01:54 +01:00
Nils Dijk 22df8027b0
add extra output for multi_extension targeting pg11 2020-11-17 19:01:54 +01:00
Nils Dijk 7c891a01a9 create missing objects during upgrade path 2020-11-17 19:01:51 +01:00
Nils Dijk 2987535172
add pg upgrade tests verifying table am is created 2020-11-17 18:55:36 +01:00
Nils Dijk d065bb495d
Prepare downgrade script and bump development version to 10.0-1 2020-11-17 18:55:35 +01:00
Nils Dijk b6d4a1bbe2
fix style 2020-11-17 18:55:35 +01:00
Nils Dijk 3bb6554976
make tests run 2020-11-17 18:55:35 +01:00
Nils Dijk f89bd3eeb5
move columnar test files 2020-11-17 18:55:34 +01:00
SaitTalhaNisanci 34de1f645c
Update failure test dependencies (#4284)
* Update failure test dependencies

There was a security alert for cryptography. The vulnerability was fixed
in 3.2.0. The vulnerability:

"RSA decryption was vulnerable to Bleichenbacher timing vulnerabilities,
which would impact people using RSA decryption in online scenarios."

The fix:
58494b41d6

It wasn't enough to only update cryptography because mitmproxy was
incompatible with the new version, so mitmproxy is also upgraded.

The steps to run locally:
python -m pip install -U cryptography
python -m pip install -U mitmproxy
2020-11-17 19:16:08 +03:00
Onur Tirtir 5e3dc9d707 Bump citus version to 10.0devel 2020-11-09 13:16:54 +03:00
Onur Tirtir 5d5966f700
Fix a flaky test in mixed_relkind_tests (#4300) 2020-11-06 14:53:30 +03:00
Onder Kalaci e0d2ac7620 Do not rely on set_rel_pathlist_hook for finding local relations
When a relation is used on an OUTER JOIN with FALSE filters,
set_rel_pathlist_hook may not be called for the table.

There might be other cases as well, so do not rely on the hook
for classification of the tables.
2020-11-06 11:14:30 +01:00
Onur Tirtir 0556952607
Normalize partitioned table aliases in explain output (#4295)
Aliases that postgres chooses for partitioned tables in explain output
might change across pg versions, so normalize them and remove
the alternative test output
2020-11-06 10:44:01 +03:00
Onur Tirtir d912d4bc38
Print full file path in valgrind testing (#4299) 2020-11-06 10:26:53 +03:00
Onur Tirtir cc8be422ce
Fix relkind checks in planner for relkinds other than RELKIND_RELATION (#4294)
We were treating relations with relkind != RELKIND_RELATION as
non-relations due to the strict checks around RangeTblEntry->relkind
in the planner.
2020-11-05 14:21:02 +03:00
Hanefi Önaldı d6f19e2298
Honor error message conventions 2020-11-03 18:11:18 +03:00
Hanefi Önaldı 85a4b61a0e
Prevent undistribute_table calls for partitions 2020-11-03 18:10:20 +03:00
Hanefi Önaldı 5db380f33a
Prevent undistribute_table calls for foreign tables 2020-11-03 17:33:29 +03:00
Halil Ozan Akgul 77b3be8b6d Reduce RelOptInfos to their only used field, relids, so they can be copied 2020-10-22 13:42:28 +03:00
Onur Tirtir 790beea59f
Add intermediate result tests with unsupported outer joins (#4262) 2020-10-20 12:11:18 +03:00
SaitTalhaNisanci 0f209377c4
Fix incorrect join related fields (#4242)
* Fix incorrect join related fields

Ruleutils expects to be given the original index of join columns, hence we
should consider the dropped columns while setting the fields in
SetJoinRelatedFieldsCompat.

* add some more tests for joins

* Move tests to join.sql and create a utility function
2020-10-19 18:28:39 +03:00
Onur Tirtir c49077d594
Disallow outer joins `ON TRUE` with ref & dist tables when ref table is outer relation (#4255)
Disallow `ON TRUE` outer joins with reference & distributed tables
when reference table is outer relation by fixing the logic bug made
when calling `LeftListIsSubset` function.

Also, be more defensive when removing duplicate join restrictions
when join clause is empty for non-inner joins as they might still
contain useful information for non-inner joins.
2020-10-19 16:58:11 +03:00
Onder Kalaci bbedfca761 Improve the relation restriction counters
It seems like Postgres could call set_rel_pathlist() for
the same relation multiple times. This breaks the logic
where we assume relationCount equals the number of
entries in relationRestrictionList.

In summary, relationRestrictionList may contain duplicate
entries.
2020-10-19 08:51:16 +02:00
Hadi Moshayedi 663549db33 Set explicit transfer_mode in tableam tests 2020-10-16 12:40:37 -07:00
Nils Dijk caabbf4b84 Table access method support for distributed tables 2020-10-16 12:02:25 -07:00
Marco Slot 8976f245ab Support reference table view in reference table modification 2020-10-16 11:31:24 +02:00
Onder Kalaci 596f7bf4a9 Add more regression test for single node Citus
Tests on commands with SCHEMA.
2020-10-15 17:32:32 +02:00
Onder Kalaci fe3caf3bc8 Local execution considers intermediate result size limit
With this commit, we make sure that local execution accounts for the
intermediate result size just as distributed execution does. Plus,
it enforces the citus.max_intermediate_result_size value.
2020-10-15 17:18:55 +02:00
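A small sketch of the enforced limit, using hypothetical table names (events, users); the cap now applies whether the subplan result is produced by local or remote execution:

```sql
-- hypothetical example: cap materialized subplan results at 1MB
SET citus.max_intermediate_result_size TO '1MB';

-- when the CTE cannot be pushed down it is materialized as an intermediate
-- result, and exceeding the limit above errors out
WITH recent AS (
    SELECT user_id FROM events ORDER BY created_at DESC LIMIT 100000
)
SELECT count(*) FROM recent JOIN users USING (user_id);
```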
Marco Slot 31858c8a29 Check table existence in EnsureRelationKindSupported 2020-10-15 17:05:06 +02:00
Onder Kalaci 15e724c073 Add regression tests for outer/cross JOINs 2020-10-14 15:17:30 +02:00
Onder Kalaci de33079065 Improve outer join checks
Before this commit, the logic was:
    - As long as the outer side of the JOIN is not a JOIN (e.g., relation
      or subquery etc.), we check for the existence of any recurring
      tuples. There were two implications of this decision.

      First, even if a subquery on the outer side contains a
      distributed table JOINed with a reference table, Citus would unnecessarily throw
      an error. Note that the JOIN inside the subquery is already
      going to be tested recursively. As long as that check
      passes, there is no reason for the upper JOIN to fail. An example, which
      used to fail and now works:

	SELECT * FROM (SELECT * FROM dist JOIN ref) as foo LEFT JOIN dist;

      Second, certain JOINs, especially those with ON (true) conditions, were not
      represented in the format that
      DeferredErrorIfUnsupportedRecurringTuplesJoin() expects.
2020-10-14 15:17:30 +02:00
Onur Tirtir 1a28858c47
Disallow field indirection in INSERT/UPDATE queries (#4241) 2020-10-14 14:11:59 +03:00
Onur Tirtir 8efca3b60a
Fix a crash with inserting domain composite types in coord. evaluation (#4231)
Use a short-lived per-tuple context in citus_evaluate_expr like
(pg) evaluate_expr does.

We should not use planState->ExprContext when evaluating expressions,
as it might lead to freeing the same executor twice (the first happens
in citus_evaluate_expr itself and the other happens when postgres
does clean-up for the top-level executor state), which in turn might
cause segfaults.

However, as we now don't have the necessary planState info to evaluate
prepared statements, we also add planState->es_param_list_info to the
per-tuple ExprContext.
2020-10-13 14:19:59 +03:00
Halil Ozan Akgul e2736c25bd Adds support for WITH TIES option 2020-10-12 19:34:18 +03:00
Sait Talha Nisanci dc40758355 Return early if there is no citus table in VACUUM 2020-10-09 11:10:00 +03:00
Sait Talha Nisanci 99bb79745a Commit transaction for VACUUM on shell table
With postgres 13, there is a global lock that prevents multiple VACUUMs
from happening in the current database. This global lock is taken for a short
time, but this creates a problem because of the following:

- We execute the VACUUM for the shell table through the standard process
utility. In this step the global lock is taken for the current database.
- If the current node has shard placements then it tries to execute
VACUUM over a connection to localhost with ExecuteUtilityTaskList.
- The VACUUM on shard placements cannot proceed because it is waiting
for the global lock for the current database to be released.
- The acquired lock from the VACUUM for shell table will not be released
until the transaction is committed.
- So there is a deadlock.

As a solution, we commit the current transaction in case of VACUUM after
the VACUUM is executed for the shell table. Executing the VACUUM on a
shell table is not important because the data there will probably be
truncated. PostprocessVacuumStmt takes the necessary locks on the shell
table so we don't need to take any extra locks after we commit the
current transaction.
2020-10-09 10:57:44 +03:00
Marco Slot 881e5df780 Fix a bug that could lead to multiple maintenance daemons 2020-10-08 16:18:14 +02:00
Marco Slot 18219843d0 Add maintenance daemon error tests 2020-10-08 16:17:33 +02:00
Marco Slot dbc348b7e0 Create sequence dependency during metadata syncing 2020-10-06 10:57:39 +02:00
Marco Slot 9bba8bb4e8 Remove master_drop_sequences 2020-10-06 10:57:33 +02:00
Sait Talha Nisanci 078dcae18c Write settings to postgres configuration file directly
In our test structure, we have been passing postgres configurations from
the terminal, which causes problems once the command hits a certain length:
the server cannot start, and understanding why it failed is not
easy because there isn't a nice error message.

This commit changes this to write the settings directly to the postgres
configuration file. This way we can add as many postgres settings as we
want to without needing to worry about the length problem.
2020-10-05 22:09:08 +03:00
Onur Tirtir 2cd0a69dfb
Fix multi-row & router INSERT crash with local exec. when def. cols not specified (#4197)
Multi-row & router INSERTs were crashing with local execution if at
least one of the DEFAULT columns was not specified in the VALUES list.

This was because the changes we make to query->values_lists and
query->targetList were sufficient for deparsing the given INSERT for remote
execution, but not sufficient for local execution.

With this commit, DEFAULT value normalization for multi-row & router
INSERTs is fixed by adding dummy column references for unspecified
DEFAULT columns.
2020-10-05 10:45:17 +03:00
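A minimal sketch of the crashing pattern, using a hypothetical table named dist; the DEFAULT columns are intentionally left out of the VALUES list:

```sql
-- hypothetical example: a distributed table with DEFAULT columns
CREATE TABLE dist (key int, value int DEFAULT 42, created_at timestamptz DEFAULT now());
SELECT create_distributed_table('dist', 'key');

-- multi-row INSERT omitting the DEFAULT columns; with local execution
-- (e.g. on a node that holds the shards) this used to crash
INSERT INTO dist (key) VALUES (1), (2), (3);
```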
Hanefi Önaldı 6d8e83d24f
Replace worker_hash calls with partkey IS NOT NULL filters 2020-10-02 18:16:24 +03:00
Önder Kalacı df5aa0f0cc
Switch to sequential execution if the index name is long (#4209)
Citus has logic to truncate long shard names to prevent
various issues, including self-deadlocks. However, for partitioned
tables, when an index is created on the parent table, the index names
on the partitions are auto-generated by Postgres. We use the same
Postgres function to generate the index names on the shards of the
partitions. If the length exceeds the limit, we switch to sequential
execution mode.
2020-10-02 13:39:34 +03:00
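A small sketch of the scenario, reusing the hypothetical partitioned events table from an earlier example; the partition-level index names are generated by Postgres, and when a shard-level name would exceed the length limit Citus falls back to sequential execution:

```sql
-- hypothetical example: a long index name on a distributed, partitioned table
CREATE INDEX event_created_at_with_a_deliberately_long_index_name_idx
    ON events (created_at);
```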
Ahmet Gedemenli 70e9edb4f2 Add subplan test with insert 2020-10-01 13:58:55 +03:00
Jelte Fennema 13ef8252e7 Add broken distributed subplan test 2020-10-01 13:52:42 +03:00
Ahmet Gedemenli 3357eea46b Add regression tests for PG13 WAL 2020-10-01 13:52:42 +03:00
Hanefi Önaldı 9ec85f1283
Remove some pgoptions to prevent hitting bash command character limits 2020-09-30 15:04:40 +03:00
Hanefi Önaldı b0a2c1ee5c
Disallow volatile functions on single shard update queries
We currently do not support volatile functions in update/delete statements
because the function evaluation logic does not know how to distinguish
volatile functions (that need to be evaluated per row) from stable functions
(that need to be evaluated per query), and it is also not safe to push the
volatile functions down on replicated tables.
2020-09-29 15:40:21 +03:00
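A minimal sketch of a rejected statement, using the hypothetical dist table from an earlier example; random() is volatile, so the modification errors out even though it prunes to a single shard:

```sql
-- hypothetical example: volatile function in a single-shard UPDATE
UPDATE dist
SET value = (random() * 100)::int   -- would need per-row evaluation; rejected
WHERE key = 1;
```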
Marco Slot b905c8043d Fix create index concurrently crash with local execution 2020-09-25 11:49:09 +02:00
Ahmet Gedemenli abfb79bda6 Sort explain analyze output by task time
Add sort method parameter for regression tests

Fix check-style

Change sorting method parameters to enum

Polish

Add task fields to OutTask

Add test into multi_explain

Fix isolation test
2020-09-24 11:38:40 +03:00
Onur Tirtir 64d5ac6a10
Do not downgrade if a citus local table exists (#4174)
As previous versions of Citus don't know how to handle citus local
tables, we should prevent downgrading from 9.5 to older versions if any
citus local tables exist.
2020-09-22 14:19:50 +03:00
Onder Kalaci 5d017cd123 Improve node metadata when coordinator is added
The coordinator should always be active, hasmetadata and
metadatasynced. Prevent changing those fields.
2020-09-21 14:53:41 +02:00
Onder Kalaci 6fc1dea85c Improve the robustness of function call delegation
Pushing down CALLs to the node on which the CALL is being executed is
dangerous and could lead to infinite recursion.

When the coordinator was added as a worker, Citus was by chance preventing
this. The coordinator was marked as a "not metadatasynced" node
in pg_dist_node, which prevented CALL/function delegation from happening.

With this commit, we do the following:

  - Fix metadatasynced column for the coordinator on pg_dist_node
  - Prevent pushdown of function/procedure to the same node that
    the function/procedure is being executed. Today, we do not sync
    pg_dist_object (e.g., distributed functions metadata) to the
    worker nodes. But, even if we do it now, the function call delegation
    would prevent the infinite recursion.
2020-09-21 14:53:30 +02:00
Onur Tirtir 1b31b22635 Refactor the functions that return OID lists for citus tables 2020-09-18 16:42:46 +03:00
SaitTalhaNisanci dae2c69fd7
Not allow removing a single node with ref tables (#4127)
* Not allow removing a single node with ref tables

We should not allow removing a node if it is the only node in the
cluster and there is data on it. We had this check for distributed
tables but we didn't have it for reference tables.

* Update src/test/regress/expected/single_node.out

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>

* Update src/test/regress/sql/single_node.sql

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2020-09-18 15:35:59 +03:00
Ahmet Gedemenli 1cf11b4632 Shorten insert_select_connection_leak_test 2020-09-18 10:07:15 +03:00
Önder Kalacı 8d3f353746
Add more tests for single node citus - distributetd tables (#4166) 2020-09-17 17:50:35 +02:00
Marco Slot c9d46c618b Fix EXPLAIN ANALYZE truncation 2020-09-17 14:42:21 +02:00
Onur Tirtir d81559b7f8
Use "table" instead of "reference table" in sequential truncate log (#4164)
We might get this debug message for citus local tables as well
2020-09-17 14:37:36 +03:00
Onur Tirtir 4118560b75
Prevent citus local table creation from a catalog table (#4158) 2020-09-15 14:30:48 +03:00
Önder Kalacı e7079d1384
Add orderbys to some tests (#4162) 2020-09-14 16:59:22 +02:00
Marco Slot b82f6ee163 Add tests for distributing catalog tables 2020-09-10 04:46:11 +02:00
Marco Slot bd12555b16 Fix distributing tables owned by extensions 2020-09-10 04:46:11 +02:00
Onur Tirtir 9a56c22917
Add udf tests with citus local tables (#4154) 2020-09-11 12:36:53 +03:00
Onur Tirtir 3a73fba810 Apply planner changes for citus local tables 2020-09-09 11:51:18 +03:00
Onur Tirtir 0b1cc118a9 Adapt other cache entry changes for citus local tables 2020-09-09 11:50:55 +03:00
Onur Tirtir a58a4395ab Extend citus local table utility command support
This commit brings the following features:

* Foreign key support from citus local tables to reference tables
* Foreign key support from reference tables to citus local tables
  (only with RESTRICT & NO ACTION behavior)
* ALTER TABLE ENABLE/DISABLE trigger command support
* CREATE/DROP/ALTER trigger command support

and disallows:
* ALTER TABLE ATTACH/DETACH PARTITION commands
* CREATE TABLE <postgres table> ATTACH PARTITION <citus local table>
  commands
* Foreign keys from postgres tables to citus local tables
  (the other way was already disallowed)

for citus local tables.
2020-09-09 11:50:55 +03:00
Onur Tirtir 17cc810372 Implement "citus local table" creation logic 2020-09-09 11:50:48 +03:00
Onur Tirtir ba208eae4d
Record non-distributed table accesses in local executor (#4139) 2020-09-07 18:19:08 +03:00
Hanefi Önaldı 024d398cd7
Allow distribution of functions that read from reference tables
create_distributed_function(function_name,
                            distribution_arg_name,
                            colocate_with text)

This UDF did not allow a colocate_with parameter when no
distribution_arg_name was supplied. This commit changes the behaviour to
allow a missing distribution_arg_name parameter when the function should
be colocated with a reference table.
2020-09-01 07:28:34 +03:00
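A small sketch of the newly allowed call, with hypothetical names (countries, country_name); the function has no distribution argument, so it is simply colocated with the reference table it reads:

```sql
-- hypothetical example: colocate a function with a reference table
CREATE TABLE countries (code text PRIMARY KEY, name text);
SELECT create_reference_table('countries');

CREATE FUNCTION country_name(p_code text) RETURNS text AS $$
    SELECT name FROM countries WHERE code = p_code;
$$ LANGUAGE sql STABLE;

-- previously this required a distribution_arg_name as well
SELECT create_distributed_function('country_name(text)', colocate_with := 'countries');
```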
Önder Kalacı 983206c5e1
Hide `citus.subquery_pushdown` flag and NOTICE when enabled (#4124)
* Hide citus.subquery_pushdown flag

This flag is dangerous and is likely to let queries
return wrong results.

The flag has a very specific purpose for a very specific
data distribution and query structure. In those cases, when
the flag is set, the user can skip recursive planning altogether
*at their own risk*.

The meaning of the flag is that "I know what I'm doing such that
the query structure/data distribution is on my control, so Citus
can skip many correctness checks".

For regular users, enabling this flag is discouraged. We have to
keep the support only for backward compatibility for some users.

In addition to that, give a NOTICE to discourage new users from
using it.
2020-08-28 14:53:09 +02:00
SaitTalhaNisanci 2459ba6eca
Update docker images (#4122)
* Update and separate test images

The build image was a single one and it would contain pg11, pg12 and
pg13. Now it is separated so that we can build each pg major
independently.

Tags are used as full postgres versions so that we can know which
version we use by looking at the tag. For example exttester:11.9 would
mean we are using pg11.9.

pg11 is updated from 11.5 to 11.9.
pg12 is updated from 12rc to 12.4.

* Ignore memory usage in pg13 explain

* Use citus instead of personal repo
2020-08-26 16:23:59 +03:00
SaitTalhaNisanci 20c39fae9a
Loosen the requirement to pushdown a subquery with ref tables (#4110)
AllTargetExpressionsAreColumnReferences would return false if a query
had an entry that is referencing the outer query. It seems safe to not
have this for non-distributed tables, such as reference tables. We
already have separate checks for other cases such as having limits.
2020-08-14 12:11:15 +03:00
Hadi Moshayedi 7b74eca22d Support EXPLAIN EXECUTE ANALYZE. 2020-08-10 13:44:30 -07:00
Philip Dubé 212ae7163f Fix non deterministic collation test to work with ancient libicu versions
CentOS 7's libicu is too old for und-u-ks-level2

@colStrength=secondary works with both older & newer versions of libicu
2020-08-07 12:34:32 +00:00
Halil Ozan Akgul 375310b7f1 Adds support for table undistribution 2020-08-05 14:36:03 +03:00
Sait Talha Nisanci fe4ac51d8c Normalize Output:.. since it changes with pg13
Fix indentation for better readability
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci d68bfc5687 Improve error for index operator class parameters
The error message when an index has opclassopts is improved and the commit
from the postgres side is also included for future reference.

Also some minor style related changes are applied.
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci 288aa58603 add alternative out for pg13 test 2020-08-04 15:38:13 +03:00
Sait Talha Nisanci d0b0c88920 Changelog: error out if index has opclassopts
Error out if index has opclassopts.

Changelog entry on PG13:
Allow CREATE INDEX to specify the GiST signature length and maximum number of integer ranges (Nikita Glukhov)
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci f7a1971361 Changelog: Alter type options
It seems that we don't support propagating commands related to base
types. Therefore the ALTER TYPE options don't seem to apply to us. I have
added a test to verify that we don't propagate them.

Changelog entry on pg13:
Add ALTER TYPE options useful for extensions, like TOAST and I/O functions control (Tomas Vondra, Tom Lane)
2020-08-04 15:38:11 +03:00
Sait Talha Nisanci 00633165fc Changelog: Test unicode escapes
Unicode escapes work as expected, related tests are added.

Changelog entry on PG13:
Allow Unicode escapes, e.g., E'\u####', U&'\####', to specify any character available in the database encoding, even when the database encoding is not UTF-8 (Tom Lane)
2020-08-04 15:36:30 +03:00
Sait Talha Nisanci 79dcb80140 Changelog: Test IS NORMALIZED for pg13
Tests for is_normalized and normalize are added. One thing that seems
to be due to an existing bug is that when we don't give the second
argument to normalize or is_normalized, which is optional, it crashes,
because in the executor part the expression doesn't have the
default argument.

Changelog entry in PG-13:
Add SQL functions NORMALIZE() to normalize Unicode strings, and IS NORMALIZED to check for normalization (Peter Eisentraut)

Commit on Postgres:
2991ac5fc9b3904ca4582be6d323497d7c3d17c9
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci ebabca16b7 Changelog: Test row suffix notation
It seems that row suffix notation is working fine with our code; a test
is added.

Changelog entry in PG13:
Allow ROW values values to have their members extracted with suffix notation (Tom Lane)
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 275ccd0400 Changelog: Test that alter view rename column works
Changelog entry in PG13:
Add ALTER VIEW syntax to rename view columns (Fujii Masao)
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 920d7211e4 Changelog: Test that we error out for DROP EXPRESSION
PG13 now supports dropping expression from a column such as generated
columns. We error out with this currently.

Changelog entry in postgres:
Add ALTER TABLE clause DROP EXPRESSION to remove generated properties from columns (Peter Eisentraut)
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 87088d92bc Changelog: handle VACUUM PARALLEL option
Postgres 13 added a new VACUUM option, PARALLEL. It is now supported
in our code as well.

Relevant changelog message on postgres:
Allow VACUUM to process indexes in parallel (Masahiko Sawada, Amit Kapila)
2020-08-04 15:18:27 +03:00
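A one-line sketch, assuming a distributed table named events; the PARALLEL option is presumably forwarded to the shard-level VACUUMs like the other supported options:

```sql
-- hypothetical example: PG13's PARALLEL option on a citus table
VACUUM (PARALLEL 4, VERBOSE) events;
```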
Sait Talha Nisanci 1070828465 update cte inline output for pg13
Make some macros in version_compat more robust
Remove commented code in ruleutils
Remove unnecessary variable assignments
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 157af140e4 ignore concurrent root page split debugs 2020-08-04 15:18:27 +03:00
Sait Talha Nisanci ff7a563c57 decrease log level to debug1 to prevent flaky debug 2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 6ff4e42706 Add alternative output for multi_function_in_join
With pg13, constant functions in the FROM clause are replaced. This
means that on the citus side, we will see the constants in the restriction
info, instead of the function call. For example:

SELECT * FROM table1 JOIN add(3,5) sum ON (id = sum) ORDER BY id ASC;

Assuming that the function `add` returns a constant, it will be evaluated
on the postgres side. This means that this query will be routable because
there will be only one shard after pruning with the restrictions.
However, before pg13 this would be a multi-shard query. It would go
into recursive planning, and the function would be evaluated on the
coordinator because it can be.

This means that with pg13, users will need to distribute the function,
because when the query is router executable, Citus currently also sends the
function call to the worker as part of the query. So the function should exist
on the worker.

It could be better to replace the constant in the query tree as well, so
that the query string sent to the worker has the constant value and
therefore doesn't need the function. However, I feel like users would
already have the function on the workers if they have any multi-shard query.

Commit on Postgres side:
7266d0997dd2a0632da38a594c78e25ff21df67e
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci a34a1126ec add alternative output for pg13 in some tests 2020-08-04 15:18:27 +03:00
Sait Talha Nisanci c5c9ec288f fix multi_mx_create_table test 2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 76c7b3d1c6 Remove unused steps in isolation tests
PG13 gives a warning for unused steps therefore we should remove the
unused steps in isolation tests.
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 17388e2e91 update some tests 2020-08-04 15:18:27 +03:00
Sait Talha Nisanci de82d0ff79 add output for pg13 for propagate extension commands
CREATE EXTENSION <name> FROM <old_version> is not supported anymore with
postgres 13. An alternative output is added for pg13 where we basically
error for that statement.
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 80d2bc2317 normalize some output and sort test result 2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 0f6c21d418 sort result in ch_bench_having_mx test 2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 70f27c10e5 Add some normalization rules for tests
The not-null constraint message changed slightly with pg13, hence a
normalization rule is added for that, which converts it to the pg < 13
output.

Commit on postgres:
05f18c6b6b6e4b44302ee20a042cedc664532aa2

An extra debug message is added related to indexes on postgres, these
are safe to be ignored, so we can delete them from tests.

Commit on Postgres side:
612a1ab76724aa1514b6509269342649f8cab375

varnoold is renamed to varnosyn and varoattno is renamed to varattnosyn,
so in the output we normalize the values to the old names to simply pass
the tests.
2020-08-04 15:10:22 +03:00
Onder Kalaci eeb8c81de2 Implement shared connection count reservation & enable `citus.max_shared_pool_size` for COPY
With this patch, we introduce `locally_reserved_shared_connections.c/h` files
which are responsible for reserving some space in shared memory counters
upfront.

We sometimes need to reserve connections, but not necessarily
establish them. For example:
-  COPY command should reserve connections as it cannot know which
   connections it needs in which order. COPY establishes connections
   as any input data hits the workers. For example, for router COPY
   command, it only establishes 1 connection.

   As discussed here (https://github.com/citusdata/citus/pull/3849#pullrequestreview-431792473),
   COPY needs to reserve connections up-front, otherwise we can end
   up with resource starvation/un-detected deadlocks.
2020-08-03 18:51:40 +02:00
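A small sketch of the setting and a COPY that reserves from the shared pool, using a hypothetical file path and the events table from earlier examples:

```sql
-- hypothetical example: cap the connections Citus opens per worker node;
-- COPY now reserves its connections from this shared pool up front
ALTER SYSTEM SET citus.max_shared_pool_size = 50;
SELECT pg_reload_conf();

COPY events FROM '/tmp/events.csv' WITH (FORMAT csv);
```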
nukoyluoglu 38987431e7
propagation of CHECK statements to workers with parentheses (#4039)
* ensure propagation of CHECK statements to workers with parentheses & adjust regression test outputs

* add tests for distributing tables with simple CHECK constraints

* added test for CHECK on bool variable
2020-07-27 15:08:37 +03:00
Benjamin Satzger a35a15a513
Distribute custom aggregates with multiple arguments (#4047)
Enable custom aggregates with multiple parameters to be executed on workers.

#2921 introduces distributed execution of custom aggregates. One of the limitations of this feature is that only aggregate functions with a single aggregation parameter can be pushed to worker nodes. Aim of this change is to remove that limitation and support handling of multi-parameter aggregates.

Resolves: #3997
See also: #2921
2020-07-24 15:16:00 -07:00
Halil Ozan Akgul 38b72ddd66 Fixes create index concurrently bug 2020-07-24 12:14:14 +03:00
Halil Ozan Akgül e9f89ed651
Fixes the non existing table bug (#4058) 2020-07-23 18:01:21 +03:00
Sait Talha Nisanci 01c23b0df2 update test outputs with task-tracker removal 2020-07-21 16:25:08 +03:00
Sait Talha Nisanci 1dbd545cf4 replace task-tracker with adaptive in tests 2020-07-21 16:21:01 +03:00
Sait Talha Nisanci 4308d867d9 remove task-tracker in comments, documentation 2020-07-21 16:21:01 +03:00
Hanefi Önaldı e534dbae4a
Accept list of values in a supported ALTER ROLE .. SET statement
Some GUCs support a list of values, which is indicated by the GUC_LIST_INPUT flag.

When an ALTER ROLE .. SET statement is executed, the new configuration
default for the affected users and databases is stored in the
setconfig(text[]) column of a pg_db_role_setting record.

If a GUC that supports a list of values is used in an ALTER ROLE .. SET
statement, we need to split the text into items delimited by commas.
2020-07-21 03:49:57 +03:00
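A small sketch with hypothetical role and schema names (app_user, app_schema); search_path is a GUC_LIST_INPUT setting, so the stored setconfig value has to be split on commas when the setting is propagated:

```sql
-- hypothetical example: a list-valued GUC in ALTER ROLE .. SET
ALTER ROLE app_user SET search_path TO app_schema, public;

-- the stored default ends up as a single 'search_path=app_schema, public'
-- element in setconfig, which must be split into its comma-delimited items
SELECT setconfig
FROM pg_db_role_setting
WHERE setrole = 'app_user'::regrole;
```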
Nils Dijk 00a4a15d95
fix sorting on string literal (#4045)
As noted by Talha https://github.com/citusdata/citus/pull/4029#issuecomment-660466972 there was still some sort order flappiness in the test.

The root cause is that sorting on `1::text` sorts on the literal `'1'`, which makes the sort order non-deterministic.

This behaviour is consistent with Postgres' behaviour, so no bug on Citus' side.
2020-07-20 17:39:27 +02:00
Onder Kalaci c25de2cf22 Remove flag from
As it doesn't make any sense anymore
2020-07-20 12:45:05 +02:00
SaitTalhaNisanci b3af63c8ce
Remove task tracker executor (#3850)
* use adaptive executor even if task-tracker is set

* Update check-multi-mx tests for adaptive executor

Basically repartition joins are enabled where necessary. For parallel
tests max adaptive executor pool size is decreased to 2, otherwise we
would get too many clients error.

* Update limit_intermediate_size test

It seems that when we use adaptive executor instead of task tracker, we
exceed the intermediate result size less in the test. Therefore updated
the tests accordingly.

* Update multi_router_planner

It seems that there is one problem with multi_router_planner when we use
the adaptive executor; we should fix the following error:
+ERROR:  relation "authors_range_840010" does not exist
+CONTEXT:  while executing command on localhost:57637

* update repartition join tests for check-multi

* update isolation tests for repartitioning

* Error out if shard_replication_factor > 1 with repartitioning

As we are removing the task tracker, we cannot switch to it if
shard_replication_factor > 1. In that case, we simply error out.

* Remove MULTI_EXECUTOR_TASK_TRACKER

* Remove multi_task_tracker_executor

Some utility methods are moved to task_execution_utils.c.

* Remove task tracker protocol methods

* Remove task_tracker.c methods

* remove unused methods from multi_server_executor

* fix style

* remove task tracker specific tests from worker_schedule

* comment out task tracker udf calls in tests

We were using task tracker udfs to test permissions in
multi_multiuser.sql. We should find some other way to test them, then we
should remove the commented out task tracker calls.

* remove task tracker test from follower schedule

* remove task tracker tests from multi mx schedule

* Remove task-tracker specific functions from worker functions

* remove multi task tracker extra schedule

* Remove unused methods from multi physical planner

* remove task_executor_type related things in tests

* remove LoadTuplesIntoTupleStore

* Do initial cleanup for repartition leftovers

During startup, task tracker would call TrackerCleanupJobDirectories and
TrackerCleanupJobSchemas to clean up leftover directories and job
schemas. With adaptive executor, while doing repartitions it is possible
to leak these things as well. We don't retry cleanups, so it is possible
to have leftovers in case of errors.

TrackerCleanupJobDirectories is renamed as
RepartitionCleanupJobDirectories since it is repartition specific now,
however TrackerCleanupJobSchemas cannot be used currently because it is
task tracker specific. The thing is that this function is a no-op
currently.

We should add cleaning up intermediate schemas to DoInitialCleanup
method when that problem is solved(We might want to solve it in this PR
as well)

* Revert "remove task tracker tests from multi mx schedule"

This reverts commit 03ecc0a681.

* update multi mx repartition parallel tests

* not error with task_tracker_conninfo_cache_invalidate

* not run 4 repartition queries in parallel

It seems that when we run 4 repartition queries in parallel we get too
many clients error on CI even though we don't get it locally. Our guess
is that, it is because we open/close many connections without doing some
work and postgres has some delay to close the connections. Hence even
though connections are removed from the pg_stat_activity, they might
still not be closed. If the above assumption is correct, it is unlikely
for it to happen in practice because:
- There is some network latency in clusters, so this leaves some times
for connections to be able to close
- Repartition joins return some data and that also leaves some time for
connections to be fully closed.

As we don't get this error in our local, we currently assume that it is
not a bug. Ideally this wouldn't happen when we get rid of the
task-tracker repartition methods because they don't do any pruning and
might be opening more connections than necessary.

If this still gives us "too many clients" error, we can try to increase
the max_connections in our test suite(which is 100 by default).

Also there are different places where this error is given in postgres,
but adding some backtrace it seems that we get this from
ProcessStartupPacket. The backtraces can be found in this link:
https://circleci.com/gh/citusdata/citus/138702

* Set distributePlan->relationIdList when it is needed

It seems that we were setting the distributedPlan->relationIdList after
JobExecutorType is called, which would choose task-tracker if
replication factor > 1 and there is a repartition query. However, it
uses relationIdList to decide if the query has a repartition query, and
since it was not set yet, it would always think it is not a repartition
query and would choose adaptive executor when it should choose
task-tracker.

* use adaptive executor even with shard_replication_factor > 1

It seems that we were already using adaptive executor when
replication_factor > 1. So this commit removes the check.

* remove multi_resowner.c and deprecate some settings

* remove TaskExecution related leftovers

* change deprecated API error message

* not recursively plan single relation repartition subquery

* recursively plan single relation repartition subquery

* test deprecated task tracker functions

* fix overlapping shard intervals in range-distributed test

* fix error message for citus_metadata_container

* drop task-tracker deprecated functions

* put the implementation back into worker_cleanup_job_schema_cache since citus cloud uses it

* drop some functions, add downgrade script

Some deprecated functions are dropped.
Downgrade script is added.
Some gucs are deprecated.
A new guc for repartition joins bucket size is added.

* order by a test to fix flappiness
2020-07-18 13:11:36 +03:00
Marco Slot 9cb8dc9d12 Improve error message when creating a foreign key to a local table 2020-07-13 13:57:22 +02:00
SaitTalhaNisanci bc011a6286
Add IsCitusTable check to citus table utilities (#4028) 2020-07-14 18:29:33 +03:00
Nils Dijk 23d44eba9f
fix flappy tests due to undeterministic order of test output (#4029)
As reported on #4011 https://github.com/citusdata/citus/pull/4011/files#r453804702 some of the tests were flapping due to an indeterministic order for test outputs.

This PR makes the test output ordered for all tests returning non-zero rows.

Needs to be backported to 9.2, 9.3, 9.4
2020-07-14 15:47:29 +02:00
SaitTalhaNisanci ab5be77709
test coordinator reference-distributed table join (#3698) 2020-07-14 11:43:03 +03:00
Sait Talha Nisanci 1b5ed45a58 add multi follower repartition tests 2020-07-13 19:50:50 +03:00
Sait Talha Nisanci 510535f558 address feedback 2020-07-13 19:45:02 +03:00
Sait Talha Nisanci 41ec76a6ad use ActiveReadableNodeList in JobExecutorType and task tracker
The reason we should use ActiveReadableNodeList instead of ActiveReadableNonCoordinatorNodeList is that if the coordinator is added to the cluster as a worker, it should be counted as well. Otherwise, if only the coordinator is in the cluster, the count will be 0, hence we get a warning.

In MultiTaskTrackerExecute, we should connect to coordinator if it is
added to the cluster because it will also be assigned tasks.
2020-07-13 19:45:02 +03:00
Sait Talha Nisanci d97d03ec65 use ActivePrimaryNodeList to include coordinator
ActiveReadableWorkerNodeList doesn't include the coordinator; however, if
the coordinator is added as a worker, we should also include it while
planning. The current methods are very easy to misuse, and this
requires a refactoring to make the distinction between methods that
include the coordinator and those that don't very explicit, as they can introduce
subtle/major bugs pretty easily.
2020-07-13 19:20:15 +03:00
Sait Talha Nisanci db1b78148c send schema creation/cleanup to coordinator in repartitions
We were using ALL_WORKERS TargetWorkerSet while sending temporary schema
creation and cleanup. We (well, mostly I) thought that ALL_WORKERS would also include the coordinator when it is added as a worker. It turns out that it was FILTERING OUT the coordinator even if it is added as a worker to the cluster.

So to have some context here, in repartitions, for each jobId we create
(at least we were supposed to) a schema in each worker node in the cluster. Then we partition each shard table into some intermediate files, which is called the PARTITION step. So after this partition step each node has some intermediate files having tuples in those nodes. Then we fetch the partition files to necessary worker nodes, which is called the FETCH step. Then from the files we create intermediate tables in the temporarily created schemas, which is called a MERGE step. Then after evaluating the result, we remove the temporary schemas(one for each job ID in each node) and files.

If node 1 has file1, and node 2 has file2 after PARTITION step, it is
enough to either move file1 from node1 to node2 or vice versa. So we
prune one of them.

In the MERGE step, if the schema for a given jobID doesn't exist, the
node tries to use the `public` schema if it is a superuser, which is
actually added for testing in the past.

So when we were not sending schema creation commands for each job ID to
the coordinator (because we were using the ALL_WORKERS flag, and it doesn't
include the coordinator), we would basically not have any schemas for
repartitions in the coordinator. The PARTITION step would be executed on
the coordinator (because the tasks are generated in the planner part)
and it wouldn't give us any error because it doesn't have anything to do
with the temporary schemas(that we didn't create). But later two things
would happen:

- If by chance the fetch is pruned on the coordinator side, the other
nodes would fetch the partitioned files from the coordinator and execute
the query as expected, because it has all the information.
- If the fetch tasks are not pruned in the coordinator, in the MERGE
step, the coordinator would either error out saying that the necessary
schema doesn't exist, or it would try to create the temporary tables
under public schema ( if it is a superuser). But then if we had the same
task ID with different jobID it would fail saying that the table already
exists, which is an error we were getting.

In the first case, the query would work okay, but it would still not do
the cleanup, hence we would leave the partitioned files from the
PARTITION step there. Hence ensure_no_intermediate_data_leak would fail.

To make things more explicit and prevent such bugs in the future,
ALL_WORKERS is named as ALL_NON_COORD_WORKERS. And a new flag to return
all the active nodes is added as ALL_DATA_NODES. For repartition case,
we don't use the only-reference table nodes but this version makes the
code simpler and there shouldn't be any significant performance issue
with that.
2020-07-13 19:20:15 +03:00
SaitTalhaNisanci 76ddb85545
improve error message in secondaries (#4025) 2020-07-13 19:18:57 +03:00
Nils Dijk 449d1f0e91
force aliases in deparsing for queries with anonymous column references (#4011)
DESCRIPTION: Force aliases in deparsing for queries with anonymous column references

Fixes: #3985  

The root cause has to do with discrepancies in the query tree we create. I think in the future we should spend some time categorising all the changes we made to ruleutils and see if we can change the data structure `query` we pass to the deparser so it is an actual valid postgres query for the deparser to render.

For now the fix is to keep track, besides changing the names of the entries in the target list, also if we have a reference to an anonymous columns. If there are anonymous columns we set the `printaliases` flag to true which forces the deparser to add the aliases.
2020-07-13 16:29:24 +02:00
Hadi Moshayedi 3651fc64ee Fix Subtransaction memory leak 2020-07-09 12:33:39 -07:00
Jelte Fennema 16242d5264
Fix write queries with const expressions and COLLATE in various places (#3973) 2020-07-08 18:19:53 +02:00
Jelte Fennema ab01571c9e
Fix crash with single node dummy placement (#3993)
Static analysis found an issue where we could dereference `NULL`, because 
`CreateDummyPlacement` could return `NULL` when there were no workers. This
PR changes it so that it never returns `NULL`, which was intended by 
@marcocitus when doing this change: https://github.com/citusdata/citus/pull/3887/files#r438136433

While adding tests for citus on a single node I also added some more basic
tests and it turns out we error out on repartition joins. This has been
present since `shouldhaveshards` was introduced and is not trivial to fix.
So I created a separate issue for this: https://github.com/citusdata/citus/issues/3996
2020-07-08 17:11:25 +02:00
Philip Dubé 444472ffc6 ruleutils: use get_rtable_name for deparsing resultRelation 2020-07-07 12:20:41 +00:00
Marco Slot b4fec63bc0 Rename master evaluation to coordinator evaluation 2020-07-07 10:37:41 +02:00
Jelte Fennema 8ab47f4f37
Add a CI check to see if all tests are part of a schedule (#3959)
I recently forgot to add tests to a schedule in two of my PRs. One of
these was caught by review, but the other one was not. This adds a
script to causes CI to ensure that each test in the repo is included in
at least one schedule.

Three tests were found that were currently not part of a schedule. This PR
adds those three tests to a schedule as well and it also fixes some small
issues with these tests.
2020-07-03 11:34:55 +02:00
Jelte Fennema 9311978487 Add README for CI scripts
We keep accumulating more and more scripts to flag issues in CI. This is
good, but we are currently missing consistent documentation for them.
This commit moves all these scripts to the `ci` directory and adds some
documentation for all of them in the README. It also makes sure that the
last line of output of a failed script points to this documentation.
2020-07-03 10:22:48 +02:00
Onur Tirtir be17ebb334 Bump citus version to 9.5devel 2020-07-01 14:46:55 +03:00
Hanefi Önaldı ca2ececb3b
Downgrade path from 9.4 to 9.3 to 9.2 2020-07-01 10:38:11 +03:00
Sait Talha Nisanci e5a21f07cb test aggregates with expressions 2020-06-30 11:41:16 -07:00
Jelte Fennema 392c5e2c34
Fix wrong cancellation message about distributed deadlocks (#3956) 2020-06-30 14:57:46 +02:00
Jelte Fennema 02fa942be1
Fix assertion error when rolling back to savepoint (#3868)
It was possible to get an assertion error if a DML command that opened
a connection was cancelled and then "ROLLBACK TO SAVEPOINT" was
used to continue the transaction. The reason for this was that canceling
the transaction might leave the `claimedExclusively` flag on for (some
of) its connections.

This caused an assertion failure because `CanUseExistingConnection`
would return false and a new connection would be opened, and then there
would be two connections doing DML for the same placement. Which is
disallowed. That this situation caused an assertion failure instead of
an error means that without asserts this could possibly result in some
visibility bugs, similar to the ones described in
https://github.com/citusdata/citus/issues/3867
2020-06-30 11:31:46 +02:00
Hadi Moshayedi 4ed59d2db3 Move more from insert_select_executor to insert_select_planner 2020-06-26 08:08:26 -07:00
Hadi Moshayedi cd25a27174 Fix crash caused by EXPLAIN EXECUTE INSERT ... SELECT 2020-06-25 08:55:48 -07:00
Hadi Moshayedi 4e8d79998e Save INSERT/SELECT method in DistributedPlan.
This is so we don't need to calculate it twice in
insert_select_executor.c and multi_explain.c, which could
cause a discrepancy if an update in one of them is not
reflected in the other.
2020-06-25 08:55:48 -07:00
Jelte Fennema 64506143e4
Replace flaky repartition analyze test with a non flaky one (#3950)
The flaky test was introduced in #3941. This removes that flaky test and
adds a new one that fails in the same manner when removing the fix in #3941.

An example of a random failure can be found here:
https://app.circleci.com/pipelines/github/citusdata/citus/9558/workflows/de76e7a5-6558-46c9-97e7-8b1dae1f173b/jobs/135876/steps
2020-06-25 15:19:15 +02:00
SaitTalhaNisanci 50e115fe3a
test task tracker repartition with replication >1 (#3944) 2020-06-24 14:54:20 +03:00
SaitTalhaNisanci f458d1fd1c
Fix/task execution (#3941)
* Not set TaskExecution with adaptive executor

Adaptive executor is using a utility method from task tracker for
repartition joins, however adaptive executor doesn't need taskExecution.
It is only used by task tracker. This causes a problem when explain
analyze is used because what taskExecution is pointing to might be
random.

We solve this by not setting taskExecution from adaptive executor. So it
will stay NULL as set by CreateTask.

* use same memory context as task for taskExecution

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2020-06-24 12:10:00 +03:00
Philip Dubé cd0b2ad5b5 citus_evaluate_expression: call expand_function_arguments beforehand to avoid segfaulting on implicit parameters 2020-06-23 18:06:46 +00:00
Jelte Fennema b3ec6fbe7a
Make check_enterprise_merge script stricter (#3918)
We've had two issues in the last week with merge conflicts to enterprise that
happened suddenly. Because of this CI check, such conflicts actually block all community
PRs from being merged.

This PR tries to improve on the previous script we had, by putting tougher
constraints on when a merge is allowed.

Previously the check would pass in two cases:
1. This PR be merged without conflicts into `enterprise-master`
2. A branch exists with the same name as this PR on enterprise and that can be
   merged into `enterprise-master`.

The first case stays the same, but I've changed the second case to require the
following instead:
1. A branch exists on enterprise with the same name as this PR
2. **NEW: This branch contains the last commit of the community PR branch**
3. This branch can be merged into enterprise-master

This makes sure the enterprise branch is actually up to date and not forgotten about.

If we still get problems with this change, future improvements could be:
1. Check that the PR on enterprise passes CI
2. Check that the PR on enterprise has been approved
3. Require the enterprise PR branch to be merged before merging community.
2020-06-19 12:45:36 +02:00
SaitTalhaNisanci 3a789352b6
rename citus hammerdb branch prefix as citus_github_push (#3925)
When we are using hammerdb jobs, the job creates a branch on test
automation. Since that branch should be deleted, it has a
`delete_me` prefix; however, since the result branch on
release-test-results will have the test automation branch as a prefix, it
will also have the `delete_me` prefix, which seems a bit confusing.

This PR updates it as citus_github_push
2020-06-18 21:11:58 +03:00
Marco Slot 2a3234ca26 Rename masterQuery to combineQuery 2020-06-17 14:14:37 +02:00
Jelte Fennema 0259815d3a
Fix EXPLAIN ANALYZE received data counter issues (#3917)
In #3901 the "Data received from worker(s)" sections were added to EXPLAIN
ANALYZE. After merging, @pykello posted some review comments. This addresses
those comments as well as fixing other issues that I found while addressing
them. The things this does:

1. Fix `EXPLAIN ANALYZE EXECUTE p1` to not increase received data on every
   execution
2. Fix `EXPLAIN ANALYZE EXECUTE p1(1)` to not always return 0 bytes as received
   data.
3. Move `EXPLAIN ANALYZE` specific logic to `multi_explain.c` from
   `adaptive_executor.c`
4. Change naming of new explain sections to `Tuple data received from node(s)`.
   Firstly because a task can reference the coordinator too, so "worker(s)" was
   incorrect. Secondly to indicate that this is tuple data and not all network
   traffic that was performed.
5. Rename `totalReceivedData` in our codebase to `totalReceivedTupleData` to
   make it clearer that it's a tuple data counter, not all network traffic.
6. Actually add `binary_protocol` test to `multi_schedule` (woops)
7. Fix a randomly failing test in `local_shard_execution.sql`.
2020-06-17 11:33:38 +02:00
Jelte Fennema b71f82b31e
Use 5 second isolation test timeout (#3907)
Sometimes isolation tests get stuck in CI and we cannot see why, because
the job is killed by the CI runner. This will instead fail inside the test,
let the testsuite continue, but mark it as a failure like this in the diff
output:
```diff
+isolationtester: canceling step s2-ddl-create-index-concurrently after 5 seconds
 step s2-ddl-create-index-concurrently: CREATE INDEX CONCURRENTLY select_append_index ON select_append(id);
+ERROR:  CONCURRENTLY-enabled index command failed
```

We should detect blockages very quickly and the queries we run are also
very fast, so 5 seconds should be more than enough to catch any random
slowness. The default from Postgres is 5 minutes, which is way too much
for us.
2020-06-16 14:57:49 +02:00
Jelte Fennema 799bfdab56
Temporarily disable connection leak tests that fail a lot (#3911)
MX connection leak failures:
1. https://app.circleci.com/pipelines/github/citusdata/citus/9296/workflows/e36d1088-662a-4f60-acec-293132632c2f/jobs/131908/steps
2. https://app.circleci.com/pipelines/github/citusdata/citus/9258/workflows/37659d82-2c5b-495e-b0e7-905811e30444/jobs/131299

Failure connection leak failures:
1. https://app.circleci.com/pipelines/github/citusdata/citus/9297/workflows/c0ebc326-8c93-468f-8b70-f470bd492fb9/jobs/131920
2. https://app.circleci.com/pipelines/github/citusdata/citus/9283/workflows/9af154d0-ff96-4c5d-ae19-81faae1e0c18/jobs/131668
2020-06-16 13:48:48 +02:00
Jelte Fennema 927de6d187
Show amount of data received in EXPLAIN ANALYZE (#3901)
Sadly this does not actually work yet for binary protocol data, because
when doing EXPLAIN ANALYZE we send two commands at the same time. This
means we cannot use `SendRemoteCommandParams`, and thus cannot use the
binary protocol. This can still be useful though when using the text
protocol, to find out that a lot of data is being sent.
2020-06-15 16:01:05 +02:00
Hadi Moshayedi ef778c1cd7 address feedback from Sait Talha & Hadi 2020-06-12 18:36:02 -07:00
Marco Slot 080f711e62 Remove useless debug message in router planner 2020-06-12 18:36:02 -07:00
Marco Slot 24feadc230 Handle joins between local/reference/cte via router planner 2020-06-12 18:36:01 -07:00
Nils Dijk f57711b3d2
fix test output for tdigest (#3909)
Due to the problem described in #3908 we don't cover the tdigest integration (and other extensions) on CI.

Because of this, a bug got into the patch when a change in `EXPLAIN VERBOSE` was merged concurrently with the tdigest integration. This PR fixes the test output that missed the newly added information.
2020-06-12 20:54:27 +02:00
Halil Ozan Akgül 8c5eb6b7ea
Insert Select Into Local Table (#3870)
* Insert select with master query

* Use relid to set custom_scan_tlist varno

* Reviews

* Fixes null check

Co-authored-by: Marco Slot <marco.slot@gmail.com>
2020-06-12 17:06:31 +03:00
Jelte Fennema 0e12d045b1
Support use of binary protocol in between nodes (#3877)
This can save a lot of data being sent in some cases, thus improving
performance for queries where bandwidth between nodes is the bottleneck.
There are some issues with enabling this as the default, so that's currently not done.
2020-06-12 15:02:51 +02:00
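A minimal sketch, assuming the setting is named citus.enable_binary_protocol and reusing the hypothetical events table; queries that move wide rows between nodes benefit the most:

```sql
-- hypothetical example: opt in to the binary protocol between nodes
SET citus.enable_binary_protocol TO on;

SELECT *
FROM events
WHERE created_at > now() - interval '1 day';
```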
Nils Dijk da8f2b0134
Feature: tdigest aggregate (#3897)
DESCRIPTION: Adds support to partially push down tdigest aggregates

tdigest extensions: https://github.com/tvondra/tdigest

This PR implements the partial pushdown of tdigest calculations when possible. The extension adds a tdigest type which can be combined into the same structure. There are several aggregate functions that can be used to get:
 - a quantile
 - a list of quantiles
 - the quantile of a hypothetical value
 - a list of quantiles for a list of hypothetical values

These functions can work on both plain values and tdigest types.

Since we can create tdigest values either by combining them or based on a group of values, we can rewrite the aggregates in such a way that most of the computation gets delegated to the shards. This speeds up the percentile calculations because the values don't have to be sorted, while at the same time making the transfer size from the shards to the coordinator significantly smaller.
2020-06-12 13:50:28 +02:00
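A small sketch, assuming the tdigest extension is installed on all nodes and a hypothetical distributed table response_times(latency_ms double precision):

```sql
-- hypothetical example: the per-shard digests are built on the workers and
-- only the compact tdigest states are merged on the coordinator
CREATE EXTENSION tdigest;

SELECT tdigest_percentile(latency_ms, 100, 0.99) AS p99
FROM response_times;
```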
Philip Dubé 1722d8ac8b Allow routing modifying CTEs
We still recursively plan some cases, eg:
- INSERTs
- SELECT FOR UPDATE when reference tables are in the query
- Everything must be same single shard & replication model
2020-06-11 15:14:06 +00:00
Hadi Moshayedi 0e3140c14d Include execution duration in worker_last_saved_explain_analyze 2020-06-11 02:54:54 -07:00
Hadi Moshayedi 7c52c6edb0 CTE statistics in EXPLAIN ANALYZE 2020-06-11 02:39:59 -07:00
Hadi Moshayedi 1f6d6ee4a5 Show query text in EXPLAIN output 2020-06-11 02:19:55 -07:00
Hadi Moshayedi bb96ef5047 Does the EXPLAIN ANALYZE at the same time as execution, so it avoids executing twice.
We wrap worker tasks in worker_save_query_explain_analyze() so we can fetch
their explain output later by a call to worker_last_saved_explain_analyze().

Fixes #3519
Fixes #2347
Fixes #2613
Fixes #621
2020-06-11 01:55:57 -07:00
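A minimal usage sketch with the hypothetical events table; each task is wrapped in worker_save_query_explain_analyze(), so the per-task plans come from the same execution whose timings and row counts are reported:

```sql
-- hypothetical example: EXPLAIN ANALYZE no longer runs the distributed query twice
EXPLAIN (ANALYZE, VERBOSE)
SELECT user_id, count(*)
FROM events
GROUP BY user_id;
```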
Hadi Moshayedi 6ca621bd16 Test we don't support multi-shard EXPLAIN EXECUTE 2020-06-10 17:11:27 -07:00
Onder Kalaci 640717bea2
Copy doesn't use more than MaxAdaptiveExecutor
Co-authored-by: Hanefi Önaldı <Hanefi.Onaldi@Microsoft.com>
2020-06-10 16:46:21 +03:00
Jelte Fennema b87bae71bb
Error out when using different users in the same transaction (#3869)
Fixes #3867

As described in the issue above we return incorrect results when
changing user within a transaction. This causes us to error out instead.
2020-06-10 14:07:40 +02:00
Onder Kalaci 06461ca55f Coerce types properly for INSERT
Also, unify similar code-paths to rely on a more accurate function.
2020-06-10 10:40:28 +02:00
Hadi Moshayedi 5cdfa9f571 Implement EXPLAIN ANALYZE udfs.
Implements worker_save_query_explain_analyze and worker_last_saved_explain_analyze.

worker_save_query_explain_analyze executes a query and returns its results while
saving its EXPLAIN ANALYZE output to be fetched later.

worker_last_saved_explain_analyze returns the saved EXPLAIN ANALYZE result.
2020-06-09 10:02:05 -07:00
Hadi Moshayedi 45a41e249f Test EXPLAIN ANALYZE doesn't show repartition join tasks 2020-06-06 23:24:45 -07:00
Hadi Moshayedi 02cff1a7c6 Test that EXPLAIN ANALYZE is not supported for some forms of INSERT/SELECT 2020-06-06 23:24:45 -07:00
Onur Tirtir 8b39d12846
Append IF NOT EXISTS to deparsed CREATE SERVER commands (#3875)
Append IF NOT EXISTS to CREATE SERVER commands generated by the
pg_get_serverdef_string function when deparsing an existing server
object that a foreign table depends on.
2020-06-05 18:04:33 +03:00
Onur Tirtir dfcc18468c Error out for unsupported trigger objects
Error out when creating a citus table from a table that has triggers.
Error out for CREATE TRIGGER commands that are run on citus tables.
2020-05-31 23:10:01 +03:00
MoYi 9e1f198155 Fix composite create type deparsing to preserve typmod 2020-05-15 13:12:54 +00:00
Sait Talha Nisanci 41fceb7849 Add optional ch_benchmark and tpcc_benchmark job
With this commit:
You can trigger two types of hammerdb benchmark jobs:
-ch_benchmark (analytical and transactional queries)
-tpcc_benchmark (only transactional queries)

Your branch will be run against `master` branch.

In order to trigger the jobs prepend `ch_benchmark/` or `tpcc_benchmark/` to your branch and push it.

For example if you were running on a feature/improvement branch with name `improve/adaptive_executor`. In order to trigger a tpcc benchmark, you can do the following:

```bash
git checkout improve/adaptive_executor
git checkout -b tpcc_benchmark/improve/adaptive_executor
git push origin tpcc_benchmark/improve/adaptive_executor # the tpcc benchmark job will be triggered.
```

You will see the results in a branch in [https://github.com/citusdata/release-test-results](https://github.com/citusdata/release-test-results).

The branch name will be something like: `delete_me/citusbot_tpcc_benchmark_rg/<date>/<date>`.

The resource groups are deleted automatically, but if the benchmark fails they are not. (If you don't see the results after a reasonable time, the run probably failed; you can check the resource usage from the portal, and if it is almost 0 and you still don't see results, the benchmark most likely failed.) In that case, you will need to delete the resource groups manually from the portal; they are `citusbot_ch_benchmark_rg` and `citusbot_tpcc_benchmark_rg`.
2020-05-14 16:01:48 +03:00
SaitTalhaNisanci cf98b9d6d5
Do not wait forever for metadata sync in tests (#3760)
We shouldn't wait forever for metadata sync in tests; otherwise, when a
test gets stuck, we don't know which line causes the problem.
2020-05-14 10:51:24 +03:00
Nils Dijk 105de7beb8
Fix for pruned target list entries (#3818)
DESCRIPTION: Ignore pruned target list entries in coordinator plan

The postgres planner can prune target list entries that are proven unused in the output relation. When this happens at the `CitusCustomScan` boundary, we need to _not_ return these pruned columns, so as not to upset the rest of the planner.

By using the target list the planner asks us to return, we fix issues that lead to assertion failures and could potentially become runtime errors in a production build.

Fixes #3809
2020-05-06 13:56:02 +02:00
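A hedged sketch of the kind of query where this can surface (table name hypothetical): the subquery produces several columns, but the outer query proves most of them unused, so the planner may prune them at the CitusCustomScan boundary.

```sql
-- The outer count(*) needs none of the subquery's columns, so the planner
-- is free to prune them from the target list it asks the custom scan for.
SELECT count(*)
FROM (
    SELECT id, payload, created_at
    FROM dist_events
    OFFSET 0
) pruned_sub;
```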
Marco Slot 6ce2803777 Make sure we don't wrap GROUP BY expressions in any_value 2020-05-05 05:12:45 +02:00
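A hedged illustration (table hypothetical): when Citus wraps ungrouped target entries in any_value() for the combined query, an expression that appears in GROUP BY must stay unwrapped.

```sql
-- date_trunc('day', created_at) is a GROUP BY expression, so it must be kept
-- as a plain grouping expression and not wrapped in any_value().
SELECT date_trunc('day', created_at) AS day, count(*)
FROM dist_events
GROUP BY date_trunc('day', created_at);
```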
SaitTalhaNisanci 4a9d516f1b
Add a job to check if merge to enterprise master would fail (#3777)
* add a job to check if merge to enterprise master would fail

Add a job to check if merge to enterprise master would fail.
The job does the following:
- It checks if there is already a branch with the same name on
enterprise; if so, it tries to merge it to enterprise master, and if the
merge fails, the job fails.
- If the branch doesn't exist on enterprise, it tries to merge the
current branch to enterprise master and fails if there is any conflict
while merging.

The motivation is that if a branch on community would create a conflict
on enterprise-master, we cannot merge the community PR until we create a
PR on enterprise that resolves the conflict. This way we won't accumulate
many conflicts when merging to enterprise master, and the author, who has
the most context, resolves the conflict right away rather than a month later.

* Improve test suite to be able to easily run locally

* Add documentation on how to resolve conflicts to enterprise master

* Improve enterprise merge script

* Improve merge conflict job README

* Improve merge conflict job README

* Improve merge conflict job README

* Improve merge conflict job README

Co-authored-by: Nils Dijk <nils@citusdata.com>
2020-05-04 17:08:17 +03:00
Onder Kalaci f9d4a9cf38 Remove assertion for subqueries in WHERE clause ANDed with FALSE
In the code, we had the assumption that if the restriction information
is NULL, we cannot have any distributed tables in
the subquery.

However, for subqueries in the WHERE clause, that is not the case when
the subquery is ANDed with FALSE. In that case, Citus operates
on the originalQuery (which doesn't go through standard_planner())
and relies on the restriction information generated by standard_planner().
As Postgres is smart enough not to generate restriction information for
subqueries ANDed with FALSE, we hit the assertion.
2020-05-04 10:52:15 +02:00
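A hedged sketch of the query shape that hit the assertion (table names hypothetical): the WHERE-clause subquery is ANDed with FALSE, so Postgres generates no restriction information for it.

```sql
SELECT *
FROM dist_events e
WHERE false
  AND e.user_id IN (SELECT user_id FROM dist_users);
```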
Onder Kalaci 891d99efaf add order by to some tests to make the output consistent 2020-05-01 12:41:51 +02:00
SaitTalhaNisanci cbda951395
Fix task copy and appending empty task in ExtractLocalAndRemoteTasks (#3802)
* Not append empty task in ExtractLocalAndRemoteTasks

ExtractLocalAndRemoteTasks extracts the local and remote tasks. If we do
not have a local task, the localTaskPlacementList will be NIL, and in that
case we should not append anything to the local tasks. Previously we would
first check whether a task contains a single placement or not; now we first
check whether there is any local task before doing anything.

* fix copy of node task

The Task node has a task query, which might contain a list of strings in its
fields. We were using postgres copyObject for these lists. Postgres
assumes that each element of the list will be a node type; if it is not a
node type, it will error.

As a solution to that, a new macro is introduced to copy a list of
strings.
2020-04-29 11:05:34 +03:00
Philip Dubé b6b3c1bc17 Fix COPY TO's COPY (SELECT) with distributed table having generated columns
It's necessary to omit generated columns from the output.
2020-04-28 14:40:47 +00:00
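A hedged sketch (table hypothetical) of the scenario: Citus handles COPY of a distributed table via a COPY (SELECT ...) over the shards, and per the commit the stored generated column must be omitted from that output.

```sql
CREATE TABLE items (
    id bigint,
    price numeric,
    price_with_tax numeric GENERATED ALWAYS AS (price * 1.18) STORED
);
SELECT create_distributed_table('items', 'id');

-- Citus internally rewrites this as COPY (SELECT ...) over the shards;
-- the generated column has to be omitted from that output.
COPY items TO STDOUT;
```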
SaitTalhaNisanci 164c00cf08
Fix typo: longer visible -> no longer visible (#3803) 2020-04-27 16:32:46 +03:00
Onder Kalaci 0cb7ab2d05 Explicitly mark queries in physical planner for [not] having parameters
The physical planner doesn't support parameters. If the parameters have already
been resolved by the time the physical planner handles the queries, mark it.
The reason is that the executor is unaware of this and sends the parameters
along with the worker queries, which fails for composite types.

(See `DissuadePlannerFromUsingPlan()` for the details of parameter resolving)
2020-04-24 12:49:43 +02:00
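A hedged sketch (table, type, and statement names hypothetical) of the failure mode being avoided: once a prepared statement's parameters have already been resolved into the query the physical planner sees, re-sending the raw composite-type parameter to the workers would fail.

```sql
CREATE TYPE dimensions AS (width int, height int);
CREATE TABLE boxes (box_id bigint, size dimensions);
SELECT create_distributed_table('boxes', 'box_id');

PREPARE add_box(bigint, dimensions) AS
    INSERT INTO boxes VALUES ($1, $2);

-- Repeated executions with a composite-type parameter; once the parameters
-- are resolved into the planned query, they must not be sent to the workers
-- again alongside it.
EXECUTE add_box(1, '(10,20)');
EXECUTE add_box(2, '(30,40)');
```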
Onder Kalaci f517fa2e2a Re-enable isolation test for reference tables + distributed deadlock detection 2020-04-24 11:53:03 +02:00
SaitTalhaNisanci 07cbd84631
Add base isolation schedule (#3784)
We should do some setup steps in the check-isolation-base target.
This PR adds base_isolation_schedule, which will set up the cluster.
2020-04-24 12:38:37 +03:00
Onur Tirtir 2e927bd6b7
Bump Citus to 9.4devel (#3788) 2020-04-22 12:50:00 +03:00
Hanefi Önaldı e85b835065
Skip dependency setup on coordinator node 2020-04-21 12:06:31 +03:00