Commit Graph

970 Commits (remove-unnecessary-assertion)

Author SHA1 Message Date
Hadi Moshayedi 4a9aebaa7b Columnar: rename block to chunk 2020-12-03 08:50:19 -08:00
SaitTalhaNisanci f164575524
Add a utility to process each table index (#4382)
A utility function is added so that each caller can implement a handler
for each index on a given table. This means that the caller doesn't need
to worry about how to access each index; the only thing it needs to do
is implement a function to which each index on the table is passed
iteratively.
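
A minimal sketch of the pattern; the handler type and helper name below are assumptions, not the exact API from #4382:

```
/* hypothetical handler type: called once per index on the table */
typedef void (*IndexProcessor)(Oid indexOid);

/* sketch: open the relation, walk its index list, hand each index
 * to the caller-supplied handler */
static void
ProcessEachIndexOnTable(Oid relationId, IndexProcessor processor)
{
	Relation relation = table_open(relationId, AccessShareLock);
	List *indexIdList = RelationGetIndexList(relation);
	ListCell *indexCell = NULL;

	foreach(indexCell, indexIdList)
	{
		Oid indexId = lfirst_oid(indexCell);
		processor(indexId);
	}

	table_close(relation, NoLock);
}
```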
2020-12-03 16:33:13 +03:00
Onder Kalaci c546ec5e78 Local node connection management
When Citus needs to parallelize queries on the local node (e.g., the node
executing the distributed query and the shards are the same), we need to
be mindful about the connection management. The reason is that the client
backends that are running distributed queries are competing with the client
backends that Citus initiates to parallelize the queries in order to get
a slot on the max_connections.

In that regard, we implemented a "failover" mechanism where if the distributed
queries cannot get a connection, the execution failovers the tasks to the local
execution.

The failover logic is as follows:

- Ask the connection manager if it is OK to get a connection
	- If yes, we are good.
	- If no, we fail the workerPool and the failure triggers
	  the failover of the tasks to local execution queue

The decision of getting a connection is as follows:

/*
 * For local nodes, solely relying on citus.max_shared_pool_size or
 * max_connections might not be sufficient. The former gives us
 * a preview of the future (e.g., we let new connections be established,
 * but they are not established yet). The latter gives us the close to
 * precise view of the past (e.g., the active number of client backends).
 *
 * Overall, we want to limit both of the metrics. The former limit typically
 * kicks in under regular loads, where the load of the database increases at
 * a reasonable pace. The latter limit typically kicks in when the database
 * is issued lots of concurrent sessions at the same time, such as benchmarks.
 */
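
A rough sketch of that decision for the local node; every helper name below is hypothetical:

```
/*
 * Sketch only; all helper names are hypothetical. The idea is to
 * check both metrics described in the comment above.
 */
static bool
CanGetLocalConnection(void)
{
	/* the "future" metric: connections we have allowed to establish,
	 * tracked via citus.max_shared_pool_size, possibly still pending */
	if (SharedPoolConnectionCount() >= MaxSharedPoolSize())
	{
		return false;
	}

	/* the "past" metric: client backends that are already active,
	 * bounded by max_connections */
	if (ActiveClientBackendCount() >= MaxClientBackendCount())
	{
		return false;
	}

	return true;
}
```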
2020-12-03 14:16:13 +03:00
Hadi Moshayedi c2f60b6422
Columnar: pg_upgrade support (#4354) 2020-12-02 08:46:59 -08:00
Ahmet Gedemenli 514c6a76ac Propagate alter schema rename 2020-12-02 15:18:26 +03:00
Nils Dijk 6f9c040f76
DESCRIPTION: Propagate columnar table settings for distributed tables
When distributing a columnar table, as well as changing options on a distributed columnar table, this patch will forward the settings from the coordinator to the workers.

For propagating option changes on an already distributed table this change is pretty straightforward. Before applying the change in options locally we will create a `DDLJob` that contains a call to `alter_columnar_table_set(...)` for every shard placement with all settings of the current table. This applies both to setting an option and to resetting one; resetting restores the defaults configured on the coordinator. This has the effect that the coordinator is authoritative on the settings and makes sure the shards have the same settings as the table on the coordinator.

When a columnar table is distributed, the `TableDDLCommand` infrastructure is used to create a new kind of `TableDDLCommand`. This new type, called a `TableDDLCommandFunction`, contains a context and two function pointers to execute. One function returns the command as applied on the table; the second returns the SQL command to apply to a shard with a given shard id. The schema name is ignored as it will use the fully qualified name of the shard in the same schema as the base table.
2020-12-02 13:02:42 +01:00
Onder Kalaci f7e1aa3f22 Multi-row INSERTs use local execution when placements are local
Multi-row execution already uses sequential execution. When shards
are local, using local execution is profitable as it avoids
an extra connection establishment to the local node.
2020-12-01 21:37:59 +03:00
Hadi Moshayedi a94e8c9cda
Associate column store metadata with storage id (#4347) 2020-11-30 18:01:43 -08:00
Onur Tirtir 7f3d1182ed
Handle invalid connection hash entries (#4362)
If MemoryContextAlloc errors out (e.g., during an OOM), ConnectionHashEntry->connections
stays NULL.

With this commit, we add an isValid flag to ConnectionHashEntry that is set to true
right after we allocate & initialize the ConnectionHashEntry->connections list properly, and we
check it before accessing ConnectionHashEntry->connections.
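
A sketch of the resulting pattern; the hash, key type, and memory context are simplified stand-ins:

```
/* sketch: only trust the connections list once isValid is set */
static ConnectionHashEntry *
FindOrCreateConnectionEntry(ConnectionHashKey *key)
{
	bool found = false;
	ConnectionHashEntry *entry =
		hash_search(ConnectionHash, key, HASH_ENTER, &found);

	if (!found || !entry->isValid)
	{
		/* if MemoryContextAlloc errors out (e.g., OOM), isValid stays
		 * false and the next lookup re-initializes the entry */
		entry->isValid = false;
		entry->connections =
			MemoryContextAlloc(ConnectionContext, sizeof(dlist_head));
		dlist_init(entry->connections);
		entry->isValid = true;
	}

	return entry;
}
```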
2020-11-30 19:44:03 +03:00
Nils Dijk 383e334023
refactor options to their own table linked to the regclass (#4346)
Columnar options were by accident linked to the relfilenode instead of the regclass/relation oid. This PR moves everything related to columnar options to their own catalog table.
2020-11-27 11:22:08 -08:00
Nils Dijk 326e6afa53
refactor table ddl events scoped for shards (#4342)
Refactor internals on how Citus creates the SQL commands it sends to recreate shards.

Before, Citus collected solely ddl commands as `char *`'s to recreate a table. If they were used to create a shard, they were wrapped with `worker_apply_shard_ddl_command` and sent to the workers. On the workers the UDF wrapping the ddl command would rewrite the parsetree to replace table names with their shard name equivalents.

This worked well, but poses an issue when adding columnar. Due to limitations in Postgres on creating custom options on table access methods we need to fall back on a UDF to set columnar specific options. Now, to recreate the table, we can no longer rely on having solely DDL statements to recreate a table.

A prototype was made to run this UDF wrapped in `worker_apply_shard_ddl_command`. This became pretty messy, hard to understand and subsequently hard to maintain.

This PR proposes a refactor of the internal representation of table ddl commands into a `TableDDLCommand` structure. The current implementation only supports a `char *` as its contents. Based on the use of the DDL statement (eg. creating the table -mx- or creating a shard) one of two different functions can be called to get the statement to send to the worker:
 - `GetTableDDLCommand(TableDDLCommand *command)`: This function returns the ddl command to create the table. In this implementation it will just return the `char *`. This has the same functionality as getting the old list and not wrapping it.
 - `GetShardedTableDDLCommand(TableDDLCommand *command, uint64 shardId, char *schemaName)`: This function returns the ddl command wrapped in `worker_apply_shard_ddl_command` with the `shardId` as an argument. Due to backwards compatibility it also accepts a `schemaName`. The exact purpose is not directly clear. Ideally new implementations would work with fully qualified statements and ignore the `schemaName`.

A future implementation could accept two function pointers and a `void *` context for the two pointers to work on. This gives greater flexibility in controlling which commands get sent in which situations. Also, in the future, we could implement the intermediate step of creating the `parsetree` datastructure of statements based on the contents in the catalog, with a corresponding deparser. For sharded queries a mutator could be run over the parsetree to rewrite the table names to the names with the shard identifier. This would completely remove the requirement for `worker_apply_shard_ddl_command`.
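
A sketch of what such a structure could look like; the layout is simplified from the description above and not the exact definition:

```
/* simplified sketch of the TableDDLCommand described above */
typedef enum TableDDLCommandType
{
	TABLE_DDL_COMMAND_STRING,   /* plain DDL statement kept as char * */
	TABLE_DDL_COMMAND_FUNCTION  /* context plus function pointers */
} TableDDLCommandType;

typedef struct TableDDLCommand
{
	TableDDLCommandType type;

	/* for TABLE_DDL_COMMAND_STRING */
	char *commandStr;

	/* for TABLE_DDL_COMMAND_FUNCTION: one pointer builds the command
	 * for the base table, the other builds it for a given shard */
	char *(*function)(void *context);
	char *(*shardedFunction)(uint64 shardId, void *context);
	void *context;
} TableDDLCommand;
```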
2020-11-26 13:31:59 +01:00
Onder Kalaci 629ecc3dee Add the infrastructure to count the number of client backends
Considering the adaptive connection management
improvements that we plan to roll out soon, it is
very helpful to know the number of active client
backends.

We are doing this addition to simplify the adaptive connection
management for single node Citus. In single node Citus, both the
client backends and Citus parallel queries would compete to get
slots on Postgres' `max_connections` on the same Citus database.

With adaptive connection management, we have the counters for
Citus parallel queries. That helps us to adaptively decide
on the remote executions pool size (e.g., throttle connections
if necessary).

However, we do not have any counters for the total number of
client backends on the database. For single node Citus, we
should consider all the client backends, not only the remote
connections that Citus does.

Of course Postgres internally knows how many client
backends are active. However, to get that number Postgres
iterates over all the backends. For example, see [pg_stat_get_db_numbackends](8e90ec5580/src/backend/utils/adt/pgstatfuncs.c (L1240))
where Postgres iterates over all the backends.

For our purposes, we need this information on every connection
establishment. That's why we cannot afford to do this kind of
iteration.
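
A sketch of the shared-memory counter idea; the struct and function names below approximate the real ones and should be treated as assumptions:

```
/* an O(1) counter in shared memory, maintained on backend start/exit,
 * instead of iterating all backends on every connection attempt */
#include "port/atomics.h"

typedef struct BackendManagementShmemData
{
	pg_atomic_uint32 activeClientBackendCounter;
} BackendManagementShmemData;

static BackendManagementShmemData *backendData = NULL;

void
IncrementClientBackendCounter(void)
{
	pg_atomic_add_fetch_u32(&backendData->activeClientBackendCounter, 1);
}

uint32
GetClientBackendCounter(void)
{
	return pg_atomic_read_u32(&backendData->activeClientBackendCounter);
}
```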
2020-11-25 19:19:24 +01:00
Onur Tirtir 46be63d76b
Refactor PreprocessIndexStmt (#4272) 2020-11-25 12:19:37 +03:00
Hadi Moshayedi 40b52ab757 Fix memory leaks in column store 2020-11-23 11:26:12 -08:00
Jeff Davis 8cee2b092b remove columnar FDW code 2020-11-20 10:03:12 -08:00
Onder Kalaci c433c66f2b Do not execute subplans multiple times with cursors
Before this commit, we let AdaptiveExecutorPreExecutorRun()
take effect multiple times, on every FETCH on a cursor.
That does not affect the correctness of the query results,
but adds significant overhead.
2020-11-20 10:43:56 +01:00
Jeff Davis a2b698a766 rename cstore_tableam -> columnar 2020-11-19 12:15:51 -08:00
Hadi Moshayedi 97cba2d5b6 Implements write state management for tuple inserts.
The TableAM API doesn't allow us to pass a state variable around across all of the tuple inserts belonging to the same command. We require this in the columnar store, since we batch them, and when we have enough rows we flush them as stripes.

To do that, we keep a (relfilenode) -> stack of (subxact id, TableWriteState) global mapping.

**Inserts**

Whenever we want to insert a tuple, we look up the relation's relfilenode in this mapping. If the top of the stack matches the current subtransaction, we use the existing TableWriteState. Otherwise, we allocate a new TableWriteState and push it on top of the stack.

**(Sub)Transaction Commit/Aborts**

When the subtransaction or transaction is committed, we flush and pop all entries matching current SubTransactionId.

When the subtransaction or transaction is aborted, we pop all entries matching current SubTransactionId and discard them without flushing.

**Reads**

Since we might have unwritten rows which need to be read by a table scan, we flush write states on SELECTs. Since flushing the write state of upper transactions from within a subtransaction would cause metadata to be written in the wrong subtransaction, we ERROR out if any of the upper subtransactions have unflushed rows.

**Table Drops**

We record in which subtransaction the table was dropped. When committing a subtransaction in which the table was dropped, we propagate the drop to the upper transaction. When aborting a subtransaction in which the table was dropped, we mark the table as not deleted.
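
A sketch of the mapping described above, assuming a TableWriteState type from the columnar write path; the layout is simplified:

```
typedef struct SubXidWriteState
{
	SubTransactionId subXid;
	TableWriteState *writeState;

	/* next (outer) entry in the per-relfilenode stack */
	struct SubXidWriteState *next;
} SubXidWriteState;

/* hash entry: relfilenode -> stack of per-subtransaction write states */
typedef struct WriteStateMapEntry
{
	Oid relfilenode;                    /* hash key */
	bool dropped;                       /* dropped in current (sub)xact? */
	SubXidWriteState *writeStateStack;  /* top = innermost subtransaction */
} WriteStateMapEntry;
```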
2020-11-17 12:07:16 -08:00
Nils Dijk 725f4a37d0
change configure to not have options 2020-11-17 19:01:54 +01:00
Nils Dijk 213eb93e6d
make columnar compile and functionally working 2020-11-17 18:55:34 +01:00
Nils Dijk 527d3ce0bb
move headers to include directory 2020-11-17 18:55:34 +01:00
Önder Kalacı 0c0fc69f2a
Remove unused field (#4275) 2020-11-17 11:41:57 +01:00
Hanefi Onaldi d3019f1b6d
Introduce foreach_ptr_modify macro (#4303)
If one wishes to iterate through a List and insert list elements in
PG13, it is not safe to use foreach_ptr, as the List representation
in PostgreSQL is no longer a linked list but an array, and it is
possible that the whole array is repalloc'ed if there is not
sufficient space available.

See postgres commit 1cff1b95ab6ddae32faa3efe0d95a820dbfdc164 for more
information
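
A sketch of the difference in usage; the task-splitting predicate and copy helper below are hypothetical:

```
/* On PG13 a List is an array that may be repalloc'ed by lappend, so a
 * plain iterator holding a stale pointer is unsafe; foreach_ptr_modify
 * re-reads the List on every step. TaskNeedsSplit/CopyTaskForSplit are
 * hypothetical. */
Task *task = NULL;
foreach_ptr_modify(task, taskList)
{
	if (TaskNeedsSplit(task))
	{
		/* may repalloc the underlying array of taskList */
		taskList = lappend(taskList, CopyTaskForSplit(task));
	}
}
```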
2020-11-09 12:03:59 +03:00
Onder Kalaci e0d2ac7620 Do not rely on set_rel_pathlist_hook for finding local relations
When a relation is used on an OUTER JOIN with FALSE filters,
set_rel_pathlist_hook may not be called for the table.

There might be other cases as well, so do not rely on the hook
for classification of the tables.
2020-11-06 11:14:30 +01:00
Halil Ozan Akgul 77b3be8b6d Turn RelOptInfos into their only used field, relids, to be able to copy them 2020-10-22 13:42:28 +03:00
Onder Kalaci 5c4c9304ba Remove RemoveDuplicateJoinRestrictions() function
RemoveDuplicateJoinRestrictions() function was introduced with the aim of decreasing the overall planning times by eliminating the duplicate JOIN restriction entries (#1989). However, it turns out that the function itself is so CPU intensive, with a very high algorithmic complexity, that it hurts a lot more than it helps. The function is a clear example of premature optimization.

The table below shows the difference clearly:

"distributed query planning
 time master"	RemoveDuplicateJoinRestrictions() execution time on master	"Remove the function RemoveDuplicateJoinRestrictions()
this PR"
5 table INNER JOIN	9 msec	2msec	7 msec
10 table INNER JOIN	227 msec	194 msec	29  msec
20 table INNER JOIN	1 sec 235 msec	1  sec 139  msec	90 msecs
50 table INNER JOIN	24 seconds	21 seconds	1.5 seconds
100 table INNER JOIN	2 minutes 16 secods	1 minute 53 seconds	23 seconds
250 table INNER JOIN	Bottleneck on JoinClauseList	18 minutes 52 seconds	Bottleneck on JoinClauseList

5 table INNER JOIN in subquery	9 msec	0 msec	6 msec
10 table INNER JOIN subquery	33 msec	10 msec	32 msec
20 table INNER JOIN subquery	132 msec	67 msec	123 msec
50 table INNER JOIN subquery	1.2  seconds	900 msec	500 msec
100 table INNER JOIN subquery	6 seconds	5  seconds	2 seconds
250 table INNER JOIN subquery	54 seconds	37 seconds	20  seconds

5 table LEFT JOIN	5 msec	0 msec	5 msec
10 table LEFT JOIN	11 msec	0 msec	13 msec
20 table LEFT JOIN	26 msec	2 msec	30 msec
50 table LEFT JOIN	150 msec	15 msec	193 msec
100 table LEFT JOIN	757 msec	71 msec	722 msec
250 table LEFT JOIN	8 seconds	600 msec	8 seconds

5 JOINs among 2 table JOINs 	37 msec	11 msec	25 msec
10 JOINs among 2 table JOINs 	536 msec	306 msec	352 msec
20 JOINs among 2 table JOINs 	794 msec	181 msec	640 msec
50 JOINs among 2 table JOINs 	25 seconds	2 seconds	22 seconds
100 JOINs among 2 table JOINs 	Bottleneck on JoinClauseList	9 seconds	Bottleneck on JoinClauseList
150 JOINs among 2 table JOINs 	Bottleneck on JoinClauseList	46 seconds	Bottleneck on JoinClauseList

On top of the performance penalty, the function had a critical bug #4255, and with #4254 we hit one more important bug. It could be fixed by adding the following check to ContextCoversJoinRestriction():
```
static bool
JoinRelIdsSame(JoinRestriction *leftRestriction, JoinRestriction *rightRestriction)
{
	Relids leftInnerRelIds = leftRestriction->innerrel->relids;
	Relids rightInnerRelIds = rightRestriction->innerrel->relids;
	if (!bms_equal(leftInnerRelIds, rightInnerRelIds))
	{
		return false;
	}

	Relids leftOuterRelIds = leftRestriction->outerrel->relids;
	Relids rightOuterRelIds = rightRestriction->outerrel->relids;
	if (!bms_equal(leftOuterRelIds, rightOuterRelIds))
	{
		return false;
	}

	return true;
}
```

However, adding this eliminates all the benefits that RemoveDuplicateJoinRestrictions() brings.

I've used the commands here to generate the JOINs mentioned in the PR: https://gist.github.com/onderkalaci/fe8654f9df5916c7af4c7c5eb892561e#file-gistfile1-txt

Inner and outer JOINs behave roughly the same, so to simplify the table only INNER JOINs were added.
2020-10-21 10:29:39 +02:00
SaitTalhaNisanci 0f209377c4
Fix incorrect join related fields (#4242)
* Fix incorrect join related fields

Ruleutils expects the original index of join columns, hence we
should consider dropped columns while setting the fields in
SetJoinRelatedFieldsCompat.

* add some more tests for joins

* Move tests to join.sql and create a utility function
2020-10-19 18:28:39 +03:00
Onur Tirtir c49077d594
Disallow outer joins `ON TRUE` with ref & dist tables when ref table is outer relation (#4255)
Disallow `ON TRUE` outer joins with reference & distributed tables
when reference table is outer relation by fixing the logic bug made
when calling `LeftListIsSubset` function.

Also, be more defensive when removing duplicate join restrictions
when join clause is empty for non-inner joins as they might still
contain useful information for non-inner joins.
2020-10-19 16:58:11 +03:00
Onur Tirtir f80f4839ad Remove unused functions that cppcheck found 2020-10-19 13:50:52 +03:00
Onder Kalaci bbedfca761 Improve the relation restriction counters
It seems like Postgres could call set_rel_pathlist() for
the same relation multiple times. This breaks the logic
where we assume relationCount equals the number of
entries in relationRestrictionList.

In summary, relationRestrictionList may contain duplicate
entries.
2020-10-19 08:51:16 +02:00
Nils Dijk caabbf4b84 Table access method support for distributed tables 2020-10-16 12:02:25 -07:00
Onur Tirtir 7cb07c70fa
Move hasSemiJoin to JoinRestrictionContext (#4256) 2020-10-16 18:37:39 +03:00
Onur Tirtir de6f2d3f42
Refactor JoinRestrictionListExistsInContext to improve readability (#4249) 2020-10-16 12:24:56 +03:00
Onder Kalaci fe3caf3bc8 Local execution considers intermediate result size limit
With this commit, we make sure that local execution accounts for the
intermediate result size just as distributed execution does. Plus,
it enforces the citus.max_intermediate_result_size value.
2020-10-15 17:18:55 +02:00
Marco Slot 31858c8a29 Check table existence in EnsureRelationKindSupported 2020-10-15 17:05:06 +02:00
Sait Talha Nisanci ecde6c6eef Introduce GetCurrentLocalExecutionStatus wrapper
We should not access CurrentLocalExecutionStatus directly, because that
would mean that we could also set it directly, which we shouldn't:
we have checks to see if the new state is possible, and otherwise we
error out.
2020-10-15 15:38:19 +03:00
Halil Ozan Akgul e2736c25bd Adds support for WITH TIES option 2020-10-12 19:34:18 +03:00
Marco Slot 73fc054c27 Rename DDL command functions 2020-10-06 11:30:56 +02:00
Marco Slot dbc348b7e0 Create sequence dependency during metadata syncing 2020-10-06 10:57:39 +02:00
Ahmet Gedemenli 81db4dca5c Degrade gracefully when no background workers available 2020-10-05 16:55:00 +03:00
Hanefi Önaldı 6d8e83d24f
Replace worker_hash calls with partkey IS NOT NULL filters 2020-10-02 18:16:24 +03:00
Önder Kalacı df5aa0f0cc
Switch to sequential execution if the index name is long (#4209)
Citus has the logic to truncate the long shard names to prevent
various issues, including self-deadlocks. However, for partitioned
tables, when index is created on the parent table, the index names
on the partitions are auto-generated by Postgres. We use the same
Postgres function to generate the index names on the shards of the
partitions. If the length exceeds the limit, we switch to sequential
execution mode.
2020-10-02 13:39:34 +03:00
Onder Kalaci 56ca256374 Forcefully terminate connections after citus.node_connection_timeout
After the connection timeout, we fail the session/pool. However, the
underlying connection can still be trying to connect. That is dangerous
because the new placement executions are already in place. The
executor cannot handle the situation where multiple
EXECUTION_ORDER_ANY task executions succeed.

Adding a regression test doesn't seem easily doable. To reproduce the issue
- Add 2 worker nodes
- create a reference table
- set citus.node_connection_timeout to 1ms (requires code change)
- Continuously execute `SELECT count(*) FROM ref_table`
- Sometime later, you hit an out-of-bounds array access in
  `ScheduleNextPlacementExecution()`, hence crashing.
- The reason for that is that sometimes the first connection
  is successfully established while the executor is already
  trying to execute the query on the second node.
2020-09-30 18:24:24 +02:00
Marco Slot b905c8043d Fix create index concurrently crash with local execution 2020-09-25 11:49:09 +02:00
Ahmet Gedemenli abfb79bda6 Sort explain analyze output by task time
Add sort method parameter for regression tests

Fix check-style

Change sorting method parameters to enum

Polish

Add task fields to OutTask

Add test into multi_explain

Fix isolation test
2020-09-24 11:38:40 +03:00
SaitTalhaNisanci e7cd1ed0ee
Not take ShareUpdateExclusiveLock on pg_dist_transaction (#4184)
* Not take ShareUpdateExclusiveLock on pg_dist_transaction

We were taking ShareUpdateExclusiveLock on pg_dist_transaction during
recovery to prevent multiple recoveries from happening concurrently. VACUUM
(not FULL) also takes ShareUpdateExclusiveLock, and they can conflict. It
seems that VACUUM will skip the table if there is a conflicting lock
already taken, unless it is doing the vacuum to prevent id wraparound, in
which case there can be a deadlock. I guess the deadlock happens if:

- VACUUM takes a lock on pg_dist_transaction and is done for the id
wraparound problem
- The transaction in the maintenance daemon tries to take a lock but
cannot as that conflicts with the lock acquired by VACUUM
- The transaction in the maintenance daemon has a very old xid hence
VACUUM cannot proceed.

If we take a row exclusive lock in transaction recovery instead, it wouldn't
conflict with VACUUM, hence VACUUM could proceed and the deadlock would be
resolved. To prevent concurrent transaction recoveries from happening, an
advisory lock is taken with ShareUpdateExclusiveLock as before.
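
A sketch of the resulting scheme; the advisory-lock helper below is hypothetical:

```
/* a RowExclusiveLock on the catalog doesn't conflict with VACUUM,
 * while the hypothetical advisory lock below still serializes
 * concurrent recoveries */
Relation pgDistTransaction =
	table_open(DistTransactionRelationId(), RowExclusiveLock);

/* hypothetical helper wrapping an advisory lock taken at the
 * ShareUpdateExclusiveLock level */
LockTransactionRecovery(ShareUpdateExclusiveLock);

/* ... scan pg_dist_transaction and recover prepared transactions ... */

table_close(pgDistTransaction, NoLock);
```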

* Use CITUS_OPERATIONS tag
2020-09-21 15:20:38 +03:00
Onur Tirtir 1b31b22635 Refactor the functions that return OID lists for citus tables 2020-09-18 16:42:46 +03:00
SaitTalhaNisanci dae2c69fd7
Not allow removing a single node with ref tables (#4127)
* Not allow removing a single node with ref tables

We should not allow removing a node if it is the only node in the
cluster and there is data on it. We have this check for distributed
tables but we didn't have it for reference tables.

* Update src/test/regress/expected/single_node.out

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>

* Update src/test/regress/sql/single_node.sql

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2020-09-18 15:35:59 +03:00
Onur Tirtir 4118560b75
Prevent citus local table creation from a catalog table (#4158) 2020-09-15 14:30:48 +03:00
Marco Slot bd12555b16 Fix distributing tables owned by extensions 2020-09-10 04:46:11 +02:00
Onur Tirtir 3a73fba810 Apply planner changes for citus local tables 2020-09-09 11:51:18 +03:00
Onur Tirtir a58a4395ab Extend citus local table utility command support
This commit brings the following features:

* Foreign key support from citus local tables to reference tables
* Foreign key support from reference tables to citus local tables
  (only with RESTRICT & NO ACTION behavior)
* ALTER TABLE ENABLE/DISABLE trigger command support
* CREATE/DROP/ALTER trigger command support

and disallows:
* ALTER TABLE ATTACH/DETACH PARTITION commands
* CREATE TABLE <postgres table> ATTACH PARTITION <citus local table>
  commands
* Foreign keys from postgres tables to citus local tables
  (the other way was already disallowed)

for citus local tables.
2020-09-09 11:50:55 +03:00
Onur Tirtir 17cc810372 Implement "citus local table" creation logic 2020-09-09 11:50:48 +03:00
Onur Tirtir ba208eae4d
Record non-distributed table accesses in local executor (#4139) 2020-09-07 18:19:08 +03:00
Nils Dijk bbf42063a7
export LookupShardTransferMode 2020-09-03 16:06:38 +02:00
Nils Dijk 6e4862c57f
expose transfermode for ensuring reference table existence 2020-09-03 16:06:37 +02:00
SaitTalhaNisanci 366461ccdb
Introduce cache entry/table utilities (#4132)
Introduce table entry utility functions

Citus table cache entry utilities are introduced so that we can easily
extend existing functionality with minimum changes, specifically changes
to these functions. For example IsNonDistributedTableCacheEntry can be
extended for citus local tables without the need to scan the whole
codebase and update each relevant part.

* Introduce utility functions to find the type of tables

A table type can be a reference table, a hash/range/append distributed
table. Utility methods are created so that we don't have to worry about
how a table is considered as a reference table etc. This also makes it
easy to extend the table types.

* Add IsCitusTableType utilities

* Rename IsCacheEntryCitusTableType -> IsCitusTableTypeCacheEntry

* Change citus table types in some checks
2020-09-02 22:26:05 +03:00
Jelte Fennema 451ea04508 Rename ForceXxx functions to XxxOrError
This clearer naming was suggested in https://github.com/citusdata/citus/pull/4001
2020-09-01 11:19:17 +02:00
Hanefi Önaldı 024d398cd7
Allow distribution of functions that read from reference tables
create_distributed_function(function_name,
                            distribution_arg_name,
                            colocate_with text)

This UDF did not allow the colocate_with parameter when no
distribution_arg_name was supplied. This commit changes the behaviour to
allow a missing distribution_arg_name parameter when the function should
be colocated with a reference table.
2020-09-01 07:28:34 +03:00
Hanefi Onaldi f47b3a7e7d
Remove unused parameters from round robin reordering and friends (#4120) 2020-08-20 12:45:01 +03:00
SaitTalhaNisanci 679bf0d2b2
Create CanPushdownSubqery wrapper for better readability (#4108) 2020-08-12 17:28:20 +03:00
SaitTalhaNisanci 73ef40886b
Rename FindNodeCheckXXX functions (#4106)
FindNodeCheck is not clear about what these functions are doing. They are
renamed to FindNodeMatchingCheckFunctionXXX. Also, for choosing elements in
these functions, the CheckNodeFunc type is introduced.
2020-08-11 15:01:23 +03:00
Hadi Moshayedi 7b74eca22d Support EXPLAIN EXECUTE ANALYZE. 2020-08-10 13:44:30 -07:00
Halil Ozan Akgul 375310b7f1 Adds support for table undistribution 2020-08-05 14:36:03 +03:00
Sait Talha Nisanci fe4ac51d8c Normalize Output:.. since it changes with pg13
Fix indentation for better readability
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci 63ed126ad4 Set buffer usage with explain
It seems that currently we process even postgres tables in explain
commands. This is because we register a hook for explain and we don't
have any check to see if the query has any citus table.

With this commit, we now send the buffer usage as well to the relevant
API. There is some duplication in the code but it is because of the
existing structure; we can refactor this separately.
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci fe1e1c9b68 Replace Set_ptr_value as SetListCellPtr to be more explicit
Move header to right place and fix comment style
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci 8e9b52971c Use new var field names in the codebase
The codebase is updated to use varattnosyn and varnosyn and we defined
the macros for older versions. This way we can just remove the macros
when we drop an older version.
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci b641f63bfd Use CMDTAG_SELECT_COMPAT
CMDTAG_SELECT exists in PG12 hence defining a MACRO such as
CMDTAG_SELECT -> "SELECT" is not possible. I chose CMDTAG_SELECT_COMPAT
because with the COMPAT suffix it is explicit that it maps to different
things in different versions and also has a less chance of mapping
something irrevelant. For example if we used SELECT as a macro, then it
would map every SELECT to whatever it is mapping to, which might have
unexpected/undesired behaviour.
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci d68bfc5687 Improve error for index operator class parameters
The error message when index has opclassopts is improved and the commit
from postgres side is also included for future reference.

Also some minor style related changes are applied.
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci 1070828465 update cte inline output for pg13
Make some macros in version_compat more robust
Remove commented code in ruleutils
Remove unnecessary variable assignments
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 1112b254a7 adapt recently added code for pg13
This commit mostly adds pg_get_triggerdef_command to our ruleutils_13.
This doesn't add anything extra for ruleutils 13 so it is basically a copy
of the change on ruleutils_12
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 38aaf1faba use QueryCompletion struct
Postgres introduced QueryCompletion struct. Hence a compat utility is
added to finish query completion for older versions and pg >= 13.

The commit on Postgres side:
2f9661311b83dc481fc19f6e3bda015392010a40
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 9f1ec792b3 add queryString to distributed_planner
distributed_planner now takes query string as a parameter.

related commit on PG side:
6aba63ef3e606db71beb596210dd95fa73c44ce2
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 1a7ccac6ef Add RangeTableEntryFromNSItem macro
addRangeTableEntryXXX methods return a ParseNamespaceItem with pg >= 13.
RangeTableEntryFromNSItem macro is added so that we return the range
table entry from the ParseNamespaceItem in pg>=13 and for pg < 13 rte
would already be returned with addRangeTableEntryXXX methods.

Commit on Postgres side:
5815696bc66b3092f6361f53e0394909647042c8
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 4ed30a0824 create Set_ptr_value
Since PG13 changed the List implementation, a ListCell doesn't contain a
data field anymore. Therefore the Set_ptr_value macro is created, so that
depending on the version it will use either cell->data.ptr_value or
cell->ptr_value.
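
A plausible definition based on that description; treat the exact form as an assumption:

```
#if PG_VERSION_NUM >= 130000
#define Set_ptr_value(cell, pointer) ((cell)->ptr_value = (pointer))
#else
#define Set_ptr_value(cell, pointer) ((cell)->data.ptr_value = (pointer))
#endif
```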

Commit on Postgres side:
1cff1b95ab6ddae32faa3efe0d95a820dbfdc164
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci ab85a8129d map varoattno and varnoold fields in Var
With PG13 varoattno and varnoold fields were renamed as varattnosyn and
varnosyn. A macro is defined for these.

Commit on Postgres side:
9ce77d75c5ab094637cc4a446296dc3be6e3c221

Command on Postgres side:
git log --all --grep="varoattno"
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 688ab16bba Introduce ExplainOnePlanCompat
Since ExplainOnePlan expects BufferUsage as well with PG >= 13,
ExplainOnePlanCompat is added.

Commit on Postgres side:
ed7a5095716ee498ecc406e1b8d5ab92c7662d10
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 6314eba5df introduce standard_planner_compat
standard_planner now takes the query string as a parameter as well with
pg >= 13.

Commit on Postgres Side:
66888f7424f7d6c7cea2c26e181054d1455d4e7a
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 991f49efc9 introduce getOwnedSequencesCompat macro
Commit on Postgres side:
19781729f789f3c6b2540e02b96f8aa500460322
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 00e7386007 introduce PortalDefineQuerySelectCompat
PortalDefineQuery doesn't accept char* for command tag anymore with PG
>= 13. We are currently only using it with Select, therefore a Portal
define query compat for select is created.

Commit on PG side:
2f9661311b83dc481fc19f6e3bda015392010a40
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 62879ee8c1 introduce planner_compat and pg_plan_query_compat macros
As the new planner and pg_plan_query_compat methods expect the query
string as well, macros are defined to be compatible in different
versions of postgres.

Relevant commit on Postgres:
6aba63ef3e606db71beb596210dd95fa73c44ce2

Command on Postgres:
git log --all --grep="pg_plan_query"
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci bf831d2e59 Use table_openXXX methods in the codebase
With PG13 heap_* (heap_open, heap_close etc) are replaced with table_*
(table_open, table_close etc).

It is better to use the new table access methods in the codebase and
define the macros for the previous versions as we can easily remove the
macro without having to change the codebase when we drop the support for
the old version.

Commits that introduced this change on Postgres:
f25968c49697db673f6cd2a07b3f7626779f1827
e0c4ec07284db817e1f8d9adfb3fffc952252db0
4b21acf522d751ba5b6679df391d5121b6c4a35f

Command to see relevant commits on Postgres side:
git log --all --grep="heap_open"
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 0819b79631 introduce list compat macros
Pass the list to lnext API
lnext API now expects the list as well.
The commit on Postgres that introduced the change: 1cff1b95ab6ddae32faa3efe0d95a820dbfdc164

lnext_compat and list_delete_cell_compat macros are introduced so that
we can use these macros in the codebase without having to use #if
directives in the codebase.

Related commit on postgres:
1cff1b95ab6ddae32faa3efe0d95a820dbfdc164

Command to search in postgres:
git log --all --grep="list_delete_cell"
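
Plausible compat definitions matching the description above; the exact bodies are assumptions:

```
#if PG_VERSION_NUM >= 130000
#define lnext_compat(l, r) lnext(l, r)
#define list_delete_cell_compat(l, c, p) list_delete_cell(l, c)
#else
#define lnext_compat(l, r) lnext(r)
#define list_delete_cell_compat(l, c, p) list_delete_cell(l, c, p)
#endif
```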

add ListCellAndListWrapper

When iterating a list in separate function calls, we need both the list
and the current cell starting from PG13, therefore
ListCellAndListWrapper is added to store both as a wrapper.

Use ListCellAndListWrapper in foreign key test udfs

As we iterate a list in these udfs using a functionContext, we need to
use the wrapper to be able to access both the list and the current cell.
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 30549dc0e2 add copy of ruleutils_12 as ruleutils_13 2020-08-04 13:34:13 +03:00
Onder Kalaci eeb8c81de2 Implement shared connection count reservation & enable `citus.max_shared_pool_size` for COPY
With this patch, we introduce `locally_reserved_shared_connections.c/h` files
which are responsible for reserving some space in shared memory counters
upfront.

We sometimes need to reserve connections, but not necessarily
establish them. For example:
-  COPY command should reserve connections as it cannot know which
   connections it needs in which order. COPY establishes connections
   as any input data hits the workers. For example, for router COPY
   command, it only establishes 1 connection.

   As discussed here (https://github.com/citusdata/citus/pull/3849#pullrequestreview-431792473),
   COPY needs to reserve connections up-front, otherwise we can end
   up with resource starvation/undetected deadlocks.
2020-08-03 18:51:40 +02:00
SaitTalhaNisanci ef841115de
Fix int32 overflow and use PG macros for INT32_XX (#4061)
* Use CalculateUniformHashRangeIndex in HashPartitionId

The INT32_MIN definition can change among different platforms, hence it is
possible to get overflow; we would see crashes because of this in Debian
distros. We have already solved a similar problem by introducing the
CalculateUniformHashRangeIndex method, hence we can use the same method
here. This also removes some duplication and gives us a single place
to decide that.
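
A sketch of the overflow-safe computation; the constant and the edge handling approximate the real method and should be treated as assumptions:

```
/* size of the int32 hash space, 2^32 */
#define HASH_TOKEN_COUNT INT64CONST(4294967296)

/* do the arithmetic in 64 bits so nothing overflows int32 */
int
CalculateUniformHashRangeIndex(int hashedValue, int hashRangeCount)
{
	int64 hashedValue64 = (int64) hashedValue;

	/* shift from [PG_INT32_MIN, PG_INT32_MAX] into [0, 2^32) */
	int64 shiftedValue = hashedValue64 - PG_INT32_MIN;
	int64 hashTokenIncrement = HASH_TOKEN_COUNT / hashRangeCount;

	int rangeIndex = (int) (shiftedValue / hashTokenIncrement);
	if (rangeIndex > hashRangeCount - 1)
	{
		/* clamp PG_INT32_MAX (and any rounding remainder) into the
		 * last range */
		rangeIndex = hashRangeCount - 1;
	}

	return rangeIndex;
}
```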

* Use PG_INT32_XX instead of INT32_XX to be safer
2020-07-23 18:30:08 +03:00
Onder Kalaci cfb633601d Minor refactorings in COPY command execution
1) Rename CONNECTION_PER_PLACEMENT to REQUIRE_CLEAN_CONNECTION. This is
mostly to make things clear as the new name reveals more.

2) We also make sure to mark all the copy connections critical,
even if they are accessed earlier in the transaction.
2020-07-23 15:36:19 +02:00
Onder Kalaci 52c0fccb08 Move executor specific logic to a function
Because we're planning to use the same logic, it'd be nice to
use the exact same functions.
2020-07-22 15:09:47 +02:00
Onder Kalaci ff6555299c Unify node sort ordering
The executor relies on WorkerPool, and many other places rely on WorkerNode.
With this commit, we make sure that they are sorted via the same function/logic.
2020-07-22 11:03:25 +02:00
Hanefi Önaldı e534dbae4a
Accept list of values in a supported ALTER ROLE .. SET statement
Some GUCs support a list of values which is indicated by GUC_LIST_INPUT flag.

When an ALTER ROLE .. SET statement is executed, the new configuration
default for affected users and databases is stored in the
setconfig(text[]) column in a pg_db_role_setting record.

If a GUC that supports a list of values is used in an ALTER ROLE .. SET
statement, we need to split the text into items delimited by commas.
2020-07-21 03:49:57 +03:00
Onder Kalaci c25de2cf22 Remove flag from
As it doesn't make any sense anymore
2020-07-20 12:45:05 +02:00
SaitTalhaNisanci b3af63c8ce
Remove task tracker executor (#3850)
* use adaptive executor even if task-tracker is set

* Update check-multi-mx tests for adaptive executor

Basically repartition joins are enabled where necessary. For parallel
tests, max adaptive executor pool size is decreased to 2, otherwise we
would get too many clients error.

* Update limit_intermediate_size test

It seems that when we use adaptive executor instead of task tracker, we
exceed the intermediate result size less in the test. Therefore updated
the tests accordingly.

* Update multi_router_planner

It seems that there is one problem with multi_router_planner when we use
adaptive executor, we should fix the following error:
+ERROR:  relation "authors_range_840010" does not exist
+CONTEXT:  while executing command on localhost:57637

* update repartition join tests for check-multi

* update isolation tests for repartitioning

* Error out if shard_replication_factor > 1 with repartitioning

As we are removing the task tracker, we cannot switch to it if
shard_replication_factor > 1. In that case, we simply error out.

* Remove MULTI_EXECUTOR_TASK_TRACKER

* Remove multi_task_tracker_executor

Some utility methods are moved to task_execution_utils.c.

* Remove task tracker protocol methods

* Remove task_tracker.c methods

* remove unused methods from multi_server_executor

* fix style

* remove task tracker specific tests from worker_schedule

* comment out task tracker udf calls in tests

We were using task tracker udfs to test permissions in
multi_multiuser.sql. We should find some other way to test them, then we
should remove the commented out task tracker calls.

* remove task tracker test from follower schedule

* remove task tracker tests from multi mx schedule

* Remove task-tracker specific functions from worker functions

* remove multi task tracker extra schedule

* Remove unused methods from multi physical planner

* remove task_executor_type related things in tests

* remove LoadTuplesIntoTupleStore

* Do initial cleanup for repartition leftovers

During startup, task tracker would call TrackerCleanupJobDirectories and
TrackerCleanupJobSchemas to clean up leftover directories and job
schemas. With adaptive executor, while doing repartitions it is possible
to leak these things as well. We don't retry cleanups, so it is possible
to have leftover in case of errors.

TrackerCleanupJobDirectories is renamed as
RepartitionCleanupJobDirectories since it is repartition specific now,
however TrackerCleanupJobSchemas cannot be used currently because it is
task tracker specific. The thing is that this function is a no-op
currently.

We should add cleaning up intermediate schemas to the DoInitialCleanup
method when that problem is solved (we might want to solve it in this PR
as well).

* Revert "remove task tracker tests from multi mx schedule"

This reverts commit 03ecc0a681.

* update multi mx repartition parallel tests

* not error with task_tracker_conninfo_cache_invalidate

* not run 4 repartition queries in parallel

It seems that when we run 4 repartition queries in parallel we get too
many clients error on CI even though we don't get it locally. Our guess
is that it is because we open/close many connections without doing much
work, and postgres has some delay closing the connections. Hence even
though connections are removed from the pg_stat_activity, they might
still not be closed. If the above assumption is correct, it is unlikely
for it to happen in practice because:
- There is some network latency in clusters, so this leaves some times
for connections to be able to close
- Repartition joins return some data and that also leaves some time for
connections to be fully closed.

As we don't get this error in our local, we currently assume that it is
not a bug. Ideally this wouldn't happen when we get rid of the
task-tracker repartition methods because they don't do any pruning and
might be opening more connections than necessary.

If this still gives us "too many clients" error, we can try to increase
the max_connections in our test suite (which is 100 by default).

Also there are different places where this error is given in postgres,
but adding some backtrace it seems that we get this from
ProcessStartupPacket. The backtraces can be found in this link:
https://circleci.com/gh/citusdata/citus/138702

* Set distributedPlan->relationIdList when it is needed

It seems that we were setting the distributedPlan->relationIdList after
JobExecutorType is called, which would choose task-tracker if
replication factor > 1 and there is a repartition query. However, it
uses relationIdList to decide if the query has a repartition query, and
since it was not set yet, it would always think it is not a repartition
query and would choose adaptive executor when it should choose
task-tracker.

* use adaptive executor even with shard_replication_factor > 1

It seems that we were already using adaptive executor when
replication_factor > 1. So this commit removes the check.

* remove multi_resowner.c and deprecate some settings

* remove TaskExecution related leftovers

* change deprecated API error message

* not recursively plan single relation repartition subquery

* recursively plan single relation repartition subquery

* test deprecated task tracker functions

* fix overlapping shard intervals in range-distributed test

* fix error message for citus_metadata_container

* drop task-tracker deprecated functions

* put the implementation back to worker_cleanup_job_schema_cache since citus cloud uses it

* drop some functions, add downgrade script

Some deprecated functions are dropped.
Downgrade script is added.
Some gucs are deprecated.
A new guc for repartition joins bucket size is added.

* order by a test to fix flappiness
2020-07-18 13:11:36 +03:00
Hadi Moshayedi 13003d8d05 Use TupleDestination API for partitioning in insert/select. 2020-07-17 09:43:46 -07:00
Nils Dijk d0b6e62c9a
change wording to allowlist and the likes (#3906)
In the same line as #3904

Change wording to better reflect use and remove words that enforce/maintain bias.
2020-07-15 16:24:40 +02:00
Sait Talha Nisanci 510535f558 address feedback 2020-07-13 19:45:02 +03:00
Sait Talha Nisanci db1b78148c send schema creation/cleanup to coordinator in repartitions
We were using the ALL_WORKERS TargetWorkerSet while sending temporary schema
creation and cleanup commands. We (well, mostly I) thought that ALL_WORKERS would also include the coordinator when it is added as a worker. It turns out that it was FILTERING OUT the coordinator even if it is added as a worker to the cluster.

So to have some context here, in repartitions, for each jobId we create
(at least we were supposed to) a schema in each worker node in the cluster. Then we partition each shard table into some intermediate files, which is called the PARTITION step. So after this partition step each node has some intermediate files having tuples in those nodes. Then we fetch the partition files to necessary worker nodes, which is called the FETCH step. Then from the files we create intermediate tables in the temporarily created schemas, which is called a MERGE step. Then after evaluating the result, we remove the temporary schemas(one for each job ID in each node) and files.

If node 1 has file1, and node 2 has file2 after PARTITION step, it is
enough to either move file1 from node1 to node2 or vice versa. So we
prune one of them.

In the MERGE step, if the schema for a given jobID doesn't exist, the
node tries to use the `public` schema if it is a superuser, which was
actually added for testing in the past.

So when we were not sending schema creation commands for each job ID to
the coordinator(because we were using ALL_WORKERS flag, and it doesn't
include the coordinator), we would basically not have any schemas for
repartitions in the coordinator. The PARTITION step would be executed on
the coordinator (because the tasks are generated in the planner part)
and it wouldn't give us any error because it doesn't have anything to do
with the temporary schemas(that we didn't create). But later two things
would happen:

- If by chance the fetch is pruned on the coordinator side, the other
nodes would fetch the partitioned files from the coordinator and execute
the query as expected, because they have all the information.
- If the fetch tasks are not pruned in the coordinator, in the MERGE
step, the coordinator would either error out saying that the necessary
schema doesn't exist, or it would try to create the temporary tables
under the public schema (if it is a superuser). But then if we had the same
task ID with a different jobID it would fail saying that the table already
exists, which is an error we were getting.

In the first case, the query would work okay, but it would still not do
the cleanup, hence we would leave the partitioned files from the
PARTITION step there. Hence ensure_no_intermediate_data_leak would fail.

To make things more explicit and prevent such bugs in the future,
ALL_WORKERS is named as ALL_NON_COORD_WORKERS. And a new flag to return
all the active nodes is added as ALL_DATA_NODES. For the repartition case,
we don't need the nodes that only hold reference tables, but this version
makes the code simpler and there shouldn't be any significant performance
issue with that.
2020-07-13 19:20:15 +03:00
SaitTalhaNisanci 15290bc43b
remove unused worker methods (#4017) 2020-07-10 13:45:55 +03:00
SaitTalhaNisanci 3f50165365
rename TargetWorkerSet enums (#4015)
Rename TargetWorkerSet enums to make them more explicit about what they
mean. Ideally it would be good to treat everything as a node without the
'worker' concept because it makes things complicated. Another
improvement could be to rename TargetWorkerSet as TargetNodeSet but it
goes to renaming many occurrences of Worker, which is probably too big
for this PR.
2020-07-10 11:21:27 +03:00
Jelte Fennema 759e628dd5
Handle some NULL issues that static analysis found (#4001)
Static analysis found some issues where we used the result from
ExtractResultRelationRTE, without checking that it wasn't NULL. It seems
like in all these cases it can never actually be NULL, since we have checked
before that it isn't a SELECT query. So, this PR is mostly to make static
analysis happy (and protect a bit against future changes of the code).
2020-07-09 15:46:42 +02:00
SaitTalhaNisanci 96adce77d6
rename node/worker utilities (#4003)
The names were not explicit about what they do, and we have many
misusages in the codebase, so they are renamed to be more explicit.
2020-07-09 15:30:35 +03:00
Jelte Fennema f6e2f1b1cb
Replace words that have bad associations (#3992)
We had a few words in our codebase that static analysis flagged as having bad
associations.
2020-07-08 14:57:48 +02:00
Hadi Moshayedi 23fa421639 Fix task->fetchedExplainAnalyzePlan memory issue. 2020-07-07 07:58:02 -07:00
citus bot f0693e2f75 Remove unused MaxMasterConnectionCount function 2020-07-07 10:37:57 +02:00
citus bot bdfeb380d3 Fix some more master->coordinator comments 2020-07-07 10:37:53 +02:00
Marco Slot b4fec63bc0 Rename master evaluation to coordinator evaluation 2020-07-07 10:37:41 +02:00
Marco Slot eeffbde8bd Fix pushdown of constants in aggregate queries 2020-06-30 11:41:16 -07:00
Jelte Fennema 392c5e2c34
Fix wrong cancellation message about distributed deadlocks (#3956) 2020-06-30 14:57:46 +02:00
Marco Slot 634d6cf9d7
Improve performance of metadata cache (#3924)
#3866 removed the shard ID hash in metadata_cache.c to simplify cache management, 
but we observed a significant performance regression that was being masked by the
performance improvement provided by #3654 in our benchmarks, but #3654 only 
applies to specific workloads.

This PR brings back the shard ID cache as it existed before #3866 with some extra
 measures to handle invalidation. When we load a table entry, we overwrite 
ShardIdCacheEntry->tableEntry pointers for all the shards in that table, though 
it's possible that the table no longer contains the old shard ID or the table 
entry is never reloaded, which would leave a dangling pointer once the table 
entry is freed. To handle that case, we remove all shard ID cache entries that 
point exactly to that table entry when a table is freed (at the end of the 
transaction or any call to CitusTableCacheFlushInvalidatedEntries).

Co-authored-by: SaitTalhaNisanci <s.talhanisanci@gmail.com>
Co-authored-by: Marco Slot <marco.slot@gmail.com>
Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2020-06-30 12:10:10 +02:00
Hadi Moshayedi 4ed59d2db3 Move more from insert_select_executor to insert_select_planner 2020-06-26 08:08:26 -07:00
Hadi Moshayedi d34c21890f Rename CoordinatorInsertSelect... to NonPushableInsertSelect 2020-06-25 08:55:48 -07:00
Hadi Moshayedi 4e8d79998e Save INSERT/SELECT method in DistributedPlan.
This is so we don't need to calculate it twice in
insert_select_executor.c and multi_explain.c, which can
cause a discrepancy if an update in one of them is not
reflected in the other.
2020-06-25 08:55:48 -07:00
SaitTalhaNisanci f458d1fd1c
Fix/task execution (#3941)
* Not set TaskExecution with adaptive executor

The adaptive executor uses a utility method from the task tracker for
repartition joins; however, the adaptive executor doesn't need taskExecution.
It is only used by the task tracker. This causes a problem when explain
analyze is used, because what taskExecution points to might be
random.

We solve this by not setting taskExecution from adaptive executor. So it
will stay NULL as set by CreateTask.

* use same memory context as task for taskExecution

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2020-06-24 12:10:00 +03:00
Marco Slot 2a3234ca26 Rename masterQuery to combineQuery 2020-06-17 14:14:37 +02:00
Jelte Fennema 0259815d3a
Fix EXPLAIN ANALYZE received data counter issues (#3917)
In #3901 the "Data received from worker(s)" sections were added to EXPLAIN
ANALYZE. After merging, @pykello posted some review comments. This addresses
those comments as well as fixing a few other issues that I found while addressing
them. The things this does:

1. Fix `EXPLAIN ANALYZE EXECUTE p1` to not increase received data on every
   execution
2. Fix `EXPLAIN ANALYZE EXECUTE p1(1)` to not return 0 bytes as received data
   always.
3. Move `EXPLAIN ANALYZE` specific logic to `multi_explain.c` from
   `adaptive_executor.c`
4. Change naming of new explain sections to `Tuple data received from node(s)`.
   Firstly because a task can reference the coordinator too, so "worker(s)" was
   incorrect. Secondly to indicate that this is tuple data and not all network
   traffic that was performed.
5. Rename `totalReceivedData` in our codebase to `totalReceivedTupleData` to
   make it clearer that it's a tuple data counter, not all network traffic.
6. Actually add `binary_protocol` test to `multi_schedule` (woops)
7. Fix a randomly failing test in `local_shard_execution.sql`.
2020-06-17 11:33:38 +02:00
Marco Slot d1bab78d79 Remove master from file hierarchy 2020-06-16 17:49:09 +02:00
Philip Dubé 39400319e6 Defer freeing CitusTableCacheEntry, as there were memory safety issues before
Shard id to index mapping stored in cache entry as there may now be multiple entries alive for a given relation

insert_select_executor: revert copying cache entry, which was a hack added to avoid memory safety issues
2020-06-15 16:20:50 +00:00
Jelte Fennema 927de6d187
Show amount of data received in EXPLAIN ANALYZE (#3901)
Sadly this does not actually work yet for binary protocol data, because
when doing EXPLAIN ANALYZE we send two commands at the same time. This
means we cannot use `SendRemoteCommandParams`, and thus cannot use the
binary protocol. This can still be useful though when using the text
protocol, to find out that a lot of data is being sent.
2020-06-15 16:01:05 +02:00
Marco Slot 4f7989ad8e Rename WorkersContainingAllShards to PlacementsForWorkersContainingAllShards 2020-06-12 18:36:02 -07:00
Marco Slot d953f084db Rename FindRouterWorkerList to CreateTaskPlacementListForShardIntervals 2020-06-12 18:36:01 -07:00
Marco Slot 24feadc230 Handle joins between local/reference/cte via router planner 2020-06-12 18:36:01 -07:00
Halil Ozan Akgül 8c5eb6b7ea
Insert Select Into Local Table (#3870)
* Insert select with master query

* Use relid to set custom_scan_tlist varno

* Reviews

* Fixes null check

Co-authored-by: Marco Slot <marco.slot@gmail.com>
2020-06-12 17:06:31 +03:00
Jelte Fennema 0e12d045b1
Support use of binary protocol in between nodes (#3877)
This can save a lot of data being sent in some cases, thus improving
performance where bandwidth between nodes is the bottleneck.
There are some issues with enabling this as the default, so that's currently not done.
2020-06-12 15:02:51 +02:00
Nils Dijk da8f2b0134
Feature: tdigest aggregate (#3897)
DESCRIPTION: Adds support to partially push down tdigest aggregates

tdigest extensions: https://github.com/tvondra/tdigest

This PR implements the partial pushdown of tdigest calculations when possible. The extension adds a tdigest type which can be combined into the same structure. There are several aggregate functions that can be used to get:
 - a quantile
 - a list of quantiles
 - the quantile of a hypothetical value
 - a list of quantiles for a list of hypothetical values

These functions can work on both values and tdigest types.

Since we can create tdigest values either by combining them, or based on a group of values, we can rewrite the aggregates in such a way that most of the computation gets delegated to the shards. This both speeds up the percentile calculations, because the values don't have to be sorted, and makes the transfer size from the shards to the coordinator significantly smaller.
2020-06-12 13:50:28 +02:00
Philip Dubé 1722d8ac8b Allow routing modifying CTEs
We still recursively plan some cases, eg:
- INSERTs
- SELECT FOR UPDATE when reference tables in query
- Everything must be same single shard & replication model
2020-06-11 15:14:06 +00:00
Hadi Moshayedi 7c52c6edb0 CTE statistics in EXPLAIN ANALYZE 2020-06-11 02:39:59 -07:00
Hadi Moshayedi bb96ef5047 Do the EXPLAIN ANALYZE at the same time as execution, to avoid executing twice.
We wrap worker tasks in worker_save_query_explain_analyze() so we can fetch
their explain output later by a call to worker_last_saved_explain_analyze().

Fixes #3519
Fixes #2347
Fixes #2613
Fixes #621
2020-06-11 01:55:57 -07:00
Marco Slot 1243b6a948 Execute shard creation as utility tasks 2020-06-10 11:29:49 +02:00
Onder Kalaci 06461ca55f Coerce types properly for INSERT
Also, unify similar code-paths to rely on more accurate function.
2020-06-10 10:40:28 +02:00
Hadi Moshayedi 5cdfa9f571 Implement EXPLAIN ANALYZE udfs.
Implements worker_save_query_explain_analyze and worker_last_saved_explain_analyze.

worker_save_query_explain_analyze executes and returns results of query while
saving its EXPLAIN ANALYZE to be fetched later.

worker_last_saved_explain_analyze returns the saved EXPLAIN ANALYZE result.
2020-06-09 10:02:05 -07:00
Onur Tirtir a4f1c41391
Implement GetQueryLockMode helper (#3860)
If we want to get necessary lockmode for a relation RangeVar within
a query, we can get the lockmode easily from the RangeVar itself (if
pg version >= 12).

However, if we want to decide the lockmode appropriate for the
"query", we can derive this information by using GetQueryLockMode
according to the code comment from RangeTblEntry->rellockmode.
2020-06-09 13:08:44 +03:00
Hadi Moshayedi 198d5d8b0f typedef TupleDestination once 2020-06-08 20:38:28 -07:00
Hadi Moshayedi 0bfd39ea52 Implement TupleDestination interface.
Implements a new `TupleDestination` interface to allow custom tuple processing per task.

This can be especially useful if a task contains multiple queries. An example of this is EXPLAIN
ANALYZE, where it needs to add some UDF calls to the query to fetch the explain output
from the worker after fetching the actual query results.
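
A sketch of such an interface; the exact signatures here are assumptions:

```
/* per-task tuple sink: tasks can carry multiple queries, so both
 * callbacks take a query number */
typedef struct TupleDestination TupleDestination;

struct TupleDestination
{
	/* called for every tuple a task's query produces */
	void (*putTuple)(TupleDestination *self, Task *task,
					 int placementIndex, int queryNumber,
					 HeapTuple heapTuple, uint64 tupleLibpqSize);

	/* returns the tuple descriptor for a given query of the task */
	TupleDesc (*tupleDescForQuery)(TupleDestination *self, int queryNumber);
};
```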
2020-06-05 17:47:40 -07:00
SaitTalhaNisanci d0f47eb338
Check the removeType in IsDropCitusStmt (#3859)
We should check the remove type in IsDropCitusStmt because if the remove
type is not OBJECT_EXTENSION then the stored objects in
dropStmt->objects may not be of type Value. This was crashing PG-13.

Also rename the method as IsDropCitusExtensionStmt.
2020-06-05 20:49:54 +03:00
Onur Tirtir f7224a12f2
Implement PushOverrideEmptySearchPath (#3874)
To reduce code duplication, implement a function that pushes a search_path
of NIL and sets addCatalog to true, so that all objects outside of
pg_catalog will be schema-prefixed.
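
A sketch using PostgreSQL's OverrideSearchPath API; this is close to what the description implies, though the exact body is an assumption:

```
/* push an empty search_path (plus pg_catalog), so deparsed objects
 * outside pg_catalog come out schema-qualified; the caller is expected
 * to later call PopOverrideSearchPath() */
void
PushOverrideEmptySearchPath(MemoryContext memoryContext)
{
	OverrideSearchPath *overridePath = GetOverrideSearchPath(memoryContext);
	overridePath->schemas = NIL;
	overridePath->addCatalog = true;
	PushOverrideSearchPath(overridePath);
}
```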
2020-06-05 19:23:59 +03:00
Onur Tirtir f3f711e097 Implement IndexIsImpliedByAConstraint 2020-06-05 15:33:54 +03:00
Onur Tirtir dfcc18468c Error out for unsupported trigger objects
Error out if creating a citus table from a table having triggers.
Error out for CREATE TRIGGER commands that are run on citus tables.
2020-05-31 23:10:01 +03:00
Onur Tirtir 6e6bc155a9 Implement methods to process & recreate triggers on citus tables 2020-05-31 15:28:17 +03:00
Philip Dubé c0515dcd67 This prepares for routing modifying CTEs, where modLevel should not be used to infer whether a plan is a select or not
SELECT_TASK is renamed to READ_TASK as a SELECT with modifying CTEs will be a MODIFYING_TASK

RouterInsertJob: Assert originalQuery->commandType == CMD_INSERT
CreateModifyPlan: Assert originalQuery->commandType != CMD_SELECT

Remove unused function IsModifyDistributedPlan

DistributedExecution, ExecutionParams, DistributedPlan: Rename hasReturning to expectResults
SELECTs set expectResults to true

Rename CreateSingleTaskRouterPlan to CreateSingleTaskRouterSelectPlan
2020-05-20 17:26:12 +00:00
Onur Tirtir 79a688ffe0 Refactor the methods accessing to pg_constraint
Implement internal functions to access pg_constraint
and utilize them in existing foreign key checks.
2020-05-20 17:27:17 +03:00
SaitTalhaNisanci 22c903b151
remove ExecuteUtilityTaskListWithoutResults (#3696)
This PR removes ExecuteUtilityTaskListWithoutResults and uses the same
path for local execution via ExecuteTaskListExtended.
ExecuteUtilityTaskList is added. ExecuteLocalTaskListExtended now has a
parameter for utility commands so that it can call the right method. In
order not to change the existing calls,
ExecuteTaskListExtendedInternal is added, which is the main method that
runs the execution, via local and remote execution.
2020-05-07 13:30:50 +03:00
Marco Slot 6ce2803777 Make sure we don't wrap GROUP BY expressions in any_value 2020-05-05 05:12:45 +02:00
Onder Kalaci 0cb7ab2d05 Explicitly mark queries in physical planner for [not] having parameters
The physical planner doesn't support parameters. If the parameters have already
been resolved when the physical planner is handling the queries, mark it.
The reason is that the executor is unaware of this, and sends the parameters
along with the worker queries, which fails for composite types.

(See `DissuadePlannerFromUsingPlan()` for the details of parameter resolving)
2020-04-24 12:49:43 +02:00
Onur Tirtir b8dd8f50d1
Fix build issue in GCC 10 (#3790)
As reported in #3787, we were having issues while building citus with "GCC Red Hat 10" (maybe in some other versions of gcc as well).
Fixes "multiple definition of 'CitusNodeTagNames'" error by explicitly specifying storage of CitusNodeTagNames to be extern.
2020-04-22 16:41:34 +03:00
Philip Dubé c0a95a3adb Copy data from CitusTableCacheEntry more often
This copies over fixes from reference counting branch,
all CitusTableCacheEntry data may be freed when a GetCitusTableCacheEntry call occurs for its relationId

This fix is not complete, but reference counting is being deferred until 9.4

CopyShardInterval: remove dest parameter, always return newly allocated object
2020-04-17 14:17:18 +00:00
Önder Kalacı a919f09c96
Remove the entries from the shared connection counter hash when no connections remain (#3775)
We initially considered removing entries just before any change to
pg_dist_node. However, that ended-up being very complex and making
MX even more complex.

Instead, we're switching to a simpler solution, where we remove entries
when the counter gets to 0.

With certain workloads, this may have some performance penalty. But, two
notes on that:
 - When counter == 0, it implies that the cluster is not busy
 - With cached connections, that's not possible
2020-04-17 17:14:58 +03:00
SaitTalhaNisanci a9a3be15cc
introduce TASK_QUERY_NULL task type (#3774)
When we call SetTaskQueryString we would set the task type to
TASK_QUERY_TEXT, and some parts of the codebase rely on the fact that if
TASK_QUERY_TEXT is set, the data can be read safely. However if
SetTaskQueryString is called with a NULL taskQueryString this can cause
crashes. In that case taskQueryType will simply be set to
TASK_QUERY_NULL.
2020-04-17 14:59:22 +03:00
Nils Dijk 1d6ba1d09e
Refactor alter role to work on distributed roles (#3739)
DESCRIPTION: Alter role only works for citus managed roles

Alter role was implemented before we implemented good role management that hooks into the object propagation framework. This is a refactor of all alter role commands that have been implemented to
 - be on by default
 - only work for supported roles
 - make the citus extension owner a supported role

Instead of distributing the alter role commands for roles at the beginning of node activation, it now _only_ executes the alter role commands for all users in all databases and in the current database.

In preparation of full role support small refactors have been done in the deparser.

Earlier tests targeting other roles than the citus extension owner have been either slightly changed or removed to be put back where we have full role support.

Fixes #2549
2020-04-16 12:23:27 +02:00
Marco Slot 8b83306a27 Issue worker messages with the same log level 2020-04-14 21:08:25 +02:00
SaitTalhaNisanci 132efdbc56
add execution params struct (#3747)
We had 9+ parameters in some of the functions related to execution.
Execution params is created to simplify this a bit so that we can set
only the fields that we are interested in and it is easier to read.
2020-04-14 14:32:40 +03:00