In our test structure, we have been passing postgres configurations to
the server from the command line. This causes problems once the options
exceed a certain length: the server fails to start, and understanding
why it failed is hard because there is no helpful error message.
This commit changes this to write the settings directly to the postgres
configuration file. This way we can add as many postgres settings as we
want without worrying about the length problem.
Multi-row & router INSERTs were crashing with local execution if at
least one of the DEFAULT columns was not specified in the VALUES list.
This was because the changes we make on query->values_lists and
query->targetList were sufficient for deparsing the given INSERT for
remote execution, but not sufficient for local execution.
With this commit, DEFAULT value normalization for multi-row & router
INSERTs is fixed by adding dummy column references for unspecified
DEFAULT columns.
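A minimal sketch of the previously crashing pattern (table and column
names are hypothetical):

```sql
-- hypothetical distributed table; "counter" has a DEFAULT
CREATE TABLE items (key int, value text, counter int DEFAULT 0);
SELECT create_distributed_table('items', 'key');

-- multi-row INSERT that leaves the DEFAULT column unspecified;
-- when the target shard is local, this used to crash during
-- local execution even though remote deparsing worked fine
INSERT INTO items (key, value) VALUES (1, 'a'), (1, 'b');
```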
Citus has logic to truncate long shard names to prevent various
issues, including self-deadlocks. However, for partitioned tables,
when an index is created on the parent table, the index names on the
partitions are auto-generated by Postgres. We use the same Postgres
function to generate the index names on the shards of the partitions.
If the length exceeds the limit, we switch to sequential execution
mode.
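For illustration, a hedged sketch of the scenario (all names
hypothetical):

```sql
-- hypothetical distributed, partitioned table
CREATE TABLE events_with_quite_a_long_name (tenant_id int, created_at date)
PARTITION BY RANGE (created_at);
SELECT create_distributed_table('events_with_quite_a_long_name', 'tenant_id');

CREATE TABLE events_with_quite_a_long_name_2020
PARTITION OF events_with_quite_a_long_name
FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');

-- Postgres auto-generates the index names on the partitions; with the
-- shardId suffix appended they can exceed the 63-byte identifier
-- limit, in which case Citus switches to sequential execution mode
CREATE INDEX an_index_with_quite_a_long_name_on_the_parent
ON events_with_quite_a_long_name (created_at);
```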
In the postmaster's execution of _PG_init, IsUnderPostmaster will be
false, and we want to do the cleanup only at that time; otherwise there
is a chance that parallel queries are running and we might clean up
things that are still in use.
After the connection timeout, we fail the session/pool. However, the
underlying connection can still be trying to connect. That is dangerous
because the new placement executions are already in place. The executor
cannot handle the situation where multiple EXECUTION_ORDER_ANY task
executions succeed.
Adding a regression test doesn't seem easily doable. To reproduce the
issue (a SQL sketch follows the steps):
- Add 2 worker nodes
- create a reference table
- set citus.node_connection_timeout to 1ms (requires code change)
- Continuously execute `SELECT count(*) FROM ref_table`
- Sometime later, you hit an out-of-bounds array access in
`ScheduleNextPlacementExecution()`, crashing the backend.
- The reason is that sometimes the first connection is established
successfully while the executor is already trying to execute the
query on the second node.
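The same repro as a hedged SQL sketch (assumes the two workers are
already added):

```sql
-- reference table placed on both workers
CREATE TABLE ref_table (a int);
SELECT create_reference_table('ref_table');

-- setting this to 1ms requires the code change noted in the steps above
SET citus.node_connection_timeout TO 1;

-- run continuously; occasionally the first connection is established
-- while the executor has already moved on to the second placement,
-- and the backend crashes in ScheduleNextPlacementExecution()
SELECT count(*) FROM ref_table;
```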
We currently do not support volatile functions in UPDATE/DELETE
statements because the function evaluation logic does not know how to
distinguish volatile functions (that need to be evaluated per row) from
stable functions (that need to be evaluated per query), and it is also
not safe to push volatile functions down to replicated tables.
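For example, with a hypothetical distributed table, this is the kind of
statement that is rejected:

```sql
CREATE TABLE dist_table (key int, value float);
SELECT create_distributed_table('dist_table', 'key');

-- random() is volatile: it would need per-row evaluation, and pushing
-- it down to replicated placements could write divergent values, so
-- Citus errors out instead of executing it
UPDATE dist_table SET value = random() WHERE key = 1;
```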
* Add sort method parameter for regression tests
* Fix check-style
* Change sorting method parameters to enum
* Polish
* Add task fields to OutTask
* Add test into multi_explain
* Fix isolation test
As the previous versions of Citus don't know how to handle citus local
tables, we should prevent downgrading from 9.5 to older versions if any
citus local tables exist.
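A hedged sketch of the kind of check the downgrade path needs (the
query is illustrative, not the actual extension script): citus local
tables appear in pg_dist_partition with partmethod 'n' but without the
reference-table replication model.

```sql
-- illustrative: reference tables are also partmethod 'n' but use the
-- 2PC replication model ('t'); any other partmethod 'n' entry is a
-- citus local table, so the downgrade must be blocked
SELECT count(*) > 0 AS has_citus_local_tables
FROM pg_dist_partition
WHERE partmethod = 'n' AND repmodel <> 't';
```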
* Merge enterprise branch if it exists
We should merge the enterprise branch if it exists in the check
enterprise merge job, otherwise the following can happen:
- there is some change on community that breaks the compilation on
enterprise without creating any conflicts
- we fix the compilation issue by opening a branch on enterprise
- the job doesn't see the enterprise-specific fix because it doesn't
try to merge the enterprise branch if there are no conflicts
* Update ci/check_enterprise_merge.sh
Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
* Simplify the steps
Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
Pushing down a CALL to the node on which the CALL is being executed is
dangerous and could lead to infinite recursion.
When the coordinator was added as a worker, Citus was preventing this
by chance: the coordinator was marked as a "not metadatasynced" node
in pg_dist_node, which prevented CALL/function delegation from happening.
With this commit, we do the following:
- Fix the metadatasynced column for the coordinator in pg_dist_node
- Prevent pushdown of a function/procedure to the same node on which
the function/procedure is being executed. Today, we do not sync
pg_dist_object (e.g., distributed function metadata) to the
worker nodes. But even if we do in the future, the function call
delegation check would still prevent the infinite recursion.
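For context, the coordinator's metadata row can be inspected like this
(group 0 is the coordinator):

```sql
-- before the fix, delegation was blocked only because this flag
-- happened to be wrong for the coordinator; after the fix the flag is
-- accurate and an explicit check stops self-delegation instead
SELECT nodename, nodeport, metadatasynced
FROM pg_dist_node
WHERE groupid = 0;
```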
* Don't take ShareUpdateExclusiveLock on pg_dist_transaction
We were taking ShareUpdateExclusiveLock on pg_dist_transaction during
recovery to prevent multiple recoveries happening concurrently. VACUUM
(not FULL) also takes ShareUpdateExclusiveLock, and they can conflict.
It seems that VACUUM will skip the table if there is a conflicting lock
already taken, unless it is vacuuming to prevent xid wraparound, in
which case there can be a deadlock. I guess the deadlock happens if:
- VACUUM takes a lock on pg_dist_transaction and is run to prevent
xid wraparound
- The transaction in the maintenance daemon tries to take a lock but
cannot, as that conflicts with the lock acquired by VACUUM
- The transaction in the maintenance daemon has a very old xid hence
VACUUM cannot proceed.
If we instead take a row exclusive lock in transaction recovery, it
doesn't conflict with VACUUM, so VACUUM can proceed and the deadlock is
resolved. To prevent concurrent transaction recoveries, an advisory
lock is taken in ShareUpdateExclusiveLock mode as before.
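A sketch of the new locking scheme in SQL terms; the real change is in
C, and pg_advisory_xact_lock (exclusive) stands in here for the
advisory lock the code takes in ShareUpdateExclusiveLock mode, with an
illustrative key:

```sql
BEGIN;
-- serialize concurrent transaction recoveries without locking the
-- table itself in a mode that conflicts with VACUUM
SELECT pg_advisory_xact_lock(42);  -- illustrative lock key

-- RowExclusiveLock does not conflict with VACUUM's
-- ShareUpdateExclusiveLock, so VACUUM can always proceed
LOCK TABLE pg_dist_transaction IN ROW EXCLUSIVE MODE;

-- ... recover prepared transactions here ...
COMMIT;
```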
* Use CITUS_OPERATIONS tag
* Don't allow removing a single node with ref tables
We should not allow removing a node if it is the only node in the
cluster and there is data on it. We had this check for distributed
tables but not for reference tables.
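A sketch of the now-rejected scenario (host and port are hypothetical):

```sql
-- single-node cluster whose only node holds reference table data
CREATE TABLE ref_table (a int);
SELECT create_reference_table('ref_table');

-- with this change, removing the last node holding data errors out
SELECT master_remove_node('localhost', 5432);
```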
* Update src/test/regress/expected/single_node.out
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
* Update src/test/regress/sql/single_node.sql
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Introduce citus local tables
The commits in this PR are merged from other sub-PRs:
* community/#3852: Brings lazy&fast table creation logic for create_citus_local_table udf
* community/#3995: Brings extended utility command support for citus local tables
* community/#4133: Brings changes in planner and in several places to integrate citus local tables into our distributed execution logic
We are introducing citus local tables, which are a new table type in citus.
To be able to create a citus local table, first we need to add the coordinator as a
worker node.
Then, we can create a citus local table via SELECT create_citus_local_table(<tableName>).
Calling this udf from the coordinator will actually create a single-shard table whose shard
is on the coordinator.
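A sketch of the flow (host, port and table name are illustrative):

```sql
-- add the coordinator itself as a worker node, in group 0
SELECT master_add_node('localhost', 5432, groupid => 0);

-- turn an existing postgres table into a citus local table
CREATE TABLE accounts (id bigserial PRIMARY KEY, name text);
SELECT create_citus_local_table('accounts');
```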
Also, from the citus metadata perspective, for citus local tables:
* partitionMethod is set to DISTRIBUTE_BY_NONE (like reference tables), and
* replicationModel is set to the current value of citus.replication_model, which
cannot be REPLICATION_MODEL_2PC, as that is only used for reference tables
internally.
Note that currently we support creating citus local tables only from postgres tables
living on the coordinator.
That means, it is not allowed to execute this udf from worker nodes, nor to move the
shard of a citus local table to any other node.
Also, the run-time of calling the create_citus_local_table udf does not depend on the
size of the relation; that means creating citus local tables is actually a non-blocking
operation.
This is because, instead of copying the data to a new shard, this udf just does the
following:
* convert the input postgres table to the single shard of the citus local table by
suffixing the shardId to its name, constraints, indexes, triggers etc.,
* create a shell table for the citus local table on the coordinator and on mx-worker
nodes when metadata syncing is enabled,
* create the necessary objects on the shell table.
Here, we should also note that we can execute queries/DML from mx worker nodes,
as citus local tables are already first-class citus tables.
Even more, we brought trigger support for citus local tables.
That means, we can define triggers on citus local tables, so users can run custom
functions that might even modify other citus tables and other postgres tables.
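For example, an ordinary plpgsql trigger on a citus local table (all
names hypothetical, continuing the accounts example above):

```sql
CREATE TABLE accounts_audit (account_id bigint, changed_at timestamptz);

CREATE FUNCTION record_account_change() RETURNS trigger AS $$
BEGIN
  -- trigger bodies may touch other citus tables and postgres tables
  INSERT INTO accounts_audit VALUES (NEW.id, now());
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_audit_trigger
AFTER INSERT ON accounts
FOR EACH ROW EXECUTE FUNCTION record_account_change();
```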
Other than trigger support, citus local tables can also be involved in foreign key
relationships with reference tables.
Here the only restriction is that foreign keys from reference tables to citus local
tables cannot have behaviors other than RESTRICT & NO ACTION.
Other than that, foreign keys between citus local tables and reference tables just
work fine.
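For example (again with hypothetical names):

```sql
CREATE TABLE currencies (code text PRIMARY KEY);
SELECT create_reference_table('currencies');

-- citus local table -> reference table: works as usual; the
-- RESTRICT / NO ACTION restriction applies only in the other direction
ALTER TABLE accounts ADD COLUMN currency text;
ALTER TABLE accounts
ADD CONSTRAINT accounts_currency_fkey
FOREIGN KEY (currency) REFERENCES currencies (code) ON DELETE CASCADE;
```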
All in all, citus local tables are actually just local tables living on the coordinator,
but natively accessible from other nodes like other first-class citus tables. This
enables us to set foreign key constraints between very big coordinator tables and
reference tables without having to replicate the local tables' data to the worker
nodes.