Errors thrown in the COMMIT handler will cause Postgres to segfault;
there's nothing it can do to abort the transaction by the time that
handler is called!
RemoveIntermediateResultsDirectory is problematic for two reasons:
- It has calls to ereport(ERROR, ...) which have been known to trigger
- It makes memory allocations, which raise ERRORs when they fail
Once the COMMIT process has begun we don't use the intermediate results,
so it's safe to remove them a little earlier in the process. A failure
here will abort the transaction. That's heavier-handed than necessary,
since removing the results isn't that important, but it's still better
than a crash.
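As a sketch of the shape this takes: Postgres transaction callbacks
receive an XactEvent, and XACT_EVENT_PRE_COMMIT is the last point at
which ereport(ERROR) still aborts the transaction cleanly. The callback
and helper names below are illustrative, not the exact Citus code.

    #include "access/xact.h"

    static void
    CoordinatedTransactionCallback(XactEvent event, void *arg)
    {
        switch (event)
        {
            case XACT_EVENT_PRE_COMMIT:
                /* ereport(ERROR) here still aborts the transaction cleanly */
                RemoveIntermediateResultsDirectory();
                break;

            case XACT_EVENT_COMMIT:
                /* past the point of no return: nothing here may throw */
                break;

            default:
                break;
        }
    }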
In case a failure happens while a transaction is being rolled back,
we used to hit an assertion ensuring there is no pending
activity on the connection. However, that's not true after the
changes in #2031. Thus, we've replaced the assertion with a more
generic function call that consumes any pending activity, if any
exists.
- changes in ruleutils_11.c are reflected
- the vacuum statement API change is handled. We now allow
multi-table vacuum commands.
- some other function header changes are reflected
- API conflicts between PG11 and earlier versions
are handled by adding shims in version_compat.h (see the sketch
after this list)
- various regression tests are fixed due to output and
functionality changes in PG11
- no changes are made to support new features in PG11;
they need to be handled by new commits
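To illustrate the shim approach, a minimal sketch in the style of
version_compat.h; the example is based on PG11 replacing
AclObjectKind with ObjectType, and the shim macro name is
illustrative.

    #if PG_VERSION_NUM >= 110000
    /* PG11 dropped AclObjectKind in favor of ObjectType */
    #define ACLCHECK_OBJECT_TABLE OBJECT_TABLE
    #else
    #define ACLCHECK_OBJECT_TABLE ACL_KIND_CLASS
    #endif

    /* callers then stay version-agnostic */
    aclcheck_error(aclresult, ACLCHECK_OBJECT_TABLE, relationName);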
VLAs aren't supported by Visual Studio.
- Remove all existing instances of VLAs.
- Add a flag, -Werror=vla, which makes gcc refuse to compile if we add
VLAs in the future.
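The rewrite follows one simple pattern; a minimal sketch with
illustrative variable names:

    /* before: a VLA, rejected by Visual Studio and now by -Werror=vla */
    int nodeIndexArray[nodeCount];

    /* after: allocate from the current memory context instead */
    int *nodeIndexArray = (int *) palloc(nodeCount * sizeof(int));

    /* ... use nodeIndexArray as before ... */

    pfree(nodeIndexArray);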
- variable-length arrays (VLAs) do not work with Visual Studio
- fix an off-by-one error. We incorrectly assumed there would always
be at least as many edges as there were nodes.
- refactor: reduce scope of transactionNodeStack by moving it into the
function which uses it.
- refactor: break up the distinct uses of currentStackDepth into
separate variables.
It's against our coding convention to call functions inside parameter
lists; when single-stepping with a debugger it's difficult to determine
what the function returned.
That alone wouldn't be a good enough reason to change this code, but
while porting Citus to Windows I ran into this line of code.
assign_distributed_transaction_id was called with a weird timestamp,
and I wasn't able to find the problem without first making this change.
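For illustration, the convention amounts to the following; the
argument names are illustrative, and GetCurrentTimestamp() stands in
for whatever expression produced the value.

    /* discouraged: the intermediate value never appears in a debugger frame */
    assign_distributed_transaction_id(nodeId, transactionNumber,
                                      GetCurrentTimestamp());

    /* preferred: the value can be inspected before the call is made */
    TimestampTz timestamp = GetCurrentTimestamp();
    assign_distributed_transaction_id(nodeId, transactionNumber, timestamp);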
With this fix, we traverse the graph with DFS, as was originally
intended. Note that, before the fix, we traversed the graph with
BFS, which might lead to killing some unrelated backend that is not
involved in the distributed deadlock.
This commit introduces a new GUC to limit the size of the
intermediate results that we handle when we use the
read_intermediate_result function for CTEs and complex subqueries.
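Registering such a limit follows the usual custom-GUC pattern; a
minimal sketch, assuming the GUC is named
citus.max_intermediate_result_size and is registered from _PG_init()
(the variable name and defaults are illustrative):

    static int MaxIntermediateResult = 1048576;  /* in KB; -1 disables the limit */

    /* in _PG_init() */
    DefineCustomIntVariable(
        "citus.max_intermediate_result_size",
        gettext_noop("Sets the maximum size of intermediate results "
                     "for CTEs and complex subqueries."),
        NULL,
        &MaxIntermediateResult,
        1048576, -1, INT_MAX,
        PGC_USERSET,
        GUC_UNIT_KB,
        NULL, NULL, NULL);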
Postgres provides OS-agnostic formatting macros for
formatting 64-bit numbers. Replaced %ld and %lu with
INT64_FORMAT and UINT64_FORMAT, respectively.
Also found some incorrect usages of formatting
flags and fixed them.
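A representative before/after (a sketch; the variable and message are
illustrative):

    uint64 shardId = 102008;

    /* before: %lu breaks on Windows, where long is 32 bits even in 64-bit builds */
    appendStringInfo(buffer, "shard %lu", shardId);

    /* after: UINT64_FORMAT expands to the right conversion for the platform */
    appendStringInfo(buffer, "shard " UINT64_FORMAT, shardId);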
The autovacuum process cancels itself if any modification starts
on the table, in order to avoid blocking regular Postgres
sessions. That's normal and expected. Thus, any locks held by the
autovacuum process cannot be involved in a distributed deadlock,
since they'll be released if needed.
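A minimal sketch of the check, assuming we hold the blocking
process's PGPROC; note that in the Postgres sources of this era the
flag lives in PGPROC->vacuumFlags (later renamed statusFlags), and
the function name is illustrative.

    #include "storage/proc.h"

    /* returns true when the given backend is an autovacuum worker */
    static bool
    IsAutoVacuumProcess(PGPROC *proc)
    {
        return (proc->vacuumFlags & PROC_IS_AUTOVACUUM) != 0;
    }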
These locks are held for a very short duration and cannot
contribute to a deadlock. Speculative locks are used by Postgres
as an internal notification mechanism among transactions.
Short-term share/exclusive page-level locks are used for
read/write access. Locks are released immediately after
each index row is fetched or inserted.
Since those locks cannot lead to any deadlocks, it's safe
to ignore them in the distributed deadlock detection.
We should skip the process if it is blocked on a relation
extension lock, since those locks are held for a short duration
while the relation is actually extended on disk and
released as soon as the extension is done. Thus, recording
such waits in our lock graphs could lead to detecting false
distributed deadlocks.
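Taken together, these cases amount to filtering wait edges by lock
tag type before adding them to the lock graph; a minimal sketch (the
function name is illustrative, the LOCKTAG_* constants are
Postgres's own):

    #include "storage/lock.h"

    /* should this wait edge be recorded in the lock graph? */
    static bool
    IsConflictRelevantForDeadlock(const LOCKTAG *lockTag)
    {
        switch ((LockTagType) lockTag->locktag_type)
        {
            case LOCKTAG_RELATION_EXTEND:    /* released right after extending */
            case LOCKTAG_PAGE:               /* short-term index page locks */
            case LOCKTAG_SPECULATIVE_TOKEN:  /* inter-transaction notification */
                return false;

            default:
                return true;
        }
    }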
We send multiple commands to the worker when starting a transaction.
Previously we only checked the result of the first command, the
transaction 'BEGIN', which always succeeds. Failures in any of the
following commands were not checked.
With this commit, we make sure all command results are checked.
If there is any error, we report the first error found.
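A minimal sketch, assuming libpq: drain every pending result,
remember the first error, and raise it once all results are consumed.

    PGresult *result = NULL;
    char *firstErrorMessage = NULL;

    while ((result = PQgetResult(connection)) != NULL)
    {
        if (PQresultStatus(result) != PGRES_COMMAND_OK &&
            firstErrorMessage == NULL)
        {
            firstErrorMessage = pstrdup(PQresultErrorMessage(result));
        }

        PQclear(result);
    }

    if (firstErrorMessage != NULL)
    {
        ereport(ERROR, (errmsg("%s", firstErrorMessage)));
    }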
If, after the distributed deadlock detection decides to cancel
a backend, that backend is terminated/killed/cancelled
externally, we might be accessing a NULL pointer. This commit
prevents that case by ignoring the current distributed deadlock.
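A minimal sketch of the guard, assuming the victim backend is looked
up by PID; BackendPidGetProc is Postgres's own (it returns NULL when
no such backend exists), while the surrounding names are illustrative.

    PGPROC *victimProc = BackendPidGetProc(victimPid);

    if (victimProc == NULL)
    {
        /* the backend exited or was killed externally; skip this deadlock */
        return false;
    }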
This change adds support for SAVEPOINT, ROLLBACK TO SAVEPOINT, and RELEASE SAVEPOINT.
When transaction connections are not established yet, savepoints are kept in a stack and sent to the worker when the connection is later established. After establishing connections, savepoint commands are sent as they arrive.
This change fixes #1493.
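A minimal sketch of the replay step; the helper and type names are
illustrative, not the exact Citus code.

    /* replay queued savepoints once a worker connection is established */
    static void
    ReplaySavepointStack(MultiConnection *connection, List *savepointIdStack)
    {
        ListCell *savepointIdCell = NULL;

        foreach(savepointIdCell, savepointIdStack)
        {
            SubTransactionId savepointId = lfirst_int(savepointIdCell);
            StringInfo savepointCommand = makeStringInfo();

            appendStringInfo(savepointCommand, "SAVEPOINT savepoint_%u",
                             savepointId);
            ExecuteCriticalRemoteCommand(connection, savepointCommand->data);
        }
    }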
We added a new field to the transaction id that is set to true only
for transactions initiated on the coordinator. This is only
useful for MX, in order to distinguish the transaction that started
the distributed transaction on the coordinator from that same
transaction's worker queries running on the same node.
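For illustration, the identifier now looks roughly like this; the
field names are a sketch, not the exact Citus struct.

    typedef struct DistributedTransactionId
    {
        int initiatorNodeIdentifier;
        bool transactionOriginator;  /* true only on the coordinator that
                                      * initiated the distributed transaction */
        uint64 transactionNumber;
        TimestampTz timestamp;
    } DistributedTransactionId;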
We added a new GUC, citus.log_distributed_deadlock_detection,
which is off by default. When set to on, we log some debug messages
related to distributed deadlock detection to the server logs.
This commit adds all the necessary pieces to do the distributed
deadlock detection.
Each distributed transaction is already assigned a distributed
transaction id, introduced in 3369f3486f. The dependencies among
the distributed transactions are gathered in 80ea233ec1.
With this commit, we implement a DFS (depth-first search) on the
dependency graph and search for cycles. Finding a cycle reveals
a distributed deadlock.
Once we find a deadlock, we examine the path on which the cycle
exists and cancel the youngest distributed transaction on it.
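A minimal sketch of the cycle search; the types and helpers are
illustrative, not the Citus implementation.

    typedef struct TransactionNode
    {
        uint64 transactionNumber;
        List *waitsFor;    /* list of TransactionNode * this node waits on */
        bool onDfsPath;    /* is this node on the current DFS path? */
    } TransactionNode;

    /* returns true when a cycle (a deadlock) is reachable from node */
    static bool
    DeadlockCycleExists(TransactionNode *node, List **cyclePath)
    {
        ListCell *neighborCell = NULL;

        if (node->onDfsPath)
        {
            return true;    /* back edge to the current path: a cycle */
        }

        node->onDfsPath = true;
        *cyclePath = lappend(*cyclePath, node);

        foreach(neighborCell, node->waitsFor)
        {
            TransactionNode *neighbor = (TransactionNode *) lfirst(neighborCell);

            if (DeadlockCycleExists(neighbor, cyclePath))
            {
                return true;    /* cyclePath now holds the offending path */
            }
        }

        /* backtrack */
        node->onDfsPath = false;
        *cyclePath = list_delete_ptr(*cyclePath, node);

        return false;
    }

The caller would run this from each transaction node and, when it
returns true, pick the youngest transaction on cyclePath to cancel.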
Note that we're not yet enabling deadlock detection by default
with this commit.