diff --git a/images/2pc-recovery.png b/images/2pc-recovery.png
new file mode 100755
index 000000000..8fbe80124
Binary files /dev/null and b/images/2pc-recovery.png differ
diff --git a/images/deadlock-detection.png b/images/deadlock-detection.png
new file mode 100755
index 000000000..3a7eeb25b
Binary files /dev/null and b/images/deadlock-detection.png differ
diff --git a/src/backend/distributed/README.md b/src/backend/distributed/README.md
index 2db8fb700..a7c0f258e 100644
--- a/src/backend/distributed/README.md
+++ b/src/backend/distributed/README.md
@@ -1983,7 +1983,9 @@ Most multi-tenant and high-performance CRUD workloads only involve transactions
For transactions that write to multiple nodes, Citus uses the built-in two-phase commit (2PC) machinery in PostgreSQL. In the pre-commit callback, a “prepare transaction” command is sent over all connections to worker nodes with open transaction blocks, and then a commit record is stored on the coordinator. In the post-commit callback, “commit prepared” commands are sent to commit on the worker nodes. The maintenance daemon takes care of recovering failed 2PC transactions by comparing the commit records on the coordinator to the list of pending prepared transactions on the workers. The presence of a commit record implies the transaction was committed, while its absence implies it was aborted. Pending prepared transactions are then committed or rolled back accordingly.
-Nice animation at: How Citus Executes Distributed Transactions on Postgres (citusdata.com)
+
+
+Nice animation at: [How Citus Executes Distributed Transactions on Postgres](https://www.citusdata.com/blog/2017/11/22/how-citus-executes-distributed-transactions/)
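
For illustration, here is a sketch of that recovery comparison in SQL. The `pg_prepared_xacts` view and the `COMMIT PREPARED`/`ROLLBACK PREPARED` commands are standard PostgreSQL; the coordinator-side catalog name, the group id, and the GID below are illustrative assumptions rather than the exact implementation.

```sql
-- On the coordinator: commit records written in the pre-commit callback
-- (catalog and column names are assumptions for this sketch).
SELECT gid FROM pg_dist_transaction WHERE groupid = 12;

-- On the corresponding worker: prepared transactions that were never resolved.
SELECT gid FROM pg_prepared_xacts WHERE database = current_database();

-- Resolution for each pending prepared transaction on the worker:
COMMIT PREPARED 'example_gid';    -- a matching commit record exists on the coordinator
ROLLBACK PREPARED 'example_gid';  -- no commit record, so the transaction was aborted
```

The maintenance daemon performs this comparison periodically; the same recovery can also be triggered manually via the `recover_prepared_transactions()` UDF.
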
## No distributed snapshot isolation
@@ -2071,6 +2073,8 @@ While doing the DFS, we also keep track of the other backends that are involved
If there is a cycle in the local graph, Postgres’ deadlock detection typically kicks in before Citus’ deadlock detection and breaks the cycle. There is a benign race condition between Citus’ deadlock detection and Postgres’ deadlock detection: even if the race happens, the worst case is that multiple backends from the same cycle are cancelled. In practice we rarely see this, because Citus’ deadlock detection runs `2x` slower than Postgres’ deadlock detection (see `citus.distributed_deadlock_detection_factor`).
+
+
For debugging purposes, you can enable logging for distributed deadlock detection via `citus.log_distributed_deadlock_detection`.
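
As an illustration, both of the settings mentioned above can be changed like any other GUC (assuming superuser access; the values below are only examples):

```sql
-- Log distributed deadlock detection activity in the server log.
ALTER SYSTEM SET citus.log_distributed_deadlock_detection TO on;

-- Run Citus' deadlock detection 2x slower than Postgres' deadlock detection,
-- i.e. relative to deadlock_timeout.
ALTER SYSTEM SET citus.distributed_deadlock_detection_factor TO 2;

SELECT pg_reload_conf();
```
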
With query from any node, deadlock detection runs on all nodes. However, each node only tries to find deadlocks among the backends that were initiated on it, which spreads the deadlock detection workload across all nodes.
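
To make the cycle search concrete, below is a minimal sketch of finding cycles in a wait-for graph as a recursive CTE. The `wait_edges(waiting_gpid, blocking_gpid)` relation is hypothetical; the actual implementation builds the graph in memory from wait edges gathered from all nodes and, on each node, only starts the search from backends that were initiated on that node.

```sql
-- Hypothetical edge list: each row means "waiting_gpid waits for blocking_gpid".
-- A path that revisits an identifier it has already seen corresponds to a deadlock cycle.
WITH RECURSIVE search AS (
    SELECT waiting_gpid  AS start_gpid,
           blocking_gpid AS current_gpid,
           ARRAY[waiting_gpid, blocking_gpid] AS path,
           waiting_gpid = blocking_gpid AS is_cycle
    FROM wait_edges
    UNION ALL
    SELECT s.start_gpid,
           e.blocking_gpid,
           s.path || e.blocking_gpid,
           e.blocking_gpid = ANY (s.path)
    FROM search s
    JOIN wait_edges e ON e.waiting_gpid = s.current_gpid
    WHERE NOT s.is_cycle
)
SELECT path FROM search WHERE is_cycle;
```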