Commit Graph

2147 Commits (32b8235da37b8e271b1a2f3012d3020c3cef24b6)

Author SHA1 Message Date
Hanefi Onaldi 32b8235da3
Bump version to 8.0.3 2019-01-09 09:37:12 +03:00
Hanefi Onaldi 16bb2c618f
Add changelog entry for 8.0.3 2019-01-09 09:34:06 +03:00
Murat Tuncer 54a893523e
Move repeated code to a function 2019-01-09 09:31:05 +03:00
Murat Tuncer 57d51b280e
Fix multi_view tests 2019-01-09 09:31:04 +03:00
Murat Tuncer ba00e930ea
Fix having clause bug for complex joins
We update the column attributes of various clauses for a query,
including target columns and select clauses, when we introduce
new range table entries into the query.

It seems the HAVING clause column attributes were not being updated.

This fix resolves the issue.
2019-01-09 09:31:04 +03:00
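
A minimal sketch of the pattern the commit above describes, assuming a hypothetical UpdateColumnAttributes() helper that remaps Var nodes after new range table entries are added (the names are illustrative, not the actual Citus functions):

    #include "postgres.h"
    #include "nodes/parsenodes.h"

    /* hypothetical helper that rewrites Var nodes for a shifted range table */
    extern Node * UpdateColumnAttributes(Node *clause, int rangeTableOffset);

    static void
    RemapQueryClauses(Query *query, int rangeTableOffset)
    {
        query->targetList = (List *)
            UpdateColumnAttributes((Node *) query->targetList, rangeTableOffset);
        query->jointree->quals =
            UpdateColumnAttributes(query->jointree->quals, rangeTableOffset);

        /* the fix: the HAVING qual needs the same remapping */
        query->havingQual =
            UpdateColumnAttributes(query->havingQual, rangeTableOffset);
    }
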
Murat Tuncer c5371965f6
Make sure spinlock is not left unreleased when an exception is thrown
A spinlock was not released when an exception was thrown after
the spinlock had been acquired. This caused an infinite wait and an
eventual crash in the maintenance daemon.

This work moves the code that can fail outside the spinlock's
scope, so that in the case of failure the spinlock is not left
locked, since it was never acquired in the first place.
2019-01-09 09:31:03 +03:00
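
A sketch of the before/after pattern, with a palloc-style allocation standing in for the code that can throw (the struct and field names are illustrative):

    #include "postgres.h"
    #include "storage/spin.h"

    typedef struct DaemonEntry { int data; } DaemonEntry;   /* illustrative */
    typedef struct WorkerData
    {
        slock_t mutex;
        DaemonEntry *entry;
    } WorkerData;

    static void
    UpdateDaemonEntry(WorkerData *workerData)
    {
        /* Do the fallible work first: palloc0() can ereport(ERROR) under
         * memory pressure. Before the fix it ran with the spinlock held,
         * so a thrown error left the lock locked forever. */
        DaemonEntry *entry = palloc0(sizeof(DaemonEntry));

        /* Hold the spinlock only for the assignment, which cannot fail. */
        SpinLockAcquire(&workerData->mutex);
        workerData->entry = entry;
        SpinLockRelease(&workerData->mutex);
    }
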
Hanefi Onaldi 1e3969cae1 Bump version to 8.0.2 2018-12-13 16:22:48 +03:00
Hanefi Onaldi 0fa6049cce Add changelog entry for 8.0.2 2018-12-13 16:19:03 +03:00
Burak Yucesoy 2a1ae6489b Fix crashes caused by stack size increase under high memory load
Each PostgreSQL backend starts with a predefined amount of stack, and this
stack size can be increased if there is a need. However, a stack size increase
during high memory load may cause unexpected crashes: if there is not enough
memory for the stack to grow, there is nothing the process can do apart from
crashing. Interestingly, the process would get an OOM error instead of a crash
if it made an explicit memory request (with palloc, for example). In the case
of a stack size increase, however, there is no system call through which to
get an OOM error, so the process simply crashes.

With this change, we increase the stack size explicitly by requesting extra
memory from the stack up front, so that even if there is not enough memory we
at least get an OOM error instead of a crash.
2018-12-13 10:46:28 +03:00
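
A sketch of one plausible reading of the fix above: grow the stack eagerly at backend start by touching pages of a large local buffer, at a point where failure can still surface as an orderly error rather than a mid-query crash (the function name and size are assumptions, not the actual Citus code):

    #include "postgres.h"

    #define EXTRA_STACK_SIZE (512 * 1024)    /* illustrative amount */

    static void
    GrowStackUpfront(void)
    {
        /* volatile so the compiler cannot optimize the writes away */
        volatile char stackBuffer[EXTRA_STACK_SIZE];

        /* touch one byte per 4 kB page so the stack is committed now,
         * rather than during a later high-memory-load code path */
        for (Size offset = 0; offset < EXTRA_STACK_SIZE; offset += 4096)
        {
            stackBuffer[offset] = 0;
        }
    }
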
Onder Kalaci 507964dde3 Ensure to use initialized MaxBackends
PostgreSQL loads shared libraries before calculating MaxBackends.
However, Citus relies on MaxBackends being set. Thus, with this
commit we apply the same steps to calculate MaxBackends while
Citus is being loaded (i.e., when _PG_init is called).

Note that this is safe, since all the elements used to
calculate MaxBackends are PGC_POSTMASTER GUCs plus a constant
value.
2018-12-13 10:44:19 +03:00
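
A sketch of mirroring that calculation in _PG_init, assuming the PG 11-era formula from Postgres' InitializeMaxBackends() (max_wal_senders only joined the formula in later releases):

    #include "postgres.h"
    #include "miscadmin.h"
    #include "postmaster/autovacuum.h"

    static int
    CalculateMaxBackends(void)
    {
        /* MaxConnections, autovacuum_max_workers, and max_worker_processes
         * are all PGC_POSTMASTER GUCs, so this matches what Postgres itself
         * computes after shared libraries are loaded; the +1 is the
         * autovacuum launcher. */
        return MaxConnections + autovacuum_max_workers + 1 + max_worker_processes;
    }
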
Jason Petersen 2de7b85b89
Bump version to 8.0.1 2018-11-28 00:42:05 -07:00
Jason Petersen 0b0c0fef25
Add changelog entry for 8.0.1 2018-11-27 23:27:25 -07:00
Marco Slot 606e2b18d7
Merge pull request #2491 from citusdata/backport_copytt_80
Backport #2487 to release-8.0
2018-11-23 11:21:43 +01:00
Nils Dijk 67f058c5f6 Fix failures of tests on recent postgres builds
In recent Postgres builds you cannot set client_min_messages to
values higher than ERROR; it is silently lowered to ERROR if you try.

During some tests we set it to FATAL to hide random values
(e.g., PIDs of processes) from the test output. This patch uses
different tactics for hiding these values.
2018-11-22 19:52:43 +01:00
Marco Slot 4392cc2f9c Test current user in task-tracker queries 2018-11-22 19:39:23 +01:00
Marco Slot ca8a4dc735 COPY to a task file no longer switches to superuser 2018-11-22 19:38:59 +01:00
velioglu da0b98c991 Bump Citus version to 8.0.0 2018-10-31 14:45:36 +03:00
velioglu 480797e600 Add changelog entry for 8.0.0 2018-10-31 14:36:25 +03:00
Onder Kalaci 2311a3614a Make sure to access PARAM_EXTERN accurately in PG 11
PG 11 has changed the way PARAM_EXTERN parameters are processed.
This commit ensures that Citus follows the same pattern.

For details see the related Postgres commit:
6719b238e8
2018-10-26 11:35:09 +03:00
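
In that Postgres commit, PARAM_EXTERN values must be fetched through the paramFetch hook when one is set, rather than read straight out of the params array. A sketch of the PG 11 access pattern:

    #include "postgres.h"
    #include "nodes/params.h"

    static ParamExternData *
    FetchExternParam(ParamListInfo paramList, int paramId,
                     ParamExternData *workspace)
    {
        if (paramList->paramFetch != NULL)
        {
            /* e.g. plpgsql supplies a hook; "false" means not speculative */
            return paramList->paramFetch(paramList, paramId, false, workspace);
        }

        /* fallback: plain protocol-level parameters live in the array */
        return &paramList->params[paramId - 1];
    }
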
Hadi Moshayedi c22cbb7a13 Don't throw error for DROP DATABASE IF EXISTS 2018-10-25 21:14:46 +03:00
Onder Kalaci 5d805dba27 Processes that are blocked on advisory locks show up in wait edges
Assign the distributed transaction id before trying to acquire the
executor advisory locks. This is useful for showing this backend in Citus
lock graphs (e.g., dump_global_wait_edges() and citus_lock_waits).
2018-10-24 14:01:30 +03:00
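
A sketch of the ordering change, with hypothetical names standing in for the Citus internals:

    #include "postgres.h"

    /* hypothetical stand-ins for the Citus functions involved */
    extern void AssignDistributedTransactionId(void);
    extern void AcquireExecutorAdvisoryLocks(int64 shardId);

    static void
    LockShardForExecution(int64 shardId)
    {
        /* assign the id first, so that if the lock acquisition below
         * blocks, this backend already shows up in wait-edge dumps */
        AssignDistributedTransactionId();
        AcquireExecutorAdvisoryLocks(shardId);
    }
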
Murat Tuncer b08106b5cf Don't allow PG11 travis failures anymore
We made PG11 builds optional when we had an issue
with the MX isolation test that we could not solve at the time.

This commit solves the issue with a workaround: running
start_metadata_sync_to_node outside the transaction block.
2018-10-19 16:57:48 +03:00
Jason Petersen 87817aec9d Attempt to address planner context crashes
Both of these are a bit of a shot in the dark. In one case, we noticed
a stack trace where a caller received a null pointer and attempted to
dereference the memory context field (at 0x010). In the other, I saw
that any error thrown from within AdjustParseTree could keep the stack
from being cleaned up (presumably if we push we should always pop).

Both stack traces were collected during times of high memory pressure,
and reproducing the problem, locally or otherwise, has been very tricky
(i.e., it hasn't been reproduced reliably at all).
2018-10-19 10:24:07 +03:00
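
A sketch of the two defensive measures, using hypothetical PushPlannerStack()/PopPlannerStack() names: a NULL check before touching the memory context field, and a PG_TRY block so a pushed frame is always popped:

    #include "postgres.h"
    #include "nodes/parsenodes.h"

    extern void PushPlannerStack(void *context);   /* hypothetical */
    extern void PopPlannerStack(void);             /* hypothetical */
    extern void AdjustParseTree(Query *parse);

    static void
    AdjustParseTreeSafely(void *restrictionContext, Query *parse)
    {
        /* guard the observed crash: dereferencing a field through NULL */
        if (restrictionContext == NULL)
        {
            ereport(ERROR, (errmsg("planner restriction context is missing")));
        }

        /* if we push, always pop, even when AdjustParseTree() throws */
        PushPlannerStack(restrictionContext);
        PG_TRY();
        {
            AdjustParseTree(parse);
        }
        PG_CATCH();
        {
            PopPlannerStack();
            PG_RE_THROW();
        }
        PG_END_TRY();
        PopPlannerStack();
    }
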
Hadi Moshayedi 431ac80563
Keep track of cached entries in case of interruption. (#2433)
* Keep track of cached entries in case of interruption.

Previously we set DistTableCacheEntry->sortedShardIntervalArray
and DistTableCacheEntry->shardIntervalArrayLength after we had entered
all related shard entries into DistShardCacheHash. The drawback was
that if populating DistShardCacheHash was interrupted,
ResetDistTableCacheEntry() didn't see the shard hash entries that had
been created, and so was unable to clean them up.

This patch fixes that by setting sortedShardIntervalArray earlier,
and incrementing shardIntervalArrayLength as we enter shards into
the cache.
2018-10-15 14:06:56 -04:00
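
A sketch of the reordering, assuming simplified cache-entry fields and a hypothetical insert helper (the real population code does more per shard):

    #include "postgres.h"

    typedef struct ShardInterval ShardInterval;    /* illustrative */
    typedef struct DistTableCacheEntry
    {
        ShardInterval **sortedShardIntervalArray;
        int shardIntervalArrayLength;
    } DistTableCacheEntry;

    extern void InsertShardIntoDistShardCacheHash(ShardInterval *shardInterval);

    static void
    PopulateShardCache(DistTableCacheEntry *cacheEntry,
                       ShardInterval **sortedShardIntervalArray, int shardCount)
    {
        /* publish the array before the loop, and count each shard only
         * after it is in the hash: an interruption mid-loop now leaves
         * exactly the entries ResetDistTableCacheEntry() can clean up */
        cacheEntry->sortedShardIntervalArray = sortedShardIntervalArray;
        cacheEntry->shardIntervalArrayLength = 0;

        for (int shardIndex = 0; shardIndex < shardCount; shardIndex++)
        {
            InsertShardIntoDistShardCacheHash(sortedShardIntervalArray[shardIndex]);
            cacheEntry->shardIntervalArrayLength++;
        }
    }
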
Marco Slot a9f183a284
Merge pull request #2432 from citusdata/fix_typos
Fix user-facing typos
2018-10-10 14:25:59 -07:00
Jason Petersen 9fb951c312
Fix user-facing typos
Lintian found these (presumably by looking in the text section and
running them through e.g. aspell).
2018-10-09 16:54:03 -07:00
Burak Velioglu 8b9aeb374b
Merge pull request #2425 from citusdata/real-time-select-failure
Add failure tests for real time select queries
2018-10-09 14:27:45 -07:00
velioglu 5713019058 Add failure tests for real time select queries 2018-10-09 14:12:02 -07:00
Önder Kalacı d5ebf22ba1
Merge pull request #2424 from citusdata/clear_intermediate_results
Make sure not to leak intermediate result folders on the workers
2018-10-09 23:50:27 +03:00
Onder Kalaci 73696a03e4 Make sure not to leak intermediate result folders on the workers 2018-10-09 22:47:56 +03:00
Marco Slot 5886e69a3a
Merge pull request #2423 from citusdata/writable_standby_coordinator
Allow simple DML commands from hot standby
2018-10-09 11:43:08 -07:00
Jason Petersen 1cb48416eb
Add reference table failure tests
Fairly straightforward; verified that modifications fail atomically if
a worker is down or fails mid-transaction (i.e. all workers need to ack
modifications to reference tables in order to persist changes).
2018-10-09 09:39:30 -07:00
Jason Petersen 9bcf2873a7
Add single-shard router select failure tests
Including several examples from #1926. I couldn't understand why the
recover_prepared_transactions "should be an error", and EXPLAIN has
changed since the original bug (so that it runs EXPLAINs in txns, I
think for EXPLAIN ANALYZE to not have side effects); other than that,
most of the reported bugs now error out rather than crash or return
an empty result set.
2018-10-09 08:51:10 -07:00
Jason Petersen 8f2aa00951
Add failure tests for VACUUM/ANALYZE
VACUUM runs outside of a transaction, so the failure modes for it are
somewhat straightforward, though ANALYZE runs in a 1pc transaction and
multi-table VACUUM can fail between statements (PG 11 and higher).
2018-10-09 08:50:37 -07:00
Jason Petersen ee4114bc7a Failure tests for modifying multiple shards in txn
Tests various failure points during a multi-shard modification within
a transaction with multiple statements. Verifies three cases:

  * Reference tables (single shard, many placements)
  * Normal table with replication factor two
  * Multi-shard table with no replication

In the replication-factor case, we expect shard health to be affected
in some transactions; most others fail the transaction entirely, and
all we need to verify is that no effects of the transaction are visible.

Had trouble testing the final PREPARE/COMMIT/ROLLBACK phase of the 2PC,
in particular because the error message produced includes the PID of
the backend, which is unpredictable.
2018-10-09 09:17:32 -06:00
Murat Tuncer b45754a4d0
Merge pull request #2428 from citusdata/fix_mx_drop_schema_with_partitions
Fix drop schema in mx with partitioned tables
2018-10-09 03:50:32 +03:00
Murat Tuncer 4f8042085c Fix drop schema in mx with partitioned tables
The DROP SCHEMA command fails in MX mode if there
is a partitioned table with active partitions.

This is due to the fact that the SQL drop trigger receives
all the dropped objects, including partitions. When
we call DROP TABLE on the parent partition, it also drops
the partitions on the MX node. This causes the DROP
TABLE command on the partitions to fail on the MX node, because
they were already dropped along with the partition parent.

With this work, worker_drop_distributed_table no longer
requires the table to exist.
2018-10-08 17:01:54 -07:00
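
A sketch of the lenient lookup, assuming the fix boils down to a missing_ok relation resolution at the top of worker_drop_distributed_table() (the surrounding drop logic is elided):

    #include "postgres.h"
    #include "fmgr.h"
    #include "catalog/namespace.h"
    #include "storage/lockdefs.h"
    #include "utils/varlena.h"

    PG_FUNCTION_INFO_V1(worker_drop_distributed_table);

    Datum
    worker_drop_distributed_table(PG_FUNCTION_ARGS)
    {
        text *relationNameText = PG_GETARG_TEXT_P(0);
        RangeVar *rangeVar =
            makeRangeVarFromNameList(textToQualifiedNameList(relationNameText));

        /* missing_ok = true: the partition may already be gone because
         * dropping its parent dropped it on the MX node first */
        Oid relationId = RangeVarGetRelid(rangeVar, AccessExclusiveLock, true);

        if (!OidIsValid(relationId))
        {
            PG_RETURN_VOID();   /* already dropped; nothing to do */
        }

        /* ... the actual drop logic continues here ... */
        PG_RETURN_VOID();
    }
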
Murat Tuncer 24e247c1b9
Merge pull request #2426 from citusdata/failure_pull_push_insert_select
Add failure tests for insert/select via coordinator
2018-10-08 19:32:29 +03:00
Hadi Moshayedi 7509c6c8fb Add tests which check we disallow writes to local tables. 2018-10-06 10:54:44 +02:00
Marco Slot d56baefe3d Allow simple DML commands from hot standby 2018-10-06 10:54:44 +02:00
Murat Tuncer 71a910d2fa Add failure tests for insert/select via coordinator 2018-10-04 18:01:19 +03:00
Murat Tuncer c8151818e7
Merge pull request #2318 from citusdata/mt_failure_test
Add new failure tests for multi-shard/CTE modify and CTE coordinator pull
2018-10-03 17:07:03 +03:00
Murat Tuncer 0a987e9c0e Fix cte subquery failure test 2018-10-03 15:43:48 +03:00
Murat Tuncer d26b312cad Add failure test for coordinator pull/push for cte 2018-10-03 15:43:48 +03:00
Murat Tuncer 6c66033455 Add failure tests for multi-shard update/delete
Failure tests for update/delete on hash distributed tables
using 1PC and 2PC
2018-10-03 15:43:48 +03:00
Burak Velioglu 322dd54eee
Merge pull request #2412 from citusdata/add_all_transactions_to_views
Show router modify, select and real-time queries on MX views
2018-10-02 22:23:47 +03:00
velioglu 512d23934f Show router modify, select and real-time queries on MX views 2018-10-02 13:59:38 +03:00
Murat Tuncer 43a4ef939a
Merge pull request #2410 from citusdata/mx_partition_foreign_key
Do not create inherited constraints at worker tables
2018-09-28 16:53:13 +03:00
Murat Tuncer 9bdef67bab Do not create inherited constraints on worker shards
PG now allows foreign keys on partitioned tables.
Each foreign key constraint on a partitioned table
is propagated down to its partitions.

We used to create all constraints on shards when creating
a new shard, or when simply moving a shard from one worker
to another. We also used the same logic when creating a copy of
a coordinator table on an MX node.

With this change, we create a constraint on the worker node only if
it is not an inherited constraint.
2018-09-28 14:14:51 +03:00
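
A sketch of the filter, based on how PostgreSQL marks inherited constraints in pg_constraint (conislocal = false with coninhcount > 0); the function name is illustrative:

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "catalog/pg_constraint.h"

    static bool
    ShouldCreateConstraintOnWorker(HeapTuple constraintTuple)
    {
        Form_pg_constraint constraintForm =
            (Form_pg_constraint) GETSTRUCT(constraintTuple);

        /* an inherited constraint is defined on the partition parent and
         * propagated down; recreating it on the shard would duplicate it */
        if (!constraintForm->conislocal && constraintForm->coninhcount > 0)
        {
            return false;
        }

        return true;
    }
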
Murat Tuncer 0aa9988ae9
Merge pull request #2413 from citusdata/fix_memory_leak_minimal
Fix memory leak in FinishRemoteTransactionPrepare
2018-09-28 13:54:07 +03:00