Commit Graph

1135 Commits (e0c960e248f873513d5aca8f7933b6b9e0909c6b)

Author SHA1 Message Date
Andres Freund 4bf6b8cdfa Perform range based pruning if equality pruning has survivor.
We previously dismissed this as unimportant, but it turns out to be
very useful for the upcoming subquery pushdown, where a user might
specify an equality constraint in a subquery, and the subquery
pushdown machinery adds >= and <= restrictions on the shard boundary.
Previously the latter restrictions were ignored.
2017-04-28 17:35:18 -07:00
Andres Freund 042020eabf Use stricter qual for pruning if both >/< and >=/<= are present.
Previously, if both <= and < (>= and > respectively) were specified,
we always used the latter restriction.  Instead use the stricter one.
2017-04-28 17:35:18 -07:00
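
A minimal illustrative sketch of the rule in C (the struct and function names are
hypothetical, not the actual Citus code): for two upper bounds on the same column
the smaller value wins, and on equal values the strict (<) variant wins over the
non-strict (<=) one; lower bounds mirror this.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct UpperBound
    {
        int32_t value;
        bool inclusive;    /* true for <=, false for < */
    } UpperBound;

    /* return whichever of the two upper bounds prunes more rows */
    static UpperBound
    StricterUpperBound(UpperBound a, UpperBound b)
    {
        if (a.value != b.value)
        {
            return (a.value < b.value) ? a : b;
        }

        /* same value: the exclusive bound is stricter */
        return a.inclusive ? b : a;
    }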
Marco Slot 6d53c2e79c Merge pull request #1368 from citusdata/fix_get_live_node_count
Fix list length lookup in WorkerGetLiveNodeCount
2017-04-28 17:26:25 -07:00
Marco Slot 053f10d91c Fix list length lookup in WorkerGetLiveNodeCount 2017-04-29 02:13:20 +02:00
Marco Slot c6392bd98b Merge pull request #1349 from citusdata/fix_check_vanilla
Fix check-vanilla tests
2017-04-28 17:10:48 -07:00
Burak Yucesoy edd69310fd Fix check-vanilla tests
It seems that GEQO optimization, when it is set to on, creates its own memory context
and frees it when it is no longer necessary. In multi_join_restriction_hook
we allocate our variables in CurrentMemoryContext, which is GEQO's memory context
if it is active. To prevent deallocation of our variables when GEQO's memory context is
freed, we now allocate memory for these variables in a separate MemoryContext.
2017-04-29 01:55:18 +02:00
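
A rough sketch of the approach in C, using the standard PostgreSQL memory-context
API; the context name, parent choice, and helper are illustrative rather than the
exact Citus code.

    #include "postgres.h"
    #include "utils/memutils.h"

    static MemoryContext restrictionContext = NULL;

    /* allocate in a context we own, parented to TopMemoryContext, so the
     * allocation survives GEQO freeing its private CurrentMemoryContext */
    static void *
    AllocateInRestrictionContext(Size size)
    {
        MemoryContext oldContext = NULL;
        void *allocation = NULL;

        if (restrictionContext == NULL)
        {
            restrictionContext = AllocSetContextCreate(TopMemoryContext,
                                                       "RestrictionContext",
                                                       ALLOCSET_DEFAULT_SIZES);
        }

        oldContext = MemoryContextSwitchTo(restrictionContext);
        allocation = palloc0(size);
        MemoryContextSwitchTo(oldContext);

        return allocation;
    }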
Marco Slot 7d77176842 Merge pull request #1365 from citusdata/fix_size
Check whether relation ID exists in DistributedTableSize
2017-04-28 16:51:41 -07:00
Marco Slot 97d36b7dfe Check whether relation ID exists in citus_relation_size 2017-04-29 01:39:39 +02:00
Andres Freund c7b4f170fd Merge pull request #1331 from citusdata/feature/faster-pruning
Faster Shard Pruning Implementation
2017-04-28 15:01:41 -07:00
Andres Freund f6ef7f2c03 Faster shard pruning.
So far citus used postgres' predicate proofing logic for shard
pruning, except for INSERT and COPY which were already optimized for
speed.  That turns out to be too slow:
* Shard pruning for SELECTs is currently O(#shards), because
  PruneShardList calls predicate_refuted_by() for every
  shard. Obviously using an O(N) type algorithm for general pruning
  isn't good.
* predicate_refuted_by() is quite expensive in its own right. That's
  primarily because it's optimized for doing a single refutation
  proof, rather than performing the same proof over and over.
* predicate_refuted_by() does not keep persistent state (see 2.) for
  function calls, which means that a lot of syscache lookups will be
  performed. That's particularly bad if the partitioning key is a
  composite key, because without a persistent FunctionCallInfo
  record_cmp() has to repeatedly look up the type definition of the
  composite key. That's quite expensive.

Thus replace this with custom code that works in two phases:
1) Search restrictions for constraints that can be pruned upon
2) Use those restrictions to search for matching shards in the most
   efficient manner available:
   a) Binary search / Hash Lookup in case of hash partitioned tables
   b) Binary search for equal clauses in case of range or append
      tables without overlapping shards.
   c) Binary search for inequality clauses, searching for both lower
      and upper boundaries, again in case of range or append
      tables without overlapping shards.
   d) Exhaustive search testing each ShardInterval

My measurements suggest that we are considerably, often orders of
magnitude, faster than the previous solution, even if we have to fall
back to exhaustive pruning.
2017-04-28 14:40:41 -07:00
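
A minimal sketch of phase 2(a)/(b) in C, assuming sorted, non-overlapping shard
ranges; the struct and function names are illustrative stand-ins for the actual
Citus data structures.

    #include <stdint.h>

    typedef struct ShardRange
    {
        int32_t minValue;
        int32_t maxValue;
    } ShardRange;

    /* binary-search the sorted shard array for the one whose range contains
     * partitionValue; return its index, or -1 if every shard can be pruned */
    static int
    FindContainingShardIndex(const ShardRange *sortedShards, int shardCount,
                             int32_t partitionValue)
    {
        int lowerIndex = 0;
        int upperIndex = shardCount - 1;

        while (lowerIndex <= upperIndex)
        {
            int middleIndex = lowerIndex + (upperIndex - lowerIndex) / 2;
            const ShardRange *shard = &sortedShards[middleIndex];

            if (partitionValue < shard->minValue)
            {
                upperIndex = middleIndex - 1;
            }
            else if (partitionValue > shard->maxValue)
            {
                lowerIndex = middleIndex + 1;
            }
            else
            {
                return middleIndex;
            }
        }

        return -1;
    }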
Andres Freund 2013090a77 Add DistTableCacheEntry->hasOverlappingShardInterval.
This determines whether it's possible to perform binary search on
sortedShardIntervalArray or not.  If e.g. two shards have overlapping
ranges, that'd be prohibitive.

That'll be useful in later commit introducing faster shard pruning.
2017-04-28 14:40:38 -07:00
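
A hedged one-pass sketch in C of how such a flag could be computed (the parameter
layout is illustrative, not the actual cache code): once the intervals are sorted
by their lower bound, any overlap must occur between neighbouring entries.

    #include <stdbool.h>
    #include <stdint.h>

    /* shards are sorted by minValue; overlap can only appear between neighbours */
    static bool
    HasOverlappingShardInterval(const int32_t *minValues, const int32_t *maxValues,
                                int shardCount)
    {
        for (int shardIndex = 1; shardIndex < shardCount; shardIndex++)
        {
            /* this shard starts at or before the previous shard's end */
            if (minValues[shardIndex] <= maxValues[shardIndex - 1])
            {
                return true;
            }
        }

        return false;
    }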
Andres Freund 15d427f931 Add DistTableCacheEntry->shardValueCompareFunction.
That's useful when comparing values a hash-partitioned table is
filtered by.  The existing shardIntervalCompareFunction is about
comparing hashed values, not unhashed ones.

The added btree opclass function is so we can get a comparator
back. This should be changed much more widely, but is not necessary so
far.
2017-04-28 14:40:38 -07:00
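
A rough sketch of that lookup in C, using standard PostgreSQL catalog APIs; error
handling is omitted and this is an approximation, not the exact Citus code.

    #include "postgres.h"
    #include "access/nbtree.h"
    #include "catalog/pg_am.h"
    #include "commands/defrem.h"
    #include "fmgr.h"
    #include "utils/lsyscache.h"

    /* look up the btree comparison function for a given value type */
    static void
    GetValueCompareFunction(Oid typeId, FmgrInfo *compareFunction)
    {
        Oid opClassId = GetDefaultOpClass(typeId, BTREE_AM_OID);
        Oid opFamilyId = get_opclass_family(opClassId);
        Oid inputType = get_opclass_input_type(opClassId);
        Oid compareFunctionId = get_opfamily_proc(opFamilyId, inputType,
                                                  inputType, BTORDER_PROC);

        fmgr_info(compareFunctionId, compareFunction);
    }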
Andres Freund f3172e9719 Build DistTableCacheEntry->shardIntervalCompareFunction even for 0 shards.
Previously we, unnecessarily, used the first shard's type
information to look up the comparison function.  But that
information is already available, so use it.  That's helpful because
we sometimes want to access the comparator function even if there are
no shards.
2017-04-28 14:40:38 -07:00
Andres Freund 99642306ed Fix: Make FindShardIntervalIndex robust against 0 shards. 2017-04-28 14:40:38 -07:00
Metin Döşlü 669b5e243f Merge pull request #1361 from citusdata/explain_with_savepoint
Send explain queries with savepoints
2017-04-28 13:43:27 -07:00
Metin Doslu d411892fe6 Send explain queries with savepoints
With this commit, we start sending explain queries within a savepoint. After
running the explain query, we roll back to the savepoint. This saves us from the
side effects of EXPLAIN ANALYZE on DML queries.
2017-04-28 12:13:48 -07:00
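
A hedged sketch of the command sequence in C over libpq; the function and
savepoint names are illustrative choices, not necessarily what Citus actually
uses.

    #include <libpq-fe.h>

    /* wrap an EXPLAIN (possibly EXPLAIN ANALYZE of a DML statement) in a
     * savepoint so its side effects are rolled back afterwards */
    static void
    RunExplainWithinSavepoint(PGconn *connection, const char *explainQuery)
    {
        PQclear(PQexec(connection, "SAVEPOINT citus_explain_savepoint"));
        PQclear(PQexec(connection, explainQuery));
        PQclear(PQexec(connection, "ROLLBACK TO SAVEPOINT citus_explain_savepoint"));
    }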
Jason Petersen a5ad70379b Merge pull request #1353 from citusdata/fix_copy_crasher
Refactor COPY to not directly use cache entry

cr: @marcocitus
2017-04-27 16:06:11 -06:00
Jason Petersen 1c353e68aa Remove FastShardPruning method
With the other simplifications, it doesn't make sense to keep it around.
2017-04-27 13:32:36 -06:00
Jason Petersen 06497d74f5 Refactor FindShardInterval to use cacheEntry
All callers fetch a cache entry and extract/compute arguments for the
eventual FindShardInterval call, so it makes more sense to move that
logic into the function itself; this solves the use-after-free bug, too.
2017-04-27 13:32:36 -06:00
Andres Freund 9cefa97972 Merge pull request #1351 from citusdata/feature/remove_pruning_debug
Remove Pruning Debug Output
2017-04-26 11:58:52 -07:00
Andres Freund 4fe14bdeda Some cleanup in multi_subquery test.
Remove trailing whitespace and replace uses of plain EXPLAIN with
EXPLAIN (COSTS OFF).
2017-04-26 11:33:56 -07:00
Andres Freund f064c33d5c Add back pruning coverage lost in last commit.
Because we can't rely on the debugging message anymore, add a bunch of
explain statements that roughly fulfill the same purpose.
2017-04-26 11:33:56 -07:00
Andres Freund 5b389eb6d7 Boring regression test output adjustments.
Soon shard pruning will be optimized so that it no longer generally
works linearly.  Thus we can't keep printing the pruned shard intervals
the way we currently do.

The current printing of shard ids also prevents us from running tests
in parallel, as otherwise shard ids aren't linearly numbered.
2017-04-26 11:33:56 -07:00
Andres Freund 8923fe7f54 Merge pull request #1354 from citusdata/feature/faster-copartitioned-check
Skip exhaustive test in CoPartitionedTables() if declared colocated.
2017-04-26 11:33:31 -07:00
Andres Freund 9e4ec991d8 Skip exhaustive test in CoPartitionedTables() if declared colocated.
That's considerably cheaper.
2017-04-26 11:19:17 -07:00
Andres Freund c34e357885 Merge pull request #1350 from citusdata/fix/vpath-builds
Fix VPATH builds broken in 087d8427e3.
2017-04-25 16:25:54 -07:00
Andres Freund 5524d389cb Fix VPATH builds broken in 087d8427e3.
1) Generated files reside in the build directory, not the source
   directory.
2) As a generated file is now included in the build, add it to the
   include path (-I)
2017-04-25 16:04:42 -07:00
Marco Slot 1b4ebd490d Only process error if not NULL in StoreErrorMessage 2017-04-21 17:01:01 +02:00
Marco Slot 326f8d9d61 Use right sizeof in UpdateRelationColocationGroup 2017-04-21 16:37:09 +02:00
Burak Yücesoy e291887227 Merge pull request #1294 from citusdata/fix_test_outputs_for_valgrind
Prepare for valgrind automation
2017-04-21 05:51:14 -08:00
Burak Yucesoy a35d0cd8af Configure valgrind command line arguments 2017-04-21 16:30:12 +03:00
Burak Yucesoy 9312ef8bcf Stabilize test outputs 2017-04-21 16:08:52 +03:00
Eren Basak 71d99b72ce Add support for proper valgrind tests
This change allows valgrind tests (`make check-multi-vg`) to be
run seamlessly without test output errors and timeout problems.
2017-04-21 16:08:52 +03:00
Marco Slot 384f32b191 Merge pull request #1302 from citusdata/serial_partition_column
Support expressions in the partition column in INSERTs
2017-04-21 14:18:13 +02:00
Marco Slot 7d1f7b8923 Support expressions in the partition column in INSERTs 2017-04-21 14:05:52 +02:00
Burak Velioglu 41bb84c9c0 Merge pull request #1292 from citusdata/alter_add_constraint_m
Alter Table Add Constraint
2017-04-20 15:33:02 +03:00
velioglu a26edd2249 Implement ALTER TABLE ADD CONSTRAINT command 2017-04-20 15:02:33 +03:00
Burak Velioglu 5170731848 Merge pull request #1316 from citusdata/add_guc_for_cross_shard
Log cross-shard queries
2017-04-20 14:08:21 +03:00
velioglu 5b3e47de7a Log message of across shard queries according to the log level 2017-04-20 12:24:46 +03:00
Burak Velioglu 1e6e5fd512 Merge pull request #1324 from citusdata/insert_into_select_wo_native
Replace native hash function with worker_hash
2017-04-19 22:30:52 +03:00
velioglu be3cdb14ea Change native hash function with worker_hash 2017-04-19 22:16:55 +03:00
Jason Petersen e619e50590 Merge pull request #1312 from citusdata/rename_support
Enable distributed ALTER TABLE ... RENAME COLUMN

cr: @byucesoy
2017-04-18 22:57:12 -06:00
Jason Petersen f999bcd7ca Enable distributed ALTER TABLE ... RENAME COLUMN
Pretty straightforward. Had some concerns about locking, but due to the
fact that all distributed operations use either some level of deparsing
or need to enumerate column names, they all block during any concurrent
column renames (due to the AccessExclusive lock).

In addition, I had some misgivings about permitting renames of the
distribution column, but nothing bad comes from just allowing them.

Finally, I tried to trigger any sort of error using prepared statements
and could not trigger any errors not also exhibited by plain PostgreSQL
tables.
2017-04-18 22:47:48 -06:00
Marco Slot 065d167d2e Merge pull request #1208 from citusdata/remove_job_id_seq
Stop using a sequence to generate job IDs
2017-04-18 12:02:07 +02:00
Marco Slot 0f63edc5b4 Add basic read-only transaction tests 2017-04-18 11:42:33 +02:00
Marco Slot 53899946e7 Remove redundant pg_dist_jobid_seq restarts in tests 2017-04-18 11:42:32 +02:00
Marco Slot d7a5f6997c Set citus.enable_unique_job_ids in tests with job ID in output 2017-04-18 11:42:32 +02:00
Marco Slot c7603215dd Stop using a sequence to generate unique job IDs 2017-04-18 11:31:51 +02:00
Burak Yücesoy f035885269 Merge pull request #1332 from citusdata/set_isactive_to_true
Set default value of isactive to true
2017-04-17 23:38:45 -08:00
Burak Yucesoy 58a809b0e8 Set default value of isactive to true
With this change, we set the default value of the isactive column to true so that
for upgrading users all nodes will be marked as active and their environment does not break.
2017-04-18 09:40:44 +03:00