While attaching a partition to a distributed table in a schema, we mistakenly
used the unqualified name to find the partitioned table's OID. This caused
problems when using partitioned tables with schemas. This PR fixes that issue.
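A minimal sketch of the corrected lookup (not the actual patch), assuming the
caller already has the schema and relation names; the function name is
illustrative:

    #include "postgres.h"
    #include "catalog/namespace.h"
    #include "nodes/makefuncs.h"
    #include "storage/lockdefs.h"

    /* Resolve the partitioned table's OID with its schema kept in the lookup. */
    static Oid
    PartitionedTableOid(char *schemaName, char *relationName)
    {
        /* passing NULL as the schema here would repeat the bug: the lookup
         * would then silently depend on the caller's search_path */
        RangeVar *qualifiedName = makeRangeVar(schemaName, relationName, -1);

        return RangeVarGetRelid(qualifiedName, AccessShareLock, false);
    }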
Short-term share/exclusive page-level locks are used for
read/write access. Locks are released immediately after
each index row is fetched or inserted.
Since those locks cannot be involved in any deadlocks, it's safe
to ignore them in the distributed deadlock detection.
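A hedged sketch of the filtering step, assuming the wait graph is built from
standard LOCKTAGs; the function name is illustrative:

    #include "postgres.h"
    #include "storage/lock.h"

    static bool
    IsLockRelevantForDeadlockDetection(const LOCKTAG *lockTag)
    {
        /* page locks are held only while a single index row is fetched
         * or inserted, so they cannot be part of a deadlock cycle */
        if (lockTag->locktag_type == LOCKTAG_PAGE)
        {
            return false;
        }

        return true;
    }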
In DistributedTablesSize() we didn't close the relations that had a
replication factor > 2. This caused relcache reference leaks and warning
messages like the following in the logs:
WARNING: relcache reference leak: relation "researchers" not closed
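An illustrative sketch of the leak pattern and the fix, assuming each
distributed relation is opened while collecting sizes; the function name and
the replicationFactor parameter are illustrative:

    #include "postgres.h"
    #include "access/heapam.h"
    #include "utils/rel.h"

    static void
    VisitDistributedTable(Oid relationId, int replicationFactor)
    {
        Relation relation = try_relation_open(relationId, AccessShareLock);

        if (relation == NULL)
        {
            return;        /* concurrently dropped */
        }

        if (replicationFactor > 2)
        {
            /* before the fix, this branch returned without closing the
             * relation, leaking a relcache reference */
            relation_close(relation, AccessShareLock);
            return;
        }

        /* ... accumulate this table's size ... */

        relation_close(relation, AccessShareLock);
    }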
It's possible to build INSERT SELECT queries which include implicit
casts. Currently we attempt to support these by adding explicit casts to
the SELECT query, but this sometimes crashes because we don't update all
nodes with the new types (SortClauses, for instance).
This commit removes those explicit casts and passes an unmodified SELECT
query to the COPY executor (which is how we implement INSERT SELECT behind
the scenes). In place of those casts, COPY has been given some extra logic
to inspect queries, notice that the types don't line up with the table it's
supposed to be inserting into, and "manually" cast every tuple before
sending it to the workers.
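One simple way to perform such a per-column cast at runtime is to round-trip
the value through the types' I/O functions. This is a hedged sketch of that
idea, not necessarily the exact mechanism the commit uses:

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/lsyscache.h"

    static Datum
    CastValueToType(Datum value, Oid sourceTypeId, Oid targetTypeId,
                    int32 targetTypeMod)
    {
        Oid outputFunctionId = InvalidOid;
        Oid inputFunctionId = InvalidOid;
        Oid typeIoParam = InvalidOid;
        bool typeVarlena = false;
        char *valueString = NULL;

        /* print the value using the source type's output function ... */
        getTypeOutputInfo(sourceTypeId, &outputFunctionId, &typeVarlena);
        valueString = OidOutputFunctionCall(outputFunctionId, value);

        /* ... and re-read it with the target type's input function */
        getTypeInputInfo(targetTypeId, &inputFunctionId, &typeIoParam);

        return OidInputFunctionCall(inputFunctionId, valueString,
                                    typeIoParam, targetTypeMod);
    }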
Previously we compiled PostgreSQL for PR builds and used pre-compiled
PostgreSQL packages for push builds. Compiling PostgreSQL allows us to
run the isolation/vanilla tests, so we only ran those tests in PR builds.
We did not use custom-compiled PostgreSQL for both build types because:
- We wanted to run our tests against the packages.
- Compiling takes too much time.
However, there are some benefits to using custom-compiled PostgreSQL
in push builds instead of PR builds:
- At the moment, we do not run isolation/vanilla tests until we open a
PR, which does not make any sense.
- After merging a PR (i.e., after a push to master), push builds run
but isolation tests do not.
- While merging community to enterprise, we have to open a temporary
PR just to make Travis run the isolation/vanilla tests.
- With this change, the master branch will have its own cache, which means
every branch originating from master can re-use that cache. (Currently we
cannot use the cache for the first PR build.)
With this PR, we switch from using custom-compiled PostgreSQL in PR builds
to using it in push builds.
ShardPlacementList() can return NIL, and the previous implementation hit a
segmentation fault in that case. The relation can be dropped after we get
the distributed table list but before we call SingleReplicatedTable().
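A minimal sketch of the guard, assuming Citus' ShardPlacementList(); the
surrounding function name is illustrative:

    #include "postgres.h"
    #include "nodes/pg_list.h"

    /* from Citus' metadata headers */
    extern List * ShardPlacementList(uint64 shardId);

    static bool
    ShardHasPlacements(uint64 shardId)
    {
        List *placementList = ShardPlacementList(shardId);

        /* the relation may have been dropped concurrently, in which case
         * the list is NIL; dereferencing its head caused the segfault */
        return placementList != NIL;
    }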
If we don't propagate the errors we catch in PG_CATCH(), the database's
internal state might not be clean. So we run PG_TRY() inside a
subtransaction, which we can roll back to after catching an error.
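This is the standard backend subtransaction pattern (used by PL/pgSQL, for
instance); a sketch, with the risky work passed in as a callback:

    #include "postgres.h"
    #include "access/xact.h"
    #include "utils/resowner.h"

    /* Run operation(); on error, roll back to a savepoint so the backend's
     * internal state stays clean. Returns true on success. */
    static bool
    ExecuteWithRollbackOnError(void (*operation) (void))
    {
        MemoryContext oldContext = CurrentMemoryContext;
        ResourceOwner oldOwner = CurrentResourceOwner;
        bool succeeded = false;

        BeginInternalSubTransaction(NULL);
        MemoryContextSwitchTo(oldContext);

        PG_TRY();
        {
            operation();

            /* commit the subtransaction, return to the outer state */
            ReleaseCurrentSubTransaction();
            MemoryContextSwitchTo(oldContext);
            CurrentResourceOwner = oldOwner;

            succeeded = true;
        }
        PG_CATCH();
        {
            MemoryContextSwitchTo(oldContext);
            FlushErrorState();

            /* abort the subtransaction; this keeps the state clean */
            RollbackAndReleaseCurrentSubTransaction();
            MemoryContextSwitchTo(oldContext);
            CurrentResourceOwner = oldOwner;
        }
        PG_END_TRY();

        return succeeded;
    }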
This patch adds a --with-reports-host configure option, which sets the
REPORTS_BASE_URL constant. The default is reports.citusdata.com.
It also enables stats collection in the tests.
Curl writes the received response to stdout if we don't specify a response
callback or an output file. This can pollute the PostgreSQL log. In this change
we add a callback function so the response messages aren't added to the log file.
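A sketch of the callback using libcurl's documented write-callback interface;
the names are illustrative:

    #include <curl/curl.h>

    /* matches libcurl's write callback signature; swallow the body instead
     * of letting curl print it to stdout */
    static size_t
    DiscardResponseCallback(char *contents, size_t size, size_t count,
                            void *userData)
    {
        (void) contents;
        (void) userData;

        return size * count;        /* report all bytes as consumed */
    }

    static void
    ConfigureSilentResponse(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, DiscardResponseCallback);
    }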
Sends a request to /v1/releases/latest?flavor=$CITUS_EDITION once a day,
which returns a response similar to {"version": "7.1.0", "major": 7,
"minor": 1, "patch": 0}. It then compares that with the current Citus
version and, if the latest release is newer, emits a message at LOG level.
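A hedged sketch of the comparison step, assuming the JSON response has already
been parsed into integers; the names are illustrative:

    #include "postgres.h"

    static void
    WarnIfNewerRelease(int latestMajor, int latestMinor, int latestPatch,
                       int currentMajor, int currentMinor, int currentPatch)
    {
        long latest = (latestMajor * 10000L) + (latestMinor * 100L) +
                      latestPatch;
        long current = (currentMajor * 10000L) + (currentMinor * 100L) +
                       currentPatch;

        if (latest > current)
        {
            ereport(LOG, (errmsg("a newer Citus release %d.%d.%d is available",
                                 latestMajor, latestMinor, latestPatch)));
        }
    }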
This includes (see the sketch after the list):
(1) Wrap everything inside a StartTransactionCommand()/CommitTransactionCommand() pair.
This is so we can access the database. This also switches to a new memory context
and releases it, so we don't have to do our own memory management.
(2) LockCitusExtension() so the extension cannot be dropped or created concurrently.
(3) Check CitusHasBeenLoaded() && CheckCitusVersion() before doing any work.
(4) Do not PG_TRY() inside a loop.
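A hedged sketch tying the four points together, assuming the Citus helper
signatures below; CollectStatsOnce() is a hypothetical stand-in for the
actual work:

    #include "postgres.h"
    #include "access/xact.h"

    /* assumed Citus declarations */
    extern void LockCitusExtension(void);
    extern bool CitusHasBeenLoaded(void);
    extern bool CheckCitusVersion(int elevel);

    static void CollectStatsOnce(void);        /* hypothetical */

    static void
    StatsCollectionStep(void)
    {
        /* (1) catalog access plus a transaction memory context that is
         * released for us at commit */
        StartTransactionCommand();

        /* (2) the extension cannot be dropped or created concurrently */
        LockCitusExtension();

        /* (3) only work against a loaded, matching-version extension */
        if (CitusHasBeenLoaded() && CheckCitusVersion(LOG))
        {
            CollectStatsOnce();        /* (4) and no PG_TRY() inside a loop */
        }

        CommitTransactionCommand();
    }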
This commit makes a change in relay_event_utility.c to check whether an
ALTER TABLE command adds a constraint using an index. If that is the
case, it appends the shard ID to the index name.
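A hedged sketch of the check, assuming PostgreSQL's parse tree for
ALTER TABLE ... ADD CONSTRAINT ... USING INDEX and Citus' existing
AppendShardIdToName():

    #include "postgres.h"
    #include "nodes/parsenodes.h"

    /* from Citus' relay utilities */
    extern void AppendShardIdToName(char **name, uint64 shardId);

    static void
    RelayAddIndexConstraint(AlterTableCmd *command, uint64 shardId)
    {
        if (command->subtype == AT_AddIndexConstraint)
        {
            Constraint *constraint = (Constraint *) command->def;

            /* the constraint is backed by an existing index, so that
             * index name must carry the shard id as well */
            AppendShardIdToName(&constraint->indexname, shardId);
        }
    }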
With this commit, Citus takes the replica identity of the table into
account when we distribute it. So the shards of the distributed table
have the same replica identity as the local table.
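An illustrative sketch of generating the matching DDL for a shard from the
source table's pg_class entry; the function name is illustrative, and the
REPLICA IDENTITY USING INDEX case (which also needs the shard index name)
is skipped:

    #include "postgres.h"
    #include "catalog/pg_class.h"
    #include "lib/stringinfo.h"
    #include "utils/rel.h"

    static char *
    ReplicaIdentityCommand(Relation relation, const char *shardName)
    {
        StringInfo command = makeStringInfo();
        char replicaIdentity = relation->rd_rel->relreplident;

        /* REPLICA_IDENTITY_DEFAULT needs no command at all */
        if (replicaIdentity == REPLICA_IDENTITY_NOTHING)
        {
            appendStringInfo(command, "ALTER TABLE %s REPLICA IDENTITY NOTHING",
                             shardName);
        }
        else if (replicaIdentity == REPLICA_IDENTITY_FULL)
        {
            appendStringInfo(command, "ALTER TABLE %s REPLICA IDENTITY FULL",
                             shardName);
        }

        return command->data;
    }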
Expands count distinct coverage by allowing more cases. We used to support
count distinct only if we could push the distinct aggregate down to the
worker query, i.e. the count distinct clause was on the partition column of
the table, or there was a grouping on the partition column.
Now we can support
- non-partition columns, with or without grouping on the partition column
- partition and non-partition columns in the same query
- HAVING clauses
- single-table subqueries
- INSERT INTO ... SELECT queries
- join queries where count distinct is on a partition or a non-partition column
- filters on count distinct clauses (extends existing support)
We first try to push the aggregate down to the worker query (the original
case); if we can't, we modify the worker query to return the distinct
columns to the coordinator node. We do that by adding the distinct column
targets to the group by clauses. Then we perform the count distinct
operation on the coordinator node.
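A hedged sketch of that worker-query modification using PostgreSQL's parse
tree helpers; the variable names are illustrative and this is not the
literal Citus code:

    #include "postgres.h"
    #include "nodes/makefuncs.h"
    #include "nodes/nodeFuncs.h"
    #include "optimizer/tlist.h"
    #include "parser/parse_oper.h"

    /* Make the worker query return the distinct column by adding it to
     * both the target list and the GROUP BY clause. */
    static void
    AddDistinctColumnToWorkerQuery(Query *workerQuery, Expr *distinctColumn)
    {
        AttrNumber nextResno = list_length(workerQuery->targetList) + 1;
        TargetEntry *targetEntry = makeTargetEntry(distinctColumn, nextResno,
                                                   NULL, false);
        SortGroupClause *groupByClause = makeNode(SortGroupClause);

        workerQuery->targetList = lappend(workerQuery->targetList, targetEntry);

        groupByClause->tleSortGroupRef =
            assignSortGroupRef(targetEntry, workerQuery->targetList);
        get_sort_group_operators(exprType((Node *) distinctColumn),
                                 true, true, false,
                                 &groupByClause->sortop, &groupByClause->eqop,
                                 NULL, &groupByClause->hashable);
        groupByClause->nulls_first = false;

        workerQuery->groupClause = lappend(workerQuery->groupClause,
                                           groupByClause);
    }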
This work should reduce the cases where HLL is used, since it can address
anything HLL can. However, if we start having performance issues due to
very large numbers of rows, then we can recommend using HLL.