The link in our readme goes directly to our channel, so people finding
the link here for the first time cannot join Slack this way.
Given that the target audience using this link is most likely not part
of the Slack channel yet, it would be better to link to our self-serve
signup flow at slack.citusdata.com, which is the same flow we use on
citusdata.com.
From simple testing, you should still get redirected to the channel if
you have already joined and are signed in.
This PR fixes the following:
- in oraclelinux-7 `Make` step
```
/usr/bin/ld: utils/replication_origin_session_utils.o: relocation R_X86_64_PC32 against undefined symbol
`IsLocalReplicationOriginSessionActive' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
```
The `IsLocalReplicationOriginSessionActive` function had an improper
inline declaration; fixed that (see the first sketch after this list)
- in centos-7 `Make` step
```
utils/background_jobs.c: In function 'StartCitusBackgroundTaskExecutor':
utils/background_jobs.c:1746:6: warning: function might be possible candidate for 'gnu_printf' format attribute
[-Wsuggest-attribute=format]
database, user, jobId, taskId);
^
```
should use `pg_attribute_printf(3,4)` instead of
`pg_attribute_printf(3,0)` since
`SafeSnprintf(char *str, rsize_t count, const char *fmt, ...)` takes a
variable number of arguments: the format string is parameter 3 and the
arguments to check against it start at parameter 4, whereas a second
argument of 0 tells the compiler the arguments are not available for
checking (see the second sketch after this list)
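A minimal sketch of the C99 inline pitfall behind the first failure; the function body is elided and `static inline` is one common fix, shown here as an assumption rather than the actual Citus change:
```c
#include <stdbool.h>

#if 0
/* Broken: under C99 semantics, a plain `inline` definition without
 * `static` (or a separate `extern` declaration) emits no external
 * definition of the symbol. When the compiler generates a real call
 * instead of inlining it, the linker finds the symbol undefined and
 * fails as in the output above. */
inline bool
IsLocalReplicationOriginSessionActive(void)
{
	return false; /* body elided */
}
#endif

/* Fixed: `static inline` gives the translation unit its own usable
 * definition, so no external symbol is required. */
static inline bool
IsLocalReplicationOriginSessionActive(void)
{
	return false; /* body elided */
}
```
Note that the `-fPIC` hint in the linker message is a red herring in this case; the real problem is that no definition of the symbol was ever emitted.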
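And a sketch of the format-attribute semantics behind the second fix, written with the raw GCC attribute that `pg_attribute_printf` expands to; `safe_snprintf_demo` is a stand-in, not the actual `SafeSnprintf`:
```c
#include <stdarg.h>
#include <stdio.h>

/* (3, 4): the format string is parameter 3 and the variadic arguments
 * to check start at parameter 4. A second argument of 0 would mean
 * "arguments not available for checking" (meant for va_list-style
 * wrappers), which leaves call sites unchecked and can trigger
 * -Wsuggest-attribute=format. */
static int safe_snprintf_demo(char *str, size_t count, const char *fmt, ...)
__attribute__((format(printf, 3, 4)));

static int
safe_snprintf_demo(char *str, size_t count, const char *fmt, ...)
{
	va_list args;

	va_start(args, fmt);
	int n = vsnprintf(str, count, fmt, args);
	va_end(args);

	return n;
}
```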
---------
Co-authored-by: naisila <nicypp@gmail.com>
Some clients send ALTER TABLE .. ADD COLUMN .. commands together
with some other DDLs, which makes it impossible to send the original
DDL command to the workers as-is.
For this reason, this commit adds support for deparsing such ALTER
TABLE commands so that we can avoid sending the original one directly
to the workers (a rough sketch follows).
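A rough sketch of the resulting flow, with hypothetical function names (the actual Citus deparser and worker-communication functions are named differently):
```c
#include "postgres.h"
#include "nodes/parsenodes.h"

/* Hypothetical entry points, assumed for illustration. */
extern char * DeparseAlterTableStmt(AlterTableStmt *stmt);
extern void SendCommandToWorkersIllustrative(const char *command);

static void
PropagateAlterTable(AlterTableStmt *stmt)
{
	/* Regenerate a standalone command from the parse tree instead of
	 * forwarding the client's original string, which may bundle
	 * ADD COLUMN with unrelated DDL. */
	char *workerCommand = DeparseAlterTableStmt(stmt);

	SendCommandToWorkersIllustrative(workerCommand);
}
```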
Partially fixes https://github.com/citusdata/citus/issues/690.
Fixes #3678
We allow materialized views to exist in a distributed schema, but we
should not try to convert them to tenant tables since they cannot be
distributed (a sketch of the guard follows).
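A minimal sketch of the guard, using PostgreSQL's relkind check; the helper name and its placement in the conversion path are assumptions:
```c
#include "postgres.h"
#include "catalog/pg_class.h"
#include "utils/lsyscache.h"

/* Hypothetical helper: should this relation be converted to a tenant
 * table when its schema is distributed? Materialized views may live in
 * the schema but cannot be distributed, so skip them. */
static bool
ShouldConvertToTenantTable(Oid relationId)
{
	if (get_rel_relkind(relationId) == RELKIND_MATVIEW)
	{
		return false;
	}

	return true;
}
```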
Fixes https://github.com/citusdata/citus/issues/7041
Inserting into `pg_dist_schema` causes unexpected duplicate key errors
for distributed schemas that already exist. With this commit, we skip
the insertion if the schema already exists in `pg_dist_schema` (see the
sketch after the error output).
The error:
```sql
SET citus.enable_schema_based_sharding TO ON;
CREATE SCHEMA sc2;
CREATE SCHEMA IF NOT EXISTS sc2;
NOTICE: schema "sc2" already exists, skipping
ERROR: duplicate key value violates unique constraint "pg_dist_schema_pkey"
DETAIL: Key (schemaid)=(17294) already exists.
```
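A minimal sketch of the fix, with hypothetical helper names standing in for the actual catalog accessors:
```c
#include "postgres.h"

/* Hypothetical catalog accessors, assumed for illustration. */
extern bool SchemaHasPgDistSchemaEntry(Oid schemaId);
extern void InsertIntoPgDistSchema(Oid schemaId);

static void
RegisterDistributedSchema(Oid schemaId)
{
	/* Skip the insert when the schema is already registered, so that
	 * CREATE SCHEMA IF NOT EXISTS on an existing distributed schema no
	 * longer violates pg_dist_schema_pkey. */
	if (SchemaHasPgDistSchemaEntry(schemaId))
	{
		return;
	}

	InsertIntoPgDistSchema(schemaId);
}
```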
Fixes #7042
This PR
* Addresses a concurrency issue in the probabilistic approach of tenant
monitoring by acquiring a shared lock for tenant existence checks (a
sketch follows this list).
* Changes `citus.stat_tenants_sample_rate_for_new_tenants` type to
double
* Renames `citus.stat_tenants_sample_rate_for_new_tenants` to
`citus.stat_tenants_untracked_sample_rate`
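A rough sketch of the locking pattern from the first bullet, using PostgreSQL's LWLock API; the stats struct and lookup helper are hypothetical:
```c
#include "postgres.h"
#include "storage/lwlock.h"

/* Hypothetical shared-memory state for tenant monitoring. */
typedef struct TenantMonitorState
{
	LWLock *lock;
} TenantMonitorState;

extern bool TenantEntryExists(TenantMonitorState *state, const char *tenantKey);

static bool
TenantExistsGuarded(TenantMonitorState *state, const char *tenantKey)
{
	/* A shared lock suffices for the read-only existence check and
	 * keeps it from racing with concurrent writers that insert or
	 * evict tenant entries. */
	LWLockAcquire(state->lock, LW_SHARED);
	bool found = TenantEntryExists(state, tenantKey);
	LWLockRelease(state->lock);

	return found;
}
```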
DESCRIPTION: Change default rebalance strategy to by_disk_size
When introducing rebalancing by disk size we didn't make it the default
initially, mainly because we expected some problems with it. We have
indeed had some problems/bugs with it over the years, and have fixed all
of them. By now we're quite confident in its stability, and that it
pretty much always gives better results than by_shard_count.
So this PR makes by_disk_size the new default. We don't change the
default when a strategy other than by_shard_count is the current
default. This is in case someone defined their own rebalance strategy
and marked it as the default themselves.
Note: It explicitly does nothing during a downgrade, because there's no
way of knowing whether the rebalance strategy before the upgrade was
by_disk_size or by_shard_count. And even in previous versions,
by_disk_size has been considered superior for quite some time.
One problem with rebalancing by disk size is that shards in newly
created colocation groups are considered extremely small. This can
easily result in bad balances if there are some other colocation groups
that do have some data. One extremely bad example of this is:
1. You have 2 workers
2. Both contain about 100GB of data, but there's a 70MB difference.
3. You create 100 new distributed schemas with a few empty tables in
them
4. You run the rebalancer
5. Now all new distributed schemas are placed on the node that had
70MB less.
6. You start loading some data in these shards and quickly the balance
is completely off
To address this edge case, this PR changes the by_disk_size rebalance
strategy to add a base size of 100MB to the actual size of each shard
group. This can still result in a bad balance when shard groups are
empty, but it solves some of the worst cases (a sketch with the
arithmetic follows).
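A minimal sketch of the change; the constant matches the 100MB from the text, but the function name is illustrative, not the actual by_disk_size cost function:
```c
#include <stdint.h>

/* Flat base size added to every shard group, per the text above. */
#define SHARD_GROUP_BASE_SIZE ((int64_t) 100 * 1024 * 1024) /* 100MB */

/* Size the rebalancer compares across nodes: an empty shard group no
 * longer looks free, so newly created colocation groups carry real
 * weight in the balance. */
static int64_t
ShardGroupSizeForBalancing(int64_t actualSizeBytes)
{
	return actualSizeBytes + SHARD_GROUP_BASE_SIZE;
}
```
In the scenario above, the 100 new schemas now weigh roughly 100 × 100MB = 10GB in total, which dwarfs the 70MB imbalance, so the rebalancer spreads them across both workers instead of piling them all onto the slightly lighter one.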