Compare commits

..

11 Commits

Author SHA1 Message Date
Onur Tirtir b7ae596fe8
Bump citus version to 11.1.1 (#6356) 2022-09-16 12:33:09 +03:00
Onur Tirtir 6f4324623c Add changelog entries for 11.1.1 (#6354)
(cherry picked from commit 8b5cdaf0e9)
2022-09-16 12:17:25 +03:00
Marco Slot d5db0adc17 Allow create_distributed_table_concurrently on an empty node (#6353)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2022-09-16 12:17:25 +03:00
Onur Tirtir 099523452e Add changelog entries for 11.1.0 (#6349)
Created by executing `prepare_changelog.pl citus 11.1.0 2022-03-29`.

(cherry picked from commit 57e354ac91)
2022-09-16 11:20:00 +03:00
Onder Kalaci af448da1a7 Prevent failures on partitioned distributed tables with statistics objects on PG 15
Comment from the code is clear on this:
/*
 * The statistics objects of the distributed table are not relevant
 * for the distributed planning, so we can override it.
 *
 * Normally, we should not need this. However, the combination of
 * Postgres commit 269b532aef55a579ae02a3e8e8df14101570dfd9 and
 * Citus function AdjustPartitioningForDistributedPlanning()
 * forces us to do this. The commit expects statistics objects
 * of partitions to have the "inh" flag set properly. Whereas, the
 * function overrides the "inh" flag. To avoid Postgres throwing an error,
 * we override statlist such that Postgres does not try to process
 * any statistics objects during the standard_planner() on the
 * coordinator. In the end, we do not need the standard_planner()
 * on the coordinator to generate an optimized plan. We call
 * into standard_planner() for other purposes, such as generating the
 * relationRestrictionContext here.
 *
 * AdjustPartitioningForDistributedPlanning() is a hack that we use
 * to prevent Postgres' standard_planner() from expanding all the partitions
 * for the distributed planning when a distributed partitioned table
 * is queried. It is required for both correctness and performance
 * reasons. Although we could eliminate the use of the function for
 * correctness (e.g., make sure that the rest of the planner can handle
 * partitions), its performance implication is hard to avoid. Certain
 * planning logic of Citus (such as router or query pushdown) relies
 * heavily on the relationRestrictionList. If
 * AdjustPartitioningForDistributedPlanning() is removed, all the
 * partitions show up in the relationRestrictionList, causing high
 * planning times for such queries.
 */
2022-09-15 14:37:28 +03:00
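To illustrate the override described in the commit above, here is a minimal, hypothetical sketch of clearing a relation's extended-statistics list from a planner hook. It is not the actual Citus code: `IsCitusTable()` is assumed to be an available helper, and the hook wiring in `_PG_init` is omitted.

```c
#include "postgres.h"

#include "nodes/pathnodes.h"
#include "optimizer/plancat.h"

static get_relation_info_hook_type prev_get_relation_info_hook = NULL;

/* assumed helper: returns true when the relation is a Citus-managed table */
extern bool IsCitusTable(Oid relationId);

static void
SkipStatisticsObjectsForDistributedRel(PlannerInfo *root, Oid relationObjectId,
                                       bool inhparent, RelOptInfo *rel)
{
    if (prev_get_relation_info_hook != NULL)
    {
        prev_get_relation_info_hook(root, relationObjectId, inhparent, rel);
    }

    if (IsCitusTable(relationObjectId))
    {
        /*
         * Statistics objects are irrelevant for distributed planning, so make
         * sure standard_planner() on the coordinator never processes them for
         * (partitions of) distributed tables.
         */
        rel->statlist = NIL;
    }
}
```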
Sameer Awasekar acccad9879 Introduce code changes to fix Issue:6303 (#6328)
The PR introduces code changes to fix Issue
[6303](https://github.com/citusdata/citus/issues/6303).

`create_distributed_table_concurrently` following a dropped column creates a
buggy situation in the split decoder. Consider the below scenario:
* Session1: Drop column followed by create_distributed_table_concurrently
* Session2: Concurrent insert workload

The child shards created by `create_distributed_table_concurrently` will
have fewer columns than the source shard because some columns were
dropped. The incoming tuple from session2 will have more columns because the
writes happened on the source shard, but the tuple now needs to be applied
to a child shard. So we need to reformat the existing tuple according to the
child schema and skip the dropped column values.
The PR fixes this by reformatting the tuple according to the target child
schema.

Test:
1) isolation_create_distributed_concurrently_after_drop_column - Reproduces
the issue and tests the fix.
2022-09-15 09:48:43 +02:00
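A minimal sketch of the reformatting idea described in the commit above, assuming access to the source and target tuple descriptors; this is illustrative only and not the exact code from the PR.

```c
#include "postgres.h"

#include "access/htup_details.h"
#include "access/tupdesc.h"

/*
 * Rebuild a tuple that was formed with the source shard's descriptor so it
 * matches the target (child) shard's descriptor, skipping columns that were
 * dropped on the source. Sketch only: assumes the non-dropped columns appear
 * in the same order in both descriptors.
 */
static HeapTuple
ReformTupleForTargetShard(HeapTuple sourceTuple, TupleDesc sourceDesc,
                          TupleDesc targetDesc)
{
    Datum *sourceValues = palloc0(sourceDesc->natts * sizeof(Datum));
    bool *sourceNulls = palloc0(sourceDesc->natts * sizeof(bool));
    Datum *targetValues = palloc0(targetDesc->natts * sizeof(Datum));
    bool *targetNulls = palloc0(targetDesc->natts * sizeof(bool));
    int targetIndex = 0;

    heap_deform_tuple(sourceTuple, sourceDesc, sourceValues, sourceNulls);

    for (int sourceIndex = 0; sourceIndex < sourceDesc->natts; sourceIndex++)
    {
        /* dropped columns have no counterpart in the child shard's schema */
        if (TupleDescAttr(sourceDesc, sourceIndex)->attisdropped)
        {
            continue;
        }

        targetValues[targetIndex] = sourceValues[sourceIndex];
        targetNulls[targetIndex] = sourceNulls[sourceIndex];
        targetIndex++;
    }

    return heap_form_tuple(targetDesc, targetValues, targetNulls);
}
```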
aykut-bozkurt 77947da17c ensure we have more active nodes than replication factor. (#6341)
DESCRIPTION: Fixes floating point exception during
create_distributed_table_concurrently.

Fixes #6332.
During create_distributed_table_concurrently, when there is no active
primary node, it fails with a floating point exception. We added a check
similar to the one in create_distributed_table: it now fails with a proper
message if the number of active nodes is less than the replication factor.

(cherry picked from commit 739b91afa6)
2022-09-14 18:22:58 +03:00
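A hypothetical sketch of the guard this commit describes (function and variable names are assumed, not the actual Citus code): fail with a clear error instead of hitting a floating point exception when fewer active nodes are available than the replication factor requires.

```c
#include "postgres.h"

static void
EnsureEnoughActiveNodesForReplication(int activePrimaryNodeCount,
                                      int shardReplicationFactor)
{
    if (activePrimaryNodeCount < shardReplicationFactor)
    {
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("replication factor (%d) exceeds the number of "
                        "active worker nodes (%d)",
                        shardReplicationFactor, activePrimaryNodeCount),
                 errhint("Add more worker nodes or lower "
                         "citus.shard_replication_factor.")));
    }
}
```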
Marco Slot 7d56c25e28 Fix escaping in sequence dependency queries (#6345)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2022-09-14 16:52:25 +02:00
Marco Slot eba70af7a2 Fix bugs in CheckIfRelationWithSameNameExists (#6343)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2022-09-14 15:59:43 +02:00
Hanefi Onaldi 3f33390f45 Bump Citus version to 11.1.0 2022-09-14 14:21:43 +03:00
Nils Dijk 7b51f3eee2
Fix: rebalance stop non super user (#6334)
No need for a description; this fixes an issue introduced with a new feature
for 11.1.

Fixes #6333

Due to Postgres' C API being 0-indexed and Postgres' attributes being
1-indexed, we were reading the wrong Datum as the task owner when
cancelling. Here we add a test to show the error and fix the off-by-one
error.
2022-09-13 23:20:06 +02:00
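To illustrate the off-by-one this commit fixes, a small hypothetical sketch (the attribute number and names are made up for illustration): Postgres attribute numbers are 1-based while the Datum array produced by heap_deform_tuple is 0-based, so the owner must be read at position `attno - 1`.

```c
#include "postgres.h"

/* assumed attribute number of the task owner column, for illustration only */
#define TASK_OWNER_ATTNO 2

static Oid
TaskOwnerFromDeformedTuple(Datum *values, bool *isnull)
{
    /*
     * Reading values[TASK_OWNER_ATTNO] would fetch the *next* column's
     * Datum, which is the off-by-one bug; attribute N lives at index N - 1.
     */
    if (isnull[TASK_OWNER_ATTNO - 1])
    {
        return InvalidOid;
    }

    return DatumGetObjectId(values[TASK_OWNER_ATTNO - 1]);
}
```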
1764 changed files with 62695 additions and 216751 deletions

.circleci/config.yml (Normal file): 1066 lines changed

File diff suppressed because it is too large.

@ -1,7 +0,0 @@
exclude_patterns:
- "src/backend/distributed/utils/citus_outfuncs.c"
- "src/backend/distributed/deparser/ruleutils_*.c"
- "src/include/distributed/citus_nodes.h"
- "src/backend/distributed/safeclib"
- "src/backend/columnar/safeclib"
- "**/vendor/"


@ -1,33 +0,0 @@
# gdbpg.py contains scripts to nicely print the postgres datastructures
# while in a gdb session. Since the vscode debugger is based on gdb this
# actually also works when debugging with vscode. It provides nice tools
# to understand the internal datastructures we are working with.
source /root/gdbpg.py
# when debugging postgres it is convenient to _always_ have a breakpoint
# trigger when an error is logged. Because .gdbinit is sourced before gdb
# is fully attached and has the sources loaded, we temporarily set breakpoint
# pending to on to make sure the breakpoint is added when the library is
# loaded. After we have added our breakpoint we revert back to the default
# configuration for breakpoint pending.
# The breakpoint is hard to read, but at entry of the function we don't have
# the level loaded in elevel. Instead we hardcode the location where the
# level of the current error is stored. Also gdb doesn't understand the
# ERROR symbol so we hardcode this to the value of ERROR. It is very unlikely
# this value will ever change in postgres, but if it does we might need to
# find a way to conditionally load the correct breakpoint.
set breakpoint pending on
break elog.c:errfinish if errordata[errordata_stack_depth].elevel == 21
set breakpoint pending auto
echo \n
echo ----------------------------------------------------------------------------------\n
echo when attaching to a postgres backend a breakpoint will be set on elog.c:errfinish \n
echo it will only break on errors being raised in postgres \n
echo \n
echo to disable this breakpoint from vscode run `-exec disable 1` in the debug console \n
echo this assumes it's the first breakpoint loaded as it is loaded from .gdbinit \n
echo this can be verified with `-exec info break`, enabling can be done with \n
echo `-exec enable 1` \n
echo ----------------------------------------------------------------------------------\n
echo \n


@ -1 +0,0 @@
postgresql-*.tar.bz2


@ -1,7 +0,0 @@
\timing on
\pset linestyle unicode
\pset border 2
\setenv PAGER 'pspg --no-mouse -bX --no-commandbar --no-topbar'
\set HISTSIZE 100000
\set PROMPT1 '\n%[%033[1m%]%M %n@%/:%> (PID: %p)%R%[%033[0m%]%# '
\set PROMPT2 ' '


@ -1,12 +0,0 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
docopt = "*"
[dev-packages]
[requires]
python_version = "3.9"

.devcontainer/.vscode/Pipfile.lock (generated, vendored): 28 lines changed

@ -1,28 +0,0 @@
{
"_meta": {
"hash": {
"sha256": "6956a6700ead5804aa56bd597c93bb4a13f208d2d49d3b5399365fd240ca0797"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.9"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.org/simple",
"verify_ssl": true
}
]
},
"default": {
"docopt": {
"hashes": [
"sha256:49b3a825280bd66b3aa83585ef59c4a8c82f2c8a522dbe754a8bc8d08c85c491"
],
"index": "pypi",
"version": "==0.6.2"
}
},
"develop": {}
}


@ -1,84 +0,0 @@
#! /usr/bin/env pipenv-shebang
"""Generate C/C++ properties file for VSCode.
Uses pgenv to iterate postgres versions and generate
a C/C++ properties file for VSCode containing the
include paths for the postgres headers.
Usage:
generate_c_cpp_properties-json.py <target_path>
generate_c_cpp_properties-json.py (-h | --help)
generate_c_cpp_properties-json.py --version
Options:
-h --help Show this screen.
--version Show version.
"""
import json
import subprocess
from docopt import docopt
def main(args):
target_path = args['<target_path>']
output = subprocess.check_output(['pgenv', 'versions'])
# typical output is:
# 14.8 pgsql-14.8
# * 15.3 pgsql-15.3
# 16beta2 pgsql-16beta2
# where the line marked with a * is the currently active version
#
# we are only interested in the first word of each line, which is the version number
# thus we strip the whitespace and the * from the line and split it into words
# and take the first word
versions = [line.strip('* ').split()[0] for line in output.decode('utf-8').splitlines()]
# create the list of configurations per version
configurations = []
for version in versions:
configurations.append(generate_configuration(version))
# create the json file
c_cpp_properties = {
"configurations": configurations,
"version": 4
}
# write the c_cpp_properties.json file
with open(target_path, 'w') as f:
json.dump(c_cpp_properties, f, indent=4)
def generate_configuration(version):
"""Returns a configuration for the given postgres version.
>>> generate_configuration('14.8')
{
"name": "Citus Development Configuration - Postgres 14.8",
"includePath": [
"/usr/local/include",
"/home/citus/.pgenv/src/postgresql-14.8/src/**",
"${workspaceFolder}/**",
"${workspaceFolder}/src/include/",
],
"configurationProvider": "ms-vscode.makefile-tools"
}
"""
return {
"name": f"Citus Development Configuration - Postgres {version}",
"includePath": [
"/usr/local/include",
f"/home/citus/.pgenv/src/postgresql-{version}/src/**",
"${workspaceFolder}/**",
"${workspaceFolder}/src/include/",
],
"configurationProvider": "ms-vscode.makefile-tools"
}
if __name__ == '__main__':
arguments = docopt(__doc__, version='0.1.0')
main(arguments)


@ -1,40 +0,0 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Attach Citus (devcontainer)",
"type": "cppdbg",
"request": "attach",
"processId": "${command:pickProcess}",
"program": "/home/citus/.pgenv/pgsql/bin/postgres",
"additionalSOLibSearchPath": "/home/citus/.pgenv/pgsql/lib",
"setupCommands": [
{
"text": "handle SIGUSR1 noprint nostop pass",
"description": "let gdb not stop when SIGUSR1 is sent to process",
"ignoreFailures": true
}
],
},
{
"name": "Open core file",
"type": "cppdbg",
"request": "launch",
"program": "/home/citus/.pgenv/pgsql/bin/postgres",
"coreDumpPath": "${input:corefile}",
"cwd": "${workspaceFolder}",
"MIMode": "gdb",
}
],
"inputs": [
{
"id": "corefile",
"type": "command",
"command": "extension.commandvariable.file.pickFile",
"args": {
"dialogTitle": "Select core file",
"include": "**/core*",
},
},
],
}


@ -1,222 +0,0 @@
FROM ubuntu:22.04 AS base
# environment is to make python pass an interactive shell, probably not the best timezone given a wide variety of colleagues
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# install build tools
RUN apt update && apt install -y \
bison \
bzip2 \
cpanminus \
curl \
docbook-xml \
docbook-xsl \
flex \
gcc \
git \
libcurl4-gnutls-dev \
libicu-dev \
libkrb5-dev \
liblz4-dev \
libpam0g-dev \
libreadline-dev \
libselinux1-dev \
libssl-dev \
libxml2-utils \
libxslt-dev \
libzstd-dev \
locales \
make \
perl \
pkg-config \
python3 \
python3-pip \
software-properties-common \
sudo \
uuid-dev \
valgrind \
xsltproc \
zlib1g-dev \
&& add-apt-repository ppa:deadsnakes/ppa -y \
&& apt install -y \
python3.9-full \
# software properties pulls in pkexec, which makes the debugger unusable in vscode
&& apt purge -y \
software-properties-common \
&& apt autoremove -y \
&& apt clean
RUN sudo pip3 install pipenv pipenv-shebang
RUN cpanm install IPC::Run
RUN locale-gen en_US.UTF-8
# add the citus user to sudoers and allow all sudoers to login without a password prompt
RUN useradd -ms /bin/bash citus \
&& usermod -aG sudo citus \
&& echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
WORKDIR /home/citus
USER citus
# run all make commands with the number of cores available
RUN echo "export MAKEFLAGS=\"-j \$(nproc)\"" >> "/home/citus/.bashrc"
RUN git clone --branch v1.3.2 --depth 1 https://github.com/theory/pgenv.git .pgenv
COPY --chown=citus:citus pgenv/config/ .pgenv/config/
ENV PATH="/home/citus/.pgenv/bin:${PATH}"
ENV PATH="/home/citus/.pgenv/pgsql/bin:${PATH}"
USER citus
# build postgres versions separately for effective parallelism and caching of already built versions when changing only certain versions
FROM base AS pg15
RUN MAKEFLAGS="-j $(nproc)" pgenv build 15.14
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
# create a staging directory with all files we want to copy from our pgenv build
# we will copy the contents of the staged folder into the final image at once
RUN mkdir .pgenv-staging/
RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS pg16
RUN MAKEFLAGS="-j $(nproc)" pgenv build 16.10
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
# create a staging directory with all files we want to copy from our pgenv build
# we will copy the contents of the staged folder into the final image at once
RUN mkdir .pgenv-staging/
RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS pg17
RUN MAKEFLAGS="-j $(nproc)" pgenv build 17.6
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
# create a staging directory with all files we want to copy from our pgenv build
# we will copy the contents of the staged folder into the final image at once
RUN mkdir .pgenv-staging/
RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS uncrustify-builder
RUN sudo apt update && sudo apt install -y cmake tree
WORKDIR /uncrustify
RUN curl -L https://github.com/uncrustify/uncrustify/archive/uncrustify-0.82.0.tar.gz | tar xz
WORKDIR /uncrustify/uncrustify-uncrustify-0.82.0/
RUN mkdir build
WORKDIR /uncrustify/uncrustify-uncrustify-0.82.0/build/
RUN cmake ..
RUN MAKEFLAGS="-j $(nproc)" make -s
RUN make install DESTDIR=/uncrustify
# builder for all pipenv's to get them contained in a single layer
FROM base AS pipenv
WORKDIR /workspaces/citus/
# tools to sync pgenv with vscode
COPY --chown=citus:citus .vscode/Pipfile .vscode/Pipfile.lock .devcontainer/.vscode/
RUN ( cd .devcontainer/.vscode && pipenv install )
# environment to run our failure tests
COPY --chown=citus:citus src/ src/
RUN ( cd src/test/regress && pipenv install )
# assemble the final container by copying over the artifacts from separately build containers
FROM base AS devcontainer
LABEL org.opencontainers.image.source=https://github.com/citusdata/citus
LABEL org.opencontainers.image.description="Development container for the Citus project"
LABEL org.opencontainers.image.licenses=AGPL-3.0-only
RUN yes | sudo unminimize
# install developer productivity tools
RUN sudo apt update \
&& sudo apt install -y \
autoconf2.69 \
bash-completion \
fswatch \
gdb \
htop \
libdbd-pg-perl \
libdbi-perl \
lsof \
man \
net-tools \
psmisc \
pspg \
tree \
vim \
&& sudo apt clean
# Since gdb will run in the context of the root user when debugging citus we will need to both
# download the gdbpg.py script as the root user, into their home directory, as well as add .gdbinit
# as a file owned by root
# This makes sure that as soon as the debugger attaches to a postgres backend (or frankly any other process)
# the gdbpg.py script will be sourced and the developer can directly use it.
RUN sudo curl -o /root/gdbpg.py https://raw.githubusercontent.com/tvesely/gdbpg/6065eee7872457785f830925eac665aa535caf62/gdbpg.py
COPY --chown=root:root .gdbinit /root/
# install developer dependencies in the global environment
RUN --mount=type=bind,source=requirements.txt,target=requirements.txt pip install -r requirements.txt
# for persistent bash history across devcontainers we need to have
# a) a directory to store the history in
# b) a prompt command to append the history to the file
# c) specify the history file to store the history in
# b and c are done in the .bashrc to make it persistent across shells only
RUN sudo install -d -o citus -g citus /commandhistory \
&& echo "export PROMPT_COMMAND='history -a' && export HISTFILE=/commandhistory/.bash_history" >> "/home/citus/.bashrc"
# install citus-dev
RUN git clone --branch develop https://github.com/citusdata/tools.git citus-tools \
&& ( cd citus-tools/citus_dev && pipenv install ) \
&& mkdir -p ~/.local/bin \
&& ln -s /home/citus/citus-tools/citus_dev/citus_dev-pipenv .local/bin/citus_dev \
&& sudo make -C citus-tools/uncrustify install bindir=/usr/local/bin pkgsysconfdir=/usr/local/etc/ \
&& mkdir -p ~/.local/share/bash-completion/completions/ \
&& ln -s ~/citus-tools/citus_dev/bash_completion ~/.local/share/bash-completion/completions/citus_dev
# TODO some LC_ALL errors, possibly solved by locale-gen
RUN git clone https://github.com/so-fancy/diff-so-fancy.git \
&& mkdir -p ~/.local/bin \
&& ln -s /home/citus/diff-so-fancy/diff-so-fancy .local/bin/
COPY --link --from=uncrustify-builder /uncrustify/usr/ /usr/
COPY --link --from=pg15 /home/citus/.pgenv-staging/ /home/citus/.pgenv/
COPY --link --from=pg16 /home/citus/.pgenv-staging/ /home/citus/.pgenv/
COPY --link --from=pg17 /home/citus/.pgenv-staging/ /home/citus/.pgenv/
COPY --link --from=pipenv /home/citus/.local/share/virtualenvs/ /home/citus/.local/share/virtualenvs/
# place to run your cluster with citus_dev
VOLUME /data
RUN sudo mkdir /data \
&& sudo chown citus:citus /data
COPY --chown=citus:citus .psqlrc .
# with the copy linking of layers, github actions seems to misbehave with the ownership of the
# directories leading up to the link, hence a small patch layer to set the right ownership
RUN sudo chown --from=root:root citus:citus -R ~
# sets default pg version
RUN pgenv switch 17.6
# make connecting to the coordinator easy
ENV PGPORT=9700


@ -1,11 +0,0 @@
init: ../.vscode/c_cpp_properties.json ../.vscode/launch.json
../.vscode:
mkdir -p ../.vscode
../.vscode/launch.json: ../.vscode .vscode/launch.json
cp .vscode/launch.json ../.vscode/launch.json
../.vscode/c_cpp_properties.json: ../.vscode
./.vscode/generate_c_cpp_properties-json.py ../.vscode/c_cpp_properties.json


@ -1,39 +0,0 @@
{
"image": "ghcr.io/citusdata/citus-devcontainer:main",
"runArgs": [
"--cap-add=SYS_PTRACE",
"--cap-add=SYS_NICE", // allow NUMA page inquiry
"--security-opt=seccomp=unconfined", // unblocks move_pages() in the container
"--ulimit=core=-1",
],
"forwardPorts": [
9700
],
"customizations": {
"vscode": {
"extensions": [
"eamodio.gitlens",
"GitHub.copilot-chat",
"GitHub.copilot",
"github.vscode-github-actions",
"github.vscode-pull-request-github",
"ms-vscode.cpptools-extension-pack",
"ms-vsliveshare.vsliveshare",
"rioj7.command-variable",
],
"settings": {
"files.exclude": {
"**/*.o": true,
"**/.deps/": true,
}
},
}
},
"mounts": [
"type=volume,target=/data",
"source=citus-bashhistory,target=/commandhistory,type=volume",
],
"updateContentCommand": "./configure",
"postCreateCommand": "make -C .devcontainer/",
}


@ -1,15 +0,0 @@
PGENV_MAKE_OPTIONS=(-s)
PGENV_CONFIGURE_OPTIONS=(
--enable-debug
--enable-depend
--enable-cassert
--enable-tap-tests
'CFLAGS=-ggdb -Og -g3 -fno-omit-frame-pointer -DUSE_VALGRIND'
--with-openssl
--with-libxml
--with-libxslt
--with-uuid=e2fs
--with-icu
--with-lz4
)


@ -1,9 +0,0 @@
black==24.3.0
click==8.1.7
isort==5.12.0
mypy-extensions==1.0.0
packaging==23.2
pathspec==0.11.2
platformdirs==4.0.0
tomli==2.0.1
typing_extensions==4.8.0


@ -1,28 +0,0 @@
[[source]]
name = "pypi"
url = "https://pypi.python.org/simple"
verify_ssl = true
[packages]
mitmproxy = {editable = true, ref = "main", git = "https://github.com/citusdata/mitmproxy.git"}
construct = "*"
docopt = "==0.6.2"
cryptography = ">=41.0.4"
pytest = "*"
psycopg = "*"
filelock = "*"
pytest-asyncio = "*"
pytest-timeout = "*"
pytest-xdist = "*"
pytest-repeat = "*"
pyyaml = "*"
werkzeug = "==3.0.6"
[dev-packages]
black = "*"
isort = "*"
flake8 = "*"
flake8-bugbear = "*"
[requires]
python_version = "3.9"

File diff suppressed because it is too large.

@ -17,7 +17,7 @@ trim_trailing_whitespace = true
insert_final_newline = unset
trim_trailing_whitespace = unset
[*.{sql,sh,py,toml}]
[*.{sql,sh,py}]
indent_style = space
indent_size = 4
tab_width = 4


@ -1,7 +0,0 @@
[flake8]
# E203 is ignored for black
extend-ignore = E203
# black will truncate to 88 characters usually, but long string literals it
# might keep. That's fine in most cases unless it gets really excessive.
max-line-length = 150
exclude = .git,__pycache__,vendor,tmp_*

.gitattributes (vendored): 5 lines changed

@ -25,10 +25,9 @@ configure -whitespace
# except these exceptions...
src/backend/distributed/utils/citus_outfuncs.c -citus-style
src/backend/distributed/deparser/ruleutils_13.c -citus-style
src/backend/distributed/deparser/ruleutils_14.c -citus-style
src/backend/distributed/deparser/ruleutils_15.c -citus-style
src/backend/distributed/deparser/ruleutils_16.c -citus-style
src/backend/distributed/deparser/ruleutils_17.c -citus-style
src/backend/distributed/deparser/ruleutils_18.c -citus-style
src/backend/distributed/commands/index_pg_source.c -citus-style
src/include/distributed/citus_nodes.h -citus-style


@ -1,23 +0,0 @@
name: 'Parallelization matrix'
inputs:
count:
required: false
default: 32
outputs:
json:
value: ${{ steps.generate_matrix.outputs.json }}
runs:
using: "composite"
steps:
- name: Generate parallelization matrix
id: generate_matrix
shell: bash
run: |-
json_array="{\"include\": ["
for ((i = 1; i <= ${{ inputs.count }}; i++)); do
json_array+="{\"id\":\"$i\"},"
done
json_array=${json_array%,}
json_array+=" ]}"
echo "json=$json_array" >> "$GITHUB_OUTPUT"
echo "json=$json_array"


@ -1,38 +0,0 @@
name: save_logs_and_results
inputs:
folder:
required: false
default: "log"
runs:
using: composite
steps:
- uses: actions/upload-artifact@v4.6.0
name: Upload logs
with:
name: ${{ inputs.folder }}
if-no-files-found: ignore
path: |
src/test/**/proxy.output
src/test/**/results/
src/test/**/tmp_check/master/log
src/test/**/tmp_check/worker.57638/log
src/test/**/tmp_check/worker.57637/log
src/test/**/*.diffs
src/test/**/out/ddls.sql
src/test/**/out/queries.sql
src/test/**/logfile_*
/tmp/pg_upgrade_newData_logs
- name: Publish regression.diffs
run: |-
diffs="$(find src/test/regress -name "*.diffs" -exec cat {} \;)"
if ! [ -z "$diffs" ]; then
echo '```diff' >> $GITHUB_STEP_SUMMARY
echo -E "$diffs" >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo -E $diffs
fi
shell: bash
- name: Print stack traces
run: "./ci/print_stack_trace.sh"
if: failure()
shell: bash


@ -1,35 +0,0 @@
name: setup_extension
inputs:
pg_major:
required: false
skip_installation:
required: false
default: false
type: boolean
runs:
using: composite
steps:
- name: Expose $PG_MAJOR to Github Env
run: |-
if [ -z "${{ inputs.pg_major }}" ]; then
echo "PG_MAJOR=${PG_MAJOR}" >> $GITHUB_ENV
else
echo "PG_MAJOR=${{ inputs.pg_major }}" >> $GITHUB_ENV
fi
shell: bash
- uses: actions/download-artifact@v4.1.8
with:
name: build-${{ env.PG_MAJOR }}
- name: Install Extension
if: ${{ inputs.skip_installation == 'false' }}
run: tar xfv "install-$PG_MAJOR.tar" --directory /
shell: bash
- name: Configure
run: |-
chown -R circleci .
git config --global --add safe.directory ${GITHUB_WORKSPACE}
gosu circleci ./configure --without-pg-version-check
shell: bash
- name: Enable core dumps
run: ulimit -c unlimited
shell: bash

View File

@ -1,15 +0,0 @@
name: coverage
inputs:
flags:
required: false
codecov_token:
required: true
runs:
using: composite
steps:
- uses: codecov/codecov-action@v3
with:
flags: ${{ inputs.flags }}
token: ${{ inputs.codecov_token }}
verbose: true
gcov: true


@ -1,3 +0,0 @@
base:
- ".* warning: ignoring old recipe for target [`']check'"
- ".* warning: overriding recipe for target [`']check'"


@ -1,51 +0,0 @@
#!/bin/bash
set -ex
# Function to get the OS version
get_rpm_os_version() {
if [[ -f /etc/centos-release ]]; then
cat /etc/centos-release | awk '{print $4}'
elif [[ -f /etc/oracle-release ]]; then
cat /etc/oracle-release | awk '{print $5}'
else
echo "Unknown"
fi
}
package_type=${1}
# Since $HOME is set in GH_Actions as /github/home, pyenv fails to create virtualenvs.
# For this script, we set $HOME to /root and then set it back to /github/home.
GITHUB_HOME="${HOME}"
export HOME="/root"
eval "$(pyenv init -)"
pyenv versions
pyenv virtualenv ${PACKAGING_PYTHON_VERSION} packaging_env
pyenv activate packaging_env
git clone -b v0.8.27 --depth=1 https://github.com/citusdata/tools.git tools
python3 -m pip install -r tools/packaging_automation/requirements.txt
echo "Package type: ${package_type}"
echo "OS version: $(get_rpm_os_version)"
# For RHEL 7, we need to install urllib3<2 due to below execution error
# ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl'
# module is compiled with 'OpenSSL 1.0.2k-fips 26 Jan 2017'.
# See: https://github.com/urllib3/urllib3/issues/2168
if [[ ${package_type} == "rpm" && $(get_rpm_os_version) == 7* ]]; then
python3 -m pip uninstall -y urllib3
python3 -m pip install 'urllib3<2'
fi
python3 -m tools.packaging_automation.validate_build_output --output_file output.log \
--ignore_file .github/packaging/packaging_ignore.yml \
--package_type ${package_type}
pyenv deactivate
# Set $HOME back to /github/home
export HOME=${GITHUB_HOME}
# Print the output to the console


@ -1,547 +0,0 @@
name: Build & Test
run-name: Build & Test - ${{ github.event.pull_request.title || github.ref_name }}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
inputs:
skip_test_flakyness:
required: false
default: false
type: boolean
push:
branches:
- "main"
- "release-*"
pull_request:
types: [opened, reopened,synchronize]
merge_group:
jobs:
# Since GHA does not interpolate env variables in matrix context, we need to
# define them in a separate job and use them in other jobs.
params:
runs-on: ubuntu-latest
name: Initialize parameters
outputs:
build_image_name: "ghcr.io/citusdata/extbuilder"
test_image_name: "ghcr.io/citusdata/exttester"
citusupgrade_image_name: "ghcr.io/citusdata/citusupgradetester"
fail_test_image_name: "ghcr.io/citusdata/failtester"
pgupgrade_image_name: "ghcr.io/citusdata/pgupgradetester"
style_checker_image_name: "ghcr.io/citusdata/stylechecker"
style_checker_tools_version: "0.8.33"
sql_snapshot_pg_version: "17.6"
image_suffix: "-ve4d3aa0"
pg15_version: '{ "major": "15", "full": "15.14" }'
pg16_version: '{ "major": "16", "full": "16.10" }'
pg17_version: '{ "major": "17", "full": "17.6" }'
upgrade_pg_versions: "15.14-16.10-17.6"
steps:
# Since GHA jobs need at least one step we use a noop step here.
- name: Set up parameters
run: echo 'noop'
check-sql-snapshots:
needs: params
runs-on: ubuntu-latest
container:
image: ${{ needs.params.outputs.build_image_name }}:${{ needs.params.outputs.sql_snapshot_pg_version }}${{ needs.params.outputs.image_suffix }}
options: --user root
steps:
- uses: actions/checkout@v4
- name: Check Snapshots
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
ci/check_sql_snapshots.sh
check-style:
needs: params
runs-on: ubuntu-latest
container:
image: ${{ needs.params.outputs.style_checker_image_name }}:${{ needs.params.outputs.style_checker_tools_version }}${{ needs.params.outputs.image_suffix }}
steps:
- name: Check Snapshots
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Check C Style
run: citus_indent --check
- name: Check Python style
run: black --check .
- name: Check Python import order
run: isort --check .
- name: Check Python lints
run: flake8 .
- name: Fix whitespace
run: ci/editorconfig.sh && git diff --exit-code
- name: Remove useless declarations
run: ci/remove_useless_declarations.sh && git diff --cached --exit-code
- name: Sort and group includes
run: ci/sort_and_group_includes.sh && git diff --exit-code
- name: Normalize test output
run: ci/normalize_expected.sh && git diff --exit-code
- name: Check for C-style comments in migration files
run: ci/disallow_c_comments_in_migrations.sh && git diff --exit-code
- name: 'Check for comments that start with # character in spec files'
run: ci/disallow_hash_comments_in_spec_files.sh && git diff --exit-code
- name: Check for gitignore entries for source files
run: ci/fix_gitignore.sh && git diff --exit-code
- name: Check for lengths of changelog entries
run: ci/disallow_long_changelog_entries.sh
- name: Check for banned C API usage
run: ci/banned.h.sh
- name: Check for tests missing in schedules
run: ci/check_all_tests_are_run.sh
- name: Check if all CI scripts are actually run
run: ci/check_all_ci_scripts_are_run.sh
- name: Check if all GUCs are sorted alphabetically
run: ci/check_gucs_are_alphabetically_sorted.sh
- name: Check for missing downgrade scripts
run: ci/check_migration_files.sh
build:
needs: params
name: Build for PG${{ fromJson(matrix.pg_version).major }}
strategy:
fail-fast: false
matrix:
image_name:
- ${{ needs.params.outputs.build_image_name }}
image_suffix:
- ${{ needs.params.outputs.image_suffix}}
pg_version:
- ${{ needs.params.outputs.pg15_version }}
- ${{ needs.params.outputs.pg16_version }}
- ${{ needs.params.outputs.pg17_version }}
runs-on: ubuntu-latest
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ matrix.image_suffix }}"
options: --user root
steps:
- uses: actions/checkout@v4
- name: Expose $PG_MAJOR to Github Env
run: echo "PG_MAJOR=${PG_MAJOR}" >> $GITHUB_ENV
shell: bash
- name: Build
run: "./ci/build-citus.sh"
shell: bash
- uses: actions/upload-artifact@v4.6.0
with:
name: build-${{ env.PG_MAJOR }}
path: |-
./build-${{ env.PG_MAJOR }}/*
./install-${{ env.PG_MAJOR }}.tar
test-citus:
name: PG${{ fromJson(matrix.pg_version).major }} - ${{ matrix.make }}
strategy:
fail-fast: false
matrix:
suite:
- regress
image_name:
- ${{ needs.params.outputs.test_image_name }}
pg_version:
- ${{ needs.params.outputs.pg15_version }}
- ${{ needs.params.outputs.pg16_version }}
- ${{ needs.params.outputs.pg17_version }}
make:
- check-split
- check-multi
- check-multi-1
- check-multi-mx
- check-vanilla
- check-isolation
- check-operations
- check-follower-cluster
- check-add-backup-node
- check-columnar
- check-columnar-isolation
- check-enterprise
- check-enterprise-isolation
- check-enterprise-isolation-logicalrep-1
- check-enterprise-isolation-logicalrep-2
- check-enterprise-isolation-logicalrep-3
include:
- make: check-failure
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-failure
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-failure
pg_version: ${{ needs.params.outputs.pg17_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-enterprise-failure
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-enterprise-failure
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-enterprise-failure
pg_version: ${{ needs.params.outputs.pg17_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-pytest
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-pytest
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-pytest
pg_version: ${{ needs.params.outputs.pg17_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: installcheck
suite: cdc
image_name: ${{ needs.params.outputs.test_image_name }}
pg_version: ${{ needs.params.outputs.pg15_version }}
- make: installcheck
suite: cdc
image_name: ${{ needs.params.outputs.test_image_name }}
pg_version: ${{ needs.params.outputs.pg16_version }}
- make: installcheck
suite: cdc
image_name: ${{ needs.params.outputs.test_image_name }}
pg_version: ${{ needs.params.outputs.pg17_version }}
- make: check-query-generator
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-query-generator
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-query-generator
pg_version: ${{ needs.params.outputs.pg17_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
runs-on: ubuntu-latest
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ needs.params.outputs.image_suffix }}"
options: >-
--user root
--dns=8.8.8.8
--cap-add=SYS_NICE
--security-opt seccomp=unconfined
# Because Github creates a default network for each job, we need to use
# --dns= to have similar DNS settings as our other CI systems or local
# machines. Otherwise, we may see different results.
# and grant caps so PG18's NUMA introspection (pg_shmem_allocations_numa -> move_pages)
# doesn't fail with EPERM in CI.
needs:
- params
- build
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
- name: Run Test
run: gosu circleci make -C src/test/${{ matrix.suite }} ${{ matrix.make }}
timeout-minutes: 20
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: ${{ fromJson(matrix.pg_version).major }}_${{ matrix.make }}
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.PG_MAJOR }}_${{ matrix.suite }}_${{ matrix.make }}
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-arbitrary-configs:
name: PG${{ fromJson(matrix.pg_version).major }} - check-arbitrary-configs-${{ matrix.parallel }}
runs-on: ["self-hosted", "1ES.Pool=1es-gha-citusdata-pool"]
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
strategy:
fail-fast: false
matrix:
image_name:
- ${{ needs.params.outputs.fail_test_image_name }}
pg_version:
- ${{ needs.params.outputs.pg15_version }}
- ${{ needs.params.outputs.pg16_version }}
- ${{ needs.params.outputs.pg17_version }}
parallel: [0,1,2,3,4,5] # workaround for running 6 parallel jobs
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
- name: Test arbitrary configs
run: |-
# we use parallel jobs to split the tests into 6 parts and run them in parallel
# the script below extracts the tests for the current job
N=6 # Total number of jobs (see matrix.parallel)
X=${{ matrix.parallel }} # Current job number
TESTS=$(src/test/regress/citus_tests/print_test_names.py |
tr '\n' ',' | awk -v N="$N" -v X="$X" -F, '{
split("", parts)
for (i = 1; i <= NF; i++) {
parts[i % N] = parts[i % N] $i ","
}
print substr(parts[X], 1, length(parts[X])-1)
}')
echo $TESTS
gosu circleci \
make -C src/test/regress \
check-arbitrary-configs parallel=4 CONFIGS=$TESTS
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: ${{ env.PG_MAJOR }}_arbitrary_configs_${{ matrix.parallel }}
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.PG_MAJOR }}_arbitrary_configs_${{ matrix.parallel }}
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-pg-upgrade:
name: PG${{ matrix.old_pg_major }}-PG${{ matrix.new_pg_major }} - check-pg-upgrade
runs-on: ubuntu-latest
container:
image: "${{ needs.params.outputs.pgupgrade_image_name }}:${{ needs.params.outputs.upgrade_pg_versions }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
strategy:
fail-fast: false
matrix:
include:
- old_pg_major: 15
new_pg_major: 16
- old_pg_major: 16
new_pg_major: 17
- old_pg_major: 15
new_pg_major: 17
env:
old_pg_major: ${{ matrix.old_pg_major }}
new_pg_major: ${{ matrix.new_pg_major }}
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
with:
pg_major: "${{ env.old_pg_major }}"
- uses: "./.github/actions/setup_extension"
with:
pg_major: "${{ env.new_pg_major }}"
- name: Install and test postgres upgrade
run: |-
gosu circleci \
make -C src/test/regress \
check-pg-upgrade \
old-bindir=/usr/lib/postgresql/${{ env.old_pg_major }}/bin \
new-bindir=/usr/lib/postgresql/${{ env.new_pg_major }}/bin \
test-with-columnar=false
gosu circleci \
make -C src/test/regress \
check-pg-upgrade \
old-bindir=/usr/lib/postgresql/${{ env.old_pg_major }}/bin \
new-bindir=/usr/lib/postgresql/${{ env.new_pg_major }}/bin \
test-with-columnar=true
- name: Copy pg_upgrade logs for newData dir
run: |-
mkdir -p /tmp/pg_upgrade_newData_logs
if ls src/test/regress/tmp_upgrade/newData/*.log 1> /dev/null 2>&1; then
cp src/test/regress/tmp_upgrade/newData/*.log /tmp/pg_upgrade_newData_logs
fi
if: failure()
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: ${{ env.old_pg_major }}_${{ env.new_pg_major }}_upgrade
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.old_pg_major }}_${{ env.new_pg_major }}_upgrade
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-citus-upgrade:
name: PG${{ fromJson(matrix.pg_version).major }} - check-citus-upgrade
runs-on: ubuntu-latest
container:
image: "${{ needs.params.outputs.citusupgrade_image_name }}:${{ fromJson(matrix.pg_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
strategy:
fail-fast: false
matrix:
pg_version:
- ${{ needs.params.outputs.pg15_version }}
- ${{ needs.params.outputs.pg16_version }}
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
with:
skip_installation: true
- name: Install and test citus upgrade
run: |-
# run make check-citus-upgrade for all citus versions
# the image has ${CITUS_VERSIONS} set with all versions it contains the binaries of
for citus_version in ${CITUS_VERSIONS}; do \
gosu circleci \
make -C src/test/regress \
check-citus-upgrade \
bindir=/usr/lib/postgresql/${PG_MAJOR}/bin \
citus-old-version=${citus_version} \
citus-pre-tar=/install-pg${PG_MAJOR}-citus${citus_version}.tar \
citus-post-tar=${GITHUB_WORKSPACE}/install-$PG_MAJOR.tar; \
done;
# run make check-citus-upgrade-mixed for all citus versions
# the image has ${CITUS_VERSIONS} set with all versions it contains the binaries of
for citus_version in ${CITUS_VERSIONS}; do \
gosu circleci \
make -C src/test/regress \
check-citus-upgrade-mixed \
citus-old-version=${citus_version} \
bindir=/usr/lib/postgresql/${PG_MAJOR}/bin \
citus-pre-tar=/install-pg${PG_MAJOR}-citus${citus_version}.tar \
citus-post-tar=${GITHUB_WORKSPACE}/install-$PG_MAJOR.tar; \
done;
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: ${{ env.PG_MAJOR }}_citus_upgrade
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.PG_MAJOR }}_citus_upgrade
codecov_token: ${{ secrets.CODECOV_TOKEN }}
ch_benchmark:
name: CH Benchmark
if: startsWith(github.ref, 'refs/heads/ch_benchmark/')
runs-on: ubuntu-latest
needs:
- build
steps:
- uses: actions/checkout@v4
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: install dependencies and run ch_benchmark tests
uses: azure/CLI@v1
with:
inlineScript: |
cd ./src/test/hammerdb
chmod +x run_hammerdb.sh
run_hammerdb.sh citusbot_ch_benchmark_rg
tpcc_benchmark:
name: TPCC Benchmark
if: startsWith(github.ref, 'refs/heads/tpcc_benchmark/')
runs-on: ubuntu-latest
needs:
- build
steps:
- uses: actions/checkout@v4
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: install dependencies and run tpcc_benchmark tests
uses: azure/CLI@v1
with:
inlineScript: |
cd ./src/test/hammerdb
chmod +x run_hammerdb.sh
run_hammerdb.sh citusbot_tpcc_benchmark_rg
prepare_parallelization_matrix_32:
name: Prepare parallelization matrix
if: ${{ needs.test-flakyness-pre.outputs.tests != ''}}
needs: test-flakyness-pre
runs-on: ubuntu-latest
outputs:
json: ${{ steps.parallelization.outputs.json }}
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/parallelization"
id: parallelization
with:
count: 32
test-flakyness-pre:
name: Detect regression tests that need to be run
if: ${{ !inputs.skip_test_flakyness }}
runs-on: ubuntu-latest
needs: build
outputs:
tests: ${{ steps.detect-regression-tests.outputs.tests }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Detect regression tests that need to be run
id: detect-regression-tests
run: |-
detected_changes=$(git diff origin/main... --name-only --diff-filter=AM | (grep 'src/test/regress/sql/.*\.sql\|src/test/regress/spec/.*\.spec\|src/test/regress/citus_tests/test/test_.*\.py' || true))
tests=${detected_changes}
# split the tests to be skipped --today we only skip upgrade tests
# and snapshot based node addition tests.
# snapshot based node addition tests are not flaky, as they promote
# the streaming replica (clone) to a PostgreSQL primary node, which is a
# one-way operation
skipped_tests=""
not_skipped_tests=""
for test in $tests; do
if [[ $test =~ ^src/test/regress/sql/upgrade_ ]] || [[ $test =~ ^src/test/regress/sql/multi_add_node_from_backup ]]; then
skipped_tests="$skipped_tests $test"
else
not_skipped_tests="$not_skipped_tests $test"
fi
done
if [ ! -z "$skipped_tests" ]; then
echo "Skipped tests " $skipped_tests
fi
if [ -z "$not_skipped_tests" ]; then
echo "Not detected any tests that flaky test detection should run"
else
echo "Detected tests " $not_skipped_tests
fi
echo 'tests<<EOF' >> $GITHUB_OUTPUT
echo "$not_skipped_tests" >> "$GITHUB_OUTPUT"
echo 'EOF' >> $GITHUB_OUTPUT
test-flakyness:
if: ${{ needs.test-flakyness-pre.outputs.tests != ''}}
name: Test flakyness
runs-on: ubuntu-latest
container:
image: ${{ needs.params.outputs.fail_test_image_name }}:${{ fromJson(needs.params.outputs.pg17_version).full }}${{ needs.params.outputs.image_suffix }}
options: --user root
env:
runs: 8
needs:
- params
- build
- test-flakyness-pre
- prepare_parallelization_matrix_32
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.prepare_parallelization_matrix_32.outputs.json) }}
steps:
- uses: actions/checkout@v4
- uses: actions/download-artifact@v4.1.8
- uses: "./.github/actions/setup_extension"
- name: Run minimal tests
run: |-
tests="${{ needs.test-flakyness-pre.outputs.tests }}"
tests_array=($tests)
for test in "${tests_array[@]}"
do
test_name=$(echo "$test" | sed -r "s/.+\/(.+)\..+/\1/")
gosu circleci src/test/regress/citus_tests/run_test.py $test_name --repeat ${{ env.runs }} --use-whole-schedule-line
done
shell: bash
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: test_flakyness_parallel_${{ matrix.id }}


@ -1,78 +0,0 @@
name: "CodeQL"
on:
schedule:
- cron: '59 23 * * 6'
workflow_dispatch:
jobs:
analyze:
name: Analyze
runs-on: ubuntu-22.04
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'cpp', 'python']
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
- name: Install package dependencies
run: |
# Create the file repository configuration:
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main 15" > /etc/apt/sources.list.d/pgdg.list'
# Import the repository signing key:
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install -y --no-install-recommends \
autotools-dev \
build-essential \
ca-certificates \
curl \
debhelper \
devscripts \
fakeroot \
flex \
libcurl4-openssl-dev \
libdistro-info-perl \
libedit-dev \
libfile-fcntllock-perl \
libicu-dev \
libkrb5-dev \
liblz4-1 \
liblz4-dev \
libpam0g-dev \
libreadline-dev \
libselinux1-dev \
libssl-dev \
libxslt-dev \
libzstd-dev \
libzstd1 \
lintian \
postgresql-server-dev-17 \
python3-pip \
python3-setuptools \
wget \
zlib1g-dev
- name: Configure, Build and Install Citus
if: matrix.language == 'cpp'
run: |
./configure
make -sj8
sudo make install-all
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3


@ -1,54 +0,0 @@
name: "Build devcontainer"
# Since building of containers can be quite time consuming, and take up some storage,
# there is no need to finish a build for a tag if new changes are concurrently being made.
# This cancels any previous builds for the same tag, and only the latest one will be kept.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
push:
paths:
- ".devcontainer/**"
workflow_dispatch:
jobs:
docker:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
attestations: write
id-token: write
steps:
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
ghcr.io/citusdata/citus-devcontainer
tags: |
type=ref,event=branch
type=sha
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: 'Login to GitHub Container Registry'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{github.actor}}
password: ${{secrets.GITHUB_TOKEN}}
-
name: Build and push
uses: docker/build-push-action@v5
with:
context: "{{defaultContext}}:.devcontainer"
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max


@ -1,79 +0,0 @@
name: Flaky test debugging
run-name: Flaky test debugging - ${{ inputs.flaky_test }} (${{ inputs.flaky_test_runs_per_job }}x${{ inputs.flaky_test_parallel_jobs }})
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
inputs:
flaky_test:
required: true
type: string
description: Test to run
flaky_test_runs_per_job:
required: false
default: 8
type: number
description: Number of times to run the test
flaky_test_parallel_jobs:
required: false
default: 32
type: number
description: Number of parallel jobs to run
jobs:
build:
name: Build Citus
runs-on: ubuntu-latest
container:
image: ${{ vars.build_image_name }}:${{ vars.pg15_version }}${{ vars.image_suffix }}
options: --user root
steps:
- uses: actions/checkout@v4
- name: Configure, Build, and Install
run: |
echo "PG_MAJOR=${PG_MAJOR}" >> $GITHUB_ENV
./ci/build-citus.sh
shell: bash
- uses: actions/upload-artifact@v4.6.0
with:
name: build-${{ env.PG_MAJOR }}
path: |-
./build-${{ env.PG_MAJOR }}/*
./install-${{ env.PG_MAJOR }}.tar
prepare_parallelization_matrix:
name: Prepare parallelization matrix
runs-on: ubuntu-latest
outputs:
json: ${{ steps.parallelization.outputs.json }}
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/parallelization"
id: parallelization
with:
count: ${{ inputs.flaky_test_parallel_jobs }}
test_flakyness:
name: Test flakyness
runs-on: ubuntu-latest
container:
image: ${{ vars.fail_test_image_name }}:${{ vars.pg15_version }}${{ vars.image_suffix }}
options: --user root
needs:
[build, prepare_parallelization_matrix]
env:
test: "${{ inputs.flaky_test }}"
runs: "${{ inputs.flaky_test_runs_per_job }}"
skip: false
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.prepare_parallelization_matrix.outputs.json) }}
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
- name: Run minimal tests
run: |-
gosu circleci src/test/regress/citus_tests/run_test.py ${{ env.test }} --repeat ${{ env.runs }} --use-whole-schedule-line
shell: bash
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: check_flakyness_parallel_${{ matrix.id }}


@ -1,177 +0,0 @@
name: Build tests in packaging images
on:
pull_request:
types: [opened, reopened,synchronize]
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
get_postgres_versions_from_file:
runs-on: ubuntu-latest
outputs:
pg_versions: ${{ steps.get-postgres-versions.outputs.pg_versions }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Get Postgres Versions
id: get-postgres-versions
run: |
set -euxo pipefail
# Postgres versions are stored in .github/workflows/build_and_test.yml
# file in json strings with major and full keys.
# Below command extracts the versions and get the unique values.
pg_versions=$(cat .github/workflows/build_and_test.yml | grep -oE '"major": "[0-9]+", "full": "[0-9.]+"' | sed -E 's/"major": "([0-9]+)", "full": "([0-9.]+)"/\1/g' | sort | uniq | tr '\n', ',')
pg_versions_array="[ ${pg_versions} ]"
echo "Supported PG Versions: ${pg_versions_array}"
# Below line is needed to set the output variable to be used in the next job
echo "pg_versions=${pg_versions_array}" >> $GITHUB_OUTPUT
shell: bash
rpm_build_tests:
name: rpm_build_tests
needs: get_postgres_versions_from_file
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
# We use separate images for different Postgres versions in rpm
# based distros.
# For this reason, we need to use a "matrix" to generate names of
# rpm images, e.g. citus/packaging:centos-7-pg12
packaging_docker_image:
- oraclelinux-8
- almalinux-8
- almalinux-9
POSTGRES_VERSION: ${{ fromJson(needs.get_postgres_versions_from_file.outputs.pg_versions) }}
container:
image: citus/packaging:${{ matrix.packaging_docker_image }}-pg${{ matrix.POSTGRES_VERSION }}
options: --user root
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set Postgres and python parameters for rpm based distros
run: |
echo "/usr/pgsql-${{ matrix.POSTGRES_VERSION }}/bin" >> $GITHUB_PATH
echo "/root/.pyenv/bin:$PATH" >> $GITHUB_PATH
echo "PACKAGING_PYTHON_VERSION=3.8.16" >> $GITHUB_ENV
- name: Configure
run: |
echo "Current Shell:$0"
echo "GCC Version: $(gcc --version)"
./configure 2>&1 | tee output.log
- name: Make clean
run: |
make clean
- name: Make
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
make CFLAGS="-Wno-missing-braces" -sj$(cat /proc/cpuinfo | grep "core id" | wc -l) 2>&1 | tee -a output.log
# Check the exit code of the make command
make_exit_code=${PIPESTATUS[0]}
# If the make command returned a non-zero exit code, exit with the same code
if [[ $make_exit_code -ne 0 ]]; then
echo "make command failed with exit code $make_exit_code"
exit $make_exit_code
fi
- name: Make install
run: |
make CFLAGS="-Wno-missing-braces" install 2>&1 | tee -a output.log
- name: Validate output
env:
POSTGRES_VERSION: ${{ matrix.POSTGRES_VERSION }}
PACKAGING_DOCKER_IMAGE: ${{ matrix.packaging_docker_image }}
run: |
echo "Postgres version: ${POSTGRES_VERSION}"
./.github/packaging/validate_build_output.sh "rpm"
deb_build_tests:
name: deb_build_tests
needs: get_postgres_versions_from_file
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
# On deb based distros, we use the same docker image for
# builds based on different Postgres versions because deb
# based images include all postgres installations.
# For this reason, we have multiple runs --which is 3 today--
# for each deb based image and we use POSTGRES_VERSION to set
# PG_CONFIG variable in each of those runs.
packaging_docker_image:
- debian-bookworm-all
- debian-bullseye-all
- ubuntu-focal-all
- ubuntu-jammy-all
POSTGRES_VERSION: ${{ fromJson(needs.get_postgres_versions_from_file.outputs.pg_versions) }}
container:
image: citus/packaging:${{ matrix.packaging_docker_image }}
options: --user root
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set pg_config path and python parameters for deb based distros
run: |
echo "PG_CONFIG=/usr/lib/postgresql/${{ matrix.POSTGRES_VERSION }}/bin/pg_config" >> $GITHUB_ENV
echo "/root/.pyenv/bin:$PATH" >> $GITHUB_PATH
echo "PACKAGING_PYTHON_VERSION=3.8.16" >> $GITHUB_ENV
- name: Configure
run: |
echo "Current Shell:$0"
echo "GCC Version: $(gcc --version)"
./configure 2>&1 | tee output.log
- name: Make clean
run: |
make clean
- name: Make
shell: bash
run: |
set -e
git config --global --add safe.directory ${GITHUB_WORKSPACE}
make -sj$(cat /proc/cpuinfo | grep "core id" | wc -l) 2>&1 | tee -a output.log
# Check the exit code of the make command
make_exit_code=${PIPESTATUS[0]}
# If the make command returned a non-zero exit code, exit with the same code
if [[ $make_exit_code -ne 0 ]]; then
echo "make command failed with exit code $make_exit_code"
exit $make_exit_code
fi
- name: Make install
run: |
make install 2>&1 | tee -a output.log
- name: Validate output
env:
POSTGRES_VERSION: ${{ matrix.POSTGRES_VERSION }}
PACKAGING_DOCKER_IMAGE: ${{ matrix.packaging_docker_image }}
run: |
echo "Postgres version: ${POSTGRES_VERSION}"
./.github/packaging/validate_build_output.sh "deb"

.gitignore (vendored): 7 lines changed

@ -38,9 +38,6 @@ lib*.pc
/Makefile.global
/src/Makefile.custom
/compile_commands.json
/src/backend/distributed/cdc/build-cdc-*/*
/src/test/cdc/tmp_check/*
# temporary files vim creates
*.swp
@ -54,7 +51,3 @@ lib*.pc
# style related temporary outputs
*.uncrustify
.venv
# added output when modifying check_gucs_are_alphabetically_sorted.sh
guc.out

File diff suppressed because it is too large.

@ -1,9 +0,0 @@
# Microsoft Open Source Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
Resources:
- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns


@ -11,52 +11,6 @@ sign a Contributor License Agreement (CLA). For an explanation of
why we ask this as well as instructions for how to proceed, see the
[Microsoft CLA](https://cla.opensource.microsoft.com/).
### Devcontainer / Github Codespaces
The easiest way to start contributing is via our devcontainer. This container works both locally in visual studio code with docker-desktop/docker-for-mac as well as [Github Codespaces](https://github.com/features/codespaces). To open the project in vscode you will need the [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers). For codespaces you will need to [create a new codespace](https://codespace.new/citusdata/citus).
With the extension installed you can run the following from the command palette to get started
```
> Dev Containers: Clone Repository in Container Volume...
```
In the subsequent popup paste the url to the repo and hit enter.
```
https://github.com/citusdata/citus
```
This will create an isolated Workspace in vscode, complete with all tools required to build, test and run the Citus extension. We keep this container up to date with the supported postgres versions as well as the exact versions of tooling we use.
To quickly start we suggest splitting your terminal once to have two shells. The left one in `/workspaces/citus`, the second one changed to `/data`. The left terminal will be used to interact with the project, the right one with a testing cluster.
To get citus installed from source we run `make install -s` in the first terminal. Once installed you can start a Citus cluster in the second terminal via `citus_dev make citus`. The cluster will run in the background, and can be interacted with via `citus_dev`, which also gives an overview of the available commands.
With the Citus cluster running you can connect to the coordinator in the first terminal via `psql -p9700`. Because the coordinator is the most common entrypoint the `PGPORT` environment is set accordingly, so a simple `psql` will connect directly to the coordinator.
### Debugging in VS Code
1. Start Debugging: Press F5 in VS Code to start debugging. When prompted, you'll need to attach the debugger to the appropriate PostgreSQL process.
2. Identify the Process: If you're running a psql command, take note of the PID that appears in your psql prompt. For example:
```
[local] citus@citus:9700 (PID: 5436)=#
```
This PID (5436 in this case) indicates the process that you should attach the debugger to.
If you are uncertain about which process to attach, you can list all running PostgreSQL processes using the following command:
```
ps aux | grep postgres
```
Look for the process associated with the PID you noted. For example:
```
citus 5436 0.0 0.0 0 0 ? S 14:00 0:00 postgres: citus citus
```
4. Attach the Debugger: Once you've identified the correct PID, select that process when prompted in VS Code to attach the debugger. You should now be able to debug the PostgreSQL session tied to the psql command.
5. Set Breakpoints and Debug: With the debugger attached, you can set breakpoints within the code. This allows you to step through the code execution, inspect variables, and fully debug the PostgreSQL instance running in your container.
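If the PID is not visible in your prompt, another way to find it (a small sketch, assuming the coordinator from the devcontainer setup above) is to ask the backend itself:
```bash
# pg_backend_pid() returns the PID of the server process serving this connection
psql -p 9700 -c "SELECT pg_backend_pid();"
```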
### Getting and building
[PostgreSQL documentation](https://www.postgresql.org/support/versioning/) has a
@ -87,8 +41,6 @@ that are missing in earlier minor versions.
cd citus
./configure
# If you have already installed the project, you need to clean it first
make clean
make
make install
# Optionally, you might instead want to use `make install-all`
@ -127,8 +79,6 @@ that are missing in earlier minor versions.
git clone https://github.com/citusdata/citus.git
cd citus
./configure
# If you have already installed the project previously, you need to clean it first
make clean
make
sudo make install
# Optionally, you might instead want to use `sudo make install-all`
@ -179,8 +129,6 @@ that are missing in earlier minor versions.
git clone https://github.com/citusdata/citus.git
cd citus
PG_CONFIG=/usr/pgsql-14/bin/pg_config ./configure
# If you have already installed the project previously, you need to clean it first
make clean
make
sudo make install
# Optionally, you might instead want to use `sudo make install-all`
@ -197,7 +145,43 @@ that are missing in earlier minor versions.
### Following our coding conventions
Our coding conventions are documented in [STYLEGUIDE.md](STYLEGUIDE.md).
CircleCI will automatically reject any PRs which do not follow our coding
conventions. The easiest way to ensure your PR adheres to those conventions is
to use the [citus_indent](https://github.com/citusdata/tools/tree/develop/uncrustify)
tool. This tool uses `uncrustify` under the hood.
```bash
# Uncrustify changes the way it formats code every release a bit. To make sure
# everyone formats consistently we use version 0.68.1:
curl -L https://github.com/uncrustify/uncrustify/archive/uncrustify-0.68.1.tar.gz | tar xz
cd uncrustify-uncrustify-0.68.1/
mkdir build
cd build
cmake ..
make -j5
sudo make install
cd ../..
git clone https://github.com/citusdata/tools.git
cd tools
make uncrustify/.install
```
Once you've done that, you can run the `make reindent` command from the top
directory to recursively check and correct the style of any source files in the
current directory. Under the hood, `make reindent` will run `citus_indent` and
some other style corrections for you.
You can also run the following in the directory of this repository to
automatically format all the files that you have changed before committing:
```bash
cat > .git/hooks/pre-commit << __EOF__
#!/bin/bash
citus_indent --check --diff || { citus_indent --diff; exit 1; }
__EOF__
chmod +x .git/hooks/pre-commit
```
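In day-to-day use this boils down to two commands (both taken from the tooling described above):
```bash
# check your working tree for style violations without rewriting files
citus_indent --check --diff

# or rewrite everything in place from the top of the repository
make reindent
```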
### Making SQL changes
@ -234,50 +218,3 @@ style `#include` statements like this:
Any other SQL you can put directly in the main sql file, e.g.
`src/backend/distributed/sql/citus--8.3-1--9.0-1.sql`.
### Backporting a commit to a release branch
1. Check out the release branch that you want to backport to `git checkout release-11.3`
2. Make sure you have the latest changes `git pull`
3. Create a new release branch with a unique name `git checkout -b release-11.3-<yourname>`
4. Cherry-pick the commit that you want to backport `git cherry-pick -x <sha>` (the `-x` is important)
5. Push the branch `git push`
6. Wait for tests to pass
7. If the cherry-pick required non-trivial merge conflicts, create a PR and ask
for a review.
8. After the tests pass on CI, fast-forward the release branch `git push origin release-11.3-<yourname>:release-11.3`
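Condensed into a shell sketch (the release branch, the branch suffix, and the SHA are placeholders):
```bash
# check out and update the target release branch
git checkout release-11.3
git pull

# create a uniquely named branch and cherry-pick with -x
git checkout -b release-11.3-yourname
git cherry-pick -x <sha-of-commit-to-backport>
git push -u origin release-11.3-yourname

# after CI is green (and a review, if the conflicts were non-trivial):
git push origin release-11.3-yourname:release-11.3
```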
### Running tests
See [`src/test/regress/README.md`](https://github.com/citusdata/citus/blob/master/src/test/regress/README.md)
### Documentation
User-facing documentation is published on [docs.citusdata.com](https://docs.citusdata.com/). When adding a new feature, function, or setting, you can open a pull request or issue against the [Citus docs repo](https://github.com/citusdata/citus_docs/).
Detailed descriptions of the implementation for Citus developers are provided in the [Citus Technical Documentation](src/backend/distributed/README.md). It is currently a single file for ease of searching. Please update the documentation if you make any changes that affect the design or add major new features.
# Making a pull request ready for reviews
Asking for help and asking for reviews are two different things. When you're asking for help, you're asking for someone to help you with something that you're not expected to know.
But when you're asking for a review, you're asking for someone to review your work and provide feedback. So, when you're asking for a review, you're expected to make sure that:
* Your changes don't perform **unnecessary line addition / deletions / style changes on unrelated files / lines**.
* All CI jobs are **passing**, including **style checks** and **flaky test detection jobs**. Note that if you're an external contributor, you don't have to wait for CI jobs to run (and finish) because they don't get automatically triggered for external contributors.
* Your PR has the necessary amount of **tests** and they're passing.
* You separated as much work as possible into **separate PRs**, e.g., a prerequisite bugfix, a refactoring, etc.
* Your PR doesn't introduce a typo or something that you can easily fix yourself.
* After all CI jobs pass, the code-coverage measurement job (CodeCov as of today) kicks in. That's why it's important to get the **tests passing** first. At that point, you're expected to check the **CodeCov annotations** in the **Files Changed** tab and make sure it doesn't complain about any lines that are not covered. For example, it's ok if CodeCov complains about an `ereport()` call that you put for an "unexpected-but-better-than-crashing" case, but it's not ok if it complains about an uncovered `if` branch that you added.
* And finally, perform a **self-review** to make sure that:
* Code and code-comments reflect the idea **without requiring an extra explanation** via a chat message / email / PR comment.
This is important because we don't expect developers to reach out to the author or read the whole discussion in the PR to understand the idea behind a commit merged into the `main` branch.
* PR description is clear enough.
* If-and-only-if you're **introducing a user facing change / bugfix**, your PR has a line that starts with `DESCRIPTION: <Present simple tense word that starts with a capital letter, e.g., Adds support for / Fixes / Disallows>`.
* **Commit messages** are clear enough if the commits are doing logically different things.


@ -1,43 +0,0 @@
# Devcontainer
## Coredumps
When postgres/citus crashes, there is the option to create a coredump. This is useful for debugging the issue. Coredumps are enabled in the devcontainer by default. However, not all environments are configured correctly out of the box. The most important configuration that is not standardized is the `core_pattern`. The configuration can be verified from the container; however, you cannot change this setting from inside the container, because the filesystem containing it is mounted read-only while inside the container.
To verify whether corefiles are written, run the following command in a terminal. It shows the filename pattern with which the corefile will be written.
```bash
cat /proc/sys/kernel/core_pattern
```
This should be configured with a relative path or simply a filename, such as `core`. When your environment shows an absolute path, you will need to change this setting. How to do so depends highly on the underlying system, as the setting needs to be changed on the kernel of the host running the container.
You can put any pattern in `/proc/sys/kernel/core_pattern` as you see fit. For example, you can add the PID to the core pattern in one of two ways (both sketched below):
- Include `%p` in the core_pattern; it gets substituted with the PID of the crashing process.
- Alternatively, set `/proc/sys/kernel/core_uses_pid` to `1` in the same way as you set `core_pattern`. This appends the PID to the corefile name if `%p` is not explicitly contained in the core_pattern.
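A minimal sketch of both options, to be run on the host kernel (not inside the container):
```bash
# option 1: embed the PID directly in the pattern
echo "core.%p" | sudo tee /proc/sys/kernel/core_pattern

# option 2: keep a plain pattern and let the kernel append the PID
echo "core" | sudo tee /proc/sys/kernel/core_pattern
echo 1 | sudo tee /proc/sys/kernel/core_uses_pid
```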
When a coredump is written you can use the debug/launch configuration `Open core file` which is preconfigured in the devcontainer. This will open a file prompt that lists all coredumps found in your workspace. When you want to debug coredumps from `citus_dev` clusters that run in your `/data` directory, you can add the data directory to your workspace. In the command palette of VS Code you can run `>Workspace: Add Folder to Workspace...` and select the `/data` directory. This allows you to open the coredumps from the `/data` directory in the `Open core file` debug configuration.
### Windows (docker desktop)
When running in Docker Desktop on Windows you will most likely need to change this setting. The Linux guest in WSL2 that runs your container is the `docker-desktop` environment. The easiest way to get onto the host, where you can change this setting, is to open a PowerShell window and verify that you have the `docker-desktop` environment listed.
```powershell
wsl --list
```
Among others this should list both `docker-desktop` and `docker-desktop-data`. You can then open a shell in the `docker-desktop` environment.
```powershell
wsl -d docker-desktop
```
Inside this shell you can verify that you have the right environment by running
```bash
cat /proc/sys/kernel/core_pattern
```
This should show the same configuration as the one you see inside the devcontainer. You can then change the setting by running the following command.
Note that this only changes the setting for the current session; if you want to make the change permanent, you will need to add the command to a startup script.
```bash
echo "core" > /proc/sys/kernel/core_pattern
```


@ -1,78 +0,0 @@
The table below was created with Citus 12.1.7 on PG 16.
| Extension Name | Works as Expected | Notes |
|:-----------------------------|:--------------------|:--------|
| address_standardizer | Yes | |
| address_standardizer_data_us | Yes | |
| age | Partially | Works fine side by side, but graph data cannot be distributed. |
| amcheck | Yes | |
| anon | Partially | Cannot anonymize distributed tables. It is possible to anonymize local tables. |
| auto_explain | No | [Issue #6448](https://github.com/citusdata/citus/issues/6448) |
| azure | Yes | |
| azure_ai | Yes | |
| azure_storage | Yes | |
| bloom | Yes | |
| btree_gin | Yes | |
| btree_gist | Yes | |
| citext | Yes | |
| citus_columnar | Yes | |
| cube | Yes | |
| dblink | Yes | |
| dict_int | Yes | |
| dict_xsyn | Yes | |
| earthdistance | Yes | |
| fuzzystrmatch | Yes | |
| hll | Yes | |
| hstore | Yes | |
| hypopg | Partially | hypopg can work on local tables and individual shards; however, when we create a hypothetical index on a distributed table, Citus does not propagate the index creation command to worker nodes, and thus the hypothetical index is not used in EXPLAIN statements. |
| intagg | Yes | |
| intarray | Yes | |
| isn | Yes | |
| lo | Partially | Extension relies on triggers, but Citus does not support triggers over distributed tables |
| login_hook | Yes | |
| ltree | Yes | |
| oracle_fdw | Yes | |
| orafce | Yes | |
| pageinspect | Yes | |
| pg_buffercache | Yes | |
| pg_cron | Yes | |
| pg_diskann | Yes | |
| pg_failover_slots | To be tested | |
| pg_freespacemap | Partially | Users can set `citus.override_table_visibility = 'off'` to get an accurate calculation of the free space map. |
| pg_hint_plan | Partially | Works fine side by side, but hints are ignored for distributed queries |
| pg_partman | Yes | |
| pg_prewarm | Partially | In order to prewarm distributed tables, set `citus.override_table_visibility` to off and run prewarm for each shard. This needs to be done on each node. |
| pg_repack | Partially | The extension relies on triggers, but Citus does not support triggers on distributed tables. It works fine on local tables. |
| pg_squeeze | Partially | It can work on local tables, but it is not aware of distributed tables. Users can set `citus.override_table_visibility = 'off'` and then run pg_squeeze for each shard. This needs to be done on each node. |
| pg_stat_statements | Yes | |
| pg_trgm | Yes | |
| pg_visibility | Partially | In order to get the visibility map of a distributed table, users can run the functions on the individual shard tables. |
| pgaadauth | Yes | |
| pgaudit | Yes | |
| pgcrypto | Yes | |
| pglogical | No | |
| pgrowlocks | Partially | It works only with individual shards, not with distributed table names. |
| pgstattuple | Yes | |
| plpgsql | Yes | |
| plv8 | Yes | |
| postgis | Yes | |
| postgis_raster | Yes | |
| postgis_sfcgal | Yes | |
| postgis_tiger_geocoder | No | |
| postgis_topology | No | |
| postgres_fdw | Yes | |
| postgres_protobuf | Yes | |
| semver | Yes | |
| session_variable | No | |
| sslinfo | Yes | |
| tablefunc | Yes | |
| tdigest | Yes | |
| tds_fdw | Yes | |
| timescaledb | No | [Known to be incompatible with Citus](https://www.citusdata.com/blog/2021/10/22/how-to-scale-postgres-for-time-series-data-with-citus/#:~:text=Postgres%E2%80%99%20built-in%20partitioning) |
| topn | Yes | |
| tsm_system_rows | Yes | |
| tsm_system_time | Yes | |
| unaccent | Yes | |
| uuid-ossp | Yes | |
| vector (aka pg_vector) | Yes | |
| wal2json | To be tested | |
| xml2 | To be tested | |
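Several of the "Partially" rows above boil down to the same recipe: address the shard tables directly and run the extension's function per shard. As a hedged sketch for the pg_prewarm row, using Citus' `run_command_on_shards()` helper and a hypothetical distributed table `github_events`:
```bash
psql <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
-- run pg_prewarm once per shard; %s is replaced with each shard's name
SELECT run_command_on_shards('github_events',
                             $cmd$ SELECT pg_prewarm('%s') $cmd$);
SQL
```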


@ -11,7 +11,7 @@ endif
include Makefile.global
all: extension
all: extension pg_send_cancellation
# build columnar only
@ -40,28 +40,32 @@ clean-full:
install-downgrades:
$(MAKE) -C src/backend/distributed/ install-downgrades
install-all: install-headers
install-all: install-headers install-pg_send_cancellation
$(MAKE) -C src/backend/columnar/ install-all
$(MAKE) -C src/backend/distributed/ install-all
# build citus_send_cancellation binary
pg_send_cancellation:
$(MAKE) -C src/bin/pg_send_cancellation/ all
install-pg_send_cancellation: pg_send_cancellation
$(MAKE) -C src/bin/pg_send_cancellation/ install
clean-pg_send_cancellation:
$(MAKE) -C src/bin/pg_send_cancellation/ clean
.PHONY: pg_send_cancellation install-pg_send_cancellation clean-pg_send_cancellation
# Add to generic targets
install: install-extension install-headers
clean: clean-extension
install: install-extension install-headers install-pg_send_cancellation
clean: clean-extension clean-pg_send_cancellation
# apply or check style
reindent:
${citus_abs_top_srcdir}/ci/fix_style.sh
check-style:
black . --check --quiet
isort . --check --quiet
flake8
cd ${citus_abs_top_srcdir} && citus_indent --quiet --check
.PHONY: reindent check-style
# depend on install-all so that downgrade scripts are installed as well
check: all install-all
# explicitly does not use $(MAKE) to avoid parallelism
make -C src/test/regress check
$(MAKE) -C src/test/regress check-full
.PHONY: all check clean install install-downgrades install-all
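A quick usage sketch of the targets added above (run from the top of the source tree):
```bash
# build and install the extension; install now also covers pg_send_cancellation
make
sudo make install

# or build just the cancellation helper
make pg_send_cancellation
```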

README.md

@ -1,14 +1,14 @@
| **<br/>The Citus database is 100% open source.<br/><img width=1000/><br/>Learn what's new in the [Citus 13.0 release blog](https://www.citusdata.com/blog/2025/02/06/distribute-postgresql-17-with-citus-13/) and the [Citus Updates page](https://www.citusdata.com/updates/).<br/><br/>**|
| **<br/>Citus is now 100% open source and supports querying from any node.<br/><img width=1000/><br/>Read about it on the [Citus 11.0 release blog](https://www.citusdata.com/blog/2022/06/17/citus-11-goes-fully-open-source/) and the [Citus Updates page](https://www.citusdata.com/updates/).<br/><br/>** |
|---|
<br/>
<br/>
![Citus Banner](images/citus-readme-banner.png)
![Citus Banner](/citus-readme-banner.png)
[![Latest Docs](https://img.shields.io/badge/docs-latest-brightgreen.svg)](https://docs.citusdata.com/)
[![Stack Overflow](https://img.shields.io/badge/Stack%20Overflow-%20-545353?logo=Stack%20Overflow)](https://stackoverflow.com/questions/tagged/citus)
[![Slack](https://cituscdn.azureedge.net/images/social/slack-badge.svg)](https://slack.citusdata.com/)
[![Slack Status](https://citus-slack.herokuapp.com/badge.svg)](https://citus-public.slack.com/)
[![Code Coverage](https://codecov.io/gh/citusdata/citus/branch/master/graph/badge.svg)](https://app.codecov.io/gh/citusdata/citus)
[![Twitter](https://img.shields.io/twitter/follow/citusdata.svg?label=Follow%20@citusdata)](https://twitter.com/intent/follow?screen_name=citusdata)
@ -31,15 +31,13 @@ You can use these Citus superpowers to make your Postgres database scale-out rea
Our [SIGMOD '21](https://2021.sigmod.org/) paper [Citus: Distributed PostgreSQL for Data-Intensive Applications](https://doi.org/10.1145/3448016.3457551) gives a more detailed look into what Citus is, how it works, and why it works that way.
![Citus scales out from a single node](images/citus-scale-out.png)
![Citus scales out from a single node](/citus-scale-out.png)
Since Citus is an extension to Postgres, you can use Citus with the latest Postgres versions. And Citus works seamlessly with the PostgreSQL tools and extensions you are already familiar with.
- [Why Citus?](#why-citus)
- [Getting Started](#getting-started)
- [Using Citus](#using-citus)
- [Schema-based sharding](#schema-based-sharding)
- [Setting up with High Availability](#setting-up-with-high-availability)
- [Documentation](#documentation)
- [Architecture](#architecture)
- [When to Use Citus](#when-to-use-citus)
@ -65,11 +63,11 @@ Developers choose Citus for two reasons:
## Getting Started
The quickest way to get started with Citus is to use the [Azure Cosmos DB for PostgreSQL](https://learn.microsoft.com/azure/cosmos-db/postgresql/quickstart-create-portal) managed service in the cloud—or [set up Citus locally](https://docs.citusdata.com/en/stable/installation/single_node.html).
The quickest way to get started with Citus is to use the [Hyperscale (Citus)](https://docs.microsoft.com/azure/postgresql/quickstart-create-hyperscale-portal) deployment option in the Azure Database for PostgreSQL managed service—or [set up Citus locally](https://docs.citusdata.com/en/stable/installation/single_node.html).
### Citus Managed Service on Azure
### Hyperscale (Citus) on Azure Database for PostgreSQL
You can get a fully-managed Citus cluster in minutes through the [Azure Cosmos DB for PostgreSQL portal](https://azure.microsoft.com/products/cosmos-db/). Azure will manage your backups, high availability through auto-failover, software updates, monitoring, and more for all of your servers. To get started Citus on Azure, use the [Azure Cosmos DB for PostgreSQL Quickstart](https://learn.microsoft.com/azure/cosmos-db/postgresql/quickstart-create-portal).
You can get a fully-managed Citus cluster in minutes through the Hyperscale (Citus) deployment option in the [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) portal. Azure will manage your backups, high availability through auto-failover, software updates, monitoring, and more for all of your servers. To get started with Hyperscale (Citus), use the [Hyperscale (Citus) Quickstart](https://docs.microsoft.com/azure/postgresql/quickstart-create-hyperscale-portal) in the Azure docs.
### Running Citus using Docker
@ -95,14 +93,14 @@ Install packages on Ubuntu / Debian:
```bash
curl https://install.citusdata.com/community/deb.sh > add-citus-repo.sh
sudo bash add-citus-repo.sh
sudo apt-get -y install postgresql-17-citus-13.0
sudo apt-get -y install postgresql-14-citus-11.0
```
Install packages on Red Hat:
Install packages on CentOS / Fedora / Red Hat:
```bash
curl https://install.citusdata.com/community/rpm.sh > add-citus-repo.sh
sudo bash add-citus-repo.sh
sudo yum install -y citus130_17
sudo yum install -y citus110_14
```
To add Citus to your local PostgreSQL database, add the following to `postgresql.conf`:
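The snippet itself is elided from this diff hunk; the standard requirement (stated here from general Citus knowledge, not from the diff) is a single `postgresql.conf` line, e.g.:
```bash
# append the required setting and restart PostgreSQL afterwards
echo "shared_preload_libraries = 'citus'" >> "$PGDATA/postgresql.conf"
```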
@ -236,41 +234,6 @@ Time: 209.961 ms
Co-location also helps you scale [INSERT..SELECT](https://docs.citusdata.com/en/stable/articles/aggregation.html), [stored procedures](https://www.citusdata.com/blog/2020/11/21/making-postgres-stored-procedures-9x-faster-in-citus/), and [distributed transactions](https://www.citusdata.com/blog/2017/06/02/scaling-complex-sql-transactions/).
### Distributing Tables without interrupting the application
Some of you already start with Postgres and decide to distribute tables later on, while your application is already using them. In that case, you want to avoid downtime for both reads and writes. The `create_distributed_table` command blocks writes (e.g., DML commands) on the table until the command is finished. With the `create_distributed_table_concurrently` command, by contrast, your application can continue to read and write the data even while the command runs.
```sql
CREATE TABLE device_logs (
device_id bigint primary key,
log text
);
-- insert device logs
INSERT INTO device_logs (device_id, log)
SELECT s, 'device log:'||s FROM generate_series(0, 99) s;
-- convert device_logs into a distributed table without interrupting the application
SELECT create_distributed_table_concurrently('device_logs', 'device_id', colocate_with := 'devices');
-- get the count of the logs, parallelized across shards
SELECT count(*) FROM device_logs;
┌───────┐
│ count │
├───────┤
│ 100 │
└───────┘
(1 row)
Time: 48.734 ms
```
### Creating Reference Tables
When you need fast joins or foreign keys that do not include the distribution column, you can use `create_reference_table` to replicate a table across all nodes in the cluster.
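A minimal sketch (the `countries` table and its columns are hypothetical):
```bash
psql <<'SQL'
CREATE TABLE countries (code char(2) PRIMARY KEY, name text);
-- replicate the table to every node so joins and foreign keys on it stay local
SELECT create_reference_table('countries');
SQL
```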
@ -348,72 +311,9 @@ When using columnar storage, you should only load data in batch using `COPY` or
To learn more about columnar storage, check out the [columnar storage README](https://github.com/citusdata/citus/blob/master/src/backend/columnar/README.md).
## Schema-based sharding
Available since Citus 12.0, [schema-based sharding](https://docs.citusdata.com/en/stable/get_started/concepts.html#schema-based-sharding) is the shared-database, separate-schemas model: the schema becomes the logical shard within the database. Multi-tenant apps can use a schema per tenant to easily shard along the tenant dimension. Query changes are not required and the application usually only needs a small modification to set the proper search_path when switching tenants. Schema-based sharding is an ideal solution for microservices, and for ISVs deploying applications that cannot undergo the changes required to onboard row-based sharding.
### Creating distributed schemas
You can turn an existing schema into a distributed schema by calling `citus_schema_distribute`:
```sql
SELECT citus_schema_distribute('user_service');
```
Alternatively, you can set `citus.enable_schema_based_sharding` to have all newly created schemas be automatically converted into distributed schemas:
```sql
SET citus.enable_schema_based_sharding TO ON;
CREATE SCHEMA AUTHORIZATION user_service;
CREATE SCHEMA AUTHORIZATION time_service;
CREATE SCHEMA AUTHORIZATION ping_service;
```
### Running queries
Queries will be properly routed to schemas based on `search_path` or by explicitly using the schema name in the query.
For [microservices](https://docs.citusdata.com/en/stable/get_started/tutorial_microservices.html) you would create a USER per service matching the schema name, so the default `search_path` contains the schema name. When connected, the user's queries are automatically routed and no changes to the microservice are required.
```sql
CREATE USER user_service;
CREATE SCHEMA AUTHORIZATION user_service;
```
For typical multi-tenant applications, you would set the search path to the tenant schema name in your application:
```sql
SET search_path = tenant_name, public;
```
## Setting up with High Availability
One of the most popular high availability solutions for PostgreSQL, [Patroni 3.0](https://github.com/zalando/patroni), has [first class support for Citus 10.0 and above](https://patroni.readthedocs.io/en/latest/citus.html#citus); additionally, Citus 11.2 and above ship with improvements for smoother node switchover in Patroni.
An example of patronictl list output for the Citus cluster:
```bash
postgres@coord1:~$ patronictl list demo
```
```text
+ Citus cluster: demo ----------+--------------+---------+----+-----------+
| Group | Member | Host | Role | State | TL | Lag in MB |
+-------+---------+-------------+--------------+---------+----+-----------+
| 0 | coord1 | 172.27.0.10 | Replica | running | 1 | 0 |
| 0 | coord2 | 172.27.0.6 | Sync Standby | running | 1 | 0 |
| 0 | coord3 | 172.27.0.4 | Leader | running | 1 | |
| 1 | work1-1 | 172.27.0.8 | Sync Standby | running | 1 | 0 |
| 1 | work1-2 | 172.27.0.2 | Leader | running | 1 | |
| 2 | work2-1 | 172.27.0.5 | Sync Standby | running | 1 | 0 |
| 2 | work2-2 | 172.27.0.7 | Leader | running | 1 | |
+-------+---------+-------------+--------------+---------+----+-----------+
```
## Documentation
If you're ready to get started with Citus or want to know more, we recommend reading the [Citus open source documentation](https://docs.citusdata.com/en/stable/). Or, if you are using Citus on Azure, then the [Azure Cosmos DB for PostgreSQL](https://learn.microsoft.com/azure/cosmos-db/postgresql/introduction) is the place to start.
If you're ready to get started with Citus or want to know more, we recommend reading the [Citus open source documentation](https://docs.citusdata.com/en/stable/). Or, if you are using Citus on Azure, then the [Hyperscale (Citus) documentation](https://docs.microsoft.com/azure/postgresql/hyperscale/) is online and available as part of the Azure Database for PostgreSQL docs.
Our Citus docs contain comprehensive use case guides on how to build a [multi-tenant SaaS application](https://docs.citusdata.com/en/stable/use_cases/multi_tenant.html), [real-time analytics dashboard]( https://docs.citusdata.com/en/stable/use_cases/realtime_analytics.html), or work with [time series data](https://docs.citusdata.com/en/stable/use_cases/timeseries.html).
@ -423,13 +323,11 @@ A Citus database cluster grows from a single PostgreSQL node into a cluster by a
Data in distributed tables is stored in “shards”, which are actually just regular PostgreSQL tables on the worker nodes. When querying a distributed table on the coordinator node, Citus will send regular SQL queries to the worker nodes. That way, all the usual PostgreSQL optimizations and extensions can automatically be used with Citus.
![Citus architecture](images/citus-architecture.png)
![Citus architecture](/citus-architecture.png)
When you send a query in which all (co-located) distributed tables have the same filter on the distribution column, Citus will automatically detect that and send the whole query to the worker node that stores the data. That way, arbitrarily complex queries are supported with minimal routing overhead, which is especially useful for scaling transactional workloads. If queries do not have a specific filter, each shard is queried in parallel, which is especially useful in analytical workloads. The Citus distributed executor is adaptive and is designed to handle both query types at the same time on the same system under high concurrency, which enables large-scale mixed workloads.
The schema and metadata of distributed tables and reference tables are automatically synchronized to all the nodes in the cluster. That way, you can connect to any node to run distributed queries. Schema changes and cluster administration still need to go through the coordinator.
Detailed descriptions of the implementation for Citus developers are provided in the [Citus Technical Documentation](src/backend/distributed/README.md).
As of Citus 11.0, the schema and metadata of distributed tables and reference tables are automatically synchronized to all the nodes in the cluster. That way, you can connect to any node to run distributed queries. Schema changes and cluster administration still need to go through the coordinator.
## When to use Citus
@ -440,23 +338,21 @@ Citus is uniquely capable of scaling both analytical and transactional workloads
The advanced parallel, distributed query engine in Citus combined with PostgreSQL features such as [array types](https://www.postgresql.org/docs/current/arrays.html), [JSONB](https://www.postgresql.org/docs/current/datatype-json.html), [lateral joins](https://heap.io/blog/engineering/postgresqls-powerful-new-join-type-lateral), and extensions like [HyperLogLog](https://github.com/citusdata/postgresql-hll) and [TopN](https://github.com/citusdata/postgresql-topn) allow you to build responsive analytics dashboards no matter how many customers or how much data you have.
Example real-time analytics users: [Algolia](https://www.citusdata.com/customers/algolia)
Example real-time analytics users: [Algolia](https://www.citusdata.com/customers/algolia), [Heap](https://www.citusdata.com/customers/heap)
- **[Time series data](http://docs.citusdata.com/en/stable/use_cases/timeseries.html)**:
Citus enables you to process and analyze very large amounts of time series data. The biggest Citus clusters store well over a petabyte of time series data and ingest terabytes per day.
Citus integrates seamlessly with [Postgres table partitioning](https://www.postgresql.org/docs/current/ddl-partitioning.html) and has [built-in functions for partitioning by time](https://www.citusdata.com/blog/2021/10/22/how-to-scale-postgres-for-time-series-data-with-citus/), which can speed up queries and writes on time series tables. You can take advantage of Citus's parallel, distributed query engine for fast analytical queries, and use the built-in *columnar storage* to compress old partitions.
Example users: [MixRank](https://www.citusdata.com/customers/mixrank)
Example users: [MixRank](https://www.citusdata.com/customers/mixrank), [Windows team](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/architecting-petabyte-scale-analytics-by-scaling-out-postgres-on/ba-p/969685)
- **[Software-as-a-service (SaaS) applications](http://docs.citusdata.com/en/stable/use_cases/multi_tenant.html)**:
SaaS and other multi-tenant applications need to be able to scale their database as the number of tenants/customers grows. Citus enables you to transparently shard a complex data model by the tenant dimension, so your database can grow along with your business.
By distributing tables along a tenant ID column and co-locating data for the same tenant, Citus can horizontally scale complex (tenant-scoped) queries, transactions, and foreign key graphs. Reference tables and distributed DDL commands make database management a breeze compared to manual sharding. On top of that, you have a built-in distributed query engine for doing cross-tenant analytics inside the database.
Example multi-tenant SaaS users: [Salesloft](https://fivetran.com/case-studies/replicating-sharded-databases-a-case-study-of-salesloft-citus-data-and-fivetran), [ConvertFlow](https://www.citusdata.com/customers/convertflow)
- **[Microservices](https://docs.citusdata.com/en/stable/get_started/tutorial_microservices.html)**: Citus supports schema-based sharding, which allows distributing regular database schemas across many machines. This sharding methodology fits nicely with typical microservices architecture, where storage is fully owned by the service and hence can't share the same schema definition with other tenants. Citus allows distributing horizontally scalable state across services, solving one of the [main problems](https://stackoverflow.blog/2020/11/23/the-macro-problem-with-microservices/) of microservices.
Example multi-tenant SaaS users: [Copper](https://www.citusdata.com/customers/copper), [Salesloft](https://fivetran.com/case-studies/replicating-sharded-databases-a-case-study-of-salesloft-citus-data-and-fivetran), [ConvertFlow](https://www.citusdata.com/customers/convertflow)
- **Geospatial**:
Because of the powerful [PostGIS](https://postgis.net/) extension to Postgres that adds support for geographic objects into Postgres, many people run spatial/GIS applications on top of Postgres. And since spatial location information has become part of our daily life, well, there are more geospatial applications than ever. When your Postgres database needs to scale out to handle an increased workload, Citus is a good fit.
@ -469,25 +365,19 @@ Citus is uniquely capable of scaling both analytical and transactional workloads
- **GitHub issues**: Please submit issues via [GitHub issues](https://github.com/citusdata/citus/issues).
- **Documentation**: Our [Citus docs](https://docs.citusdata.com ) have a wealth of resources, including sections on [query performance tuning](https://docs.citusdata.com/en/stable/performance/performance_tuning.html), [useful diagnostic queries](https://docs.citusdata.com/en/stable/admin_guide/diagnostic_queries.html), and [common error messages](https://docs.citusdata.com/en/stable/reference/common_errors.html).
- **Docs issues**: You can also submit documentation issues via [GitHub issues for our Citus docs](https://github.com/citusdata/citus_docs/issues).
- **Updates & Release Notes**: Learn about what's new in each Citus version on the [Citus Updates page](https://www.citusdata.com/updates/).
- **Updates**: Learn about what's new in each Citus version on the [Citus Updates page](https://www.citusdata.com/updates/).
## Contributing
Citus is built on and of open source, and we welcome your contributions. The [CONTRIBUTING.md](CONTRIBUTING.md) file explains how to get started developing the Citus extension itself and our code quality guidelines.
## Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Stay Connected
- **Twitter**: Follow us [@citusdata](https://twitter.com/citusdata) to track the latest posts & updates on what's happening.
- **Citus Blog**: Read our popular [Citus Open Source Blog](https://www.citusdata.com/blog/) for posts about PostgreSQL and Citus.
- **Citus Blog**: Read our popular [Citus Blog](https://www.citusdata.com/blog/) for useful & informative posts about PostgreSQL and Citus.
- **Citus Newsletter**: Subscribe to our monthly technical [Citus Newsletter](https://www.citusdata.com/join-newsletter) to get a curated collection of our favorite posts, videos, docs, talks, & other Postgres goodies.
- **Slack**: Our [Citus Public slack](https://slack.citusdata.com/) is a good way to stay connected, not just with us but with other Citus users.
- **Sister Blog**: Read the PostgreSQL posts on the [Azure Cosmos DB for PostgreSQL blog](https://devblogs.microsoft.com/cosmosdb/category/postgresql/) about our managed service on Azure.
- **Sister Blog**: Read our Azure Database for PostgreSQL [sister blog on Microsoft TechCommunity](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/bg-p/ADforPostgreSQL) for posts relating to Postgres (and Citus) on Azure.
- **Videos**: Check out this [YouTube playlist](https://www.youtube.com/playlist?list=PLixnExCn6lRq261O0iwo4ClYxHpM9qfVy) of some of our favorite Citus videos and demos. If you want to deep dive into how Citus extends PostgreSQL, you might want to check out Marco Slot's talk at Carnegie Mellon titled [Citus: Distributed PostgreSQL as an Extension](https://youtu.be/X-aAgXJZRqM) that was part of Andy Pavlo's Vaccination Database Talks series at CMUDB.
- **Our other Postgres projects**: Our team also works on other awesome PostgreSQL open source extensions & projects, including: [pg_cron](https://github.com/citusdata/pg_cron), [HyperLogLog](https://github.com/citusdata/postgresql-hll), [TopN](https://github.com/citusdata/postgresql-topn), [pg_auto_failover](https://github.com/citusdata/pg_auto_failover), [activerecord-multi-tenant](https://github.com/citusdata/activerecord-multi-tenant), and [django-multitenant](https://github.com/citusdata/django-multitenant).


@ -1,41 +0,0 @@
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->


@ -1,160 +0,0 @@
# Coding style
The existing code style in our code base is not super consistent. There are multiple reasons for that. One big reason is that our code base is relatively old and our standards have changed over time. The second big reason is that our style guide differs from the style guide of Postgres, and some code is copied from the Postgres source code and only slightly modified. The rules below are for new code. If you're changing existing code that uses a different style, use your best judgement to decide whether to use the rules here or to match the existing style.
## Using citus_indent
CI pipeline will automatically reject any PRs which do not follow our coding
conventions. The easiest way to ensure your PR adheres to those conventions is
to use the [citus_indent](https://github.com/citusdata/tools/tree/develop/uncrustify)
tool. This tool uses `uncrustify` under the hood.
```bash
# Uncrustify changes the way it formats code every release a bit. To make sure
# everyone formats consistently we use version 0.82.0:
curl -L https://github.com/uncrustify/uncrustify/archive/uncrustify-0.82.0.tar.gz | tar xz
cd uncrustify-uncrustify-0.82.0/
mkdir build
cd build
cmake ..
make -j5
sudo make install
cd ../..
git clone https://github.com/citusdata/tools.git
cd tools
make uncrustify/.install
```
Once you've done that, you can run the `make reindent` command from the top
directory to recursively check and correct the style of any source files in the
current directory. Under the hood, `make reindent` will run `citus_indent` and
some other style corrections for you.
You can also run the following in the directory of this repository to
automatically format all the files that you have changed before committing:
```bash
cat > .git/hooks/pre-commit << __EOF__
#!/bin/bash
citus_indent --check --diff || { citus_indent --diff; exit 1; }
__EOF__
chmod +x .git/hooks/pre-commit
```
## Other rules we follow that citus_indent does not enforce
* We almost always use **CamelCase** when naming functions, variables, etc., **not snake_case**.
* We also have the habit of using **lowerCamelCase** for some variables named after their type or their function name, as shown in the examples:
```c
bool IsCitusExtensionLoaded = false;
bool
IsAlterTableRenameStmt(RenameStmt *renameStmt)
{
AlterTableCmd *alterTableCommand = NULL;
..
..
bool isAlterTableRenameStmt = false;
..
}
```
* We **start functions with a comment**:
```c
/*
* MyNiceFunction <something in present simple tense, e.g., processes / returns / checks / takes X as input / does Y> ..
* <some more nice words> ..
* <some more nice words> ..
*/
<static?> <return type>
MyNiceFunction(..)
{
..
..
}
```
* `#include`s need to be sorted based on the ordering below and then alphabetically, and we should not include what we don't need in a file:
* System includes (eg. #include<...>)
* Postgres.h (eg. #include "postgres.h")
* Toplevel imports from postgres, not contained in a directory (eg. #include "miscadmin.h")
* General postgres includes (eg . #include "nodes/...")
* Toplevel citus includes, not contained in a directory (eg. #include "citus_version.h")
* Columnar includes (eg. #include "columnar/...")
* Distributed includes (eg. #include "distributed/...")
* Comments:
```c
/* single line comments start with a lower-case */
/*
* We start multi-line comments with a capital letter
* and keep adding a star to the beginning of each line
* until we close the comment with a star and a slash.
*/
```
* Order of function implementations and their declarations in a file:
We define static functions after the functions that call them. For example:
```c
#include<..>
#include<..>
..
..
typedef struct
{
..
..
} MyNiceStruct;
..
..
PG_FUNCTION_INFO_V1(my_nice_udf1);
PG_FUNCTION_INFO_V1(my_nice_udf2);
..
..
// .. somewhere on top of the file …
static void MyNiceStaticlyDeclaredFunction1(…);
static void MyNiceStaticlyDeclaredFunction2(…);
..
..
void
MyNiceFunctionExternedViaHeaderFile(..)
{
..
..
MyNiceStaticlyDeclaredFunction1(..);
..
..
MyNiceStaticlyDeclaredFunction2(..);
..
}
..
..
// we define this first because it's called by MyNiceFunctionExternedViaHeaderFile()
// before MyNiceStaticlyDeclaredFunction2()
static void
MyNiceStaticlyDeclaredFunction1(…)
{
}
..
..
// then we define this
static void
MyNiceStaticlyDeclaredFunction2(…)
{
}
```


@ -283,14 +283,6 @@ actually run in CI. This is most commonly forgotten for newly added CI tests
that the developer only ran locally. It also checks that all CI scripts have a
section in this `README.md` file and that they include `ci/ci_helpers.sh`.
## `check_migration_files.sh`
A branch that touches a set of upgrade scripts is also expected to touch
corresponding downgrade scripts as well. If this script fails, read the output
and make sure you update the downgrade scripts in the printed list. If you
really don't need a downgrade to run any SQL, you can write a comment in the
file explaining why a downgrade step is not necessary.
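As a hypothetical example of the pairing this check enforces (the version numbers are placeholders):
```bash
# an upgrade script touched on the branch ...
ls src/backend/distributed/sql/citus--11.1-1--11.2-1.sql
# ... is expected to come with the matching downgrade script:
ls src/backend/distributed/sql/downgrades/citus--11.2-1--11.1-1.sql
```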
## `disallow_c_comments_in_migrations.sh`
We do not use C-style comments in migration files as the stripped
@ -381,22 +373,3 @@ This script checks and fixes issues with `.gitignore` rules:
This script checks the order of the GUCs defined in `shared_library_init.c`.
To solve this failure, please check `shared_library_init.c` and make sure that the GUC
definitions are in alphabetical order.
## `print_stack_trace.sh`
This script prints stack traces for failed tests, if they left core files.
## `sort_and_group_includes.sh`
This script checks and fixes issues with include grouping and sorting in C files.
Includes are grouped in the following groups:
- System includes (eg. `#include <math>`)
- Postgres.h include (eg. `#include "postgres.h"`)
- Toplevel postgres includes (includes not in a directory eg. `#include "miscadmin.h"`)
- Postgres includes in a directory (eg. `#include "catalog/pg_type.h"`)
- Toplevel citus includes (includes not in a directory eg. `#include "pg_version_constants.h"`)
- Columnar includes (eg. `#include "columnar/columnar.h"`)
- Distributed includes (eg. `#include "distributed/maintenanced.h"`)
Within every group the include lines are sorted alphabetically.
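A hedged usage sketch (the single file path is hypothetical; the bulk invocation mirrors what `ci/fix_style.sh` does):
```bash
# fix include grouping and ordering across the whole tree
ci/sort_and_group_includes.sh

# or fix a single file
./ci/include_grouping.py src/backend/distributed/commands/table.c
```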


@ -15,6 +15,9 @@ PG_MAJOR=${PG_MAJOR:?please provide the postgres major version}
codename=${VERSION#*(}
codename=${codename%)*}
# get the project name from the CI environment
project="${CIRCLE_PROJECT_REPONAME}"
# we'll do everything with absolute paths
basedir="$(pwd)"
@ -25,7 +28,7 @@ build_ext() {
pg_major="$1"
builddir="${basedir}/build-${pg_major}"
echo "Beginning build for PostgreSQL ${pg_major}..." >&2
echo "Beginning build of ${project} for PostgreSQL ${pg_major}..." >&2
# do everything in a subdirectory to avoid clutter in current directory
mkdir -p "${builddir}" && cd "${builddir}"


@ -14,8 +14,8 @@ ci_scripts=$(
grep -v -E '^(ci_helpers.sh|fix_style.sh)$'
)
for script in $ci_scripts; do
if ! grep "\\bci/$script\\b" -r .github > /dev/null; then
echo "ERROR: CI script with name \"$script\" is not actually used in .github folder"
if ! grep "\\bci/$script\\b" .circleci/config.yml > /dev/null; then
echo "ERROR: CI script with name \"$script\" is not actually used in .circleci/config.yml"
exit 1
fi
if ! grep "^## \`$script\`\$" ci/README.md > /dev/null; then

ci/check_enterprise_merge.sh Executable file

@ -0,0 +1,96 @@
#!/bin/bash
# Testing this script locally requires you to set the following environment
# variables:
# CIRCLE_BRANCH, GIT_USERNAME and GIT_TOKEN
# fail if trying to reference a variable that is not set.
set -u
# exit immediately if a command fails
set -e
# Fail on pipe failures
set -o pipefail
PR_BRANCH="${CIRCLE_BRANCH}"
ENTERPRISE_REMOTE="https://${GIT_USERNAME}:${GIT_TOKEN}@github.com/citusdata/citus-enterprise"
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# List executed commands. This is done so debugging this script is easier when
# it fails. It's explicitly done after git remote add so username and password
# are not shown in CI output (even though it's also filtered out by CircleCI)
set -x
check_compile () {
echo "INFO: checking if merged code can be compiled"
./configure --without-libcurl
make -j10
}
# Clone current git repo (which should be community) to a temporary working
# directory and go there
GIT_DIR_ROOT="$(git rev-parse --show-toplevel)"
TMP_GIT_DIR="$(mktemp --directory -t citus-merge-check.XXXXXXXXX)"
git clone "$GIT_DIR_ROOT" "$TMP_GIT_DIR"
cd "$TMP_GIT_DIR"
# Fails in CI without this
git config user.email "citus-bot@microsoft.com"
git config user.name "citus bot"
# Disable "set -x" temporarily, because $ENTERPRISE_REMOTE contains passwords
{ set +x ; } 2> /dev/null
git remote add enterprise "$ENTERPRISE_REMOTE"
set -x
git remote set-url --push enterprise no-pushing
# Fetch enterprise-master
git fetch enterprise enterprise-master
git checkout "enterprise/enterprise-master"
if git merge --no-commit "origin/$PR_BRANCH"; then
echo "INFO: community PR branch could be merged into enterprise-master"
# check that we can compile after the merge
if check_compile; then
exit 0
fi
echo "WARN: Failed to compile after community PR branch was merged into enterprise"
fi
# undo partial merge
git merge --abort
# If we have a conflict on enterprise merge on the master branch, we have a problem.
# Provide an error message to indicate that enterprise merge is needed to fix this check.
if [[ $PR_BRANCH = master ]]; then
echo "ERROR: Master branch has merge conflicts with enterprise-master."
echo "Try re-running this CI job after merging your changes into enterprise-master."
exit 1
fi
if ! git fetch enterprise "$PR_BRANCH" ; then
echo "ERROR: enterprise/$PR_BRANCH was not found and community PR branch could not be merged into enterprise-master"
exit 1
fi
# Show the top commit of the enterprise PR branch to make debugging easier
git log -n 1 "enterprise/$PR_BRANCH"
# Check that this branch contains the top commit of the current community PR
# branch. If it does not it means it's not up to date with the current PR, so
# the enterprise branch should be updated.
if ! git merge-base --is-ancestor "origin/$PR_BRANCH" "enterprise/$PR_BRANCH" ; then
echo "ERROR: enterprise/$PR_BRANCH is not up to date with community PR branch"
exit 1
fi
# Now check if we can merge the enterprise PR into enterprise-master without
# issues.
git merge --no-commit "enterprise/$PR_BRANCH"
# check that we can compile after the merge
check_compile


@ -4,22 +4,7 @@ set -euo pipefail
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# Find the line that exactly matches "RegisterCitusConfigVariables(void)" in
# shared_library_init.c. grep command returns something like
# "934:RegisterCitusConfigVariables(void)" and we extract the line number
# with cut.
RegisterCitusConfigVariables_begin_linenumber=$(grep -n "^RegisterCitusConfigVariables(void)$" src/backend/distributed/shared_library_init.c | cut -d: -f1)
# Consider the lines starting from $RegisterCitusConfigVariables_begin_linenumber,
# grep the first line that starts with "}" and extract the line number with cut
# as in the previous step.
RegisterCitusConfigVariables_length=$(tail -n +$RegisterCitusConfigVariables_begin_linenumber src/backend/distributed/shared_library_init.c | grep -n -m 1 "^}$" | cut -d: -f1)
# extract the function definition of RegisterCitusConfigVariables into a temp file
tail -n +$RegisterCitusConfigVariables_begin_linenumber src/backend/distributed/shared_library_init.c | head -n $(($RegisterCitusConfigVariables_length)) > RegisterCitusConfigVariables_func_def.out
# extract citus gucs in the form of <tab><tab>"citus.X"
grep -P "^[\t][\t]\"citus\.[a-zA-Z_0-9]+\"" RegisterCitusConfigVariables_func_def.out > gucs.out
LC_COLLATE=C sort -c gucs.out
# extract citus gucs in the form of "citus.X"
grep -o -E "(\.*\"citus.\w+\")," src/backend/distributed/shared_library_init.c > gucs.out
sort -c gucs.out
rm gucs.out
rm RegisterCitusConfigVariables_func_def.out


@ -1,33 +0,0 @@
#! /bin/bash
set -euo pipefail
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# This file checks for the existence of downgrade scripts for every upgrade script that is changed in the branch.
# create list of migration files for upgrades
upgrade_files=$(git diff --name-only origin/main | { grep "src/backend/distributed/sql/citus--.*sql" || exit 0 ; })
downgrade_files=$(git diff --name-only origin/main | { grep "src/backend/distributed/sql/downgrades/citus--.*sql" || exit 0 ; })
ret_value=0
for file in $upgrade_files
do
# There should always be 2 matches, and no need to avoid splitting here
# shellcheck disable=SC2207
versions=($(grep --only-matching --extended-regexp "[0-9]+\.[0-9]+[-.][0-9]+" <<< "$file"))
from_version=${versions[0]};
to_version=${versions[1]};
downgrade_migration_file="src/backend/distributed/sql/downgrades/citus--$to_version--$from_version.sql"
# check for the existence of migration scripts
if [[ $(grep --line-regexp --count "$downgrade_migration_file" <<< "$downgrade_files") == 0 ]]
then
echo "$file is updated, but $downgrade_migration_file is not updated in branch"
ret_value=1
fi
done
exit $ret_value;


@ -5,8 +5,7 @@ source ci/ci_helpers.sh
# Remove all the ignored files from git tree, and error out
# find all ignored files in git tree, and use quotation marks to prevent word splitting on filenames with spaces in them
# NOTE: Option --cached is needed to avoid a bug in git ls-files command.
ignored_lines_in_git_tree=$(git ls-files --ignored --cached --exclude-standard | sed 's/.*/"&"/')
ignored_lines_in_git_tree=$(git ls-files --ignored --exclude-standard | sed 's/.*/"&"/')
if [[ -n $ignored_lines_in_git_tree ]]
then


@ -9,8 +9,6 @@ cidir="${0%/*}"
cd ${cidir}/..
citus_indent . --quiet
black . --quiet
isort . --quiet
ci/editorconfig.sh
ci/remove_useless_declarations.sh
ci/disallow_c_comments_in_migrations.sh
@ -18,5 +16,3 @@ ci/disallow_hash_comments_in_spec_files.sh
ci/disallow_long_changelog_entries.sh
ci/normalize_expected.sh
ci/fix_gitignore.sh
ci/print_stack_trace.sh
ci/sort_and_group_includes.sh


@ -1,157 +0,0 @@
#!/usr/bin/env python3
"""
easy command line to run against all citus-style checked files:
$ git ls-files \
| git check-attr --stdin citus-style \
| grep 'citus-style: set' \
| awk '{print $1}' \
| cut -d':' -f1 \
| xargs -n1 ./ci/include_grouping.py
"""
import collections
import os
import sys
def main(args):
if len(args) < 2:
print("Usage: include_grouping.py <file>")
return
file = args[1]
if not os.path.isfile(file):
sys.exit(f"File '{file}' does not exist")
with open(file, "r") as in_file:
with open(file + ".tmp", "w") as out_file:
includes = []
skipped_lines = []
# This calls print_sorted_includes on a set of consecutive #include lines.
# This implicitly keeps separation of any #include lines that are contained in
# an #ifdef, because it will order the #include lines inside and after the
# #ifdef completely separately.
for line in in_file:
# if a line starts with #include we don't want to print it yet, instead we
# want to collect all consecutive #include lines
if line.startswith("#include"):
includes.append(line)
skipped_lines = []
continue
# if we have collected any #include lines, we want to print them sorted
# before printing the current line. However, if the current line is empty
# we want to perform a lookahead to see if the next line is an #include.
# To maintain any separation between #include lines and their subsequent
# lines we keep track of all lines we have skipped inbetween.
if len(includes) > 0:
if len(line.strip()) == 0:
skipped_lines.append(line)
continue
# we have includes that need to be grouped before printing the current
# line.
print_sorted_includes(includes, file=out_file)
includes = []
# print any skipped lines
print("".join(skipped_lines), end="", file=out_file)
skipped_lines = []
print(line, end="", file=out_file)
# move out_file to file
os.rename(file + ".tmp", file)
def print_sorted_includes(includes, file=sys.stdout):
default_group_key = 1
groups = collections.defaultdict(set)
# define the groups that we separate correctly. The matchers are tested in the order
# of their priority field. The first matcher that matches the include is used to
# assign the include to a group.
# The groups are printed in the order of their group_key.
matchers = [
{
"name": "system includes",
"matcher": lambda x: x.startswith("<"),
"group_key": -2,
"priority": 0,
},
{
"name": "toplevel postgres includes",
"matcher": lambda x: "/" not in x,
"group_key": 0,
"priority": 9,
},
{
"name": "postgres.h",
"matcher": lambda x: x.strip() in ['"postgres.h"'],
"group_key": -1,
"priority": -1,
},
{
"name": "toplevel citus inlcudes",
"matcher": lambda x: x.strip()
in [
'"citus_version.h"',
'"pg_version_compat.h"',
'"pg_version_constants.h"',
],
"group_key": 3,
"priority": 0,
},
{
"name": "columnar includes",
"matcher": lambda x: x.startswith('"columnar/'),
"group_key": 4,
"priority": 1,
},
{
"name": "distributed includes",
"matcher": lambda x: x.startswith('"distributed/'),
"group_key": 5,
"priority": 1,
},
]
matchers.sort(key=lambda x: x["priority"])
# throughout our codebase we have some includes where either postgres or citus
# includes are wrongfully included with the syntax for system includes. Before we
# try to match those we will change the <> to "" to make them match our system. This
# will also rewrite the include to the correct syntax.
common_system_include_error_prefixes = ["<nodes/", "<distributed/"]
# assign every include to a group
for include in includes:
# extract the group key from the include
include_content = include.split(" ")[1]
# fix common system includes which are secretly postgres or citus includes
for common_prefix in common_system_include_error_prefixes:
if include_content.startswith(common_prefix):
include_content = '"' + include_content.strip()[1:-1] + '"'
include = include.split(" ")[0] + " " + include_content + "\n"
break
group_key = default_group_key
for matcher in matchers:
if matcher["matcher"](include_content):
group_key = matcher["group_key"]
break
groups[group_key].add(include)
# iterate over all groups in the natural order of its keys
for i, group in enumerate(sorted(groups.items())):
if i > 0:
print(file=file)
includes = group[1]
print("".join(sorted(includes)), end="", file=file)
if __name__ == "__main__":
main(sys.argv)
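The deleted script boils down to one idea: each #include is assigned a group key by the first matcher that accepts it, and the groups are then printed in ascending key order, separated by blank lines and sorted alphabetically within each group. The following is an editorial sketch of that idea, not code from the repository; it runs on an in-memory list and mirrors only a few of the matchers shown above.
```python
#!/usr/bin/env python3
# Simplified sketch of the include-grouping idea (illustration only).
import collections

sample = [
    '#include "distributed/listutils.h"\n',
    '#include "columnar/columnar.h"\n',
    '#include "postgres.h"\n',
    '#include "citus_version.h"\n',
    '#include "miscadmin.h"\n',
]


def group_key(content):
    """Map the text after '#include ' to a group key; smaller keys print first."""
    if content.strip() == '"postgres.h"':
        return -1
    if content.strip() in ['"citus_version.h"', '"pg_version_compat.h"']:
        return 3
    if content.startswith('"columnar/'):
        return 4
    if content.startswith('"distributed/'):
        return 5
    if "/" not in content:
        return 0  # toplevel postgres headers such as miscadmin.h
    return 1  # default group


groups = collections.defaultdict(set)
for line in sample:
    groups[group_key(line.split(" ", 1)[1])].add(line)

# print groups in key order, one blank line between groups, sorted within a group
for i, (_, lines) in enumerate(sorted(groups.items())):
    if i > 0:
        print()
    print("".join(sorted(lines)), end="")
```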

View File

@ -1,25 +0,0 @@
#!/bin/bash
set -euo pipefail
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# find all core files
core_files=( $(find . -type f -regex .*core.*\d*.*postgres) )
if [ ${#core_files[@]} -gt 0 ]; then
# print stack traces for the core files
for core_file in "${core_files[@]}"
do
# set print frame-arguments all: show all scalars + structures in the frame
# set print pretty on: show structures in indented mode
# set print addr off: do not show pointer address
# thread apply all bt full: show stack traces for all threads
gdb --batch \
-ex "set print frame-arguments all" \
-ex "set print pretty on" \
-ex "set print addr off" \
-ex "thread apply all bt full" \
postgres "${core_file}"
done
fi

View File

@ -1,12 +0,0 @@
#!/bin/bash
set -euo pipefail
# shellcheck disable=SC1091
source ci/ci_helpers.sh
git ls-files \
| git check-attr --stdin citus-style \
| grep 'citus-style: set' \
| awk '{print $1}' \
| cut -d':' -f1 \
| xargs -n1 ./ci/include_grouping.py
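For reference, a rough Python equivalent of this pipeline; this is an editorial sketch rather than a script shipped in ci/, and it assumes a git checkout in which the citus-style attribute is configured and ci/include_grouping.py exists.
```python
#!/usr/bin/env python3
# Sketch: run include_grouping.py over every file whose citus-style attribute is set.
import subprocess

tracked = subprocess.run(
    ["git", "ls-files"], capture_output=True, text=True, check=True
).stdout

attrs = subprocess.run(
    ["git", "check-attr", "--stdin", "citus-style"],
    input=tracked, capture_output=True, text=True, check=True,
).stdout

for line in attrs.splitlines():
    # git check-attr prints "<path>: citus-style: <value>"
    if line.endswith(": citus-style: set"):
        path = line.split(": citus-style:")[0]
        subprocess.run(["./ci/include_grouping.py", path], check=True)
```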

View File

(binary image file; 94 KiB both before and after)

View File

(binary image file; 22 KiB both before and after)

View File

(binary image file; 18 KiB both before and after)

configure (vendored)
View File

@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for Citus 14.0devel.
# Generated by GNU Autoconf 2.69 for Citus 11.1.1.
#
#
# Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc.
@ -579,8 +579,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='Citus'
PACKAGE_TARNAME='citus'
PACKAGE_VERSION='14.0devel'
PACKAGE_STRING='Citus 14.0devel'
PACKAGE_VERSION='11.1.1'
PACKAGE_STRING='Citus 11.1.1'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
@ -1262,7 +1262,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures Citus 14.0devel to adapt to many kinds of systems.
\`configure' configures Citus 11.1.1 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@ -1324,7 +1324,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of Citus 14.0devel:";;
short | recursive ) echo "Configuration of Citus 11.1.1:";;
esac
cat <<\_ACEOF
@ -1429,7 +1429,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
Citus configure 14.0devel
Citus configure 11.1.1
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@ -1912,7 +1912,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by Citus $as_me 14.0devel, which was
It was created by Citus $as_me 11.1.1, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@ -2588,7 +2588,7 @@ fi
if test "$with_pg_version_check" = no; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: building against PostgreSQL $version_num (skipped compatibility check)" >&5
$as_echo "$as_me: building against PostgreSQL $version_num (skipped compatibility check)" >&6;}
elif test "$version_num" != '15' -a "$version_num" != '16' -a "$version_num" != '17'; then
elif test "$version_num" != '13' -a "$version_num" != '14' -a "$version_num" != '15'; then
as_fn_error $? "Citus is not compatible with the detected PostgreSQL version ${version_num}." "$LINENO" 5
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: building against PostgreSQL $version_num" >&5
@ -5393,7 +5393,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by Citus $as_me 14.0devel, which was
This file was extended by Citus $as_me 11.1.1, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@ -5455,7 +5455,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
Citus config.status 14.0devel
Citus config.status 11.1.1
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"

View File

@ -5,7 +5,7 @@
# everyone needing autoconf installed, the resulting files are checked
# into the SCM.
AC_INIT([Citus], [14.0devel])
AC_INIT([Citus], [11.1.1])
AC_COPYRIGHT([Copyright (c) Citus Data, Inc.])
# we'll need sed and awk for some of the version commands
@ -80,7 +80,7 @@ AC_SUBST(with_pg_version_check)
if test "$with_pg_version_check" = no; then
AC_MSG_NOTICE([building against PostgreSQL $version_num (skipped compatibility check)])
elif test "$version_num" != '15' -a "$version_num" != '16' -a "$version_num" != '17'; then
elif test "$version_num" != '13' -a "$version_num" != '14' -a "$version_num" != '15'; then
AC_MSG_ERROR([Citus is not compatible with the detected PostgreSQL version ${version_num}.])
else
AC_MSG_NOTICE([building against PostgreSQL $version_num])

(eight binary image files, diffs not shown; previous sizes: 95 KiB, 22 KiB, 102 KiB, 29 KiB, 69 KiB, 111 KiB, 12 KiB, 168 KiB)

View File

@ -1,40 +0,0 @@
[tool.isort]
profile = 'black'
[tool.black]
include = '(src/test/regress/bin/diff-filter|\.pyi?|\.ipynb)$'
[tool.pytest.ini_options]
addopts = [
"--import-mode=importlib",
"--showlocals",
"--tb=short",
]
pythonpath = 'src/test/regress/citus_tests'
asyncio_mode = 'auto'
# Make test discovery quicker from the root dir of the repo
testpaths = ['src/test/regress/citus_tests/test']
# Make test discovery quicker from other directories than root directory
norecursedirs = [
'*.egg',
'.*',
'build',
'venv',
'ci',
'vendor',
'backend',
'bin',
'include',
'tmp_*',
'results',
'expected',
'sql',
'spec',
'data',
'__pycache__',
]
# Don't find files with test at the end such as run_test.py
python_files = ['test_*.py']
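As a concrete illustration of the discovery rules above (hypothetical file, not part of the repository): a module named test_example.py under src/test/regress/citus_tests/test would be collected, while a run_test.py next to it would not, because only test_*.py names are considered.
```python
# Hypothetical src/test/regress/citus_tests/test/test_example.py
# Collected because its name matches python_files = ['test_*.py'] and it lives
# under the configured testpaths directory.
def test_placeholder():
    assert 1 + 1 == 2
```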

View File

@ -1,3 +0,0 @@
# The directory used to store columnar sql files after pre-processing them
# with 'cpp' in build-time, see src/backend/columnar/Makefile.
/build/

View File

@ -10,51 +10,14 @@ OBJS += \
MODULE_big = citus_columnar
EXTENSION = citus_columnar
template_sql_files = $(patsubst $(citus_abs_srcdir)/%,%,$(wildcard $(citus_abs_srcdir)/sql/*.sql))
template_downgrade_sql_files = $(patsubst $(citus_abs_srcdir)/sql/downgrades/%,%,$(wildcard $(citus_abs_srcdir)/sql/downgrades/*.sql))
generated_sql_files = $(patsubst %,$(citus_abs_srcdir)/build/%,$(template_sql_files))
generated_downgrade_sql_files += $(patsubst %,$(citus_abs_srcdir)/build/sql/%,$(template_downgrade_sql_files))
DATA_built = $(generated_sql_files)
columnar_sql_files = $(patsubst $(citus_abs_srcdir)/%,%,$(wildcard $(citus_abs_srcdir)/sql/*.sql))
columnar_downgrade_sql_files = $(patsubst $(citus_abs_srcdir)/%,%,$(wildcard $(citus_abs_srcdir)/sql/downgrades/*.sql))
DATA = $(columnar_sql_files) \
$(columnar_downgrade_sql_files)
PG_CPPFLAGS += -I$(libpq_srcdir) -I$(safestringlib_srcdir)/include
include $(citus_top_builddir)/Makefile.global
SQL_DEPDIR=.deps/sql
SQL_BUILDDIR=build/sql
$(generated_sql_files): $(citus_abs_srcdir)/build/%: %
@mkdir -p $(citus_abs_srcdir)/$(SQL_DEPDIR) $(citus_abs_srcdir)/$(SQL_BUILDDIR)
@# -MF is used to store dependency files(.Po) in another directory for separation
@# -MT is used to change the target of the rule emitted by dependency generation.
@# -P is used to inhibit generation of linemarkers in the output from the preprocessor.
@# -undef is used to not predefine any system-specific or GCC-specific macros.
@# `man cpp` for further information
cd $(citus_abs_srcdir) && cpp -undef -w -P -MMD -MP -MF$(SQL_DEPDIR)/$(*F).Po -MT$@ $< > $@
$(generated_downgrade_sql_files): $(citus_abs_srcdir)/build/sql/%: sql/downgrades/%
@mkdir -p $(citus_abs_srcdir)/$(SQL_DEPDIR) $(citus_abs_srcdir)/$(SQL_BUILDDIR)
@# -MF is used to store dependency files(.Po) in another directory for separation
@# -MT is used to change the target of the rule emitted by dependency generation.
@# -P is used to inhibit generation of linemarkers in the output from the preprocessor.
@# -undef is used to not predefine any system-specific or GCC-specific macros.
@# `man cpp` for further information
cd $(citus_abs_srcdir) && cpp -undef -w -P -MMD -MP -MF$(SQL_DEPDIR)/$(*F).Po -MT$@ $< > $@
.PHONY: install install-downgrades install-all
cleanup-before-install:
rm -f $(DESTDIR)$(datadir)/$(datamoduledir)/citus_columnar.control
rm -f $(DESTDIR)$(datadir)/$(datamoduledir)/columnar--*
rm -f $(DESTDIR)$(datadir)/$(datamoduledir)/citus_columnar--*
install: cleanup-before-install
# install and install-downgrades should be run sequentially
.PHONY: install-all
install-all: install
$(MAKE) install-downgrades
install-downgrades: $(generated_downgrade_sql_files)
$(INSTALL_DATA) $(generated_downgrade_sql_files) '$(DESTDIR)$(datadir)/$(datamoduledir)/'

View File

@ -52,7 +52,8 @@ Benefits of Citus Columnar over cstore_fdw:
... FOR UPDATE``)
* No support for serializable isolation level
* Support for PostgreSQL server versions 12+ only
* No support for foreign keys
* No support for foreign keys, unique constraints, or exclusion
constraints
* No support for logical decoding
* No support for intra-node parallel scans
* No support for ``AFTER ... FOR EACH ROW`` triggers
@ -233,14 +234,16 @@ CREATE TABLE perf_columnar(LIKE perf_row) USING COLUMNAR;
## Data
```sql
CREATE OR REPLACE FUNCTION random_words(n INT4) RETURNS TEXT LANGUAGE sql AS $$
WITH words(w) AS (
SELECT ARRAY['zero','one','two','three','four','five','six','seven','eight','nine','ten']
),
random (word) AS (
SELECT w[(random()*array_length(w, 1))::int] FROM generate_series(1, $1) AS i, words
)
SELECT string_agg(word, ' ') FROM random;
CREATE OR REPLACE FUNCTION random_words(n INT4) RETURNS TEXT LANGUAGE plpython2u AS $$
import random
t = ''
words = ['zero','one','two','three','four','five','six','seven','eight','nine','ten']
for i in xrange(0,n):
if (i != 0):
t += ' '
r = random.randint(0,len(words)-1)
t += words[r]
return t
$$;
```

View File

@ -1,6 +1,6 @@
# Columnar extension
comment = 'Citus Columnar extension'
default_version = '14.0-1'
default_version = '11.1-1'
module_pathname = '$libdir/citus_columnar'
relocatable = false
schema = pg_catalog

View File

@ -11,18 +11,16 @@
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include <sys/stat.h>
#include <unistd.h>
#include "postgres.h"
#include "miscadmin.h"
#include "utils/guc.h"
#include "utils/rel.h"
#include "citus_version.h"
#include "columnar/columnar.h"
#include "columnar/columnar_tableam.h"

View File

@ -13,22 +13,16 @@
*/
#include "postgres.h"
#include "citus_version.h"
#include "common/pg_lzcompress.h"
#include "lib/stringinfo.h"
#include "citus_version.h"
#include "pg_version_constants.h"
#include "columnar/columnar_compression.h"
#if HAVE_CITUS_LIBLZ4
#include <lz4.h>
#endif
#if PG_VERSION_NUM >= PG_VERSION_16
#include "varatt.h"
#endif
#if HAVE_LIBZSTD
#include <zstd.h>
#endif

View File

@ -10,24 +10,18 @@
*-------------------------------------------------------------------------
*/
#include <math.h>
#include "citus_version.h"
#include "postgres.h"
#include "miscadmin.h"
#include <math.h>
#include "access/amapi.h"
#include "access/skey.h"
#include "catalog/pg_am.h"
#include "catalog/pg_statistic.h"
#include "commands/defrem.h"
#include "columnar/columnar_version_compat.h"
#if PG_VERSION_NUM >= PG_VERSION_18
#include "commands/explain_format.h"
#endif
#include "executor/executor.h" /* for ExecInitExprWithParams(), ExecEvalExpr() */
#include "nodes/execnodes.h" /* for ExprState, ExprContext, etc. */
#include "miscadmin.h"
#include "nodes/extensible.h"
#include "nodes/makefuncs.h"
#include "nodes/nodeFuncs.h"
@ -39,10 +33,6 @@
#include "optimizer/paths.h"
#include "optimizer/plancat.h"
#include "optimizer/restrictinfo.h"
#if PG_VERSION_NUM >= PG_VERSION_16
#include "parser/parse_relation.h"
#include "parser/parsetree.h"
#endif
#include "utils/builtins.h"
#include "utils/lsyscache.h"
#include "utils/relcache.h"
@ -50,13 +40,10 @@
#include "utils/selfuncs.h"
#include "utils/spccache.h"
#include "citus_version.h"
#include "columnar/columnar.h"
#include "columnar/columnar_customscan.h"
#include "columnar/columnar_metadata.h"
#include "columnar/columnar_tableam.h"
#include "distributed/listutils.h"
/*
@ -140,9 +127,6 @@ static List * set_deparse_context_planstate(List *dpcontext, Node *node,
/* other helpers */
static List * ColumnarVarNeeded(ColumnarScanState *columnarScanState);
static Bitmapset * ColumnarAttrNeeded(ScanState *ss);
#if PG_VERSION_NUM >= PG_VERSION_16
static Bitmapset * fixup_inherited_columns(Oid parentId, Oid childId, Bitmapset *columns);
#endif
/* saved hook value in case of unload */
static set_rel_pathlist_hook_type PreviousSetRelPathlistHook = NULL;
@ -214,7 +198,7 @@ columnar_customscan_init()
&EnableColumnarCustomScan,
true,
PGC_USERSET,
GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE,
GUC_NO_SHOW_ALL,
NULL, NULL, NULL);
DefineCustomBoolVariable(
"columnar.enable_qual_pushdown",
@ -224,7 +208,7 @@ columnar_customscan_init()
&EnableColumnarQualPushdown,
true,
PGC_USERSET,
GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE,
GUC_NO_SHOW_ALL,
NULL, NULL, NULL);
DefineCustomRealVariable(
"columnar.qual_pushdown_correlation_threshold",
@ -238,7 +222,7 @@ columnar_customscan_init()
0.0,
1.0,
PGC_USERSET,
GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE,
GUC_NO_SHOW_ALL,
NULL, NULL, NULL);
DefineCustomIntVariable(
"columnar.max_custom_scan_paths",
@ -250,7 +234,7 @@ columnar_customscan_init()
1,
1024,
PGC_USERSET,
GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE,
GUC_NO_SHOW_ALL,
NULL, NULL, NULL);
DefineCustomEnumVariable(
"columnar.planner_debug_level",
@ -370,7 +354,7 @@ ColumnarGetRelationInfoHook(PlannerInfo *root, Oid relationObjectId,
/* disable index-only scan */
IndexOptInfo *indexOptInfo = NULL;
foreach_declared_ptr(indexOptInfo, rel->indexlist)
foreach_ptr(indexOptInfo, rel->indexlist)
{
memset(indexOptInfo->canreturn, false, indexOptInfo->ncolumns * sizeof(bool));
}
@ -388,7 +372,7 @@ RemovePathsByPredicate(RelOptInfo *rel, PathPredicate removePathPredicate)
List *filteredPathList = NIL;
Path *path = NULL;
foreach_declared_ptr(path, rel->pathlist)
foreach_ptr(path, rel->pathlist)
{
if (!removePathPredicate(path))
{
@ -435,7 +419,7 @@ static void
CostColumnarPaths(PlannerInfo *root, RelOptInfo *rel, Oid relationId)
{
Path *path = NULL;
foreach_declared_ptr(path, rel->pathlist)
foreach_ptr(path, rel->pathlist)
{
if (IsA(path, IndexPath))
{
@ -551,7 +535,7 @@ ColumnarIndexScanAdditionalCost(PlannerInfo *root, RelOptInfo *rel,
* "anti-correlated" (-1) since both help us avoiding from reading the
* same stripe again and again.
*/
double absIndexCorrelation = float_abs(indexCorrelation);
double absIndexCorrelation = Abs(indexCorrelation);
/*
* To estimate the number of stripes that we need to read, we do linear
@ -670,7 +654,7 @@ CheckVarStats(PlannerInfo *root, Var *var, Oid sortop, float4 *absVarCorrelation
* If the Var is not highly correlated, then the chunk's min/max bounds
* will be nearly useless.
*/
if (float_abs(varCorrelation) < ColumnarQualPushdownCorrelationThreshold)
if (Abs(varCorrelation) < ColumnarQualPushdownCorrelationThreshold)
{
if (absVarCorrelation)
{
@ -678,7 +662,7 @@ CheckVarStats(PlannerInfo *root, Var *var, Oid sortop, float4 *absVarCorrelation
* Report absVarCorrelation if caller wants to know why given
* var is rejected.
*/
*absVarCorrelation = float_abs(varCorrelation);
*absVarCorrelation = Abs(varCorrelation);
}
return false;
}
@ -790,7 +774,7 @@ ExtractPushdownClause(PlannerInfo *root, RelOptInfo *rel, Node *node)
List *pushdownableArgs = NIL;
Node *boolExprArg = NULL;
foreach_declared_ptr(boolExprArg, boolExpr->args)
foreach_ptr(boolExprArg, boolExpr->args)
{
Expr *pushdownableArg = ExtractPushdownClause(root, rel,
(Node *) boolExprArg);
@ -1058,15 +1042,6 @@ FindCandidateRelids(PlannerInfo *root, RelOptInfo *rel, List *joinClauses)
candidateRelids = bms_del_members(candidateRelids, rel->relids);
candidateRelids = bms_del_members(candidateRelids, rel->lateral_relids);
/*
* For the relevant PG16 commit requiring this addition:
* postgres/postgres@2489d76
*/
#if PG_VERSION_NUM >= PG_VERSION_16
candidateRelids = bms_del_members(candidateRelids, root->outer_join_rels);
#endif
return candidateRelids;
}
@ -1328,8 +1303,11 @@ AddColumnarScanPath(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte,
cpath->methods = &ColumnarScanPathMethods;
#if (PG_VERSION_NUM >= PG_VERSION_15)
/* necessary to avoid extra Result node in PG15 */
cpath->flags = CUSTOMPATH_SUPPORT_PROJECTION;
#endif
/*
* populate generic path information
@ -1393,43 +1371,7 @@ AddColumnarScanPath(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte,
cpath->custom_private = list_make2(NIL, NIL);
}
int numberOfColumnsRead = 0;
#if PG_VERSION_NUM >= PG_VERSION_16
if (rte->perminfoindex > 0)
{
/*
* If perminfoindex > 0, that means that this relation's permission info
* is directly found in the list of rteperminfos of the Query(root->parse)
* So, all we have to do here is retrieve that info.
*/
RTEPermissionInfo *perminfo = getRTEPermissionInfo(root->parse->rteperminfos,
rte);
numberOfColumnsRead = bms_num_members(perminfo->selectedCols);
}
else
{
/*
* If perminfoindex = 0, that means we are skipping the check for permission info
* for this relation, which means that it's either a partition or an inheritance child.
* In these cases, we need to access the permission info of the top parent of this relation.
* After thorough checking, we found that the index of the top parent pointing to the correct
* range table entry in Query's range tables (root->parse->rtable) is found under
* RelOptInfo rel->top_parent->relid.
* For reference, check expand_partitioned_rtentry and expand_inherited_rtentry PG functions
*/
Assert(rel->top_parent);
RangeTblEntry *parent_rte = rt_fetch(rel->top_parent->relid, root->parse->rtable);
RTEPermissionInfo *perminfo = getRTEPermissionInfo(root->parse->rteperminfos,
parent_rte);
numberOfColumnsRead = bms_num_members(fixup_inherited_columns(perminfo->relid,
rte->relid,
perminfo->
selectedCols));
}
#else
numberOfColumnsRead = bms_num_members(rte->selectedCols);
#endif
int numberOfColumnsRead = bms_num_members(rte->selectedCols);
int numberOfClausesPushed = list_length(allClauses);
CostColumnarScan(root, rel, rte->relid, cpath, numberOfColumnsRead,
@ -1449,69 +1391,6 @@ AddColumnarScanPath(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte,
}
#if PG_VERSION_NUM >= PG_VERSION_16
/*
* fixup_inherited_columns
*
* Exact function Copied from PG16 as it's static.
*
* When user is querying on a table with children, it implicitly accesses
* child tables also. So, we also need to check security label of child
* tables and columns, but there is no guarantee attribute numbers are
* same between the parent and children.
* It returns a bitmapset which contains attribute number of the child
* table based on the given bitmapset of the parent.
*/
static Bitmapset *
fixup_inherited_columns(Oid parentId, Oid childId, Bitmapset *columns)
{
Bitmapset *result = NULL;
/*
* obviously, no need to do anything here
*/
if (parentId == childId)
{
return columns;
}
int index = -1;
while ((index = bms_next_member(columns, index)) >= 0)
{
/* bit numbers are offset by FirstLowInvalidHeapAttributeNumber */
AttrNumber attno = index + FirstLowInvalidHeapAttributeNumber;
/*
* whole-row-reference shall be fixed-up later
*/
if (attno == InvalidAttrNumber)
{
result = bms_add_member(result, index);
continue;
}
char *attname = get_attname(parentId, attno, false);
attno = get_attnum(childId, attname);
if (attno == InvalidAttrNumber)
{
elog(ERROR, "cache lookup failed for attribute %s of relation %u",
attname, childId);
}
result = bms_add_member(result,
attno - FirstLowInvalidHeapAttributeNumber);
pfree(attname);
}
return result;
}
#endif
/*
* CostColumnarScan calculates the cost of scanning the columnar table. The
* cost is estimated by using all stripe metadata to estimate based on the
@ -1556,13 +1435,13 @@ ColumnarPerStripeScanCost(RelOptInfo *rel, Oid relationId, int numberOfColumnsRe
ereport(ERROR, (errmsg("could not open relation with OID %u", relationId)));
}
List *stripeList = StripesForRelfilelocator(relation);
List *stripeList = StripesForRelfilenode(relation->rd_node);
RelationClose(relation);
uint32 maxColumnCount = 0;
uint64 totalStripeSize = 0;
StripeMetadata *stripeMetadata = NULL;
foreach_declared_ptr(stripeMetadata, stripeList)
foreach_ptr(stripeMetadata, stripeList)
{
totalStripeSize += stripeMetadata->dataLength;
maxColumnCount = Max(maxColumnCount, stripeMetadata->columnCount);
@ -1613,7 +1492,7 @@ ColumnarTableStripeCount(Oid relationId)
ereport(ERROR, (errmsg("could not open relation with OID %u", relationId)));
}
List *stripeList = StripesForRelfilelocator(relation);
List *stripeList = StripesForRelfilenode(relation->rd_node);
int stripeCount = list_length(stripeList);
RelationClose(relation);
@ -1935,6 +1814,11 @@ ColumnarScan_EndCustomScan(CustomScanState *node)
*/
TableScanDesc scanDesc = node->ss.ss_currentScanDesc;
/*
* Free the exprcontext
*/
ExecFreeExprContext(&node->ss.ps);
/*
* clean out the tuple table
*/

View File

@ -11,12 +11,12 @@
#include "postgres.h"
#include "funcapi.h"
#include "miscadmin.h"
#include "access/nbtree.h"
#include "access/table.h"
#include "catalog/pg_am.h"
#include "catalog/pg_type.h"
#include "distributed/pg_version_constants.h"
#include "miscadmin.h"
#include "storage/fd.h"
#include "storage/smgr.h"
#include "utils/guc.h"
@ -25,8 +25,6 @@
#include "utils/tuplestore.h"
#include "pg_version_compat.h"
#include "pg_version_constants.h"
#include "columnar/columnar.h"
#include "columnar/columnar_storage.h"
#include "columnar/columnar_version_compat.h"
@ -161,5 +159,5 @@ MemoryContextTotals(MemoryContext context, MemoryContextCounters *counters)
MemoryContextTotals(child, counters);
}
context->methods->stats(context, NULL, NULL, counters, true);
context->methods->stats_compat(context, NULL, NULL, counters, true);
}

File diff suppressed because it is too large

View File

@ -22,15 +22,16 @@
#include "access/xact.h"
#include "catalog/pg_am.h"
#include "commands/defrem.h"
#include "distributed/listutils.h"
#include "nodes/makefuncs.h"
#include "nodes/nodeFuncs.h"
#include "optimizer/clauses.h"
#include "optimizer/optimizer.h"
#include "optimizer/clauses.h"
#include "optimizer/restrictinfo.h"
#include "storage/fd.h"
#include "utils/guc.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/lsyscache.h"
#include "utils/rel.h"
#include "columnar/columnar.h"
@ -38,8 +39,6 @@
#include "columnar/columnar_tableam.h"
#include "columnar/columnar_version_compat.h"
#include "distributed/listutils.h"
#define UNEXPECTED_STRIPE_READ_ERR_MSG \
"attempted to read an unexpected stripe while reading columnar " \
"table %s, stripe with id=" UINT64_FORMAT " is not flushed"
@ -255,9 +254,8 @@ ColumnarReadFlushPendingWrites(ColumnarReadState *readState)
{
Assert(!readState->snapshotRegisteredByUs);
RelFileNumber relfilenumber = RelationPhysicalIdentifierNumber_compat(
RelationPhysicalIdentifier_compat(readState->relation));
FlushWriteStateForRelfilenumber(relfilenumber, GetCurrentSubTransactionId());
Oid relfilenode = readState->relation->rd_node.relNode;
FlushWriteStateForRelfilenode(relfilenode, GetCurrentSubTransactionId());
if (readState->snapshot == InvalidSnapshot || !IsMVCCSnapshot(readState->snapshot))
{
@ -758,10 +756,8 @@ SnapshotMightSeeUnflushedStripes(Snapshot snapshot)
}
default:
{
return false;
}
}
}
@ -882,7 +878,7 @@ ReadChunkGroupNextRow(ChunkGroupReadState *chunkGroupReadState, Datum *columnVal
memset(columnNulls, true, sizeof(bool) * chunkGroupReadState->columnCount);
int attno;
foreach_declared_int(attno, chunkGroupReadState->projectedColumnList)
foreach_int(attno, chunkGroupReadState->projectedColumnList)
{
const ChunkData *chunkGroupData = chunkGroupReadState->chunkGroupData;
const int rowIndex = chunkGroupReadState->currentRow;
@ -988,7 +984,7 @@ ColumnarTableRowCount(Relation relation)
{
ListCell *stripeMetadataCell = NULL;
uint64 totalRowCount = 0;
List *stripeList = StripesForRelfilelocator(relation);
List *stripeList = StripesForRelfilenode(relation->rd_node);
foreach(stripeMetadataCell, stripeList)
{
@ -1016,7 +1012,7 @@ LoadFilteredStripeBuffers(Relation relation, StripeMetadata *stripeMetadata,
bool *projectedColumnMask = ProjectedColumnMask(columnCount, projectedColumnList);
StripeSkipList *stripeSkipList = ReadStripeSkipList(relation,
StripeSkipList *stripeSkipList = ReadStripeSkipList(relation->rd_node,
stripeMetadata->id,
tupleDescriptor,
stripeMetadata->chunkCount,
@ -1489,7 +1485,7 @@ ProjectedColumnMask(uint32 columnCount, List *projectedColumnList)
bool *projectedColumnMask = palloc0(columnCount * sizeof(bool));
int attno;
foreach_declared_int(attno, projectedColumnList)
foreach_int(attno, projectedColumnList)
{
/* attno is 1-indexed; projectedColumnMask is 0-indexed */
int columnIndex = attno - 1;
@ -1561,7 +1557,7 @@ DeserializeDatumArray(StringInfo datumBuffer, bool *existsArray, uint32 datumCou
datumTypeLength);
currentDatumDataOffset = att_addlength_datum(currentDatumDataOffset,
datumTypeLength,
datumArray[datumIndex]);
currentDatumDataPointer);
currentDatumDataOffset = att_align_nominal(currentDatumDataOffset,
datumTypeAlign);

View File

@ -36,11 +36,11 @@
#include "postgres.h"
#include "miscadmin.h"
#include "safe_lib.h"
#include "access/generic_xlog.h"
#include "catalog/storage.h"
#include "miscadmin.h"
#include "storage/bufmgr.h"
#include "storage/lmgr.h"
@ -169,11 +169,7 @@ ColumnarStorageInit(SMgrRelation srel, uint64 storageId)
}
/* create two pages */
#if PG_VERSION_NUM >= PG_VERSION_16
PGIOAlignedBlock block;
#else
PGAlignedBlock block;
#endif
Page page = block.data;
/* write metapage */
@ -192,7 +188,7 @@ ColumnarStorageInit(SMgrRelation srel, uint64 storageId)
(char *) &metapage, sizeof(ColumnarMetapage));
phdr->pd_lower += sizeof(ColumnarMetapage);
log_newpage(RelationPhysicalIdentifierBackend_compat(&srel), MAIN_FORKNUM,
log_newpage(&srel->smgr_rnode.node, MAIN_FORKNUM,
COLUMNAR_METAPAGE_BLOCKNO, page, true);
PageSetChecksumInplace(page, COLUMNAR_METAPAGE_BLOCKNO);
smgrextend(srel, MAIN_FORKNUM, COLUMNAR_METAPAGE_BLOCKNO, page, true);
@ -200,7 +196,7 @@ ColumnarStorageInit(SMgrRelation srel, uint64 storageId)
/* write empty page */
PageInit(page, BLCKSZ, 0);
log_newpage(RelationPhysicalIdentifierBackend_compat(&srel), MAIN_FORKNUM,
log_newpage(&srel->smgr_rnode.node, MAIN_FORKNUM,
COLUMNAR_EMPTY_BLOCKNO, page, true);
PageSetChecksumInplace(page, COLUMNAR_EMPTY_BLOCKNO);
smgrextend(srel, MAIN_FORKNUM, COLUMNAR_EMPTY_BLOCKNO, page, true);
@ -547,8 +543,7 @@ ColumnarStorageTruncate(Relation rel, uint64 newDataReservation)
if (!ColumnarLogicalOffsetIsValid(newDataReservation))
{
elog(ERROR,
"attempted to truncate relation %d to "
"invalid logical offset: " UINT64_FORMAT,
"attempted to truncate relation %d to invalid logical offset: " UINT64_FORMAT,
rel->rd_id, newDataReservation);
}

View File

@ -1,38 +1,41 @@
#include <math.h>
#include "citus_version.h"
#include "postgres.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "safe_lib.h"
#include <math.h>
#include "miscadmin.h"
#include "access/detoast.h"
#include "access/genam.h"
#include "access/heapam.h"
#include "access/multixact.h"
#include "access/rewriteheap.h"
#include "access/tableam.h"
#include "access/tsmapi.h"
#include "access/detoast.h"
#include "access/xact.h"
#include "catalog/catalog.h"
#include "catalog/index.h"
#include "catalog/namespace.h"
#include "catalog/objectaccess.h"
#include "catalog/pg_am.h"
#include "catalog/pg_extension.h"
#include "catalog/pg_publication.h"
#include "catalog/pg_trigger.h"
#include "catalog/pg_extension.h"
#include "catalog/storage.h"
#include "catalog/storage_xlog.h"
#include "commands/defrem.h"
#include "commands/extension.h"
#include "commands/progress.h"
#include "commands/vacuum.h"
#include "commands/extension.h"
#include "executor/executor.h"
#include "nodes/makefuncs.h"
#include "optimizer/plancat.h"
#include "pgstat.h"
#include "safe_lib.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"
#include "storage/bufmgr.h"
#include "storage/lmgr.h"
#include "storage/predicate.h"
#include "storage/procarray.h"
@ -40,22 +43,17 @@
#include "tcop/utility.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/pg_rusage.h"
#include "utils/rel.h"
#include "utils/relcache.h"
#include "utils/lsyscache.h"
#include "utils/syscache.h"
#include "citus_version.h"
#include "pg_version_compat.h"
#include "columnar/columnar.h"
#include "columnar/columnar_customscan.h"
#include "columnar/columnar_storage.h"
#include "columnar/columnar_tableam.h"
#include "columnar/columnar_version_compat.h"
#include "distributed/listutils.h"
/*
@ -117,7 +115,9 @@ static RangeVar * ColumnarProcessAlterTable(AlterTableStmt *alterTableStmt,
List **columnarOptions);
static void ColumnarProcessUtility(PlannedStmt *pstmt,
const char *queryString,
#if PG_VERSION_NUM >= PG_VERSION_14
bool readOnlyTree,
#endif
ProcessUtilityContext context,
ParamListInfo params,
struct QueryEnvironment *queryEnv,
@ -208,8 +208,7 @@ columnar_beginscan_extended(Relation relation, Snapshot snapshot,
uint32 flags, Bitmapset *attr_needed, List *scanQual)
{
CheckCitusColumnarVersion(ERROR);
RelFileNumber relfilenumber = RelationPhysicalIdentifierNumber_compat(
RelationPhysicalIdentifier_compat(relation));
Oid relfilenode = relation->rd_node.relNode;
/*
* A memory context to use for scan-wide data, including the lazily
@ -239,7 +238,7 @@ columnar_beginscan_extended(Relation relation, Snapshot snapshot,
scan->scanQual = copyObject(scanQual);
scan->scanContext = scanContext;
if (PendingWritesInUpperTransactions(relfilenumber, GetCurrentSubTransactionId()))
if (PendingWritesInUpperTransactions(relfilenode, GetCurrentSubTransactionId()))
{
elog(ERROR,
"cannot read from table when there is unflushed data in upper transactions");
@ -435,9 +434,8 @@ columnar_index_fetch_begin(Relation rel)
{
CheckCitusColumnarVersion(ERROR);
RelFileNumber relfilenumber = RelationPhysicalIdentifierNumber_compat(
RelationPhysicalIdentifier_compat(rel));
if (PendingWritesInUpperTransactions(relfilenumber, GetCurrentSubTransactionId()))
Oid relfilenode = rel->rd_node.relNode;
if (PendingWritesInUpperTransactions(relfilenode, GetCurrentSubTransactionId()))
{
/* XXX: maybe we can just flush the data and continue */
elog(ERROR, "cannot read from index when there is unflushed data in "
@ -667,6 +665,7 @@ columnar_tuple_satisfies_snapshot(Relation rel, TupleTableSlot *slot,
}
#if PG_VERSION_NUM >= PG_VERSION_14
static TransactionId
columnar_index_delete_tuples(Relation rel,
TM_IndexDeleteOp *delstate)
@ -715,6 +714,19 @@ columnar_index_delete_tuples(Relation rel,
}
#else
static TransactionId
columnar_compute_xid_horizon_for_tuples(Relation rel,
ItemPointerData *tids,
int nitems)
{
elog(ERROR, "columnar_compute_xid_horizon_for_tuples not implemented");
}
#endif
static void
columnar_tuple_insert(Relation relation, TupleTableSlot *slot, CommandId cid,
int options, BulkInsertState bistate)
@ -727,9 +739,7 @@ columnar_tuple_insert(Relation relation, TupleTableSlot *slot, CommandId cid,
*/
ColumnarWriteState *writeState = columnar_init_write_state(relation,
RelationGetDescr(relation),
slot->tts_tableOid,
GetCurrentSubTransactionId());
MemoryContext oldContext = MemoryContextSwitchTo(ColumnarWritePerTupleContext(
writeState));
@ -771,14 +781,8 @@ columnar_multi_insert(Relation relation, TupleTableSlot **slots, int ntuples,
{
CheckCitusColumnarVersion(ERROR);
/*
* The callback to .multi_insert is table_multi_insert() and this is only used for the COPY
* command, so slot[i]->tts_tableoid will always be equal to relation->id. Thus, we can send
* RelationGetRelid(relation) as the tupSlotTableOid
*/
ColumnarWriteState *writeState = columnar_init_write_state(relation,
RelationGetDescr(relation),
RelationGetRelid(relation),
GetCurrentSubTransactionId());
ColumnarCheckLogicalReplication(relation);
@ -819,7 +823,7 @@ static TM_Result
columnar_tuple_update(Relation relation, ItemPointer otid, TupleTableSlot *slot,
CommandId cid, Snapshot snapshot, Snapshot crosscheck,
bool wait, TM_FailureData *tmfd,
LockTupleMode *lockmode, TU_UpdateIndexes *update_indexes)
LockTupleMode *lockmode, bool *update_indexes)
{
elog(ERROR, "columnar_tuple_update not implemented");
}
@ -845,8 +849,8 @@ columnar_finish_bulk_insert(Relation relation, int options)
static void
columnar_relation_set_new_filelocator(Relation rel,
const RelFileLocator *newrlocator,
columnar_relation_set_new_filenode(Relation rel,
const RelFileNode *newrnode,
char persistence,
TransactionId *freezeXid,
MultiXactId *minmulti)
@ -865,19 +869,16 @@ columnar_relation_set_new_filelocator(Relation rel,
* state. If they are equal, this is a new relation object and we don't
* need to clean anything.
*/
if (RelationPhysicalIdentifierNumber_compat(RelationPhysicalIdentifier_compat(rel)) !=
RelationPhysicalIdentifierNumberPtr_compat(newrlocator))
if (rel->rd_node.relNode != newrnode->relNode)
{
MarkRelfilenumberDropped(RelationPhysicalIdentifierNumber_compat(
RelationPhysicalIdentifier_compat(rel)),
GetCurrentSubTransactionId());
MarkRelfilenodeDropped(rel->rd_node.relNode, GetCurrentSubTransactionId());
DeleteMetadataRows(rel);
DeleteMetadataRows(rel->rd_node);
}
*freezeXid = RecentXmin;
*minmulti = GetOldestMultiXactId();
SMgrRelation srel = RelationCreateStorage(*newrlocator, persistence, true);
SMgrRelation srel = RelationCreateStorage_compat(*newrnode, persistence, true);
ColumnarStorageInit(srel, ColumnarMetadataNewStorageId());
InitColumnarOptions(rel->rd_id);
@ -892,12 +893,12 @@ static void
columnar_relation_nontransactional_truncate(Relation rel)
{
CheckCitusColumnarVersion(ERROR);
RelFileLocator relfilelocator = RelationPhysicalIdentifier_compat(rel);
RelFileNode relfilenode = rel->rd_node;
NonTransactionDropWriteState(RelationPhysicalIdentifierNumber_compat(relfilelocator));
NonTransactionDropWriteState(relfilenode.relNode);
/* Delete old relfilenode metadata */
DeleteMetadataRows(rel);
DeleteMetadataRows(relfilenode);
/*
* No need to set new relfilenode, since the table was created in this
@ -914,7 +915,7 @@ columnar_relation_nontransactional_truncate(Relation rel)
static void
columnar_relation_copy_data(Relation rel, const RelFileLocator *newrnode)
columnar_relation_copy_data(Relation rel, const RelFileNode *newrnode)
{
elog(ERROR, "columnar_relation_copy_data not implemented");
}
@ -960,7 +961,7 @@ columnar_relation_copy_for_cluster(Relation OldHeap, Relation NewHeap,
ColumnarOptions columnarOptions = { 0 };
ReadColumnarOptions(OldHeap->rd_id, &columnarOptions);
ColumnarWriteState *writeState = ColumnarBeginWrite(NewHeap,
ColumnarWriteState *writeState = ColumnarBeginWrite(NewHeap->rd_node,
columnarOptions,
targetDesc);
@ -1011,7 +1012,7 @@ NeededColumnsList(TupleDesc tupdesc, Bitmapset *attr_needed)
for (int i = 0; i < tupdesc->natts; i++)
{
if (TupleDescAttr(tupdesc, i)->attisdropped)
if (tupdesc->attrs[i].attisdropped)
{
continue;
}
@ -1035,7 +1036,7 @@ NeededColumnsList(TupleDesc tupdesc, Bitmapset *attr_needed)
static uint64
ColumnarTableTupleCount(Relation relation)
{
List *stripeList = StripesForRelfilelocator(relation);
List *stripeList = StripesForRelfilenode(relation->rd_node);
uint64 tupleCount = 0;
ListCell *lc = NULL;
@ -1098,55 +1099,12 @@ columnar_vacuum_rel(Relation rel, VacuumParams *params,
List *indexList = RelationGetIndexList(rel);
int nindexes = list_length(indexList);
#if PG_VERSION_NUM >= PG_VERSION_16
struct VacuumCutoffs cutoffs;
vacuum_get_cutoffs(rel, params, &cutoffs);
Assert(MultiXactIdPrecedesOrEquals(cutoffs.MultiXactCutoff, cutoffs.OldestMxact));
Assert(TransactionIdPrecedesOrEquals(cutoffs.FreezeLimit, cutoffs.OldestXmin));
/*
* Columnar storage doesn't hold any transaction IDs, so we can always
* just advance to the most aggressive value.
*/
TransactionId newRelFrozenXid = cutoffs.OldestXmin;
MultiXactId newRelminMxid = cutoffs.OldestMxact;
double new_live_tuples = ColumnarTableTupleCount(rel);
/* all visible pages are always 0 */
BlockNumber new_rel_allvisible = 0;
bool frozenxid_updated;
bool minmulti_updated;
/* for PG 18+, vac_update_relstats gained a new “all_frozen” param */
#if PG_VERSION_NUM >= PG_VERSION_18
/* all frozen pages are always 0, because columnar stripes never store XIDs */
BlockNumber new_rel_allfrozen = 0;
vac_update_relstats(rel, new_rel_pages, new_live_tuples,
new_rel_allvisible, /* allvisible */
new_rel_allfrozen, /* all_frozen */
nindexes > 0,
newRelFrozenXid, newRelminMxid,
&frozenxid_updated, &minmulti_updated,
false);
#else
vac_update_relstats(rel, new_rel_pages, new_live_tuples,
new_rel_allvisible, nindexes > 0,
newRelFrozenXid, newRelminMxid,
&frozenxid_updated, &minmulti_updated,
false);
#endif
#else
TransactionId oldestXmin;
TransactionId freezeLimit;
MultiXactId multiXactCutoff;
/* initialize xids */
#if (PG_VERSION_NUM >= PG_VERSION_15) && (PG_VERSION_NUM < PG_VERSION_16)
#if PG_VERSION_NUM >= PG_VERSION_15
MultiXactId oldestMxact;
vacuum_set_xid_limits(rel,
params->freeze_min_age,
@ -1176,7 +1134,7 @@ columnar_vacuum_rel(Relation rel, VacuumParams *params,
* just advance to the most aggressive value.
*/
TransactionId newRelFrozenXid = oldestXmin;
#if (PG_VERSION_NUM >= PG_VERSION_15) && (PG_VERSION_NUM < PG_VERSION_16)
#if PG_VERSION_NUM >= PG_VERSION_15
MultiXactId newRelminMxid = oldestMxact;
#else
MultiXactId newRelminMxid = multiXactCutoff;
@ -1187,7 +1145,7 @@ columnar_vacuum_rel(Relation rel, VacuumParams *params,
/* all visible pages are always 0 */
BlockNumber new_rel_allvisible = 0;
#if (PG_VERSION_NUM >= PG_VERSION_15) && (PG_VERSION_NUM < PG_VERSION_16)
#if PG_VERSION_NUM >= PG_VERSION_15
bool frozenxid_updated;
bool minmulti_updated;
@ -1200,21 +1158,11 @@ columnar_vacuum_rel(Relation rel, VacuumParams *params,
new_rel_allvisible, nindexes > 0,
newRelFrozenXid, newRelminMxid, false);
#endif
#endif
#if PG_VERSION_NUM >= PG_VERSION_18
pgstat_report_vacuum(RelationGetRelid(rel),
rel->rd_rel->relisshared,
Max(new_live_tuples, 0), /* live tuples */
0, /* dead tuples */
GetCurrentTimestamp()); /* start time */
#else
pgstat_report_vacuum(RelationGetRelid(rel),
rel->rd_rel->relisshared,
Max(new_live_tuples, 0),
0);
#endif
pgstat_progress_end_command();
}
@ -1226,6 +1174,7 @@ static void
LogRelationStats(Relation rel, int elevel)
{
ListCell *stripeMetadataCell = NULL;
RelFileNode relfilenode = rel->rd_node;
StringInfo infoBuf = makeStringInfo();
int compressionStats[COMPRESSION_COUNT] = { 0 };
@ -1236,23 +1185,19 @@ LogRelationStats(Relation rel, int elevel)
uint64 droppedChunksWithData = 0;
uint64 totalDecompressedLength = 0;
List *stripeList = StripesForRelfilelocator(rel);
List *stripeList = StripesForRelfilenode(relfilenode);
int stripeCount = list_length(stripeList);
foreach(stripeMetadataCell, stripeList)
{
StripeMetadata *stripe = lfirst(stripeMetadataCell);
Snapshot snapshot = RegisterSnapshot(GetTransactionSnapshot());
StripeSkipList *skiplist = ReadStripeSkipList(rel, stripe->id,
StripeSkipList *skiplist = ReadStripeSkipList(relfilenode, stripe->id,
RelationGetDescr(rel),
stripe->chunkCount,
snapshot);
UnregisterSnapshot(snapshot);
GetTransactionSnapshot());
for (uint32 column = 0; column < skiplist->columnCount; column++)
{
bool attrDropped = TupleDescAttr(tupdesc, column)->attisdropped;
bool attrDropped = tupdesc->attrs[column].attisdropped;
for (uint32 chunk = 0; chunk < skiplist->chunkCount; chunk++)
{
ColumnChunkSkipNode *skipnode =
@ -1382,7 +1327,7 @@ TruncateColumnar(Relation rel, int elevel)
* new stripes be added beyond highestPhysicalAddress while
* we're truncating.
*/
uint64 newDataReservation = Max(GetHighestUsedAddress(rel) + 1,
uint64 newDataReservation = Max(GetHighestUsedAddress(rel->rd_node) + 1,
ColumnarFirstLogicalOffset);
BlockNumber old_rel_pages = smgrnblocks(RelationGetSmgr(rel), MAIN_FORKNUM);
@ -1450,32 +1395,15 @@ ConditionalLockRelationWithTimeout(Relation rel, LOCKMODE lockMode, int timeout,
static bool
columnar_scan_analyze_next_block(TableScanDesc scan,
#if PG_VERSION_NUM >= PG_VERSION_17
ReadStream *stream)
#else
BlockNumber blockno,
columnar_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno,
BufferAccessStrategy bstrategy)
#endif
{
/*
* Our access method is not pages based, i.e. tuples are not confined
* to pages boundaries. So not much to do here. We return true anyway
* so acquire_sample_rows() in analyze.c would call our
* columnar_scan_analyze_next_tuple() callback.
* In PG17, we return false in case there is no buffer left, since
* the outer loop changed in acquire_sample_rows(), and it is
* expected for the scan_analyze_next_block function to check whether
* there are any blocks left in the block sampler.
*/
#if PG_VERSION_NUM >= PG_VERSION_17
Buffer buf = read_stream_next_buffer(stream, NULL);
if (!BufferIsValid(buf))
{
return false;
}
ReleaseBuffer(buf);
#endif
return true;
}
@ -1548,7 +1476,8 @@ columnar_index_build_range_scan(Relation columnarRelation,
if (!IsBootstrapProcessingMode() && !indexInfo->ii_Concurrent)
{
/* ignore lazy VACUUM's */
OldestXmin = GetOldestNonRemovableTransactionId(columnarRelation);
OldestXmin = GetOldestNonRemovableTransactionId_compat(columnarRelation,
PROCARRAY_FLAGS_VACUUM);
}
Snapshot snapshot = { 0 };
@ -1876,7 +1805,7 @@ ColumnarReadMissingRowsIntoIndex(TableScanDesc scan, Relation indexRelation,
Relation columnarRelation = scan->rs_rd;
IndexUniqueCheck indexUniqueCheck =
indexInfo->ii_Unique ? UNIQUE_CHECK_YES : UNIQUE_CHECK_NO;
index_insert(indexRelation, indexValues, indexNulls, columnarItemPointer,
index_insert_compat(indexRelation, indexValues, indexNulls, columnarItemPointer,
columnarRelation, indexUniqueCheck, false, indexInfo);
validateIndexState->tups_inserted += 1;
@ -1906,8 +1835,8 @@ TupleSortSkipSmallerItemPointers(Tuplesortstate *tupleSort, ItemPointer targetIt
Datum *abbrev = NULL;
Datum tsDatum;
bool tsDatumIsNull;
if (!tuplesort_getdatum_compat(tupleSort, forwardDirection, false,
&tsDatum, &tsDatumIsNull, abbrev))
if (!tuplesort_getdatum(tupleSort, forwardDirection, &tsDatum,
&tsDatumIsNull, abbrev))
{
ItemPointerSetInvalid(&tsItemPointerData);
break;
@ -2081,7 +2010,7 @@ columnar_tableam_init()
&EnableVersionChecksColumnar,
true,
PGC_USERSET,
GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE,
GUC_NO_SHOW_ALL,
NULL, NULL, NULL);
}
@ -2148,13 +2077,12 @@ ColumnarTableDropHook(Oid relid)
* tableam tables storage is managed by postgres.
*/
Relation rel = table_open(relid, AccessExclusiveLock);
RelFileLocator relfilelocator = RelationPhysicalIdentifier_compat(rel);
RelFileNode relfilenode = rel->rd_node;
DeleteMetadataRows(rel);
DeleteMetadataRows(relfilenode);
DeleteColumnarTableOptions(rel->rd_id, true);
MarkRelfilenumberDropped(RelationPhysicalIdentifierNumber_compat(relfilelocator),
GetCurrentSubTransactionId());
MarkRelfilenodeDropped(relfilenode.relNode, GetCurrentSubTransactionId());
/* keep the lock since we did physical changes to the relation */
table_close(rel, NoLock);
@ -2271,6 +2199,7 @@ ColumnarProcessAlterTable(AlterTableStmt *alterTableStmt, List **columnarOptions
columnarRangeVar = alterTableStmt->relation;
}
}
#if PG_VERSION_NUM >= PG_VERSION_15
else if (alterTableCmd->subtype == AT_SetAccessMethod)
{
if (columnarRangeVar || *columnarOptions)
@ -2281,15 +2210,14 @@ ColumnarProcessAlterTable(AlterTableStmt *alterTableStmt, List **columnarOptions
"Specify SET ACCESS METHOD before storage parameters, or use separate ALTER TABLE commands.")));
}
destIsColumnar = (strcmp(alterTableCmd->name ? alterTableCmd->name :
default_table_access_method,
COLUMNAR_AM_NAME) == 0);
destIsColumnar = (strcmp(alterTableCmd->name, COLUMNAR_AM_NAME) == 0);
if (srcIsColumnar && !destIsColumnar)
{
DeleteColumnarTableOptions(RelationGetRelid(rel), true);
}
}
#endif /* PG_VERSION_15 */
}
relation_close(rel, NoLock);
@ -2304,17 +2232,21 @@ ColumnarProcessAlterTable(AlterTableStmt *alterTableStmt, List **columnarOptions
static void
ColumnarProcessUtility(PlannedStmt *pstmt,
const char *queryString,
#if PG_VERSION_NUM >= PG_VERSION_14
bool readOnlyTree,
#endif
ProcessUtilityContext context,
ParamListInfo params,
struct QueryEnvironment *queryEnv,
DestReceiver *dest,
QueryCompletion *completionTag)
{
#if PG_VERSION_NUM >= PG_VERSION_14
if (readOnlyTree)
{
pstmt = copyObject(pstmt);
}
#endif
Node *parsetree = pstmt->utilityStmt;
@ -2410,11 +2342,10 @@ ColumnarProcessUtility(PlannedStmt *pstmt,
}
default:
{
/* FALL THROUGH */
break;
}
}
if (columnarOptions != NIL && columnarRangeVar == NULL)
{
@ -2432,7 +2363,7 @@ ColumnarProcessUtility(PlannedStmt *pstmt,
CheckCitusColumnarAlterExtensionStmt(parsetree);
}
PrevProcessUtilityHook(pstmt, queryString, false, context,
PrevProcessUtilityHook_compat(pstmt, queryString, false, context,
params, queryEnv, dest, completionTag);
if (columnarOptions != NIL)
@ -2561,7 +2492,11 @@ static const TableAmRoutine columnar_am_methods = {
.tuple_get_latest_tid = columnar_get_latest_tid,
.tuple_tid_valid = columnar_tuple_tid_valid,
.tuple_satisfies_snapshot = columnar_tuple_satisfies_snapshot,
#if PG_VERSION_NUM >= PG_VERSION_14
.index_delete_tuples = columnar_index_delete_tuples,
#else
.compute_xid_horizon_for_tuples = columnar_compute_xid_horizon_for_tuples,
#endif
.tuple_insert = columnar_tuple_insert,
.tuple_insert_speculative = columnar_tuple_insert_speculative,
@ -2572,11 +2507,7 @@ static const TableAmRoutine columnar_am_methods = {
.tuple_lock = columnar_tuple_lock,
.finish_bulk_insert = columnar_finish_bulk_insert,
#if PG_VERSION_NUM >= PG_VERSION_16
.relation_set_new_filelocator = columnar_relation_set_new_filelocator,
#else
.relation_set_new_filenode = columnar_relation_set_new_filelocator,
#endif
.relation_set_new_filenode = columnar_relation_set_new_filenode,
.relation_nontransactional_truncate = columnar_relation_nontransactional_truncate,
.relation_copy_data = columnar_relation_copy_data,
.relation_copy_for_cluster = columnar_relation_copy_for_cluster,
@ -2591,13 +2522,8 @@ static const TableAmRoutine columnar_am_methods = {
.relation_estimate_size = columnar_estimate_rel_size,
#if PG_VERSION_NUM < PG_VERSION_18
/* these two fields were removed in PG18 */
.scan_bitmap_next_block = NULL,
.scan_bitmap_next_tuple = NULL,
#endif
.scan_sample_next_block = columnar_scan_sample_next_block,
.scan_sample_next_tuple = columnar_scan_sample_next_tuple
};
@ -2635,20 +2561,15 @@ detoast_values(TupleDesc tupleDesc, Datum *orig_values, bool *isnull)
for (int i = 0; i < tupleDesc->natts; i++)
{
if (!isnull[i] && TupleDescAttr(tupleDesc, i)->attlen == -1 &&
if (!isnull[i] && tupleDesc->attrs[i].attlen == -1 &&
VARATT_IS_EXTENDED(values[i]))
{
/* make a copy */
if (values == orig_values)
{
values = palloc(sizeof(Datum) * natts);
/*
* We use IGNORE-BANNED here since we don't want to limit
* size of the buffer that holds the datum array to RSIZE_MAX
* unnecessarily.
*/
memcpy(values, orig_values, sizeof(Datum) * natts); /* IGNORE-BANNED */
memcpy_s(values, sizeof(Datum) * natts,
orig_values, sizeof(Datum) * natts);
}
/* will be freed when per-tuple context is reset */
@ -2679,12 +2600,21 @@ ColumnarCheckLogicalReplication(Relation rel)
return;
}
#if PG_VERSION_NUM >= PG_VERSION_15
{
PublicationDesc pubdesc;
RelationBuildPublicationDesc(rel, &pubdesc);
pubActionInsert = pubdesc.pubactions.pubinsert;
}
#else
if (rel->rd_pubactions == NULL)
{
GetRelationPublicationActions(rel);
Assert(rel->rd_pubactions != NULL);
}
pubActionInsert = rel->rd_pubactions->pubinsert;
#endif
if (pubActionInsert)
{
@ -2986,7 +2916,7 @@ MajorVersionsCompatibleColumnar(char *leftVersion, char *rightVersion)
}
else
{
rightComparisionLimit = strlen(rightVersion);
rightComparisionLimit = strlen(leftVersion);
}
/* we can error out early if hyphens are not in the same position */
@ -3061,8 +2991,6 @@ AvailableExtensionVersionColumnar(void)
ereport(ERROR, (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("citus extension is not found")));
return NULL; /* keep compiler happy */
}
@ -3125,7 +3053,7 @@ DefElem *
GetExtensionOption(List *extensionOptions, const char *defname)
{
DefElem *defElement = NULL;
foreach_declared_ptr(defElement, extensionOptions)
foreach_ptr(defElement, extensionOptions)
{
if (IsA(defElement, DefElem) &&
strncmp(defElement->defname, defname, NAMEDATALEN) == 0)

View File

@ -16,43 +16,28 @@
#include "postgres.h"
#include "miscadmin.h"
#include "safe_lib.h"
#include "access/heapam.h"
#include "access/nbtree.h"
#include "catalog/pg_am.h"
#include "miscadmin.h"
#include "storage/fd.h"
#include "storage/smgr.h"
#include "utils/guc.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "pg_version_compat.h"
#include "pg_version_constants.h"
#include "utils/relfilenodemap.h"
#include "columnar/columnar.h"
#include "columnar/columnar_storage.h"
#include "columnar/columnar_version_compat.h"
#if PG_VERSION_NUM >= PG_VERSION_16
#include "storage/relfilelocator.h"
#include "utils/relfilenumbermap.h"
#else
#include "utils/relfilenodemap.h"
#endif
struct ColumnarWriteState
{
TupleDesc tupleDescriptor;
FmgrInfo **comparisonFunctionArray;
RelFileLocator relfilelocator;
/*
* We can't rely on RelidByRelfilenumber for temp tables since
* PG18(it was backpatched through PG13).
*/
Oid temp_relid;
RelFileNode relfilenode;
MemoryContext stripeWriteContext;
MemoryContext perTupleContext;
@ -99,12 +84,10 @@ static StringInfo CopyStringInfo(StringInfo sourceString);
* data load operation.
*/
ColumnarWriteState *
ColumnarBeginWrite(Relation rel,
ColumnarBeginWrite(RelFileNode relfilenode,
ColumnarOptions options,
TupleDesc tupleDescriptor)
{
RelFileLocator relfilelocator = RelationPhysicalIdentifier_compat(rel);
/* get comparison function pointers for each of the columns */
uint32 columnCount = tupleDescriptor->natts;
FmgrInfo **comparisonFunctionArray = palloc0(columnCount * sizeof(FmgrInfo *));
@ -141,8 +124,7 @@ ColumnarBeginWrite(Relation rel,
options.chunkRowCount);
ColumnarWriteState *writeState = palloc0(sizeof(ColumnarWriteState));
writeState->relfilelocator = relfilelocator;
writeState->temp_relid = RelationPrecomputeOid(rel);
writeState->relfilenode = relfilenode;
writeState->options = options;
writeState->tupleDescriptor = CreateTupleDescCopy(tupleDescriptor);
writeState->comparisonFunctionArray = comparisonFunctionArray;
@ -192,9 +174,8 @@ ColumnarWriteRow(ColumnarWriteState *writeState, Datum *columnValues, bool *colu
writeState->stripeSkipList = stripeSkipList;
writeState->compressionBuffer = makeStringInfo();
Oid relationId = ColumnarRelationId(writeState->temp_relid,
writeState->relfilelocator);
Oid relationId = RelidByRelfilenode(writeState->relfilenode.spcNode,
writeState->relfilenode.relNode);
Relation relation = relation_open(relationId, NoLock);
writeState->emptyStripeReservation =
ReserveEmptyStripe(relation, columnCount, chunkRowCount,
@ -412,9 +393,8 @@ FlushStripe(ColumnarWriteState *writeState)
elog(DEBUG1, "Flushing Stripe of size %d", stripeBuffers->rowCount);
Oid relationId = ColumnarRelationId(writeState->temp_relid,
writeState->relfilelocator);
Oid relationId = RelidByRelfilenode(writeState->relfilenode.spcNode,
writeState->relfilenode.relNode);
Relation relation = relation_open(relationId, NoLock);
/*
@ -506,12 +486,10 @@ FlushStripe(ColumnarWriteState *writeState)
}
}
SaveChunkGroups(writeState->temp_relid,
writeState->relfilelocator,
SaveChunkGroups(writeState->relfilenode,
stripeMetadata->id,
writeState->chunkGroupRowCounts);
SaveStripeSkipList(writeState->temp_relid,
writeState->relfilelocator,
SaveStripeSkipList(writeState->relfilenode,
stripeMetadata->id,
stripeSkipList, tupleDescriptor);
@ -553,9 +531,6 @@ SerializeBoolArray(bool *boolArray, uint32 boolArrayLength)
/*
* SerializeSingleDatum serializes the given datum value and appends it to the
* provided string info buffer.
*
* Since we don't want to limit datum buffer size to RSIZE_MAX unnecessarily,
* we use memcpy instead of memcpy_s several places in this function.
*/
static void
SerializeSingleDatum(StringInfo datumBuffer, Datum datum, bool datumTypeByValue,
@ -577,13 +552,15 @@ SerializeSingleDatum(StringInfo datumBuffer, Datum datum, bool datumTypeByValue,
}
else
{
memcpy(currentDatumDataPointer, DatumGetPointer(datum), datumTypeLength); /* IGNORE-BANNED */
memcpy_s(currentDatumDataPointer, datumBuffer->maxlen - datumBuffer->len,
DatumGetPointer(datum), datumTypeLength);
}
}
else
{
Assert(!datumTypeByValue);
memcpy(currentDatumDataPointer, DatumGetPointer(datum), datumLength); /* IGNORE-BANNED */
memcpy_s(currentDatumDataPointer, datumBuffer->maxlen - datumBuffer->len,
DatumGetPointer(datum), datumLength);
}
datumBuffer->len += datumLengthAligned;
@ -737,12 +714,7 @@ DatumCopy(Datum datum, bool datumTypeByValue, int datumTypeLength)
{
uint32 datumLength = att_addlength_datum(0, datumTypeLength, datum);
char *datumData = palloc0(datumLength);
/*
* We use IGNORE-BANNED here since we don't want to limit datum size to
* RSIZE_MAX unnecessarily.
*/
memcpy(datumData, DatumGetPointer(datum), datumLength); /* IGNORE-BANNED */
memcpy_s(datumData, datumLength, DatumGetPointer(datum), datumLength);
datumCopy = PointerGetDatum(datumData);
}
@ -765,12 +737,8 @@ CopyStringInfo(StringInfo sourceString)
targetString->data = palloc0(sourceString->len);
targetString->len = sourceString->len;
targetString->maxlen = sourceString->len;
/*
* We use IGNORE-BANNED here since we don't want to limit string
* buffer size to RSIZE_MAX unnecessarily.
*/
memcpy(targetString->data, sourceString->data, sourceString->len); /* IGNORE-BANNED */
memcpy_s(targetString->data, sourceString->len,
sourceString->data, sourceString->len);
}
return targetString;

View File

@ -1,19 +0,0 @@
-- citus_columnar--11.1-1--11.2-1
#include "udfs/columnar_ensure_am_depends_catalog/11.2-1.sql"
DELETE FROM pg_depend
WHERE classid = 'pg_am'::regclass::oid
AND objid IN (select oid from pg_am where amname = 'columnar')
AND objsubid = 0
AND refclassid = 'pg_class'::regclass::oid
AND refobjid IN (
'columnar_internal.stripe_first_row_number_idx'::regclass::oid,
'columnar_internal.chunk_group_pkey'::regclass::oid,
'columnar_internal.chunk_pkey'::regclass::oid,
'columnar_internal.options_pkey'::regclass::oid,
'columnar_internal.stripe_first_row_number_idx'::regclass::oid,
'columnar_internal.stripe_pkey'::regclass::oid
)
AND refobjsubid = 0
AND deptype = 'n';

View File

@ -1 +0,0 @@
-- citus_columnar--11.2-1--11.3-1

View File

@ -1 +0,0 @@
-- citus_columnar--11.3-1--12.2-1

View File

@ -1,3 +0,0 @@
-- citus_columnar--12.2-1--13.2-1.sql
#include "udfs/columnar_finish_pg_upgrade/13.2-1.sql"

View File

@ -1,2 +0,0 @@
-- citus_columnar--13.2-1--14.0-1
-- bump version to 14.0-1

View File

@ -10,17 +10,6 @@
--
-- To do that, drop stripe_first_row_number_idx and create a unique
-- constraint with the same name to keep the code change at minimum.
--
-- If we have a pg_depend entry for this index, we can not drop it as
-- the extension depends on it. Remove the pg_depend entry if it exists.
DELETE FROM pg_depend
WHERE classid = 'pg_am'::regclass::oid
AND objid IN (select oid from pg_am where amname = 'columnar')
AND objsubid = 0
AND refclassid = 'pg_class'::regclass::oid
AND refobjid = 'columnar.stripe_first_row_number_idx'::regclass::oid
AND refobjsubid = 0
AND deptype = 'n';
DROP INDEX columnar.stripe_first_row_number_idx;
ALTER TABLE columnar.stripe ADD CONSTRAINT stripe_first_row_number_idx
UNIQUE (storage_id, first_row_number);
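
A hedged way to confirm the swap (illustrative, not part of the upgrade script): after it runs, the name should be owned by a UNIQUE constraint on columnar.stripe rather than by a standalone index.

-- Illustrative check only; contype is expected to be 'u' (unique constraint).
SELECT conname, contype
FROM pg_constraint
WHERE conrelid = 'columnar.stripe'::regclass
  AND conname = 'stripe_first_row_number_idx';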

View File

@ -1,4 +0,0 @@
-- citus_columnar--11.2-1--11.1-1
-- Note that we intentionally do not re-insert the pg_depend records that we
-- deleted via citus_columnar--11.1-1--11.2-1.sql.

View File

@ -1 +0,0 @@
-- citus_columnar--11.3-1--11.2-1

View File

@ -1 +0,0 @@
-- citus_columnar--12.2-1--11.3-1

View File

@ -1,3 +0,0 @@
-- citus_columnar--13.2-1--12.2-1.sql
DROP FUNCTION IF EXISTS pg_catalog.columnar_finish_pg_upgrade();

View File

@ -1,2 +0,0 @@
-- citus_columnar--14.0-1--13.2-1
-- downgrade version to 13.2-1

View File

@ -8,16 +8,5 @@ DROP FUNCTION citus_internal.upgrade_columnar_storage(regclass);
DROP FUNCTION citus_internal.downgrade_columnar_storage(regclass);
-- drop "first_row_number" column and the index defined on it
--
-- If we have a pg_depend entry for this index, we can not drop it as
-- the extension depends on it. Remove the pg_depend entry if it exists.
DELETE FROM pg_depend
WHERE classid = 'pg_am'::regclass::oid
AND objid IN (select oid from pg_am where amname = 'columnar')
AND objsubid = 0
AND refclassid = 'pg_class'::regclass::oid
AND refobjid = 'columnar.stripe_first_row_number_idx'::regclass::oid
AND refobjsubid = 0
AND deptype = 'n';
DROP INDEX columnar.stripe_first_row_number_idx;
ALTER TABLE columnar.stripe DROP COLUMN first_row_number;

View File

@ -1,14 +1,4 @@
-- columnar--10.2-3--10.2-2.sql
--
-- If we have a pg_depend entry for this index, we can not drop it as
-- the extension depends on it. Remove the pg_depend entry if it exists.
DELETE FROM pg_depend
WHERE classid = 'pg_am'::regclass::oid
AND objid IN (select oid from pg_am where amname = 'columnar')
AND objsubid = 0
AND refclassid = 'pg_class'::regclass::oid
AND refobjid = 'columnar.stripe_first_row_number_idx'::regclass::oid
AND refobjsubid = 0
AND deptype = 'n';
ALTER TABLE columnar.stripe DROP CONSTRAINT stripe_first_row_number_idx;
CREATE INDEX stripe_first_row_number_idx ON columnar.stripe USING BTREE(storage_id, first_row_number);
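
Conversely, a hedged check for this downgrade direction (illustrative only): once the constraint is dropped and the plain index recreated, the name should appear in pg_indexes with no matching row in pg_constraint.

-- Illustrative check only.
SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'columnar'
  AND tablename = 'stripe'
  AND indexname = 'stripe_first_row_number_idx';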

View File

@ -1,43 +0,0 @@
CREATE OR REPLACE FUNCTION columnar_internal.columnar_ensure_am_depends_catalog()
RETURNS void
LANGUAGE plpgsql
SET search_path = pg_catalog
AS $func$
BEGIN
INSERT INTO pg_depend
WITH columnar_schema_members(relid) AS (
SELECT pg_class.oid AS relid FROM pg_class
WHERE relnamespace =
COALESCE(
(SELECT pg_namespace.oid FROM pg_namespace WHERE nspname = 'columnar_internal'),
(SELECT pg_namespace.oid FROM pg_namespace WHERE nspname = 'columnar')
)
AND relname IN ('chunk',
'chunk_group',
'options',
'storageid_seq',
'stripe')
)
SELECT -- Define a dependency edge from "columnar table access method" ..
'pg_am'::regclass::oid as classid,
(select oid from pg_am where amname = 'columnar') as objid,
0 as objsubid,
-- ... to some objects registered as regclass and that lives in
-- "columnar" schema. That contains catalog tables and the sequences
-- created in "columnar" schema.
--
-- Given the possibility of user might have created their own objects
-- in columnar schema, we explicitly specify list of objects that we
-- are interested in.
'pg_class'::regclass::oid as refclassid,
columnar_schema_members.relid as refobjid,
0 as refobjsubid,
'n' as deptype
FROM columnar_schema_members
-- Avoid inserting duplicate entries into pg_depend.
EXCEPT TABLE pg_depend;
END;
$func$;
COMMENT ON FUNCTION columnar_internal.columnar_ensure_am_depends_catalog()
IS 'internal function responsible for creating dependencies from columnar '
'table access method to the rel objects in columnar schema';
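
A hedged usage sketch (not part of the file above): because the INSERT filters out rows that already exist via EXCEPT TABLE pg_depend, the function can be called repeatedly without creating duplicate dependency rows.

-- Illustrative only: repeated calls only add the edges that are missing.
SELECT columnar_internal.columnar_ensure_am_depends_catalog();
SELECT columnar_internal.columnar_ensure_am_depends_catalog();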

View File

@ -1,4 +1,4 @@
CREATE OR REPLACE FUNCTION columnar_internal.columnar_ensure_am_depends_catalog()
CREATE OR REPLACE FUNCTION citus_internal.columnar_ensure_am_depends_catalog()
RETURNS void
LANGUAGE plpgsql
SET search_path = pg_catalog
@ -14,17 +14,22 @@ BEGIN
)
AND relname IN ('chunk',
'chunk_group',
'chunk_group_pkey',
'chunk_pkey',
'options',
'options_pkey',
'storageid_seq',
'stripe')
'stripe',
'stripe_first_row_number_idx',
'stripe_pkey')
)
SELECT -- Define a dependency edge from "columnar table access method" ..
'pg_am'::regclass::oid as classid,
(select oid from pg_am where amname = 'columnar') as objid,
0 as objsubid,
-- ... to some objects registered as regclass and that lives in
-- "columnar" schema. That contains catalog tables and the sequences
-- created in "columnar" schema.
-- ... to each object that is registered to pg_class and that lives
-- in "columnar" schema. That contains catalog tables, indexes
-- created on them and the sequences created in "columnar" schema.
--
-- Given the possibility of user might have created their own objects
-- in columnar schema, we explicitly specify list of objects that we
@ -38,6 +43,6 @@ BEGIN
EXCEPT TABLE pg_depend;
END;
$func$;
COMMENT ON FUNCTION columnar_internal.columnar_ensure_am_depends_catalog()
COMMENT ON FUNCTION citus_internal.columnar_ensure_am_depends_catalog()
IS 'internal function responsible for creating dependencies from columnar '
'table access method to the rel objects in columnar schema';
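
To see which catalog relations the relname filter would cover, a hedged inspection query (illustrative, not part of the function) could look like this, using the longer relname list shown above:

-- Illustrative only: relations in the columnar schemas that match the list,
-- including the index and primary-key relations named in the longer variant.
SELECT c.relname, c.relkind
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname IN ('columnar_internal', 'columnar')
  AND c.relname IN ('chunk', 'chunk_group', 'chunk_group_pkey', 'chunk_pkey',
                    'options', 'options_pkey', 'storageid_seq', 'stripe',
                    'stripe_first_row_number_idx', 'stripe_pkey');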

View File

@ -1,13 +0,0 @@
CREATE OR REPLACE FUNCTION pg_catalog.columnar_finish_pg_upgrade()
RETURNS void
LANGUAGE plpgsql
SET search_path = pg_catalog
AS $cppu$
BEGIN
-- set dependencies for columnar table access method
PERFORM columnar_internal.columnar_ensure_am_depends_catalog();
END;
$cppu$;
COMMENT ON FUNCTION pg_catalog.columnar_finish_pg_upgrade()
IS 'perform tasks to properly complete a Postgres upgrade for columnar extension';
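
A hedged usage sketch (assumption, not shown in the diff): this helper is the kind of call one would run once per database after pg_upgrade so that the columnar dependency rows are re-established in the upgraded cluster.

-- Illustrative only.
SELECT pg_catalog.columnar_finish_pg_upgrade();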

View File

@ -1,13 +0,0 @@
CREATE OR REPLACE FUNCTION pg_catalog.columnar_finish_pg_upgrade()
RETURNS void
LANGUAGE plpgsql
SET search_path = pg_catalog
AS $cppu$
BEGIN
-- set dependencies for columnar table access method
PERFORM columnar_internal.columnar_ensure_am_depends_catalog();
END;
$cppu$;
COMMENT ON FUNCTION pg_catalog.columnar_finish_pg_upgrade()
IS 'perform tasks to properly complete a Postgres upgrade for columnar extension';

View File

@ -1,17 +1,21 @@
#include "citus_version.h"
#include "postgres.h"
#include "columnar/columnar.h"
#include <math.h>
#include "postgres.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "access/genam.h"
#include "access/heapam.h"
#include "access/heaptoast.h"
#include "access/multixact.h"
#include "access/rewriteheap.h"
#include "access/tsmapi.h"
#include "access/heaptoast.h"
#include "common/hashfn.h"
#include "access/xact.h"
#include "catalog/catalog.h"
#include "catalog/index.h"
@ -22,12 +26,13 @@
#include "catalog/storage_xlog.h"
#include "commands/progress.h"
#include "commands/vacuum.h"
#include "common/hashfn.h"
#include "executor/executor.h"
#include "nodes/makefuncs.h"
#include "optimizer/plancat.h"
#include "pgstat.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"
#include "storage/bufmgr.h"
#include "storage/lmgr.h"
#include "storage/predicate.h"
#include "storage/procarray.h"
@ -38,10 +43,6 @@
#include "utils/rel.h"
#include "utils/syscache.h"
#include "citus_version.h"
#include "pg_version_compat.h"
#include "columnar/columnar.h"
#include "columnar/columnar_customscan.h"
#include "columnar/columnar_tableam.h"
#include "columnar/columnar_version_compat.h"
@ -76,7 +77,7 @@ typedef struct SubXidWriteState
typedef struct WriteStateMapEntry
{
/* key of the entry */
RelFileNumber relfilenumber;
Oid relfilenode;
/*
* If a table is dropped, we set dropped to true and set dropSubXid to the
@ -112,7 +113,6 @@ CleanupWriteStateMap(void *arg)
ColumnarWriteState *
columnar_init_write_state(Relation relation, TupleDesc tupdesc,
Oid tupSlotRelationId,
SubTransactionId currentSubXid)
{
bool found;
@ -131,7 +131,7 @@ columnar_init_write_state(Relation relation, TupleDesc tupdesc,
HASHCTL info;
uint32 hashFlags = (HASH_ELEM | HASH_FUNCTION | HASH_CONTEXT);
memset(&info, 0, sizeof(info));
info.keysize = sizeof(RelFileNumber);
info.keysize = sizeof(Oid);
info.hash = oid_hash;
info.entrysize = sizeof(WriteStateMapEntry);
info.hcxt = WriteStateContext;
@ -145,10 +145,7 @@ columnar_init_write_state(Relation relation, TupleDesc tupdesc,
MemoryContextRegisterResetCallback(WriteStateContext, &cleanupCallback);
}
WriteStateMapEntry *hashEntry = hash_search(WriteStateMap,
&RelationPhysicalIdentifierNumber_compat(
RelationPhysicalIdentifier_compat(
relation)),
WriteStateMapEntry *hashEntry = hash_search(WriteStateMap, &relation->rd_node.relNode,
HASH_ENTER, &found);
if (!found)
{
@ -179,19 +176,10 @@ columnar_init_write_state(Relation relation, TupleDesc tupdesc,
MemoryContext oldContext = MemoryContextSwitchTo(WriteStateContext);
ColumnarOptions columnarOptions = { 0 };
/*
* In case of a table rewrite, we need to fetch table options based on the
* relation id of the source tuple slot.
*
* For this reason, we always pass tupSlotRelationId here; which should be
* same as the target table if the write operation is not related to a table
* rewrite etc.
*/
ReadColumnarOptions(tupSlotRelationId, &columnarOptions);
ReadColumnarOptions(relation->rd_id, &columnarOptions);
SubXidWriteState *stackEntry = palloc0(sizeof(SubXidWriteState));
stackEntry->writeState = ColumnarBeginWrite(relation,
stackEntry->writeState = ColumnarBeginWrite(relation->rd_node,
columnarOptions,
tupdesc);
stackEntry->subXid = currentSubXid;
@ -208,16 +196,14 @@ columnar_init_write_state(Relation relation, TupleDesc tupdesc,
* Flushes pending writes for given relfilenode in the given subtransaction.
*/
void
FlushWriteStateForRelfilenumber(RelFileNumber relfilenumber,
SubTransactionId currentSubXid)
FlushWriteStateForRelfilenode(Oid relfilenode, SubTransactionId currentSubXid)
{
if (WriteStateMap == NULL)
{
return;
}
WriteStateMapEntry *entry = hash_search(WriteStateMap, &relfilenumber, HASH_FIND,
NULL);
WriteStateMapEntry *entry = hash_search(WriteStateMap, &relfilenode, HASH_FIND, NULL);
Assert(!entry || !entry->dropped);
@ -324,14 +310,14 @@ DiscardWriteStateForAllRels(SubTransactionId currentSubXid, SubTransactionId par
* Called when the given relfilenode is dropped.
*/
void
MarkRelfilenumberDropped(RelFileNumber relfilenumber, SubTransactionId currentSubXid)
MarkRelfilenodeDropped(Oid relfilenode, SubTransactionId currentSubXid)
{
if (WriteStateMap == NULL)
{
return;
}
WriteStateMapEntry *entry = hash_search(WriteStateMap, &relfilenumber, HASH_FIND,
WriteStateMapEntry *entry = hash_search(WriteStateMap, &relfilenode, HASH_FIND,
NULL);
if (!entry || entry->dropped)
{
@ -347,11 +333,11 @@ MarkRelfilenumberDropped(RelFileNumber relfilenumber, SubTransactionId currentSu
* Called when the given relfilenode is dropped in non-transactional TRUNCATE.
*/
void
NonTransactionDropWriteState(RelFileNumber relfilenumber)
NonTransactionDropWriteState(Oid relfilenode)
{
if (WriteStateMap)
{
hash_search(WriteStateMap, &relfilenumber, HASH_REMOVE, false);
hash_search(WriteStateMap, &relfilenode, HASH_REMOVE, false);
}
}
@ -360,16 +346,14 @@ NonTransactionDropWriteState(RelFileNumber relfilenumber)
* Returns true if there are any pending writes in upper transactions.
*/
bool
PendingWritesInUpperTransactions(RelFileNumber relfilenumber,
SubTransactionId currentSubXid)
PendingWritesInUpperTransactions(Oid relfilenode, SubTransactionId currentSubXid)
{
if (WriteStateMap == NULL)
{
return false;
}
WriteStateMapEntry *entry = hash_search(WriteStateMap, &relfilenumber, HASH_FIND,
NULL);
WriteStateMapEntry *entry = hash_search(WriteStateMap, &relfilenode, HASH_FIND, NULL);
if (entry && entry->writeStateStack != NULL)
{

View File

@ -18,7 +18,7 @@ generated_downgrade_sql_files += $(patsubst %,$(citus_abs_srcdir)/build/sql/%,$(
DATA_built = $(generated_sql_files)
# directories with source files
SUBDIRS = . commands connection ddl deparser executor metadata operations planner progress relay safeclib shardsplit stats test transaction utils worker clock
SUBDIRS = . commands connection ddl deparser executor metadata operations planner progress relay safeclib shardsplit test transaction utils worker
# enterprise modules
SUBDIRS += replication
@ -32,12 +32,7 @@ OBJS += \
$(patsubst $(citus_abs_srcdir)/%.c,%.o,$(foreach dir,$(SUBDIRS), $(sort $(wildcard $(citus_abs_srcdir)/$(dir)/*.c))))
# be explicit about the default target
.PHONY: cdc
all: cdc
cdc:
$(MAKE) -C cdc all
all:
NO_PGXS = 1
@ -84,22 +79,13 @@ ifneq (,$(SQL_Po_files))
include $(SQL_Po_files)
endif
.PHONY: clean-full install install-downgrades install-all install-cdc clean-cdc
clean: clean-cdc
clean-cdc:
$(MAKE) -C cdc clean
.PHONY: clean-full install install-downgrades install-all
cleanup-before-install:
rm -f $(DESTDIR)$(datadir)/$(datamoduledir)/citus.control
rm -f $(DESTDIR)$(datadir)/$(datamoduledir)/citus--*
install: cleanup-before-install install-cdc
install-cdc:
$(MAKE) -C cdc install
install: cleanup-before-install
# install and install-downgrades should be run sequentially
install-all: install
@ -110,5 +96,4 @@ install-downgrades: $(generated_downgrade_sql_files)
clean-full:
$(MAKE) clean
$(MAKE) -C cdc clean-full
rm -rf $(safestringlib_builddir)

File diff suppressed because it is too large.

View File

@ -1,45 +0,0 @@
citus_top_builddir = ../../../..
include $(citus_top_builddir)/Makefile.global
citus_subdir = src/backend/distributed/cdc
SRC_DIR = $(citus_abs_top_srcdir)/$(citus_subdir)
#List of supported based decoders. Add new decoders here.
cdc_base_decoders :=pgoutput wal2json
all: build-cdc-decoders
copy-decoder-files-to-build-dir:
$(eval DECODER_BUILD_DIR=build-cdc-$(DECODER))
mkdir -p $(DECODER_BUILD_DIR)
@for file in $(SRC_DIR)/*.c $(SRC_DIR)/*.h; do \
if [ -f $$file ]; then \
if [ -f $(DECODER_BUILD_DIR)/$$(basename $$file) ]; then \
if ! diff -q $$file $(DECODER_BUILD_DIR)/$$(basename $$file); then \
cp $$file $(DECODER_BUILD_DIR)/$$(basename $$file); \
fi \
else \
cp $$file $(DECODER_BUILD_DIR)/$$(basename $$file); \
fi \
fi \
done
cp $(SRC_DIR)/Makefile.decoder $(DECODER_BUILD_DIR)/Makefile
build-cdc-decoders:
$(foreach base_decoder,$(cdc_base_decoders),$(MAKE) DECODER=$(base_decoder) build-cdc-decoder;)
install-cdc-decoders:
$(foreach base_decoder,$(cdc_base_decoders),$(MAKE) DECODER=$(base_decoder) -C build-cdc-$(base_decoder) install;)
clean-cdc-decoders:
$(foreach base_decoder,$(cdc_base_decoders),rm -rf build-cdc-$(base_decoder);)
build-cdc-decoder:
$(MAKE) DECODER=$(DECODER) copy-decoder-files-to-build-dir
$(MAKE) DECODER=$(DECODER) -C build-cdc-$(DECODER)
install: install-cdc-decoders
clean: clean-cdc-decoders

View File

@ -1,19 +0,0 @@
MODULE_big = citus_$(DECODER)
citus_decoders_dir = $(DESTDIR)$(pkglibdir)/citus_decoders
citus_top_builddir = ../../../../..
citus_subdir = src/backend/distributed/cdc/cdc_$(DECODER)
OBJS += cdc_decoder.o cdc_decoder_utils.o
include $(citus_top_builddir)/Makefile.global
override CFLAGS += -DDECODER=\"$(DECODER)\" -I$(citus_abs_top_srcdir)/include
override CPPFLAGS += -DDECODER=\"$(DECODER)\" -I$(citus_abs_top_srcdir)/include
install: install-cdc
install-cdc:
mkdir -p '$(citus_decoders_dir)'
$(INSTALL_SHLIB) citus_$(DECODER)$(DLSUFFIX) '$(citus_decoders_dir)/$(DECODER)$(DLSUFFIX)'

Some files were not shown because too many files have changed in this diff.