This commit adds the INSERT INTO ... SELECT feature for distributed tables. We implement INSERT INTO ... SELECT by pushing the SELECT down to each shard. To do this we use the router planner, adding an "uninstantiated" constraint that the partition column be equal to a certain value. standard_planner() distributes that constraint to every table to which it knows the restriction can safely be pushed, for example tables connected via equi-joins.

The router planner then iterates over the target table's shards. For each shard, we replace the "uninstantiated" restriction with one that PruneShardList() can handle, by substituting the current shard's actual boundary values for the partitioning qual parameter added in multi_planner(). We also add the current shard's boundary values to the top-level subquery, so that even if the partitioning qual is not distributed to all tables, we never run queries on shards that fall outside the current shard's boundaries. Finally, we perform normal shard pruning to decide whether or not to push the query to the current shard; a hedged sketch of the resulting per-shard query is shown below. Certain SQL constructs are not supported in the subquery; these are described and commented on in ErrorIfInsertSelectQueryNotSupported().

We also added locking to the router executor. When an INSERT/SELECT command runs on a distributed table with replication factor > 1, we need to ensure that it sees the same result on every placement of a shard. The router executor therefore takes exclusive locks on the shards from which the SELECT in an INSERT/SELECT reads, in order to prevent concurrent changes. This is not the most efficient solution, but it is simple and correct. The citus.all_modifications_commutative setting can be used to avoid this aggressive locking: an INSERT/SELECT whose filters are known to exclude any ongoing writes can be marked as commutative. See RequiresConsistentSnapshot() for the details.

Finally, we moved the decision of whether a multiPlan should be executed by the router executor into the planning phase. This allowed us to integrate multi-task router executor tasks into the router executor smoothly.
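As a sketch of the pushdown under stated assumptions (the table names, shard IDs, and the exact shape of the boundary qual below are hypothetical; the real quals are built from each shard's min/max hash values):

```sql
-- Hypothetical hash-distributed tables, colocated on user_id:
INSERT INTO daily_totals (user_id, day, total)
SELECT user_id, created_at::date, count(*)
FROM events
GROUP BY user_id, created_at::date;

-- For a daily_totals shard covering hash range [-2147483648, -1073741825],
-- the planner binds the uninstantiated partition-column qual to that range
-- and sends a per-shard query of roughly this shape to the worker:
INSERT INTO daily_totals_102008 (user_id, day, total)
SELECT user_id, created_at::date, count(*)
FROM events_102040
WHERE hashint8(user_id) BETWEEN -2147483648 AND -1073741825
GROUP BY user_id, created_at::date;
```

When the aggressive locking is unnecessary, opting out is explicit:

```sql
-- Only safe when concurrent modifications are known to be commutative,
-- i.e. the SELECT's filters exclude any ongoing writes:
SET citus.all_modifications_commutative TO on;
```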
README.md
What is Citus?
- Open-source PostgreSQL extension (not a fork)
- Scalable across multiple hosts through sharding and replication
- Distributed engine for query parallelization
- Highly available in the face of host failures
Citus horizontally scales PostgreSQL across commodity servers using sharding and replication. Its query engine parallelizes incoming SQL queries across these servers to enable real-time responses on large datasets.
Citus extends the underlying database rather than forking it, which gives developers and enterprises the power and familiarity of a traditional relational database. As an extension, Citus supports new PostgreSQL releases, allowing users to benefit from new features while maintaining compatibility with existing PostgreSQL tools. Note that Citus supports many (but not all) SQL commands; see the FAQ for more details.
Common Use-Cases:
- Powering real-time analytic dashboards
- Exploratory queries on events as they happen
- Large dataset archival and reporting
- Session analytics (funnels, segmentation, and cohorts)
To learn more, visit citusdata.com and join the mailing list to stay on top of the latest developments.
Quickstart
Local Citus Cluster
- (Mac only) connect to the Docker VM:

  ```bash
  eval $(docker-machine env default)
  ```

- Pull and start the Docker images:

  ```bash
  wget https://raw.githubusercontent.com/citusdata/docker/master/docker-compose.yml
  docker-compose -p citus up -d
  ```

- Connect to the master database:

  ```bash
  docker exec -it citus_master psql -U postgres -d postgres
  ```

- Follow the first tutorial instructions.

- To shut the cluster down, run:

  ```bash
  docker-compose -p citus down
  ```
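Once connected, a minimal smoke test might distribute a table across the workers. This sketch assumes the Citus 5.x-era UDFs master_create_distributed_table() and master_create_worker_shards(); the table definition and shard counts are illustrative, so follow the tutorial for your version:

```sql
-- Illustrative only: create and distribute a hash-partitioned table.
CREATE TABLE events (user_id bigint, created_at timestamptz, payload jsonb);
SELECT master_create_distributed_table('events', 'user_id', 'hash');
SELECT master_create_worker_shards('events', 16, 2);  -- 16 shards, replication factor 2
```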
Talk to Contributors and Learn More
| Resource | Description |
|---|---|
| Documentation | Try the Citus tutorials for a hands-on introduction, or the documentation for a more comprehensive reference. |
| Google Groups | The Citus Google Group is our place for detailed questions and discussions. |
| Slack | Chat with us in our community Slack channel. |
| Github Issues | We track specific bug reports and feature requests on our project issues. |
| Twitter | Follow @citusdata for general updates and PostgreSQL scaling tips. |
| Training and Support | See our support page for training and dedicated support options. |
Contributing
Citus is built on and of open source. We welcome your contributions, and have added a helpwanted label to issues that are accessible to new contributors. The CONTRIBUTING.md file explains how to get started developing the Citus extension itself, as well as our code quality guidelines.
Who is Using Citus?
Citus is deployed in production by many customers, ranging from technology start-ups to large enterprises. Here are some examples:
- CloudFlare uses Citus to provide real-time analytics on 100 TBs of data from over 4 million customer websites. Case Study
- MixRank uses Citus to efficiently collect and analyze vast amounts of data to allow inside B2B sales teams to find new customers. Case Study
- Neustar builds and maintains scalable ad-tech infrastructure that counts billions of events per day using Citus and HyperLogLog.
- Agari uses Citus to secure more than 85 percent of U.S. consumer emails on two 6-8 TB clusters. Case Study
- Heap uses Citus to run dynamic funnel, segmentation, and cohort queries across billions of users and tens of billions of events. Watch Video
Copyright © 2012–2016 Citus Data, Inc.