Marking CI tasks as required before merging to master is important and
useful. This works by saving the exact names of the required tasks in
the repo's admin interface. The interface has a search box for adding
them, so it is not completely horrible, but it is still quite a hassle
because we have so many jobs. Limiting the churn in this list of
required jobs is therefore quite useful.
This changes the task names to include only the major versions of
Postgres, not the minor ones. Otherwise, the next time we bump the
minor versions we would have to remove and re-add each of the jobs.
(cherry picked from commit 83e3fb817d)
Since GHA does not interpolate env variables in a matrix context, this
PR defines them in a separate job and uses them in the other jobs.
(cherry picked from commit 2bf1472c8e)
We should not omit to free the PGresult when we receive a single-tuple
result from an internal backend.
Single-tuple results are normally freed by our ReceiveResults in the
`tupleDescriptor != NULL` flow, but not in the `tupleDescriptor ==
NULL` flow. See PR #6722 for details.
DESCRIPTION: Fixes a memory leak with query results that return a
single row.
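Below is a minimal sketch of the leak in plain libpq terms; the loop
and the function are simplified illustrations, not the actual
ReceiveResults code.

```c
#include <libpq-fe.h>

/* Simplified illustration of the two ReceiveResults flows. */
static void
DrainResults(PGconn *connection, bool haveTupleDescriptor)
{
	PGresult *result = NULL;

	while ((result = PQgetResult(connection)) != NULL)
	{
		if (haveTupleDescriptor)
		{
			/*
			 * tupleDescriptor != NULL flow: rows are stored, and the
			 * result was already being freed here before the fix.
			 */
		}

		/*
		 * The fix: free the PGresult in the tupleDescriptor == NULL
		 * flow as well; previously it was skipped, leaking one
		 * PGresult per single-tuple result.
		 */
		PQclear(result);
	}
}
```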
(cherry picked from commit 9e69dd0e7f)
Previously, we wrapped targetlist nodes with Vars that reference the
result of the worker query if the node itself was neither a `Const` nor
a `Param`. In fact, we should only do that when the node itself is a
`Var` node or contains a `Var` within it (e.g. `OpExpr(Var(column_a) > 2)`).
Otherwise, when the worker query returns an empty result set, the
combine query execution would crash, since the `Var` would point to an
empty tuple slot, which the node-executor methods cannot handle.
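A sketch of the corrected check; `ShouldWrapWithWorkerResultVar` is an
illustrative name, while `contain_var_clause()` is the stock Postgres
helper.

```c
#include "postgres.h"
#include "nodes/nodes.h"
#include "optimizer/optimizer.h"	/* contain_var_clause() */

/*
 * Illustrative helper, not the actual Citus function: decide whether a
 * targetlist expression should be replaced by a Var that references
 * the worker query's result.
 */
static bool
ShouldWrapWithWorkerResultVar(Node *targetExpr)
{
	/*
	 * Old condition: !IsA(targetExpr, Const) && !IsA(targetExpr, Param),
	 * which also wrapped Var-free expressions. New condition: only
	 * expressions that are a Var or contain one, e.g.
	 * OpExpr(Var(column_a) > 2).
	 */
	return contain_var_clause(targetExpr);
}
```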
(cherry picked from commit 79442df1b7)
This config is used to generate components on ADO (Azure DevOps).
Currently this might not be super useful because we don't really use
ADO, but when run, it shows the warnings/issues with the components we
use. A component is basically a dependency.
(cherry picked from commit 7c11aa124b)
With local query caching, we try to avoid the deparse/parse stages, as
they are too costly.
However, we can afford to deparse/parse once per cached query, right
before we put the plan into the cache. With that, we avoid edge cases
like #4239 or #5038.
In a sense, we make local plan caching behave similarly to non-cached
local/remote queries by forcing the query to be deparsed once.
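A hypothetical sketch of the idea; the helper names below are
placeholders, not the actual Citus symbols, and the planner() signature
assumes PG13+.

```c
#include "postgres.h"
#include "lib/stringinfo.h"
#include "nodes/parsenodes.h"
#include "optimizer/planner.h"

/* assumed helpers, for illustration only */
extern void DeparseShardQuery(Query *query, StringInfo buffer);
extern Query * ParseShardQuery(const char *queryString);

/*
 * Hypothetical sketch: round-trip the shard query through its deparsed
 * text exactly once, right before the plan enters the local plan
 * cache, so cached local execution sees the same query text that
 * remote execution would.
 */
static PlannedStmt *
PlanShardQueryForLocalCache(Query *shardQuery)
{
	StringInfo queryText = makeStringInfo();

	/* deparse once per cached query ... */
	DeparseShardQuery(shardQuery, queryText);

	/* ... and parse the text back before planning it */
	Query *roundTripped = ParseShardQuery(queryText->data);

	return planner(roundTripped, queryText->data,
				   CURSOR_OPT_PARALLEL_OK, NULL);
}
```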
(cherry picked from commit 69ca943e58)
It seems that we sometimes get `too many clients` errors with this set
of parallel tests, hence two of them are separated out.
(cherry picked from commit 8c3dd6338e)
It seems that we were not considering the case where the coordinator
was added to the cluster as a worker in the optimization of
intermediate results.
This could lead to errors when the coordinator was added as a worker.
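A minimal, hypothetical sketch of the missing case; the shipping
helpers are assumed names, while `WorkerNode` and
`COORDINATOR_GROUP_ID` come from Citus's worker_manager.h.

```c
#include "postgres.h"
#include "distributed/worker_manager.h"	/* WorkerNode, COORDINATOR_GROUP_ID */

/* assumed helpers, for illustration only */
extern void WriteIntermediateResultLocally(const char *resultId);
extern void SendIntermediateResultToNode(WorkerNode *node, const char *resultId);

/*
 * Hypothetical sketch: when shipping an intermediate result, the
 * coordinator may itself be one of the target nodes once it has been
 * added as a worker, so handle it explicitly instead of assuming every
 * target is remote.
 */
static void
ShipIntermediateResult(WorkerNode *targetNode, const char *resultId)
{
	if (targetNode->groupId == COORDINATOR_GROUP_ID)
	{
		/* the coordinator was added as a worker: write locally */
		WriteIntermediateResultLocally(resultId);
	}
	else
	{
		SendIntermediateResultToNode(targetNode, resultId);
	}
}
```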
(cherry picked from commit 24e60b44a1)