Compare commits

...

240 Commits
1.1.0 ... main

Author SHA1 Message Date
Zsolt Parragi 9e0a252873
PG-1674: Fix comment parsing logic at two places (#542)
* PG-1674: Fix query hash calculation comment removal logic

The previous conditions only removed the first few starting characters
of the comment and left everything else in place.

This modification fixes this and correctly removes everything.

* PG-1674: Fix performance issues with comment extraction

The previous logic used complex regex parsers, which caused performance
issues with large (megabyte sized) queries. This change removes the
regex dependency and uses the same (fixed) logic from the query hashing
code, which is much faster.

It also checks the related GUC variable, which the previous code
ignored: if we do not want to display extracted comments, we won't
extract them in the first place.

This commit doesn't try to address other issues with comment parsing logic:

* we shouldn't treat comment-like things within strings as comments
* we should handle nested C style comments
* we don't extract `--` style comments

All of these issues are still there, as before.
2025-06-23 13:46:15 +01:00
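A minimal sketch of the linear-scan approach this commit describes, in plain C with illustrative names (the GUC flag and function are stand-ins, not the extension's actual code):

```
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the GUC that controls comment extraction. */
static bool extract_comments_enabled = true;

/*
 * Copy the first C-style comment in `query` (without the markers) into `out`.
 * A single linear scan over the string, no regex. Nested comments, comments
 * inside string literals and `--` comments are not handled, matching the
 * limitations listed in the commit message.
 */
static bool
extract_first_comment(const char *query, char *out, size_t outlen)
{
    const char *start, *end;
    size_t      len;

    if (!extract_comments_enabled)
        return false;               /* honour the GUC before doing any work */

    start = strstr(query, "/*");
    if (start == NULL)
        return false;

    end = strstr(start + 2, "*/");
    if (end == NULL)
        return false;               /* unterminated comment: ignore it */

    len = (size_t) (end - (start + 2));
    if (len >= outlen)
        len = outlen - 1;
    memcpy(out, start + 2, len);
    out[len] = '\0';
    return true;
}

int
main(void)
{
    char buf[64];

    if (extract_first_comment("SELECT /* app=checkout */ 1", buf, sizeof(buf)))
        printf("comment: '%s'\n", buf);   /* prints: comment: ' app=checkout ' */
    return 0;
}
```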
Artem Gavrilov 3653dd6041
Add back case for zero cmd_type value in get_cmd_type function (#543) 2025-06-19 15:44:41 +02:00
Artem Gavrilov 76b0802142
PG-1313 Fix decode_error_level SQL function (#539)
Update the decode_error_level function to support error codes up to PG
version 17.
2025-06-19 15:38:44 +02:00
Zsolt Parragi 61662cc58f
PG-1621: fix cmd_type mostly showing 0 values (#538)
This was actually caused by two bugs internally:

* cmd_type was only set in some codepaths; other parts of the code
never set a value. Depending on which query was executed and how,
it was possibly never changed after a reset to 0.
* the update first set the cmd_type and then reset all counters. As
the cmd_type is stored within the counters for some reason, this
reset its value to 0 in most execution paths, even if it was correctly
set before.

Accordingly, the fix is simple:

* cmd_type is now set in all codepaths except for failing queries,
as we only have the error string in this case, without the type.
* in the update logic, we again overwrite cmd_type with the proper
value after a reset
2025-06-17 14:52:28 +01:00
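The ordering bug and its fix can be pictured with a small sketch; the struct layout and names below are illustrative, not pg_stat_monitor's real types:

```
#include <string.h>

typedef enum CmdKind { CMD_UNKNOWN = 0, CMD_SELECT, CMD_INSERT, CMD_UPDATE, CMD_DELETE } CmdKind;

typedef struct Counters
{
    long    calls;
    double  total_time;
    CmdKind cmd_type;   /* stored inside the counters, as the commit notes */
} Counters;

/* Buggy order: cmd_type is written first, then wiped by the counter reset. */
static void
update_entry_buggy(Counters *c, CmdKind kind)
{
    c->cmd_type = kind;
    memset(c, 0, sizeof(*c));   /* reset zeroes cmd_type too -> shows up as 0 */
}

/* Fixed order: reset first, then write the command type back. */
static void
update_entry_fixed(Counters *c, CmdKind kind)
{
    memset(c, 0, sizeof(*c));
    c->cmd_type = kind;         /* survives the reset */
}
```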
dependabot[bot] f7dc7fb5fe
Bump github/codeql-action from 3.28.19 to 3.29.0 (#541)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.19 to 3.29.0.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](fca7ace96b...ce28f5bb42)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.29.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-17 14:25:52 +02:00
dependabot[bot] 38f13e893f
Bump github/codeql-action from 3.28.18 to 3.28.19 (#540)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.18 to 3.28.19.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](ff0a06e83c...fca7ace96b)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.19
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-11 14:09:21 +02:00
dependabot[bot] 9d2f2cd8cc
Bump ossf/scorecard-action from 2.4.1 to 2.4.2 (#537)
Bumps [ossf/scorecard-action](https://github.com/ossf/scorecard-action) from 2.4.1 to 2.4.2.
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](f49aabe0b5...05b42c6244)

---
updated-dependencies:
- dependency-name: ossf/scorecard-action
  dependency-version: 2.4.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-06 13:46:18 +02:00
dependabot[bot] d0237f8d83
Bump github/codeql-action from 3.28.17 to 3.28.18 (#534)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.17 to 3.28.18.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](60168efe1c...ff0a06e83c)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.18
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-21 12:01:55 +02:00
dependabot[bot] d116dd47fe
Bump codecov/codecov-action from 5.4.2 to 5.4.3 (#535)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.4.2 to 5.4.3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](ad3126e916...18283e04ce)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 5.4.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-21 12:01:33 +02:00
dependabot[bot] 9c72c2e73d
Bump github/codeql-action from 3.28.16 to 3.28.17 (#533)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.16 to 3.28.17.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](28deaeda66...60168efe1c)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.17
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 23:58:23 +02:00
dependabot[bot] 76424b6c64
Bump github/codeql-action from 3.28.14 to 3.28.16 (#532)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.14 to 3.28.16.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](fc7e4a0fa0...28deaeda66)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.16
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-28 20:44:08 +02:00
dependabot[bot] f76b1860e3
Bump codecov/codecov-action from 5.4.0 to 5.4.2 (#531)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.4.0 to 5.4.2.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](0565863a31...ad3126e916)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 5.4.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-23 19:02:17 +02:00
Artem Gavrilov a7edd766e3
Update CODEOWNERS (#529) 2025-04-16 15:44:18 +02:00
Artem Gavrilov 24c1c59416
PG-1370 PGSM 2.1.1 release (#514)
PG-1370 Bump PGSM version up to 2.1.1
2025-04-09 18:57:44 +02:00
dependabot[bot] b860effd97
Bump github/codeql-action from 3.28.13 to 3.28.14 (#528)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.13 to 3.28.14.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](1b549b9259...fc7e4a0fa0)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.14
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-08 10:22:49 +02:00
dependabot[bot] 64b08e422c
Bump github/codeql-action from 3.28.11 to 3.28.13 (#525)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.11 to 3.28.13.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](6bb031afdd...1b549b9259)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-02 18:22:48 +02:00
dependabot[bot] acd559842f
Bump actions/upload-artifact from 4.6.1 to 4.6.2 (#526)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.1 to 4.6.2.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](4cec3d8aa0...ea165f8d65)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-02 18:22:24 +02:00
Muhammad Aqeel 8bbb49e409
Adds date timestamp to keep packages in different directories. (#527) 2025-03-27 16:31:25 +05:00
dependabot[bot] 7bddd5a033
Bump github/codeql-action from 3.28.10 to 3.28.11 (#524)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.10 to 3.28.11.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](b56ba49b26...6bb031afdd)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-11 14:24:19 +02:00
dependabot[bot] 5312f6f8a7
Bump actions/upload-artifact from 4.6.0 to 4.6.1 (#523)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.0 to 4.6.1.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](65c4c4a1dd...4cec3d8aa0)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-05 12:26:33 +02:00
dependabot[bot] c305b8a086
Bump github/codeql-action from 3.28.9 to 3.28.10 (#522)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.9 to 3.28.10.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](9e8d0789d4...b56ba49b26)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-05 12:25:58 +02:00
dependabot[bot] 8dcf24a879
Bump ossf/scorecard-action from 2.4.0 to 2.4.1 (#521)
Bumps [ossf/scorecard-action](https://github.com/ossf/scorecard-action) from 2.4.0 to 2.4.1.
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](62b2cac7ed...f49aabe0b5)

---
updated-dependencies:
- dependency-name: ossf/scorecard-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-05 12:25:36 +02:00
dependabot[bot] 32b1beb6ff
Bump codecov/codecov-action from 5.3.1 to 5.4.0 (#520)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.3.1 to 5.4.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](13ce06bfc6...0565863a31)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-05 12:25:14 +02:00
Andreas Karlsson 9333608c3a PG-1349 Remove call to LWLockRelease() in PG_CATCH()
It is not safe to release an LWLock in a catch section without
incrementing InterruptHoldoffCount, so instead we simply do not release
the lock here.
2025-02-20 17:30:24 +01:00
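For reference, a hedged sketch of the pattern involved, using the standard PostgreSQL PG_TRY/PG_CATCH and LWLock APIs; the lock variable and function are placeholders:

```
#include "postgres.h"
#include "storage/lwlock.h"

extern LWLock *pgsm_lock;   /* placeholder for the extension's shared lock */

static void
do_work_under_lock(void)
{
    LWLockAcquire(pgsm_lock, LW_SHARED);

    PG_TRY();
    {
        /* ... work that may elog(ERROR) ... */
    }
    PG_CATCH();
    {
        /*
         * The removed code released the lock here. Per the commit message,
         * that is unsafe without incrementing InterruptHoldoffCount, so the
         * fix is simply not to release the lock in the catch block and let
         * the normal error cleanup release it.
         */
        PG_RE_THROW();
    }
    PG_END_TRY();

    LWLockRelease(pgsm_lock);
}
```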
Andreas Karlsson 4ebb3d1f36 PG-1349 Prevent LWLock deadlocks from happening
Instead of trying to fix every case where we could throw an error and
handling that properly, we just make sure to disable the error capture
of the hook while our backend holds the lock.

We keep the check for IsSystemOOM() in the hook even though it might
no longer be relevant, because if we are out of memory there is little
point in logging the error anyway.

This is done via a global variable, similar to the
__pgsm_do_not_capture_error variable that we are replacing, which we
also use in one place to disable recursive calls to the log hook
where we do not hold the lock.

A potential future improvement would be to make this variable a counter,
or have two separate globals, so that we could guard against recursive
calls to the hook running us out of stack and not just prevent the
deadlocks.
2025-02-20 11:38:37 +01:00
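A rough sketch of the flag-based guard the commit describes; all names here are illustrative rather than the extension's actual identifiers:

```
#include <stdbool.h>

/* Illustrative global flag, in the spirit of the variable that replaces
 * __pgsm_do_not_capture_error. */
static bool pgsm_hook_disabled = false;

/* Called on the path that takes the shared LWLock. */
static void
pgsm_store_sketch(void)
{
    pgsm_hook_disabled = true;   /* errors raised while we hold the lock must
                                  * not re-enter the hook and re-take the lock */
    /* acquire lock, update the shared hash, release lock ... */
    pgsm_hook_disabled = false;
}

/* Called from the log/error hook. */
static void
pgsm_capture_error_sketch(const char *message)
{
    if (pgsm_hook_disabled)
        return;                  /* avoids the deadlock and recursive calls */
    (void) message;
    /* ... store the error entry, which itself takes the lock ... */
}
```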
Artem Gavrilov fd43b75153
Revert "PG-156: replace query placeholders with actual arguments for… (#517)
Revert "PG -156: replace query placeholders with actual arguments for prepared statements (#481)"

This reverts commit c921d483a8.
2025-02-17 19:13:15 +02:00
Artem Gavrilov c949d21656
Add OSSF best practices badge (#507) 2025-02-14 14:34:31 +02:00
dependabot[bot] fbdff8b444
Bump apache/skywalking-eyes from 0.6.0 to 0.7.0 (#512)
Bumps [apache/skywalking-eyes](https://github.com/apache/skywalking-eyes) from 0.6.0 to 0.7.0.
- [Release notes](https://github.com/apache/skywalking-eyes/releases)
- [Changelog](https://github.com/apache/skywalking-eyes/blob/main/CHANGES.md)
- [Commits](cd7b195c51...5c5b974209)

---
updated-dependencies:
- dependency-name: apache/skywalking-eyes
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-11 11:35:44 +02:00
dependabot[bot] e099628e18
Bump github/codeql-action from 3.28.8 to 3.28.9 (#513)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.8 to 3.28.9.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](dd746615b3...9e8d0789d4)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-11 11:35:16 +02:00
dependabot[bot] d2b2eafc85
Bump github/codeql-action from 3.28.5 to 3.28.8 (#511)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.5 to 3.28.8.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](f6091c0113...dd746615b3)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-04 14:30:59 +02:00
dependabot[bot] ba8d7bd83b
Bump codecov/codecov-action from 5.1.2 to 5.3.1 (#509)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.1.2 to 5.3.1.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](1e68e06f1d...13ce06bfc6)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-28 16:04:20 +02:00
dependabot[bot] b47ae95fa8
Bump github/codeql-action from 3.28.1 to 3.28.5 (#510)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.1 to 3.28.5.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](b6a472f63d...f6091c0113)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-28 16:03:56 +02:00
Artem Gavrilov 980116acea
Add issue assignees (#508) 2025-01-24 12:01:44 -03:00
dependabot[bot] ef9518c98e
Bump github/codeql-action from 3.28.0 to 3.28.1 (#504)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.0 to 3.28.1.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](48ab28a6f5...b6a472f63d)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-14 14:59:26 +02:00
dependabot[bot] 2769e6dcb2
Bump actions/upload-artifact from 4.5.0 to 4.6.0 (#505)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.5.0 to 4.6.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](6f51ac03b9...65c4c4a1dd)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-14 14:59:01 +02:00
dependabot[bot] d0b67ef32a
Bump github/codeql-action from 3.27.9 to 3.28.0 (#503)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.27.9 to 3.28.0.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](df409f7d92...48ab28a6f5)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-23 19:38:41 +02:00
dependabot[bot] d008bbbfa7
Bump actions/upload-artifact from 4.4.3 to 4.5.0 (#502)
* Bump actions/upload-artifact from 4.4.3 to 4.5.0

Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.4.3 to 4.5.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4.4.3...6f51ac03b9356f520e9adb1b1b7802705f340c2b)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update .github/workflows/scorecard.yml

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Artem Gavrilov <artem.gavrilov@percona.com>
2024-12-23 19:38:15 +02:00
dependabot[bot] 971c62025e
Bump codecov/codecov-action from 5.1.1 to 5.1.2 (#501)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.1.1 to 5.1.2.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](7f8b4b4bde...1e68e06f1d)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-23 19:36:56 +02:00
dependabot[bot] 8ed7f1dbb7
Bump github/codeql-action from 3.27.6 to 3.27.9 (#499)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.27.6 to 3.27.9.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](aa57810251...df409f7d92)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-16 19:55:54 +02:00
dependabot[bot] 2a5a2f07c4
Bump github/codeql-action from 3.27.5 to 3.27.6 (#498)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.27.5 to 3.27.6.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](f09c1c0a94...aa57810251)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-11 14:41:17 +02:00
dependabot[bot] 7d57e2476f
Bump codecov/codecov-action from 5.0.7 to 5.1.1 (#497)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.0.7 to 5.1.1.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](015f24e681...7f8b4b4bde)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-11 14:40:51 +02:00
dependabot[bot] 6c7512afa5
Bump codecov/codecov-action from 5.0.2 to 5.0.7 (#496)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.0.2 to 5.0.7.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](5c47607acb...015f24e681)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-25 21:21:02 +02:00
dependabot[bot] ee7f6d071f
Bump github/codeql-action from 3.27.4 to 3.27.5 (#495)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.27.4 to 3.27.5.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](ea9e4e3799...f09c1c0a94)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-25 21:20:38 +02:00
Muhammad Aqeel 7f1344e12d
Fixes llvm-devel package installation issue. (#494) 2024-11-22 12:27:18 +05:00
Artem Gavrilov 85f0401b96
Create pull_request_template.md (#489) 2024-11-19 09:31:37 +01:00
dependabot[bot] df65136090
Bump codecov/codecov-action from 4.6.0 to 5.0.2 (#491)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4.6.0 to 5.0.2.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](b9fd7d16f6...5c47607acb)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 22:21:04 +02:00
dependabot[bot] 534790f39b
Bump ossf/scorecard-action from 2.3.1 to 2.4.0 (#492)
Bumps [ossf/scorecard-action](https://github.com/ossf/scorecard-action) from 2.3.1 to 2.4.0.
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](0864cf1902...62b2cac7ed)

---
updated-dependencies:
- dependency-name: ossf/scorecard-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 22:20:38 +02:00
dependabot[bot] 646f01420f
Bump github/codeql-action from 3.27.3 to 3.27.4 (#493)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.27.3 to 3.27.4.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](396bb3e453...ea9e4e3799)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 22:20:11 +02:00
dependabot[bot] c63e172100
Bump actions/checkout from 4.1.1 to 4.2.2 (#490)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.1 to 4.2.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4.1.1...11bd71901bbe5b1630ceea73d27597364c9af683)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 22:19:34 +02:00
StepSecurity Bot 091b5866d4
[StepSecurity] ci: Harden GitHub Actions (#488)
Signed-off-by: StepSecurity Bot <bot@stepsecurity.io>
2024-11-14 15:19:16 +02:00
Artem Gavrilov 186c2e4795
Add OSSF Scorecard (#487)
* Create scorecard.yml

* Add OSSF scorecard badge

* Update README.md
2024-11-14 11:11:37 +02:00
Diego Fronza c921d483a8
PG-156: replace query placeholders with actual arguments for prepared statements (#481)
* Denormalize prepared statement queries

Added support for extracting query arguments for prepared statements
when `pg_stat_monitor.pgsm_normalized_query` is off.

Previously pg_stat_monitor was unable to extract the arguments for
prepared statements, thus leaving queries with placeholders $1
.. $N instead of the actual arguments.

* Optimize query denormalization

Instead of copying original query text byte by byte, copy data between
query placeholders in chunks, example:

`INSERT INTO foo(a, b, c) VALUES('test', 100, 'test again')`

Would result in normalized query:

`INSERT INTO foo(a, b, c) VALUES($1, $2, $3)`

The original patch would copy the parts between placeholders byte by
byte, e.g. `INSERT INTO foo(a, b, c) VALUES(`. Instead we can copy this
whole block at once: one function call and maybe one buffer re-allocation
per call.

Also make use of `appendBinaryStringInfo` to avoid calculating string
length as we have this info already.

* Optimize query denormalization (2)

Avoid allocating an array of strings for extracting query argument
values, instead append the current parameter value directly in the
buffer used to store the denormalized query.

This avoids not only unnecessary memory allocations, but also copying
data between temporary memory and the buffer.

* Store denormalized query only under certain constraints

This commit introduces a little optimization along with a feature, it
stores the query in denormalized form only under the circumstances
below:

- The pgsm_normalized_query GUC is disabled (off).
- The query is seen for the first time, or the query's total
  execution time exceeds the mean execution time calculated for
  the previous queries.

Having the query that took the most execution time along with its
arguments could help users further investigate performance issues.

* Fix regression tests

When query normalization is disabled utility queries like SELECT 10+20
are now stored as is, instead of SELECT $1+$2.

Also, when functions or subqueries are created, the arguments used
internally by the function or subquery will be replaced by NULL instead
of $1..$N. The actual arguments will be displayed when the function or
subquery is actually invoked.

* Add query denormalization regression test for prepared statements

Ensures that the denormalization of prepared statements is working, and
also ensures that a query which takes more time to execute replaces the
previously stored denormalized query.

* Updated pgsm_query_id regression tests

With the query denormalization feature, integer literals used in
SQL, like 1 or 2, could create some confusion as to whether they are
placeholders or constant values, so this commit updates the
pgsm_query_id regression test to use different integer literals to avoid
confusion.

* Improve query denormalization regression test

Add a new test case:

1. Execute a prepared statement with larger execution time first.
2. Execute the same prepared statement with cheap execution time.
3. Ensures that the denormalized heavy query is not replaced by the
   cheaper one.

* Format source using pgindent

* Fix top query regression tests on PG 12,13

On PG 12, 13, the internal return instruction in the following function:
```
CREATE OR REPLACE FUNCTION add(int, int) RETURNS INTEGER AS
  $$
  BEGIN
     return (select $1 + $2);
  END; $$ language plpgsql;
```

Is stored as SELECT (select expr1 + expr2).

From PG 14 onward it is stored just as SELECT (expr1 + expr2).
2024-11-01 19:28:16 -03:00
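The chunk-copy optimization described in this commit can be sketched roughly as below, using PostgreSQL's StringInfo API; the `locs`/`lens`/`values` inputs are assumptions standing in for parameter locations and textual argument values, not the extension's real interfaces:

```
#include "postgres.h"
#include "lib/stringinfo.h"

/*
 * Illustrative only: rebuild the query text by copying the spans between
 * placeholders in whole chunks and appending each parameter value in place
 * of its $N placeholder, instead of copying byte by byte.
 */
static char *
denormalize_query_sketch(const char *query, int nparams,
                         const int *locs, const int *lens,
                         const char *const *values)
{
    StringInfoData buf;
    int         prev = 0;
    int         i;

    initStringInfo(&buf);
    for (i = 0; i < nparams; i++)
    {
        /* copy the chunk before this placeholder in a single call */
        appendBinaryStringInfo(&buf, query + prev, locs[i] - prev);
        /* append the actual argument value in place of $N */
        appendStringInfoString(&buf, values[i]);
        prev = locs[i] + lens[i];
    }
    appendStringInfoString(&buf, query + prev);   /* trailing chunk */
    return buf.data;
}
```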
Artem Gavrilov 467394fb6e
Add timeouts to CI jobs (#484) 2024-08-26 12:00:58 -03:00
Artem Gavrilov bcf7bed60b
Use PG17-beta3 (#483) 2024-08-16 11:52:31 -03:00
Artem Gavrilov 0c50b23d6f
Prepare release 2.1.0 (#482)
* Update META.json

* Drop RELEASE_NOTES.md

* Update meson.build

* Remove mkdocs.yml
2024-08-08 12:17:38 +02:00
Artem Gavrilov 3bb65798fd
Format sources (#475)
* Temporarily disable workflows

* Add indent target to Makefile

* Add CI workflow to check if sources formatted

* Fix

* Fix

* Fix

* Fix

* Fix

* Fix

* Fix

* Fix

* Fix

* Format sources

* Add comments

* Revert "Temporary disable workflows"

This reverts commit 7e11cf6154.

* Revert "Format sources"

This reverts commit 6ef992d9f0.

* Use PG17 for code formatting

* Format sources

* Revert "Format sources"

This reverts commit 34061e1f82.

* Format sources
2024-08-07 15:12:24 +02:00
Zsolt Parragi 130d6b5fce
PG-592: Treat queries with different parent queries as separate entries (#403)
* PG-592: Treat queries with different parent queries as separate entries

1. Previously pg_stat_monitor had `topquery` and `topqueryid` fields, but they only held a sample:
they showed only one of the top queries executing the specific query.

With this change, the same entry executed by two different functions will result in two entries in the statistics table.

2. This also fixes a bug where the content of these fields disappeared for every second query executed:
previously the update function changed topqueryid to `0` if it was non-zero, and changed it to the proper id when it was 0.

After these changes, the top query is always shown.

3. The previous implementation also leaked dsa memory used to store the parent queries. This is now also fixed.

* PG-502: Fixing review comments

* dsa_free changed to assert as it can never happen
* restructured the ifs to be cleaner
  Note: kept the two-level ifs, as that makes more sense with the assert
  Note: didn't convert nested_level checks to macro, as it is used differently at different parts of the code

* PG-502: Fixing review comments

* PG-592 Add regression test

* Make test compatible with PG12

* Remove redundant line

---------

Co-authored-by: Artem Gavrilov <artem.gavrilov@percona.com>
2024-08-06 23:43:48 +02:00
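Conceptually, the change amounts to making the parent query id part of the hash key; a purely illustrative key shape (not the real pgsm structures):

```
#include <stdint.h>

/*
 * Once the parent query id participates in the key, the same statement run
 * from two different functions hashes to two different entries, instead of
 * one entry carrying a sampled "top query".
 */
typedef struct pgsmHashKeySketch
{
    uint64_t queryid;         /* id of the statement itself */
    uint64_t parent_queryid;  /* id of the invoking query, 0 at top level */
    uint32_t userid;
    uint32_t dbid;
} pgsmHashKeySketch;
```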
Artem Gavrilov 1aa3081eaf
Drop adopters list (#480) 2024-08-01 09:12:20 -03:00
Artem Gavrilov 7680ceafd6
Drop CI workflows for PG11 (#479) 2024-08-01 09:23:26 +02:00
Muhammad Aqeel 778043a5db
[PKG-144]: Fixes issue in command to get clang version. (#478) 2024-07-25 14:02:19 +05:00
Artem Gavrilov d7999f1acf
[PG-644] Add option to disable application name tracking (#469)
* Cache application name for every backend instance

* Improve pg_get_backend_status performance for PG16 and PG17

* Fix

* Make application_name tracking disabled by default

* Make app name tracking opt-out

* Format newly added code with pgindent

* Fix build for PG17

* Fix
2024-07-23 18:49:33 +02:00
Muhammad Aqeel 16ec8362e2
[PKG-140]: Updates build scripts to build pg_stat_monitor with LLVM 1… (#476)
[PKG-140]: Updates build scripts to build pg_stat_monitor with LLVM 17.0.
2024-07-23 14:53:07 +05:00
Artem Gavrilov dacb41f9e4
[PG-810] PG-17 Support (#463)
* Temporarily disable all workflows

* Add build workflow with PG17

* Fix incompatibilities

* Fix 007_settings_pgsm_query_shared_buffer.pl test

* Fix 018_column_names.pl

* Fix 025_compare_pgss.pl

* Remove tuplestore_donestoring usage at all

* Rename I/O timing statistics columns to shared_blk_{read|write}_time

* Fix comments with field numbers

* Fix format

* Revert "Temporary disable all workflows"

This reverts commit 12e75beb63.

* Disable all workflows except check and build for PG 15, 16 and 17

* Fix

* Fix comments

* Fix migration

* Use REL_17_BETA1 in CI

* Add timers tests to 028_temp_block.pl

* Add local blocks timing statistics columns local_blk_{write|read}_time

* Fix t/027_local_blocks.pl test for older PG versions

* Fix

* Add jit_deform_{count|time} metrics

* Fix

* Add stats_since and minmax_stats_since fields

* Revert "Disable all workflows except check and build for PG 15, 16 and 17"

This reverts commit 73febf3aee.

* Fix t/028_temp_block.pl for PG14 and below

* Fix build for PG12

* Add pgdg workflow for PG17

* Try to fix PG pgdg workflow

* Fixes and formatting

* Format code

* Add level tracking regression test

* Fix nesting level tracking

* Format code

* Add level tracking test expected result for PG13

* Fix for PG12

* Skip level tracking regression test for PG version less than 14

* Fix toplevel calculation for older PG version

* Fix level tracking test results

* Fix nesting level counting for older PG version

* Revert "Fix nesting level counting for older PG version"

This reverts commit 3e91da8010.

* Fix level tracking for older PG versions once again

* Set REL_17_BETA2 tag for PG

* Add CI badge for PG17

* Use PG17 for examples in readme
2024-07-18 14:59:57 +02:00
Artem Gavrilov c796995e0c
Add instructions for Trunk, add PGXN badge (#473) 2024-07-12 18:23:38 +03:00
Artem Gavrilov fdec44af94
Add CODEOWNERS file (#472)
Add codeowners file
2024-07-12 13:46:44 +02:00
Artem Gavrilov 0db7f70028
PGXN integration complete (#471)
* Fix and debug

* Temporarily disable all workflows

* Try invalid version tag

* Enable upload step

* Remove pull request trigger

* Revert "Temorary disable all workflows"

This reverts commit 757e04ba58.
2024-07-12 13:46:25 +02:00
Artem Gavrilov d83d202b9c
PGXN integration (#470)
* Update PGXN META.json

* Temporarily disable all workflows

* Add PGXN release workflow draft

* Add pull request trigger

* Install dependencies

* Add sudo

* More sudo

* Try older ubuntu version

* Try

* Once again

* Update PGXN workflow

* Revert "Temorary disable all workflows"

This reverts commit 8d15520a51.

* Use ubuntu 22.04
2024-07-11 16:41:03 +02:00
Muhammad Aqeel 74d98475a8
Fixes clang version issue that conflicts with llvm version in percona… (#468)
Fixes clang version issue that conflicts with llvm version in percona repositories
2024-06-24 11:36:24 +05:00
Muhammad Aqeel 508e35943e
Needs to install percona-release package to get GPG key. (#465) 2024-05-13 12:00:31 +05:00
Muhammad Aqeel 8d974c958f
percona-release.sh is required from release-1.0-28 branch to setup AR… (#464)
percona-release.sh is required from release-1.0-28 branch to setup ARM repo.
2024-05-10 12:02:54 +05:00
Artem Gavrilov a88c23a626
Remove redundant pgsm unistallation step from readme (#462) 2024-04-26 11:11:41 +02:00
Artem Gavrilov 288ec6325f
Add license headers validation (#458)
* Add .licenserc.yaml file

* Fix license headers

* Add github action to check license headers

* Fix workflow

* Fix checkout path

* Rename workflow

* Add debug info

* Disable workflows

* Try fix

* Split check workflow in two jobs

* Try invalid license header

* Comment of failure

* Disable cppcheck job

* Fix licenserc file

* Enable debug logging

* Prevent comments from licence-eye

* Revert "Disable cppcheck job"

This reverts commit 10f55373ea.

* Revert "Disable workflows"

This reverts commit 2e2ead2fa5.

* Fix typo

* Revert "Try invalid license header"

This reverts commit 0cc0c883d2.

* Update year in license headers

* Cleanup

* Fix indention in license header
2024-04-26 10:55:50 +02:00
Muhammad Aqeel 61256faf83
[PKG-33]: Fixes PPG repo name issue for EL9. (#461) 2024-04-25 12:23:12 +05:00
Muhammad Aqeel 2b9817d3ba
[PKG-33]: Fixes PPG repo name issue from Jenkins. (#460) 2024-04-24 23:18:07 +05:00
Muhammad Aqeel 0ba80547e6
[PKG-33]: Updates scripts to build pg_stat_monitor (#459) 2024-04-23 14:49:47 +05:00
Naeem Akhter 5d7c424fdc
Added a tap test case to load multiple PPG extensions in the server before running a test load. (#456)
1. We load and create other extensions that are distributed by Percona
   in PPG. (postgis, pg_repack, pgaudit, pgaudit_set_user, pgpool)
2. Run test data with pgbench.

To make the above test case work, updated the workflows to install the above-mentioned extensions
where we use installed packages from PPG. On workflows where we build the server or use packages
from PGDG, we skip this test case.
2024-04-23 02:49:57 +05:00
Artem Gavrilov c2923b4d61
Create SECURITY.md (#452)
* Create SECURITY.md

* Update supported versions section
2024-04-19 14:48:48 +02:00
Naeem Akhter dce1913154
Updated README to reflect badge for pg-16 and content on the landing page. (#455) 2024-04-19 12:28:00 +02:00
Artem Gavrilov 2ebd163225
Tune CI triggers (#444)
* Use common CI triggers for all workflows

* Tune CI triggers, fix version tag regex

* Escape regex
2024-04-18 16:52:36 +02:00
Artem Gavrilov e3d6dc4af7
Update code-of-conduct.md (#453) 2024-04-17 15:04:50 +02:00
dependabot[bot] 0275fc742e
Bump codecov/codecov-action from 2 to 4 (#451)
* Bump codecov/codecov-action from 2 to 4

Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 2 to 4.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v2...v4)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add token parameter to codecov action

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Artem Gavrilov <artem.gavrilov@percona.com>
2024-04-16 15:17:59 +02:00
Artem Gavrilov f72b8a9537
Add forum badge in readme (#447) 2024-04-16 12:34:37 +02:00
Artem Gavrilov 175e568515
[Proposal] Add issue templates (#446)
Add issue templates
2024-04-16 12:33:56 +02:00
dependabot[bot] 0bf2846748
Bump actions/upload-artifact from 2 to 4 (#450)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 2 to 4.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v2...v4)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-10 12:34:50 +02:00
dependabot[bot] e303899652
Bump actions/checkout from 2 to 4 (#449)
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-10 12:25:02 +02:00
Artem Gavrilov 43f3c27141
Add dependabot config (#443) 2024-04-10 11:54:39 +02:00
Artem Gavrilov e1ad88a580
Fix workflows after CI image upgrade (#445)
* Temporarily disable all workflows except one

* Try ubuntu 20.04

* Remove redundant working dir config

* Fix

* Try perform some operations from default user

* Try fix

* Revert "Try fix"

This reverts commit 3ed843c7462f69d7ad74ba6f60c93544e1ea549c.

* Revert "Try perform some operations from default user"

This reverts commit 206046714d888b518bce2f83f567176978a73af9.

* Switch back to ubuntu 22.04

* Add debug

* Try fix

* Hit CI

* More debug

* Revert "Remove redundant working dir config"

This reverts commit 3d1ade8948.

* Revert "Fix"

This reverts commit 05dbeed894.

* Try fix

* Revert some changes

* Revert "Temporary disable all workflows except one"

This reverts commit 93b35036fb.

* Fix pgdg workflows

* Fix ppg workflows
2024-04-09 18:12:15 +02:00
Artem Gavrilov 7829869dc7
Fix partition_prune testcase (#440)
* Disable workflows

* Disable pg_stat_monitor tests

* Add no-locale to initdb

* Try with enabled compute_query_id

* Enable tests

* Cleanup

* Set compute_query_id parameter to regress mode

* Revert "Disable workflows"

This reverts commit f0b85b8b4a.

* Fix pg 14 and 15 build workflows

* Fix

* Cleanup
2024-04-09 14:05:58 +02:00
Artem Gavrilov 684e6483b5
Fix cppcheck workflow (#441)
* Upgrade ubuntu from 20.04 to 22.04

* Temporarily remove all workflows except cppcheck

* Try ubuntu 23.10

* Revert "Try ununty 23.10"

This reverts commit c8590b60ed.

* Try cppcheck built from sources

* Add sudo

* Bump checkout action version in cppcheck workflow

* Revert "Temporary remove all workflows except cppcheck"

This reverts commit 9f32e94992.
2024-04-05 19:42:40 +02:00
Artem Gavrilov c89879e372
Fix IPC::Run perl module name in CI (#438) 2024-04-05 19:42:00 +02:00
Artem Gavrilov 64c71f98de
Fix integer overflow (#435)
* Fix MAX_BUCKETS_MEM overflow

* Fix MAX_QUERY_BUF overflow

* Fix int overflow in IsBucketValid function

* Add missing newline

* Remove test for max value of pgsm_query_shared_buffer parameter

* Tune tests

* Cleanup

* Use int64 type instead of long long
2024-04-05 14:34:30 +02:00
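The overflow class being fixed is illustrated by the sketch below: a megabyte-sized setting multiplied out to bytes in 32-bit arithmetic overflows, while widening to int64 before the multiplication keeps the value exact (the parameter name is a stand-in for the real GUC):

```
#include <stdint.h>

static int pgsm_query_shared_buffer_mb = 10000;   /* e.g. 10 GB */

/*
 * 10000 * 1024 * 1024 does not fit in a 32-bit int; the (int64_t) cast
 * forces the multiplication to happen in 64-bit arithmetic, in the spirit
 * of the "use int64 instead of long long" change above.
 */
static int64_t
max_query_buf_bytes(void)
{
    return (int64_t) pgsm_query_shared_buffer_mb * 1024 * 1024;
}
```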
Muhammad Aqeel 7ea569e6bc
Merge pull request #437 from maqeel75/main
Rename .ddeb extension to .deb
2024-03-05 18:03:16 +05:00
Muhammad Aqeel 3d846105c5 Rename .ddeb extension to .deb 2024-03-05 16:22:19 +05:00
Muhammad Aqeel 7ecd10a7de
Merge pull request #436 from maqeel75/main
[DISTPG-724]: Updated version of pg_stat_monitor and fixed build scri…
2024-02-07 14:47:47 +05:00
Muhammad Aqeel 5bb67963e7 [DISTPG-724]: Updated version of pg_stat_monitor and fixed build script issues. 2024-02-05 13:34:41 +05:00
Hamid Akhtar 75f86f54b1
Version bumped for the 2.0.4 release (#434)
Version bumped for the 2.0.4 release.
2023-12-12 17:26:55 +01:00
Hamid Akhtar 4863020ccd PG-646: pg_stat_monitor hangs in pgsm_store
A potential lock contention could've been caused when an OOM warning
was being emitted by the pgsm_store function. This could lead to the
pg_store_error function calling the pgsm_store function and thereby trying
to acquire an exclusive lock when a shared lock was already held by the same
process. This warning is now guarded by a protection block.
2023-11-24 15:29:45 +05:00
Hamid Akhtar 0a8ac38de9
Version bumped for the 2.0.3 release. (#430) 2023-11-14 17:30:46 +05:00
Hamid Akhtar a35689bd36
Merge pull request #427 from Naeem-Akhter/pg16
Added workflow file for pgsm ppg-16 package testing.
2023-11-08 14:11:46 +05:00
Naeem Akhter 3ed25d5511 Added workflow file for pgsm ppg-16 package testing. 2023-11-07 00:01:07 +05:00
Hamid Akhtar ddc5a1745b
Merge pull request #425 from codeforall/main
PG-645: pg_stat_monitor crashes PostgreSQL if there is citus ..
2023-11-01 13:36:26 +05:00
Muhammad Usama 823bfb9aa7 PG-645: pg_stat_monitor crashes PostgreSQL if there is citus ..
Do not look for the query in the hash if no query string is
provided in the planner hook.
2023-11-01 10:54:35 +05:00
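A hedged sketch of such a guard in a PG13+ planner hook; function and variable names are examples only, not the extension's actual code:

```
#include "postgres.h"
#include "optimizer/planner.h"

static planner_hook_type prev_planner_hook = NULL;

static PlannedStmt *
pgsm_planner_sketch(Query *parse, const char *query_string,
                    int cursorOptions, ParamListInfo boundParams)
{
    if (query_string == NULL || query_string[0] == '\0')
    {
        /* no query text (e.g. when Citus plans internally): skip the hash
         * lookup entirely and just plan the query */
        if (prev_planner_hook)
            return prev_planner_hook(parse, query_string, cursorOptions, boundParams);
        return standard_planner(parse, query_string, cursorOptions, boundParams);
    }

    /* ... normal pg_stat_monitor bookkeeping around planning would go here ... */
    return standard_planner(parse, query_string, cursorOptions, boundParams);
}
```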
Hamid Akhtar 67493acda4
Merge pull request #422 from Naeem-Akhter/pg16
Adding pg-16 build and PGDG workflows.
2023-10-23 16:40:49 +05:00
Naeem Akhter 715a4ceb82 Adding pg-16 build and PGDG workflows. 2023-09-12 14:37:19 +05:00
Hamid Akhtar 3b9d125ba0 Version bumped for the 2.0.2 release. 2023-09-12 12:46:13 +05:00
Hamid Akhtar 9cf2fb8d56 PostgreSQL 16 support for PGSM
* Fixing issues with GUC initialization and function renames
* Fixed regression issues with PG16
2023-09-12 12:45:58 +05:00
Hamid Akhtar f2228798ad
Merge pull request #419 from dutow/pg16
Postgres 16 support for PGSM
2023-09-11 13:11:15 +05:00
Zsolt Parragi 38ee75cc60 Postgres 16 support for PGSM
* PG16 requires changes around one of the hooks, ifdef added
* Meson build file added
2023-08-17 18:06:00 +02:00
Hamid Akhtar 726556dbaf Updating release notes for the 2.0.1 release. 2023-05-25 19:43:49 +05:00
Hamid Akhtar 67a54c792d
Merge pull request #399 from codeforall/main
Merging changes back to the main branch after the 2.0.1 release
2023-05-24 14:42:01 -06:00
EvgeniyPatlan d2f133657f ENG-7 Update pg-stat-monitor.spec 2023-05-24 12:29:00 +05:00
Hamid Akhtar 639bf6f158 Version bumped for the 2.0.1 release. 2023-05-24 12:28:16 +05:00
Hamid Akhtar 39d9419bd0 PG-624: pg_stat_monitor: Possible server crash when running pgbench with pg_stat_monitor loaded (#396)
PG-624: pg_stat_monitor: Possible server crash when running pgbench
with pg_stat_monitor loaded

It appears that this issue was being caused by improper handling of
a dynamic number of buckets. This commit resolves the issue.

Also, as part of a larger cleanup, the memory context has been moved to
local space from shared storage, and some unwanted
and no-longer-needed variables have been removed.

Co-authored-by: Muhammad Usama <muhammad.usama@percona.com>
2023-05-24 12:27:40 +05:00
Hamid Akhtar 2ceb47e3cd PG-613: Postgresql crashes with Segmentation fault when query plan is enabled on large queries
The return value of snprintf was incorrectly being recorded as the plan
length. That's been resolved.

As part of this fix, we've also eliminated the possibility of a potential
memory leak when the plan text was being saved.

Co-authored-by: Hamid Akhtar <hamid.akhtar@percona.com>
Co-authored-by: Muhammad Usama <muhammad.usama@percona.com>
2023-05-24 12:26:30 +05:00
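The snprintf pitfall the commit refers to is generic C behaviour; a small illustrative fix (buffer size and names are made up):

```
#include <stdio.h>

#define PLAN_TEXT_LEN 1024

/*
 * snprintf() returns the length the output *would* have had, which can be
 * larger than the buffer. Recording that value as the stored plan length
 * reads past the buffer on large plans; clamping it to what was actually
 * written avoids the crash described above.
 */
static size_t
store_plan_text(char *dest, const char *plan)
{
    int written = snprintf(dest, PLAN_TEXT_LEN, "%s", plan);

    if (written < 0)
        return 0;                          /* encoding error */
    if ((size_t) written >= PLAN_TEXT_LEN)
        written = PLAN_TEXT_LEN - 1;       /* truncated: clamp to what fits */
    return (size_t) written;
}
```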
Mikail 7623f15e68
Make pg_config configurable in Makefile invocation (#392)
This allows overriding the `PG_CONFIG` variable through environment variables
when wanting to use a pg_config that is not in the `PATH`.
2023-05-02 16:26:55 +05:00
Hamid Akhtar 1617f0df09 Updating release notes for the 2.0.0 release 2023-03-01 19:40:24 +05:00
Hamid Akhtar cf5be6dd5d Adding pgsm_query_id for pgsm_store_error function 2023-03-01 19:40:24 +05:00
Hamid Akhtar 9ecd2ccbb7 Updating formatting of source code 2023-03-01 19:40:24 +05:00
EvgeniyPatlan c7cb3d08be
Merge pull request #390 from EvgeniyPatlan/main
DISTPG-530 Update build flow
2023-02-27 12:15:35 +01:00
EvgeniyPatlan df3945aa39
Fix indentation 2023-02-27 12:11:42 +01:00
EvgeniyPatlan a896be9f4d
DISTPG-530 Update build flow 2023-02-27 12:04:28 +01:00
Hamid Akhtar 088c85f8db Update RELEASE_NOTES.md
Resolving formatting issue.
2023-02-27 15:01:31 +05:00
Hamid Akhtar faa938b8f1 Fixing code indentation with pgindent 2023-02-27 14:47:27 +05:00
Hamid Akhtar 1883b05fc7 Updating release notes for pg_stat_monitor 2.0.0 release 2023-02-27 14:46:28 +05:00
Naeem Akhter 3541ac0d26
PG-608: pg_stat_monitor: Update histogram TAP testcase for sub-ms. (#386) 2023-02-24 02:15:36 +05:00
Muhammad Usama c31d3ba332
PG-609: Bump version to 2.0.0 for the release. (#385)
Version bumped for the 2.0.0 release.
2023-02-23 23:02:18 +05:00
Muhammad Usama fe23d31bf9
PG-607: Allow histogram to track queries in sub-ms time brackets (#384)
* PG-607: Allow histogram to track queries in sub-ms time brackets

Updated the GUC configuration and the relevant histogram functionality
to track queries at a finer granularity than milliseconds. This is done by saving
the GUC values for the histogram min and max as real (double) values.

All test cases except for the 030 tap test are passing. The test case
needs an update.

* Fixing regression issues for v12 and below because of histogram changes.
2023-02-23 21:24:40 +05:00
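A hedged sketch of what real-typed histogram GUCs look like with PostgreSQL's DefineCustomRealVariable; the GUC names, descriptions and defaults below are placeholders, not the extension's actual settings:

```
#include "postgres.h"
#include "utils/guc.h"

/* Storing the bounds as doubles (ms with a fractional part) is what makes
 * sub-millisecond buckets possible. */
static double histogram_min_ms = 0.1;
static double histogram_max_ms = 100000.0;

static void
define_histogram_gucs_sketch(void)
{
    DefineCustomRealVariable("pg_stat_monitor.example_histogram_min",
                             "Lower bound of the query-time histogram (ms).",
                             NULL,
                             &histogram_min_ms,
                             0.1, 0.0, 50000000.0,
                             PGC_POSTMASTER, 0,
                             NULL, NULL, NULL);
    DefineCustomRealVariable("pg_stat_monitor.example_histogram_max",
                             "Upper bound of the query-time histogram (ms).",
                             NULL,
                             &histogram_max_ms,
                             100000.0, 10.0, 50000000.0,
                             PGC_POSTMASTER, 0,
                             NULL, NULL, NULL);
}
```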
Muhammad Usama 05ffcac2fa
PG-606: New GUC required for enabling/disabling of pgsm_query_id calculation… (#383)
* PG-606: New GUC required for enabling/disabling of pgsm_query_id calculation

Adds a new GUC pg_stat_monitor.pgsm_enable_pgsm_query_id to enable/disable
pgsm query id calculation. Apart from that, the patch also refactors the GUC-related
code to match PostgreSQL conventions.

Moreover, the commit also changes the pgsm_enable_overflow GUC to a boolean
instead of an enum.
2023-02-23 19:08:09 +05:00
Ibrar Ahmed ce32f6f15d
Merge pull request #382 from EngineeredVirus/main
PG-542: Performance improvement of pg_stat_monitor.
2023-02-23 14:06:03 +05:00
Hamid Akhtar 9d2efb8913 PG-542: Performance improvement of pg_stat_monitor.
Refining the code for storing the IP locally.
2023-02-23 13:18:38 +05:00
Hamid Akhtar ccaa910c35 PG-542: Performance improvement of pg_stat_monitor.
Saving the client IP address once per backend lifetime. This avoids
repeating the expensive operation, and hence improves performance
significantly.
2023-02-23 02:41:39 +05:00
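The caching idea can be sketched as backend-local memoization; resolve_client_ip() is a hypothetical stand-in for the expensive lookup, not a real pgsm function:

```
#include <stdbool.h>
#include <stdint.h>

static uint32_t cached_client_ip = 0;
static bool     client_ip_cached = false;

extern uint32_t resolve_client_ip(void);   /* hypothetical expensive lookup */

/* Resolve the client address once per backend lifetime and reuse it,
 * instead of repeating the lookup for every stored query. */
static uint32_t
pgsm_get_client_ip(void)
{
    if (!client_ip_cached)
    {
        cached_client_ip = resolve_client_ip();
        client_ip_cached = true;
    }
    return cached_client_ip;
}
```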
Hamid Akhtar 8482bcc347 Revert "PG-542: Performance improvement of pg_stat_monitor."
This reverts commit 7b0e603bcf.
2023-02-23 02:41:39 +05:00
Ibrar Ahmed a8fc7081d0
Merge pull request #380 from EngineeredVirus/main
PG-542: Performance improvement of pg_stat_monitor.
2023-02-23 01:55:13 +05:00
Hamid Akhtar 5f6425a6f8 PG-542: Performance improvement of pg_stat_monitor.
Performance-related changes where some calculations are moved out
of the spinlock in the pgsm_update_entry function. This should
improve performance a bit.

Also, moved the histogram calculation function to init. The update
function now only searches an array rather than recalculating the
histogram bucket timings.

Updated the conditional statement to update the parent query only when
required.
2023-02-23 01:53:30 +05:00
Ibrar Ahmed 1fcdbcefaf
Merge pull request #379 from codeforall/main
PG-542: Performance improvement of pg_stat_monitor.
2023-02-23 01:49:28 +05:00
Muhammad Usama 7b0e603bcf PG-542: Performance improvement of pg_stat_monitor.
Saving the client IP address once per backend lifetime. This avoids
repeating the expensive operation, and hence improves performance
significantly.
2023-02-23 01:33:23 +05:00
Ibrar Ahmed 7b9711eb7d
Merge pull request #378 from EngineeredVirus/main
PG-588: Some queries are not being normalised.
2023-02-23 01:24:31 +05:00
Naeem Akhter be1b4af180
PG-605: Fix TAP Tests framework crash. (#377) 2023-02-22 23:47:03 +05:00
Hamid Akhtar de66ef0fce PG-588: Some queries are not being normalised.
This bug uncovered serious issues with how the data was being stored by PGSM,
so it required a complete redesign.

pg_stat_monitor now stores the data locally within the backend process's local
memory. The data is only stored when the query completes. This reduces the
number of lock acquisitions that were previously needed during various stages
of the execution. It also avoids data loss in case the current bucket
changes during execution. In addition, the unavailability of the jumble state during later
stages of execution was causing pg_stat_monitor to save a non-normalized query.
This was a major problem as well.

A pg_stat_monitor-specific memory context is implemented. It is used for saving
data locally. The memory context callback helps us clear the locally saved data
so that we do not store it multiple times in the shared hash.

As part of this major rewrite, the pgss references in function and variable names
are changed to pgsm. The memory footprint of the entries is reduced, data types
are corrected where needed, and we've removed unused variables, functions and
macros.

This patch was mutually created by:
Co-authored-by: Hamid Akhtar <hamid.akhtar@percona.com>
Co-authored-by: Muhammad Usama <muhammad.usama@percona.com>
2023-02-22 19:31:52 +05:00
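A rough sketch of the local-storage idea using PostgreSQL's memory context reset callbacks; the context name and functions are illustrative, not the rewrite's actual code:

```
#include "postgres.h"
#include "utils/memutils.h"

/* Per-backend context holding the locally collected statistics. */
static MemoryContext pgsm_local_ctx = NULL;
static MemoryContextCallback pgsm_local_cb;

static void
pgsm_local_reset(void *arg)
{
    /* drop pointers into the now-reset context so nothing is flushed twice;
     * a reset clears the callback list, so re-register if needed */
}

static void
pgsm_init_local_storage(void)
{
    pgsm_local_ctx = AllocSetContextCreate(TopMemoryContext,
                                           "pg_stat_monitor local data",
                                           ALLOCSET_DEFAULT_SIZES);
    pgsm_local_cb.func = pgsm_local_reset;
    pgsm_local_cb.arg = NULL;
    MemoryContextRegisterResetCallback(pgsm_local_ctx, &pgsm_local_cb);
}
```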
Naeem Akhter 837bacdf3a
PG-571: Update Jobs to run on PR and Push. (#374) 2023-02-09 00:35:51 +05:00
Naeem Akhter 4352d97af0
PG-603: Fix/Update String::Util module in TAP test perl. (#373)
Removed use of String::Util perl module from TAP test cases, and now using
Text::Trim module instead, as that is more stable. Also removed the
Data::Str2Num perl module as it was not needed any more.
2023-02-02 15:41:43 +05:00
Vadim Yalovets 32b1219087
Merge pull request #372 from adivinho/Fix-build-script
Fix build script for PG13
2023-02-01 14:14:38 +02:00
Vadim Yalovets 11b9924b3c Fix build script for PG13 2023-02-01 12:13:09 +02:00
Muhammad Usama 8193e527da
PG-587: pg_stat_monitor: Validate the upgrade from 1.x to 2.0 version (#370)
Disallow the V1 API from being used with the V2.0 library and remove pg_stat_monitor--1.0.sql
as part of that. A few adjustments to the 1.x-to-2.0 upgrade script are also
part of the commit.
2023-02-01 01:38:02 +05:00
Naeem Akhter 62d2ad6d8e
Merge pull request #371 from capri1989/PG-571
PG-571: Update badges in README
2023-01-30 19:21:32 +05:00
Kai Wagner 2ccbe416c2 PG-571: Update badges in README
Signed-off-by: Kai Wagner <kai.wagner@percona.com>
2023-01-30 14:14:07 +01:00
Kai Wagner 9b9e1f6eef
Merge pull request #368 from capri1989/PG-602
PG-602: Updated the README and added PG15
2023-01-30 13:38:33 +01:00
Ibrar Ahmed 9382f6de8f
Merge pull request #369 from Naeem-Akhter/PG-559-testcase
PG-559: Add a TAP testcase for the histogram feature.
2023-01-30 17:27:30 +05:00
Naeem Akhter 6939ea282a PG-559: Add a TAP testcase for the histogram feature. 2023-01-30 15:45:11 +05:00
Kai Wagner 347ee6cf19 PG-602: Changed release notes to mention the initial PG15 support
Signed-off-by: Kai Wagner <kai.wagner@percona.com>
2023-01-30 09:17:41 +01:00
Kai Wagner 70decec03c PG-602: Updated the README and added PG15 and increased the copyright year to 2023
Signed-off-by: Kai Wagner <kai.wagner@percona.com>
2023-01-30 08:35:48 +01:00
Ibrar Ahmed a4e60b97bb
Merge pull request #367 from EngineeredVirus/main
PG-601: Histogram ranges are not correct
2023-01-26 14:59:17 +05:00
Hamid Akhtar 3b6fc3846c PG-601: Histogram ranges are not correct
Resolved the issue with histogram outlier buckets. Also updated
the printing of bucket ranges to use correct set notation for the
brackets. The lower bound of a bucket is always exclusive, except
for the first bucket, and the upper bound is always inclusive.
( or ) => exclusive
{ or } => inclusive

The entire range is enclosed within the {} brackets.
2023-01-25 20:31:14 +05:00
Hamid Akhtar 3487e70cc6 PG-599: PGSM build failure on PG-11
Resolving the compilation issue caused by an ereport statement.
2023-01-25 12:51:57 +05:00
Hamid Akhtar 9327c864d3 PG-586: pg_stat_monitor: CPU and user timing should be captured
for utility statements as well

Setting user and sys time to 0 in case there is a problem getting
rusage details.
2023-01-25 12:50:29 +05:00
Hamid Akhtar ee18c16149 PG-586: pg_stat_monitor: CPU and user timing should be captured
for utility statements as well

Added the necessary capture of resource usage in the process
utility function. We are now storing CPU and user timings for
utility statements.
2023-01-25 12:50:29 +05:00
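A minimal sketch of capturing those timings with getrusage(); the delta between a snapshot taken before and after the utility statement gives its CPU cost, and zeros are reported if the call fails (the helper name is hypothetical).

#include <sys/resource.h>

/* Snapshot user/system CPU time in milliseconds; fall back to zero on error. */
static void
pgsm_cpu_snapshot(double *utime_ms, double *stime_ms)
{
    struct rusage ru;

    if (getrusage(RUSAGE_SELF, &ru) != 0)
    {
        *utime_ms = 0.0;
        *stime_ms = 0.0;
        return;
    }
    *utime_ms = ru.ru_utime.tv_sec * 1000.0 + ru.ru_utime.tv_usec / 1000.0;
    *stime_ms = ru.ru_stime.tv_sec * 1000.0 + ru.ru_stime.tv_usec / 1000.0;
}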
Muhammad Usama ac8800a637
PG-597: pg_stat_monitor: Remove rounding off for floating point values (#364)
As pg_stat_monitor is an observability tool that serves data to other tools, data must be output without any loss; rounding causes data loss and rounding errors when comparing different columns.

Therefore, it was decided to eliminate rounding when outputting values. Any consumer of this data should round it to whatever precision it prefers.

This behaviour is also consistent with pg_stat_statements.
2023-01-24 16:54:13 +05:00
Naeem Akhter e10c615dfb PG-572: Verify 025_compare_pgss.pl TAP test case.
Updated test case for column name change (rows_retrieved -> rows).
2023-01-24 16:10:45 +05:00
Naeem Akhter fa0ee037a2
PG-574: Verify 026_shared_blocks.pl TAP test case. (#363)
1) Added the GROUP BY clause to make sure that a bucket change doesn't have any
impact on the aggregates of queries.
2) Updated column names where required.
3) Updated pgbench parameters to reduce the time taken by the test case by
around 70-80%.
2023-01-24 02:23:17 +05:00
Naeem Akhter 80608394a2
PG-572: Verify 025_compare_pgss.pl TAP test case. (#362)
1) Added the GROUP BY clause to make sure that a bucket change doesn't have any
impact on the aggregates of queries.
2) Updated column names where required.
3) Updated pgbench parameters to reduce the time taken by the test case by
around 70-80%.
2023-01-24 02:22:39 +05:00
Naeem Akhter b559221a39
PG-573: Verify 024_check_timings.pl TAP test case. (#361)
Added the GROUP BY clause to make sure that a bucket change doesn't have any
impact on the aggregates of queries. Updated column names where required.
2023-01-24 02:21:04 +05:00
Muhammad Usama 5648b99eee
PG-585: pg_stat_monitor: Add code comments to the DSA-related funcs. (#360)
Adding code comments for the DSA-related functionality.
2023-01-23 14:36:23 +05:00
Hamid Akhtar dfd41519cf PG-588: Some queries are not being normalised.
There is no specific test case with which I can either reproduce or validate
the fix. However, one of the suspects is this condition in pgss_store; it has
therefore been removed, and this requires verification.
2023-01-23 12:39:18 +05:00
Hamid Akhtar 1662e9efa1 PG-562: Histogram Ranges/Buckets are not correct.
Replaced the error on server start with a warning. The functionality
now treats "pgsm_histogram_buckets" as the maximum number of histogram
buckets to be created. On init, pg_stat_monitor calculates the maximum
number of buckets that can be created within the given min/max time
range. If that number is below the user configuration, it emits a
warning in the log file stating the maximum number of buckets set.
2023-01-23 12:37:51 +05:00
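One plausible way to derive that maximum, sketched here with hypothetical names and not the actual formula: shrink the requested bucket count until the exponentially spaced, integer-rounded boundaries between the configured min and max no longer collide.

#include <math.h>
#include <stdbool.h>

static int
pgsm_max_usable_buckets(int min_time, int max_time, int requested)
{
    for (int b = requested; b > 1; b--)
    {
        bool collision = false;
        long prev = -1;

        for (int i = 0; i <= b; i++)
        {
            long bound = (long) floor(min_time *
                                      pow((double) max_time / min_time,
                                          (double) i / b));

            if (bound == prev)      /* two boundaries rounded to the same value */
            {
                collision = true;
                break;
            }
            prev = bound;
        }
        if (!collision)
            return b;               /* caller warns if b < requested */
    }
    return 1;
}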
Hamid Akhtar 209f370cef PG-562: Histogram Ranges/Buckets are not correct.
Added a bucket for queries that take less than the minimum histogram time
and one for those taking more than the specified maximum value.

Also, in case the buckets end up overlapping, on server start, an
error will be thrown informing the user of this issue and requesting
a rectification.

Refactored the code to consolidate the calculations in a single
function.
2023-01-23 12:37:51 +05:00
Hamid Akhtar 1286427445
PG-543: pg_stat_monitor: PostgreSQL's pg_stat_statements compatible view. (#352)
* PG-543: pg_stat_monitor: PostgreSQL's pg_stat_statements compatible view.

The view now carries all the same columns as pg_stat_statements. This required
fixing the data types of some of the columns, renaming a few, as well as the
inclusion of new columns to make the view fully compatible with pg_stat_statements.

* PG-543: pg_stat_monitor: PostgreSQL's pg_stat_statements compatible view.

Updating the 1.0-to-2.0 upgrade SQL file in line with the changes for this
issue.

* PG-543: pg_stat_monitor: PostgreSQL's pg_stat_statements compatible view.

Updating datum calls to use UInt64 rather than Int64.
2023-01-19 01:55:20 +05:00
Hamid Akhtar 7dece7cf1d
PG-582: blk_read_time and blk_write_time are not being rounded. (#353)
* PG-582: blk_read_time and blk_write_time are not being rounded.

Added rounding within the internal function so that the values of
blk_read_time and blk_write_time are rounded off to 4 decimal places.

Additionally, added rounding off for the PG15+ columns of
temp_blk_read_time and temp_blk_write_time.

* PG-582: blk_read_time and blk_write_time are not being rounded.

Added rounding off for four JIT related columns introduced for PG15.
2023-01-18 17:17:23 +05:00
Ibrar Ahmed 492682e44e
Merge pull request #356 from codeforall/main
PG-400: pg_stat_monitor: Timezone in msgtime column...
2023-01-18 17:06:40 +05:00
Naeem Akhter 402b73e792
PG-584: Verify and 007_settings_pgsm_query_shared_buffer.pl TAP test (#355)
PG-584: Verify and 007_settings_pgsm_query_shared_buffer.pl TAP test case
2023-01-18 17:01:47 +05:00
Muhammad Usama a75e47add9 PG-400: pg_stat_monitor: Timezone in msgtime column...
The bucket start time reported by pg_stat_monitor does not match the PG time and
timezone. The fix is to use TimestampTz for recording the bucket start time.
2023-01-18 16:38:23 +05:00
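A minimal sketch of recording the bucket start as a TimestampTz (an illustrative struct, not the real one), so the value carries the server's timezone-aware timestamp semantics instead of a preformatted string:

#include "postgres.h"
#include "utils/timestamp.h"

typedef struct pgsmBucket
{
    uint64      bucket_id;
    TimestampTz bucket_start;   /* returned to the view via TimestampTzGetDatum() */
} pgsmBucket;

static void
pgsm_start_bucket(pgsmBucket *b, uint64 id)
{
    b->bucket_id = id;
    b->bucket_start = GetCurrentTimestamp();
}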
Naeem Akhter f9ef1455ae PG-581: top_queryid expected output verification and change. 2023-01-18 13:05:00 +05:00
Muhammad Usama caeb5f5e73
PG-579: Querying pg_stat_monitor crashes the server ... (#351)
pgsm_get_ss() must only be called when pg_stat_monitor.so is loaded.
The fix is to move the pgsm_get_ss() call to after the check of whether the
pg_stat_monitor library is loaded.
2023-01-17 15:49:38 +05:00
Muhammad Usama 2c5e12af0a
PG-488: pg_stat_monitor: Overflow management. (#342)
* PG-488: pg_stat_monitor: Overflow management.

Reimplement the storage mechanism of buckets (for PG 15 onward) and query texts
using dynamic shared memory. Since dynamic shared memory can grow into a
swap area, we get the overflow handling out of the box.

As PostgreSQL versions prior to v15 do not support sequential scans on dynamic
shared memory hashes, older versions have to live with the classic shared
memory hash for storing the buckets.

Another noteworthy change with the new design is that it saves the query pointer
inside the bucket, so eventually the query text gets evicted together with the
bucket when it is recycled.

Finally, the dynamic shared memory hash has a built-in locking mechanism, so
revisiting the whole locking in pg_stat_monitor has the potential for lots of
performance improvements.

* Fixing issues reported by the TAP tests and also disabling the dynamic hash for all versions

* Updating the expected output file for the top_query test case

Co-authored-by: Hamid Akhtar <hamid.akhtar@percona.com>
2023-01-10 17:54:17 +05:00
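A minimal sketch of the query-text side of that design, assuming hypothetical names: the text lives in a dsa_area and the bucket entry keeps only a dsa_pointer, so freeing the bucket's allocations evicts the text along with it.

#include "postgres.h"
#include "utils/dsa.h"

typedef struct pgsmEntry            /* simplified stand-in for the real entry */
{
    uint64      queryid;
    dsa_pointer query_text;         /* lives in dynamic shared memory */
} pgsmEntry;

static dsa_pointer
pgsm_store_query_text(dsa_area *area, const char *query, Size len)
{
    dsa_pointer dp = dsa_allocate(area, len + 1);
    char       *buf = dsa_get_address(area, dp);

    memcpy(buf, query, len);
    buf[len] = '\0';
    return dp;                      /* dsa_free(area, dp) when the bucket is recycled */
}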
Ibrar Ahmed ff75b23257
Merge pull request #347 from Naeem-Akhter/PG-575-Update
PG-575: Enable installcheck-world on PG 14 & 15.
2023-01-05 17:19:50 +05:00
Naeem Akhter 7e7bcb4559 PG-575: Enable installcheck-world on PG 14 & 15.
We use compute_query_id on PG 14 onwards for PGSM, and it causes the server
installcheck-world to fail (same behaviour as with PGSS). To test
installcheck-world on PG 14 onwards we need to disable compute_query_id and
run the server installcheck-world. For the PGSM regression we will still have
compute_query_id on.
2023-01-04 18:59:10 +05:00
Ibrar Ahmed 14b357e8df
Merge pull request #346 from Naeem-Akhter/PG310
PG-310: pg_stat_monitor: Bucket is Done vs still being current/last
2023-01-04 17:24:53 +05:00
Naeem Akhter 653b3be2a0 PG-310: pg_stat_monitor: Bucket is Done vs still being current/last
Added a TAP test case to verify the behavior of the new 'bucket_done' column.
2023-01-04 14:26:36 +05:00
Ibrar Ahmed 51b5a5a8fb
Merge pull request #345 from Naeem-Akhter/PG570
PG-570: Fix counters test case.
2023-01-04 09:42:02 +05:00
Naeem Akhter 56f4735ab0 PG-570: Fix counters test case.
Updated the test case and expected output, and also removed the unneeded output
files.
2023-01-03 18:50:53 +05:00
Ibrar Ahmed 7c989337f1
Merge pull request #344 from EngineeredVirus/main
PG-576 - Segmentation fault caused by pg_stat_monitor unique queryid creation mechanism.
2023-01-03 17:58:14 +05:00
Hamid Akhtar f170322f38 PG-576 - Segmentation fault caused by pg_stat_monitor unique
queryid creation mechanism.

Resolving the crash identified by regression and reported by Naeem.
This fix resolves the issue of an incorrect query length for a normalized
query when the query length exceeds PGSM_QUERY_MAX_LEN.
2023-01-03 17:55:44 +05:00
Ibrar Ahmed b60eece145
Merge pull request #341 from EngineeredVirus/main
PG-545: pg_stat_monitor: Same query text should generate same queryid
2022-12-30 04:51:46 +05:00
Hamid Akhtar 30441b6972 PG-545: pg_stat_monitor: Same query text should generate same queryid
Updating tap test case and upgrade SQL file from version 1.0 to 2.0.
2022-12-29 14:45:17 +05:00
Naeem Akhter e0cea058ed
PG-568: Add GH Workflow for PGDG-15 and PPG-15 packages. (#343) 2022-12-29 02:28:36 +05:00
Hamid Akhtar b20eda7066 PG-545: pg_stat_monitor: Same query text should generate same queryid
Regardless of the database or the user, the same query will yield the
same query ID. As part of this, a new column, 'pgsm_query_id', is added.

* pgsm_query_id:
pgsm_query_id has the same int8 data type as the queryid column. If
the incoming SQL command includes any constants, the query is internally
normalized, replacing those constant values with placeholders. Otherwise,
the query is used directly to generate the query hash.

Since we no longer depend on the server's parse tree mechanism, we can
generate the same hash for the same query text on all server versions.

It is also important to note that the calculated hash is database,
schema, and user independent, so the same query text in different databases
will generate the same hash.

This column is not part of the key; it is for observability purposes only.

* Regression
SQL test case pgsm_query_id.sql is added to the SQL regression.
2022-12-28 14:24:19 +05:00
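A minimal sketch of how such a database/user-independent hash can be computed from the normalized text alone, using PostgreSQL's hash_any_extended() (the header location varies by server version; the function name here is an assumption, not the actual pg_stat_monitor symbol):

#include "postgres.h"
#include "common/hashfn.h"          /* hash_any_extended() on recent versions */

static uint64
pgsm_hash_query_text(const char *normalized_query)
{
    /* Only the query text goes into the hash, so the same SQL text yields the
     * same pgsm_query_id regardless of database, schema, or user. */
    return DatumGetUInt64(hash_any_extended((const unsigned char *) normalized_query,
                                            strlen(normalized_query),
                                            0 /* fixed seed */));
}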
Ibrar Ahmed f8866272a2
Merge pull request #340 from Naeem-Akhter/PG563
PG-563: Update TAP testcases and output due to changes by DEV.
2022-12-28 10:03:05 +05:00
Naeem Akhter b154da01da PG-563: Update TAP testcases and output due to changes by DEV.
As part of this PR, regression test cases related to the following JIRA
issues were also updated.

PG-354	pg_stat_monitor: Remove pg_stat_monitor_settings view
We are no longer using the pg_stat_monitor_settings view; due to this change
the majority of TAP test cases required output changes.

PG-558: Create test case to verify the function names and count in PGSM.
Added an additional output file for the SQL test case.

PG-554:	Remove redundant expected output files from regression.
Removed output files from TAP test cases where they were not needed.
2022-12-27 18:14:32 +05:00
Naeem Akhter 56001d683f
Merge pull request #324 from ibrarahmad/PG-312
PG-312: Changing the default value of Histogram GUC.
2022-12-24 22:31:39 +05:00
Naeem Akhter d03fb8f0b7
Merge branch 'main' into PG-312 2022-12-24 22:30:23 +05:00
Ibrar Ahmed 96a1d52f08
Merge pull request #339 from Naeem-Akhter/PG-354
PG-354: Update expected output file for functions test case.
2022-12-23 05:22:38 +05:00
Naeem Akhter 0656d5f22d PG-354: Update expected output file for functions testcase. 2022-12-23 01:00:13 +05:00
Ibrar Ahmed 802774a2a7
PG-488: Revert pg_stat_monitor: Overflow management. (#338)
PG-488: Revert pg_stat_monitor: Overflow management.

This patch does not work for PostgreSQL < 15. More work is required.
2022-12-22 19:15:14 +05:00
Ibrar Ahmed 7c5ad48276
Merge pull request #334 from EngineeredVirus/main
PG-354: pg_stat_monitor: Remove pg_stat_monitor_settings view
2022-12-21 20:34:00 +05:00
Ibrar Ahmed 8dffa8cc97
Merge pull request #336 from codeforall/main
PG-488: pg_stat_monitor: Overflow management.
2022-12-21 00:42:47 +05:00
Muhammad Usama df0580b741 PG-488: pg_stat_monitor: Overflow management.
Reimplement the storage mechanism of buckets and query texts
using dynamic shared memory. Since dynamic shared memory
can grow into a swap area, we get the overflow handling out of the box.

Moreover, the new design saves the query pointer inside the bucket,
and eventually the query text gets evicted with the bucket recycle.

Finally, the dynamic shared memory hash has a built-in locking
mechanism, so revisiting the whole locking in pg_stat_monitor
has potential for lots of performance improvements.
2022-12-20 17:29:15 +05:00
Naeem Akhter 5a6b824737
PG-373: Update test case - Remove WAL fields for PG12 and below. (#335) 2022-12-14 12:50:29 +05:00
Hamid Akhtar 2917ae6805 PG-354: pg_stat_monitor: Remove pg_stat_monitor_settings view
Removing the view for 2.0 and updating the required SQL files to manage
the upgrade. Downgrade from 2.x to 1.x is not supported.

Also part of this fix is the SQL regression. This does not update the
TAP test cases.
2022-12-13 17:05:46 +05:00
Ibrar Ahmed a6099d6a84
Merge pull request #333 from Naeem-Akhter/PG-558
PG-558: Create test case to verify the function names and count in PGSM.
2022-12-13 16:29:52 +05:00
Naeem Akhter 1037fb08a8 PG-558: Create test case to verify the function names and count in PGSM. 2022-12-12 23:11:38 +05:00
Naeem Akhter 5cd4f255d1
Merge pull request #332 from ibrarahmad/PG-373
PG-373: Remove WAL fields for PG12 and below.
2022-12-12 15:17:43 +05:00
Naeem Akhter 2838eaa94d
Merge pull request #331 from ibrarahmad/PGSM-518
PG-518: Internal Functions should NOT be visible in PGSM API.
2022-12-12 15:16:53 +05:00
Ibrar Ahmed 3076d5bf5c PG-373: Remove wal fields for PG12 and below. 2022-12-07 15:03:14 +00:00
Ibrar Ahmed 5ae0f3a0bb PG-518: Internal Functions should NOT be visible in PGSM API. 2022-12-07 14:52:45 +00:00
Muhammad Usama 913064b68d
PG-435: Adding new counters that are available in PG15 (#329)
In line with pg_stat_statements for PG15, this commit adds eight new cumulative
counters for JIT operations, making it easier to diagnose how JIT is used in an
installation. Two new columns, temp_blk_read_time and temp_blk_write_time,
show the time spent reading and writing temporary file blocks on disk,
respectively.
Moreover, the commit also contains a few indentation and API adjustments.
2022-12-07 15:40:13 +05:00
Naeem Akhter 4a254a538b PG-553: Add a testcase to verify columns names in PGSM. 2022-12-01 12:06:08 +05:00
Ibrar Ahmed 354b92b8b6 PG-312: Changing the default value of Histogram GUC. 2022-11-23 14:49:21 +00:00
Ibrar Ahmed f7860b472f
PG-310: Bucket is “Done” vs still being current/last. (#321)
A new column is added to indicate whether a bucket is active or done. Some
timing-based adjustments were also required for that.

Co-authored-by: Hamid Akhtar <hamid.akhtar@percona.com>
2022-11-23 02:23:28 +05:00
Naeem Akhter b4ab2ccc84
PG-557: Update PGSM+PMM GH workflows to pick intended target branch. (#323) 2022-11-23 02:22:13 +05:00
Naeem Akhter 8e265b9bfb
PG-556: Fix expected output of test case version. (#322) 2022-11-23 02:21:33 +05:00
Naeem Akhter fe83f56ab7
PG-554: Remove redundant files and fix regression. (#319)
PG-554: Remove redundant files and fix regression.

Removed old files with the same names and added these files to fix the SQL
regression on PG 14 & 15.

1- regression/expected/error_1.out
2- regression/expected/error_insert_1.out
3- regression/expected/top_query_1.out
2022-11-23 02:20:43 +05:00
Muhammad Usama 2f2c40ed22
PG-555 :Infrastructure to allow multiple SQL APIs (#320)
Creating the infrastructure that'll allow using newer versions
of the loadable module with old SQL declarations.
Also updating the build version to 2.0.0-dev
2022-11-21 18:27:21 +05:00
Naeem Akhter 7f015b5a16
Merge pull request #318 from ibrarahmad/PG544
PG-544: Regression cleanup.
2022-11-17 04:52:54 +05:00
Ibrar Ahmed 1bc14fb759 PG-544: Regression cleanup. 2022-11-16 21:31:16 +00:00
Naeem Akhter 741dea66a8
Merge pull request #317 from ibrarahmad/PG-544
PG-544: Regression cleanup.
2022-11-17 01:18:48 +05:00
Ibrar Ahmed 6643854c47
Merge branch 'percona:main' into PG-544 2022-11-17 01:08:13 +05:00
Naeem Akhter fcb6dac321
Merge pull request #316 from ibrarahmad/PG320
PG-320: Removing the query state code from the view.
2022-11-17 00:56:16 +05:00
Ibrar Ahmed a3830624bb PG-544: Regression cleanup. 2022-11-16 19:47:07 +00:00
Ibrar Ahmed 710103cd0d PG-320: Removing the query state code from the view. 2022-11-16 19:37:15 +00:00
Naeem Akhter 7f743b142a
Merge pull request #315 from ibrarahmad/PG-306
PG-306: The bucket start time should be timestamp instead of TEXT.
2022-11-16 00:44:20 +05:00
Naeem Akhter 2f62ee695b
Merge pull request #314 from ibrarahmad/PG-518
PG-518: Drop the internal function permission from PUBLIC.
2022-11-16 00:44:06 +05:00
Naeem Akhter 4f281a8c12
Merge pull request #313 from ibrarahmad/PG-552
PG-552: Remove unnecessary columns from PostgreSQL 11 and 12 views.
2022-11-16 00:43:50 +05:00
Naeem Akhter bcb1a3b1b8
Merge pull request #312 from ibrarahmad/PG-320
PG-320: Removing the query state code from the view.
2022-11-16 00:43:16 +05:00
Ibrar Ahmed bc19c99c0b PG-306: The bucket start time should be timestamp instead of TEXT. 2022-11-15 18:11:10 +00:00
Ibrar Ahmed a392c98b5c PG-518: Drop the internal function permission from PUBLIC.
It would be a security problem to give PUBLIC access to the internal functions.
This commit revokes all permissions on the internal functions from PUBLIC.
2022-11-15 17:45:42 +00:00
Ibrar Ahmed 40afdce2eb PG-552: Remove unnecessary columns from PostgreSQL 11 and 12 views.
There was a typo while checking the PostgreSQL version in the SQL file. This commit
will fix the typo, and only the necessary columns will be visible in the view.
2022-11-15 17:10:42 +00:00
Ibrar Ahmed db5a6aa30e PG-320: Removing the query state code from the view.
The query status monitoring code was used to track the current query state, for example
parsing, executing, and finishing. After careful review, we figured out that it does not
make much sense, since the same query is running for most of the time, and it also
consumes resources. This commit removes that feature. The 1.0-to-2.0 upgrade
SQL is also updated.
2022-11-15 16:31:37 +00:00
Kai Wagner fddc0967e3
Merge pull request #311 from EngineeredVirus/main
Merging changes back to the main branch after the 1.1.1 release
2022-11-11 10:14:18 +01:00
Puneet Kala 00067680de PMM-7 Adding updates on integration pipelines (#308)
* PMM-7 Fix the github action

* PMM-7 fix version 12

* PMM-7 Fix Typo

* PMM-7 Fix Typo

* PMM-7 Increase timeout

* PMM-7 Increase timeout

* PMM-7 Add support for pgsql13

* PMM-7 Adding support for PG 14

* PMM-7 Increase timer

* PMM-7 Adding integration with PG15

* PMM-7 Handle PG 11 changes

* PMM-7 handle PG 12 changes

* PMM-7 Temp commit for regression

* PMM-7 Adding commit

* PMM-7 UI tests branch

* PMM-7 Revert temp branch

* PMM-7 Revert temp branch

* PMM-7 revert temp branch

* PMM-7 Revert the changes
2022-11-08 23:31:11 +05:00
Kai Wagner 2cef796e92 PG-526: bump version to 1.1.1 and adding release notes
Signed-off-by: Kai Wagner <kai.wagner@percona.com>
2022-11-08 23:31:01 +05:00
Naeem Akhter af2da8885a PG-525: Update PGSM TAP Test Cases to accommodate PG15 changes.
The following changes are included in this commit:

1. Updated pgsm.pm to enable runtime loading of PG server version dependent
Perl modules that are needed for TAP testing. Similarly, removed code from
this file that is not needed right now.

2. Added generic settings and helper functions to pgsm.pm that can be used
across different test cases.

3. Updated the following TAP test cases to use pgsm.pm-based global settings
and helper functions, while making sure to reduce clutter and duplicate code
in test cases where possible.

t/001_settings_default.pl
t/002_settings_pgsm_track_planning.pl
t/003_settings_pgms_extract_comments.pl
t/004_settings_pgsm_track.pl
t/005_settings_pgsm_enable_query_plan.pl
t/006_settings_pgsm_overflow_target.pl
t/007_settings_pgsm_query_shared_buffer.pl
t/008_settings_pgsm_histogram_buckets.pl
t/009_settings_pgsm_histogram_max.pl
t/010_settings_pgsm_histogram_min.pl
t/011_settings_pgsm_bucket_time.pl
t/012_settings_pgsm_max_buckets.pl
t/013_settings_pgsm_normalized_query.pl
t/014_settings_pgsm_track_utility.pl
t/015_settings_pgsm_query_max_len.pl
t/016_settings_pgsm_max.pl
t/017_execution_stats.pl
t/019_insufficient_shared_space.pl
t/020_buffer_overflow.pl
t/021_misc_1.pl
t/022_misc_2.pl
t/023_missing_queries.pl
t/024_check_timings.pl
t/025_compare_pgss.pl
t/026_shared_blocks.pl
t/027_local_blocks.pl
t/028_temp_block.pl

4. Removed the following TAP test cases as they are no longer needed and are
covered by other existing test cases.

0001_settings_pgsm_track_planning.pl
0002_settings_pgsm_enable_query_plan.pl

5. Added more out files for the histogram SQL test cases to cover the behavior
of bucket_start_time across server versions.

regression/expected/histogram_3.out
regression/expected/histogram_4.out
regression/expected/histogram_5.out
regression/expected/histogram_6.out

6. Added the following out file, which is specific to PG server version 15.

t/expected/007_settings_pgsm_query_shared_buffer.out.15
2022-11-08 23:30:39 +05:00
Muhammad Usama 0fe9908d5f PG-520 pg_stat_monitor does not work with PG15
PG 15 requires additional shared memory and LWLock requests to be made from the
newly introduced shmem_request_hook and disallows requests initiated
from outside the hook.
The commit moves the additional shared memory and LWLock requests
from _PG_init to shmem_request_hook for PG 15.
2022-11-08 23:30:29 +05:00
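A minimal sketch of that pattern, with an illustrative shared memory size: on PG 15+ the requests are made from the hook, while older versions keep issuing them directly from _PG_init (module boilerplate such as PG_MODULE_MAGIC is omitted).

#include "postgres.h"
#include "miscadmin.h"
#include "storage/lwlock.h"
#include "storage/shmem.h"

#if PG_VERSION_NUM >= 150000
static shmem_request_hook_type prev_shmem_request_hook = NULL;
#endif

static void
pgsm_shmem_request(void)
{
#if PG_VERSION_NUM >= 150000
    if (prev_shmem_request_hook)
        prev_shmem_request_hook();
#endif
    RequestAddinShmemSpace(1024 * 1024);            /* size is illustrative */
    RequestNamedLWLockTranche("pg_stat_monitor", 1);
}

void
_PG_init(void)
{
#if PG_VERSION_NUM >= 150000
    prev_shmem_request_hook = shmem_request_hook;
    shmem_request_hook = pgsm_shmem_request;
#else
    pgsm_shmem_request();       /* pre-15: request directly from _PG_init */
#endif
}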
Ibrar Ahmed 634f0ce580
PG-174: Code cleanup. (#310)
pg_stat_monitor has grown a bit long; therefore, it requires some code cleanup.
I decided to split these tasks into multiple commits and PRs to avoid putting
too many changes into one PR. This will ease the review and QA process.
In this commit, I have done these tasks:

1 - Fixed a compilation issue, caused by the previous commit in which we
removed the benchmarking code, that was causing problems for PostgreSQL 12.
2022-10-25 01:04:23 +05:00
Hamid Akhtar d7e8a0ae79
Merge pull request #309 from ibrarahmad/PG174
PG-174: Code cleanup.
2022-10-24 23:19:43 +05:00
Ibrar Ahmed 0af6295513 PG-174: Code cleanup.
pg_stat_monitor has grown a bit long; therefore, it requires some code cleanup.
I decided to split these tasks into multiple commits and PRs to avoid putting
too many changes into one PR. This will ease the review and QA process.
In this commit, I have done these tasks:

1 - Remove all benchmarking and debugging code.
2022-10-24 17:42:38 +00:00
Ibrar Ahmed c622bf35a8 PG-174: Code cleanup.
pg_stat_monitor has grown a bit long; therefore, it requires some code cleanup.
I decided to split these tasks into multiple commits and PRs to avoid putting
too many changes into one PR. This will ease the review and QA process.
In this commit, I have done these tasks:

 1 - Delete all the SQL.in files, because these version-dependent files
 were becoming significant in quantity. A single SQL file is now added which
 contains the dynamic SQL based on the PostgreSQL version.

 2 - A new SQL file (pg_stat_monitor--2.0.sql) is added for pg_stat_monitor version 2.

 3 - A new SQL file (pg_stat_monitor--1.0--2.0.sql) is created, which will be
 used to upgrade from version 1.0 to 2.0. Currently, this file is empty, but
 whenever we add some API changes into 2.0, we need to update that file too.

 4 - The control file (pg_stat_monitor.control) is updated for version 2.0.
 This change makes CREATE EXTENSION default to pg_stat_monitor version 2.0.
2022-10-24 17:21:59 +00:00
Hamid Akhtar b920224e0f
Merging the 1.1.0 branch back to main branch (#303)
* PG-475: Inconsistent behaviour of PGSM

Reverting the bucket locking mechanism to previous behavior. This has
a lot of room for improvement that needs to be part of a major refactoring
in the 2.x release.

* PG-481 Release notes 1.1.0 (#294)

modified:   RELEASE_NOTES.md

* PG-500: Bump the version of pg_stat_monitor to 1.1.0 (#297)

* PG-501: Missing Buckets and incorrect calls count. (#298)

prev_bucket_sec holds the actual time at which the previous bucket was created,
and it is used to compute whether the previous bucket time has elapsed and when
it is time to create a new one. But since the bucket start time is rounded down
to the logical time window start, prev_bucket_sec and the bucket start time fall
out of sync with each other, and depending on the query arrival time there
is a high probability that a bucket gets missed, especially when the last bucket
was created around the end of the bucket time window.

The solution is to keep prev_bucket_sec and the bucket start time in sync.

Moreover, we are using uint64 for storing prev_bucket_sec, which is kind
of an overkill; a simple uint should be good enough for the purpose. But that
change can be taken up as part of the create-bucket function refactoring task.

* PG-501: Missing Buckets and incorrect calls count.

Ensuring the outer bound of a bucket is an exclusive boundary, as it
belongs to the next bucket. To explain the point further, a set of
five-second buckets would be:
    Bucket 1: 00:00:00.00 -> 00:00:04.99...
    Bucket 2: 00:00:05.00 -> 00:00:09.99...
    Bucket 3: 00:00:10.00 -> 00:00:14.99...
    ...

Co-authored-by: Ibrar Ahmed <ibrar.ahmed@percona.com>
Co-authored-by: Anastasia Alexandrova <anastasia.alexandrova@percona.com>
Co-authored-by: Muhammad Usama <m.usama@gmail.com>
2022-09-13 15:59:39 +05:00
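A minimal sketch of the in-sync bookkeeping described above, with hypothetical names: both the bucket start and prev_bucket_sec are derived from the same rounded-down window start, so a bucket can no longer be skipped.

#include "postgres.h"

static uint64 prev_bucket_sec = 0;

static bool
pgsm_need_new_bucket(uint64 now_sec, int bucket_time_sec)
{
    uint64 window_start = now_sec - (now_sec % bucket_time_sec);

    if (prev_bucket_sec == 0 || window_start > prev_bucket_sec)
    {
        /* Store the rounded value, not now_sec, so the two never drift apart. */
        prev_bucket_sec = window_start;
        return true;                /* caller creates the bucket for window_start */
    }
    return false;
}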
Vadim Yalovets 16536ae07d
Merge pull request #296 from adivinho/PG-430-Update-rpm-changelog-automatically
PG-430 update rpm changelog automatically
2022-08-29 10:33:09 +03:00
Vadim Yalovets 0dd2a89bb7 PG-430 Update rpm changelog automatically during build 2022-08-23 13:20:35 +03:00
Vadim Yalovets f9504fe2b9 PG-430 Update rpm changelog automatically during build 2022-08-23 13:20:21 +03:00
Vadim Yalovets 33df4d1717 PG-430 Update rpm changelog automatically during build 2022-08-23 13:20:01 +03:00
177 changed files with 10952 additions and 16906 deletions

4
.github/CODEOWNERS vendored Normal file
View File

@ -0,0 +1,4 @@
# https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners
# Order is important; the last matching pattern takes the most precedence.
* @artemgavrilov @dutow

60
.github/ISSUE_TEMPLATE/bug.yml vendored Normal file
View File

@ -0,0 +1,60 @@
name: Bug Report
description: File a bug report
labels: ["bug"]
assignees:
- artemgavrilov
- dutow
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report! Please provide as much information as possible, it will help us to address this problem faster.
- type: textarea
id: description
attributes:
label: Description
description: Please describe the problem.
validations:
required: true
- type: textarea
id: expected
attributes:
label: Expected Results
description: What did you expect to happen?
validations:
required: true
- type: textarea
id: actual
attributes:
label: Actual Results
description: What actually happened?
validations:
required: true
- type: textarea
id: version
attributes:
label: Version
description: What version of PostgreSQL and pg_stat_monitor are you running?
placeholder: PostgreSQL 16.2, pg_stat_monitor v2.0.4
validations:
required: true
- type: textarea
id: steps
attributes:
label: Steps to reproduce
description: Which steps do we need to take to reproduce this error?
- type: textarea
id: logs
attributes:
label: Relevant logs
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
render: Shell
- type: checkboxes
id: terms
attributes:
label: Code of Conduct
description: By submitting this issue, you agree to follow [Percona Community Code of Conduct](https://github.com/percona/community/blob/main/content/contribute/coc.md)
options:
- label: I agree to follow Percona Community Code of Conduct
required: true

5
.github/ISSUE_TEMPLATE/config.yml vendored Normal file
View File

@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Forum
url: https://forums.percona.com/
about: Please join our forums for general questions and discussions.

37
.github/ISSUE_TEMPLATE/feature.yml vendored Normal file
View File

@ -0,0 +1,37 @@
name: Feature Request
description: Suggest an idea for this project
labels: ["feature"]
assignees:
- artemgavrilov
- dutow
body:
- type: markdown
attributes:
value: |
Thank you for suggesting an idea to make pg_stat_monitor better! Please complete the below form to ensure we have all the details to get things started.
- type: textarea
id: description
attributes:
label: Description
description: Description of the feature and of the problem it solves.
validations:
required: true
- type: textarea
id: solution
attributes:
label: Suggested solution
description: A concise description of your preferred solution.
- type: textarea
id: context
attributes:
label: Additional context
description: Any information that may help.
- type: checkboxes
id: terms
attributes:
label: Code of Conduct
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/percona/community/blob/main/content/contribute/coc.md)
options:
- label: I agree to follow this project's Code of Conduct
required: true

7
.github/dependabot.yml vendored Normal file
View File

@ -0,0 +1,7 @@
---
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"

9
.github/pull_request_template.md vendored Normal file
View File

@ -0,0 +1,9 @@
PG-0
### Description
<!--- Describe your changes in detail -->
### Links
<!--- Please provide links to any related PRs in this or other repositories --->

95
.github/workflows/check.yml vendored Normal file
View File

@ -0,0 +1,95 @@
name: Checks
on:
pull_request:
jobs:
cppcheck:
name: Cppcheck
runs-on: ubuntu-22.04
timeout-minutes: 5
steps:
- name: Checkout sources
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: src/pg_stat_monitor
- name: Checkout cppcheck sources
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: "danmar/cppcheck"
ref: "2.13.4"
path: src/cppcheck
- name: Build and install cppcheck
working-directory: src/cppcheck
run: |
mkdir build
cd build
cmake ..
cmake --build .
sudo cmake --install .
- name: Execute linter check with cppcheck
working-directory: src/pg_stat_monitor
run: |
set -x
cppcheck --enable=all --inline-suppr --template='{file}:{line},{severity},{id},{message}' --error-exitcode=1 --suppress=missingIncludeSystem --suppress=missingInclude --suppress=unmatchedSuppression:pg_stat_monitor.c --check-config .
format:
name: Format
runs-on: ubuntu-22.04
timeout-minutes: 5
steps:
- name: Clone postgres repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'postgres/postgres'
ref: 'REL_17_STABLE'
- name: Checkout sources
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'contrib/pg_stat_monitor'
- name: Configure postgres
run: ./configure
- name: Install perltidy
run: sudo cpan -T SHANCOCK/Perl-Tidy-20230309.tar.gz
- name: Install pg_bsd_indent
working-directory: src/tools/pg_bsd_indent
run: sudo make install
- name: Add pg_bsd_indent and pgindent to path
run: |
echo "/usr/local/pgsql/bin" >> $GITHUB_PATH
echo "${{ github.workspace }}/src/tools/pgindent" >> $GITHUB_PATH
- name: Format sources
working-directory: contrib/pg_stat_monitor
run: |
make update-typedefs
make indent
- name: Check files are formatted and no source code changes
working-directory: contrib/pg_stat_monitor
run: |
git status
git diff --exit-code
license:
name: License
runs-on: ubuntu-22.04
timeout-minutes: 5
steps:
- name: Checkout sources
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Check license headers
uses: apache/skywalking-eyes/header@5c5b974209f0de5d905f37deb69369068ebfc15c # v0.7.0
with:
token: "" # Prevent comments

View File

@ -1,16 +1,25 @@
name: code-coverage-test
on: ["push", "pull_request"]
on:
pull_request:
push:
branches:
- main
permissions:
contents: read
jobs:
build:
name: coverage-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone postgres repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'postgres/postgres'
ref: 'REL_14_STABLE'
ref: 'REL_15_STABLE'
- name: Install dependencies
run: |
@ -28,9 +37,8 @@ jobs:
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::RUN'
sudo /usr/bin/perl -MCPAN -e 'install String::Util'
sudo /usr/bin/perl -MCPAN -e 'install Data::Str2Num'
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Create pgsql dir
run: mkdir -p /opt/pgsql
@ -46,11 +54,11 @@ jobs:
'--libexecdir=${prefix}/lib/x86_64-linux-gnu' '--with-perl' \
'--with-python' '--with-pam' '--with-openssl' '--with-libxml' \
'--with-libxslt' 'PYTHON=/usr/bin/python3' '--enable-nls' \
'--mandir=/usr/share/postgresql/14/man' '--enable-thread-safety' \
'--docdir=/usr/share/doc/postgresql-doc-14' '--enable-dtrace' \
'--mandir=/usr/share/postgresql/15/man' '--enable-thread-safety' \
'--docdir=/usr/share/doc/postgresql-doc-15' '--enable-dtrace' \
'--sysconfdir=/etc/postgresql-common' '--datarootdir=/usr/share' \
'--datadir=/usr/share/postgresql/14' '--with-uuid=e2fs' \
'--bindir=/usr/lib/postgresql/14/bin' '--with-gnu-ld' \
'--datadir=/usr/share/postgresql/15' '--with-uuid=e2fs' \
'--bindir=/usr/lib/postgresql/15/bin' '--with-gnu-ld' \
'--libdir=/usr/lib/x86_64-linux-gnu' '--enable-tap-tests' \
'--libexecdir=/usr/lib/postgresql' '--enable-debug' \
'--includedir=/usr/include/postgresql' '--disable-rpath' \
@ -69,13 +77,13 @@ jobs:
- name: Start postgresql cluster
run: |
export PATH="/usr/lib/postgresql/14/bin:$PATH"
sudo cp /usr/lib/postgresql/14/bin/pg_config /usr/bin
export PATH="/usr/lib/postgresql/15/bin:$PATH"
sudo cp /usr/lib/postgresql/15/bin/pg_config /usr/bin
initdb -D /opt/pgsql/data
pg_ctl -D /opt/pgsql/data -l logfile start
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
@ -87,7 +95,7 @@ jobs:
- name: Load pg_stat_monitor library and Restart Server
run: |
export PATH="/usr/lib/postgresql/14/bin:$PATH"
export PATH="/usr/lib/postgresql/15/bin:$PATH"
pg_ctl -D /opt/pgsql/data -l logfile stop
echo "shared_preload_libraries = 'pg_stat_monitor'" \
>> /opt/pgsql/data/postgresql.conf
@ -97,7 +105,7 @@ jobs:
- name: Start pg_stat_monitor_tests & Run code coverage
run: |
make installcheck
/usr/lib/postgresql/14/bin/psql -d postgres -p 5432 -c "\list"
/usr/lib/postgresql/15/bin/psql -d postgres -p 5432 -c "\list"
gcov -abcfu pg_stat_monitor.c
gcov -abcfu guc.c
gcov -abcfu hash_query.c
@ -105,9 +113,10 @@ jobs:
working-directory: src/pg_stat_monitor
- name: Upload
uses: codecov/codecov-action@v2
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
with:
verbose: true
token: ${{ secrets.CODECOV_TOKEN }}
working-directory: ./src/pg_stat_monitor
files: ./pg_stat_monitor.c.gcov,./hash_query.c.gcov,./guc.c.gcov
@ -121,7 +130,7 @@ jobs:
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |

View File

@ -1,18 +0,0 @@
name: cppcheck-action-test
on: [push]
jobs:
build:
name: cppcheck-test
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: install cppcheck
run: sudo apt-get install cppcheck
- name: Execute linter check with cppcheck
run: |
set -x
cppcheck --enable=all --inline-suppr --template='{file}:{line},{severity},{id},{message}' --error-exitcode=1 --suppress=missingIncludeSystem --suppress=missingInclude --suppress=unmatchedSuppression:pg_stat_monitor.c --check-config .

37
.github/workflows/pgxn-release.yml vendored Normal file
View File

@ -0,0 +1,37 @@
name: PGXN
on:
workflow_dispatch:
inputs:
version:
description: 'Version to release'
required: true
type: string
permissions:
contents: read
jobs:
release:
name: Release
runs-on: ubuntu-22.04
timeout-minutes: 10
container: pgxn/pgxn-tools
steps:
- name: Validate version tag
run: '[[ ${{ inputs.version }} =~ ^[0-9]+.[0-9]+.[0-9]+ ]]'
shell: bash
- name: Check out
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: '${{ inputs.version }}'
- name: Bundle
id: bundle
run: pgxn-bundle
- name: Upload
env:
PGXN_USERNAME: percona
PGXN_PASSWORD: ${{ secrets.PGXN_PERCONA }}
run: pgxn-release

View File

@ -1,121 +0,0 @@
name: postgresql-11-build
on: [push]
jobs:
build:
name: pg-11-build-test
runs-on: ubuntu-22.04
steps:
- name: Clone postgres repository
uses: actions/checkout@v2
with:
repository: 'postgres/postgres'
ref: 'REL_11_STABLE'
- name: Install dependencies
run: |
sudo apt-get update
sudo apt purge postgresql-client-common postgresql-common \
postgresql postgresql*
sudo apt-get install -y libreadline6-dev systemtap-sdt-dev \
zlib1g-dev libssl-dev libpam0g-dev bison flex \
libipc-run-perl -y docbook-xsl docbook-xsl libxml2 libxml2-utils \
libxml2-dev libxslt-dev xsltproc libkrb5-dev libldap2-dev \
libsystemd-dev gettext tcl-dev libperl-dev pkg-config clang-11 \
llvm-11 llvm-11-dev libselinux1-dev python3-dev uuid-dev liblz4-dev
sudo rm -rf /var/lib/postgresql /var/log/postgresql /etc/postgresql \
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
- name: Create pgsql dir
run: mkdir -p /opt/pgsql
- name: Build postgres
run: |
export PATH="/opt/pgsql/bin:$PATH"
./configure '--build=x86_64-linux-gnu' '--prefix=/usr' \
'--includedir=/usr/include' '--mandir=/usr/share/man' \
'--infodir=/usr/share/info' '--sysconfdir=/etc' '--enable-nls' \
'--localstatedir=/var' '--libdir=/usr/lib/x86_64-linux-gnu' \
'runstatedir=/run' '--with-icu' '--with-tcl' '--with-perl' \
'--with-python' '--with-pam' '--with-openssl' '--with-libxml' \
'--with-libxslt' 'PYTHON=/usr/bin/python3' 'MKDIR_P=/bin/mkdir -p' \
'--mandir=/usr/share/postgresql/11/man' '--enable-dtrace' \
'--docdir=/usr/share/doc/postgresql-doc-11' '--enable-debug' \
'--sysconfdir=/etc/postgresql-common' '--datarootdir=/usr/share' \
'--datadir=/usr/share/postgresql/11' '--enable-thread-safety' \
'--bindir=/usr/lib/postgresql/11/bin' '--enable-tap-tests' \
'--libdir=/usr/lib/x86_64-linux-gnu' '--disable-rpath' \
'--libexecdir=/usr/lib/postgresql' '--with-uuid=e2fs' \
'--includedir=/usr/include/postgresql' '--with-gnu-ld' \
'--with-pgport=5432' '--with-system-tzdata=/usr/share/zoneinfo' \
'--with-llvm' 'LLVM_CONFIG=/usr/bin/llvm-config-11' \
'CLANG=/usr/bin/clang-11' '--with-systemd' '--with-selinux' \
'PROVE=/usr/bin/prove' 'TAR=/bin/tar' '--with-gssapi' '--with-ldap' \
'LDFLAGS=-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now' \
'--with-includes=/usr/include/mit-krb5' '--with-libs=/usr/lib/mit-krb5' \
'--with-libs=/usr/lib/x86_64-linux-gnu/mit-krb5' \
'build_alias=x86_64-linux-gnu' \
'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' \
'CFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -fno-omit-frame-pointer' \
'CXXFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security'
make world
sudo make install-world
- name: Start postgresql cluster
run: |
export PATH="/usr/lib/postgresql/11/bin:$PATH"
sudo cp /usr/lib/postgresql/11/bin/pg_config /usr/bin
initdb -D /opt/pgsql/data
pg_ctl -D /opt/pgsql/data -l logfile start
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
with:
path: 'src/pg_stat_monitor'
- name: Build pg_stat_monitor
run: |
make USE_PGXS=1
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor/
- name: Load pg_stat_monitor library and Restart Server
run: |
export PATH="/usr/lib/postgresql/11/bin:$PATH"
pg_ctl -D /opt/pgsql/data -l logfile stop
echo "shared_preload_libraries = 'pg_stat_monitor'" >> \
/opt/pgsql/data/postgresql.conf
pg_ctl -D /opt/pgsql/data -l logfile start
working-directory: src/pg_stat_monitor
- name: Start pg_stat_monitor_tests
run: |
make installcheck
working-directory: src/pg_stat_monitor
- name: Report on pg_stat_monitor test fail
uses: actions/upload-artifact@v2
if: ${{ failure() }}
with:
name: Regressions diff and postgresql log
path: |
src/pg_stat_monitor/regression.diffs
src/pg_stat_monitor/logfile
retention-days: 1
- name: Start Server installcheck-world tests
run: |
make installcheck-world
- name: Report on installcheck-world test suites fail
uses: actions/upload-artifact@v2
if: ${{ failure() }}
with:
name: Regressions output files of failed testsuite, and pg log
path: |
**/regression.diffs
**/regression.out
src/pg_stat_monitor/logfile
retention-days: 3

View File

@ -1,62 +0,0 @@
name: postgresql-11-pgdg-package
on: [push]
jobs:
build:
name: pg-11-pgdg-package-test
runs-on: ubuntu-20.04
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
with:
path: 'src/pg_stat_monitor'
- name: Delete old postgresql files
run: |
sudo apt-get update
sudo apt purge postgresql-client-common postgresql-common \
postgresql postgresql*
sudo rm -rf /var/lib/postgresql /var/log/postgresql /etc/postgresql \
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
- name: Install PG Distribution Postgresql 11
run: |
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt \
$(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
sudo wget --quiet -O - \
https://www.postgresql.org/media/keys/ACCC4CF8.asc |
sudo apt-key add -
sudo apt-get -y update
sudo apt-get -y install postgresql-11 postgresql-server-dev-11
- name: Change src owner to postgres
run: |
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
run: |
sudo -u postgres bash -c 'make USE_PGXS=1'
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor
- name: Start pg_stat_monitor_tests
run: |
sudo service postgresql stop
echo "shared_preload_libraries = 'pg_stat_monitor'" |
sudo tee -a /etc/postgresql/11/main/postgresql.conf
sudo service postgresql start
sudo psql -V
sudo -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Report on test fail
uses: actions/upload-artifact@v2
if: ${{ failure() }}
with:
name: Regressions diff and postgresql log
path: |
src/pg_stat_monitor/regression.diffs
src/pg_stat_monitor/logfile
retention-days: 3

View File

@ -1,71 +0,0 @@
name: postgresql-11-ppg-package
on: [push]
jobs:
build:
name: pg-11-ppg-package-test
runs-on: ubuntu-20.04
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
with:
path: 'src/pg_stat_monitor'
- name: Install dependencies
run: |
sudo apt-get update
sudo apt purge postgresql-client-common postgresql-common \
postgresql postgresql*
sudo apt-get install -y libreadline6-dev systemtap-sdt-dev \
zlib1g-dev libssl-dev libpam0g-dev python3-dev bison flex \
libipc-run-perl wget
sudo rm -rf /var/lib/postgresql /var/log/postgresql /etc/postgresql \
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
- name: Install percona-release script
run: |
sudo apt-get -y update
sudo apt-get -y upgrade
sudo apt-get install -y wget gnupg2 curl lsb-release
sudo wget \
https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
- name: Install Percona Distribution Postgresql 11
run: |
sudo percona-release setup ppg-11
sudo apt-get update -y
sudo apt-get install -y percona-postgresql-11 \
percona-postgresql-contrib percona-postgresql-server-dev-all
- name: Change src owner to postgres
run: |
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
run: |
sudo -u postgres bash -c 'make USE_PGXS=1'
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor
- name: Start pg_stat_monitor_tests
run: |
sudo service postgresql stop
echo "shared_preload_libraries = 'pg_stat_monitor'" |
sudo tee -a /etc/postgresql/11/main/postgresql.conf
sudo service postgresql start
sudo psql -V
sudo -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Report on test fail
uses: actions/upload-artifact@v2
if: ${{ failure() }}
with:
name: Regressions diff and postgresql log
path: |
src/pg_stat_monitor/regression.diffs
src/pg_stat_monitor/logfile
retention-days: 3

View File

@ -1,13 +1,24 @@
name: postgresql-12-build
on: [push]
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-12-build-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone postgres repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'postgres/postgres'
ref: 'REL_12_STABLE'
@ -27,9 +38,8 @@ jobs:
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::RUN'
sudo /usr/bin/perl -MCPAN -e 'install String::Util'
sudo /usr/bin/perl -MCPAN -e 'install Data::Str2Num'
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Create pgsql dir
run: mkdir -p /opt/pgsql
@ -74,7 +84,7 @@ jobs:
pg_ctl -D /opt/pgsql/data -l logfile start
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
@ -108,7 +118,7 @@ jobs:
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |
@ -131,7 +141,7 @@ jobs:
make installcheck-world
- name: Report on installcheck-world test suites fail
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ failure() }}
with:
name: Regressions output files of failed testsuite, and pg log

View File

@ -1,13 +1,24 @@
name: postgresql-12-pgdg-package
on: [push]
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-12-pgdg-package-test
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
@ -23,9 +34,8 @@ jobs:
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::RUN'
sudo /usr/bin/perl -MCPAN -e 'install String::Util'
sudo /usr/bin/perl -MCPAN -e 'install Data::Str2Num'
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Install PG Distribution Postgresql 12
run: |
@ -39,6 +49,7 @@ jobs:
- name: Change src owner to postgres
run: |
sudo chmod o+rx ~
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
@ -54,7 +65,9 @@ jobs:
sudo tee -a /etc/postgresql/12/main/postgresql.conf
sudo service postgresql start
sudo psql -V
sudo -u postgres bash -c 'make installcheck USE_PGXS=1'
export PG_TEST_PORT_DIR=${GITHUB_WORKSPACE}/src/pg_stat_monitor
echo $PG_TEST_PORT_DIR
sudo -E -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Change dir permissions on fail
@ -67,7 +80,7 @@ jobs:
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |

View File

@ -1,14 +1,23 @@
name: postgresql-12-pmm-integration
on: push
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-12-pgsm-pmm-integration-test
runs-on: ubuntu-latest
timeout-minutes: 20
timeout-minutes: 30
steps:
- name: Clone QA Integration repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'Percona-Lab/qa-integration'
ref: 'main'
@ -17,15 +26,23 @@ jobs:
- name: Get branch and Repo Name
run: echo 'The branch and Repo Name is' ${{ github.head_ref }} ${{ github.actor }}/pg_stat_monitor
- name: "Set TARGET_BRANCH variable for a PR run"
if: github.event_name == 'pull_request'
run: echo "TARGET_BRANCH=${{ github.event.pull_request.base.ref }}" >> $GITHUB_ENV
- name: "Set TARGET_BRANCH variable for a PUSH run"
if: github.event_name == 'push'
run: echo "TARGET_BRANCH=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
- name: Run PMM & PGSM Setup, E2E Tests
run: bash -xe ./pmm_pgsm_setup/pmm_pgsm_setup.sh --pgsql-version=12
run: bash -xe ./pmm_pgsm_setup/pmm_pgsm_setup.sh --pgsql-version=12 --pgstat-monitor-branch=${{ env.TARGET_BRANCH }}
- name: Get PMM-Agent Logs from the Container
if: success() || failure() # run this step even if previous step failed
run: docker exec pgsql_pgsm_12 cat pmm-agent.log > ./pmm-ui-tests/tests/output/pmm-agent.log
- name: Upload Tests Artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: success() || failure() # run this step even if previous step failed
with:
name: tests-artifact

View File

@ -1,13 +1,23 @@
name: postgresql-12-ppg-package
on: [push]
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-12-ppg-package-test
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
@ -26,9 +36,8 @@ jobs:
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::RUN'
sudo /usr/bin/perl -MCPAN -e 'install String::Util'
sudo /usr/bin/perl -MCPAN -e 'install Data::Str2Num'
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Install percona-release script
run: |
@ -39,15 +48,23 @@ jobs:
https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
- name: Install Percona Distribution Postgresql 12
- name: Install Percona Distribution Postgresql 12 & Extensions
run: |
sudo percona-release setup ppg-12
sudo apt-get update -y
sudo apt-get install -y percona-postgresql-12 \
percona-postgresql-contrib percona-postgresql-server-dev-all
percona-postgresql-contrib percona-postgresql-server-dev-all \
percona-pgpool2 libpgpool2 percona-postgresql-12-pgaudit \
percona-postgresql-12-pgaudit-dbgsym percona-postgresql-12-repack \
percona-postgresql-12-repack-dbgsym percona-pgaudit12-set-user \
percona-pgaudit12-set-user-dbgsym percona-postgresql-12-postgis-3 \
percona-postgresql-12-postgis-3-scripts \
percona-postgresql-postgis-scripts percona-postgresql-postgis \
percona-postgis
- name: Change src owner to postgres
run: |
sudo chmod o+rx ~
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
@ -63,7 +80,9 @@ jobs:
sudo tee -a /etc/postgresql/12/main/postgresql.conf
sudo service postgresql start
sudo psql -V
sudo -u postgres bash -c 'make installcheck USE_PGXS=1'
export PG_TEST_PORT_DIR=${GITHUB_WORKSPACE}/src/pg_stat_monitor
echo $PG_TEST_PORT_DIR
sudo -E -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Change dir permissions on fail
@ -76,7 +95,7 @@ jobs:
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |

View File

@ -1,13 +1,23 @@
name: postgresql-13-build
on: [push]
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-13-build-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone postgres repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'postgres/postgres'
ref: 'REL_13_STABLE'
@ -27,9 +37,8 @@ jobs:
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::RUN'
sudo /usr/bin/perl -MCPAN -e 'install String::Util'
sudo /usr/bin/perl -MCPAN -e 'install Data::Str2Num'
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Create pgsql dir
run: mkdir -p /opt/pgsql
@ -74,7 +83,7 @@ jobs:
pg_ctl -D /opt/pgsql/data -l logfile start
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
@ -108,7 +117,7 @@ jobs:
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |
@ -131,7 +140,7 @@ jobs:
make installcheck-world
- name: Report on installcheck-world test suites fail
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ failure() }}
with:
name: Regressions output files of failed testsuite, and pg log

View File

@ -1,13 +1,23 @@
name: postgresql-13-pgdg-package
on: [push]
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-13-pgdg-package-test
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
@ -23,9 +33,8 @@ jobs:
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::RUN'
sudo /usr/bin/perl -MCPAN -e 'install String::Util'
sudo /usr/bin/perl -MCPAN -e 'install Data::Str2Num'
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Install PG Distribution Postgresql 13
run: |
@ -39,6 +48,7 @@ jobs:
- name: Change src owner to postgres
run: |
sudo chmod o+rx ~
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
@ -54,7 +64,9 @@ jobs:
sudo tee -a /etc/postgresql/13/main/postgresql.conf
sudo service postgresql start
sudo psql -V
sudo -u postgres bash -c 'make installcheck USE_PGXS=1'
export PG_TEST_PORT_DIR=${GITHUB_WORKSPACE}/src/pg_stat_monitor
echo $PG_TEST_PORT_DIR
sudo -E -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Change dir permissions on fail
@ -67,7 +79,7 @@ jobs:
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |

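The CPAN step is also corrected in these jobs: the module name is `IPC::Run` (CPAN module names are case-sensitive, so the earlier `IPC::RUN` spelling did not reliably resolve), and the unused `String::Util`/`Data::Str2Num` installs are replaced with `Text::Trim`. A quick, hedged way to confirm the modules the TAP tests rely on are loadable:

``` sh
# Verify the Perl modules used by the TAP tests are installed and loadable.
perl -MIPC::Run -e 'print "IPC::Run ",  $IPC::Run::VERSION,  "\n"'
perl -MText::Trim -e 'print "Text::Trim ", $Text::Trim::VERSION, "\n"'
```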
View File

@ -1,14 +1,23 @@
name: postgresql-13-pmm-integration
on: push
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-13-pgsm-pmm-integration-test
runs-on: ubuntu-latest
timeout-minutes: 20
timeout-minutes: 30
steps:
- name: Clone QA Integration repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'Percona-Lab/qa-integration'
ref: 'main'
@ -17,15 +26,23 @@ jobs:
- name: Get branch and Repo Name
run: echo 'The branch and Repo Name is' ${{ github.head_ref }} ${{ github.actor }}/pg_stat_monitor
- name: "Set TARGET_BRANCH variable for a PR run"
if: github.event_name == 'pull_request'
run: echo "TARGET_BRANCH=${{ github.event.pull_request.base.ref }}" >> $GITHUB_ENV
- name: "Set TARGET_BRANCH variable for a PUSH run"
if: github.event_name == 'push'
run: echo "TARGET_BRANCH=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
- name: Run PMM & PGSM Setup, E2E Tests
run: bash -xe ./pmm_pgsm_setup/pmm_pgsm_setup.sh --pgsql-version=13
run: bash -xe ./pmm_pgsm_setup/pmm_pgsm_setup.sh --pgsql-version=13 --pgstat-monitor-branch=${{ env.TARGET_BRANCH }}
- name: Get PMM-Agent Logs from the Container
if: success() || failure() # run this step even if previous step failed
run: docker exec pgsql_pgsm_13 cat pmm-agent.log > ./pmm-ui-tests/tests/output/pmm-agent.log
- name: Upload Tests Artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: success() || failure() # run this step even if previous step failed
with:
name: tests-artifact

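The integration workflows above now derive a TARGET_BRANCH and pass it to the setup script via `--pgstat-monitor-branch`. The two conditional steps write the value into `$GITHUB_ENV`, which GitHub Actions injects into the environment of later steps. A standalone sketch of the same branch selection, using placeholder values:

``` sh
# Re-creation of the workflow's branch logic outside of Actions (placeholders only).
GITHUB_EVENT_NAME=push
GITHUB_REF=refs/heads/main          # a tag build would carry refs/tags/2.1.1 here
PR_BASE_REF=main                    # stand-in for github.event.pull_request.base.ref

if [ "$GITHUB_EVENT_NAME" = "pull_request" ]; then
    TARGET_BRANCH="$PR_BASE_REF"
else
    TARGET_BRANCH="${GITHUB_REF#refs/*/}"   # refs/heads/main -> main, refs/tags/2.1.1 -> 2.1.1
fi
echo "TARGET_BRANCH=$TARGET_BRANCH"
```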
View File

@ -1,13 +1,23 @@
name: postgresql-13-ppg-package
on: [push]
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-13-ppg-package-test
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
@ -23,9 +33,8 @@ jobs:
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::RUN'
sudo /usr/bin/perl -MCPAN -e 'install String::Util'
sudo /usr/bin/perl -MCPAN -e 'install Data::Str2Num'
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Install percona-release script
run: |
@ -36,15 +45,23 @@ jobs:
https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
- name: Install Percona Distribution Postgresql 13
- name: Install Percona Distribution Postgresql 13 & Extensions
run: |
sudo percona-release setup ppg-13
sudo apt-get update -y
sudo apt-get install -y percona-postgresql-13 \
percona-postgresql-contrib percona-postgresql-server-dev-all
percona-postgresql-contrib percona-postgresql-server-dev-all \
percona-pgpool2 libpgpool2 percona-postgresql-13-pgaudit \
percona-postgresql-13-pgaudit-dbgsym percona-postgresql-13-repack \
percona-postgresql-13-repack-dbgsym percona-pgaudit13-set-user \
percona-pgaudit13-set-user-dbgsym percona-postgresql-13-postgis-3 \
percona-postgresql-13-postgis-3-scripts \
percona-postgresql-postgis-scripts percona-postgresql-postgis \
percona-postgis
- name: Change src owner to postgres
run: |
sudo chmod o+rx ~
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
@ -60,7 +77,9 @@ jobs:
sudo tee -a /etc/postgresql/13/main/postgresql.conf
sudo service postgresql start
sudo psql -V
sudo -u postgres bash -c 'make installcheck USE_PGXS=1'
export PG_TEST_PORT_DIR=${GITHUB_WORKSPACE}/src/pg_stat_monitor
echo $PG_TEST_PORT_DIR
sudo -E -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Change dir permissions on fail
@ -73,7 +92,7 @@ jobs:
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |

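The PPG package jobs now pull in a wider set of Percona-built extensions (pgaudit, pg_repack, set_user, PostGIS, pgpool) alongside the server packages. A hedged sketch for checking locally what actually got installed from the ppg repository; the package name patterns are taken from the step above:

``` sh
# List the Percona PostgreSQL 13 packages that ended up installed.
dpkg -l 'percona-postgresql-13*' 'percona-pg*' | awk '/^ii/ {print $2, $3}'
# Confirm the client/server major version in use.
psql -V
```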
View File

@ -1,13 +1,23 @@
name: postgresql-14-build
on: ["push", "pull_request"]
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-14-build-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone postgres repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'postgres/postgres'
ref: 'REL_14_STABLE'
@ -28,9 +38,8 @@ jobs:
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::RUN'
sudo /usr/bin/perl -MCPAN -e 'install String::Util'
sudo /usr/bin/perl -MCPAN -e 'install Data::Str2Num'
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Create pgsql dir
run: mkdir -p /opt/pgsql
@ -74,7 +83,7 @@ jobs:
pg_ctl -D /opt/pgsql/data -l logfile start
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
@ -84,12 +93,13 @@ jobs:
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor
- name: Load pg_stat_monitor library and Restart Server
- name: Configure and Restart Server
run: |
export PATH="/usr/lib/postgresql/14/bin:$PATH"
pg_ctl -D /opt/pgsql/data -l logfile stop
echo "shared_preload_libraries = 'pg_stat_monitor'" >> \
/opt/pgsql/data/postgresql.conf
echo "compute_query_id = regress" >> /opt/pgsql/data/postgresql.conf
pg_ctl -D /opt/pgsql/data -l logfile start
working-directory: src/pg_stat_monitor
@ -108,7 +118,7 @@ jobs:
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |
@ -125,3 +135,17 @@ jobs:
!src/pg_stat_monitor/tmp_check/**/pgdata/
if-no-files-found: warn
retention-days: 3
- name: Start Server installcheck-world tests
run: make installcheck-world
- name: Report on installcheck-world test suites fail
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ failure() }}
with:
name: Regressions output files of failed testsuite, and pg log
path: |
**/regression.diffs
**/regression.out
src/pg_stat_monitor/logfile
retention-days: 3
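In the build-from-source jobs, the renamed "Configure and Restart Server" step now also sets `compute_query_id = regress` next to `shared_preload_libraries`; on PostgreSQL 14+ this computes query IDs but keeps them out of EXPLAIN output, which keeps regression results stable. A small sketch for applying and checking both settings, assuming the same data directory as the workflow:

``` sh
# Append the settings exactly as the workflow does (assumes /opt/pgsql/data).
cat >> /opt/pgsql/data/postgresql.conf <<'EOF'
shared_preload_libraries = 'pg_stat_monitor'
compute_query_id = regress
EOF
pg_ctl -D /opt/pgsql/data -l logfile restart
# Verify they took effect.
psql -c "SHOW shared_preload_libraries;" -c "SHOW compute_query_id;"
```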

View File

@ -1,13 +1,23 @@
name: postgresql-14-pgdg-package
on: [push]
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-14-pgdg-package-test
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
@ -22,9 +32,8 @@ jobs:
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::RUN'
sudo /usr/bin/perl -MCPAN -e 'install String::Util'
sudo /usr/bin/perl -MCPAN -e 'install Data::Str2Num'
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Install PG Distribution Postgresql 14
run: |
@ -38,6 +47,7 @@ jobs:
- name: Change src owner to postgres
run: |
sudo chmod o+rx ~
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
@ -53,7 +63,9 @@ jobs:
sudo tee -a /etc/postgresql/14/main/postgresql.conf
sudo service postgresql start
sudo psql -V
sudo -u postgres bash -c 'make installcheck USE_PGXS=1'
export PG_TEST_PORT_DIR=${GITHUB_WORKSPACE}/src/pg_stat_monitor
echo $PG_TEST_PORT_DIR
sudo -E -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Change dir permissions on fail
@ -66,7 +78,7 @@ jobs:
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |

View File

@ -1,14 +1,23 @@
name: postgresql-14-pmm-integration
on: push
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-14-pgsm-pmm-integration-test
runs-on: ubuntu-latest
timeout-minutes: 20
timeout-minutes: 30
steps:
- name: Clone QA Integration repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'Percona-Lab/qa-integration'
ref: 'main'
@ -17,15 +26,23 @@ jobs:
- name: Get branch and Repo Name
run: echo 'The branch and Repo Name is' ${{ github.head_ref }} ${{ github.actor }}/pg_stat_monitor
- name: "Set TARGET_BRANCH variable for a PR run"
if: github.event_name == 'pull_request'
run: echo "TARGET_BRANCH=${{ github.event.pull_request.base.ref }}" >> $GITHUB_ENV
- name: "Set TARGET_BRANCH variable for a PUSH run"
if: github.event_name == 'push'
run: echo "TARGET_BRANCH=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
- name: Run PMM & PGSM Setup, E2E Tests
run: bash -xe ./pmm_pgsm_setup/pmm_pgsm_setup.sh --pgsql-version=14
run: bash -xe ./pmm_pgsm_setup/pmm_pgsm_setup.sh --pgsql-version=14 --pgstat-monitor-branch=${{ env.TARGET_BRANCH }}
- name: Get PMM-Agent Logs from the Container
if: success() || failure() # run this step even if previous step failed
run: docker exec pgsql_pgsm_14 cat pmm-agent.log > ./pmm-ui-tests/tests/output/pmm-agent.log
- name: Upload Tests Artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: success() || failure() # run this step even if previous step failed
with:
name: tests-artifact

View File

@ -1,13 +1,23 @@
name: postgresql-14-ppg-package
on: [push]
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-14-ppg-package-test
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
@ -23,9 +33,8 @@ jobs:
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::RUN'
sudo /usr/bin/perl -MCPAN -e 'install String::Util'
sudo /usr/bin/perl -MCPAN -e 'install Data::Str2Num'
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Install percona-release script
run: |
@ -36,15 +45,23 @@ jobs:
https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
- name: Install Percona Distribution Postgresql 14
- name: Install Percona Distribution Postgresql 14 & Extensions
run: |
sudo percona-release setup ppg-14
sudo apt-get update -y
sudo apt-get install -y percona-postgresql-14 \
percona-postgresql-contrib percona-postgresql-server-dev-all
percona-postgresql-contrib percona-postgresql-server-dev-all \
percona-pgpool2 libpgpool2 percona-postgresql-14-pgaudit \
percona-postgresql-14-pgaudit-dbgsym percona-postgresql-14-repack \
percona-postgresql-14-repack-dbgsym percona-pgaudit14-set-user \
percona-pgaudit14-set-user-dbgsym percona-postgresql-14-postgis-3 \
percona-postgresql-14-postgis-3-scripts \
percona-postgresql-postgis-scripts percona-postgresql-postgis \
percona-postgis
- name: Change src owner to postgres
run: |
sudo chmod o+rx ~
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
@ -60,7 +77,9 @@ jobs:
sudo tee -a /etc/postgresql/14/main/postgresql.conf
sudo service postgresql start
sudo psql -V
sudo -u postgres bash -c 'make installcheck USE_PGXS=1'
export PG_TEST_PORT_DIR=${GITHUB_WORKSPACE}/src/pg_stat_monitor
echo $PG_TEST_PORT_DIR
sudo -E -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Change dir permissions on fail
@ -73,7 +92,7 @@ jobs:
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |

View File

@ -0,0 +1,151 @@
name: postgresql-15-build
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-15-build-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone postgres repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'postgres/postgres'
ref: 'REL_15_STABLE'
- name: Install dependencies
run: |
sudo apt-get update
sudo apt purge postgresql-client-common postgresql-common \
postgresql postgresql*
sudo apt-get install -y libreadline6-dev systemtap-sdt-dev \
zlib1g-dev libssl-dev libpam0g-dev bison flex \
libipc-run-perl -y docbook-xsl docbook-xsl libxml2 libxml2-utils \
libxml2-dev libxslt-dev xsltproc libkrb5-dev libldap2-dev \
libsystemd-dev gettext tcl-dev libperl-dev pkg-config clang-11 \
llvm-11 llvm-11-dev libselinux1-dev python3-dev \
uuid-dev liblz4-dev
sudo rm -rf /var/lib/postgresql /var/log/postgresql /etc/postgresql \
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Create pgsql dir
run: mkdir -p /opt/pgsql
- name: Build postgres
run: |
export PATH="/opt/pgsql/bin:$PATH"
./configure '--build=x86_64-linux-gnu' '--prefix=/usr' \
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man' \
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' \
'--localstatedir=/var' '--libdir=${prefix}/lib/x86_64-linux-gnu' \
'--libexecdir=${prefix}/lib/x86_64-linux-gnu' '--with-icu' \
'--with-tcl' '--with-perl' '--with-python' '--with-pam' \
'--with-openssl' '--with-libxml' '--with-libxslt' '--with-ldap' \
'PYTHON=/usr/bin/python3' '--mandir=/usr/share/postgresql/15/man' \
'--docdir=/usr/share/doc/postgresql-doc-15' '--with-pgport=5432' \
'--sysconfdir=/etc/postgresql-common' '--datarootdir=/usr/share' \
'--datadir=/usr/share/postgresql/15' '--with-uuid=e2fs' \
'--bindir=/usr/lib/postgresql/15/bin' '--enable-tap-tests' \
'--libdir=/usr/lib/x86_64-linux-gnu' '--enable-debug' \
'--libexecdir=/usr/lib/postgresql' '--with-gnu-ld' \
'--includedir=/usr/include/postgresql' '--enable-dtrace' \
'--enable-nls' '--enable-thread-safety' '--disable-rpath' \
'--with-system-tzdata=/usr/share/zoneinfo' '--with-llvm' \
'LLVM_CONFIG=/usr/bin/llvm-config-11' 'CLANG=/usr/bin/clang-11' \
'--with-systemd' '--with-selinux' 'MKDIR_P=/bin/mkdir -p' \
'PROVE=/usr/bin/prove' 'TAR=/bin/tar' 'XSLTPROC=xsltproc --nonet' \
'LDFLAGS=-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now' \
'build_alias=x86_64-linux-gnu' '--with-gssapi' \
'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' \
'CFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -fno-omit-frame-pointer' \
'CXXFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security'
make world
sudo make install-world
- name: Start postgresql cluster
run: |
export PATH="/usr/lib/postgresql/15/bin:$PATH"
sudo cp /usr/lib/postgresql/15/bin/pg_config /usr/bin
initdb -D /opt/pgsql/data
pg_ctl -D /opt/pgsql/data -l logfile start
- name: Clone pg_stat_monitor repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
- name: Build pg_stat_monitor
run: |
make USE_PGXS=1
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor
- name: Configure and Restart Server
run: |
export PATH="/usr/lib/postgresql/15/bin:$PATH"
pg_ctl -D /opt/pgsql/data -l logfile stop
echo "shared_preload_libraries = 'pg_stat_monitor'" >> \
/opt/pgsql/data/postgresql.conf
echo "compute_query_id = regress" >> /opt/pgsql/data/postgresql.conf
pg_ctl -D /opt/pgsql/data -l logfile start
working-directory: src/pg_stat_monitor
- name: Start pg_stat_monitor_tests
run: |
make installcheck
working-directory: src/pg_stat_monitor/
- name: Change dir permissions on fail
if: ${{ failure() }}
run: |
sudo chmod -R ugo+rwx t
sudo chmod -R ugo+rwx tmp_check
exit 2 # regenerate error so that we can upload files in next step
working-directory: src/pg_stat_monitor
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |
src/pg_stat_monitor/regression.diffs
src/pg_stat_monitor/regression.out
src/pg_stat_monitor/logfile
src/pg_stat_monitor/t/results/
src/pg_stat_monitor/tmp_check/log/
!src/pg_stat_monitor/tmp_check/**/archives/*
!src/pg_stat_monitor/tmp_check/**/backup/*
!src/pg_stat_monitor/tmp_check/**/pgdata/*
!src/pg_stat_monitor/tmp_check/**/archives/
!src/pg_stat_monitor/tmp_check/**/backup/
!src/pg_stat_monitor/tmp_check/**/pgdata/
if-no-files-found: warn
retention-days: 3
- name: Start Server installcheck-world tests
run: make installcheck-world
- name: Report on installcheck-world test suites fail
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ failure() }}
with:
name: Regressions output files of failed testsuite, and pg log
path: |
**/regression.diffs
**/regression.out
src/pg_stat_monitor/logfile
retention-days: 3
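These from-source jobs copy the freshly built `pg_config` into `/usr/bin` before building the extension, because PGXS locates headers, libraries, and the extension directory through whichever `pg_config` is first on PATH. A quick sanity check, assuming the same install prefix as the workflow:

``` sh
# Which pg_config will PGXS pick up, and what does it point at?
command -v pg_config
pg_config --version
pg_config --pgxs        # path to pgxs.mk used by "make USE_PGXS=1"
pg_config --sharedir    # extension .control/.sql files land under this, in extension/
```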

View File

@ -0,0 +1,97 @@
name: postgresql-15-pgdg-package
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-15-pgdg-package-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
- name: Delete old postgresql files
run: |
sudo apt-get update
sudo apt purge postgresql-client-common postgresql-common \
postgresql postgresql*
sudo apt-get install -y libreadline6-dev systemtap-sdt-dev wget \
zlib1g-dev libssl-dev libpam0g-dev bison flex libipc-run-perl
sudo rm -rf /var/lib/postgresql /var/log/postgresql /etc/postgresql \
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Install PG Distribution Postgresql 15
run: |
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt \
$(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
sudo wget --quiet -O - \
https://www.postgresql.org/media/keys/ACCC4CF8.asc |
sudo apt-key add -
sudo apt update
sudo apt -y install postgresql-15 postgresql-server-dev-15
- name: Change src owner to postgres
run: |
sudo chmod o+rx ~
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
run: |
sudo -u postgres bash -c 'make USE_PGXS=1'
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor
- name: Start pg_stat_monitor_tests
run: |
sudo service postgresql stop
echo "shared_preload_libraries = 'pg_stat_monitor'" |
sudo tee -a /etc/postgresql/15/main/postgresql.conf
sudo service postgresql start
sudo psql -V
export PG_TEST_PORT_DIR=${GITHUB_WORKSPACE}/src/pg_stat_monitor
echo $PG_TEST_PORT_DIR
sudo -E -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Change dir permissions on fail
if: ${{ failure() }}
run: |
sudo chmod -R ugo+rwx t
sudo chmod -R ugo+rwx tmp_check
exit 2 # regenerate error so that we can upload files in next step
working-directory: src/pg_stat_monitor
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |
src/pg_stat_monitor/regression.diffs
src/pg_stat_monitor/regression.out
src/pg_stat_monitor/logfile
src/pg_stat_monitor/t/results/
src/pg_stat_monitor/tmp_check/log/
!src/pg_stat_monitor/tmp_check/**/archives/*
!src/pg_stat_monitor/tmp_check/**/backup/*
!src/pg_stat_monitor/tmp_check/**/pgdata/*
!src/pg_stat_monitor/tmp_check/**/archives/
!src/pg_stat_monitor/tmp_check/**/backup/
!src/pg_stat_monitor/tmp_check/**/pgdata/
if-no-files-found: warn
retention-days: 3
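The new PGDG package jobs register the upstream apt repository with `apt-key add`, mirroring the older PG 12-14 jobs. On current Ubuntu releases `apt-key` is deprecated; a commonly used alternative (not what this workflow does) is a keyring file referenced via `signed-by`, sketched here with the same repository URL:

``` sh
# Alternative repo setup without apt-key (assumes wget and gpg are available).
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc \
  | gpg --dearmor | sudo tee /usr/share/keyrings/pgdg.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" \
  | sudo tee /etc/apt/sources.list.d/pgdg.list
sudo apt update && sudo apt -y install postgresql-15 postgresql-server-dev-15
```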

View File

@ -1,14 +1,23 @@
name: postgresql-11-pmm-integration
on: push
name: postgresql-15-pmm-integration
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-11-pgsm-pmm-integration-test
name: pg-15-pgsm-pmm-integration-test
runs-on: ubuntu-latest
timeout-minutes: 20
timeout-minutes: 30
steps:
- name: Clone QA Integration repository
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'Percona-Lab/qa-integration'
ref: 'main'
@ -17,15 +26,23 @@ jobs:
- name: Get branch and Repo Name
run: echo 'The branch and Repo Name is' ${{ github.head_ref }} ${{ github.actor }}/pg_stat_monitor
- name: "Set TARGET_BRANCH variable for a PR run"
if: github.event_name == 'pull_request'
run: echo "TARGET_BRANCH=${{ github.event.pull_request.base.ref }}" >> $GITHUB_ENV
- name: "Set TARGET_BRANCH variable for a PUSH run"
if: github.event_name == 'push'
run: echo "TARGET_BRANCH=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
- name: Run PMM & PGSM Setup, E2E Tests
run: bash -xe ./pmm_pgsm_setup/pmm_pgsm_setup.sh --pgsql-version=11
run: bash -xe ./pmm_pgsm_setup/pmm_pgsm_setup.sh --pgsql-version=15 --pgstat-monitor-branch=${{ env.TARGET_BRANCH }}
- name: Get PMM-Agent Logs from the Container
if: success() || failure() # run this step even if previous step failed
run: docker exec pgsql_pgsm_11 cat pmm-agent.log > ./pmm-ui-tests/tests/output/pmm-agent.log
run: docker exec pgsql_pgsm_15 cat pmm-agent.log > ./pmm-ui-tests/tests/output/pmm-agent.log
- name: Upload Tests Artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: success() || failure() # run this step even if previous step failed
with:
name: tests-artifact

View File

@ -0,0 +1,111 @@
name: postgresql-15-ppg-package
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-15-ppg-package-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
- name: Delete old postgresql files
run: |
sudo apt-get update
sudo apt purge postgresql-client-common postgresql-common \
postgresql postgresql*
sudo apt-get install -y libreadline6-dev systemtap-sdt-dev \
zlib1g-dev libssl-dev libpam0g-dev python3-dev bison flex \
libipc-run-perl wget
sudo rm -rf /var/lib/postgresql /var/log/postgresql /etc/postgresql \
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Install percona-release script
run: |
sudo apt-get -y update
sudo apt-get -y upgrade
sudo apt-get install -y wget gnupg2 curl lsb-release
sudo wget \
https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
- name: Install Percona Distribution Postgresql 15 & Extensions
run: |
sudo percona-release setup ppg-15
sudo apt-get update -y
sudo apt-get install -y percona-postgresql-15 \
percona-postgresql-contrib percona-postgresql-server-dev-all \
percona-pgpool2 libpgpool2 percona-postgresql-15-pgaudit \
percona-postgresql-15-pgaudit-dbgsym percona-postgresql-15-repack \
percona-postgresql-15-repack-dbgsym percona-pgaudit15-set-user \
percona-pgaudit15-set-user-dbgsym percona-postgresql-15-postgis-3 \
percona-postgresql-15-postgis-3-scripts \
percona-postgresql-postgis-scripts percona-postgresql-postgis \
percona-postgis
- name: Change src owner to postgres
run: |
sudo chmod o+rx ~
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
run: |
sudo -u postgres bash -c 'make USE_PGXS=1'
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor
- name: Start pg_stat_monitor_tests
run: |
sudo service postgresql stop
echo "shared_preload_libraries = 'pg_stat_monitor'" |
sudo tee -a /etc/postgresql/15/main/postgresql.conf
sudo service postgresql start
sudo psql -V
export PG_TEST_PORT_DIR=${GITHUB_WORKSPACE}/src/pg_stat_monitor
echo $PG_TEST_PORT_DIR
sudo -E -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Change dir permissions on fail
if: ${{ failure() }}
run: |
sudo chmod -R ugo+rwx t
sudo chmod -R ugo+rwx tmp_check
exit 2 # regenerate error so that we can upload files in next step
working-directory: src/pg_stat_monitor
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |
src/pg_stat_monitor/regression.diffs
src/pg_stat_monitor/regression.out
src/pg_stat_monitor/logfile
src/pg_stat_monitor/t/results/
src/pg_stat_monitor/tmp_check/log/
!src/pg_stat_monitor/tmp_check/**/archives/*
!src/pg_stat_monitor/tmp_check/**/backup/*
!src/pg_stat_monitor/tmp_check/**/pgdata/*
!src/pg_stat_monitor/tmp_check/**/archives/
!src/pg_stat_monitor/tmp_check/**/backup/
!src/pg_stat_monitor/tmp_check/**/pgdata/
if-no-files-found: warn
retention-days: 3
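After installcheck, a quick manual smoke test of the installed extension can be useful. This is a sketch rather than part of the workflow, and it assumes the server configured in the step above is still running with `shared_preload_libraries = 'pg_stat_monitor'`:

``` sh
sudo -u postgres psql -c "CREATE EXTENSION IF NOT EXISTS pg_stat_monitor;"
sudo -u postgres psql -c "SELECT pg_stat_monitor_version();"
sudo -u postgres psql -c "SELECT query, calls FROM pg_stat_monitor LIMIT 5;"
```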

View File

@ -0,0 +1,151 @@
name: postgresql-16-build
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-16-build-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone postgres repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'postgres/postgres'
ref: 'REL_16_STABLE'
- name: Install dependencies
run: |
sudo apt-get update
sudo apt purge postgresql-client-common postgresql-common \
postgresql postgresql*
sudo apt-get install -y libreadline6-dev systemtap-sdt-dev \
zlib1g-dev libssl-dev libpam0g-dev bison flex \
libipc-run-perl -y docbook-xsl docbook-xsl libxml2 libxml2-utils \
libxml2-dev libxslt-dev xsltproc libkrb5-dev libldap2-dev \
libsystemd-dev gettext tcl-dev libperl-dev pkg-config clang-11 \
llvm-11 llvm-11-dev libselinux1-dev python3-dev \
uuid-dev liblz4-dev
sudo rm -rf /var/lib/postgresql /var/log/postgresql /etc/postgresql \
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Create pgsql dir
run: mkdir -p /opt/pgsql
- name: Build postgres
run: |
export PATH="/opt/pgsql/bin:$PATH"
./configure '--build=x86_64-linux-gnu' '--prefix=/usr' \
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man' \
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' \
'--localstatedir=/var' '--libdir=${prefix}/lib/x86_64-linux-gnu' \
'--libexecdir=${prefix}/lib/x86_64-linux-gnu' '--with-icu' \
'--with-tcl' '--with-perl' '--with-python' '--with-pam' \
'--with-openssl' '--with-libxml' '--with-libxslt' '--with-ldap' \
'PYTHON=/usr/bin/python3' '--mandir=/usr/share/postgresql/16/man' \
'--docdir=/usr/share/doc/postgresql-doc-16' '--with-pgport=5432' \
'--sysconfdir=/etc/postgresql-common' '--datarootdir=/usr/share' \
'--datadir=/usr/share/postgresql/16' '--with-uuid=e2fs' \
'--bindir=/usr/lib/postgresql/16/bin' '--enable-tap-tests' \
'--libdir=/usr/lib/x86_64-linux-gnu' '--enable-debug' \
'--libexecdir=/usr/lib/postgresql' '--with-gnu-ld' \
'--includedir=/usr/include/postgresql' '--enable-dtrace' \
'--enable-nls' '--enable-thread-safety' '--disable-rpath' \
'--with-system-tzdata=/usr/share/zoneinfo' '--with-llvm' \
'LLVM_CONFIG=/usr/bin/llvm-config-11' 'CLANG=/usr/bin/clang-11' \
'--with-systemd' '--with-selinux' 'MKDIR_P=/bin/mkdir -p' \
'PROVE=/usr/bin/prove' 'TAR=/bin/tar' 'XSLTPROC=xsltproc --nonet' \
'LDFLAGS=-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now' \
'build_alias=x86_64-linux-gnu' '--with-gssapi' \
'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' \
'CFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -fno-omit-frame-pointer' \
'CXXFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security'
make world
sudo make install-world
- name: Start postgresql cluster
run: |
export PATH="/usr/lib/postgresql/16/bin:$PATH"
sudo cp /usr/lib/postgresql/16/bin/pg_config /usr/bin
initdb -D /opt/pgsql/data
pg_ctl -D /opt/pgsql/data -l logfile start
- name: Clone pg_stat_monitor repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
- name: Build pg_stat_monitor
run: |
make USE_PGXS=1
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor
- name: Configure and Restart Server
run: |
export PATH="/usr/lib/postgresql/16/bin:$PATH"
pg_ctl -D /opt/pgsql/data -l logfile stop
echo "shared_preload_libraries = 'pg_stat_monitor'" >> \
/opt/pgsql/data/postgresql.conf
echo "compute_query_id = regress" >> /opt/pgsql/data/postgresql.conf
pg_ctl -D /opt/pgsql/data -l logfile start
working-directory: src/pg_stat_monitor
- name: Start pg_stat_monitor_tests
run: |
make installcheck
working-directory: src/pg_stat_monitor/
- name: Change dir permissions on fail
if: ${{ failure() }}
run: |
sudo chmod -R ugo+rwx t
sudo chmod -R ugo+rwx tmp_check
exit 2 # regenerate error so that we can upload files in next step
working-directory: src/pg_stat_monitor
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |
src/pg_stat_monitor/regression.diffs
src/pg_stat_monitor/regression.out
src/pg_stat_monitor/logfile
src/pg_stat_monitor/t/results/
src/pg_stat_monitor/tmp_check/log/
!src/pg_stat_monitor/tmp_check/**/archives/*
!src/pg_stat_monitor/tmp_check/**/backup/*
!src/pg_stat_monitor/tmp_check/**/pgdata/*
!src/pg_stat_monitor/tmp_check/**/archives/
!src/pg_stat_monitor/tmp_check/**/backup/
!src/pg_stat_monitor/tmp_check/**/pgdata/
if-no-files-found: warn
retention-days: 3
- name: Start Server installcheck-world tests
run: make installcheck-world
- name: Report on installcheck-world test suites fail
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ failure() }}
with:
name: Regressions output files of failed testsuite, and pg log
path: |
**/regression.diffs
**/regression.out
src/pg_stat_monitor/logfile
retention-days: 3

View File

@ -0,0 +1,97 @@
name: postgresql-16-pgdg-package
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-16-pgdg-package-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
- name: Delete old postgresql files
run: |
sudo apt-get update
sudo apt purge postgresql-client-common postgresql-common \
postgresql postgresql*
sudo apt-get install -y libreadline6-dev systemtap-sdt-dev wget \
zlib1g-dev libssl-dev libpam0g-dev bison flex libipc-run-perl
sudo rm -rf /var/lib/postgresql /var/log/postgresql /etc/postgresql \
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Install PG Distribution Postgresql 16
run: |
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt \
$(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
sudo wget --quiet -O - \
https://www.postgresql.org/media/keys/ACCC4CF8.asc |
sudo apt-key add -
sudo apt update
sudo apt -y install postgresql-16 postgresql-server-dev-16
- name: Change src owner to postgres
run: |
sudo chmod o+rx ~
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
run: |
sudo -u postgres bash -c 'make USE_PGXS=1'
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor
- name: Start pg_stat_monitor_tests
run: |
sudo service postgresql stop
echo "shared_preload_libraries = 'pg_stat_monitor'" |
sudo tee -a /etc/postgresql/16/main/postgresql.conf
sudo service postgresql start
sudo psql -V
export PG_TEST_PORT_DIR=${GITHUB_WORKSPACE}/src/pg_stat_monitor
echo $PG_TEST_PORT_DIR
sudo -E -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Change dir permissions on fail
if: ${{ failure() }}
run: |
sudo chmod -R ugo+rwx t
sudo chmod -R ugo+rwx tmp_check
exit 2 # regenerate error so that we can upload files in next step
working-directory: src/pg_stat_monitor
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |
src/pg_stat_monitor/regression.diffs
src/pg_stat_monitor/regression.out
src/pg_stat_monitor/logfile
src/pg_stat_monitor/t/results/
src/pg_stat_monitor/tmp_check/log/
!src/pg_stat_monitor/tmp_check/**/archives/*
!src/pg_stat_monitor/tmp_check/**/backup/*
!src/pg_stat_monitor/tmp_check/**/pgdata/*
!src/pg_stat_monitor/tmp_check/**/archives/
!src/pg_stat_monitor/tmp_check/**/backup/
!src/pg_stat_monitor/tmp_check/**/pgdata/
if-no-files-found: warn
retention-days: 3

View File

@ -0,0 +1,111 @@
name: postgresql-16-ppg-package
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-16-ppg-package-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
- name: Delete old postgresql files
run: |
sudo apt-get update
sudo apt purge postgresql-client-common postgresql-common \
postgresql postgresql*
sudo apt-get install -y libreadline6-dev systemtap-sdt-dev \
zlib1g-dev libssl-dev libpam0g-dev python3-dev bison flex \
libipc-run-perl wget
sudo rm -rf /var/lib/postgresql /var/log/postgresql /etc/postgresql \
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Install percona-release script
run: |
sudo apt-get -y update
sudo apt-get -y upgrade
sudo apt-get install -y wget gnupg2 curl lsb-release
sudo wget \
https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
- name: Install Percona Distribution Postgresql 16 & Extensions
run: |
sudo percona-release setup ppg-16
sudo apt-get update -y
sudo apt-get install -y percona-postgresql-16 \
percona-postgresql-contrib percona-postgresql-server-dev-all \
percona-pgpool2 libpgpool2 percona-postgresql-16-pgaudit \
percona-postgresql-16-pgaudit-dbgsym percona-postgresql-16-repack \
percona-postgresql-16-repack-dbgsym percona-pgaudit16-set-user \
percona-pgaudit16-set-user-dbgsym percona-postgresql-16-postgis-3 \
percona-postgresql-16-postgis-3-scripts \
percona-postgresql-postgis-scripts percona-postgresql-postgis \
percona-postgis
- name: Change src owner to postgres
run: |
sudo chmod o+rx ~
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
run: |
sudo -u postgres bash -c 'make USE_PGXS=1'
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor
- name: Start pg_stat_monitor_tests
run: |
sudo service postgresql stop
echo "shared_preload_libraries = 'pg_stat_monitor'" |
sudo tee -a /etc/postgresql/16/main/postgresql.conf
sudo service postgresql start
sudo psql -V
export PG_TEST_PORT_DIR=${GITHUB_WORKSPACE}/src/pg_stat_monitor
echo $PG_TEST_PORT_DIR
sudo -E -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Change dir permissions on fail
if: ${{ failure() }}
run: |
sudo chmod -R ugo+rwx t
sudo chmod -R ugo+rwx tmp_check
exit 2 # regenerate error so that we can upload files in next step
working-directory: src/pg_stat_monitor
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |
src/pg_stat_monitor/regression.diffs
src/pg_stat_monitor/regression.out
src/pg_stat_monitor/logfile
src/pg_stat_monitor/t/results/
src/pg_stat_monitor/tmp_check/log/
!src/pg_stat_monitor/tmp_check/**/archives/*
!src/pg_stat_monitor/tmp_check/**/backup/*
!src/pg_stat_monitor/tmp_check/**/pgdata/*
!src/pg_stat_monitor/tmp_check/**/archives/
!src/pg_stat_monitor/tmp_check/**/backup/
!src/pg_stat_monitor/tmp_check/**/pgdata/
if-no-files-found: warn
retention-days: 3

View File

@ -0,0 +1,151 @@
name: postgresql-17-build
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-17-build-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone postgres repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: 'postgres/postgres'
ref: 'REL_17_STABLE'
- name: Install dependencies
run: |
sudo apt-get update
sudo apt purge postgresql-client-common postgresql-common \
postgresql postgresql*
sudo apt-get install -y libreadline6-dev systemtap-sdt-dev \
zlib1g-dev libssl-dev libpam0g-dev bison flex \
libipc-run-perl -y docbook-xsl docbook-xsl libxml2 libxml2-utils \
libxml2-dev libxslt-dev xsltproc libkrb5-dev libldap2-dev \
libsystemd-dev gettext tcl-dev libperl-dev pkg-config clang-11 \
llvm-11 llvm-11-dev libselinux1-dev python3-dev \
uuid-dev liblz4-dev
sudo rm -rf /var/lib/postgresql /var/log/postgresql /etc/postgresql \
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Create pgsql dir
run: mkdir -p /opt/pgsql
- name: Build postgres
run: |
export PATH="/opt/pgsql/bin:$PATH"
./configure '--build=x86_64-linux-gnu' '--prefix=/usr' \
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man' \
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' \
'--localstatedir=/var' '--libdir=${prefix}/lib/x86_64-linux-gnu' \
'--libexecdir=${prefix}/lib/x86_64-linux-gnu' '--with-icu' \
'--with-tcl' '--with-perl' '--with-python' '--with-pam' \
'--with-openssl' '--with-libxml' '--with-libxslt' '--with-ldap' \
'PYTHON=/usr/bin/python3' '--mandir=/usr/share/postgresql/17/man' \
'--docdir=/usr/share/doc/postgresql-doc-17' '--with-pgport=5432' \
'--sysconfdir=/etc/postgresql-common' '--datarootdir=/usr/share' \
'--datadir=/usr/share/postgresql/17' '--with-uuid=e2fs' \
'--bindir=/usr/lib/postgresql/17/bin' '--enable-tap-tests' \
'--libdir=/usr/lib/x86_64-linux-gnu' '--enable-debug' \
'--libexecdir=/usr/lib/postgresql' '--with-gnu-ld' \
'--includedir=/usr/include/postgresql' '--enable-dtrace' \
'--enable-nls' '--enable-thread-safety' '--disable-rpath' \
'--with-system-tzdata=/usr/share/zoneinfo' '--with-llvm' \
'LLVM_CONFIG=/usr/bin/llvm-config-11' 'CLANG=/usr/bin/clang-11' \
'--with-systemd' '--with-selinux' 'MKDIR_P=/bin/mkdir -p' \
'PROVE=/usr/bin/prove' 'TAR=/bin/tar' 'XSLTPROC=xsltproc --nonet' \
'LDFLAGS=-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now' \
'build_alias=x86_64-linux-gnu' '--with-gssapi' \
'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' \
'CFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -fno-omit-frame-pointer' \
'CXXFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security'
make world
sudo make install-world
- name: Start postgresql cluster
run: |
export PATH="/usr/lib/postgresql/17/bin:$PATH"
sudo cp /usr/lib/postgresql/17/bin/pg_config /usr/bin
initdb -D /opt/pgsql/data
pg_ctl -D /opt/pgsql/data -l logfile start
- name: Clone pg_stat_monitor repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
- name: Build pg_stat_monitor
run: |
make USE_PGXS=1
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor
- name: Configure and Restart Server
run: |
export PATH="/usr/lib/postgresql/17/bin:$PATH"
pg_ctl -D /opt/pgsql/data -l logfile stop
echo "shared_preload_libraries = 'pg_stat_monitor'" >> \
/opt/pgsql/data/postgresql.conf
echo "compute_query_id = regress" >> /opt/pgsql/data/postgresql.conf
pg_ctl -D /opt/pgsql/data -l logfile start
working-directory: src/pg_stat_monitor
- name: Start pg_stat_monitor_tests
run: |
make installcheck
working-directory: src/pg_stat_monitor/
- name: Change dir permissions on fail
if: ${{ failure() }}
run: |
sudo chmod -R ugo+rwx t
sudo chmod -R ugo+rwx tmp_check
exit 2 # regenerate error so that we can upload files in next step
working-directory: src/pg_stat_monitor
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |
src/pg_stat_monitor/regression.diffs
src/pg_stat_monitor/regression.out
src/pg_stat_monitor/logfile
src/pg_stat_monitor/t/results/
src/pg_stat_monitor/tmp_check/log/
!src/pg_stat_monitor/tmp_check/**/archives/*
!src/pg_stat_monitor/tmp_check/**/backup/*
!src/pg_stat_monitor/tmp_check/**/pgdata/*
!src/pg_stat_monitor/tmp_check/**/archives/
!src/pg_stat_monitor/tmp_check/**/backup/
!src/pg_stat_monitor/tmp_check/**/pgdata/
if-no-files-found: warn
retention-days: 3
- name: Start Server installcheck-world tests
run: make installcheck-world
- name: Report on installcheck-world test suites fail
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ failure() }}
with:
name: Regressions output files of failed testsuite, and pg log
path: |
**/regression.diffs
**/regression.out
src/pg_stat_monitor/logfile
retention-days: 3

View File

@ -0,0 +1,97 @@
name: postgresql-17-pgdg-package
on:
pull_request:
push:
branches:
- main
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
permissions:
contents: read
jobs:
build:
name: pg-17-pgdg-package-test
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Clone pg_stat_monitor repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: 'src/pg_stat_monitor'
- name: Delete old postgresql files
run: |
sudo apt-get update
sudo apt purge postgresql-client-common postgresql-common \
postgresql postgresql*
sudo apt-get install -y libreadline6-dev systemtap-sdt-dev wget \
zlib1g-dev libssl-dev libpam0g-dev bison flex libipc-run-perl
sudo rm -rf /var/lib/postgresql /var/log/postgresql /etc/postgresql \
/usr/lib/postgresql /usr/include/postgresql /usr/share/postgresql \
/etc/postgresql
sudo rm -f /usr/bin/pg_config
sudo /usr/bin/perl -MCPAN -e 'install IPC::Run'
sudo /usr/bin/perl -MCPAN -e 'install Text::Trim'
- name: Install PG Distribution Postgresql 17
run: |
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt \
$(lsb_release -cs)-pgdg main 17" > /etc/apt/sources.list.d/pgdg.list'
sudo wget --quiet -O - \
https://www.postgresql.org/media/keys/ACCC4CF8.asc |
sudo apt-key add -
sudo apt update
sudo apt -y install postgresql-17 postgresql-server-dev-17
- name: Change src owner to postgres
run: |
sudo chmod o+rx ~
sudo chown -R postgres:postgres src
- name: Build pg_stat_monitor
run: |
sudo -u postgres bash -c 'make USE_PGXS=1'
sudo make USE_PGXS=1 install
working-directory: src/pg_stat_monitor
- name: Start pg_stat_monitor_tests
run: |
sudo service postgresql stop
echo "shared_preload_libraries = 'pg_stat_monitor'" |
sudo tee -a /etc/postgresql/17/main/postgresql.conf
sudo service postgresql start
sudo psql -V
export PG_TEST_PORT_DIR=${GITHUB_WORKSPACE}/src/pg_stat_monitor
echo $PG_TEST_PORT_DIR
sudo -E -u postgres bash -c 'make installcheck USE_PGXS=1'
working-directory: src/pg_stat_monitor
- name: Change dir permissions on fail
if: ${{ failure() }}
run: |
sudo chmod -R ugo+rwx t
sudo chmod -R ugo+rwx tmp_check
exit 2 # regenerate error so that we can upload files in next step
working-directory: src/pg_stat_monitor
- name: Upload logs on fail
if: ${{ failure() }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Regressions diff and postgresql log
path: |
src/pg_stat_monitor/regression.diffs
src/pg_stat_monitor/regression.out
src/pg_stat_monitor/logfile
src/pg_stat_monitor/t/results/
src/pg_stat_monitor/tmp_check/log/
!src/pg_stat_monitor/tmp_check/**/archives/*
!src/pg_stat_monitor/tmp_check/**/backup/*
!src/pg_stat_monitor/tmp_check/**/pgdata/*
!src/pg_stat_monitor/tmp_check/**/archives/
!src/pg_stat_monitor/tmp_check/**/backup/
!src/pg_stat_monitor/tmp_check/**/pgdata/
if-no-files-found: warn
retention-days: 3

.github/workflows/scorecard.yml
View File

@ -0,0 +1,48 @@
name: Scorecard
on:
# To guarantee Maintained check is occasionally updated. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained
schedule:
- cron: "24 3 * * 1"
push:
branches:
- main
# Declare default permissions as read only.
permissions: read-all
jobs:
analysis:
name: Analysis
runs-on: ubuntu-latest
permissions:
# Needed to upload the results to code-scanning dashboard.
security-events: write
# Needed to publish results and get a badge (see publish_results below).
id-token: write
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Run analysis
uses: ossf/scorecard-action@05b42c624433fc40578a4040d5cf5e36ddca8cde # v2.4.2
with:
results_file: results.sarif
results_format: sarif
publish_results: true
- name: Upload results
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard (optional).
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@ce28f5bb42b7a9f2c824e633a3f6ee835bab6858 # v3.29.0
with:
sarif_file: results.sarif
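The new Scorecard workflow runs the OpenSSF analysis on a schedule and on pushes to main, then uploads the SARIF both as an artifact and to code scanning. The same analysis can roughly be reproduced with the scorecard CLI; the sketch below assumes the CLI is installed and that a read-only token is available (flag and check names are from the scorecard documentation, not from this diff):

``` sh
# Requires the scorecard CLI and a read-only token (placeholder shown).
export GITHUB_AUTH_TOKEN=ghp_replace_me
scorecard --repo=github.com/percona/pg_stat_monitor \
  --checks=Maintained,Pinned-Dependencies --format=json
```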

.gitignore
View File

@ -45,6 +45,7 @@
*.mod*
*.cmd
.tmp_versions/
.deps/
modules.order
Module.symvers
Mkfile.old
@ -59,3 +60,6 @@ dkms.conf
## .vscode
.vscode/
.vscode/*
# tools files
typedefs-full.list

.licenserc.yaml
View File

@ -0,0 +1,19 @@
header:
paths:
- "**/*.c"
- "**/*.h"
license:
pattern: |
.*\.(c|h)
.*
Portions Copyright © 2018-2024, Percona LLC and/or its affiliates
Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
Portions Copyright (c) 1994, The Regents of the University of California
IDENTIFICATION
contrib/pg_stat_monitor/.*\.(c|h)
comment: never

View File

@ -1,20 +0,0 @@
# List of pg_stat_monitor Adopters
This is the list of organizations and users that publicly shared details of how
they are using pg_stat_monitor.
Please send us a pull request if you want to be added or removed from this
list.
The list of organizations that have publicly shared the usage of
pg_stat_monitor:
| Organization | Description | Success Story |
| :--- | :--- | :--- |
| [Example](https://example.com/) | Example company running pg_stat_monitor for dev and production for core application | [English](./adopters/example/README.md) |
The list of users that have publicly shared the usage of pg_stat_monitor.
| User | Description | Success Story |
| :--- | :--- | :--- |
| [Example User](https://github.com/username) | Personal tests of pg_stat_monitor | [English](./adopters/users/username/README.md) |

View File

@ -2,30 +2,33 @@
"name": "pg_stat_monitor",
"abstract": "PostgreSQL Query Performance Monitoring Tool",
"description": "pg_stat_monitor is a PostgreSQL Query Performance Monitoring tool, based on PostgreSQL's contrib module pg_stat_statements. PostgreSQL’s pg_stat_statements provides the basic statistics, which is sometimes not enough. The major shortcoming in pg_stat_statements is that it accumulates all the queries and their statistics and does not provide aggregated statistics nor histogram information. In this case, a user would need to calculate the aggregates, which is quite an expensive operation.",
"version": "1.1.0-dev",
"version": "2.1.1",
"maintainer": [
"ibrar.ahmed@percona.com"
"Artem Gavrilov <artem.gavrilov@percona.com>",
"Diego dos Santos Fronza <diego.fronza@percona.com>"
],
"license": "postgresql",
"license": {
"PostgreSQL": "https://www.postgresql.org/about/licence"
},
"provides": {
"pg_stat_monitor": {
"abstract": "PostgreSQL Query Performance Monitoring Tool",
"file": "pg_stat_monitor--1.0.sql",
"file": "pg_stat_monitor--2.0--2.1.sql",
"docfile": "README.md",
"version": "1.1.0-dev"
"version": "2.1.1"
}
},
"prereqs": {
"runtime": {
"requires": {
"PostgreSQL": "11.0.0"
"PostgreSQL": "12.0.0"
}
}
},
"resources": {
"homepage": "https://percona.github.io/pg_stat_monitor/",
"homepage": "https://github.com/percona/pg_stat_monitor",
"bugtracker": {
"web": "https://jira.percona.com/projects/PG/issues"
"web": "https://perconadev.atlassian.net/jira/software/c/projects/PG/issues"
},
"repository": {
"url": "https://github.com/percona/pg_stat_monitor.git",
@ -33,7 +36,7 @@
"type": "git"
}
},
"generated_by": "ibrar.ahmed@percona.com",
"generated_by": "Artem Gavrilov",
"meta-spec": {
"version": "1.0.0",
"url": "http://pgxn.org/meta/spec.txt"

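META.json is what PGXN reads, so the bumped version, the updated provides file, and the PostgreSQL 12.0.0 prerequisite all surface through the PGXN client. A hedged usage sketch, assuming pgxnclient and a matching `pg_config` are installed:

``` sh
pgxn info pg_stat_monitor              # prints the metadata from META.json
sudo pgxn install pg_stat_monitor      # builds against the pg_config on PATH
pgxn load -d postgres pg_stat_monitor  # issues CREATE EXTENSION in the target db
# Note: the library still has to be in shared_preload_libraries to collect data.
```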
View File

@ -4,7 +4,7 @@ MODULE_big = pg_stat_monitor
OBJS = hash_query.o guc.o pg_stat_monitor.o $(WIN32RES)
EXTENSION = pg_stat_monitor
DATA = pg_stat_monitor--1.0.sql
DATA = pg_stat_monitor--2.0.sql pg_stat_monitor--1.0--2.0.sql pg_stat_monitor--2.0--2.1.sql pg_stat_monitor--2.1--2.2.sql
PGFILEDESC = "pg_stat_monitor - execution statistics of SQL statements"
@ -12,17 +12,35 @@ LDFLAGS_SL += $(filter -lm, $(LIBS))
TAP_TESTS = 1
REGRESS_OPTS = --temp-config $(top_srcdir)/contrib/pg_stat_monitor/pg_stat_monitor.conf --inputdir=regression
REGRESS = basic version guc counters relations database error_insert application_name application_name_unique top_query cmd_type error rows tags histogram
REGRESS = basic \
version \
guc \
pgsm_query_id \
functions \
counters \
relations \
database \
error_insert \
application_name \
application_name_unique \
top_query \
different_parent_queries \
cmd_type \
error \
rows \
tags \
user \
level_tracking \
decode_error_level
# Disabled because these tests require "shared_preload_libraries=pg_stat_statements",
# which typical installcheck users do not have (e.g. buildfarm clients).
# NO_INSTALLCHECK = 1
PG_CONFIG = pg_config
PGSM_INPUT_SQL_VERSION := 1.0
PG_CONFIG ?= pg_config
ifdef USE_PGXS
MAJORVERSION := $(shell pg_config --version | awk {'print $$2'} | cut -f1 -d".")
MAJORVERSION := $(shell $(PG_CONFIG) --version | awk {'print $$2'} | cut -f1 -d".")
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
else
@ -32,19 +50,13 @@ include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif
ifeq ($(shell test $(MAJORVERSION) -gt 12; echo $$?),0)
PGSM_INPUT_SQL_VERSION := ${PGSM_INPUT_SQL_VERSION}.${MAJORVERSION}
endif
# Fetches typedefs list for PostgreSQL core and merges it with typedefs defined in this project.
# https://wiki.postgresql.org/wiki/Running_pgindent_on_non-core_code_or_development_code
update-typedefs:
wget -q -O - "https://buildfarm.postgresql.org/cgi-bin/typedefs.pl?branch=REL_17_STABLE" | cat - typedefs.list | sort | uniq > typedefs-full.list
$(info Using pg_stat_monitor--${PGSM_INPUT_SQL_VERSION}.sql.in file to generate sql filea.)
# Indents projects sources.
indent:
pgindent --typedefs=typedefs-full.list .
ifneq (,$(wildcard ../pg_stat_monitor--${PGSM_INPUT_SQL_VERSION}.sql.in))
CP := $(shell cp -v ../pg_stat_monitor--${PGSM_INPUT_SQL_VERSION}.sql.in ../pg_stat_monitor--1.0.sql)
endif
ifneq (,$(wildcard pg_stat_monitor--${PGSM_INPUT_SQL_VERSION}.sql.in))
CP := $(shell cp -v pg_stat_monitor--${PGSM_INPUT_SQL_VERSION}.sql.in pg_stat_monitor--1.0.sql)
endif
clean:
rm -rf ${DATA}
rm -rf t/results
.PHONY: update-typedefs indent
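With `PG_CONFIG ?= pg_config` and MAJORVERSION derived from `$(PG_CONFIG)`, the PGXS build can now be pointed at a specific PostgreSQL installation instead of whatever `pg_config` is first on PATH. A sketch of the common invocations; the install path below is an assumption, adjust it to the target server:

``` sh
# Build and install against a specific server installation.
make USE_PGXS=1 PG_CONFIG=/usr/lib/postgresql/17/bin/pg_config
sudo make USE_PGXS=1 PG_CONFIG=/usr/lib/postgresql/17/bin/pg_config install

# Developer helpers added in this Makefile:
make update-typedefs   # fetch core typedefs and merge with typedefs.list
make indent            # run pgindent using the generated typedefs-full.list
```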

View File

@ -1,6 +1,15 @@
[![postgresql-14-build](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-14-build.yml/badge.svg)](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-14-build.yml) [![postgresql-14-pgdg-package](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-14-pgdg-package.yml/badge.svg)](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-14-pgdg-package.yml) [![postgresql-14-ppg-package](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-14-ppg-package.yml/badge.svg)](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-14-ppg-package.yml)
[![postgresql-12-pgdg-package](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-12-pgdg-package.yml/badge.svg)](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-12-pgdg-package.yml)
[![postgresql-13-pgdg-package](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-13-pgdg-package.yml/badge.svg)](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-13-pgdg-package.yml)
[![postgresql-14-pgdg-package](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-14-pgdg-package.yml/badge.svg)](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-14-pgdg-package.yml)
[![postgresql-15-pgdg-package](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-15-pgdg-package.yml/badge.svg)](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-15-pgdg-package.yml)
[![postgresql-16-pgdg-package](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-16-pgdg-package.yml/badge.svg)](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-16-pgdg-package.yml)
[![postgresql-17-pgdg-package](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-17-pgdg-package.yml/badge.svg)](https://github.com/percona/pg_stat_monitor/actions/workflows/postgresql-17-pgdg-package.yml)
[![PGXN version](https://badge.fury.io/pg/pg_stat_monitor.svg)](https://badge.fury.io/pg/pg_stat_monitor)
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/percona/pg_stat_monitor/badge)](https://scorecard.dev/viewer/?uri=github.com/percona/pg_stat_monitor)
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/9703/badge)](https://www.bestpractices.dev/projects/9703)
[![Code coverage](https://codecov.io/gh/percona/pg_stat_monitor/branch/main/graph/badge.svg)](https://codecov.io/gh/percona/pg_stat_monitor)
[![Forum](https://img.shields.io/badge/Forum-join-brightgreen)](https://forums.percona.com/)
# pg_stat_monitor: Query Performance Monitoring Tool for PostgreSQL
## Table of Contents
@ -16,6 +25,8 @@
- [Installing from Percona repositories](#installing-from-percona-repositories)
- [Installing from PostgreSQL `yum` repositories](#installing-from-postgresql-yum-repositories)
- [Installing from PGXN](#installing-from-pgxn)
- [Installing from Trunk](#installing-from-trunk)
- [Installing from sources](#building-from-source)
- [Configuration](#configuration)
- [Setup](#setup)
- [Building from source](#building-from-source)
@ -43,7 +54,7 @@ To learn about other features, available in `pg_stat_monitor`, see the [Features
`pg_stat_monitor` supports PostgreSQL versions 11 and above. It is compatible with both PostgreSQL provided by PostgreSQL Global Development Group (PGDG) and [Percona Distribution for PostgreSQL](https://www.percona.com/software/postgresql-distribution).
The `RPM` (for RHEL and CentOS) and the `DEB` (for Debian and Ubuntu) packages are available from Percona repositories for PostgreSQL versions [11](https://www.percona.com/downloads/percona-postgresql-11/LATEST/), [12](https://www.percona.com/downloads/postgresql-distribution-12/LATEST/), [13](https://www.percona.com/downloads/postgresql-distribution-13/LATEST/) and [14](https://www.percona.com/downloads/postgresql-distribution-14/LATEST/).
The `RPM` (for RHEL and CentOS) and the `DEB` (for Debian and Ubuntu) packages are available from Percona repositories for PostgreSQL versions [12](https://www.percona.com/downloads/postgresql-distribution-12/LATEST/), [13](https://www.percona.com/downloads/postgresql-distribution-13/LATEST/), [14](https://www.percona.com/downloads/postgresql-distribution-14/LATEST/), [15](https://www.percona.com/downloads/postgresql-distribution-15/LATEST/), [16](https://www.percona.com/downloads/postgresql-distribution-16/LATEST/) and [17](https://www.percona.com/downloads/postgresql-distribution-17/LATEST/).
The RPM packages are also available in the official PostgreSQL (PGDG) yum repositories.
@ -53,8 +64,8 @@ The `pg_stat_monitor` should work on the latest version of both [Percona Distrib
| **Distribution** | **Version** | **Provider** |
| ---------------- | --------------- | ------------ |
|[Percona Distribution for PostgreSQL](https://www.percona.com/software/postgresql-distribution)| [11](https://www.percona.com/downloads/percona-postgresql-11/LATEST/), [12](https://www.percona.com/downloads/postgresql-distribution-12/LATEST/), [13](https://www.percona.com/downloads/postgresql-distribution-13/LATEST/) and [14](https://www.percona.com/downloads/postgresql-distribution-14/LATEST/)| Percona|
| PostgreSQL | 11, 12, 13 and 14 | PostgreSQL Global Development Group (PGDG) |
|[Percona Distribution for PostgreSQL](https://www.percona.com/software/postgresql-distribution)| [12](https://www.percona.com/downloads/postgresql-distribution-12/LATEST/), [13](https://www.percona.com/downloads/postgresql-distribution-13/LATEST/), [14](https://www.percona.com/downloads/postgresql-distribution-14/LATEST/), [15](https://www.percona.com/downloads/postgresql-distribution-15/LATEST/), [16](https://www.percona.com/downloads/postgresql-distribution-16/LATEST/) and [17](https://www.percona.com/downloads/postgresql-distribution-17/LATEST/)| Percona|
| PostgreSQL | 12, 13, 14, 15, 16 and 17 | PostgreSQL Global Development Group (PGDG) |
### Features
@ -85,7 +96,7 @@ The following are useful links in [`pg_stat_monitor` documentation](https://docs
The PostgreSQL YUM repository supports `pg_stat_monitor` for all [supported versions](#supported-versions) for the following platforms:
* Red Hat Enterprise/Rocky/CentOS/Oracle Linux 7 and 8
* Red Hat Enterprise/Rocky/CentOS/Oracle Linux 7, 8 and 9
* Fedora 33 and 34
Find the list of supported platforms for `pg_stat_monitor` within [Percona Distribution for PostgreSQL](https://www.percona.com/software/postgresql-distribution) on the [Percona Release Lifecycle Overview](https://www.percona.com/services/policies/percona-software-support-lifecycle#pgsql) page.
@ -95,10 +106,11 @@ Find the list of supported platforms for `pg_stat_monitor` within [Percona Distr
You can install `pg_stat_monitor` from the following sources:
* [Percona repositories](#installing-from-percona-repositories),
* [PostgreSQL PGDG yum repositories](#installing-from-postgresql-yum-repositories),
* [PGXN](#installing-from-pgxn) and
* [source code](#building-from-source).
* [Percona repositories](#installing-from-percona-repositories)
* [PostgreSQL PGDG yum repositories](#installing-from-postgresql-yum-repositories)
* [PGXN](#installing-from-pgxn)
* [Trunk](#installing-from-trunk)
* [source code](#building-from-source)
#### Installing from Percona repositories
@ -109,19 +121,19 @@ To install `pg_stat_monitor` from Percona repositories, you need to use the `per
2. Enable Percona repository:
``` sh
percona-release setup ppgXX
percona-release setup ppg-XX
```
Replace XX with the desired PostgreSQL version. For example, to install `pg_stat_monitor ` for PostgreSQL 13, specify `ppg13`.
Replace `XX` with the desired PostgreSQL version. For example, to install `pg_stat_monitor` for PostgreSQL 17, specify `ppg-17`.
3. Install `pg_stat_monitor` package
* For Debian and Ubuntu:
``` sh
apt-get install percona-pg-stat-monitor13
apt-get install percona-pg-stat-monitor17
```
* For RHEL and CentOS:
``` sh
yum install percona-pg-stat-monitor13
yum install percona-pg-stat-monitor17
```
#### Installing from PostgreSQL `yum` repositories
@ -134,12 +146,12 @@ Install `pg_stat_monitor`:
dnf install -y pg_stat_monitor_<VERSION>
```
Replace the `VERSION` variable with the PostgreSQL version you are using (e.g. specify `pg_stat_monitor_13` for PostgreSQL 13)
Replace the `VERSION` variable with the PostgreSQL version you are using (e.g. specify `pg_stat_monitor_17` for PostgreSQL 17)
#### Installing from PGXN
You can install `pg_stat_monitor` from PGXN (PostgreSQL Extensions Network) using the [PGXN client](https://pgxn.github.io/pgxnclient/).
You can install `pg_stat_monitor` from [PGXN (PostgreSQL Extensions Network)](https://pgxn.org/) using the [PGXN client](https://pgxn.github.io/pgxnclient/).
Use the following command:
@ -147,9 +159,19 @@ Use the following command:
pgxn install pg_stat_monitor
```
#### Installing from Trunk
You can install `pg_stat_monitor` from [Trunk (A PostgreSQL Extensions Registry)](https://pgt.dev/) using the [Trunk CLI](https://github.com/tembo-io/trunk?tab=readme-ov-file#installation).
Use the following command:
```
trunk install pg_stat_monitor
```
### Configuration
You can find the configuration parameters of the `pg_stat_monitor` extension in the `pg_stat_monitor_settings` view. To change the default configuration, specify new values for the desired parameters using the GUC (Grant Unified Configuration) system. To learn more, refer to the [Configuration parameters](https://docs.percona.com/pg-stat-monitor/configuration.html) section of the documentation.
You can find the configuration parameters of the `pg_stat_monitor` extension in the `pg_settings` view. To change the default configuration, specify new values for the desired parameters using the GUC (Grand Unified Configuration) system. To learn more, refer to the [Configuration parameters](https://docs.percona.com/pg-stat-monitor/configuration.html) section of the documentation.
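For illustration, here is a minimal sketch of both steps (the `pgsm_bucket_time` parameter used below is one of the GUCs defined in this repository's `guc.c`; being a `PGC_POSTMASTER` setting, it only takes effect after a server restart):

```sql
-- List the pg_stat_monitor parameters and their current values
SELECT name, setting, unit
FROM pg_settings
WHERE name LIKE 'pg_stat_monitor.%';

-- Change a parameter through the GUC system; restart the server afterwards,
-- because pgsm_bucket_time can only be set at server start
ALTER SYSTEM SET pg_stat_monitor.pgsm_bucket_time = 300;
```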
### Setup
@ -185,7 +207,7 @@ sudo systemctl restart postgresql.service
```sh
sudo systemctl restart postgresql-13
sudo systemctl restart postgresql-17
```
Create the extension using the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command. Using this command requires the privileges of a superuser or a database owner. Connect to `psql` as a superuser for a database and run the following command:
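In its minimal form, the statement is simply (shown here for completeness; it is the same `CREATE EXTENSION` command referenced above):

```sql
CREATE EXTENSION pg_stat_monitor;
```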
@ -247,19 +269,13 @@ make USE_PGXS=1 install
To uninstall `pg_stat_monitor`, do the following:
1. Disable statistics collection. From the `psql` terminal, run the following command:
```sql
ALTER SYSTEM SET pg_stat_monitor.pgsm_enable = 0;
```
2. Drop `pg_stat_monitor` extension:
1. Drop `pg_stat_monitor` extension:
```sql
DROP EXTENSION pg_stat_monitor;
```
3. Remove `pg_stat_monitor` from the `shared_preload_libraries` configuration parameter:
2. Remove `pg_stat_monitor` from the `shared_preload_libraries` configuration parameter:
```sql
ALTER SYSTEM SET shared_preload_libraries = '';
@ -267,7 +283,7 @@ To uninstall `pg_stat_monitor`, do the following:
**Important**: If the `shared_preload_libraries` parameter includes other modules, specify them all for the `ALTER SYSTEM SET` command to keep using them.
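For illustration only (a sketch: `pg_stat_statements` stands in for whatever other modules your `shared_preload_libraries` actually lists):

```sql
-- If the parameter previously was 'pg_stat_statements,pg_stat_monitor',
-- keep the remaining module(s) and drop only pg_stat_monitor:
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
```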
4. Restart the `postgresql` instance to apply the changes. The following command restarts PostgreSQL 13. Replace the version value with the one you are using.
3. Restart the `postgresql` instance to apply the changes. The following command restarts PostgreSQL 17. Replace the version value with the one you are using.
* On Debian and Ubuntu:
@ -279,7 +295,7 @@ To uninstall `pg_stat_monitor`, do the following:
```sh
sudo systemctl restart postgresql-13
sudo systemctl restart postgresql-17
```
### How we work
@ -319,6 +335,6 @@ This project is licensed under the same open liberal terms and conditions as the
### Copyright notice
* Portions Copyright © 2018-2021, Percona LLC and/or its affiliates
* Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
* Portions Copyright © 2018-2024, Percona LLC and/or its affiliates
* Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, The Regents of the University of California


@ -1,204 +0,0 @@
# Release Notes
Below is the complete list of release notes for every version of ``pg_stat_monitor``.
## 1.0.1
### Bugs Fixed
[PG-382](https://jira.percona.com/browse/PG-382): Histogram default settings changed to prevent the PostgreSQL server to crash
[PG-417](https://jira.percona.com/browse/PG-417): Addressed security vulnerabilities to prevent an attacker from precreating functions
[DISTPG-427](https://jira.percona.com/browse/DISTPG-427): Fixed the issue with the extensions not working when pg_stat_monitor is enabled by replacing the `return` with `goto exit` for the `pgsm_emit_log_hook` function
## 1.0.0
Bump version from 1.0.0-rc.2 to 1.0.0.
## 1.0.0-rc.2
### Improvements
[PG-331](https://jira.percona.com/browse/PG-331): Changed the default value for the `pg_stat_monitor.pgsm_query_max_len` parameter from 1024 to 2048 for better data presentation in PMM
[PG-355](https://jira.percona.com/browse/PG-355): Changed the collection of `sys_time` and `user_time` metrics so that they are now presented as an accumulative value
[PG-286](https://jira.percona.com/browse/PG-286): Improved pg_stat_monitor performance by decreasing the overhead by more than 50%.
[PG-267](https://jira.percona.com/browse/PG-267): Added test case to verify histogram feature
[PG-359](https://jira.percona.com/browse/PG-359): Documentation: updated the `pg_stat_monitor_settings` view reference.
[PG-344](https://jira.percona.com/browse/PG-344): Documentation: Updated the extensions order and behavior with data collection for PostgreSQL 14.
[PG-358](https://jira.percona.com/browse/PG-358): Documentation: data display of `** blk **` and `** wal **` columns when both `pg_stat_monitor` and `pg_stat_statements` are loaded together.
### Bugs Fixed
[PG-350](https://jira.percona.com/browse/PG-350): Fixed bucket time overflow
[PG-338](https://jira.percona.com/browse/PG-338): Fixed query calls count by setting the default value for `pg_stat_monitor.pgsm_track` to `top`.
[PG-291](https://jira.percona.com/browse/PG-291): Fixed calls count.
[PG-325](https://jira.percona.com/browse/PG-325): Fixed deadlock that occurred when the query length exceeded the `pgsm_query_max_len` value.
[PG-326](https://jira.percona.com/browse/PG-326): Added validation for `pgsm_histogram_min` and `pgsm_histogram_max` ranges
[PG-329](https://jira.percona.com/browse/PG-329): Fixed creation of `pg_stat_monitor_errors` view on SQL files.
[PG-296](https://jira.percona.com/browse/PG-296): Fixed issue with the application name not displaying in the view when changed.
[PG-290](https://jira.percona.com/browse/PG-290): Fixed issue with PostgreSQL crashing after enabling debug log level and when `pg_stat_monitor` is enabled.
[PG-166](https://jira.percona.com/browse/PG-166): Fixed issue with displaying the actual system time values instead of `NULL`
[PG-369](https://jira.percona.com/browse/PG-369): Fixed issue with incorrect `wal_bytes` values for PostgreSQL 11 and 12 that caused Query Analytics failure in PMM by ignoring the `WalUsage` variable value for these versions.
## 1.0.0-rc.1
### Improvements
[PG-165](https://jira.percona.com/browse/PG-165): Recycle expired buckets
[PG-167](https://jira.percona.com/browse/PG-167): Make SQL error codes readable by updating their data types
[PG-193](https://jira.percona.com/browse/PG-193): Create comment-based tags to identify different parameters
[PG-199](https://jira.percona.com/browse/PG-199): Documentation: Add the integration with PMM section in User Guide
[PG-210](https://jira.percona.com/browse/PG-210): Documentation: Update column names per PostgreSQL version to match the upstream ones
### Bugs Fixed
[PG-177](https://jira.percona.com/browse/PG-177): Fixed the error in histogram ranges
[PG-214](https://jira.percona.com/browse/PG-214): Fixed the issue with the display of the error message as part of the query column in `pg_stat_monitor` view
[PG-246](https://jira.percona.com/browse/PG-246): Fixed the issue with significant CPU and memory resource usage when `pg_stat_monitor.pgsm_enable_query_plan` parameter is enabled
[PG-262](https://jira.percona.com/browse/PG-262): Fixed the way the comments are extracted in pg_stat_monitor view
[PG-271](https://jira.percona.com/browse/PG-271): Fixed the issue with enabling the ``pg_stat_monitor.pgsm_overflow_target`` configuration parameter.
[PG-272](https://jira.percona.com/browse/PG-272): Fixed the server crash when calling the `pg_stat_monitor_reset()` function by using the correct `PGSM_MAX_BUCKETS` GUC as the limit to the loop
## REL0_9_0_STABLE
### Improvements
[PG-186](https://jira.percona.com/browse/PG-186): Add support to monitor query execution plan
[PG-147](https://jira.percona.com/browse/PG-147): Store top query, instead of parent query.
[PG-188](https://jira.percona.com/browse/PG-188): Added a new column to monitor the query state i.e PARSING/PLANNING/ACTIVE/FINISHED.
[PG-180](https://jira.percona.com/browse/PG-180): Schema Qualified table/relations names.
Regression Test Suite.
### Bugs Fixed
[PG-189](https://jira.percona.com/browse/PG-189): Regression crash in case of PostgreSQL 11.
[PG-187](https://jira.percona.com/browse/PG-187): Compilation Error for PostgreSQL 11 and PostgreSQL 12.
[PG-186](https://jira.percona.com/browse/PG-186): Add support to monitor query execution plan.
[PG-182](https://jira.percona.com/browse/PG-182): Added a new option for the query buffer overflow.
[PG-181](https://jira.percona.com/browse/PG-181): Segmentation fault in case of track_utility is ON.
Some Code refactoring.
## REL0_8_1
[PG-147](https://jira.percona.com/browse/PG-147): Stored Procedure Support add parentid to track caller.
[PG-177](https://jira.percona.com/browse/PG-177): Error in Histogram ranges.
## REL0_8_0_STABLE
### Improvements
Column userid (int64) was removed.
Column dbid (int64) was removed.
Column user (string) was added (replacement for userid).
Column datname (string) was added (replacement for dbid).
[PG-176](https://jira.percona.com/browse/PG-176): Extract fully qualified relations name.
[PG-175](https://jira.percona.com/browse/PG-175): Only Superuser / Privileged user can view IP address.
[PG-174](https://jira.percona.com/browse/PG-174): Code cleanup.
[PG-173](https://jira.percona.com/browse/PG-173): Added new WAL usage statistics.
[PG-172](https://jira.percona.com/browse/PG-172): Exponential histogram for time buckets.
[PG-164](https://jira.percona.com/browse/PG-164): Query timing will be four decimal places instead of two.
[PG-167](https://jira.percona.com/browse/PG-167): SQLERRCODE must be in readable format.
### Bugs Fixed
[PG-169](https://jira.percona.com/browse/PG-169): Fixing message buffer overrun and incorrect index access to fix the server crash.
[PG-168](https://jira.percona.com/browse/PG-168): "calls" and histogram parameter does not match.
[PG-166](https://jira.percona.com/browse/PG-166): Display actual system time instead of null.
[PG-165](https://jira.percona.com/browse/PG-165): Recycle expired buckets.
[PG-150](https://jira.percona.com/browse/PG-150): Error while logging CMD Type like SELECT, UPDATE, INSERT, DELETE.
## REL0_7_2
[PG-165](https://jira.percona.com/browse/PG-165): Recycle expired buckets.
[PG-164](https://jira.percona.com/browse/PG-164): Query timing will be four decimal places instead of two.
[PG-161](https://jira.percona.com/browse/PG-161): Miscellaneous small issues.
## REL0_7_1
[PG-158](https://jira.percona.com/browse/PG-158): Segmentation fault while using pgbench with clients > 1.
[PG-159](https://jira.percona.com/browse/PG-159): Bucket start time (bucket_start_time) should be aligned with bucket_time.
[PG-160](https://jira.percona.com/browse/PG-160): Integration with PGXN.
## REL0_7_0_STABLE
### Improvements
[PG-153](https://jira.percona.com/browse/PG-153): Capture and record the application_name executing the query.
[PG-145](https://jira.percona.com/browse/PG-145): Add a new View/Query to show the actual Database name and Username.
[PG-110](https://jira.percona.com/browse/PG-110): Aggregate the number of warnings.
[PG-109](https://jira.percona.com/browse/PG-109): Log failed queries or queries with warning messages.
[PG-150](https://jira.percona.com/browse/PG-150): Differentiate different types of queries such as SELECT, UPDATE, INSERT or DELETE.
### Bugs Fixed
[PG-111](https://jira.percona.com/browse/PG-111): Show information for incomplete buckets.
[PG-148](https://jira.percona.com/browse/PG-148): Loss of query statistics/monitoring due to not enough “slots” available.
## v0.6.0
Initial Release.
## Master
### Improvements
[PG-156](https://jira.percona.com/browse/PG-156): Adding a placeholder replacement function for the prepared statement

SECURITY.md

@ -0,0 +1,24 @@
# Security Policy
## Supported Versions
The pg_stat_monitor project follows a rolling release strategy, so all security updates are delivered in new versions.
## Reporting a Vulnerability
Please report any vulnerabilities to our project in [Jira](https://perconadev.atlassian.net/jira/software/c/projects/PG/issues).
If the vulnerability is accepted and confirmed by our experts, you should normally expect us to deliver
a version with a fix according to the timelines provided below:
For Percona created software (our engineers wrote the code):
- Low/Medium: 120 days
- High: 90 days
- Critical: ASAP but should not exceed 30 days
For non-Percona created software (upstream provided/packaged), from the time the vendor releases a patch:
- Low/Medium: 2nd release from current version
- High: Next release
- Critical: Hotfix or no later than next release (our regular release cadence is once every month)


@ -1,4 +1,4 @@
# Percona Distribution for PostgreSQL Operator Code of Conduct
# Percona Code of Conduct
All Percona Products follow the [Percona Community Code of Conduct](https://github.com/percona/community/blob/main/content/contribute/coc.md).

guc.c

@ -1,14 +1,14 @@
/*-------------------------------------------------------------------------
*
* guc.c: guc variable handling of pg_stat_monitor
* guc.c
* guc variable handling of pg_stat_monitor
*
* Portions Copyright © 2018-2020, Percona LLC and/or its affiliates
* Portions Copyright © 2018-2024, Percona LLC and/or its affiliates
*
* Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
* Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
*
* Portions Copyright (c) 1994, The Regents of the University of California
*
*
* IDENTIFICATION
* contrib/pg_stat_monitor/guc.c
*
@ -18,15 +18,30 @@
#include "pg_stat_monitor.h"
GucVariable conf[MAX_SETTINGS];
static void DefineIntGUC(GucVariable * conf);
static void DefineIntGUCWithCheck(GucVariable * conf, GucIntCheckHook check);
static void DefineBoolGUC(GucVariable * conf);
static void DefineEnumGUC(GucVariable * conf, const struct config_enum_entry *options);
/* GUC variables */
int pgsm_max;
int pgsm_query_max_len;
int pgsm_bucket_time;
int pgsm_max_buckets;
int pgsm_histogram_buckets;
double pgsm_histogram_min;
double pgsm_histogram_max;
int pgsm_query_shared_buffer;
bool pgsm_track_planning;
bool pgsm_extract_comments;
bool pgsm_enable_query_plan;
bool pgsm_enable_overflow;
bool pgsm_normalized_query;
bool pgsm_track_utility;
bool pgsm_track_application_names;
bool pgsm_enable_pgsm_query_id;
int pgsm_track;
static int pgsm_overflow_target; /* Not used since 2.0 */
/* Check hooks to ensure histogram_min < histogram_max */
static bool check_histogram_min(int *newval, void **extra, GucSource source);
static bool check_histogram_max(int *newval, void **extra, GucSource source);
static bool check_histogram_min(double *newval, void **extra, GucSource source);
static bool check_histogram_max(double *newval, void **extra, GucSource source);
static bool check_overflow_targer(int *newval, void **extra, GucSource source);
/*
* Define (or redefine) custom GUC variables.
@ -34,287 +49,269 @@ static bool check_histogram_max(int *newval, void **extra, GucSource source);
void
init_guc(void)
{
int i = 0,
j;
pgsm_track = PGSM_TRACK_TOP;
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_max",
.guc_desc = "Sets the maximum size of shared memory in (MB) used for statement's metadata tracked by pg_stat_monitor.",
.guc_default = 100,
.guc_min = 1,
.guc_max = 1000,
.guc_restart = true,
.guc_unit = GUC_UNIT_MB,
.guc_value = &PGSM_MAX
};
DefineIntGUC(&conf[i++]);
DefineCustomIntVariable("pg_stat_monitor.pgsm_max", /* name */
"Sets the maximum size of shared memory in (MB) used for statement's metadata tracked by pg_stat_monitor.", /* short_desc */
NULL, /* long_desc */
&pgsm_max, /* value address */
256, /* boot value */
10, /* min value */
10240, /* max value */
PGC_POSTMASTER, /* context */
GUC_UNIT_MB, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_query_max_len",
.guc_desc = "Sets the maximum length of query.",
.guc_default = 2048,
.guc_min = 1024,
.guc_max = INT_MAX,
.guc_unit = 0,
.guc_restart = true,
.guc_value = &PGSM_QUERY_MAX_LEN
};
DefineIntGUC(&conf[i++]);
DefineCustomIntVariable("pg_stat_monitor.pgsm_query_max_len", /* name */
"Sets the maximum length of query.", /* short_desc */
NULL, /* long_desc */
&pgsm_query_max_len, /* value address */
2048, /* boot value */
1024, /* min value */
INT_MAX, /* max value */
PGC_POSTMASTER, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_track_utility",
.guc_desc = "Selects whether utility commands are tracked.",
.guc_default = 1,
.guc_min = 0,
.guc_max = 0,
.guc_restart = false,
.guc_unit = 0,
.guc_value = &PGSM_TRACK_UTILITY
};
DefineBoolGUC(&conf[i++]);
DefineCustomIntVariable("pg_stat_monitor.pgsm_max_buckets", /* name */
"Sets the maximum number of buckets.", /* short_desc */
NULL, /* long_desc */
&pgsm_max_buckets, /* value address */
10, /* boot value */
1, /* min value */
20000, /* max value */
PGC_POSTMASTER, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_normalized_query",
.guc_desc = "Selects whether save query in normalized format.",
.guc_default = 0,
.guc_min = 0,
.guc_max = 0,
.guc_restart = false,
.guc_unit = 0,
.guc_value = &PGSM_NORMALIZED_QUERY
};
DefineBoolGUC(&conf[i++]);
DefineCustomIntVariable("pg_stat_monitor.pgsm_bucket_time", /* name */
"Sets the time in seconds per bucket.", /* short_desc */
NULL, /* long_desc */
&pgsm_bucket_time, /* value address */
60, /* boot value */
1, /* min value */
INT_MAX, /* max value */
PGC_POSTMASTER, /* context */
GUC_UNIT_S, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_max_buckets",
.guc_desc = "Sets the maximum number of buckets.",
.guc_default = 10,
.guc_min = 1,
.guc_max = 10,
.guc_restart = true,
.guc_unit = 0,
.guc_value = &PGSM_MAX_BUCKETS
};
DefineIntGUC(&conf[i++]);
DefineCustomRealVariable("pg_stat_monitor.pgsm_histogram_min", /* name */
"Sets the time in millisecond.", /* short_desc */
NULL, /* long_desc */
&pgsm_histogram_min, /* value address */
1, /* boot value */
0, /* min value */
HISTOGRAM_MAX_TIME, /* max value */
PGC_POSTMASTER, /* context */
GUC_UNIT_MS, /* flags */
check_histogram_min, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_bucket_time",
.guc_desc = "Sets the time in seconds per bucket.",
.guc_default = 60,
.guc_min = 1,
.guc_max = INT_MAX,
.guc_restart = true,
.guc_unit = 0,
.guc_value = &PGSM_BUCKET_TIME
};
DefineIntGUC(&conf[i++]);
DefineCustomRealVariable("pg_stat_monitor.pgsm_histogram_max", /* name */
"Sets the time in millisecond.", /* short_desc */
NULL, /* long_desc */
&pgsm_histogram_max, /* value address */
100000.0, /* boot value */
10.0, /* min value */
HISTOGRAM_MAX_TIME, /* max value */
PGC_POSTMASTER, /* context */
GUC_UNIT_MS, /* flags */
check_histogram_max, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_histogram_min",
.guc_desc = "Sets the time in millisecond.",
.guc_default = 0,
.guc_min = 0,
.guc_max = INT_MAX,
.guc_restart = true,
.guc_unit = 0,
.guc_value = &PGSM_HISTOGRAM_MIN
};
DefineIntGUCWithCheck(&conf[i++], check_histogram_min);
DefineCustomIntVariable("pg_stat_monitor.pgsm_histogram_buckets", /* name */
"Sets the maximum number of histogram buckets.", /* short_desc */
NULL, /* long_desc */
&pgsm_histogram_buckets, /* value address */
20, /* boot value */
2, /* min value */
MAX_RESPONSE_BUCKET, /* max value */
PGC_POSTMASTER, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_histogram_max",
.guc_desc = "Sets the time in millisecond.",
.guc_default = 100000,
.guc_min = 10,
.guc_max = INT_MAX,
.guc_restart = true,
.guc_unit = 0,
.guc_value = &PGSM_HISTOGRAM_MAX
};
DefineIntGUCWithCheck(&conf[i++], check_histogram_max);
DefineCustomIntVariable("pg_stat_monitor.pgsm_query_shared_buffer", /* name */
"Sets the maximum size of shared memory in (MB) used for query tracked by pg_stat_monitor.", /* short_desc */
NULL, /* long_desc */
&pgsm_query_shared_buffer, /* value address */
20, /* boot value */
1, /* min value */
10000, /* max value */
PGC_POSTMASTER, /* context */
GUC_UNIT_MB, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_histogram_buckets",
.guc_desc = "Sets the maximum number of histogram buckets",
.guc_default = 10,
.guc_min = 2,
.guc_max = MAX_RESPONSE_BUCKET,
.guc_restart = true,
.guc_unit = 0,
.guc_value = &PGSM_HISTOGRAM_BUCKETS
};
DefineIntGUC(&conf[i++]);
/* deprecated in V 2.0 */
DefineCustomIntVariable("pg_stat_monitor.pgsm_overflow_target", /* name */
"Sets the overflow target for pg_stat_monitor. (Deprecated, use pgsm_enable_overflow)", /* short_desc */
NULL, /* long_desc */
&pgsm_overflow_target, /* value address */
1, /* boot value */
0, /* min value */
1, /* max value */
PGC_POSTMASTER, /* context */
0, /* flags */
check_overflow_targer, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_query_shared_buffer",
.guc_desc = "Sets the maximum size of shared memory in (MB) used for query tracked by pg_stat_monitor.",
.guc_default = 20,
.guc_min = 1,
.guc_max = 10000,
.guc_restart = true,
.guc_unit = GUC_UNIT_MB,
.guc_value = &PGSM_QUERY_SHARED_BUFFER
};
DefineIntGUC(&conf[i++]);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_overflow_target",
.guc_desc = "Sets the overflow target for pg_stat_monitor",
.guc_default = 1,
.guc_min = 0,
.guc_max = 1,
.guc_restart = true,
.guc_unit = 0,
.guc_value = &PGSM_OVERFLOW_TARGET
};
DefineIntGUC(&conf[i++]);
DefineCustomBoolVariable("pg_stat_monitor.pgsm_track_utility", /* name */
"Selects whether utility commands are tracked.", /* short_desc */
NULL, /* long_desc */
&pgsm_track_utility, /* value address */
true, /* boot value */
PGC_USERSET, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_enable_query_plan",
.guc_desc = "Enable/Disable query plan monitoring",
.guc_default = 0,
.guc_min = 0,
.guc_max = 0,
.guc_restart = false,
.guc_unit = 0,
.guc_value = &PGSM_QUERY_PLAN
};
DefineBoolGUC(&conf[i++]);
DefineCustomBoolVariable("pg_stat_monitor.pgsm_track_application_names", /* name */
"Enable/Disable application names tracking.", /* short_desc */
NULL, /* long_desc */
&pgsm_track_application_names, /* value address */
true, /* boot value */
PGC_USERSET, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_track",
.guc_desc = "Selects which statements are tracked by pg_stat_monitor.",
.n_options = 3,
.guc_default = PGSM_TRACK_TOP,
.guc_min = PSGM_TRACK_NONE,
.guc_max = PGSM_TRACK_ALL,
.guc_restart = false,
.guc_unit = 0,
.guc_value = &PGSM_TRACK
};
for (j = 0; j < conf[i].n_options; ++j)
{
strlcpy(conf[i].guc_options[j], track_options[j].name, sizeof(conf[i].guc_options[j]));
}
DefineEnumGUC(&conf[i++], track_options);
DefineCustomBoolVariable("pg_stat_monitor.pgsm_enable_pgsm_query_id", /* name */
"Enable/disable PGSM specific query id calculation which is very useful in comparing same query across databases and clusters..", /* short_desc */
NULL, /* long_desc */
&pgsm_enable_pgsm_query_id, /* value address */
true, /* boot value */
PGC_USERSET, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_extract_comments",
.guc_desc = "Enable/Disable extracting comments from queries.",
.guc_default = 0,
.guc_min = 0,
.guc_max = 0,
.guc_restart = false,
.guc_unit = 0,
.guc_value = &PGSM_EXTRACT_COMMENTS
};
DefineBoolGUC(&conf[i++]);
DefineCustomBoolVariable("pg_stat_monitor.pgsm_normalized_query", /* name */
"Selects whether save query in normalized format.", /* short_desc */
NULL, /* long_desc */
&pgsm_normalized_query, /* value address */
false, /* boot value */
PGC_USERSET, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
DefineCustomBoolVariable("pg_stat_monitor.pgsm_enable_overflow", /* name */
"Enable/Disable pg_stat_monitor to grow beyond shared memory into swap space.", /* short_desc */
NULL, /* long_desc */
&pgsm_enable_overflow, /* value address */
true, /* boot value */
PGC_POSTMASTER, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
DefineCustomBoolVariable("pg_stat_monitor.pgsm_enable_query_plan", /* name */
"Enable/Disable query plan monitoring.", /* short_desc */
NULL, /* long_desc */
&pgsm_enable_query_plan, /* value address */
false, /* boot value */
PGC_USERSET, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
DefineCustomBoolVariable("pg_stat_monitor.pgsm_extract_comments", /* name */
"Enable/Disable extracting comments from queries.", /* short_desc */
NULL, /* long_desc */
&pgsm_extract_comments, /* value address */
false, /* boot value */
PGC_USERSET, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
DefineCustomEnumVariable("pg_stat_monitor.pgsm_track", /* name */
"Selects which statements are tracked by pg_stat_monitor.", /* short_desc */
NULL, /* long_desc */
&pgsm_track, /* value address */
PGSM_TRACK_TOP, /* boot value */
track_options, /* enum options */
PGC_USERSET, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
#if PG_VERSION_NUM >= 130000
conf[i] = (GucVariable)
{
.guc_name = "pg_stat_monitor.pgsm_track_planning",
.guc_desc = "Selects whether planning statistics are tracked.",
.guc_default = 0,
.guc_min = 0,
.guc_max = 0,
.guc_restart = false,
.guc_unit = 0,
.guc_value = &PGSM_TRACK_PLANNING
};
DefineBoolGUC(&conf[i++]);
DefineCustomBoolVariable("pg_stat_monitor.pgsm_track_planning", /* name */
"Selects whether planning statistics are tracked.", /* short_desc */
NULL, /* long_desc */
&pgsm_track_planning, /* value address */
false, /* boot value */
PGC_USERSET, /* context */
0, /* flags */
NULL, /* check_hook */
NULL, /* assign_hook */
NULL /* show_hook */
);
#endif
}
static void
DefineIntGUCWithCheck(GucVariable * conf, GucIntCheckHook check)
{
conf->type = PGC_INT;
DefineCustomIntVariable(conf->guc_name,
conf->guc_desc,
NULL,
conf->guc_value,
conf->guc_default,
conf->guc_min,
conf->guc_max,
conf->guc_restart ? PGC_POSTMASTER : PGC_USERSET,
conf->guc_unit,
check,
NULL,
NULL);
}
static void
DefineIntGUC(GucVariable * conf)
{
DefineIntGUCWithCheck(conf, NULL);
}
static void
DefineBoolGUC(GucVariable * conf)
{
conf->type = PGC_BOOL;
DefineCustomBoolVariable(conf->guc_name,
conf->guc_desc,
NULL,
(bool *) conf->guc_value,
conf->guc_default,
conf->guc_restart ? PGC_POSTMASTER : PGC_USERSET,
0,
NULL,
NULL,
NULL);
}
static void
DefineEnumGUC(GucVariable * conf, const struct config_enum_entry *options)
{
conf->type = PGC_ENUM;
DefineCustomEnumVariable(conf->guc_name,
conf->guc_desc,
NULL,
conf->guc_value,
conf->guc_default,
options,
conf->guc_restart ? PGC_POSTMASTER : PGC_USERSET,
0,
NULL,
NULL,
NULL);
}
GucVariable *
get_conf(int i)
{
return &conf[i];
}
/* Maximum value must be greater or equal to minimum + 1.0 */
static bool
check_histogram_min(int *newval, void **extra, GucSource source)
check_histogram_min(double *newval, void **extra, GucSource source)
{
/*
* During module initialization PGSM_HISTOGRAM_MIN is initialized before
* PGSM_HISTOGRAM_MAX, in this case PGSM_HISTOGRAM_MAX will be zero.
*/
return (PGSM_HISTOGRAM_MAX == 0 || *newval < PGSM_HISTOGRAM_MAX);
return (pgsm_histogram_max == 0 || (*newval + 1.0) <= pgsm_histogram_max);
}
static bool
check_histogram_max(int *newval, void **extra, GucSource source)
check_histogram_max(double *newval, void **extra, GucSource source)
{
return (*newval > PGSM_HISTOGRAM_MIN);
return (*newval >= (pgsm_histogram_min + 1.0));
}
static bool
check_overflow_targer(int *newval, void **extra, GucSource source)
{
if (source != PGC_S_DEFAULT)
elog(WARNING, "pg_stat_monitor.pgsm_overflow_target is deprecated, use pgsm_enable_overflow");
return true;
}


@ -1,11 +1,11 @@
/*-------------------------------------------------------------------------
*
* hash_query.c
* Track statement execution times across a whole database cluster.
* Track statement execution times across a whole database cluster.
*
* Portions Copyright © 2018-2020, Percona LLC and/or its affiliates
* Portions Copyright © 2018-2024, Percona LLC and/or its affiliates
*
* Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
* Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
*
* Portions Copyright (c) 1994, The Regents of the University of California
*
@ -16,85 +16,255 @@
*/
#include "postgres.h"
#include "nodes/pg_list.h"
#include "pg_stat_monitor.h"
static pgsmLocalState pgsmStateLocal;
static PGSM_HASH_TABLE_HANDLE pgsm_create_bucket_hash(pgsmSharedState *pgsm, dsa_area *dsa);
static Size pgsm_get_shared_area_size(void);
static void InitializeSharedState(pgsmSharedState *pgsm);
static pgssSharedState *pgss;
static HTAB *pgss_hash;
static HTAB *pgss_query_hash;
#define PGSM_BUCKET_INFO_SIZE (sizeof(TimestampTz) * pgsm_max_buckets)
#define PGSM_SHARED_STATE_SIZE (sizeof(pgsmSharedState) + PGSM_BUCKET_INFO_SIZE)
#if USE_DYNAMIC_HASH
/* parameter for the shared hash */
static dshash_parameters dsh_params = {
sizeof(pgsmHashKey),
sizeof(pgsmEntry),
dshash_memcmp,
dshash_memhash
};
#endif
static HTAB *
hash_init(const char *hash_name, int key_size, int entry_size, int hash_size)
/*
* Returns the shared memory area size for storing the query texts.
* USE_DYNAMIC_HASH also creates the hash table in the same memory space,
* so add the required bucket memory size to the query text area size
*/
static Size
pgsm_query_area_size(void)
{
HASHCTL info;
Size sz = MAX_QUERY_BUF;
#if USE_DYNAMIC_HASH
/* Dynamic hash also lives DSA area */
sz = add_size(sz, MAX_BUCKETS_MEM);
#endif
return MAXALIGN(sz);
}
memset(&info, 0, sizeof(info));
info.keysize = key_size;
info.entrysize = entry_size;
return ShmemInitHash(hash_name, hash_size, hash_size, &info, HASH_ELEM | HASH_BLOBS);
/*
* Total shared memory area required by pgsm
*/
Size
pgsm_ShmemSize(void)
{
Size sz = MAXALIGN(PGSM_SHARED_STATE_SIZE);
sz = add_size(sz, MAX_QUERY_BUF);
#if USE_DYNAMIC_HASH
sz = add_size(sz, MAX_BUCKETS_MEM);
#else
sz = add_size(sz, hash_estimate_size(MAX_BUCKET_ENTRIES, sizeof(pgsmEntry)));
#endif
return MAXALIGN(sz);
}
/*
* Returns the shared memory area size for storing the query texts and pgsm
* shared state structure.
* Moreover, for USE_DYNAMIC_HASH, both the hash table and raw query text area
* get allocated as a single shared memory chunk.
*/
static Size
pgsm_get_shared_area_size(void)
{
Size sz;
#if USE_DYNAMIC_HASH
sz = pgsm_ShmemSize();
#else
sz = MAXALIGN(sizeof(pgsmSharedState));
sz = add_size(sz, pgsm_query_area_size());
#endif
return sz;
}
void
pgss_startup(void)
pgsm_startup(void)
{
bool found = false;
pgsmSharedState *pgsm;
/* reset in case this is a restart within the postmaster */
pgss = NULL;
pgss_hash = NULL;
pgsmStateLocal.dsa = NULL;
pgsmStateLocal.shared_hash = NULL;
pgsmStateLocal.shared_pgsmState = NULL;
/*
* Create or attach to the shared memory state, including hash table
*/
LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
pgss = ShmemInitStruct("pg_stat_monitor", sizeof(pgssSharedState), &found);
pgsm = ShmemInitStruct("pg_stat_monitor", pgsm_get_shared_area_size(), &found);
if (!found)
{
/* First time through ... */
pgss->lock = &(GetNamedLWLockTranche("pg_stat_monitor"))->lock;
SpinLockInit(&pgss->mutex);
ResetSharedState(pgss);
dsa_area *dsa;
char *p = (char *) pgsm;
pgsm->pgsm_oom = false;
pgsm->lock = &(GetNamedLWLockTranche("pg_stat_monitor"))->lock;
SpinLockInit(&pgsm->mutex);
InitializeSharedState(pgsm);
/* the allocation of pgsmSharedState itself */
p += MAXALIGN(PGSM_SHARED_STATE_SIZE);
pgsm->raw_dsa_area = p;
dsa = dsa_create_in_place(pgsm->raw_dsa_area,
pgsm_query_area_size(),
LWLockNewTrancheId(), 0);
dsa_pin(dsa);
dsa_set_size_limit(dsa, pgsm_query_area_size());
pgsm->hash_handle = pgsm_create_bucket_hash(pgsm, dsa);
/*
* If overflow is enabled, set the DSA size to unlimited, and allow
* the DSA to grow beyond the shared memory space into the swap area
*/
if (pgsm_enable_overflow)
dsa_set_size_limit(dsa, -1);
pgsmStateLocal.shared_pgsmState = pgsm;
/*
* Postmaster will never access the dsa again, thus free its local
* references.
*/
dsa_detach(dsa);
pgsmStateLocal.pgsm_mem_cxt = AllocSetContextCreate(TopMemoryContext,
"pg_stat_monitor local store",
ALLOCSET_DEFAULT_SIZES);
}
#ifdef BENCHMARK
init_hook_stats();
#endif
set_qbuf((unsigned char *) ShmemAlloc(MAX_QUERY_BUF));
pgss_hash = hash_init("pg_stat_monitor: bucket hashtable", sizeof(pgssHashKey), sizeof(pgssEntry), MAX_BUCKET_ENTRIES);
pgss_query_hash = hash_init("pg_stat_monitor: queryID hashtable", sizeof(uint64), sizeof(pgssQueryEntry), MAX_BUCKET_ENTRIES);
LWLockRelease(AddinShmemInitLock);
/*
* If we're in the postmaster (or a standalone backend...), set up a shmem
* exit hook to dump the statistics to disk.
*/
on_shmem_exit(pgss_shmem_shutdown, (Datum) 0);
on_shmem_exit(pgsm_shmem_shutdown, (Datum) 0);
}
pgssSharedState *
static void
InitializeSharedState(pgsmSharedState *pgsm)
{
pg_atomic_init_u64(&pgsm->current_wbucket, 0);
pg_atomic_init_u64(&pgsm->prev_bucket_sec, 0);
}
/*
* Create the classic or dshash hash table for storing the query statistics.
*/
static PGSM_HASH_TABLE_HANDLE
pgsm_create_bucket_hash(pgsmSharedState *pgsm, dsa_area *dsa)
{
PGSM_HASH_TABLE_HANDLE bucket_hash;
#if USE_DYNAMIC_HASH
dshash_table *dsh;
pgsm->hash_tranche_id = LWLockNewTrancheId();
dsh_params.tranche_id = pgsm->hash_tranche_id;
dsh = dshash_create(dsa, &dsh_params, 0);
bucket_hash = dshash_get_hash_table_handle(dsh);
dshash_detach(dsh);
#else
HASHCTL info;
memset(&info, 0, sizeof(info));
info.keysize = sizeof(pgsmHashKey);
info.entrysize = sizeof(pgsmEntry);
bucket_hash = ShmemInitHash("pg_stat_monitor: bucket hashtable", MAX_BUCKET_ENTRIES, MAX_BUCKET_ENTRIES, &info, HASH_ELEM | HASH_BLOBS);
#endif
return bucket_hash;
}
/*
* Attach to a DSA area created by the postmaster, in the case of
* USE_DYNAMIC_HASH, also attach the local dshash handle to
* the dshash created by the postmaster.
*
* Note: The dsa area and dshash for the process may be mapped at a
* different virtual address in this process.
*
*/
void
pgsm_attach_shmem(void)
{
MemoryContext oldcontext;
if (pgsmStateLocal.dsa)
return;
/*
* We want the dsa to remain valid throughout the lifecycle of this
* process, so switch to TopMemoryContext before attaching.
*/
oldcontext = MemoryContextSwitchTo(TopMemoryContext);
pgsmStateLocal.dsa = dsa_attach_in_place(pgsmStateLocal.shared_pgsmState->raw_dsa_area,
NULL);
/*
* pin the attached area to keep the area attached until end of session or
* explicit detach.
*/
dsa_pin_mapping(pgsmStateLocal.dsa);
#if USE_DYNAMIC_HASH
dsh_params.tranche_id = pgsmStateLocal.shared_pgsmState->hash_tranche_id;
pgsmStateLocal.shared_hash = dshash_attach(pgsmStateLocal.dsa, &dsh_params,
pgsmStateLocal.shared_pgsmState->hash_handle, 0);
#else
pgsmStateLocal.shared_hash = pgsmStateLocal.shared_pgsmState->hash_handle;
#endif
MemoryContextSwitchTo(oldcontext);
}
MemoryContext
GetPgsmMemoryContext(void)
{
return pgsmStateLocal.pgsm_mem_cxt;
}
dsa_area *
get_dsa_area_for_query_text(void)
{
pgsm_attach_shmem();
return pgsmStateLocal.dsa;
}
PGSM_HASH_TABLE *
get_pgsmHash(void)
{
pgsm_attach_shmem();
return pgsmStateLocal.shared_hash;
}
pgsmSharedState *
pgsm_get_ss(void)
{
return pgss;
pgsm_attach_shmem();
return pgsmStateLocal.shared_pgsmState;
}
HTAB *
pgsm_get_hash(void)
{
return pgss_hash;
}
HTAB *
pgsm_get_query_hash(void)
{
return pgss_query_hash;
}
/*
* shmem_shutdown hook: Dump statistics into file.
@ -103,58 +273,50 @@ pgsm_get_query_hash(void)
* other processes running when this is called.
*/
void
pgss_shmem_shutdown(int code, Datum arg)
pgsm_shmem_shutdown(int code, Datum arg)
{
/* Don't try to dump during a crash. */
elog(LOG, "[pg_stat_monitor] pgsm_shmem_shutdown: Shutdown initiated.");
if (code)
return;
pgss = NULL;
pgsmStateLocal.shared_pgsmState = NULL;
/* Safety check ... shouldn't get here unless shmem is set up. */
if (!IsHashInitialize())
return;
}
Size
hash_memsize(void)
pgsmEntry *
hash_entry_alloc(pgsmSharedState *pgsm, pgsmHashKey *key, int encoding)
{
Size size;
size = MAXALIGN(sizeof(pgssSharedState));
size += MAXALIGN(MAX_QUERY_BUF);
size = add_size(size, hash_estimate_size(MAX_BUCKET_ENTRIES, sizeof(pgssEntry)));
size = add_size(size, hash_estimate_size(MAX_BUCKET_ENTRIES, sizeof(pgssQueryEntry)));
return size;
}
pgssEntry *
hash_entry_alloc(pgssSharedState *pgss, pgssHashKey *key, int encoding)
{
pgssEntry *entry = NULL;
pgsmEntry *entry = NULL;
bool found = false;
if (hash_get_num_entries(pgss_hash) >= MAX_BUCKET_ENTRIES)
{
elog(DEBUG1, "pg_stat_monitor: out of memory");
return NULL;
}
/* Find or create an entry with desired hash code */
entry = (pgssEntry *) hash_search(pgss_hash, key, HASH_ENTER_NULL, &found);
entry = (pgsmEntry *) pgsm_hash_find_or_insert(pgsmStateLocal.shared_hash, key, &found);
if (entry == NULL)
elog(DEBUG1, "hash_entry_alloc: OUT OF MEMORY");
elog(DEBUG1, "[pg_stat_monitor] hash_entry_alloc: OUT OF MEMORY.");
else if (!found)
{
pgss->bucket_entry[pg_atomic_read_u64(&pgss->current_wbucket)]++;
/* New entry, initialize it */
/* reset the statistics */
memset(&entry->counters, 0, sizeof(Counters));
entry->query_text.query_pos = InvalidDsaPointer;
entry->counters.info.parent_query = InvalidDsaPointer;
entry->stats_since = GetCurrentTimestamp();
entry->minmax_stats_since = entry->stats_since;
/* set the appropriate initial usage count */
/* re-initialize the mutex each time ... we assume no one using it */
SpinLockInit(&entry->mutex);
/* ... and don't forget the query text metadata */
entry->encoding = encoding;
}
#if USE_DYNAMIC_HASH
if (entry)
dshash_release_lock(pgsmStateLocal.shared_hash, entry);
#endif
return entry;
}
@ -162,167 +324,132 @@ hash_entry_alloc(pgssSharedState *pgss, pgssHashKey *key, int encoding)
/*
* Prepare resources for using the new bucket:
* - Deallocate finished hash table entries in new_bucket_id (entries whose
* state is PGSS_FINISHED or PGSS_FINISHED).
* state is PGSM_EXEC or PGSM_ERROR).
* - Clear query buffer for new_bucket_id.
* - If old_bucket_id != -1, move all pending hash table entries in
* old_bucket_id to the new bucket id, also move pending queries from the
* previous query buffer (query_buffer[old_bucket_id]) to the new one
* (query_buffer[new_bucket_id]).
*
* Caller must hold an exclusive lock on pgss->lock.
* Caller must hold an exclusive lock on pgsm->lock.
*/
void
hash_entry_dealloc(int new_bucket_id, int old_bucket_id, unsigned char *query_buffer)
{
HASH_SEQ_STATUS hash_seq;
pgssEntry *entry = NULL;
PGSM_HASH_SEQ_STATUS hstat;
pgsmEntry *entry = NULL;
/* Store pending query ids from the previous bucket. */
List *pending_entries = NIL;
ListCell *pending_entry;
if (!pgsmStateLocal.shared_hash)
return;
/* Iterate over the hash table. */
hash_seq_init(&hash_seq, pgss_hash);
while ((entry = hash_seq_search(&hash_seq)) != NULL)
pgsm_hash_seq_init(&hstat, pgsmStateLocal.shared_hash, true);
while ((entry = pgsm_hash_seq_next(&hstat)) != NULL)
{
dsa_pointer pdsa;
/*
* Remove all entries if new_bucket_id == -1. Otherwise remove entry
* in new_bucket_id if it has finished already.
*/
if (new_bucket_id < 0 ||
(entry->key.bucket_id == new_bucket_id &&
(entry->counters.state == PGSS_FINISHED || entry->counters.state == PGSS_ERROR)))
(entry->key.bucket_id == new_bucket_id))
{
if (new_bucket_id == -1)
{
/*
* pg_stat_monitor_reset(), remove entry from query hash table
* too.
*/
hash_search(pgss_query_hash, &(entry->key.queryid), HASH_REMOVE, NULL);
}
dsa_pointer parent_qdsa = entry->counters.info.parent_query;
entry = hash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);
}
pdsa = entry->query_text.query_pos;
/*
* If we detect a pending query residing in the previous bucket id, we
* add it to a list of pending elements to be moved to the new bucket
* id. Can't update the hash table while iterating it inside this
* loop, as this may introduce all sort of problems.
*/
if (old_bucket_id != -1 && entry->key.bucket_id == old_bucket_id)
{
if (entry->counters.state == PGSS_PARSE ||
entry->counters.state == PGSS_PLAN ||
entry->counters.state == PGSS_EXEC)
{
pgssEntry *bkp_entry = malloc(sizeof(pgssEntry));
pgsm_hash_delete_current(&hstat, pgsmStateLocal.shared_hash, &entry->key);
if (!bkp_entry)
{
elog(DEBUG1, "hash_entry_dealloc: out of memory");
if (DsaPointerIsValid(pdsa))
dsa_free(pgsmStateLocal.dsa, pdsa);
/*
* No memory, If the entry has calls > 1 then we change
* the state to finished, as the pending query will likely
* finish execution during the new bucket time window. The
* pending query will vanish in this case, can't list it
* until it completes.
*
* If there is only one call to the query and it's
* pending, remove the entry from the previous bucket and
* allow it to finish in the new bucket, in order to avoid
* the query living in the old bucket forever.
*/
if (entry->counters.calls.calls > 1)
entry->counters.state = PGSS_FINISHED;
else
entry = hash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);
continue;
}
if (DsaPointerIsValid(parent_qdsa))
dsa_free(pgsmStateLocal.dsa, parent_qdsa);
/* Save key/data from the previous entry. */
memcpy(bkp_entry, entry, sizeof(pgssEntry));
/* Update key to use the new bucket id. */
bkp_entry->key.bucket_id = new_bucket_id;
/* Add the entry to a list of nodes to be processed later. */
pending_entries = lappend(pending_entries, bkp_entry);
/*
* If the entry has calls > 1 then we change the state to
* finished in the previous bucket, as the pending query will
* likely finish execution during the new bucket time window.
* Can't remove it from the previous bucket as it may have
* many calls and we would lose the query statistics.
*
* If there is only one call to the query and it's pending,
* remove the entry from the previous bucket and allow it to
* finish in the new bucket, in order to avoid the query
* living in the old bucket forever.
*/
if (entry->counters.calls.calls > 1)
entry->counters.state = PGSS_FINISHED;
else
entry = hash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);
}
pgsmStateLocal.shared_pgsmState->pgsm_oom = false;
}
}
/*
* Iterate over the list of pending queries in order to add them back to
* the hash table with the updated bucket id.
*/
foreach(pending_entry, pending_entries)
{
bool found = false;
pgssEntry *new_entry;
pgssEntry *old_entry = (pgssEntry *) lfirst(pending_entry);
new_entry = (pgssEntry *) hash_search(pgss_hash, &old_entry->key, HASH_ENTER_NULL, &found);
if (new_entry == NULL)
elog(DEBUG1, "%s", "pg_stat_monitor: out of memory");
else if (!found)
{
/* Restore counters and other data. */
new_entry->counters = old_entry->counters;
SpinLockInit(&new_entry->mutex);
new_entry->encoding = old_entry->encoding;
new_entry->query_pos = old_entry->query_pos;
}
free(old_entry);
}
list_free(pending_entries);
}
/*
* Release all entries.
*/
void
hash_entry_reset()
{
pgssSharedState *pgss = pgsm_get_ss();
HASH_SEQ_STATUS hash_seq;
pgssEntry *entry;
LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
hash_seq_init(&hash_seq, pgss_hash);
while ((entry = hash_seq_search(&hash_seq)) != NULL)
{
hash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);
}
pg_atomic_write_u64(&pgss->current_wbucket, 0);
LWLockRelease(pgss->lock);
pgsm_hash_seq_term(&hstat);
}
bool
IsHashInitialize(void)
{
return (pgss != NULL &&
pgss_hash != NULL);
return (pgsmStateLocal.shared_pgsmState != NULL);
}
bool
IsSystemOOM(void)
{
return (IsHashInitialize() && pgsmStateLocal.shared_pgsmState->pgsm_oom);
}
/*
* pgsm_* functions are just wrapper functions over the hash table standard
* API and call the appropriate hash table function based on USE_DYNAMIC_HASH
*/
void *
pgsm_hash_find_or_insert(PGSM_HASH_TABLE * shared_hash, pgsmHashKey *key, bool *found)
{
#if USE_DYNAMIC_HASH
void *entry;
entry = dshash_find_or_insert(shared_hash, key, found);
return entry;
#else
return hash_search(shared_hash, key, HASH_ENTER_NULL, found);
#endif
}
void *
pgsm_hash_find(PGSM_HASH_TABLE * shared_hash, pgsmHashKey *key, bool *found)
{
#if USE_DYNAMIC_HASH
return dshash_find(shared_hash, key, false);
#else
return hash_search(shared_hash, key, HASH_FIND, found);
#endif
}
void
pgsm_hash_seq_init(PGSM_HASH_SEQ_STATUS * hstat, PGSM_HASH_TABLE * shared_hash, bool lock)
{
#if USE_DYNAMIC_HASH
dshash_seq_init(hstat, shared_hash, lock);
#else
hash_seq_init(hstat, shared_hash);
#endif
}
void *
pgsm_hash_seq_next(PGSM_HASH_SEQ_STATUS * hstat)
{
#if USE_DYNAMIC_HASH
return dshash_seq_next(hstat);
#else
return hash_seq_search(hstat);
#endif
}
void
pgsm_hash_seq_term(PGSM_HASH_SEQ_STATUS * hstat)
{
#if USE_DYNAMIC_HASH
dshash_seq_term(hstat);
#endif
}
void
pgsm_hash_delete_current(PGSM_HASH_SEQ_STATUS * hstat, PGSM_HASH_TABLE * shared_hash, void *key)
{
#if USE_DYNAMIC_HASH
dshash_delete_current(hstat);
#else
hash_search(shared_hash, key, HASH_REMOVE, NULL);
#endif
}

meson.build

@ -0,0 +1,58 @@
# Copyright (c) 2022-2023, PostgreSQL Global Development Group
pg_stat_monitor_sources = files(
'pg_stat_monitor.c',
)
pg_stat_monitor = shared_module('pg_stat_monitor',
pg_stat_monitor_sources,
kwargs: contrib_mod_args + {
'dependencies': contrib_mod_args['dependencies'],
},
)
contrib_targets += pg_stat_monitor
install_data(
'pg_stat_monitor.control',
'pg_stat_monitor--2.0.sql',
'pg_stat_monitor--1.0--2.0.sql',
'pg_stat_monitor--2.0--2.1.sql',
'pg_stat_monitor--2.1--2.2.sql',
kwargs: contrib_data_args,
)
tests += {
'name': 'pg_stat_monitor',
'sd': meson.current_source_dir(),
'bd': meson.current_build_dir(),
'regress': {
'sql': [
'application_name',
'application_name_unique',
'basic',
'cmd_type',
'counters',
'database',
'different_parent_queries',
'error_insert',
'error',
'functions',
'guc',
'histogram',
'level_tracking',
'pgsqm_query_id',
'relations',
'rows',
'state',
'tags',
'top_query',
'user',
'version'
],
'regress_args': ['--temp-config', files('pg_stat_monitor.conf')],
# Disabled because these tests require
# "shared_preload_libraries=pg_stat_monitor", which typical
# runningcheck users do not have (e.g. buildfarm clients).
'runningcheck': false,
},
}


@ -1,101 +0,0 @@
# MkDocs configuration for Netlify builds
site_name: pg_stat_monitor Documentation
site_description: Documentation
site_author: Percona LLC
copyright: Percona LLC, &#169; 2022
repo_name: percona/pg_stat_monitor
repo_url: https://github.com/percona/pg_stat_monitor
edit_uri: edit/master/docs/
use_directory_urls: false
# Theme for netlify testing
theme:
name: material
logo: _images/percona-logo.svg
favicon: _images/percona-favicon.ico
custom_dir: docs/overrides
palette:
# Light mode
- media: "(prefers-color-scheme: light)"
scheme: percona-light
toggle:
icon: material/toggle-switch-off-outline
name: Switch to dark mode
# Dark mode
- media: "(prefers-color-scheme: dark)"
scheme: slate
toggle:
icon: material/toggle-switch
name: Switch to light mode
# Theme features
features:
- search.highlight
- navigation.top
extra_css:
- https://unicons.iconscout.com/release/v3.0.3/css/line.css
- https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.4.0/css/font-awesome.min.css
- css/version-select.css
- css/toctree.css
- css/percona.css
extra_javascript:
- js/version-select.js
markdown_extensions:
- attr_list
- toc:
permalink: True
- admonition
- footnotes
- def_list # https://michelf.ca/projects/php-markdown/extra/#def-list
- meta
- smarty:
smart_angled_quotes: true
- pymdownx.mark
- pymdownx.smartsymbols
- pymdownx.tabbed
- pymdownx.tilde
- pymdownx.superfences
- pymdownx.highlight:
linenums: false
- pymdownx.emoji:
emoji_index: !!python/name:materialx.emoji.twemoji
emoji_generator: !!python/name:materialx.emoji.to_svg
#- plantuml_markdown
plugins:
- search
- git-revision-date
- section-index # Adds links to nodes - comment out when creating PDF
# - htmlproofer # Uncomment to check links - but extends build time significantly
- mike:
version_selector: true
css_dir: css
javascript_dir: js
canonical_version: null
extra:
version:
provider: mike
nav:
- index.md
- setup.md
- User guide:
- USER_GUIDE.md
- REFERENCE.md
- COMPARISON.md
- Release notes:
- RELEASE_NOTES.md
# - Version Selector: "../"

percona-packaging/scripts/pg_stat_monitor_builder.sh Normal file → Executable file

@ -21,6 +21,7 @@ Usage: $0 [OPTIONS]
--rpm_release RPM version( default = 1)
--deb_release DEB version( default = 1)
--pg_release PPG version build on( default = 11)
--ppg_repo_name PPG repo name (default ppg-11.18)
--version product version
--help) usage ;;
Example $0 --builddir=/tmp/test --get_sources=1 --build_src_rpm=1 --build_rpm=1
@ -57,6 +58,7 @@ append_arg_to_args () {
--rpm_release=*) RPM_RELEASE="$val" ;;
--deb_release=*) DEB_RELEASE="$val" ;;
--pg_release=*) PG_RELEASE="$val" ;;
--ppg_repo_name=*) PPG_REPO_NAME="$val";;
--version=*) VERSION="$val" ;;
--help) usage ;;
*)
@ -84,12 +86,28 @@ check_workdir(){
return
}
add_percona_yum_repo(){
if [ ! -f /etc/yum.repos.d/percona-dev.repo ]; then
curl -o /etc/yum.repos.d/percona-dev.repo https://jenkins.percona.com/yum-repo/percona-dev.repo
sed -i 's:$basearch:x86_64:g' /etc/yum.repos.d/percona-dev.repo
set_changelog(){
if [ -z $1 ]
then
echo "No spec file is provided"
return
else
start_line=0
while read -r line; do
(( start_line++ ))
if [ "$line" = "%changelog" ]
then
(( start_line++ ))
echo "$start_line"
current_date=$(date +"%a %b %d %Y")
sed -i "$start_line,$ d" $1
echo "* $current_date Percona Build/Release Team <eng-build@percona.com> - ${VERSION}-${RPM_RELEASE}" >> $1
echo "- Release ${VERSION}-${RPM_RELEASE}" >> $1
echo >> $1
return
fi
done <$1
fi
return
}
get_sources(){
@ -137,12 +155,15 @@ get_sources(){
sed -i "s:@@RPM_RELEASE@@:${RPM_RELEASE}:g" rpm/pg-stat-monitor.spec
sed -i "s:@@VERSION@@:${VERSION}:g" rpm/pg-stat-monitor.spec
set_changelog rpm/pg-stat-monitor.spec
cd ${WORKDIR}
#
source pg-stat-monitor.properties
#
tar --owner=0 --group=0 --exclude=.* -czf ${PRODUCT_FULL}.tar.gz ${PRODUCT_FULL}
echo "UPLOAD=UPLOAD/experimental/BUILDS/${PRODUCT}/${PRODUCT_FULL}/${BRANCH}/${REVISION}/${BUILD_ID}" >> pg-stat-monitor.properties
DATE_TIMESTAMP=$(date +%F_%H-%M-%S)
echo "UPLOAD=UPLOAD/experimental/BUILDS/${PRODUCT}/${PRODUCT_FULL}/${BRANCH}/${REVISION}/${DATE_TIMESTAMP}/${BUILD_ID}" >> pg-stat-monitor.properties
mkdir $WORKDIR/source_tarball
mkdir $CURDIR/source_tarball
cp ${PRODUCT_FULL}.tar.gz $WORKDIR/source_tarball
@ -182,20 +203,30 @@ install_deps() {
CURPLACE=$(pwd)
if [ "$OS" == "rpm" ]
then
yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
add_percona_yum_repo
if [[ ${PG_RELEASE} == "11" ]]; then
percona-release enable ppg-11 release
elif [[ $PG_RELEASE == "12" ]]; then
percona-release enable ppg-12 release
fi
yum -y install git wget
PKGLIST="percona-postgresql-common percona-postgresql${PG_RELEASE}-devel"
PKGLIST+=" clang-devel git clang llvm-devel rpmdevtools vim wget"
yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
wget https://raw.githubusercontent.com/percona/percona-repositories/release-1.0-28/scripts/percona-release.sh
mv percona-release.sh /usr/bin/percona-release
chmod 777 /usr/bin/percona-release
percona-release enable ${PPG_REPO_NAME} testing
if [ x"$RHEL" = x8 ];
then
clang_version=$(yum list --showduplicates clang-devel | grep "17.0" | grep clang | awk '{print $2}' | head -n 1)
llvm_version=$(yum list --showduplicates llvm-devel | grep "17.0" | grep llvm | awk '{print $2}' | head -n 1)
yum install -y clang-devel-${clang_version} clang-${clang_version} llvm-devel-${llvm_version}
dnf module -y disable llvm-toolset
else
yum install -y clang-devel clang llvm-devel
fi
PKGLIST="percona-postgresql${PG_RELEASE}-devel"
PKGLIST+=" git rpmdevtools vim wget"
PKGLIST+=" perl binutils gcc gcc-c++"
PKGLIST+=" clang-devel llvm-devel git rpm-build rpmdevtools wget gcc make autoconf"
if [[ "${RHEL}" -eq 8 ]]; then
dnf -y module disable postgresql
PKGLIST+=" git rpm-build rpmdevtools wget gcc make autoconf"
if [[ "${RHEL}" -ge 8 ]]; then
dnf config-manager --set-enabled ol${RHEL}_codeready_builder
dnf -y module disable postgresql || true
elif [[ "${RHEL}" -eq 7 ]]; then
PKGLIST+=" llvm-toolset-7-clang llvm-toolset-7-llvm-devel llvm5.0-devel"
until yum -y install epel-release centos-release-scl; do
@ -215,14 +246,12 @@ install_deps() {
done
else
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y install lsb-release gnupg git wget
DEBIAN_FRONTEND=noninteractive apt-get -y install lsb-release gnupg git wget curl
wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb && dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
if [[ "${PG_RELEASE}" == "11" ]]; then
percona-release enable ppg-11 release
elif [[ "${PG_RELEASE}" == "12" ]]; then
percona-release enable ppg-12 release
fi
wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
dpkg -i percona-release_latest.generic_all.deb
rm -f percona-release_latest.generic_all.deb
percona-release enable ${PPG_REPO_NAME} testing
PKGLIST="percona-postgresql-${PG_RELEASE} percona-postgresql-common percona-postgresql-server-dev-all"
@ -244,8 +273,8 @@ install_deps() {
fi
fi
PKGLIST+=" debconf debhelper clang-7 devscripts dh-exec dh-systemd git wget libkrb5-dev libssl-dev"
PKGLIST+=" build-essential debconf debhelper devscripts dh-exec dh-systemd git wget libxml-checker-perl"
PKGLIST+=" debconf debhelper clang devscripts dh-exec git wget libkrb5-dev libssl-dev"
PKGLIST+=" build-essential debconf debhelper devscripts dh-exec git wget libxml-checker-perl"
PKGLIST+=" libxml-libxml-perl libio-socket-ssl-perl libperl-dev libssl-dev libxml2-dev txt2man zlib1g-dev libpq-dev"
until DEBIAN_FRONTEND=noninteractive apt-get -y install ${PKGLIST}; do
@ -425,6 +454,19 @@ build_source_deb(){
cp *.orig.tar.gz $CURDIR/source_deb
}
change_ddeb_package_to_deb(){
directory=$1
for file in "$directory"/*.ddeb; do
if [ -e "$file" ]; then
# Change extension to .deb
mv "$file" "${file%.ddeb}.deb"
echo "Changed extension of $file to ${file%.ddeb}.deb"
fi
done
}
build_deb(){
if [ $DEB = 0 ]
then
@ -456,11 +498,13 @@ build_deb(){
sed -i "s:\. :${WORKDIR}/percona-pg-stat-monitor-${VERSION} :g" debian/rules
dch -m -D "${OS_NAME}" --force-distribution -v "1:${VERSION}-${DEB_RELEASE}.${OS_NAME}" 'Update distribution'
unset $(locale|cut -d= -f1)
pg_buildext updatecontrol
dpkg-buildpackage -rfakeroot -us -uc -b
mkdir -p $CURDIR/deb
mkdir -p $WORKDIR/deb
cp $WORKDIR/*.*deb $WORKDIR/deb
cp $WORKDIR/*.*deb $CURDIR/deb
change_ddeb_package_to_deb "$CURDIR/deb"
}
CURDIR=$(pwd)
@ -484,6 +528,7 @@ DEB_RELEASE=1
REPO="https://github.com/Percona/pg_stat_monitor.git"
VERSION="1.0.0"
PG_RELEASE=11
PPG_REPO_NAME=ppg-11
parse_arguments PICK-ARGS-FROM-ARGV "$@"
check_workdir

View File

@ -0,0 +1,415 @@
/* contrib/pg_stat_monitor/pg_stat_monitor--1.0--2.0.sql */
-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "ALTER EXTENSION pg_stat_monitor" to load this file. \quit
DROP FUNCTION pg_stat_monitor_internal CASCADE;
DROP FUNCTION histogram;
DROP FUNCTION get_state;
DROP FUNCTION pg_stat_monitor_settings CASCADE;
CREATE FUNCTION pg_stat_monitor_internal(
IN showtext boolean,
OUT bucket int8, -- 0
OUT userid oid,
OUT username text,
OUT dbid oid,
OUT datname text,
OUT client_ip int8,
OUT queryid int8, -- 4
OUT planid int8,
OUT query text,
OUT query_plan text,
OUT pgsm_query_id int8,
OUT top_queryid int8,
OUT top_query text,
OUT application_name text,
OUT relations text, -- 11
OUT cmd_type int,
OUT elevel int,
OUT sqlcode TEXT,
OUT message text,
OUT bucket_start_time timestamptz,
OUT calls int8, -- 16
OUT total_exec_time float8,
OUT min_exec_time float8,
OUT max_exec_time float8,
OUT mean_exec_time float8,
OUT stddev_exec_time float8,
OUT rows int8,
OUT plans int8, -- 23
OUT total_plan_time float8,
OUT min_plan_time float8,
OUT max_plan_time float8,
OUT mean_plan_time float8,
OUT stddev_plan_time float8,
OUT shared_blks_hit int8, -- 29
OUT shared_blks_read int8,
OUT shared_blks_dirtied int8,
OUT shared_blks_written int8,
OUT local_blks_hit int8,
OUT local_blks_read int8,
OUT local_blks_dirtied int8,
OUT local_blks_written int8,
OUT temp_blks_read int8,
OUT temp_blks_written int8,
OUT blk_read_time float8,
OUT blk_write_time float8,
OUT temp_blk_read_time float8,
OUT temp_blk_write_time float8,
OUT resp_calls text, -- 41
OUT cpu_user_time float8,
OUT cpu_sys_time float8,
OUT wal_records int8,
OUT wal_fpi int8,
OUT wal_bytes numeric,
OUT comments TEXT,
OUT jit_functions int8,
OUT jit_generation_time float8,
OUT jit_inlining_count int8,
OUT jit_inlining_time float8,
OUT jit_optimization_count int8,
OUT jit_optimization_time float8,
OUT jit_emission_count int8,
OUT jit_emission_time float8,
OUT toplevel BOOLEAN,
OUT bucket_done BOOLEAN
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_monitor_2_0'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
CREATE FUNCTION histogram(_bucket int, _quryid int8)
RETURNS SETOF RECORD AS $$
DECLARE
rec record;
BEGIN
FOR rec IN
WITH stat AS (select queryid, bucket, unnest(range()) AS range,
unnest(resp_calls)::int freq FROM pg_stat_monitor) select range,
freq, repeat('■', (freq::float / max(freq) over() * 30)::int) AS bar
FROM stat WHERE queryid = _quryid and bucket = _bucket
LOOP
RETURN next rec;
END loop;
END
$$ language plpgsql;
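Since histogram() returns SETOF RECORD, a call needs a column definition list. A minimal usage sketch (the bucket number and queryid below are placeholders, not values taken from this repository):
SELECT * FROM histogram(0, 1234567890) AS h(range text, freq int, bar text);  -- placeholder bucket/queryid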
-- Register a view on the function for ease of use.
CREATE FUNCTION pgsm_create_11_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time AS total_time,
min_exec_time AS min_time,
max_exec_time AS max_time,
mean_exec_time AS mean_time,
stddev_exec_time AS stddev_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
bucket_done
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_13_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
bucket_done,
-- PostgreSQL-13 Specific Columns
plans,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_14_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
bucket_done,
plans,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_15_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
temp_blk_read_time,
temp_blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
bucket_done,
plans,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time,
jit_functions,
jit_generation_time,
jit_inlining_count,
jit_inlining_time,
jit_optimization_count,
jit_optimization_time,
jit_emission_count,
jit_emission_time
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_view() RETURNS INT AS
$$
DECLARE ver integer;
BEGIN
SELECT current_setting('server_version_num') INTO ver;
IF (ver >= 150000) THEN
return pgsm_create_15_view();
END IF;
IF (ver >= 140000) THEN
return pgsm_create_14_view();
END IF;
IF (ver >= 130000) THEN
return pgsm_create_13_view();
END IF;
IF (ver >= 110000) THEN
return pgsm_create_11_view();
END IF;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
SELECT pgsm_create_view();
REVOKE ALL ON FUNCTION pgsm_create_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_11_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_13_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_14_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_15_view FROM PUBLIC;
GRANT EXECUTE ON FUNCTION range TO PUBLIC;
GRANT EXECUTE ON FUNCTION decode_error_level TO PUBLIC;
GRANT EXECUTE ON FUNCTION get_histogram_timings TO PUBLIC;
GRANT EXECUTE ON FUNCTION get_cmd_type TO PUBLIC;
GRANT EXECUTE ON FUNCTION pg_stat_monitor_internal TO PUBLIC;
GRANT SELECT ON pg_stat_monitor TO PUBLIC;
-- Reset is only available to super user
REVOKE ALL ON FUNCTION pg_stat_monitor_reset FROM PUBLIC;
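As a usage note, the REVOKE above leaves pg_stat_monitor_reset callable by superusers only; a sketch of delegating it to a dedicated role (monitoring_role is a hypothetical role name, not part of this extension):
GRANT EXECUTE ON FUNCTION pg_stat_monitor_reset TO monitoring_role;  -- hypothetical role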

View File

@ -1,266 +0,0 @@
/* contrib/pg_stat_monitor/pg_stat_monitor--1.0.sql */
-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "CREATE EXTENSION pg_stat_monitor" to load this file. \quit
-- Register functions.
CREATE FUNCTION pg_stat_monitor_reset()
RETURNS void
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION pg_stat_monitor_version()
RETURNS text
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION get_histogram_timings()
RETURNS text
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION range()
RETURNS text[] AS $$
SELECT string_to_array(get_histogram_timings(), ',');
$$ LANGUAGE SQL;
CREATE FUNCTION pg_stat_monitor_internal(IN showtext boolean,
OUT bucket int8, -- 0
OUT userid oid,
OUT dbid oid,
OUT client_ip int8,
OUT queryid text, -- 4
OUT planid text,
OUT query text,
OUT query_plan text,
OUT state_code int8,
OUT top_queryid text,
OUT top_query text,
OUT application_name text,
OUT relations text, -- 11
OUT cmd_type int,
OUT elevel int,
OUT sqlcode TEXT,
OUT message text,
OUT bucket_start_time text,
OUT calls int8, -- 16
OUT total_exec_time float8,
OUT min_exec_time float8,
OUT max_exec_time float8,
OUT mean_exec_time float8,
OUT stddev_exec_time float8,
OUT rows_retrieved int8,
OUT plans_calls int8, -- 23
OUT total_plan_time float8,
OUT min_plan_time float8,
OUT max_plan_time float8,
OUT mean_plan_time float8,
OUT stddev_plan_time float8,
OUT shared_blks_hit int8, -- 29
OUT shared_blks_read int8,
OUT shared_blks_dirtied int8,
OUT shared_blks_written int8,
OUT local_blks_hit int8,
OUT local_blks_read int8,
OUT local_blks_dirtied int8,
OUT local_blks_written int8,
OUT temp_blks_read int8,
OUT temp_blks_written int8,
OUT blk_read_time float8,
OUT blk_write_time float8,
OUT resp_calls text, -- 41
OUT cpu_user_time float8,
OUT cpu_sys_time float8,
OUT wal_records int8,
OUT wal_fpi int8,
OUT wal_bytes numeric,
OUT comments TEXT,
OUT toplevel BOOLEAN
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_monitor'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
CREATE FUNCTION get_state(state_code int8) RETURNS TEXT AS
$$
SELECT
CASE
WHEN state_code = 0 THEN 'PARSING'
WHEN state_code = 1 THEN 'PLANNING'
WHEN state_code = 2 THEN 'ACTIVE'
WHEN state_code = 3 THEN 'FINISHED'
WHEN state_code = 4 THEN 'FINISHED WITH ERROR'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION get_cmd_type (cmd_type INTEGER) RETURNS TEXT AS
$$
SELECT
CASE
WHEN cmd_type = 0 THEN ''
WHEN cmd_type = 1 THEN 'SELECT'
WHEN cmd_type = 2 THEN 'UPDATE'
WHEN cmd_type = 3 THEN 'INSERT'
WHEN cmd_type = 4 THEN 'DELETE'
WHEN cmd_type = 5 THEN 'UTILITY'
WHEN cmd_type = 6 THEN 'NOTHING'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION pg_stat_monitor_settings(
OUT name text,
OUT value text,
OUT default_value text,
OUT description text,
OUT minimum INTEGER,
OUT maximum INTEGER,
OUT options text,
OUT restart text
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_monitor_settings'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
CREATE VIEW pg_stat_monitor_settings AS SELECT
name,
value,
default_value,
description,
minimum,
maximum,
options,
restart
FROM pg_stat_monitor_settings();
-- Register a view on the function for ease of use.
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid::regrole,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
queryid,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows_retrieved,
plans_calls,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
state_code,
get_state(state_code) as state
FROM pg_stat_monitor_internal(TRUE) p, pg_database d WHERE dbid = oid
ORDER BY bucket_start_time;
CREATE FUNCTION decode_error_level(elevel int)
RETURNS text
AS
$$
SELECT
CASE
WHEN elevel = 0 THEN ''
WHEN elevel = 10 THEN 'DEBUG5'
WHEN elevel = 11 THEN 'DEBUG4'
WHEN elevel = 12 THEN 'DEBUG3'
WHEN elevel = 13 THEN 'DEBUG2'
WHEN elevel = 14 THEN 'DEBUG1'
WHEN elevel = 15 THEN 'LOG'
WHEN elevel = 16 THEN 'LOG_SERVER_ONLY'
WHEN elevel = 17 THEN 'INFO'
WHEN elevel = 18 THEN 'NOTICE'
WHEN elevel = 19 THEN 'WARNING'
WHEN elevel = 20 THEN 'ERROR'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION histogram(_bucket int, _quryid text)
RETURNS SETOF RECORD AS $$
DECLARE
rec record;
BEGIN
for rec in
with stat as (select queryid, bucket, unnest(range()) as range, unnest(resp_calls)::int freq from pg_stat_monitor) select range, freq, repeat('■', (freq::float / max(freq) over() * 30)::int) as bar from stat where queryid = _quryid and bucket = _bucket
loop
return next rec;
end loop;
END
$$ language plpgsql;
--CREATE FUNCTION pg_stat_monitor_hook_stats(
-- OUT hook text,
-- OUT min_time float8,
-- OUT max_time float8,
-- OUT total_time float8,
-- OUT ncalls int8
--)
--RETURNS SETOF record
--AS 'MODULE_PATHNAME', 'pg_stat_monitor_hook_stats'
--LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
--CREATE VIEW pg_stat_monitor_hook_stats AS SELECT
-- hook,
-- min_time,
-- max_time,
-- total_time,
-- total_time / greatest(ncalls, 1) as avg_time,
-- ncalls,
-- ROUND(CAST(total_time / greatest(sum(total_time) OVER(), 0.00000001) * 100 as numeric), 2)::text || '%' as load_comparison
-- FROM pg_stat_monitor_hook_stats();
GRANT SELECT ON pg_stat_monitor TO PUBLIC;
GRANT SELECT ON pg_stat_monitor_settings TO PUBLIC;
-- Don't want this to be available to non-superusers.
REVOKE ALL ON FUNCTION pg_stat_monitor_reset() FROM PUBLIC;

View File

@ -1,267 +0,0 @@
/* contrib/pg_stat_monitor/pg_stat_monitor--1.0.sql */
-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "CREATE EXTENSION pg_stat_monitor" to load this file. \quit
-- Register functions.
CREATE FUNCTION pg_stat_monitor_reset()
RETURNS void
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION pg_stat_monitor_version()
RETURNS text
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION get_histogram_timings()
RETURNS text
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION range()
RETURNS text[] AS $$
SELECT string_to_array(get_histogram_timings(), ',');
$$ LANGUAGE SQL;
CREATE FUNCTION pg_stat_monitor_internal(IN showtext boolean,
OUT bucket int8, -- 0
OUT userid oid,
OUT dbid oid,
OUT client_ip int8,
OUT queryid text, -- 4
OUT planid text,
OUT query text,
OUT query_plan text,
OUT state_code int8,
OUT top_queryid text,
OUT top_query text,
OUT application_name text,
OUT relations text, -- 11
OUT cmd_type int,
OUT elevel int,
OUT sqlcode TEXT,
OUT message text,
OUT bucket_start_time text,
OUT calls int8, -- 16
OUT total_exec_time float8,
OUT min_exec_time float8,
OUT max_exec_time float8,
OUT mean_exec_time float8,
OUT stddev_exec_time float8,
OUT rows_retrieved int8,
OUT plans_calls int8, -- 23
OUT total_plan_time float8,
OUT min_plan_time float8,
OUT max_plan_time float8,
OUT mean_plan_time float8,
OUT stddev_plan_time float8,
OUT shared_blks_hit int8, -- 29
OUT shared_blks_read int8,
OUT shared_blks_dirtied int8,
OUT shared_blks_written int8,
OUT local_blks_hit int8,
OUT local_blks_read int8,
OUT local_blks_dirtied int8,
OUT local_blks_written int8,
OUT temp_blks_read int8,
OUT temp_blks_written int8,
OUT blk_read_time float8,
OUT blk_write_time float8,
OUT resp_calls text, -- 41
OUT cpu_user_time float8,
OUT cpu_sys_time float8,
OUT wal_records int8,
OUT wal_fpi int8,
OUT wal_bytes numeric,
OUT comments TEXT,
OUT toplevel BOOLEAN
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_monitor'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
CREATE FUNCTION get_state(state_code int8) RETURNS TEXT AS
$$
SELECT
CASE
WHEN state_code = 0 THEN 'PARSING'
WHEN state_code = 1 THEN 'PLANNING'
WHEN state_code = 2 THEN 'ACTIVE'
WHEN state_code = 3 THEN 'FINISHED'
WHEN state_code = 4 THEN 'FINISHED WITH ERROR'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION get_cmd_type (cmd_type INTEGER) RETURNS TEXT AS
$$
SELECT
CASE
WHEN cmd_type = 0 THEN ''
WHEN cmd_type = 1 THEN 'SELECT'
WHEN cmd_type = 2 THEN 'UPDATE'
WHEN cmd_type = 3 THEN 'INSERT'
WHEN cmd_type = 4 THEN 'DELETE'
WHEN cmd_type = 5 THEN 'UTILITY'
WHEN cmd_type = 6 THEN 'NOTHING'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION pg_stat_monitor_settings(
OUT name text,
OUT value text,
OUT default_value text,
OUT description text,
OUT minimum INTEGER,
OUT maximum INTEGER,
OUT options text,
OUT restart text
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_monitor_settings'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
CREATE VIEW pg_stat_monitor_settings AS SELECT
name,
value,
default_value,
description,
minimum,
maximum,
options,
restart
FROM pg_stat_monitor_settings();
-- Register a view on the function for ease of use.
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid::regrole,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows_retrieved,
plans_calls,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
state_code,
get_state(state_code) as state
FROM pg_stat_monitor_internal(TRUE) p, pg_database d WHERE dbid = oid
ORDER BY bucket_start_time;
CREATE FUNCTION decode_error_level(elevel int)
RETURNS text
AS
$$
SELECT
CASE
WHEN elevel = 0 THEN ''
WHEN elevel = 10 THEN 'DEBUG5'
WHEN elevel = 11 THEN 'DEBUG4'
WHEN elevel = 12 THEN 'DEBUG3'
WHEN elevel = 13 THEN 'DEBUG2'
WHEN elevel = 14 THEN 'DEBUG1'
WHEN elevel = 15 THEN 'LOG'
WHEN elevel = 16 THEN 'LOG_SERVER_ONLY'
WHEN elevel = 17 THEN 'INFO'
WHEN elevel = 18 THEN 'NOTICE'
WHEN elevel = 19 THEN 'WARNING'
WHEN elevel = 20 THEN 'ERROR'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION histogram(_bucket int, _quryid text)
RETURNS SETOF RECORD AS $$
DECLARE
rec record;
BEGIN
for rec in
with stat as (select queryid, bucket, unnest(range()) as range, unnest(resp_calls)::int freq from pg_stat_monitor) select range, freq, repeat('■', (freq::float / max(freq) over() * 30)::int) as bar from stat where queryid = _quryid and bucket = _bucket
loop
return next rec;
end loop;
END
$$ language plpgsql;
--CREATE FUNCTION pg_stat_monitor_hook_stats(
-- OUT hook text,
-- OUT min_time float8,
-- OUT max_time float8,
-- OUT total_time float8,
-- OUT ncalls int8
--)
--RETURNS SETOF record
--AS 'MODULE_PATHNAME', 'pg_stat_monitor_hook_stats'
--LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
--
--CREATE VIEW pg_stat_monitor_hook_stats AS SELECT
-- hook,
-- min_time,
-- max_time,
-- total_time,
-- total_time / greatest(ncalls, 1) as avg_time,
-- ncalls,
-- ROUND(CAST(total_time / greatest(sum(total_time) OVER(), 0.00000001) * 100 as numeric), 2)::text || '%' as load_comparison
-- FROM pg_stat_monitor_hook_stats();
GRANT SELECT ON pg_stat_monitor TO PUBLIC;
GRANT SELECT ON pg_stat_monitor_settings TO PUBLIC;
-- Don't want this to be available to non-superusers.
REVOKE ALL ON FUNCTION pg_stat_monitor_reset() FROM PUBLIC;

View File

@ -1,267 +0,0 @@
/* contrib/pg_stat_monitor/pg_stat_monitor--1.0.sql */
-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "CREATE EXTENSION pg_stat_monitor" to load this file. \quit
-- Register functions.
CREATE FUNCTION pg_stat_monitor_reset()
RETURNS void
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION pg_stat_monitor_version()
RETURNS text
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION get_histogram_timings()
RETURNS text
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION range()
RETURNS text[] AS $$
SELECT string_to_array(get_histogram_timings(), ',');
$$ LANGUAGE SQL;
CREATE FUNCTION pg_stat_monitor_internal(IN showtext boolean,
OUT bucket int8, -- 0
OUT userid oid,
OUT dbid oid,
OUT client_ip int8,
OUT queryid text, -- 4
OUT planid text,
OUT query text,
OUT query_plan text,
OUT state_code int8,
OUT top_queryid text,
OUT top_query text,
OUT application_name text,
OUT relations text, -- 11
OUT cmd_type int,
OUT elevel int,
OUT sqlcode TEXT,
OUT message text,
OUT bucket_start_time text,
OUT calls int8, -- 16
OUT total_exec_time float8,
OUT min_exec_time float8,
OUT max_exec_time float8,
OUT mean_exec_time float8,
OUT stddev_exec_time float8,
OUT rows_retrieved int8,
OUT plans_calls int8, -- 23
OUT total_plan_time float8,
OUT min_plan_time float8,
OUT max_plan_time float8,
OUT mean_plan_time float8,
OUT stddev_plan_time float8,
OUT shared_blks_hit int8, -- 29
OUT shared_blks_read int8,
OUT shared_blks_dirtied int8,
OUT shared_blks_written int8,
OUT local_blks_hit int8,
OUT local_blks_read int8,
OUT local_blks_dirtied int8,
OUT local_blks_written int8,
OUT temp_blks_read int8,
OUT temp_blks_written int8,
OUT blk_read_time float8,
OUT blk_write_time float8,
OUT resp_calls text, -- 41
OUT cpu_user_time float8,
OUT cpu_sys_time float8,
OUT wal_records int8,
OUT wal_fpi int8,
OUT wal_bytes numeric,
OUT comments TEXT,
OUT toplevel BOOLEAN
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_monitor'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
CREATE FUNCTION get_state(state_code int8) RETURNS TEXT AS
$$
SELECT
CASE
WHEN state_code = 0 THEN 'PARSING'
WHEN state_code = 1 THEN 'PLANNING'
WHEN state_code = 2 THEN 'ACTIVE'
WHEN state_code = 3 THEN 'FINISHED'
WHEN state_code = 4 THEN 'FINISHED WITH ERROR'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION get_cmd_type (cmd_type INTEGER) RETURNS TEXT AS
$$
SELECT
CASE
WHEN cmd_type = 0 THEN ''
WHEN cmd_type = 1 THEN 'SELECT'
WHEN cmd_type = 2 THEN 'UPDATE'
WHEN cmd_type = 3 THEN 'INSERT'
WHEN cmd_type = 4 THEN 'DELETE'
WHEN cmd_type = 5 THEN 'UTILITY'
WHEN cmd_type = 6 THEN 'NOTHING'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION pg_stat_monitor_settings(
OUT name text,
OUT value text,
OUT default_value text,
OUT description text,
OUT minimum INTEGER,
OUT maximum INTEGER,
OUT options text,
OUT restart text
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_monitor_settings'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
CREATE VIEW pg_stat_monitor_settings AS SELECT
name,
value,
default_value,
description,
minimum,
maximum,
options,
restart
FROM pg_stat_monitor_settings();
-- Register a view on the function for ease of use.
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid::regrole,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows_retrieved,
plans_calls,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
state_code,
get_state(state_code) as state
FROM pg_stat_monitor_internal(TRUE) p, pg_database d WHERE dbid = oid
ORDER BY bucket_start_time;
CREATE FUNCTION decode_error_level(elevel int)
RETURNS text
AS
$$
SELECT
CASE
WHEN elevel = 0 THEN ''
WHEN elevel = 10 THEN 'DEBUG5'
WHEN elevel = 11 THEN 'DEBUG4'
WHEN elevel = 12 THEN 'DEBUG3'
WHEN elevel = 13 THEN 'DEBUG2'
WHEN elevel = 14 THEN 'DEBUG1'
WHEN elevel = 15 THEN 'LOG'
WHEN elevel = 16 THEN 'LOG_SERVER_ONLY'
WHEN elevel = 17 THEN 'INFO'
WHEN elevel = 18 THEN 'NOTICE'
WHEN elevel = 19 THEN 'WARNING'
WHEN elevel = 20 THEN 'ERROR'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION histogram(_bucket int, _quryid text)
RETURNS SETOF RECORD AS $$
DECLARE
rec record;
BEGIN
for rec in
with stat as (select queryid, bucket, unnest(range()) as range, unnest(resp_calls)::int freq from pg_stat_monitor) select range, freq, repeat('■', (freq::float / max(freq) over() * 30)::int) as bar from stat where queryid = _quryid and bucket = _bucket
loop
return next rec;
end loop;
END
$$ language plpgsql;
--CREATE FUNCTION pg_stat_monitor_hook_stats(
-- OUT hook text,
-- OUT min_time float8,
-- OUT max_time float8,
-- OUT total_time float8,
-- OUT ncalls int8
--)
--RETURNS SETOF record
--AS 'MODULE_PATHNAME', 'pg_stat_monitor_hook_stats'
--LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
--
--CREATE VIEW pg_stat_monitor_hook_stats AS SELECT
-- hook,
-- min_time,
-- max_time,
-- total_time,
-- total_time / greatest(ncalls, 1) as avg_time,
-- ncalls,
-- ROUND(CAST(total_time / greatest(sum(total_time) OVER(), 0.00000001) * 100 as numeric), 2)::text || '%' as load_comparison
-- FROM pg_stat_monitor_hook_stats();
GRANT SELECT ON pg_stat_monitor TO PUBLIC;
GRANT SELECT ON pg_stat_monitor_settings TO PUBLIC;
-- Don't want this to be available to non-superusers.
REVOKE ALL ON FUNCTION pg_stat_monitor_reset() FROM PUBLIC;

View File

@ -1,255 +0,0 @@
/* contrib/pg_stat_monitor/pg_stat_monitor--1.0.sql */
-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "CREATE EXTENSION pg_stat_monitor" to load this file. \quit
-- Register functions.
CREATE FUNCTION pg_stat_monitor_reset()
RETURNS void
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION pg_stat_monitor_version()
RETURNS text
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION get_histogram_timings()
RETURNS text
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION range()
RETURNS text[] AS $$
SELECT string_to_array(get_histogram_timings(), ',');
$$ LANGUAGE SQL;
CREATE FUNCTION pg_stat_monitor_internal(IN showtext boolean,
OUT bucket int8, -- 0
OUT userid oid,
OUT dbid oid,
OUT client_ip int8,
OUT queryid text, -- 4
OUT planid text,
OUT query text,
OUT query_plan text,
OUT state_code int8,
OUT top_queryid text,
OUT top_query text,
OUT application_name text,
OUT relations text, -- 11
OUT cmd_type int,
OUT elevel int,
OUT sqlcode TEXT,
OUT message text,
OUT bucket_start_time text,
OUT calls int8, -- 16
OUT total_time float8,
OUT min_time float8,
OUT max_time float8,
OUT mean_time float8,
OUT stddev_time float8,
OUT rows_retrieved int8,
OUT plans_calls int8, -- 23
OUT plan_total_time float8,
OUT plan_min_time float8,
OUT plan_max_time float8,
OUT plan_mean_time float8,
OUT plan_stddev_time float8,
OUT shared_blks_hit int8, -- 29
OUT shared_blks_read int8,
OUT shared_blks_dirtied int8,
OUT shared_blks_written int8,
OUT local_blks_hit int8,
OUT local_blks_read int8,
OUT local_blks_dirtied int8,
OUT local_blks_written int8,
OUT temp_blks_read int8,
OUT temp_blks_written int8,
OUT blk_read_time float8,
OUT blk_write_time float8,
OUT resp_calls text, -- 41
OUT cpu_user_time float8,
OUT cpu_sys_time float8,
OUT wal_records int8,
OUT wal_fpi int8,
OUT wal_bytes numeric,
OUT comments TEXT,
OUT toplevel BOOLEAN
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_monitor'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
CREATE FUNCTION get_state(state_code int8) RETURNS TEXT AS
$$
SELECT
CASE
WHEN state_code = 0 THEN 'PARSING'
WHEN state_code = 1 THEN 'PLANNING'
WHEN state_code = 2 THEN 'ACTIVE'
WHEN state_code = 3 THEN 'FINISHED'
WHEN state_code = 4 THEN 'FINISHED WITH ERROR'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION get_cmd_type (cmd_type INTEGER) RETURNS TEXT AS
$$
SELECT
CASE
WHEN cmd_type = 0 THEN ''
WHEN cmd_type = 1 THEN 'SELECT'
WHEN cmd_type = 2 THEN 'UPDATE'
WHEN cmd_type = 3 THEN 'INSERT'
WHEN cmd_type = 4 THEN 'DELETE'
WHEN cmd_type = 5 THEN 'UTILITY'
WHEN cmd_type = 6 THEN 'NOTHING'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION pg_stat_monitor_settings(
OUT name text,
OUT value text,
OUT default_value text,
OUT description text,
OUT minimum INTEGER,
OUT maximum INTEGER,
OUT options text,
OUT restart text
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_monitor_settings'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
CREATE VIEW pg_stat_monitor_settings AS SELECT
name,
value,
default_value,
description,
minimum,
maximum,
options,
restart
FROM pg_stat_monitor_settings();
-- Register a view on the function for ease of use.
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid::regrole,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
queryid,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_time,
min_time,
max_time,
mean_time,
stddev_time,
rows_retrieved,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
state_code,
get_state(state_code) as state
FROM pg_stat_monitor_internal(TRUE) p, pg_database d WHERE dbid = oid
ORDER BY bucket_start_time;
CREATE FUNCTION decode_error_level(elevel int)
RETURNS text
AS
$$
SELECT
CASE
WHEN elevel = 0 THEN ''
WHEN elevel = 10 THEN 'DEBUG5'
WHEN elevel = 11 THEN 'DEBUG4'
WHEN elevel = 12 THEN 'DEBUG3'
WHEN elevel = 13 THEN 'DEBUG2'
WHEN elevel = 14 THEN 'DEBUG1'
WHEN elevel = 15 THEN 'LOG'
WHEN elevel = 16 THEN 'LOG_SERVER_ONLY'
WHEN elevel = 17 THEN 'INFO'
WHEN elevel = 18 THEN 'NOTICE'
WHEN elevel = 19 THEN 'WARNING'
WHEN elevel = 20 THEN 'ERROR'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION histogram(_bucket int, _quryid text)
RETURNS SETOF RECORD AS $$
DECLARE
rec record;
BEGIN
for rec in
with stat as (select queryid, bucket, unnest(range()) as range, unnest(resp_calls)::int freq from pg_stat_monitor) select range, freq, repeat('■', (freq::float / max(freq) over() * 30)::int) as bar from stat where queryid = _quryid and bucket = _bucket
loop
return next rec;
end loop;
END
$$ language plpgsql;
-- CREATE FUNCTION pg_stat_monitor_hook_stats(
-- OUT hook text,
-- OUT min_time float8,
-- OUT max_time float8,
-- OUT total_time float8,
-- OUT ncalls int8
--)
--RETURNS SETOF record
--AS 'MODULE_PATHNAME', 'pg_stat_monitor_hook_stats'
--LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
--CREATE VIEW pg_stat_monitor_hook_stats AS SELECT
-- hook,
-- min_time,
-- max_time,
-- total_time,
-- total_time / greatest(ncalls, 1) as avg_time,
-- ncalls,
-- ROUND(CAST(total_time / greatest(sum(total_time) OVER(), 0.00000001) * 100 as numeric), 2)::text || '%' as load_comparison
-- FROM pg_stat_monitor_hook_stats();
GRANT SELECT ON pg_stat_monitor TO PUBLIC;
GRANT SELECT ON pg_stat_monitor_settings TO PUBLIC;
-- Don't want this to be available to non-superusers.
REVOKE ALL ON FUNCTION pg_stat_monitor_reset() FROM PUBLIC;

View File

@ -0,0 +1,497 @@
/* contrib/pg_stat_monitor/pg_stat_monitor--2.0--2.1.sql */
-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "ALTER EXTENSION pg_stat_monitor" to load this file. \quit
DROP FUNCTION pg_stat_monitor_internal CASCADE;
DROP FUNCTION pgsm_create_view CASCADE;
DROP FUNCTION pgsm_create_11_view();
DROP FUNCTION pgsm_create_13_view();
DROP FUNCTION pgsm_create_14_view();
DROP FUNCTION pgsm_create_15_view();
CREATE FUNCTION pg_stat_monitor_internal(
IN showtext boolean,
OUT bucket int8, -- 0
OUT userid oid,
OUT username text,
OUT dbid oid,
OUT datname text,
OUT client_ip int8,
OUT queryid int8, -- 6
OUT planid int8,
OUT query text,
OUT query_plan text,
OUT pgsm_query_id int8,
OUT top_queryid int8,
OUT top_query text,
OUT application_name text,
OUT relations text, -- 14
OUT cmd_type int,
OUT elevel int,
OUT sqlcode TEXT,
OUT message text,
OUT bucket_start_time timestamptz,
OUT calls int8, -- 20
OUT total_exec_time float8, -- 21
OUT min_exec_time float8,
OUT max_exec_time float8,
OUT mean_exec_time float8,
OUT stddev_exec_time float8,
OUT rows int8, -- 26
OUT plans int8, -- 27
OUT total_plan_time float8, -- 28
OUT min_plan_time float8,
OUT max_plan_time float8,
OUT mean_plan_time float8,
OUT stddev_plan_time float8,
OUT shared_blks_hit int8, -- 33
OUT shared_blks_read int8,
OUT shared_blks_dirtied int8,
OUT shared_blks_written int8,
OUT local_blks_hit int8,
OUT local_blks_read int8,
OUT local_blks_dirtied int8,
OUT local_blks_written int8,
OUT temp_blks_read int8,
OUT temp_blks_written int8,
OUT shared_blk_read_time float8,
OUT shared_blk_write_time float8,
OUT local_blk_read_time float8,
OUT local_blk_write_time float8,
OUT temp_blk_read_time float8,
OUT temp_blk_write_time float8,
OUT resp_calls text, -- 49
OUT cpu_user_time float8,
OUT cpu_sys_time float8,
OUT wal_records int8,
OUT wal_fpi int8,
OUT wal_bytes numeric,
OUT comments TEXT,
OUT jit_functions int8, -- 56
OUT jit_generation_time float8,
OUT jit_inlining_count int8,
OUT jit_inlining_time float8,
OUT jit_optimization_count int8,
OUT jit_optimization_time float8,
OUT jit_emission_count int8,
OUT jit_emission_time float8,
OUT jit_deform_count int8,
OUT jit_deform_time float8,
OUT stats_since timestamp with time zone, -- 66
OUT minmax_stats_since timestamp with time zone,
OUT toplevel BOOLEAN, -- 68
OUT bucket_done BOOLEAN
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_monitor_2_1'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
-- Register a view on the function for ease of use.
CREATE FUNCTION pgsm_create_11_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time AS total_time,
min_exec_time AS min_time,
max_exec_time AS max_time,
mean_exec_time AS mean_time,
stddev_exec_time AS stddev_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
shared_blk_read_time AS blk_read_time,
shared_blk_write_time AS blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
bucket_done
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_13_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
shared_blk_read_time AS blk_read_time,
shared_blk_write_time AS blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
bucket_done,
-- PostgreSQL-13 Specific Columns
plans,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_14_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
shared_blk_read_time AS blk_read_time,
shared_blk_write_time AS blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
bucket_done,
plans,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_15_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
shared_blk_read_time AS blk_read_time,
shared_blk_write_time AS blk_write_time,
temp_blk_read_time,
temp_blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
bucket_done,
plans,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time,
jit_functions,
jit_generation_time,
jit_inlining_count,
jit_inlining_time,
jit_optimization_count,
jit_optimization_time,
jit_emission_count,
jit_emission_time
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_17_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
shared_blk_read_time,
shared_blk_write_time,
local_blk_read_time,
local_blk_write_time,
temp_blk_read_time,
temp_blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
bucket_done,
plans,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time,
jit_functions,
jit_generation_time,
jit_inlining_count,
jit_inlining_time,
jit_optimization_count,
jit_optimization_time,
jit_emission_count,
jit_emission_time,
jit_deform_count,
jit_deform_time,
stats_since,
minmax_stats_since
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_view() RETURNS INT AS
$$
DECLARE ver integer;
BEGIN
SELECT current_setting('server_version_num') INTO ver;
IF (ver >= 170000) THEN
return pgsm_create_17_view();
END IF;
IF (ver >= 150000) THEN
return pgsm_create_15_view();
END IF;
IF (ver >= 140000) THEN
return pgsm_create_14_view();
END IF;
IF (ver >= 130000) THEN
return pgsm_create_13_view();
END IF;
IF (ver >= 110000) THEN
return pgsm_create_11_view();
END IF;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
SELECT pgsm_create_view();
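Once pgsm_create_view() has run, one way to confirm which branch of the dispatcher applied on a given server (purely illustrative):
SHOW server_version_num;                             -- e.g. 170002 selects pgsm_create_17_view()
SELECT pg_get_viewdef('pg_stat_monitor'::regclass);  -- inspect the columns the created view exposes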
REVOKE ALL ON FUNCTION pgsm_create_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_11_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_13_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_14_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_15_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_17_view FROM PUBLIC;
GRANT EXECUTE ON FUNCTION range TO PUBLIC;
GRANT EXECUTE ON FUNCTION decode_error_level TO PUBLIC;
GRANT EXECUTE ON FUNCTION get_histogram_timings TO PUBLIC;
GRANT EXECUTE ON FUNCTION get_cmd_type TO PUBLIC;
GRANT EXECUTE ON FUNCTION pg_stat_monitor_internal TO PUBLIC;
GRANT SELECT ON pg_stat_monitor TO PUBLIC;
-- Reset is only available to super user
REVOKE ALL ON FUNCTION pg_stat_monitor_reset FROM PUBLIC;

pg_stat_monitor--2.0.sql Normal file
View File

@ -0,0 +1,471 @@
/* contrib/pg_stat_monitor/pg_stat_monitor--2.0.sql */
-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "CREATE EXTENSION pg_stat_monitor" to load this file. \quit
-- Register functions.
CREATE FUNCTION pg_stat_monitor_reset()
RETURNS void
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION pg_stat_monitor_version()
RETURNS text
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION get_histogram_timings()
RETURNS text
AS 'MODULE_PATHNAME'
LANGUAGE C PARALLEL SAFE;
CREATE FUNCTION range()
RETURNS text[] AS $$
SELECT string_to_array(get_histogram_timings(), ',');
$$ LANGUAGE SQL;
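For reference, range() simply splits the histogram boundaries returned by get_histogram_timings(); an illustrative way to list them one per row:
SELECT unnest(range()) AS time_range;  -- one row per histogram bucket, as text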
-- Some generic utility function used internally.
CREATE FUNCTION get_cmd_type (cmd_type INTEGER) RETURNS TEXT AS
$$
SELECT
CASE
WHEN cmd_type = 0 THEN ''
WHEN cmd_type = 1 THEN 'SELECT'
WHEN cmd_type = 2 THEN 'UPDATE'
WHEN cmd_type = 3 THEN 'INSERT'
WHEN cmd_type = 4 THEN 'DELETE'
WHEN cmd_type = 5 THEN 'UTILITY'
WHEN cmd_type = 6 THEN 'NOTHING'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION decode_error_level(elevel int)
RETURNS text
AS
$$
SELECT
CASE
WHEN elevel = 0 THEN ''
WHEN elevel = 10 THEN 'DEBUG5'
WHEN elevel = 11 THEN 'DEBUG4'
WHEN elevel = 12 THEN 'DEBUG3'
WHEN elevel = 13 THEN 'DEBUG2'
WHEN elevel = 14 THEN 'DEBUG1'
WHEN elevel = 15 THEN 'LOG'
WHEN elevel = 16 THEN 'LOG_SERVER_ONLY'
WHEN elevel = 17 THEN 'INFO'
WHEN elevel = 18 THEN 'NOTICE'
WHEN elevel = 19 THEN 'WARNING'
WHEN elevel = 20 THEN 'ERROR'
END
$$
LANGUAGE SQL PARALLEL SAFE;
CREATE FUNCTION histogram(_bucket int, _quryid int8)
RETURNS SETOF RECORD AS $$
DECLARE
rec record;
BEGIN
FOR rec IN
WITH stat AS (select queryid, bucket, unnest(range()) AS range,
unnest(resp_calls)::int freq FROM pg_stat_monitor) select range,
freq, repeat('■', (freq::float / max(freq) over() * 30)::int) AS bar
FROM stat WHERE queryid = _quryid and bucket = _bucket
LOOP
RETURN next rec;
END loop;
END
$$ language plpgsql;
-- pg_stat_monitor internal function, must not call outside from this file.
CREATE FUNCTION pg_stat_monitor_internal(
IN showtext boolean,
OUT bucket int8, -- 0
OUT userid oid,
OUT username text,
OUT dbid oid,
OUT datname text,
OUT client_ip int8,
OUT queryid int8, -- 4
OUT planid int8,
OUT query text,
OUT query_plan text,
OUT pgsm_query_id int8,
OUT top_queryid int8,
OUT top_query text,
OUT application_name text,
OUT relations text, -- 11
OUT cmd_type int,
OUT elevel int,
OUT sqlcode TEXT,
OUT message text,
OUT bucket_start_time timestamptz,
OUT calls int8, -- 16
OUT total_exec_time float8,
OUT min_exec_time float8,
OUT max_exec_time float8,
OUT mean_exec_time float8,
OUT stddev_exec_time float8,
OUT rows int8,
OUT plans int8, -- 23
OUT total_plan_time float8,
OUT min_plan_time float8,
OUT max_plan_time float8,
OUT mean_plan_time float8,
OUT stddev_plan_time float8,
OUT shared_blks_hit int8, -- 29
OUT shared_blks_read int8,
OUT shared_blks_dirtied int8,
OUT shared_blks_written int8,
OUT local_blks_hit int8,
OUT local_blks_read int8,
OUT local_blks_dirtied int8,
OUT local_blks_written int8,
OUT temp_blks_read int8,
OUT temp_blks_written int8,
OUT blk_read_time float8,
OUT blk_write_time float8,
OUT temp_blk_read_time float8,
OUT temp_blk_write_time float8,
OUT resp_calls text, -- 41
OUT cpu_user_time float8,
OUT cpu_sys_time float8,
OUT wal_records int8,
OUT wal_fpi int8,
OUT wal_bytes numeric,
OUT comments TEXT,
OUT jit_functions int8,
OUT jit_generation_time float8,
OUT jit_inlining_count int8,
OUT jit_inlining_time float8,
OUT jit_optimization_count int8,
OUT jit_optimization_time float8,
OUT jit_emission_count int8,
OUT jit_emission_time float8,
OUT toplevel BOOLEAN,
OUT bucket_done BOOLEAN
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_monitor_2_0'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
-- Register a view on the function for ease of use.
CREATE FUNCTION pgsm_create_11_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time AS total_time,
min_exec_time AS min_time,
max_exec_time AS max_time,
mean_exec_time AS mean_time,
stddev_exec_time AS stddev_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
bucket_done
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_13_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
bucket_done,
-- PostgreSQL-13 Specific Columns
plans,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_14_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
bucket_done,
plans,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_15_view() RETURNS INT AS
$$
BEGIN
CREATE VIEW pg_stat_monitor AS SELECT
bucket,
bucket_start_time AS bucket_start_time,
userid,
username,
dbid,
datname,
'0.0.0.0'::inet + client_ip AS client_ip,
pgsm_query_id,
queryid,
toplevel,
top_queryid,
query,
comments,
planid,
query_plan,
top_query,
application_name,
string_to_array(relations, ',') AS relations,
cmd_type,
get_cmd_type(cmd_type) AS cmd_type_text,
elevel,
sqlcode,
message,
calls,
total_exec_time,
min_exec_time,
max_exec_time,
mean_exec_time,
stddev_exec_time,
rows,
shared_blks_hit,
shared_blks_read,
shared_blks_dirtied,
shared_blks_written,
local_blks_hit,
local_blks_read,
local_blks_dirtied,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time,
temp_blk_read_time,
temp_blk_write_time,
(string_to_array(resp_calls, ',')) resp_calls,
cpu_user_time,
cpu_sys_time,
wal_records,
wal_fpi,
wal_bytes,
bucket_done,
plans,
total_plan_time,
min_plan_time,
max_plan_time,
mean_plan_time,
stddev_plan_time,
jit_functions,
jit_generation_time,
jit_inlining_count,
jit_inlining_time,
jit_optimization_count,
jit_optimization_time,
jit_emission_count,
jit_emission_time
FROM pg_stat_monitor_internal(TRUE)
ORDER BY bucket_start_time;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION pgsm_create_view() RETURNS INT AS
$$
DECLARE ver integer;
BEGIN
SELECT current_setting('server_version_num') INTO ver;
IF (ver >= 150000) THEN
return pgsm_create_15_view();
END IF;
IF (ver >= 140000) THEN
return pgsm_create_14_view();
END IF;
IF (ver >= 130000) THEN
return pgsm_create_13_view();
END IF;
IF (ver >= 110000) THEN
return pgsm_create_11_view();
END IF;
RETURN 0;
END;
$$ LANGUAGE plpgsql;
SELECT pgsm_create_view();
REVOKE ALL ON FUNCTION pgsm_create_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_11_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_13_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_14_view FROM PUBLIC;
REVOKE ALL ON FUNCTION pgsm_create_15_view FROM PUBLIC;
GRANT EXECUTE ON FUNCTION range TO PUBLIC;
GRANT EXECUTE ON FUNCTION decode_error_level TO PUBLIC;
GRANT EXECUTE ON FUNCTION get_histogram_timings TO PUBLIC;
GRANT EXECUTE ON FUNCTION get_cmd_type TO PUBLIC;
GRANT EXECUTE ON FUNCTION pg_stat_monitor_internal TO PUBLIC;
GRANT SELECT ON pg_stat_monitor TO PUBLIC;
-- Reset is only available to super user
REVOKE ALL ON FUNCTION pg_stat_monitor_reset FROM PUBLIC;

View File

@ -0,0 +1,48 @@
/* contrib/pg_stat_monitor/pg_stat_monitor--2.1--2.2.sql */
-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "ALTER EXTENSION pg_stat_monitor" to load this file. \quit
CREATE OR REPLACE FUNCTION get_cmd_type (cmd_type INTEGER) RETURNS TEXT AS
$$
SELECT
CASE
WHEN cmd_type = 0 THEN ''
WHEN cmd_type = 1 THEN 'SELECT'
WHEN cmd_type = 2 THEN 'UPDATE'
WHEN cmd_type = 3 THEN 'INSERT'
WHEN cmd_type = 4 THEN 'DELETE'
WHEN cmd_type = 5 AND current_setting('server_version_num')::int >= 150000 THEN 'MERGE'
WHEN cmd_type = 5 AND current_setting('server_version_num')::int < 150000 THEN 'UTILITY'
WHEN cmd_type = 6 AND current_setting('server_version_num')::int >= 150000 THEN 'UTILITY'
WHEN cmd_type = 6 AND current_setting('server_version_num')::int < 150000 THEN 'NOTHING'
WHEN cmd_type = 7 THEN 'NOTHING'
END
$$
LANGUAGE SQL PARALLEL SAFE;
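An illustrative check of the version-dependent mapping above (the expected labels follow directly from the CASE expression):
SELECT get_cmd_type(1);  -- 'SELECT' on any supported version
SELECT get_cmd_type(5);  -- 'MERGE' on PostgreSQL 15+, 'UTILITY' on older servers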
-- Create new function that handles error levels across PostgreSQL versions 12-17
CREATE OR REPLACE FUNCTION decode_error_level(elevel int)
RETURNS text
AS $$
SELECT CASE
WHEN elevel = 0 THEN ''
WHEN elevel = 10 THEN 'DEBUG5'
WHEN elevel = 11 THEN 'DEBUG4'
WHEN elevel = 12 THEN 'DEBUG3'
WHEN elevel = 13 THEN 'DEBUG2'
WHEN elevel = 14 THEN 'DEBUG1'
WHEN elevel = 15 THEN 'LOG'
WHEN elevel = 16 THEN 'LOG_SERVER_ONLY'
WHEN elevel = 17 THEN 'INFO'
WHEN elevel = 18 THEN 'NOTICE'
WHEN elevel = 19 THEN 'WARNING'
WHEN elevel = 20 AND current_setting('server_version_num')::int < 140000 THEN 'ERROR'
WHEN elevel = 20 AND current_setting('server_version_num')::int >= 140000 THEN 'WARNING_CLIENT_ONLY'
WHEN elevel = 21 AND current_setting('server_version_num')::int < 140000 THEN 'FATAL'
WHEN elevel = 21 AND current_setting('server_version_num')::int >= 140000 THEN 'ERROR'
WHEN elevel = 22 AND current_setting('server_version_num')::int < 140000 THEN 'PANIC'
WHEN elevel = 22 AND current_setting('server_version_num')::int >= 140000 THEN 'FATAL'
WHEN elevel = 23 AND current_setting('server_version_num')::int >= 140000 THEN 'PANIC'
END;
$$ LANGUAGE SQL PARALLEL SAFE;
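Likewise, a quick sanity check of the shifted error-level codes (again derived from the CASE expression above):
SELECT decode_error_level(20);  -- 'ERROR' before PostgreSQL 14, 'WARNING_CLIENT_ONLY' on 14 and later
SELECT decode_error_level(21);  -- 'FATAL' before PostgreSQL 14, 'ERROR' on 14 and later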

File diff suppressed because it is too large.

View File

@ -1,5 +1,5 @@
# pg_stat_monitor extension
comment = 'pg_stat_monitor is a PostgreSQL Query Performance Monitoring tool, based on the PostgreSQL contrib module pg_stat_statements. pg_stat_monitor provides aggregated statistics, client information, plan details including the query plan, and histogram information.'
default_version = '1.0'
default_version = '2.2'
module_pathname = '$libdir/pg_stat_monitor'
relocatable = true

View File

@ -1,11 +1,11 @@
/*-------------------------------------------------------------------------
*
* pg_stat_monitor.h
* Track statement execution times across a whole database cluster.
* Track statement execution times across a whole database cluster.
*
* Portions Copyright © 2018-2020, Percona LLC and/or its affiliates
* Portions Copyright © 2018-2024, Percona LLC and/or its affiliates
*
* Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
* Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
*
* Portions Copyright (c) 1994, The Regents of the University of California
*
@ -27,10 +27,14 @@
#include <sys/time.h>
#include <sys/resource.h>
#include "lib/dshash.h"
#include "utils/dsa.h"
#include "access/hash.h"
#include "catalog/pg_authid.h"
#include "executor/instrument.h"
#include "common/ip.h"
#include "jit/jit.h"
#include "funcapi.h"
#include "access/twophase.h"
#include "mb/pg_wchar.h"
@ -53,6 +57,9 @@
#include "utils/lsyscache.h"
#include "utils/guc.h"
#include "utils/guc_tables.h"
#include "utils/memutils.h"
#include "utils/palloc.h"
#define MAX_BACKEND_PROCESES (MaxBackends + NUM_AUXILIARY_PROCS + max_prepared_xacts)
#define IntArrayGetTextDatum(x,y) intarray_get_datum(x,y)
@ -60,7 +67,6 @@
/* XXX: Should USAGE_EXEC reflect execution time and/or buffer usage? */
#define USAGE_EXEC(duration) (1.0)
#define USAGE_INIT (1.0) /* including initial planning */
#define ASSUMED_MEDIAN_INIT (10.0) /* initial assumed median usage */
#define ASSUMED_LENGTH_INIT 1024 /* initial assumed mean query length */
#define USAGE_DECREASE_FACTOR (0.99) /* decreased every entry_dealloc */
#define STICKY_DECREASE_FACTOR (0.50) /* factor for sticky entries */
@ -68,31 +74,34 @@
#define JUMBLE_SIZE 1024 /* query serialization buffer size */
#define HISTOGRAM_MAX_TIME 50000000
#define MAX_RESPONSE_BUCKET 50
#define INVALID_BUCKET_ID -1
#define MAX_REL_LEN 255
#define MAX_BUCKETS 10
#define TEXT_LEN 255
#define ERROR_MESSAGE_LEN 100
#define REL_TYPENAME_LEN 64
#define REL_LST 10
#define REL_LEN 1000
#define REL_LEN 132 /* REL_TYPENAME_LEN * 2 (relname + schema), plus room
* for a view-indication marker, the dot separator,
* and the string terminator */
#define CMD_LST 10
#define CMD_LEN 20
#define APPLICATIONNAME_LEN 100
#define COMMENTS_LEN 512
#define APPLICATIONNAME_LEN NAMEDATALEN
#define COMMENTS_LEN 256
#define PGSM_OVER_FLOW_MAX 10
#define PLAN_TEXT_LEN 1024
/* assumed maximum query nesting level */
#define DEFAULT_MAX_NESTED_LEVEL 10
#define MAX_QUERY_BUF (PGSM_QUERY_SHARED_BUFFER * 1024 * 1024)
#define MAX_BUCKETS_MEM (PGSM_MAX * 1024 * 1024)
#define BUCKETS_MEM_OVERFLOW() ((hash_get_num_entries(pgss_hash) * sizeof(pgssEntry)) >= MAX_BUCKETS_MEM)
#define MAX_BUCKET_ENTRIES (MAX_BUCKETS_MEM / sizeof(pgssEntry))
#define MAX_QUERY_BUF ((int64)pgsm_query_shared_buffer * 1024 * 1024)
#define MAX_BUCKETS_MEM ((int64)pgsm_max * 1024 * 1024)
#define BUCKETS_MEM_OVERFLOW() ((hash_get_num_entries(pgsm_hash) * sizeof(pgsmEntry)) >= MAX_BUCKETS_MEM)
#define MAX_BUCKET_ENTRIES (MAX_BUCKETS_MEM / sizeof(pgsmEntry))
#define QUERY_BUFFER_OVERFLOW(x,y) ((x + y + sizeof(uint64) + sizeof(uint64)) > MAX_QUERY_BUF)
#define QUERY_MARGIN 100
#define MIN_QUERY_LEN 10
#define SQLCODE_LEN 20
#define TOTAL_RELS_LENGTH (REL_LST * REL_LEN)
#if PG_VERSION_NUM >= 130000
#define MAX_SETTINGS 15
@ -102,22 +111,37 @@
/* Update this if need a enum GUC with more options. */
#define MAX_ENUM_OPTIONS 6
typedef struct GucVariables
{
enum config_type type; /* PGC_BOOL, PGC_INT, PGC_REAL, PGC_STRING,
* PGC_ENUM */
int guc_variable;
char guc_name[TEXT_LEN];
char guc_desc[TEXT_LEN];
int guc_default;
int guc_min;
int guc_max;
int guc_unit;
int *guc_value;
bool guc_restart;
int n_options;
char guc_options[MAX_ENUM_OPTIONS][32];
} GucVariable;
/*
* pg_stat_monitor uses the hash structure to store all query statistics
* except the query text, which gets stored out of line in the raw DSA area.
* Enabling USE_DYNAMIC_HASH uses the dshash for storing the query statistics
* that get created in the DSA area and can grow to any size.
*
* The only issue with using dshash is that newly created hash entries are
* returned locked by dshash, and the caller is required to release the lock.
* That works well as long as we are willing to let errors thrown from the
* dshash functions propagate, since the lightweight locks acquired internally
* by dshash are released automatically on error. Throwing an error from
* pg_stat_monitor, however, would mean erroring out the user query, which is
* not acceptable for any stats collector extension.
*
* Moreover, some pg_stat_monitor functions perform a sequential scan on the
* hash table, while sequential scan support for dshash tables is only
* available in PG 15 and onwards.
* So until we find a way to release the locks acquired internally by dshash
* on error, while ignoring that error at the same time, we will keep using
* the classic shared memory hash table.
*/
#ifdef USE_DYNAMIC_HASH
#define PGSM_HASH_TABLE dshash_table
#define PGSM_HASH_TABLE_HANDLE dshash_table_handle
#define PGSM_HASH_SEQ_STATUS dshash_seq_status
#else
#define PGSM_HASH_TABLE HTAB
#define PGSM_HASH_TABLE_HANDLE HTAB*
#define PGSM_HASH_SEQ_STATUS HASH_SEQ_STATUS
#endif
#if PG_VERSION_NUM < 130000
@ -129,29 +153,24 @@ typedef struct WalUsage
} WalUsage;
#endif
typedef enum OVERFLOW_TARGET
{
OVERFLOW_TARGET_NONE = 0,
OVERFLOW_TARGET_DISK
} OVERFLOW_TARGET;
typedef enum pgssStoreKind
typedef enum pgsmStoreKind
{
PGSS_INVALID = -1,
PGSM_INVALID = -1,
/*
* PGSS_PLAN and PGSS_EXEC must be respectively 0 and 1 as they're used to
* PGSM_PLAN and PGSM_EXEC must be respectively 0 and 1 as they're used to
* reference the underlying values in the arrays in the Counters struct,
* and this order is required in pg_stat_statements_internal().
* and this order is required in pg_stat_monitor_internal().
*/
PGSS_PARSE = 0,
PGSS_PLAN,
PGSS_EXEC,
PGSS_FINISHED,
PGSS_ERROR,
PGSM_PARSE = 0,
PGSM_PLAN,
PGSM_EXEC,
PGSM_STORE,
PGSM_ERROR,
PGSS_NUMKIND /* Must be last value of this enum */
} pgssStoreKind;
PGSM_NUMKIND /* Must be last value of this enum */
} pgsmStoreKind;
/* assumed maximum query nesting level */
#define DEFAULT_MAX_NESTED_LEVEL 10
@ -164,7 +183,7 @@ typedef enum AGG_KEY
AGG_KEY_DATABASE = 0,
AGG_KEY_USER,
AGG_KEY_HOST
} AGG_KEY;
} AGG_KEY;
#define MAX_QUERY_LEN 1024
@ -176,45 +195,32 @@ typedef struct CallTime
double max_time; /* maximum execution time in msec */
double mean_time; /* mean execution time in msec */
double sum_var_time; /* sum of variances in execution time in msec */
} CallTime;
} CallTime;
/*
* Entry type for queries hash table (query ID).
*
* We use a hash table to keep track of query IDs that have their
* corresponding query text added to the query buffer (pgsm_query_shared_buffer).
*
* This allows us to avoid adding duplicate queries to the buffer, therefore
* leaving more space for other queries and saving some CPU.
*/
typedef struct pgssQueryEntry
{
uint64 queryid; /* query identifier, also the key. */
size_t query_pos; /* query location within query buffer */
} pgssQueryEntry;
typedef struct PlanInfo
{
uint64 planid; /* plan identifier */
char plan_text[PLAN_TEXT_LEN]; /* plan text */
size_t plan_len; /* strlen(plan_text) */
} PlanInfo;
} PlanInfo;
typedef struct pgssHashKey
typedef struct pgsmHashKey
{
uint64 bucket_id; /* bucket number */
uint64 queryid; /* query identifier */
uint64 userid; /* user OID */
uint64 dbid; /* database OID */
uint64 ip; /* client ip address */
uint64 planid; /* plan identifier */
uint64 appid; /* hash of application name */
uint64 toplevel; /* query executed at top level */
} pgssHashKey;
Oid userid; /* user OID */
Oid dbid; /* database OID */
uint32 ip; /* client ip address */
bool toplevel; /* query executed at top level */
uint64 parentid; /* parent queryid of current query */
} pgsmHashKey;
typedef struct QueryInfo
{
uint64 parentid; /* parent queryid of current query */
dsa_pointer parent_query;
int64 type; /* type of query, options are query, info,
* warning, error, fatal */
char application_name[APPLICATIONNAME_LEN];
@ -231,14 +237,14 @@ typedef struct ErrorInfo
int64 elevel; /* error elevel */
char sqlcode[SQLCODE_LEN]; /* error sqlcode */
char message[ERROR_MESSAGE_LEN]; /* error message text */
} ErrorInfo;
} ErrorInfo;
typedef struct Calls
{
int64 calls; /* # of times executed */
int64 rows; /* total # of retrieved or affected rows */
double usage; /* usage factor */
} Calls;
} Calls;
typedef struct Blocks
@ -253,26 +259,75 @@ typedef struct Blocks
int64 local_blks_written; /* # of local disk blocks written */
int64 temp_blks_read; /* # of temp blocks read */
int64 temp_blks_written; /* # of temp blocks written */
double blk_read_time; /* time spent reading, in msec */
double blk_write_time; /* time spent writing, in msec */
} Blocks;
double shared_blk_read_time; /* time spent reading shared blocks,
* in msec */
double shared_blk_write_time; /* time spent writing shared blocks,
* in msec */
double local_blk_read_time; /* time spent reading local blocks, in
* msec */
double local_blk_write_time; /* time spent writing local blocks, in
* msec */
double temp_blk_read_time; /* time spent reading temp blocks, in msec */
double temp_blk_write_time; /* time spent writing temp blocks, in
* msec */
/*
* Variables for local entry. The values to be passed to pgsm_update_entry
* from pgsm_store.
*/
instr_time instr_shared_blk_read_time; /* time spent reading shared
* blocks */
instr_time instr_shared_blk_write_time; /* time spent writing shared
* blocks */
instr_time instr_local_blk_read_time; /* time spent reading local blocks */
instr_time instr_local_blk_write_time; /* time spent writing local blocks */
instr_time instr_temp_blk_read_time; /* time spent reading temp blocks */
instr_time instr_temp_blk_write_time; /* time spent writing temp blocks */
} Blocks;
typedef struct JitInfo
{
int64 jit_functions; /* total number of JIT functions emitted */
double jit_generation_time; /* total time to generate jit code */
int64 jit_inlining_count; /* number of times inlining time has been
* > 0 */
double jit_deform_time; /* total time to deform tuples in jit code */
int64 jit_deform_count; /* number of times deform time has been >
* 0 */
double jit_inlining_time; /* total time to inline jit code */
int64 jit_optimization_count; /* number of times optimization time
* has been > 0 */
double jit_optimization_time; /* total time to optimize jit code */
int64 jit_emission_count; /* number of times emission time has been
* > 0 */
double jit_emission_time; /* total time to emit jit code */
/*
* Variables for local entry. The values to be passed to pgsm_update_entry
* from pgsm_store.
*/
instr_time instr_generation_counter; /* generation counter */
instr_time instr_inlining_counter; /* inlining counter */
instr_time instr_deform_counter; /* deform counter */
instr_time instr_optimization_counter; /* optimization counter */
instr_time instr_emission_counter; /* emission counter */
} JitInfo;
typedef struct SysInfo
{
float utime; /* user cpu time */
float stime; /* system cpu time */
} SysInfo;
double utime; /* user cpu time */
double stime; /* system cpu time */
} SysInfo;
typedef struct Wal_Usage
{
int64 wal_records; /* # of WAL records generated */
int64 wal_fpi; /* # of WAL full page images generated */
uint64 wal_bytes; /* total amount of WAL bytes generated */
} Wal_Usage;
} Wal_Usage;
typedef struct Counters
{
uint64 bucket_id; /* bucket id */
Calls calls;
QueryInfo info;
CallTime time;
@ -283,11 +338,11 @@ typedef struct Counters
Blocks blocks;
SysInfo sysinfo;
JitInfo jitinfo;
ErrorInfo error;
Wal_Usage walusage;
int resp_calls[MAX_RESPONSE_BUCKET]; /* execution time's in
* msec */
uint64 state; /* query state */
} Counters;
/* Some global structure to get the cpu usage, really don't like the idea of global variable */
@ -295,59 +350,57 @@ typedef struct Counters
/*
* Statistics per statement
*/
typedef struct pgssEntry
typedef struct pgsmEntry
{
pgssHashKey key; /* hash key of entry - MUST BE FIRST */
pgsmHashKey key; /* hash key of entry - MUST BE FIRST */
uint64 pgsm_query_id; /* pgsm generate normalized query hash */
char datname[NAMEDATALEN]; /* database name */
char username[NAMEDATALEN]; /* user name */
Counters counters; /* the statistics for this query */
int encoding; /* query text encoding */
TimestampTz stats_since; /* timestamp of entry allocation */
TimestampTz minmax_stats_since; /* timestamp of last min/max values reset */
slock_t mutex; /* protects the counters only */
size_t query_pos; /* query location within query buffer */
} pgssEntry;
union
{
dsa_pointer query_pos; /* query location within query buffer */
char *query_pointer;
} query_text;
} pgsmEntry;
/*
* Global shared state
*/
typedef struct pgssSharedState
typedef struct pgsmSharedState
{
LWLock *lock; /* protects hashtable search/modification */
double cur_median_usage; /* current median usage in hashtable */
slock_t mutex; /* protects following fields only: */
Size extent; /* current extent of query file */
int64 n_writers; /* number of active writers to query file */
pg_atomic_uint64 current_wbucket;
pg_atomic_uint64 prev_bucket_sec;
uint64 bucket_entry[MAX_BUCKETS];
char bucket_start_time[MAX_BUCKETS][60]; /* start time of the
* bucket */
LWLock *errors_lock; /* protects errors hashtable
* search/modification */
int hash_tranche_id;
void *raw_dsa_area; /* DSA area pointer to store query texts.
* dshash also lives in this memory when
* USE_DYNAMIC_HASH is enabled */
PGSM_HASH_TABLE_HANDLE hash_handle;
/*
* These variables are used when pgsm_overflow_target is ON.
*
* overflow is set to true when the query buffer overflows.
*
* n_bucket_cycles counts the number of times we have changed bucket since the
* query buffer overflowed. When it reaches pgsm_max_buckets we remove the
* dump file and reset the counter.
*
* This allows us to avoid having a large file on disk that would also
* slow down queries to the pg_stat_monitor view.
* Hash table handle. Can be either a classic shared memory hash or a dshash
* (when USE_DYNAMIC_HASH is enabled).
*/
bool overflow;
size_t n_bucket_cycles;
} pgssSharedState;
#define ResetSharedState(x) \
do { \
x->cur_median_usage = ASSUMED_MEDIAN_INIT; \
x->cur_median_usage = ASSUMED_MEDIAN_INIT; \
x->n_writers = 0; \
pg_atomic_init_u64(&x->current_wbucket, 0); \
pg_atomic_init_u64(&x->prev_bucket_sec, 0); \
memset(&x->bucket_entry, 0, MAX_BUCKETS * sizeof(uint64)); \
} while(0)
bool pgsm_oom;
TimestampTz bucket_start_time[]; /* start time of the bucket */
} pgsmSharedState;
typedef struct pgsmLocalState
{
pgsmSharedState *shared_pgsmState;
dsa_area *dsa; /* local dsa area for backend attached to the
* dsa area created by postmaster at startup. */
PGSM_HASH_TABLE *shared_hash;
MemoryContext pgsm_mem_cxt;
} pgsmLocalState;
#if PG_VERSION_NUM < 140000
/*
@ -385,52 +438,42 @@ typedef struct JumbleState
} JumbleState;
#endif
/* Links to shared memory state */
bool SaveQueryText(uint64 bucketid,
uint64 queryid,
unsigned char *buf,
const char *query,
uint64 query_len,
size_t *query_pos);
/* guc.c */
void init_guc(void);
GucVariable *get_conf(int i);
/* hash_create.c */
dsa_area *get_dsa_area_for_query_text(void);
PGSM_HASH_TABLE *get_pgsmHash(void);
void pgsm_attach_shmem(void);
bool IsHashInitialize(void);
void pgss_shmem_startup(void);
void pgss_shmem_shutdown(int code, Datum arg);
bool IsSystemOOM(void);
void pgsm_shmem_startup(void);
void pgsm_shmem_shutdown(int code, Datum arg);
int pgsm_get_bucket_size(void);
pgssSharedState *pgsm_get_ss(void);
HTAB *pgsm_get_plan_hash(void);
HTAB *pgsm_get_hash(void);
HTAB *pgsm_get_query_hash(void);
HTAB *pgsm_get_plan_hash(void);
void hash_entry_reset(void);
void hash_query_entryies_reset(void);
pgsmSharedState *pgsm_get_ss(void);
void hash_query_entries();
void hash_query_entry_dealloc(int new_bucket_id, int old_bucket_id, unsigned char *query_buffer[]);
void hash_entry_dealloc(int new_bucket_id, int old_bucket_id, unsigned char *query_buffer);
pgssEntry *hash_entry_alloc(pgssSharedState *pgss, pgssHashKey *key, int encoding);
Size hash_memsize(void);
int read_query_buffer(int bucket_id, uint64 queryid, char *query_txt, size_t pos);
uint64 read_query(unsigned char *buf, uint64 queryid, char *query, size_t pos);
void pgss_startup(void);
void set_qbuf(unsigned char *);
pgsmEntry *hash_entry_alloc(pgsmSharedState *pgsm, pgsmHashKey *key, int encoding);
Size pgsm_ShmemSize(void);
void pgsm_startup(void);
/* hash_query.c */
void pgss_startup(void);
void pgsm_startup(void);
MemoryContext GetPgsmMemoryContext(void);
/* guc.c */
void init_guc(void);
/* GUC variables*/
/*---- GUC variables ----*/
typedef enum
{
PSGM_TRACK_NONE = 0, /* track no statements */
PGSM_TRACK_TOP, /* only top level statements */
PGSM_TRACK_ALL /* all statements, including nested ones */
} PGSMTrackLevel;
} PGSMTrackLevel;
static const struct config_enum_entry track_options[] =
{
{"none", PSGM_TRACK_NONE, false},
@ -439,90 +482,30 @@ static const struct config_enum_entry track_options[] =
{NULL, 0, false}
};
#define PGSM_MAX get_conf(0)->guc_variable
#define PGSM_QUERY_MAX_LEN get_conf(1)->guc_variable
#define PGSM_TRACK_UTILITY get_conf(2)->guc_variable
#define PGSM_NORMALIZED_QUERY get_conf(3)->guc_variable
#define PGSM_MAX_BUCKETS get_conf(4)->guc_variable
#define PGSM_BUCKET_TIME get_conf(5)->guc_variable
#define PGSM_HISTOGRAM_MIN get_conf(6)->guc_variable
#define PGSM_HISTOGRAM_MAX get_conf(7)->guc_variable
#define PGSM_HISTOGRAM_BUCKETS get_conf(8)->guc_variable
#define PGSM_QUERY_SHARED_BUFFER get_conf(9)->guc_variable
#define PGSM_OVERFLOW_TARGET get_conf(10)->guc_variable
#define PGSM_QUERY_PLAN get_conf(11)->guc_variable
#define PGSM_TRACK get_conf(12)->guc_variable
#define PGSM_EXTRACT_COMMENTS get_conf(13)->guc_variable
#define PGSM_TRACK_PLANNING get_conf(14)->guc_variable
/*---- Benchmarking ----*/
#ifdef BENCHMARK
/*
* These enumerator values are used as index in the hook stats array.
* STATS_START and STATS_END are used only to delimit the range.
* STATS_END is also the length of the valid items in the enum.
*/
enum pg_hook_stats_id
typedef enum
{
STATS_START = -1,
STATS_PGSS_POST_PARSE_ANALYZE,
STATS_PGSS_EXECUTORSTART,
STATS_PGSS_EXECUTORUN,
STATS_PGSS_EXECUTORFINISH,
STATS_PGSS_EXECUTOREND,
STATS_PGSS_PROCESSUTILITY,
#if PG_VERSION_NUM >= 130000
STATS_PGSS_PLANNER_HOOK,
#endif
STATS_PGSM_EMIT_LOG_HOOK,
STATS_PGSS_EXECUTORCHECKPERMS,
STATS_END
};
HISTOGRAM_START,
HISTOGRAM_END,
HISTOGRAM_COUNT
} HistogramTimingType;
/* Holds execution-time statistics for a hook. */
struct pg_hook_stats_t
{
char hook_name[64];
double min_time;
double max_time;
double total_time;
uint64 ncalls;
};
#define HOOK_STATS_SIZE MAXALIGN((size_t)STATS_END * sizeof(struct pg_hook_stats_t))
/* Allocate a pg_hook_stats_t array of size HOOK_STATS_SIZE on shared memory. */
void init_hook_stats(void);
/* Update hook time execution statistics. */
void update_hook_stats(enum pg_hook_stats_id hook_id, double time_elapsed);
/*
* Macro used to declare a hook function:
* Example:
* DECLARE_HOOK(void my_hook, const char *query, size_t length);
* Will expand to:
* static void my_hook(const char *query, size_t length);
* static void my_hook_benchmark(const char *query, size_t length);
*/
#define DECLARE_HOOK(hook, ...) \
static hook(__VA_ARGS__); \
static hook##_benchmark(__VA_ARGS__);
/*
* Macro used to wrap a hook when pg_stat_monitor is compiled with -DBENCHMARK.
*
* It is intended to be used as follows in _PG_init():
* pg_hook_function = HOOK(my_hook_function);
* Then, if pg_stat_monitor is compiled with -DBENCHMARK this will expand to:
* pg_hook_name = my_hook_function_benchmark;
* Otherwise it will simply expand to:
* pg_hook_name = my_hook_function;
*/
#define HOOK(name) name##_benchmark
#else /* #ifdef BENCHMARK */
extern int pgsm_max;
extern int pgsm_query_max_len;
extern int pgsm_bucket_time;
extern int pgsm_max_buckets;
extern int pgsm_histogram_buckets;
extern double pgsm_histogram_min;
extern double pgsm_histogram_max;
extern int pgsm_query_shared_buffer;
extern bool pgsm_track_planning;
extern bool pgsm_extract_comments;
extern bool pgsm_enable_query_plan;
extern bool pgsm_enable_overflow;
extern bool pgsm_normalized_query;
extern bool pgsm_track_utility;
extern bool pgsm_track_application_names;
extern bool pgsm_enable_pgsm_query_id;
extern int pgsm_track;
#define DECLARE_HOOK(hook, ...) \
static hook(__VA_ARGS__);
@ -530,4 +513,9 @@ void update_hook_stats(enum pg_hook_stats_id hook_id, double time_elapsed);
#define HOOK_STATS_SIZE 0
#endif
#endif
void *pgsm_hash_find_or_insert(PGSM_HASH_TABLE * shared_hash, pgsmHashKey *key, bool *found);
void *pgsm_hash_find(PGSM_HASH_TABLE * shared_hash, pgsmHashKey *key, bool *found);
void pgsm_hash_seq_init(PGSM_HASH_SEQ_STATUS * hstat, PGSM_HASH_TABLE * shared_hash, bool lock);
void *pgsm_hash_seq_next(PGSM_HASH_SEQ_STATUS * hstat);
void pgsm_hash_seq_term(PGSM_HASH_SEQ_STATUS * hstat);
void pgsm_hash_delete_current(PGSM_HASH_SEQ_STATUS * hstat, PGSM_HASH_TABLE * shared_hash, void *key);

View File

@ -1,228 +0,0 @@
/*-------------------------------------------------------------------------
*
* pgsm_errors.c
* Track pg_stat_monitor internal error messages.
*
* Copyright © 2021, Percona LLC and/or its affiliates
*
* Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
*
* Portions Copyright (c) 1994, The Regents of the University of California
*
* IDENTIFICATION
* contrib/pg_stat_monitor/pgsm_errors.c
*
*-------------------------------------------------------------------------
*/
#include <stdarg.h>
#include <stdbool.h>
#include <string.h>
#include <time.h>
#include <sys/time.h>
#include <postgres.h>
#include <access/hash.h>
#include <storage/shmem.h>
#include <utils/hsearch.h>
#include "pg_stat_monitor.h"
#include "pgsm_errors.h"
PG_FUNCTION_INFO_V1(pg_stat_monitor_errors);
PG_FUNCTION_INFO_V1(pg_stat_monitor_reset_errors);
/*
* Maximum number of error messages tracked.
* This should be set to a sensible value in order to track
* the different types of errors that pg_stat_monitor may
* report, e.g. out of memory.
*/
#define PSGM_ERRORS_MAX 128
static HTAB *pgsm_errors_ht = NULL;
void
psgm_errors_init(void)
{
HASHCTL info;
#if PG_VERSION_NUM >= 140000
int flags = HASH_ELEM | HASH_STRINGS;
#else
int flags = HASH_ELEM | HASH_BLOBS;
#endif
memset(&info, 0, sizeof(info));
info.keysize = ERROR_MSG_MAX_LEN;
info.entrysize = sizeof(ErrorEntry);
pgsm_errors_ht = ShmemInitHash("pg_stat_monitor: errors hashtable",
PSGM_ERRORS_MAX, /* initial size */
PSGM_ERRORS_MAX, /* maximum size */
&info,
flags);
}
size_t
pgsm_errors_size(void)
{
return hash_estimate_size(PSGM_ERRORS_MAX, sizeof(ErrorEntry));
}
void
pgsm_log(PgsmLogSeverity severity, const char *format,...)
{
char key[ERROR_MSG_MAX_LEN];
ErrorEntry *entry;
bool found = false;
va_list ap;
int n;
struct timeval tv;
struct tm *lt;
pgssSharedState *pgss;
va_start(ap, format);
n = vsnprintf(key, ERROR_MSG_MAX_LEN, format, ap);
va_end(ap);
if (n < 0)
return;
pgss = pgsm_get_ss();
LWLockAcquire(pgss->errors_lock, LW_EXCLUSIVE);
entry = (ErrorEntry *) hash_search(pgsm_errors_ht, key, HASH_ENTER_NULL, &found);
if (!entry)
{
LWLockRelease(pgss->errors_lock);
/*
* We're out of memory, can't track this error message.
*/
return;
}
if (!found)
{
entry->severity = severity;
entry->calls = 0;
}
/* Update message timestamp. */
gettimeofday(&tv, NULL);
lt = localtime(&tv.tv_sec);
snprintf(entry->time, sizeof(entry->time),
"%04d-%02d-%02d %02d:%02d:%02d",
lt->tm_year + 1900,
lt->tm_mon + 1,
lt->tm_mday,
lt->tm_hour,
lt->tm_min,
lt->tm_sec);
entry->calls++;
LWLockRelease(pgss->errors_lock);
}
/*
* Clear all entries from the hash table.
*/
Datum
pg_stat_monitor_reset_errors(PG_FUNCTION_ARGS)
{
HASH_SEQ_STATUS hash_seq;
ErrorEntry *entry;
pgssSharedState *pgss = pgsm_get_ss();
/* Safety check... */
if (!IsSystemInitialized())
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("pg_stat_monitor: must be loaded via shared_preload_libraries")));
LWLockAcquire(pgss->errors_lock, LW_EXCLUSIVE);
hash_seq_init(&hash_seq, pgsm_errors_ht);
while ((entry = hash_seq_search(&hash_seq)) != NULL)
entry = hash_search(pgsm_errors_ht, &entry->message, HASH_REMOVE, NULL);
LWLockRelease(pgss->errors_lock);
PG_RETURN_VOID();
}
/*
* Invoked when users query the view pg_stat_monitor_errors.
* This function creates tuples with error messages from data present in
* the hash table, then returns the dataset to the caller.
*/
Datum
pg_stat_monitor_errors(PG_FUNCTION_ARGS)
{
ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
TupleDesc tupdesc;
Tuplestorestate *tupstore;
MemoryContext per_query_ctx;
MemoryContext oldcontext;
HASH_SEQ_STATUS hash_seq;
ErrorEntry *error_entry;
pgssSharedState *pgss = pgsm_get_ss();
/* Safety check... */
if (!IsSystemInitialized())
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("pg_stat_monitor: must be loaded via shared_preload_libraries")));
/* check to see if caller supports us returning a tuplestore */
if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("pg_stat_monitor: set-valued function called in context that cannot accept a set")));
/* Switch into long-lived context to construct returned data structures */
per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
oldcontext = MemoryContextSwitchTo(per_query_ctx);
/* Build a tuple descriptor for our result type */
if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
elog(ERROR, "pg_stat_monitor: return type must be a row type");
if (tupdesc->natts != 4)
elog(ERROR, "pg_stat_monitor: incorrect number of output arguments, required 3, found %d", tupdesc->natts);
tupstore = tuplestore_begin_heap(true, false, work_mem);
rsinfo->returnMode = SFRM_Materialize;
rsinfo->setResult = tupstore;
rsinfo->setDesc = tupdesc;
MemoryContextSwitchTo(oldcontext);
LWLockAcquire(pgss->errors_lock, LW_SHARED);
hash_seq_init(&hash_seq, pgsm_errors_ht);
while ((error_entry = hash_seq_search(&hash_seq)) != NULL)
{
Datum values[4];
bool nulls[4];
int i = 0;
memset(values, 0, sizeof(values));
memset(nulls, 0, sizeof(nulls));
values[i++] = Int64GetDatumFast(error_entry->severity);
values[i++] = CStringGetTextDatum(error_entry->message);
values[i++] = CStringGetTextDatum(error_entry->time);
values[i++] = Int64GetDatumFast(error_entry->calls);
tuplestore_putvalues(tupstore, tupdesc, values, nulls);
}
LWLockRelease(pgss->errors_lock);
/* clean up and return the tuplestore */
tuplestore_donestoring(tupstore);
return (Datum) 0;
}

View File

@ -12,12 +12,39 @@ SELECT 1 AS num;
(1 row)
SELECT query,application_name FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | application_name
-------------------------------------------------------------------------------+-----------------------------
SELECT 1 AS num | pg_regress/application_name
SELECT pg_stat_monitor_reset() | pg_regress/application_name
SELECT query,application_name FROM pg_stat_monitor ORDER BY query COLLATE "C" | pg_regress/application_name
(3 rows)
query | application_name
--------------------------------+-----------------------------
SELECT 1 AS num | pg_regress/application_name
SELECT pg_stat_monitor_reset() | pg_regress/application_name
(2 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT 1 AS num;
num
-----
1
(1 row)
SET pg_stat_monitor.pgsm_track_application_names='no';
SELECT 1 AS num;
num
-----
1
(1 row)
SELECT query,application_name FROM pg_stat_monitor ORDER BY query, application_name COLLATE "C";
query | application_name
-------------------------------------------------------+-----------------------------
SELECT 1 AS num | pg_regress/application_name
SELECT 1 AS num |
SELECT pg_stat_monitor_reset() | pg_regress/application_name
SET pg_stat_monitor.pgsm_track_application_names='no' |
(4 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset

View File

@ -1,18 +1,18 @@
Create EXTENSION pg_stat_monitor;
CREATE EXTENSION pg_stat_monitor;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
Set application_name = 'naeem' ;
SET application_name = 'naeem' ;
SELECT 1 AS num;
num
-----
1
(1 row)
Set application_name = 'psql' ;
SET application_name = 'psql' ;
SELECT 1 AS num;
num
-----
@ -20,15 +20,14 @@ SELECT 1 AS num;
(1 row)
SELECT query,application_name FROM pg_stat_monitor ORDER BY query, application_name COLLATE "C";
query | application_name
-------------------------------------------------------------------------------------------------+------------------------------------
SELECT 1 AS num | naeem
SELECT 1 AS num | psql
SELECT pg_stat_monitor_reset() | pg_regress/application_name_unique
SELECT query,application_name FROM pg_stat_monitor ORDER BY query, application_name COLLATE "C" | psql
Set application_name = 'naeem' | naeem
Set application_name = 'psql' | psql
(6 rows)
query | application_name
--------------------------------+------------------------------------
SELECT 1 AS num | naeem
SELECT 1 AS num | psql
SELECT pg_stat_monitor_reset() | pg_regress/application_name_unique
SET application_name = 'naeem' | naeem
SET application_name = 'psql' | psql
(5 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset

View File

@ -0,0 +1,36 @@
CREATE EXTENSION pg_stat_monitor;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SET application_name = 'naeem' ;
SELECT 1 AS num;
num
-----
1
(1 row)
SET application_name = 'psql' ;
SELECT 1 AS num;
num
-----
1
(1 row)
SELECT query,application_name FROM pg_stat_monitor ORDER BY query, application_name COLLATE "C";
query | application_name
--------------------------------+------------------------------------
SELECT 1 AS num | naeem
SELECT 1 AS num | psql
SELECT pg_stat_monitor_reset() | pg_regress/application_name_unique
(3 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DROP EXTENSION pg_stat_monitor;

View File

@ -12,12 +12,11 @@ SELECT 1 AS num;
(1 row)
SELECT query FROM pg_stat_monitor ORDER BY query COLLATE "C";
query
--------------------------------------------------------------
query
--------------------------------
SELECT 1 AS num
SELECT pg_stat_monitor_reset()
SELECT query FROM pg_stat_monitor ORDER BY query COLLATE "C"
(3 rows)
(2 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset

View File

@ -1,35 +0,0 @@
CREATE EXTENSION pg_stat_monitor;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
select pg_sleep(.5);
pg_sleep
----------
(1 row)
SELECT 1;
?column?
----------
1
(1 row)
SELECT query FROM pg_stat_monitor ORDER BY query COLLATE "C";
query
--------------------------------------------------------------
SELECT $1
SELECT pg_stat_monitor_reset()
SELECT query FROM pg_stat_monitor ORDER BY query COLLATE "C"
select pg_sleep($1)
(4 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DROP EXTENSION pg_stat_monitor;

View File

@ -23,20 +23,21 @@ SELECT b FROM t2 FOR UPDATE;
TRUNCATE t1;
DROP TABLE t1;
DROP TABLE t2;
SELECT query, cmd_type, cmd_type_text FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | cmd_type | cmd_type_text
----------------------------------------------------------------------------------------+----------+---------------
CREATE TABLE t1 (a INTEGER) | 0 |
CREATE TABLE t2 (b INTEGER) | 0 |
DELETE FROM t1 | 4 | DELETE
DROP TABLE t1 | 0 |
INSERT INTO t1 VALUES(1) | 3 | INSERT
SELECT a FROM t1 | 1 | SELECT
SELECT b FROM t2 FOR UPDATE | 1 | SELECT
SELECT pg_stat_monitor_reset() | 1 | SELECT
SELECT query, cmd_type, cmd_type_text FROM pg_stat_monitor ORDER BY query COLLATE "C" | 1 | SELECT
TRUNCATE t1 | 0 |
UPDATE t1 SET a = 2 | 2 | UPDATE
query | cmd_type | cmd_type_text
--------------------------------+----------+---------------
CREATE TABLE t1 (a INTEGER) | 5 | UTILITY
CREATE TABLE t2 (b INTEGER) | 5 | UTILITY
DELETE FROM t1 | 4 | DELETE
DROP TABLE t1 | 5 | UTILITY
DROP TABLE t2 | 5 | UTILITY
INSERT INTO t1 VALUES(1) | 3 | INSERT
SELECT a FROM t1 | 1 | SELECT
SELECT b FROM t2 FOR UPDATE | 1 | SELECT
SELECT pg_stat_monitor_reset() | 1 | SELECT
TRUNCATE t1 | 5 | UTILITY
UPDATE t1 SET a = 2 | 2 | UPDATE
(11 rows)
SELECT pg_stat_monitor_reset();

View File

@ -0,0 +1,49 @@
CREATE EXTENSION pg_stat_monitor;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
CREATE TABLE t1 (a INTEGER);
CREATE TABLE t2 (b INTEGER);
INSERT INTO t1 VALUES(1);
SELECT a FROM t1;
a
---
1
(1 row)
UPDATE t1 SET a = 2;
DELETE FROM t1;
SELECT b FROM t2 FOR UPDATE;
b
---
(0 rows)
TRUNCATE t1;
DROP TABLE t1;
DROP TABLE t2;
SELECT query, cmd_type, cmd_type_text FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | cmd_type | cmd_type_text
--------------------------------+----------+---------------
CREATE TABLE t1 (a INTEGER) | 6 | UTILITY
CREATE TABLE t2 (b INTEGER) | 6 | UTILITY
DELETE FROM t1 | 4 | DELETE
DROP TABLE t1 | 6 | UTILITY
DROP TABLE t2 | 6 | UTILITY
INSERT INTO t1 VALUES(1) | 3 | INSERT
SELECT a FROM t1 | 1 | SELECT
SELECT b FROM t2 FOR UPDATE | 1 | SELECT
SELECT pg_stat_monitor_reset() | 1 | SELECT
TRUNCATE t1 | 6 | UTILITY
UPDATE t1 SET a = 2 | 2 | UPDATE
(11 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DROP EXTENSION pg_stat_monitor;

View File

@ -36,13 +36,12 @@ SELECT a,b,c,d FROM t1, t2, t3, t4 WHERE t1.a = t2.b AND t3.c = t4.d ORDER BY a;
---+---+---+---
(0 rows)
SELECT query,calls FROM pg_stat_monitor ORDER BY query COLLATE "C";
SELECT query, sum(calls) as calls FROM pg_stat_monitor GROUP BY query ORDER BY query COLLATE "C";
query | calls
---------------------------------------------------------------------------------+-------
SELECT a,b,c,d FROM t1, t2, t3, t4 WHERE t1.a = t2.b AND t3.c = t4.d ORDER BY a | 4
SELECT pg_stat_monitor_reset() | 1
SELECT query,calls FROM pg_stat_monitor ORDER BY query COLLATE "C" | 1
(3 rows)
(2 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
@ -60,12 +59,11 @@ begin
n := n + 1;
end loop;
end $$;
SELECT query,calls FROM pg_stat_monitor ORDER BY query COLLATE "C";
SELECT query, sum(calls) as calls FROM pg_stat_monitor GROUP BY query ORDER BY query COLLATE "C";
query | calls
---------------------------------------------------------------------------------------------------+-------
SELECT a,b,c,d FROM t1, t2, t3, t4 WHERE t1.a = t2.b AND t3.c = t4.d ORDER BY a | 1000
SELECT pg_stat_monitor_reset() | 1
SELECT query,calls FROM pg_stat_monitor ORDER BY query COLLATE "C" | 1
do $$ +| 1
declare +|
n integer:= 1; +|
@ -76,7 +74,7 @@ SELECT query,calls FROM pg_stat_monitor ORDER BY query COLLATE "C";
n := n + 1; +|
end loop; +|
end $$ |
(4 rows)
(3 rows)
DROP TABLE t1;
DROP TABLE t2;

View File

@ -27,12 +27,13 @@ SELECT * FROM t3,t4 WHERE t3.c = t4.d;
(0 rows)
\c contrib_regression
DROP DATABASE db2;
SELECT datname, query FROM pg_stat_monitor ORDER BY query COLLATE "C";
datname | query
--------------------+-----------------------------------------------------------------------
datname | query
--------------------+---------------------------------------
contrib_regression | DROP DATABASE db2
db1 | SELECT * FROM t1,t2 WHERE t1.a = t2.b
db2 | SELECT * FROM t3,t4 WHERE t3.c = t4.d
contrib_regression | SELECT datname, query FROM pg_stat_monitor ORDER BY query COLLATE "C"
contrib_regression | SELECT pg_stat_monitor_reset()
(4 rows)
@ -45,10 +46,6 @@ SELECT pg_stat_monitor_reset();
\c db1
DROP TABLE t1;
DROP TABLE t2;
\c db2
DROP TABLE t3;
DROP TABLE t4;
\c contrib_regression
DROP DATABASE db1;
DROP DATABASE db2;
DROP EXTENSION pg_stat_monitor;

View File

@ -0,0 +1,25 @@
CREATE EXTENSION pg_stat_monitor;
DO $$
DECLARE
i integer;
BEGIN
FOR i IN 10..24 LOOP
RAISE NOTICE 'error_code: %, error_level: %', i, decode_error_level(i);
END LOOP;
END $$;
NOTICE: error_code: 10, error_level: DEBUG5
NOTICE: error_code: 11, error_level: DEBUG4
NOTICE: error_code: 12, error_level: DEBUG3
NOTICE: error_code: 13, error_level: DEBUG2
NOTICE: error_code: 14, error_level: DEBUG1
NOTICE: error_code: 15, error_level: LOG
NOTICE: error_code: 16, error_level: LOG_SERVER_ONLY
NOTICE: error_code: 17, error_level: INFO
NOTICE: error_code: 18, error_level: NOTICE
NOTICE: error_code: 19, error_level: WARNING
NOTICE: error_code: 20, error_level: WARNING_CLIENT_ONLY
NOTICE: error_code: 21, error_level: ERROR
NOTICE: error_code: 22, error_level: FATAL
NOTICE: error_code: 23, error_level: PANIC
NOTICE: error_code: 24, error_level: <NULL>
DROP EXTENSION pg_stat_monitor;

View File

@ -0,0 +1,25 @@
CREATE EXTENSION pg_stat_monitor;
DO $$
DECLARE
i integer;
BEGIN
FOR i IN 10..24 LOOP
RAISE NOTICE 'error_code: %, error_level: %', i, decode_error_level(i);
END LOOP;
END $$;
NOTICE: error_code: 10, error_level: DEBUG5
NOTICE: error_code: 11, error_level: DEBUG4
NOTICE: error_code: 12, error_level: DEBUG3
NOTICE: error_code: 13, error_level: DEBUG2
NOTICE: error_code: 14, error_level: DEBUG1
NOTICE: error_code: 15, error_level: LOG
NOTICE: error_code: 16, error_level: LOG_SERVER_ONLY
NOTICE: error_code: 17, error_level: INFO
NOTICE: error_code: 18, error_level: NOTICE
NOTICE: error_code: 19, error_level: WARNING
NOTICE: error_code: 20, error_level: ERROR
NOTICE: error_code: 21, error_level: FATAL
NOTICE: error_code: 22, error_level: PANIC
NOTICE: error_code: 23, error_level: <NULL>
NOTICE: error_code: 24, error_level: <NULL>
DROP EXTENSION pg_stat_monitor;

View File

@ -0,0 +1,54 @@
CREATE EXTENSION pg_stat_monitor;
SET pg_stat_monitor.pgsm_track='all';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
CREATE OR REPLACE FUNCTION test() RETURNS VOID AS
$$
BEGIN
PERFORM 1 + 2;
END; $$ language plpgsql;
CREATE OR REPLACE FUNCTION test2() RETURNS VOID AS
$$
BEGIN
PERFORM 1 + 2;
END; $$ language plpgsql;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT test();
test
------
(1 row)
SELECT test2();
test2
-------
(1 row)
SELECT 1 + 2;
?column?
----------
3
(1 row)
SELECT left(query, 15) as query, calls, top_query, pgsm_query_id FROM pg_stat_monitor ORDER BY query, top_query COLLATE "C";
query | calls | top_query | pgsm_query_id
-----------------+-------+-----------------+----------------------
SELECT 1 + 2 | 1 | SELECT test(); | 5193804135051352284
SELECT 1 + 2 | 1 | SELECT test2(); | 5193804135051352284
SELECT 1 + 2 | 1 | | 5193804135051352284
SELECT pg_stat_ | 1 | | 689150021118383254
SELECT test() | 1 | | -6801876889834540522
SELECT test2() | 1 | | 369102705908374543
(6 rows)
DROP EXTENSION pg_stat_monitor;

View File

@ -21,22 +21,21 @@ RAISE WARNING 'warning message';
END $$;
WARNING: warning message
SELECT query, elevel, sqlcode, message FROM pg_stat_monitor ORDER BY query COLLATE "C",elevel;
query | elevel | sqlcode | message
-----------------------------------------------------------------------------------------------+--------+---------+-----------------------------------
ELECET * FROM unknown; | 21 | 42601 | syntax error at or near "ELECET"
SELECT * FROM unknown; | 21 | 42P01 | relation "unknown" does not exist
SELECT 1/0; | 21 | 22012 | division by zero
SELECT pg_stat_monitor_reset() | 0 | |
SELECT query, elevel, sqlcode, message FROM pg_stat_monitor ORDER BY query COLLATE "C",elevel | 0 | |
do $$ +| 0 | |
BEGIN +| | |
RAISE WARNING 'warning message'; +| | |
END $$ | | |
do $$ +| 19 | 01000 | warning message
BEGIN +| | |
RAISE WARNING 'warning message'; +| | |
END $$; | | |
(7 rows)
query | elevel | sqlcode | message
----------------------------------+--------+---------+-----------------------------------
ELECET * FROM unknown; | 20 | 42601 | syntax error at or near "ELECET"
SELECT * FROM unknown; | 20 | 42P01 | relation "unknown" does not exist
SELECT 1/0; | 20 | 22012 | division by zero
SELECT pg_stat_monitor_reset() | 0 | |
do $$ +| 0 | |
BEGIN +| | |
RAISE WARNING 'warning message';+| | |
END $$ | | |
do $$ +| 19 | 01000 | warning message
BEGIN +| | |
RAISE WARNING 'warning message';+| | |
END $$; | | |
(6 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset

View File

@ -21,22 +21,21 @@ RAISE WARNING 'warning message';
END $$;
WARNING: warning message
SELECT query, elevel, sqlcode, message FROM pg_stat_monitor ORDER BY query COLLATE "C",elevel;
query | elevel | sqlcode | message
-----------------------------------------------------------------------------------------------+--------+---------+-----------------------------------
ELECET * FROM unknown; | 20 | 42601 | syntax error at or near "ELECET"
SELECT * FROM unknown; | 20 | 42P01 | relation "unknown" does not exist
SELECT 1/0; | 20 | 22012 | division by zero
SELECT pg_stat_monitor_reset() | 0 | |
SELECT query, elevel, sqlcode, message FROM pg_stat_monitor ORDER BY query COLLATE "C",elevel | 0 | |
do $$ +| 0 | |
BEGIN +| | |
RAISE WARNING 'warning message'; +| | |
END $$ | | |
do $$ +| 19 | 01000 | warning message
BEGIN +| | |
RAISE WARNING 'warning message'; +| | |
END $$; | | |
(7 rows)
query | elevel | sqlcode | message
----------------------------------+--------+---------+-----------------------------------
ELECET * FROM unknown; | 21 | 42601 | syntax error at or near "ELECET"
SELECT * FROM unknown; | 21 | 42P01 | relation "unknown" does not exist
SELECT 1/0; | 21 | 22012 | division by zero
SELECT pg_stat_monitor_reset() | 0 | |
do $$ +| 0 | |
BEGIN +| | |
RAISE WARNING 'warning message';+| | |
END $$ | | |
do $$ +| 19 | 01000 | warning message
BEGIN +| | |
RAISE WARNING 'warning message';+| | |
END $$; | | |
(6 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset

View File

@ -0,0 +1,42 @@
CREATE EXTENSION pg_stat_monitor;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT 1/0; -- divide by zero
ERROR: division by zero
SELECT * FROM unknown; -- unknown table
ERROR: relation "unknown" does not exist
LINE 1: SELECT * FROM unknown;
^
ELECET * FROM unknown; -- syntax error
ERROR: syntax error at or near "ELECET"
LINE 1: ELECET * FROM unknown;
^
do $$
BEGIN
RAISE WARNING 'warning message';
END $$;
WARNING: warning message
SELECT query, elevel, sqlcode, message FROM pg_stat_monitor ORDER BY query COLLATE "C",elevel;
query | elevel | sqlcode | message
----------------------------------+--------+---------+-----------------------------------
ELECET * FROM unknown; | 20 | 42601 | syntax error at or near "ELECET"
SELECT * FROM unknown; | 20 | 42P01 | relation "unknown" does not exist
SELECT 1/0; | 20 | 22012 | division by zero
SELECT pg_stat_monitor_reset() | 0 | |
do $$ +| 0 | 01000 | warning message
BEGIN +| | |
RAISE WARNING 'warning message';+| | |
END $$; | | |
(5 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DROP EXTENSION pg_stat_monitor;

View File

@ -1,4 +1,4 @@
Drop Table if exists Company;
DROP TABLE IF EXISTS Company;
NOTICE: table "company" does not exist, skipping
CREATE TABLE Company(
ID INT PRIMARY KEY NOT NULL,
@ -15,16 +15,15 @@ INSERT INTO Company(ID, Name) VALUES (1, 'Percona');
INSERT INTO Company(ID, Name) VALUES (1, 'Percona');
ERROR: duplicate key value violates unique constraint "company_pkey"
DETAIL: Key (id)=(1) already exists.
Drop Table if exists Company;
DROP TABLE IF EXISTS Company;
SELECT query, elevel, sqlcode, message FROM pg_stat_monitor ORDER BY query COLLATE "C",elevel;
query | elevel | sqlcode | message
-----------------------------------------------------------------------------------------------+--------+---------+---------------------------------------------------------------
Drop Table if exists Company | 0 | |
INSERT INTO Company(ID, Name) VALUES (1, 'Percona') | 0 | |
INSERT INTO Company(ID, Name) VALUES (1, 'Percona'); | 21 | 23505 | duplicate key value violates unique constraint "company_pkey"
SELECT pg_stat_monitor_reset() | 0 | |
SELECT query, elevel, sqlcode, message FROM pg_stat_monitor ORDER BY query COLLATE "C",elevel | 0 | |
(5 rows)
query | elevel | sqlcode | message
-------------------------------------------------------+--------+---------+---------------------------------------------------------------
DROP TABLE IF EXISTS Company | 0 | |
INSERT INTO Company(ID, Name) VALUES (1, 'Percona') | 0 | |
INSERT INTO Company(ID, Name) VALUES (1, 'Percona'); | 21 | 23505 | duplicate key value violates unique constraint "company_pkey"
SELECT pg_stat_monitor_reset() | 0 | |
(4 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset

View File

@ -1,4 +1,4 @@
Drop Table if exists Company;
DROP TABLE IF EXISTS Company;
NOTICE: table "company" does not exist, skipping
CREATE TABLE Company(
ID INT PRIMARY KEY NOT NULL,
@ -15,16 +15,15 @@ INSERT INTO Company(ID, Name) VALUES (1, 'Percona');
INSERT INTO Company(ID, Name) VALUES (1, 'Percona');
ERROR: duplicate key value violates unique constraint "company_pkey"
DETAIL: Key (id)=(1) already exists.
Drop Table if exists Company;
DROP TABLE IF EXISTS Company;
SELECT query, elevel, sqlcode, message FROM pg_stat_monitor ORDER BY query COLLATE "C",elevel;
query | elevel | sqlcode | message
-----------------------------------------------------------------------------------------------+--------+---------+---------------------------------------------------------------
Drop Table if exists Company | 0 | |
INSERT INTO Company(ID, Name) VALUES (1, 'Percona') | 0 | |
INSERT INTO Company(ID, Name) VALUES (1, 'Percona'); | 20 | 23505 | duplicate key value violates unique constraint "company_pkey"
SELECT pg_stat_monitor_reset() | 0 | |
SELECT query, elevel, sqlcode, message FROM pg_stat_monitor ORDER BY query COLLATE "C",elevel | 0 | |
(5 rows)
query | elevel | sqlcode | message
-------------------------------------------------------+--------+---------+---------------------------------------------------------------
DROP TABLE IF EXISTS Company | 0 | |
INSERT INTO Company(ID, Name) VALUES (1, 'Percona') | 0 | |
INSERT INTO Company(ID, Name) VALUES (1, 'Percona'); | 20 | 23505 | duplicate key value violates unique constraint "company_pkey"
SELECT pg_stat_monitor_reset() | 0 | |
(4 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset

View File

@ -0,0 +1,49 @@
DROP ROLE IF EXISTS su;
NOTICE: role "su" does not exist, skipping
CREATE USER su WITH SUPERUSER;
SET ROLE su;
CREATE EXTENSION pg_stat_monitor;
CREATE USER u1;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT routine_schema, routine_name, routine_type, data_type FROM information_schema.routines WHERE routine_schema = 'public' ORDER BY routine_name COLLATE "C";
routine_schema | routine_name | routine_type | data_type
----------------+--------------------------+--------------+-----------
public | decode_error_level | FUNCTION | text
public | get_cmd_type | FUNCTION | text
public | get_histogram_timings | FUNCTION | text
public | histogram | FUNCTION | record
public | pg_stat_monitor_internal | FUNCTION | record
public | pg_stat_monitor_reset | FUNCTION | void
public | pg_stat_monitor_version | FUNCTION | text
public | pgsm_create_11_view | FUNCTION | integer
public | pgsm_create_13_view | FUNCTION | integer
public | pgsm_create_14_view | FUNCTION | integer
public | pgsm_create_15_view | FUNCTION | integer
public | pgsm_create_17_view | FUNCTION | integer
public | pgsm_create_view | FUNCTION | integer
public | range | FUNCTION | ARRAY
(14 rows)
SET ROLE u1;
SELECT routine_schema, routine_name, routine_type, data_type FROM information_schema.routines WHERE routine_schema = 'public' ORDER BY routine_name COLLATE "C";
routine_schema | routine_name | routine_type | data_type
----------------+--------------------------+--------------+-----------
public | decode_error_level | FUNCTION | text
public | get_cmd_type | FUNCTION | text
public | get_histogram_timings | FUNCTION | text
public | histogram | FUNCTION | record
public | pg_stat_monitor_internal | FUNCTION | record
public | pg_stat_monitor_version | FUNCTION | text
public | range | FUNCTION | ARRAY
(7 rows)
SET ROLE su;
DROP USER u1;
DROP EXTENSION pg_stat_monitor;
DROP USER su;
ERROR: current user cannot be dropped

View File

@ -0,0 +1,43 @@
DROP ROLE IF EXISTS su;
NOTICE: role "su" does not exist, skipping
CREATE USER su WITH SUPERUSER;
SET ROLE su;
CREATE EXTENSION pg_stat_monitor;
CREATE USER u1;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT routine_schema, routine_name, routine_type, data_type FROM information_schema.routines WHERE routine_schema = 'public' ORDER BY routine_name COLLATE "C";
routine_schema | routine_name | routine_type | data_type
----------------+--------------------------+--------------+-----------
public | decode_error_level | FUNCTION | text
public | get_cmd_type | FUNCTION | text
public | get_histogram_timings | FUNCTION | text
public | histogram | FUNCTION | record
public | pg_stat_monitor_internal | FUNCTION | record
public | pg_stat_monitor_reset | FUNCTION | void
public | pg_stat_monitor_version | FUNCTION | text
public | pgsm_create_11_view | FUNCTION | integer
public | pgsm_create_13_view | FUNCTION | integer
public | pgsm_create_14_view | FUNCTION | integer
public | pgsm_create_15_view | FUNCTION | integer
public | pgsm_create_17_view | FUNCTION | integer
public | pgsm_create_view | FUNCTION | integer
public | range | FUNCTION | ARRAY
(14 rows)
SET ROLE u1;
SELECT routine_schema, routine_name, routine_type, data_type FROM information_schema.routines WHERE routine_schema = 'public' ORDER BY routine_name COLLATE "C";
routine_schema | routine_name | routine_type | data_type
----------------+-------------------------+--------------+-----------
public | histogram | FUNCTION | record
public | pg_stat_monitor_reset | FUNCTION | void
public | pg_stat_monitor_version | FUNCTION | text
(3 rows)
SET ROLE su;
DROP USER u1;
DROP EXTENSION pg_stat_monitor;

View File

@ -1,40 +1,41 @@
CREATE EXTENSION pg_stat_monitor;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
select pg_sleep(.5);
pg_sleep
----------
(1 row)
SELECT * FROM pg_stat_monitor_settings ORDER BY name COLLATE "C";
name | value | default_value | description | minimum | maximum | options | restart
------------------------------------------+--------+---------------+----------------------------------------------------------------------------------------------------------+---------+------------+----------------+---------
pg_stat_monitor.pgsm_bucket_time | 60 | 60 | Sets the time in seconds per bucket. | 1 | 2147483647 | | yes
pg_stat_monitor.pgsm_enable_query_plan | no | no | Enable/Disable query plan monitoring | | | yes, no | no
pg_stat_monitor.pgsm_extract_comments | no | no | Enable/Disable extracting comments from queries. | | | yes, no | no
pg_stat_monitor.pgsm_histogram_buckets | 10 | 10 | Sets the maximum number of histogram buckets | 2 | 50 | | yes
pg_stat_monitor.pgsm_histogram_max | 100000 | 100000 | Sets the time in millisecond. | 10 | 2147483647 | | yes
pg_stat_monitor.pgsm_histogram_min | 0 | 0 | Sets the time in millisecond. | 0 | 2147483647 | | yes
pg_stat_monitor.pgsm_max | 100 | 100 | Sets the maximum size of shared memory in (MB) used for statement's metadata tracked by pg_stat_monitor. | 1 | 1000 | | yes
pg_stat_monitor.pgsm_max_buckets | 10 | 10 | Sets the maximum number of buckets. | 1 | 10 | | yes
pg_stat_monitor.pgsm_normalized_query | no | no | Selects whether save query in normalized format. | | | yes, no | no
pg_stat_monitor.pgsm_overflow_target | 1 | 1 | Sets the overflow target for pg_stat_monitor | 0 | 1 | | yes
pg_stat_monitor.pgsm_query_max_len | 2048 | 2048 | Sets the maximum length of query. | 1024 | 2147483647 | | yes
pg_stat_monitor.pgsm_query_shared_buffer | 20 | 20 | Sets the maximum size of shared memory in (MB) used for query tracked by pg_stat_monitor. | 1 | 10000 | | yes
pg_stat_monitor.pgsm_track | top | top | Selects which statements are tracked by pg_stat_monitor. | | | none, top, all | no
pg_stat_monitor.pgsm_track_planning | no | no | Selects whether planning statistics are tracked. | | | yes, no | no
pg_stat_monitor.pgsm_track_utility | yes | yes | Selects whether utility commands are tracked. | | | yes, no | no
(15 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT name
, setting
, unit
, context
, vartype
, source
, min_val
, max_val
, enumvals
, boot_val
, reset_val
, pending_restart
FROM pg_settings
WHERE name LIKE 'pg_stat_monitor.%'
ORDER
BY name
COLLATE "C";
name | setting | unit | context | vartype | source | min_val | max_val | enumvals | boot_val | reset_val | pending_restart
----------------------------------------------+---------+------+------------+---------+---------+---------+------------+----------------+----------+-----------+-----------------
pg_stat_monitor.pgsm_bucket_time | 60 | s | postmaster | integer | default | 1 | 2147483647 | | 60 | 60 | f
pg_stat_monitor.pgsm_enable_overflow | on | | postmaster | bool | default | | | | on | on | f
pg_stat_monitor.pgsm_enable_pgsm_query_id | on | | user | bool | default | | | | on | on | f
pg_stat_monitor.pgsm_enable_query_plan | off | | user | bool | default | | | | off | off | f
pg_stat_monitor.pgsm_extract_comments | off | | user | bool | default | | | | off | off | f
pg_stat_monitor.pgsm_histogram_buckets | 20 | | postmaster | integer | default | 2 | 50 | | 20 | 20 | f
pg_stat_monitor.pgsm_histogram_max | 100000 | ms | postmaster | real | default | 10 | 5e+07 | | 100000 | 100000 | f
pg_stat_monitor.pgsm_histogram_min | 1 | ms | postmaster | real | default | 0 | 5e+07 | | 1 | 1 | f
pg_stat_monitor.pgsm_max | 256 | MB | postmaster | integer | default | 10 | 10240 | | 256 | 256 | f
pg_stat_monitor.pgsm_max_buckets | 10 | | postmaster | integer | default | 1 | 20000 | | 10 | 10 | f
pg_stat_monitor.pgsm_normalized_query | off | | user | bool | default | | | | off | off | f
pg_stat_monitor.pgsm_overflow_target | 1 | | postmaster | integer | default | 0 | 1 | | 1 | 1 | f
pg_stat_monitor.pgsm_query_max_len | 2048 | | postmaster | integer | default | 1024 | 2147483647 | | 2048 | 2048 | f
pg_stat_monitor.pgsm_query_shared_buffer | 20 | MB | postmaster | integer | default | 1 | 10000 | | 20 | 20 | f
pg_stat_monitor.pgsm_track | top | | user | enum | default | | | {none,top,all} | top | top | f
pg_stat_monitor.pgsm_track_application_names | on | | user | bool | default | | | | on | on | f
pg_stat_monitor.pgsm_track_planning | off | | user | bool | default | | | | off | off | f
pg_stat_monitor.pgsm_track_utility | on | | user | bool | default | | | | on | on | f
(18 rows)
DROP EXTENSION pg_stat_monitor;

View File

@ -1,39 +1,40 @@
CREATE EXTENSION pg_stat_monitor;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
select pg_sleep(.5);
pg_sleep
----------
(1 row)
SELECT * FROM pg_stat_monitor_settings ORDER BY name COLLATE "C";
name | value | default_value | description | minimum | maximum | options | restart
------------------------------------------+--------+---------------+----------------------------------------------------------------------------------------------------------+---------+------------+----------------+---------
pg_stat_monitor.pgsm_bucket_time | 60 | 60 | Sets the time in seconds per bucket. | 1 | 2147483647 | | yes
pg_stat_monitor.pgsm_enable_query_plan | no | no | Enable/Disable query plan monitoring | | | yes, no | no
pg_stat_monitor.pgsm_extract_comments | no | no | Enable/Disable extracting comments from queries. | | | yes, no | no
pg_stat_monitor.pgsm_histogram_buckets | 10 | 10 | Sets the maximum number of histogram buckets | 2 | 50 | | yes
pg_stat_monitor.pgsm_histogram_max | 100000 | 100000 | Sets the time in millisecond. | 10 | 2147483647 | | yes
pg_stat_monitor.pgsm_histogram_min | 0 | 0 | Sets the time in millisecond. | 0 | 2147483647 | | yes
pg_stat_monitor.pgsm_max | 100 | 100 | Sets the maximum size of shared memory in (MB) used for statement's metadata tracked by pg_stat_monitor. | 1 | 1000 | | yes
pg_stat_monitor.pgsm_max_buckets | 10 | 10 | Sets the maximum number of buckets. | 1 | 10 | | yes
pg_stat_monitor.pgsm_normalized_query | no | no | Selects whether save query in normalized format. | | | yes, no | no
pg_stat_monitor.pgsm_overflow_target | 1 | 1 | Sets the overflow target for pg_stat_monitor | 0 | 1 | | yes
pg_stat_monitor.pgsm_query_max_len | 2048 | 2048 | Sets the maximum length of query. | 1024 | 2147483647 | | yes
pg_stat_monitor.pgsm_query_shared_buffer | 20 | 20 | Sets the maximum size of shared memory in (MB) used for query tracked by pg_stat_monitor. | 1 | 10000 | | yes
pg_stat_monitor.pgsm_track | top | top | Selects which statements are tracked by pg_stat_monitor. | | | none, top, all | no
pg_stat_monitor.pgsm_track_utility | yes | yes | Selects whether utility commands are tracked. | | | yes, no | no
(14 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT name
, setting
, unit
, context
, vartype
, source
, min_val
, max_val
, enumvals
, boot_val
, reset_val
, pending_restart
FROM pg_settings
WHERE name LIKE 'pg_stat_monitor.%'
ORDER
BY name
COLLATE "C";
name | setting | unit | context | vartype | source | min_val | max_val | enumvals | boot_val | reset_val | pending_restart
----------------------------------------------+---------+------+------------+---------+---------+---------+------------+----------------+----------+-----------+-----------------
pg_stat_monitor.pgsm_bucket_time | 60 | s | postmaster | integer | default | 1 | 2147483647 | | 60 | 60 | f
pg_stat_monitor.pgsm_enable_overflow | on | | postmaster | bool | default | | | | on | on | f
pg_stat_monitor.pgsm_enable_pgsm_query_id | on | | user | bool | default | | | | on | on | f
pg_stat_monitor.pgsm_enable_query_plan | off | | user | bool | default | | | | off | off | f
pg_stat_monitor.pgsm_extract_comments | off | | user | bool | default | | | | off | off | f
pg_stat_monitor.pgsm_histogram_buckets | 20 | | postmaster | integer | default | 2 | 50 | | 20 | 20 | f
pg_stat_monitor.pgsm_histogram_max | 100000 | ms | postmaster | real | default | 10 | 5e+07 | | 100000 | 100000 | f
pg_stat_monitor.pgsm_histogram_min | 1 | ms | postmaster | real | default | 0 | 5e+07 | | 1 | 1 | f
pg_stat_monitor.pgsm_max | 256 | MB | postmaster | integer | default | 10 | 10240 | | 256 | 256 | f
pg_stat_monitor.pgsm_max_buckets | 10 | | postmaster | integer | default | 1 | 20000 | | 10 | 10 | f
pg_stat_monitor.pgsm_normalized_query | off | | user | bool | default | | | | off | off | f
pg_stat_monitor.pgsm_overflow_target | 1 | | postmaster | integer | default | 0 | 1 | | 1 | 1 | f
pg_stat_monitor.pgsm_query_max_len | 2048 | | postmaster | integer | default | 1024 | 2147483647 | | 2048 | 2048 | f
pg_stat_monitor.pgsm_query_shared_buffer | 20 | MB | postmaster | integer | default | 1 | 10000 | | 20 | 20 | f
pg_stat_monitor.pgsm_track | top | | user | enum | default | | | {none,top,all} | top | top | f
pg_stat_monitor.pgsm_track_application_names | on | | user | bool | default | | | | on | on | f
pg_stat_monitor.pgsm_track_utility | on | | user | bool | default | | | | on | on | f
(17 rows)
DROP EXTENSION pg_stat_monitor;

View File

@ -0,0 +1,40 @@
CREATE EXTENSION pg_stat_monitor;
SELECT name
, setting
, unit
, context
, vartype
, source
, min_val
, max_val
, enumvals
, boot_val
, reset_val
, pending_restart
FROM pg_settings
WHERE name LIKE 'pg_stat_monitor.%'
ORDER
BY name
COLLATE "C";
name | setting | unit | context | vartype | source | min_val | max_val | enumvals | boot_val | reset_val | pending_restart
----------------------------------------------+---------+------+------------+---------+---------+---------+------------+----------------+----------+-----------+-----------------
pg_stat_monitor.pgsm_bucket_time | 60 | s | postmaster | integer | default | 1 | 2147483647 | | 60 | 60 | f
pg_stat_monitor.pgsm_enable_overflow | on | | postmaster | bool | default | | | | on | on | f
pg_stat_monitor.pgsm_enable_pgsm_query_id | on | | user | bool | default | | | | on | on | f
pg_stat_monitor.pgsm_enable_query_plan | off | | user | bool | default | | | | off | off | f
pg_stat_monitor.pgsm_extract_comments | off | | user | bool | default | | | | off | off | f
pg_stat_monitor.pgsm_histogram_buckets | 20 | | postmaster | integer | default | 2 | 50 | | 20 | 20 | f
pg_stat_monitor.pgsm_histogram_max | 100000 | | postmaster | real | default | 10 | 5e+07 | | 100000 | 100000 | f
pg_stat_monitor.pgsm_histogram_min | 1 | | postmaster | real | default | 0 | 5e+07 | | 1 | 1 | f
pg_stat_monitor.pgsm_max | 256 | MB | postmaster | integer | default | 10 | 10240 | | 256 | 256 | f
pg_stat_monitor.pgsm_max_buckets | 10 | | postmaster | integer | default | 1 | 20000 | | 10 | 10 | f
pg_stat_monitor.pgsm_normalized_query | off | | user | bool | default | | | | off | off | f
pg_stat_monitor.pgsm_overflow_target | 1 | | postmaster | integer | default | 0 | 1 | | 1 | 1 | f
pg_stat_monitor.pgsm_query_max_len | 2048 | | postmaster | integer | default | 1024 | 2147483647 | | 2048 | 2048 | f
pg_stat_monitor.pgsm_query_shared_buffer | 20 | MB | postmaster | integer | default | 1 | 10000 | | 20 | 20 | f
pg_stat_monitor.pgsm_track | top | | user | enum | default | | | {none,top,all} | top | top | f
pg_stat_monitor.pgsm_track_application_names | on | | user | bool | default | | | | on | on | f
pg_stat_monitor.pgsm_track_utility | on | | user | bool | default | | | | on | on | f
(17 rows)
DROP EXTENSION pg_stat_monitor;
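
A side note on the listing above: the context column separates per-session settings (user) from those that only change with a server restart (postmaster). A minimal illustrative query, using only the pg_settings columns already shown, to list the restart-only pg_stat_monitor settings:

-- Illustrative sketch: pg_stat_monitor settings that require a restart.
SELECT name, setting, unit, boot_val
  FROM pg_settings
 WHERE name LIKE 'pg_stat_monitor.%'
   AND context = 'postmaster'
 ORDER BY name COLLATE "C";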

View File

@ -45,18 +45,17 @@ INFO: Sleep 5 seconds
(1 row)
SELECT substr(query, 0,50) as query, calls, resp_calls FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | calls | resp_calls
---------------------------------------------------+-------+-----------------------
SELECT pg_sleep(i) | 5 | {0,0,0,0,0,0,3,2,0,0}
SELECT pg_stat_monitor_reset() | 1 | {1,0,0,0,0,0,0,0,0,0}
SELECT substr(query, 0,50) as query, calls, resp_ | 1 | {1,0,0,0,0,0,0,0,0,0}
Set pg_stat_monitor.pgsm_track='all' | 1 | {1,0,0,0,0,0,0,0,0,0}
select run_pg_sleep(5) | 1 | {0,0,0,0,0,0,0,0,1,0}
(5 rows)
query | calls | resp_calls
--------------------------------------+-------+-----------------------
SELECT pg_sleep(i) | 5 | {0,0,0,0,0,0,3,2,0,0}
SELECT pg_stat_monitor_reset() | 1 | {1,0,0,0,0,0,0,0,0,0}
Set pg_stat_monitor.pgsm_track='all' | 1 | {1,0,0,0,0,0,0,0,0,0}
select run_pg_sleep(5) | 1 | {0,0,0,0,0,0,0,0,1,0}
(4 rows)
select * from generate_histogram();
range | freq | bar
--------------------+------+--------------------------------------------------------------------------------------------
range | freq | bar
--------------------+------+--------------------------------
(0 - 3)} | 0 |
(3 - 10)} | 0 |
(10 - 31)} | 0 |

View File

@ -45,13 +45,13 @@ INFO: Sleep 5 seconds
(1 row)
SELECT substr(query, 0,50) as query, calls, resp_calls FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | calls | resp_calls
---------------------------------------------------+-------+-----------------------
SELECT pg_sleep(i) | 5 | {0,0,0,0,0,0,3,2,0,0}
SELECT pg_stat_monitor_reset() | 1 | {1,0,0,0,0,0,0,0,0,0}
SELECT substr(query, 0,50) as query, calls, resp_ | 1 | {1,0,0,0,0,0,0,0,0,0}
Set pg_stat_monitor.pgsm_track='all' | 1 | {1,0,0,0,0,0,0,0,0,0}
select run_pg_sleep(5) | 1 | {0,0,0,0,0,0,0,0,1,0}
query | calls | resp_calls
--------------------------------------+-------+-----------------------
SELECT pg_sleep(i) | 2 | {0,0,0,0,0,0,2,0,0,0}
SELECT pg_sleep(i) | 3 | {0,0,0,0,0,0,1,2,0,0}
SELECT pg_stat_monitor_reset() | 1 | {1,0,0,0,0,0,0,0,0,0}
Set pg_stat_monitor.pgsm_track='all' | 1 | {1,0,0,0,0,0,0,0,0,0}
select run_pg_sleep(5) | 1 | {0,0,0,0,0,0,0,0,1,0}
(5 rows)
select * from generate_histogram();
@ -63,8 +63,8 @@ select * from generate_histogram();
(31 - 100)} | 0 |
(100 - 316)} | 0 |
(316 - 1000)} | 0 |
(1000 - 3162)} | 3 | ■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
(3162 - 10000)} | 2 | ■■■■■■■■■■■■■■■■■■■■
(1000 - 3162)} | 1 | ■■■■■■■■■■■■■■■
(3162 - 10000)} | 2 | ■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
(10000 - 31622)} | 0 |
(31622 - 100000)} | 0 |
(10 rows)

View File

@ -45,15 +45,12 @@ INFO: Sleep 5 seconds
(1 row)
SELECT substr(query, 0,50) as query, calls, resp_calls FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | calls | resp_calls
---------------------------------------------------+-------+-----------------------
SELECT pg_sleep(i) | 4 | {0,0,0,0,0,0,3,1,0,0}
SELECT pg_sleep(i) | 1 | {0,0,0,0,0,0,0,1,0,0}
SELECT pg_stat_monitor_reset() | 1 | {1,0,0,0,0,0,0,0,0,0}
SELECT substr(query, 0,50) as query, calls, resp_ | 1 | {1,0,0,0,0,0,0,0,0,0}
Set pg_stat_monitor.pgsm_track='all' | 1 | {1,0,0,0,0,0,0,0,0,0}
select run_pg_sleep(5) | 1 | {0,0,0,0,0,0,0,0,1,0}
(6 rows)
query | calls | resp_calls
--------------------------------+-------+-----------------------
SELECT pg_sleep(i) | 5 | {0,0,0,0,0,0,3,2,0,0}
SELECT pg_stat_monitor_reset() | 1 | {1,0,0,0,0,0,0,0,0,0}
select run_pg_sleep(5) | 1 | {0,0,0,0,0,0,0,0,1,0}
(3 rows)
select * from generate_histogram();
range | freq | bar
@ -65,7 +62,7 @@ select * from generate_histogram();
(100 - 316)} | 0 |
(316 - 1000)} | 0 |
(1000 - 3162)} | 3 | ■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
(3162 - 10000)} | 1 | ■■■■■■■■■■
(3162 - 10000)} | 2 | ■■■■■■■■■■■■■■■■■■■■
(10000 - 31622)} | 0 |
(31622 - 100000)} | 0 |
(10 rows)
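
The bucket ranges printed by generate_histogram() above (3, 10, 31, 100, 316, 1000, 3162, 10000, 31622, 100000) look logarithmically spaced between pgsm_histogram_min and pgsm_histogram_max for 10 buckets. A rough sketch, under that assumption only, that reproduces the printed edges:

-- Assumption: edges are log-spaced over 10 buckets up to
-- pgsm_histogram_max (100000 ms), matching the ranges shown above.
SELECT floor(power(10, g / 2.0))::int AS bucket_edge_ms
  FROM generate_series(0, 10) AS g;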

View File

@ -0,0 +1,6 @@
--
-- Statement level tracking
--
SELECT setting::integer < 140000 AS skip_test FROM pg_settings where name = 'server_version_num' \gset
\if :skip_test
\quit

View File

@ -0,0 +1,326 @@
--
-- Statement level tracking
--
SELECT setting::integer < 140000 AS skip_test FROM pg_settings where name = 'server_version_num' \gset
\if :skip_test
\quit
\endif
CREATE EXTENSION pg_stat_monitor;
SET pg_stat_monitor.pgsm_track_utility = TRUE;
SET pg_stat_monitor.pgsm_normalized_query = TRUE;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
-- DO block - top-level tracking.
CREATE TABLE stats_track_tab (x int);
SET pg_stat_monitor.pgsm_track = 'top';
DELETE FROM stats_track_tab;
DO $$
BEGIN
DELETE FROM stats_track_tab;
END;
$$ LANGUAGE plpgsql;
SELECT toplevel, calls, query FROM pg_stat_monitor
WHERE query LIKE '%DELETE%' ORDER BY query COLLATE "C", toplevel;
toplevel | calls | query
----------+-------+--------------------------------
t | 1 | DELETE FROM stats_track_tab
t | 1 | DO $$ +
| | BEGIN +
| | DELETE FROM stats_track_tab;+
| | END; +
| | $$ LANGUAGE plpgsql
(2 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
-- DO block - all-level tracking.
SET pg_stat_monitor.pgsm_track = 'all';
DELETE FROM stats_track_tab;
DO $$
BEGIN
DELETE FROM stats_track_tab;
END; $$;
DO LANGUAGE plpgsql $$
BEGIN
-- this is a SELECT
PERFORM 'hello world'::TEXT;
END; $$;
SELECT toplevel, calls, query FROM pg_stat_monitor
ORDER BY query COLLATE "C", toplevel;
toplevel | calls | query
----------+-------+----------------------------------------
f | 1 | DELETE FROM stats_track_tab
t | 1 | DELETE FROM stats_track_tab
t | 1 | DO $$ +
| | BEGIN +
| | DELETE FROM stats_track_tab; +
| | END; $$
t | 1 | DO LANGUAGE plpgsql $$ +
| | BEGIN +
| | -- this is a SELECT +
| | PERFORM 'hello world'::TEXT; +
| | END; $$
f | 1 | SELECT $1::TEXT
t | 1 | SELECT pg_stat_monitor_reset()
t | 1 | SET pg_stat_monitor.pgsm_track = 'all'
(7 rows)
-- DO block - top-level tracking without utility.
SET pg_stat_monitor.pgsm_track = 'top';
SET pg_stat_monitor.pgsm_track_utility = FALSE;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DELETE FROM stats_track_tab;
DO $$
BEGIN
DELETE FROM stats_track_tab;
END; $$;
DO LANGUAGE plpgsql $$
BEGIN
-- this is a SELECT
PERFORM 'hello world'::TEXT;
END; $$;
SELECT toplevel, calls, query FROM pg_stat_monitor
ORDER BY query COLLATE "C", toplevel;
toplevel | calls | query
----------+-------+--------------------------------
t | 2 | DELETE FROM stats_track_tab
t | 1 | SELECT $1::TEXT
t | 1 | SELECT pg_stat_monitor_reset()
(3 rows)
-- DO block - all-level tracking without utility.
SET pg_stat_monitor.pgsm_track = 'all';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DELETE FROM stats_track_tab;
DO $$
BEGIN
DELETE FROM stats_track_tab;
END; $$;
DO LANGUAGE plpgsql $$
BEGIN
-- this is a SELECT
PERFORM 'hello world'::TEXT;
END; $$;
SELECT toplevel, calls, query FROM pg_stat_monitor
ORDER BY query COLLATE "C", toplevel;
toplevel | calls | query
----------+-------+--------------------------------
t | 2 | DELETE FROM stats_track_tab
t | 1 | SELECT $1::TEXT
t | 1 | SELECT pg_stat_monitor_reset()
(3 rows)
-- PL/pgSQL function - top-level tracking.
SET pg_stat_monitor.pgsm_track = 'top';
SET pg_stat_monitor.pgsm_track_utility = FALSE;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
CREATE FUNCTION PLUS_TWO(i INTEGER) RETURNS INTEGER AS $$
DECLARE
r INTEGER;
BEGIN
SELECT (i + 1 + 1.0)::INTEGER INTO r;
RETURN r;
END; $$ LANGUAGE plpgsql;
SELECT PLUS_TWO(3);
plus_two
----------
5
(1 row)
SELECT PLUS_TWO(7);
plus_two
----------
9
(1 row)
-- SQL function --- use LIMIT to keep it from being inlined
CREATE FUNCTION PLUS_ONE(i INTEGER) RETURNS INTEGER AS
$$ SELECT (i + 1.0)::INTEGER LIMIT 1 $$ LANGUAGE SQL;
SELECT PLUS_ONE(8);
plus_one
----------
9
(1 row)
SELECT PLUS_ONE(10);
plus_one
----------
11
(1 row)
SELECT calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C";
calls | rows | query
-------+------+--------------------------------
2 | 2 | SELECT PLUS_ONE($1)
2 | 2 | SELECT PLUS_TWO($1)
1 | 1 | SELECT pg_stat_monitor_reset()
(3 rows)
-- immutable SQL function --- can be executed at plan time
CREATE FUNCTION PLUS_THREE(i INTEGER) RETURNS INTEGER AS
$$ SELECT i + 3 LIMIT 1 $$ IMMUTABLE LANGUAGE SQL;
SELECT PLUS_THREE(8);
plus_three
------------
11
(1 row)
SELECT PLUS_THREE(10);
plus_three
------------
13
(1 row)
SELECT toplevel, calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C";
toplevel | calls | rows | query
----------+-------+------+---------------------------------------------------------------------------
t | 2 | 2 | SELECT PLUS_ONE($1)
t | 2 | 2 | SELECT PLUS_THREE($1)
t | 2 | 2 | SELECT PLUS_TWO($1)
t | 1 | 3 | SELECT calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C"
f | 2 | 2 | SELECT i + $2 LIMIT $3
t | 1 | 1 | SELECT pg_stat_monitor_reset()
(6 rows)
-- PL/pgSQL function - all-level tracking.
SET pg_stat_monitor.pgsm_track = 'all';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
-- we drop and recreate the functions to avoid any caching funnies
DROP FUNCTION PLUS_ONE(INTEGER);
DROP FUNCTION PLUS_TWO(INTEGER);
DROP FUNCTION PLUS_THREE(INTEGER);
-- PL/pgSQL function
CREATE FUNCTION PLUS_TWO(i INTEGER) RETURNS INTEGER AS $$
DECLARE
r INTEGER;
BEGIN
SELECT (i + 1 + 1.0)::INTEGER INTO r;
RETURN r;
END; $$ LANGUAGE plpgsql;
SELECT PLUS_TWO(-1);
plus_two
----------
1
(1 row)
SELECT PLUS_TWO(2);
plus_two
----------
4
(1 row)
-- SQL function --- use LIMIT to keep it from being inlined
CREATE FUNCTION PLUS_ONE(i INTEGER) RETURNS INTEGER AS
$$ SELECT (i + 1.0)::INTEGER LIMIT 1 $$ LANGUAGE SQL;
SELECT PLUS_ONE(3);
plus_one
----------
4
(1 row)
SELECT PLUS_ONE(1);
plus_one
----------
2
(1 row)
SELECT calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C";
calls | rows | query
-------+------+-----------------------------------
2 | 2 | SELECT (i + $2 + $3)::INTEGER
2 | 2 | SELECT (i + $2)::INTEGER LIMIT $3
2 | 2 | SELECT PLUS_ONE($1)
2 | 2 | SELECT PLUS_TWO($1)
1 | 1 | SELECT pg_stat_monitor_reset()
(5 rows)
-- immutable SQL function --- can be executed at plan time
CREATE FUNCTION PLUS_THREE(i INTEGER) RETURNS INTEGER AS
$$ SELECT i + 3 LIMIT 1 $$ IMMUTABLE LANGUAGE SQL;
SELECT PLUS_THREE(8);
plus_three
------------
11
(1 row)
SELECT PLUS_THREE(10);
plus_three
------------
13
(1 row)
SELECT toplevel, calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C";
toplevel | calls | rows | query
----------+-------+------+---------------------------------------------------------------------------
f | 2 | 2 | SELECT (i + $2 + $3)::INTEGER
f | 2 | 2 | SELECT (i + $2)::INTEGER LIMIT $3
t | 2 | 2 | SELECT PLUS_ONE($1)
t | 2 | 2 | SELECT PLUS_THREE($1)
t | 2 | 2 | SELECT PLUS_TWO($1)
t | 1 | 5 | SELECT calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C"
f | 2 | 2 | SELECT i + $2 LIMIT $3
t | 1 | 1 | SELECT pg_stat_monitor_reset()
(8 rows)
--
-- pg_stat_monitor.pgsm_track = none
--
SET pg_stat_monitor.pgsm_track = 'none';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT 1 AS "one";
one
-----
1
(1 row)
SELECT 1 + 1 AS "two";
two
-----
2
(1 row)
SELECT calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C";
calls | rows | query
-------+------+-------
(0 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DROP EXTENSION pg_stat_monitor;
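
The expected output above exercises pgsm_track = 'top' versus 'all': with 'all', statements executed inside DO blocks and PL/pgSQL functions are recorded with toplevel = f. A minimal illustrative query for listing only those nested statements:

-- Illustrative sketch: show nested (non-top-level) statements captured
-- when pg_stat_monitor.pgsm_track = 'all'.
SET pg_stat_monitor.pgsm_track = 'all';
SELECT toplevel, calls, query
  FROM pg_stat_monitor
 WHERE NOT toplevel
 ORDER BY query COLLATE "C";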

View File

@ -0,0 +1,325 @@
--
-- Statement level tracking
--
SELECT setting::integer < 140000 AS skip_test FROM pg_settings where name = 'server_version_num' \gset
\if :skip_test
\quit
\endif
CREATE EXTENSION pg_stat_monitor;
SET pg_stat_monitor.pgsm_track_utility = TRUE;
SET pg_stat_monitor.pgsm_normalized_query = TRUE;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
-- DO block - top-level tracking.
CREATE TABLE stats_track_tab (x int);
SET pg_stat_monitor.pgsm_track = 'top';
DELETE FROM stats_track_tab;
DO $$
BEGIN
DELETE FROM stats_track_tab;
END;
$$ LANGUAGE plpgsql;
SELECT toplevel, calls, query FROM pg_stat_monitor
WHERE query LIKE '%DELETE%' ORDER BY query COLLATE "C", toplevel;
toplevel | calls | query
----------+-------+--------------------------------
t | 1 | DELETE FROM stats_track_tab
t | 1 | DO $$ +
| | BEGIN +
| | DELETE FROM stats_track_tab;+
| | END; +
| | $$ LANGUAGE plpgsql
(2 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
-- DO block - all-level tracking.
SET pg_stat_monitor.pgsm_track = 'all';
DELETE FROM stats_track_tab;
DO $$
BEGIN
DELETE FROM stats_track_tab;
END; $$;
DO LANGUAGE plpgsql $$
BEGIN
-- this is a SELECT
PERFORM 'hello world'::TEXT;
END; $$;
SELECT toplevel, calls, query FROM pg_stat_monitor
ORDER BY query COLLATE "C", toplevel;
toplevel | calls | query
----------+-------+----------------------------------------
f | 1 | DELETE FROM stats_track_tab
t | 1 | DELETE FROM stats_track_tab
t | 1 | DO $$ +
| | BEGIN +
| | DELETE FROM stats_track_tab; +
| | END; $$
t | 1 | DO LANGUAGE plpgsql $$ +
| | BEGIN +
| | -- this is a SELECT +
| | PERFORM 'hello world'::TEXT; +
| | END; $$
f | 1 | SELECT $1::TEXT
t | 1 | SELECT pg_stat_monitor_reset()
t | 1 | SET pg_stat_monitor.pgsm_track = 'all'
(7 rows)
-- DO block - top-level tracking without utility.
SET pg_stat_monitor.pgsm_track = 'top';
SET pg_stat_monitor.pgsm_track_utility = FALSE;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DELETE FROM stats_track_tab;
DO $$
BEGIN
DELETE FROM stats_track_tab;
END; $$;
DO LANGUAGE plpgsql $$
BEGIN
-- this is a SELECT
PERFORM 'hello world'::TEXT;
END; $$;
SELECT toplevel, calls, query FROM pg_stat_monitor
ORDER BY query COLLATE "C", toplevel;
toplevel | calls | query
----------+-------+--------------------------------
t | 1 | DELETE FROM stats_track_tab
t | 1 | SELECT pg_stat_monitor_reset()
(2 rows)
-- DO block - all-level tracking without utility.
SET pg_stat_monitor.pgsm_track = 'all';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DELETE FROM stats_track_tab;
DO $$
BEGIN
DELETE FROM stats_track_tab;
END; $$;
DO LANGUAGE plpgsql $$
BEGIN
-- this is a SELECT
PERFORM 'hello world'::TEXT;
END; $$;
SELECT toplevel, calls, query FROM pg_stat_monitor
ORDER BY query COLLATE "C", toplevel;
toplevel | calls | query
----------+-------+--------------------------------
f | 1 | DELETE FROM stats_track_tab
t | 1 | DELETE FROM stats_track_tab
f | 1 | SELECT $1::TEXT
t | 1 | SELECT pg_stat_monitor_reset()
(4 rows)
-- PL/pgSQL function - top-level tracking.
SET pg_stat_monitor.pgsm_track = 'top';
SET pg_stat_monitor.pgsm_track_utility = FALSE;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
CREATE FUNCTION PLUS_TWO(i INTEGER) RETURNS INTEGER AS $$
DECLARE
r INTEGER;
BEGIN
SELECT (i + 1 + 1.0)::INTEGER INTO r;
RETURN r;
END; $$ LANGUAGE plpgsql;
SELECT PLUS_TWO(3);
plus_two
----------
5
(1 row)
SELECT PLUS_TWO(7);
plus_two
----------
9
(1 row)
-- SQL function --- use LIMIT to keep it from being inlined
CREATE FUNCTION PLUS_ONE(i INTEGER) RETURNS INTEGER AS
$$ SELECT (i + 1.0)::INTEGER LIMIT 1 $$ LANGUAGE SQL;
SELECT PLUS_ONE(8);
plus_one
----------
9
(1 row)
SELECT PLUS_ONE(10);
plus_one
----------
11
(1 row)
SELECT calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C";
calls | rows | query
-------+------+--------------------------------
2 | 2 | SELECT PLUS_ONE($1)
2 | 2 | SELECT PLUS_TWO($1)
1 | 1 | SELECT pg_stat_monitor_reset()
(3 rows)
-- immutable SQL function --- can be executed at plan time
CREATE FUNCTION PLUS_THREE(i INTEGER) RETURNS INTEGER AS
$$ SELECT i + 3 LIMIT 1 $$ IMMUTABLE LANGUAGE SQL;
SELECT PLUS_THREE(8);
plus_three
------------
11
(1 row)
SELECT PLUS_THREE(10);
plus_three
------------
13
(1 row)
SELECT toplevel, calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C";
toplevel | calls | rows | query
----------+-------+------+---------------------------------------------------------------------------
t | 2 | 2 | SELECT PLUS_ONE($1)
t | 2 | 2 | SELECT PLUS_THREE($1)
t | 2 | 2 | SELECT PLUS_TWO($1)
t | 1 | 3 | SELECT calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C"
t | 1 | 1 | SELECT pg_stat_monitor_reset()
(5 rows)
-- PL/pgSQL function - all-level tracking.
SET pg_stat_monitor.pgsm_track = 'all';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
-- we drop and recreate the functions to avoid any caching funnies
DROP FUNCTION PLUS_ONE(INTEGER);
DROP FUNCTION PLUS_TWO(INTEGER);
DROP FUNCTION PLUS_THREE(INTEGER);
-- PL/pgSQL function
CREATE FUNCTION PLUS_TWO(i INTEGER) RETURNS INTEGER AS $$
DECLARE
r INTEGER;
BEGIN
SELECT (i + 1 + 1.0)::INTEGER INTO r;
RETURN r;
END; $$ LANGUAGE plpgsql;
SELECT PLUS_TWO(-1);
plus_two
----------
1
(1 row)
SELECT PLUS_TWO(2);
plus_two
----------
4
(1 row)
-- SQL function --- use LIMIT to keep it from being inlined
CREATE FUNCTION PLUS_ONE(i INTEGER) RETURNS INTEGER AS
$$ SELECT (i + 1.0)::INTEGER LIMIT 1 $$ LANGUAGE SQL;
SELECT PLUS_ONE(3);
plus_one
----------
4
(1 row)
SELECT PLUS_ONE(1);
plus_one
----------
2
(1 row)
SELECT calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C";
calls | rows | query
-------+------+-----------------------------------
2 | 2 | SELECT (i + $2 + $3)::INTEGER
2 | 2 | SELECT (i + $2)::INTEGER LIMIT $3
2 | 2 | SELECT PLUS_ONE($1)
2 | 2 | SELECT PLUS_TWO($1)
1 | 1 | SELECT pg_stat_monitor_reset()
(5 rows)
-- immutable SQL function --- can be executed at plan time
CREATE FUNCTION PLUS_THREE(i INTEGER) RETURNS INTEGER AS
$$ SELECT i + 3 LIMIT 1 $$ IMMUTABLE LANGUAGE SQL;
SELECT PLUS_THREE(8);
plus_three
------------
11
(1 row)
SELECT PLUS_THREE(10);
plus_three
------------
13
(1 row)
SELECT toplevel, calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C";
toplevel | calls | rows | query
----------+-------+------+---------------------------------------------------------------------------
f | 2 | 2 | SELECT (i + $2 + $3)::INTEGER
f | 2 | 2 | SELECT (i + $2)::INTEGER LIMIT $3
t | 2 | 2 | SELECT PLUS_ONE($1)
t | 2 | 2 | SELECT PLUS_THREE($1)
t | 2 | 2 | SELECT PLUS_TWO($1)
t | 1 | 5 | SELECT calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C"
f | 2 | 2 | SELECT i + $2 LIMIT $3
t | 1 | 1 | SELECT pg_stat_monitor_reset()
(8 rows)
--
-- pg_stat_monitor.pgsm_track = none
--
SET pg_stat_monitor.pgsm_track = 'none';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT 1 AS "one";
one
-----
1
(1 row)
SELECT 1 + 1 AS "two";
two
-----
2
(1 row)
SELECT calls, rows, query FROM pg_stat_monitor ORDER BY query COLLATE "C";
calls | rows | query
-------+------+-------
(0 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DROP EXTENSION pg_stat_monitor;

View File

@ -1,442 +0,0 @@
CREATE EXTENSION pg_stat_monitor;
--
-- simple and compound statements
--
SET pg_stat_monitor.track_utility = FALSE;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT 1 AS "int";
int
-----
1
(1 row)
SELECT 'hello'
-- multiline
AS "text";
text
-------
hello
(1 row)
SELECT 'world' AS "text";
text
-------
world
(1 row)
-- transaction
BEGIN;
SELECT 1 AS "int";
int
-----
1
(1 row)
SELECT 'hello' AS "text";
text
-------
hello
(1 row)
COMMIT;
-- compound transaction
BEGIN \;
SELECT 2.0 AS "float" \;
SELECT 'world' AS "text" \;
COMMIT;
-- compound with empty statements and spurious leading spacing
\;\; SELECT 3 + 3 \;\;\; SELECT ' ' || ' !' \;\; SELECT 1 + 4 \;;
?column?
----------
5
(1 row)
-- non ;-terminated statements
SELECT 1 + 1 + 1 AS "add" \gset
SELECT :add + 1 + 1 AS "add" \;
SELECT :add + 1 + 1 AS "add" \gset
-- set operator
SELECT 1 AS i UNION SELECT 2 ORDER BY i;
i
---
1
2
(2 rows)
-- ? operator
select '{"a":1, "b":2}'::jsonb ? 'b';
?column?
----------
t
(1 row)
-- cte
WITH t(f) AS (
VALUES (1.0), (2.0)
)
SELECT f FROM t ORDER BY f;
f
-----
1.0
2.0
(2 rows)
-- prepared statement with parameter
PREPARE pgss_test (int) AS SELECT $1, 'test' LIMIT 1;
EXECUTE pgss_test(1);
?column? | ?column?
----------+----------
1 | test
(1 row)
DEALLOCATE pgss_test;
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | calls | rows
---------------------------------------------------------------------------+-------+------
BEGIN | 2 | 0
COMMIT | 2 | 0
PREPARE pgss_test (int) AS SELECT $1, $2 LIMIT $3 | 1 | 1
SELECT $1 | 2 | 2
SELECT $1 +| 4 | 4
+| |
AS "text" | |
SELECT $1 + $2 | 2 | 2
SELECT $1 + $2 + $3 AS "add" | 3 | 3
SELECT $1 AS "float" | 1 | 1
SELECT $1 AS i UNION SELECT $2 ORDER BY i | 1 | 2
SELECT $1 || $2 | 1 | 1
SELECT pg_stat_monitor_reset() | 1 | 1
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C" | 1 | 0
WITH t(f) AS ( +| 1 | 2
VALUES ($1), ($2) +| |
) +| |
SELECT f FROM t ORDER BY f | |
select $1::jsonb ? $2 | 1 | 1
(14 rows)
--
-- CRUD: INSERT SELECT UPDATE DELETE on test table
--
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
-- utility "create table" should not be shown
CREATE TEMP TABLE test (a int, b char(20));
INSERT INTO test VALUES(generate_series(1, 10), 'aaa');
UPDATE test SET b = 'bbb' WHERE a > 7;
DELETE FROM test WHERE a > 9;
-- explicit transaction
BEGIN;
UPDATE test SET b = '111' WHERE a = 1 ;
COMMIT;
BEGIN \;
UPDATE test SET b = '222' WHERE a = 2 \;
COMMIT ;
UPDATE test SET b = '333' WHERE a = 3 \;
UPDATE test SET b = '444' WHERE a = 4 ;
BEGIN \;
UPDATE test SET b = '555' WHERE a = 5 \;
UPDATE test SET b = '666' WHERE a = 6 \;
COMMIT ;
-- many INSERT values
INSERT INTO test (a, b) VALUES (1, 'a'), (2, 'b'), (3, 'c');
-- SELECT with constants
SELECT * FROM test WHERE a > 5 ORDER BY a ;
a | b
---+----------------------
6 | 666
7 | aaa
8 | bbb
9 | bbb
(4 rows)
SELECT *
FROM test
WHERE a > 9
ORDER BY a ;
a | b
---+---
(0 rows)
-- SELECT without constants
SELECT * FROM test ORDER BY a;
a | b
---+----------------------
1 | a
1 | 111
2 | b
2 | 222
3 | c
3 | 333
4 | 444
5 | 555
6 | 666
7 | aaa
8 | bbb
9 | bbb
(12 rows)
-- SELECT with IN clause
SELECT * FROM test WHERE a IN (1, 2, 3, 4, 5);
a | b
---+----------------------
1 | 111
2 | 222
3 | 333
4 | 444
5 | 555
1 | a
2 | b
3 | c
(8 rows)
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | calls | rows
---------------------------------------------------------------------------+-------+------
BEGIN | 3 | 0
COMMIT | 3 | 0
CREATE TEMP TABLE test (a int, b char(20)) | 1 | 0
DELETE FROM test WHERE a > $1 | 1 | 1
INSERT INTO test (a, b) VALUES ($1, $2), ($3, $4), ($5, $6) | 1 | 3
INSERT INTO test VALUES(generate_series($1, $2), $3) | 1 | 10
SELECT * FROM test ORDER BY a | 1 | 12
SELECT * FROM test WHERE a > $1 ORDER BY a | 2 | 4
SELECT * FROM test WHERE a IN ($1, $2, $3, $4, $5) | 1 | 8
SELECT pg_stat_monitor_reset() | 1 | 1
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C" | 1 | 0
UPDATE test SET b = $1 WHERE a = $2 | 6 | 6
UPDATE test SET b = $1 WHERE a > $2 | 1 | 3
(13 rows)
--
-- pg_stat_monitor.track = none
--
SET pg_stat_monitor.track = 'none';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT 1 AS "one";
one
-----
1
(1 row)
SELECT 1 + 1 AS "two";
two
-----
2
(1 row)
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | calls | rows
---------------------------------------------------------------------------+-------+------
SELECT $1 | 1 | 1
SELECT $1 + $2 | 1 | 1
SELECT pg_stat_monitor_reset() | 1 | 1
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C" | 1 | 0
(4 rows)
--
-- pg_stat_monitor.track = top
--
SET pg_stat_monitor.track = 'top';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DO LANGUAGE plpgsql $$
BEGIN
-- this is a SELECT
PERFORM 'hello world'::TEXT;
END;
$$;
-- PL/pgSQL function
CREATE FUNCTION PLUS_TWO(i INTEGER) RETURNS INTEGER AS $$
DECLARE
r INTEGER;
BEGIN
SELECT (i + 1 + 1.0)::INTEGER INTO r;
RETURN r;
END; $$ LANGUAGE plpgsql;
SELECT PLUS_TWO(3);
plus_two
----------
5
(1 row)
SELECT PLUS_TWO(7);
plus_two
----------
9
(1 row)
-- SQL function --- use LIMIT to keep it from being inlined
CREATE FUNCTION PLUS_ONE(i INTEGER) RETURNS INTEGER AS
$$ SELECT (i + 1.0)::INTEGER LIMIT 1 $$ LANGUAGE SQL;
SELECT PLUS_ONE(8);
plus_one
----------
9
(1 row)
SELECT PLUS_ONE(10);
plus_one
----------
11
(1 row)
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | calls | rows
---------------------------------------------------------------------------+-------+------
CREATE FUNCTION PLUS_ONE(i INTEGER) RETURNS INTEGER AS +| 1 | 0
$$ SELECT (i + 1.0)::INTEGER LIMIT 1 $$ LANGUAGE SQL | |
CREATE FUNCTION PLUS_TWO(i INTEGER) RETURNS INTEGER AS $$ +| 1 | 0
DECLARE +| |
r INTEGER; +| |
BEGIN +| |
SELECT (i + 1 + 1.0)::INTEGER INTO r; +| |
RETURN r; +| |
END; $$ LANGUAGE plpgsql | |
DO LANGUAGE plpgsql $$ +| 1 | 0
BEGIN +| |
-- this is a SELECT +| |
PERFORM 'hello world'::TEXT; +| |
END; +| |
$$ | |
SELECT $1 +| 1 | 1
+| |
AS "text" | |
SELECT (i + $2 + $3)::INTEGER | 2 | 2
SELECT (i + $2)::INTEGER LIMIT $3 | 2 | 2
SELECT PLUS_ONE($1) | 2 | 2
SELECT PLUS_TWO($1) | 2 | 2
SELECT pg_stat_monitor_reset() | 1 | 1
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C" | 1 | 0
(10 rows)
--
-- pg_stat_monitor.track = all
--
SET pg_stat_monitor.track = 'all';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
-- we drop and recreate the functions to avoid any caching funnies
DROP FUNCTION PLUS_ONE(INTEGER);
DROP FUNCTION PLUS_TWO(INTEGER);
-- PL/pgSQL function
CREATE FUNCTION PLUS_TWO(i INTEGER) RETURNS INTEGER AS $$
DECLARE
r INTEGER;
BEGIN
SELECT (i + 1 + 1.0)::INTEGER INTO r;
RETURN r;
END; $$ LANGUAGE plpgsql;
SELECT PLUS_TWO(-1);
plus_two
----------
1
(1 row)
SELECT PLUS_TWO(2);
plus_two
----------
4
(1 row)
-- SQL function --- use LIMIT to keep it from being inlined
CREATE FUNCTION PLUS_ONE(i INTEGER) RETURNS INTEGER AS
$$ SELECT (i + 1.0)::INTEGER LIMIT 1 $$ LANGUAGE SQL;
SELECT PLUS_ONE(3);
plus_one
----------
4
(1 row)
SELECT PLUS_ONE(1);
plus_one
----------
2
(1 row)
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | calls | rows
---------------------------------------------------------------------------+-------+------
CREATE FUNCTION PLUS_ONE(i INTEGER) RETURNS INTEGER AS +| 1 | 0
$$ SELECT (i + 1.0)::INTEGER LIMIT 1 $$ LANGUAGE SQL | |
CREATE FUNCTION PLUS_TWO(i INTEGER) RETURNS INTEGER AS $$ +| 1 | 0
DECLARE +| |
r INTEGER; +| |
BEGIN +| |
SELECT (i + 1 + 1.0)::INTEGER INTO r; +| |
RETURN r; +| |
END; $$ LANGUAGE plpgsql | |
DROP FUNCTION PLUS_ONE(INTEGER) | 1 | 0
DROP FUNCTION PLUS_TWO(INTEGER) | 1 | 0
SELECT (i + $2 + $3)::INTEGER | 2 | 2
SELECT (i + $2)::INTEGER LIMIT $3 | 2 | 2
SELECT PLUS_ONE($1) | 2 | 2
SELECT PLUS_TWO($1) | 2 | 2
SELECT pg_stat_monitor_reset() | 1 | 1
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C" | 1 | 0
(10 rows)
--
-- utility commands
--
SET pg_stat_monitor.track_utility = TRUE;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT 1;
?column?
----------
1
(1 row)
CREATE INDEX test_b ON test(b);
DROP TABLE test \;
DROP TABLE IF EXISTS test \;
DROP FUNCTION PLUS_ONE(INTEGER);
NOTICE: table "test" does not exist, skipping
DROP TABLE IF EXISTS test \;
DROP TABLE IF EXISTS test \;
DROP FUNCTION IF EXISTS PLUS_ONE(INTEGER);
NOTICE: table "test" does not exist, skipping
NOTICE: table "test" does not exist, skipping
NOTICE: function plus_one(pg_catalog.int4) does not exist, skipping
DROP FUNCTION PLUS_TWO(INTEGER);
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | calls | rows
---------------------------------------------------------------------------+-------+------
CREATE INDEX test_b ON test(b) | 1 | 0
DROP FUNCTION IF EXISTS PLUS_ONE(INTEGER) | 1 | 0
DROP FUNCTION PLUS_ONE(INTEGER) | 1 | 0
DROP FUNCTION PLUS_TWO(INTEGER) | 1 | 0
DROP TABLE IF EXISTS test | 3 | 0
DROP TABLE test | 1 | 0
SELECT $1 | 1 | 1
SELECT pg_stat_monitor_reset() | 1 | 1
SELECT query, calls, rows FROM pg_stat_monitor ORDER BY query COLLATE "C" | 1 | 0
(9 rows)
DROP EXTENSION pg_stat_monitor;

View File

@ -0,0 +1,117 @@
CREATE EXTENSION pg_stat_monitor;
CREATE DATABASE db1;
CREATE DATABASE db2;
\c db1
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int);
CREATE FUNCTION add(integer, integer) RETURNS integer
AS 'select $1 + $2;'
LANGUAGE SQL
IMMUTABLE
RETURNS NULL ON NULL INPUT;
\c db2
CREATE TABLE t1 (a int);
CREATE TABLE t3 (c int);
CREATE FUNCTION add(integer, integer) RETURNS integer
AS 'select $1 + $2;'
LANGUAGE SQL
IMMUTABLE
RETURNS NULL ON NULL INPUT;
\c contrib_regression
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
\c db1
SELECT * FROM t1;
a
---
(0 rows)
SELECT *, ADD(1, 2) FROM t1;
a | add
---+-----
(0 rows)
SELECT * FROM t2;
b
---
(0 rows)
-- Check that spaces and comments do not generate a different pgsm_query_id
SELECT * FROM t2 --WHATEVER;
;
b
---
(0 rows)
SELECT * FROM t2 /* ...
...
More comments to check for spaces.
*/
;
b
---
(0 rows)
\c db2
SELECT * FROM t1;
a
---
(0 rows)
SELECT *, ADD(1, 2) FROM t1;
a | add
---+-----
(0 rows)
set pg_stat_monitor.pgsm_enable_pgsm_query_id = off;
SELECT * FROM t3;
c
---
(0 rows)
set pg_stat_monitor.pgsm_enable_pgsm_query_id = on;
SELECT * FROM t3 where c = 20;
c
---
(0 rows)
\c contrib_regression
SELECT datname, pgsm_query_id, query, calls FROM pg_stat_monitor ORDER BY pgsm_query_id, query, datname;
datname | pgsm_query_id | query | calls
--------------------+---------------------+-----------------------------------------------------+-------
contrib_regression | 689150021118383254 | SELECT pg_stat_monitor_reset() | 1
db1 | 1897482803466821995 | SELECT * FROM t2 | 3
db1 | 1988437669671417938 | SELECT * FROM t1 | 1
db2 | 1988437669671417938 | SELECT * FROM t1 | 1
db1 | 2864453209316739369 | select $1 + $2 | 1
db2 | 2864453209316739369 | select $1 + $2 | 1
db2 | 6220142855706866455 | set pg_stat_monitor.pgsm_enable_pgsm_query_id = on | 1
db2 | 6633979598391393345 | SELECT * FROM t3 where c = 20 | 1
db1 | 8140395000078788481 | SELECT *, ADD(1, 2) FROM t1 | 1
db2 | 8140395000078788481 | SELECT *, ADD(1, 2) FROM t1 | 1
db2 | | SELECT * FROM t3 | 1
db2 | | set pg_stat_monitor.pgsm_enable_pgsm_query_id = off | 1
(12 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
\c db1
DROP TABLE t1;
DROP TABLE t2;
DROP FUNCTION ADD;
\c db2
DROP TABLE t1;
DROP TABLE t3;
DROP FUNCTION ADD;
\c contrib_regression
DROP DATABASE db1;
DROP DATABASE db2;
DROP EXTENSION pg_stat_monitor;
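
The test above checks that extra whitespace and comments do not change pgsm_query_id, and that the same statement text run in db1 and db2 shares one pgsm_query_id. An illustrative aggregation over that column (columns as shown above; not part of the test itself):

-- Illustrative sketch: group on pgsm_query_id to aggregate calls for
-- the same statement across databases.
SELECT pgsm_query_id,
       array_agg(DISTINCT datname) AS databases,
       sum(calls) AS total_calls
  FROM pg_stat_monitor
 WHERE pgsm_query_id IS NOT NULL
 GROUP BY pgsm_query_id
 ORDER BY pgsm_query_id;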

View File

@ -0,0 +1,115 @@
CREATE EXTENSION pg_stat_monitor;
CREATE DATABASE db1;
CREATE DATABASE db2;
\c db1
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int);
CREATE FUNCTION add(integer, integer) RETURNS integer
AS 'select $1 + $2;'
LANGUAGE SQL
IMMUTABLE
RETURNS NULL ON NULL INPUT;
\c db2
CREATE TABLE t1 (a int);
CREATE TABLE t3 (c int);
CREATE FUNCTION add(integer, integer) RETURNS integer
AS 'select $1 + $2;'
LANGUAGE SQL
IMMUTABLE
RETURNS NULL ON NULL INPUT;
\c contrib_regression
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
\c db1
SELECT * FROM t1;
a
---
(0 rows)
SELECT *, ADD(1, 2) FROM t1;
a | add
---+-----
(0 rows)
SELECT * FROM t2;
b
---
(0 rows)
-- Check that spaces and comments do not generate a different pgsm_query_id
SELECT * FROM t2 --WHATEVER;
;
b
---
(0 rows)
SELECT * FROM t2 /* ...
...
More comments to check for spaces.
*/
;
b
---
(0 rows)
\c db2
SELECT * FROM t1;
a
---
(0 rows)
SELECT *, ADD(1, 2) FROM t1;
a | add
---+-----
(0 rows)
set pg_stat_monitor.pgsm_enable_pgsm_query_id = off;
SELECT * FROM t3;
c
---
(0 rows)
set pg_stat_monitor.pgsm_enable_pgsm_query_id = on;
SELECT * FROM t3 where c = 20;
c
---
(0 rows)
\c contrib_regression
SELECT datname, pgsm_query_id, query, calls FROM pg_stat_monitor ORDER BY pgsm_query_id, query, datname;
datname | pgsm_query_id | query | calls
--------------------+---------------------+-----------------------------------------------------+-------
contrib_regression | 689150021118383254 | SELECT pg_stat_monitor_reset() | 1
db1 | 1897482803466821995 | SELECT * FROM t2 | 3
db1 | 1988437669671417938 | SELECT * FROM t1 | 1
db2 | 1988437669671417938 | SELECT * FROM t1 | 1
db2 | 6220142855706866455 | set pg_stat_monitor.pgsm_enable_pgsm_query_id = on | 1
db2 | 6633979598391393345 | SELECT * FROM t3 where c = 20 | 1
db1 | 8140395000078788481 | SELECT *, ADD(1, 2) FROM t1 | 1
db2 | 8140395000078788481 | SELECT *, ADD(1, 2) FROM t1 | 1
db2 | | SELECT * FROM t3 | 1
db2 | | set pg_stat_monitor.pgsm_enable_pgsm_query_id = off | 1
(10 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
\c db1
DROP TABLE t1;
DROP TABLE t2;
DROP FUNCTION ADD;
\c db2
DROP TABLE t1;
DROP TABLE t3;
DROP FUNCTION ADD;
\c contrib_regression
DROP DATABASE db1;
DROP DATABASE db2;
DROP EXTENSION pg_stat_monitor;

View File

@ -37,15 +37,14 @@ SELECT * FROM foo1, foo2, foo3, foo4;
(0 rows)
SELECT query, relations from pg_stat_monitor ORDER BY query collate "C";
query | relations
-------------------------------------------------------------------------+---------------------------------------------------
SELECT * FROM foo1 | {public.foo1}
SELECT * FROM foo1, foo2 | {public.foo1,public.foo2}
SELECT * FROM foo1, foo2, foo3 | {public.foo1,public.foo2,public.foo3}
SELECT * FROM foo1, foo2, foo3, foo4 | {public.foo1,public.foo2,public.foo3,public.foo4}
SELECT pg_stat_monitor_reset() |
SELECT query, relations from pg_stat_monitor ORDER BY query collate "C" | {public.pg_stat_monitor*,pg_catalog.pg_database}
(6 rows)
query | relations
--------------------------------------+---------------------------------------------------
SELECT * FROM foo1 | {public.foo1}
SELECT * FROM foo1, foo2 | {public.foo1,public.foo2}
SELECT * FROM foo1, foo2, foo3 | {public.foo1,public.foo2,public.foo3}
SELECT * FROM foo1, foo2, foo3, foo4 | {public.foo1,public.foo2,public.foo3,public.foo4}
SELECT pg_stat_monitor_reset() |
(5 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
@ -54,10 +53,10 @@ SELECT pg_stat_monitor_reset();
(1 row)
-- test the schema qualified table
CREATE schema sch1;
CREATE schema sch2;
CREATE schema sch3;
CREATE schema sch4;
CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE SCHEMA sch3;
CREATE SCHEMA sch4;
CREATE TABLE sch1.foo1(a int);
CREATE TABLE sch2.foo2(b int);
CREATE TABLE sch3.foo3(c int);
@ -89,15 +88,14 @@ SELECT * FROM sch1.foo1, sch2.foo2, sch3.foo3, sch4.foo4;
(0 rows)
SELECT query, relations from pg_stat_monitor ORDER BY query collate "C";
query | relations
-------------------------------------------------------------------------+--------------------------------------------------
SELECT * FROM sch1.foo1 | {sch1.foo1}
SELECT * FROM sch1.foo1, sch2.foo2 | {sch1.foo1,sch2.foo2}
SELECT * FROM sch1.foo1, sch2.foo2, sch3.foo3 | {sch1.foo1,sch2.foo2,sch3.foo3}
SELECT * FROM sch1.foo1, sch2.foo2, sch3.foo3, sch4.foo4 | {sch1.foo1,sch2.foo2,sch3.foo3,sch4.foo4}
SELECT pg_stat_monitor_reset() |
SELECT query, relations from pg_stat_monitor ORDER BY query collate "C" | {public.pg_stat_monitor*,pg_catalog.pg_database}
(6 rows)
query | relations
----------------------------------------------------------+-------------------------------------------
SELECT * FROM sch1.foo1 | {sch1.foo1}
SELECT * FROM sch1.foo1, sch2.foo2 | {sch1.foo1,sch2.foo2}
SELECT * FROM sch1.foo1, sch2.foo2, sch3.foo3 | {sch1.foo1,sch2.foo2,sch3.foo3}
SELECT * FROM sch1.foo1, sch2.foo2, sch3.foo3, sch4.foo4 | {sch1.foo1,sch2.foo2,sch3.foo3,sch4.foo4}
SELECT pg_stat_monitor_reset() |
(5 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
@ -122,13 +120,12 @@ SELECT * FROM sch1.foo1, sch2.foo2, foo1, foo2;
(0 rows)
SELECT query, relations from pg_stat_monitor ORDER BY query;
query | relations
-------------------------------------------------------------+--------------------------------------------------
SELECT * FROM sch1.foo1, foo1 | {sch1.foo1,public.foo1}
SELECT * FROM sch1.foo1, sch2.foo2, foo1, foo2 | {sch1.foo1,sch2.foo2,public.foo1,public.foo2}
SELECT pg_stat_monitor_reset() |
SELECT query, relations from pg_stat_monitor ORDER BY query | {public.pg_stat_monitor*,pg_catalog.pg_database}
(4 rows)
query | relations
------------------------------------------------+-----------------------------------------------
SELECT * FROM sch1.foo1, foo1 | {sch1.foo1,public.foo1}
SELECT * FROM sch1.foo1, sch2.foo2, foo1, foo2 | {sch1.foo1,sch2.foo2,public.foo1,public.foo2}
SELECT pg_stat_monitor_reset() |
(3 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
@ -168,15 +165,14 @@ SELECT * FROM v1,v2,v3,v4;
(0 rows)
SELECT query, relations from pg_stat_monitor ORDER BY query collate "C";
query | relations
-------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------
SELECT * FROM v1 | {public.v1*,public.foo1}
SELECT * FROM v1,v2 | {public.v1*,public.foo1,public.v2*,public.foo2}
SELECT * FROM v1,v2,v3 | {public.v1*,public.foo1,public.v2*,public.foo2,public.v3*,public.foo3}
SELECT * FROM v1,v2,v3,v4 | {public.v1*,public.foo1,public.v2*,public.foo2,public.v3*,public.foo3,public.v4*,public.foo4}
SELECT pg_stat_monitor_reset() |
SELECT query, relations from pg_stat_monitor ORDER BY query collate "C" | {public.pg_stat_monitor*,pg_catalog.pg_database}
(6 rows)
query | relations
--------------------------------+-----------------------------------------------------------------------------------------------
SELECT * FROM v1 | {public.v1*,public.foo1}
SELECT * FROM v1,v2 | {public.v1*,public.foo1,public.v2*,public.foo2}
SELECT * FROM v1,v2,v3 | {public.v1*,public.foo1,public.v2*,public.foo2,public.v3*,public.foo3}
SELECT * FROM v1,v2,v3,v4 | {public.v1*,public.foo1,public.v2*,public.foo2,public.v3*,public.foo3,public.v4*,public.foo4}
SELECT pg_stat_monitor_reset() |
(5 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset

View File

@ -0,0 +1,199 @@
CREATE EXTENSION pg_stat_monitor;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
CREATE TABLE foo1(a int);
CREATE TABLE foo2(b int);
CREATE TABLE foo3(c int);
CREATE TABLE foo4(d int);
-- test the simple table names
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT * FROM foo1;
a
---
(0 rows)
SELECT * FROM foo1, foo2;
a | b
---+---
(0 rows)
SELECT * FROM foo1, foo2, foo3;
a | b | c
---+---+---
(0 rows)
SELECT * FROM foo1, foo2, foo3, foo4;
a | b | c | d
---+---+---+---
(0 rows)
SELECT query, relations from pg_stat_monitor ORDER BY query collate "C";
query | relations
--------------------------------------+---------------------------------------------------
SELECT * FROM foo1 | {public.foo1}
SELECT * FROM foo1, foo2 | {public.foo1,public.foo2}
SELECT * FROM foo1, foo2, foo3 | {public.foo1,public.foo2,public.foo3}
SELECT * FROM foo1, foo2, foo3, foo4 | {public.foo1,public.foo2,public.foo3,public.foo4}
SELECT pg_stat_monitor_reset() |
(5 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
-- test the schema qualified table
CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE SCHEMA sch3;
CREATE SCHEMA sch4;
CREATE TABLE sch1.foo1(a int);
CREATE TABLE sch2.foo2(b int);
CREATE TABLE sch3.foo3(c int);
CREATE TABLE sch4.foo4(d int);
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT * FROM sch1.foo1;
a
---
(0 rows)
SELECT * FROM sch1.foo1, sch2.foo2;
a | b
---+---
(0 rows)
SELECT * FROM sch1.foo1, sch2.foo2, sch3.foo3;
a | b | c
---+---+---
(0 rows)
SELECT * FROM sch1.foo1, sch2.foo2, sch3.foo3, sch4.foo4;
a | b | c | d
---+---+---+---
(0 rows)
SELECT query, relations from pg_stat_monitor ORDER BY query collate "C";
query | relations
----------------------------------------------------------+-------------------------------------------
SELECT * FROM sch1.foo1 | {sch1.foo1}
SELECT * FROM sch1.foo1, sch2.foo2 | {sch1.foo1,sch2.foo2}
SELECT * FROM sch1.foo1, sch2.foo2, sch3.foo3 | {sch1.foo1,sch2.foo2,sch3.foo3}
SELECT * FROM sch1.foo1, sch2.foo2, sch3.foo3, sch4.foo4 | {sch1.foo1,sch2.foo2,sch3.foo3,sch4.foo4}
SELECT pg_stat_monitor_reset() |
(5 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT * FROM sch1.foo1, foo1;
a | a
---+---
(0 rows)
SELECT * FROM sch1.foo1, sch2.foo2, foo1, foo2;
a | b | a | b
---+---+---+---
(0 rows)
SELECT query, relations from pg_stat_monitor ORDER BY query;
query | relations
------------------------------------------------+-----------------------------------------------
SELECT * FROM sch1.foo1, foo1 | {sch1.foo1,public.foo1}
SELECT * FROM sch1.foo1, sch2.foo2, foo1, foo2 | {sch1.foo1,sch2.foo2,public.foo1,public.foo2}
SELECT pg_stat_monitor_reset() |
(3 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
-- test the view
CREATE VIEW v1 AS SELECT * from foo1;
CREATE VIEW v2 AS SELECT * from foo1,foo2;
CREATE VIEW v3 AS SELECT * from foo1,foo2,foo3;
CREATE VIEW v4 AS SELECT * from foo1,foo2,foo3,foo4;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT * FROM v1;
a
---
(0 rows)
SELECT * FROM v1,v2;
a | a | b
---+---+---
(0 rows)
SELECT * FROM v1,v2,v3;
a | a | b | a | b | c
---+---+---+---+---+---
(0 rows)
SELECT * FROM v1,v2,v3,v4;
a | a | b | a | b | c | a | b | c | d
---+---+---+---+---+---+---+---+---+---
(0 rows)
SELECT query, relations from pg_stat_monitor ORDER BY query collate "C";
query | relations
--------------------------------+-----------------------------------------------------------------------------------------------
SELECT * FROM v1 | {public.v1*,public.foo1}
SELECT * FROM v1,v2 | {public.v1*,public.v2*,public.foo1,public.foo2}
SELECT * FROM v1,v2,v3 | {public.v1*,public.v2*,public.v3*,public.foo1,public.foo2,public.foo3}
SELECT * FROM v1,v2,v3,v4 | {public.v1*,public.v2*,public.v3*,public.v4*,public.foo1,public.foo2,public.foo3,public.foo4}
SELECT pg_stat_monitor_reset() |
(5 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DROP VIEW v1;
DROP VIEW v2;
DROP VIEW v3;
DROP VIEW v4;
DROP TABLE foo1;
DROP TABLE foo2;
DROP TABLE foo3;
DROP TABLE foo4;
DROP TABLE sch1.foo1;
DROP TABLE sch2.foo2;
DROP TABLE sch3.foo3;
DROP TABLE sch4.foo4;
DROP SCHEMA sch1;
DROP SCHEMA sch2;
DROP SCHEMA sch3;
DROP SCHEMA sch4;
DROP EXTENSION pg_stat_monitor;
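
In the expected output above, relations lists the schema-qualified tables each statement touched, with views marked by a trailing * and expanded to their base tables. Assuming relations is an array column (it prints in array-literal form), a small illustrative filter:

-- Illustrative sketch: statements that touched public.foo1, directly or
-- through a view, based on the relations column shown above.
SELECT query, relations
  FROM pg_stat_monitor
 WHERE 'public.foo1' = ANY (relations)
 ORDER BY query COLLATE "C";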

View File

@ -1,7 +1,6 @@
CREATE EXTENSION pg_stat_monitor;
CREATE TABLE t1(a int);
CREATE TABLE t2(b int);
ERROR: relation "t2" already exists
INSERT INTO t1 VALUES(generate_series(1,1000));
INSERT INTO t2 VALUES(generate_series(1,5000));
SELECT pg_stat_monitor_reset();
@ -8540,16 +8539,15 @@ SELECt * FROM t2 WHERE b % 2 = 0;
5000
(2500 rows)
SELECT query, rows_retrieved FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | rows_retrieved
------------------------------------------------------------------------------+----------------
SELECT * FROM t1 | 1000
SELECT * FROM t1 LIMIT 10 | 10
SELECT * FROM t2 | 5000
SELECT pg_stat_monitor_reset() | 1
SELECT query, rows_retrieved FROM pg_stat_monitor ORDER BY query COLLATE "C" | 0
SELECt * FROM t2 WHERE b % 2 = 0 | 2500
(6 rows)
SELECT query, rows FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | rows
-----------------------------------+------
SELECT * FROM t1 | 1000
SELECT * FROM t1 LIMIT 10 | 10
SELECT * FROM t2 | 5000
SELECT pg_stat_monitor_reset() | 1
SELECt * FROM t2 WHERE b % 2 = 0 | 2500
(5 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
@ -8558,4 +8556,5 @@ SELECT pg_stat_monitor_reset();
(1 row)
DROP TABLE t1;
DROP TABLE t2;
DROP EXTENSION pg_stat_monitor;

File diff suppressed because it is too large

View File

@ -1,31 +0,0 @@
CREATE EXTENSION pg_stat_monitor;
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
SELECT 1;
?column?
----------
1
(1 row)
SELECT 1/0; -- divide by zero
ERROR: division by zero
SELECT query, state_code, state FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | state_code | state
---------------------------------------------------------------------------------+------------+---------------------
SELECT $1 | 3 | FINISHED
SELECT 1/0; | 4 | FINISHED WITH ERROR
SELECT pg_stat_monitor_reset() | 3 | FINISHED
SELECT query, state_code, state FROM pg_stat_monitor ORDER BY query COLLATE "C" | 2 | ACTIVE
(4 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
(1 row)
DROP EXTENSION pg_stat_monitor;

View File

@ -17,8 +17,7 @@ SELECT query, comments FROM pg_stat_monitor ORDER BY query COLLATE "C";
--------------------------------------------------------------------------+----------------------------------------------------------
SELECT 1 AS num /* { "application", psql_app, "real_ip", 192.168.1.3) */ | /* { "application", psql_app, "real_ip", 192.168.1.3) */
SELECT pg_stat_monitor_reset() |
SELECT query, comments FROM pg_stat_monitor ORDER BY query COLLATE "C" |
(3 rows)
(2 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset

View File

@ -1,5 +1,5 @@
CREATE EXTENSION pg_stat_monitor;
Set pg_stat_monitor.pgsm_track='all';
SET pg_stat_monitor.pgsm_track='all';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
@ -11,7 +11,7 @@ $$
BEGIN
return (select $1 + $2);
END; $$ language plpgsql;
CREATE OR REPLACE function add2(int, int) RETURNS int as
CREATE OR REPLACE FUNCTION add2(int, int) RETURNS INTEGER AS
$$
BEGIN
return add($1,$2);
@ -24,24 +24,23 @@ SELECT add2(1,2);
(1 row)
SELECT query, top_query FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | top_query
-------------------------------------------------------------------------+------------------
(select $1 + $2) | SELECT add2(1,2)
CREATE OR REPLACE FUNCTION add(int, int) RETURNS INTEGER AS +|
$$ +|
BEGIN +|
return (select $1 + $2); +|
END; $$ language plpgsql |
CREATE OR REPLACE function add2(int, int) RETURNS int as +|
$$ +|
BEGIN +|
return add($1,$2); +|
END; +|
$$ language plpgsql |
SELECT add2(1,2) |
SELECT pg_stat_monitor_reset() |
SELECT query, top_query FROM pg_stat_monitor ORDER BY query COLLATE "C" |
(6 rows)
query | top_query
--------------------------------------------------------------+-------------------
(select $1 + $2) | SELECT add2(1,2);
CREATE OR REPLACE FUNCTION add(int, int) RETURNS INTEGER AS +|
$$ +|
BEGIN +|
return (select $1 + $2); +|
END; $$ language plpgsql |
CREATE OR REPLACE FUNCTION add2(int, int) RETURNS INTEGER AS+|
$$ +|
BEGIN +|
return add($1,$2); +|
END; +|
$$ language plpgsql |
SELECT add2(1,2) |
SELECT pg_stat_monitor_reset() |
(5 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset

View File

@ -1,5 +1,5 @@
CREATE EXTENSION pg_stat_monitor;
Set pg_stat_monitor.pgsm_track='all';
SET pg_stat_monitor.pgsm_track='all';
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset
-----------------------
@ -11,7 +11,7 @@ $$
BEGIN
return (select $1 + $2);
END; $$ language plpgsql;
CREATE OR REPLACE function add2(int, int) RETURNS int as
CREATE OR REPLACE FUNCTION add2(int, int) RETURNS INTEGER AS
$$
BEGIN
return add($1,$2);
@ -24,24 +24,23 @@ SELECT add2(1,2);
(1 row)
SELECT query, top_query FROM pg_stat_monitor ORDER BY query COLLATE "C";
query | top_query
-------------------------------------------------------------------------+------------------
CREATE OR REPLACE FUNCTION add(int, int) RETURNS INTEGER AS +|
$$ +|
BEGIN +|
return (select $1 + $2); +|
END; $$ language plpgsql |
CREATE OR REPLACE function add2(int, int) RETURNS int as +|
$$ +|
BEGIN +|
return add($1,$2); +|
END; +|
$$ language plpgsql |
SELECT (select $1 + $2) | SELECT add2(1,2)
SELECT add2(1,2) |
SELECT pg_stat_monitor_reset() |
SELECT query, top_query FROM pg_stat_monitor ORDER BY query COLLATE "C" |
(6 rows)
query | top_query
--------------------------------------------------------------+-------------------
CREATE OR REPLACE FUNCTION add(int, int) RETURNS INTEGER AS +|
$$ +|
BEGIN +|
return (select $1 + $2); +|
END; $$ language plpgsql |
CREATE OR REPLACE FUNCTION add2(int, int) RETURNS INTEGER AS+|
$$ +|
BEGIN +|
return add($1,$2); +|
END; +|
$$ language plpgsql |
SELECT (select $1 + $2) | SELECT add2(1,2);
SELECT add2(1,2) |
SELECT pg_stat_monitor_reset() |
(5 rows)
SELECT pg_stat_monitor_reset();
pg_stat_monitor_reset

Some files were not shown because too many files have changed in this diff