24 changes: 12 additions & 12 deletions README.md
@@ -57,13 +57,13 @@ Experience the full monitoring solution: **https://demo.postgres.ai** (login: `d
**Infrastructure:**
- **Linux machine** with Docker installed (separate from your database server)
- **Docker access** - the user running `postgres_ai` must have Docker permissions
- **Access (network and pg_hba)** to the Postgres database(s) you want to monitor
- **Access (network and `pg_hba.conf`)** to the Postgres database(s) you want to monitor

**Database:**
- Supports Postgres versions 14-18
- **pg_stat_statements extension must be created** for the DB used for connection
- Supports Postgres 14-18
- **The `pg_stat_statements` extension must be created** in the database used for the connection
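
If the extension is missing, creating it is usually a one-liner. A minimal sketch; the connection string is a placeholder, and note that `pg_stat_statements` must also be listed in `shared_preload_libraries` (which requires a server restart):

```shell
# Placeholder connection string -- adjust user, host, port, and database.
# pg_stat_statements must be in shared_preload_libraries (server restart needed).
psql "postgresql://postgres@db-host:5432/mydb" \
  -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
```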

## ⚠️ Security Notice
## ⚠️ Security notice

**WARNING: Security is your responsibility!**

@@ -143,10 +143,10 @@

```bash
curl -o postgres_ai https://gitlab.com/postgres-ai/postgres_ai/-/raw/main/postgr \
&& chmod +x postgres_ai
```

Now, start it and wait for a few minutes. To obtain a PostgresAI access token for your organization, visit https://console.postgres.ai (`Your org name → Manage → Access tokens`):
Now, start it and wait for a few minutes. To obtain a PostgresAI access token for your organization, visit `https://console.postgres.ai` (`Your org name → Manage → Access tokens`):

```bash
# Production setup with your Access token
# Production setup with your access token
./postgres_ai quickstart --api-key=your_access_token
```
**Note:** You can also add your database instance in the same command:
@@ -268,10 +268,10 @@ Technical URLs (for advanced users):
### Node.js CLI (early preview)

```bash
# run without install
# Run without installing
node ./cli/bin/postgres-ai.js --help

# local dev: install aliases into PATH
# Local development: install aliases into PATH
npm --prefix cli install --no-audit --no-fund
npm link ./cli
postgres-ai --help
@@ -305,9 +305,9 @@ Install dev dependencies (includes `pytest`, `pytest-postgresql`, `psycopg`, etc
python3 -m pip install -r reporter/requirements-dev.txt
```

### Running Tests
### Running tests

#### Unit Tests Only (Fast, No External Services Required)
#### Unit tests only (fast, no external services required)

Run only unit tests with mocked Prometheus interactions:
```bash
pytest tests/reporter/test_generators_unit.py -v
pytest tests/reporter/test_formatters.py -v
```

#### All Tests: Unit + Integration (Requires PostgreSQL)
#### All tests: unit + integration (requires PostgreSQL)

Run the complete test suite (both unit and integration tests):
```bash
pytest tests/reporter --run-integration
```

Integration tests create a temporary PostgreSQL instance automatically and requi
- `pytest tests/reporter` → **Unit tests only** (integration tests skipped)
- `pytest tests/reporter --run-integration` → **Both unit and integration tests**

### Test Coverage
### Test coverage

Generate coverage report:
```bash
# …
```
10 changes: 5 additions & 5 deletions cli/README.md
@@ -96,7 +96,7 @@ postgres-ai mon health [--wait <sec>] # Check monitoring services health

##### Quickstart options
- `--demo` - Demo mode with sample database (testing only, cannot use with --api-key)
- `--api-key <key>` - Postgres AI API key for automated report uploads
- `--api-key <key>` - PostgresAI API key for automated report uploads
- `--db-url <url>` - PostgreSQL connection URL to monitor (format: `postgresql://user:pass@host:port/db`)
- `-y, --yes` - Accept all defaults and skip interactive prompts
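
Combined, a fully non-interactive run might look like the sketch below; the values are placeholders, and the subcommand shape follows the quickstart usage shown in the main README:

```shell
# Non-interactive quickstart: API key plus one database, accepting defaults.
postgres-ai quickstart \
  --api-key=your_access_token \
  --db-url="postgresql://monitor_user:secret@db-host:5432/appdb" \
  --yes
```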

@@ -205,12 +205,12 @@ API key resolution order:

Base URL resolution order:
- API base URL (`apiBaseUrl`):
1. Command line option (`--api-base-url`)
1. Command-line option (`--api-base-url`)
2. Environment variable (`PGAI_API_BASE_URL`)
3. User config file `baseUrl` (`~/.config/postgresai/config.json`)
4. Default: `https://postgres.ai/api/general/`
- UI base URL (`uiBaseUrl`):
1. Command line option (`--ui-base-url`)
1. Command-line option (`--ui-base-url`)
2. Environment variable (`PGAI_UI_BASE_URL`)
3. Default: `https://console.postgres.ai`
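
The precedence above is a simple chain of fallbacks. A minimal shell sketch for `apiBaseUrl` (function and argument names are illustrative, not the CLI's actual implementation):

```shell
# Resolve apiBaseUrl: CLI option > env var > config file > default.
resolve_api_base_url() {
  local cli_opt="$1"     # value of --api-base-url, empty when not given
  local config_val="$2"  # baseUrl read from ~/.config/postgresai/config.json
  local default="https://postgres.ai/api/general/"
  # ${x:-y} falls through when x is unset or empty
  echo "${cli_opt:-${PGAI_API_BASE_URL:-${config_val:-$default}}}"
}

resolve_api_base_url "" ""  # prints the default when PGAI_API_BASE_URL is unset
```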

@@ -264,5 +264,5 @@ Notes:

## Learn more

- Documentation: https://postgres.ai/docs
- Issues: https://gitlab.com/postgres-ai/postgres_ai/-/issues
- Documentation: `https://postgres.ai/docs`
- Issues: `https://gitlab.com/postgres-ai/postgres_ai/-/issues`
12 changes: 6 additions & 6 deletions docs/brew-installation.md
@@ -1,8 +1,8 @@
# Homebrew Installation for PostgresAI CLI
# Homebrew installation for PostgresAI CLI

This document describes how to set up and distribute the PostgresAI CLI via Homebrew.

## For Users
## For users

### Installation

@@ -33,9 +33,9 @@

```bash
brew uninstall postgresai
brew untap postgres-ai/tap
```

## For Maintainers
## For maintainers

### Creating the Homebrew Tap Repository
### Creating the Homebrew tap repository

1. Create a new GitLab repository named `homebrew-tap` at:
`https://gitlab.com/postgres-ai/homebrew-tap`
@@ -53,7 +53,7 @@

```bash
# Update the sha256 field in the formula
```

### Updating the Formula
### Updating the formula

After publishing a new version to npm:

@@ -66,7 +66,7 @@
4. Commit and push to the homebrew-tap repository

### Testing Locally
### Testing locally

Before pushing to the tap:

14 changes: 7 additions & 7 deletions postgres_ai_helm/INSTALLATION_GUIDE.md
@@ -1,4 +1,4 @@
# Postgres AI monitoring - Helm chart installation guide
# PostgresAI monitoring: Helm chart installation guide

## Installation

@@ -70,14 +70,14 @@ kubectl create secret generic postgres-ai-monitoring-secrets \

**Notes:**

- `SINK_POSTGRES_PASSWORD` should be generated by you and will be used to connect to the internal database for storing metrics
- `GRAFANA_PASSWORD` should be generated by you and will be used to access grafana
- `POSTGRES_AI_API_KEY` should be attained from PostgresAI platform and will be used to connect to the PostgresAI platform
- `SINK_POSTGRES_PASSWORD` should be generated by you and will be used to connect to the internal database for storing metrics.
- `GRAFANA_PASSWORD` should be generated by you and will be used to access Grafana.
- `POSTGRES_AI_API_KEY` should be obtained from the PostgresAI platform and will be used to connect to the PostgresAI platform.
- Add `--from-literal` for each database that you want to monitor.
- The key must match `passwordSecretKey` in `custom-values.yaml`.
- The key name must be `db-password-<passwordSecretKey>` and the value must be the password for the monitoring user in that database.
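
Putting the notes above together, a secret covering one monitored database might look like this; `prod-main` is a hypothetical `passwordSecretKey`, and all values are placeholders:

```shell
# One --from-literal per monitored database; the key name must be
# db-password-<passwordSecretKey>, matching custom-values.yaml.
kubectl create secret generic postgres-ai-monitoring-secrets \
  -n postgres-ai-mon \
  --from-literal=grafana-admin-user=admin \
  --from-literal=grafana-admin-password='<generated password>' \
  --from-literal=db-password-prod-main='<monitoring user password>'
```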

### 5. Install helm chart
### 5. Install Helm chart

```bash
helm install postgres-ai-monitoring ./postgres-ai-monitoring-0.12.tgz \
```

@@ -91,7 +91,7 @@

```bash
kubectl get pods -n postgres-ai-mon
```

## Access grafana
## Access Grafana

**Port forward** (quick access):

@@ -101,7 +101,7 @@

```bash
kubectl port-forward -n postgres-ai-mon svc/postgres-ai-monitoring-grafana 3000:
```

Open: `http://localhost:3000`

**Ingress**: Access via configured domain (e.g., `http://monitoring.example.com`)
**Ingress**: Access via the configured domain (e.g., `http://monitoring.example.com`)

**Login**: Username and password from the secret (`grafana-admin-user` / `grafana-admin-password`)

2 changes: 1 addition & 1 deletion terraform/aws/README.md
@@ -17,7 +17,7 @@ On first boot, the EC2 instance clones the specified version of this repository and

## Quick start

See [QUICKSTART.md](QUICKSTART.md) for step-by-step guide.
See [QUICKSTART.md](QUICKSTART.md) for a step-by-step guide.

### Validation

22 changes: 11 additions & 11 deletions tests/lock_waits/README.md
@@ -1,4 +1,4 @@
# Lock Waits Metric Testing
# Lock waits metric testing

This directory contains tests and scripts to verify that the `lock_waits` metric is working correctly.

@@ -12,17 +12,17 @@ The `lock_waits` metric collects detailed information about lock waits in Postgr
- Query IDs (PostgreSQL 14+)
- Wait durations and blocker transaction durations

## Test Components
## Test components

### 1. Python Test Script (`test_lock_waits_metric.py`)
### 1. Python test script (`test_lock_waits_metric.py`)

Automated test that:
- Creates lock contention scenarios in the target database
- Waits for pgwatch to collect metrics
- Verifies the metric is collected in Prometheus/VictoriaMetrics
- Validates the metric structure and labels

### 2. SQL Script (`create_lock_contention.sql`)
### 2. SQL script (`create_lock_contention.sql`)

Manual SQL script to create lock contention for testing. Can be run in multiple psql sessions.

@@ -42,7 +42,7 @@
- Check `config/pgwatch-prometheus/metrics.yml` includes `lock_waits`
- Verify pgwatch is collecting metrics from the target database

## Running the Automated Test
## Running the automated test

### Basic usage

@@ -116,7 +116,7 @@ pgwatch_lock_waits_blocker_tx_ms{datname="target_database"}

## Expected results

### Successful Test Output
### Successful test output

```
Setting up test environment...
```

@@ -144,7 +144,7 @@ Validating metric structure...

## Troubleshooting

### No Records Found
### No records found

- **Check pgwatch is running**: `docker ps | grep pgwatch-prometheus`
- **Check pgwatch logs**: `docker logs pgwatch-prometheus`
@@ -154,7 +154,7 @@
- **Check database name**: Ensure `--test-dbname` matches the monitored database
- **Verify metrics exist**: `curl "http://localhost:59090/api/v1/label/__name__/values" | grep lock_waits`

### Invalid Data Structure
### Invalid data structure

- **Check PostgreSQL version**: The metric requires PostgreSQL 14+ for `query_id` support
- **Verify metric SQL**: Check the SQL query in `metrics.yml` is correct
@@ -189,9 +189,9 @@

```yaml
test_lock_waits:
- main
```

## Additional Test Scenarios
## Additional test scenarios

### Test Different Lock Types
### Test different lock types

Modify the test to create different types of locks:

@@ -203,7 +203,7 @@

```sql
LOCK TABLE lock_test_table IN EXCLUSIVE MODE;
SELECT pg_advisory_lock(12345);
```

### Test Multiple Concurrent Waits
### Test multiple concurrent waits

Create multiple waiting transactions to test the LIMIT clause:

21 changes: 9 additions & 12 deletions workload_examples/single_index_analysis.md
@@ -1,16 +1,16 @@
## workload example for single index analysis dashboard
## Workload example for single index analysis dashboard

This example prepares and runs a repeatable workload designed for the postgres_ai monitoring “Single index analysis” dashboard. It also shows how to deploy `pg_index_pilot`, generate controlled index bloat, and let `pg_index_pilot` automatically rebuild indexes when bloat exceeds the configured threshold during periodic runs.

### prerequisites
### Prerequisites

- Postgres instance
- PostgreSQL instance
- `pg_cron` extension available for scheduling periodic execution
- `pgbench` installed for workload generation

## prepare the dataset in the target database
## Prepare the dataset in the target database

Create a table with several indexes and populate 10 million rows in the target database (e.g., `workloaddb`). This schema uses `test_pilot` schema and `items` table.
Create a table with several indexes and populate 10 million rows in the target database (e.g., `workloaddb`). This schema uses the `test_pilot` schema and the `items` table.

```bash
psql -U postgres -d workloaddb <<'SQL'
select setval('test_pilot.items_id_seq', (select coalesce(max(id),0) from test_p
SQL
```

### deploy pg_index_pilot
### Deploy pg_index_pilot

```bash
# Clone the repository
Expand All @@ -74,7 +74,7 @@ psql -U postgres -d index_pilot_control -f index_pilot_functions.sql
psql -U postgres -d index_pilot_control -f index_pilot_fdw.sql
```

### register the target database via FDW
### Register the target database via FDW

Replace placeholders with actual connection details for your target database (the database where workload and indexes live; in examples below it is `workloaddb`).

@@ -114,14 +114,11 @@

```bash
psql -U postgres <<'SQL'
select cron.schedule_in_database(
'call index_pilot.periodic(real_run := true);',
'index_pilot_control' -- run in control database
);
SQL
```

Behavior: when `index_pilot.periodic(true)` runs, it evaluates index bloat in the registered target database(s). If bloat for an index exceeds the configured `index_rebuild_scale_factor` at the time of a run, an index rebuild is initiated.
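
To trigger a single evaluation manually rather than waiting for the schedule, the same call can be issued directly; the `psql` invocation below is a sketch following the pattern used earlier in this example:

```shell
# One bloat evaluation/rebuild pass, run in the control database.
psql -U postgres -d index_pilot_control \
  -c "call index_pilot.periodic(real_run := true);"
```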


### run the workload with pgbench
### Run the workload with pgbench

Use two concurrent pgbench jobs: one generates updates that touch ranges of `id` and another performs point-lookups by `id`. This mix creates index bloat over time; when bloat exceeds the configured threshold during a periodic run, `pg_index_pilot` triggers a rebuild.

@@ -164,7 +161,7 @@

```bash
tmux new -d -s pgbench_selects 'env PGPASSWORD=<password> pgbench -n -h 127.0.0.
```

Let these processes run continuously. The updates will steadily create index bloat; every 20 minutes, `index_pilot.periodic(true)` evaluates bloat and, if thresholds are exceeded, initiates index rebuilds.

### monitor results
### Monitor results

- In the postgres_ai monitoring included with this repository, use:
- `Single index analysis` for targeted inspection