
Commit 8aed097

committed: readme

1 parent 709be6a commit 8aed097

File tree

1 file changed (+106, -17 lines)


README.md

Lines changed: 106 additions & 17 deletions
@@ -2,44 +2,133 @@
![PostgresML](./logo-small.png)

PostgresML is a Proof of Concept to create the simplest end-to-end machine learning system. We're building on the shoulders of giants, namely Postgres, which is arguably the most robust storage and compute engine in existence, and we're coupling it with Python machine learning libraries (and their C implementations) to prototype different machine learning workflows.

PostgresML is an end-to-end machine learning system. Using only SQL, it lets you train models and run online predictions, alongside normal queries, directly on the data in your databases.

Common architectures driven by standard organizational hierarchies make it hard to employ machine learning successfully, i.e. [Conway's Law](https://en.wikipedia.org/wiki/Conway%27s_law). A single model at a unicorn-scale startup may require work from Data Scientists, Data Engineers, Machine Learning Engineers, Infrastructure Engineers, Reliability Engineers, Front & Backend Product Engineers, multiple Engineering Managers, a Product Manager and, finally, the Business Partner(s) this "solution" is supposed to eventually address. It can take multiple quarters to shepherd a first model into production. The typical level of complexity adds risk, makes maintenance a hot potato, and makes iteration politically difficult. Worse, burnout and morale damage to expensive headcount have left teams and leadership wary of implementing ML solutions throughout the industry, even though FAANGs have proven the immense value when successful.

## Why

Deploying machine learning models into existing applications is not straightforward. Unless you're already using Python in your day-to-day work, you need to learn a new language and toolchain, figure out how to EL(T) your data from your database(s) into a warehouse or object storage, learn how to train models (Scikit-Learn, Pytorch, Tensorflow, etc.), and finally serve predictions to your apps, forcing your organization into microservices and all the complexity that comes with them.

PostgresML makes ML simple: your data doesn't really go anywhere, you train using simple SQL commands, and you get predictions in your apps through a mechanism you already use: a Postgres connection and a query.

Our goal is that anyone with a basic understanding of SQL should be able to build and deploy machine learning models to production, while receiving the benefits of a high-performance machine learning platform. Ultimately, PostgresML aims to be the easiest, safest and fastest way to gain value from machine learning.

## Quick start
Using Docker, boot up PostgresML locally:

```bash
$ docker-compose up
```

The system is available on port 5433 by default, just in case you happen to run Postgres locally already:

```bash
$ psql -U root -h 127.0.0.1 -p 5433
```

We've included a couple of examples in the `examples/` folder. You can run them directly with `$ psql -U root -h 127.0.0.1 -p 5433 -f <filename>`.

See the [installation instructions](#Installation) for more information and for installing PostgresML in the other supported environments.
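
Once connected, a quick sanity check is to call the `pgml.version()` function used later in the installation instructions:

```sql
-- Returns the installed PostgresML version if the extension is loaded correctly.
SELECT pgml.version();
```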
## Features

### Training models

Given a Postgres table or a view, PostgresML can train a model using several commonly used algorithms. We currently support the following Scikit-Learn regression and classification models:

- `LinearRegression`
- `LogisticRegression`
- `SVR`
- `SVC`
- `RandomForestRegressor`
- `RandomForestClassifier`
- `GradientBoostingRegressor`
- `GradientBoostingClassifier`

Training a model is then as simple as:

```sql
SELECT * FROM pgml.train(
    'Human-friendly project name',
    'regression',
    '<name of the table or view containing the data>',
    '<name of the column containing the y target value>'
);
```

PostgresML will snapshot the data from the table, train multiple models from the above list given the objective (`regression` or `classification`), and automatically choose and deploy the model with the best predictions.
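
As a sketch of what a classification project could look like, here is the same call against a hypothetical `user_churn` view with a `churned` label column (both names are illustrative, not part of this repo). The second call pins a specific algorithm using the optional `algorithm` argument from the `pgml.train` signature documented below; the exact algorithm-name spelling it accepts may differ from what's shown here:

```sql
-- Train a classification project; PostgresML snapshots the data,
-- tries the supported algorithms, and deploys the best model.
SELECT * FROM pgml.train(
    'User churn',      -- human-friendly project name
    'classification',  -- objective
    'user_churn',      -- hypothetical view containing the training data
    'churned'          -- hypothetical y column with the labels
);

-- Optionally pin a single algorithm instead of letting PostgresML choose
-- (assumes the algorithm is passed by name, per the signature below).
SELECT * FROM pgml.train(
    'User churn',
    'classification',
    'user_churn',
    'churned',
    'RandomForestClassifier'
);
```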
### Making predictions

Once the model is trained, making predictions is as simple as:

```sql
SELECT pgml.predict('Human-friendly project name', ARRAY[...]) AS prediction_score;
```

where `ARRAY[...]` is a list of the features for which we want to run a prediction. This list has to be in the same order as the columns in the data table. This score can then be used in normal queries, for example:

```sql
SELECT *,
       pgml.predict(
           'Probability of buying our products',
           ARRAY[users.location, NOW() - users.created_at, users.total_purchases_in_dollars]
       ) AS likely_to_buy_score
FROM users
WHERE company_id = 5
ORDER BY likely_to_buy_score DESC
LIMIT 25;
```

Take a look [below](#Working-with-PostgresML) for an example with real data.
84+
85+
### Model and data versioning
86+
87+
As data in your database changes, it is possible to retrain the model again to get better predictions. With PostgresML, it's as simple as running the `pgml.train` command again. If the model scores better, it will be automatically used in predictions; if not, the existing model will be kept and continue to score in your queries. We also snapshot the training data, so models can be retrained deterministically to validate and fix any issues.
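
Continuing the hypothetical `user_churn` example from above, a retrain is literally the same call with the same project name; PostgresML decides whether the resulting model replaces the currently deployed one:

```sql
-- Re-running train on an existing project retrains on the current data.
-- If the new model scores better it is deployed automatically;
-- otherwise the existing model keeps serving predictions.
SELECT * FROM pgml.train(
    'User churn',
    'classification',
    'user_churn',
    'churned'
);
```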
88+
89+
## Roadmap
90+
91+
This project is currently a proof of concept. Some important features which we are currently thinking about or working on are listed below.
92+
93+
### Production deployment
94+
95+
Most companies that use PostgreSQL in production do so using managed services like AWS RDS, Digital Ocean, Azure, etc. Those services do not allow running custom extensions, so we have to run PostgresML
96+
directly on VMs, e.g. EC2, droplets, etc. The idea here is to replicate production data directly from Postgres and make it available in real-time to PostgresML. We're considering solutions like logical replication for small to mid-size databases, and Debezium for multi-TB deployments.
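
For the logical-replication option, a minimal sketch using only built-in Postgres commands might look like the following; the host, table, and publication names are hypothetical, and this is one possible setup rather than anything PostgresML ships today:

```sql
-- On the production primary (requires wal_level = logical):
-- publish the tables needed for training.
CREATE PUBLICATION pgml_training_data FOR TABLE users, purchases;

-- On the PostgresML instance: subscribe to receive those tables in near real time.
CREATE SUBSCRIPTION pgml_training_data_sub
    CONNECTION 'host=prod-primary.example.com dbname=app user=replicator'
    PUBLICATION pgml_training_data;
```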
### Model management dashboard

A good-looking and useful UI goes a long way. A dashboard similar to existing solutions like MLFlow or AWS SageMaker will make the experience of working with PostgresML as pleasant as possible.

### Data explorer

A data explorer allows anyone to browse the dataset in production and find useful tables and features to build effective machine learning models.

### More algorithms

Scikit-Learn is a good start, but we're also thinking about including Tensorflow, Pytorch, and many more useful libraries and models.
### FAQ

*How far can this scale?*

Petabyte-sized Postgres deployments have been [documented](https://www.computerworld.com/article/2535825/size-matters--yahoo-claims-2-petabyte-database-is-world-s-biggest--busiest.html) in production since at least 2008, and [recent patches](https://www.2ndquadrant.com/en/blog/postgresql-maximum-table-size/) have enabled working beyond the exabyte and up to the yottabyte scale. Machine learning models can be horizontally scaled using standard Postgres replicas.

*How reliable can this be?*

Postgres is widely considered mission-critical, and some of the most [reliable](https://www.postgresql.org/docs/current/wal-reliability.html) technology in any modern stack. PostgresML allows an infrastructure organization to leverage pre-existing best practices to deploy machine learning into production with less risk and effort than other systems. For example, model backup and recovery happen automatically alongside normal Postgres data backups.

*How good are the models?*

Model quality is often a tradeoff between compute resources and incremental quality improvements. Sometimes a few thousand training examples and an off-the-shelf algorithm can deliver significant business value after a few seconds of training. PostgresML allows stakeholders to choose several different algorithms to get the most bang for the buck, or to invest in more computationally intensive techniques as necessary. In addition, PostgresML automatically applies best practices for data cleaning, like imputing missing values by default and normalizing data, to prevent common problems in production.

PostgresML doesn't help with reformulating a business problem into a machine learning problem. Like most things in life, the ultimate in quality will come from a concerted effort of experts working over time. PostgresML is intended to establish successful patterns for those experts to collaborate around while leveraging the expertise of the open source and research communities.

*Is PostgresML fast?*

Colocating the compute with the data inside the database removes one of the most common latency bottlenecks in the ML stack: the (de)serialization of data between stores and services across the wire. Modern versions of Postgres also support automatic query parallelization across multiple workers to further minimize latency in large batch workloads. Finally, PostgresML will utilize GPU compute if both the algorithm and hardware support it, although it is currently rare in practice for production databases to have GPUs. We're working on [benchmarks](sql/benchmarks.sql).
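
The parallelism here is standard Postgres behavior rather than something PostgresML adds. As a rough illustration (the table name is hypothetical), you can nudge the planner and verify that a large scan is parallelized with the usual settings:

```sql
-- Allow more parallel workers per query (a standard Postgres setting).
SET max_parallel_workers_per_gather = 4;

-- The plan shows a "Gather" node with parallel workers when the scan is parallelized.
EXPLAIN SELECT count(*) FROM big_training_table;
```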
### Current features

- Train models directly in Postgres with data from a table or view
- Make predictions in Postgres using SELECT statements
- Manage new versions and algorithms over time as your solution evolves

### Planned features

- Model management dashboard
- Data explorer
- Scheduled training
- More algorithms and libraries including custom algorithm support

## Installation

@@ -138,7 +227,7 @@ $ psql -c 'SELECT pgml.version()'
The two most important functions the framework provides are:

1. `pgml.train(project_name TEXT, objective TEXT, relation_name TEXT, y_column_name TEXT, algorithm TEXT DEFAULT NULL)`,
2. `pgml.predict(project_name TEXT, VARIADIC features DOUBLE PRECISION[])`.

The first function trains a model, given a human-friendly project name, a `regression` or `classification` objective, a table or view name which contains the training and testing datasets, and the name of the `y` column containing the target values. The second function predicts novel data points, given the project name of an existing model trained with `pgml.train`, and a list of features matching those used to train that model.
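
Putting the two together, an end-to-end call sequence might look like this; the table, columns, and feature values are illustrative only, and the prediction uses the variadic form from the signature above:

```sql
-- 1. Train: snapshot a hypothetical "houses" table and fit regression models
--    that predict its "price" column.
SELECT * FROM pgml.train('House prices', 'regression', 'houses', 'price');

-- 2. Predict: pass the features of a new data point in the same order as the
--    training columns to get a score back.
SELECT pgml.predict('House prices', 2104, 3, 1.5) AS predicted_price;
```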
