# Supported Languages
We support a number of different languages:
- [Python](python)
- [JavaScript](javascript)
- Rust
Our SDK is written entirely in Rust and translated from Rust to our other supported languages. See each individual language for an overview and a specification of how to use the SDK.
Before running any examples, first install dependencies and set the `DATABASE_URL` environment variable:
```
npm i
export DATABASE_URL={YOUR DATABASE URL}
```
## [Semantic Search](./semantic_search.js)
This is a basic example that performs semantic search on a collection of documents. Embeddings are created using the `intfloat/e5-small` model. The results are documents semantically similar to the query. Finally, the collection is archived.
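The flow described above can be sketched roughly as follows. This is a minimal sketch, not the example itself: it assumes an SDK version exposing `newCollection`, `newModel`, `newSplitter`, `newPipeline`, and the `query().vector_recall()` builder, and the collection, pipeline, and document names are hypothetical. It also requires a running PostgresML database reachable via `DATABASE_URL`.

```javascript
const pgml = require("pgml");

const main = async () => {
  // Create a collection and attach a pipeline; newModel() with no
  // arguments is assumed to default to intfloat/e5-small.
  const collection = pgml.newCollection("semantic_search_collection");
  const pipeline = pgml.newPipeline(
    "semantic_search_pipeline",
    pgml.newModel(),
    pgml.newSplitter(),
  );
  await collection.add_pipeline(pipeline);

  // Upsert a few documents; embeddings are generated in the database.
  await collection.upsert_documents([
    { id: "doc_one", text: "PostgresML brings machine learning to Postgres." },
    { id: "doc_two", text: "The weather today is sunny and warm." },
  ]);

  // Retrieve the documents closest to the query in embedding space.
  const results = await collection
    .query()
    .vector_recall("What is PostgresML?", pipeline)
    .limit(2)
    .fetch_all();
  console.log(results);

  // Clean up by archiving the collection.
  await collection.archive();
};

main().then(() => process.exit(0));
```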
## [Question Answering](./question_answering.js)
This example finds documents relevant to a question in the collection of documents. The question is passed to vector search to retrieve the documents that match most closely in the embedding space. A score is returned with each search result.
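The score-bearing retrieval step can be sketched like this, assuming `fetch_all()` yields `[score, text, metadata]` tuples and that `newPipeline` with only a name looks up an existing pipeline; the collection and pipeline names are hypothetical, and a live database is required.

```javascript
const pgml = require("pgml");

const searchWithScores = async (question) => {
  const collection = pgml.newCollection("question_answering_collection");
  const pipeline = pgml.newPipeline("question_answering_pipeline");

  // Vector search returns the closest documents with a similarity score.
  const results = await collection
    .query()
    .vector_recall(question, pipeline)
    .limit(3)
    .fetch_all();

  // Each result carries its score first, so callers can filter weak matches.
  for (const [score, text] of results) {
    console.log(`score=${score.toFixed(3)} text=${text}`);
  }
  return results;
};
```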
## [Question Answering using Instructor Model](./question_answering_instructor.js)
In this example, we will use the `hkunlp/instructor-base` model to build text embeddings instead of the default `intfloat/e5-small` model. We will also show how to use the `vector_recall` result as the `context` for a Hugging Face question-answering model, using `Builtins.transform()` to run the model on the database.
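The two ideas in this section, an instruction-tuned embedding model plus `Builtins.transform()`, can be sketched together as below. The instruction strings, the `newModel` parameter shape, and the names of the collection and pipeline are assumptions for illustration, and a live PostgresML database is required.

```javascript
const pgml = require("pgml");

const main = async () => {
  const collection = pgml.newCollection("qa_instructor_collection");

  // Use an instructor model instead of the default intfloat/e5-small;
  // instructor models take a natural-language instruction per input.
  const model = pgml.newModel("hkunlp/instructor-base", "pgml", {
    instruction: "Represent the document for retrieval: ",
  });
  const pipeline = pgml.newPipeline(
    "qa_instructor_pipeline",
    model,
    pgml.newSplitter(),
  );
  await collection.add_pipeline(pipeline);

  // Queries get their own instruction at recall time.
  const question = "What is PostgresML?";
  const results = await collection
    .query()
    .vector_recall(question, pipeline, {
      instruction:
        "Represent the question for retrieving supporting documents: ",
    })
    .limit(3)
    .fetch_all();

  // Use the recalled text as context for a question-answering model,
  // run inside the database via Builtins.transform().
  const context = results.map(([, text]) => text).join(" ");
  const builtins = pgml.newBuiltins();
  const answer = await builtins.transform("question-answering", [
    JSON.stringify({ question, context }),
  ]);
  console.log(answer);

  await collection.archive();
};

main().then(() => process.exit(0));
```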