
Conversation

@levkk (Contributor) commented Jun 19, 2024

Features

  1. [SDK] Add support for automatic batching for document upserts. Looking for review on the API before moving forward with the PR, i.e. before adding tests.
from pgml import Collection, Batch

collection = Collection("my_collection")
batch = Batch(collection, 25, {"merge": True})

await batch.upsert_documents([{"id": 1}]) # Doesn't upsert yet

for i in range(23):
    await batch.upsert_documents([{"id": i}]) # Doesn't upsert yet

# Upserts whatever is in the current batch
# and appends the document to the next batch
await batch.upsert_documents([{"id": 1}])

# Upserts the final batch
await batch.finish()

Bugs

  1. [SDK] Fixed formatting for long SQL queries.
  2. [SDK] Fixed a couple of spelling typos.

@levkk levkk requested a review from SilasMarvin June 19, 2024 14:36
@levkk levkk marked this pull request as draft June 19, 2024 14:38
@SilasMarvin (Contributor) commented Jun 19, 2024

I'm not sure why a user would use that instead of just:

from pgml import Collection

collection = Collection("my_collection")
# batch = Batch(collection, 25, {"merge": True})
batch = []

# await batch.upsert_documents([{"id": 1}]) # Doesn't upsert yet
batch.append({"id": 1})

for i in range(23):
    # await batch.upsert_documents([{"id": i}]) # Doesn't upsert yet
    batch.append({"id": i})

# Upserts whatever is in the current batch
# and appends the document to the next batch
# await batch.upsert_documents([{"id": 1}])
await collection.upsert_documents(batch, {"merge": True})

# Upserts the final batch
# await batch.finish()

@SilasMarvin (Contributor)

Oh I see, the automatic upsert once the batch hits the threshold is nice, but it is a bit confusing. I think most people in the Python world are used to batching systems already built into the dataset they're operating on, for example: https://huggingface.co/docs/datasets/en/process#batch-processing. Not saying we shouldn't add it, but maybe we should use a clearer name, something like AutoBatchUpsert? I'm not sure.
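For example, with a Hugging Face dataset the whole loop collapses to something like this (a rough sketch; the dataset name is a placeholder, and the column-to-row transposition depends on what your documents look like):

from datasets import load_dataset

dataset = load_dataset("my_dataset", split="train")  # placeholder dataset

# Dataset.iter() yields each batch as a dict of column lists;
# transpose it into row-shaped documents before upserting.
for columns in dataset.iter(batch_size=25):
    docs = [dict(zip(columns, row)) for row in zip(*columns.values())]
    await collection.upsert_documents(docs, {"merge": True})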

@montanalow (Contributor)

Why not use the batch_size argument on Collection.upsert_documents for this functionality?

@levkk (Contributor, Author) commented Jun 19, 2024

> Python world are used to using batching systems already built into the dataset

Datasets are only one of many data sources. For example, the use case that triggered my desire for this feature was streaming WET files from a warcio.archiveiterator.ArchiveIterator, which seemingly doesn't have batching support built in. That's typical of the non-machine-learning libraries and toolkits people use to build regular web apps. Is it easy to write the batching logic yourself? Seemingly so, but it's really easy to forget to flush the last, often incomplete, batch once the source stream ends, especially when you have to do it yourself, over and over, for every use case in your code.
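Concretely, the hand-rolled version looks like this (the stream and the record-to-document conversion are placeholders), and the flush after the loop is exactly the step that gets forgotten:

batch = []

for record in stream:  # e.g. a warcio ArchiveIterator
    batch.append(to_document(record))  # hypothetical conversion helper
    if len(batch) == 25:
        await collection.upsert_documents(batch, {"merge": True})
        batch = []

# Easy to forget: flush the trailing incomplete batch.
if batch:
    await collection.upsert_documents(batch, {"merge": True})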

> Why not use the batch_size argument on Collection.upsert_documents for this functionality?

batch_size doesn't handle the incomplete-batch scenario, where len(records) % batch_size != 0; hence the need for finish(), a.k.a. flush(). You have to tell the collection when you're done writing and no more records will be added to whatever incomplete batch it has been buffering.
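To make the finish() semantics concrete, here's a rough sketch of what Batch could do internally (one plausible implementation, not necessarily what this PR ships):

class Batch:
    def __init__(self, collection, size, args=None):
        self.collection = collection
        self.size = size
        self.args = args or {}
        self.buffer = []

    async def upsert_documents(self, documents):
        self.buffer.extend(documents)
        # Flush full batches as soon as the buffer reaches the batch size.
        while len(self.buffer) >= self.size:
            head, self.buffer = self.buffer[:self.size], self.buffer[self.size:]
            await self.collection.upsert_documents(head, self.args)

    async def finish(self):
        # Flush whatever incomplete batch is still buffered.
        if self.buffer:
            await self.collection.upsert_documents(self.buffer, self.args)
            self.buffer = []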

@montanalow (Contributor)

Right, having to call flush/finish is only an issue because of this new API you’re introducing. The example Silas gave doesn’t have the issue.
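For completeness: on Python 3.12+, itertools.batched gives the chunked variant without any finish() call, since the last tuple it yields is the trailing incomplete batch. A sketch, assuming documents is an iterable of pgml documents:

from itertools import batched  # Python 3.12+

# batched() never drops the remainder: the last tuple it yields
# is the incomplete batch, if there is one.
for chunk in batched(documents, 25):
    await collection.upsert_documents(list(chunk), {"merge": True})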
