
Conversation


@ChuckHend ChuckHend commented Jan 17, 2024

The autodeploy feature deploys the newly trained model only if its f1 score is higher than that of the currently deployed model. However, there are cases where we aren't able to compute the f1 score, such as when the test set contains only positive labels (which results in a divide by zero). This can happen if the training data is sorted by labels in Postgres and test_sampling => 'first', or if the training set is badly unbalanced and we get unlucky with test_sampling => 'random'.

In either case, there are legitimate reasons the f1 score may not exist. So, rather than crash, we can do two things:

  1. not supersede the currently deployed model -- if we can't compute the metrics, it's safe to assume the new model is not better than the one currently deployed
  2. give the user a meaningful warning log so they can troubleshoot
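The guard described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual Rust implementation; the function names (`f1_score`, `should_deploy`) and the warning text are hypothetical. It shows how f1 becomes undefined when a precision or recall denominator is zero, and how the deploy decision can fall back to keeping the current model instead of crashing.

```python
def f1_score(tp: int, fp: int, fn: int):
    """Return F1, or None when it cannot be computed (divide by zero).

    Precision = tp / (tp + fp) and recall = tp / (tp + fn); a degenerate
    test split (e.g. only one label present) can zero out a denominator.
    """
    if tp + fp == 0 or tp + fn == 0:
        return None
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    if precision + recall == 0:
        return None
    return 2 * precision * recall / (precision + recall)


def should_deploy(new_f1, deployed_f1) -> bool:
    """Deploy only when the new model's f1 exists and beats the deployed one."""
    if new_f1 is None:
        # Hypothetical warning text: surface a hint instead of crashing.
        print(
            "WARNING: could not compute f1 for the new model "
            "(check label balance and test_sampling); "
            "keeping the currently deployed model"
        )
        return False
    return deployed_f1 is None or new_f1 > deployed_f1
```

For example, a test split where the model makes no positive predictions (`tp + fp == 0`) yields `f1_score(0, 0, 10) is None`, and `should_deploy` then returns `False`, leaving the current deployment in place.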

@montanalow montanalow marked this pull request as ready for review January 17, 2024 06:08
@montanalow montanalow merged commit 1882ca3 into postgresml:master Jan 18, 2024
