
Conversation

@junseokShim

@junseokShim junseokShim commented Dec 3, 2025

This PR adds ONE_HOT operator support to TensorFlow Lite Micro.

  • Ported ONE_HOT kernel from TensorFlow Lite to Micro:
    • tensorflow/lite/micro/kernels/one_hot.cc
    • tensorflow/lite/micro/kernels/one_hot.h
  • Added micro tests converted from Lite tests:
    • tensorflow/lite/micro/one_hot_test.cc
  • Verified with:
    • make -f tensorflow/lite/micro/tools/make/Makefile test_one_hot_test
    • clang-format and cpplint.py on modified files

This is intended to address #3078.

bug=fixes #3078

@junseokShim junseokShim requested a review from a team as a code owner December 3, 2025 11:15
@google-cla

google-cla bot commented Dec 3, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@junseokShim
Author

@googlebot I signed it!

@junseokShim junseokShim force-pushed the feature/port_onehot_operator branch from 26628b8 to 8ecac95 Compare December 3, 2025 11:46
@ddavis-2015
Member

@junseokShim Please add a line at the end of your PR description above: bug=fixes #3078. The CI will not pass without this line in the description.

@ddavis-2015
Member

@veblush Before this PR can be accepted, the LiteRT code will need to be updated to separate out the parameter parsing for ONE_HOT here. Operators that have individual parameter parsing methods are in the same file, located near the beginning of the file.

@ddavis-2015
Member

@junseokShim This does not seem to be a completed PR. Please adhere to the requirements as listed in issue #3078.

@junseokShim
Author

> @junseokShim This does not seem to be a completed PR. Please adhere to the requirements as listed in issue #3078.

OK, I will double-check the detailed requirements and make sure they are addressed.

Comment on lines +22 to +25
namespace tflite {
namespace ops {
namespace micro {
namespace one_hot {
Member

Please flatten the namespace to just tflite.
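For reference, the flattened layout could look like the following minimal sketch: the nested `ops::micro::one_hot` namespaces collapse into `tflite`, with file-local helpers kept in an anonymous namespace for internal linkage. The bodies below are stand-ins, not the real TFLM signatures (the actual `Prepare`/`Eval` take `TfLiteContext*` and `TfLiteNode*` and return `TfLiteStatus`).

```cpp
// Sketch only: stand-in bodies, not the real TFLM signatures.
namespace tflite {
namespace {  // file-local helpers stay internal without extra nested namespaces

int Prepare() { return 0; }  // placeholder for TfLiteStatus Prepare(TfLiteContext*, TfLiteNode*)
int Eval() { return 0; }     // placeholder for TfLiteStatus Eval(TfLiteContext*, TfLiteNode*)

}  // namespace
}  // namespace tflite
```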

Author

Okay, I'll apply that in the next commit.

Comment on lines -483 to +485
$(TENSORFLOW_ROOT)tensorflow/lite/micro/kernels/zeros_like.cc
$(TENSORFLOW_ROOT)tensorflow/lite/micro/kernels/zeros_like.cc \
$(TENSORFLOW_ROOT)tensorflow/lite/micro/kernels/one_hot.cc \
Member

New files should be in alphabetical order.
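For illustration, the new entry would move up so the kernel source list stays alphabetical. A hypothetical excerpt (the variable name and neighboring entries are assumptions for illustration, not copied from the actual Makefile):

```make
# Hypothetical excerpt: one_hot.cc slotted into alphabetical position.
MICROLITE_CC_KERNEL_SRCS := \
  $(TENSORFLOW_ROOT)tensorflow/lite/micro/kernels/neg.cc \
  $(TENSORFLOW_ROOT)tensorflow/lite/micro/kernels/one_hot.cc \
  $(TENSORFLOW_ROOT)tensorflow/lite/micro/kernels/pack.cc \
  $(TENSORFLOW_ROOT)tensorflow/lite/micro/kernels/zeros_like.cc
```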

Author

Okay, I'll fix that in the next commit too.

}
}

TfLiteStatus ResizeOutputTensor(TfLiteContext* context,
Member

The method should have a name more representative of its functionality.

Author

Okay, I'll rename the method in the next commit.

expected_dim_i = op_context.indices->dims->data[i - 1];
}

// If the size pre-allocated by the TFLM compiler (Offline Memory Planner)
Member

Change Offline Memory Planner to just Memory Planner please. There are several memory planners.

Author

Okay, I'll change the comment to just "Memory Planner".

Comment on lines +228 to +235
const TFLMRegistration* Register_ONE_HOT() {
static TFLMRegistration r = {};

r.prepare = one_hot::Prepare;
r.invoke = one_hot::Eval;

return &r;
}
Member

Please follow the registration implementation of all other kernels.
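For reference, a self-contained sketch of the registration shape used elsewhere in TFLM, where kernels return a `TFLMRegistration` by value through a `RegisterOp` helper rather than a pointer to a function-local static. The `TFLMRegistration` struct and `RegisterOp` here are simplified stand-ins (the real ones live in the TFLM headers and use `TfLiteContext*`/`TfLiteNode*` signatures):

```cpp
// Stand-in types mirroring (not reproducing) the TFLM registration pattern.
struct TFLMRegistration {
  int (*prepare)();
  int (*invoke)();
};

namespace tflite {
namespace micro {
// Simplified stand-in for the real RegisterOp helper.
TFLMRegistration RegisterOp(int (*prepare)(), int (*invoke)()) {
  TFLMRegistration r = {};
  r.prepare = prepare;
  r.invoke = invoke;
  return r;
}
}  // namespace micro

namespace {
int Prepare() { return 0; }  // placeholder kernel entry points
int Eval() { return 0; }
}  // namespace

// Registration follows the by-value helper style instead of
// returning a pointer to a function-local static.
TFLMRegistration Register_ONE_HOT() {
  return tflite::micro::RegisterOp(Prepare, Eval);
}
}  // namespace tflite
```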

Author

Okay, I will modify the registration implementation soon.

Member

Why is this file in the PR?

Author

It's my mistake; I will remove this file in the next commit.

Member

This file is not required. External declaration of the registration method is handled elsewhere in TFLM.

Author

@junseokShim junseokShim Dec 11, 2025

Okay, I will fix this in the next commit.

Member

Why is this file included in the PR?

Author

It's my mistake; I will remove this file in the next commit.
