[Build] Refactor OPE code to adhere to stricter build warnings #443
base: master
Conversation
Force-pushed from d07374b to ad04349
Force-pushed from 81a1700 to 63939bd
Codecov Report: All modified and coverable lines are covered by tests ✅

@@           Coverage Diff           @@
##           master     #443   +/-   ##
=======================================
  Coverage   88.23%   88.23%
=======================================
  Files         101      102       +1
  Lines        6356     6356
=======================================
  Hits         5608     5608
  Misses        748      748

☔ View full report in Codecov by Sentry.
@@ -76,17 +76,18 @@ pid_t start_worker_process(const std::string &control_queue_name, std::vector<st
     // Setup the environment
     ...
     // Null terminated char * array
-    char *env_arr[env.size() + 1];
+    std::vector<char *> env_arr(env.size() + 1);
In the case of a vector, all elements are already initialized to 0 (nullptr), because the constructor value-initializes each char*. So the next line is redundant (it was necessary for the plain array, though).
     : TypedNeuropodTensor<std::string>(copy_and_strip_last_dim(dims)), write_buffer_(this->get_num_elements())
 {
     auto byte_block = stdx::make_unique<SHMNeuropodTensor<uint8_t>>(dims, std::move(block), data, block_id);
     ...
     auto base_ptr = byte_block->get_raw_data_ptr();
-    auto max_len = dims[dims.size() - 1];
+    auto max_len = static_cast<size_t>(dims[dims.size() - 1]);
I think this cast is not needed, and as a result you don't need to cast it again at line 229.
@@ -27,16 +27,16 @@ namespace neuropod
 // This is useful for serialization and to wrap and/or copy tensors between backends.
 // For example, if you had a TorchNeuropodTensor and you wanted to get a tensor compatible
 // with `allocator` without making a copy, you could use this function
-std::shared_ptr<NeuropodTensor> wrap_existing_tensor(NeuropodTensorAllocator & allocator,
-                                                     std::shared_ptr<NeuropodTensor> tensor)
+static std::shared_ptr<NeuropodTensor> wrap_existing_tensor(NeuropodTensorAllocator & allocator,
Interesting: why is static necessary here? Did the compiler require it to be static?
Some thoughts:
In C++, an anonymous namespace was the recommended approach, and "static" was even deprecated for namespace-scope variables and functions; it was undeprecated in C++11.
This discussion clarifies the details: https://stackoverflow.com/questions/154469/unnamed-anonymous-namespaces-vs-static-functions
Anyway, please share details on it.
Looks good. Please check: I left a comment and a question.
Force-pushed from ad04349 to 39a9c4c
Summary:
Refactor code under source/neuropod/multiprocess to adhere to the stricter set of warnings introduced in #437.

Test Plan:
CI