
Updated Resnet50 model for Blackhole, with Batch = 32 #17985

Merged: 1 commit into main on Mar 2, 2025

Conversation

mywoodstock (Contributor) commented on Feb 19, 2025

Ticket

#17393
#18341

Problem description

This PR enables larger batch sizes (20 and 32) for Resnet50 on Blackhole.

What's changed

Some updates to the model itself to allow batch 32.
More updates to the fold op to allow non-rectangular core grids (see the sketch below).
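
For illustration, a non-rectangular core grid in ttnn is expressed as a CoreRangeSet built from a union of rectangular CoreRanges. A minimal sketch follows; the extents are made up for illustration and are not the grids the model actually uses:

    import ttnn

    # A non-rectangular grid: an 8x8 block plus a partial ninth row.
    # These extents are illustrative only; the actual batch-32 grids are
    # defined in the model and fold-op code.
    core_range_set = ttnn.CoreRangeSet(
        {
            ttnn.CoreRange(ttnn.CoreCoord(0, 0), ttnn.CoreCoord(7, 7)),
            ttnn.CoreRange(ttnn.CoreCoord(0, 8), ttnn.CoreCoord(3, 8)),
        }
    )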

@mywoodstock force-pushed the asarje/rn50-bh-largebatch-20250218 branch 2 times, most recently from 72712db to 90fb639 on February 26, 2025 23:37
@mywoodstock changed the title from "[DO NOT MERGE] Asarje/rn50 bh largebatch 20250218" to "Updated Resnet50 model for Blackhole, with Batch = 32" on Feb 26, 2025
@mywoodstock marked this pull request as ready for review on February 26, 2025 23:45
@@ -202,7 +202,7 @@ void MAIN {
 #ifdef ARCH_BLACKHOLE
     // FIXME: This is a temporary workaround to avoid hangs on blackhole.
     // https://github.com/tenstorrent/tt-metal/issues/16439
-    for (uint32_t i = 0; i < 10; i++) {
+    for (uint32_t i = 0; i < 100; i++) {
Contributor:
Do we really need to add an order of magnitude to the delay?

Someone said this was a 30% slowdown; with this change it becomes a 300% slowdown.

mywoodstock (Contributor, Author):

Oops, this snuck in; I didn't mean to push it.

@@ -914,6 +966,19 @@ def run(self, input_tensor, device, ops_parallel_config, conv_op_cache={}) -> tt
),
}
)
+# ## 128
+# core_range_set = ttnn.CoreRangeSet(
Contributor:
Please remove the commented-out code here and in other parts of the PR.

        and layer_module
        and (layer_module == "layer1_module2" or layer_module == "layer1_module3")
    ):
        conv_kwargs_2["conv_config"].act_block_h_override = 0
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

At this point it may be better to have a function that handles all three cases, and then just do something like:

if f(batch_size, layer_module):
    conv_kwargs_2["conv_config"].act_block_h_override = 0
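
A minimal sketch of that refactor, assuming a hypothetical helper named needs_act_block_h_reset (the name and the exact case list are illustrative; only two of the cases are visible in this snippet):

    # Hypothetical predicate consolidating the act_block_h_override special
    # cases; the real conditions live in the ResNet50 model code.
    def needs_act_block_h_reset(batch_size, layer_module):
        special_cases = {
            (32, "layer1_module2"),
            (32, "layer1_module3"),
        }
        return layer_module is not None and (batch_size, layer_module) in special_cases

    if needs_act_block_h_reset(batch_size, layer_module):
        conv_kwargs_2["conv_config"].act_block_h_override = 0

This keeps the call site down to a single condition and collects the special cases in one auditable place.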

@mywoodstock force-pushed the asarje/rn50-bh-largebatch-20250218 branch from af8258a to cad363d on February 27, 2025 17:19
@mywoodstock force-pushed the asarje/rn50-bh-largebatch-20250218 branch from 0648d6a to 752a333 on February 27, 2025 17:35
@@ -273,4 +273,5 @@ void kernel_main() {
     } // act_w_num_outer
     cb_pop_front(tilized_in0_cb_id, act_block_num_tiles);
 }
+noc_async_write_barrier();
Contributor:
Why is this needed?
Should the width-sharded reader kernel also be updated similarly?

s-jovic (Contributor) commented on Feb 28, 2025:
Just be careful here; in my experience, the width-sharded reader kernel hung when I added the barrier. See #18341.

mywoodstock (Contributor, Author) commented on Feb 28, 2025:
Yes, we need these, since the watcher was catching outstanding writes for these kernels. And with the barrier added, these kernels, the width-sharded one included, do not hang.

@mywoodstock force-pushed the asarje/rn50-bh-largebatch-20250218 branch 3 times, most recently from 6975cc9 to 20bf23b on March 1, 2025 02:56
@mywoodstock force-pushed the asarje/rn50-bh-largebatch-20250218 branch from 20bf23b to bc243cc on March 1, 2025 23:10
@mywoodstock merged commit 8150c70 into main on Mar 2, 2025
16 checks passed
@mywoodstock deleted the asarje/rn50-bh-largebatch-20250218 branch on March 2, 2025 00:19