Partially fix panics when setting WGPU_SETTINGS_PRIO=webgl2
#18113
base: main
Conversation
@@ -350,6 +350,7 @@ fn create_downsample_depth_pipelines(
         .get_downlevel_capabilities()
         .flags
         .contains(DownlevelFlags::COMPUTE_SHADERS)
+        || (render_device.limits().max_compute_workgroup_storage_size == 0)
This check is a little awkward - there's no straightforward "is compute available" on `wgpu::Limits`.
This is bizarre. The default for WebGPU is fairly high (https://gpuweb.github.io/gpuweb/#dom-supported-limits-maxcomputeworkgroupstoragesize). I didn't even know some platforms support compute shaders, but not workgroup-shared data.
@cwfitzgerald, can we get some input from wgpu/gpuweb people? Is this valid?
I should have added a comment to clarify. I don't think this can ever happen on a real GPU. But the `downlevel_webgl2_defaults()` override simulates a lack of compute by setting all `max_compute_*` limits to zero. I arbitrarily picked one limit as a canary.

I'll see if I can make this cleaner or come up with an alternative.
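For context, a minimal sketch of the canary idea (my illustration, not the PR's code), assuming plain `wgpu::Limits`:

```rust
use wgpu::Limits;

// Sketch only: `Limits::downlevel_webgl2_defaults()` sets every `max_compute_*`
// limit to zero, so checking any single one of them is enough to detect the
// simulated "no compute" configuration.
fn compute_limits_zeroed(limits: &Limits) -> bool {
    limits.max_compute_workgroup_storage_size == 0
}

fn main() {
    // The WebGL2 downlevel defaults trip the canary; the normal defaults do not.
    assert!(compute_limits_zeroed(&Limits::downlevel_webgl2_defaults()));
    assert!(!compute_limits_zeroed(&Limits::default()));
}
```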
Clarified in d89d39c. Still kinda icky though.

I also considered working out some real limits for the mip generation, e.g. a minimum for `max_compute_workgroup_size_x/y/z`. But I'm not confident I can get that right.
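For illustration only, a rough sketch of that alternative approach; the workgroup size of 8 is a hypothetical placeholder, not the actual downsampling shader's dimensions:

```rust
use wgpu::Limits;

// Hypothetical required workgroup size for the downsampling pass; the real
// shader's declared dimensions would have to be substituted here.
const REQUIRED_WORKGROUP_SIZE: u32 = 8;

// Sketch of the "real limits" idea mentioned above: require each compute
// workgroup dimension (and the total invocation count) to cover the shader.
fn workgroup_limits_sufficient(limits: &Limits) -> bool {
    limits.max_compute_workgroup_size_x >= REQUIRED_WORKGROUP_SIZE
        && limits.max_compute_workgroup_size_y >= REQUIRED_WORKGROUP_SIZE
        && limits.max_compute_invocations_per_workgroup
            >= REQUIRED_WORKGROUP_SIZE * REQUIRED_WORKGROUP_SIZE
}
```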
> I didn't even know some platforms support compute shaders, but not workgroup-shared data.

You don't have to worry about this; compute shaders imply some amount of workgroup-shared data.

> `downlevel_webgl2_defaults()`

You should only need to check the `COMPUTE_SHADERS` downlevel flag to know if compute shaders are supported.
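A minimal sketch of that check against the wgpu adapter API:

```rust
// Sketch: gate compute paths purely on the adapter's downlevel flag, as
// suggested here.
fn compute_shaders_supported(adapter: &wgpu::Adapter) -> bool {
    adapter
        .get_downlevel_capabilities()
        .flags
        .contains(wgpu::DownlevelFlags::COMPUTE_SHADERS)
}
```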
As far as I can tell, checking `COMPUTE_SHADERS` on the adapter is not sufficient when the device limits have been overridden via `DeviceDescriptor::limits` (with the intent of simulating webgl2 limits on another platform). If I rely only on `COMPUTE_SHADERS`:
$env:WGPU_SETTINGS_PRIO = "webgl2"
cargo run --example occlusion_culling
In Device::create_bind_group_layout, label = 'downsample depth bind group layout'
Too many bindings of type StorageTextures in Stage ShaderStages(COMPUTE), limit is 0, count was 12. Check the limit `max_storage_textures_per_shader_stage` passed to `Adapter::request_device`
I guess there's a deeper question on whether setting device limits is the right way to simulate another platform's limits?
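To make the mismatch concrete, a hedged sketch of a combined guard (Bevy's `RenderAdapter`/`RenderDevice` resources are real; the function is my own, and I use the `wgpu` types directly rather than Bevy's re-exports, whose paths may differ):

```rust
use bevy::render::renderer::{RenderAdapter, RenderDevice};
use wgpu::DownlevelFlags;

// Sketch of the kind of guard discussed in this thread: the adapter flag
// covers hardware that genuinely lacks compute, while the device limit also
// catches WGPU_SETTINGS_PRIO=webgl2 lowering `DeviceDescriptor::limits`.
fn can_use_compute_downsampling(
    render_adapter: &RenderAdapter,
    render_device: &RenderDevice,
) -> bool {
    render_adapter
        .get_downlevel_capabilities()
        .flags
        .contains(DownlevelFlags::COMPUTE_SHADERS)
        && render_device.limits().max_compute_workgroup_storage_size != 0
}
```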
If we can get this set up in CI, I would love automated tests for this. It's not feasible for me to manually test this kind of stuff myself, but it would be really nice to have CI let me know when we're going above the WebGL2 or WebGPU minspec.
Overview
Fixes #17869.
Summary
`WGPU_SETTINGS_PRIO` changes various limits on `RenderDevice`. This is useful for simulating platforms with lower limits. However, some plugins only check the limits on `RenderAdapter` (the actual GPU), and those limits are not affected by `WGPU_SETTINGS_PRIO`. So the plugins try to use features that are unavailable and wgpu says "oh no". See #17869 for details.

The PR adds various checks on `RenderDevice` limits. This is enough to get most examples working, but some are not fixed (see below).
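As a hedged illustration of that distinction (the resources are Bevy's; this diagnostic system and the particular limit are just for demonstration):

```rust
use bevy::prelude::*;
use bevy::render::renderer::{RenderAdapter, RenderDevice};

// Hypothetical diagnostic system: with WGPU_SETTINGS_PRIO=webgl2 on a native
// GPU, the adapter still reports the hardware's real limits while the device
// was created with the lowered ones, so features must be gated on the device.
fn log_limit_mismatch(adapter: Res<RenderAdapter>, device: Res<RenderDevice>) {
    let adapter_max = adapter.limits().max_storage_textures_per_shader_stage;
    let device_max = device.limits().max_storage_textures_per_shader_stage;
    info!("adapter allows {adapter_max} storage textures per stage, device allows {device_max}");
}
```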
Testing

Not Fixed
While testing I found a few other cases of limits being broken.
"Compatibility" settings (WebGPU minimums) breaks native in various examples.
occlusion_culling
breaks fake webgl.occlusion_culling
breaks real webgl.OIT breaks fake webgl.
OIT breaks real webgl