Interoperability for multiple libraries using wgpu #704
I definitely like the idea and would love to contribute. With the current implementation I guess something along these lines would nearly work already:

```python
shader_a = .from_id('ripple')
shader_b = .from_id('heightmap')
shader_a.image = shader_b.image
shader_a.image.channels[0] = BufferTexture('a')
shader_a._prepare_render()
shader_a.show()
```

Another goal I eventually have is to use passes or full shaders from Shadertoy and display them in a fastplotlib subplot (I played around with network visualization, where I wanted to click on nodes and then see an interactive preview of said shader). I haven't looked deep enough, but it feels like it might be possible.

Apart from designing an API and using common globals, I worry that performance could become an issue if buffers and textures are copied, or even downloaded to the CPU and uploaded again. So maybe working with references, views and command buffers is a good idea, especially if they are executed (…).
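For reference, staying on the GPU is already possible with plain wgpu-py commands. A rough sketch of a GPU-side texture copy (no download/upload), assuming `src_tex` and `dst_tex` have the same size and format and were created with `COPY_SRC` / `COPY_DST` usage respectively:

```python
def copy_texture_gpu_side(device, src_tex, dst_tex, size):
    """Record and submit a texture-to-texture copy; the data never leaves the GPU."""
    encoder = device.create_command_encoder()
    encoder.copy_texture_to_texture(
        {"texture": src_tex, "mip_level": 0, "origin": (0, 0, 0)},
        {"texture": dst_tex, "mip_level": 0, "origin": (0, 0, 0)},
        size,  # (width, height, depth_or_array_layers)
    )
    device.queue.submit([encoder.finish()])
```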
I like this idea and it's something I've wondered how to do as well. A common array-like class that can just be passed around between pygfx and a future "wupy" library would be great. fastplotlib just manages pygfx buffers to make some things easier for users (for example, tiling textures when above the limit, automating …).

I've been playing with compute shaders for linalg on some weekends. Not sure how the bindings would have to be assigned to make all this work interoperably.
That's definitely the idea! I imagine something like this:

```python
from pygfx import Texture
from a_future_gpu_based_image_processing_lib import GpuImage
from my_fancy_compute_thingy import ImageCombiner

tex = Texture(...)
image = GpuImage(...)

combiner = ImageCombiner()
combiner.image1 = tex
combiner.image2 = image
combiner.dispatch(...)
```

Everything just works, as long as they follow some sort of spec.
So I envision not so much a base class that wraps a texture/buffer, but allowing different libs to work together because they all implement something like:

```python
class MyImageThatWrapsATexture:
    def __wgpu_interface__(self):
        return {
            # To be determined ...
        }
```

You can then have something like:

```python
def is_wgpu_compatible_texture(ob):
    try:
        wi = ob.__wgpu_interface__()
    except AttributeError:
        return False
    if wi['type'] == "texture":
        return True
    return False
```
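To make that slightly more concrete, here is a minimal sketch of what the protocol could look like. The dict keys (`type`, `texture`, `size`, `format`) and the `as_wgpu_texture` helper are purely hypothetical, since the actual contents are still to be determined:

```python
import wgpu


class MyImageThatWrapsATexture:
    """Hypothetical library object that owns a wgpu texture."""

    def __init__(self, device, width, height):
        self._texture = device.create_texture(
            size=(width, height, 1),
            format=wgpu.TextureFormat.rgba8unorm,
            usage=wgpu.TextureUsage.TEXTURE_BINDING | wgpu.TextureUsage.COPY_DST,
        )

    def __wgpu_interface__(self):
        # Hypothetical contents; the real spec is to be determined.
        return {
            "type": "texture",
            "texture": self._texture,  # the raw wgpu GPUTexture
            "size": self._texture.size,
            "format": self._texture.format,
        }


def as_wgpu_texture(ob):
    """Consumer-side helper: accept anything that speaks the protocol."""
    if isinstance(ob, wgpu.GPUTexture):
        return ob
    wi = ob.__wgpu_interface__()
    if wi["type"] != "texture":
        raise TypeError("expected a texture-like object")
    return wi["texture"]
```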
I now wonder if the offscreen canvas could be the intermediate that handles a swap chain etc., as it currently provides a canvas-like interface to easily be a render target (mainly …).

So:

- To provide access to data: expose buffers and textures via a common attribute/method. Usages could be a problem here, so maybe suggest always creating the resources with a lot of usages and then using views with the required usages further down?
- To provide access to render passes / post effects / compute-shader transformations (of the type "0-? data in, one storage texture/buffer out"): just use that object in the relevant library as usual. Maybe advocate accepting (and copying/converting if necessary) both textures and buffers.
- For shared metrics/diagnostics: one global device (somehow collect all required limits/features before requesting it?). I am thinking about situations where a lot of data is used and it would be great to know how much VRAM is used/available, maybe even see in detail which resource is using it.
I'm not sure if I fully follow you. But in any case, to render to an actual surface (OS window), a texture must be obtained via the canvas context's `get_current_texture()`.
That's indeed a good point! For textures it can indeed help to turn on many usages on the texture, and use the required subset in the views. But I think some usages cannot be used together. And for buffers this is not a solution. We will need to document this explicitly: anyone creating classes with …
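As an illustration of that pattern, here is a rough wgpu-py sketch (not part of any agreed spec): the texture is created once with a broad set of usage flags, and consumers get plain views of it.

```python
import wgpu
from wgpu.utils import get_default_device

device = get_default_device()

# Create the texture with a broad set of usages up front, so that different
# consumers (renderer, compute pass, readback) can all work with it.
texture = device.create_texture(
    size=(512, 512, 1),
    format=wgpu.TextureFormat.rgba8unorm,
    usage=(
        wgpu.TextureUsage.TEXTURE_BINDING      # sample it in a shader
        | wgpu.TextureUsage.RENDER_ATTACHMENT  # render into it
        | wgpu.TextureUsage.COPY_SRC           # copy it or read it back
    ),
)

# Each consumer gets its own view of the same underlying texture.
sample_view = texture.create_view()
render_target_view = texture.create_view()
```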
That's more of a convenience thing, like it would be really easy if I could render to a fpl subplot the way I render to a gui or offscreen canvas. 😅
For example, color attachment (render target) and texture binding will conflict. But you can create a resource with both and then deconflict with views. However, if you have multiple views of the same texture, they will still conflict inside a single render pass (maybe also pipeline?). It works really nicely here: pygfx/shadertoy#43

This makes me wonder why not all usages are simply available. It's a good question to ask upstream or even on the spec; maybe I will browse through some meeting notes to find an answer. Performance or security (like not allowing COPY_SRC) could be one reason.
If that's the case, that'd be great! Indeed views can still conflict, but in cases where they do, you're probably doing something weird anyway (rendering to a texture that is also used as 'input').
Good question. I suspect it's also sanity; specifying the intended usage beforehand, and the driver (wgpu) making sure it is not used for things the developer did not intend.
One more thought: requiring (or at least suggesting) good labels for shared resources. Perhaps even a schema. For example, I include a repr of the relevant object.
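As a sketch of what such a convention could look like (the `mylib:heightmap` schema and the `HeightmapLayer` class are made up for illustration):

```python
import wgpu
from wgpu.utils import get_default_device

device = get_default_device()


class HeightmapLayer:
    """Hypothetical object that owns a GPU resource."""

    def __repr__(self):
        return f"<HeightmapLayer id={id(self):#x}>"


layer = HeightmapLayer()

# Hypothetical label schema: "<library>:<role> <repr of the owning object>"
buffer = device.create_buffer(
    label=f"mylib:heightmap {layer!r}",
    size=1024,
    usage=wgpu.BufferUsage.STORAGE | wgpu.BufferUsage.COPY_SRC,
)
```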
We cannot interoperate with data structures based on OpenGL/OpenCL, but we can make sure that multiple libraries that use wgpu-py can interoperate. With that I mainly mean that buffers and textures created by one tool can be consumed by another. E.g. write a shadertoy, and use the resulting texture in Pygfx or Fastplotlib. Now that we're looking into compute more (see pygfx/pygfx#1063) this becomes more relevant; being able to use the GPU to do compute, and use resulting buffers/textures to render stuff. Or render stuff and run compute on the result.
To make this work we basically need two things:
- A shared device, so that resources created by different libs live on the same `GPUDevice`. There is already something for this in `wgpu.utils`. We could expand that.
- A way to obtain e.g. a `GPUBuffer` from a `pygfx.Buffer`. We could implement something similar to the Python array protocol.

This comes down to defining a spec at wgpu-py, and implementing it in Pygfx and others.
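For the first point, a minimal sketch of what sharing a device via `wgpu.utils` could look like today (the consuming library at the end is hypothetical):

```python
import wgpu
from wgpu.utils import get_default_device

# Both libraries ask wgpu-py for the same default device, so the buffers and
# textures they create live on the same GPUDevice and can be combined in the
# same bind groups and command buffers.
device = get_default_device()

buffer = device.create_buffer(
    size=4 * 1024,
    usage=wgpu.BufferUsage.STORAGE | wgpu.BufferUsage.COPY_DST,
)

# A hypothetical consumer from another library could then bind this buffer
# directly, without any CPU round-trip:
# other_lib.SomeComputeStep(device=device, input_buffer=buffer)
```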