[C++][Python] DLPack on FixedShapeTensorArray/FixedShapeTensorScalar #38868
Comments
Would it make sense to present contiguous buffers of a `ChunkedArray` as DLPack tensors?
Not sure I understand. You mean exposing each contiguous chunk as a DLPack tensor, or representing the `ChunkedArray` as a whole?
The first option. I'm not sure the second is even an option - I'm assuming non-contiguous (chunked) arrays can't be represented as DLPack tensors?
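A minimal sketch of that first option, assuming the `Array`-level `__dlpack__` from #33984: since DLPack needs a single contiguous buffer, a `ChunkedArray` could only be exported one contiguous chunk at a time.

```python
import pyarrow as pa

# DLPack requires one contiguous buffer, so a ChunkedArray could only be
# exported chunk by chunk, each chunk being a contiguous pyarrow.Array.
chunked = pa.chunked_array([pa.array([1, 2, 3]), pa.array([4, 5])])
for chunk in chunked.chunks:
    capsule = chunk.__dlpack__()  # per-chunk export (Array-level support from #33984)
```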
Ah, maybe the issue description isn't clear. What I meant was: are we talking about chunks of tensors or chunks of primitive arrays? Then we can make it clearer and just talk about tensor extension arrays vs. primitive arrays, since chunking can't be supported either way - DLPack needs a contiguous buffer. As for the second part: for (non-chunked) primitive arrays this is already being worked on in #33984. What I would really like to see is DLPack support for `FixedShapeTensorArray`. That is the case I had in mind when opening this issue, and the question is: would it be correct to present the whole tensor extension storage array as a single DLPack tensor, or would it be more correct to represent each specific tensor in the array (i.e. a `FixedShapeTensorScalar`)?
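To make the two options concrete, a small sketch (shapes chosen purely for illustration) of what each would mean for an array of two 2x3 tensors:

```python
import pyarrow as pa

# An extension array holding two 2x3 int32 tensors.
tensor_type = pa.fixed_shape_tensor(pa.int32(), (2, 3))
storage = pa.array([list(range(6)), list(range(6, 12))], pa.list_(pa.int32(), 6))
arr = pa.ExtensionArray.from_storage(tensor_type, storage)

# Option 1: export the whole storage array as one DLPack tensor of
#           shape (2, 2, 3) - the array length becomes a leading dimension.
# Option 2: export each element (a FixedShapeTensorScalar) as its own
#           DLPack tensor of shape (2, 3), preserving the per-tensor shape.
```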
If Array-sized DLPack tensors can easily be sliced into Scalar-sized tensors (all other things being equal), I'd be slightly in favor of having Array-sized tensors, just because that would mean keeping fewer objects around :)
The reason I am hesitant to add a direct `__dlpack__` to `FixedShapeTensorArray` is the dimensionality. All our arrays (including `FixedShapeTensorArray`) are logically one-dimensional. I think it would be inconsistent if methods like `__dlpack__` exposed one of them as an n-dimensional object. An alternative way to expose the full nD array backing the `FixedShapeTensorArray` is to first convert it to a `Tensor` and add DLPack support there. One might argue that a downside of this is that you then need to do an additional conversion call before exporting.
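A rough sketch of how that alternative flow might look from Python; both `to_tensor` and `__dlpack__` on `Tensor` are hypothetical here (neither binding exists at the time of this discussion):

```python
import pyarrow as pa

# Build a small FixedShapeTensorArray to export.
tensor_type = pa.fixed_shape_tensor(pa.int32(), (2, 3))
storage = pa.array([list(range(6))], pa.list_(pa.int32(), 6))
arr = pa.ExtensionArray.from_storage(tensor_type, storage)

# Hypothetical API, following the alternative above: first materialize the
# nD view as a pyarrow.Tensor, then export that via DLPack.
tensor = arr.to_tensor()       # hypothetical Python binding to the C++ conversion
capsule = tensor.__dlpack__()  # hypothetical: DLPack export on the Tensor class
```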
Then we should probably also support converting primitive `Array`s to `Tensor`. Also, fortunately DLPack doesn't support nulls, because neither does our `Tensor` class :-)
I think going through our `Tensor` class sounds good.
And then we would add DLPack support to the `Tensor` class 👍 And this will all have to be done in C++.
On any … :-P
Note that for the more minimal scope of `FixedShapeTensorArray` → `Tensor` conversion, this already exists in C++, so here it's just a matter of adding Python bindings for it. (I would argue that this is more important than implementing a generic Array → Tensor conversion, because primitive arrays already have DLPack support coming through #33984.)
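For comparison, a path that already works from Python today (assuming a pyarrow version where `FixedShapeTensorArray.to_numpy_ndarray` is available) goes through NumPy rather than DLPack:

```python
import pyarrow as pa

# Same example array of two 2x3 int32 tensors as above.
tensor_type = pa.fixed_shape_tensor(pa.int32(), (2, 3))
storage = pa.array([list(range(6)), list(range(6, 12))], pa.list_(pa.int32(), 6))
arr = pa.ExtensionArray.from_storage(tensor_type, storage)

# Existing conversion: a NumPy ndarray of shape (len(arr), *tensor_shape),
# here (2, 2, 3).
ndarray = arr.to_numpy_ndarray()
```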
Describe the enhancement requested
With the work in #33984, support for DLPack (exporting) will be added to `Array`s. It would be great to do the same for the `FixedShapeTensorArray` (and `Tensor`) classes.

I would like to open a discussion to decide how to approach the implementation of the DLPack protocol in the case of `FixedShapeTensorArray`. If we follow the same logic for `FixedShapeTensorArray` as in #38472, the dimensionality of the object would change, since a tensor extension type array is an array of tensors and not a single tensor. This is not necessarily the right approach. Maybe it would make more sense to add DLPack support to `FixedShapeTensorScalar` instead.

Will be happy to receive feedback on this!
cc @jorisvandenbossche @rok
Component(s)
C++, Python