The title says it all: why are there three versions? If I have a mixed workload (both high and low bit-depth images), I would expect the decision about how much memory to use to depend on the kind of image being encoded or decoded. Instead, it appears that the high-depth builds use a fixed-size data type regardless of whether the image being decoded is high or low bit depth.

Why is this? I'm genuinely curious why you have to commit to decoding images at a statically defined depth rather than choosing how much depth you need at runtime. Perhaps I sound entitled, but I find this inconvenient for applications that only occasionally need to encode or decode high bit-depth images and could otherwise work with less memory on lower-depth ones. Surely I'm missing something obvious here.
Replies: 1 comment 1 reply

This is done because that is how the underlying library, ImageMagick, works. It is native code and has three different builds for the different depths. It might be possible to use these side by side, but I have never tested that.
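For reference, the quantum depth is fixed when the native library is compiled, so the most a program can do at runtime is ask which depth the build it linked against uses. Here is a minimal C sketch against ImageMagick's MagickWand API (assuming ImageMagick 7 with development headers installed; `MagickGetQuantumDepth` and `MagickGetQuantumRange` simply report the compile-time values):

```c
/* quantum_depth.c - report the quantum depth the linked ImageMagick build
 * was compiled with. Build, for example, with:
 *   cc quantum_depth.c -o quantum_depth $(pkg-config --cflags --libs MagickWand)
 */
#include <stdio.h>
#include <MagickWand/MagickWand.h>  /* ImageMagick 7 layout; IM6 uses <wand/MagickWand.h> */

int main(void)
{
  size_t depth = 0;
  size_t range = 0;

  MagickWandGenesis();

  /* Both values are fixed when the native library is compiled, so they are
   * the same for every image processed by this build (e.g. "Q16" / 65535). */
  const char *depth_name = MagickGetQuantumDepth(&depth);
  const char *range_name = MagickGetQuantumRange(&range);

  printf("quantum depth: %s (%zu bits per channel)\n", depth_name, depth);
  printf("quantum range: %s (max value %zu per channel)\n", range_name, range);

  MagickWandTerminus();
  return 0;
}
```

Building the same program against a Q8 and a Q16 installation prints different values, which is consistent with the reply above: the depth is chosen when the native library is built, not per image.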