ORC-817, ORC-1088: Support ZStandard compression using zstd-jni #1743
Conversation
- Add zstd-jni dependency, and add a new CompressionCodec ZstdCodec that uses it. Add ORC conf to set compression level.
- Add ORC conf to use long mode, and add configuration setters for windowLog and longModeEnable.
- Add tests that verify the correctness of writing and reading across compression levels, window sizes, and long mode use.
- Add test for compatibility between Zstd aircompressor and zstd-jni implementations.
- Fix filterWithSeek test with a smaller percentage.
Thank you. Could you fix the checkstyle issues?
Error: src/java/org/apache/orc/impl/WriterImpl.java:[311,13] (indentation) Indentation: 'new' has incorrect indentation level 12, expected level should be 14.
Error: src/java/org/apache/orc/impl/WriterImpl.java:[317,15] (indentation) Indentation: 'new' has incorrect indentation level 14, expected level should be 16.
Error: src/java/org/apache/orc/impl/ZstdCodec.java:[23,1] (imports) CustomImportOrder: Import statement for 'com.github.luben.zstd.Zstd' is in the wrong order. Should be in the 'THIRD_PARTY_PACKAGE' group, expecting not assigned imports on this line.
Error: src/java/org/apache/orc/impl/ZstdCodec.java:[24,1] (imports) CustomImportOrder: Import statement for 'com.github.luben.zstd.ZstdCompressCtx' is in the wrong order. Should be in the 'THIRD_PARTY_PACKAGE' group, expecting not assigned imports on this line.
Error: src/java/org/apache/orc/impl/ZstdCodec.java:[25,1] (imports) CustomImportOrder: Import statement for 'com.github.luben.zstd.ZstdDecompressCtx' is in the wrong order. Should be in the 'THIRD_PARTY_PACKAGE' group, expecting not assigned imports on this line.
Error: src/java/org/apache/orc/impl/ZstdCodec.java:[27,1] (imports) CustomImportOrder: Import statement for 'org.apache.orc.CompressionCodec' is in the wrong order. Should be in the 'THIRD_PARTY_PACKAGE' group, expecting not assigned imports on this line.
Error: src/java/org/apache/orc/impl/ZstdCodec.java:[28,1] (imports) CustomImportOrder: Import statement for 'org.apache.orc.CompressionKind' is in the wrong order. Should be in the 'THIRD_PARTY_PACKAGE' group, expecting not assigned imports on this line.
Error: src/java/org/apache/orc/impl/ZstdCodec.java:[29,1] (imports) CustomImportOrder: Import statement for 'org.slf4j.Logger' is in the wrong order. Should be in the 'THIRD_PARTY_PACKAGE' group, expecting not assigned imports on this line.
Error: src/java/org/apache/orc/impl/ZstdCodec.java:[30,1] (imports) CustomImportOrder: Import statement for 'org.slf4j.LoggerFactory' is in the wrong order. Should be in the 'THIRD_PARTY_PACKAGE' group, expecting not assigned imports on this line.
Error: src/java/org/apache/orc/impl/ZstdCodec.java:[105] (regexp) RegexpSinglelineJava: No starting LAND and LOR allowed.
Error: src/java/org/apache/orc/impl/ZstdCodec.java:[143] (regexp) RegexpSinglelineJava: No starting LAND and LOR allowed.
Error: src/java/org/apache/orc/impl/OrcCodecPool.java:[49] (sizes) LineLength: Line is longer than 100 characters (found 103).
Error: src/test/org/apache/orc/TestRowFilteringComplexTypesNulls.java:[36,8] (imports) UnusedImports: Unused import -
COMPRESSION_ZSTD_IMPL("orc.compression.zstd.impl",
    "hive.exec.orc.compression.zstd.impl", "java",
    "Define the implementation used with the ZStandard codec, java or jni."),
COMPRESSION_ZSTD_LEVEL("orc.compression.zstd.level",
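For illustration, a writer configuration using these keys might look like the following sketch. `orc.compression` is the existing codec key; the `jni` value and level default of 3 are taken from this thread, and any key not shown in the diff above is an assumption:

```properties
# Sketch: select the ZStandard codec and the zstd-jni implementation.
orc.compression=ZSTD
orc.compression.zstd.impl=jni
# Compression level; 3 is the default discussed in this thread.
orc.compression.zstd.level=3
```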
Also fix ORC-1088
Source table: ORC zlib, 4408374802439 bytes (4 TB)

zstd-jni:
- orc.compression.zstd.level=3 (default): compressed size 3119313447131 bytes (2905 GB)
- orc.compression.zstd.level=10: compressed size 2621369844393 bytes (2441 GB)

aircompressor:
- compressed size 3138804372295 bytes (2923 GB)
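As a sanity check, the savings implied by the byte counts above can be recomputed with a few lines of arithmetic (the constants are copied from this comment; the class name is arbitrary):

```java
public class ZstdSizeRatios {
    public static void main(String[] args) {
        long zlib = 4408374802439L;   // source table, ORC zlib
        long jniL3 = 3119313447131L;  // zstd-jni, level 3 (default)
        long jniL10 = 2621369844393L; // zstd-jni, level 10
        long air = 3138804372295L;    // aircompressor zstd

        // Space saved relative to the zlib baseline, in percent.
        System.out.printf("zstd-jni level 3:  %.1f%%%n", 100.0 * (zlib - jniL3) / zlib);
        System.out.printf("zstd-jni level 10: %.1f%%%n", 100.0 * (zlib - jniL10) / zlib);
        System.out.printf("aircompressor:     %.1f%%%n", 100.0 * (zlib - air) / zlib);
    }
}
```

Level 10 saves roughly 40% over the zlib baseline, versus roughly 29% for both zstd-jni level 3 and aircompressor, which matches the 2441G vs 2905G/2923G figures.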
This reverts commit 7ff24ba
Maven properties should keep alphabetical order.
Since we can control the compression level, I'd like to propose using orc.compression.zstd.level=1, like Apache Spark:
private[spark] val IO_COMPRESSION_ZSTD_LEVEL =
ConfigBuilder("spark.io.compression.zstd.level")
.doc("Compression level for Zstd compression codec. Increasing the compression " +
"level will result in better compression at the expense of more CPU and memory")
.version("2.3.0")
.intConf
.createWithDefault(1)
For the zstd-jni library, the Apache Spark community decided to use level 1 since Apache Spark 2.3.0 because of the compression speed.
================================================================================================
Benchmark ZStandardCompressionCodec
================================================================================================
OpenJDK 64-Bit Server VM 17.0.9+9-LTS on Linux 5.15.0-1053-azure
AMD EPYC 7763 64-Core Processor
Benchmark ZStandardCompressionCodec: Best Time(ms) Avg Time(ms) Stdev(ms) Rate(M/s) Per Row(ns) Relative
--------------------------------------------------------------------------------------------------------------------------------------
Compression 10000 times at level 1 without buffer pool 661 665 4 0.0 66093.0 1.0X
Compression 10000 times at level 2 without buffer pool 705 707 2 0.0 70472.6 0.9X
Compression 10000 times at level 3 without buffer pool 796 796 0 0.0 79570.7 0.8X
Compression 10000 times at level 1 with buffer pool 588 589 1 0.0 58835.3 1.1X
Compression 10000 times at level 2 with buffer pool 620 621 1 0.0 61982.9 1.1X
Compression 10000 times at level 3 with buffer pool 725 726 1 0.0 72460.9 0.9X
BTW, the rest of the code looks good to me, @cxzl25. Thank you for your hard work on this. Let me test a little more with your PR.
I changed the default level to 1 and compared quickly with the generate benchmark. Level 1 is still smaller. JAVA (Aircompressor)
ZSTD-JNI
+1, LGTM (Pending CIs)
zstdCompressCtx = new ZstdCompressCtx();
zstdCompressCtx.setLevel(zso.level);
zstdCompressCtx.setLong(zso.windowLog);
zstdCompressCtx.setChecksum(false);
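For context, a minimal round trip with zstd-jni's context API (the same `setLevel`/`setChecksum` knobs used above) looks roughly like this. This is a sketch, assuming zstd-jni is on the classpath; the class name and input data are arbitrary:

```java
import com.github.luben.zstd.ZstdCompressCtx;
import com.github.luben.zstd.ZstdDecompressCtx;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ZstdRoundTrip {
    public static void main(String[] args) {
        byte[] input = "hello zstd".repeat(100).getBytes(StandardCharsets.UTF_8);
        try (ZstdCompressCtx cctx = new ZstdCompressCtx()) {
            cctx.setLevel(3);        // same knob as zso.level in the diff above
            cctx.setChecksum(false); // the library default, shown for clarity
            byte[] compressed = cctx.compress(input);
            try (ZstdDecompressCtx dctx = new ZstdDecompressCtx()) {
                // Decompression needs the original size when using byte[] buffers.
                byte[] restored = dctx.decompress(compressed, input.length);
                System.out.println(Arrays.equals(input, restored));
            }
        }
    }
}
```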
For the record, the default value is already false.
### What changes were proposed in this pull request?
Original PR: #988
Original author: dchristle

This PR supports using the [zstd-jni](https://github.com/luben/zstd-jni) library as the implementation of ORC zstd, with better performance than [aircompressor](https://github.com/airlift/aircompressor). (#988 (comment))

This PR also exposes the compression level and "long mode" settings to ORC users. These settings allow the user to select speed/compression trade-offs that were not supported by the original aircompressor.

- Add zstd-jni dependency, and add a new CompressionCodec ZstdCodec that uses it. Add ORC conf to set compression level.
- Add ORC conf to use long mode, and add configuration setters for windowLog.
- Add tests that verify the correctness of writing and reading across compression levels, window sizes, and long mode use.
- Add a test for compatibility between the Zstd aircompressor and zstd-jni implementations.

### Why are the changes needed?
These changes make sense for a few reasons:
- ORC users will gain all the improvements from the main zstd library. It is under active development and receives regular speed and compression improvements. In contrast, aircompressor's zstd implementation is older and stale.
- ORC users will be able to use the entire speed/compression trade-off space. Today, aircompressor's implementation has only one of eight compression strategies ([link](https://github.com/airlift/aircompressor/blob/c5e6972bd37e1d3834514957447028060a268eea/src/main/java/io/airlift/compress/zstd/CompressionParameters.java#L143)). This means only a small range of faster but less compressive strategies can be exposed to ORC users. ORC storage with high compression (e.g. for large-but-infrequently-used data) is a clear use case that this PR would unlock.
- It will harmonize the Java ORC implementation with other projects in the Hadoop ecosystem. Parquet, Spark, and even the C++ ORC reader/writer all rely on the official zstd implementation, either via zstd-jni or directly. In this way, the Java reader/writer code is an outlier. Detection and fixing of any bugs or regressions will generally happen much faster, given the larger number of users and the active developer communities of zstd and zstd-jni.

The largest trade-off is that zstd-jni wraps compiled code. That said, many microprocessor architectures are already targeted and bundled into zstd-jni, so this should be a rare hurdle.

### How was this patch tested?
- Unit tests for reading and writing ORC files using a variety of compression levels and window logs all pass.
- A unit test that compresses and decompresses between aircompressor and zstd-jni passes. Note that the current aircompressor implementation uses a small subset of levels, so the test only compares data using the default compression settings.

### Was this patch authored or co-authored using generative AI tooling?
No

Closes #1743 from cxzl25/ORC-817.

Lead-authored-by: sychen <sychen@ctrip.com>
Co-authored-by: Dongjoon Hyun <dongjoon@apache.org>
Co-authored-by: David Christle <dchristle@squareup.com>
Co-authored-by: Yiqun Zhang <guiyanakuang@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(cherry picked from commit 33be571)
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
Thank you, @cxzl25 and all!
Thanks for all the help! Migrating from zlib to zstd, a table has a compression rate of 35% through aircompressor. By adjusting some parameters of zstd-jni, a compression rate of 44% is achieved.
My bad. It seems that I made a regression in ORC 1.9.
ORC 2.0
ZStd compression level looks inconsistent with this dataset; let me change the
### What changes were proposed in this pull request?
This PR aims to upgrade Apache ORC to 2.0.0 for Apache Spark 4.0.0. The Apache ORC community has a 3-year support policy, which is longer than Apache Spark's. The versions are aligned as follows:
- Apache ORC 2.0.x <-> Apache Spark 4.0.x
- Apache ORC 1.9.x <-> Apache Spark 3.5.x
- Apache ORC 1.8.x <-> Apache Spark 3.4.x
- Apache ORC 1.7.x (Supported) <-> Apache Spark 3.3.x (End-Of-Support)

### Why are the changes needed?
**Release Note**
- https://github.com/apache/orc/releases/tag/v2.0.0

**Milestone**
- https://github.com/apache/orc/milestone/20?closed=1
- apache/orc#1728
- apache/orc#1801
- apache/orc#1498
- apache/orc#1627
- apache/orc#1497
- apache/orc#1509
- apache/orc#1554
- apache/orc#1708
- apache/orc#1733
- apache/orc#1760
- apache/orc#1743

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass the CIs.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #45443 from dongjoon-hyun/SPARK-44115.
Authored-by: Dongjoon Hyun <dhyun@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>