Conversation

run benchmark in_list

🤖: Benchmark completed Details

run benchmarks

run benchmark tpch tpchds

🤖: Hi @Dandandan, thanks for the request (#19390 (comment)). Please choose one or more of these with
🤖: Benchmark completed Details

run benchmark tpch tpcds

🤖: Benchmark completed Details
🤖: Benchmark completed Details

@Dandandan I think once this optimization is done, there could be a lot to reuse for broadcast joins...
For plain (non-dynamic) filters, I think based on a threshold (<= 3) it either gets planned as a chain of OR expressions or using
Force-pushed 7ba1c85 to 276a37f

run benchmark in_list
Force-pushed 276a37f to d18b346

🤖: Benchmark completed Details

Force-pushed 2fc00e5 to 3db393a

run benchmark in_list

🤖: Benchmark completed Details

Force-pushed 85ee830 to 361e281

@alamb @geoffreyclaude I wonder if you have any thoughts on how we can move forward with this. Maybe at a high level the tradeoff I see is:
Practically speaking I want to get this across the line, but I find this hard to review, both because of the size of the diff and because of the nuance of the implementation. Quite honestly, without AI explaining some of the more niche branches to me, I'd struggle to grok this or find holes in the logic.

@adriangb Thanks for looking into this a bit more! If we agree most of these optimizations are worth pursuing (my biased Datadog opinion being that the Int8/Int16 and Utf8 optims are the most critical), we should first get the first two commits merged. Those set the foundations for the rest. Each subsequent optimization commit is then much easier to reason about independently, and they can be easily split into dedicated PRs. The first commit is "just" AI-generated benchmarks, I don't think we need to care too much about that one. The second one though is the most important...

Adding new benchmarks in a separate PR sounds like a good way to start. Also, a related PR here:

Yeah agreed. @geoffreyclaude if you make a PR with just the benchmarks we can merge that immediately. Then the second commit would be its own PR. For the rest, I feel we really need to look at the tradeoff of performance vs. complexity one by one and e.g. bake them off against #21547.
@adriangb I (well, Codex...) cherry-picked the first two commits to dedicated PRs:
## Which issue does this PR close?

- Part of #19241.
- This PR was originally proposed as the first commit in the broader `IN LIST` optimization series in #19390.

## Rationale for this change

`IN LIST` has become the target of several specialized execution strategies, but the existing benchmark coverage in `datafusion/physical-expr/benches/in_list.rs` is mostly end-to-end and historical in nature. That broad coverage is useful for regression tracking, but it is not ideal for answering more focused questions such as:

- how a specific strategy behaves as the list size crosses a threshold
- whether the fast path helps both hit-heavy and miss-heavy workloads
- how null handling affects the strategy-specific implementations
- how two-stage string filters behave in their worst-case collision patterns

This PR adds a dedicated strategy benchmark harness for `IN LIST` so future performance work can be evaluated against a stable, repeatable, strategy-focused corpus.

This PR does not change `InList` execution behavior. It only adds benchmark coverage.

## What changes are included in this PR?

- Adds `datafusion/physical-expr/benches/in_list_strategy.rs`
- Registers the new benchmark target in `datafusion/physical-expr/Cargo.toml`
- Keeps the existing `benches/in_list.rs` benchmark suite intact for broader historical comparison
- Adds targeted Criterion coverage for the main `IN LIST` strategy families, including:
  - bitmap-style paths for narrow integer cases
  - branchless and hash/probe-style paths for primitive values at different list-size thresholds
  - reinterpretation-heavy cases such as floats and timestamps
  - string and string-view cases, including stage-2 / prefix-collision stress inputs
  - null-heavy and `NOT IN` scenarios
  - dictionary and fixed-size-binary coverage used by the broader `IN LIST` implementation

## Are these changes tested?

Yes. I validated this PR with:

- `cargo fmt --all`
- `cargo clippy -p datafusion-physical-expr --all-targets --all-features -- -D warnings`
- `cargo test -p datafusion-physical-expr in_list --lib`

## Are there any user-facing changes?

No user-facing changes. This PR only adds benchmark coverage for development and performance evaluation.
Force-pushed 1383e1f to 1024c3f
Moves the existing primitive and generic static-filter implementations into dedicated modules without changing the underlying algorithms. Keeps ArrayStaticFilter as the generic fallback and preserves the pre-existing result construction path.
Introduces NestedTypeFilter for non-primitive constant IN lists. Replaces the legacy ArrayStaticFilter fallback with a HashTable-backed lookup and shared bitmap result construction.
Replaces `HashSet<u8>` with a 32-byte stack-allocated bitmap. Provides O(1) membership testing via bit-shifting, significantly reducing memory overhead and improving cache locality. Triggers for UInt8 arrays.
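As a rough illustration of the bitmap idea (not the actual DataFusion code; `U8BitmapFilter` is a hypothetical name), all 256 possible `u8` values can be tracked in a 32-byte stack array:

```rust
/// Sketch of the UInt8 bitmap strategy: 256 possible values fit in a
/// 32-byte (256-bit) array on the stack, so membership is a single
/// shift/mask instead of a hash lookup.
pub struct U8BitmapFilter {
    bits: [u8; 32],
}

impl U8BitmapFilter {
    pub fn new(list: &[u8]) -> Self {
        let mut bits = [0u8; 32];
        for &v in list {
            // byte index = v / 8, bit index = v % 8
            bits[(v >> 3) as usize] |= 1 << (v & 7);
        }
        Self { bits }
    }

    /// O(1) membership test with no hashing.
    pub fn contains(&self, v: u8) -> bool {
        self.bits[(v >> 3) as usize] & (1 << (v & 7)) != 0
    }
}
```

Because the whole table is 32 contiguous bytes, it stays resident in a single cache line pair, which is where the cache-locality win comes from.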
Implements an 8 KB heap-allocated bitmap for UInt16. Maintains O(1) performance while handling the larger value space. Triggers for UInt16 arrays.
Introduces zero-copy buffer reinterpretation to allow signed integers and other 1 or 2-byte primitive types (e.g. Float16) to use the high-performance bitmap filters. Triggers for all types with 1-byte or 2-byte width.
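A minimal sketch of the reinterpretation trick (the helper name `reinterpret_i8` is illustrative, not from the PR): value equality is exactly bit-pattern equality for fixed-width primitives, so a signed slice can be viewed as unsigned without copying and probed through the unsigned bitmap filters:

```rust
/// Sketch of zero-copy reinterpretation: i8 and u8 have identical size and
/// alignment, and the i8 -> u8 bit-pattern mapping is a bijection, so IN
/// list membership is preserved when both the column values and the list
/// values are viewed as raw unsigned bytes.
fn reinterpret_i8(values: &[i8]) -> &[u8] {
    // SAFETY: i8 and u8 have the same size and alignment; the returned
    // slice borrows from `values`, so no lifetime is extended.
    unsafe { std::slice::from_raw_parts(values.as_ptr() as *const u8, values.len()) }
}
```

The same reasoning extends to any 1- or 2-byte type (e.g. `Float16` viewed as `u16`), as long as the list is reinterpreted with the same mapping.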
Adds a const-generic unrolled comparison chain that avoids CPU branching. Outperforms hash lookups for very small lists. Triggers for primitives when list size <= 32 (4-byte), 16 (8-byte), or 4 (16-byte).
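The PR description elsewhere quotes the core of this check as `values.iter().fold(false, |acc, &v| acc | (v == needle))`; a self-contained sketch with an assumed const-generic signature:

```rust
/// Sketch of the branchless small-list check: combining equality results
/// with bitwise OR (not short-circuiting `||`) lets the compiler fully
/// unroll the loop for a const-generic list size, with no data-dependent
/// branches for the CPU to mispredict.
fn branchless_contains<const N: usize>(values: &[i32; N], needle: i32) -> bool {
    values.iter().fold(false, |acc, &v| acc | (v == needle))
}
```

This always evaluates all `N` comparisons, which is why it only pays off below the small-list thresholds quoted above.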
Implements a fast hash table using open addressing with linear probing and a 25% load factor. Replaces the legacy HashSet for primitives, reducing indirection. Triggers for primitives when list size exceeds branchless thresholds.
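A minimal sketch of the direct-probe idea, under stated assumptions: the table name, the toy multiplicative hash, and the exact sizing rule are illustrative, not the PR's implementation. The key properties are open addressing with linear probing and a capacity of at least 4× the list length (a ~25% load factor), which keeps probe chains short:

```rust
/// Sketch of an open-addressing probe table for i64 IN lists.
struct DirectProbe {
    slots: Vec<Option<i64>>,
    mask: usize,
}

impl DirectProbe {
    fn new(list: &[i64]) -> Self {
        // Capacity >= 4 * len, rounded to a power of two (~25% load factor),
        // so the wrap-around mask is a cheap bitwise AND.
        let cap = (list.len() * 4).next_power_of_two().max(8);
        let mut t = Self { slots: vec![None; cap], mask: cap - 1 };
        for &v in list {
            let mut i = t.hash(v);
            while let Some(existing) = t.slots[i] {
                if existing == v { break; } // duplicate in the list
                i = (i + 1) & t.mask;       // linear probe
            }
            t.slots[i] = Some(v);
        }
        t
    }

    fn hash(&self, v: i64) -> usize {
        // Toy multiplicative hash; the real hasher differs.
        (v as u64).wrapping_mul(0x9E37_79B9_7F4A_7C15) as usize & self.mask
    }

    fn contains(&self, v: i64) -> bool {
        let mut i = self.hash(v);
        // The low load factor guarantees an empty slot terminates the probe.
        while let Some(existing) = self.slots[i] {
            if existing == v { return true; }
            i = (i + 1) & self.mask;
        }
        false
    }
}
```

Compared with `HashSet`, this removes a layer of indirection: slots are stored inline in one flat allocation, so a lookup is a multiply, a mask, and a short linear scan.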
Introduces a two-stage filter for ByteView types. Stage 1 uses a fast DirectProbeFilter on masked views (len + prefix) for quick rejection; Stage 2 performs full verification only for potential long-string matches. Triggers for Utf8View and BinaryView.
Port of the two-stage View optimization to standard Utf8 and LargeUtf8 types. Encodes strings as i128 (len + prefix) for fast O(1) pre-filtering before falling back to full string comparison. Triggers for Utf8 and LargeUtf8.
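A sketch of the "len + prefix" `i128` encoding (the exact byte layout in the PR may differ; `encode_len_prefix` is an illustrative name): the length and the first 12 bytes of each string are packed into one 16-byte integer so stage 1 can reject most candidates with a single integer comparison:

```rust
/// Pack a string's length (u32) and up to 12 leading bytes into an i128.
/// Equal strings always produce equal encodings; unequal strings can only
/// collide when they share both length and 12-byte prefix, which is what
/// stage 2's full comparison resolves.
fn encode_len_prefix(s: &[u8]) -> i128 {
    let mut buf = [0u8; 16];
    buf[..4].copy_from_slice(&(s.len() as u32).to_le_bytes());
    let n = s.len().min(12);
    buf[4..4 + n].copy_from_slice(&s[..n]);
    i128::from_le_bytes(buf)
}
```

For strings of 12 bytes or fewer, the encoding is injective, so stage 1 alone is exact; only longer strings ever need the fallback comparison.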
FixedSizeBinary(N) arrays share the same contiguous buffer layout as primitive arrays, so for power-of-2 widths (1, 2, 4, 8, 16) we can zero-copy reinterpret them and use the optimized primitive filters (bitmap, branchless, hash) instead of falling through to the NestedTypeFilter fallback.
Force-pushed 1024c3f to d4ee901

run benchmark in_list_strategy

🤖: Criterion benchmark running (GKE). Comparing perf/in_list_optims (d4ee901) to 5a427cb (merge-base).

🤖: Criterion benchmark completed (GKE). Details
## Which issue does this PR close?

## Rationale for this change

The current `InList` expression implementation uses a generic `ArrayStaticFilter` that relies on `make_comparator` for all types, which adds significant overhead for primitive types. This PR introduces type-specialized filters that exploit the properties of different data types to achieve substantial performance improvements.

## What changes are included in this PR?

This PR refactors the `InList` expression into a modular static-filter architecture and then layers increasingly specialized filters on top. The implementation is split into 10 incremental commits:

### Commit 1: Refactor InListExpr into static-filter modules

Refactors `InListExpr` from a single monolithic file into an `in_list/` module with submodules: `static_filter.rs` (trait), `primitive_filter.rs` (primitive fast paths), `nested_filter.rs` (generic fallback), and `strategy.rs` (filter selection). Keeps the existing `ArrayStaticFilter` generic fallback and preserves the pre-existing result construction path.

### Commit 2: Optimize generic InList static filtering

Introduces `NestedTypeFilter` for non-primitive constant `IN` lists. Replaces the legacy `ArrayStaticFilter` fallback with a `HashTable`-backed lookup and shared bitmap result construction via `result.rs`, reducing the cost of generic constant-list evaluation before the more specialized primitive and string optimizations land.

### Commit 3: Implement Bitmap Filter for UInt8 (Stack-based)

- `bitmap/u8_list=4_match=50%` is 8.1× faster than baseline

### Commit 4: Extend Bitmap Filter to UInt16 (Heap-based)

### Commit 5: Implement Zero-Copy Reinterpretation and enable Int8/Int16 Bitmaps

- `in_list_Int16_list=28_nulls=0%` is 13.6× faster than baseline

### Commit 6: Implement Branchless Filter for small primitive lists

- `values.iter().fold(false, |acc, &v| acc | (v == needle))`
- `primitive/i32_branchless_list=4_match=50%` is 13.4× faster; `reinterpret/timestamp_ns_branchless_list=4_match=50%` is 22.5× faster

### Commit 7: Implement Direct Probe (Hash) Filter for large primitive lists

- Replaces the legacy `HashSet` path when the list exceeds branchless thresholds

### Commit 8: Implement String View (Utf8View/BinaryView) Optimizations

- `in_list_Utf8View_list=28_nulls=0%_str=100` is 11.1× faster than baseline
- `Utf8View` benches are 3.70× geomean faster overall

### Commit 9: Implement Legacy String Optimization (Utf8TwoStageFilter)

- Encodes `[len:u32][data:12 bytes]` for quick rejection
- Falls back to `NestedTypeFilter` for long strings to avoid two-stage overhead
- `Utf8` benches are 1.20× geomean faster overall, with a few small-list regressions still present

### Commit 10: Implement FixedSizeBinary zero-copy reinterpretation optimization

- `FixedSizeBinary(N)` for `N ∈ {1, 2, 4, 8, 16}` now uses the primitive fast paths instead of the generic nested fallback
- `fixed_size_binary/fsb16_list=10000_match=50%` is 7.6× faster than baseline

## Performance Summary
Benchmarks were compared by scanning `target/criterion`, pairing `before/sample.json` with the latest `new/sample.json`, and using the best observed sample (min(time / iters)) on each side to reduce system noise.
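The "best observed sample" comparison described above can be sketched as follows (assumed helper name; samples are `(total_time_ns, iteration_count)` pairs as Criterion records them):

```rust
/// For one benchmark, take the minimum time-per-iteration across samples.
/// System interference can only slow a sample down, never speed it up, so
/// the minimum is the least-noisy estimate of the true cost.
fn best_ns_per_iter(samples: &[(f64, u64)]) -> f64 {
    samples
        .iter()
        .map(|&(time_ns, iters)| time_ns / iters as f64)
        .fold(f64::INFINITY, f64::min)
}
```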
Overall on the current benchmark corpus:

Representative wins from the latest run:

- `reinterpret/timestamp_ns_branchless_list=4_match=50%`: 22.5× faster
- `in_list_TimestampNs_list=3_nulls=20%`: 17.9× faster
- `in_list_Int16_list=28_nulls=0%`: 13.6× faster
- `primitive/i32_branchless_list=4_match=50%`: 13.4× faster
- `in_list_Utf8View_list=28_nulls=0%_str=100`: 11.1× faster
- `fixed_size_binary/fsb16_list=10000_match=50%`: 7.6× faster

Known regressions from the latest run are limited to a few legacy `Utf8` cases.

## Filter Selection Strategy

## Are these changes tested?

Yes, the optimizations are covered by the existing `in_list` test suite, which exercises:

Benchmark coverage lives in both:

- `benches/in_list.rs` for broad end-to-end legacy coverage
- `benches/in_list_strategy.rs` for strategy-focused microbenchmarks at the threshold boundaries

## Are there any user-facing changes?

No user-facing API changes. This stack combines preparatory refactors with performance optimizations while maintaining identical behavior.