enhancement: allow configuring maximum number of metrics per DD metrics request payloads #476

Merged 1 commit into main on Feb 5, 2025

Conversation

@tobz (Member) commented Feb 5, 2025

Summary

In #457, we changed how the Datadog Metrics destination splits oversized request payloads: instead of storing the raw encoded metrics, we now store the Metric values themselves. This was done to reduce the destination's average memory consumption, as large event batches tended to grow the buffers used to hold the encoded metrics over time, which could waste significant amounts of memory in the long run.

While switching to holding Metric values directly provided more determinism -- a Metric is always X bytes, never variable -- it worsened the worst-case behavior: metrics can easily encode to a size smaller than that of Metric, meaning that past a certain number of metrics, holding their Metric representations becomes inefficient.

In order to put an upper bound on this, we've introduced a "maximum metrics per payload" configuration that the request builders use. This means we'll flush a request either when it hits the (un)compressed size limits or when it hits the maximum-metrics-per-payload limit.
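
As a rough sketch of how that flush decision works with this change (the struct and field names below are illustrative stand-ins, not the actual request builder API in this codebase):

```rust
// Illustrative sketch only: names and fields are hypothetical, not the
// actual Saluki request builder API.
struct RequestBuilder {
    uncompressed_len: usize,
    compressed_len: usize,
    metrics_in_payload: usize,
    max_uncompressed_len: usize,
    max_compressed_len: usize,
    max_metrics_per_payload: usize,
}

impl RequestBuilder {
    /// Returns `true` when the current payload should be flushed: either
    /// an (un)compressed size limit or the new metrics-per-payload limit
    /// has been reached.
    fn should_flush(&self) -> bool {
        self.uncompressed_len >= self.max_uncompressed_len
            || self.compressed_len >= self.max_compressed_len
            || self.metrics_in_payload >= self.max_metrics_per_payload
    }
}
```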

This new configuration value -- serializer_max_metrics_per_payload -- operates slightly differently from a nearly equivalent configuration value in the Datadog Agent: serializer_max_series_points_per_payload. This is because the Datadog Agent tracks the points that have been serialized, whereas we have to hold on to the entire Metric, so I wanted the setting's name to stay faithful to the underlying behavior. That said, series/sketches generally have one point on average when flushed, so the number of metrics in a payload is generally equal to the number of points in a payload. As such, we use the same default value of 10,000, meaning we allow up to 10,000 metrics per request payload.
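
For illustration, here is how the setting and its default might be wired up; the configuration-reading API below is a hypothetical stand-in, with only the setting name and the 10,000 default coming from this PR:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the real configuration machinery.
struct Config {
    values: HashMap<String, usize>,
}

impl Config {
    /// Returns the configured value for `key`, or `default` if unset.
    fn get_usize_or(&self, key: &str, default: usize) -> usize {
        self.values.get(key).copied().unwrap_or(default)
    }
}

/// Resolves the maximum number of metrics per request payload, using the
/// same default (10,000) as the Datadog Agent's
/// serializer_max_series_points_per_payload.
fn max_metrics_per_payload(config: &Config) -> usize {
    config.get_usize_or("serializer_max_metrics_per_payload", 10_000)
}
```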

With this change, our calculated firm bound for the Datadog Metrics component has dropped significantly, from ~69MB to ~6.6MB. In reality, after merging #457, the theoretical firm bound was closer to ~415MB, but I didn't bother trying to bring it true-to-life because that depended on an annoying calculation to determine the smallest valid metric we could encode, how many of those we could fit per endpoint, and so on... it was easier to just make this follow-up PR. :)

Change Type

  • Bug fix
  • New feature
  • Non-functional (chore, refactoring, docs)
  • Performance

How did you test this PR?

This PR includes a unit test that asserts that the configured limit is obeyed. I also tested this locally by sending a small number of metrics through DogStatsD and observing that multiple payloads were built, indicating that we were flushing earlier than we otherwise would have, since all of the metrics would have fit within the configured (un)compressed size limits.
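
As a rough illustration of the shape such a test can take (a toy sketch, not the actual unit test from this PR; all names here are hypothetical):

```rust
// Toy builder that flushes a payload whenever the per-payload metric
// limit is reached, regardless of payload size.
struct ToyRequestBuilder {
    max_metrics_per_payload: usize,
    current: usize,
    payloads: Vec<usize>, // number of metrics in each flushed payload
}

impl ToyRequestBuilder {
    fn new(max_metrics_per_payload: usize) -> Self {
        Self { max_metrics_per_payload, current: 0, payloads: Vec::new() }
    }

    fn encode_metric(&mut self) {
        self.current += 1;
        if self.current >= self.max_metrics_per_payload {
            self.flush();
        }
    }

    fn flush(&mut self) {
        if self.current > 0 {
            self.payloads.push(self.current);
            self.current = 0;
        }
    }
}

#[test]
fn obeys_max_metrics_per_payload() {
    let mut builder = ToyRequestBuilder::new(10);
    for _ in 0..25 {
        builder.encode_metric();
    }
    builder.flush();

    // 25 metrics with a limit of 10 should yield payloads of 10, 10, 5;
    // no payload may exceed the configured limit.
    assert_eq!(builder.payloads, vec![10, 10, 5]);
    assert!(builder.payloads.iter().all(|&n| n <= 10));
}
```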

References

N/A

@tobz requested a review from a team as a code owner on February 5, 2025 18:25
@tobz added the type/enhancement (An enhancement in functionality or support.) label on Feb 5, 2025
@github-actions bot added the area/core (Core functionality, event model, etc.), area/components (Sources, transforms, and destinations.), destination/datadog-metrics (Datadog Metrics destination.), and destination/datadog (Common Datadog destination code.) labels on Feb 5, 2025
pr-commenter bot commented Feb 5, 2025

Regression Detector (DogStatsD)

Regression Detector Results

Run ID: 78007058-5937-40e4-898e-5f8ddb9d4c53

Baseline: 7.63.0-rc.2
Comparison: 7.63.0-rc.2

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| experiment | goal | Δ mean % | Δ mean % CI | trials |
| --- | --- | --- | --- | --- |
| quality_gates_idle_rss | memory utilization | +0.42 | [+0.31, +0.53] | 1 |
| dsd_uds_10mb_3k_contexts | ingress throughput | +0.01 | [-0.00, +0.02] | 1 |
| dsd_uds_1mb_3k_contexts | ingress throughput | +0.00 | [-0.00, +0.01] | 1 |
| dsd_uds_100mb_250k_contexts | ingress throughput | +0.00 | [-0.00, +0.00] | 1 |
| dsd_uds_512kb_3k_contexts | ingress throughput | +0.00 | [-0.01, +0.01] | 1 |
| dsd_uds_1mb_50k_contexts | ingress throughput | +0.00 | [-0.00, +0.00] | 1 |
| dsd_uds_1mb_50k_contexts_memlimit | ingress throughput | +0.00 | [-0.00, +0.00] | 1 |
| dsd_uds_1mb_3k_contexts_dualship | ingress throughput | +0.00 | [-0.00, +0.00] | 1 |
| dsd_uds_100mb_3k_contexts | ingress throughput | -0.00 | [-0.04, +0.04] | 1 |
| dsd_uds_40mb_12k_contexts_40_senders | ingress throughput | -0.00 | [-0.01, +0.00] | 1 |
| dsd_uds_100mb_3k_contexts_distributions_only | memory utilization | -1.34 | [-1.49, -1.19] | 1 |
| dsd_uds_500mb_3k_contexts | ingress throughput | -2.59 | [-2.73, -2.45] | 1 |

Bounds Checks: ❌ Failed

| experiment | bounds_check_name | replicates_passed |
| --- | --- | --- |
| quality_gates_idle_rss | memory_usage | 0/10 |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
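
The three criteria above amount to a simple predicate; a sketch (the struct below is a hypothetical representation of one experiment's results, not part of the actual detector):

```rust
// Sketch of the regression decision described above.
struct ExperimentResult {
    delta_mean_pct: f64, // estimated Δ mean %
    ci_low_pct: f64,     // lower bound of the 90% CI on Δ mean %
    ci_high_pct: f64,    // upper bound of the 90% CI on Δ mean %
    erratic: bool,       // whether the configuration marks it "erratic"
}

/// Applies the three criteria: effect size of at least 5%, a 90% CI
/// that excludes zero, and not marked erratic.
fn is_regression(result: &ExperimentResult) -> bool {
    result.delta_mean_pct.abs() >= 5.0
        && (result.ci_low_pct > 0.0 || result.ci_high_pct < 0.0)
        && !result.erratic
}
```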

pr-commenter bot commented Feb 5, 2025

Regression Detector (Saluki)

Regression Detector Results

Run ID: a3c92e88-6b12-403a-8b10-d3a49a649abc

Baseline: 4cfc44e
Comparison: cca757b

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| experiment | goal | Δ mean % | Δ mean % CI | trials |
| --- | --- | --- | --- | --- |
| quality_gates_idle_rss | memory utilization | +0.60 | [+0.57, +0.63] | 1 |
| dsd_uds_1mb_50k_contexts_memlimit | ingress throughput | +0.59 | [+0.33, +0.84] | 1 |
| dsd_uds_100mb_3k_contexts | ingress throughput | +0.01 | [-0.04, +0.06] | 1 |
| dsd_uds_40mb_12k_contexts_40_senders | ingress throughput | +0.01 | [-0.02, +0.04] | 1 |
| dsd_uds_50mb_10k_contexts_no_inlining | ingress throughput | +0.00 | [-0.06, +0.06] | 1 |
| dsd_uds_10mb_3k_contexts | ingress throughput | +0.00 | [-0.03, +0.03] | 1 |
| dsd_uds_1mb_3k_contexts | ingress throughput | -0.00 | [-0.00, +0.00] | 1 |
| dsd_uds_1mb_50k_contexts | ingress throughput | -0.00 | [-0.01, +0.00] | 1 |
| dsd_uds_1mb_3k_contexts_dualship | ingress throughput | -0.00 | [-0.01, +0.00] | 1 |
| dsd_uds_512kb_3k_contexts | ingress throughput | -0.00 | [-0.01, +0.01] | 1 |
| dsd_uds_100mb_250k_contexts | ingress throughput | -0.01 | [-0.05, +0.03] | 1 |
| dsd_uds_50mb_10k_contexts_no_inlining_no_allocs | ingress throughput | -0.02 | [-0.07, +0.04] | 1 |
| dsd_uds_100mb_3k_contexts_distributions_only | memory utilization | -0.15 | [-0.26, -0.04] | 1 |
| dsd_uds_500mb_3k_contexts | ingress throughput | -0.63 | [-0.75, -0.50] | 1 |

Bounds Checks: ✅ Passed

| experiment | bounds_check_name | replicates_passed |
| --- | --- | --- |
| quality_gates_idle_rss | memory_usage | 10/10 |


pr-commenter bot commented Feb 5, 2025

Regression Detector Links

Experiment Result Links

| experiment | link(s) |
| --- | --- |
| dsd_uds_100mb_250k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_100mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_100mb_3k_contexts_distributions_only | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_10mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_1mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_1mb_3k_contexts_dualship | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_1mb_50k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_1mb_50k_contexts_memlimit | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_40mb_12k_contexts_40_senders | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_500mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_512kb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| quality_gates_idle_rss | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_50mb_10k_contexts_no_inlining (ADP only) | [Profiling (ADP)] [SMP Dashboard] |
| dsd_uds_50mb_10k_contexts_no_inlining_no_allocs (ADP only) | [Profiling (ADP)] [SMP Dashboard] |

@tobz merged commit 4e8add1 into main on Feb 5, 2025 (21 checks passed)
@tobz deleted the tobz/dd-metrics-bound-metrics-per-payload branch on February 5, 2025 19:35