enhancement: batch timestamped (passthrough) metrics for a short period of time before forwarding #426
Conversation
Regression Detector (Saluki)
Regression Detector Results
Run ID: a76a760d-897d-444d-a3c0-d7247cb38421
Baseline: 2b44efa
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | dsd_uds_500mb_3k_contexts | ingress throughput | +0.81 | [+0.69, +0.93] | 1 | |
| ➖ | dsd_uds_1mb_50k_contexts_memlimit | ingress throughput | +0.74 | [+0.21, +1.28] | 1 | |
| ➖ | dsd_uds_100mb_3k_contexts_distributions_only | memory utilization | +0.43 | [+0.31, +0.55] | 1 | |
| ➖ | dsd_uds_10mb_3k_contexts | ingress throughput | +0.01 | [-0.02, +0.04] | 1 | |
| ➖ | dsd_uds_100mb_3k_contexts | ingress throughput | +0.01 | [-0.04, +0.06] | 1 | |
| ➖ | dsd_uds_50mb_10k_contexts_no_inlining | ingress throughput | +0.01 | [-0.06, +0.07] | 1 | |
| ➖ | dsd_uds_40mb_12k_contexts_40_senders | ingress throughput | +0.00 | [-0.02, +0.03] | 1 | |
| ➖ | dsd_uds_50mb_10k_contexts_no_inlining_no_allocs | ingress throughput | +0.00 | [-0.05, +0.05] | 1 | |
| ➖ | dsd_uds_1mb_3k_contexts | ingress throughput | +0.00 | [-0.02, +0.02] | 1 | |
| ➖ | dsd_uds_1mb_50k_contexts | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_1mb_3k_contexts_dualship | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_512kb_3k_contexts | ingress throughput | -0.01 | [-0.02, +0.01] | 1 | |
| ➖ | dsd_uds_100mb_250k_contexts | ingress throughput | -0.01 | [-0.04, +0.03] | 1 | |
| ➖ | quality_gates_idle_rss | memory utilization | -0.59 | [-0.62, -0.56] | 1 | |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | quality_gates_idle_rss | memory_usage | 10/10 | |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true (a small sketch of this decision rule follows the list):
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
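For illustration, here is a minimal Rust sketch of that decision rule. This is not the Regression Detector's actual code; the struct and function names are hypothetical, and the 5.00% tolerance and 90.00% CI come from the explanation above.

```rust
// Hypothetical sketch of the regression decision rule described above.
struct ExperimentResult {
    delta_mean_pct: f64, // estimated Δ mean %
    ci_low_pct: f64,     // lower bound of the 90.00% "Δ mean % CI"
    ci_high_pct: f64,    // upper bound of the 90.00% "Δ mean % CI"
    erratic: bool,       // whether the experiment's configuration marks it "erratic"
}

fn is_regression(result: &ExperimentResult, effect_size_tolerance_pct: f64) -> bool {
    // Criterion 1: the effect size is big enough to merit a closer look.
    let big_enough = result.delta_mean_pct.abs() >= effect_size_tolerance_pct;
    // Criterion 2: the confidence interval does not contain zero.
    let ci_excludes_zero = result.ci_low_pct > 0.0 || result.ci_high_pct < 0.0;
    // Criterion 3: the experiment is not marked "erratic".
    big_enough && ci_excludes_zero && !result.erratic
}

fn main() {
    // Example: the dsd_uds_500mb_3k_contexts row from the Saluki table above.
    let result = ExperimentResult {
        delta_mean_pct: 0.81,
        ci_low_pct: 0.69,
        ci_high_pct: 0.93,
        erratic: false,
    };
    // The CI excludes zero, but |Δ mean %| < 5.00%, so it is not flagged.
    assert!(!is_regression(&result, 5.0));
}
```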
Regression Detector (DogStatsD)
Regression Detector Results
Run ID: 6778acac-bd47-4ea7-aa8b-6e9ace904ed9
Baseline: 7.63.0-rc.2
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | dsd_uds_40mb_12k_contexts_40_senders | ingress throughput | +0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_1mb_50k_contexts | ingress throughput | +0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_512kb_3k_contexts | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | |
| ➖ | dsd_uds_1mb_50k_contexts_memlimit | ingress throughput | +0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_1mb_3k_contexts | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_1mb_3k_contexts_dualship | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_100mb_250k_contexts | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_10mb_3k_contexts | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | |
| ➖ | dsd_uds_100mb_3k_contexts | ingress throughput | -0.01 | [-0.05, +0.04] | 1 | |
| ➖ | dsd_uds_100mb_3k_contexts_distributions_only | memory utilization | -0.32 | [-0.46, -0.17] | 1 | |
| ➖ | dsd_uds_500mb_3k_contexts | ingress throughput | -2.08 | [-2.22, -1.94] | 1 | |
| ➖ | quality_gates_idle_rss | memory utilization | -2.10 | [-2.22, -1.99] | 1 | |
Bounds Checks: ❌ Failed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ❌ | quality_gates_idle_rss | memory_usage | 0/10 | |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
Regression Detector Links
Experiment Result Links
Code under review:

```rust
self.forward_events(forwarder).await;

if self.active_buffer.try_push(event).is_some() {
    error!("Event buffer is full even after forwarding events. Dropping event.");
```
Do we need to add a return here to prevent line 490 from being hit?
Yeah, adding a return here makes sense. 👍🏻
Fixed in 75b6f96.
I also now increment the "events dropped" metric to reflect the fact that we're legitimately dropping a metric on the floor in that branch.
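For context, here is a small self-contained Rust sketch of the shape of that fix. The `ActiveBuffer` type and the `EVENTS_DROPPED` counter are hypothetical stand-ins, not the transform's actual code (the real change is in 75b6f96): if the event still cannot be buffered after forwarding, the drop is counted and the function returns early so the code below the push is never reached.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Counter standing in for the transform's "events dropped" telemetry.
static EVENTS_DROPPED: AtomicU64 = AtomicU64::new(0);

/// Fixed-capacity buffer whose `try_push` hands the event back when full,
/// mirroring the `active_buffer.try_push(event).is_some()` check above.
struct ActiveBuffer<E> {
    events: Vec<E>,
    capacity: usize,
}

impl<E> ActiveBuffer<E> {
    fn try_push(&mut self, event: E) -> Option<E> {
        if self.events.len() >= self.capacity {
            return Some(event);
        }
        self.events.push(event);
        None
    }
}

fn buffer_event<E>(buffer: &mut ActiveBuffer<E>, event: E) {
    if buffer.try_push(event).is_some() {
        // Buffer is still full even after forwarding: count the drop and
        // return early so the logic below the push is never reached.
        EVENTS_DROPPED.fetch_add(1, Ordering::Relaxed);
        eprintln!("Event buffer is full even after forwarding events. Dropping event.");
        return;
    }
    // ... continue handling the successfully buffered event ...
}

fn main() {
    let mut buffer = ActiveBuffer { events: Vec::new(), capacity: 0 };
    buffer_event(&mut buffer, "metric");
    println!("events dropped: {}", EVENTS_DROPPED.load(Ordering::Relaxed));
}
```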
Summary
This PR adds the ability for the `aggregate` transform to batch "passthrough" (pre-aggregated) metrics for short periods of time, in larger-than-normal event buffers, with the express goal of improving the efficiency of handling pre-aggregated metrics. We've updated the logic of the transform to follow an equivalent behavior in the Datadog Agent's "no aggregation pipeline", which looks something like this: buffer incoming passthrough metrics in a dedicated, larger-than-normal event buffer, and forward the buffered metrics either when the buffer fills up or after a short period of time has elapsed.
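A minimal Rust sketch of that batching behavior follows. The names (`PassthroughBatcher`, `push`) and the specific capacity and interval values are hypothetical, not the `aggregate` transform's actual implementation; the point is the flush policy of forwarding buffered passthrough metrics when the buffer fills up or a short flush interval elapses.

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch of the passthrough batching policy described above.
struct PassthroughBatcher<T> {
    active_buffer: Vec<T>,
    capacity: usize,
    flush_interval: Duration,
    last_flush: Instant,
}

impl<T> PassthroughBatcher<T> {
    fn new(capacity: usize, flush_interval: Duration) -> Self {
        Self {
            active_buffer: Vec::with_capacity(capacity),
            capacity,
            flush_interval,
            last_flush: Instant::now(),
        }
    }

    /// Buffers a passthrough metric and returns a batch to forward when the
    /// buffer is full or the flush interval has elapsed.
    fn push(&mut self, metric: T) -> Option<Vec<T>> {
        self.active_buffer.push(metric);

        let buffer_full = self.active_buffer.len() >= self.capacity;
        let interval_elapsed = self.last_flush.elapsed() >= self.flush_interval;
        if buffer_full || interval_elapsed {
            self.last_flush = Instant::now();
            // Hand the filled buffer to the caller and start a fresh one.
            return Some(std::mem::replace(
                &mut self.active_buffer,
                Vec::with_capacity(self.capacity),
            ));
        }
        None
    }
}

fn main() {
    let mut batcher = PassthroughBatcher::new(1024, Duration::from_secs(2));
    if let Some(batch) = batcher.push(("my.metric", 1.0_f64)) {
        println!("forwarding {} buffered passthrough metrics", batch.len());
    }
}
```

A real implementation would likely also flush on a timer even when no new metrics arrive; this sketch only checks the interval when a metric is pushed.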
This aims to improve how many pre-aggregated metrics are packed into an individual series/sketch request, improving efficiency and reducing the number of requests that have to be sent. There's still a difference in number of series/sketch requests sent between Core Agent and ADP even with this batching behavior in place, which I'm still currently investigating in staging.
Change Type
How did you test this PR?
Tested in staging.
(more detail to be added here as I test further)
References
N/A