Load generation App Updates #1509
Conversation
Here are performance results for 100, 1,000, and 5,000 TPS for the metric tests (result screenshots attached for 100, 1,000, and 5,000 TPS).
Did a first pass on this PR.
@@ -49,7 +41,7 @@ The following is a list of additional optional command line arguments applicable

### StatsD Metrics Load Test Sample Command
```
./gradlew :load-generator:run --args="metric --metricCount=100 --datapointCount=10 --observationInterval=1000 -u=localhost:8125 -d=statsd"
```
`observationInterval=1000` — should we make the observation interval configurable?
I think that, for a sample app, providing a configurable `observationInterval` is good. But in the load generator context, given how it is used through the test framework, I think it would be unused functionality and could create problems as well.
```java
log.error("Metric provider was not found!");
```

```java
private void createCounters(Meter meter) {
    if (meter != null) {
```
nit: this could be replaced by an early return.
Updated.
```java
        }
    );
}
} else {
```
same here.
Updated.
```diff
-log.info("Generating metrics at a rate of(ms) : {}", this.param.getObservationInterval());
-scheduler.scheduleAtFixedRate(emitter, 0, this.param.getObservationInterval(), TimeUnit.MILLISECONDS);
+log.info("Generating metrics at a rate of(ms) : {}", FLUSH_INTERVAL);
+scheduler.scheduleAtFixedRate(emitter, 0, FLUSH_INTERVAL, TimeUnit.MILLISECONDS);
```
This simplifies things a bit. Should we log a warning or raise an exception in case the emitter function is not able to complete in less than 1s? Otherwise the ingestion rate will not be correct, right?
I am unsure how to implement this. `scheduleAtFixedRate` only throws these three exceptions:

- `RejectedExecutionException` – if the task cannot be scheduled for execution
- `NullPointerException` – if command or unit is null
- `IllegalArgumentException` – if period is less than or equal to zero
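One option, sketched below under assumptions not in this PR (the `withOverrunWarning` wrapper and `overruns` counter are hypothetical names), is to avoid relying on the scheduler's exceptions entirely: time each emitter run inside the scheduled `Runnable` itself and warn when it exceeds the period, since `scheduleAtFixedRate` silently delays subsequent runs when a task overruns rather than reporting it.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class OverrunGuard {

    // Wraps an emitter so each run is timed; logs a warning and counts
    // the overrun when a run takes longer than the scheduling period.
    static Runnable withOverrunWarning(Runnable emitter, long periodMillis, AtomicInteger overruns) {
        return () -> {
            long start = System.nanoTime();
            emitter.run();
            long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            if (elapsedMillis > periodMillis) {
                overruns.incrementAndGet();
                System.err.println("Emitter took " + elapsedMillis + " ms, exceeding the "
                        + periodMillis + " ms interval; the effective ingestion rate will lag.");
            }
        };
    }

    public static void main(String[] args) {
        AtomicInteger overruns = new AtomicInteger();
        // Simulated slow emitter: sleeps 50 ms against a 10 ms period.
        Runnable slowEmitter = () -> {
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        withOverrunWarning(slowEmitter, 10, overruns).run();
        System.out.println("overruns=" + overruns.get());
    }
}
```

The wrapped `Runnable` can then be passed to `scheduleAtFixedRate` unchanged; the scheduler's behavior is unaffected, but overruns become visible in the logs.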
```diff
@@ -24,7 +24,7 @@ public abstract class TraceEmitter implements Emitter {

 @Override
 public void start(Runnable emitter) {
-    scheduler.scheduleAtFixedRate(emitter, TimeUnit.SECONDS.toMillis(5000),
+    scheduler.scheduleAtFixedRate(emitter, 0,
         TimeUnit.SECONDS.toNanos(1) / this.param.getRate(), TimeUnit.NANOSECONDS);
```
I think you need to fix this as well, no? This needs to be scheduled every second, and the emitter should emit the rate number of spans?
Raphael, in my opinion this is already fixed. It works a little differently than metrics: `nextdatapoint` is called rate number of times, which produces that many traces in a second, and those get batched and exported at a 1s frequency.
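To illustrate the scheduling math behind that reply (a standalone sketch, not the PR's `TraceEmitter`): a fixed-rate period of 1 s divided by the rate, in nanoseconds, fires the emitter exactly `rate` times per second, and the batch exporter then flushes those spans on its own roughly 1 s cadence.

```java
import java.util.concurrent.TimeUnit;

public class RatePeriod {

    // Period between emitter runs so that `rate` runs occur per second,
    // matching scheduleAtFixedRate(emitter, 0, periodNanos(rate), NANOSECONDS).
    static long periodNanos(long rate) {
        return TimeUnit.SECONDS.toNanos(1) / rate;
    }

    public static void main(String[] args) {
        System.out.println(periodNanos(100));   // 10000000 ns between runs -> 100 spans/s
        System.out.println(periodNanos(5000));  // 200000 ns between runs -> 5000 spans/s
    }
}
```

So a single-span-per-run emitter at this period already produces the target TPS without a separate once-per-second loop.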
How? By doing what? What changes are being made in this PR?
```diff
@@ -21,6 +21,9 @@
 public interface Emitter {

+    int NUM_THREADS = 5;
+
+    final long FLUSH_INTERVAL = 1000;
```
Why is this no longer configurable?
I think that for load gen it does not need to be configurable, as we have TPS (100, 1000, 5000) batched at 1s. Making the flush interval configurable just adds unused complexity.
#1509 (comment)
```diff
-scrape_interval: 15s
+scrape_interval: 1s
```
Doesn't changing the scrape interval affect performance? Why is this change necessary?
Yes, it should affect performance by a factor of 15. This is to match the metric load as mentioned in the performance report.
Reverted. Will pull this into a separate PR as discussed.
Updated the PR body.
Description:
Improves OTLP Metric and Traces load generation in the latest load-gen app.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.