Add feedback mechanism to predict max throughput of the OpenSearch cluster #742

Open
rishabh6788 opened this issue Feb 10, 2025 · 0 comments
Labels: enhancement (New feature or request)

rishabh6788 commented Feb 10, 2025

Is your feature request related to a problem? Please describe

Currently, opensearch-benchmark requires users to manually specify the number of clients and the target throughput when running performance benchmarks. Finding the optimal load a system can handle while maintaining acceptable performance therefore takes multiple trial-and-error runs. We need an automated way to discover the maximum sustainable load while keeping response latency and other performance parameters within acceptable limits.

Describe the solution you'd like

Add the capability for opensearch-benchmark to automatically determine the maximum number of parallel clients a system can handle while keeping latency, or any other chosen performance parameter, below a user-defined threshold. Instead of specifying a client count and target throughput, users will specify the following (a sketch of the proposed options follows this list):

  • Total benchmark duration
  • Maximum acceptable query latency threshold
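
One possible shape for these two inputs, assuming the CLI keeps using argparse; the flag names, units, and defaults below are illustrative proposals, not final:

```python
# Illustrative only: how the two proposed inputs might be registered on an
# argparse-based CLI. Flag names, units, and defaults are assumptions.
def add_feedback_args(parser):
    parser.add_argument(
        "--test-duration",
        type=int,
        help="Total benchmark duration in seconds for the feedback run (proposed)")
    parser.add_argument(
        "--latency-threshold",
        type=float,
        help="Maximum acceptable query latency in ms; ramp-up stops once breached (proposed)")
```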

The tool will:

  1. Start with minimal load.
  2. Gradually ramp up the number of clients.
  3. Monitor query latency or any other defined performance parameter.
  4. Stop adding clients when the monitored parameter (e.g. query latency) breaches the threshold.
  5. Report the maximum sustainable client count along with the standard metrics (a minimal sketch of this loop follows the list).
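
A minimal sketch of this feedback loop, independent of the benchmark internals; `run_iteration` is a hypothetical callable that drives the existing load generator for one measurement interval and returns the observed latency, and the linear step size is only one possible ramp-up strategy:

```python
import time

def find_max_clients(run_iteration, latency_threshold_ms, total_duration_s,
                     start_clients=1, step=1):
    """Ramp up clients until the latency threshold or the time budget is hit.

    run_iteration(clients) is assumed to run one measurement interval with
    the given client count and return the observed query latency (e.g. p90)
    in milliseconds.
    """
    deadline = time.monotonic() + total_duration_s
    clients = start_clients
    max_sustainable = 0

    while time.monotonic() < deadline:
        observed_latency_ms = run_iteration(clients)
        if observed_latency_ms > latency_threshold_ms:
            # Threshold breached: the last count that stayed under it is the answer.
            break
        max_sustainable = clients
        clients += step  # gradual ramp-up; could also grow geometrically

    return max_sustainable
```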

Describe alternatives you've considered

No response

Additional context

No response

Acceptance Criteria

  • Command line interface supports duration and latency threshold parameters
  • Implements gradual client scaling with latency monitoring
  • Stops scaling when latency threshold is exceeded
  • Generates an enhanced report with scaling metrics (see the sketch after this list)
  • Includes safety measures and proper error handling
  • Documentation updated to reflect new functionality
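
Purely to illustrate the "enhanced report with scaling metrics" item, the additional data might look roughly like this; all field names and values are placeholders, not a proposed schema:

```python
# Placeholder shape for the extra scaling metrics in the final report;
# every field name and value here is illustrative.
scaling_report = {
    "latency_threshold_ms": 100,
    "max_sustainable_clients": 48,        # highest count that stayed under the threshold
    "iterations": [
        {"clients": 8, "p90_latency_ms": 35.2},
        {"clients": 16, "p90_latency_ms": 61.7},
        # ... one entry per ramp-up step ...
    ],
    "stop_reason": "latency_threshold_exceeded",  # or "duration_elapsed"
}
```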