Commit

Switching Procedure and procedure(s) to Scenario and scenario(s) to better align with common understandings

Signed-off-by: beaioun <mshi@ucsd.edu>
beaioun committed Dec 21, 2023
1 parent af69cdd commit 49d0bd6
Showing 25 changed files with 475 additions and 475 deletions.
10 changes: 5 additions & 5 deletions docs/api/execute-test.md
@@ -23,12 +23,12 @@ Argument | Description | Required
:--- | :--- |:---
`workload` | The dataset and operations that execute during a test. See [OpenSearch Benchmark Workloads repository](https://github.com/opensearch-project/opensearch-benchmark-workloads) for more details on workloads. | Yes
`workload-params` | Parameters defined within each workload that can be overwritten. These parameters are outlined in the README of each workload. You can find an example of the parameters for the eventdata workload [here](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/eventdata#parameters). | No
`procedure` | Test Procedures define the sequence of operations and parameters for a specific workload. When no `procedure` is specified, Benchmark selects the default for the workload. You can find an example test procedure [here](https://github.com/opensearch-project/opensearch-benchmark-workloads/blob/main/eventdata/procedures/default.json). | No
`scenario` | Test scenarios define the sequence of operations and parameters for a specific workload. When no `scenario` is specified, Benchmark selects the default for the workload. You can find an example test scenario [here](https://github.com/opensearch-project/opensearch-benchmark-workloads/blob/main/eventdata/scenarios/default.json). | No
`client-options` | Options for the [OpenSearch Python client](https://opensearch.org/docs/latest/clients/python/). Required if testing against a cluster with security enabled. | No
`pipeline` | Steps required to execute a test, including provisioning an OpenSearch from source code or a specified distribution. Defaults to `from-sources` which provisions an OpenSearch cluster from source code. | No
`distribution-version` | The OpenSearch version to use for a given test. Defining a version can be useful when using a `pipeline` that includes provisioning. When using a `pipeline` without provisioning, Benchmark will automatically determine the version. | No
`target-hosts` | The OpenSearch endpoint(s) to execute a test against. This should only be specified with `--pipeline=benchmark-only`. | No
`test-mode` | Run a single iteration of each operation in the test procedure. The test provides a quick way for sanity checking a testing configuration. Therefore, do not use `test-mode` for actual benchmarking. | No
`test-mode` | Run a single iteration of each operation in the test scenario. The test provides a quick way for sanity checking a testing configuration. Therefore, do not use `test-mode` for actual benchmarking. | No
`kill-running-processes` | Kill any running OpenSearch Benchmark processes on the local machine before the test executes. | No
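Taken together, the arguments above yield invocations like the following sketch (the workload, scenario name, and endpoint are illustrative; `benchmark-only` skips provisioning and targets an existing cluster):

```shell
# Quick sanity check of the geonames workload against a running cluster,
# using the renamed --scenario flag (formerly --procedure).
opensearch-benchmark execute-test \
  --pipeline=benchmark-only \
  --workload=geonames \
  --scenario=append-no-conflicts-index-only \
  --target-hosts=127.0.0.1:9200 \
  --test-mode
```

Because `--test-mode` runs a single iteration of each operation, this verifies the configuration only and should not be used for actual benchmarking.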

*Example 1*
@@ -72,7 +72,7 @@ Argument | Description | Required
`workload-revision` | Define a specific revision in the workload repository that Benchmark should use. | No
`workload` | Define the workload to use. List possible workloads with `opensearch-benchmark list workloads`. | No
`workload-params` | Define a comma-separated list of key:value pairs that are injected verbatim to the workload as variables. | No
`procedure` | Define the procedure to use. List possible procedures for workloads with `opensearch-benchmark list workloads`. | No
`scenario` | Define the scenario to use. List possible scenarios for workloads with `opensearch-benchmark list workloads`. | No
`provision-config-instance` | Define the provision_config_instance to use. List possible provision_config_instances with `opensearch-benchmark list provision_config_instances` (default: `defaults`). | No
`provision-config-instance-params` | Define a comma-separated list of key:value pairs that are injected verbatim as variables for the provision_config_instance. | No
`runtime-jdk` | The major version of the runtime JDK to use. | No
@@ -85,8 +85,8 @@ Argument | Description | Required
`telemetry` | Enable the provided telemetry devices, provided as a comma-separated list. List possible telemetry devices with `opensearch-benchmark list telemetry`. | No
`telemetry-params` | Define a comma-separated list of key:value pairs that are injected verbatim to the telemetry devices as parameters. | No
`distribution-repository` | Define the repository from where the OpenSearch distribution should be downloaded (default: `release`). | No
`include-tasks` | Defines a comma-separated list of tasks to run. By default all tasks of a procedure are run. | No
`exclude-tasks` | Defines a comma-separated list of tasks not to run. By default all tasks of a procedure are run. | No
`include-tasks` | Defines a comma-separated list of tasks to run. By default all tasks of a scenario are run. | No
`exclude-tasks` | Defines a comma-separated list of tasks not to run. By default all tasks of a scenario are run. | No
`user-tag` | Define a user-specific key-value pair (separated by ':'). It is added to each metric record as meta info. Example: intention:baseline-ticket-12345 | No
`results-format` | Define the output format for the command line results. Options are `markdown` and `csv` (default: `markdown`). | No
`results-numbers-align` | Define the output column number alignment for the command line results. Options are `right`, `center`, `left` and `decimal` (default: right). | No
2 changes: 1 addition & 1 deletion it/distribution_test.py
@@ -42,5 +42,5 @@ def test_docker_distribution(cfg):
dist = it.DISTRIBUTIONS[-1]
it.wait_until_port_is_free(port_number=port)
assert it.execute_test(cfg, f"--pipeline=\"docker\" --distribution-version=\"{dist}\" "
f"--workload=\"geonames\" --procedure=\"append-no-conflicts-index-only\" --test-mode "
f"--workload=\"geonames\" --scenario=\"append-no-conflicts-index-only\" --test-mode "
f"--provision-config-instance=4gheap --target-hosts=127.0.0.1:{port}") == 0
6 changes: 3 additions & 3 deletions it/info_test.py
@@ -27,8 +27,8 @@


@it.benchmark_in_mem
def test_workload_info_with_procedure(cfg):
assert it.osbenchmark(cfg, "info --workload=geonames --procedure=append-no-conflicts") == 0
def test_workload_info_with_scenario(cfg):
assert it.osbenchmark(cfg, "info --workload=geonames --scenario=append-no-conflicts") == 0


@it.benchmark_in_mem
@@ -38,7 +38,7 @@ def test_workload_info_with_workload_repo(cfg):

@it.benchmark_in_mem
def test_workload_info_with_task_filter(cfg):
assert it.osbenchmark(cfg, "info --workload=geonames --procedure=append-no-conflicts --include-tasks=\"type:search\"") == 0
assert it.osbenchmark(cfg, "info --workload=geonames --scenario=append-no-conflicts --include-tasks=\"type:search\"") == 0


@it.benchmark_in_mem
4 changes: 2 additions & 2 deletions it/sources_test.py
@@ -30,9 +30,9 @@ def test_sources(cfg):
it.wait_until_port_is_free(port_number=port)
assert it.execute_test(cfg, f"--pipeline=from-sources --revision=latest \
--workload=geonames --test-mode --target-hosts=127.0.0.1:{port} "
f"--procedure=append-no-conflicts --provision-config-instance=4gheap "
f"--scenario=append-no-conflicts --provision-config-instance=4gheap "
f"--opensearch-plugins=analysis-icu") == 0

it.wait_until_port_is_free(port_number=port)
assert it.execute_test(cfg, f"--pipeline=from-sources --workload=geonames --test-mode --target-hosts=127.0.0.1:{port} "
f"--procedure=append-no-conflicts-index-only --provision-config-instance=\"4gheap,ea\"") == 0
f"--scenario=append-no-conflicts-index-only --provision-config-instance=\"4gheap,ea\"") == 0
2 changes: 1 addition & 1 deletion it/tracker_test.py
@@ -48,7 +48,7 @@ def test_cluster():
def test_create_workload(cfg, tmp_path, test_cluster):
# prepare some data
cmd = f"--test-mode --pipeline=benchmark-only --target-hosts=127.0.0.1:{test_cluster.http_port} " \
f" --workload=geonames --procedure=append-no-conflicts-index-only --quiet"
f" --workload=geonames --scenario=append-no-conflicts-index-only --quiet"
assert it.execute_test(cfg, cmd) == 0

# create the workload
18 changes: 9 additions & 9 deletions osbenchmark/benchmark.py
@@ -142,16 +142,16 @@ def add_workload_source(subparser):
default=""
)
info_parser.add_argument(
"--procedure",
help=f"Define the procedure to use. List possible procedures for workloads with `{PROGRAM_NAME} list workloads`."
"--scenario",
help=f"Define the scenario to use. List possible scenarios for workloads with `{PROGRAM_NAME} list workloads`."
)
info_task_filter_group = info_parser.add_mutually_exclusive_group()
info_task_filter_group.add_argument(
"--include-tasks",
help="Defines a comma-separated list of tasks to run. By default all tasks of a procedure are run.")
help="Defines a comma-separated list of tasks to run. By default all tasks of a scenario are run.")
info_task_filter_group.add_argument(
"--exclude-tasks",
help="Defines a comma-separated list of tasks not to run. By default all tasks of a procedure are run.")
help="Defines a comma-separated list of tasks not to run. By default all tasks of a scenario are run.")

create_workload_parser = subparsers.add_parser("create-workload", help="Create a Benchmark workload from existing data")
create_workload_parser.add_argument(
@@ -437,8 +437,8 @@ def add_workload_source(subparser):
default=""
)
test_execution_parser.add_argument(
"--procedure",
help=f"Define the procedure to use. List possible procedures for workloads with `{PROGRAM_NAME} list workloads`.")
"--scenario",
help=f"Define the scenario to use. List possible scenarios for workloads with `{PROGRAM_NAME} list workloads`.")
test_execution_parser.add_argument(
"--provision-config-instance",
help=f"Define the provision_config_instance to use. List possible "
@@ -500,10 +500,10 @@ def add_workload_source(subparser):
task_filter_group = test_execution_parser.add_mutually_exclusive_group()
task_filter_group.add_argument(
"--include-tasks",
help="Defines a comma-separated list of tasks to run. By default all tasks of a procedure are run.")
help="Defines a comma-separated list of tasks to run. By default all tasks of a scenario are run.")
task_filter_group.add_argument(
"--exclude-tasks",
help="Defines a comma-separated list of tasks not to run. By default all tasks of a procedure are run.")
help="Defines a comma-separated list of tasks not to run. By default all tasks of a scenario are run.")
test_execution_parser.add_argument(
"--user-tag",
help="Define a user-specific key-value pair (separated by ':'). It is added to each metric record as meta info. "
@@ -746,7 +746,7 @@ def configure_workload_params(arg_parser, args, cfg, command_requires_workload=T

if command_requires_workload:
cfg.add(config.Scope.applicationOverride, "workload", "params", opts.to_dict(args.workload_params))
cfg.add(config.Scope.applicationOverride, "workload", "procedure.name", args.procedure)
cfg.add(config.Scope.applicationOverride, "workload", "scenario.name", args.scenario)
cfg.add(config.Scope.applicationOverride, "workload", "include.tasks", opts.csv_to_list(args.include_tasks))
cfg.add(config.Scope.applicationOverride, "workload", "exclude.tasks", opts.csv_to_list(args.exclude_tasks))
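The renamed flag and the mutually exclusive task filters wire into `argparse` as in this minimal standalone sketch (a reconstruction for illustration, not the full parser from `osbenchmark/benchmark.py`):

```python
import argparse

# Minimal reconstruction of the renamed options shown in the diff above;
# the real test_execution_parser defines many more arguments.
parser = argparse.ArgumentParser(prog="opensearch-benchmark")
parser.add_argument(
    "--scenario",
    help="Define the scenario to use. List possible scenarios for "
         "workloads with `opensearch-benchmark list workloads`.")
# --include-tasks and --exclude-tasks cannot be combined, hence the group.
task_filter_group = parser.add_mutually_exclusive_group()
task_filter_group.add_argument(
    "--include-tasks",
    help="Comma-separated list of tasks to run. By default all tasks "
         "of a scenario are run.")
task_filter_group.add_argument(
    "--exclude-tasks",
    help="Comma-separated list of tasks not to run.")

args = parser.parse_args(
    ["--scenario", "append-no-conflicts", "--include-tasks", "type:search"])
print(args.scenario)       # append-no-conflicts
print(args.include_tasks)  # type:search
```

Passing both `--include-tasks` and `--exclude-tasks` exits with a usage error, which matches the mutually exclusive group declared in the diff.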

4 changes: 2 additions & 2 deletions osbenchmark/builder/builder.py
@@ -128,7 +128,7 @@ def stop(cfg):
test_ex_id=current_test_execution.test_execution_id,
test_ex_timestamp=current_test_execution.test_execution_timestamp,
workload_name=current_test_execution.workload_name,
procedure_name=current_test_execution.procedure_name
scenario_name=current_test_execution.scenario_name
)
except exceptions.NotFound:
logging.getLogger(__name__).info("Could not find test_execution [%s] and will thus not persist system metrics.", test_execution_id)
@@ -360,7 +360,7 @@ def receiveMsg_StartEngine(self, msg, sender):
# TODO: This is implicitly set by #load_provision_config() - can we gather this elsewhere?
self.provision_config_revision = self.cfg.opts("builder", "repository.revision")

# In our startup procedure we first create all builders. Only if this succeeds we'll continue.
# In our startup scenario we first create all builders. Only if this succeeds we'll continue.
hosts = self.cfg.opts("client", "hosts").default
if len(hosts) == 0:
raise exceptions.LaunchError("No target hosts are configured.")
2 changes: 1 addition & 1 deletion osbenchmark/config.py
@@ -41,7 +41,7 @@ class Scope(Enum):
# A sole benchmark
benchmark = 3
# Single benchmark workload setup (e.g. default, multinode, ...)
procedure = 4
scenario = 4
# property for every invocation, i.e. for backtesting
invocation = 5
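After the rename, the full `Scope` enum reads roughly as follows. The members with values 1 and 2 do not appear in this hunk and are assumed here from the `config.Scope.applicationOverride` usage elsewhere in the diff; only `scenario = 4` is confirmed above:

```python
from enum import Enum

class Scope(Enum):
    application = 1          # assumed: application-wide settings
    applicationOverride = 2  # assumed: per-run overrides (used via cfg.add)
    # A sole benchmark
    benchmark = 3
    # Single benchmark workload setup (e.g. default, multinode, ...)
    scenario = 4
    # property for every invocation, i.e. for backtesting
    invocation = 5

# A larger value denotes a more specific scope, the usual pattern for
# letting narrower scopes take precedence during config lookup.
print(Scope.scenario.name, Scope.scenario.value)  # scenario 4
```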
