Renaming all occurrences of test-procedure to procedure #421

Closed
10 changes: 5 additions & 5 deletions docs/api/execute-test.md
Original file line number Diff line number Diff line change
@@ -23,12 +23,12 @@ Argument | Description | Required
:--- | :--- | :---
`workload` | The dataset and operations that execute during a test. See [OpenSearch Benchmark Workloads repository](https://github.com/opensearch-project/opensearch-benchmark-workloads) for more details on workloads. | Yes
`workload-params` | Parameters defined within each workload that can be overwritten. These parameters are outlined in the README of each workload. You can find an example of the parameters for the eventdata workload [here](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/eventdata#parameters). | No
`test_procedure` | Test Procedures define the sequence of operations and parameters for a specific workload. When no `test_procedure` is specified, Benchmark selects the default for the workload. You can find an example test procedure [here](https://github.com/opensearch-project/opensearch-benchmark-workloads/blob/main/eventdata/test_procedures/default.json). | No
`scenario` | Scenarios define the sequence of operations and parameters for a specific workload. When no `scenario` is specified, Benchmark selects the default for the workload. You can find an example scenario [here](https://github.com/opensearch-project/opensearch-benchmark-workloads/blob/main/eventdata/scenarios/default.json). | No
`client-options` | Options for the [OpenSearch Python client](https://opensearch.org/docs/latest/clients/python/). Required if testing against a cluster with security enabled. | No
`pipeline` | Steps required to execute a test, including provisioning an OpenSearch cluster from source code or a specified distribution. Defaults to `from-sources`, which provisions an OpenSearch cluster from source code. | No
`distribution-version` | The OpenSearch version to use for a given test. Defining a version can be useful when using a `pipeline` that includes provisioning. When using a `pipeline` without provisioning, Benchmark will automatically determine the version. | No
`target-hosts` | The OpenSearch endpoint(s) to execute a test against. This should only be specified with `--pipeline=benchmark-only`. | No
`test-mode` | Run a single iteration of each operation in the test procedure. The test provides a quick way for sanity checking a testing configuration. Therefore, do not use `test-mode` for actual benchmarking. | No
`test-mode` | Run a single iteration of each operation in the scenario. The test provides a quick way of sanity checking a testing configuration. Therefore, do not use `test-mode` for actual benchmarking. | No
`kill-running-processes` | Kill any running OpenSearch Benchmark processes on the local machine before the test executes. | No

*Example 1*
@@ -72,7 +72,7 @@ Argument | Description | Required
`workload-revision` | Define a specific revision in the workload repository that Benchmark should use. | No
`workload` | Define the workload to use. List possible workloads with `opensearch-benchmark list workloads`. | No
`workload-params` | Define a comma-separated list of key:value pairs that are injected verbatim to the workload as variables. | No
`test-procedure` | Define the test_procedure to use. List possible test_procedures for workloads with `opensearch-benchmark list workloads`. | No
`scenario` | Define the scenario to use. List possible scenarios for workloads with `opensearch-benchmark info --workload=<workload_name>`. | No
`provision-config-instance` | Define the provision_config_instance to use. List possible provision_config_instances with `opensearch-benchmark list provision_config_instances` (default: `defaults`). | No
`provision-config-instance-params` | Define a comma-separated list of key:value pairs that are injected verbatim as variables for the provision_config_instance. | No
`runtime-jdk` | The major version of the runtime JDK to use. | No
@@ -85,8 +85,8 @@ Argument | Description | Required
`telemetry` | Enable the provided telemetry devices, provided as a comma-separated list. List possible telemetry devices with `opensearch-benchmark list telemetry`. | No
`telemetry-params` | Define a comma-separated list of key:value pairs that are injected verbatim to the telemetry devices as parameters. | No
`distribution-repository` | Define the repository from where the OpenSearch distribution should be downloaded (default: `release`). | No
`include-tasks` | Defines a comma-separated list of tasks to run. By default all tasks of a test_procedure are run. | No
`exclude-tasks` | Defines a comma-separated list of tasks not to run. By default all tasks of a test_procedure are run. | No
`include-tasks` | Defines a comma-separated list of tasks to run. By default all tasks of a scenario are run. | No
`exclude-tasks` | Defines a comma-separated list of tasks not to run. By default all tasks of a scenario are run. | No
`user-tag` | Define a user-specific key-value pair (separated by ':'). It is added to each metric record as meta info. Example: intention:baseline-ticket-12345 | No
`results-format` | Define the output format for the command line results. Options are `markdown` and `csv` (default: `markdown`). | No
`results-numbers-align` | Define the output column number alignment for the command line results. Options are `right`, `center`, `left` and `decimal` (default: right). | No
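To make the flag combinations documented above concrete, here is a small Python sketch that assembles a hypothetical `execute-test` invocation. The workload, scenario, and host values are illustrative placeholders, not values taken from this PR:

```python
import shlex

# Hypothetical example combining the documented flags; the workload,
# scenario, and target-host values below are illustrative only.
cmd = [
    "opensearch-benchmark", "execute-test",
    "--workload=geonames",
    "--scenario=append-no-conflicts-index-only",
    "--pipeline=benchmark-only",
    "--target-hosts=127.0.0.1:9200",
    "--test-mode",
]
print(shlex.join(cmd))
```

Because `--test-mode` runs only a single iteration of each operation, a command shaped like this is suited to sanity checks rather than real benchmarking.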
2 changes: 1 addition & 1 deletion it/distribution_test.py
@@ -42,5 +42,5 @@ def test_docker_distribution(cfg):
dist = it.DISTRIBUTIONS[-1]
it.wait_until_port_is_free(port_number=port)
assert it.execute_test(cfg, f"--pipeline=\"docker\" --distribution-version=\"{dist}\" "
f"--workload=\"geonames\" --test-procedure=\"append-no-conflicts-index-only\" --test-mode "
f"--workload=\"geonames\" --scenario=\"append-no-conflicts-index-only\" --test-mode "
f"--provision-config-instance=4gheap --target-hosts=127.0.0.1:{port}") == 0
6 changes: 3 additions & 3 deletions it/info_test.py
@@ -27,8 +27,8 @@


@it.benchmark_in_mem
def test_workload_info_with_test_procedure(cfg):
assert it.osbenchmark(cfg, "info --workload=geonames --test-procedure=append-no-conflicts") == 0
def test_workload_info_with_scenario(cfg):
assert it.osbenchmark(cfg, "info --workload=geonames --scenario=append-no-conflicts") == 0


@it.benchmark_in_mem
@@ -38,7 +38,7 @@ def test_workload_info_with_workload_repo(cfg):

@it.benchmark_in_mem
def test_workload_info_with_task_filter(cfg):
assert it.osbenchmark(cfg, "info --workload=geonames --test-procedure=append-no-conflicts --include-tasks=\"type:search\"") == 0
assert it.osbenchmark(cfg, "info --workload=geonames --scenario=append-no-conflicts --include-tasks=\"type:search\"") == 0


@it.benchmark_in_mem
7 changes: 4 additions & 3 deletions it/sources_test.py
@@ -28,10 +28,11 @@
def test_sources(cfg):
port = 19200
it.wait_until_port_is_free(port_number=port)
assert it.execute_test(cfg, f"--revision=latest --workload=geonames --test-mode --target-hosts=127.0.0.1:{port} "
f"--test-procedure=append-no-conflicts --provision-config-instance=4gheap "
assert it.execute_test(cfg, f"--pipeline=from-sources --revision=latest \
--workload=geonames --test-mode --target-hosts=127.0.0.1:{port} "
f"--scenario=append-no-conflicts --provision-config-instance=4gheap "
f"--opensearch-plugins=analysis-icu") == 0

it.wait_until_port_is_free(port_number=port)
assert it.execute_test(cfg, f"--pipeline=from-sources --workload=geonames --test-mode --target-hosts=127.0.0.1:{port} "
f"--test-procedure=append-no-conflicts-index-only --provision-config-instance=\"4gheap,ea\"") == 0
f"--scenario=append-no-conflicts-index-only --provision-config-instance=\"4gheap,ea\"") == 0
2 changes: 1 addition & 1 deletion it/tracker_test.py
@@ -48,7 +48,7 @@ def test_cluster():
def test_create_workload(cfg, tmp_path, test_cluster):
# prepare some data
cmd = f"--test-mode --pipeline=benchmark-only --target-hosts=127.0.0.1:{test_cluster.http_port} " \
f" --workload=geonames --test-procedure=append-no-conflicts-index-only --quiet"
f" --workload=geonames --scenario=append-no-conflicts-index-only --quiet"
assert it.execute_test(cfg, cmd) == 0

# create the workload
18 changes: 9 additions & 9 deletions osbenchmark/benchmark.py
@@ -142,16 +142,16 @@ def add_workload_source(subparser):
default=""
)
info_parser.add_argument(
"--test-procedure",
help=f"Define the test_procedure to use. List possible test_procedures for workloads with `{PROGRAM_NAME} list workloads`."
"--scenario",
help=f"Define the scenario to use. List possible scenarios for workloads with `{PROGRAM_NAME} list workloads`."
)
info_task_filter_group = info_parser.add_mutually_exclusive_group()
info_task_filter_group.add_argument(
"--include-tasks",
help="Defines a comma-separated list of tasks to run. By default all tasks of a test_procedure are run.")
help="Defines a comma-separated list of tasks to run. By default all tasks of a scenario are run.")
info_task_filter_group.add_argument(
"--exclude-tasks",
help="Defines a comma-separated list of tasks not to run. By default all tasks of a test_procedure are run.")
help="Defines a comma-separated list of tasks not to run. By default all tasks of a scenario are run.")

create_workload_parser = subparsers.add_parser("create-workload", help="Create a Benchmark workload from existing data")
create_workload_parser.add_argument(
@@ -462,8 +462,8 @@ def add_workload_source(subparser):
default=""
)
test_execution_parser.add_argument(
"--test-procedure",
help=f"Define the test_procedure to use. List possible test_procedures for workloads with `{PROGRAM_NAME} list workloads`.")
"--scenario",
help=f"Define the scenario to use. List possible scenarios for workloads with `{PROGRAM_NAME} list workloads`.")
test_execution_parser.add_argument(
"--provision-config-instance",
help=f"Define the provision_config_instance to use. List possible "
@@ -525,10 +525,10 @@ def add_workload_source(subparser):
task_filter_group = test_execution_parser.add_mutually_exclusive_group()
task_filter_group.add_argument(
"--include-tasks",
help="Defines a comma-separated list of tasks to run. By default all tasks of a test_procedure are run.")
help="Defines a comma-separated list of tasks to run. By default all tasks of a scenario are run.")
task_filter_group.add_argument(
"--exclude-tasks",
help="Defines a comma-separated list of tasks not to run. By default all tasks of a test_procedure are run.")
help="Defines a comma-separated list of tasks not to run. By default all tasks of a scenario are run.")
test_execution_parser.add_argument(
"--user-tag",
help="Define a user-specific key-value pair (separated by ':'). It is added to each metric record as meta info. "
@@ -774,7 +774,7 @@ def configure_workload_params(arg_parser, args, cfg, command_requires_workload=T

if command_requires_workload:
cfg.add(config.Scope.applicationOverride, "workload", "params", opts.to_dict(args.workload_params))
cfg.add(config.Scope.applicationOverride, "workload", "test_procedure.name", args.test_procedure)
cfg.add(config.Scope.applicationOverride, "workload", "scenario.name", args.scenario)
cfg.add(config.Scope.applicationOverride, "workload", "include.tasks", opts.csv_to_list(args.include_tasks))
cfg.add(config.Scope.applicationOverride, "workload", "exclude.tasks", opts.csv_to_list(args.exclude_tasks))

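The parser changes above can be sketched in isolation. The following is a standalone reconstruction of the mutually exclusive task-filter flags, not the actual `benchmark.py` parser, and the comma-splitting at the end is an assumption about what `opts.csv_to_list` does:

```python
import argparse

# Standalone sketch of the mutually exclusive task-filter flags; a
# reconstruction for illustration, not the real benchmark.py parser.
parser = argparse.ArgumentParser(prog="osb-sketch")
task_filter_group = parser.add_mutually_exclusive_group()
task_filter_group.add_argument(
    "--include-tasks",
    help="Defines a comma-separated list of tasks to run.")
task_filter_group.add_argument(
    "--exclude-tasks",
    help="Defines a comma-separated list of tasks not to run.")

args = parser.parse_args(["--include-tasks", "type:search, index-append"])
# Assumed behavior of opts.csv_to_list: split on commas, strip whitespace.
include = [t.strip() for t in args.include_tasks.split(",")]
print(include)
```

Passing both flags at once exits with an argparse error, which matches the mutually exclusive grouping introduced in the diff.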
2 changes: 1 addition & 1 deletion osbenchmark/builder/__init__.py
@@ -24,4 +24,4 @@

# expose only the minimum API
from .builder import StartEngine, EngineStarted, StopEngine, EngineStopped, ResetRelativeTime, BuilderActor, \
cluster_distribution_version, download, install, start, stop
cluster_distribution_version, cluster_distribution_type, download, install, start, stop
26 changes: 24 additions & 2 deletions osbenchmark/builder/builder.py
@@ -128,7 +128,7 @@ def stop(cfg):
test_ex_id=current_test_execution.test_execution_id,
test_ex_timestamp=current_test_execution.test_execution_timestamp,
workload_name=current_test_execution.workload_name,
test_procedure_name=current_test_execution.test_procedure_name
scenario_name=current_test_execution.scenario_name
)
except exceptions.NotFound:
logging.getLogger(__name__).info("Could not find test_execution [%s] and will thus not persist system metrics.", test_execution_id)
@@ -268,6 +268,28 @@ def cluster_distribution_version(cfg, client_factory=client.OsClientFactory):
return opensearch.info()["version"]["number"]


def cluster_distribution_type(cfg, client_factory=client.OsClientFactory):
"""
Attempt to get the cluster's distribution type even before it is actually started (which only makes sense for externally
provisioned clusters).

:param cfg: The current config object.
:param client_factory: Factory class that creates the OpenSearch client.
:return: The distribution type.
"""
hosts = cfg.opts("client", "hosts").default
client_options = cfg.opts("client", "options").default
opensearch = client_factory(hosts, client_options).create()
# unconditionally wait for the REST layer - if it's not up by then, we'll intentionally raise the original error
client.wait_for_rest_layer(opensearch)
try:
distribution_type = opensearch.info()["version"]["distribution"]
except Exception:
console.warn("Could not determine the distribution type from the endpoint; use --distribution-version to specify it explicitly")
distribution_type = None
return distribution_type
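The new `cluster_distribution_type` helper keys off the shape of the client's `info()` response. Here is a minimal sketch of that lookup and its `None` fallback; the sample dictionaries are illustrative, not captured from a live cluster:

```python
# Sample info() payloads; illustrative only. OpenSearch reports a
# "distribution" field under "version", while legacy Elasticsearch-style
# responses carry only a "number" field.
opensearch_info = {"version": {"distribution": "opensearch", "number": "2.11.0"}}
legacy_info = {"version": {"number": "7.10.2"}}

def distribution_type_from_info(info):
    # Mirror the warn-and-continue fallback above: a missing field yields None.
    try:
        return info["version"]["distribution"]
    except (KeyError, TypeError):
        return None

print(distribution_type_from_info(opensearch_info))  # opensearch
print(distribution_type_from_info(legacy_info))      # None
```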


def to_ip_port(hosts):
ip_port_pairs = []
for host in hosts:
@@ -355,7 +377,7 @@ def receiveMsg_StartEngine(self, msg, sender):
# TODO: This is implicitly set by #load_provision_config() - can we gather this elsewhere?
self.provision_config_revision = self.cfg.opts("builder", "repository.revision")

# In our startup procedure we first create all builders. Only if this succeeds we'll continue.
# In our startup procedure we first create all builders. Only if this succeeds will we continue.
hosts = self.cfg.opts("client", "hosts").default
if len(hosts) == 0:
raise exceptions.LaunchError("No target hosts are configured.")
2 changes: 1 addition & 1 deletion osbenchmark/config.py
@@ -41,7 +41,7 @@ class Scope(Enum):
# A sole benchmark
benchmark = 3
# Single benchmark workload setup (e.g. default, multinode, ...)
test_procedure = 4
scenario = 4
# property for every invocation, i.e. for backtesting
invocation = 5
