
Executing Workload Testcases

Use ./ctest.sh to run a single test or a batch of tests. You can do this in the top-level build directory or under each workload directory. In the latter case, only that workload's tests are executed.

cd build
cd workload/dummy
./ctest.sh -N
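
For comparison, running the same command from the top-level build directory lists the tests of all built workloads (exactly which tests appear depends on which workloads were built):

cd build
./ctest.sh -N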

CTest Options

There is an extensive list of options in ./ctest.sh to control how tests are executed. The following are the most common options, inherited from ctest. See man ctest for all inherited ctest options. The ./ctest.sh extensions are listed further below.

  • -R: Select tests based on a regular expression string.
  • -E: Exclude tests based on a regular expression string.
  • -V: Show test execution with details.
  • -N: List the tests without running them.

Example: list tests with boringssl in name excluding those with _gated

./ctest.sh -R boringssl -E _gated -N

Example: run only test_static_boringssl (exact match)

./ctest.sh -R '^test_static_boringssl$'

Customize Configurations

It is possible to specify a test configuration file to override any configuration parameter of a test case:

./ctest.sh --config=test_config.yaml -V

The configuration file uses the following format:

*_dummy_pi:
    SCALE: 3000

where *_dummy_pi specifies the test case name; * acts as a wildcard match. The subsection underneath specifies the configuration variables and their values. Any parameter defined in a test case's validate.sh can be overridden.
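
For example, a single configuration file can override parameters for multiple test cases at once (a hypothetical sketch; the parameter names and values depend on what each workload's validate.sh defines):

# Override SCALE for any test case matching *_dummy_pi
*_dummy_pi:
    SCALE: 3000

# Override SCALE for one exact test case
test_kvm_dummy_pi_pkm:
    SCALE: 5000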

Use with caution, as overriding configuration parameters may lead to invalid parameter combinations.
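
One way to sanity-check an override before committing to a long run is to pair it with --dry-run (listed under ctest.sh below), which generates the testcase configurations and then exits; the test selection here is illustrative:

./ctest.sh -R dummy_pi --config=test_config.yaml --dry-run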

Benchmark Scripts

A set of utility scripts is linked under your workload build directory to simplify workload benchmarking activities.

ctest.sh

  • ctest.sh: This is an extended ctest script that adds the following options on top of what ctest supports:
Usage: ./ctest.sh [options]
--nohup          Run the test case(s) in daemon mode for long benchmarks.
--daemon         Run the test case(s) daemonized for long benchmarks, cleaning the environment before workload execution.
--noenv          Clean any external environment variables before proceeding with the tests.
--loop           Run the benchmark multiple times sequentially.
--run            Run the benchmark multiple times on the same SUT(s).
--burst          Run the benchmark multiple times simultaneously.
--config         Specify the test-config file.
--options        Specify additional validation backend options.
--set            Set the workload parameter values during loop and burst iterations.
--stop [prefix]  Kill all ctest sessions, or only the session whose workload benchmark namespace matches the given prefix.
--continue       Ignore any errors and continue the loop and burst iterations.
--prepare-sut    Prepare cloud SUT instances for reuse.
--reuse-sut      Reuse previously prepared cloud SUT instances.
--cleanup-sut    Cleanup cloud SUT instances.
--dry-run        Generate the testcase configurations and then exit.
--testcase       Specify the exact testcase name to be executed.
--attach <file>  Specify a file to be attached under the logs directory.
--check-docker-image             Check image availability before running the workload.
--push-docker-image <registry>   Push the workload image(s) to the mirror registry.
--testset                        Specify a testset yaml file.
--describe-params                Show workload parameter descriptions.
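
For instance, to review the parameters a workload accepts before starting a long daemonized run, and to stop all background ctest sessions afterwards (a sketch; both commands are run from the workload build directory):

./ctest.sh --describe-params
./ctest.sh --stop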

Examples

  1. Run aws test cases 5 times sequentially (loop):

    ./ctest.sh -R aws --loop=5 --nohup
  2. Run aws test cases 5 times simultaneously (burst):

    ./ctest.sh -R aws --burst=5 --nohup
  3. Run aws test cases 4 times simultaneously with the SCALE value incremented linearly as 1000, 1300, 1600, 1900 in each iteration:

    ... uses three previous values to deduce the increment

    ./ctest.sh -R aws --set "SCALE=1000 1300 1600 ...2000" --burst=4 --nohup
  4. Run aws test cases 4 times simultaneously with the SCALE value incremented linearly as 1000, 1600, 1000, 1600 in each iteration:

    ... uses three previous values to deduce the increment

    |200 means the values must be divisible by 200

    ./ctest.sh -R aws --set "SCALE=1000 1300 1600 ...2000 |200" --burst=4 --nohup
  5. Run aws test cases 4 times simultaneously with the SCALE value incremented linearly as 1000, 1600, 2000, 1000 in each iteration:

    ... uses three previous values to deduce the increment

    8000| means the values must be a factor of 8000

    ./ctest.sh -R aws --set "SCALE=1000 1200 1400 ...2000 8000|" --burst=4 --nohup
  6. Run aws test cases 4 times simultaneously with the SCALE value incremented exponentially as 1000, 2000, 4000, 8000 in each iteration:

    ... uses three previous values to deduce the multiplication factor

    ./ctest.sh -R aws --set "SCALE=1000 2000 4000 ...10000" --burst=4 --nohup  
  7. Run aws test cases 6 times simultaneously with the SCALE value enumerated repeatedly as 1000, 1500, 1700, 1000, 1500, 1700 in each iteration:

    ./ctest.sh -R aws --set "SCALE=1000 1500 1700" --burst=6 --nohup
  8. Run aws test cases 6 times simultaneously with the SCALE and BATCH_SIZE values enumerated separately as (1000,1), (1500,2), (1700,4), (1000,8), (1500,1), (1700,2) in each iteration:

    Values are repeated if needed.

    ./ctest.sh -R aws --set "SCALE=1000 1500 1700" --set BATCH_SIZE="1 2 4 8" --burst=6 --nohup
  9. Run aws test cases 8 times simultaneously with the SCALE and BATCH_SIZE values permuted as (1000,1), (1000,2), (1000,4), (1000,8), (1500,1), (1500,2), (1500,4), (1500,8) in each iteration:

    ./ctest.sh -R aws --set "SCALE=1000 1500 1700/BATCH_SIZE=1 2 4 8" --burst=8 --nohup
  10. For cloud instances, it is possible to test different machine types by enumerating the <CSP>_MACHINE_TYPE values, where <CSP> is the Cloud Service Provider abbreviation (e.g., AWS_MACHINE_TYPE or GCP_MACHINE_TYPE):

    ./ctest.sh -R aws --set "AWS_MACHINE_TYPE=m6i.xlarge m6i.2xlarge m6i.4xlarge" --loop 3 --nohup
  11. For aws, specific disk configurations can also be enumerated:

    • type of disk
      ./ctest.sh -R aws --set "AWS_DISK_TYPE=io1 io2" --loop 2 --nohup
    • size of disk
      ./ctest.sh -R aws --set "AWS_DISK_SIZE=500 1000" --loop 2 --nohup
    • disk's IOPS
      ./ctest.sh -R aws --set "AWS_IOPS=16000 32000" --loop 2 --nohup
    • number of striped disks
      ./ctest.sh -R aws --set "AWS_NUM_STRIPED_DISKS=1 2" --loop 2 --nohup

Cloud SUT Reuse

It is possible to reuse the Cloud SUT instances during the benchmark process. This is especially useful when tuning workload parameters.

To reuse any SUT instances, you need to first prepare (provision) the Cloud instances, using the ctest.sh --prepare-sut command as follows:

./ctest.sh -R aws_kafka_3n_pkm -V --prepare-sut

The --prepare-sut command provisions and prepares the Cloud instances suitable for running the aws_kafka_3n_pkm test case. The preparation includes installing docker/Kubernetes and labeling the worker nodes. The SUT details are stored under the sut-logs-aws_kafka_3n_pkm directory.

Next, you can run any iterations of the test cases, reusing the prepared SUT instances with the --reuse-sut command, as follows:

./ctest.sh -R aws_kafka_3n_pkm -V --reuse-sut

If --reuse-sut is set, --burst is disabled.

Finally, to cleanup the SUT instances, use the --cleanup-sut command:

./ctest.sh -R aws_kafka_3n_pkm -V --cleanup-sut
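
Putting the three steps together, a typical parameter-tuning session might look like the following sketch (the parameter name SCALE and its values are illustrative; whether a given parameter applies depends on the workload):

./ctest.sh -R aws_kafka_3n_pkm -V --prepare-sut
./ctest.sh -R aws_kafka_3n_pkm -V --reuse-sut --set "SCALE=1000 1500 2000" --loop=3
./ctest.sh -R aws_kafka_3n_pkm -V --cleanup-sut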

SUT reuse is subject to the following limitations:

  • The SUT instances are provisioned and prepared for a specific test case. Different test cases cannot share SUT instances.
  • It is possible to change workload parameters, provided that the changes do not:
    • Affect the number of worker nodes.
    • Affect the worker node machine types, disk storage, or network topologies.
    • Affect worker node labeling.
    • Introduce any new container images.

After using the Cloud instances, please remember to clean them up with --cleanup-sut.


Running Testcases using Testset YAML

You can specify a testset configuration file to run several test cases sequentially with ctest.sh:

PLATFORM: SPR
BENCHMARK: dummy
TERRAFORM_OPTIONS: "--docker"
TERRAFORM_SUT: kvm
testcase: "test_kvm_dummy_pi_pkm"
options: "--sutinfo --intel_publish"
#test-config: "test-config.yaml"
SCALE: 2000

---

PLATFORM: ICX
BENCHMARK: dummy
TERRAFORM_OPTIONS: "--docker"
TERRAFORM_SUT: kvm
testcase: "test_kvm_dummy_pi_pkm"
options: "--sutinfo --intel_publish"
#test-config: "test-config.yaml"
SCALE: 2000

where two testcases are executed, one per YAML document (separated by ---):

  • Supported cmake options: PLATFORM, BENCHMARK, REGISTRY, REGISTRY_AUTH, TIMEOUT, SPOT_INSTANCE, TERRAFORM_OPTIONS, and TERRAFORM_SUT.
  • Supported ctest options: testcase, test-config, config, loop, burst, run, and options.
    • testcase and options can be either a string or a list of strings.
    • If testcase starts and ends with /, the testcase name is treated as a regular expression. If testcase starts with !/ and ends with /, it is treated as an anti regular expression (test cases matching it are excluded). Otherwise, the testcase name must match exactly.
  • Any other specified parameters are passed to ctest.sh via --set.
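
For example, a testset document might select test cases by regular expression and pass a list of options (a hypothetical sketch; the field values are illustrative):

PLATFORM: SPR
BENCHMARK: dummy
TERRAFORM_OPTIONS: "--docker"
TERRAFORM_SUT: kvm
testcase: "/dummy_pi/"
options:
  - "--sutinfo"
  - "--intel_publish"
loop: 2
SCALE: 2000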