From b01544aeafa9ef28b40eef3b8d0db60cd83b72be Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Fri, 29 Jul 2022 10:03:42 -0400 Subject: [PATCH 01/38] Ready for Testing --- README.md | 125 ++++++++++++++++++++++++++++++------- red-queen.py | 169 +++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 273 insertions(+), 21 deletions(-) create mode 100644 red-queen.py diff --git a/README.md b/README.md index 6fbf74c1..159f11e5 100644 --- a/README.md +++ b/README.md @@ -1,63 +1,146 @@ -> "Well, in our country," said Alice, still panting a little, "you'd generally -> get to somewhere else—if you run very fast for a long time, as we've been +> "Well, in our country," said Alice, still panting a little, "you'd generally +> get to somewhere else—if you run very fast for a long time, as we've been > doing." > -> "A slow sort of country!" said the Queen. "Now, here, you see, it takes all -> the running you can do, to keep in the same place. If you want to get +> "A slow sort of country!" said the Queen. "Now, here, you see, it takes all +> the running you can do, to keep in the same place. If you want to get > somewhere else, you must run at least twice as fast as that!" > > [Carroll, Lewis: Through the Looking-Glass, Chapter 2]( https://www.gutenberg.org/files/12/12-h/12-h.htm) -## About +
+ +

About


The Red Queen benchmark framework was created to facilitate the benchmarking of
algorithms used in quantum compilation. The framework is tightly integrated into
`pytest`. Therefore, to use it effectively, you should know the basics of
`pytest` first. Take a look at the
[introductory material](https://docs.pytest.org/en/latest/getting-started.html).
+ +

Usage


Red Queen is a framework for benchmarking quantum compilation algorithms. Since
it is still in early development, you must clone this repository to use it:

```bash
git clone git@github.com:Qiskit/red-queen.git
```

To run benchmarks, you must first go to the `red-queen` directory and install
the required packages:

```bash
cd red-queen
pip install -r requirements.txt
```

Red Queen has a `CLI` (command line interface) that you can use to execute benchmarks.

The general template for the `CLI` is as follows:


```bash
python red-queen.py -c <compiler> -t <benchmark type> -b <benchmark>
```

(The same invocation works on macOS, Linux, and Windows; the script adjusts
the underlying `pytest` call for Windows itself.)

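Under the hood, the CLI does not run benchmarks itself: it assembles a `pytest` command line from these options and hands it to a subprocess. A minimal sketch of that assembly (the helper name and the explicit `windows` flag are illustrative, not part of the real script):

```python
def build_pytest_command(paths, compiler="", windows=False):
    """Assemble the pytest invocation behind the CLI options (sketch).

    `paths` are the benchmark files selected via -t/-b, `compiler` is the
    marker passed via -c; on Windows the script switches the runner.
    """
    marker = f"-m {compiler} " if compiler else ""
    # On Windows the script invokes pytest via `python -m` and adds `-s`
    # to disable stdin handling; elsewhere plain `pytest` suffices.
    runner = "python -m pytest -s" if windows else "pytest"
    return f"{runner} {' '.join(paths)} {marker}--store"
```

For example, `build_pytest_command(["red_queen/games/mapping/map_queko.py"], "tweedledum")` produces `pytest red_queen/games/mapping/map_queko.py -m tweedledum --store`.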

Now, suppose you want to run the mapping benchmarks using only `tweedledum`.
You can do this via the `CLI` or with `pytest`:


[For macOS and Linux]

With the `CLI`:

```bash
python red-queen.py -c tweedledum -t mapping -b map_queko.py
```

With `pytest`:

```bash
pytest red_queen/games/mapping/map_queko.py -m tweedledum --store
```


[For Windows]

With the `CLI`:

```bash
python red-queen.py -c tweedledum -t mapping -b map_queko.py
```

With `pytest`:

```bash
python -m pytest -s red_queen/games/mapping/map_queko.py -m tweedledum --store
```

On Windows, invoke `pytest` through `python -m pytest`, and add `-s` to your
pytest call to disable stdin handling.

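The values accepted by `-c` are not hard-coded: the CLI reads the marker names declared in the repository's `pytest.ini` with `configparser` and offers them as compiler choices. A simplified sketch of that lookup:

```python
import configparser

def read_marker_choices(ini_path="pytest.ini"):
    """Return the pytest marker names declared in an ini file.

    Sketch of how the CLI builds its -c compiler choices; like the real
    script, it drops the empty entries a multi-line value produces.
    """
    config = configparser.ConfigParser()
    config.read(ini_path)
    # A multi-line `markers =` value comes back newline-separated, with a
    # leading empty entry that we filter out.
    markers = config["pytest"]["markers"].split("\n")
    return [marker for marker in markers if marker]
```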

The benchmark suite will consider all functions named `bench_*` in
`red_queen/games/mapping/map_queko.py`. Because we set the `-m` option, only
the ones marked with `tweedledum` will be run. (We could easily do the same
for `qiskit`.) If you don't define a `-m` option, all `bench_*` functions
will be run.

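The `-t` and `-b` choices themselves come from a directory scan: the CLI walks `red_queen/games/`, treating each sub-directory as a benchmark type and each non-underscore `.py` file inside it as a benchmark. A condensed sketch of that scan (the function name is illustrative):

```python
import os

def discover_benchmarks(games_dir="red_queen/games"):
    """Map each benchmark category (sub-directory) to its benchmark files,
    mirroring the directory scan in the CLI script (simplified sketch)."""
    categories = {}
    for entry in os.scandir(games_dir):
        if entry.is_dir():
            # Keep only plain .py files, skipping helpers like __init__.py
            categories[entry.name] = {
                sub.name: sub.path
                for sub in os.scandir(entry.path)
                if sub.is_file()
                and sub.name.endswith(".py")
                and not sub.name.startswith("_")
            }
    return categories
```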

The `--store` option tells the framework to store the results in a JSON file
in the `results` directory. To see the results as a table, you can use:

```bash
python -m report.console_tables --storage results/0001_bench.json
```

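Each stored file is prefixed with a zero-padded run counter (`0001_bench.json`, `0002_bench.json`, ...), which is how the CLI finds the newest run to display. A small sketch of loading the latest stored result, assuming that naming scheme (the helper itself is hypothetical):

```python
import json
import os

def latest_results(results_dir="results"):
    """Return the parsed JSON of the most recent ``NNNN_*.json`` result file,
    or None if the directory holds no results (sketch, assumes the
    counter-prefixed naming used by --store)."""
    newest, newest_path = -1, None
    for entry in os.scandir(results_dir):
        if entry.is_file() and entry.name.endswith(".json"):
            counter = int(entry.name.split("_")[0])  # leading run counter
            if counter > newest:
                newest, newest_path = counter, entry.path
    if newest_path is None:
        return None
    with open(newest_path, encoding="utf-8") as handle:
        return json.load(handle)
```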
+ ## Warning + This code is still under development. There are many razer sharp edges. For information of how execution works and other details about the framwork @@ -73,5 +156,5 @@ on the knowledge of the internals of the following established `pytest` plugins: ## License -This software is licensed under the Apache 2.0 licence (see -[LICENSE](https://github.com/Qiskit/red-queen/blob/main/LICENSE)) +This software is licensed under the Apache 2.0 licence (see +[LICENSE](https://github.com/Qiskit/red-queen/blob/main/LICENSE)) \ No newline at end of file diff --git a/red-queen.py b/red-queen.py new file mode 100644 index 00000000..fc2d8a67 --- /dev/null +++ b/red-queen.py @@ -0,0 +1,169 @@ +#!/usr/bin/env python3 +import os +import sys +import platform +from contextlib import redirect_stderr +import subprocess +import configparser +import click + + +def benchmarkRetrieval(): + benchmark_category = {} + benchmark_types = [] + benchmarks = [] + dir_path = "red_queen/games/" + windows_ad1 = "" + windows_ad2 = "" + if platform.system() == "Windows": + windows_ad1 = "python3 -m " + windows_ad2 = "-s " + for entry in os.scandir(dir_path): + if entry.is_dir(): + benchmark_category[entry.name] = [] + subDict = {} + for sub in os.scandir(f"{dir_path}{entry.name}"): + if not sub.name.startswith("_") and sub.name.endswith(".py") and sub.is_file(): + subDict[sub.name] = sub.path + benchmark_category[entry.name] = subDict + + benchmark_types = list(benchmark_category.keys()) + for benchmark_pairs in benchmark_category.values(): + for keys in benchmark_pairs.keys(): + benchmarks.append(keys) + + return benchmark_category, benchmark_types, benchmarks, windows_ad1, windows_ad2 + + +def complierRetrieval(): + complierList = [] + config = configparser.ConfigParser() + config.read("pytest.ini") + for complier in config["pytest"]["markers"].split("\n"): + if complier != "": + complierList.append(complier) + # print(complierList) + # This line tests to see if there is a complier specifed 
+ return complierList + + +def resultRetrieval(): + results = {} + dir_path = "results" + for entry in os.scandir(dir_path): + if entry.is_file(): + filename = entry.name + resultCount = int(filename.split("_")[0]) + results[resultCount] = {entry.name: entry.path} + return results + + +def runBenchmarks(pytestPaths: str, windows_ad1: str, mTag: str, compiler: str): + click.echo("benchmarks ran") + subprocess.run( + [f"{windows_ad1}pytest -n auto {windows_ad2}{pytestPaths} {mTag}{compiler} --store"], + shell=True, + ) + + +def showResult(): + resultsDict = resultRetrieval() + click.echo(resultsDict) + resultNum = max(resultsDict.keys()) + click.echo(resultNum) + result_path = tuple(resultsDict[resultNum].values())[0] + command = f"python3 -m report.console_tables --storage {result_path}" + subprocess.run([command], shell=True) + click.echo( + f"If you want to view the results chart type:\npython -m report.console_tables --storage {result_path}" + ) + + +benchmark_category, benchmark_types, benchmarks, windows_ad1, windows_ad2 = benchmarkRetrieval() +complierList = complierRetrieval() + + +@click.command() +# @click.option("--version", action="version", version="%(prog)s 0.0.1") +@click.option( + "-c", + "--compiler", + "compiler", + multiple=True, + type=click.Choice(complierList), + help="enter a compliler here", +) +@click.option( + "-t", + "--benchmarkType", + "benchmarkType", + multiple=True, + type=click.Choice(benchmark_types), + help="enter the type of benchmark(s) here", +) +@click.option( + "-b", + "--benchmark", + "benchmark", + multiple=True, + type=click.Choice(benchmarks), + help="enter the specfic benchmark(s) here", +) +def main(compiler, benchmarkType, benchmark): + benchmarkPaths = [] + pytestPaths = "" + myDict = {} + mTag = "" + if len(compiler) > 0: + mTag = "-m " + compiler = compiler[0] + else: + compiler = "" + # ### This for loop ensures that we are able to run various benchmark types + i = 0 + j = 0 + ### Has a benchmark type been specified? 
+ if len(benchmarkType) > 0: + # click.echo("passed test 0") + while i < len(benchmarkType): + ### Is the inputted benchmark type valid? + if set(benchmarkType).issubset(benchmark_category): + # click.echo("passed test 1") + ### Has a benchmark been specified? + if len(benchmark) > 0: + # click.echo("passed test 2") + ### Are the inputted benchmark(s) valid? + if set(benchmark).issubset(benchmarks): + # click.echo("passed test 3") + ### Is the inputted benchmark within the inputted benchmark type suite? + if set(benchmark).issubset(set(benchmark_category[benchmarkType[i]])): + # click.echo("passed test 4") + for j in range(len(benchmark)): + benchmarkPaths.append( + benchmark_category[benchmarkType[0]][benchmark[j]] + ) + pytestPaths = " ".join(tuple(benchmarkPaths)) + runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) + showResult() + i += 1 + else: + myDict = benchmark_category[benchmarkType[0]] + for v in myDict.values(): + benchmarkPaths.append(v) + pytestPaths = " ".join(tuple(benchmarkPaths)) + runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) + showResult() + i += 1 + else: + question = input(f"Would you like to run all {len(benchmarks)} available benchmarks (y/n) ") + if question.lower() == "y": + for benchmark_list in benchmark_category.values(): + for paths in benchmark_list.values(): + benchmarkPaths.append(paths) + pytestPaths = " ".join(tuple(benchmarkPaths)) + runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) + showResult() + + +if __name__ == "__main__": + main() \ No newline at end of file From 52312fffc622dba41beb0efbb28037506f09ef29 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Fri, 29 Jul 2022 10:32:44 -0400 Subject: [PATCH 02/38] Added Proper Header --- red-queen.py | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/red-queen.py b/red-queen.py index fc2d8a67..ab15a048 100644 --- a/red-queen.py +++ b/red-queen.py @@ -1,3 +1,8 @@ +# ------------------------------------------------------------------------------ +# 
Part of Red Queen Project. This file is distributed under the MIT License. +# See accompanying file /LICENSE for details. +# ------------------------------------------------------------------------------ + #!/usr/bin/env python3 import os import sys From b79e63085be4ef9f765a271a5b7a711d20133663 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Fri, 29 Jul 2022 13:21:07 -0400 Subject: [PATCH 03/38] Entry Point Added --- README.md | 14 +++----------- red-queen.py => cli.py | 2 -- requirements.txt | 4 +++- setup.py | 4 ++++ 4 files changed, 10 insertions(+), 14 deletions(-) rename red-queen.py => cli.py (99%) diff --git a/README.md b/README.md index 159f11e5..231f0a44 100644 --- a/README.md +++ b/README.md @@ -52,19 +52,11 @@ The general templete for the `CLI` is as follows:
-[For MacOs and Linux] ```bash -python red-queen.py -c -t -b +red-queen -c -t -b ``` -[For Windows] - -```bash -python -m red-queen.py -c -t -b -``` - -
Now, suppose you want to run the mapping benchmarks using only `tweedledum`. You can do this via the `CLI` or with `pytest` @@ -77,7 +69,7 @@ You can do this via the `CLI` or with `pytest` With `CLI` ```bash -python red-queen.py -c tweedledum -t mapping -b map_queko.py +red-queen -c tweedledum -t mapping -b map_queko.py ``` With `pytest` @@ -93,7 +85,7 @@ python -m pytest games/mapping/map_queko.py -m tweedledum --store With `CLI` ```bash -python -m red-queen.py -c tweedledum -t mapping -b map_queko.py +red-queen -c tweedledum -t mapping -b map_queko.py ``` With `pytest` diff --git a/red-queen.py b/cli.py similarity index 99% rename from red-queen.py rename to cli.py index ab15a048..052bd427 100644 --- a/red-queen.py +++ b/cli.py @@ -5,9 +5,7 @@ #!/usr/bin/env python3 import os -import sys import platform -from contextlib import redirect_stderr import subprocess import configparser import click diff --git a/requirements.txt b/requirements.txt index 58ace3f2..584deebf 100644 --- a/requirements.txt +++ b/requirements.txt @@ -6,4 +6,6 @@ qiskit-terra qiskit-aer pytket>1.0,<2.0 setproctitle -rich \ No newline at end of file +rich +configparser +click \ No newline at end of file diff --git a/setup.py b/setup.py index bffd22ae..d78445be 100755 --- a/setup.py +++ b/setup.py @@ -59,5 +59,9 @@ "Documentation": "https://qiskit.org/documentation/", "Source Code": "https://github.com/Qiskit/red-queen", }, + entry_points=""" + [console_scripts] + red-queen=cli:main + """, zip_safe=False, ) From 16c5a8b9b4100ebe26ff297d783e894cc321c907 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon <39884062+Lementknight@users.noreply.github.com> Date: Wed, 3 Aug 2022 11:55:39 -0400 Subject: [PATCH 04/38] Update setup.py Co-authored-by: Matthew Treinish --- setup.py | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/setup.py b/setup.py index 5ed20c47..638d6e7f 100644 --- a/setup.py +++ b/setup.py @@ -59,9 +59,10 @@ "Documentation": "https://qiskit.org/documentation/", 
"Source Code": "https://github.com/Qiskit/red-queen", }, - entry_points=""" - [console_scripts] - red-queen=cli:main - """, + entry_points={ + 'console_scripts': [ + "red-queen = red_queen.cli:main", + ] + } zip_safe=False, ) \ No newline at end of file From 5758d09ac746f8dec5d184fce18b9481bc56d79c Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Wed, 3 Aug 2022 11:57:16 -0400 Subject: [PATCH 05/38] Code Perservation --- cli.py | 172 --------------------------------------------------------- 1 file changed, 172 deletions(-) delete mode 100644 cli.py diff --git a/cli.py b/cli.py deleted file mode 100644 index 052bd427..00000000 --- a/cli.py +++ /dev/null @@ -1,172 +0,0 @@ -# ------------------------------------------------------------------------------ -# Part of Red Queen Project. This file is distributed under the MIT License. -# See accompanying file /LICENSE for details. -# ------------------------------------------------------------------------------ - -#!/usr/bin/env python3 -import os -import platform -import subprocess -import configparser -import click - - -def benchmarkRetrieval(): - benchmark_category = {} - benchmark_types = [] - benchmarks = [] - dir_path = "red_queen/games/" - windows_ad1 = "" - windows_ad2 = "" - if platform.system() == "Windows": - windows_ad1 = "python3 -m " - windows_ad2 = "-s " - for entry in os.scandir(dir_path): - if entry.is_dir(): - benchmark_category[entry.name] = [] - subDict = {} - for sub in os.scandir(f"{dir_path}{entry.name}"): - if not sub.name.startswith("_") and sub.name.endswith(".py") and sub.is_file(): - subDict[sub.name] = sub.path - benchmark_category[entry.name] = subDict - - benchmark_types = list(benchmark_category.keys()) - for benchmark_pairs in benchmark_category.values(): - for keys in benchmark_pairs.keys(): - benchmarks.append(keys) - - return benchmark_category, benchmark_types, benchmarks, windows_ad1, windows_ad2 - - -def complierRetrieval(): - complierList = [] - config = 
configparser.ConfigParser() - config.read("pytest.ini") - for complier in config["pytest"]["markers"].split("\n"): - if complier != "": - complierList.append(complier) - # print(complierList) - # This line tests to see if there is a complier specifed - return complierList - - -def resultRetrieval(): - results = {} - dir_path = "results" - for entry in os.scandir(dir_path): - if entry.is_file(): - filename = entry.name - resultCount = int(filename.split("_")[0]) - results[resultCount] = {entry.name: entry.path} - return results - - -def runBenchmarks(pytestPaths: str, windows_ad1: str, mTag: str, compiler: str): - click.echo("benchmarks ran") - subprocess.run( - [f"{windows_ad1}pytest -n auto {windows_ad2}{pytestPaths} {mTag}{compiler} --store"], - shell=True, - ) - - -def showResult(): - resultsDict = resultRetrieval() - click.echo(resultsDict) - resultNum = max(resultsDict.keys()) - click.echo(resultNum) - result_path = tuple(resultsDict[resultNum].values())[0] - command = f"python3 -m report.console_tables --storage {result_path}" - subprocess.run([command], shell=True) - click.echo( - f"If you want to view the results chart type:\npython -m report.console_tables --storage {result_path}" - ) - - -benchmark_category, benchmark_types, benchmarks, windows_ad1, windows_ad2 = benchmarkRetrieval() -complierList = complierRetrieval() - - -@click.command() -# @click.option("--version", action="version", version="%(prog)s 0.0.1") -@click.option( - "-c", - "--compiler", - "compiler", - multiple=True, - type=click.Choice(complierList), - help="enter a compliler here", -) -@click.option( - "-t", - "--benchmarkType", - "benchmarkType", - multiple=True, - type=click.Choice(benchmark_types), - help="enter the type of benchmark(s) here", -) -@click.option( - "-b", - "--benchmark", - "benchmark", - multiple=True, - type=click.Choice(benchmarks), - help="enter the specfic benchmark(s) here", -) -def main(compiler, benchmarkType, benchmark): - benchmarkPaths = [] - pytestPaths = "" 
- myDict = {} - mTag = "" - if len(compiler) > 0: - mTag = "-m " - compiler = compiler[0] - else: - compiler = "" - # ### This for loop ensures that we are able to run various benchmark types - i = 0 - j = 0 - ### Has a benchmark type been specified? - if len(benchmarkType) > 0: - # click.echo("passed test 0") - while i < len(benchmarkType): - ### Is the inputted benchmark type valid? - if set(benchmarkType).issubset(benchmark_category): - # click.echo("passed test 1") - ### Has a benchmark been specified? - if len(benchmark) > 0: - # click.echo("passed test 2") - ### Are the inputted benchmark(s) valid? - if set(benchmark).issubset(benchmarks): - # click.echo("passed test 3") - ### Is the inputted benchmark within the inputted benchmark type suite? - if set(benchmark).issubset(set(benchmark_category[benchmarkType[i]])): - # click.echo("passed test 4") - for j in range(len(benchmark)): - benchmarkPaths.append( - benchmark_category[benchmarkType[0]][benchmark[j]] - ) - pytestPaths = " ".join(tuple(benchmarkPaths)) - runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) - showResult() - i += 1 - else: - myDict = benchmark_category[benchmarkType[0]] - for v in myDict.values(): - benchmarkPaths.append(v) - pytestPaths = " ".join(tuple(benchmarkPaths)) - runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) - showResult() - i += 1 - else: - question = input(f"Would you like to run all {len(benchmarks)} available benchmarks (y/n) ") - if question.lower() == "y": - for benchmark_list in benchmark_category.values(): - for paths in benchmark_list.values(): - benchmarkPaths.append(paths) - pytestPaths = " ".join(tuple(benchmarkPaths)) - runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) - showResult() - - -if __name__ == "__main__": - main() \ No newline at end of file From 3e8105fdee2124a3fabb3e195fa5b750c8983341 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Wed, 3 Aug 2022 12:00:21 -0400 Subject: [PATCH 06/38] Fixed Requirements --- requirements.txt | 1 
- 1 file changed, 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index 584deebf..46675102 100644 --- a/requirements.txt +++ b/requirements.txt @@ -7,5 +7,4 @@ qiskit-aer pytket>1.0,<2.0 setproctitle rich -configparser click \ No newline at end of file From 05d176a8f44590cb37113d36efd082703378fce6 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Wed, 3 Aug 2022 12:06:14 -0400 Subject: [PATCH 07/38] Missing comma --- setup.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/setup.py b/setup.py index 638d6e7f..7e6d6798 100644 --- a/setup.py +++ b/setup.py @@ -63,6 +63,6 @@ 'console_scripts': [ "red-queen = red_queen.cli:main", ] - } + }, zip_safe=False, ) \ No newline at end of file From c8817054acb50706ff154b12b8c09fd790bd7ec7 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Wed, 3 Aug 2022 12:21:06 -0400 Subject: [PATCH 08/38] Fixed Entry Point --- setup.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/setup.py b/setup.py index 7e6d6798..f4f045c1 100644 --- a/setup.py +++ b/setup.py @@ -62,7 +62,7 @@ entry_points={ 'console_scripts': [ "red-queen = red_queen.cli:main", - ] + ], }, zip_safe=False, ) \ No newline at end of file From 19b496a9b25068befb8ad2599aa62c50cf9163f0 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Wed, 3 Aug 2022 13:28:56 -0400 Subject: [PATCH 09/38] Added Missing Python File --- red_queen/cli.py | 172 +++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 172 insertions(+) create mode 100644 red_queen/cli.py diff --git a/red_queen/cli.py b/red_queen/cli.py new file mode 100644 index 00000000..052bd427 --- /dev/null +++ b/red_queen/cli.py @@ -0,0 +1,172 @@ +# ------------------------------------------------------------------------------ +# Part of Red Queen Project. This file is distributed under the MIT License. +# See accompanying file /LICENSE for details. 
+# ------------------------------------------------------------------------------ + +#!/usr/bin/env python3 +import os +import platform +import subprocess +import configparser +import click + + +def benchmarkRetrieval(): + benchmark_category = {} + benchmark_types = [] + benchmarks = [] + dir_path = "red_queen/games/" + windows_ad1 = "" + windows_ad2 = "" + if platform.system() == "Windows": + windows_ad1 = "python3 -m " + windows_ad2 = "-s " + for entry in os.scandir(dir_path): + if entry.is_dir(): + benchmark_category[entry.name] = [] + subDict = {} + for sub in os.scandir(f"{dir_path}{entry.name}"): + if not sub.name.startswith("_") and sub.name.endswith(".py") and sub.is_file(): + subDict[sub.name] = sub.path + benchmark_category[entry.name] = subDict + + benchmark_types = list(benchmark_category.keys()) + for benchmark_pairs in benchmark_category.values(): + for keys in benchmark_pairs.keys(): + benchmarks.append(keys) + + return benchmark_category, benchmark_types, benchmarks, windows_ad1, windows_ad2 + + +def complierRetrieval(): + complierList = [] + config = configparser.ConfigParser() + config.read("pytest.ini") + for complier in config["pytest"]["markers"].split("\n"): + if complier != "": + complierList.append(complier) + # print(complierList) + # This line tests to see if there is a complier specifed + return complierList + + +def resultRetrieval(): + results = {} + dir_path = "results" + for entry in os.scandir(dir_path): + if entry.is_file(): + filename = entry.name + resultCount = int(filename.split("_")[0]) + results[resultCount] = {entry.name: entry.path} + return results + + +def runBenchmarks(pytestPaths: str, windows_ad1: str, mTag: str, compiler: str): + click.echo("benchmarks ran") + subprocess.run( + [f"{windows_ad1}pytest -n auto {windows_ad2}{pytestPaths} {mTag}{compiler} --store"], + shell=True, + ) + + +def showResult(): + resultsDict = resultRetrieval() + click.echo(resultsDict) + resultNum = max(resultsDict.keys()) + 
click.echo(resultNum) + result_path = tuple(resultsDict[resultNum].values())[0] + command = f"python3 -m report.console_tables --storage {result_path}" + subprocess.run([command], shell=True) + click.echo( + f"If you want to view the results chart type:\npython -m report.console_tables --storage {result_path}" + ) + + +benchmark_category, benchmark_types, benchmarks, windows_ad1, windows_ad2 = benchmarkRetrieval() +complierList = complierRetrieval() + + +@click.command() +# @click.option("--version", action="version", version="%(prog)s 0.0.1") +@click.option( + "-c", + "--compiler", + "compiler", + multiple=True, + type=click.Choice(complierList), + help="enter a compliler here", +) +@click.option( + "-t", + "--benchmarkType", + "benchmarkType", + multiple=True, + type=click.Choice(benchmark_types), + help="enter the type of benchmark(s) here", +) +@click.option( + "-b", + "--benchmark", + "benchmark", + multiple=True, + type=click.Choice(benchmarks), + help="enter the specfic benchmark(s) here", +) +def main(compiler, benchmarkType, benchmark): + benchmarkPaths = [] + pytestPaths = "" + myDict = {} + mTag = "" + if len(compiler) > 0: + mTag = "-m " + compiler = compiler[0] + else: + compiler = "" + # ### This for loop ensures that we are able to run various benchmark types + i = 0 + j = 0 + ### Has a benchmark type been specified? + if len(benchmarkType) > 0: + # click.echo("passed test 0") + while i < len(benchmarkType): + ### Is the inputted benchmark type valid? + if set(benchmarkType).issubset(benchmark_category): + # click.echo("passed test 1") + ### Has a benchmark been specified? + if len(benchmark) > 0: + # click.echo("passed test 2") + ### Are the inputted benchmark(s) valid? + if set(benchmark).issubset(benchmarks): + # click.echo("passed test 3") + ### Is the inputted benchmark within the inputted benchmark type suite? 
+ if set(benchmark).issubset(set(benchmark_category[benchmarkType[i]])): + # click.echo("passed test 4") + for j in range(len(benchmark)): + benchmarkPaths.append( + benchmark_category[benchmarkType[0]][benchmark[j]] + ) + pytestPaths = " ".join(tuple(benchmarkPaths)) + runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) + showResult() + i += 1 + else: + myDict = benchmark_category[benchmarkType[0]] + for v in myDict.values(): + benchmarkPaths.append(v) + pytestPaths = " ".join(tuple(benchmarkPaths)) + runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) + showResult() + i += 1 + else: + question = input(f"Would you like to run all {len(benchmarks)} available benchmarks (y/n) ") + if question.lower() == "y": + for benchmark_list in benchmark_category.values(): + for paths in benchmark_list.values(): + benchmarkPaths.append(paths) + pytestPaths = " ".join(tuple(benchmarkPaths)) + runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) + showResult() + + +if __name__ == "__main__": + main() \ No newline at end of file From 52274a074aac8acd86e35fb7d7d104b26e51208f Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Wed, 3 Aug 2022 16:03:55 -0400 Subject: [PATCH 10/38] Windows Patch --- red_queen/cli.py | 113 ++++++++++++++++++++++++----------------------- 1 file changed, 58 insertions(+), 55 deletions(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 052bd427..c8486bc1 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -16,74 +16,77 @@ def benchmarkRetrieval(): benchmark_types = [] benchmarks = [] dir_path = "red_queen/games/" - windows_ad1 = "" - windows_ad2 = "" - if platform.system() == "Windows": - windows_ad1 = "python3 -m " - windows_ad2 = "-s " for entry in os.scandir(dir_path): if entry.is_dir(): benchmark_category[entry.name] = [] - subDict = {} + sub_dict = {} for sub in os.scandir(f"{dir_path}{entry.name}"): if not sub.name.startswith("_") and sub.name.endswith(".py") and sub.is_file(): - subDict[sub.name] = sub.path - 
benchmark_category[entry.name] = subDict + sub_dict[sub.name] = sub.path + benchmark_category[entry.name] = sub_dict benchmark_types = list(benchmark_category.keys()) for benchmark_pairs in benchmark_category.values(): for keys in benchmark_pairs.keys(): benchmarks.append(keys) - return benchmark_category, benchmark_types, benchmarks, windows_ad1, windows_ad2 + return benchmark_category, benchmark_types, benchmarks -def complierRetrieval(): - complierList = [] +def complier_retrieval(): + complier_list = [] config = configparser.ConfigParser() config.read("pytest.ini") for complier in config["pytest"]["markers"].split("\n"): if complier != "": - complierList.append(complier) - # print(complierList) + complier_list.append(complier) + # print(complier_list) # This line tests to see if there is a complier specifed - return complierList + return complier_list -def resultRetrieval(): +def result_retrieval(): results = {} dir_path = "results" for entry in os.scandir(dir_path): if entry.is_file(): filename = entry.name - resultCount = int(filename.split("_")[0]) - results[resultCount] = {entry.name: entry.path} + result_count = int(filename.split("_")[0]) + results[result_count] = {entry.name: entry.path} return results -def runBenchmarks(pytestPaths: str, windows_ad1: str, mTag: str, compiler: str): +def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str): click.echo("benchmarks ran") - subprocess.run( - [f"{windows_ad1}pytest -n auto {windows_ad2}{pytestPaths} {mTag}{compiler} --store"], - shell=True, - ) - - -def showResult(): - resultsDict = resultRetrieval() - click.echo(resultsDict) - resultNum = max(resultsDict.keys()) - click.echo(resultNum) - result_path = tuple(resultsDict[resultNum].values())[0] + if platform.system() == "Windows": + subprocess.run( + [f"python -m pytest -s {pytest_paths} {m_tag}{compiler} --store"], + shell=True, + check=True, + ) + else: + subprocess.run( + [f"pytest {pytest_paths} {m_tag}{compiler} --store"], + shell=True, + 
check=True, + ) + + +def show_result(): + results_dict = result_retrieval() + click.echo(results_dict) + result_num = max(results_dict.keys()) + click.echo(result_num) + result_path = tuple(results_dict[result_num].values())[0] command = f"python3 -m report.console_tables --storage {result_path}" - subprocess.run([command], shell=True) + subprocess.run([command], shell=True, check=True) click.echo( - f"If you want to view the results chart type:\npython -m report.console_tables --storage {result_path}" + f"To view the results chart type:\npython -m report.console_tables --storage {result_path}" ) -benchmark_category, benchmark_types, benchmarks, windows_ad1, windows_ad2 = benchmarkRetrieval() -complierList = complierRetrieval() +benchmark_category, benchmark_types, benchmarks = benchmarkRetrieval() +complier_list = complier_retrieval() @click.command() @@ -93,7 +96,7 @@ def showResult(): "--compiler", "compiler", multiple=True, - type=click.Choice(complierList), + type=click.Choice(complier_list), help="enter a compliler here", ) @click.option( @@ -113,12 +116,12 @@ def showResult(): help="enter the specfic benchmark(s) here", ) def main(compiler, benchmarkType, benchmark): - benchmarkPaths = [] - pytestPaths = "" - myDict = {} - mTag = "" + benchmark_paths = [] + pytest_paths = "" + mydict = {} + m_tag = "" if len(compiler) > 0: - mTag = "-m " + m_tag = "-m " compiler = compiler[0] else: compiler = "" @@ -142,31 +145,31 @@ def main(compiler, benchmarkType, benchmark): if set(benchmark).issubset(set(benchmark_category[benchmarkType[i]])): # click.echo("passed test 4") for j in range(len(benchmark)): - benchmarkPaths.append( + benchmark_paths.append( benchmark_category[benchmarkType[0]][benchmark[j]] ) - pytestPaths = " ".join(tuple(benchmarkPaths)) - runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) - showResult() + pytest_paths = " ".join(tuple(benchmark_paths)) + run_benchmarks(pytest_paths, m_tag, compiler) + show_result() i += 1 else: - myDict = 
benchmark_category[benchmarkType[0]] - for v in myDict.values(): - benchmarkPaths.append(v) - pytestPaths = " ".join(tuple(benchmarkPaths)) - runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) - showResult() + mydict = benchmark_category[benchmarkType[0]] + for v in mydict.values(): + benchmark_paths.append(v) + pytest_paths = " ".join(tuple(benchmark_paths)) + run_benchmarks(pytest_paths, m_tag, compiler) + show_result() i += 1 else: question = input(f"Would you like to run all {len(benchmarks)} available benchmarks (y/n) ") if question.lower() == "y": for benchmark_list in benchmark_category.values(): for paths in benchmark_list.values(): - benchmarkPaths.append(paths) - pytestPaths = " ".join(tuple(benchmarkPaths)) - runBenchmarks(pytestPaths, windows_ad1, mTag, compiler) - showResult() + benchmark_paths.append(paths) + pytest_paths = " ".join(tuple(benchmark_paths)) + run_benchmarks(pytest_paths, m_tag, compiler) + show_result() if __name__ == "__main__": - main() \ No newline at end of file + main() From acd8d71584439b9cf3475294c6eb3f3906b5bc35 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Fri, 5 Aug 2022 10:11:10 -0400 Subject: [PATCH 11/38] Removed Security Issue --- red_queen/cli.py | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index c8486bc1..826d3ad2 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -61,13 +61,11 @@ def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str): if platform.system() == "Windows": subprocess.run( [f"python -m pytest -s {pytest_paths} {m_tag}{compiler} --store"], - shell=True, check=True, ) else: subprocess.run( [f"pytest {pytest_paths} {m_tag}{compiler} --store"], - shell=True, check=True, ) @@ -79,7 +77,7 @@ def show_result(): click.echo(result_num) result_path = tuple(results_dict[result_num].values())[0] command = f"python3 -m report.console_tables --storage {result_path}" - subprocess.run([command], shell=True, check=True) + 
subprocess.run([command], check=True) click.echo( f"To view the results chart type:\npython -m report.console_tables --storage {result_path}" ) From e404a26b8750abc5994733cd06fbff9c6cd453c6 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Fri, 5 Aug 2022 10:27:41 -0400 Subject: [PATCH 12/38] Pylint Fixes --- red_queen/cli.py | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 826d3ad2..2c9161d3 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -11,38 +11,38 @@ import click -def benchmarkRetrieval(): - benchmark_category = {} - benchmark_types = [] +def benchmark_retrieval(): + benchmark_dict = {} + type_list = [] benchmarks = [] dir_path = "red_queen/games/" for entry in os.scandir(dir_path): if entry.is_dir(): - benchmark_category[entry.name] = [] + benchmark_dict[entry.name] = [] sub_dict = {} for sub in os.scandir(f"{dir_path}{entry.name}"): if not sub.name.startswith("_") and sub.name.endswith(".py") and sub.is_file(): sub_dict[sub.name] = sub.path - benchmark_category[entry.name] = sub_dict + benchmark_dict[entry.name] = sub_dict - benchmark_types = list(benchmark_category.keys()) - for benchmark_pairs in benchmark_category.values(): + type_list = list(benchmark_dict.keys()) + for benchmark_pairs in benchmark_dict.values(): for keys in benchmark_pairs.keys(): benchmarks.append(keys) - return benchmark_category, benchmark_types, benchmarks + return benchmark_dict, type_list, benchmarks def complier_retrieval(): - complier_list = [] + list_of_compliers = [] config = configparser.ConfigParser() config.read("pytest.ini") for complier in config["pytest"]["markers"].split("\n"): if complier != "": - complier_list.append(complier) + list_of_compliers.append(complier) # print(complier_list) # This line tests to see if there is a complier specifed - return complier_list + return list_of_compliers def result_retrieval(): @@ -83,7 +83,7 @@ def show_result(): ) 
-benchmark_category, benchmark_types, benchmarks = benchmarkRetrieval() +benchmark_category, benchmark_types, benchmarks = benchmark_retrieval() complier_list = complier_retrieval() @@ -113,7 +113,7 @@ def show_result(): type=click.Choice(benchmarks), help="enter the specfic benchmark(s) here", ) -def main(compiler, benchmarkType, benchmark): +def main(compiler=None, benchmarkType=None, benchmark=None): benchmark_paths = [] pytest_paths = "" mydict = {} From cf4ecd06a28ab5e75b64829dc8d3a73edb8c54f9 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon <39884062+Lementknight@users.noreply.github.com> Date: Fri, 5 Aug 2022 10:28:01 -0400 Subject: [PATCH 13/38] Update red_queen/cli.py Co-authored-by: Matthew Treinish --- red_queen/cli.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 2c9161d3..81176c43 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -142,7 +142,7 @@ def main(compiler=None, benchmarkType=None, benchmark=None): ### Is the inputted benchmark within the inputted benchmark type suite? 
if set(benchmark).issubset(set(benchmark_category[benchmarkType[i]])): # click.echo("passed test 4") - for j in range(len(benchmark)): + for j, _ in enumerate(benchmark): benchmark_paths.append( benchmark_category[benchmarkType[0]][benchmark[j]] ) From 422815dc4953696e17c56dd77ca0901b03fbb84a Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon <39884062+Lementknight@users.noreply.github.com> Date: Fri, 5 Aug 2022 10:40:54 -0400 Subject: [PATCH 14/38] Update red_queen/cli.py Co-authored-by: Raynel Sanchez <87539502+raynelfss@users.noreply.github.com> --- red_queen/cli.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 81176c43..65d1f2bc 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -60,7 +60,7 @@ def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str): click.echo("benchmarks ran") if platform.system() == "Windows": subprocess.run( - [f"python -m pytest -s {pytest_paths} {m_tag}{compiler} --store"], + f"python -m pytest -s {pytest_paths} {m_tag}{compiler} --store".split(), check=True, ) else: From 279e6f7845569020116132f9932fec5ba7c6ddce Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Fri, 5 Aug 2022 10:41:30 -0400 Subject: [PATCH 15/38] Small Changes --- red_queen/cli.py | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 65d1f2bc..3d80f631 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -12,9 +12,14 @@ def benchmark_retrieval(): + """_summary_ + + Returns: + _type_: _description_ + """ benchmark_dict = {} type_list = [] - benchmarks = [] + list_of_benchmarks = [] dir_path = "red_queen/games/" for entry in os.scandir(dir_path): if entry.is_dir(): @@ -28,9 +33,9 @@ def benchmark_retrieval(): type_list = list(benchmark_dict.keys()) for benchmark_pairs in benchmark_dict.values(): for keys in benchmark_pairs.keys(): - benchmarks.append(keys) + list_of_benchmarks.append(keys) - return benchmark_dict, 
type_list, benchmarks + return benchmark_dict, type_list, list_of_benchmarks def complier_retrieval(): @@ -58,6 +63,7 @@ def result_retrieval(): def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str): click.echo("benchmarks ran") + click.echo(pytest_paths) if platform.system() == "Windows": subprocess.run( f"python -m pytest -s {pytest_paths} {m_tag}{compiler} --store".split(), @@ -146,9 +152,9 @@ def main(compiler=None, benchmarkType=None, benchmark=None): benchmark_paths.append( benchmark_category[benchmarkType[0]][benchmark[j]] ) - pytest_paths = " ".join(tuple(benchmark_paths)) + pytest_paths = benchmark_paths run_benchmarks(pytest_paths, m_tag, compiler) - show_result() + # show_result() i += 1 else: mydict = benchmark_category[benchmarkType[0]] From 3c808d71ada22a4da582950db2c73fc790b5987a Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Fri, 5 Aug 2022 21:38:30 -0400 Subject: [PATCH 16/38] Passes Tox and Should Work with Windows Now --- red_queen/cli.py | 35 ++++++++++++++++++++++++----------- 1 file changed, 24 insertions(+), 11 deletions(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 3d80f631..6b8fb3e0 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -1,8 +1,8 @@ # ------------------------------------------------------------------------------ -# Part of Red Queen Project. This file is distributed under the MIT License. +# Part of Red Queen Project. This file is distributed under the Apache 2.0. # See accompanying file /LICENSE for details. 
# ------------------------------------------------------------------------------ - +"""Shebang is used to make this code an python executable""" #!/usr/bin/env python3 import os import platform @@ -12,11 +12,6 @@ def benchmark_retrieval(): - """_summary_ - - Returns: - _type_: _description_ - """ benchmark_dict = {} type_list = [] list_of_benchmarks = [] @@ -62,16 +57,34 @@ def result_retrieval(): def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str): + command_list = ["pytest"] + compiler_command = [m_tag, compiler, "--store"] + # command_list.append(pytest_paths) click.echo("benchmarks ran") - click.echo(pytest_paths) + # click.echo(pytest_paths) + click.echo(f"sys.executable pytest -s {pytest_paths} {m_tag} {compiler} --store".split()) if platform.system() == "Windows": + command_list.insert(0, "-m") + command_list.insert(0, "python") + command_list.append("-s") + for _, string in enumerate(pytest_paths): + command_list.append(string) + for _, string in enumerate(compiler_command): + command_list.append(string) + # click.echo(command_list) subprocess.run( - f"python -m pytest -s {pytest_paths} {m_tag}{compiler} --store".split(), + command_list, check=True, ) else: + for _, string in enumerate(pytest_paths): + command_list.append(string) + for _, string in enumerate(compiler_command): + command_list.append(string) + # proper_command = " ".join(command_list) + # click.echo(proper_command) subprocess.run( - [f"pytest {pytest_paths} {m_tag}{compiler} --store"], + command_list, check=True, ) @@ -125,7 +138,7 @@ def main(compiler=None, benchmarkType=None, benchmark=None): mydict = {} m_tag = "" if len(compiler) > 0: - m_tag = "-m " + m_tag = "-m" compiler = compiler[0] else: compiler = "" From 79cb6db452cc958f0f9178f883584f1f23e853f0 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Fri, 5 Aug 2022 22:46:18 -0400 Subject: [PATCH 17/38] Removed Duplicate Text --- README.md | 14 -------------- 1 file changed, 14 deletions(-) diff --git a/README.md 
b/README.md index f76e5374..4c40c2af 100644 --- a/README.md +++ b/README.md @@ -103,22 +103,8 @@ python -m pytest -s games/mapping/map_queko.py -m tweedledum --store ```
-To run pytest or any python script on Windows, you will have to use `python -m` in order to run the -`pytest` command. You will also need to add `-s` to your pytest call to disable -stdin handling. - -The benchmark suite will consider all functions named `bench_*` in -`games/mapping/map_queko.py`. Because we set the `-m` option, only the the ones -marked with `tweedledum` will be run. (We could easy do the same for `qiskit`). -If you don't define a `-m` option, all `bench_*` functions will be run. -```bash -python -m pytest -s red_queen/games/mapping/map_queko.py -m tweedledum --store -``` - -
- The benchmark suite will consider all functions named `bench_*` in `games/mapping/map_queko.py`. Because The `--store` option tells the framework to store the results in json file in From f41069df70f755aebf8f6685682cc63c11d7e6e6 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Tue, 9 Aug 2022 14:41:40 -0400 Subject: [PATCH 18/38] Ready for Testing --- red_queen/cli.py | 27 ++++++++++++--------------- 1 file changed, 12 insertions(+), 15 deletions(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 6b8fb3e0..2aead5b8 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -40,8 +40,6 @@ def complier_retrieval(): for complier in config["pytest"]["markers"].split("\n"): if complier != "": list_of_compliers.append(complier) - # print(complier_list) - # This line tests to see if there is a complier specifed return list_of_compliers @@ -59,10 +57,6 @@ def result_retrieval(): def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str): command_list = ["pytest"] compiler_command = [m_tag, compiler, "--store"] - # command_list.append(pytest_paths) - click.echo("benchmarks ran") - # click.echo(pytest_paths) - click.echo(f"sys.executable pytest -s {pytest_paths} {m_tag} {compiler} --store".split()) if platform.system() == "Windows": command_list.insert(0, "-m") command_list.insert(0, "python") @@ -71,7 +65,6 @@ def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str): command_list.append(string) for _, string in enumerate(compiler_command): command_list.append(string) - # click.echo(command_list) subprocess.run( command_list, check=True, @@ -81,8 +74,6 @@ def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str): command_list.append(string) for _, string in enumerate(compiler_command): command_list.append(string) - # proper_command = " ".join(command_list) - # click.echo(proper_command) subprocess.run( command_list, check=True, @@ -91,14 +82,19 @@ def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str): def show_result(): 
results_dict = result_retrieval() - click.echo(results_dict) result_num = max(results_dict.keys()) - click.echo(result_num) result_path = tuple(results_dict[result_num].values())[0] - command = f"python3 -m report.console_tables --storage {result_path}" - subprocess.run([command], check=True) + command_list = ["python3", "-m", "report.console_tables", "--storage"] + command_list.append(str(result_path)) + subprocess.run( + command_list, + check=True, + ) + click.echo( + "To view the table again:" + ) click.echo( - f"To view the results chart type:\npython -m report.console_tables --storage {result_path}" + " ".join(command_list) ) @@ -189,4 +185,5 @@ def main(compiler=None, benchmarkType=None, benchmark=None): if __name__ == "__main__": - main() + # main() + show_result() From 6e77b84a44b8685eda1b447ca0cc522be310c5c5 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Tue, 9 Aug 2022 14:45:13 -0400 Subject: [PATCH 19/38] Passes Tox --- red_queen/cli.py | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 2aead5b8..5404323d 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -89,13 +89,9 @@ def show_result(): subprocess.run( command_list, check=True, - ) - click.echo( - "To view the table again:" - ) - click.echo( - " ".join(command_list) ) + click.echo("To view the table again:") + click.echo(" ".join(command_list)) benchmark_category, benchmark_types, benchmarks = benchmark_retrieval() From d734d762cd1bc7f5ce0ad6da88e0360a18953e15 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon <39884062+Lementknight@users.noreply.github.com> Date: Wed, 10 Aug 2022 08:37:58 -0400 Subject: [PATCH 20/38] Update README.md Co-authored-by: Matthew Treinish --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 4c40c2af..9260bdef 100644 --- a/README.md +++ b/README.md @@ -40,7 +40,7 @@ the required packages: ```bash cd red-queen -pip install red-queen +pip 
install . ``` From 454dbfaddf39841dc12aed95cbe0974397e1afe9 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Wed, 10 Aug 2022 08:43:46 -0400 Subject: [PATCH 21/38] Revised Docstring --- red_queen/cli.py | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 5404323d..49264b20 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -2,7 +2,10 @@ # Part of Red Queen Project. This file is distributed under the Apache 2.0. # See accompanying file /LICENSE for details. # ------------------------------------------------------------------------------ -"""Shebang is used to make this code an python executable""" +"""The purpose of this code below is to make the user experience of the Red Queen benchmark suite more steamlined. +The code achieves that by collecting all avaiable benchmark alongside their paths, and uses this information to +create exacutable pytest code that will exacute the benchmarks for users with worrying about the nuisances of +the pytest framework. The scope of this cli will grow with time.""" #!/usr/bin/env python3 import os import platform From 1e2e9445b8a058c570d50a0302e8bf9eb82a59c3 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon <39884062+Lementknight@users.noreply.github.com> Date: Wed, 10 Aug 2022 08:45:16 -0400 Subject: [PATCH 22/38] Update red_queen/cli.py Co-authored-by: Matthew Treinish --- red_queen/cli.py | 1 - 1 file changed, 1 deletion(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 49264b20..6289a868 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -149,7 +149,6 @@ def main(compiler=None, benchmarkType=None, benchmark=None): # click.echo("passed test 1") ### Has a benchmark been specified? if len(benchmark) > 0: - # click.echo("passed test 2") ### Are the inputted benchmark(s) valid? 
if set(benchmark).issubset(benchmarks): # click.echo("passed test 3") From 65fb8cfce99630ab4f28382e197f5be338583286 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon <39884062+Lementknight@users.noreply.github.com> Date: Wed, 10 Aug 2022 08:45:23 -0400 Subject: [PATCH 23/38] Update red_queen/cli.py Co-authored-by: Matthew Treinish --- red_queen/cli.py | 1 - 1 file changed, 1 deletion(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 6289a868..b145e14a 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -151,7 +151,6 @@ def main(compiler=None, benchmarkType=None, benchmark=None): if len(benchmark) > 0: ### Are the inputted benchmark(s) valid? if set(benchmark).issubset(benchmarks): - # click.echo("passed test 3") ### Is the inputted benchmark within the inputted benchmark type suite? if set(benchmark).issubset(set(benchmark_category[benchmarkType[i]])): # click.echo("passed test 4") From af682a4bc320a3545c437507fb6c3dee4863b6f9 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon <39884062+Lementknight@users.noreply.github.com> Date: Wed, 10 Aug 2022 08:45:30 -0400 Subject: [PATCH 24/38] Update red_queen/cli.py Co-authored-by: Matthew Treinish --- red_queen/cli.py | 1 - 1 file changed, 1 deletion(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index b145e14a..95fa2d72 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -153,7 +153,6 @@ def main(compiler=None, benchmarkType=None, benchmark=None): if set(benchmark).issubset(benchmarks): ### Is the inputted benchmark within the inputted benchmark type suite? 
if set(benchmark).issubset(set(benchmark_category[benchmarkType[i]])): - # click.echo("passed test 4") for j, _ in enumerate(benchmark): benchmark_paths.append( benchmark_category[benchmarkType[0]][benchmark[j]] From c39b862cf3539579ca5af1ccc525c7382411d58d Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon <39884062+Lementknight@users.noreply.github.com> Date: Wed, 10 Aug 2022 08:46:46 -0400 Subject: [PATCH 25/38] Update red_queen/cli.py Co-authored-by: Matthew Treinish --- red_queen/cli.py | 1 - 1 file changed, 1 deletion(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 95fa2d72..7f17a19a 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -102,7 +102,6 @@ def show_result(): @click.command() -# @click.option("--version", action="version", version="%(prog)s 0.0.1") @click.option( "-c", "--compiler", From a5f7e8d73d5f023b9ff0bd2ca3f7ade386affd92 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon <39884062+Lementknight@users.noreply.github.com> Date: Wed, 10 Aug 2022 08:50:54 -0400 Subject: [PATCH 26/38] Update red_queen/cli.py Co-authored-by: Matthew Treinish --- red_queen/cli.py | 1 - 1 file changed, 1 deletion(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 7f17a19a..cf388765 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -145,7 +145,6 @@ def main(compiler=None, benchmarkType=None, benchmark=None): while i < len(benchmarkType): ### Is the inputted benchmark type valid? if set(benchmarkType).issubset(benchmark_category): - # click.echo("passed test 1") ### Has a benchmark been specified? if len(benchmark) > 0: ### Are the inputted benchmark(s) valid? 
From d9c01c8bea5653d0de49b17f4783a50f3ca61053 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Wed, 10 Aug 2022 08:48:33 -0400 Subject: [PATCH 27/38] Tox Reformating --- red_queen/cli.py | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index cf388765..05fe67ba 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -2,9 +2,9 @@ # Part of Red Queen Project. This file is distributed under the Apache 2.0. # See accompanying file /LICENSE for details. # ------------------------------------------------------------------------------ -"""The purpose of this code below is to make the user experience of the Red Queen benchmark suite more steamlined. +"""The purpose of this code below is to make the user experience of the Red Queen benchmark suite more steamlined. The code achieves that by collecting all avaiable benchmark alongside their paths, and uses this information to -create exacutable pytest code that will exacute the benchmarks for users with worrying about the nuisances of +create exacutable pytest code that will exacute the benchmarks for users with worrying about the nuisances of the pytest framework. The scope of this cli will grow with time.""" #!/usr/bin/env python3 import os @@ -179,5 +179,4 @@ def main(compiler=None, benchmarkType=None, benchmark=None): if __name__ == "__main__": - # main() - show_result() + main() From 726311869da784f1943cc7595a5144071d02db95 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Wed, 10 Aug 2022 08:59:00 -0400 Subject: [PATCH 28/38] Suggested Revisions Applied --- red_queen/cli.py | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/red_queen/cli.py b/red_queen/cli.py index 05fe67ba..2a245709 100644 --- a/red_queen/cli.py +++ b/red_queen/cli.py @@ -1,12 +1,18 @@ +#!/usr/bin/env python3 # ------------------------------------------------------------------------------ # Part of Red Queen Project. 
This file is distributed under the Apache 2.0. # See accompanying file /LICENSE for details. # ------------------------------------------------------------------------------ -"""The purpose of this code below is to make the user experience of the Red Queen benchmark suite more steamlined. -The code achieves that by collecting all avaiable benchmark alongside their paths, and uses this information to -create exacutable pytest code that will exacute the benchmarks for users with worrying about the nuisances of -the pytest framework. The scope of this cli will grow with time.""" -#!/usr/bin/env python3 + +"""The purpose of this code below is to make the user experience of +the Red Queen benchmark suite more steamlined + +The code achieves that by collecting all avaiable benchmark alongside their paths, +and uses this information to create exacutable pytest code that +will exacute the benchmarks for users with worrying about the +nuisances ofthe pytest framework. The scope of this cli will +grow with time. +""" import os import platform import subprocess @@ -141,7 +147,6 @@ def main(compiler=None, benchmarkType=None, benchmark=None): j = 0 ### Has a benchmark type been specified? if len(benchmarkType) > 0: - # click.echo("passed test 0") while i < len(benchmarkType): ### Is the inputted benchmark type valid? 
if set(benchmarkType).issubset(benchmark_category): @@ -157,7 +162,7 @@ def main(compiler=None, benchmarkType=None, benchmark=None): ) pytest_paths = benchmark_paths run_benchmarks(pytest_paths, m_tag, compiler) - # show_result() + show_result() i += 1 else: mydict = benchmark_category[benchmarkType[0]] From a9ed1acebaf6ed04403cfe58d6b8651d359a7de5 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Wed, 10 Aug 2022 09:05:42 -0400 Subject: [PATCH 29/38] Recommended Revisions Made --- README.md | 12 ++---------- 1 file changed, 2 insertions(+), 10 deletions(-) diff --git a/README.md b/README.md index 9260bdef..98679c6d 100644 --- a/README.md +++ b/README.md @@ -9,9 +9,8 @@ > [Carroll, Lewis: Through the Looking-Glass, Chapter 2]( https://www.gutenberg.org/files/12/12-h/12-h.htm) -
-

About

+# About The Red Queen benchmark framework was created to facilitate the benchmarking algorithms used in quantum compilation. @@ -21,9 +20,8 @@ effectively, you should know the basics of `pytest` first. Take a look at the [introductory material](https://docs.pytest.org/en/latest/getting-started.html). -
-

Usage

+# Usage Red Queen is a framework for benchmarking quantum compilation algorithms. Since @@ -46,11 +44,9 @@ pip install . Red Queen has a `CLI` (command line interface) that you can use to execute benchmarks. -
The general templete for the `CLI` is as follows: -
```bash @@ -62,7 +58,6 @@ Now, suppose you want to run the mapping benchmarks using only `tweedledum`. You can do this via the `CLI` or with `pytest` -
[For MacOs and Linux] @@ -78,7 +73,6 @@ With `pytest` python -m pytest games/mapping/map_queko.py -m tweedledum --store ``` -
[For Windows] @@ -101,7 +95,6 @@ stdin handling. ```bash python -m pytest -s games/mapping/map_queko.py -m tweedledum --store ``` -
@@ -115,7 +108,6 @@ use: python -m report.console_tables --storage results/0001_bench.json ``` -
 
 ## Warning
 

From f7f61a32df76105edb1085400b6cd7858c4e75 Mon Sep 17 00:00:00 2001
From: Caleb Aguirre-Leon
Date: Fri, 12 Aug 2022 10:17:49 -0400
Subject: [PATCH 30/38] Minor Change

---
 red_queen/cli.py | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/red_queen/cli.py b/red_queen/cli.py
index 2a245709..b001a3cd 100644
--- a/red_queen/cli.py
+++ b/red_queen/cli.py
@@ -79,6 +79,8 @@ def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str):
             check=True,
         )
     else:
+        command_list.insert(0, "-m")
+        command_list.insert(0, "python")
         for _, string in enumerate(pytest_paths):
             command_list.append(string)
         for _, string in enumerate(compiler_command):
@@ -93,7 +95,7 @@ def show_result():
     results_dict = result_retrieval()
     result_num = max(results_dict.keys())
    result_path = tuple(results_dict[result_num].values())[0]
-    command_list = ["python3", "-m", "report.console_tables", "--storage"]
+    command_list = ["sys.executable", "-m", "report.console_tables", "--storage"]
    command_list.append(str(result_path))
     subprocess.run(
         command_list,

From 279549fca3b6d69cb27fb32e7f72b26a0064c316 Mon Sep 17 00:00:00 2001
From: Caleb Aguirre-Leon
Date: Thu, 25 Aug 2022 21:25:39 -0400
Subject: [PATCH 31/38] Removing sys.executable from CLI

sys.executable didn't work for its intended application: it ended up in the
command as the literal string "sys.executable" instead of the interpreter
path, so the report subprocess could not start. I have replaced that entry
with python because I want the result table of the benchmark execution to be
displayed. 
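For reference, the difference can be sketched as follows (an illustration only, not part of this patch; the `result_path` value is a hypothetical example):

```python
import sys

result_path = "results/0001_bench.json"  # hypothetical stored-results file

# Broken: the quoted "sys.executable" is a literal string, not a path, so
# subprocess.run() on this argv raises FileNotFoundError.
broken = ["sys.executable", "-m", "report.console_tables", "--storage", result_path]

# Working: the sys.executable variable holds the running interpreter's path
# (e.g. /usr/bin/python3), so the report module runs under the same Python.
working = [sys.executable, "-m", "report.console_tables", "--storage", result_path]
```

This patch opts for plain `python` instead, which relies on `python` being available on `PATH`.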
---
 red_queen/cli.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/red_queen/cli.py b/red_queen/cli.py
index b001a3cd..0027c5ce 100644
--- a/red_queen/cli.py
+++ b/red_queen/cli.py
@@ -95,7 +95,7 @@ def show_result():
     results_dict = result_retrieval()
     result_num = max(results_dict.keys())
     result_path = tuple(results_dict[result_num].values())[0]
-    command_list = ["sys.executable", "-m", "report.console_tables", "--storage"]
+    command_list = ["python", "-m", "report.console_tables", "--storage"]
     command_list.append(str(result_path))
     subprocess.run(
         command_list,

From d310cdc71a98246fb6c30a3cc0795a5a353728bc Mon Sep 17 00:00:00 2001
From: Caleb Aguirre-Leon
Date: Mon, 26 Dec 2022 00:10:41 -0500
Subject: [PATCH 32/38] Rewriting README with Guidance for Virtual Environment Usage and CLI Usage

In this commit, I have added documentation for how someone sets up their
machine to use Red Queen properly, and along with that how to utilize Red
Queen with its CLI.
---
 README.md | 34 +++++++++++++++++++++++++++++++---
 1 file changed, 31 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 98679c6d..6df8cf4e 100644
--- a/README.md
+++ b/README.md
@@ -32,15 +32,43 @@
 git clone git@github.com:Qiskit/red-queen.git
 ```
 
-To run benchmarks, you must first go to the `red-queen` directory and install
-the required packages:
+Since Red Queen is a Python package, version control is very important; therefore, using a virtual environment to run benchmarks is highly recommended.
 
+To get into the Red Queen repository
 ```bash
 cd red-queen
-pip install .
``` +To create a virtual environment + +```bash +python -m venv .virtualenv +``` +After you create the virtual enviornment you need to activate it + +To activate the virtual enviornment + +[For MacOs and Linux] + +```bash +source .virtualenv/bin/activate +``` + +[For Windows] + +```bash +.virtualenv\Scripts\activate.bat +``` + +Lastly you need to install all of the neccessary dependencies for Red Queen + +```bash +pip install -e . +``` +With all that you are ready to start using Red Queen. + +
 Red Queen has a `CLI` (command line interface) that you can use to execute benchmarks.
 

From d4fb6a1a178642c82664d1e30272a99b9e6b3f72 Mon Sep 17 00:00:00 2001
From: Caleb Aguirre-Leon
Date: Mon, 26 Dec 2022 00:24:30 -0500
Subject: [PATCH 33/38] Adding .virtualenv to .gitignore file

In this commit, I added .virtualenv to the .gitignore file, so that people who
follow along with the documentation don't have to worry about their virtual
environment affecting their commits.
---
 .gitignore | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.gitignore b/.gitignore
index 1dc0672d..4f4619f1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -112,6 +112,7 @@ celerybeat.pid
 # Environments
 .env
 .venv
+.virtualenv
 env/
 venv/
 ENV/

From 4c0965f38132efa19696f90994f83dbf78c8cf8e Mon Sep 17 00:00:00 2001
From: Caleb Aguirre-Leon
Date: Mon, 26 Dec 2022 14:20:43 -0500
Subject: [PATCH 34/38] Adding Missing File

In this commit, I have added the __init__.py for the
red_queen/games/applications/benchmarks/ directory. I am hoping that this
fixes the merge issue for my pull request.
---
 .../games/applications/benchmarks/__init__.py | 236 ++++++++++++++++++
 1 file changed, 236 insertions(+)
 create mode 100644 red_queen/games/applications/benchmarks/__init__.py

diff --git a/red_queen/games/applications/benchmarks/__init__.py b/red_queen/games/applications/benchmarks/__init__.py
new file mode 100644
index 00000000..fcc35634
--- /dev/null
+++ b/red_queen/games/applications/benchmarks/__init__.py
@@ -0,0 +1,236 @@
+# ------------------------------------------------------------------------------
+# Part of Qiskit. This file is distributed under the Apache 2.0 License.
+# See accompanying file /LICENSE for details. 
+# ------------------------------------------------------------------------------ + +"""Benchmarks data.""" + +import os +from pathlib import Path + +_base_path = Path(__file__).parent +_base_path = _base_path.resolve() +_base_path = _base_path.relative_to(os.getcwd()) + +queko_bigd_qasm = sorted(_base_path.glob("queko/BIGD/*.qasm")) +queko_bntf_qasm = sorted(_base_path.glob("queko/BNTF/*.qasm")) +queko_bss_qasm = sorted(_base_path.glob("queko/BSS/*.qasm")) +queko_qasm = queko_bigd_qasm + queko_bntf_qasm + queko_bss_qasm + +queko_coupling = { + "54QBT": [ + [0, 6], + [1, 6], + [1, 7], + [2, 7], + [2, 8], + [3, 8], + [3, 9], + [4, 9], + [4, 10], + [5, 10], + [5, 11], + [6, 12], + [6, 13], + [7, 13], + [7, 14], + [8, 14], + [8, 15], + [9, 15], + [9, 16], + [10, 16], + [10, 17], + [11, 17], + [12, 18], + [13, 18], + [13, 19], + [14, 19], + [14, 20], + [15, 20], + [15, 21], + [16, 21], + [16, 22], + [17, 22], + [17, 23], + [18, 24], + [18, 25], + [19, 25], + [19, 26], + [20, 26], + [20, 27], + [21, 27], + [21, 28], + [22, 28], + [22, 29], + [23, 29], + [24, 30], + [25, 30], + [25, 31], + [26, 31], + [26, 32], + [27, 32], + [27, 33], + [28, 33], + [28, 34], + [29, 34], + [29, 35], + [30, 36], + [30, 37], + [31, 37], + [31, 38], + [32, 38], + [32, 39], + [33, 39], + [33, 40], + [34, 40], + [34, 41], + [35, 41], + [36, 42], + [37, 42], + [37, 43], + [38, 43], + [38, 44], + [39, 44], + [39, 45], + [40, 45], + [40, 46], + [41, 46], + [41, 47], + [42, 48], + [42, 49], + [43, 49], + [43, 50], + [44, 50], + [44, 51], + [45, 51], + [45, 52], + [46, 52], + [46, 53], + [47, 53], + ], + "53QBT": [ + [0, 1], + [1, 2], + [2, 3], + [3, 4], + [0, 5], + [4, 6], + [5, 9], + [6, 13], + [7, 8], + [8, 9], + [9, 10], + [10, 11], + [11, 12], + [12, 13], + [13, 14], + [14, 15], + [7, 16], + [11, 17], + [15, 18], + [16, 19], + [17, 23], + [18, 27], + [19, 20], + [20, 21], + [21, 22], + [22, 23], + [23, 24], + [24, 25], + [25, 26], + [26, 27], + [21, 28], + [25, 29], + [28, 32], + [29, 36], + 
[30, 31], + [31, 32], + [32, 33], + [33, 34], + [34, 35], + [35, 36], + [36, 37], + [37, 38], + [30, 39], + [34, 40], + [38, 41], + [39, 42], + [40, 46], + [41, 50], + [42, 43], + [43, 44], + [44, 45], + [45, 46], + [46, 47], + [47, 48], + [48, 49], + [49, 50], + [44, 51], + [48, 52], + ], + "20QBT": [ + [0, 1], + [1, 2], + [2, 3], + [3, 4], + [0, 5], + [1, 6], + [1, 7], + [2, 6], + [2, 7], + [3, 8], + [3, 9], + [4, 8], + [4, 9], + [5, 6], + [6, 7], + [7, 8], + [8, 9], + [5, 10], + [5, 11], + [6, 10], + [6, 11], + [7, 12], + [7, 13], + [8, 12], + [8, 13], + [9, 14], + [10, 11], + [11, 12], + [12, 13], + [13, 14], + [10, 15], + [11, 16], + [11, 17], + [12, 16], + [12, 17], + [13, 18], + [13, 19], + [14, 18], + [14, 19], + [15, 16], + [16, 17], + [17, 18], + [18, 19], + ], + "16QBT": [ + [0, 1], + [1, 2], + [2, 3], + [3, 4], + [4, 5], + [5, 6], + [6, 7], + [0, 8], + [3, 11], + [4, 12], + [7, 15], + [8, 9], + [9, 10], + [10, 11], + [11, 12], + [12, 13], + [13, 14], + [14, 15], + ], +} From 16ca108524f152f7229617613558ad26fc487f88 Mon Sep 17 00:00:00 2001 From: Caleb Aguirre-Leon Date: Mon, 26 Dec 2022 14:46:50 -0500 Subject: [PATCH 35/38] Remove Unneccessary Files In this commit, I have removed an extra init file as it is not a part of the main Red Queen Repository, and it has nothing to do with the CLI. --- .../games/applications/benchmarks/__init__.py | 236 ------------------ 1 file changed, 236 deletions(-) delete mode 100644 red_queen/games/applications/benchmarks/__init__.py diff --git a/red_queen/games/applications/benchmarks/__init__.py b/red_queen/games/applications/benchmarks/__init__.py deleted file mode 100644 index fcc35634..00000000 --- a/red_queen/games/applications/benchmarks/__init__.py +++ /dev/null @@ -1,236 +0,0 @@ -# ------------------------------------------------------------------------------ -# Part of Qiskit. This file is distributed under the Apache 2.0 License. -# See accompanying file /LICENSE for details. 
-# ------------------------------------------------------------------------------ - -"""Benchmarks data.""" - -import os -from pathlib import Path - -_base_path = Path(__file__).parent -_base_path = _base_path.resolve() -_base_path = _base_path.relative_to(os.getcwd()) - -queko_bigd_qasm = sorted(_base_path.glob("queko/BIGD/*.qasm")) -queko_bntf_qasm = sorted(_base_path.glob("queko/BNTF/*.qasm")) -queko_bss_qasm = sorted(_base_path.glob("queko/BSS/*.qasm")) -queko_qasm = queko_bigd_qasm + queko_bntf_qasm + queko_bss_qasm - -queko_coupling = { - "54QBT": [ - [0, 6], - [1, 6], - [1, 7], - [2, 7], - [2, 8], - [3, 8], - [3, 9], - [4, 9], - [4, 10], - [5, 10], - [5, 11], - [6, 12], - [6, 13], - [7, 13], - [7, 14], - [8, 14], - [8, 15], - [9, 15], - [9, 16], - [10, 16], - [10, 17], - [11, 17], - [12, 18], - [13, 18], - [13, 19], - [14, 19], - [14, 20], - [15, 20], - [15, 21], - [16, 21], - [16, 22], - [17, 22], - [17, 23], - [18, 24], - [18, 25], - [19, 25], - [19, 26], - [20, 26], - [20, 27], - [21, 27], - [21, 28], - [22, 28], - [22, 29], - [23, 29], - [24, 30], - [25, 30], - [25, 31], - [26, 31], - [26, 32], - [27, 32], - [27, 33], - [28, 33], - [28, 34], - [29, 34], - [29, 35], - [30, 36], - [30, 37], - [31, 37], - [31, 38], - [32, 38], - [32, 39], - [33, 39], - [33, 40], - [34, 40], - [34, 41], - [35, 41], - [36, 42], - [37, 42], - [37, 43], - [38, 43], - [38, 44], - [39, 44], - [39, 45], - [40, 45], - [40, 46], - [41, 46], - [41, 47], - [42, 48], - [42, 49], - [43, 49], - [43, 50], - [44, 50], - [44, 51], - [45, 51], - [45, 52], - [46, 52], - [46, 53], - [47, 53], - ], - "53QBT": [ - [0, 1], - [1, 2], - [2, 3], - [3, 4], - [0, 5], - [4, 6], - [5, 9], - [6, 13], - [7, 8], - [8, 9], - [9, 10], - [10, 11], - [11, 12], - [12, 13], - [13, 14], - [14, 15], - [7, 16], - [11, 17], - [15, 18], - [16, 19], - [17, 23], - [18, 27], - [19, 20], - [20, 21], - [21, 22], - [22, 23], - [23, 24], - [24, 25], - [25, 26], - [26, 27], - [21, 28], - [25, 29], - [28, 32], - [29, 36], - 
        [30, 31],
-        [31, 32],
-        [32, 33],
-        [33, 34],
-        [34, 35],
-        [35, 36],
-        [36, 37],
-        [37, 38],
-        [30, 39],
-        [34, 40],
-        [38, 41],
-        [39, 42],
-        [40, 46],
-        [41, 50],
-        [42, 43],
-        [43, 44],
-        [44, 45],
-        [45, 46],
-        [46, 47],
-        [47, 48],
-        [48, 49],
-        [49, 50],
-        [44, 51],
-        [48, 52],
-    ],
-    "20QBT": [
-        [0, 1],
-        [1, 2],
-        [2, 3],
-        [3, 4],
-        [0, 5],
-        [1, 6],
-        [1, 7],
-        [2, 6],
-        [2, 7],
-        [3, 8],
-        [3, 9],
-        [4, 8],
-        [4, 9],
-        [5, 6],
-        [6, 7],
-        [7, 8],
-        [8, 9],
-        [5, 10],
-        [5, 11],
-        [6, 10],
-        [6, 11],
-        [7, 12],
-        [7, 13],
-        [8, 12],
-        [8, 13],
-        [9, 14],
-        [10, 11],
-        [11, 12],
-        [12, 13],
-        [13, 14],
-        [10, 15],
-        [11, 16],
-        [11, 17],
-        [12, 16],
-        [12, 17],
-        [13, 18],
-        [13, 19],
-        [14, 18],
-        [14, 19],
-        [15, 16],
-        [16, 17],
-        [17, 18],
-        [18, 19],
-    ],
-    "16QBT": [
-        [0, 1],
-        [1, 2],
-        [2, 3],
-        [3, 4],
-        [4, 5],
-        [5, 6],
-        [6, 7],
-        [0, 8],
-        [3, 11],
-        [4, 12],
-        [7, 15],
-        [8, 9],
-        [9, 10],
-        [10, 11],
-        [11, 12],
-        [12, 13],
-        [13, 14],
-        [14, 15],
-    ],
-}

From 08eef595efc2be55d057cdad688a9da8d3c40579 Mon Sep 17 00:00:00 2001
From: Caleb Aguirre-Leon
Date: Sat, 3 Jun 2023 19:41:29 -0400
Subject: [PATCH 36/38] Redesigning CLI

In this commit I have done a couple of things:

One, I simplified the logic for the benchmark handler. Previously, anyone who
wanted to run a benchmark had to specify the benchmark type first. I foresaw
this causing issues, so now all a user has to do is supply the CLI with the
benchmark file name.

Two, I merged the benchmark and compiler retrieval functions into one, since
they both follow the same pattern; this reduces clutter in the code.
Three, I renamed variables in the hopes of making it easier to understand
what each function call is doing.

All in all, I believe this was a necessary first step in making this CLI a
reliable tool that anyone in the quantum computing space can utilize.
---
 red_queen/cli.py | 116 ++++++++++++++++++-----------------------------
 1 file changed, 43 insertions(+), 73 deletions(-)

diff --git a/red_queen/cli.py b/red_queen/cli.py
index 0027c5ce..6e93c603 100644
--- a/red_queen/cli.py
+++ b/red_queen/cli.py
@@ -19,39 +19,39 @@
 import configparser
 import click
 
-
-def benchmark_retrieval():
+# This function retrieves the paths for each benchmark and returns
+# a hashmap with the benchmark's name and its path
+def benchmark_complier_retrieval():
     benchmark_dict = {}
-    type_list = []
     list_of_benchmarks = []
-    dir_path = "red_queen/games/"
-    for entry in os.scandir(dir_path):
-        if entry.is_dir():
-            benchmark_dict[entry.name] = []
-            sub_dict = {}
-            for sub in os.scandir(f"{dir_path}{entry.name}"):
-                if not sub.name.startswith("_") and sub.name.endswith(".py") and sub.is_file():
-                    sub_dict[sub.name] = sub.path
-            benchmark_dict[entry.name] = sub_dict
-
-    type_list = list(benchmark_dict.keys())
-    for benchmark_pairs in benchmark_dict.values():
-        for keys in benchmark_pairs.keys():
-            list_of_benchmarks.append(keys)
-
-    return benchmark_dict, type_list, list_of_benchmarks
-
-
-def complier_retrieval():
     list_of_compliers = []
+    benchmark_category_set = set()
+    benchmark_path = "games/"
+    for benchmark_category in os.scandir(benchmark_path):
+        if benchmark_category.is_dir():
+            benchmark_category_set.add(benchmark_category.name)
+            name_path_pair = {}
+            for benchmark in os.scandir(f"{benchmark_path}{benchmark_category.name}"):
+                if (
+                    not benchmark.name.startswith("_")
+                    and benchmark.name.endswith(".py")
+                    and benchmark.is_file()
+                ):
+                    name_path_pair[benchmark.name] = benchmark.path
+            benchmark_dict.update(name_path_pair)
+
+    for benchmark_name, _ in
benchmark_dict.items():
+        list_of_benchmarks.append(benchmark_name)
+
     config = configparser.ConfigParser()
-    config.read("pytest.ini")
+    config.read("../pytest.ini")
     for complier in config["pytest"]["markers"].split("\n"):
         if complier != "":
             list_of_compliers.append(complier)
 
-    return list_of_compliers
+    return benchmark_dict, list_of_benchmarks, list_of_compliers
 
 
+# This function retrieves the results of the latest benchmark run
 def result_retrieval():
     results = {}
     dir_path = "results"
@@ -63,6 +63,7 @@ def result_retrieval():
     return results
 
 
+# This functions creates the pytest cli call and runs it
 def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str):
     command_list = ["pytest"]
     compiler_command = [m_tag, compiler, "--store"]
@@ -91,6 +92,7 @@ def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str):
     )
 
 
+# This function displays the results
 def show_result():
     results_dict = result_retrieval()
     result_num = max(results_dict.keys())
@@ -102,13 +104,13 @@ def show_result():
         check=True,
     )
     click.echo("To view the table again:")
-    click.echo(" ".join(command_list))
+    click.echo(" ".join(command_list)+"\n")
 
 
-benchmark_category, benchmark_types, benchmarks = benchmark_retrieval()
-complier_list = complier_retrieval()
+benchmark_hash, benchmarks_list, complier_list = benchmark_complier_retrieval()
 
 
+# The code below deals with each flag of the cli
 @click.command()
 @click.option(
     "-c",
@@ -118,26 +120,18 @@
     type=click.Choice(complier_list),
     help="enter a compiler here",
 )
-@click.option(
-    "-t",
-    "--benchmarkType",
-    "benchmarkType",
-    multiple=True,
-    type=click.Choice(benchmark_types),
-    help="enter the type of benchmark(s) here",
-)
 @click.option(
     "-b",
     "--benchmark",
     "benchmark",
     multiple=True,
-    type=click.Choice(benchmarks),
+    type=click.Choice(benchmarks_list),
     help="enter the specific benchmark(s) here",
 )
-def main(compiler=None, benchmarkType=None, benchmark=None):
+# The main function calls on the above helper functions to run the
desired benchmarks +def main(compiler=None, benchmark=None): benchmark_paths = [] pytest_paths = "" - mydict = {} m_tag = "" if len(compiler) > 0: m_tag = "-m" @@ -145,44 +139,20 @@ def main(compiler=None, benchmarkType=None, benchmark=None): else: compiler = "" # ### This for loop ensures that we are able to run various benchmark types - i = 0 j = 0 ### Has a benchmark type been specified? - if len(benchmarkType) > 0: - while i < len(benchmarkType): - ### Is the inputted benchmark type valid? - if set(benchmarkType).issubset(benchmark_category): - ### Has a benchmark been specified? - if len(benchmark) > 0: - ### Are the inputted benchmark(s) valid? - if set(benchmark).issubset(benchmarks): - ### Is the inputted benchmark within the inputted benchmark type suite? - if set(benchmark).issubset(set(benchmark_category[benchmarkType[i]])): - for j, _ in enumerate(benchmark): - benchmark_paths.append( - benchmark_category[benchmarkType[0]][benchmark[j]] - ) - pytest_paths = benchmark_paths - run_benchmarks(pytest_paths, m_tag, compiler) - show_result() - i += 1 - else: - mydict = benchmark_category[benchmarkType[0]] - for v in mydict.values(): - benchmark_paths.append(v) - pytest_paths = " ".join(tuple(benchmark_paths)) - run_benchmarks(pytest_paths, m_tag, compiler) - show_result() - i += 1 + if len(benchmark) > 0: + ### Are the inputted benchmark(s) valid? 
+        if set(benchmark).issubset(benchmarks_list):
+            for j, _ in enumerate(benchmark):
+                benchmark_paths.append(benchmark_hash[benchmark[j]])
+            pytest_paths = benchmark_paths
+            run_benchmarks(pytest_paths, m_tag, compiler)
+            show_result()
+            i += 1
     else:
-        question = input(f"Would you like to run all {len(benchmarks)} available benchmarks (y/n) ")
-        if question.lower() == "y":
-            for benchmark_list in benchmark_category.values():
-                for paths in benchmark_list.values():
-                    benchmark_paths.append(paths)
-            pytest_paths = " ".join(tuple(benchmark_paths))
-            run_benchmarks(pytest_paths, m_tag, compiler)
-            show_result()
+        print("Please input a benchmark")
+        print("Example:\nred-queen -b grovers.py")
 
 
 if __name__ == "__main__":

From c9ec3cf7fabc6e4d677e94e6e71a45ea86710c68 Mon Sep 17 00:00:00 2001
From: Caleb Aguirre-Leon
Date: Sat, 3 Jun 2023 19:54:25 -0400
Subject: [PATCH 37/38] Code Formatting Changes

I just modified the formatting of the CLI so that it would pass tox.
---
 red_queen/cli.py | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/red_queen/cli.py b/red_queen/cli.py
index 6e93c603..f23295d5 100644
--- a/red_queen/cli.py
+++ b/red_queen/cli.py
@@ -21,10 +21,11 @@
 
 # This function retrieves the paths for each benchmark and returns
 # a hashmap with the benchmark's name and its path
+# It also retrieves all available compilers for Red Queen
 def benchmark_complier_retrieval():
     benchmark_dict = {}
     list_of_benchmarks = []
-    list_of_compliers = []
+    list_of_compliers = set()
     benchmark_category_set = set()
     benchmark_path = "games/"
     for benchmark_category in os.scandir(benchmark_path):
         if benchmark_category.is_dir():
@@ -43,11 +44,12 @@ def benchmark_complier_retrieval():
     for benchmark_name, _ in benchmark_dict.items():
         list_of_benchmarks.append(benchmark_name)
 
+    # Config helps us process the .ini file
     config = configparser.ConfigParser()
     config.read("../pytest.ini")
     for complier in config["pytest"]["markers"].split("\n"):
         if complier != "":
-
            list_of_compliers.append(complier)
+            list_of_compliers.add(complier)
 
     return benchmark_dict, list_of_benchmarks, list_of_compliers
 
@@ -55,18 +57,19 @@ def benchmark_complier_retrieval():
 def result_retrieval():
     results = {}
     dir_path = "results"
-    for entry in os.scandir(dir_path):
-        if entry.is_file():
-            filename = entry.name
+    for result in os.scandir(dir_path):
+        if result.is_file():
+            filename = result.name
             result_count = int(filename.split("_")[0])
-            results[result_count] = {entry.name: entry.path}
+            results[result_count] = {result.name: result.path}
     return results
 
 
-# This functions creates the pytest cli call and runs it
+# This function creates the pytest command and runs it
 def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str):
     command_list = ["pytest"]
     compiler_command = [m_tag, compiler, "--store"]
+    # Are you using a Windows machine?
     if platform.system() == "Windows":
         command_list.insert(0, "-m")
         command_list.insert(0, "python")
@@ -104,7 +107,7 @@ def show_result():
         check=True,
     )
     click.echo("To view the table again:")
-    click.echo(" ".join(command_list)+"\n")
+    click.echo(" ".join(command_list) + "\n")
 
 
 benchmark_hash, benchmarks_list, complier_list = benchmark_complier_retrieval()
@@ -133,14 +136,15 @@ def main(compiler=None, benchmark=None):
     benchmark_paths = []
     pytest_paths = ""
     m_tag = ""
+    # Is there a specific compiler listed?
     if len(compiler) > 0:
         m_tag = "-m"
         compiler = compiler[0]
     else:
         compiler = ""
-    # ### This for loop ensures that we are able to run various benchmark types
     j = 0
-    ### Has a benchmark type been specified?
+    i = 0
+    ### Has the user input benchmarks?
     if len(benchmark) > 0:
         ### Are the inputted benchmark(s) valid?
         if set(benchmark).issubset(benchmarks_list):

From 63fa321af9a42b41c302bac73af7647b43a26479 Mon Sep 17 00:00:00 2001
From: Caleb Aguirre-Leon
Date: Sat, 3 Jun 2023 20:16:12 -0400
Subject: [PATCH 38/38] Updated README File

In this commit, I added information about the CLI and how to set up virtual
environments.
---
 README.md | 40 +++++++++++++++++++---------------------
 1 file changed, 19 insertions(+), 21 deletions(-)

diff --git a/README.md b/README.md
index 6df8cf4e..28ca719e 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,6 @@
 > [Carroll, Lewis: Through the Looking-Glass, Chapter 2](
 https://www.gutenberg.org/files/12/12-h/12-h.htm)
 
-
 # About
 
 The Red Queen benchmark framework was created to facilitate the benchmarking
@@ -19,15 +18,11 @@ The framework is tightly integrated into `pytest`. Therefore, to use it
 effectively, you should know the basics of `pytest` first. Take a look at the
 [introductory material](https://docs.pytest.org/en/latest/getting-started.html).
 
-
-
 # Usage
 
-
 Red Queen is a framework for benchmarking quantum compilation algorithms. Since
 it is still in early development, you must clone this repository to use it:
 
-
 ```bash
 git clone git@github.com:Qiskit/red-queen.git
 ```
@@ -45,6 +40,7 @@ To create a virtual environment
 ```bash
 python -m venv .virtualenv
 ```
+
 After you create the virtual environment, you need to activate it.
 
 To activate the virtual environment
@@ -61,38 +57,45 @@ source .virtualenv/bin/activate
 .virtualenv\Scripts\activate.bat
 ```
 
-Lastly you need to install all of the neccessary dependencies for Red Queen
+You need to install all of the necessary dependencies for Red Queen
 
 ```bash
 pip install -e .
 ```
 
-With all that you are ready to start using Red Queen.
+Lastly, `cd` into `red_queen` so that you'll be able to use the `cli` (command line interface)
 
-Red Queen has a `CLI` (command line interface) that you can use to execute benchmarks.
+```bash
+cd red_queen
+```
+With all that, you are ready to start using the Red Queen CLI.
 
+Red Queen's `CLI` simplifies the execution of benchmarks by constructing the pytest commands for you; all you have to do is specify the benchmarks that you'd like to run.
+
+You can find all available benchmarks by running this command
+```bash
+red-queen --help
+```
-The general templete for the `CLI` is as follows:
+The general template for the `CLI` is as follows:
 ```bash
-red-queen -c <compiler> -t <benchmark type> -b <benchmark>
+red-queen -c <compiler> -b <benchmark>
 ```
+> Sidebar: The compiler flag is optional. If you don't specify a compiler, the `cli` will just run `qiskit`.
 
 Now, suppose you want to run the mapping benchmarks using only `tweedledum`.
 You can do this via the `CLI` or with `pytest`
 
-
-
 [For macOS and Linux]
 
 With `CLI`
 
 ```bash
-red-queen -c tweedledum -t mapping -b map_queko.py
+red-queen -c tweedledum -b map_queko.py
 ```
 
 With `pytest`
 
@@ -101,13 +104,12 @@ With `pytest`
 python -m pytest games/mapping/map_queko.py -m tweedledum --store
 ```
 
-
 [For Windows]
 
 With `CLI`
 
 ```bash
-red-queen -c tweedledum -t mapping -b map_queko.py
+red-queen -c tweedledum -b map_queko.py
 ```
 
 With `pytest`
 
@@ -124,8 +126,6 @@ stdin handling.
 python -m pytest -s games/mapping/map_queko.py -m tweedledum --store
 ```
 
-
-
 The benchmark suite will consider all functions named `bench_*` in
 `games/mapping/map_queko.py`.
 The `--store` option tells the framework to store the results in a json file in
@@ -136,7 +136,6 @@ use:
 python -m report.console_tables --storage results/0001_bench.json
 ```
 
-
 ## Warning
 
 This code is still under development. There are many razor-sharp edges.
@@ -154,8 +153,7 @@
 on the knowledge of the internals of the following established `pytest` plugins:
 
 ## License
 
-
-This software is licensed under the Apache 2.0 licence (see
+This software is licensed under the Apache 2.0 license (see
 [LICENSE](https://github.com/Qiskit/red-queen/blob/main/LICENSE))
 
 ## Contributing
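
As a rough companion to the patches above, the discovery step they describe — walking a `games/` tree for benchmark modules and reading the compiler markers out of `pytest.ini` — can be sketched as a stand-alone function. This is an illustrative approximation, not the code in `red_queen/cli.py`; the directory layout and marker names below are invented for the demonstration:

```python
import configparser
import os
import tempfile


def discover(games_dir: str, ini_path: str):
    """Map benchmark file names to their paths and collect pytest markers."""
    benchmark_paths = {}
    for category in os.scandir(games_dir):
        if not category.is_dir():
            continue
        for entry in os.scandir(category.path):
            # Skip private modules such as __init__.py; keep *.py benchmarks.
            if entry.is_file() and entry.name.endswith(".py") and not entry.name.startswith("_"):
                benchmark_paths[entry.name] = entry.path
    # The markers declared in pytest.ini double as the list of compilers.
    config = configparser.ConfigParser()
    config.read(ini_path)
    compilers = {m for m in config["pytest"]["markers"].split("\n") if m}
    return benchmark_paths, sorted(benchmark_paths), compilers


# Demonstration on a throwaway directory tree (names are made up):
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "games", "mapping"))
    open(os.path.join(root, "games", "mapping", "map_queko.py"), "w").close()
    open(os.path.join(root, "games", "mapping", "__init__.py"), "w").close()
    ini = os.path.join(root, "pytest.ini")
    with open(ini, "w") as fh:
        fh.write("[pytest]\nmarkers =\n    qiskit\n    tweedledum\n")
    paths, names, compilers = discover(os.path.join(root, "games"), ini)
```

Keying the result by file name (rather than nesting it per category, as the pre-patch code did) is what lets the CLI accept just `-b map_queko.py` without a separate `-t` benchmark-type flag.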