This repository has been archived by the owner on Aug 14, 2024. It is now read-only.

Adding Red Queen CLI #27

Open

wants to merge 45 commits into base: main

Changes from 12 commits

Commits (45)
b01544a
Ready for Testing
Lementknight Jul 29, 2022
52312ff
Added Proper Header
Lementknight Jul 29, 2022
b79e630
Entry Point Added
Lementknight Jul 29, 2022
2b1d01a
Merge branch 'main' into red-queen-cli
Lementknight Aug 2, 2022
16c5a8b
Update setup.py
Lementknight Aug 3, 2022
5758d09
Code Perservation
Lementknight Aug 3, 2022
3e8105f
Fixed Requirements
Lementknight Aug 3, 2022
05d176a
Missing comma
Lementknight Aug 3, 2022
c881705
Fixed Entry Point
Lementknight Aug 3, 2022
19b496a
Added Missing Python File
Lementknight Aug 3, 2022
52274a0
Windows Patch
Lementknight Aug 3, 2022
acd8d71
Removed Security Issue
Lementknight Aug 5, 2022
e404a26
Pylint Fixes
Lementknight Aug 5, 2022
cf4ecd0
Update red_queen/cli.py
Lementknight Aug 5, 2022
422815d
Update red_queen/cli.py
Lementknight Aug 5, 2022
279e6f7
Small Changes
Lementknight Aug 5, 2022
3c808d7
Passes Tox and Should Work with Windows Now
Lementknight Aug 6, 2022
3ff1047
Merge branch 'main' into red-queen-cli
Lementknight Aug 6, 2022
79cb6db
Removed Duplicate Text
Lementknight Aug 6, 2022
9d7d71a
Merge branch 'main' into red-queen-cli
Lementknight Aug 9, 2022
f41069d
Ready for Testing
Lementknight Aug 9, 2022
6e77b84
Passes Tox
Lementknight Aug 9, 2022
d734d76
Update README.md
Lementknight Aug 10, 2022
454dbfa
Revised Docstring
Lementknight Aug 10, 2022
1e2e944
Update red_queen/cli.py
Lementknight Aug 10, 2022
65fb8cf
Update red_queen/cli.py
Lementknight Aug 10, 2022
af682a4
Update red_queen/cli.py
Lementknight Aug 10, 2022
c39b862
Update red_queen/cli.py
Lementknight Aug 10, 2022
a5f7e8d
Update red_queen/cli.py
Lementknight Aug 10, 2022
d9c01c8
Tox Reformating
Lementknight Aug 10, 2022
7263118
Suggested Revisions Applied
Lementknight Aug 10, 2022
a9ed1ac
Recommended Revisions Made
Lementknight Aug 10, 2022
fb57d04
Merge branch 'main' into red-queen-cli
Lementknight Aug 11, 2022
f7f61a3
Minor Change
Lementknight Aug 12, 2022
b410c0f
Merge branch 'main' into red-queen-cli
Lementknight Aug 13, 2022
279549f
Removing Sys.Exacutable from CLI
Lementknight Aug 26, 2022
d310cdc
Rewritting ReadMe with Guidence for Virtual Enviornment Usage and CLI…
Lementknight Dec 26, 2022
10fafa2
Merge branch 'Qiskit:main' into red-queen-cli
Lementknight Dec 26, 2022
d4fb6a1
Adding .virtualenv to .gitignore file
Lementknight Dec 26, 2022
4c0965f
Adding Missing File
Lementknight Dec 26, 2022
16ca108
Remove Unneccessary Files
Lementknight Dec 26, 2022
8531c98
Merge branch 'main' into red-queen-cli
mtreinish Jan 3, 2023
08eef59
Redesigning CLI
Lementknight Jun 3, 2023
c9ec3cf
Code Formatting Changes
Lementknight Jun 3, 2023
63fa321
Updated ReadMe File
Lementknight Jun 4, 2023
114 changes: 95 additions & 19 deletions README.md
> "Well, in our country," said Alice, still panting a little, "you'd generally
> get to somewhere else—if you run very fast for a long time, as we've been
> "Well, in our country," said Alice, still panting a little, "you'd generally
> get to somewhere else—if you run very fast for a long time, as we've been
> doing."
>
> "A slow sort of country!" said the Queen. "Now, here, you see, it takes all
> the running you can do, to keep in the same place. If you want to get
> "A slow sort of country!" said the Queen. "Now, here, you see, it takes all
> the running you can do, to keep in the same place. If you want to get
> somewhere else, you must run at least twice as fast as that!"
>
> [Carroll, Lewis: Through the Looking-Glass, Chapter 2](
https://www.gutenberg.org/files/12/12-h/12-h.htm)

<br>

<h1>About</h1>

The Red Queen benchmark framework was created to facilitate the benchmarking of
algorithms used in quantum compilation.

The framework is tightly integrated into `pytest`. Therefore, to use it
effectively, you should know the basics of `pytest` first. Take a look at the
[introductory material](https://docs.pytest.org/en/latest/getting-started.html).

<br>

<h1>Usage</h1>
Member:

Does using explicit html markup for headers and breaks improve the formatting here? I think we should stick to the markdown formatting if not.

Contributor (author):

I was looking into it, and it doesn't really add much. The Web Dev in me wanted it to make sense, but it doesn't. I'll adjust it accordingly.

Red Queen is a framework for benchmarking quantum compilation algorithms. Since
it is still in early development, you must clone this repository to use it:


```bash
git clone git@github.com:Qiskit/red-queen.git
```

To run benchmarks, you must first go to the `red-queen` directory and install
the required packages:


```bash
cd red-queen
pip install -r requirements.txt
pip install red-queen
```


Red Queen has a `CLI` (command line interface) that you can use to execute benchmarks.

<br>

The general template for the `CLI` is as follows:

<br>


```bash
red-queen -c <compiler> -t <benchmark_type> -b <benchmark_name>
```


Now, suppose you want to run the mapping benchmarks using only `tweedledum`.
You can do this via the `CLI` or with `pytest`.


<br>

[For macOS and Linux]

With `CLI`

```bash
red-queen -c tweedledum -t mapping -b map_queko.py
```

With `pytest`

```bash
python -m pytest games/mapping/map_queko.py -m tweedledum --store
```

<br>

[For Windows]

With `CLI`

```bash
red-queen -c tweedledum -t mapping -b map_queko.py
```

With `pytest`

```bash
python -m pytest -s games/mapping/map_queko.py -m tweedledum --store
```

To run pytest or any Python script on Windows, you will have to use `python -m` in order to run the
`pytest` command. You will also need to add `-s` to your pytest call to disable
stdin handling.

<br>

The benchmark suite will consider all functions named `bench_*` in
`games/mapping/map_queko.py`. Because we set the `-m` option, only the ones
marked with `tweedledum` will be run. (We could easily do the same for `qiskit`).
If you don't define a `-m` option, all `bench_*` functions will be run.
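
For illustration, a benchmark here is just a `bench_*` function carrying one of these pytest markers. The snippet below is a hypothetical, simplified sketch (the function name, circuit, and body are made up and do not use the framework's real fixtures); it only shows how the marker-based filtering works:

```python
# Hypothetical sketch -- shows the pytest marker mechanics only; real Red Queen
# benchmarks use the framework's own fixtures and result-storage helpers.
import pytest
from qiskit import QuantumCircuit, transpile


@pytest.mark.qiskit  # selected by `-m qiskit`; a tweedledum benchmark would use that marker instead
def bench_example_mapping():
    """A bench_* function: collected by the suite and filtered by its marker."""
    circuit = QuantumCircuit(2)
    circuit.h(0)
    circuit.cx(0, 1)
    transpile(circuit, optimization_level=3)
```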


<br>

The `--store` option tells the framework to store the results in a JSON file in
the `results` directory. To see the results as a table, you can use:

```bash
python -m report.console_tables --storage results/0001_bench.json
```

<br>

## Warning

This code is still under development. There are many razor-sharp edges.

For information on how execution works and other details about the framework
[…]
on the knowledge of the internals of the following established `pytest` plugins:

## License


This software is licensed under the Apache 2.0 licence (see
[LICENSE](https://github.com/Qiskit/red-queen/blob/main/LICENSE))

173 changes: 173 additions & 0 deletions red_queen/cli.py
# ------------------------------------------------------------------------------
# Part of Red Queen Project. This file is distributed under the MIT License.
# See accompanying file /LICENSE for details.
# ------------------------------------------------------------------------------

#!/usr/bin/env python3
import os
import platform
import subprocess
import configparser
import click


def benchmarkRetrieval():
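    # Walk red_queen/games/<category>/ and collect the benchmark modules found there.
    # Example shape of the returned values (paths illustrative):
    #   benchmark_category = {"mapping": {"map_queko.py": "red_queen/games/mapping/map_queko.py"}}
    #   benchmark_types    = ["mapping"]
    #   benchmarks         = ["map_queko.py"]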
benchmark_category = {}
benchmark_types = []
benchmarks = []
dir_path = "red_queen/games/"
for entry in os.scandir(dir_path):
if entry.is_dir():
benchmark_category[entry.name] = []
sub_dict = {}
for sub in os.scandir(f"{dir_path}{entry.name}"):
if not sub.name.startswith("_") and sub.name.endswith(".py") and sub.is_file():
sub_dict[sub.name] = sub.path
benchmark_category[entry.name] = sub_dict

benchmark_types = list(benchmark_category.keys())
for benchmark_pairs in benchmark_category.values():
for keys in benchmark_pairs.keys():
benchmarks.append(keys)

return benchmark_category, benchmark_types, benchmarks


def complier_retrieval():
complier_list = []
config = configparser.ConfigParser()
config.read("pytest.ini")
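    # Illustrative assumption: pytest.ini declares bare marker names, one per line, e.g.
    #   [pytest]
    #   markers =
    #       qiskit
    #       tweedledum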
for complier in config["pytest"]["markers"].split("\n"):
if complier != "":
complier_list.append(complier)
    # Each non-empty marker name from pytest.ini is treated as an available compiler.
return complier_list


def result_retrieval():
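    # Index stored result files (e.g. results/0001_bench.json) by their numeric prefix.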
results = {}
dir_path = "results"
for entry in os.scandir(dir_path):
if entry.is_file():
filename = entry.name
result_count = int(filename.split("_")[0])
results[result_count] = {entry.name: entry.path}
return results


def run_benchmarks(pytest_paths: str, m_tag: str, compiler: str):
click.echo("benchmarks ran")
if platform.system() == "Windows":
        subprocess.run(
            # Split the command string into an argument list so subprocess can run it.
            f"python -m pytest -s {pytest_paths} {m_tag}{compiler} --store".split(),
            check=True,
        )
else:
        subprocess.run(
            f"pytest {pytest_paths} {m_tag}{compiler} --store".split(),
            check=True,
        )


def show_result():
results_dict = result_retrieval()
click.echo(results_dict)
result_num = max(results_dict.keys())
click.echo(result_num)
result_path = tuple(results_dict[result_num].values())[0]
command = f"python3 -m report.console_tables --storage {result_path}"
    # Split the command string into an argument list so subprocess can run it.
    subprocess.run(command.split(), check=True)
click.echo(
f"To view the results chart type:\npython -m report.console_tables --storage {result_path}"
)


benchmark_category, benchmark_types, benchmarks = benchmarkRetrieval()
complier_list = complier_retrieval()


@click.command()
# @click.option("--version", action="version", version="%(prog)s 0.0.1")
@click.option(
"-c",
"--compiler",
"compiler",
multiple=True,
type=click.Choice(complier_list),
help="enter a compliler here",
)
@click.option(
"-t",
"--benchmarkType",
"benchmarkType",
multiple=True,
type=click.Choice(benchmark_types),
help="enter the type of benchmark(s) here",
)
@click.option(
"-b",
"--benchmark",
"benchmark",
multiple=True,
type=click.Choice(benchmarks),
help="enter the specfic benchmark(s) here",
)
def main(compiler, benchmarkType, benchmark):
benchmark_paths = []
pytest_paths = ""
mydict = {}
m_tag = ""
if len(compiler) > 0:
m_tag = "-m "
compiler = compiler[0]
else:
compiler = ""
    # ### This loop ensures that we are able to run various benchmark types
i = 0
j = 0
### Has a benchmark type been specified?
if len(benchmarkType) > 0:
# click.echo("passed test 0")
while i < len(benchmarkType):
### Is the inputted benchmark type valid?
if set(benchmarkType).issubset(benchmark_category):
# click.echo("passed test 1")
### Has a benchmark been specified?
if len(benchmark) > 0:
# click.echo("passed test 2")
### Are the inputted benchmark(s) valid?
if set(benchmark).issubset(benchmarks):
# click.echo("passed test 3")
### Is the inputted benchmark within the inputted benchmark type suite?
if set(benchmark).issubset(set(benchmark_category[benchmarkType[i]])):
# click.echo("passed test 4")
for j in range(len(benchmark)):
benchmark_paths.append(
                                    benchmark_category[benchmarkType[i]][benchmark[j]]
)
pytest_paths = " ".join(tuple(benchmark_paths))
run_benchmarks(pytest_paths, m_tag, compiler)
show_result()
i += 1
else:
                    mydict = benchmark_category[benchmarkType[i]]
for v in mydict.values():
benchmark_paths.append(v)
pytest_paths = " ".join(tuple(benchmark_paths))
run_benchmarks(pytest_paths, m_tag, compiler)
show_result()
i += 1
else:
question = input(f"Would you like to run all {len(benchmarks)} available benchmarks (y/n) ")
if question.lower() == "y":
for benchmark_list in benchmark_category.values():
for paths in benchmark_list.values():
benchmark_paths.append(paths)
pytest_paths = " ".join(tuple(benchmark_paths))
run_benchmarks(pytest_paths, m_tag, compiler)
show_result()


if __name__ == "__main__":
main()
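
For local experimentation, the command can also be driven without installing the console script, for example with click's test runner. This is a minimal sketch and assumes it is run from the repository root so that the relative `red_queen/games/` and `pytest.ini` paths used at import time resolve:

```python
# Minimal sketch: exercise the click command programmatically with CliRunner.
# Assumes the working directory is the repository root.
from click.testing import CliRunner

from red_queen.cli import main

runner = CliRunner()
result = runner.invoke(main, ["-c", "tweedledum", "-t", "mapping", "-b", "map_queko.py"])
print(result.output)
```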
3 changes: 2 additions & 1 deletion requirements.txt
qiskit-terra
qiskit-aer
pytket>1.0,<2.0
setproctitle
rich
click
5 changes: 5 additions & 0 deletions setup.py
"Documentation": "https://qiskit.org/documentation/",
"Source Code": "https://github.com/Qiskit/red-queen",
},
entry_points={
'console_scripts': [
"red-queen = red_queen.cli:main",
],
},
zip_safe=False,
)
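
With this `console_scripts` entry, installing the package (for example, `pip install .` from the cloned repository) is what puts the `red-queen` command on your PATH. The launcher setuptools generates is roughly equivalent to the sketch below (illustrative only, not the exact generated script):

```python
# Rough equivalent of the console script generated for
# "red-queen = red_queen.cli:main" (illustrative sketch).
import sys

from red_queen.cli import main

if __name__ == "__main__":
    # The click command parses sys.argv itself when called.
    sys.exit(main())
```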