
Commit

JOSS: References
perdelt committed Dec 28, 2024
1 parent c59a5aa commit b9399f4
Showing 2 changed files with 74 additions and 0 deletions.
57 changes: 57 additions & 0 deletions paper.bib
@@ -304,5 +304,62 @@ @InProceedings{10.1007/978-3-031-68031-1_9
doi = {10.1007/978-3-031-68031-1_9},
}

@InProceedings{10.1007/978-3-319-67162-8_12,
author="Seybold, Daniel
and Domaschka, J{\"o}rg",
editor="Kirikova, M{\={a}}r{\={\i}}te
and N{\o}rv{\aa}g, Kjetil
and Papadopoulos, George A.
and Gamper, Johann
and Wrembel, Robert
and Darmont, J{\'e}r{\^o}me
and Rizzi, Stefano",
title="Is Distributed Database Evaluation Cloud-Ready?",
booktitle="New Trends in Databases and Information Systems",
year="2017",
publisher="Springer International Publishing",
address="Cham",
pages="100--108",
abstract="The database landscape has significantly evolved over the last decade as cloud computing enables to run distributed databases on virtually unlimited cloud resources. Hence, the already non-trivial task of selecting and deploying a distributed database system becomes more challenging. Database evaluation frameworks aim at easing this task by guiding the database selection and deployment decision. The evaluation of databases has evolved as well by moving the evaluation focus from performance to distribution aspects such as scalability and elasticity. This paper presents a cloud-centric analysis of distributed database evaluation frameworks based on evaluation tiers and framework requirements. It analysis eight well adopted evaluation frameworks. The results point out that the evaluation tiers performance, scalability, elasticity and consistency are well supported, in contrast to resource selection and availability. Further, the analysed frameworks do not support cloud-centric requirements but support classic evaluation requirements.",
isbn="978-3-319-67162-8"
}

@InProceedings{10.1007/978-3-030-12079-5_4,
author="Brent, Lexi
and Fekete, Alan",
editor="Chang, Lijun
and Gan, Junhao
and Cao, Xin",
title="A Versatile Framework for Painless Benchmarking of Database Management Systems",
booktitle="Databases Theory and Applications",
year="2019",
publisher="Springer International Publishing",
address="Cham",
pages="45--56",
abstract="Benchmarking is a crucial aspect of evaluating database management systems. Researchers, developers, and users utilise industry-standard benchmarks to assist with their research, development, or purchase decisions, respectively. Despite this ubiquity, benchmarking is usually a difficult process involving laborious tasks such as writing and debugging custom testbed scripts, or extracting and transforming output into useful formats. To date, there are only a limited number of comprehensive benchmarking frameworks designed to tackle these usability and efficiency challenges directly.",
isbn="978-3-030-12079-5"
}

@InProceedings{10.1007/978-3-319-15350-6_6,
author="Bermbach, David
and Kuhlenkamp, J{\"o}rn
and Dey, Akon
and Sakr, Sherif
and Nambiar, Raghunath",
editor="Nambiar, Raghunath
and Poess, Meikel",
title="Towards an Extensible Middleware for Database Benchmarking",
booktitle="Performance Characterization and Benchmarking. Traditional to Big Data",
year="2015",
publisher="Springer International Publishing",
address="Cham",
pages="82--96",
abstract="Today's database benchmarks are designed to evaluate a particular type of database. Furthermore, popular benchmarks, like those from TPC, come without a ready-to-use implementation requiring database benchmark users to implement the benchmarking tool from scratch. The result of this is that there is no single framework that can be used to compare arbitrary database systems. The primary reason for this, among others, being the complexity of designing and implementing distributed benchmarking tools.",
isbn="978-3-319-15350-6"
}

17 changes: 17 additions & 0 deletions paper.md
@@ -37,9 +37,26 @@ See the [homepage](https://github.com/Beuth-Erdelt/Benchmark-Experiment-Host-Manager

# Statement of Need

In [@10.1007/978-3-030-84924-5_6] we introduced the package.

In [@10.1007/978-3-319-67162-8_12] the authors present a cloud-centric analysis of eight distributed database evaluation frameworks.
In [@10.1007/978-3-030-12079-5_4] the authors inspect several frameworks and, using an interview-based method organized per interest group, collect requirements for a DBMS benchmarking framework:

* Help with time-consuming initial setup and configuration
* Metadata collection
* Generality and versatility
* Extensibility and abstraction
* Usability and configurability
* Track everything
* Repeatability / reproducibility

In [@10.1007/978-3-319-15350-6_6] the authors list important components for benchmarking, such as a Benchmark Coordinator, a Measurement Manager, and a Workload Executor. They argue for a benchmarking middleware to support the process and to "*take care of the hassle of distributed benchmarking and managing the measurement infrastructure*". This is supposed to let the benchmark designer concentrate on the core competences: specifying workload profiles and analyzing the obtained measurements.
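
The sketch below is purely illustrative: it expresses these roles as minimal Python interfaces. The class and method names are hypothetical and belong neither to the cited paper nor to this package.

```python
from abc import ABC, abstractmethod


class WorkloadExecutor(ABC):
    """Runs a workload profile against the system under test (SUT)."""

    @abstractmethod
    def run(self, workload: dict) -> dict:
        """Execute the workload and return raw measurements."""


class MeasurementManager(ABC):
    """Collects measurements from executors and monitoring sources."""

    @abstractmethod
    def collect(self, measurements: dict) -> None:
        """Store one batch of measurements for later analysis."""


class BenchmarkCoordinator(ABC):
    """Wires executors and the measurement manager into one experiment."""

    @abstractmethod
    def run_experiment(self, experiment_config: dict) -> None:
        """Deploy the SUT, trigger executors, and hand results to the manager."""
```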

## Summary of Solution

* Virtualization with Docker containers
* Orchestration with Kubernetes
* Monitoring with cAdvisor / Prometheus, since this is a common practice in cluster management (a minimal sketch follows below)
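
A minimal sketch of this orchestration and monitoring idea, assuming the official `kubernetes` Python client, a reachable cluster, and illustrative names (the image `mysql:8.0`, the namespace `benchmarking`, and the labels are placeholders); it is not the package's actual API:

```python
# Hedged sketch: deploy a containerized DBMS to Kubernetes and annotate it so that
# a typical cAdvisor/Prometheus setup can scrape resource metrics during a run.
# All names (image, namespace, labels) are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig to reach the cluster

dbms_container = client.V1Container(
    name="dbms",
    image="mysql:8.0",  # any containerized DBMS image
    ports=[client.V1ContainerPort(container_port=3306)],
)

pod_template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(
        labels={"app": "dbms-sut"},
        # a common (not built-in) convention for Prometheus target discovery
        annotations={"prometheus.io/scrape": "true"},
    ),
    spec=client.V1PodSpec(containers=[dbms_container]),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="dbms-sut"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "dbms-sut"}),
        template=pod_template,
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="benchmarking", body=deployment
)
```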

# A Basic Example

