AutoML Benchmark

The OpenML AutoML Benchmark provides a framework for evaluating and comparing open-source AutoML systems. It is extensible, so you can add your own AutoML frameworks and datasets. For a thorough explanation of the benchmark and an evaluation of results, see our paper.

Automatic Machine Learning (AutoML) systems build machine learning pipelines or neural architectures in a data-driven, objective, and automatic way. They automate much of the drudge work of designing machine learning systems, so that better systems can be developed faster. However, AutoML research itself is slowed down by two factors:

  • We currently lack standardized, easily-accessible benchmarking suites of tasks (datasets) that are curated to reflect important problem domains, practical to use, and sufficiently challenging to support a rigorous analysis of performance results.

  • Subtle differences in the problem definition, such as the design of the hyperparameter search space or the way time budgets are defined, can drastically alter a task’s difficulty. This issue makes it difficult to reproduce published research and compare results from different papers.

This toolkit aims to address these problems by setting up standardized environments for in-depth experimentation with a wide range of AutoML systems.

Website: https://openml.github.io/automlbenchmark/index.html

Documentation: https://openml.github.io/automlbenchmark/docs/index.html

Installation: https://openml.github.io/automlbenchmark/docs/getting_started/
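As a rough sketch of how the benchmark is used (assuming the runbenchmark.py entry point described in the documentation linked above), a single run is typically started from the repository root with something like:

    python runbenchmark.py <framework_name> <benchmark_name> <constraint_name>

where the framework, benchmark suite, and time/resource constraint are identifiers defined in the project's configuration files; the getting-started guide gives the exact invocation and available options.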

Features: