feat(section): add information about mlr3inferr #855

Open · wants to merge 5 commits into base: main
4 changes: 3 additions & 1 deletion DESCRIPTION
@@ -28,6 +28,7 @@ Imports:
mlr3filters,
mlr3fselect,
mlr3hyperband,
mlr3inferr,
mlr3learners,
mlr3oml,
mlr3mbo,
@@ -48,7 +49,8 @@ Remotes:
mlr-org/mlr3extralearners,
mlr-org/mlr3batchmark,
mlr-org/mlr3proba,
mlr-org/mlr3fairness
mlr-org/mlr3fairness,
mlr-org/mlr3inferr
Encoding: UTF-8
Roxygen: list(markdown = TRUE)
RoxygenNote: 7.2.3
2 changes: 1 addition & 1 deletion R/zzz.R
@@ -5,7 +5,7 @@ NULL

db = new.env()
db$index = c("base", "utils", "datasets", "data.table", "stats", "batchtools")
db$hosted = c("paradox", "mlr3misc", "mlr3", "mlr3data", "mlr3db", "mlr3proba", "mlr3pipelines", "mlr3learners", "mlr3filters", "bbotk", "mlr3tuning", "mlr3viz", "mlr3fselect", "mlr3cluster", "mlr3spatiotempcv", "mlr3spatial", "mlr3extralearners", "mlr3tuningspaces", "mlr3hyperband", "mlr3mbo", "mlr3verse", "mlr3benchmark", "mlr3oml", "mlr3batchmark", "mlr3fairness")
db$hosted = c("paradox", "mlr3misc", "mlr3", "mlr3data", "mlr3db", "mlr3proba", "mlr3pipelines", "mlr3learners", "mlr3filters", "bbotk", "mlr3tuning", "mlr3viz", "mlr3fselect", "mlr3cluster", "mlr3spatiotempcv", "mlr3spatial", "mlr3extralearners", "mlr3tuningspaces", "mlr3hyperband", "mlr3mbo", "mlr3verse", "mlr3benchmark", "mlr3oml", "mlr3batchmark", "mlr3fairness", "mlr3inferr")

lgr = NULL

18 changes: 18 additions & 0 deletions book/book.bib
@@ -1436,3 +1436,21 @@ @book{hutter2019automated
publisher = {Springer},
keywords = {}
}
@misc{kuempelfischer2024ciforge,
title={Constructing Confidence Intervals for 'the' Generalization Error -- a Comprehensive Benchmark Study},
author={Hannah Schulz-Kümpel and Sebastian Fischer and Thomas Nagler and Anne-Laure Boulesteix and Bernd Bischl and Roman Hornung},
year={2024},
eprint={2409.18836},
archivePrefix={arXiv},
primaryClass={stat.ML},
url={https://arxiv.org/abs/2409.18836},
}

@article{bayle2020cross,
title={Cross-validation confidence intervals for test error},
author={Bayle, Pierre and Bayle, Alexandre and Janson, Lucas and Mackey, Lester},
journal={Advances in Neural Information Processing Systems},
volume={33},
pages={16339--16350},
year={2020}
}
1 change: 1 addition & 0 deletions book/chapters/appendices/errata.qmd
@@ -18,6 +18,7 @@ This appendix lists changes to the online version of this book to chapters inclu
## 3. Evaluation and Benchmarking

* Use `$encapsulate()` method instead of the `$encapsulate` and `$fallback` fields.
* A section on the `mlr3inferr` package was added.

## 4. Hyperparameter Optimization

41 changes: 41 additions & 0 deletions book/chapters/chapter3/evaluation_and_benchmarking.qmd
@@ -285,6 +285,31 @@ print(plt2)
```


### Confidence Intervals {#sec-resampling-ci}

Confidence intervals (CIs) provide a range of values constructed to cover the true generalization error with a specified probability.
Reviewer comment (Member): I would delete the first sentence. The next sentence basically says the same, but this one isn't quite accurate.

Instead of relying solely on a single point estimate, CIs quantify the uncertainty around it, allowing us to judge how reliable the performance estimate is.
While constructing CIs for the generalization error is challenging due to the complex nature of the inference problem, some methods have been shown to work well in practice [@kuempelfischer2024ciforge].
Reviewer comment (Member): I would add some more context here -- some learners/models can provide these directly (and often those calculations aren't all that complex), but if the learner doesn't support it, we have to do something else. Then describe in a sentence or two what those methods do.

When employing such methods, it is important to be aware that they can fail in some cases -- e.g. in the presence of outliers or unstable learning procedures -- and that the resulting CIs can be either too conservative or too liberal.

The `r ref_pkg("mlr3inferr")` package extends the `mlr3` ecosystem with both inference methods and new resampling strategies.
The inference methods are implemented as `r ref("Measure")` objects that take another measure for which to compute the CI.
Below, we demonstrate how to use the inference method suggested by @bayle2020cross to compute a CI for the cross-validation result from above.
Unlike other mlr3 measures, the result is not a scalar value but a vector containing the point estimate as well as the lower and upper bounds of the CI for the specified confidence level.
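As a rough sketch of the idea behind this interval (the precise variance estimator is defined in @bayle2020cross; the notation below is an illustration, not the package's), the CI is centered at the cross-validated point estimate and takes the Wald form

$$
\hat{R}_{\mathrm{CV}} \pm z_{1-\alpha/2} \, \frac{\hat{\sigma}}{\sqrt{n}},
$$

where $n$ is the number of observations, $\hat{\sigma}^2$ is an estimate of the variance of the observation-wise losses, and $z_{1-\alpha/2}$ is the corresponding standard normal quantile.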

```{r}
library(mlr3inferr)
# alpha = 0.05 is also the default
msr_ci = msr("ci.wald_cv", msr("classif.acc"), alpha = 0.05)
rr$aggregate(msr_ci)
```

We can also use `msr("ci")`, which will automatically select the appropriate method based on the `Resampling` object, if an inference method is available for it.
Reviewer comment (Member): How do I know what resamplings have inference methods?


```{r}
rr$aggregate(msr("ci", msr("classif.acc")))
```
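
One way to check which inference methods are registered is to search the measure dictionary -- a sketch, assuming `mlr3inferr` registers its methods in `mlr_measures` under keys starting with `"ci"` (as the `"ci.wald_cv"` key above suggests):

```{r}
# Sketch: list measure keys that look like confidence-interval methods.
# Assumes mlr3 and mlr3inferr are loaded (as above) and that mlr3inferr
# registers its methods in mlr_measures under keys starting with "ci".
keys = mlr_measures$keys()
keys[startsWith(keys, "ci")]
```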

### ResampleResult Objects {#sec-resampling-inspect}

As well as being useful for estimating the generalization performance, the `r ref("ResampleResult")` object can also be used for model inspection.
Expand Down Expand Up @@ -576,6 +601,22 @@ plt = plt + ggplot2::scale_fill_manual(values = c("grey30", "grey50", "grey70"))
print(plt)
```

It is also possible to plot confidence intervals by setting the type of plot to `"ci"`.
Ignoring the multiple testing problem, @fig-benchmark-ci shows that the difference between the random forest and the two other learners is statistically significant for the sonar task, whereas no final conclusion can be drawn for the German credit problem.
Reviewer comment (Member): Can we not ignore the multiple testing problem? I would show results for a single learner here.


```{r}
#| fig-height: 5
#| fig-width: 6
#| label: fig-benchmark-ci
#| fig-cap: 'Confidence intervals for accuracy scores for each learner across resampling iterations and the three tasks. The random forest (`lrn("classif.ranger")`) consistently outperforms the other learners.'
#| fig-alt: Nine confidence intervals, one corresponding to each task/learner combination. In all cases the random forest performs best and the featureless baseline the worst.
#| echo: false
#| warning: false
#| message: false
autoplot(bmr, "ci", measure = msr("ci", msr("classif.acc")))
```
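
To focus on a single task rather than all nine task/learner combinations, the benchmark result could be filtered before plotting -- a sketch, assuming `bmr` contains a task with id `"sonar"`:

```{r}
#| eval: false
# Sketch: restrict the CI plot to one task; clone first because $filter()
# modifies the BenchmarkResult in place.
bmr_sonar = bmr$clone()$filter(task_ids = "sonar")
autoplot(bmr_sonar, "ci", measure = msr("ci", msr("classif.acc")))
```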


## Evaluation of Binary Classifiers {#sec-roc}

In @sec-basics-classif-learner we touched on the concept of a confusion matrix and how it can be used to break down classification errors in more detail.
2 changes: 1 addition & 1 deletion book/common/chap_auths.csv
@@ -1,7 +1,7 @@
Chapter Number,Title,Authors
1,Introduction and Overview,"Lars Kotthoff, Raphael Sonabend, Natalie Foss, Bernd Bischl"
2,Data and Basic Modeling,"Natalie Foss, Lars Kotthoff"
3,Evaluation and Benchmarking,"Giuseppe Casalicchio, Lukas Burk"
3,Evaluation and Benchmarking,"Giuseppe Casalicchio, Lukas Burk, Sebastian Fischer"
4,Hyperparameter Optimization,"Marc Becker, Lennart Schneider, Sebastian Fischer"
5,Advanced Tuning Methods and Black Box Optimization,"Lennart Schneider, Marc Becker"
6,Feature Selection,Marvin N. Wright