feat(section): add information about mlr3inferr #855
@@ -285,6 +285,31 @@ print(plt2)
```

### Confidence Intervals {#sec-resampling-ci}

Confidence intervals (CIs) provide a range of values that, with a given level of confidence, covers the true generalization error.
Instead of relying solely on a single point estimate, CIs quantify the uncertainty around this estimate, allowing us to judge how reliable our performance estimate is.
While constructing CIs for the generalization error is challenging due to the complex nature of the inference problem, some methods have been shown to work well in practice [@kuempelfischer2024ciforge].
> **Review comment:** I would add some more context here -- some learners/models can provide these directly (and often those calculations aren't all that complex), but if the learner doesn't support it, we have to do something else. Then describe in a sentence or two what those methods do.

When employing such methods, it is important to be aware that they can fail in some cases -- e.g., in the presence of outliers or unstable learning procedures -- and that the resulting CIs can be either too conservative or too liberal.

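To build intuition for the simplest of these constructions, the sketch below computes a naive Wald-style interval from per-observation zero-one losses on a single held-out set. The losses are hypothetical and the snippet is illustrative only; the method used below adapts this basic idea to the dependence structure of cross-validation.

```{r}
# naive Wald-style CI from hypothetical 0/1 losses on one holdout set
losses = c(0, 1, 0, 0, 1, 0, 1, 0, 0, 0)  # 1 = misclassified
alpha = 0.05
est = mean(losses)                         # point estimate of the error
se = sd(losses) / sqrt(length(losses))     # standard error of the mean
z = qnorm(1 - alpha / 2)                   # normal quantile, approx. 1.96
c(lower = est - z * se, estimate = est, upper = est + z * se)
```
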
The `r ref_pkg("mlr3inferr")` package extends the `mlr3` ecosystem with both inference methods and new resampling strategies.
The inference methods are implemented as `r ref("Measure")` objects that wrap another measure for which the CI is computed.
Below, we demonstrate how to use the inference method suggested by @bayle2020cross to compute a CI for the cross-validation result from above.
In contrast to other `mlr3` measures, the result is not a scalar value but a vector containing the point estimate as well as the lower and upper bounds of the CI at the specified confidence level.

```{r}
library(mlr3inferr)
# alpha = 0.05 is also the default
msr_ci = msr("ci.wald_cv", msr("classif.acc"), alpha = 0.05)
rr$aggregate(msr_ci)
```

We can also use `msr("ci")`, which automatically selects the appropriate inference method based on the `Resampling` object, provided one is available for it.

> **Review comment:** How do I know what resamplings have inference methods?

```{r}
rr$aggregate(msr("ci", msr("classif.acc")))
```

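As a partial answer to the question above: the inference measures from `r ref_pkg("mlr3inferr")` are registered in the `mlr_measures` dictionary under keys prefixed with `"ci."`, so one way to enumerate them is to filter the dictionary keys. This is a sketch; the exact set of keys, and the resamplings they support, depends on the installed package version.

```{r}
# list the keys of all registered CI measures
keys = mlr_measures$keys()
keys[startsWith(keys, "ci.")]
```
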
### ResampleResult Objects {#sec-resampling-inspect}

As well as being useful for estimating the generalization performance, the `r ref("ResampleResult")` object can also be used for model inspection.

@@ -576,6 +601,22 @@ plt = plt + ggplot2::scale_fill_manual(values = c("grey30", "grey50", "grey70"))
print(plt)
```

It is also possible to plot confidence intervals by setting the plot type to `"ci"`.
Ignoring the multiple testing problem, @fig-benchmark-ci shows that the difference between the random forest and both other learners is statistically significant on the sonar task, whereas no final conclusion can be drawn for the German credit task.

> **Review comment:** Can we not ignore the multiple testing problem? I would show results for a single learner here.

```{r}
#| fig-height: 5
#| fig-width: 6
#| label: fig-benchmark-ci
#| fig-cap: 'Confidence intervals for accuracy scores for each learner across resampling iterations and the three tasks. The random forest (`lrn("classif.ranger")`) consistently outperforms the other learners.'
#| fig-alt: Nine confidence intervals, one corresponding to each task/learner combination. In all cases the random forest performs best and the featureless baseline the worst.
#| echo: false
#| warning: false
#| message: false
autoplot(bmr, "ci", measure = msr("ci", msr("classif.acc")))
```

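On the multiple testing concern raised above: if simultaneous coverage across all nine intervals is desired, one simple though conservative option is a Bonferroni-style adjustment, dividing `alpha` by the number of intervals. The snippet below is a sketch that assumes `alpha` is forwarded by `msr("ci", ...)` just as in the earlier `msr("ci.wald_cv", ...)` example; it is not meant as the recommended correction.

```{r}
# Bonferroni adjustment: alpha divided by the number of simultaneous CIs
msr_ci_adj = msr("ci", msr("classif.acc"), alpha = 0.05 / 9)
bmr$aggregate(msr_ci_adj)
```
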
## Evaluation of Binary Classifiers {#sec-roc}

In @sec-basics-classif-learner we touched on the concept of a confusion matrix and how it can be used to break down classification errors in more detail.

> **Review comment:** I would delete the first sentence. The next sentence basically says the same, but this one isn't quite accurate.
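
As a concrete refresher, the pooled predictions of a `r ref("ResampleResult")` expose a confusion matrix directly; a minimal sketch using the `rr` object from earlier in the chapter:

```{r}
# confusion matrix of predictions pooled across resampling iterations
rr$prediction()$confusion
```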