LULUC implementation #282
base: master
Conversation
marcadella commented on Feb 3, 2025 (edited)
- Massive update to Biomee implementation using OOP and linked list to store cohorts.
- Implementation of LULUC for Biomee #272
- LULUC vignette #273
- Correct state transitions to match states #274
- Biome LUH2 interface #286
- Only external change is the addition of `lu_fraction` in the output (non-breaking, as it is appended in last position).
- Fixed a bug where cohort outputs were not properly numbered for multi-year simulations.
Strict check on biomee drivers
…d regenerate biomee outputs.
…umns.
- Factor simulation state (otherwise known as steering) out of myinterface.
- Remove unused code.
- Seems to slightly break Biomee p-model; needs day-by-day investigation.
- Rename files to match current convention (.mod).
- Remove biomee-specific constants from params_core and remove unneeded dependencies.
…ile is not needed anymore. Update luluc vignette. Update NEWS.
expect_true(all.equal(colMeans(out$data[[1]]$output_daily_tile), colMeans(biomee_p_model_output$data[[1]]$output_daily_tile), tolerance = 1e-4))
expect_true(all.equal(colMeans(out$data[[1]]$output_annual_tile), colMeans(biomee_p_model_output$data[[1]]$output_annual_tile), tolerance = 1e-4))
expect_true(all.equal(colMeans(out$data[[1]]$output_annual_cohorts), colMeans(biomee_p_model_output$data[[1]]$output_annual_cohorts), tolerance = 1e-4))
Using `colMeans` for the test averages over time, right? This is much less strict than requiring all daily/yearly values to be correct. I suggest removing the `colMeans` to have stricter testing.
Has this been tried out previously and rejected? If so, for what reasons?
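The averaging concern can be shown with a minimal self-contained example (toy data, not rsofun output): two series that disagree on every value can still share identical column means.

```r
# Toy data: identical column means, but every row disagrees.
ref <- data.frame(GPP = c(1, 2, 3))
out <- data.frame(GPP = c(3, 2, 1))

# Passes: colMeans collapses the time dimension before comparing.
isTRUE(all.equal(colMeans(out), colMeans(ref), tolerance = 1e-4))  # TRUE

# Fails: comparing the full data frames catches the disagreement.
isTRUE(all.equal(out, ref, tolerance = 1e-4))                      # FALSE
```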
Furthermore, instead of `testthat::expect_true`, `testthat::expect_equal()` yields more precise error messages, indicating which column (e.g. GPP or tree density) disagrees. I argue that this speeds up debugging, since this info will already be available from the output of the CI run.
See below the error messages resulting from these two functions:
df_reference <- data.frame(LE=c(1,2), GPP=c(2,3), Density_treeha=c(2,3))
df_to_test <- data.frame(LE=c(1.001,2), GPP=c(20.001,30.001), Density_treeha=c(2,3))
testthat::expect_equal(
tolerance = 1e-4,
df_to_test,
df_reference)
#> Error: `df_to_test` not equal to `df_reference`.
#> Component "LE": Mean relative difference: 0.000999001
#> Component "GPP": Mean relative difference: 0.900004
testthat::expect_equal(
tolerance = 0.01,
df_to_test,
df_reference)
#> Error: `df_to_test` not equal to `df_reference`.
#> Component "GPP": Mean relative difference: 0.900004
testthat::expect_true(
all.equal(
colMeans(df_to_test),
colMeans(df_reference),
tolerance = 1e-4))
#> Error: all.equal(colMeans(df_to_test), colMeans(df_reference), tolerance = 1e-04) is not TRUE
#>
#> `actual` is a character vector ('Mean relative difference: 0.8490651')
#> `expected` is a logical vector (TRUE)
Created on 2025-02-27 with reprex v2.1.0
Illustrating it with the real output and mocked-up errors:
library(rsofun)
library(dplyr)
library(tidyr)
df_reference2 <- rsofun::biomee_p_model_output$data[[1]]$output_daily_tile |> tidyr::tibble()
df_to_test2 <- df_reference2 |>
dplyr::mutate(Evap = Evap * (1 + runif(n(), min = -1e-3, max = 1e-3)),
GPP = GPP * (1 + runif(n(), min = -1e-4, max = 1e-4)),
slowSOM = slowSOM * (1 + runif(n(), min = -1e-4, max = 1e-4)))
testthat::expect_true(
all.equal(
colMeans(df_to_test2),
colMeans(df_reference2),
tolerance = 1e-4))
# This should not pass, but since the error I added is centered around 0 it is removed by `colMeans`.
testthat::expect_equal(
tolerance = 1e-4, # this is relative tolerance
df_to_test2,
df_reference2)
#> Error: `df_to_test2` not equal to `df_reference2`.
#> Component "Evap": Mean relative difference: 0.0004877337
Created on 2025-02-27 with reprex v2.1.0
Thanks Mayeul for this major overhaul!
Up to you. I found it enough to use the mean to detect when I was breaking something. I was also always visualizing some plots.
These are regression tests to check numerical equality of output when changing the BiomeE code.
The selected reference values were generated on Ubuntu. The GitHub CI showed that some differences exist when comparing on macOS or Windows. Most mean relative differences are tiny, i.e. below 1e-5; Rauto was frequently between 1e-4 and 1e-5. An outlier was plantN on Windows, with a mean relative difference of 0.0005171742. Therefore we set the tolerance to 1e-3.
GitHub CI indicated the largest differences when comparing the Ubuntu-generated reference values to runs on macOS and Windows. In particular, the following variables showed relative differences between 1e-3 and 1e-2: N_uptk, Nupt, NSN, seedC, and mineralN, even up to 0.028 (Windows) and 0.041 (macOS). Therefore we set the tolerance to 0.05, i.e. 5% difference. Unfortunately this is set to the same value for all variables, even though many show much better agreement.
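One way around the single global tolerance would be per-column tolerances. A minimal base-R sketch of the idea (the `check_columns` helper and the tolerance values are hypothetical, not part of rsofun):

```r
# Hypothetical helper: compare each column against the reference with its
# own tolerance, falling back to a strict default for well-behaved variables.
check_columns <- function(actual, reference,
                          tolerances = c(mineralN = 0.05, seedC = 0.05),
                          default_tol = 1e-4) {
  for (col in names(reference)) {
    tol <- if (col %in% names(tolerances)) tolerances[[col]] else default_tol
    res <- all.equal(actual[[col]], reference[[col]], tolerance = tol)
    if (!isTRUE(res)) stop(sprintf("Column '%s': %s", col, res))
  }
  invisible(TRUE)
}

# Toy example: mineralN is ~2% off (within its loose 5% tolerance),
# GPP matches exactly (within the strict default).
ref <- data.frame(GPP = c(1, 2), mineralN = c(10, 20))
act <- data.frame(GPP = c(1, 2), mineralN = c(10.2, 20.4))
check_columns(act, ref)  # passes
```

This keeps the strict 1e-4 check for most variables while loosening only the known platform-sensitive ones.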
Rauto = c(2.97977797991678e-09,5.82770990149584e-05,6.47079059490352e-06,5.27829615748487e-06,5.31174691786873e-06),
Rh = c(2.77344884125341e-06,2.75237016467145e-06,1.07433834273252e-05,2.56007547250192e-06,2.55474083132867e-06),
NSC = c(0.00582691049203277,0.00565235828980803,0.00280602532438934,0.00323596899397671,0.00322869536466897),
LAI = c(0,0.000364852574421093,0.00466133235022426,0.00618919776752591,0.00619784602895379),
The LULUC change appears to have changed the initial conditions regarding LAI and the sapwood-to-hardwood ratio of carbon. This affects the initial trajectory but appears to smooth out after 8 years.
No action needs to be taken, but it's good to have this documented here.
Detailed observations:
LAI: The first-year output shows the LAI at only half of the output before the LULUC change. This also links to leafC (in line 388).
Sapwood/Hardwood: Moreover, with the LULUC change, at the end of the first year the total carbon in sapwood and hardwood is similar, but it is distributed differently: sapwood SW_C is only half and hardwood HW_C is double w.r.t. the outputs prior to the LULUC change (lines 390 and 391).
These changes are present in the first year ref_BiomeE_Pmodel_odt_yr1 (and might be linked to initialisation or allometry), but disappear for later years in the simulation. Changes to the reference values of these quantities in ref_BiomeE_Pmodel_odt_yr251 are indeed only minor (lines 426 and 427).
Observations for ref_BiomeE_Pmodel_oat (and per cohort, comparing ref_BiomeE_Pmodel_oac_yr1 vs ref_BiomeE_Pmodel_oac_yr251) are similar. With the LULUC change, there is less carbon in SapwoodC than in WoodC during the first two years; this switches, and starting from years 8-9 there is more carbon in sapwood than in hardwood. By the end of the 250-year simulation the ratio is 4.9/3.7 in favor of sapwood. Prior to the LULUC change, this ratio held approximately from year 1 onward (lines 469 and 470).
Instead of 62 it was 64. But best is to reuse `60 + 4`.
This clarifies the term `state` in this context as referring to the steering variables of the current year; in another context, `state` is already used in rsofun for the physical state variables of the dynamical system. Essentially, `steering_state` contains the steering object, i.e. the state of the simulation step, controlling whether output is written, etc. This also clarifies the `steering` parameters by renaming them `steering_params`, reflecting the fact that these input parameters are constant throughout the whole simulation.
One point @fabern and I discussed:
Fix NaN in LULUC test, fix out-of-bounds dimension, rename 'state' to 'steering_state'