From 48f19309639d00a83d671103b5fbd19fc604c20b Mon Sep 17 00:00:00 2001 From: noctiluc3nt Date: Mon, 16 Sep 2024 15:45:04 +0200 Subject: [PATCH] updated links --- 01_post-processing.Rmd | 6 ++++++ 05_flux-footprint.Rmd | 1 + docs/flux-footprint.html | 1 + docs/index.html | 5 ++--- docs/post-processing-of-raw-eddy-covariance-data.html | 5 +++++ docs/search_index.json | 2 +- index.Rmd | 5 ++--- notebooks/05_flux-footprint-estimation.ipynb | 8 ++++++++ 8 files changed, 26 insertions(+), 7 deletions(-) diff --git a/01_post-processing.Rmd b/01_post-processing.Rmd index f2a1c82..6c01323 100644 --- a/01_post-processing.Rmd +++ b/01_post-processing.Rmd @@ -1,5 +1,11 @@ # Post-processing of raw eddy-covariance data + + To derive turbulent fluxes and other turbulence characteristics from eddy-covariance measurements, they have to be post-processed first. In the following, we will have a look at the required steps -- starting with the theoretical foundation of the method and the necessary physical and technical considerations for the raw data processing. The upcoming analyses (e.g., [02_basic-turbulence-diagnostics.ipynb](https://github.com/noctiluc3nt/ec_analyze/blob/main/notebooks/02_basic-turbulence-diagnostics.ipynb)) build upon this and allow for a more detailed investigation (e.g., [03_quadrant-analysis.ipynb](https://github.com/noctiluc3nt/ec_analyze/blob/main/notebooks/03_quadrant-analysis.ipynb)). diff --git a/05_flux-footprint.Rmd b/05_flux-footprint.Rmd index 1a92279..b273569 100644 --- a/05_flux-footprint.Rmd +++ b/05_flux-footprint.Rmd @@ -69,6 +69,7 @@ str(ffp) ..$ : num [1:87] -1.91 -1.58 -0.79 0 0.79 ... $ contour_levels: num [1:9] 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 +Therein, `fy_mean` represents the crosswind-integrated footprint with coordinates `x`, and `xmax` is the location of the footprint maximum. `(x2d, y2d, f2d)` represent the 2d flux footprint and `(xcontour, ycontour)` the contour lines of the respective contour levels, which can be specified via the `contours` argument of `calc_flux_footprint`. The output list `ffp` can be easily plotted using the function `plot_flux_footprint`, as shown in the following. ### Plotting of flux footprint with `plot_flux_footprint` diff --git a/docs/flux-footprint.html b/docs/flux-footprint.html index 5ca2c90..ad292cb 100644 --- a/docs/flux-footprint.html +++ b/docs/flux-footprint.html @@ -284,6 +284,7 @@

7.2.1 Calculate 2D flux footprint ..$ : num [1:133] -3.37 -3.16 -2.37 -1.58 -0.79 ... ..$ : num [1:87] -1.91 -1.58 -0.79 0 0.79 ... $ contour_levels: num [1:9] 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 +

Therein, fy_mean represents the crosswind-integrated footprint with coordinates x, and xmax is the location of the footprint maximum. (x2d, y2d, f2d) represent the 2d flux footprint and (xcontour, ycontour) the contour lines of the respective contour levels, which can be specified via the contours argument of calc_flux_footprint. The output list ffp can be easily plotted using the function plot_flux_footprint, as shown in the following.
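For orientation, a minimal base-R sketch of how this list could be inspected by hand, assuming the element layout shown in str(ffp) above (plot_flux_footprint remains the recommended way to plot it):

```r
# Hand-rolled look at the footprint list returned by calc_flux_footprint
# (a sketch only; plot_flux_footprint does this properly).
op <- par(mfrow = c(1, 2))
# crosswind-integrated footprint, with the location of its maximum
plot(ffp$x, ffp$fy_mean, type = "l", xlab = "x [m]",
     ylab = "crosswind-integrated footprint")
abline(v = ffp$xmax, lty = 2)
# contour lines of the 2d footprint for the chosen contour levels
plot(NA, xlim = range(unlist(ffp$xcontour)), ylim = range(unlist(ffp$ycontour)),
     xlab = "x [m]", ylab = "y [m]", main = "2d flux footprint")
for (k in seq_along(ffp$contour_levels))
  lines(ffp$xcontour[[k]], ffp$ycontour[[k]])
par(op)
```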

7.2.2 Plotting of flux footprint with plot_flux_footprint

diff --git a/docs/index.html b/docs/index.html index 516ad56..6bbf640 100644 --- a/docs/index.html +++ b/docs/index.html @@ -226,10 +226,9 @@

Reddy: An open-source package for analyzing eddy-covariance data

1 Introduction


-

Package: Reddy package (Github)
+

Package source: Reddy package (Github)
Jupyter notebooks: jupyter notebooks (Github)
-Publication: Link
-Any comments or issues?: create an issue (Github)

+Any comments or issues? create an issue (Github)


Overview: Eddy-covariance (EC) measurements make it possible to measure turbulent fluxes directly and non-invasively over long periods of time, and thus represent the standard measurement method for turbulent exchange processes between the atmosphere and land, vegetation, cryosphere or hydrosphere.
diff --git a/docs/post-processing-of-raw-eddy-covariance-data.html b/docs/post-processing-of-raw-eddy-covariance-data.html index 6d33e2d..0942e17 100644 --- a/docs/post-processing-of-raw-eddy-covariance-data.html +++ b/docs/post-processing-of-raw-eddy-covariance-data.html @@ -221,6 +221,11 @@

2 Post-processing of raw eddy-covariance data

+

To derive turbulent fluxes and other turbulence characteristics from eddy-covariance measurements, they have to be post-processed first. In the following, we will have a look at the required steps – starting with the theoretical foundation of the method and the necessary physical and technical considerations for the raw data processing. The upcoming analyses (e.g., 02_basic-turbulence-diagnostics.ipynb) build upon this and allow for a more detailed investigation (e.g., 03_quadrant-analysis.ipynb).

2.1 Eddy-covariance method

diff --git a/docs/search_index.json b/docs/search_index.json index 1bbb0bf..21e13ac 100644 --- a/docs/search_index.json +++ b/docs/search_index.json @@ -1 +1 @@ -[["index.html", "Reddy: An open-source package for analyzing eddy-covariance data 1 Introduction 1.1 Measurement setup and instrumentation 1.2 Data sources", " Reddy: An open-source package for analyzing eddy-covariance data 1 Introduction Package: Reddy package (Github) Jupyter notebooks: jupyter notebooks (Github) Publication: Link Any comments or issues?: create an issue (Github) Overview: Eddy-covariance (EC) measurements make it possible to measure turbulent fluxes directly and non-invasively over long periods of time, and thus represent the standard measurement method for turbulent exchange processes between the atmosphere and land, vegetation, cryosphere or hydrosphere. However, they require a well-thought-out technical setup and special post-processing before further analysis can be carried out. This gitbook provides an overview of the setup, post-processing and meteorological evaluation of EC measurements, which can be used for research and teaching. The structure of this gitbook is detailed in figure 1. All described functions are implemented in the R-package Reddy, which was specially developed to allow a reproducible and comprehensive analysis of EC measurements. Each chapter covers one topic and is also available as a jupyter notebook, which can be downloaded here. Figure 1: Overview of the topics in this gitbook and the functionality of the Reddy package Installation of the Reddy package: The Reddy package can be installed directly from github: devtools::install_git("https://github.com/noctiluc3nt/Reddy") Entering a function name in an R terminal shows the function and the performed calculations, such that the calculation and plotting procedure is fully comprehensible. The Reddy package depends on the libraries MASS (for kernel density estimation), pracma (basic linear algebra) and RcppRoll (C++ interface for accelerated data handling), which are automatically installed with Reddy. 1.1 Measurement setup and instrumentation Setup: An EC setup has two main components: the sonic anemometer (for the wind components and temperature) and a gas analyzer (for water vapor, carbon dioxide, methane, …). The sonic usually consists of three pairs of transducers that transmit and receive ultrasonic sound waves whose propagation speed depends on wind speed, temperature and humidity. From these, the three wind components and the sonic temperature, which is approximately the virtual temperature, can be derived. The gas analyzers usually utilize an absorption line in the near- or mid-infrared of the respective trace gas to measure its number density (IRGA - infrared gas analyzer). The air is pumped from the inlet (with a flow rate of about 12 l/min, which is controlled by the flow module containing a pump) through the inlet tube into the gas analyzer, where the actual measurement takes place. There are different types of infrared gas analyzers, the most common distinction being between closed- and open-path gas analyzers. Further possible methods for gas analyzers are laser spectroscopy, mass spectroscopy, chromatography and chemiluminescence (often used for NOx); particle measurements can also be combined with sonic measurements to derive particle fluxes. The EC system can be mounted on a tower or mast with one or several measurement heights. 
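As a small aside to the sonic principle just described, the standard conversion from speed of sound to sonic (approximately virtual) temperature can be sketched as follows; the helper name and constants are illustrative assumptions, not necessarily identical to Reddy's sos2Ts used later:

```r
# Sketch: convert measured speed of sound c [m/s] to sonic temperature T_s [K]
# via c^2 = gamma_d * R_d * T_s (dry-air constants; T_s is close to the
# virtual temperature). Illustrative only -- see Reddy's sos2Ts.
sos_to_sonic_temperature <- function(sos) {
  gamma_d <- 1.4     # ratio of specific heats for dry air [-]
  R_d     <- 287.05  # specific gas constant of dry air [J kg-1 K-1]
  sos^2 / (gamma_d * R_d)
}
sos_to_sonic_temperature(343)  # ~292.8 K, a typical near-surface value
```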
For choosing the location and measurement height, the surface roughness and the flux footprint (i.e., the area where the flux originates from) are examined. Since the mast or tower disturbs the EC measurements, the orientation of the system is chosen based on the prevailing wind direction(s), such that the main wind direction(s) are undisturbed by the tower structures, as exemplified in figure 3. The sonics have an internal orientation (and should face north as indicated on the sonic), and they should be levelled (however, their precise levelling is not essential, as the wind is rotated in the post-processing anyway). There are several options for the inlet position: usually it is placed below the sonic (see the setup in Kuivajärvi), but most importantly it should not interfere with the sonic measurements. A detailed description of a high-standard EC site setup (following the standards of the Integrated Carbon Observation System, ICOS) can be found in Rebmann et al. (2018). Figure 2: Eddy-covariance sites in different environments: urban (Helsinki, Finland), lake (Kuivajärvi, Finland), boreal forest (Sodankylä, Finland), and alpine tundra (Finse, Norway). Calibration and Maintenance: The EC system requires regular maintenance, in particular the gas analyzers. The gas analyzers are calibrated regularly in-field, and more rarely, intensively by the manufacturer. For this purpose, reference measurements are carried out with gas cylinders containing a fixed amount of the gas, e.g. zero-gas or 450 ppm CO\(_2\). For water vapor, usually only the zero-gas calibration is performed, since fixed water vapor concentrations are difficult to maintain. For other trace gases, the in-field calibration is performed with a zero-gas and another fixed concentration, while the manufacturer calibrates for several fixed gas concentrations. The optics of the gas analyzer have to be cleaned regularly depending on the environmental conditions, whereby open-path gas analyzers require more maintenance than closed-path gas analyzers. Additionally, the inlet filters should be cleaned or replaced regularly, as should the sampling tube. The sonic requires less maintenance; however, the measurements are sensitive to the distance of the transducer pairs, so their distance should be checked. Figure 3: Components of an eddy-covariance system (demonstration setup) 1.2 Data sources FLUXNETs: There are several coordinated FLUXNETs, which standardise eddy-covariance measurements regionally or globally and provide post-processed flux data. For example: FLUXNET: FLUXNET combines several regional networks into a global flux dataset. A list of regional networks can be found here and a list of stations here. The most recent global flux data set is FLUXNET2015 (Pastorello et al. 2020). ICOS: ICOS (Integrated Carbon Observation System) is a European network of different station types (Atmosphere, Ecosystem, Ocean), which observe carbon concentrations and fluxes. The atmospheric stations are listed here and data can be downloaded from the data portal. Other packages for processing of eddy-covariance data: Multiple other packages for eddy-covariance data processing exist, with different applications, different degrees of specialization and in different languages. The main focus of these packages, however, is the post-processing of raw eddy-covariance data (often as wrappers of manufacturer software) and long-term ecosystem monitoring studies, as summarized below. 
In this regard, Reddy R fills a gap, as it additionally considers turbulence-theoretical and meteorological applications and allows for a fully customized post-processing and analysis of eddy-covariance data. EddyPro® Fortran: Post-processing of eddy-covariance data from the manufacturer LI-COR Biosciences. ONEFlux C (“Open Network-Enabled Flux processing pipeline”): Post-processing of (half-)hourly eddy-covariance data used to create the FLUXNET2015 dataset (Pastorello et al. 2020). REddyProc R: Post-processing of (half-)hourly eddy-covariance measurements. openeddy R: Post-processing of eddy-covariance data, aligned with REddyProc. RFlux R: GUI for post-processing of eddy-covariance raw data by calling EddyPro®. eddy4R R (Metzger et al. 2017): Family of several R-packages for eddy-covariance post-processing in a DevOps framework. icoscp Python: Access to data from the ICOS (Integrated Carbon Observation System) data portal. flux-data-qaqc Python (Volk et al. 2021): Post-processing of eddy-covariance measurements to derive daily or monthly evapotranspiration in an energy balance framework. References "],["post-processing-of-raw-eddy-covariance-data.html", "2 Post-processing of raw eddy-covariance data 2.1 Eddy-covariance method 2.2 Raw data handling and corrections 2.3 An example post-processing routine", " 2 Post-processing of raw eddy-covariance data To derive turbulent fluxes and other turbulence characteristics from eddy-covariance measurements, they have to be post-processed first. In the following, we will have a look at the required steps – starting with the theoretical foundation of the method and the necessary physical and technical considerations for the raw data processing. The upcoming analyses (e.g., 02_basic-turbulence-diagnostics.ipynb) build upon this and allow for a more detailed investigation (e.g., 03_quadrant-analysis.ipynb). 2.1 Eddy-covariance method How do we derive a flux from the sonic measurements? The eddy-covariance method is based on the Reynolds decomposition, which is applied to every measured quantity \(x\) through \[ x = \overline{x} + x' \quad \textrm{with} \quad \overline{x}:= \frac{1}{t_s} \int_0^{t_s} x \: dt. \] Therein, \(\overline{x}\) represents the time average and \(x'\) the deviation from it. In principle, one would have to consider the ensemble average (i.e., all possible paths in the phase space), but this is replaced by the time average assuming ergodicity. Now, we take a look at the flux, which is defined as the average product of two quantities \(\overline{xw}\), and insert the Reynolds decomposition: \[ \overline{xw} = \overline{x}\:\overline{w} + \overline{x}\overline{w'} + \overline{x'}\overline{w} + \overline{x'w'} = \overline{x}\:\overline{w} + \overline{x'w'} \] The second and third terms vanish because of the Reynolds postulates (i.e. \(\overline{x'} = 0\)). If additionally \(\overline{w}=0\), the flux can be represented by the covariance, i.e., \(\overline{xw} = \overline{x'w'}\). As you can see, this procedure involves several assumptions: for \(\overline{x'} = 0\) to be true, we assume that the flow is steady, and for \(\overline{w}=0\) you either have a homogeneous and flat surface or you need to rotate the sonic coordinate system (see the correction methods below). Usually, \(w\) represents the vertical wind speed and \(x\) can be an arbitrary scalar quantity. 
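The decomposition can be checked numerically; a small synthetic example (not part of the Reddy workflow) verifying that the mean product splits exactly into the product of the means plus the covariance term:

```r
# Synthetic check of the Reynolds decomposition:
# mean(x*w) = mean(x)*mean(w) + mean(x'w')
set.seed(1)
n <- 18000                            # e.g. 30 min of 10 Hz data
w <- rnorm(n, mean = 0, sd = 0.3)     # vertical velocity [m/s]
x <- 15 + 2 * w + rnorm(n, sd = 0.5)  # a scalar correlated with w
lhs <- mean(x * w)
rhs <- mean(x) * mean(w) + mean((x - mean(x)) * (w - mean(w)))
all.equal(lhs, rhs)  # TRUE: the cross terms vanish identically in the sample
```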
For the momentum flux, we use \(x=u\) (streamwise velocity), for the sensible heat flux \(x=T\) (temperature or potential temperature), for the latent heat flux \(x=q\) (specific humidity) and for trace gas fluxes \(x=c\) (the concentration or mixing ratio of the respective trace gas). 2.2 Raw data handling and corrections For applying the described eddy-covariance method to the measurements, we have to decide on an averaging time \(t_s\), perform quality control of the raw data and check that the underlying assumptions are fulfilled. 2.2.1 Choosing a suitable averaging time As we have seen above, for calculating the fluxes with the eddy-covariance method, a suitable averaging time (\(t_s\)) should be chosen. For this, several things need to be considered: Generally, one can say that the longer the averaging time, the less likely it is that the conditions are steady (which is an assumption in the EC method), but short averaging times lose low-frequency contributions. So usually, an averaging time of 30 minutes is chosen. However, this averaging time can still be too long if the turbulence is very intermittent, which is the case for very stable conditions. Consequently, some studies use the 30-minute average just for unstable stratification and an averaging time of 1 to 10 minutes for stable conditions. It is recommended to first study the characteristics of the sampled raw data (e.g., based on spectra) and then choose an averaging time. 2.2.2 Overview of correction procedures applied to the raw data Despiking (despiking): Removal of spikes in the raw data. For this, usually three methods are applied: (1) based on pre-defined thresholds (i.e., an expected temperature or wind speed interval), (2) a median absolute deviation (MAD) test, i.e., all measurements have to lie within a pre-defined range around the median, and (3) based on skewness (3rd moment) and kurtosis (4th moment), i.e., the skewness and kurtosis of the considered interval must not exceed pre-defined values. For details see, e.g., Mauder et al. (2013). Lag-time correction (shift2maxccf): A lag time can occur if several loggers are used or the inlet tube of the gas analyzer is very long, such that there is a time offset between the different measured variables. This can be corrected by calculating the maximum cross-correlation and shifting the time series according to the lag difference, which is done by the function shift2maxccf. Linear detrending (pracma::detrend): Removal of a linear trend from the time series (however, linear detrending is not consistent with Reynolds decomposition, so it should be used/interpreted with caution, see e.g. Baldocchi (2003)). Rotation (rotate_double or rotate_planar): Rotation of the sonic coordinate system. Since the orientation of the sonic is arbitrary, the measurements have to be interpreted in a suitable coordinate system. For this, the natural coordinate system is used, which means that it follows the streamlines and the coordinates are then given by the streamwise velocity component \(u\), the crosswise velocity component \(v\) and the vertical velocity component \(w\). To determine the orientation, two main approaches exist: Double rotation (rotate_double) aligns the sonic coordinate system with the streamlines for every averaging interval. Hence, this method can be applied in near-real time. 
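To make the rotation step concrete, here is a minimal sketch of the textbook double-rotation algorithm (first a yaw rotation so that the mean crosswise component vanishes, then a pitch rotation so that the mean vertical component vanishes); this follows the standard procedure and is not necessarily identical to Reddy's rotate_double:

```r
# Sketch of double rotation: align the x-axis with the mean wind so that
# the averaged v and w components vanish (textbook version).
double_rotation <- function(u, v, w) {
  # 1st rotation (yaw) around the z-axis: mean(v) -> 0
  theta <- atan2(mean(v, na.rm = TRUE), mean(u, na.rm = TRUE))
  u1 <-  u * cos(theta) + v * sin(theta)
  v1 <- -u * sin(theta) + v * cos(theta)
  # 2nd rotation (pitch) around the new y-axis: mean(w) -> 0
  phi <- atan2(mean(w, na.rm = TRUE), mean(u1, na.rm = TRUE))
  u2 <-  u1 * cos(phi) + w * sin(phi)
  w2 <- -u1 * sin(phi) + w * cos(phi)
  list(u = u2, v = v1, w = w2, theta = theta, phi = phi)
}
```

After the two rotations, mean(v) and mean(w) are zero by construction, and the returned angles theta and phi can serve as the quality indicators discussed below.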
Planar fit rotation (rotate_planar, Wilczak, Oncley, and Stage (2001)) aligns the sonic coordinate system with the mean streamlines under the condition that the mean vertical velocity \(\overline{w}\) vanishes. For this, a longer time series (usually the whole measurement campaign) is required and thus near-real time processing is not possible. Generally, it is recommended to use double rotation in relatively simple topography, but planar fit in complex (micro-)topography. The angles around which the sonic coordinate system was rotated also allow estimating the quality of the measurements (the smaller the rotation angle, the better; importantly, the angles should not flip sign, in particular for close-to-zero fluxes under very stable stratification). Quality flagging (flag_stationarity, flag_w, flag_distortion, flag_most, e.g. Foken and Wichura (1996)): Several quality flags (ranging from 0: the best to 2: the worst) can be applied. Common flags are: a stationarity flag (flag_stationarity), which tests the alignment with the stationarity assumption of the eddy-covariance method, a vertical velocity flag (flag_w), which checks that the remaining vertical velocity after the rotation is small, a flow distortion flag (flag_distortion), which removes the pre-defined wind directions that are possibly affected/blocked by the mast, and an integral turbulence characteristics flag (flag_itc), which tests for agreement with Monin-Obukhov similarity theory. However, these flags have to be applied in a purpose-oriented way and interpreted with caution. Sometimes particularly “poorly flagged” measurements are interesting for investigating turbulence under challenging (and therefore interesting) conditions, e.g. under very stable stratification. SND (and cross-wind) correction (SNDcorrection, Schotanus, Nieuwstadt, and DeBruin (1983)): Converts the buoyancy flux to the sensible heat flux. Since most sonics measure the sound propagation speed, the measured temperature (the so-called sonic temperature \(T_s\)) is similar to the virtual temperature, and thus the resulting flux \(\overline{w'T_s'}\) represents the buoyancy flux, which is about 10-20% larger than the sensible heat flux \(\overline{w'T'}\). Note that the constants used in the method depend on the measurement device (default for CSAT3). WPL correction (WPLcorrection, Webb, Pearman, and Leuning (1980)): Converts volume- to mass-related quantities for trace gas fluxes. This correction only applies to trace gas concentrations (water vapor, carbon dioxide, methane, …) measured with a gas analyzer. Unit conversion (density) (ppt2rho): Converts ppt to density (a standalone sketch of this conversion follows after this overview). Closed-path gas analyzers measure trace gas concentrations in parts-per-… (ppt: … thousand, ppm: … million), which have to be converted to a density based on the respective molar mass. Unit conversion (fluxes) (cov2sh, cov2lh, cov2cf): Converts covariances to the respective flux in W/m\(^2\), i.e., cov(w,T) to the sensible heat flux, cov(w,q) to the latent heat flux and cov(w,c) to the CO\(_2\) flux. There are more (and partly controversial) correction methods; details are discussed in Foken (2017). Further correction methods are required when spectra are considered (spectral corrections) or budgets are calculated (gap-filling of missing values, to avoid a bias due to the discarded data). 2.3 An example post-processing routine Now, we create an example post-processing routine. 
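Before assembling the routine, the density conversion referenced above can be sketched via the ideal gas law; the helper name, signature and constants here are illustrative assumptions, not Reddy's ppt2rho:

```r
# Sketch: molar fraction chi [ppt] -> mass density [g m-3] via the ideal
# gas law, rho = chi * p * M / (R * T). Illustrative stand-in for ppt2rho.
ppt_to_density <- function(chi_ppt, T_K, p_Pa, M_gmol = 18.015) {
  R <- 8.314462                    # universal gas constant [J mol-1 K-1]
  chi <- chi_ppt * 1e-3            # ppt -> mol per mol of air
  chi * p_Pa * M_gmol / (R * T_K)  # [g m-3]
}
ppt_to_density(10, 288.15, 87000)  # ~6.5 g/m3 of water vapor at 870 hPa
```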
For the routine, we use 3 days of raw eddy-covariance measurements (CSAT3/Campbell Scientific and Li-7200/LI-COR closed-path IR gas analyzer) with a 10 Hz sampling frequency from our station in Finse, Hardangervidda, Norway. The data set is in the folder data/ec-data_10Hz_raw and can be downloaded directly or the whole github repository can be cloned with git clone git@github.com:noctiluc3nt/ec_analyze.git. #loading Reddy package install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE) library(Reddy) #ec data files dir_in="../data/ec-data_10Hz_raw" files=list.files(dir_in,full.names=TRUE) nf=length(files) Each given file contains 30 minutes of measurements, such that we just need to average over one file to get the 30-minute averages and fluxes. In the notebook 04_multiresolution-decomposition.ipynb we will have a look at a method that allows determining suitable averaging times more accurately. In our example routine, the data is first despiked, then the wind is rotated and the sonic measurements are averaged. For the gas analyzer measurements, the unit is converted to a density. Then the turbulence intensities (standard deviations) and the fluxes (covariances) are calculated. For the sensible heat flux, we additionally need to apply the SND correction; the WPL correction can be applied to the latent heat flux. Here, the temperature is given directly in \(^\circ\)C; however, in the direct output of the LI-COR systems (.ghg-files) usually the speed of sound (sos in m/s) is stored, which can be easily converted to temperature with the function sos2Ts. #allocate output var_out=c("time","u_mean","v_mean","w_mean","ws_mean","wd_mean","T_mean","h2o_mean","co2_mean", "u_sd","v_sd","w_sd","T_sd","h2o_sd","co2_sd", "cov_uv","cov_uw","cov_vw","cov_wT","cov_h2ow","cov_co2w","cov_wT_snd","cov_rhoH2Ow_wpl", "rot_angle1","rot_angle2","flag_stationarity","flag_w","flag_distortion") nv=length(var_out) dat=data.frame(array(NA,dim=c(nf,nv))) colnames(dat)=var_out #postprocessing per file (30 mins) for (i in 1:nf) { tmp=read.table(files[i],sep=",",header=T) dat$time[i]=tmp$X[1] #despiking tmp$T_degC=despiking(tmp$T_degC,-40,30) tmp$u_m.s=despiking(tmp$u_m.s,-25,25) tmp$v_m.s=despiking(tmp$v_m.s,-25,25) tmp$w_m.s=despiking(tmp$w_m.s,-5,5) #wind before rotation dat$ws_mean[i]=sqrt(mean(tmp$u_m.s,na.rm=T)^2+mean(tmp$v_m.s,na.rm=T)^2) dat$wd_mean[i]=atan2(mean(tmp$v_m.s,na.rm=T),mean(tmp$u_m.s,na.rm=T)) #rotation rot_wind=rotate_double(tmp$u_m.s,tmp$v_m.s,tmp$w_m.s) tmp$u_m.s=rot_wind$u tmp$v_m.s=rot_wind$v tmp$w_m.s=rot_wind$w dat$rot_angle1[i]=rot_wind$theta dat$rot_angle2[i]=rot_wind$phi #averaging dat$u_mean[i]=mean(tmp$u_m.s,na.rm=T) dat$v_mean[i]=mean(tmp$v_m.s,na.rm=T) dat$w_mean[i]=mean(tmp$w_m.s,na.rm=T) dat$T_mean[i]=mean(tmp$T_degC,na.rm=T) #unit conversion for gases tmp$rhoH2O=ppt2rho(tmp$H2O_ppt,dat$T_mean[i]+273.15,87000) tmp$rhoCO2=ppt2rho(tmp$CO2_ppm/1000,dat$T_mean[i]+273.15,87000,gas="CO2") dat$h2o_mean[i]=mean(tmp$rhoH2O,na.rm=T) dat$co2_mean[i]=mean(tmp$rhoCO2,na.rm=T) #sds dat$u_sd[i]=sd(tmp$u_m.s,na.rm=T) dat$v_sd[i]=sd(tmp$v_m.s,na.rm=T) dat$w_sd[i]=sd(tmp$w_m.s,na.rm=T) dat$T_sd[i]=sd(tmp$T_degC,na.rm=T) dat$h2o_sd[i]=sd(tmp$rhoH2O,na.rm=T) dat$co2_sd[i]=sd(tmp$rhoCO2,na.rm=T) #covs dat$cov_uw[i]=cov(tmp$u_m.s,tmp$w_m.s,use="pairwise.complete.obs") dat$cov_uv[i]=cov(tmp$u_m.s,tmp$v_m.s,use="pairwise.complete.obs") dat$cov_vw[i]=cov(tmp$v_m.s,tmp$w_m.s,use="pairwise.complete.obs") dat$cov_wT[i]=cov(tmp$T_degC,tmp$w_m.s,use="pairwise.complete.obs") 
dat$cov_h2ow[i]=cov(tmp$rhoH2O,tmp$w_m.s,use="pairwise.complete.obs") dat$cov_co2w[i]=cov(tmp$rhoCO2,tmp$w_m.s,use="pairwise.complete.obs") #SND correction dat$cov_wT_snd[i]=SNDcorrection(tmp$u_m.s,tmp$v_m.s,tmp$w_m.s,tmp$T_degC+273.15) #WPL correction #dat$cov_rhoH2Ow_wpl[i]=WPLcorrection(tmp$rhoH2O,w=tmp$w_m.s,Ts=tmp$T_degC+273.15,q=tmp$q) #flagging dat$flag_stationarity[i]=flag_stationarity(tmp$w_m.s,tmp$T_degC,nsub=3000) dat$flag_w[i]=flag_w(dat$w_mean[i]) dat$flag_distortion[i]=flag_distortion(tmp$u_m.s,tmp$v_m.s,dir_blocked=c(340,360)) } The calculated covariances have to be converted to the fluxes (unit W/m\(^2\)), for example with the functions cov2sh and cov2lh: #calculate fluxes dat$sh=cov2sh(dat$cov_wT_snd) dat$lh=cov2lh(dat$cov_h2ow) The output now contains the 30-minute averages, standard deviations and covariances of the measured quantities, which will be used in the following notebooks for some more detailed analysis. #look at output head(dat) A data.frame: 6 × 30 time u_mean v_mean w_mean ws_mean wd_mean T_mean h2o_mean co2_mean u_sd ⋯ cov_co2w cov_wT_snd cov_rhoH2Ow_wpl rot_angle1 rot_angle2 flag_stationarity flag_w flag_distortion sh lh <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ⋯ <dbl> <dbl> <lgl> <dbl> <dbl> <dbl> <dbl> <lgl> <dbl> <dbl> 1 2018-07-20 08:30:00 2.872084 -3.939352e-16 6.931732e-06 2.873170 -2.289284 15.86938 0.007254132 0.0006009614 1.062143 ⋯ -1.567917e-07 0.1052858 NA 228.8337 0.3484284 0 0 NA 123.2512 103.1085 2 2018-07-20 09:00:00 2.864793 7.504684e-17 9.052527e-08 2.864538 -2.326045 16.55190 0.007584267 0.0005978138 1.105122 ⋯ -2.294700e-07 0.1600703 NA 226.7274 0.7745417 0 0 NA 187.3839 162.6196 3 2018-07-20 09:30:00 3.996526 9.365292e-16 3.049021e-05 4.002522 -2.035397 17.05704 0.007472065 0.0005954894 1.409495 ⋯ -1.862787e-07 0.1533514 NA 243.3803 0.2888957 0 0 NA 179.5185 123.5711 4 2018-07-20 10:00:00 4.998016 -2.812846e-16 -2.391239e-06 4.997530 -1.977737 17.60447 0.006762097 0.0005953881 1.289326 ⋯ -1.504328e-07 0.1556649 NA 246.6840 0.3483120 0 0 NA 182.2267 123.4474 5 2018-07-20 10:30:00 4.879095 4.062636e-16 2.187117e-05 4.880696 -2.014489 18.08994 0.005410862 0.0005948927 1.469949 ⋯ -1.989823e-07 0.1914488 NA 244.5783 0.6531922 0 0 NA 224.1166 191.1642 6 2018-07-20 11:00:00 5.225037 -8.977916e-17 -6.972624e-06 5.223984 -2.261205 18.24207 0.004422858 0.0005954766 1.319227 ⋯ -1.330638e-07 0.1490271 NA 230.4425 0.4471074 0 0 NA 174.4563 150.8430 saveRDS(dat,file="../data/ec-data_30min_processed/processed_data_example.rds") Additionally, the data can be gap-filled with gapfilling or averaged to other customized averaging intervals with averaging. The function averaging is based on the RcppRoll package, which provides fast and efficient “rolling” functions utilizing a C++ interface. Generally, it should be noted that different sensors may require different correction methods, see e.g. Foken (2017) for a discussion, so a detailed knowledge of the instrumentation is required. References "],["turbulence-diagnostics.html", "3 Turbulence diagnostics 3.1 Standard turbulence diagnostics and fluxes 3.2 Stability dependence 3.3 Turbulence regimes and “Hockey-stick” model", " 3 Turbulence diagnostics After completing the post-processing, we can now move on to looking at some turbulence diagnostics. Based on the generated half-hourly output of the first notebook (01_post-processing.ipynb), some standard turbulence quantities, i.e. 
stability parameter, friction velocity, TKE and turbulence intensity, are calculated and plotted, which allows studying turbulent flow properties systematically. 3.1 Standard turbulence diagnostics and fluxes Here, some functions from the Reddy package are used to calculate turbulent kinetic energy \(TKE\), velocity scale of TKE \(V_{TKE}\), horizontal turbulence intensity \(TI\), vertical turbulence intensity \(I_w\), friction velocity \(u_*\), Obukhov length \(L\), stability parameter \(\zeta\) and directional shear angle and plot their timeseries for the previously post-processed example data. Turbulence intensity (calc_ti and calc_iw): Turbulence intensity generally refers to the standard deviation, e.g., \(\sigma_u, \sigma_v, \sigma_w, \sigma_T\), and thus describes the mean fluctuation intensity. The horizontal turbulence intensity TI (calc_ti) and the vertical turbulence intensity \(I_w\) (calc_iw) are calculated by normalizing the respective standard deviations with the mean wind speed \(\overline{u}\) (to get a dimensionless measure): \[TI = \frac{\sqrt{\sigma_u^2+\sigma_v^2}}{\overline{u}} ,\quad\quad I_w= \frac{\sigma_w}{\overline{u}} \] Turbulent kinetic energy TKE (calc_tke): TKE describes the mean kinetic energy that the eddies contain and is calculated by \[ TKE = 0.5 (\sigma_u^2 + \sigma_v^2 + \sigma_w^2) =0.5 (\overline{u'^2} + \overline{v'^2} + \overline{w'^2} ).\] From the TKE, a velocity scale can easily be derived \(V_{TKE} = \sqrt{TKE}\) (calc_vtke). Friction velocity \(u_*\) (calc_ustar): The friction velocity describes the effect of friction in the form of a velocity scale \[ u_* = \sqrt[4]{\overline{u'w'}^2+\overline{v'w'}^2}.\] Obukhov length \(L\) (calc_L): The Obukhov length is a length scale that describes the effect of buoyancy versus shear (friction velocity) through \[ L = -\frac{u_*^3 \overline{T_v}}{\kappa \,g\,\overline{w'T'}}, \] and is central in Monin-Obukhov similarity theory (MOST) for deriving dimensionless scaling parameters, in particular the stability parameter \(\zeta\). The fluxes used in the calculation are either the surface fluxes for global scaling (Monin and Obukhov 1954) or the fluxes at the measurement height for local scaling (Nieuwstadt 1984). Stability parameter \(\zeta\) (calc_zeta): The stability parameter is the ratio of measurement height and Obukhov length, and thus a dimensionless measure for stability \[ \zeta = \frac{z}{L}.\] \(\zeta\) is negative in unstable stratification, zero in neutral stratification, and positive in stable stratification, which can be traced back to the sign of the heat flux \(\overline{w'T'}\) in the Obukhov length \(L\). Now, we use some of the Reddy functions to calculate and plot time series of these basic turbulent diagnostics. 
#loading Reddy package install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE) library(Reddy) library(latex2exp) #read in processed example data dat=readRDS("../data/ec-data_30min_processed/processed_data_example.rds") #calculation of some useful turbulence diagnostics dat$tke=calc_tke(dat$u_sd,dat$v_sd,dat$w_sd) dat$vtke=calc_vtke(dat$u_sd,dat$v_sd,dat$w_sd) dat$ti=calc_ti(dat$u_sd,dat$v_sd,dat$ws_mean) dat$iw=calc_iw(dat$w_sd,dat$ws_mean) dat$ustar=calc_ustar(dat$cov_uw,dat$cov_vw) dat$L=calc_L(dat$ustar,dat$T_mean,dat$cov_wT) dat$zeta=calc_zeta(4.4,dat$L) dat$alpha_uw=calc_dshear(dat$cov_uw,dat$cov_vw) #simple timeseries plots par(mfrow=c(3,3)) plot(dat$T_mean,type="l",ylab="temperature [°C]",col=4) grid() plot(dat$ws_mean,type="l",ylab="wind speed [m/s]",col=4) grid() plot(dat$tke,type="l",ylab=TeX("TKE [$m^2/s^2$]"),ylim=c(0,2)) grid() plot(dat$zeta,type="l",ylab="stability parameter [-]") abline(h=0,lty=2) grid() plot(dat$ti,type="l",ylab="turbulence intensity [-]",ylim=c(0,1)) points(dat$iw,type="l",col="gray50") grid() plot(dat$cov_uw,type="l",ylab=TeX("cov(u,w) [$m^2/s^2$]")) grid() plot(dat$cov_wT,type="l",ylab=TeX("cov(T,w) [$K m/s$]"),col="red") grid() plot(dat$cov_h2ow,type="l",ylab=TeX("cov(q,w) [$mmol m/s$]"),col="blue") grid() plot(dat$cov_co2w,type="l",ylab=TeX("cov(c,w) [$ppm m/s$]"),col="orange") grid() As we can see, the stratification is unstable during daytime, when the latent heat flux is also positive, indicating evapotranspiration. The momentum flux is always negative, which indicates downward momentum transport (kinetic energy dissipation) in agreement with a typical logarithmic wind profile. The magnitude of the momentum flux is higher during daytime, in accordance with the higher wind speeds and higher TKE (shear-generated turbulence). The CO\(_2\) flux is negative during daytime (CO\(_2\) uptake due to photosynthesis) and positive at nighttime (CO\(_2\) release). 3.2 Stability dependence It is often interesting to investigate the stability dependence of turbulence characteristics, especially since Monin-Obukhov similarity theory (MOST) predicts all dimensionless turbulence characteristics based on the dimensionless stability parameter \(\zeta\). A simple way is to plot stability versus a turbulence characteristic and bin the data based on stability intervals. For this, the function binning can be used, which bins one variable based on another for predefined bins, as exemplified below for the dependence of the sensible heat flux on the stability parameter. The output contains mean, median, 25% and 75% quartiles per discrete bin as a dataframe. The variables and bins can be customized. This type of analysis is usually applied to longer timeseries and can also be combined with other methods, e.g. quadrant analysis, to study how different coherent structures behave with changing stability, see e.g. Li and Bou-Zeid (2011), Schmutz and Vogt (2019) or Mack et al. (2024). 
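To make the binning logic explicit, here is a rough base-R equivalent; this is a sketch of the per-bin statistics described above, an assumption about the logic rather than Reddy's binning implementation:

```r
# Sketch: bin y by x into predefined bins and return mean, median and
# quartiles per bin (rough stand-in for Reddy's binning).
bin_by <- function(y, x, breaks) {
  idx <- cut(x, breaks = breaks)
  out <- t(sapply(levels(idx), function(b) {
    yb <- y[which(idx == b)]
    if (length(yb) == 0) return(rep(NA_real_, 4))  # empty bin
    c(mean(yb, na.rm = TRUE), median(yb, na.rm = TRUE),
      quantile(yb, c(0.25, 0.75), na.rm = TRUE, names = FALSE))
  }))
  colnames(out) <- c("mean", "median", "q25", "q75")
  out
}
```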
zeta_bins=c(-10^(2:-2),10^(-2:2)) sh_binned=binning(dat$sh,dat$zeta,zeta_bins) sh_binned #look at output xbins=c(-5^(2:-1),0,5^(-1:2)) plot(xbins,sh_binned[,2],type="b",ylim=c(-20,150),lwd=2,col=2,pch=20,xlab="stability parameter [-]",ylab=TeX("SH [W/m$^2$]")) points(xbins,sh_binned[,3],type="l",lty=2,col=2) points(xbins,sh_binned[,4],type="l",lty=2,col=2) #polygon(c(xbins,rev(xbins)),c(sh_binned[,3],rev(sh_binned[,4])),lty=0,col=rgb(0.8,0,0,0.2) ) abline(h=0,lty=2) abline(v=0,lty=2) grid() A matrix: 9 × 4 of type dbl 122.045246 116.804837 109.052920 129.797163 120.620398 125.471467 75.448402 167.986972 17.306097 12.061651 5.647776 24.415438 1.823135 1.864732 1.093005 2.574063 NA NA NA NA -3.564331 -3.564331 -3.564331 -3.564331 -11.192381 -12.213284 -13.937180 -8.561890 -18.102981 -19.201199 -22.358905 -14.674265 -8.655841 -10.389901 -11.853905 -6.324807 Based on our short example data, we can see that the sensible heat flux is positive under unstable conditions and negative under stable conditions. This can also be seen directly from the definition of \(\zeta\) (and \(L\) therein). This type of analysis can be extended to different quantities and binning based on other important scales, e.g. friction velocity. 3.3 Turbulence regimes and “Hockey-stick” model The example shown above for stability dependence can be extended to arbitrary variables and is often used to characterize different turbulence regimes. To diagnose different turbulence regimes, scatter plots of different turbulence diagnostics can be considered, e.g. plotting wind speed vs TKE or stability parameter vs TKE. For example, Sun et al. (2012) investigate turbulence regimes in stable boundary layers and distinguish between weak and strong turbulence regimes. For this, they plot wind speed versus TKE or \(\sigma_w\) and find two different slopes corresponding to different turbulence generation mechanisms (local instability versus bulk shear) – this simple model is now commonly referred to as “Hockey-stick model” (Sun et al. (2012), therein Fig. 2). However, this schematic is very simplified, and especially at sites in complex terrain (like ours) various factors (e.g., the footprint) modify it. In our example data (3 summer days), we see the bulk-shear-dominated regime, which is indicated by the linear regression. #some basic plots plot(dat$ws_mean,dat$tke,xlim=c(0,7),ylim=c(0,2.5),xlab="wind speed [m/s]",ylab=TeX("TKE [$m^2/s^2$]"),cex=2,pch=20,col=rgb(0,0,0,0.4)) cond=(dat$ws_mean>4) fit=lm(dat$tke[cond]~dat$ws_mean[cond]) abline(fit,col=2,lty=2,lwd=2) #abline(0.1,0.1,col=4,lty=2) grid() References "],["quadrant-analysis.html", "4 Quadrant analysis 4.1 Calculation of occurrence frequencies and strengths of the four quadrants with calc_quadrant_analysis 4.2 Plotting quadrant analysis with plot_quadrant_analysis", " 4 Quadrant analysis Quadrant analysis is a simple conditional sampling method to detect coherent structures directly from the high-frequency measurements (e.g. Katul et al. (1997)) or simulations (e.g. Wallace (2016)). Coherent structures maintain their structure for a significant time period, and due to their coherence, they can generate deviations from mean flow properties (e.g. Jimenez (2018)). 
For quadrant analysis, a scatter plot of a pair of variables is considered, which are normalized through \[ \hat{x} = \frac{x-\overline{x}}{\sigma_x}\] by subtracting the mean value \(\overline{x}\) and dividing by \(\sigma_x\) to make different variables with different value ranges comparable and centered around zero. Based on this, strength \(S_i\) and duration \(D_i\) of each quadrant (\(i=1, ..., 4\)) can be calculated: \[ S_i = \frac{\overline{\hat{x}'\hat{w}'}_i}{\overline{\hat{x}\hat{w}}}, \quad\quad D_i = \frac{1}{t_s} \int_0^{t_s} I_i(t) dt\] \(S_i\) represents the relative strength of the respective quadrant (normalized by the total covariance) and \(D_i\) the occurrence frequency of each quadrant (with averaging time \(t_s\) and indicator function \(I_i\)). To get an overall measure for the contribution of disorganized (i.e. counter-gradient contribution) versus organized (i.e. down-gradient contribution) structures, the exuberance \(E\) (Shaw, Tavanger, and Ward 1983) and the organization ratio \(OR\) (Mack et al. 2024) can be defined as: \[ E = \frac{S(disorganized)}{S(organized)}, \quad\quad OR = \frac{D(disorganized)}{D(organized)}\] For example, for the sensible heat flux, warm updrafts and cold downdrafts are the organized structures, and conversely cold updrafts and warm downdrafts the disorganized ones (in the figure below, organized structures are shaded and disorganized ones are not). To isolate particularly strong coherent structures, a hole size \(H\) can be applied with the filter condition \(\vert \hat{x}\hat{y} \vert \le H \cdot \vert \overline{\hat{x}'\hat{y}'}\vert\), which removes the points inside the hole (hyperbolic curves in the figure). Usually, \(y\) is chosen to be the vertical velocity \(w\). The quadrant analysis is directly related to the fluxes by using the respective constituting quantities: \((u,w)\) for the momentum flux, \((T,w)\) for the sensible heat flux, \((q,w)\) for the latent heat flux and \((c,w)\) for the CO\(_2\) flux – as visualized in the figure (adapted from Mack et al. (2024)). #loading Reddy package install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE) library(Reddy) #ec data files dir_in="../data/ec-data_10Hz_raw" files=list.files(dir_in,full.names=TRUE) nf=length(files) 4.1 Calculation of occurrence frequencies and strengths of the four quadrants with calc_quadrant_analysis To perform quadrant analysis, Reddy provides two functions: calc_quadrant_analysis for calculating occurrence frequency and strength of the four quadrants and plot_quadrant_analysis for plotting the two variables as a scatter plot with a 2d kernel density estimation and a linear regression, as detailed below. The function calc_quadrant_analysis counts the occurrence frequency of each quadrant and calculates their strength as product \(\hat{x}\hat{y}\) and as covariance \(\overline{\hat{x}'\hat{y}'}\). The argument do_normalization = TRUE can be used to normalize the two variables, and with hole_sizes one or several hole sizes can be applied. The output is a list containing \(D_i\) and \(S_i\) for every quadrant and hole size. i=8 #select a file tmp=read.table(files[i],sep=",",header=T) qa_Tw=calc_quadrant_analysis(tmp$T_degC,tmp$w_m.s) #based on the raw data (10 Hz) directly (i.e., unrotated) str(qa_Tw) List of 10 $ hole_sizes : int [1:11] 0 1 2 3 4 5 6 7 8 9 ... $ occurrence : int [1:4, 1:11] 5448 2975 6619 2958 4689 144 3812 300 4232 8 ... $ product : num [1:4, 1:11] 1.068 -0.291 0.619 -0.42 0.816 ... 
$ covariance : num [1:4, 1:11] 0.10672 0.05309 -0.00914 0.03226 0.31902 ... $ covariance_total : num 0.15 $ correlation_total : num 0.434 $ product_total : num [1:18000] 0.0448 -0.0544 -0.4983 0.115 0.3095 ... $ exuberance : num [1:11] -0.422 -2.699 -5.521 -7.411 NaN ... $ organization_ratio: num [1:11] 0.491672 0.052229 0.00992 0.002498 0.000495 ... $ meta : chr "Output format: rows represent the quadrants Q1, Q2, Q3, Q4 -- columns represent selected hole sizes" 4.2 Plotting quadrant analysis with plot_quadrant_analysis plot_quadrant_analysis plots a scatter plot of two variables with a 2d kernel density estimation (MASS::kde2d) and a linear regression (lm()) to allow for a visual inspection. Example: Quadrant Analysis (T,w) during daytime i=8 #select a file -- a daytime example print(files[i]) tmp=read.table(files[i],sep=",",header=T) plot_quadrant_analysis(tmp$T_degC,tmp$w_m.s,xlab="T (normalized)",ylab="w (normalized)",main="Quadrant Analysis (T,w)") [1] "../data/ec-data_10Hz_raw/2018-07-20T120000.csv" Call: lm(formula = yval ~ xval) Residuals: Min 1Q Median 3Q Max -4.5818 -0.5491 0.0005 0.5471 4.1136 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -3.849e-16 6.716e-03 0.00 1 xval 4.337e-01 6.716e-03 64.58 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9011 on 17998 degrees of freedom Multiple R-squared: 0.1881, Adjusted R-squared: 0.1881 F-statistic: 4170 on 1 and 17998 DF, p-value: < 2.2e-16 Example: Quadrant Analysis (T,w) during nighttime i=38 #select a file -- a nighttime example print(files[i]) tmp=read.table(files[i],sep=",",header=T) plot_quadrant_analysis(tmp$T_degC,tmp$w_m.s,xlab="T (normalized)",ylab="w (normalized)",main="Quadrant Analysis (T,w)") #based on the raw data (10 Hz) directly (i.e., unrotated) [1] "../data/ec-data_10Hz_raw/2018-07-21T030000.csv" Call: lm(formula = yval ~ xval) Residuals: Min 1Q Median 3Q Max -5.8334 -0.5262 0.0022 0.5425 6.2520 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -3.006e-17 7.120e-03 0.00 1 xval -2.957e-01 7.121e-03 -41.53 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9553 on 17998 degrees of freedom Multiple R-squared: 0.08745, Adjusted R-squared: 0.0874 F-statistic: 1725 on 1 and 17998 DF, p-value: < 2.2e-16 Example: Quadrant Analysis (u,w) during nighttime plot_quadrant_analysis(tmp$u_m.s,tmp$w_m.s,xlab="u (normalized)",ylab="w (normalized)",main="Quadrant Analysis (u,w)") #based on the raw data (10 Hz) directly (i.e., unrotated) Call: lm(formula = yval ~ xval) Residuals: Min 1Q Median 3Q Max -5.9799 -0.5474 -0.0342 0.5301 6.6338 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -3.749e-16 7.326e-03 0.00 1 xval -1.843e-01 7.326e-03 -25.15 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9829 on 17998 degrees of freedom Multiple R-squared: 0.03395, Adjusted R-squared: 0.0339 F-statistic: 632.5 on 1 and 17998 DF, p-value: < 2.2e-16 Quadrant analysis can be applied to any combination of two measured quantities and also allows checking the measurement quality or the significance of the relation between them. It can also be used to check the effect of the coordinate rotation (e.g., rotate_double) visually. Quadrant analysis has been used to study coherent structures and how they affect fluxes and cause flux dissimilarity, e.g. Thomas and Foken (2007), Li and Bou-Zeid (2011), Schmutz and Vogt (2019) and Mack et al. (2024). 
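To make the counting behind these numbers concrete, here is a compact re-implementation of the D_i and S_i definitions from above; a sketch independent of Reddy's calc_quadrant_analysis, assuming gap-free input:

```r
# Sketch: normalize two series, then compute quadrant occurrence fractions
# (D_i) and relative strengths (S_i), optionally with a hole-size filter.
quadrant_sketch <- function(x, y, hole = 0) {
  xn <- (x - mean(x)) / sd(x)                 # normalized series
  yn <- (y - mean(y)) / sd(y)
  p  <- xn * yn                               # pointwise products
  keep <- abs(p) >= hole * abs(mean(p))       # hole-size filter (H = hole)
  # quadrant index: Q1 (+,+), Q2 (-,+), Q3 (-,-), Q4 (+,-)
  q <- ifelse(yn > 0, ifelse(xn > 0, 1, 2), ifelse(xn < 0, 3, 4))
  list(duration = tapply(keep, q, sum) / sum(keep),  # D_i
       strength = tapply(p * keep, q, sum) / sum(p)) # S_i
}
# e.g. quadrant_sketch(tmp$T_degC, tmp$w_m.s, hole = 1)
```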
The concept can be extended to a combination of three variables in the framework of octant analysis, e.g. Guo et al. (2022). References "],["spectra.html", "5 Spectra 5.1 Spectral analysis of timeseries 5.2 FFT spectrum and comparison to theoretical spectra 5.3 Multiresolution decomposition (MRD)", " 5 Spectra 5.1 Spectral analysis of timeseries Turbulence is associated with different scales, which interact in a complex manner; to investigate them, spectra can be examined. Thereby, the measurements are transformed from the time domain to the frequency domain. The most common approach is to represent the timeseries with a series of sine and cosine functions (with different frequencies), which is referred to as Fourier analysis. A numerically optimized way to calculate the Fourier transformation is to use the Fast Fourier Transform (FFT, in rbase: fft). However, periodicity is not always a suitable assumption, such that other basis functions and approaches might be more applicable. With a wavelet transform, information in both the time and frequency domains is retained, and different wavelet basis functions (e.g., Morlet wavelets or Haar-wavelets) allow representing localized and non-periodic behaviour (WaveletComp::wt.image). A very practical discrete wavelet transform is multiresolution decomposition (MRD, Vickers and Mahrt (2003), Reddy::calc_mrd), which is routinely applied in the analysis of eddy-covariance data (see details below). A related method used for flux calculations from eddy-covariance data is based on Ogives, that is, a cumulative frequency distribution, i.e. the sum of the cospectral energy. Sievers et al. (2015) developed an Ogive optimization, which allows disentangling low-frequency contributions to flux estimates. This approach is particularly relevant under low-flux conditions, e.g. with changing signs in one averaging interval. The low-frequency contributions are generally associated with non-local features, e.g. topography, while high-frequency contributions are local. Quick overview: Fast Fourier Transform FFT (rbase: fft, spectrum) Wavelets (WaveletComp::wt.image) Multiresolution decomposition MRD (Reddy::calc_mrd) Ogives (agricolae::ogive.freq) #loading Reddy package install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE) library(Reddy) #ec data files dir_in="../data/ec-data_10Hz_raw" files=list.files(dir_in,full.names=TRUE) nf=length(files) i=8 #select a file tmp=read.table(files[i],sep=",",header=T) 5.2 FFT spectrum and comparison to theoretical spectra The rbase function spectrum calculates the spectrum based on FFT and plots by default an associated periodogram. spectrum(tmp$u_m.s) However, to systematically investigate spectral density and reduce the noise, it is recommended to apply binning (i.e., averaging over frequency intervals), which can be done with the function Reddy::calc_spectrum (as a wrapper of spectrum). The resulting averaged spectra can then be compared to theoretical slopes. In homogeneous and isotropic turbulence a spectral slope of -5/3 follows from theoretical considerations (Kolmogorov's energy cascade), which is usually used as a visual comparison. Deviations from this -5/3-slope indicate that either more energy (steeper slope) or less energy (weaker slope) is dissipated, which can have various reasons, e.g. turbulence anisotropy or energy injections. spectrum_u = calc_spectrum(tmp$u_m.s) Call: lm(formula = log(sbin[, 2]) ~ bins[2:nbins]) Residuals: Min 1Q Median 3Q Max -2.20343 -0.08638 0.00209 0.21610 1.08777 Coefficients: Estimate Std. 
Error t value Pr(>|t|) (Intercept) -3.9486 0.2669 -14.79 2.23e-16 *** bins[2:nbins] -2.7810 0.1064 -26.15 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.6212 on 34 degrees of freedom (63 observations deleted due to missingness) Multiple R-squared: 0.9526, Adjusted R-squared: 0.9512 F-statistic: 683.6 on 1 and 34 DF, p-value: < 2.2e-16 5.3 Multiresolution decomposition (MRD) Multiresolution decomposition (MRD) is a method to characterize the timescale dependence of variances (spectrum) or covariances (cospectrum) and to find scale gaps between turbulent and submeso-scale motions. It uses Haar wavelets, which have the advantage over Fourier analysis that no periodicity is assumed. For this, the time series is successively cut in half, as visualized in the figure. 5.3.1 Calculating multiresolution decomposition with calc_mrd #cospectra mrd_uw=calc_mrd(tmp$u_m.s,tmp$w_m.s,time_res=0.1) #momentum flux mrd_Tw=calc_mrd(tmp$T_degC,tmp$w_m.s,time_res=0.1) #sensible heat flux #spectra mrd_ww=calc_mrd(tmp$w_m.s,tmp$w_m.s,time_res=0.1) #vertical velocity mrd_TT=calc_mrd(tmp$T_degC,tmp$T_degC,time_res=0.1) #temperature The function returns a dataframe containing index, exponent \(m\), scale (i.e. \(2^{m}\)), time [s], mean, median, 25% and 75% quartiles as columns. The number of rows is given by the largest \(M\) fulfilling \(2^M \le\) #measurements. #look into output mrd_uw A data.frame: 15 × 8 index m scale time mean median q25 q75 <int> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> 1 14 16384 1638.4 2.009888e-03 6.411861e-05 -5.739598e-03 7.411443e-03 2 13 8192 819.2 3.723193e-03 3.046532e-04 -7.508746e-03 1.040180e-02 3 12 4096 409.6 5.007376e-03 6.024576e-04 -9.401348e-03 1.487008e-02 4 11 2048 204.8 5.831290e-03 6.054180e-04 -9.164119e-03 1.592909e-02 5 10 1024 102.4 9.752098e-03 1.325413e-03 -6.967310e-03 1.938746e-02 6 9 512 51.2 1.090534e-02 2.818075e-03 -4.432612e-03 1.858083e-02 7 8 256 25.6 1.465273e-02 3.577008e-03 -3.271827e-03 2.213298e-02 8 7 128 12.8 1.157982e-02 3.560662e-03 -6.819690e-03 1.801111e-02 9 6 64 6.4 9.264997e-03 5.478194e-04 -8.774036e-03 9.014596e-03 10 5 32 3.2 6.077250e-03 4.486199e-03 -2.698569e-03 9.425087e-03 11 4 16 1.6 4.209757e-03 -5.068024e-04 -4.568223e-03 7.532330e-03 12 3 8 0.8 -3.430667e-03 -4.011748e-03 -6.571402e-03 -8.710123e-04 13 2 4 0.4 -6.626465e-03 -6.626465e-03 -7.390707e-03 -5.862222e-03 14 1 2 0.2 -4.079521e-03 -4.079521e-03 -4.079521e-03 -4.079521e-03 15 0 1 0.1 -8.838739e-34 -8.838739e-34 -8.838739e-34 -8.838739e-34 5.3.2 Plotting multiresolution decomposition with plot_mrd plot_mrd takes an object returned by calc_mrd and plots mean, median and quartiles versus time. plot_mrd(mrd_uw, main="Cospectrum (u,w)") #plot_mrd(mrd_Tw, main="Cospectrum (T,w)") #plot_mrd(mrd_ww, main="Spectrum (w,w)") plot_mrd(mrd_TT, main="Spectrum (T,T)") Composite MRDs can be created by averaging over several MRDs (it is also possible to distinguish different flow regimes, e.g., based on the stability parameter calc_zeta) and can be used to find long-term characteristics (e.g., scale gaps). Scale gap(s) are defined as the zero-crossings of the spectrum or cospectrum (larger than the first zero-crossing at the measurement frequency itself). 
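For readers who want to see the halving recursion itself, here is a simplified sketch of the Haar-based MRD (after Vickers and Mahrt (2003)); it is not Reddy's calc_mrd, which additionally returns medians, quartiles and time in seconds:

```r
# Sketch of multiresolution (co)spectral decomposition: a record of length
# 2^M is successively halved; at each scale, the mean product of the segment
# means is accumulated and the segment means are removed before recursing.
mrd_sketch <- function(x, w) {
  M <- floor(log2(length(x)))
  x <- x[1:2^M]; w <- w[1:2^M]          # truncate to a power of two
  D <- numeric(M + 1)
  for (m in M:0) {                      # largest to smallest scale
    seg <- rep(seq_len(2^(M - m)), each = 2^m)
    xm <- tapply(x, seg, mean)          # segment means at scale 2^m
    wm <- tapply(w, seg, mean)
    D[m + 1] <- mean(xm * wm)           # (co)spectral contribution
    x <- x - xm[seg]; w <- w - wm[seg]  # remove means, then halve segments
  }
  data.frame(m = 0:M, scale = 2^(0:M), D = D)
}
# e.g. mrd_sketch(tmp$u_m.s, tmp$w_m.s)  # cospectrum of the momentum flux
```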
References "],["reynolds-stress-tensor.html", "6 Reynolds stress tensor 6.1 Invariant analysis of the Reynolds stress tensor 6.2 Anisotropy and velocity aspect ratio (VAR)", " 6 Reynolds stress tensor The Reynolds stress tensor \[ R := \overline{u_i ' u_j '} = \begin{pmatrix} \overline{u'^2} & \overline{u'v'} & \overline{u'w'} \\ \overline{v'u'} & \overline{v'^2} & \overline{v'w'} \\ \overline{w'u'} & \overline{w'v'} & \overline{w'^2} \end{pmatrix}, \quad \textrm{with} \quad R = R^T\] summarizes all normal and shear stresses. Based on an invariant analysis, the most important geometric properties can be derived from its eigenvalues and eigenvectors. Using a linear combination of the three eigenvalues, a two-dimensional mapping into an equilateral triangle – called barycentric map (Banerjee et al. (2007), see figure) – with the coordinates \((x_B,y_B)\), can be constructed, which allows characterizing the anisotropy (\(y_B\)) and the limiting states. The three corners of the triangle represent the three limiting states (the 1-, 2- and 3-component limits), where the 3-component limit corresponds to isotropic turbulence. The 2-component limit represents “disk-like” turbulence associated with strong wind shear and the 1-component limit “rod-like” turbulence, which is related to wave-like motions, e.g. internal gravity waves. #loading Reddy package install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE) library(Reddy) #read in processed example data dat=readRDS("../data/ec-data_30min_processed/processed_data_example.rds") 6.1 Invariant analysis of the Reynolds stress tensor 6.1.1 Performing the invariant analysis of the Reynolds stress tensor with calc_anisotropy The function calc_anisotropy performs the invariant analysis and takes for this vectors for all six independent components of the Reynolds stress tensor as input (since it is symmetric, there are only six, not nine, independent entries) in the form \[ R = \begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{pmatrix} \] and in the order \(a_{11}, a_{12}, a_{13}, a_{22}, a_{23}, a_{33}\). rey_ana = calc_anisotropy(dat$u_sd^2,dat$cov_uv,dat$cov_uw,dat$v_sd^2,dat$cov_vw,dat$w_sd^2) str(rey_ana) List of 6 $ xb : num [1:127] 0.348 0.249 0.326 0.257 0.188 ... $ yb : num [1:127] 0.119 0.124 0.101 0.103 0.117 ... $ eta : num [1:127] 0.165 0.152 0.166 0.157 0.148 ... $ xi : num [1:127] -0.0562 -0.1214 -0.0821 -0.1202 -0.1353 ... $ eigenvalues : num [1:127, 1:3] 0.283 0.232 0.281 0.245 0.204 ... $ eigenvectors: num [1:127, 1:3, 1:3] 0.89759 0.93931 0.86082 0.00664 -0.64731 ... The output contains the coordinates of the barycentric map (xb, yb) and of the Lumley triangle (eta, xi) as well as all eigenvalues and eigenvectors. 6.1.2 Plotting the barycentric map using plot_barycentric_map The function plot_barycentric_map takes xb, yb as input (as calculated in the invariant analysis before) and plots them in the barycentric map. plot_barycentric_map(rey_ana$xb,rey_ana$yb) The three corners of the triangle represent the three limiting states: 3-component limit (isotropic, “sphere-like”) at \((0.5,\sqrt{3}/2)\), 2-component limit (“disk-like”) at \((0,0)\) and 1-component limit (“rod-like”) at \((1,0)\). \(y_B\) is a measure of anisotropy – from \(y_B = 0\) completely anisotropic to \(y_B = \sqrt{3}/2\) perfectly isotropic. Anisotropy leads to deviations from classical scaling functions used in Monin-Obukhov similarity theory, e.g. 
Stiperski, Calaf, and Rotach (2019) and Stiperski and Calaf (2023). 6.2 Anisotropy and velocity aspect ratio (VAR) The velocity aspect ratio VAR (calc_var) is an approximation of anisotropy that only takes the diagonal elements of the Reynolds stress tensor, i.e., \(\sigma_u, \sigma_v,\sigma_w\), into account (Mahrt 2010). A deviation from a linear regression between VAR and \(y_B\) allows quantifying the effect of shear stresses. As seen in the plot below, for small values of \(y_B\) (i.e., very anisotropic turbulence), the deviation from the linear fit is larger than for higher values of \(y_B\). var=calc_var(dat$u_sd,dat$v_sd,dat$w_sd) plot(var,rey_ana$yb,col=rgb(0,0,0,0.5),pch=20,cex=2,xlab="VAR",ylab="yb") grid() fit=lm(rey_ana$yb~var) abline(fit,lwd=2,col=2) print(summary(fit)) Call: lm(formula = rey_ana$yb ~ var) Residuals: Min 1Q Median 3Q Max -0.011421 -0.003373 -0.001110 0.003227 0.021316 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.08683 0.00265 -32.77 <2e-16 *** var 0.64442 0.00660 97.64 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.005629 on 125 degrees of freedom Multiple R-squared: 0.9871, Adjusted R-squared: 0.987 F-statistic: 9534 on 1 and 125 DF, p-value: < 2.2e-16 References "],["flux-footprint.html", "7 Flux footprint 7.1 Flux footprint parametrizations 7.2 2D flux footprint estimate", " 7 Flux footprint 7.1 Flux footprint parametrizations The calculation of a 2d flux footprint makes it possible to estimate the size of the surface area that contributes to the measured flux. This also allows analyzing whether changes in the flux result from a change in the footprint (e.g. surface composition, vegetation, surface roughness). Here, the flux footprint parametrization according to Kljun et al. (2015) is used. The mathematical idea for deriving a flux footprint parametrization is to express the flux (\(F_c\)) as an integral over the distribution of its sinks and sources (\(S_c\)) times a transfer function \(f\) (the flux footprint): \[ F_c(0,0,z) = \int\int S_c(x,y) f(x,y) \:dx\,dy \] By treating the streamwise and crosswise velocity independently, the footprint can be expressed as the product of the crosswind-integrated footprint (\(\overline{f^y}(x)\), which is then only a function of \(x\)) and a function expressing the crosswind dispersion (\(D_y\)) through \[ f(x,y) = \overline{f^y}(x)D_y. \] This assumption leads to a symmetric footprint in the crosswind direction. For further derivations, a concrete footprint model has to be applied, which in Kljun et al. (2015) is an advanced Lagrangian particle dispersion model (LPDM-B) based on 3d particle backtracking between the surface and the boundary layer height \(h\) that is valid for steady flows under all stabilities. There are several other approaches to flux footprint estimation (e.g. Hsieh, Katul, and Chi (2000) and Kormann and Meixner (2001)), but the Kljun et al. (2015) model is the most common approach today. #load Reddy package install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE) library(Reddy) #read in processed example data dat=readRDS("../data/ec-data_30min_processed/processed_data_example.rds") #select file i=8 #daytime example 7.2 2D flux footprint estimate 7.2.1 Calculate 2D flux footprint estimate with calc_flux_footprint The function calc_flux_footprint uses the 2d flux footprint parametrization (FFP) according to Kljun et al. 
(2015) to calculate the footprint based on measurement height zm, mean horizontal wind speed u_mean, boundary layer height h, Obukhov length L (calc_L), standard deviation of the cross-wind component v_sd and either friction velocity ustar (calc_ustar) or surface roughness length z0, at a resolution given by nres. The boundary layer height can be taken from, e.g., ERA5. ustar=calc_ustar(dat$cov_uw,dat$cov_vw) L=calc_L(ustar,dat$T_mean,dat$cov_wT) zm=4.4 h=700 ffp=calc_flux_footprint(zm,dat$u_mean[i],h,L[i],dat$v_sd[i],ustar[i]) str(ffp) List of 9 $ xmax : num 23 $ x : num [1:999] 4.38 5.17 5.96 6.76 7.55 ... $ fy_mean : num [1:999] 3.43e-20 3.60e-10 5.56e-07 1.85e-05 1.37e-04 ... $ x2d : num [1:1499, 1:999] 4.38 4.38 4.38 4.38 4.38 ... $ y2d : num [1:1499, 1:999] -592 -591 -590 -589 -589 ... $ f2d : num [1:999, 1:1499] 0 0 0 0 0 0 0 0 0 0 ... $ xcontour :List of 9 ..$ : num [1:1573] 6.76 6.75 6.72 6.7 6.69 ... ..$ : num [1:815] 7.55 7.48 7.42 7.38 7.35 ... ..$ : num [1:543] 8.34 8.25 8.16 8.08 8.03 ... ..$ : num [1:397] 9.13 9.03 8.91 8.81 8.73 ... ..$ : num [1:307] 9.92 9.78 9.64 9.52 9.43 ... ..$ : num [1:235] 10.7 10.7 10.5 10.3 10.2 ... ..$ : num [1:181] 11.5 11.3 11.2 11 11 ... ..$ : num [1:133] 12.3 12.2 12.1 12 12 ... ..$ : num [1:87] 13.9 13.8 13.7 13.7 13.7 ... $ ycontour :List of 9 ..$ : num [1:1573] -2.39 -2.37 -1.58 -0.79 0 ... ..$ : num [1:815] -4.56 -3.95 -3.16 -2.37 -1.58 ... ..$ : num [1:543] -6.14 -5.53 -4.74 -3.95 -3.16 ... ..$ : num [1:397] -6.85 -6.32 -5.53 -4.74 -3.95 ... ..$ : num [1:307] -6.95 -6.32 -5.53 -4.74 -3.95 ... ..$ : num [1:235] -6.48 -6.32 -5.53 -4.74 -3.95 ... ..$ : num [1:181] -5.45 -4.74 -3.95 -3.16 -2.37 ... ..$ : num [1:133] -3.37 -3.16 -2.37 -1.58 -0.79 ... ..$ : num [1:87] -1.91 -1.58 -0.79 0 0.79 ... $ contour_levels: num [1:9] 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 7.2.2 Plotting of flux footprint with plot_flux_footprint The function plot_flux_footprint takes as input an object returned by calc_flux_footprint and plots the cross-wind integrated footprint and the 2d footprint. plot_flux_footprint(ffp) Flux footprint climatologies can be created as composite footprints by averaging over several FFP calculations; see the dedicated webpage https://www.footprint.kljun.net/. The flux footprint can then be plotted on an aerial photo or an ecosystem type classification, see e.g. Fig. 5 in Kljun et al. (2015) or Fig. 4 in Pirk et al. (2023). References "],["surface-energy-balance.html", "8 Surface energy balance 8.1 Surface energy balance closure 8.2 Calculating surface energy balance 8.3 Hydrological measures: Bowen ratio and evaporative fraction 8.4 Meteorological contextualization", " 8 Surface energy balance 8.1 Surface energy balance closure The surface energy balance (SEB) is a framework to describe exchange processes between atmosphere and land, and can be written as \\[ R_{net} - G = SH + LH + I_{SEB} \\] with net radiation \\(R_{net} = SW\\downarrow - SW\\uparrow + LW\\downarrow - LW\\uparrow\\) (combining shortwave (SW) and longwave (LW), incoming (\\(\\downarrow\\)) and outgoing (\\(\\uparrow\\)) radiative fluxes) and the ground heat flux \\(G\\). The turbulent fluxes are given by \\[\\begin{align} SH = \\rho c_p \\overline{w'T'} \\quad \\textrm{ and } \\quad LH = \\rho L_v \\overline{w'q'} \\end{align}\\] with the air density \\(\\rho\\), the heat capacity of air (under constant pressure) \\(c_p\\) and the latent heat of vaporization \\(L_v\\). 
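As a quick plausibility check of these definitions, typical values can be inserted directly (a minimal sketch with assumed constants; in the Reddy workflow this conversion is done by cov2sh and cov2lh):
#back-of-the-envelope flux conversion with assumed constants
rho <- 1.2 #air density [kg/m^3], assumed
cp <- 1004 #heat capacity of air at constant pressure [J/(kg K)]
Lv <- 2.5e6 #latent heat of vaporization [J/kg]
rho * cp * 0.15 #cov(w,T) = 0.15 K m/s gives SH of about 181 W/m^2
rho * Lv * 5e-5 #cov(w,q) = 5e-5 m/s gives LH of about 150 W/m^2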
The sensible heat flux SH is positive under unstable conditions (ground heats atmosphere) and negative under stable conditions (atmosphere heats ground). Positive latent heat fluxes (LH) represent evaporation or transpiration, whereas negative LH indicates condensation or deposition of water (vapor). The last term \\(I_{SEB}\\) describes the imbalance (or residual flux). Depending on the value of \\(I_{SEB}\\), the SEB is called closed or unclosed \\[\\begin{align} I_{SEB} \\begin{cases} = 0 \\rightarrow \\textrm{closed} \\\\ \\ne 0 \\rightarrow \\textrm{unclosed} \\end{cases} \\quad \\textrm{and} \\quad CR := \\frac{R_{net}-G}{SH+LH} \\begin{cases} = 1 \\rightarrow \\textrm{closed} \\\\ \\ne 1 \\rightarrow \\textrm{unclosed}, \\end{cases} \\end{align}\\] which can also be expressed as a relative measure, the closure ratio CR. The advantage of the closure ratio is the normalization by the available radiation, but the disadvantage is a cancellation of biases. Wilson and others (2002) showed, based on the FLUXNET towers, that there is on average a SEB non-closure of 20-30 %. Reasons for the non-closure are (similar to the TKE budget non-closure) horizontal advection, flux divergence, submeso-scale motions, different measurement footprints, melting, runoff or rain fluxes, canopy interactions as well as possible measurement and post-processing errors (as reviewed in great detail by Mauder, Foken, and Cuxart (2020)). Stoy et al. (2013) found a larger non-closure in heterogeneous terrain and a correlation with friction velocity. 8.2 Calculating surface energy balance #loading Reddy package install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE) library(Reddy) library(dplyr) sigma=5.67*10^(-8) #read in processed example data dat=readRDS("../data/ec-data_30min_processed/processed_data_example.rds") dat$TIME=as.POSIXct(dat$time,format="%F %T") Plotting of surface energy balance with plot_seb The function plot_seb plots the surface energy balance as time series and as a scatter plot (Rnet-GH vs. SH+LH) with a linear regression, and calculates the residual flux and the closure ratio. In addition to the turbulence data used so far (which contains the two turbulent fluxes SH, LH), we now also need additional radiative measurements (all four radiative fluxes SWin, SWout, LWin, LWout) and the ground heat flux (GH), which are available in the file ../data/radiation-data_30min/biomet_data.csv as 30-minute averages. 
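Before applying it to real data, a toy calculation with illustrative values (in W/m\\(^2\\)) shows the quantities plot_seb derives:
#toy closure calculation with illustrative values [W/m^2]
rnet <- 500; g <- 50; sh <- 200; lh <- 180
i_seb <- rnet - g - sh - lh #residual flux: 70 W/m^2 remain unclosed
cr <- (rnet - g) / (sh + lh) #closure ratio: 450/380, about 1.18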
#read in radiation data dat_rad=read.table("../data/radiation-data_30min/biomet_data.csv",sep=",",header=T) colnames(dat_rad)=c("time","rh","ta","swin","swout","lwin","lwout","shf1","shf2") dat_rad$TIME=as.POSIXct(dat_rad$time,format="%F %T") dat=inner_join(dat,dat_rad,by="TIME") head(dat) A data.frame: 6 × 40 time.x u_mean v_mean w_mean ws_mean wd_mean T_mean h2o_mean co2_mean u_sd ⋯ TIME time.y rh ta swin swout lwin lwout shf1 shf2 <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ⋯ <dttm> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> 1 2018-07-20 08:30:00 2.872084 -3.939352e-16 6.931732e-06 2.873170 -2.289284 15.86938 0.007254132 0.0006009614 1.062143 ⋯ 2018-07-20 08:30:00 2018-07-20 08:30:00 56.54792 16.28775 715.4410 105.7314 312.4243 458.7009 21.59691 1.682706 2 2018-07-20 09:00:00 2.864793 7.504684e-17 9.052527e-08 2.864538 -2.326045 16.55190 0.007584267 0.0005978138 1.105122 ⋯ 2018-07-20 09:00:00 2018-07-20 09:00:00 54.34070 16.77433 756.2658 109.9171 319.3567 460.9099 26.81290 1.952374 3 2018-07-20 09:30:00 3.996526 9.365292e-16 3.049021e-05 4.002522 -2.035397 17.05704 0.007472065 0.0005954894 1.409495 ⋯ 2018-07-20 09:30:00 2018-07-20 09:30:00 47.65451 17.40209 797.9751 112.8786 301.3082 463.2456 30.59997 2.139507 4 2018-07-20 10:00:00 4.998016 -2.812846e-16 -2.391239e-06 4.997530 -1.977737 17.60447 0.006762097 0.0005953881 1.289326 ⋯ 2018-07-20 10:00:00 2018-07-20 10:00:00 36.98989 18.08719 809.7048 114.7406 295.6089 467.7517 33.01136 2.270067 5 2018-07-20 10:30:00 4.879095 4.062636e-16 2.187117e-05 4.880696 -2.014489 18.08994 0.005410862 0.0005948927 1.469949 ⋯ 2018-07-20 10:30:00 2018-07-20 10:30:00 30.13753 18.32750 823.0150 117.3849 291.3130 469.1732 34.70869 2.299792 6 2018-07-20 11:00:00 5.225037 -8.977916e-17 -6.972624e-06 5.223984 -2.261205 18.24207 0.004422858 0.0005954766 1.319227 ⋯ 2018-07-20 11:00:00 2018-07-20 11:00:00 25.45414 18.60887 827.8588 118.4659 288.2217 467.7229 34.78918 2.284315 plot_seb(dat$swin,dat$swout,dat$lwin,dat$lwout,dat$sh,dat$lh,dat$shf1) Call: lm(formula = y ~ x) Residuals: Min 1Q Median 3Q Max -134.13 -19.47 -5.41 28.33 170.39 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 27.66214 5.45569 5.07 1.4e-06 *** x 0.52887 0.02051 25.78 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 48.56 on 125 degrees of freedom Multiple R-squared: 0.8417, Adjusted R-squared: 0.8405 F-statistic: 664.7 on 1 and 125 DF, p-value: < 2.2e-16 8.3 Hydrological measures: Bowen ratio and evaporative fraction The Bowen ratio \\(BR\\) and the evaporative fraction \\(EF\\) are given by \\[ BR:= \\frac{SH}{LH}\\quad\\quad\\textrm{and}\\quad\\quad EF:= \\frac{LH}{LH+SH} \\] and both describe the partitioning of heat transfer between sensible and latent heat; note that the two measures are directly related via \\(EF = 1/(1+BR)\\). They are related to the surface type and its water abundance, and can be used to compare different surface types in different climates. Therefore, they are usually applied to monthly, seasonal or annual averages (here, for illustration, calculated on a daily basis). print(paste("BR (average):", calc_br(mean(dat$sh),mean(dat$lh)))) print(paste("EF (average):", calc_ef(mean(dat$sh),mean(dat$lh)))) [1] "BR (average): 1.2715665126865" [1] "EF (average): 0.440224837976387" 8.4 Meteorological contextualization The turbulent fluxes are mainly driven by the available energy through incoming solar and longwave radiation, which is attenuated in the atmosphere by clouds in a complex way, depending on cloud height and type. 
A way to estimate the cloud cover without directly measuring it is the clear-sky index. Based on longwave incoming radiation, temperature and humidity, an estimate of the clear-sky index (CSI) can be derived as the ratio of actual emissivity \\(\\epsilon_{actual}\\) and theoretical (clear-sky) emissivity \\(\\epsilon_{theoretical}\\). If \\(\\epsilon_{actual} \\le \\epsilon_{theoretical}\\), i.e. \\(CSI \\le 1\\), then the sky is assumed to be clear (see e.g. the review paper by Lehner, Rotach, and Obleitner (2021)). dat$csi=calc_csi(dat$T_mean+273.15,dat$lwin,dat$rh) plot(dat$csi,type="l",lwd=2,ylab="CSI") References "],["references.html", "9 References", " 9 References "],["404.html", "Page not found", " Page not found The page you requested cannot be found (perhaps it was moved or renamed). You may want to try searching to find the page's new location, or use the table of contents to find the page you are looking for. "]] +[["index.html", "Reddy: An open-source package for analyzing eddy-covariance data 1 Introduction 1.1 Measurement setup and instrumentation 1.2 Data sources", " Reddy: An open-source package for analyzing eddy-covariance data 1 Introduction Package source: Reddy package (Github) Jupyter notebooks: jupyter notebooks (Github) Any comments or issues? create an issue (Github) Overview: Eddy-covariance (EC) measurements allow measuring turbulent fluxes directly and non-invasively over long periods of time, and thus represent the standard measurement method for turbulent exchange processes between atmosphere and land, vegetation, cryosphere or hydrosphere. However, they require a well-thought-out technical setup and dedicated post-processing before further analyses can be carried out. This gitbook provides an overview of the setup, post-processing and meteorological evaluation of EC measurements, which can be used for research and teaching. The structure of this gitbook is detailed in figure 1. All described functions are implemented in the R-package Reddy, which was specially developed to allow a reproducible and comprehensive analysis of EC measurements. Each chapter covers one topic and is also available as jupyter notebook, which can be downloaded here. Figure 1: Overview of the topics in this gitbook and the functionality of the Reddy package Installation of the Reddy package: The Reddy package can be installed directly from github: devtools::install_git("https://github.com/noctiluc3nt/Reddy") Entering a function name in an R terminal shows the function and the performed calculations, such that the calculation and plotting procedures are fully comprehensible. The Reddy package depends on the libraries MASS (for kernel density estimation), pracma (basic linear algebra) and RcppRoll (C++ interface for accelerated data handling), which are automatically installed with Reddy. 1.1 Measurement setup and instrumentation Setup: An EC setup has two main components: the sonic anemometer (for the wind components and temperature) and a gas analyzer (for water vapor, carbon dioxide, methane, …). The sonic usually consists of three pairs of transducers that transmit and receive ultrasonic sound waves, whose propagation speed depends on wind speed, temperature and humidity. From these, the three wind components and the sonic temperature, which is approximately the virtual temperature, can be derived. The gas analyzers usually utilize an absorption line of the respective trace gas in the near- or mid-infrared to measure its number density (IRGA - infrared gas analyzer). 
The air is pumped from the inlet (with a flow rate of about 12 l/min that is controlled by the flow module, which contains a pump), through the inlet tube into the gas analyzer, where the actual measurement takes place. There are different types of infrared gas analyzers, the most common distinction being between gas analyzers with a closed or an open measuring path. Further possible methods for gas analyzers are laser spectroscopy, mass spectroscopy, chromatography and chemiluminescence (often used for NOx); particle counters can also be combined with sonic measurements to derive particle fluxes. The EC system can be mounted on a tower or mast with one or several measurement heights. For choosing the location and measurement height, the surface roughness and the flux footprint (i.e., the area where the flux originates from) are examined. Since the mast or tower disturbs the EC measurements, the orientation of the system is chosen based on the prevailing wind direction(s), such that the main wind direction(s) are undisturbed by the tower structures, as exemplified in figure 3. The sonics have an internal orientation (and should face north as indicated on the sonic), and they should be levelled (however, their precise levelling is not essential as the wind is rotated in the post-processing anyway). There are several options for the inlet position: Usually it is placed below the sonic (see the setup in Kuivajärvi), but most importantly it should not interfere with the sonic measurements. A detailed description of high-standard EC site setup (following the standards of the Integrated Carbon Observation System, ICOS) can be found in Rebmann et al. (2018). Figure 2: Eddy-covariance sites in different environments: urban (Helsinki, Finland), lake (Kuivajärvi, Finland), boreal forest (Sodankylä, Finland), and alpine tundra (Finse, Norway). Calibration and Maintenance: The EC system requires regular maintenance, in particular the gas analyzers. The gas analyzers are calibrated regularly in-field, and more rarely, intensively by the manufacturer. For this purpose, reference measurements are carried out with gas cylinders containing a fixed amount of the gas, e.g. zero-gas or 450 ppm CO\\(_2\\). For water vapor, usually only the zero-gas calibration is performed, since fixed water vapor concentrations are difficult to maintain. For other trace gases, the in-field calibration is performed with a zero-gas and another fixed concentration, while the manufacturer calibrates for several fixed gas concentrations. The optics of the gas analyzer have to be cleaned regularly depending on the environmental conditions, whereby open-path gas analyzers require more maintenance than closed-path gas analyzers. Additionally, the inlet filters as well as the sampling tube should be cleaned or replaced regularly. The sonic requires less maintenance; however, the measurements are sensitive to the distance between the transducer pairs, so this distance should be checked. Figure 3: Components of an eddy-covariance system (demonstration setup) 1.2 Data sources FLUXNETs: There are several coordinated FLUXNETs, which standardise eddy-covariance measurements regionally or globally and provide post-processed flux data. For example: FLUXNET: FLUXNET combines several regional networks into a global flux dataset. A list of regional networks can be found here and a list of stations here. The most recent global flux data set is FLUXNET2015 (Pastorello et al. 2020). 
ICOS: ICOS (Integrated Carbon Observation System) is a European network of different station types (Atmosphere, Ecosystem, Ocean), which observe carbon concentrations and fluxes. The atmospheric stations are listed here and data can be downloaded from the data portal. Other packages for processing of eddy-covariance data: Multiple other packages for eddy-covariance data processing exist with different applications and different degrees of specialization as well as in different languages. The main focus of these packages, however, is the post-processing of raw eddy-covariance data (often as wrappers of manufacturer software) and long-term ecosystem monitoring studies, as summarized below. In this regard, Reddy R fills a gap, as it additionally considers turbulence-theoretical and meteorological applications and allows for a fully customized post-processing and analysis of eddy-covariance data. EddyPro® Fortran: Post-processing of eddy-covariance data from the manufacturer LI-COR Biosciences. ONEFlux C (“Open Network-Enabled Flux processing pipeline”): Post-processing of (half-)hourly eddy-covariance data used to create the FLUXNET2015 dataset (Pastorello et al. 2020). REddyProc R: Post-processing of (half-)hourly eddy-covariance measurements. openeddy R: Post-processing of eddy-covariance data, aligned with REddyProc. RFlux R: GUI for post-processing of eddy-covariance raw data by calling EddyPro®. eddy4R R (Metzger et al. 2017): Family of several R-packages for eddy-covariance post-processing in a DevOps framework. icoscp Python: Access to data from ICOS (Integrated Carbon Observation System) data portal. flux-data-qaqc Python (Volk et al. 2021): Post-processing of eddy-covariance measurements to derive daily or monthly evapotranspiration in an energy balance framework. References "],["post-processing-of-raw-eddy-covariance-data.html", "2 Post-processing of raw eddy-covariance data 2.1 Eddy-covariance method 2.2 Raw data handling and corrections 2.3 An example post-processing routine", " 2 Post-processing of raw eddy-covariance data To derive turbulent fluxes and other turbulence characteristics from eddy-covariance measurements, they have to be post-processed first. In the following, we will have a look at the required steps – starting with the theoretical foundation of the method and the necessary physical and technical considerations for the raw data processing. The upcoming analyses (e.g., 02_basic-turbulence-diagnostics.ipynb) build upon this and allow for a more detailed investigation (e.g., 03_quadrant-analysis.ipynb). 2.1 Eddy-covariance method How do we derive a flux from the sonic measurements? The eddy-covariance method is based on the Reynolds decomposition, which is applied to every measured quantity \\(x\\) through \\[ x = \\overline{x} + x' \\quad \\textrm{with} \\quad \\overline{x}:= \\frac{1}{t_s} \\int_0^{t_s} x \\: dt. \\] Therein, \\(\\overline{x}\\) represents the time average and \\(x'\\) the deviation from it. In principle, one would have to consider the ensemble average (i.e., all possible paths in the phase space), but this is replaced by the time average assuming ergodicity. 
Now, we take a look at the flux, which is defined as the average product of two quantities \\(\\overline{xw}\\), and insert the Reynolds decomposition: \\[ \\overline{xw} = \\overline{x}\\:\\overline{w} + \\overline{x}\\overline{w'} + \\overline{x'}\\overline{w} + \\overline{x'w'} = \\overline{x}\\:\\overline{w} + \\overline{x'w'} \\] The second and third term vanish because of the Reynolds postulates (i.e. \\(\\overline{x'} = 0\\)). If additionally \\(\\overline{w}=0\\), the flux can be represented by the correlation, i.e., \\(\\overline{xw} = \\overline{x'w'}\\). As you see, this procedure involves a lot of assumptions: For \\(\\overline{x'} = 0\\) to be true, we assume that the flow is steady, and for \\(\\overline{w}=0\\) you either have a homogeneous and flat surface or you need to rotate the sonic coordinate system (see the correction methods below). Usually, \\(w\\) represents the vertical wind speed and \\(x\\) can be an arbitrary scalar quantity. For the momentum flux, we use \\(x=u\\) (streamwise velocity), for the sensible heat flux \\(x=T\\) (temperature or potential temperature), for the latent heat flux \\(x=q\\) (specific humidity) and for trace gas fluxes \\(x=c\\) (the concentration or mixing ratio of the respective trace gas). 2.2 Raw data handling and corrections For applying the described eddy-covariance method to the measurements, we have to decide on an averaging time \\(t_s\\), perform quality control of the raw data and check that the underlying assumptions are fulfilled. 2.2.1 Choosing a suitable averaging time As we have seen above, for calculating the fluxes with the eddy-covariance method, a suitable averaging time (\\(t_s\\)) should be chosen. For this, several things need to be considered: Generally, one can say that the longer the averaging time, the less likely it is that the conditions are steady (which is an assumption in the EC method), but short averaging times lose low-frequency contributions. So usually, an averaging time of 30 minutes is chosen. However, this averaging time can still be too long if the turbulence is very intermittent, which is the case for very stable conditions. Consequently, some studies use the 30-minute average just for unstable stratification and an averaging time of 1 to 10 minutes for stable conditions. It is advisable to first study the characteristics of the sampled raw data (e.g., based on spectra) and then choose an averaging time. 2.2.2 Overview of correction procedures applied to the raw data Despiking (despiking): Removal of spikes in the raw data. Usually three methods are applied: (1) based on pre-defined thresholds (i.e., an expected temperature or wind speed interval), (2) median absolute deviation (MAD) test, i.e., all measurements have to lie within a pre-defined range around the median, and (3) based on skewness (3rd moment) and kurtosis (4th moment), i.e., the skewness and kurtosis of the considered interval do not exceed pre-defined values. For details see e.g. Mauder et al. (2013). Lag-time correction (shift2maxccf): A lag time can occur if several loggers are used or the inlet tube of the gas analyzer is very long, such that there is a time offset between the different measured variables. This can be corrected by calculating the maximum cross-correlation and shifting the time series according to the lag difference, which is done by the function shift2maxccf. 
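The following minimal sketch illustrates the idea behind such a lag-time correction (the helper name and defaults are illustrative and not necessarily identical to Reddy's shift2maxccf):
#lag-time correction sketch: shift y to the lag of maximum cross-correlation
shift_to_max_ccf <- function(x, y, max_lag = 50) {
  cc <- ccf(x, y, lag.max = max_lag, plot = FALSE, na.action = na.pass)
  lag <- cc$lag[which.max(abs(cc$acf))] #lag of maximum absolute cross-correlation
  if (lag > 0) y <- c(y[-(1:lag)], rep(NA, lag)) #shift y backwards
  if (lag < 0) y <- c(rep(NA, -lag), head(y, lag)) #shift y forwards
  list(y_shifted = y, lag = lag)
}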
Linear detrending (pracma::detrend): Linear detrending by subtracting a fitted linear trend (however, linear detrending is not consistent with Reynolds decomposition, so it should be used and interpreted with caution, see e.g. Baldocchi (2003)). Rotation (rotate_double or rotate_planar): Rotation of the sonic coordinate system. Since the orientation of the sonic is arbitrary, the measurements have to be interpreted in a suitable coordinate system. For this, the natural coordinate system is used, which means that it follows the streamlines; the coordinates are then given by the streamwise velocity component \\(u\\), the crosswise velocity component \\(v\\) and the vertical velocity component \\(w\\). To determine the orientation, two main approaches exist: Double rotation (rotate_double) aligns the sonic coordinate system with the streamlines for every averaging interval. Hence, this method can be applied in near-real time. Planar fit rotation (rotate_planar, Wilczak, Oncley, and Stage (2001)) aligns the sonic coordinate system with the mean streamlines under the condition that the mean vertical velocity \\(\\overline{w}\\) vanishes. For this, a longer time series (usually the whole measurement campaign) is required and thus near-real time processing is not possible. Generally, it is recommended to use double rotation in relatively simple topography, but planar fit in complex (micro-)topography. The angles around which the sonic coordinate system was rotated also allow estimating the quality of the measurements (the smaller the rotation angles, the better; it is also important that they do not flip signs, in particular for close-to-zero fluxes under very stable stratification). Quality flagging (flag_stationarity, flag_w, flag_distortion, flag_most, e.g. Foken and Wichura (1996)): Several quality flags (ranging from 0: the best to 2: the worst) can be applied. Common flags are: a stationarity flag (flag_stationarity), which tests compliance with the stationarity assumption of the eddy-covariance method, a vertical velocity flag (flag_w), which checks that the remaining vertical velocity after the rotation is small, a flow distortion flag (flag_distortion), which removes the pre-defined wind directions that are possibly affected/blocked by the mast, and an integral turbulence characteristics flag (flag_itc), which tests for agreement with Monin-Obukhov similarity theory. However, these flags have to be applied purpose-oriented and interpreted with caution. Sometimes particularly “poorly flagged” measurements are exactly the interesting ones, e.g. for investigating turbulence under challenging conditions such as very stable stratification. SND (and cross-wind) correction (SNDcorrection, Schotanus, Nieuwstadt, and DeBruin (1983)): Converts the buoyancy flux to sensible heat flux. Since most sonics measure the sound propagation speed, the measured temperature (the so-called sonic temperature \\(T_s\\)) is similar to the virtual temperature and thus the resulting flux \\(\\overline{w'T_s'}\\) represents the buoyancy flux, which is about 10-20 % larger than the sensible heat flux \\(\\overline{w'T'}\\). Note that the constants used in the method depend on the measurement device (default: CSAT3). WPL correction (WPLcorrection, Webb, Pearman, and Leuning (1980)): Converts volume- to mass-related quantities for trace gas fluxes. This correction only applies to trace gas concentrations (water vapor, carbon dioxide, methane, …) measured with a gas analyzer. 
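As an illustration of the double rotation described above, a minimal sketch assuming raw sonic components u, v, w (Reddy's rotate_double may differ in details such as angle conventions):
#double rotation sketch: first align u with the mean wind, then force mean(w)=0
double_rotation_sketch <- function(u, v, w) {
  theta <- atan2(mean(v), mean(u)) #first rotation angle (around the vertical axis)
  u1 <- u * cos(theta) + v * sin(theta)
  v1 <- -u * sin(theta) + v * cos(theta)
  phi <- atan2(mean(w), mean(u1)) #second rotation angle (tilt)
  u2 <- u1 * cos(phi) + w * sin(phi)
  w2 <- -u1 * sin(phi) + w * cos(phi)
  list(u = u2, v = v1, w = w2, theta = theta, phi = phi)
}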
Unit conversion (density) (ppt2rho): Converts ppt to density. Closed-path gas analyzers measure trace gas concentrations in parts-per-… (ppt: … thousand, ppm: … million), which have to be converted to a density based on the respective molar mass. Unit conversion (fluxes) (cov2sh, cov2lh, cov2cf): Converts covariances to the respective flux in W/m\\(^2\\), i.e., cov(w,T) to sensible heat flux, cov(w,q) to latent heat flux and cov(w,c) to CO\\(_2\\) flux. There are more (and partly controversial) correction methods; details are discussed in Foken (2017). Further correction methods are required when spectra are considered (spectral corrections) or budgets are calculated (gap-filling of missing values, to avoid a bias due to the discarded data). 2.3 An example post-processing routine Now, we create an example post-processing routine. For this, we use 3 days of raw eddy-covariance measurements (CSAT3/Campbell Scientific and Li-7200/LI-COR closed-path IR gas analyzer) with a 10 Hz sampling frequency from our station in Finse, Hardangervidda, Norway. The data set is in the folder data/ec-data_10Hz_raw and can be downloaded directly or the whole github repository can be cloned with git clone git@github.com:noctiluc3nt/ec_analyze.git. #loading Reddy package install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE) library(Reddy) #ec data files dir_in="../data/ec-data_10Hz_raw" files=list.files(dir_in,full.names=TRUE) nf=length(files) Each given file contains 30 minutes of measurements, such that we just need to average over one file to get the 30-minute averages and fluxes. In the notebook 04_multiresolution-decomposition.ipynb we will have a look at a method that allows determining suitable averaging times more accurately. In our example routine, the data is first despiked, then the wind is rotated and the sonic measurements are averaged. For the gas analyzer measurements, the unit is converted to a density. Then the turbulence intensities (standard deviations) and the fluxes (covariances) are calculated. For the sensible heat flux, we additionally need to apply the SND correction; the WPL correction can be applied to the latent heat flux. Here, the temperature is given directly in \\(^\\circ\\)C; however, in the direct output of the LI-COR systems (.ghg-files) usually the speed of sound (sos in m/s) is stored, which can easily be converted to temperature with the function sos2Ts. 
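The physical relation behind this conversion is \\(c^2 = \\gamma_d R_d T_s\\), i.e. \\(T_s \\approx c^2/403\\) in K (a sketch of the underlying formula; the exact constants used by sos2Ts may differ):
#sonic temperature from speed of sound (sketch; 403 is roughly 1.4*287 J/(kg K))
sos2Ts_sketch <- function(sos) sos^2 / 403 #Ts in K
sos2Ts_sketch(343) #about 292 K, i.e. roughly 19 degC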
#allocate output var_out=c("time","u_mean","v_mean","w_mean","ws_mean","wd_mean","T_mean","h2o_mean","co2_mean", "u_sd","v_sd","w_sd","T_sd","h2o_sd","co2_sd", "cov_uv","cov_uw","cov_vw","cov_wT","cov_h2ow","cov_co2w","cov_wT_snd","cov_rhoH2Ow_wpl", "rot_angle1","rot_angle2","flag_stationarity","flag_w","flag_distortion") nv=length(var_out) dat=data.frame(array(NA,dim=c(nf,nv))) colnames(dat)=var_out #postprocessing per file (30 mins) for (i in 1:nf) { tmp=read.table(files[i],sep=",",header=T) dat$time[i]=tmp$X[1] #despiking tmp$T_degC=despiking(tmp$T_degC,-40,30) tmp$u_m.s=despiking(tmp$u_m.s,-25,25) tmp$v_m.s=despiking(tmp$v_m.s,-25,25) tmp$w_m.s=despiking(tmp$w_m.s,-5,5) #wind before rotation dat$ws_mean[i]=sqrt(mean(tmp$u_m.s,na.rm=T)^2+mean(tmp$v_m.s,na.rm=T)^2) dat$wd_mean[i]=atan2(mean(tmp$v_m.s,na.rm=T),mean(tmp$u_m.s,na.rm=T)) #rotation rot_wind=rotate_double(tmp$u_m.s,tmp$v_m.s,tmp$w_m.s) tmp$u_m.s=rot_wind$u tmp$v_m.s=rot_wind$v tmp$w_m.s=rot_wind$w dat$rot_angle1[i]=rot_wind$theta dat$rot_angle2[i]=rot_wind$phi #averaging dat$u_mean[i]=mean(tmp$u_m.s,na.rm=T) dat$v_mean[i]=mean(tmp$v_m.s,na.rm=T) dat$w_mean[i]=mean(tmp$w_m.s,na.rm=T) dat$T_mean[i]=mean(tmp$T_degC,na.rm=T) #unit conversion for gases tmp$rhoH2O=ppt2rho(tmp$H2O_ppt,dat$T_mean[i]+273.15,87000) tmp$rhoCO2=ppt2rho(tmp$CO2_ppm/1000,dat$T_mean[i]+273.15,87000,gas="CO2") dat$h2o_mean[i]=mean(tmp$rhoH2O,na.rm=T) dat$co2_mean[i]=mean(tmp$rhoCO2,na.rm=T) #sds dat$u_sd[i]=sd(tmp$u_m.s,na.rm=T) dat$v_sd[i]=sd(tmp$v_m.s,na.rm=T) dat$w_sd[i]=sd(tmp$w_m.s,na.rm=T) dat$T_sd[i]=sd(tmp$T_degC,na.rm=T) dat$h2o_sd[i]=sd(tmp$rhoH2O,na.rm=T) dat$co2_sd[i]=sd(tmp$rhoCO2,na.rm=T) #covs dat$cov_uw[i]=cov(tmp$u_m.s,tmp$w_m.s,use="pairwise.complete.obs") dat$cov_uv[i]=cov(tmp$u_m.s,tmp$v_m.s,use="pairwise.complete.obs") dat$cov_vw[i]=cov(tmp$v_m.s,tmp$w_m.s,use="pairwise.complete.obs") dat$cov_wT[i]=cov(tmp$T_degC,tmp$w_m.s,use="pairwise.complete.obs") dat$cov_h2ow[i]=cov(tmp$rhoH2O,tmp$w_m.s,use="pairwise.complete.obs") dat$cov_co2w[i]=cov(tmp$rhoCO2,tmp$w_m.s,use="pairwise.complete.obs") #SND correction dat$cov_wT_snd[i]=SNDcorrection(tmp$u_m.s,tmp$v_m.s,tmp$w_m.s,tmp$T_degC+273.15) #WPL correction #dat$cov_rhoH2Ow_wpl[i]=WPLcorrection(tmp$rhoH2O,w=tmp$w_m.s,Ts=tmp$T_degC+273.15,q=tmp$q) #flagging dat$flag_stationarity[i]=flag_stationarity(tmp$w_m.s,tmp$T_degC,nsub=3000) dat$flag_w[i]=flag_w(dat$w_mean[i]) dat$flag_distortion[i]=flag_distortion(tmp$u_m.s,tmp$v_m.s,dir_blocked=c(340,360)) } The calculated covariances have to be converted to the fluxes (unit W/m\\(^2\\)), for example with the functions cov2sh and cov2lh: #calculate fluxes dat$sh=cov2sh(dat$cov_wT_snd) dat$lh=cov2lh(dat$cov_h2ow) The output now contains the 30-minute averages, standard deviations and covariances of the measured quantities, which will be used in the following notebooks for some more detailed analysis. 
#look at output head(dat) A data.frame: 6 × 30 time u_mean v_mean w_mean ws_mean wd_mean T_mean h2o_mean co2_mean u_sd ⋯ cov_co2w cov_wT_snd cov_rhoH2Ow_wpl rot_angle1 rot_angle2 flag_stationarity flag_w flag_distortion sh lh <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ⋯ <dbl> <dbl> <lgl> <dbl> <dbl> <dbl> <dbl> <lgl> <dbl> <dbl> 1 2018-07-20 08:30:00 2.872084 -3.939352e-16 6.931732e-06 2.873170 -2.289284 15.86938 0.007254132 0.0006009614 1.062143 ⋯ -1.567917e-07 0.1052858 NA 228.8337 0.3484284 0 0 NA 123.2512 103.1085 2 2018-07-20 09:00:00 2.864793 7.504684e-17 9.052527e-08 2.864538 -2.326045 16.55190 0.007584267 0.0005978138 1.105122 ⋯ -2.294700e-07 0.1600703 NA 226.7274 0.7745417 0 0 NA 187.3839 162.6196 3 2018-07-20 09:30:00 3.996526 9.365292e-16 3.049021e-05 4.002522 -2.035397 17.05704 0.007472065 0.0005954894 1.409495 ⋯ -1.862787e-07 0.1533514 NA 243.3803 0.2888957 0 0 NA 179.5185 123.5711 4 2018-07-20 10:00:00 4.998016 -2.812846e-16 -2.391239e-06 4.997530 -1.977737 17.60447 0.006762097 0.0005953881 1.289326 ⋯ -1.504328e-07 0.1556649 NA 246.6840 0.3483120 0 0 NA 182.2267 123.4474 5 2018-07-20 10:30:00 4.879095 4.062636e-16 2.187117e-05 4.880696 -2.014489 18.08994 0.005410862 0.0005948927 1.469949 ⋯ -1.989823e-07 0.1914488 NA 244.5783 0.6531922 0 0 NA 224.1166 191.1642 6 2018-07-20 11:00:00 5.225037 -8.977916e-17 -6.972624e-06 5.223984 -2.261205 18.24207 0.004422858 0.0005954766 1.319227 ⋯ -1.330638e-07 0.1490271 NA 230.4425 0.4471074 0 0 NA 174.4563 150.8430 saveRDS(dat,file="../data/ec-data_30min_processed/processed_data_example.rds") Additionally, the data can be gap-filled with gapfilling or averaged to other customized averaging intervals with averaging. The function averaging is based on the RcppRoll package, which provides fast and efficient “rolling” functions utilizing a C++ interface. Generally, it should be noted that different sensors may require different correction methods, see e.g. Foken (2017) for a discussion, so a detailed knowledge of the instrumentation is required. References "],["turbulence-diagnostics.html", "3 Turbulence diagnostics 3.1 Standard turbulence diagnostics and fluxes 3.2 Stability dependence 3.3 Turbulence regimes and “Hockey-stick” model", " 3 Turbulence diagnostics After completing the post-processing, we can now move on to looking at some turbulence diagnostics. Based on the generated half-hourly output of the first notebook (01_post-processing.ipynb), some standard turbulence quantities, i.e., stability parameter, friction velocity, TKE and turbulence intensity, are calculated and plotted, which allows studying turbulent flow properties systematically. 3.1 Standard turbulence diagnostics and fluxes Here, some functions from the Reddy package are used to calculate turbulent kinetic energy \\(TKE\\), velocity scale of TKE \\(V_{TKE}\\), horizontal turbulence intensity \\(TI\\), vertical turbulence intensity \\(I_w\\), friction velocity \\(u_*\\), Obukhov length \\(L\\), stability parameter \\(\\zeta\\) and the directional shear angle, and to plot their time series for the previously post-processed example data. Turbulence intensity (calc_ti and calc_iw): Turbulence intensity generally refers to the standard deviation, e.g., \\(\\sigma_u, \\sigma_v, \\sigma_w, \\sigma_T\\), and thus describes the mean fluctuation intensity. 
The horizontal turbulence intensity TI (calc_ti) and the vertical turbulence intensity \\(I_w\\) (calc_iw) are calculated by normalizing the respective standard deviations with the mean wind speed \\(\\overline{u}\\) (to get a dimensionless measure): \\[TI = \\frac{\\sqrt{\\sigma_u^2+\\sigma_v^2}}{\\overline{u}} ,\\quad\\quad I_w= \\frac{\\sigma_w}{\\overline{u}} \\] Turbulent kinetic energy TKE (calc_tke): TKE describes the mean kinetic energy that the eddies contain and is calculated by \\[ TKE = 0.5 (\\sigma_u^2 + \\sigma_v^2 + \\sigma_w^2) =0.5 (\\overline{u'^2} + \\overline{v'^2} + \\overline{w'^2} ).\\] From the TKE, a velocity scale can easily be derived, \\(V_{TKE} = \\sqrt{TKE}\\) (calc_vtke). Friction velocity \\(u_*\\) (calc_ustar): The friction velocity describes the effect of friction in the form of a velocity scale \\[ u_* = \\sqrt[4]{\\overline{u'w'}^2+\\overline{v'w'}^2}.\\] Obukhov length \\(L\\) (calc_L): The Obukhov length is a length scale that describes the effect of buoyancy versus shear (friction velocity) through \\[ L = -\\frac{u_*^3 \\overline{T_v}}{\\kappa \\,g\\,\\overline{w'T'}}, \\] and is central in Monin-Obukhov similarity theory (MOST) for deriving dimensionless scaling parameters, in particular the stability parameter \\(\\zeta\\). The fluxes used in the calculation are either the surface fluxes for global scaling (Monin and Obukhov 1954) or the fluxes at the measurement height for local scaling (Nieuwstadt 1984). Stability parameter \\(\\zeta\\) (calc_zeta): The stability parameter is the ratio of measurement height and Obukhov length, and thus a dimensionless measure for stability \\[ \\zeta = \\frac{z}{L}.\\] \\(\\zeta\\) is negative in unstable stratification, zero in neutral stratification, and positive in stable stratification, which can be traced back to the sign of the heat flux \\(\\overline{w'T'}\\) in the Obukhov length \\(L\\). Now, we use some of the Reddy functions to calculate and plot time series of these basic turbulent diagnostics. #loading Reddy package install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE) library(Reddy) library(latex2exp) #read in processed example data dat=readRDS("../data/ec-data_30min_processed/processed_data_example.rds") #calculation of some useful turbulence diagnostics dat$tke=calc_tke(dat$u_sd,dat$v_sd,dat$w_sd) dat$vtke=calc_vtke(dat$u_sd,dat$v_sd,dat$w_sd) dat$ti=calc_ti(dat$u_sd,dat$v_sd,dat$ws_mean) dat$iw=calc_iw(dat$w_sd,dat$ws_mean) dat$ustar=calc_ustar(dat$cov_uw,dat$cov_vw) dat$L=calc_L(dat$ustar,dat$T_mean,dat$cov_wT) dat$zeta=calc_zeta(4.4,dat$L) dat$alpha_uw=calc_dshear(dat$cov_uw,dat$cov_vw) #simple timeseries plots par(mfrow=c(3,3)) plot(dat$T_mean,type="l",ylab="temperature [°C]",col=4) grid() plot(dat$ws_mean,type="l",ylab="wind speed [m/s]",col=4) grid() plot(dat$tke,type="l",ylab=TeX("TKE [$m^2/s^2$]"),ylim=c(0,2)) grid() plot(dat$zeta,type="l",ylab="stability parameter [-]") abline(h=0,lty=2) grid() plot(dat$ti,type="l",ylab="turbulence intensity [-]",ylim=c(0,1)) points(dat$iw,type="l",col="gray50") grid() plot(dat$cov_uw,type="l",ylab=TeX("cov(u,w) [$m^2/s^2$]")) grid() plot(dat$cov_wT,type="l",ylab=TeX("cov(T,w) [$K m/s$]"),col="red") grid() plot(dat$cov_h2ow,type="l",ylab=TeX("cov(q,w) [$mmol m/s$]"),col="blue") grid() plot(dat$cov_co2w,type="l",ylab=TeX("cov(c,w) [$ppm m/s$]"),col="orange") grid() As we can see, the stratification is unstable during daytime, when the latent heat flux is also positive, indicating evapotranspiration. 
The momentum flux is always negative, which indicates downward momentum transport (kinetic energy dissipation) in agreement with a typical logarithmic wind profile. The magnitude of the momentum flux is higher during daytime, in accordance with the higher wind speeds and higher TKE there (shear-generated turbulence). The CO\\(_2\\) flux is negative during daytime (CO\\(_2\\) uptake due to photosynthesis) and positive at nighttime (CO\\(_2\\) release). 3.2 Stability dependence It is often interesting to investigate the stability dependence of turbulence characteristics, especially since Monin-Obukhov similarity theory (MOST) predicts all dimensionless turbulence characteristics based on the dimensionless stability parameter \\(\\zeta\\). A simple way is to plot stability versus a turbulence characteristic and bin the data based on stability intervals. For this the function binning can be used, which bins one variable based on another for predefined bins, as exemplified below for the dependence of the sensible heat flux on the stability parameter. The output contains mean, median, 25% and 75% quartiles per discrete bin as a dataframe. The used variables and the bins can be customized. This type of analysis is usually applied to longer time series and can also be combined with other methods, e.g. quadrant analysis, to study how different coherent structures behave with changing stability, see e.g. Li and Bou-Zeid (2011), Schmutz and Vogt (2019) or Mack et al. (2024). zeta_bins=c(-10^(2:-2),10^(-2:2)) sh_binned=binning(dat$sh,dat$zeta,zeta_bins) sh_binned #look at output xbins=c(-5^(2:-1),0,5^(-1:2)) plot(xbins,sh_binned[,2],type="b",ylim=c(-20,150),lwd=2,col=2,pch=20,xlab="stability parameter [-]",ylab=TeX("SH [W/m$^2$]")) points(xbins,sh_binned[,3],type="l",lty=2,col=2) points(xbins,sh_binned[,4],type="l",lty=2,col=2) #polygon(c(xbins,rev(xbins)),c(sh_binned[,3],rev(sh_binned[,4])),lty=0,col=rgb(0.8,0,0,0.2) ) abline(h=0,lty=2) abline(v=0,lty=2) grid() A matrix: 9 × 4 of type dbl 122.045246 116.804837 109.052920 129.797163 120.620398 125.471467 75.448402 167.986972 17.306097 12.061651 5.647776 24.415438 1.823135 1.864732 1.093005 2.574063 NA NA NA NA -3.564331 -3.564331 -3.564331 -3.564331 -11.192381 -12.213284 -13.937180 -8.561890 -18.102981 -19.201199 -22.358905 -14.674265 -8.655841 -10.389901 -11.853905 -6.324807 Based on our short example data, we can see that the sensible heat flux is positive under unstable conditions and negative under stable conditions. This can also be seen directly from the definition of \\(\\zeta\\) (and \\(L\\) therein). This type of analysis can be extended to different quantities and binning based on other important scales, e.g. friction velocity. 3.3 Turbulence regimes and “Hockey-stick” model The example shown above for stability dependence can be extended to arbitrary variables and is often used to characterize different turbulence regimes. To diagnose different turbulence regimes, scatter plots of different turbulence diagnostics can be considered, e.g. plotting wind speed vs TKE or stability parameter vs TKE. For example, Sun et al. (2012) investigate turbulence regimes in stable boundary layers and distinguish between weak and strong turbulence regimes. For this, they plot wind speed versus TKE or \\(\\sigma_w\\) and find two different slopes corresponding to different turbulence generation mechanisms (local instability versus bulk shear) – this simple model is now commonly referred to as “Hockey-stick model” (Fig. 2 in Sun et al. (2012)). 
However, this schematic is strongly simplified, and especially at sites in complex terrain (like ours) various factors (e.g., the footprint) modify it. In our example data (3 summer days), we see the bulk-shear-dominated regime, as indicated by the linear regression. #some basic plots plot(dat$ws_mean,dat$tke,xlim=c(0,7),ylim=c(0,2.5),xlab="wind speed [m/s]",ylab=TeX("TKE [$m^2/s^2$]"),cex=2,pch=20,col=rgb(0,0,0,0.4)) cond=(dat$ws_mean>4) fit=lm(dat$tke[cond]~dat$ws_mean[cond]) abline(fit,col=2,lty=2,lwd=2) #abline(0.1,0.1,col=4,lty=2) grid() References "],["quadrant-analysis.html", "4 Quadrant analysis 4.1 Calculation of occurrence frequencies and strengths of the four quadrants with calc_quadrant_analysis 4.2 Plotting quadrant analysis with plot_quadrant_analysis", " 4 Quadrant analysis Quadrant analysis is a simple conditional sampling method to detect coherent structures directly from the high-frequency measurements (e.g. Katul et al. (1997)) or simulations (e.g. Wallace (2016)). Coherent structures maintain their structure for a significant time period, and due to their coherence, they can generate deviations from mean flow properties (e.g. Jimenez (2018)). For quadrant analysis, a scatter plot of a pair of two variables is considered, which are normalized through \\[ \\hat{x} = \\frac{x-\\overline{x}}{\\sigma_x}\\] by subtracting the mean value \\(\\overline{x}\\) and dividing by \\(\\sigma_x\\) to make different variables with different value ranges comparable and centered around zero. Based on this, strength \\(S_i\\) and duration \\(D_i\\) of each quadrant (\\(i=1, ..., 4\\)) can be calculated: \\[ S_i = \\frac{\\overline{\\hat{x}'\\hat{w}'}_i}{\\overline{\\hat{x}\\hat{w}}}, \\quad\\quad D_i = \\frac{1}{t_s} \\int_0^{t_s} I_i(t) dt\\] \\(S_i\\) represents the relative strength of the respective quadrant (normalized by the total covariance) and \\(D_i\\) the occurrence frequency of each quadrant (with averaging time \\(t_s\\) and indicator function \\(I_i\\)). To get an overall measure for the contribution of disorganized (i.e. counter-gradient contribution) versus organized (i.e. down-gradient contribution) structures, the exuberance \\(E\\) (Shaw, Tavangar, and Ward 1983) and the organization ratio \\(OR\\) (Mack et al. 2024) can be defined as: \\[ E = \\frac{S(disorganized)}{S(organized)}, \\quad\\quad OR = \\frac{D(disorganized)}{D(organized)}\\] For example, for the sensible heat flux warm updrafts and cold downdrafts are the organized structures, and oppositely cold updrafts and warm downdrafts the disorganized ones (in the figure below, organized structures are shaded and disorganized are not shaded). To filter for particularly strong coherent structures, a hole size \\(H\\) can be applied, which excludes all points fulfilling \\(\\vert \\hat{x}\\hat{y} \\vert \\le H \\cdot \\vert \\overline{\\hat{x}'\\hat{y}'}\\vert\\) (hyperbolic curves in the figure). Usually, \\(y\\) is chosen to be the vertical velocity \\(w\\). The quadrant analysis is directly related to the fluxes by using the respective constituting quantities: \\((u,w)\\) for momentum flux, \\((T,w)\\) for sensible heat flux, \\((q,w)\\) for latent heat flux and \\((c,w)\\) for CO\\(_2\\) flux – as visualized in the figure (adapted from Mack et al. (2024)). 
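A minimal sketch of these quadrant statistics (an illustrative helper, not Reddy's calc_quadrant_analysis, which additionally supports hole sizes):
#quadrant statistics sketch for two series x and w
quadrant_stats_sketch <- function(x, w) {
  xh <- (x - mean(x)) / sd(x) #normalized x
  wh <- (w - mean(w)) / sd(w) #normalized w
  q <- ifelse(xh >= 0 & wh >= 0, 1, ifelse(xh < 0 & wh >= 0, 2, ifelse(xh < 0 & wh < 0, 3, 4)))
  D <- tabulate(q, nbins = 4) / length(q) #occurrence frequency D_i
  S <- sapply(1:4, function(i) sum(xh[q == i] * wh[q == i])) / sum(xh * wh) #relative strength S_i
  list(D = D, S = S, E = (S[2] + S[4]) / (S[1] + S[3])) #exuberance for (T,w): Q2+Q4 disorganized vs Q1+Q3 organized
}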
#loading Reddy package install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE) library(Reddy) #ec data files dir_in="../data/ec-data_10Hz_raw" files=list.files(dir_in,full.names=TRUE) nf=length(files) 4.1 Calculation of occurrence frequencies and strengths of the four quadrants with calc_quadrant_analysis To perform quadrant analysis, Reddy provides two functions: calc_quadrant_analysis for calculating occurrence frequency and strength of the four quadrants and plot_quadrant_analysis for plotting the two variables as scatter plot with a 2d kernel density estimation and a linear regression, as detailed below. The function calc_quadrant_analysis counts the occurrence frequency of each quadrant, calculates their strength as product \\(\\hat{x}\\hat{y}\\) and as covariance \\(\\overline{\\hat{x}'\\hat{y}'}\\). The argument do_normalization = TRUE can be used to normalize the two variables and with hole_sizes one or several hole sizes can be applied. The output is a list containing \\(D_i\\), \\(S_i\\) for every quadrant and hole size. i=8 #select a file tmp=read.table(files[i],sep=",",header=T) qa_Tw=calc_quadrant_analysis(tmp$T_degC,tmp$w_m.s) #based on the raw data (10 Hz) directly (i.e., unrotated) str(qa_Tw) List of 10 $ hole_sizes : int [1:11] 0 1 2 3 4 5 6 7 8 9 ... $ occurrence : int [1:4, 1:11] 5448 2975 6619 2958 4689 144 3812 300 4232 8 ... $ product : num [1:4, 1:11] 1.068 -0.291 0.619 -0.42 0.816 ... $ covariance : num [1:4, 1:11] 0.10672 0.05309 -0.00914 0.03226 0.31902 ... $ covariance_total : num 0.15 $ correlation_total : num 0.434 $ product_total : num [1:18000] 0.0448 -0.0544 -0.4983 0.115 0.3095 ... $ exuberance : num [1:11] -0.422 -2.699 -5.521 -7.411 NaN ... $ organization_ratio: num [1:11] 0.491672 0.052229 0.00992 0.002498 0.000495 ... $ meta : chr "Output format: rows represent the quadrants Q1, Q2, Q3, Q4 -- columns represent selected hole sizes" 4.2 Plotting quadrant analysis with plot_quadrant_analysis plot_quadrant_analysis plots a scatter plot of two variables with a 2d kernel density estimation (MASS::kde2d) and a linear regression (lm()) to allow for a visual inspection. Example: Quadrant Analysis (T,w) during daytime i=8 #select a file -- a daytime example print(files[i]) tmp=read.table(files[i],sep=",",header=T) plot_quadrant_analysis(tmp$T_degC,tmp$w_m.s,xlab="T (normalized)",ylab="w (normalized)",main="Quadrant Analysis (T,w)") [1] "../data/ec-data_10Hz_raw/2018-07-20T120000.csv" Call: lm(formula = yval ~ xval) Residuals: Min 1Q Median 3Q Max -4.5818 -0.5491 0.0005 0.5471 4.1136 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -3.849e-16 6.716e-03 0.00 1 xval 4.337e-01 6.716e-03 64.58 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9011 on 17998 degrees of freedom Multiple R-squared: 0.1881, Adjusted R-squared: 0.1881 F-statistic: 4170 on 1 and 17998 DF, p-value: < 2.2e-16 Example: Quadrant Analysis (T,w) during nighttime i=38 #select a file -- a nighttime example print(files[i]) tmp=read.table(files[i],sep=",",header=T) plot_quadrant_analysis(tmp$T_degC,tmp$w_m.s,xlab="T (normalized)",ylab="w (normalized)",main="Quadrant Analysis (T,w)") #based on the raw data (10 Hz) directly (i.e., unrotated) [1] "../data/ec-data_10Hz_raw/2018-07-21T030000.csv" Call: lm(formula = yval ~ xval) Residuals: Min 1Q Median 3Q Max -5.8334 -0.5262 0.0022 0.5425 6.2520 Coefficients: Estimate Std. 
Error t value Pr(>|t|) (Intercept) -3.006e-17 7.120e-03 0.00 1 xval -2.957e-01 7.121e-03 -41.53 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9553 on 17998 degrees of freedom Multiple R-squared: 0.08745, Adjusted R-squared: 0.0874 F-statistic: 1725 on 1 and 17998 DF, p-value: < 2.2e-16 Example: Quadrant Analysis (u,w) during nighttime plot_quadrant_analysis(tmp$u_m.s,tmp$w_m.s,xlab="u (normalized)",ylab="w (normalized)",main="Quadrant Analysis (u,w)") #based on the raw data (10 Hz) directly (i.e., unrotated) Call: lm(formula = yval ~ xval) Residuals: Min 1Q Median 3Q Max -5.9799 -0.5474 -0.0342 0.5301 6.6338 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -3.749e-16 7.326e-03 0.00 1 xval -1.843e-01 7.326e-03 -25.15 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9829 on 17998 degrees of freedom Multiple R-squared: 0.03395, Adjusted R-squared: 0.0339 F-statistic: 632.5 on 1 and 17998 DF, p-value: < 2.2e-16 Quadrant analysis can be applied to any combination of two measured quantities and also allows checking the measurement quality or significance of the relation between them. It can also be used to check the effect of the coordinate rotation (e.g., rotate_double) visually. Quadrant analysis has been used to study coherent structures, and how they affect fluxes and cause flux dissimilarity, e.g. Thomas and Foken (2007), Li and Bou-Zeid (2011), Schmutz and Vogt (2019) and Mack et al. (2024). The concept can be extended to a combination of three variables in the framework of octant analysis, e.g. Guo et al. (2022). References "],["spectra.html", "5 Spectra 5.1 Spectral analysis of timeseries 5.2 FFT spectrum and comparison to theoretical spectra 5.3 Multiresolution decomposition (MRD)", " 5 Spectra 5.1 Spectral analysis of timeseries Turbulence is associated with different scales, which interact in a complex manner; to investigate them, spectra can be examined. For this, the measurements are transformed from the time domain to the frequency domain. The most common approach is to represent the time series with a series of sine and cosine functions (with different frequencies), which is referred to as Fourier analysis. A numerically optimized way to calculate the Fourier transform is the Fast Fourier Transform (FFT, in rbase: fft). However, periodicity is not always a suitable assumption, such that other basis functions and approaches might be more applicable. With a wavelet transform, information in both the time and the frequency domain is retained, and different wavelet basis functions (e.g., Morlet wavelets or Haar wavelets) allow representing localized and non-periodic behaviour (WaveletComp::wt.image). A very practical discrete wavelet transform is multiresolution decomposition (MRD, Vickers and Mahrt (2003), Reddy::calc_mrd), which is routinely applied in the analysis of eddy-covariance data (see details below). A related method used for flux calculations from eddy-covariance data is based on ogives, i.e., cumulative frequency distributions (the cumulative sum of the cospectral energy). Sievers et al. (2015) developed an ogive optimization, which allows disentangling low-frequency contributions to flux estimates. This approach is particularly relevant under low-flux conditions, e.g. with changing signs in one averaging interval. The low-frequency contributions are generally associated with non-local features, e.g. topography, while high-frequency contributions are local. 
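A minimal ogive sketch via the FFT cross-spectrum (assumptions: mean-removed series, 10 Hz sampling, no tapering or detrending; the optimization of Sievers et al. (2015) is considerably more elaborate):
#ogive sketch: cumulative cospectral energy, accumulated from high to low frequency
calc_ogive_sketch <- function(x, w, fs = 10) {
  n <- length(x)
  X <- fft(x - mean(x)); W <- fft(w - mean(w))
  co <- 2 * Re(Conj(W) * X)[2:(n %/% 2)] / n^2 #one-sided cospectrum (approximate normalization)
  f <- (1:(n %/% 2 - 1)) * fs / n #frequency axis [Hz]
  list(f = f, ogive = rev(cumsum(rev(co)))) #converges towards cov(x,w) at the lowest frequency
}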
Quick overview: Fast Fourier Transform FFT (rbase: fft, spectrum) Wavelets (WaveletComp::wt.image) Multiresolution decomposition MRD (Reddy::calc_mrd) Ogives (agricolae::ogive.freq) #loading Reddy package install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE) library(Reddy) #ec data files dir_in="../data/ec-data_10Hz_raw" files=list.files(dir_in,full.names=TRUE) nf=length(files) i=8 #select a file tmp=read.table(files[i],sep=",",header=T) 5.2 FFT spectrum and comparison to theoretical spectra The rbase function spectrum calculates the spectrum based on FFT and plots by default an associated periodogram. spectrum(tmp$u_m.s) However, to systematically investigate spectral density and reduce the noise, it is recommended to apply binning (i.e., averaging over frequency intervals), which can be done with the function Reddy::calc_spectrum (as a wrapper of spectrum). The resulting averaged spectra can then be compared to theoretical slopes. In homogeneous and isotropic turbulence a spectral slope of -5/3 follows from theoretical considerations (Kolmogorov's energy cascade), which is usually used as a visual reference. Deviations from this -5/3-slope indicate that either more energy (steeper slope) or less energy (weaker slope) is dissipated, which can have various reasons, e.g. turbulence anisotropy or energy injections. spectrum_u = calc_spectrum(tmp$u_m.s) Call: lm(formula = log(sbin[, 2]) ~ bins[2:nbins]) Residuals: Min 1Q Median 3Q Max -2.20343 -0.08638 0.00209 0.21610 1.08777 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -3.9486 0.2669 -14.79 2.23e-16 *** bins[2:nbins] -2.7810 0.1064 -26.15 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.6212 on 34 degrees of freedom (63 observations deleted due to missingness) Multiple R-squared: 0.9526, Adjusted R-squared: 0.9512 F-statistic: 683.6 on 1 and 34 DF, p-value: < 2.2e-16 5.3 Multiresolution decomposition (MRD) Multiresolution decomposition (MRD) is a method to characterize the timescale dependence of variances (spectrum) or covariances (cospectrum) and to find scale gaps between turbulent and submeso-scale motions. It uses Haar wavelets, which have the advantage over Fourier analysis that no periodicity is assumed. For this, the time series is cut in half step by step, as visualized in the figure. 5.3.1 Calculating multiresolution decomposition with calc_mrd #cospectra mrd_uw=calc_mrd(tmp$u_m.s,tmp$w_m.s,time_res=0.1) #momentum flux mrd_Tw=calc_mrd(tmp$T_degC,tmp$w_m.s,time_res=0.1) #sensible heat flux #spectra mrd_ww=calc_mrd(tmp$w_m.s,tmp$w_m.s,time_res=0.1) #vertical velocity mrd_TT=calc_mrd(tmp$T_degC,tmp$T_degC,time_res=0.1) #temperature The function returns a dataframe containing index, exponent \\(m\\), scale (i.e. \\(2^{m}\\)), time [s], mean, median, 25% and 75% quartiles as columns. The number of rows is given by the largest \\(M\\) fulfilling \\(2^M \\le N\\), where \\(N\\) is the number of measurements. 
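Conceptually, calc_mrd performs something like the following simplified sketch (mean values only, no median or quartiles; the series is truncated to \\(2^M\\) points):
#simplified MRD sketch following Vickers and Mahrt (2003)
calc_mrd_sketch <- function(x, w) {
  M <- floor(log2(length(x)))
  x <- x[1:2^M]; w <- w[1:2^M] #truncate to a power of two
  D <- numeric(M + 1)
  for (m in M:0) {
    seg <- 2^m #segment length at this scale
    mx <- colMeans(matrix(x, nrow = seg)) #segment means of x
    mw <- colMeans(matrix(w, nrow = seg)) #segment means of w
    D[m + 1] <- mean(mx * mw) #cospectral contribution at scale 2^m
    x <- x - rep(mx, each = seg); w <- w - rep(mw, each = seg) #remove segment means
  }
  data.frame(m = 0:M, scale = 2^(0:M), D = D)
}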
#look into output mrd_uw A data.frame: 15 × 8 index m scale time mean median q25 q75 <int> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> 1 14 16384 1638.4 2.009888e-03 6.411861e-05 -5.739598e-03 7.411443e-03 2 13 8192 819.2 3.723193e-03 3.046532e-04 -7.508746e-03 1.040180e-02 3 12 4096 409.6 5.007376e-03 6.024576e-04 -9.401348e-03 1.487008e-02 4 11 2048 204.8 5.831290e-03 6.054180e-04 -9.164119e-03 1.592909e-02 5 10 1024 102.4 9.752098e-03 1.325413e-03 -6.967310e-03 1.938746e-02 6 9 512 51.2 1.090534e-02 2.818075e-03 -4.432612e-03 1.858083e-02 7 8 256 25.6 1.465273e-02 3.577008e-03 -3.271827e-03 2.213298e-02 8 7 128 12.8 1.157982e-02 3.560662e-03 -6.819690e-03 1.801111e-02 9 6 64 6.4 9.264997e-03 5.478194e-04 -8.774036e-03 9.014596e-03 10 5 32 3.2 6.077250e-03 4.486199e-03 -2.698569e-03 9.425087e-03 11 4 16 1.6 4.209757e-03 -5.068024e-04 -4.568223e-03 7.532330e-03 12 3 8 0.8 -3.430667e-03 -4.011748e-03 -6.571402e-03 -8.710123e-04 13 2 4 0.4 -6.626465e-03 -6.626465e-03 -7.390707e-03 -5.862222e-03 14 1 2 0.2 -4.079521e-03 -4.079521e-03 -4.079521e-03 -4.079521e-03 15 0 1 0.1 -8.838739e-34 -8.838739e-34 -8.838739e-34 -8.838739e-34 5.3.2 Plotting multiresolution decomposition with plot_mrd plot_mrd takes an object returned by calc_mrd and plots mean, median and quartiles versus time. plot_mrd(mrd_uw, main="Cospectrum (u,w)") #plot_mrd(mrd_Tw, main="Cospectrum (T,w)") #plot_mrd(mrd_ww, main="Spectrum (w,w)") plot_mrd(mrd_TT, main="Spectrum (T,T)") Composite MRDs can be created by averaging over several MRDs (also possible to distinguish different flow regimes, e.g., based on the stability parameter calc_zeta) and can be used to find long-term characteristics (e.g., scale gaps). Scale gaps are defined as the zero-crossings of the spectrum or cospectrum (beyond the first zero-crossing at the measurement frequency itself). References "],["reynolds-stress-tensor.html", "6 Reynolds stress tensor 6.1 Invariant analysis of the Reynolds stress tensor 6.2 Anisotropy and velocity aspect ratio (VAR)", " 6 Reynolds stress tensor The Reynolds stress tensor \\[ R := \\overline{u_i ' u_j '} = \\begin{pmatrix} \\overline{u'^2} & \\overline{u'v'} & \\overline{u'w'} \\\\ \\overline{v'u'} & \\overline{v'^2} & \\overline{v'w'} \\\\ \\overline{w'u'} & \\overline{w'v'} & \\overline{w'^2} \\end{pmatrix}, \\quad \\textrm{with} \\quad R = R^T\\] summarizes all normal and shear stresses. Based on an invariant analysis, the most important geometric properties can be derived from its eigenvalues and eigenvectors. Using a linear combination of the three eigenvalues, a two-dimensional mapping into an equilateral triangle – called barycentric map (Banerjee et al. (2007), see figure) – with the coordinates \\((x_B,y_B)\\), can be constructed, which allows characterizing the anisotropy (\\(y_B\\)) and the limiting states. The three corners of the triangle represent the three limiting states, the 1-, 2- and 3-component limits, where the 3-component limit corresponds to isotropic turbulence. The 2-component limit represents “disk-like” turbulence associated with strong wind shear and the 1-component limit “rod-like” turbulence, which is related to wave-like motions, e.g. internal gravity waves. 
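For a single Reynolds stress tensor, the barycentric coordinates can be sketched as follows (following the linear mapping of Banerjee et al. (2007); Reddy's calc_anisotropy, used below, vectorizes this over all averaging intervals and also returns the Lumley-triangle coordinates):
#barycentric map coordinates for one 3x3 Reynolds stress tensor R (sketch)
barycentric_xy_sketch <- function(R) {
  b <- R / sum(diag(R)) - diag(3) / 3 #anisotropy tensor b_ij
  lam <- sort(eigen(b, symmetric = TRUE)$values, decreasing = TRUE) #ordered eigenvalues
  C1 <- lam[1] - lam[2] #1-component ("rod-like") weight
  C2 <- 2 * (lam[2] - lam[3]) #2-component ("disk-like") weight
  C3 <- 3 * lam[3] + 1 #3-component (isotropic) weight, C1+C2+C3 = 1
  c(xb = C1 + 0.5 * C3, yb = sqrt(3) / 2 * C3) #corners at (1,0), (0,0) and (0.5, sqrt(3)/2)
}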
```r
#loading Reddy package
install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE)
library(Reddy)

#read in processed example data
dat=readRDS("../data/ec-data_30min_processed/processed_data_example.rds")
```

6.1 Invariant analysis of the Reynolds stress tensor

6.1.1 Performing the invariant analysis of the Reynolds stress tensor with calc_anisotropy

The function `calc_anisotropy` performs the invariant analysis. It takes vectors of all six independent components of the Reynolds stress tensor as input (since the tensor is symmetric, only six of its nine entries are independent), in the form

\[ R = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{pmatrix} \]

and in the order \(a_{11}, a_{12}, a_{13}, a_{22}, a_{23}, a_{33}\).

```r
rey_ana = calc_anisotropy(dat$u_sd^2,dat$cov_uv,dat$cov_uw,dat$v_sd^2,dat$cov_vw,dat$w_sd^2)
str(rey_ana)
```

```
List of 6
 $ xb          : num [1:127] 0.348 0.249 0.326 0.257 0.188 ...
 $ yb          : num [1:127] 0.119 0.124 0.101 0.103 0.117 ...
 $ eta         : num [1:127] 0.165 0.152 0.166 0.157 0.148 ...
 $ xi          : num [1:127] -0.0562 -0.1214 -0.0821 -0.1202 -0.1353 ...
 $ eigenvalues : num [1:127, 1:3] 0.283 0.232 0.281 0.245 0.204 ...
 $ eigenvectors: num [1:127, 1:3, 1:3] 0.89759 0.93931 0.86082 0.00664 -0.64731 ...
```

The output contains the coordinates of the barycentric map (`xb`, `yb`) and of the Lumley triangle (`eta`, `xi`) as well as all eigenvalues and eigenvectors.

6.1.2 Plotting the barycentric map using plot_barycentric_map

The function `plot_barycentric_map` takes `xb` and `yb` (as calculated by the invariant analysis above) as input and plots them in the barycentric map.

```r
plot_barycentric_map(rey_ana$xb,rey_ana$yb)
```

The three corners of the triangle represent the three limiting states: the 3-component limit (isotropic, “sphere-like”) at \((0.5, \sqrt{3}/2)\), the 2-component limit (“disk-like”) at \((0,0)\) and the 1-component limit (“rod-like”) at \((1,0)\). \(y_B\) is a measure of anisotropy, ranging from \(y_B = 0\) (completely anisotropic) to \(y_B = \sqrt{3}/2\) (perfectly isotropic). Anisotropy leads to deviations from the classical scaling functions used in Monin-Obukhov similarity theory, see e.g. Stiperski, Calaf, and Rotach (2019) and Stiperski and Calaf (2023).
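These corner positions can be verified with idealized synthetic tensors -- a sketch assuming `calc_anisotropy` also accepts length-one inputs:

```r
#idealized limiting cases (scalar inputs, in the order a11,a12,a13,a22,a23,a33)
iso = calc_anisotropy(1,0,0,1,0,1)   #isotropic R: expect (0.5, sqrt(3)/2)
c(iso$xb, iso$yb)
rod = calc_anisotropy(1,0,0,0,0,0)   #1-component, "rod-like": expect (1, 0)
c(rod$xb, rod$yb)
disk = calc_anisotropy(1,0,0,1,0,0)  #2-component, "disk-like": expect (0, 0)
c(disk$xb, disk$yb)
```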
6.2 Anisotropy and velocity aspect ratio (VAR)

The velocity aspect ratio VAR (`calc_var`) is an approximation of anisotropy that takes only the diagonal elements of the Reynolds stress tensor, i.e., \(\sigma_u, \sigma_v, \sigma_w\), into account (Mahrt 2010). The deviation from a linear regression between VAR and \(y_B\) allows quantifying the effect of the shear stresses. As seen in the plot below, for small values of \(y_B\) (i.e., very anisotropic turbulence) the deviation from the linear fit is larger than for higher values of \(y_B\).

```r
var=calc_var(dat$u_sd,dat$v_sd,dat$w_sd)
plot(var,rey_ana$yb,col=rgb(0,0,0,0.5),pch=20,cex=2,xlab="VAR",ylab="yb")
grid()
fit=lm(rey_ana$yb~var)
abline(fit,lwd=2,col=2)
print(summary(fit))
```

```
Call:
lm(formula = rey_ana$yb ~ var)

Residuals:
      Min        1Q    Median        3Q       Max 
-0.011421 -0.003373 -0.001110  0.003227  0.021316 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.08683    0.00265  -32.77   <2e-16 ***
var          0.64442    0.00660   97.64   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.005629 on 125 degrees of freedom
Multiple R-squared:  0.9871,  Adjusted R-squared:  0.987 
F-statistic:  9534 on 1 and 125 DF,  p-value: < 2.2e-16
```

7 Flux footprint

7.1 Flux footprint parametrizations

The calculation of a 2d flux footprint makes it possible to estimate the size of the surface area that contributes to the measured flux. This also allows analyzing whether changes in the flux result from a change in the footprint (e.g., surface composition, vegetation, surface roughness). Here, the flux footprint parametrization according to Kljun et al. (2015) is used. The mathematical idea for deriving a flux footprint parametrization is to express the flux \(F_c\) as an integral over the distribution of its sinks and sources \(S_c\) times a transfer function \(f\) (the flux footprint):

\[ F_c(0,0,z) = \int\int S_c(x,y) f(x,y) \,dx\,dy \]

By treating the streamwise and crosswind velocity components independently, the footprint can be expressed as the product of the crosswind-integrated footprint \(\overline{f^y}(x)\) (which is then only a function of \(x\)) and a function \(D_y\) expressing the crosswind dispersion:

\[ f(x,y) = \overline{f^y}(x) D_y. \]

This assumption leads to a footprint that is symmetric in the crosswind direction. For the further derivation, a concrete footprint model has to be applied, which in Kljun et al. (2015) is an advanced Lagrangian particle dispersion model (LPDM-B) based on 3d particle backtracking between the surface and the boundary layer height \(h\), valid for steady flows under all stabilities. There are several other approaches to flux footprint estimation (e.g., Hsieh, Katul, and Chi (2000) and Kormann and Meixner (2001)), but the Kljun et al. (2015) model is the most common approach today.

```r
#load Reddy package
install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE)
library(Reddy)

#read in processed example data
dat=readRDS("../data/ec-data_30min_processed/processed_data_example.rds")

#select file
i=8 #daytime example
```

7.2 2D flux footprint estimate

7.2.1 Calculate 2D flux footprint estimate with calc_flux_footprint

The function `calc_flux_footprint` uses the 2d flux footprint parametrization (FFP) according to Kljun et al. (2015) to calculate the footprint based on the measurement height `zm`, the mean horizontal wind speed `u_mean`, the boundary layer height `h`, the Obukhov length `L` (`calc_L`), the standard deviation of the crosswind component `v_sd` and either the friction velocity `ustar` (`calc_ustar`) or the surface roughness length `z0`, at a resolution given by `nres`. The boundary layer height can be taken e.g. from ERA5.
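For orientation, `calc_ustar` and `calc_L` presumably implement the standard definitions (with the von Kármán constant \(\kappa \approx 0.4\) and the gravitational acceleration \(g\)):

\[ u_* = \left( \overline{u'w'}^{\,2} + \overline{v'w'}^{\,2} \right)^{1/4} \quad \textrm{and} \quad L = -\frac{u_*^3\, \overline{T}}{\kappa\, g\, \overline{w'T'}}, \]

with \(\overline{T}\) in Kelvin.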
```r
ustar=calc_ustar(dat$cov_uw,dat$cov_vw)
L=calc_L(ustar,dat$T_mean,dat$cov_wT)
zm=4.4
h=700
ffp=calc_flux_footprint(zm,dat$u_mean[i],h,L[i],dat$v_sd[i],ustar[i])
str(ffp)
```

```
List of 9
 $ xmax          : num 23
 $ x             : num [1:999] 4.38 5.17 5.96 6.76 7.55 ...
 $ fy_mean       : num [1:999] 3.43e-20 3.60e-10 5.56e-07 1.85e-05 1.37e-04 ...
 $ x2d           : num [1:1499, 1:999] 4.38 4.38 4.38 4.38 4.38 ...
 $ y2d           : num [1:1499, 1:999] -592 -591 -590 -589 -589 ...
 $ f2d           : num [1:999, 1:1499] 0 0 0 0 0 0 0 0 0 0 ...
 $ xcontour      :List of 9
  ..$ : num [1:1573] 6.76 6.75 6.72 6.7 6.69 ...
  ..$ : num [1:815] 7.55 7.48 7.42 7.38 7.35 ...
  ..$ : num [1:543] 8.34 8.25 8.16 8.08 8.03 ...
  ..$ : num [1:397] 9.13 9.03 8.91 8.81 8.73 ...
  ..$ : num [1:307] 9.92 9.78 9.64 9.52 9.43 ...
  ..$ : num [1:235] 10.7 10.7 10.5 10.3 10.2 ...
  ..$ : num [1:181] 11.5 11.3 11.2 11 11 ...
  ..$ : num [1:133] 12.3 12.2 12.1 12 12 ...
  ..$ : num [1:87] 13.9 13.8 13.7 13.7 13.7 ...
 $ ycontour      :List of 9
  ..$ : num [1:1573] -2.39 -2.37 -1.58 -0.79 0 ...
  ..$ : num [1:815] -4.56 -3.95 -3.16 -2.37 -1.58 ...
  ..$ : num [1:543] -6.14 -5.53 -4.74 -3.95 -3.16 ...
  ..$ : num [1:397] -6.85 -6.32 -5.53 -4.74 -3.95 ...
  ..$ : num [1:307] -6.95 -6.32 -5.53 -4.74 -3.95 ...
  ..$ : num [1:235] -6.48 -6.32 -5.53 -4.74 -3.95 ...
  ..$ : num [1:181] -5.45 -4.74 -3.95 -3.16 -2.37 ...
  ..$ : num [1:133] -3.37 -3.16 -2.37 -1.58 -0.79 ...
  ..$ : num [1:87] -1.91 -1.58 -0.79 0 0.79 ...
 $ contour_levels: num [1:9] 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1
```

Therein, `fy_mean` represents the crosswind-integrated footprint with coordinates `x`, and `xmax` is the location of the footprint maximum. `(x2d, y2d, f2d)` represent the 2d flux footprint and `(xcontour, ycontour)` the contour lines of the respective contour levels, which can be specified in the `contours` argument of `calc_flux_footprint`. The output list `ffp` can be easily plotted using the function `plot_flux_footprint`, as shown in the following.

7.2.2 Plotting of flux footprint with plot_flux_footprint

The function `plot_flux_footprint` takes an object returned by `calc_flux_footprint` as input and plots the crosswind-integrated footprint and the 2d footprint.

```r
plot_flux_footprint(ffp)
```

Flux footprint climatologies can be created as composite footprints by averaging over several FFP calculations, see the dedicated webpage https://www.footprint.kljun.net/. The flux footprint can then be plotted on an aerial photo or an ecosystem type classification, see e.g. Fig. 5 in Kljun et al. (2015) or Fig. 4 in Pirk et al. (2023).

8 Surface energy balance

8.1 Surface energy balance closure

The surface energy balance (SEB) is a framework to describe exchange processes between atmosphere and land, and can be written as

\[ R_{net} - G = SH + LH + I_{SEB} \]

with the net radiation \(R_{net} = SW\downarrow - SW\uparrow + LW\downarrow - LW\uparrow\) (combining shortwave (SW) and longwave (LW), incoming (\(\downarrow\)) and outgoing (\(\uparrow\)) radiative fluxes) and the ground heat flux \(G\). The turbulent fluxes are given by

\[ SH = \rho c_p \overline{w'T'} \quad \textrm{and} \quad LH = \rho L_v \overline{w'q'} \]

with the air density \(\rho\), the heat capacity of air at constant pressure \(c_p\) and the latent heat of vaporization \(L_v\). The sensible heat flux SH is positive under unstable conditions (the ground heats the atmosphere) and negative under stable conditions (the atmosphere heats the ground). A positive latent heat flux LH represents evaporation or transpiration, whereas a negative LH indicates condensation or deposition of water (vapor). The last term, \(I_{SEB}\), describes the imbalance (or residual flux).
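As a plain-R illustration of these two definitions, using the processed data `dat` from before (the constants are typical near-surface values, and `cov_wq` is a hypothetical column name for the humidity flux \(\overline{w'q'}\)):

```r
rho=1.2    #air density [kg m-3] (assumed typical value)
cp=1004    #specific heat of air at constant pressure [J kg-1 K-1]
Lv=2.5e6   #latent heat of vaporization [J kg-1]
sh=rho*cp*dat$cov_wT   #sensible heat flux [W m-2]
lh=rho*Lv*dat$cov_wq   #latent heat flux [W m-2]; cov_wq is hypothetical
```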
Depending on the value of \(I_{SEB}\), the SEB is called closed or unclosed:

\[ I_{SEB} \begin{cases} = 0 \rightarrow \textrm{closed} \\ \ne 0 \rightarrow \textrm{unclosed} \end{cases} \quad \textrm{and} \quad CR := \frac{R_{net}-G}{SH+LH} \begin{cases} = 1 \rightarrow \textrm{closed} \\ \ne 1 \rightarrow \textrm{unclosed}, \end{cases} \]

where the closure ratio CR expresses the closure as a relative measure. The advantage of the closure ratio is the normalization by the available radiative energy, but the disadvantage is that biases can cancel. Wilson et al. (2002) showed, based on the FLUXNET towers, that there is on average a SEB non-closure of 20-30 %. Reasons for the non-closure are (similar to the TKE budget non-closure) horizontal advection, flux divergence, submeso-scale motions, different measurement footprints, melting, runoff or rain fluxes, and canopy interactions, as well as possible measurement and post-processing errors (reviewed in great detail by Mauder, Foken, and Cuxart (2020)). Stoy et al. (2013) found a larger non-closure in heterogeneous terrain and a correlation with friction velocity.

8.2 Calculating surface energy balance

```r
#loading Reddy package
install.packages("../src/Reddy_0.0.0.9000.tar.gz",repos=NULL,source=TRUE,quiet=TRUE)
library(Reddy)
library(dplyr)

sigma=5.67*10^(-8)

#read in processed example data
dat=readRDS("../data/ec-data_30min_processed/processed_data_example.rds")
dat$TIME=as.POSIXct(dat$time,format="%F %T")
```

Plotting of surface energy balance with plot_seb

The function `plot_seb` plots the surface energy balance as a time series and as a scatter plot (\(R_{net}-G\) vs. \(SH+LH\)) with a linear regression, and also calculates the residual flux and the closure ratio. In addition to the turbulence data used so far (which contain the two turbulent fluxes SH and LH), we now also need radiative measurements (all four radiative fluxes SWin, SWout, LWin, LWout) and the ground heat flux (GH), which are available in the file `../data/radiation-data_30min/biomet_data.csv` as 30-minute averages.
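The residual flux and the closure ratio from section 8.1 can also be computed by hand once the data are merged -- a minimal sketch using the biomet column names introduced below, with `shf1` standing in for the ground heat flux \(G\):

```r
rnet=dat$swin-dat$swout+dat$lwin-dat$lwout   #net radiation R_net
iseb=rnet-dat$shf1-dat$sh-dat$lh             #residual flux I_SEB
cr=(rnet-dat$shf1)/(dat$sh+dat$lh)           #closure ratio CR
summary(cr)
```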
```r
#read in radiation data
dat_rad=read.table("../data/radiation-data_30min/biomet_data.csv",sep=",",header=T)
colnames(dat_rad)=c("time","rh","ta","swin","swout","lwin","lwout","shf1","shf2")
dat_rad$TIME=as.POSIXct(dat_rad$time,format="%F %T")
dat=inner_join(dat,dat_rad,by="TIME")
head(dat)
```

```
A data.frame: 6 × 40 (middle columns omitted, ⋯)
  time.x              u_mean   v_mean        w_mean        ws_mean  wd_mean   T_mean   h2o_mean    co2_mean     u_sd     ⋯
1 2018-07-20 08:30:00 2.872084 -3.939352e-16  6.931732e-06 2.873170 -2.289284 15.86938 0.007254132 0.0006009614 1.062143 ⋯
2 2018-07-20 09:00:00 2.864793  7.504684e-17  9.052527e-08 2.864538 -2.326045 16.55190 0.007584267 0.0005978138 1.105122 ⋯
3 2018-07-20 09:30:00 3.996526  9.365292e-16  3.049021e-05 4.002522 -2.035397 17.05704 0.007472065 0.0005954894 1.409495 ⋯
4 2018-07-20 10:00:00 4.998016 -2.812846e-16 -2.391239e-06 4.997530 -1.977737 17.60447 0.006762097 0.0005953881 1.289326 ⋯
5 2018-07-20 10:30:00 4.879095  4.062636e-16  2.187117e-05 4.880696 -2.014489 18.08994 0.005410862 0.0005948927 1.469949 ⋯
6 2018-07-20 11:00:00 5.225037 -8.977916e-17 -6.972624e-06 5.223984 -2.261205 18.24207 0.004422858 0.0005954766 1.319227 ⋯

  TIME                time.y              rh       ta       swin     swout    lwin     lwout    shf1     shf2
1 2018-07-20 08:30:00 2018-07-20 08:30:00 56.54792 16.28775 715.4410 105.7314 312.4243 458.7009 21.59691 1.682706
2 2018-07-20 09:00:00 2018-07-20 09:00:00 54.34070 16.77433 756.2658 109.9171 319.3567 460.9099 26.81290 1.952374
3 2018-07-20 09:30:00 2018-07-20 09:30:00 47.65451 17.40209 797.9751 112.8786 301.3082 463.2456 30.59997 2.139507
4 2018-07-20 10:00:00 2018-07-20 10:00:00 36.98989 18.08719 809.7048 114.7406 295.6089 467.7517 33.01136 2.270067
5 2018-07-20 10:30:00 2018-07-20 10:30:00 30.13753 18.32750 823.0150 117.3849 291.3130 469.1732 34.70869 2.299792
6 2018-07-20 11:00:00 2018-07-20 11:00:00 25.45414 18.60887 827.8588 118.4659 288.2217 467.7229 34.78918 2.284315
```

```r
plot_seb(dat$swin,dat$swout,dat$lwin,dat$lwout,dat$sh,dat$lh,dat$shf1)
```

```
Call:
lm(formula = y ~ x)

Residuals:
    Min      1Q  Median      3Q     Max 
-134.13  -19.47   -5.41   28.33  170.39 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 27.66214    5.45569    5.07  1.4e-06 ***
x            0.52887    0.02051   25.78  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 48.56 on 125 degrees of freedom
Multiple R-squared:  0.8417,  Adjusted R-squared:  0.8405 
F-statistic: 664.7 on 1 and 125 DF,  p-value: < 2.2e-16
```

8.3 Hydrological measures: Bowen ratio and evaporative fraction

The Bowen ratio \(BR\) and the evaporative fraction \(EF\) are given by

\[ BR := \frac{SH}{LH} \quad\quad \textrm{and} \quad\quad EF := \frac{LH}{LH+SH} \]

and both describe the partitioning of heat transfer between sensible and latent heat. They are related to the surface and its water availability, and can be used to compare different surface types in different climates. For this purpose they are usually applied to monthly, seasonal or annual averages (here, for illustration, they are calculated for a single day only).

```r
print(paste("BR (average):", calc_br(mean(dat$sh),mean(dat$lh))))
print(paste("EF (average):", calc_ef(mean(dat$sh),mean(dat$lh))))
```

```
[1] "BR (average): 1.2715665126865"
[1] "EF (average): 0.440224837976387"
```
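For longer records, the coarser averaging suggested above can be done before taking the ratios -- a sketch with daily means, assuming `calc_br` and `calc_ef` accept vector input:

```r
day=as.Date(dat$TIME)                         #daily aggregation as an example
sh_day=tapply(dat$sh, day, mean, na.rm=TRUE)  #daily mean sensible heat flux
lh_day=tapply(dat$lh, day, mean, na.rm=TRUE)  #daily mean latent heat flux
calc_br(sh_day, lh_day)
calc_ef(sh_day, lh_day)
```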
8.4 Meteorological contextualization

The turbulent fluxes are mainly driven by the available energy from incoming solar and longwave radiation, which is attenuated in the atmosphere by clouds in a complex way, depending on cloud height and type. A way to estimate the cloud cover without measuring it directly is the clear-sky index (CSI). Based on incoming longwave radiation, temperature and humidity, the CSI can be derived as the ratio of the actual emissivity \(\epsilon_{actual}\) to the theoretical (clear-sky) emissivity \(\epsilon_{theoretical}\). If \(\epsilon_{actual} \le \epsilon_{theoretical}\), i.e., \(CSI \le 1\), the sky is assumed to be clear (see e.g. the review paper by Lehner, Rotach, and Obleitner (2021)).

```r
dat$csi=calc_csi(dat$T_mean+273.15,dat$lwin,dat$rh)
plot(dat$csi,type="l",lwd=2,ylab="CSI")
```
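The emissivity ratio itself is straightforward to sketch by hand. The clear-sky emissivity below uses the Brutsaert (1975) parameterization as one common choice, which may differ from what `calc_csi` uses internally:

```r
sigma=5.67e-8                                       #Stefan-Boltzmann constant [W m-2 K-4]
Tk=dat$T_mean+273.15                                #air temperature [K]
es=6.112*exp(17.62*dat$T_mean/(243.12+dat$T_mean))  #saturation vapor pressure [hPa], Magnus formula
ea=dat$rh/100*es                                    #actual vapor pressure [hPa]
eps_actual=dat$lwin/(sigma*Tk^4)                    #actual emissivity from measured LWin
eps_clear=1.24*(ea/Tk)^(1/7)                        #Brutsaert (1975) clear-sky emissivity
csi_sketch=eps_actual/eps_clear                     #clear-sky index
```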
diff --git a/index.Rmd b/index.Rmd
index 5636085..ef5421f 100644
--- a/index.Rmd
+++ b/index.Rmd
@@ -17,10 +17,9 @@ description: ""
 # Introduction
 <br>
-**Package**: Reddy package (Github)<br>
+**Package source**: Reddy package (Github)
**Jupyter notebooks**: jupyter notebooks (Github)
-**Publication**: Link
-**Any comments or issues?**: create an issue (Github)
+**Any comments or issues?** create an issue (Github)
diff --git a/notebooks/05_flux-footprint-estimation.ipynb b/notebooks/05_flux-footprint-estimation.ipynb
index 6a1445f..45c3849 100644
--- a/notebooks/05_flux-footprint-estimation.ipynb
+++ b/notebooks/05_flux-footprint-estimation.ipynb
@@ -99,6 +99,14 @@
     "str(ffp)"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "73116195",
+   "metadata": {},
+   "source": [
+    "Therein, `fy_mean` represents the crosswind-integrated footprint with coordinates `x`, and `xmax` is the location of the footprint maximum. `(x2d, y2d, f2d)` represent the 2d flux footprint and `(xcontour, ycontour)` the contour lines of the respective contour levels, which can be specified in the `contours` argument of `calc_flux_footprint`. The output list `ffp` can be easily plotted using the function `plot_flux_footprint`, as shown in the following."
+   ]
+  },
   {
    "cell_type": "markdown",
    "id": "8c6a69b7",