Feature/parallelizing varpro #42

Merged
merged 14 commits on Jan 26, 2025
2 changes: 1 addition & 1 deletion .github/workflows/build.yml
@@ -26,4 +26,4 @@ jobs:
- uses: actions-rs/cargo@v1
with:
command: check
args: --tests --benches
args: --tests --benches --all-features
2 changes: 1 addition & 1 deletion .github/workflows/lints.yml
@@ -27,7 +27,7 @@ jobs:
- uses: actions-rs/cargo@v1
with:
command: fmt
args: --all -- --check
args: --all -- --check

clippy:
name: Clippy
1 change: 1 addition & 0 deletions .github/workflows/tests.yml
@@ -26,3 +26,4 @@ jobs:
- uses: actions-rs/cargo@v1
with:
command: test
args: --all-targets --all-features
18 changes: 18 additions & 0 deletions CHANGES.md
@@ -3,6 +3,24 @@
This is the changelog for the `varpro` library.
See also here for a [version history](https://crates.io/crates/varpro/versions).

## 0.11.0

* Removed `new` and `minimize` associated functions of the `LevMarSolver`
type.
* Require `Send` and `Sync` bounds for `BasisFunction` implementors.
* Expose parallel calculations for `LevMarProblem` using extra generic
arguments and the rayon dependency. Enable them with the `parallel`
feature flag.
* Require `Send` and `Sync` trait bounds on all base functions for building
separable models, whether or not the calculations are run in parallel.
In practice this should not restrict reasonable models. If the `Send`
and `Sync` bounds cannot be satisfied, it is still possible to implement
`SeparableNonlinearModel` by hand.

## 0.10.1

Documentation updates

## 0.10.0

- Updated dependencies to current versions.
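As an aside for readers of the changelog: the sketch below (not part of the diff) mirrors the benchmark changes further down in this PR. Fitting goes through `LevMarSolver::default().fit(...)` rather than the removed `new`/`minimize` functions, and code can stay generic over the new `PAR` const parameter of `LevMarProblem`. The import paths, the `Debug` bound, and the `DVector<f64>` return type of `nonlinear_parameters` are assumptions taken from the benches and doc examples.

```rust
use levenberg_marquardt::LeastSquaresProblem;
use nalgebra::{DVector, Dyn};
use varpro::prelude::*;
use varpro::solvers::levmar::{LevMarProblem, LevMarSolver};

// Works for both the sequential (PAR = false) and the parallel (PAR = true)
// problem variant; the extra trait bound mirrors the one added to the benches.
fn fit_and_extract<Model, const PAR: bool>(
    problem: LevMarProblem<Model, false, PAR>,
) -> DVector<f64>
where
    Model: SeparableNonlinearModel<ScalarType = f64> + std::fmt::Debug,
    LevMarProblem<Model, false, PAR>: LeastSquaresProblem<Model::ScalarType, Dyn, Dyn>,
{
    // `fit` replaces the removed `new` and `minimize` associated functions.
    let fit_result = LevMarSolver::default()
        .fit(problem)
        .expect("fitting should not fail for a well-posed problem");
    fit_result.nonlinear_parameters()
}
```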
11 changes: 8 additions & 3 deletions Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "varpro"
version = "0.10.1"
version = "0.11.0"
authors = ["geo-ant"]
edition = "2021"
license = "MIT"
@@ -17,10 +17,10 @@ members = ["shared_test_code"]
[dependencies]
thiserror = "1"
levenberg-marquardt = "0.14"
nalgebra = { version = "0.33" } #, features = ["rayon"]}
nalgebra = { version = "0.33", features = []}
num-traits = "0.2"
distrs = "0.2"
# rayon = "1.6"
rayon = {version = "1.6", optional = true}

[dev-dependencies]
approx = "0.5"
@@ -33,13 +33,18 @@ mockall = "0.11"
rand = "0.8"
byteorder = "1.5"

[features]
# additional solvers which use multithreading with rayon for some part of the calculations
parallel = ["rayon", "nalgebra/rayon"]

[[bench]]
name = "double_exponential_without_noise"
harness = false

[[bench]]
name = "multiple_right_hand_sides"
harness = false
required-features = ["parallel"]

[package.metadata.docs.rs]
# To build locally use
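The new `parallel` feature is opt-in for downstream crates (for example `varpro = { version = "0.11", features = ["parallel"] }`). Below is a hedged sketch of how a consumer might gate the parallel path behind a forwarded feature flag of its own; `into_parallel` comes from the benchmark further down, while its exact signature, the extra trait bounds, and the feature forwarding are assumptions.

```rust
use varpro::prelude::*;
use varpro::solvers::levmar::LevMarProblem;

// Assumes the downstream crate declares its own `parallel` feature that
// forwards to `varpro/parallel`; only then is the parallel variant available.
#[cfg(feature = "parallel")]
fn parallelize<Model>(
    problem: LevMarProblem<Model, true, false>,
) -> LevMarProblem<Model, true, true>
where
    Model: SeparableNonlinearModel<ScalarType = f64> + Send + Sync,
{
    // Converts a sequential multiple-right-hand-sides problem into its
    // rayon-backed counterpart, as done in the new benchmark below.
    problem.into_parallel()
}
```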
7 changes: 5 additions & 2 deletions benches/double_exponential_without_noise.rs
@@ -30,7 +30,7 @@ struct DoubleExponentialParameters {
fn build_problem<Model>(
true_parameters: DoubleExponentialParameters,
mut model: Model,
) -> LevMarProblem<Model, false>
) -> LevMarProblem<Model, false, false>
where
Model: SeparableNonlinearModel<ScalarType = f64>,
DefaultAllocator: nalgebra::allocator::Allocator<Dyn>,
@@ -61,9 +61,12 @@ where
.expect("Building valid problem should not panic")
}

fn run_minimization<Model>(problem: LevMarProblem<Model, false>) -> (DVector<f64>, DVector<f64>)
fn run_minimization<Model, const PAR: bool>(
problem: LevMarProblem<Model, false, PAR>,
) -> (DVector<f64>, DVector<f64>)
where
Model: SeparableNonlinearModel<ScalarType = f64> + std::fmt::Debug,
LevMarProblem<Model, false, PAR>: LeastSquaresProblem<Model::ScalarType, Dyn, Dyn>,
{
let result = LevMarSolver::default()
.fit(problem)
39 changes: 37 additions & 2 deletions benches/multiple_right_hand_sides.rs
@@ -1,4 +1,5 @@
use criterion::{criterion_group, criterion_main, Criterion};
use levenberg_marquardt::LeastSquaresProblem;
use nalgebra::DMatrix;
use nalgebra::DVector;
use nalgebra::Dyn;
@@ -23,7 +24,7 @@ struct DoubleExponentialParameters {
fn build_problem_mrhs<Model>(
true_parameters: DoubleExponentialParameters,
mut model: Model,
) -> LevMarProblem<Model, true>
) -> LevMarProblem<Model, true, false>
where
Model: SeparableNonlinearModel<ScalarType = f64>,
{
@@ -37,9 +38,12 @@ where
.expect("Building valid problem should not panic")
}

fn run_minimization_mrhs<Model>(problem: LevMarProblem<Model, true>) -> (DVector<f64>, DMatrix<f64>)
fn run_minimization_mrhs<Model, const PAR: bool>(
problem: LevMarProblem<Model, true, PAR>,
) -> (DVector<f64>, DMatrix<f64>)
where
Model: SeparableNonlinearModel<ScalarType = f64> + std::fmt::Debug,
LevMarProblem<Model, true, PAR>: LeastSquaresProblem<Model::ScalarType, Dyn, Dyn>,
{
let result = LevMarSolver::default()
.fit(problem)
@@ -95,6 +99,37 @@ fn bench_double_exp_no_noise_mrhs(c: &mut Criterion) {
criterion::BatchSize::SmallInput,
)
});

group.bench_function("Handcrafted Model (MRHS) [multithreaded]", |bencher| {
bencher.iter_batched(
|| {
build_problem_mrhs(
true_parameters.clone(),
DoubleExpModelWithConstantOffsetSepModel::new(x.clone(), tau_guess),
)
.into_parallel()
},
run_minimization_mrhs,
criterion::BatchSize::SmallInput,
)
});

group.bench_function("Using Model Builder (MRHS) [multithreaded]", |bencher| {
bencher.iter_batched(
|| {
build_problem_mrhs(
true_parameters.clone(),
get_double_exponential_model_with_constant_offset(
x.clone(),
vec![tau_guess.0, tau_guess.1],
),
)
.into_parallel()
},
run_minimization_mrhs,
criterion::BatchSize::SmallInput,
)
});
}

criterion_group!(
4 changes: 2 additions & 2 deletions src/basis_function/detail.rs
@@ -14,7 +14,7 @@ use nalgebra::{DVector, Scalar};
impl<ScalarType, Func> BasisFunction<ScalarType, ScalarType> for Func
where
ScalarType: Scalar,
Func: Fn(&DVector<ScalarType>, ScalarType) -> DVector<ScalarType>,
Func: Fn(&DVector<ScalarType>, ScalarType) -> DVector<ScalarType> + Send + Sync,
{
fn eval(&self, x: &DVector<ScalarType>, params: &[ScalarType]) -> DVector<ScalarType> {
if params.len() != Self::ARGUMENT_COUNT {
@@ -49,7 +49,7 @@ macro_rules! count_args {
// https://github.com/actix/actix-web/blob/web-v3.3.2/src/handler.rs
macro_rules! basefunction_impl_helper ({$ScalarType:ident, $(($n:tt, $T:ident)),+} => {
impl<$ScalarType, Func> BasisFunction<$ScalarType,($($T,)+)> for Func
where Func: Fn(&DVector<$ScalarType>,$($T,)+) -> DVector<$ScalarType>,
where Func: Fn(&DVector<$ScalarType>,$($T,)+) -> DVector<$ScalarType> + Send + Sync,
$ScalarType : Scalar
{
fn eval(&self, x : &DVector<$ScalarType>,params : &[$ScalarType]) -> DVector<$ScalarType> {
2 changes: 1 addition & 1 deletion src/basis_function/mod.rs
@@ -38,7 +38,7 @@ use nalgebra::{DVector, Scalar};
/// that allows us to implement this trait for functions taking different arguments. Just FYI: The
/// type reflects the list of parameters `$\alpha_j$`, so that for a function `Fn(&DVector<ScalarType>,ScalarType) -> DVector<ScalarType>`
/// it follows that `ArgList=ScalarType`, while for `Fn(&DVector<ScalarType>,ScalarType,ScalarType) -> DVector<ScalarType>` it is `ArgList=(ScalarType,ScalarType)`.
pub trait BasisFunction<ScalarType, ArgList>
pub trait BasisFunction<ScalarType, ArgList>: Send + Sync
where
ScalarType: Scalar + Clone,
{
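For context on the new `Send + Sync` requirement on basis functions: closures that capture nothing, or that capture only thread-safe state, are automatically `Send + Sync` and keep working unchanged; only closures holding non-thread-safe types such as `Rc` or `RefCell` are newly rejected. A small standalone illustration in plain Rust (not part of this diff):

```rust
use nalgebra::DVector;
use std::sync::Arc;

fn main() {
    // Captures nothing: the closure is automatically Send + Sync and still
    // satisfies the blanket BasisFunction impl with the new bounds.
    let exp_decay = |x: &DVector<f64>, tau: f64| x.map(|t| (-t / tau).exp());

    // Capturing only Send + Sync data (an Arc here) is also still accepted.
    let scale = Arc::new(2.0_f64);
    let scaled_decay = move |x: &DVector<f64>, tau: f64| x.map(|t| (-t * *scale / tau).exp());

    // By contrast, a closure capturing e.g. std::rc::Rc or RefCell state is
    // neither Send nor Sync and would no longer be usable as a basis function.
    let x = DVector::from_element(3, 1.0);
    println!("{} {}", exp_decay(&x, 1.5)[0], scaled_decay(&x, 1.5)[0]);
}
```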
27 changes: 22 additions & 5 deletions src/lib.rs
@@ -12,6 +12,20 @@
//! sides has been added to this library. This is a powerful technique for suitable
//! problems and is explained at the end of this introductory chapter.
//!
//! ## Parallel Computations (Experimental)
//!
//! Since version 0.11.0, support for parallelizing some of the more expensive
//! computations has been added. Consider this support **experimental**, although
//! it has been thoroughly tested. The caveat is that I have yet to see a
//! benchmark where the parallel code runs faster than the single-threaded case.
//! Parallel calculations have to be enabled using the `parallel` feature of
//! this crate.
//!
//! To check whether parallelizing (some of) the computations works for you, see the
//! [`LevMarProblemBuilder::new_parallel`](crate::solvers::levmar::LevMarProblemBuilder::new_parallel) and
//! [`LevMarProblemBuilder::mrhs_parallel`](crate::solvers::levmar::LevMarProblemBuilder::mrhs_parallel)
//! builder functions.
//!
//! ## Overview
//!
//! Many nonlinear models consist of a mixture of both truly nonlinear and _linear_ model
@@ -139,13 +153,13 @@
//!
//! ```no_run
//! # use varpro::model::*;
//! # let problem : varpro::solvers::levmar::LevMarProblem<SeparableModel<f64>,false> = unimplemented!();
//! # let problem : varpro::solvers::levmar::LevMarProblem<SeparableModel<f64>,false,false> = unimplemented!();
//! use varpro::solvers::levmar::LevMarSolver;
//! let fit_result = LevMarSolver::default().fit(problem).unwrap();
//! ```
//!
//! Finally, check the minimization report and, if successful, retrieve the nonlinear parameters `$\alpha$`
//! using the [LevMarProblem::params](levenberg_marquardt::LeastSquaresProblem::params) and the linear
//! If successful, retrieve the nonlinear parameters `$\alpha$` using the
//! [LevMarProblem::params](levenberg_marquardt::LeastSquaresProblem::params) and the linear
//! coefficients `$\vec{c}$` using [LevMarProblem::linear_coefficients](crate::solvers::levmar::LevMarProblem::linear_coefficients)
//!
//! **Fit Statistics:** To get additional statistical information after the fit
@@ -155,7 +169,7 @@
//! ```no_run
//! # use varpro::model::SeparableModel;
//! # use varpro::prelude::*;
//! # let problem : varpro::solvers::levmar::LevMarProblem<SeparableModel<f64>,false> = unimplemented!();
//! # let problem : varpro::solvers::levmar::LevMarProblem<SeparableModel<f64>,false,false> = unimplemented!();
//! # use varpro::solvers::levmar::LevMarSolver;
//! # let fit_result = LevMarSolver::default().fit(problem).unwrap();
//! let alpha = fit_result.nonlinear_parameters();
@@ -387,7 +401,7 @@
//! \end{matrix}\right)
//! ```
//!
//! The order of the vectors in the matrix doesn't matter.
//! The order of the vectors in the matrix doesn't matter, but it will determine
//! the order of the linear coefficients; see
//! [`LevMarProblemBuilder::mrhs`](crate::solvers::levmar::LevMarProblemBuilder::mrhs)
//! for a more detailed explanation.
//!
//! ## Example
//! ```no_run
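To make the new doc section above concrete, here is a hedged sketch of the parallel multiple-right-hand-sides path. `mrhs_parallel` and `fit` are named in this diff; the builder chain (`observations`, `build`) and the `FitResult` accessors are assumed to be unchanged from 0.10 and are not shown in this PR.

```rust
use nalgebra::DMatrix;
use varpro::model::SeparableModel;
use varpro::prelude::*;
use varpro::solvers::levmar::{LevMarProblemBuilder, LevMarSolver};

// `y` holds one right hand side per column; the linear coefficients come back
// in the same column order (see the note on `mrhs` above).
fn fit_parallel_mrhs(model: SeparableModel<f64>, y: DMatrix<f64>) {
    let problem = LevMarProblemBuilder::mrhs_parallel(model)
        .observations(y)
        .build()
        .expect("building the problem should succeed for consistent dimensions");

    let fit_result = LevMarSolver::default()
        .fit(problem)
        .expect("fit should converge");

    // Nonlinear parameters alpha and linear coefficients (one column per
    // right hand side).
    let _alpha = fit_result.nonlinear_parameters();
    let _c = fit_result.linear_coefficients();
}
```

Whether the parallel variant actually pays off depends on the problem size; the benchmarks added in this PR compare both paths.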
2 changes: 1 addition & 1 deletion src/model/builder/mod.rs
@@ -365,7 +365,7 @@ where
/// struct documentation.
pub fn invariant_function<F>(self, function: F) -> Self
where
F: Fn(&DVector<ScalarType>) -> DVector<ScalarType> + 'static,
F: Fn(&DVector<ScalarType>) -> DVector<ScalarType> + 'static + Send + Sync,
{
match self {
SeparableModelBuilder::Error(err) => Self::from(err),
5 changes: 4 additions & 1 deletion src/model/detail.rs
@@ -97,7 +97,10 @@ pub fn create_wrapped_basis_function<ScalarType, ArgList, F, StrType, StrType2>(
model_parameters: &[StrType],
function_parameters: &[StrType2],
function: F,
) -> Result<Box<dyn Fn(&DVector<ScalarType>, &[ScalarType]) -> DVector<ScalarType>>, ModelBuildError>
) -> Result<
Box<dyn Fn(&DVector<ScalarType>, &[ScalarType]) -> DVector<ScalarType> + Send + Sync>,
ModelBuildError,
>
where
ScalarType: Scalar,
F: BasisFunction<ScalarType, ArgList> + 'static,
4 changes: 2 additions & 2 deletions src/model/model_basis_function.rs
@@ -9,7 +9,7 @@ use nalgebra::DVector;
/// (nonlinear) parameters. This is the most low level representation of how our
/// wrapped functions are actually stored inside the model functions
type BaseFuncType<ScalarType> =
Box<dyn Fn(&DVector<ScalarType>, &[ScalarType]) -> DVector<ScalarType>>;
Box<dyn Fn(&DVector<ScalarType>, &[ScalarType]) -> DVector<ScalarType> + Send + Sync>;

/// An internal type that is used to store basefunctions whose interface has been wrapped in
/// such a way that they can accept the location and the *complete model parameters as arguments*.
@@ -45,7 +45,7 @@
/// To create parameter dependent model basis functions use the [ModelBasisFunctionBuilder].
pub fn parameter_independent<FuncType>(function: FuncType) -> Self
where
FuncType: Fn(&DVector<ScalarType>) -> DVector<ScalarType> + 'static,
FuncType: Fn(&DVector<ScalarType>) -> DVector<ScalarType> + 'static + Send + Sync,
{
Self {
function: Box::new(move |x, _params| (function)(x)),