Commit 6b0f3d6

deploy: 65dabf7

michielkallenberg committed Dec 17, 2024 (0 parents)

Showing 92 changed files with 7,654 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file records the configuration used when building these files. When it is not found, a full rebuild will be done.
config: a074f278a80691af0b468a6a957dba9e
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file added .doctrees/environment.pickle
Binary file added .doctrees/examples.doctree
Binary file added .doctrees/index.doctree
Binary file added .doctrees/installation.doctree
Binary file added .doctrees/modules.doctree
Binary file added .doctrees/pcse_gym.doctree
Binary file added .doctrees/pcse_gym.envs.doctree
Binary file added .doctrees/pcse_gym.utils.doctree
Binary file added .doctrees/usecases.doctree
Empty file added .nojekyll
Empty file.
1 change: 1 addition & 0 deletions CNAME
@@ -0,0 +1 @@
cropgym.ai
Binary file added _images/figure_policies.png
224 changes: 224 additions & 0 deletions _sources/examples.rst.txt
@@ -0,0 +1,224 @@
########
Examples
########

*****
Basic
*****

Here we show a basic example of how to use the PCSE Environment. This example requires pcse at commit 4ef02c7.

.. code:: python

    # Import the PCSE Environment class
    from pcse_gym.envs.common_env import PCSEEnv

    # PCSE contains various utility classes to load parameter configurations
    from pcse.fileinput import CABOFileReader, YAMLCropDataProvider
    from pcse.util import WOFOST80SiteDataProvider

    # Create and configure a PCSE-Gym environment
    # Note: the following configuration has not been chosen for its realism
    env = PCSEEnv(
        model_config='Wofost80_NWLP_FD.conf',
        agro_config='../pcse_gym/envs/configs/agro/potato_cropcalendar.yaml',
        crop_parameters=YAMLCropDataProvider(force_reload=True),
        site_parameters=WOFOST80SiteDataProvider(
            WAV=10,       # Initial amount of water in the total soil profile [cm]
            NAVAILI=10,   # Amount of N available in the pool at initialization of the system [kg/ha]
            PAVAILI=50,   # Amount of P available in the pool at initialization of the system [kg/ha]
            KAVAILI=100,  # Amount of K available in the pool at initialization of the system [kg/ha]
        ),
        soil_parameters=CABOFileReader('../pcse_gym/envs/configs/soil/ec3.CAB'),
    )

    # Reset/initialize the environment to obtain an initial observation
    o = env.reset()

By default, the PCSE Environment observations contain the crop model
output variables as specified by the config file (in this case
Wofost80_NWLP_FD.conf), as well as weather statistics. Printing the
observation gives the following initial information:

.. code:: python

    {
        'crop_model': {
            'DVS': [0.0],
            'LAI': [0.14400000000000002],
            'TAGP': [60.0],
            'TWSO': [0.0],
            'TWLV': [48.0],
            'TWST': [12.0],
            'TWRT': [15.0],
            'TRA': [0.0003297839759043299],
            'RD': [10.0],
            'SM': [0.4],
            'WWLOW': [22.479999999999997],
            'RFTRA': [1.0],
            'NNI': [1.0],
            'KNI': [1.0],
            'PNI': [1.0],
            'NPKI': [1.0],
            'RFNPK': [0.999999989],
            'NAVAIL': [10.0],
            'PAVAIL': [50.0],
            'KAVAIL': [100.0],
            'Ndemand': [0.0],
            'RNuptake': [0.0],
            'Pdemand': [0.0],
            'RPuptake': [0.0],
            'Kdemand': [0.0],
            'RKuptake': [0.0],
            'NamountSO': [0.0],
            'PamountSO': [0.0],
            'KamountSO': [0.0],
        },
        'weather': {
            'IRRAD': [2240000.0],
            'TMIN': [-1.24],
            'TMAX': [3.75],
            'VAP': [6.705809327134126],
            'RAIN': [0.028000000000000004],
            'E0': [0.0],
            'ES0': [0.0],
            'ET0': [0.0032214147993529507],
            'WIND': [2.23]
        }
    }

Next, we can define actions to apply to the crops. By default, the PCSE
gym supports irrigation and fertilization:

.. code:: python

    # Define an action that does nothing
    a = {
        'irrigation': 0,
        'N': 0,
        'P': 0,
        'K': 0,
    }

    # Apply it to our environment, to see how the PCSE model progresses in 1 day without interference
    o, r, done, truncated, info = env.step(a)

    # By choosing different action values we can evaluate the effects of
    # different agro-management policies. Which actions are supported by
    # default depends on the PCSE model, which can be extended manually.

From the model, we obtain an observation of how the crops behave on day
2, as well as a scalar reward that indicates the desirability of the
current crop state. By default, this reward is the increase in WSO
(weight of storage organs, which eventually constitutes the harvested
yield) accumulated during this time step. Furthermore, the environment
returns boolean ``done`` and ``truncated`` flags indicating whether the
environment has terminated, as well as an ``info`` dict that provides
the possibility of returning additional information that might be of
interest for analysis/debugging.
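
For instance, a minimal way to inspect these return values (purely
illustrative, assuming the variables from the previous snippet are
still in scope):

.. code:: python

    # Illustrative only: inspect the values returned by env.step(a)
    print(f"Reward (increase in TWSO): {r:.3f}")
    print(f"Done: {done}, truncated: {truncated}")
    print(f"Info keys: {list(info.keys())}")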

We can run the model until termination, to observe how the crops would
develop completely without interference:

.. code:: python

    r_sum = 0
    done = False
    env.reset()
    while not done:
        o, r, done, truncated, info = env.step(a)
        r_sum += r
    print(f"TWSO: {o['crop_model']['TWSO'][0]:.2f} Total reward: {r_sum:.2f}")

The main objective of reinforcement learning is to build a policy that
dictates which actions to choose, and when, in order to maximize the
expected sum of rewards.
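
As a hand-crafted illustration of what such a policy might look like
(this is not part of the library; the threshold and dose below are
arbitrary assumptions), a rule-based policy could apply nitrogen
whenever the available soil N reported in the observation drops below
some level:

.. code:: python

    # A hypothetical rule-based policy: apply 10 kg/ha of N whenever the
    # available soil N (NAVAIL, part of the observation) falls below 5 kg/ha.
    # Both the threshold and the dose are arbitrary, for illustration only.
    def simple_policy(observation):
        navail = observation['crop_model']['NAVAIL'][0]
        n_dose = 10 if navail < 5 else 0
        return {'irrigation': 0, 'N': n_dose, 'P': 0, 'K': 0}

    o = env.reset()
    done = False
    r_sum = 0
    while not done:
        o, r, done, truncated, info = env.step(simple_policy(o))
        r_sum += r
    print(f"Total reward: {r_sum:.2f}")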

*************
Customization
*************

The default implementation of the environment bases its reward purely on
the eventual yield. For the majority of use cases this is too simplistic.
The PCSE gym environment has been designed to be easily modified to the
required level of complexity. For example, the code below shows how we
could extend the PCSE environment to account for fertilizer prices and
the costs of applying fertilizer.

.. code:: python

    from pcse_gym.envs.common_env import PCSEEnv

    class CustomPCSEEnv(PCSEEnv):

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # N price per unit
            self._n_price = 2
            # Yield price per unit
            self._y_price = 1
            # N application costs
            self._na_price = 10
            # Keep track of how much nitrogen has been applied in the last time step
            self._na = 0

        def _apply_action(self, action):
            super()._apply_action(action)
            # Keep track of the amount of nitrogen that was applied
            self._na = action.get('N', 0)

        def _get_reward(self, *args, **kwargs) -> float:
            # Obtain the default reward, reflecting the increase in yield
            r = super()._get_reward(*args, **kwargs)
            # Balance the yield price against the costs of the applied N
            r = r * self._y_price - self._na * self._n_price
            # If N was applied, subtract the application costs
            if self._na != 0:
                r -= self._na_price
            return r

The environment class retains the functionality of the default
``PCSEEnv`` class, but has a modified reward function.

.. code:: python

    from pcse.fileinput import CABOFileReader, YAMLCropDataProvider
    from pcse.util import WOFOST80SiteDataProvider

    env = CustomPCSEEnv(
        model_config='Wofost80_NWLP_FD.conf',
        agro_config='../pcse_gym/envs/configs/agro/potato_cropcalendar.yaml',
        crop_parameters=YAMLCropDataProvider(force_reload=True),
        site_parameters=WOFOST80SiteDataProvider(
            WAV=10,       # Initial amount of water in the total soil profile [cm]
            NAVAILI=10,   # Amount of N available in the pool at initialization of the system [kg/ha]
            PAVAILI=50,   # Amount of P available in the pool at initialization of the system [kg/ha]
            KAVAILI=100,  # Amount of K available in the pool at initialization of the system [kg/ha]
        ),
        soil_parameters=CABOFileReader('../pcse_gym/envs/configs/soil/ec3.CAB'),
    )

    o = env.reset()

    # Define an action that applies N
    a = {
        'irrigation': 0,
        'N': 10,
    }

    o, r, done, truncated, info = env.step(a)
    print(r)

Considering the costs of N and its application, the reward of this time
step becomes -30.
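
To make the arithmetic explicit, the reward decomposition for this step
can be written out directly (a sketch using the prices defined in
``CustomPCSEEnv`` above; the zero yield gain on this first simulated day
is consistent with the reward of -30 reported above):

.. code:: python

    # Reward decomposition for the step above, with prices from CustomPCSEEnv
    n_applied = 10    # N applied by the action
    yield_gain = 0    # increase in WSO on this day
    r = yield_gain * 1 - n_applied * 2 - (10 if n_applied else 0)
    print(r)  # -30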

The figure below shows the reward progression for simple policies using
this model. |Rewards for two simple policies|

.. |Rewards for two simple policies| image:: figure_policies.png
64 changes: 64 additions & 0 deletions _sources/index.rst.txt
@@ -0,0 +1,64 @@
.. CropGym documentation master file, created by
   sphinx-quickstart on Thu Mar 9 21:25:16 2023.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.
CropGym: a Reinforcement Learning Environment for Crop Management
-----------------------------------------------------------------

CropGym is a highly configurable `Python Gymnasium <https://gymnasium.farama.org/>`__ environment to conduct Reinforcement
Learning (RL) research for crop management. CropGym is built around
`PCSE <https://pcse.readthedocs.io/en/stable/>`__, a well-established
Python library that includes implementations of a variety of crop
simulation models. CropGym follows standard gym conventions and enables
daily interactions between an RL agent and a crop model.

Installation
------------
.. toctree::
   :maxdepth: 2

   installation.rst

Examples
--------
.. toctree::
   :maxdepth: 2

   examples.rst

Use Cases
---------
.. toctree::
   :maxdepth: 2

   usecases.rst

Citing CropGym
--------------

If you use CropGym in your publications, please cite us using the following BibTeX entry:

.. code-block:: text

    @article{cropgym,
        title={Nitrogen management with reinforcement learning and crop growth models},
        volume={2},
        DOI={10.1017/eds.2023.28},
        journal={Environmental Data Science},
        publisher={Cambridge University Press},
        author={Kallenberg, Michiel G.J. and Overweg, Hiske and van Bree, Ron and Athanasiadis, Ioannis N.},
        year={2023},
        pages={e34}
    }

Contact
-------
:email:`info@cropgym.ai`

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
9 changes: 9 additions & 0 deletions _sources/installation.rst.txt
@@ -0,0 +1,9 @@
Installation instructions
-------------------------

To install a minimalistic version, do the following:

1. Clone `PCSE <https://github.com/ajwdewit/pcse.git>`__ (use commit 4ef02c7)
2. Clone `CropGym <https://github.com/WUR-AIs/PCSE-Gym>`__

The code has been tested using Python 3.11.4.
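
As a quick sanity check (a minimal sketch, assuming the two cloned
repositories are on your ``PYTHONPATH``), the following imports should
succeed:

.. code:: python

    # Quick sanity check: both packages should import without errors.
    # Assumes the cloned pcse and PCSE-Gym directories are on the PYTHONPATH.
    import pcse
    import pcse_gym

    print(pcse.__version__)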
7 changes: 7 additions & 0 deletions _sources/modules.rst.txt
@@ -0,0 +1,7 @@
pcse_gym
========

.. toctree::
   :maxdepth: 4

   pcse_gym
45 changes: 45 additions & 0 deletions _sources/pcse_gym.envs.rst.txt
@@ -0,0 +1,45 @@
pcse\_gym.envs package
======================

Submodules
----------

pcse\_gym.envs.common\_env module
---------------------------------

.. automodule:: pcse_gym.envs.common_env
   :members:
   :undoc-members:
   :show-inheritance:

pcse\_gym.envs.rewards module
-----------------------------

.. automodule:: pcse_gym.envs.rewards
   :members:
   :undoc-members:
   :show-inheritance:

pcse\_gym.envs.sb3 module
-------------------------

.. automodule:: pcse_gym.envs.sb3
   :members:
   :undoc-members:
   :show-inheritance:

pcse\_gym.envs.winterwheat module
---------------------------------

.. automodule:: pcse_gym.envs.winterwheat
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: pcse_gym.envs
   :members:
   :undoc-members:
   :show-inheritance:
19 changes: 19 additions & 0 deletions _sources/pcse_gym.rst.txt
@@ -0,0 +1,19 @@
pcse\_gym package
=================

Subpackages
-----------

.. toctree::
   :maxdepth: 4

   pcse_gym.envs
   pcse_gym.utils

Module contents
---------------

.. automodule:: pcse_gym
   :members:
   :undoc-members:
   :show-inheritance: