diff --git a/README.md b/README.md index 75b664c1..7274456d 100644 --- a/README.md +++ b/README.md @@ -21,13 +21,19 @@ Developers Board

-RecTools is an easy-to-use Python library which makes the process of building recommendation systems easier, -faster and more structured than ever before. -It includes built-in toolkits for data processing and metrics calculation, -a variety of recommender models, some wrappers for already existing implementations of popular algorithms -and model selection framework. -The aim is to collect ready-to-use solutions and best practices in one place to make processes -of creating your first MVP and deploying model to production as fast and easy as possible. +RecTools is an easy-to-use Python library which makes the process of building recommender systems easier and +faster than ever before. + +## ✨ Highlights: Transformer models released! ✨ + +**BERT4Rec and SASRec are now available in RecTools:** +- Fully compatible with our `fit` / `recommend` paradigm and require NO special data processing +- Described in detail in our [Transformers Theory & Practice Tutorial](examples/tutorials/transformers_tutorial.ipynb): loss options, item embedding options, category features utilization and more! +- Configurable, customizable, callback-friendly, checkpoints-included, logs-out-of-the-box, custom-validation-ready, multi-gpu-compatible! 
See our [Transformers Advanced Training User Guide](examples/tutorials/transformers_advanced_training_guide.ipynb) +- We benchmark RecTools models against other open-source implementations, following the BERT4Rec reproducibility paper, and achieve the highest scores on multiple datasets: [Performance on public transformers benchmarks](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results) + + + @@ -103,6 +109,8 @@ See [recommender baselines extended tutorial](https://github.com/MobileTeleSyste | Model | Type | Description (🎏 for user/item features, 🔆 for warm inference, ❄️ for cold inference support) | Tutorials & Benchmarks | |----|----|---------|--------| +| SASRec | Neural Network | `rectools.models.SASRecModel` - Transformer-based sequential model with unidirectional attention mechanism and "Shifted Sequence" training objective
🎏| 📕 [Transformers Theory & Practice](examples/tutorials/transformers_tutorial.ipynb)
📗 [Transformers advanced training](examples/tutorials/transformers_advanced_training_guide.ipynb)
🚀 [Top performance on public benchmarks](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results) | +| BERT4Rec | Neural Network | `rectools.models.BERT4RecModel` - Transformer-based sequential model with bidirectional attention mechanism and "MLM" (masked item) training objective
🎏| 📕 [Transformers Theory & Practice](examples/tutorials/transformers_tutorial.ipynb)
📗 [Transformers advanced training](examples/tutorials/transformers_advanced_training_guide.ipynb)
🚀 [Top performance on public benchmarks](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results) | | [implicit](https://github.com/benfred/implicit) ALS Wrapper | Matrix Factorization | `rectools.models.ImplicitALSWrapperModel` - Alternating Least Squares Matrix Factorization algorithm for implicit feedback.
🎏| 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#Implicit-ALS)
🚀 [50% boost to metrics with user & item features](examples/5_benchmark_iALS_with_features.ipynb) | | [implicit](https://github.com/benfred/implicit) BPR-MF Wrapper | Matrix Factorization | `rectools.models.ImplicitBPRWrapperModel` - Bayesian Personalized Ranking Matrix Factorization algorithm. | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#Bayesian-Personalized-Ranking-Matrix-Factorization-(BPR-MF)) | | [implicit](https://github.com/benfred/implicit) ItemKNN Wrapper | Nearest Neighbours | `rectools.models.ImplicitItemKNNWrapperModel` - Algorithm that calculates item-item similarity matrix using distances between item vectors in user-item interactions matrix | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#ItemKNN) | @@ -115,20 +123,33 @@ See [recommender baselines extended tutorial](https://github.com/MobileTeleSyste | Random | Heuristic | `rectools.models.RandomModel` - Simple random algorithm useful to benchmark Novelty, Coverage, etc.
❄️| - | - All of the models follow the same interface. **No exceptions** -- No need for manual creation of sparse matrixes or mapping ids. Preparing data for models is as simple as `dataset = Dataset.construct(interactions_df)` +- No need for manual creation of sparse matrices, torch dataloaders or mapping ids. Preparing data for models is as simple as `dataset = Dataset.construct(interactions_df)` - Fitting any model is as simple as `model.fit(dataset)` - For getting recommendations `filter_viewed` and `items_to_recommend` options are available - For item-to-item recommendations use `recommend_to_items` method -- For feeding user/item features to model just specify dataframes when constructing `Dataset`. [Check our tutorial](examples/4_dataset_with_features.ipynb) +- For feeding user/item features to model just specify dataframes when constructing `Dataset`. [Check our example](examples/4_dataset_with_features.ipynb) - For warm / cold inference just provide all required ids in `users` or `target_items` parameters of `recommend` or `recommend_to_items` methods and make sure you have features in the dataset for warm users/items. **Nothing else is needed, everything works out of the box.** +- Our models can be initialized from configs and have useful methods like `get_config`, `get_params`, `save`, `load`. Common functions `model_from_config` and `load_model` are available. 
[Check our example](examples/9_model_configs_and_saving.ipynb) ## Extended validation tools +### `calc_metrics` for classification, ranking, "beyond-accuracy", DQ, popularity bias and between-model metrics + + +[User guide](https://github.com/MobileTeleSystems/RecTools/blob/main/examples/3_metrics.ipynb) | [Documentation](https://rectools.readthedocs.io/en/stable/features.html#metrics) + + ### `DebiasConfig` for debiased metrics calculation [User guide](https://github.com/MobileTeleSystems/RecTools/blob/main/examples/8_debiased_metrics.ipynb) | [Documentation](https://rectools.readthedocs.io/en/stable/api/rectools.metrics.debias.DebiasConfig.html) +### `cross_validate` for model metrics comparison + + +[User guide](https://github.com/MobileTeleSystems/RecTools/blob/main/examples/2_cross_validation.ipynb) + + ### `VisualApp` for model recommendations comparison diff --git a/docs/source/examples.rst b/docs/source/examples.rst index b294715e..e7100b7a 100644 --- a/docs/source/examples.rst +++ b/docs/source/examples.rst @@ -14,3 +14,5 @@ See examples here: https://github.com/MobileTeleSystems/RecTools/tree/main/examp examples/5_benchmark_iALS_with_features examples/6_benchmark_lightfm_inference examples/7_visualization + examples/8_debiased_metrics + examples/9_model_configs_and_saving diff --git a/docs/source/models.rst b/docs/source/models.rst index c05ba7d9..34dd23ba 100644 --- a/docs/source/models.rst +++ b/docs/source/models.rst @@ -12,12 +12,18 @@ Details of RecTools Models +-----------------------------+-------------------+---------------------+---------------------+ | Model | Supports features | Recommends for warm | Recommends for cold | +=============================+===================+=====================+=====================+ +| SASRecModel | Yes | No | No | ++-----------------------------+-------------------+---------------------+---------------------+ +| BERT4RecModel | Yes | No | No | 
++-----------------------------+-------------------+---------------------+---------------------+ | DSSMModel | Yes | Yes | No | +-----------------------------+-------------------+---------------------+---------------------+ | EASEModel | No | No | No | +-----------------------------+-------------------+---------------------+---------------------+ | ImplicitALSWrapperModel | Yes | No | No | +-----------------------------+-------------------+---------------------+---------------------+ +| ImplicitBPRWrapperModel | No | No | No | ++-----------------------------+-------------------+---------------------+---------------------+ | ImplicitItemKNNWrapperModel | No | No | No | +-----------------------------+-------------------+---------------------+---------------------+ | LightFMWrapperModel | Yes | Yes | Yes | diff --git a/docs/source/tutorials.rst b/docs/source/tutorials.rst index 383c6769..1e85dca3 100644 --- a/docs/source/tutorials.rst +++ b/docs/source/tutorials.rst @@ -8,3 +8,5 @@ See tutorials here: https://github.com/MobileTeleSystems/RecTools/tree/main/exam :glob: examples/tutorials/baselines_extended_tutorial + examples/tutorials/transformers_tutorial + examples/tutorials/transformers_advanced_training_guide diff --git a/examples/tutorials/transformers_advanced_training_guide.ipynb b/examples/tutorials/transformers_advanced_training_guide.ipynb new file mode 100644 index 00000000..8649c5fc --- /dev/null +++ b/examples/tutorials/transformers_advanced_training_guide.ipynb @@ -0,0 +1,1956 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Transformer Models Advanced Training Guide\n", + "This guide shows the advanced training features of RecTools transformer models.\n", + "\n", + "### Table of Contents\n", + "\n", + "* Prepare data\n", + "* Advanced training guide\n", + " * Validation fold\n", + " * Validation loss\n", + " * Callback for Early Stopping\n", + " * Callbacks for Checkpoints\n", + " * Loading Checkpoints\n", + " * 
Callbacks for RecSys metrics\n", + " * RecSys metrics for Early Stopping and Checkpoints\n", + "* Advanced training full example\n", + " * Running full training with all of the described validation features on Kion dataset\n", + "* More RecTools features for transformers\n", + " * Saving and loading models\n", + " * Configs for transformer models\n", + " * Classes and functions in configs\n", + " * Multi-gpu training\n" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import itertools\n", + "import typing as tp\n", + "import warnings\n", + "from collections import Counter\n", + "from pathlib import Path\n", + "\n", + "import pandas as pd\n", + "import numpy as np\n", + "import torch\n", + "from lightning_fabric import seed_everything\n", + "from pytorch_lightning import Trainer, LightningModule\n", + "from pytorch_lightning.loggers import CSVLogger\n", + "from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint, Callback\n", + "\n", + "from rectools import Columns, ExternalIds\n", + "from rectools.dataset import Dataset\n", + "from rectools.metrics import NDCG, Recall, Serendipity, calc_metrics\n", + "from rectools.models import BERT4RecModel, SASRecModel, load_model\n", + "from rectools.models.nn.item_net import IdEmbeddingsItemNet\n", + "from rectools.models.nn.transformer_base import TransformerModelBase\n", + "\n", + "# Enable deterministic behaviour with CUDA >= 10.2\n", + "os.environ[\"CUBLAS_WORKSPACE_CONFIG\"] = \":4096:8\"\n", + "warnings.simplefilter(\"ignore\", UserWarning)\n", + "warnings.simplefilter(\"ignore\", FutureWarning)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Prepare data" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "# %%time\n", + "!wget -q https://github.com/irsafilo/KION_DATASET/raw/f69775be31fa5779907cf0a92ddedb70037fb5ae/data_en.zip -O 
data_en.zip\n", + "!unzip -o data_en.zip\n", + "!rm data_en.zip" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(5476251, 5)\n" + ] + }, + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
user_iditem_iddatetimetotal_durwatched_pct
017654995062021-05-11425072.0
169931716592021-05-298317100.0
\n", + "
" + ], + "text/plain": [ + " user_id item_id datetime total_dur watched_pct\n", + "0 176549 9506 2021-05-11 4250 72.0\n", + "1 699317 1659 2021-05-29 8317 100.0" + ] + }, + "execution_count": 3, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Download dataset\n", + "DATA_PATH = Path(\"./data_en\")\n", + "items = pd.read_csv(DATA_PATH / 'items_en.csv', index_col=0)\n", + "interactions = (\n", + " pd.read_csv(DATA_PATH / 'interactions.csv', parse_dates=[\"last_watch_dt\"])\n", + " .rename(columns={\"last_watch_dt\": Columns.Datetime})\n", + ")\n", + "\n", + "print(interactions.shape)\n", + "interactions.head(2)" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(962179, 15706)" + ] + }, + "execution_count": 4, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "interactions[Columns.User].nunique(), interactions[Columns.Item].nunique()" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(5476251, 4)\n" + ] + } + ], + "source": [ + "# Process interactions\n", + "interactions[Columns.Weight] = np.where(interactions['watched_pct'] > 10, 3, 1)\n", + "raw_interactions = interactions[[\"user_id\", \"item_id\", \"datetime\", \"weight\"]]\n", + "print(raw_interactions.shape)\n", + "raw_interactions.head(2)\n", + "\n", + "dataset = Dataset.construct(raw_interactions)" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Seed set to 60\n" + ] + }, + { + "data": { + "text/plain": [ + "60" + ] + }, + "execution_count": 6, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "RANDOM_STATE=60\n", + "torch.use_deterministic_algorithms(True)\n", + "seed_everything(RANDOM_STATE, workers=True)" + ] + }, + { + 
"cell_type": "markdown", + "metadata": {}, + "source": [ + "## Advanced Training" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Validation fold\n", + "\n", + "Models do not create validation fold during `fit` by default. However, there is a simple way to force it.\n", + "\n", + "Let's assume that we want to use Leave-One-Out validation for specific set of users. To apply it we need to implement `get_val_mask_func` with required logic and pass it to model during initialization. \n", + "\n", + "This function should receive interactions with standard RecTools columns and return a binary mask which identifies interactions that should not be used during model training. But instrad should be used for validation loss calculation. They will also be available for Lightning Callbacks to allow RecSys metrics computations.\n", + "\n", + "*Please make sure you do not use `partial` while doing this. Partial functions cannot be by serialized using RecTools.*" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [], + "source": [ + "# Implement `get_val_mask_func`\n", + "\n", + "N_VAL_USERS = 2048\n", + "unique_users = raw_interactions[Columns.User].unique()\n", + "VAL_USERS = unique_users[: N_VAL_USERS]\n", + "\n", + "def leave_one_out_mask_for_users(interactions: pd.DataFrame, val_users: ExternalIds) -> np.ndarray:\n", + " rank = (\n", + " interactions\n", + " .sort_values(Columns.Datetime, ascending=False, kind=\"stable\")\n", + " .groupby(Columns.User, sort=False)\n", + " .cumcount()\n", + " )\n", + " val_mask = (\n", + " (interactions[Columns.User].isin(val_users))\n", + " & (rank == 0)\n", + " )\n", + " return val_mask.values\n", + "\n", + "# We do not use `partial` for correct serialization of the model\n", + "def get_val_mask_func(interactions: pd.DataFrame):\n", + " return leave_one_out_mask_for_users(interactions, val_users = VAL_USERS)" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + 
"metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n" + ] + } + ], + "source": [ + "model = SASRecModel(\n", + " n_factors=64,\n", + " n_blocks=2,\n", + " n_heads=2,\n", + " dropout_rate=0.2,\n", + " train_min_user_interactions=5,\n", + " session_max_len=50,\n", + " verbose=0,\n", + " deterministic=True,\n", + " item_net_block_types=(IdEmbeddingsItemNet,),\n", + " get_val_mask_func=get_val_mask_func, # pass our custom `get_val_mask_func`\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Validation loss\n", + "\n", + "Let's check how the validation loss is being logged.\n", + "We just want to quickly check functionality for now so let's create a custom Lightning trainer and use it replace the default one.\n", + "\n", + "Right now we will just assign new trainer to model `_trainer` attribute but later in this tutorial a clean way for passing custom trainer will be shown." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n", + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n", + "`Trainer.fit` stopped: `max_epochs=2` reached.\n" + ] + } + ], + "source": [ + "trainer = Trainer(\n", + " accelerator='gpu',\n", + " devices=1,\n", + " min_epochs=2,\n", + " max_epochs=2, \n", + " deterministic=True,\n", + " limit_train_batches=2, # use only 2 batches for each epoch for a test run\n", + " enable_checkpointing=False,\n", + " logger = CSVLogger(\"test_logs\"), # We use CSV logging for this guide but there are many other options\n", + " enable_progress_bar=False,\n", + " enable_model_summary=False,\n", + ")\n", + "\n", + "# Replace default trainer with our custom one\n", + "model._trainer = trainer\n", + "\n", + "# Fit model. Validation fold and validation loss computation will be done under the hood.\n", + "model.fit(dataset);" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's look at model logs. 
We can access the logs directory with `model.fit_trainer.log_dir`" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "hparams.yaml metrics.csv\r\n" + ] + } + ], + "source": [ + "# What's inside the logs directory?\n", + "!ls $model.fit_trainer.log_dir" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "epoch,step,train_loss,val_loss\r\n", + "\r\n", + "0,1,,22.39907455444336\r\n", + "\r\n", + "0,1,22.390357971191406,\r\n", + "\r\n", + "1,3,,22.25874137878418\r\n", + "\r\n", + "1,3,22.909526824951172,\r\n", + "\r\n" + ] + } + ], + "source": [ + "# Losses and metrics are in the `metrics.csv`\n", + "# Let's look at logs\n", + "\n", + "!tail $model.fit_trainer.log_dir/metrics.csv" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Callback for Early Stopping\n", + "\n", + "By default, RecTools transformers train for the exact number of epochs (specified in the `epochs` argument).\n", + "\n", + "But now that we have validation loss logged, let's use it for model Early Stopping. It will stop training when validation loss (or any other custom metric) doesn't improve. We have Lightning Callbacks for that." + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [], + "source": [ + "early_stopping_callback = EarlyStopping(\n", + " monitor=SASRecModel.val_loss_name, # or just pass \"val_loss\" here\n", + " mode=\"min\",\n", + " min_delta=1. 
# just for a quick test of functionality\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n", + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n" + ] + } + ], + "source": [ + "trainer = Trainer(\n", + " accelerator='gpu',\n", + " devices=1,\n", + " min_epochs=1, # minimum number of epochs to train before early stopping\n", + " max_epochs=20, # maximum number of epochs to train\n", + " deterministic=True,\n", + " limit_train_batches=2, # use only 2 batches for each epoch for a test run\n", + " enable_checkpointing=False,\n", + " logger = CSVLogger(\"test_logs\"),\n", + " callbacks=early_stopping_callback, # pass our callback\n", + " enable_progress_bar=False,\n", + " enable_model_summary=False,\n", + ")\n", + "\n", + "# Replace default trainer with our custom one\n", + "model._trainer = trainer\n", + "\n", + "# Fit model. 
Everything will happen under the hood\n", + "model.fit(dataset);" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Here model stopped training after 4 epochs because validation loss wasn't improving by our specified `min_delta`" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "epoch,step,train_loss,val_loss\r\n", + "\r\n", + "0,1,,22.363222122192383\r\n", + "\r\n", + "0,1,22.359580993652344,\r\n", + "\r\n", + "1,3,,22.194488525390625\r\n", + "\r\n", + "1,3,22.31987190246582,\r\n", + "\r\n", + "2,5,,21.974754333496094\r\n", + "\r\n", + "2,5,22.225738525390625,\r\n", + "\r\n", + "3,7,,21.718231201171875\r\n", + "\r\n", + "3,7,22.150163650512695,\r\n", + "\r\n" + ] + } + ], + "source": [ + "# Let's check out logs\n", + "!tail $model.fit_trainer.log_dir/metrics.csv" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Callback for Checkpoints\n", + "Checkpoints are model states that are saved periodically during training." 
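The run above stopped because each epoch improved validation loss by less than `min_delta`. That stopping rule can be illustrated without Lightning; this is a simplified sketch of the `mode="min"` behavior, not the actual `EarlyStopping` internals (which default to `patience=3`):

```python
def should_stop(history, min_delta=1.0, patience=3):
    """Return True once the monitored value fails to improve on the best
    seen value by at least `min_delta` for `patience` consecutive checks."""
    best = float("inf")
    bad_checks = 0
    for value in history:
        if value < best - min_delta:  # significant improvement
            best = value
            bad_checks = 0
        else:
            bad_checks += 1
            if bad_checks >= patience:
                return True
    return False

# Validation losses similar to the logs above: steady but small improvements
print(should_stop([22.36, 22.19, 21.97, 21.72]))  # True (stops after 4 checks)
print(should_stop([22.36, 20.0, 18.0, 16.0]))     # False (big improvements)
```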
+ ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [], + "source": [ + "# Checkpoint last epoch\n", + "last_epoch_ckpt = ModelCheckpoint(filename=\"last_epoch\")\n", + "\n", + "# Checkpoints based on validation loss\n", + "least_val_loss_ckpt = ModelCheckpoint(\n", + " monitor=SASRecModel.val_loss_name, # or just pass \"val_loss\" here,\n", + " mode=\"min\",\n", + " filename=\"{epoch}-{val_loss:.2f}\",\n", + " save_top_k=2, # Let's save top 2 checkpoints for validation loss\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n", + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n", + "`Trainer.fit` stopped: `max_epochs=6` reached.\n" + ] + } + ], + "source": [ + "trainer = Trainer(\n", + " accelerator=\"gpu\",\n", + " devices=1,\n", + " min_epochs=1,\n", + " max_epochs=6,\n", + " deterministic=True,\n", + " limit_train_batches=2, # use only 2 batches for each epoch for a test run\n", + " logger = CSVLogger(\"test_logs\"),\n", + " callbacks=[last_epoch_ckpt, least_val_loss_ckpt], # pass our callbacks for checkpoints\n", + " enable_progress_bar=False,\n", + " enable_model_summary=False,\n", + ")\n", + "\n", + "# Replace default trainer with our custom one\n", + "model._trainer = trainer\n", + "\n", + "# Fit model. Everything will happen under the hood\n", + "model.fit(dataset);" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's look at model checkpoints that were saved. 
By default they are being saved to the `checkpoints` directory in `model.fit_trainer.log_dir`" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "epoch=4-val_loss=21.53.ckpt epoch=5-val_loss=21.22.ckpt last_epoch.ckpt\r\n" + ] + } + ], + "source": [ + "# We have 2 checkpoints for 2 best validation loss values and one for last epoch\n", + "!ls $model.fit_trainer.log_dir/checkpoints" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Loading checkpoints" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Loading checkpoints is very simple with the `load_from_checkpoint` method.\n", + "Note that there is an important limitation: **the loaded model will not have `fit_trainer` and can't be saved again. But it is fully ready for recommendations.**" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n", + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n", + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "c85988c886f245ed8573b00a92e6260c", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Predicting: | | 0/? 
[00:00\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
user_iditem_idscorerank
0176549152970.6463731
117654986360.6091712
2176549122590.5975953
3176549123560.5440334
417654937340.5415805
\n", + "" + ], + "text/plain": [ + " user_id item_id score rank\n", + "0 176549 15297 0.646373 1\n", + "1 176549 8636 0.609171 2\n", + "2 176549 12259 0.597595 3\n", + "3 176549 12356 0.544033 4\n", + "4 176549 3734 0.541580 5" + ] + }, + "execution_count": 18, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "ckpt_path = os.path.join(model.fit_trainer.log_dir, \"checkpoints\", \"last_epoch.ckpt\")\n", + "loaded = SASRecModel.load_from_checkpoint(ckpt_path)\n", + "loaded.recommend(users=VAL_USERS[:1], dataset=dataset, filter_viewed=True, k=5)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Callbacks for RecSys metrics during training\n", + "\n", + "Monitoring RecSys metrics (or any other custom things) on validation fold is not available out of the box, but we can create a custom Lightning Callback for that.\n", + "\n", + "Below is an example of calculating standard RecTools metrics on validation fold during training. We use it as an explicit example that any customization is possible. But it is recommend to implement metrics calculation using `torch` for faster computations.\n", + "\n", + "Please look at PyTorch Lightning documentation for more details on custom callbacks." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [], + "source": [ + "# Implement custom Callback for RecTools metrics computation within validation epochs during training.\n", + "\n", + "class ValidationMetrics(Callback):\n", + " \n", + " def __init__(self, top_k: int, val_metrics: tp.Dict, verbose: int = 0) -> None:\n", + " self.top_k = top_k\n", + " self.val_metrics = val_metrics\n", + " self.verbose = verbose\n", + "\n", + " self.epoch_n_users: int = 0\n", + " self.batch_metrics: tp.List[tp.Dict[str, float]] = []\n", + "\n", + " def on_validation_batch_end(\n", + " self, \n", + " trainer: Trainer, \n", + " pl_module: LightningModule, \n", + " outputs: tp.Dict[str, torch.Tensor], \n", + " batch: tp.Dict[str, torch.Tensor], \n", + " batch_idx: int, \n", + " dataloader_idx: int = 0\n", + " ) -> None:\n", + " logits = outputs[\"logits\"]\n", + " if logits is None:\n", + " logits = pl_module.torch_model.encode_sessions(batch[\"x\"], pl_module.item_embs)[:, -1, :]\n", + " _, sorted_batch_recos = logits.topk(k=self.top_k)\n", + "\n", + " batch_recos = sorted_batch_recos.tolist()\n", + " targets = batch[\"y\"].tolist()\n", + "\n", + " batch_val_users = list(\n", + " itertools.chain.from_iterable(\n", + " itertools.repeat(idx, len(recos)) for idx, recos in enumerate(batch_recos)\n", + " )\n", + " )\n", + "\n", + " batch_target_users = list(\n", + " itertools.chain.from_iterable(\n", + " itertools.repeat(idx, len(targets)) for idx, targets in enumerate(targets)\n", + " )\n", + " )\n", + "\n", + " batch_recos_df = pd.DataFrame(\n", + " {\n", + " Columns.User: batch_val_users,\n", + " Columns.Item: list(itertools.chain.from_iterable(batch_recos)),\n", + " }\n", + " )\n", + " batch_recos_df[Columns.Rank] = batch_recos_df.groupby(Columns.User, sort=False).cumcount() + 1\n", + "\n", + " interactions = pd.DataFrame(\n", + " {\n", + " Columns.User: batch_target_users,\n", + " Columns.Item: 
list(itertools.chain.from_iterable(targets)),\n", + " }\n", + " )\n", + "\n", + " prev_interactions = pl_module.data_preparator.train_dataset.interactions.df\n", + " catalog = prev_interactions[Columns.Item].unique()\n", + "\n", + " batch_metrics = calc_metrics(\n", + " self.val_metrics, \n", + " batch_recos_df,\n", + " interactions, \n", + " prev_interactions,\n", + " catalog\n", + " )\n", + "\n", + " batch_n_users = batch[\"x\"].shape[0]\n", + " self.batch_metrics.append({metric: value * batch_n_users for metric, value in batch_metrics.items()})\n", + " self.epoch_n_users += batch_n_users\n", + "\n", + " def on_validation_epoch_end(self, trainer: Trainer, pl_module: LightningModule) -> None:\n", + " epoch_metrics = dict(sum(map(Counter, self.batch_metrics), Counter()))\n", + " epoch_metrics = {metric: value / self.epoch_n_users for metric, value in epoch_metrics.items()}\n", + "\n", + " self.log_dict(epoch_metrics, on_step=False, on_epoch=True, prog_bar=self.verbose > 0)\n", + "\n", + " self.batch_metrics.clear()\n", + " self.epoch_n_users = 0" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### RecSys metrics for Early Stopping and Checkpoints\n", + "When custom metrics callback is implemented, we can use the values of these metrics for both Early Stopping and Checkpoints." 
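The aggregation in `on_validation_epoch_end` above is a per-user weighted average: each batch contributes `metric * batch_n_users`, the partial sums are merged with `Counter`, and the total is divided by the epoch's user count. The same arithmetic in isolation, with hypothetical metric values:

```python
from collections import Counter

# Two batches of 3 and 1 users; values are already weighted by batch size,
# exactly as stored by `on_validation_batch_end` above
batch_metrics = [
    {"NDCG@10": 0.2 * 3, "Recall@10": 0.3 * 3},
    {"NDCG@10": 0.4 * 1, "Recall@10": 0.5 * 1},
]
epoch_n_users = 4

# Merge per-batch sums, then divide by the epoch's total user count
epoch_metrics = dict(sum(map(Counter, batch_metrics), Counter()))
epoch_metrics = {metric: value / epoch_n_users for metric, value in epoch_metrics.items()}
print(epoch_metrics)  # roughly {'NDCG@10': 0.25, 'Recall@10': 0.35}
```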
+ ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [], + "source": [ + "# Initialize callbacks for metrics calculation and checkpoint based on NDCG value\n", + "\n", + "metrics = {\n", + " \"NDCG@10\": NDCG(k=10),\n", + " \"Recall@10\": Recall(k=10),\n", + " \"Serendipity@10\": Serendipity(k=10),\n", + "}\n", + "top_k = max([metric.k for metric in metrics.values()])\n", + "\n", + "# Callback for calculating RecSys metrics\n", + "val_metrics_callback = ValidationMetrics(top_k=top_k, val_metrics=metrics, verbose=0)\n", + "\n", + "# Callback for checkpoint based on maximization of NDCG@10\n", + "best_ndcg_ckpt = ModelCheckpoint(\n", + " monitor=\"NDCG@10\",\n", + " mode=\"max\",\n", + " filename=\"{epoch}-{NDCG@10:.2f}\",\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n", + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n", + "`Trainer.fit` stopped: `max_epochs=6` reached.\n" + ] + }, + { + "data": { + "text/plain": [ + "" + ] + }, + "execution_count": 21, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "trainer = Trainer(\n", + " accelerator=\"gpu\",\n", + " devices=1,\n", + " min_epochs=1,\n", + " max_epochs=6,\n", + " deterministic=True,\n", + " limit_train_batches=2, # use only 2 batches for each epoch for a test run\n", + " logger = CSVLogger(\"test_logs\"),\n", + " callbacks=[val_metrics_callback, best_ndcg_ckpt], # pass our callbacks\n", + " enable_progress_bar=False,\n", + " enable_model_summary=False,\n", + ")\n", + "\n", + "# Replace default trainer with our custom one\n", + "model._trainer = trainer\n", + "\n", + "# Fit model. 
Everything will happen under the hood\n", + "model.fit(dataset)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We have checkpoint for best NDCG@10 model in the usual directory for checkpoints" + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "epoch=5-NDCG@10=0.01.ckpt\r\n" + ] + } + ], + "source": [ + "!ls $model.fit_trainer.log_dir/checkpoints" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We also now have metrics in our logs. Let's load them" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
epochtrain_lossval_loss
0022.36174822.401196
1121.98980922.256557
2222.99430722.055750
3322.51099821.802269
4421.60662821.510941
\n", + "
" + ], + "text/plain": [ + " epoch train_loss val_loss\n", + "0 0 22.361748 22.401196\n", + "1 1 21.989809 22.256557\n", + "2 2 22.994307 22.055750\n", + "3 3 22.510998 21.802269\n", + "4 4 21.606628 21.510941" + ] + }, + "execution_count": 24, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "def get_logs(model: TransformerModelBase) -> tp.Tuple[pd.DataFrame, ...]:\n", + " log_path = Path(model.fit_trainer.log_dir) / \"metrics.csv\"\n", + " epoch_metrics_df = pd.read_csv(log_path)\n", + " \n", + " loss_df = epoch_metrics_df[[\"epoch\", \"train_loss\"]].dropna()\n", + " val_loss_df = epoch_metrics_df[[\"epoch\", \"val_loss\"]].dropna()\n", + " loss_df = pd.merge(loss_df, val_loss_df, how=\"inner\", on=\"epoch\")\n", + " loss_df.reset_index(drop=True, inplace=True)\n", + " \n", + " metrics_df = epoch_metrics_df.drop(columns=[\"train_loss\", \"val_loss\"]).dropna()\n", + " metrics_df.reset_index(drop=True, inplace=True)\n", + "\n", + " return loss_df, metrics_df\n", + "\n", + "loss_df, metrics_df = get_logs(model)\n", + "\n", + "loss_df.head()" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
NDCG@10Recall@10Serendipity@10epochstep
00.0000520.0006570.00000301
10.0003220.0046020.00000313
20.0025950.0295860.00000225
30.0045640.0414200.00000437
40.0113010.0940170.00000449
\n", + "
" + ], + "text/plain": [ + " NDCG@10 Recall@10 Serendipity@10 epoch step\n", + "0 0.000052 0.000657 0.000003 0 1\n", + "1 0.000322 0.004602 0.000003 1 3\n", + "2 0.002595 0.029586 0.000002 2 5\n", + "3 0.004564 0.041420 0.000004 3 7\n", + "4 0.011301 0.094017 0.000004 4 9" + ] + }, + "execution_count": 25, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "metrics_df.head()" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "metadata": {}, + "outputs": [], + "source": [ + "del model\n", + "torch.cuda.empty_cache()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Advanced training full example\n", + "Running full training with all of the described validation features on Kion dataset" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Seed set to 60\n", + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n", + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n" + ] + } + ], + "source": [ + "# seed again for reproducibility of this piece of code\n", + "seed_everything(RANDOM_STATE, workers=True)\n", + "\n", + "# Callbacks\n", + "val_metrics_callback = ValidationMetrics(top_k=top_k, val_metrics=metrics, verbose=0)\n", + "best_ndcg_ckpt = ModelCheckpoint(\n", + " monitor=\"NDCG@10\",\n", + " mode=\"max\",\n", + " filename=\"{epoch}-{NDCG@10:.2f}\",\n", + ")\n", + "last_epoch_ckpt = ModelCheckpoint(filename=\"{epoch}-last_epoch\")\n", + "early_stopping_callback = EarlyStopping(\n", + " monitor=\"NDCG@10\",\n", + " patience=5,\n", + " mode=\"max\",\n", + ")\n", + "\n", + "# Function to get custom trainer with desired callbacks\n", + "def get_custom_trainer() -> Trainer:\n", + " return Trainer(\n", + " accelerator=\"gpu\",\n", + " devices=[1],\n", + " min_epochs=1,\n", + " max_epochs=100,\n", + " deterministic=True,\n", + 
" logger = CSVLogger(\"sasrec_logs\"),\n", + " enable_progress_bar=False,\n", + " enable_model_summary=False,\n", + " callbacks=[\n", + " val_metrics_callback, # calculate RecSys metrics\n", + " best_ndcg_ckpt, # save best NDCG model checkpoint\n", + " last_epoch_ckpt, # save model checkpoint after last epoch\n", + " early_stopping_callback, # early stopping on NDCG\n", + " ],\n", + " )\n", + "\n", + "# Model\n", + "model = SASRecModel(\n", + " n_factors=256,\n", + " n_blocks=2,\n", + " n_heads=4,\n", + " dropout_rate=0.2,\n", + " train_min_user_interactions=5,\n", + " session_max_len=50,\n", + " verbose=1,\n", + " deterministic=True,\n", + " item_net_block_types=(IdEmbeddingsItemNet,),\n", + " get_val_mask_func=get_val_mask_func, # pass our custom `get_val_mask_func`\n", + " get_trainer_func=get_custom_trainer, # pass function to initialize our custom trainer\n", + ")\n", + "\n", + "\n", + "# Fit model. Everything will happen under the hood\n", + "model.fit(dataset);" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Early stopping was triggered. 
We have checkpoints for best NDCG model (on epoch 14) and on last epoch (19)" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "epoch=14-NDCG@10=0.03.ckpt epoch=19-last_epoch.ckpt\r\n" + ] + } + ], + "source": [ + "!ls $model.fit_trainer.log_dir/checkpoints" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Loading best NDCG model from checkpoint and recommending" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n", + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n", + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "c9ef25b79cb441bd9be5bd65667495b4", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Predicting: | | 0/? [00:00\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
\n", + "" + ], + "text/plain": [ + " user_id item_id score rank\n", + "0 176549 11749 2.610277 1\n", + "1 176549 2025 2.577398 2\n", + "2 176549 9342 2.394489 3\n", + "3 176549 14488 2.366664 4\n", + "4 176549 7571 2.289778 5" + ] + }, + "execution_count": 29, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "ckpt_path = os.path.join(model.fit_trainer.log_dir, \"checkpoints\", \"epoch=14-NDCG@10=0.03.ckpt\")\n", + "best_model = SASRecModel.load_from_checkpoint(ckpt_path)\n", + "best_model.recommend(users=VAL_USERS[:1], dataset=dataset, filter_viewed=True, k=5)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's also look at our logs for losses and metrics" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
NDCG@10Recall@10Serendipity@10epochstep
00.0236630.1834320.00006702362
10.0279190.2097300.00012214725
20.0293600.2163050.00016627088
30.0301700.2268240.00020339451
40.0304120.2255100.000161411814
150.0316400.2261670.0001861537807
160.0313330.2307690.0002031640170
170.0312380.2281390.0001841742533
180.0318930.2320840.0001951844896
190.0315600.2301120.0001791947259
\n", + "
" + ], + "text/plain": [ + " NDCG@10 Recall@10 Serendipity@10 epoch step\n", + "0 0.023663 0.183432 0.000067 0 2362\n", + "1 0.027919 0.209730 0.000122 1 4725\n", + "2 0.029360 0.216305 0.000166 2 7088\n", + "3 0.030170 0.226824 0.000203 3 9451\n", + "4 0.030412 0.225510 0.000161 4 11814\n", + "15 0.031640 0.226167 0.000186 15 37807\n", + "16 0.031333 0.230769 0.000203 16 40170\n", + "17 0.031238 0.228139 0.000184 17 42533\n", + "18 0.031893 0.232084 0.000195 18 44896\n", + "19 0.031560 0.230112 0.000179 19 47259" + ] + }, + "execution_count": 30, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "loss_df, metrics_df = get_logs(model)\n", + "pd.concat([metrics_df.head(5), metrics_df.tail(5)])" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "metadata": {}, + "outputs": [ + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAjUAAAHHCAYAAABHp6kXAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8ekN5oAAAACXBIWXMAAA9hAAAPYQGoP6dpAABjtUlEQVR4nO3dd3wUZf4H8M9sTe8d0mmREgJiRJAWJESkW0AOQVB/eGChqXgnCngXwXIWFL1TwYZdsKBoQAIIQSEQEURKCISSQgLpySbZnd8fk2yypC7Zzewmn/frNa/dnXlm9jsZYj4+88yMIIqiCCIiIiI7p5C7ACIiIiJLYKghIiKiDoGhhoiIiDoEhhoiIiLqEBhqiIiIqENgqCEiIqIOgaGGiIiIOgSGGiIiIuoQGGqIiIioQ2CoISIiog6BoYaIrGrDhg0QBAEHDhyQuxQi6uAYaoiIiKhDYKghIiKiDoGhhohkd+jQISQkJMDNzQ0uLi6Ii4vDvn37TNpUVVVhxYoV6N69OxwcHODt7Y2hQ4ciKSnJ2CY7Oxv33nsvunbtCq1Wi8DAQEycOBFnzpwx2dYPP/yAm2++Gc7OznB1dcW4ceNw9OhRkzat3RYR2Q6V3AUQUed29OhR3HzzzXBzc8Njjz0GtVqNt956CyNGjMDOnTsRGxsLAHjmmWeQmJiI++67DzfccAOKiopw4MABHDx4ELfccgsAYOrUqTh69CgeeughhIWFITc3F0lJScjMzERYWBgA4IMPPsCsWbMQHx+P1atXo6ysDOvWrcPQoUNx6NAhY7vWbIuIbIxIRGRF69evFwGI+/fvb3T5pEmTRI1GI6anpxvnXbx4UXR1dRWHDRtmnBcdHS2OGzeuye+5cuWKCEB8/vnnm2xTXFwsenh4iPfff7/J/OzsbNHd3d04vzXbIiLbw9NPRCQbvV6Pn376CZMmTUJERIRxfmBgIO6++2788ssvKCoqAgB4eHjg6NGjOHnyZKPbcnR0hEajQXJyMq5cudJom6SkJBQUFGD69OnIy8szTkqlErGxsdixY0ert0VEtoehhohkc+nSJZSVlaFnz54NlkVFRcFgMODcuXMAgJUrV6KgoAA9evRA3759sXTpUhw+fNjYXqvVYvXq1fjhhx/g
7++PYcOGYc2aNcjOzja2qQ1Eo0aNgq+vr8n0008/ITc3t9XbIiLbw1BDRHZh2LBhSE9Px7vvvos+ffrg7bffxoABA/D2228b2zz66KM4ceIEEhMT4eDggKeeegpRUVE4dOgQAMBgMACQxtUkJSU1mL7++utWb4uIbJDc57+IqGNrbkxNdXW16OTkJN55550Nls2bN09UKBRiYWFho9stLi4WY2JixC5dujT53SdOnBCdnJzEGTNmiKIoip999pkIQPzxxx/N3o+rt0VEtoc9NUQkG6VSiTFjxuDrr782uVQ6JycHGzduxNChQ+Hm5gYAyM/PN1nXxcUF3bp1g06nAwCUlZWhoqLCpE1kZCRcXV2NbeLj4+Hm5oZ///vfqKqqalDPpUuXWr0tIrI9vKSbiNrFu+++i61btzaY/8wzzyApKQlDhw7F3//+d6hUKrz11lvQ6XRYs2aNsd11112HESNGYODAgfDy8sKBAwfwxRdfYMGCBQCAEydOIC4uDnfeeSeuu+46qFQqbNq0CTk5OZg2bRoAwM3NDevWrcPMmTMxYMAATJs2Db6+vsjMzMSWLVswZMgQrF27tlXbIiIbJHdXERF1bLWnn5qazp07Jx48eFCMj48XXVxcRCcnJ3HkyJHi3r17Tbbz7LPPijfccIPo4eEhOjo6ir169RL/9a9/iZWVlaIoimJeXp44f/58sVevXqKzs7Po7u4uxsbGip999lmDmnbs2CHGx8eL7u7uooODgxgZGSnOnj1bPHDggNnbIiLbIYiiKMqYqYiIiIgsgmNqiIiIqENgqCEiIqIOgaGGiIiIOgSGGiIiIuoQGGqIiIioQ2CoISIiog6h09x8z2Aw4OLFi3B1dYUgCHKXQ0RERK0giiKKi4sRFBQEhaL5vphOE2ouXryI4OBgucsgIiKia3Du3Dl07dq12TadJtS4uroCkH4otc+SISIiIttWVFSE4OBg49/x5nSaUFN7ysnNzY2hhoiIyM60ZugIBwoTERFRh2B2qNm1axfGjx+PoKAgCIKAzZs3N2hz7NgxTJgwAe7u7nB2dsagQYOQmZnZ5Db/97//4eabb4anpyc8PT0xevRo/PbbbyZtZs+eDUEQTKaxY8eaWz4RERF1UGaHmtLSUkRHR+P1119vdHl6ejqGDh2KXr16ITk5GYcPH8ZTTz0FBweHJreZnJyM6dOnY8eOHUhJSUFwcDDGjBmDCxcumLQbO3YssrKyjNPHH39sbvlERETUQbXpKd2CIGDTpk2YNGmScd60adOgVqvxwQcfXHNRer0enp6eWLt2Le655x4AUk9NQUFBoz1DrVFUVAR3d3cUFhZyTA0RUQei1+tRVVUldxnUBhqNpsnLtc35+23RgcIGgwFbtmzBY489hvj4eBw6dAjh4eFYtmyZSfBpSVlZGaqqquDl5WUyPzk5GX5+fvD09MSoUaPw7LPPwtvb25K7QEREdkIURWRnZ6OgoEDuUqiNFAoFwsPDodFo2rQdi4aa3NxclJSU4LnnnsOzzz6L1atXY+vWrZgyZQp27NiB4cOHt2o7jz/+OIKCgjB69GjjvLFjx2LKlCkIDw9Heno6nnzySSQkJCAlJQVKpbLBNnQ6HXQ6nfFzUVFR23eQiIhsRm2g8fPzg5OTE2+saqdqb46blZWFkJCQNh1Hi/fUAMDEiROxcOFCAED//v2xd+9evPnmm60KNc899xw++eQTJCcnm4zDmTZtmvF937590a9fP0RGRiI5ORlxcXENtpOYmIgVK1a0dZeIiMgG6fV6Y6Bhj7398/X1xcWLF1FdXQ21Wn3N27HoJd0+Pj5QqVS47rrrTOZHRUU1e/VTrRdeeAHPPfccfvrpJ/Tr16/ZthEREfDx8cGpU6caXb5s2TIUFhYap3PnzrV+R4iIyKbVjqFxcnKSuRKyhNrTTnq9vk3bsWhPjUajwaBBg3D8+HGT+SdOnEBoaGiz665Zswb/+te/8OOPP+L6669v8bvOnz+P
/Px8BAYGNrpcq9VCq9W2vngiIrI7POXUMVjqOJodakpKSkx6RzIyMpCWlgYvLy+EhIRg6dKluOuuuzBs2DCMHDkSW7duxbfffovk5GTjOvfccw+6dOmCxMREAMDq1auxfPlybNy4EWFhYcjOzgYAuLi4wMXFBSUlJVixYgWmTp2KgIAApKen47HHHkO3bt0QHx/fxh8BERERdQRmn346cOAAYmJiEBMTAwBYtGgRYmJisHz5cgDA5MmT8eabb2LNmjXo27cv3n77bXz55ZcYOnSocRuZmZnIysoyfl63bh0qKytx++23IzAw0Di98MILAAClUonDhw9jwoQJ6NGjB+bOnYuBAwdi9+7d7I0hIqJOKywsDC+//LJFtpWcnAxBEOz6arI23afGnvA+NUREHUdFRQUyMjIQHh7e7M1dbdGIESPQv39/i4SRS5cuwdnZ2SJji5KTkzFy5EhcuXIFHh4ebd6eOZo7nub8/eaznyzgSmklTuQUy10GERF1AKIoorq6ulVtfX19OVi6HoaaNko9ewUxq5Jw7/r9cpdCREQ2bvbs2di5cydeeeUV43MMN2zYAEEQ8MMPP2DgwIHQarX45ZdfkJ6ejokTJ8Lf3x8uLi4YNGgQtm3bZrK9q08/CYKAt99+G5MnT4aTkxO6d++Ob7755prr/fLLL9G7d29otVqEhYXhxRdfNFn+xhtvoHv37nBwcIC/vz9uv/1247IvvvgCffv2haOjI7y9vTF69GiUlpZecy2tYdGrnzqjCB9nAMCFgnKU6qrhrOWPlIhIDqIooryqbZcEXytHtbJVV/C88sorOHHiBPr06YOVK1cCAI4ePQoAeOKJJ/DCCy8gIiICnp6eOHfuHG699Vb861//glarxfvvv4/x48fj+PHjCAkJafI7VqxYgTVr1uD555/Ha6+9hhkzZuDs2bMN7tLfktTUVNx555145plncNddd2Hv3r34+9//Dm9vb8yePRsHDhzAww8/jA8++AA33XQTLl++jN27dwMAsrKyMH36dKxZswaTJ09GcXExdu/eDWuPeOFf4DbydNbA21mD/NJKnL5Uir5d3eUuiYioUyqv0uO65T/K8t1/royHk6blP6nu7u7QaDRwcnJCQEAAAOCvv/4CAKxcuRK33HKLsa2Xlxeio6ONn1etWoVNmzbhm2++wYIFC5r8jtmzZ2P69OkAgH//+9949dVX8dtvv2Hs2LFm7dNLL72EuLg4PPXUUwCAHj164M8//8Tzzz+P2bNnIzMzE87Ozrjtttvg6uqK0NBQ40VEWVlZqK6uxpQpU4y3dOnbt69Z338tePrJAiJ9XQAA6ZdKZK6EiIjs1dX3aCspKcGSJUsQFRUFDw8PuLi44NixYy3ezLb+zWudnZ3h5uaG3Nxcs+s5duwYhgwZYjJvyJAhOHnyJPR6PW655RaEhoYiIiICM2fOxEcffYSysjIAQHR0NOLi4tC3b1/ccccd+N///ocrV66YXYO52FNjAZF+LvjtzGWcymWoISKSi6NaiT9XynPvMkd1w2cQmsvZ2dnk85IlS5CUlIQXXngB3bp1g6OjI26//XZUVlY2u52rHzMgCILxMUaW5OrqioMHDyI5ORk//fQTli9fjmeeeQb79++Hh4cHkpKSsHfvXvz000947bXX8I9//AO//vorwsPDLV5LLYYaC+jmJ/XUMNQQEclHEIRWnQKSm0ajadXjAPbs2YPZs2dj8uTJAKSemzNnzli5ujpRUVHYs2dPg5p69OhhfJC0SqXC6NGjMXr0aDz99NPw8PDAzz//jClTpkAQBAwZMgRDhgzB8uXLERoaik2bNmHRokVWq9n2j74diPSV0jVPPxERUUvCwsLw66+/4syZM3BxcWmyF6V79+746quvMH78eAiCgKeeesoqPS5NWbx4MQYNGoRVq1bhrrvuQkpKCtauXYs33ngDAPDdd9/h9OnTGDZsGDw9PfH999/DYDCgZ8+e+PXXX7F9+3aMGTMG
fn5++PXXX3Hp0iVERUVZtWaOqbGA2p6aM/mlqNa33z84IiKyP0uWLIFSqcR1110HX1/fJsfIvPTSS/D09MRNN92E8ePHIz4+HgMGDGi3OgcMGIDPPvsMn3zyCfr06YPly5dj5cqVmD17NgDAw8MDX331FUaNGoWoqCi8+eab+Pjjj9G7d2+4ublh165duPXWW9GjRw/885//xIsvvoiEhASr1sw7CluAwSCi99M/orxKj+2LhxsHDhMRkXXY8x2FqSHeUdiGKBQCImpPQXFcDRERkSwYaizEOFiY42qIiMgGzZs3Dy4uLo1O8+bNk7s8i+BAYQvp5ssroIiIyHatXLkSS5YsaXRZR3nQM0ONhdT21PD0ExER2SI/Pz/4+fnJXYZV8fSThUTWhppLpVZ/tgURERE1xFBjIWHezlAqBJToqpFTpJO7HCIiok6HocZCNCoFQr2cAHBcDRERkRwYaiwogg+2JCIikg1DjQXxGVBERETyYaixIIYaIiKytrCwMLz88sutaisIAjZv3mzVemwJQ40F8QZ8RERE8mGosaDaRyVcKtahsLxK5mqIiIg6F4YaC3JzUMPfTQuAg4WJiKih//73vwgKCoLBYDCZP3HiRMyZMwfp6emYOHEi/P394eLigkGDBmHbtm0W+/4//vgDo0aNgqOjI7y9vfHAAw+gpKTu71VycjJuuOEGODs7w8PDA0OGDMHZs2cBAL///jtGjhwJV1dXuLm5YeDAgThw4IDFarMEhhoL47gaIiKZiCJQWSrP1Mqbrt5xxx3Iz8/Hjh07jPMuX76MrVu3YsaMGSgpKcGtt96K7du349ChQxg7dizGjx+PzMzMNv94SktLER8fD09PT+zfvx+ff/45tm3bhgULFgAAqqurMWnSJAwfPhyHDx9GSkoKHnjgAQiCAACYMWMGunbtiv379yM1NRVPPPEE1Gp1m+uyJD4mwcIifV2w51Q+e2qIiNpbVRnw7yB5vvvJi4DGucVmnp6eSEhIwMaNGxEXFwcA+OKLL+Dj44ORI0dCoVAgOjra2H7VqlXYtGkTvvnmG2P4uFYbN25ERUUF3n//fTg7S7WuXbsW48ePx+rVq6FWq1FYWIjbbrsNkZGRAICoqCjj+pmZmVi6dCl69eoFAOjevXub6rEG9tRYGJ8BRUREzZkxYwa+/PJL6HTS3ec/+ugjTJs2DQqFAiUlJViyZAmioqLg4eEBFxcXHDt2zCI9NceOHUN0dLQx0ADAkCFDYDAYcPz4cXh5eWH27NmIj4/H+PHj8corryArK8vYdtGiRbjvvvswevRoPPfcc0hPT29zTZbGnhoL49O6iYhkonaSekzk+u5WGj9+PERRxJYtWzBo0CDs3r0b//nPfwAAS5YsQVJSEl544QV069YNjo6OuP3221FZWWmtyk2sX78eDz/8MLZu3YpPP/0U//znP5GUlIQbb7wRzzzzDO6++25s2bIFP/zwA55++ml88sknmDx5crvU1hoMNRZW21OTebkMumo9tCqlzBUREXUSgtCqU0Byc3BwwJQpU/DRRx/h1KlT6NmzJwYMGAAA2LNnD2bPnm0MCiUlJThz5oxFvjcqKgobNmxAaWmpsbdmz549UCgU6Nmzp7FdTEwMYmJisGzZMgwePBgbN27EjTfeCADo0aMHevTogYULF2L69OlYv369TYUann6yMF9XLVy1KhhE4ExemdzlEBGRDZoxYwa2bNmCd999FzNmzDDO7969O7766iukpaXh999/x913393gSqm2fKeDgwNmzZqFI0eOYMeOHXjooYcwc+ZM+Pv7IyMjA8uWLUNKSgrOnj2Ln376CSdPnkRUVBTKy8uxYMECJCcn4+zZs9izZw/2799vMubGFrCnxsIEQUCknwvSzhXgVG4Jega4yl0SERHZmFGjRsHLywvHjx/H3XffbZz/0ksvYc6cObjpppvg4+ODxx9/HEVFRRb5TicnJ/z444945JFHMGjQIDg5OWHq1Kl4
6aWXjMv/+usvvPfee8jPz0dgYCDmz5+P//u//0N1dTXy8/Nxzz33ICcnBz4+PpgyZQpWrFhhkdosRRDFVl6HZueKiorg7u6OwsJCuLm5WfW7lnz+O75IPY+Fo3vgkdG2NzqciMjeVVRUICMjA+Hh4XBwcJC7HGqj5o6nOX+/efrJCiL5tG4iIqJ2x1BjBbwBHxERWdtHH30EFxeXRqfevXvLXZ4sOKbGCmpDzem8EhgMIhQKQeaKiIioo5kwYQJiY2MbXWZrd/ptLww1VhDs6QiNUoGKKgMuFJQj2Kv19y8gIiJqDVdXV7i68mKU+nj6yQpUSgXCfKQgc4rjaoiIiNqF2aFm165dGD9+PIKCgiAIAjZv3tygzbFjxzBhwgS4u7vD2dkZgwYNavEWz59//jl69eoFBwcH9O3bF99//73JclEUsXz5cgQGBsLR0RGjR4/GyZMnzS2/3fBxCURE1mepe7iQvCx1IbbZp59KS0sRHR2NOXPmYMqUKQ2Wp6enY+jQoZg7dy5WrFgBNzc3HD16tNlL7vbu3Yvp06cjMTERt912GzZu3IhJkybh4MGD6NOnDwBgzZo1ePXVV/Hee+8hPDwcTz31FOLj4/Hnn3/a5OV83XgFFBGR1Wg0GigUCly8eBG+vr7QaDTGp0mTfRFFEZcuXYIgCG0eC9Sm+9QIgoBNmzZh0qRJxnnTpk2DWq3GBx980Ort3HXXXSgtLcV3331nnHfjjTeif//+ePPNNyGKIoKCgrB48WIsWbIEAFBYWAh/f39s2LAB06ZNa/E72vM+NQDwddoFPPJJGgaFeeLzeTdZ/fuIiDqbyspKZGVloayMd2+3d4IgoGvXrnBxcWmwzJy/3xYdKGwwGLBlyxY89thjiI+Px6FDhxAeHo5ly5aZBJ+rpaSkYNGiRSbz4uPjjae2MjIykJ2djdGjRxuXu7u7IzY2FikpKY2GGp1OZ3wCKgCL3ZGxtSL5YEsiIqvSaDQICQlBdXU19Hq93OVQG6jVaiiVbX9WokVDTW5uLkpKSvDcc8/h2WefxerVq7F161ZMmTIFO3bswPDhwxtdLzs7G/7+/ibz/P39kZ2dbVxeO6+pNldLTEyU9fbNtaHmSlkV8kt08HbRylYLEVFHVXvKorNewkymLHr1U+2ArYkTJ2LhwoXo378/nnjiCdx222148803LflVLVq2bBkKCwuN07lz59r1+x01SnTxcAQApF8qbdfvJiIi6owsGmp8fHygUqlw3XXXmcyPiopq9uqngIAA5OTkmMzLyclBQECAcXntvKbaXE2r1cLNzc1kam+8szAREVH7sWio0Wg0GDRoEI4fP24y/8SJEwgNDW1yvcGDB2P79u0m85KSkjB48GAAQHh4OAICAkzaFBUV4ddffzW2sUUMNURERO3H7DE1JSUlOHXqlPFzRkYG0tLS4OXlhZCQECxduhR33XUXhg0bhpEjR2Lr1q349ttvkZycbFznnnvuQZcuXZCYmAgAeOSRRzB8+HC8+OKLGDduHD755BMcOHAA//3vfwFI50wfffRRPPvss+jevbvxku6goKBmByDLzXivGl7WTUREZHVmh5oDBw5g5MiRxs+1Vy3NmjULGzZswOTJk/Hmm28iMTERDz/8MHr27Ikvv/wSQ4cONa6TmZkJhaKuk+imm27Cxo0b8c9//hNPPvkkunfvjs2bNxvvUQMAjz32GEpLS/HAAw+goKAAQ4cOxdatW23yHjW1eAUUERFR+2nTfWrsSXvfpwYALpdWYsCqJADAnyvj4aTho7aIiIjMYc7fbz77yYq8nDXwctYAAE7zCigiIiKrYqixskhfZwAcV0NERGRtDDVWxiugiIiI2gdDjZVxsDAREVH7YKixskhe1k1ERNQuGGqsrFtNT01GXimq9QaZqyEiIuq4GGqsrIuHIxzVSlTpRWReLpO7HCIiog6LocbKFAoBEcYroHhZNxERkbUw1LQDDhYmIiKyPoaadsDLuomIiKyP
oaYdGEMNr4AiIiKyGoaadlB7+ul0bgk6yaO2iIiI2h1DTTsI83GCQgCKddXILdbJXQ4REVGHxFDTDrQqJUK9pSugOK6GiIjIOhhq2kntKSjeWZiIiMg6GGraSaQfe2qIiIisiaGmnXTjvWqIiIisiqGmnXTjgy2JiIisiqGmndQ+rTunSIeiiiqZqyEiIup4GGraiZuDGn6uWgBAOk9BERERWRxDTTvi4xKIiIish6GmHdVd1s2ndRMREVkaQ007Yk8NERGR9TDUtCNeAUVERGQ9DDXtqDbUZF4ug65aL3M1REREHQtDTTvyc9XCRauC3iDibH6Z3OUQERF1KAw17UgQBOP9ajiuhoiIyLIYatpZ7eMSeK8aIiIiy2KoaWfGB1tysDAREZFFMdS0Mz7YkoiIyDoYatpZ/cu6DQZR5mqIiIg6Doaadhbi5QS1UkBFlQEXC8vlLoeIiKjDYKhpZyqlAmHeNeNqeAqKiIjIYhhqZMDHJRAREVkeQ40M6sbV8MGWRERElsJQI4NI3quGiIjI4swONbt27cL48eMRFBQEQRCwefNmk+WzZ8+GIAgm09ixY5vdZlhYWIN1BEHA/PnzjW1GjBjRYPm8efPMLd8mGE8/8V41REREFqMyd4XS0lJER0djzpw5mDJlSqNtxo4di/Xr1xs/a7XaZre5f/9+6PV1D3g8cuQIbrnlFtxxxx0m7e6//36sXLnS+NnJycnc8m1ChK80UPhyaSUul1bCy1kjc0VERET2z+xQk5CQgISEhGbbaLVaBAQEtHqbvr6+Jp+fe+45REZGYvjw4SbznZyczNqurXLSqNDFwxEXCsqRfqkEXs5ecpdERERk96wypiY5ORl+fn7o2bMnHnzwQeTn57d63crKSnz44YeYM2cOBEEwWfbRRx/Bx8cHffr0wbJly1BW1vSTrnU6HYqKikwmW8IHWxIREVmW2T01LRk7diymTJmC8PBwpKen48knn0RCQgJSUlKgVCpbXH/z5s0oKCjA7NmzTebffffdCA0NRVBQEA4fPozHH38cx48fx1dffdXodhITE7FixQpL7JJVdPN1wa4TlxhqiIiILEQQRfGa79UvCAI2bdqESZMmNdnm9OnTiIyMxLZt2xAXF9fiNuPj46HRaPDtt9822+7nn39GXFwcTp06hcjIyAbLdToddDqd8XNRURGCg4NRWFgINze3Fuuwto2/ZuLJTX9gRE9fbLj3BrnLISIisklFRUVwd3dv1d9vq1/SHRERAR8fH5w6darFtmfPnsW2bdtw3333tdg2NjYWAJrcrlarhZubm8lkSyJ9eVdhIiIiS7J6qDl//jzy8/MRGBjYYtv169fDz88P48aNa7FtWloaALRqu7ao9rLuCwXlKK/Ut9CaiIiIWmJ2qCkpKUFaWpoxVGRkZCAtLQ2ZmZkoKSnB0qVLsW/fPpw5cwbbt2/HxIkT0a1bN8THxxu3ERcXh7Vr15ps12AwYP369Zg1axZUKtOhPunp6Vi1ahVSU1Nx5swZfPPNN7jnnnswbNgw9OvX7xp2W37eLlp4OqkhisDpPPbWEBERtZXZoebAgQOIiYlBTEwMAGDRokWIiYnB8uXLoVQqcfjwYUyYMAE9evTA3LlzMXDgQOzevdvkXjXp6enIy8sz2e62bduQmZmJOXPmNPhOjUaDbdu2YcyYMejVqxcWL16MqVOntjjuxtbV3lmYp6CIiIjark0Dhe2JOQON2ssTXx7GJ/vP4eFR3bBoTE+5yyEiIrI5NjVQmJrGB1sSERFZDkONjHgDPiIiIsthqJFRt5oxNRl5pajWG2SuhoiIyL4x1Mioi4cjHNQKVOoNOHelXO5yiIiI7BpDjYwUCgERPjXjangKioiIqE0YamRmHFdziaGGiIioLRhqZNaN96ohIiKyCIYamdVd1s1QQ0RE1BYMNTLrVu+y7k5yH0QiIiKrYKiRWZiPExQCUFxRjUvFOrnLISIislsMNTLTqpQI8XICwHE1
REREbcFQYwM4roaIiKjtGGpsAJ/WTURE1HYMNTaA96ohIiJqO4YaG2A8/ZTLp3UTERFdK4YaG1B7+im7qALFFVUyV0NERGSfGGpsgLujGr6uWgBA+iX21hAREV0LhhobUfu4BD7YkoiI6Now1NiIbhwsTERE1CYMNTYi0tcZAC/rJiIiulYMNTaim58rAJ5+IiIiulYMNTai9vTT2ctlqKw2yFwNERGR/WGosRH+blq4aFXQG0SczecVUEREROZiqLERgiBwXA0REVEbMNTYkEg+2JKIiOiaMdTYEONl3eypISIiMhtDjQ0xPq2bPTVERERmY6ixIfUfbGkwiDJXQ0REZF8YamxIiJcT1EoB5VV6ZBVVyF0OERGRXWGosSFqpQKh3rwCioiI6Fow1NiY2gdbMtQQERGZh6HGxnTjZd1ERETXhKHGxvCybiIiomvDUGNjai/r5oMtiYiIzMNQY2Mi/aSBwvmllbhSWilzNURERPaDocbGOGlU6OLhCIDjaoiIiMxhdqjZtWsXxo8fj6CgIAiCgM2bN5ssnz17NgRBMJnGjh3b7DafeeaZBuv06tXLpE1FRQXmz58Pb29vuLi4YOrUqcjJyTG3fLsQwQdbEhERmc3sUFNaWoro6Gi8/vrrTbYZO3YssrKyjNPHH3/c4nZ79+5tss4vv/xisnzhwoX49ttv8fnnn2Pnzp24ePEipkyZYm75doFXQBEREZlPZe4KCQkJSEhIaLaNVqtFQECAeYWoVE2uU1hYiHfeeQcbN27EqFGjAADr169HVFQU9u3bhxtvvNGs77J1vAKKiIjIfFYZU5OcnAw/Pz/07NkTDz74IPLz81tc5+TJkwgKCkJERARmzJiBzMxM47LU1FRUVVVh9OjRxnm9evVCSEgIUlJSGt2eTqdDUVGRyWQv+GBLIiIi81k81IwdOxbvv/8+tm/fjtWrV2Pnzp1ISEiAXq9vcp3Y2Fhs2LABW7duxbp165CRkYGbb74ZxcXFAIDs7GxoNBp4eHiYrOfv74/s7OxGt5mYmAh3d3fjFBwcbLF9tLbanprzV8pRUdX0z42IiIjqmH36qSXTpk0zvu/bty/69euHyMhIJCcnIy4urtF16p/O6tevH2JjYxEaGorPPvsMc+fOvaY6li1bhkWLFhk/FxUV2U2w8XbWwMNJjYKyKpy+VIrrgtzkLomIiMjmWf2S7oiICPj4+ODUqVOtXsfDwwM9evQwrhMQEIDKykoUFBSYtMvJyWlyHI5Wq4Wbm5vJZC8EQah7BhRPQREREbWK1UPN+fPnkZ+fj8DAwFavU1JSgvT0dOM6AwcOhFqtxvbt241tjh8/jszMTAwePNjiNduCSD7YkoiIyCxmh5qSkhKkpaUhLS0NAJCRkYG0tDRkZmaipKQES5cuxb59+3DmzBls374dEydORLdu3RAfH2/cRlxcHNauXWv8vGTJEuzcuRNnzpzB3r17MXnyZCiVSkyfPh0A4O7ujrlz52LRokXYsWMHUlNTce+992Lw4MEd7sqnWrysm4iIyDxmj6k5cOAARo4cafxcO25l1qxZWLduHQ4fPoz33nsPBQUFCAoKwpgxY7Bq1SpotVrjOunp6cjLyzN+Pn/+PKZPn478/Hz4+vpi6NCh2LdvH3x9fY1t/vOf/0ChUGDq1KnQ6XSIj4/HG2+8cU07bQ+MoYY9NURERK0iiKIoyl1EeygqKoK7uzsKCwvtYnxNZn4Zhj2/AxqVAsdWjoVSIchdEhERUbsz5+83n/1ko7p4OkKrUqCy2oDzV8rkLoeIiMjmMdTYKKVCQAQHCxMREbUaQ40N4+MSiIiIWo+hxoZF8mndRERErcZQY8N4WTcREVHrMdTYsPqnnzrJRWpERETXjKHGhoV5O0MhAEUV1bhUopO7HCIiIpvGUGPDHNRKBHs5AQDSc0tlroaIiMi2MdTYOD7YkoiIqHUYamxcZM24mpM5xTJXQkREZNsYamxc/2APAMCmgxeQW1Qh
bzFEREQ2jKHGxsX3DkB0V3cU66rx7++PyV0OERGRzWKosXFKhYBVk/pAEIDNaRexNz2v5ZWIiIg6IYYaO9Cvqwf+FhsKAHhq8xFUVhtkroiIiMj2MNTYiSVjesLHRYP0S6V4+5fTcpdDRERkcxhq7IS7kxpP3hoFAHh1+0mcv1Imc0VERES2haHGjkyO6YIbwr1QUWXAim//lLscIiIim8JQY0cEQcCzk/pApRCQ9GcOtv2ZI3dJRERENoOhxs708HfF3JvDAQDPfHsU5ZV6mSsiIiKyDQw1dujhUd0R5O6A81fK8fqOU3KXQ0REZBMYauyQs1aF5eN7AwDe2pWOdD4XioiIiKHGXsX39sfInr6o0otY/vURiKIod0lERESyYqixU4IgYMWEPtCqFNhzKh/fHs6SuyQiIiJZMdTYsRBvJ8wf2Q0AsOq7P1FUUSVzRURERPJhqGmrqnLg5Dbg909l+foHhkUg3McZl4p1+E/SCVlqICIisgUMNW114SDw0VTgx2WAof2fyeSgVmLFBGnQ8Ht7z+DoxcJ2r4GIiMgWMNS0VddBgNoZKMsHcv6QpYRhPXwxrl8gDCLwz81HYDBw0DAREXU+DDVtpdIAYUOl9+k7ZCvjqXHXwVmjxKHMAnx24JxsdRAREcmFocYSIkdKr6flCzUB7g5YeEsPAMBzW//C5dJK2WohIiKSA0ONJUSOkl7PpkgDh2Uy+6Yw9ApwRUFZFVb/8JdsdRAREcmBocYSfHoArkGAXgdkpshWhkqpwLOT+gAAPj1wDqlnL8tWCxERUXtjqLEEQag7BSXjuBoAuD7MC3de3xUA8M/NR1Gtb/8rsoiIiOTAUGMpEfKPq6n1REIUPJzUOJZVhPdTzspdDhERUbtgqLGUiBHSa/YfQMklWUvxctbg8bG9AAAvJZ1ATlGFrPUQERG1B4YaS3HxBfz7Su8zdspbC4C7rg9G/2APlOiq8eyWY3KXQ0REZHUMNZYUOUJ6lXlcDQAoFAKendQHCgH49veL+OVkntwlERERWZXZoWbXrl0YP348goKCIAgCNm/ebLJ89uzZEATBZBo7dmyz20xMTMSgQYPg6uoKPz8/TJo0CcePHzdpM2LEiAbbnTdvnrnlW1f9cTWi/Hf17dPFHfcMDgMALP/6CHTVenkLIiIisiKzQ01paSmio6Px+uuvN9lm7NixyMrKMk4ff/xxs9vcuXMn5s+fj3379iEpKQlVVVUYM2YMSktLTdrdf//9Jttds2aNueVbV+hNgFILFF0A8mzj4ZKLxvSAr6sWp/NK8b9dp+Uuh4iIyGpU5q6QkJCAhISEZttotVoEBAS0eptbt241+bxhwwb4+fkhNTUVw4YNM853cnIya7vtTu0IhNwojalJ3wH49pS7Irg5qPHPcVF45JM0vPbzKUzs3wXBXk5yl0VERGRxVhlTk5ycDD8/P/Ts2RMPPvgg8vPzzVq/sFB60rSXl5fJ/I8++gg+Pj7o06cPli1bhrKysia3odPpUFRUZDK1Cxt4ZMLVJkQH4aZIb+iqDXj6m6MQbeDUGBERkaVZPNSMHTsW77//PrZv347Vq1dj586dSEhIgF7fuvEcBoMBjz76KIYMGYI+ffoY599999348MMPsWPHDixbtgwffPAB/va3vzW5ncTERLi7uxun4ODgNu9bq9SOqznzC6Cvap/vbIEgCFg5sQ/USgE//5WLpD9z5C6JiIjI4gSxDf/bLggCNm3ahEmTJjXZ5vTp04iMjMS2bdsQFxfX4jYffPBB/PDDD/jll1/QtWvXJtv9/PPPiIuLw6lTpxAZGdlguU6ng06nM34uKipCcHAwCgsL4ebm1mId18xgAF7oBpTlA/f+II2zsRHP//gXXt+Rji4ejkhaNAxOGrPPPhIREbWroqIiuLu7t+rvt9Uv6Y6IiICPjw9OnTrVYtsFCxbgu+++w44dO5oNNAAQGxsLAE1uV6vVws3NzWRqFwoFED5cem8Dl3bXt2Bk
d3TxcMSFgnK89nPLx4OIiMieWD3UnD9/Hvn5+QgMDGyyjSiKWLBgATZt2oSff/4Z4eHhLW43LS0NAJrdrmxscFwNADhqlHhmQm8AwP92ncbJnGKZKyIiIrIcs0NNSUkJ0tLSjKEiIyMDaWlpyMzMRElJCZYuXYp9+/bhzJkz2L59OyZOnIhu3bohPj7euI24uDisXbvW+Hn+/Pn48MMPsXHjRri6uiI7OxvZ2dkoLy8HAKSnp2PVqlVITU3FmTNn8M033+Cee+7BsGHD0K9fvzb+CKygdlzNhVSgvEDWUq52y3X+GB3lh2qDiKe+PsJBw0RE1GGYHWoOHDiAmJgYxMTEAAAWLVqEmJgYLF++HEqlEocPH8aECRPQo0cPzJ07FwMHDsTu3buh1WqN20hPT0deXt0dbtetW4fCwkKMGDECgYGBxunTTz8FAGg0Gmzbtg1jxoxBr169sHjxYkydOhXffvttW/ffOjyCAe/ugGgAzuyWu5oGnh7fGw5qBfadvoyv0y7KXQ4REZFFtGmgsD0xZ6CRRXy/FPjtv8D1c4HbXrL+95np9R2n8PyPx+HjosX2xcPh7qiWuyQiIqIGbGqgcKcVYZvjamrdd3M4InydkVeiw0s/HW95BSIiIhvHUGMtYUMBQQlcPg1cOSt3NQ1oVUqsmijdB+iDfWfx2f5zHF9DRER2jaHGWhzcgK6DpPc22lszpJsPpgzoAoMIPPblYUz77z6cyuUVUUREZJ8Yaqyp9tJuG7tfTX2rp/bDEwm94KBW4NeMy0h4ZTde+PE4Kqr4RG8iIrIvDDXWVDuuJmMnYLDNkKBWKjBveCSSFg5HXC8/VOlFrN1xCvEv78KuE5fkLo+IiKjVGGqsqctAQOsGlF8BstLkrqZZwV5OeHvW9XjzbwMQ4OaAs/lluOfd37Bg40HkFlXIXR4REVGLGGqsSakCwm6W3tvwKahagiBgbJ9AbFs8HHOGhEMhAN8dzkLcizvxQcoZ6A0cSExERLaLocbajI9MSJa1DHO4aFVYPv46fLNgKPp1dUexrhpPfX0UU9btxZELhXKXR0RE1CiGGmurHVeTuQ+oLJW3FjP16eKOTX8fgpUTe8NVq8Lv5wowYe0vWPXdnyjRVctdHhERkQmGGmvzjgTcgwFDFXB2r9zVmE2pEHDP4DBsWzwct/ULhEEE3vklA7e8tBNbj2Tz3jZERGQzGGqsTRCAiBHSezsYV9MUfzcHrL17ADbcOwghXk7IKqzAvA9Tcf/7B3D+Spnc5RERETHUtItI235kgjlG9PTDTwuHYcHIblArBWw7lotbXtqFt3amo0pvkLs8IiLqxBhq2kP4CAACkPsnUJwtczFt56BWYkl8T3z/8M24IdwL5VV6JP7wF8a/9gtSz16WuzwiIuqkGGrag7M3EBgtvbejq6Ba0t3fFZ8+cCPW3N4Pnk5q/JVdjKnrUrDsqz9QUFYpd3lERNTJMNS0Fzt4ZMK1EAQBd14fjO2LR+COgV0BAB//lom4F3di06HzHEhMRETthqGmvUTUu19NB/xD7+WswfN3ROPTB25ENz8X5JdWYuGnv2PG278i/VKJ3OUREVEnwFDTXkJuBFSOQEk2kHtM7mqsJjbCG98/fDOWxveEVqXA3vR8jH5pJ6b9NwWf7s9EUUWV3CUSEVEHxVDTXlRaIPQm6X0HuAqqORqVAvNHdkPSwuEYHeUHUQT2nb6Mx7/8A9c/uw3zPzqIpD9zUFnNq6WIiMhyBLGTDHooKiqCu7s7CgsL4ebmJk8Re18Dfvon0O0W4G9fyFODDC4UlOPrtAvYdPACTubWnYrydFLjtn5BmBTTBQNCPCAIgoxVEhGRLTLn7zdDTXvKPgK8OUQ6DfXEWan3phMRRRFHLxZh86EL+Pr3i7hUrDMuC/V2wqT+XTAppgvCfZxlrJKIiGwJQ00jbCLUiCLwQg+gNBeY9S0QPkyeOmyA3iBiz6k8bD50AVuPZqOsUm9c1j/YA1MGdMG4
voHwdulcwY+IiEwx1DTCJkINAHx5P/DHZ8DQRcDop+Wrw4aUVVbjp6M52HToAnafvARDzb9IlULA8B6+mDygC0ZH+cNBrZS3UCIiancMNY2wmVCTthHY/CAQFAM8kCxfHTYqt7gC3/6ehc2HLuCPC4XG+S5aFRL6BGDygC64MdwbCgXH3xARdQYMNY2wmVBTdBF4KQqAADx2GnDykq8WG3cqtxibDl3A5kMXcaGg3Dg/0N0BE/oHYUpMV/QMcJWxQiIisjaGmkbYTKgBgNdjgUt/AXdsAHpPlrcWO2AwiNh/5jI2p13Ad4ezUFxRbVwWFeiGW6L8EBPqiQHBnnB3UstYKRERWZo5f79V7VQT1RcxUgo16TsYalpBoRAQG+GN2AhvPD2+N3b8lYtNhy5gx/FcHMsqwrGsImPbSF9nDAjxxIBQTwwI8UR3PxeeqiIi6iTYUyOHEz8CG+8EPEKARw4DvD/LNSkoq8TWI9n4LeMyDmZewZn8sgZtXB1U6B/sYQw6/YM94O7I3hwiInvB00+NsKlQoysBVocBhirgoYOAd6S89XQQ+SU6HMoswMHMKziYeQW/nytEeZW+Qbvufi41IUcKO5G+7M0hIrJVDDWNsKlQAwDrxwFnfwHGvQgMuk/uajqkar0Bf2UX41DmFRysCTtnG+nNcXNQoX+IJwaESCGnf4gH3BzYm0NEZAsYahphc6Fm1/PAz88CvW4Dpn0kdzWdRl5Nb07qWak35/D5AlRUmT6DShDq9eaEeKJHgCsifJ0ZdIiIZMBQ0wibCzXnU4G3RwFad+nSbiXHbMuhSm/AX1nFxlNWBzOv4Nzl8kbb+rpqEeHjjEg/F+NrpI8Lung6QsnTV0REVsFQ0wibCzUGPbAmAqgoAOZuA4IHyV0R1cgtrjCOzfn9XAHSL5WaPKfqahqVAuHezoj0c0aEj4vxNcLXGa7s3SEiahNe0m0PFErp2U/HvgFO72CosSF+rg6I7x2A+N4BxnlFFVU4fakUpy+V4PSlUqTXvGbklaKy2oDjOcU4nlPcyLa0iPB1RqSvCyJ8XRBZ8z7Ig707RESWxlAjp8iRUqhJ3wEMf0zuaqgZbg5q9A/2QP9gD5P5eoOIC1fKkX6pRAo6eaVIz5VeLxXrkFsz7Tt92WQ9rUqBcB9nRPjW9epE+Lpw7A4RURsw1MgpYqT0ev43QFcMaHnLf3ujVAgI8XZCiLcTRvbyM1lW27sjhZy6Hp4zeWXQVUtXZv2V3bB3x8dFUy/oSKEn3NcZIV5OUCsV7bVrRER2x+xQs2vXLjz//PNITU1FVlYWNm3ahEmTJhmXz549G++9957JOvHx8di6dWuz23399dfx/PPPIzs7G9HR0Xjttddwww03GJdXVFRg8eLF+OSTT6DT6RAfH4833ngD/v7+5u6C7fAKBzzDgCtngDO/AD0T5K6ILKi53p3zV8rqTmPl1Z3Wyi3WIa+kEnkll/HbGdPeHZVCQIiXk7FXJ9zHGRE+0nsfFw0E3sSRiDo5s0NNaWkpoqOjMWfOHEyZMqXRNmPHjsX69euNn7VabbPb/PTTT7Fo0SK8+eabiI2Nxcsvv4z4+HgcP34cfn7S//0uXLgQW7Zsweeffw53d3csWLAAU6ZMwZ49e8zdBdsSMRJIXS+dgmKo6RSUCgGh3s4I9XZu0LtToqtGxqVSnM4rQXrNGJ6MPGnsTlmlXgpAeaXAsVyT9VwdVNLpq3pBJ8LXGaHeTnDSsEOWiDqHNl39JAhCoz01BQUF2Lx5c6u3Exsbi0GDBmHt2rUAAIPBgODgYDz00EN44oknUFhYCF9fX2zcuBG33347AOCvv/5CVFQUUlJScOONN7b4HTZ39VOtP78GPrsH8OkBLNgvdzVko0RRRHZRhTRYuV7Pzum8Epy/Uo7mfos9ndTo4umIIHdHBHk4oqun9Brk4YguHo7s5SEimyb71U/Jycnw8/ODp6cnRo0ahWeffRbe3t6Ntq2srERqaiqWLVtm
[base64-encoded PNG omitted: line plot of the training losses over epochs, title "Losses"]",
      "text/plain": [
       ""
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "loss_df.plot(kind=\"line\", x=\"epoch\", title=\"Losses\");"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "[base64-encoded PNG omitted; output truncated]
IqKgrh4eEPrHd+fj7s7OyQl5cHW1tbdU5ZL4z54RgOXcnGa33b4P0BfrquDhERkVao8/2tVktNeXk5Tp06VSNESKVShIeHIzo6utZ9oqOj7wkdERERdZYHgLy8PEgkEtjb2wMAnJyc0KFDB6xZswZFRUWorKzEsmXL4OLigqCgoFqPUVZWhvz8/BoPQ3U9uwiHrmRDIgFGBbODMBERUW3UCjXZ2dlQKBRwdXWtsd3V1bXO20Dp6elqlS8tLcW0adMwatQoVSKTSCTYtWsXYmNjYWNjA7lcji+//BI7d+6Eg4NDrceZN28e7OzsVA9PT091TlWvrLvTl6Zv+xbwdLTUcW2IiIj0k16NfqqoqMDw4cMhhMCSJUtU24UQeP311+Hi4oKDBw/i+PHjiIyMxODBg5GWllbrsaZPn468vDzVIzk5WVun0aRKKxTYeLKq7lzniYiIqG6m6hR2dnaGiYkJMjIyamzPyMiAm1vtnVfd3NzqVb460CQmJmLPnj017pvt2bMH27Ztw+3bt1Xbv/vuO0RFRWH16tW19uWRyWSQyWTqnJ5e2hmXjtvFFfCwk6Ofn4uuq0NERKS31GqpMTc3R1BQEHbv3q3aplQqsXv3boSFhdW6T1hYWI3yABAVFVWjfHWgSUhIwK5du+Dk5FSjfHFxcVVlpTWrK5VKoVQq1TkFg3Ps+i0AwJDAljCRSnRcGyIiIv2lVksNAEydOhXjx49Hjx49EBwcjEWLFqGoqAgTJ04EAIwbNw4tW7bEvHnzAABTpkxBnz59sHDhQgwaNAjr16/HyZMnsXz5cgBVgeaZZ55BTEwMtm3bBoVCoepv4+joCHNzc4SFhcHBwQHjx4/H7NmzYWFhge+//x7Xr1/HoEGDmuq90EuJt6oCXTsXax3XhIiISL+pHWpGjBiBrKwszJ49G+np6QgMDMTOnTtVnYGTkpJqtKj06tULa9euxaxZszBjxgy0a9cOmzdvRpcuXQAAKSkp2Lp1KwAgMDCwxmvt3bsXffv2hbOzM3bu3ImZM2fi0UcfRUVFBTp37owtW7YgICCgoeduEKpDjY8zOwgTERHdj9rz1BgqQ5ynpqxSAb8Pd0II4MTMcLSwMfw+QkREROrQ2Dw1pF3JOSUQArA0N4Gztbmuq0NERKTXGGr0WFJOEQDA28kKEgk7CRMREd0PQ40eq+5P480J94iIiB6IoUaPqUKNE0MNERHRgzDU6LHEW//efiIiIqL7Y6jRY4k5bKkhIiKqL4YaPaVQCiQz1BAREdUbQ42eSssrQYVCwMxEAnc7C11Xh4iISO8x1Oip6k7Cng6WXPOJiIioHhhq9BRHPhEREamHoUZPJeZw5BMREZE6GGr0VGJ2VUuNFyfeIyIiqheGGj1VPZybq3MTERHVD0ONHhJCIOnOxHtejrz9REREVB8MNXoou7AcReUKSCSApyOHcxMREdUHQ40eql6d28POAjJTEx3XhoiIyDAw1Oih6uHc7CRMRERUfww1eujGLXYSJiIiUhdDjR5iJ2EiIiL1MdTooRucTZiIiEhtDDV6KImrcxMREamNoUbP5JdWIKeoHACXSCAiIlIHQ42eSbpz68nZ2hzWMlMd14aIiMhwMNToGQ7nJiIiahiGGj3D1bmJiIgahqFGz1Svzs1OwkREROphqNEz/7bUMNQQERGpg6FGz/zbp4a3n4iIiNTBUKNHSisUSM8vBQD4sKWGiIhILQw1euTm7WIIAVjLTOFoZa7r6hARERkUhho9cuOuTsISiUTHtSEiIjIsDDV6JJHLIxARETUYQ40e4ercREREDcdQo0eqV+dmJ2EiIiL1MdTokerVub0YaoiIiNTGUKMnKhVKJKv61PD2ExERkboYavREWl4pKpUC5qZSuNvKdV0dIiIig8NQoyeqZxL2dLCAVMrh3EREROpiqNETN25xdW4iIqLGYKjRE0mco4aIiKhRGGr0RGJ1S40jQw0REVFDMNToieo+Nd7OvP1ERETUEA
w1ekAI8e/tJ7bUEBERNQhDjR7IKixDcbkCUgnQyoGhhoiIqCEYavRA9a0nD3sLmJvykhARETUEv0H1gKo/DUc+ERERNRhDjR5I5OrcREREjcZQowcSuTo3ERFRozHU6IFETrxHRETUaAw1eiCRSyQQERE1GkONjuWVVCC3uAIA4MU5aoiIiBqMoUbHku70p3G2lsFKZqrj2hARERkuhhodq16dm52EiYiIGoehRseql0fwYqghIiJqFIYaHbuRXb06NzsJExERNQZDjY5VD+f2cWZLDRERUWMw1OhYdUdhjnwiIiJqHIYaHSqtUCA9vxQA4MM5aoiIiBqFoUaHqjsJ28hNYW9ppuPaEBERGTaGGh26e3VuiUSi49oQEREZNoYaHeLyCERERE2HoUaHVC017CRMRETUaAw1OnRD1VLDUENERNRYDDU6VN1RmLefiIiIGo+hRkcqFEqk3C4BwJYaIiKipsBQoyOpuSWoVArITKVwtZHrujpEREQGj6FGRxLvmklYKuVwbiIiosZiqNGRxJx/56ghIiKixmOo0ZHEbM5RQ0RE1JQaFGoWL14MHx8fyOVyhISE4Pjx4/ctv3HjRvj5+UEul8Pf3x87duxQPVdRUYFp06bB398fVlZW8PDwwLhx45CamnrPcbZv346QkBBYWFjAwcEBkZGRDam+XmBLDRERUdNSO9Rs2LABU6dOxZw5cxATE4OAgABEREQgMzOz1vJHjhzBqFGjMGnSJMTGxiIyMhKRkZGIi4sDABQXFyMmJgYffvghYmJi8PvvvyM+Ph5DhgypcZxNmzZh7NixmDhxIs6cOYPDhw/jueeea8Ap6weuzk1ERNS0JEIIoc4OISEh6NmzJ7799lsAgFKphKenJyZPnowPPvjgnvIjRoxAUVERtm3bptoWGhqKwMBALF26tNbXOHHiBIKDg5GYmAgvLy9UVlbCx8cHH330ESZNmqROdVXy8/NhZ2eHvLw82NraNugYTUUIgY6zd6K0Qol97/aFjzNvQREREdVGne9vtVpqysvLcerUKYSHh/97AKkU4eHhiI6OrnWf6OjoGuUBICIios7yAJCXlweJRAJ7e3sAQExMDFJSUiCVStGtWze4u7tj4MCBqtYeQ5NZUIbSCiVMpBK0dLDQdXWIiIiMglqhJjs7GwqFAq6urjW2u7q6Ij09vdZ90tPT1SpfWlqKadOmYdSoUapEdu3aNQDA3LlzMWvWLGzbtg0ODg7o27cvcnJyaj1OWVkZ8vPzazz0xY07nYRb2lvAzIR9tYmIiJqCXn2jVlRUYPjw4RBCYMmSJartSqUSADBz5kwMGzYMQUFBWLlyJSQSCTZu3FjrsebNmwc7OzvVw9PTUyvnUB/sJExERNT01Ao1zs7OMDExQUZGRo3tGRkZcHNzq3UfNze3epWvDjSJiYmIioqqcd/M3d0dANCpUyfVNplMhtatWyMpKanW150+fTry8vJUj+Tk5PqfqIaxkzAREVHTUyvUmJubIygoCLt371ZtUyqV2L17N8LCwmrdJywsrEZ5AIiKiqpRvjrQJCQkYNeuXXBycqpRPigoCDKZDPHx8TX2uXHjBry9vWt9XZlMBltb2xoPfVG9OrcP56ghIiJqMqbq7jB16lSMHz8ePXr0QHBwMBYtWoSioiJMnDgRADBu3Di0bNkS8+bNAwBMmTIFffr0wcKFCzFo0CCsX78eJ0+exPLlywFUhZNnnnkGMTEx2LZtGxQKhaq/jaOjI8zNzWFra4tXXnkFc+bMgaenJ7y9vbFgwQIAwLPPPtskb4Q2Va/O7cXbT0RERE1G7VAzYsQIZGVlYfbs2UhPT0dgYCB27typ6gyclJQEqfTfBqBevXph7dq1mDVrFmbMmIF27dph8+bN6NKlCwAgJSUFW7duBQAEBgbWeK29e/eib9++AIAFCxbA1NQUY8eORUlJCUJCQrBnzx44ODg05Lx1qnrdJ/apISIiajpqz1NjqPRlnprc4nIEfh
wFALjwcQQszdXOlURERM2GxuapocarbqVxsZEx0BARETUhhhotYydhIiIizWCo0TLVcG72pyEiImpSDDVappp4j3PUEBERNSmGGi1LvHP7yZuLWBIRETUphhotUw3nZksNERFRk2Ko0aLi8kpkFpQB4Bw1RERETY2hRouqZxK2szCDvaW5jmtDRERkXBhqtIgzCRMREWkOQ40WqToJc44aIiKiJsdQo0XsJExERKQ5DDVaxNW5iYiINIehRou4RAIREZHmMNRoSYVCidTcUgDsKExERKQJDDVaknK7BAqlgNxMChcbma6rQ0REZHQYarSk+taTt6MVJBKJjmtDRERkfBhqtISdhImIiDSLoUZLqodz+zDUEBERaQRDjZZUT7znxZFPREREGsFQoyWceI+IiEizGGq0QKkUSMypvv3ElhoiIiJNYKjRgoyCUpRXKmEqlcDDXq7r6hARERklhhotqL711NLBAqYmfMuJiIg0gd+wWsDVuYmIiDSPoUYL2EmYiIhI8xhqtKC6kzDXfCIiItIchhot4O0nIiIizWOo0TAhxL+3n9hSQ0REpDEMNRp2u7gCBaWVAAAv9qkhIiLSGIYaDau+9eRmK4fczETHtSEiIjJeDDUaxtW5iYiItIOhRsNuZHN1biIiIm1gqNGwxByOfCIiItIGhhoNS+LIJyIiIq1gqNGwG6rZhNlSQ0REpEkMNRpUVFaJ7MIyAOwoTEREpGkMNRpUPemeg6UZ7CzMdFwbIiIi48ZQo0FJdzoJe7GTMBERkcYx1GgQV+cmIiLSHoYaDaruJMw5aoiIiDSPoUaDePuJiIhIexhqNIircxMREWkPQ42GlFcqkZpbAoChhoiISBsYajTk5u1iKAVgaW6CFtYyXVeHiIjI6DHUaEhi9ercjpaQSCQ6rg0REZHxY6jRkMTs6oUseeuJiIhIGxhqNKS6pYarcxMREWkHQ42GcOQTERGRdjHUaEjirTu3n7g6NxERkVYw1GiAQimQnMPh3ERERNrEUKMB6fmlKFcoYWYigbudXNfVISIiahYYajSg+tZTKwdLmJrwLSYiItIGfuNqQBI7CRMREWkdQ40GVK/O7e3IUENERKQtDDUawNW5iYiItI+hRgNuZFe11Pjw9hMREZHWMNQ0MSEEknLYp4aIiEjbGGqaWE5ROQrLKiGRVI1+IiIiIu1gqGli1Z2E3W3lkJuZ6Lg2REREzQdDTRP7t5MwW2mIiIi0iaGmiVUvZOnDkU9ERERaxVDTxKpDDVtqiIiItIuhpolxdW4iIiLdYKhpYolcIoGIiEgnGGqaUGFZJW4VlQNgqCEiItI2hpomVH3rydHKHDZyMx3XhoiIqHlhqGlCvPVERESkOww1TSiRq3MTERHpDENNE6qeeM+bc9QQERFpHUNNE6penZu3n4iIiLSvQaFm8eLF8PHxgVwuR0hICI4fP37f8hs3boSfnx/kcjn8/f2xY8cO1XMVFRWYNm0a/P39YWVlBQ8PD4wbNw6pqam1HqusrAyBgYGQSCQ4ffp0Q6qvMVydm4iISHfUDjUbNmzA1KlTMWfOHMTExCAgIAARERHIzMystfyRI0cwatQoTJo0CbGxsYiMjERkZCTi4uIAAMXFxYiJicGHH36ImJgY/P7774iPj8eQIUNqPd77778PDw8PdautcWWVCqTmlQDg7SciIiJdkAghhDo7hISEoGfPnvj2228BAEqlEp6enpg8eTI++OCDe8qPGDECRUVF2LZtm2pbaGgoAgMDsXTp0lpf48SJEwgODkZiYiK8vLxU2//66y9MnToVmzZtQufOnREbG4vAwMB61Ts/Px92dnbIy8uDra2tGmdcP1cyCxH+5X5YmZsg7qMISCSSJn8NIiKi5kad72+1WmrKy8tx6tQphIeH/3sAqRTh4eGIjo6udZ/o6Oga5QEgIiKizvIAkJeXB4lEAnt7e9W2jIwMvPjii/jpp59gafng2ztlZWXIz8+v8dCkf1fntm
KgISIi0gG1Qk12djYUCgVcXV1rbHd1dUV6enqt+6Snp6tVvrS0FNOmTcOoUaNUiUwIgQkTJuCVV15Bjx496lXXefPmwc7OTvXw9PSs134NVd1J2If9aYiIiHRCr0Y/VVRUYPjw4RBCYMmSJart33zzDQoKCjB9+vR6H2v69OnIy8tTPZKTkzVRZZXqTsJcnZuIiEg3TNUp7OzsDBMTE2RkZNTYnpGRATc3t1r3cXNzq1f56kCTmJiIPXv21LhvtmfPHkRHR0Mmk9XYp0ePHhg9ejRWr159z+vKZLJ7ymtS9RIJPuwkTEREpBNqtdSYm5sjKCgIu3fvVm1TKpXYvXs3wsLCat0nLCysRnkAiIqKqlG+OtAkJCRg165dcHJyqlH+66+/xpkzZ3D69GmcPn1aNSR8w4YN+PTTT9U5BY3hbMJERES6pVZLDQBMnToV48ePR48ePRAcHIxFixahqKgIEydOBACMGzcOLVu2xLx58wAAU6ZMQZ8+fbBw4UIMGjQI69evx8mTJ7F8+XIAVYHmmWeeQUxMDLZt2waFQqHqb+Po6Ahzc/MaI6AAwNraGgDQpk0btGrVquFn30QUSoHk27z9REREpEtqh5oRI0YgKysLs2fPRnp6OgIDA7Fz505VZ+CkpCRIpf82APXq1Qtr167FrFmzMGPGDLRr1w6bN29Gly5dAAApKSnYunUrANwzPHvv3r3o27dvA09Ne9LySlChEDA3kcLdzkLX1SEiImqW1J6nxlBpcp6aw1eyMfqHY2jdwgp73unbpMcmIiJqzjQ2Tw3Vjv1piIiIdI+hpglUj3zi8ghERES6w1DTBFQtNewkTEREpDMMNU0gMad6NmG21BAREekKQ00jCSFUt584nJuIiEh3GGoaKbuwHMXlCkgkQCsHDucmIiLSFYaaRqpendvDzgIyUxMd14aIiKj5YqhppNziCtjITdlJmIiISMc4+V4TEEKgrFIJuRlbaoiIiJoSJ9/TMolEwkBDRESkYww1REREZBQYaoiIiMgoMNQQERGRUWCoISIiIqPAUENERERGgaGGiIiIjAJDDRERERkFhhoiIiIyCgw1REREZBQYaoiIiMgoMNQQERGRUWCoISIiIqPAUENERERGwVTXFdAWIQSAqiXMiYiIyDBUf29Xf4/fT7MJNQUFBQAAT09PHdeEiIiI1FVQUAA7O7v7lpGI+kQfI6BUKpGamgobGxtIJJImPXZ+fj48PT2RnJwMW1vbJj22vuG5Gq/mdL48V+PVnM63uZyrEAIFBQXw8PCAVHr/XjPNpqVGKpWiVatWGn0NW1tbo/7FuhvP1Xg1p/PluRqv5nS+zeFcH9RCU40dhYmIiMgoMNQQERGRUWCoaQIymQxz5syBTCbTdVU0judqvJrT+fJcjVdzOt/mdK711Ww6ChMREZFxY0sNERERGQWGGiIiIjIKDDVERERkFBhqiIiIyCgw1NTT4sWL4ePjA7lcjpCQEBw/fvy+5Tdu3Ag/Pz/I5XL4+/tjx44dWqppw82bNw89e/aEjY0NXFxcEBkZifj4+Pvus2rVKkgkkhoPuVyupRo3zty5c++pu5+f3333McTrCgA+Pj73nKtEIsHrr79ea3lDuq4HDhzA4MGD4eHhAYlEgs2bN9d4XgiB2bNnw93dHRYWFggPD0dCQsIDj6vuZ15b7ne+FRUVmDZtGvz9/WFlZQUPDw+MGzcOqamp9z1mQz4L2vCgazthwoR76j1gwIAHHlcfr+2DzrW2z69EIsGCBQvqPKa+XldNYqiphw0bNmDq1KmYM2cOYmJiEBAQgIiICGRmZtZa/siRIxg1ahQmTZqE2NhYREZGIjIyEnFxcVquuXr279+P119/HUePHkVUVBQqKirw+OOPo6io6L772draIi0tTfVITEzUUo0br3PnzjXqfujQoTrLGup1BYATJ07UOM+oqCgAwLPPPlvnPoZyXYuKihAQEIDFixfX+vznn3+Or7/+GkuXLsWxY8
dgZWWFiIgIlJaW1nlMdT/z2nS/8y0uLkZMTAw+/PBDxMTE4Pfff0d8fDyGDBnywOOq81nQlgddWwAYMGBAjXqvW7fuvsfU12v7oHO9+xzT0tLw448/QiKRYNiwYfc9rj5eV40S9EDBwcHi9ddfV/2sUCiEh4eHmDdvXq3lhw8fLgYNGlRjW0hIiHj55Zc1Ws+mlpmZKQCI/fv311lm5cqVws7OTnuVakJz5swRAQEB9S5vLNdVCCGmTJki2rRpI5RKZa3PG+p1BSD++OMP1c9KpVK4ubmJBQsWqLbl5uYKmUwm1q1bV+dx1P3M68r/nm9tjh8/LgCIxMTEOsuo+1nQhdrOdfz48WLo0KFqHccQrm19ruvQoUPFo48+et8yhnBdmxpbah6gvLwcp06dQnh4uGqbVCpFeHg4oqOja90nOjq6RnkAiIiIqLO8vsrLywMAODo63rdcYWEhvL294enpiaFDh+L8+fPaqF6TSEhIgIeHB1q3bo3Ro0cjKSmpzrLGcl3Ly8vx888/4/nnn7/v4q6GfF2rXb9+Henp6TWum52dHUJCQuq8bg35zOuzvLw8SCQS2Nvb37ecOp8FfbJv3z64uLigQ4cOePXVV3Hr1q06yxrLtc3IyMD27dsxadKkB5Y11OvaUAw1D5CdnQ2FQgFXV9ca211dXZGenl7rPunp6WqV10dKpRJvvfUWHnroIXTp0qXOch06dMCPP/6ILVu24Oeff4ZSqUSvXr1w8+ZNLda2YUJCQrBq1Srs3LkTS5YswfXr1/HII4+goKCg1vLGcF0BYPPmzcjNzcWECRPqLGPI1/Vu1ddGnevWkM+8viotLcW0adMwatSo+y54qO5nQV8MGDAAa9aswe7du/HZZ59h//79GDhwIBQKRa3ljeXarl69GjY2Nnj66afvW85Qr2tjNJtVukk9r7/+OuLi4h54/zUsLAxhYWGqn3v16oWOHTti2bJl+OSTTzRdzUYZOHCg6v+7du2KkJAQeHt749dff63XX0CGasWKFRg4cCA8PDzqLGPI15WqVFRUYPjw4RBCYMmSJfcta6ifhZEjR6r+39/fH127dkWbNm2wb98+9O/fX4c106wff/wRo0ePfmDnfUO9ro3BlpoHcHZ2homJCTIyMmpsz8jIgJubW637uLm5qVVe37zxxhvYtm0b9u7di1atWqm1r5mZGbp164YrV65oqHaaY29vj/bt29dZd0O/rgCQmJiIXbt24YUXXlBrP0O9rtXXRp3r1pDPvL6pDjSJiYmIioq6bytNbR70WdBXrVu3hrOzc531NoZre/DgQcTHx6v9GQYM97qqg6HmAczNzREUFITdu3ertimVSuzevbvGX7J3CwsLq1EeAKKiouosry+EEHjjjTfwxx9/YM+ePfD19VX7GAqFAufOnYO7u7sGaqhZhYWFuHr1ap11N9TrereVK1fCxcUFgwYNUms/Q72uvr6+cHNzq3Hd8vPzcezYsTqvW0M+8/qkOtAkJCRg165dcHJyUvsYD/os6KubN2/i1q1bddbb0K8tUNXSGhQUhICAALX3NdTrqhZd91Q2BOvXrxcymUysWrVKXLhwQbz00kvC3t5epKenCyGEGDt2rPjggw9U5Q8fPixMTU3FF198IS5evCjmzJkjzMzMxLlz53R1CvXy6quvCjs7O7Fv3z6RlpamehQXF6vK/O+5fvTRR+Lvv/8WV69eFadOnRIjR44UcrlcnD9/XhenoJZ33nlH7Nu3T1y/fl0cPnxYhIeHC2dnZ5GZmSmEMJ7rWk2hUAgvLy8xbdq0e54z5OtaUFAgYmNjRWxsrAAgvvzySxEbG6sa7TN//nxhb28vtmzZIs6ePSuGDh0qfH19RUlJieoYjz76qPjmm29UPz/oM69L9zvf8vJyMWTIENGqVStx+vTpGp/jsrIy1TH+93wf9FnQlfuda0FBgXj33XdFdHS0uH79uti1a5fo3r27aNeunSgtLVUdw1Cu7YN+j4UQIi
8vT1haWoolS5bUegxDua6axFBTT998843w8vIS5ubmIjg4WBw9elT1XJ8+fcT48eNrlP/1119F+/bthbm5uejcubPYvn27lmusPgC1PlauXKkq87/n+tZbb6neF1dXV/HEE0+ImJgY7Ve+AUaMGCHc3d2Fubm5aNmypRgxYoS4cuWK6nljua7V/v77bwFAxMfH3/OcIV/XvXv31vp7W30+SqVSfPjhh8LV1VXIZDLRv3//e94Db29vMWfOnBrb7veZ16X7ne/169fr/Bzv3btXdYz/Pd8HfRZ05X7nWlxcLB5//HHRokULYWZmJry9vcWLL754TzgxlGv7oN9jIYRYtmyZsLCwELm5ubUew1CuqyZJhBBCo01BRERERFrAPjVERERkFBhqiIiIyCgw1BAREZFRYKghIiIio8BQQ0REREaBoYaIiIiMAkMNERERGQWGGiJqtvbt2weJRILc3FxdV4WImgBDDRERERkFhhoiIiIyCgw1RKQzSqUS8+bNg6+vLywsLBAQEIDffvsNwL+3hrZv346uXbtCLpcjNDQUcXFxNY6xadMmdO7cGTKZDD4+Pli4cGGN58vKyjBt2jR4enpCJpOhbdu2WLFiRY0yp06dQo8ePWBpaYlevXohPj5esydORBrBUENEOjNv3jysWbMGS5cuxfnz5/H2229jzJgx2L9/v6rMe++9h4ULF+LEiRNo0aIFBg8ejIqKCgBVYWT48OEYOXIkzp07h7lz5+LDDz/EqlWrVPuPGzcO69atw9dff42LFy9i2bJlsLa2rlGPmTNnYuHChTh58iRMTU3x/PPPa+X8iahpcUFLItKJsrIyODo6YteuXQgLC1Ntf+GFF1BcXIyXXnoJ/fr1w/r16zFixAgAQE5ODlq1aoVVq1Zh+PDhGD16NLKysvDPP/+o9n///fexfft2nD9/HpcvX0aHDh0QFRWF8PDwe+qwb98+9OvXD7t27UL//v0BADt27MCgQYNQUlICuVyu4XeBiJoSW2qISCeuXLmC4uJiPPbYY7C2tlY91qxZg6tXr6rK3R14HB0d0aFDB1y8eBEAcPHiRTz00EM1jvvQQw8hISEBCoUCp0+fhomJCfr06XPfunTt2lX1/+7u7gCAzMzMRp8jEWmXqa4rQETNU2FhIQBg+/btaNmyZY3nZDJZjWDTUBYWFvUqZ2Zmpvp/iUQCoKq/DxEZFrbUEJFOdOrUCTKZDElJSWjbtm2Nh6enp6rc0aNHVf9/+/ZtXL58GR07dgQAdOzYEYcPH65x3MOHD6N9+/YwMTGBv78/lEpljT46RGS82FJDRDphY2ODd999F2+//TaUSiUefvhh5OXl4fDhw7C1tYW3tzcA4OOPP4aTkxNcXV0xc+ZMODs7IzIyEgDwzjvvoGfPnvjkk08wYsQIREdH49tvv8V3330HAPDx8cH48ePx/PPP4+uvv0ZAQAASExORmZmJ4cOH6+rUiUhDGGqISGc++eQTtGjRAvPmzcO1a9dgb2+P7t27Y8aMGarbP/Pnz8eUKVOQkJCAwMBA/PnnnzA3NwcAdO/eHb/++itmz56NTz75BO7u7vj4448xYcIE1WssWbIEM2bMwGuvvYZbt27By8sLM2bM0MXpEpGGcfQTEeml6pFJt2/fhr29va6rQ0QGgH1qiIiIyCgw1BAREZFR4O0nIiIiMgpsqSEiIiKjwFBDRERERoGhhoiIiIwCQw0REREZBYYaIiIiMgoMNURERGQUGGqIiIjIKDDUEBERkVFgqCEiIiKj8P/FjCtXCrq5QwAAAABJRU5ErkJggg==", + "text/plain": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "metrics_df[[\"epoch\", \"NDCG@10\"]].plot(kind=\"line\", x=\"epoch\", title=\"NDCG\");" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## More RecTools features for transformers\n", + "### Saving and loading models\n", + "Transformer models can be saved and loaded just like any other RecTools models. \n", + "\n", + "*Note that you can't use these common functions for savings and loading lightning checkpoints. Use `load_from_checkpoint` method instead.*\n", + "\n", + "**Note that you shouldn't change code for custom functions and classes that were passed to model during initialization if you want to have correct model saving and loading.** " + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "54579980" + ] + }, + "execution_count": 33, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "model.save(\"my_model.pkl\")" + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n", + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n", + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "8c3d274cc8064541b842dd0358bb6e79", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Predicting: | | 0/? 
[00:00\n", + "[stripped pandas HTML table elided; see text/plain output]
\n", + "" + ], + "text/plain": [ + " user_id item_id score rank\n", + "0 176549 2599 2.681841 1\n", + "1 176549 12225 2.516873 2\n", + "2 176549 2025 2.416028 3\n", + "3 176549 11749 2.410308 4\n", + "4 176549 14120 2.356824 5" + ] + }, + "execution_count": 34, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "loaded = load_model(\"my_model.pkl\")\n", + "print(type(loaded))\n", + "loaded.recommend(users=VAL_USERS[:1], dataset=dataset, filter_viewed=True, k=5)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Configs for transformer models\n", + "\n", + "`from_config`, `get_config` and `get_params` methods are fully available for transformers just like for any other models." + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n" + ] + }, + { + "data": { + "text/plain": [ + "{'cls': 'SASRecModel',\n", + " 'verbose': 0,\n", + " 'data_preparator_type': 'rectools.models.nn.sasrec.SASRecDataPreparator',\n", + " 'n_blocks': 1,\n", + " 'n_heads': 1,\n", + " 'n_factors': 64,\n", + " 'use_pos_emb': True,\n", + " 'use_causal_attn': True,\n", + " 'use_key_padding_mask': False,\n", + " 'dropout_rate': 0.2,\n", + " 'session_max_len': 100,\n", + " 'dataloader_num_workers': 0,\n", + " 'batch_size': 128,\n", + " 'loss': 'softmax',\n", + " 'n_negatives': 1,\n", + " 'gbce_t': 0.2,\n", + " 'lr': 0.001,\n", + " 'epochs': 2,\n", + " 'deterministic': False,\n", + " 'recommend_batch_size': 256,\n", + " 'recommend_accelerator': 'auto',\n", + " 'recommend_devices': 1,\n", + " 'recommend_n_threads': 0,\n", + " 'recommend_use_gpu_ranking': True,\n", + " 'train_min_user_interactions': 2,\n", + " 'item_net_block_types': ['rectools.models.nn.item_net.IdEmbeddingsItemNet',\n", + " 
'rectools.models.nn.item_net.CatFeaturesItemNet'],\n", + " 'pos_encoding_type': 'rectools.models.nn.transformer_net_blocks.LearnableInversePositionalEncoding',\n", + " 'transformer_layers_type': 'rectools.models.nn.sasrec.SASRecTransformerLayers',\n", + " 'lightning_module_type': 'rectools.models.nn.transformer_base.TransformerLightningModule',\n", + " 'get_val_mask_func': None,\n", + " 'get_trainer_func': None}" + ] + }, + "execution_count": 35, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "config = {\n", + " \"epochs\": 2,\n", + " \"n_blocks\": 1,\n", + " \"n_heads\": 1,\n", + " \"n_factors\": 64, \n", + "}\n", + "\n", + "model = SASRecModel.from_config(config)\n", + "model.get_params(simple_types=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Classes and functions in configs\n", + "\n", + "Transformer models in RecTools may accept functions and classes as arguments. These types of arguments are fully compatible with RecTools configs. 
You can either pass them as Python objects or as strings that define their import paths.\n", + "\n", + "**Note that you shouldn't change the code of those functions and classes if you want to have a reproducible config and correct model saving and loading.** \n", + "\n", + "Below is an example of both approaches to passing them to configs:" + ] + }, + { + "cell_type": "code", + "execution_count": 36, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "HPU available: False, using: 0 HPUs\n" + ] + }, + { + "data": { + "text/plain": [ + "{'cls': 'SASRecModel',\n", + " 'verbose': 0,\n", + " 'data_preparator_type': 'rectools.models.nn.sasrec.SASRecDataPreparator',\n", + " 'n_blocks': 2,\n", + " 'n_heads': 4,\n", + " 'n_factors': 256,\n", + " 'use_pos_emb': True,\n", + " 'use_causal_attn': True,\n", + " 'use_key_padding_mask': False,\n", + " 'dropout_rate': 0.2,\n", + " 'session_max_len': 100,\n", + " 'dataloader_num_workers': 0,\n", + " 'batch_size': 128,\n", + " 'loss': 'softmax',\n", + " 'n_negatives': 1,\n", + " 'gbce_t': 0.2,\n", + " 'lr': 0.001,\n", + " 'epochs': 3,\n", + " 'deterministic': False,\n", + " 'recommend_batch_size': 256,\n", + " 'recommend_accelerator': 'auto',\n", + " 'recommend_devices': 1,\n", + " 'recommend_n_threads': 0,\n", + " 'recommend_use_gpu_ranking': True,\n", + " 'train_min_user_interactions': 2,\n", + " 'item_net_block_types': ['rectools.models.nn.item_net.IdEmbeddingsItemNet',\n", + " 'rectools.models.nn.item_net.CatFeaturesItemNet'],\n", + " 'pos_encoding_type': 'rectools.models.nn.transformer_net_blocks.LearnableInversePositionalEncoding',\n", + " 'transformer_layers_type': 'rectools.models.nn.sasrec.SASRecTransformerLayers',\n", + " 'lightning_module_type': 'rectools.models.nn.transformer_base.TransformerLightningModule',\n", + " 'get_val_mask_func': '__main__.get_val_mask_func',\n", + " 
'get_trainer_func': '__main__.get_custom_trainer'}" + ] + }, + "execution_count": 36, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "config = {\n", + " \"get_val_mask_func\": get_val_mask_func, # function to get validation mask\n", + " \"get_trainer_func\": get_custom_trainer, # function to get custom trainer\n", + " \"transformer_layers_type\": \"rectools.models.nn.sasrec.SASRecTransformerLayers\", # path to transformer layers class\n", + "}\n", + "\n", + "model = SASRecModel.from_config(config)\n", + "model.get_params(simple_types=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that if you didn't pass a custom `get_trainer_func`, you can still replace the default `trainer` after model initialization. But in this case the custom trainer will not be saved with the model and will not appear in the model config and params." + ] + }, + { + "cell_type": "code", + "execution_count": 37, + "metadata": {}, + "outputs": [], + "source": [ + "model._trainer = trainer" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Multi-gpu training\n", + "RecTools models use PyTorch Lightning to handle multi-gpu training.\n", + "Please refer to the PyTorch Lightning documentation for details. We do not cover it in this guide."
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "rectools-sasrec", + "language": "python", + "name": "rectools-sasrec" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.12" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/examples/tutorials/validate_transformers_turorial.ipynb b/examples/tutorials/validate_transformers_turorial.ipynb deleted file mode 100644 index 0f7624eb..00000000 --- a/examples/tutorials/validate_transformers_turorial.ipynb +++ /dev/null @@ -1,1514 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [], - "source": [ - "# TODO: will remove\n", - "import sys\n", - "sys.path.append(\"../../\")" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [], - "source": [ - "import numpy as np\n", - "import os\n", - "import pandas as pd\n", - "import itertools\n", - "import torch\n", - "import typing as tp\n", - "import warnings\n", - "from collections import Counter\n", - "from pathlib import Path\n", - "from functools import partial\n", - "\n", - "from lightning_fabric import seed_everything\n", - "from pytorch_lightning import Trainer\n", - "from pytorch_lightning.callbacks import EarlyStopping\n", - "from rectools import Columns, ExternalIds\n", - "from rectools.dataset import Dataset\n", - "from rectools.metrics import NDCG, Recall, Serendipity, calc_metrics\n", - "\n", - "from rectools.models import BERT4RecModel, SASRecModel\n", - "from rectools.models.nn.item_net import IdEmbeddingsItemNet\n", - "from rectools.models.nn.transformer_base import TransformerModelBase\n", - "\n", - "# Enable deterministic behaviour with CUDA >= 10.2\n", - "os.environ[\"CUBLAS_WORKSPACE_CONFIG\"] = \":4096:8\"\n", - 
"warnings.simplefilter(\"ignore\", UserWarning)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Load data" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": {}, - "outputs": [], - "source": [ - "%%time\n", - "!wget -q https://github.com/irsafilo/KION_DATASET/raw/f69775be31fa5779907cf0a92ddedb70037fb5ae/data_en.zip -O data_en.zip\n", - "!unzip -o data_en.zip\n", - "!rm data_en.zip" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "(5476251, 5)\n" - ] - }, - { - "data": { - "text/html": [ - "
\n", - "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
user_iditem_iddatetimetotal_durwatched_pct
017654995062021-05-11425072.0
169931716592021-05-298317100.0
\n", - "
" - ], - "text/plain": [ - " user_id item_id datetime total_dur watched_pct\n", - "0 176549 9506 2021-05-11 4250 72.0\n", - "1 699317 1659 2021-05-29 8317 100.0" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# Download dataset\n", - "DATA_PATH = Path(\"./data_en\")\n", - "items = pd.read_csv(DATA_PATH / 'items_en.csv', index_col=0)\n", - "interactions = (\n", - " pd.read_csv(DATA_PATH / 'interactions.csv', parse_dates=[\"last_watch_dt\"])\n", - " .rename(columns={\"last_watch_dt\": Columns.Datetime})\n", - ")\n", - "\n", - "print(interactions.shape)\n", - "interactions.head(2)" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "(962179, 15706)" - ] - }, - "execution_count": 5, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "interactions[Columns.User].nunique(), interactions[Columns.Item].nunique()" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "(5476251, 4)\n" - ] - }, - { - "data": { - "text/html": [ - "
\n", - "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
user_iditem_iddatetimeweight
017654995062021-05-113
169931716592021-05-293
\n", - "
" - ], - "text/plain": [ - " user_id item_id datetime weight\n", - "0 176549 9506 2021-05-11 3\n", - "1 699317 1659 2021-05-29 3" - ] - }, - "execution_count": 6, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# Process interactions\n", - "interactions[Columns.Weight] = np.where(interactions['watched_pct'] > 10, 3, 1)\n", - "raw_interactions = interactions[[\"user_id\", \"item_id\", \"datetime\", \"weight\"]]\n", - "print(raw_interactions.shape)\n", - "raw_interactions.head(2)" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": {}, - "outputs": [], - "source": [ - "# Process item features\n", - "# items = items.loc[items[Columns.Item].isin(raw_interactions[Columns.Item])].copy()\n", - "# items[\"genre\"] = items[\"genres\"].str.lower().str.replace(\", \", \",\", regex=False).str.split(\",\")\n", - "# genre_feature = items[[\"item_id\", \"genre\"]].explode(\"genre\")\n", - "# genre_feature.columns = [\"id\", \"value\"]\n", - "# genre_feature[\"feature\"] = \"genre\"\n", - "# content_feature = items.reindex(columns=[Columns.Item, \"content_type\"])\n", - "# content_feature.columns = [\"id\", \"value\"]\n", - "# content_feature[\"feature\"] = \"content_type\"\n", - "# item_features = pd.concat((genre_feature, content_feature))" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "Seed set to 60\n" - ] - }, - { - "data": { - "text/plain": [ - "60" - ] - }, - "execution_count": 8, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "RANDOM_STATE=60\n", - "torch.use_deterministic_algorithms(True)\n", - "seed_everything(RANDOM_STATE, workers=True)" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "Dataset(user_id_map=IdMap(external_ids=array([176549, 699317, 656683, ..., 805174, 648596, 697262])), 
item_id_map=IdMap(external_ids=array([ 9506, 1659, 7107, ..., 10064, 13019, 10542])), interactions=Interactions(df= user_id item_id weight datetime\n", - "0 0 0 3.0 2021-05-11\n", - "1 1 1 3.0 2021-05-29\n", - "2 2 2 1.0 2021-05-09\n", - "3 3 3 3.0 2021-07-05\n", - "4 4 0 3.0 2021-04-30\n", - "... ... ... ... ...\n", - "5476246 962177 208 1.0 2021-08-13\n", - "5476247 224686 2690 3.0 2021-04-13\n", - "5476248 962178 21 3.0 2021-08-20\n", - "5476249 7934 1725 3.0 2021-04-19\n", - "5476250 631989 157 3.0 2021-08-15\n", - "\n", - "[5476251 rows x 4 columns]), user_features=None, item_features=None)" - ] - }, - "execution_count": 9, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "dataset_no_features = Dataset.construct(raw_interactions)\n", - "dataset_no_features" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# **Custome Validation** (Leave-One-Out Strategy)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "**Functionality for obtaining logged metrics after fitting model:**" - ] - }, - { - "cell_type": "code", - "execution_count": 13, - "metadata": {}, - "outputs": [], - "source": [ - "def get_log_dir(model: TransformerModelBase) -> Path:\n", - " \"\"\"\n", - " Get logging directory.\n", - " \"\"\"\n", - " path = model.fit_trainer.log_dir\n", - " return Path(path) / \"metrics.csv\"\n", - "\n", - "\n", - "def get_losses(epoch_metrics_df: pd.DataFrame, is_val: bool) -> pd.DataFrame:\n", - " loss_df = epoch_metrics_df[[\"epoch\", \"train/loss\"]].dropna()\n", - " if is_val:\n", - " val_loss_df = epoch_metrics_df[[\"epoch\", \"val/loss\"]].dropna()\n", - " loss_df = pd.merge(loss_df, val_loss_df, how=\"inner\", on=\"epoch\")\n", - " return loss_df.reset_index(drop=True)\n", - "\n", - "\n", - "def get_val_metrics(epoch_metrics_df: pd.DataFrame) -> pd.DataFrame:\n", - " metrics_df = epoch_metrics_df.drop(columns=[\"train/loss\", \"val/loss\"]).dropna()\n", - " return 
metrics_df.reset_index(drop=True)\n", - "\n", - "\n", - "def get_log_values(model: TransformerModelBase, is_val: bool = False) -> tp.Tuple[pd.DataFrame, tp.Optional[pd.DataFrame]]:\n", - " log_path = get_log_dir(model)\n", - " epoch_metrics_df = pd.read_csv(log_path)\n", - "\n", - " loss_df = get_losses(epoch_metrics_df, is_val)\n", - " val_metrics = None\n", - " if is_val:\n", - " val_metrics = get_val_metrics(epoch_metrics_df)\n", - " return loss_df, val_metrics" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "**Callback for calculation RecSys metrics on validation step:**" - ] - }, - { - "cell_type": "code", - "execution_count": 14, - "metadata": {}, - "outputs": [], - "source": [ - "from pytorch_lightning import LightningModule\n", - "from pytorch_lightning.callbacks import Callback\n", - "\n", - "\n", - "class ValidationMetrics(Callback):\n", - " \n", - " def __init__(self, top_k_saved_val_reco: int, val_metrics: tp.Dict, verbose: int = 0) -> None:\n", - " self.top_k_saved_val_reco = top_k_saved_val_reco\n", - " self.val_metrics = val_metrics\n", - " self.verbose = verbose\n", - "\n", - " self.epoch_n_users: int = 0\n", - " self.batch_metrics: tp.List[tp.Dict[str, float]] = []\n", - "\n", - " def on_validation_batch_end(\n", - " self, \n", - " trainer: Trainer, \n", - " pl_module: LightningModule, \n", - " outputs: tp.Dict[str, torch.Tensor], \n", - " batch: tp.Dict[str, torch.Tensor], \n", - " batch_idx: int, \n", - " dataloader_idx: int = 0\n", - " ) -> None:\n", - " logits = outputs[\"logits\"]\n", - " if logits is None:\n", - " logits = pl_module.torch_model.encode_sessions(batch[\"x\"], pl_module.item_embs)[:, -1, :]\n", - " _, sorted_batch_recos = logits.topk(k=self.top_k_saved_val_reco)\n", - "\n", - " batch_recos = sorted_batch_recos.tolist()\n", - " targets = batch[\"y\"].tolist()\n", - "\n", - " batch_val_users = list(\n", - " itertools.chain.from_iterable(\n", - " itertools.repeat(idx, len(recos)) for idx, recos in 
enumerate(batch_recos)\n", - " )\n", - " )\n", - "\n", - " batch_target_users = list(\n", - " itertools.chain.from_iterable(\n", - " itertools.repeat(idx, len(targets)) for idx, targets in enumerate(targets)\n", - " )\n", - " )\n", - "\n", - " batch_recos_df = pd.DataFrame(\n", - " {\n", - " Columns.User: batch_val_users,\n", - " Columns.Item: list(itertools.chain.from_iterable(batch_recos)),\n", - " }\n", - " )\n", - " batch_recos_df[Columns.Rank] = batch_recos_df.groupby(Columns.User, sort=False).cumcount() + 1\n", - "\n", - " interactions = pd.DataFrame(\n", - " {\n", - " Columns.User: batch_target_users,\n", - " Columns.Item: list(itertools.chain.from_iterable(targets)),\n", - " }\n", - " )\n", - "\n", - " prev_interactions = pl_module.data_preparator.train_dataset.interactions.df\n", - " catalog = prev_interactions[Columns.Item].unique()\n", - "\n", - " batch_metrics = calc_metrics(\n", - " self.val_metrics, \n", - " batch_recos_df,\n", - " interactions, \n", - " prev_interactions,\n", - " catalog\n", - " )\n", - "\n", - " batch_n_users = batch[\"x\"].shape[0]\n", - " self.batch_metrics.append({metric: value * batch_n_users for metric, value in batch_metrics.items()})\n", - " self.epoch_n_users += batch_n_users\n", - "\n", - " def on_validation_epoch_end(self, trainer: Trainer, pl_module: LightningModule) -> None:\n", - " epoch_metrics = dict(sum(map(Counter, self.batch_metrics), Counter()))\n", - " epoch_metrics = {metric: value / self.epoch_n_users for metric, value in epoch_metrics.items()}\n", - "\n", - " self.log_dict(epoch_metrics, on_step=False, on_epoch=True, prog_bar=self.verbose > 0)\n", - "\n", - " self.batch_metrics.clear()\n", - " self.epoch_n_users = 0" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "**Set up hyperparameters**" - ] - }, - { - "cell_type": "code", - "execution_count": 15, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "((962179,), (2048,))" - ] - }, - "execution_count": 15, - 
"metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "VAL_K_OUT = 1\n", - "N_VAL_USERS = 2048\n", - "\n", - "unique_users = raw_interactions[Columns.User].unique()\n", - "VAL_USERS = unique_users[: N_VAL_USERS]\n", - "\n", - "VAL_METRICS = {\n", - " \"NDCG@10\": NDCG(k=10),\n", - " \"Recall@10\": Recall(k=10),\n", - " \"Serendipity@10\": Serendipity(k=10),\n", - "}\n", - "VAL_MAX_K = max([metric.k for metric in VAL_METRICS.values()])\n", - "\n", - "MIN_EPOCHS = 2\n", - "MAX_EPOCHS = 10\n", - "\n", - "MONITOR_METRIC = \"NDCG@10\"\n", - "MODE_MONITOR_METRIC = \"max\"\n", - "\n", - "callback_metrics = ValidationMetrics(top_k_saved_val_reco=VAL_MAX_K, val_metrics=VAL_METRICS, verbose=1)\n", - "callback_early_stopping = EarlyStopping(monitor=MONITOR_METRIC, patience=MIN_EPOCHS, min_delta=0.0, mode=MODE_MONITOR_METRIC)\n", - "CALLBACKS = [callback_metrics, callback_early_stopping]\n", - "\n", - "TRAIN_MIN_USER_INTERACTIONS = 5\n", - "SESSION_MAX_LEN = 50\n", - "\n", - "unique_users.shape, VAL_USERS.shape" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "**Custom function for splitting data into train and validation:**" - ] - }, - { - "cell_type": "code", - "execution_count": 16, - "metadata": {}, - "outputs": [], - "source": [ - "def get_val_mask(interactions: pd.DataFrame, val_users: ExternalIds) -> pd.Series:\n", - " rank = (\n", - " interactions\n", - " .sort_values(Columns.Datetime, ascending=False, kind=\"stable\")\n", - " .groupby(Columns.User, sort=False)\n", - " .cumcount()\n", - " + 1\n", - " )\n", - " val_mask = (\n", - " (interactions[Columns.User].isin(val_users))\n", - " & (rank <= VAL_K_OUT)\n", - " )\n", - " return val_mask\n", - "\n", - "\n", - "GET_VAL_MASK = partial(\n", - " get_val_mask, \n", - " val_users=VAL_USERS,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# SASRec" - ] - }, - { - "cell_type": "code", - "execution_count": 17, - "metadata": {}, - "outputs": [ - { -
"name": "stderr", - "output_type": "stream", - "text": [ - "GPU available: True (cuda), used: True\n", - "TPU available: False, using: 0 TPU cores\n", - "IPU available: False, using: 0 IPUs\n", - "HPU available: False, using: 0 HPUs\n" - ] - } - ], - "source": [ - "sasrec_trainer = Trainer(\n", - " accelerator='gpu',\n", - " devices=[0],\n", - " min_epochs=MIN_EPOCHS,\n", - " max_epochs=MAX_EPOCHS, \n", - " deterministic=True,\n", - " callbacks=CALLBACKS,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 18, - "metadata": {}, - "outputs": [], - "source": [ - "sasrec_non_default_model = SASRecModel(\n", - " n_factors=64,\n", - " n_blocks=2,\n", - " n_heads=2,\n", - " dropout_rate=0.2,\n", - " use_pos_emb=True,\n", - " train_min_user_interactions=TRAIN_MIN_USER_INTERACTIONS,\n", - " session_max_len=SESSION_MAX_LEN,\n", - " lr=1e-3,\n", - " batch_size=128,\n", - " loss=\"softmax\",\n", - " verbose=1,\n", - " deterministic=True,\n", - " item_net_block_types=(IdEmbeddingsItemNet, ), # Use only item ids in ItemNetBlock\n", - " trainer=sasrec_trainer,\n", - " get_val_mask_func=GET_VAL_MASK,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 19, - "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n", - "\n", - " | Name | Type | Params\n", - "---------------------------------------------------------------\n", - "0 | torch_model | TransformerBasedSessionEncoder | 987 K \n", - "---------------------------------------------------------------\n", - "987 K Trainable params\n", - "0 Non-trainable params\n", - "987 K Total params\n", - "3.951 Total estimated model params size (MB)\n" - ] - }, - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "63a9b4c625d24a3aa0ac3d333e71f60a", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - "Sanity Checking: | | 0/? 
[00:00" - ] - }, - "execution_count": 19, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "%%time\n", - "sasrec_non_default_model.fit(dataset_no_features)" - ] - }, - { - "cell_type": "code", - "execution_count": 20, - "metadata": {}, - "outputs": [], - "source": [ - "loss_df, val_metrics_df = get_log_values(sasrec_non_default_model, is_val=True)" - ] - }, - { - "cell_type": "code", - "execution_count": 21, - "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
\n", - "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
epochtrain/lossval/loss
0016.39010215.514286
1115.72271315.147015
2215.56014315.003609
3315.49332514.918410
4415.45073614.874678
5515.42185414.841123
6615.40524214.814446
7715.39031814.782287
8815.37459114.762179
9915.36714814.763201
\n", - "
" - ], - "text/plain": [ - " epoch train/loss val/loss\n", - "0 0 16.390102 15.514286\n", - "1 1 15.722713 15.147015\n", - "2 2 15.560143 15.003609\n", - "3 3 15.493325 14.918410\n", - "4 4 15.450736 14.874678\n", - "5 5 15.421854 14.841123\n", - "6 6 15.405242 14.814446\n", - "7 7 15.390318 14.782287\n", - "8 8 15.374591 14.762179\n", - "9 9 15.367148 14.763201" - ] - }, - "execution_count": 21, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "loss_df" - ] - }, - { - "cell_type": "code", - "execution_count": 22, - "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
\n", - "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
NDCG@10Recall@10Serendipity@10epochstep
00.0216770.1755420.00004302362
10.0234000.1886920.00007814725
20.0245690.1965810.00010327088
30.0248600.1978960.00010039451
40.0261000.2077580.000121411814
50.0262550.2064430.000139514177
60.0265670.2071010.000131616540
70.0266940.2031560.000130718903
80.0273460.2057860.000147821266
90.0269590.2051280.000139923629
\n", - "
" - ], - "text/plain": [ - " NDCG@10 Recall@10 Serendipity@10 epoch step\n", - "0 0.021677 0.175542 0.000043 0 2362\n", - "1 0.023400 0.188692 0.000078 1 4725\n", - "2 0.024569 0.196581 0.000103 2 7088\n", - "3 0.024860 0.197896 0.000100 3 9451\n", - "4 0.026100 0.207758 0.000121 4 11814\n", - "5 0.026255 0.206443 0.000139 5 14177\n", - "6 0.026567 0.207101 0.000131 6 16540\n", - "7 0.026694 0.203156 0.000130 7 18903\n", - "8 0.027346 0.205786 0.000147 8 21266\n", - "9 0.026959 0.205128 0.000139 9 23629" - ] - }, - "execution_count": 22, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "val_metrics_df" - ] - }, - { - "cell_type": "code", - "execution_count": 23, - "metadata": {}, - "outputs": [], - "source": [ - "del sasrec_non_default_model\n", - "torch.cuda.empty_cache()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# BERT4Rec" - ] - }, - { - "cell_type": "code", - "execution_count": 24, - "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "Trainer already configured with model summary callbacks: []. 
Skipping setting a default `ModelSummary` callback.\n", - "GPU available: True (cuda), used: True\n", - "TPU available: False, using: 0 TPU cores\n", - "IPU available: False, using: 0 IPUs\n", - "HPU available: False, using: 0 HPUs\n" - ] - } - ], - "source": [ - "bert_trainer = Trainer(\n", - " accelerator='gpu',\n", - " devices=[1],\n", - " min_epochs=MIN_EPOCHS,\n", - " max_epochs=MAX_EPOCHS, \n", - " deterministic=True,\n", - " callbacks=CALLBACKS,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 25, - "metadata": {}, - "outputs": [], - "source": [ - "bert4rec_id_softmax_model = BERT4RecModel(\n", - " mask_prob=0.5,\n", - " deterministic=True,\n", - " item_net_block_types=(IdEmbeddingsItemNet, ),\n", - " trainer=bert_trainer,\n", - " get_val_mask_func=GET_VAL_MASK,\n", - " verbose=1,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 26, - "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n", - "\n", - " | Name | Type | Params\n", - "---------------------------------------------------------------\n", - "0 | torch_model | TransformerBasedSessionEncoder | 2.1 M \n", - "---------------------------------------------------------------\n", - "2.1 M Trainable params\n", - "0 Non-trainable params\n", - "2.1 M Total params\n", - "8.202 Total estimated model params size (MB)\n" - ] - }, - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "e8d7827a3b1f4c3cb5f267f7b0dabee1", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - "Sanity Checking: | | 0/? 
[00:00" - ] - }, - "execution_count": 26, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "%%time\n", - "bert4rec_id_softmax_model.fit(dataset_no_features)" - ] - }, - { - "cell_type": "code", - "execution_count": 27, - "metadata": {}, - "outputs": [], - "source": [ - "loss_df, val_metrics_df = get_log_values(bert4rec_id_softmax_model, is_val=True)" - ] - }, - { - "cell_type": "code", - "execution_count": 28, - "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
\n", - "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
epochtrain/lossval/loss
0016.92602716.136480
1117.87234316.835089
2218.27878416.187969
3318.39853716.172079
\n", - "
" - ], - "text/plain": [ - " epoch train/loss val/loss\n", - "0 0 16.926027 16.136480\n", - "1 1 17.872343 16.835089\n", - "2 2 18.278784 16.187969\n", - "3 3 18.398537 16.172079" - ] - }, - "execution_count": 28, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "loss_df" - ] - }, - { - "cell_type": "code", - "execution_count": 29, - "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
\n", - "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
NDCG@10Recall@10Serendipity@10epochstep
00.0217230.1720370.00001404741
10.0223320.1774990.00001019483
20.0214320.1693060.000011214225
30.0215720.1709450.000021318967
\n", - "
" - ], - "text/plain": [ - " NDCG@10 Recall@10 Serendipity@10 epoch step\n", - "0 0.021723 0.172037 0.000014 0 4741\n", - "1 0.022332 0.177499 0.000010 1 9483\n", - "2 0.021432 0.169306 0.000011 2 14225\n", - "3 0.021572 0.170945 0.000021 3 18967" - ] - }, - "execution_count": 29, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "val_metrics_df" - ] - }, - { - "cell_type": "code", - "execution_count": 30, - "metadata": {}, - "outputs": [], - "source": [ - "del bert4rec_id_softmax_model\n", - "torch.cuda.empty_cache()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": ".venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.9.12" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/rectools/compat.py b/rectools/compat.py index c983fbc2..2c4496dc 100644 --- a/rectools/compat.py +++ b/rectools/compat.py @@ -1,4 +1,4 @@ -# Copyright 2022-2024 MTS (Mobile Telesystems) +# Copyright 2022-2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/rectools/dataset/dataset.py b/rectools/dataset/dataset.py index 0936656c..afdb8a67 100644 --- a/rectools/dataset/dataset.py +++ b/rectools/dataset/dataset.py @@ -1,4 +1,4 @@ -# Copyright 2022-2024 MTS (Mobile Telesystems) +# Copyright 2022-2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -15,18 +15,93 @@ """Dataset - all data container.""" import typing as tp +from collections.abc import Hashable import attr import numpy as np import pandas as pd +import typing_extensions as tpe +from pydantic import PlainSerializer from scipy import sparse from rectools import Columns +from rectools.utils.config import BaseConfig -from .features import AbsentIdError, DenseFeatures, Features, SparseFeatures +from .features import AbsentIdError, DenseFeatures, Features, SparseFeatureName, SparseFeatures from .identifiers import IdMap from .interactions import Interactions +AnyFeatureName = tp.Union[str, SparseFeatureName] + + +def _serialize_feature_name(spec: tp.Any) -> Hashable: + type_error = TypeError( + f""" + Serialization for feature name '{spec}' is not supported. + Please convert your feature names and category feature values to strings, numbers, booleans + or their tuples. + """ + ) + if isinstance(spec, (list, np.ndarray)): + raise type_error + if isinstance(spec, tuple): + return tuple(_serialize_feature_name(item) for item in spec) + if isinstance(spec, (int, float, str, bool)): + return spec + if np.issubdtype(spec, np.number) or np.issubdtype(spec, np.bool_): # str is handled by isinstance(spec, str) + return spec.item() + raise type_error + + +FeatureName = tpe.Annotated[AnyFeatureName, PlainSerializer(_serialize_feature_name, when_used="json")] +DatasetSchemaDict = tp.Dict[str, tp.Any] + + +class BaseFeaturesSchema(BaseConfig): + """Features schema.""" + + names: tp.Tuple[FeatureName, ...] 
+ + +class DenseFeaturesSchema(BaseFeaturesSchema): + """Dense features schema.""" + + kind: tp.Literal["dense"] = "dense" + + +class SparseFeaturesSchema(BaseFeaturesSchema): + """Sparse features schema.""" + + kind: tp.Literal["sparse"] = "sparse" + cat_feature_indices: tp.List[int] + cat_n_stored_values: int + + +FeaturesSchema = tp.Union[DenseFeaturesSchema, SparseFeaturesSchema] + + +class IdMapSchema(BaseConfig): + """IdMap schema.""" + + size: int + dtype: str + + +class EntitySchema(BaseConfig): + """Entity schema.""" + + n_hot: int + id_map: IdMapSchema + features: tp.Optional[FeaturesSchema] = None + + +class DatasetSchema(BaseConfig): + """Dataset schema.""" + + n_interactions: int + users: EntitySchema + items: EntitySchema + @attr.s(slots=True, frozen=True) class Dataset: @@ -60,6 +135,43 @@ class Dataset: user_features: tp.Optional[Features] = attr.ib(default=None) item_features: tp.Optional[Features] = attr.ib(default=None) + @staticmethod + def _get_feature_schema(features: tp.Optional[Features]) -> tp.Optional[FeaturesSchema]: + if features is None: + return None + if isinstance(features, SparseFeatures): + return SparseFeaturesSchema( + names=features.names, + cat_feature_indices=features.cat_feature_indices.tolist(), + cat_n_stored_values=features.get_cat_features().values.nnz, + ) + return DenseFeaturesSchema( + names=features.names, + ) + + @staticmethod + def _get_id_map_schema(id_map: IdMap) -> IdMapSchema: + return IdMapSchema(size=id_map.size, dtype=id_map.external_dtype.str) + + def get_schema(self) -> DatasetSchemaDict: + """Get dataset schema in a dict form that contains all the information about the dataset and its statistics.""" + user_schema = EntitySchema( + n_hot=self.n_hot_users, + id_map=self._get_id_map_schema(self.user_id_map), + features=self._get_feature_schema(self.user_features), + ) + item_schema = EntitySchema( + n_hot=self.n_hot_items, + id_map=self._get_id_map_schema(self.item_id_map), + 
features=self._get_feature_schema(self.item_features), + ) + schema = DatasetSchema( + n_interactions=self.interactions.df.shape[0], + users=user_schema, + items=item_schema, + ) + return schema.model_dump(mode="json") + @property def n_hot_users(self) -> int: """ diff --git a/rectools/dataset/features.py b/rectools/dataset/features.py index de51162d..d98b4aa9 100644 --- a/rectools/dataset/features.py +++ b/rectools/dataset/features.py @@ -1,4 +1,4 @@ -# Copyright 2022-2024 MTS (Mobile Telesystems) +# Copyright 2022-2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -450,16 +450,21 @@ def __len__(self) -> int: """Return number of objects.""" return self.values.shape[0] + @property + def cat_col_mask(self) -> np.ndarray: + """Mask that identifies category columns in feature values sparse matrix.""" + return np.array([feature_name[1] != DIRECT_FEATURE_VALUE for feature_name in self.names]) + + @property + def cat_feature_indices(self) -> np.ndarray: + """Category columns indices in feature values sparse matrix.""" + return np.arange(len(self.names))[self.cat_col_mask] + def get_cat_features(self) -> "SparseFeatures": """Return `SparseFeatures` only with categorical features.""" - cat_feature_ids: tp.List[int] = [] - for idx, (_, value) in enumerate(self.names): - if value != DIRECT_FEATURE_VALUE: - cat_feature_ids.append(idx) - return SparseFeatures( - values=self.values[:, cat_feature_ids], - names=tuple(map(self.names.__getitem__, cat_feature_ids)), + values=self.values[:, self.cat_feature_indices], + names=tuple(map(self.names.__getitem__, self.cat_feature_indices)), ) diff --git a/rectools/models/nn/__init__.py b/rectools/models/nn/__init__.py index 2f292dc9..d226c38e 100644 --- a/rectools/models/nn/__init__.py +++ b/rectools/models/nn/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/rectools/models/nn/bert4rec.py b/rectools/models/nn/bert4rec.py index c4f98d6d..1cdcd912 100644 --- a/rectools/models/nn/bert4rec.py +++ b/rectools/models/nn/bert4rec.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -18,18 +18,19 @@ import numpy as np import torch -from pytorch_lightning import Trainer +from .constants import MASKING_VALUE, PADDING_VALUE from .item_net import CatFeaturesItemNet, IdEmbeddingsItemNet, ItemNetBase from .transformer_base import ( - PADDING_VALUE, - SessionEncoderDataPreparatorType, - SessionEncoderLightningModule, - SessionEncoderLightningModuleBase, + TrainerCallable, + TransformerDataPreparatorType, + TransformerLightningModule, + TransformerLightningModuleBase, TransformerModelBase, TransformerModelConfig, + ValMaskCallable, ) -from .transformer_data_preparator import SessionEncoderDataPreparatorBase +from .transformer_data_preparator import TransformerDataPreparatorBase from .transformer_net_blocks import ( LearnableInversePositionalEncoding, PositionalEncodingBase, @@ -37,14 +38,14 @@ TransformerLayersBase, ) -MASKING_VALUE = "MASK" - -class BERT4RecDataPreparator(SessionEncoderDataPreparatorBase): +class BERT4RecDataPreparator(TransformerDataPreparatorBase): """Data Preparator for BERT4RecModel.""" train_session_max_len_addition: int = 0 + item_extra_tokens: tp.Sequence[Hashable] = (PADDING_VALUE, MASKING_VALUE) + def __init__( self, session_max_len: int, @@ -53,9 +54,8 @@ def __init__( dataloader_num_workers: int, train_min_user_interactions: int, mask_prob: float, - item_extra_tokens: tp.Sequence[Hashable], shuffle_train: bool = True, - get_val_mask_func: tp.Optional[tp.Callable] = None, + 
get_val_mask_func: tp.Optional[ValMaskCallable] = None, ) -> None: super().__init__( session_max_len=session_max_len, @@ -63,7 +63,6 @@ def __init__( batch_size=batch_size, dataloader_num_workers=dataloader_num_workers, train_min_user_interactions=train_min_user_interactions, - item_extra_tokens=item_extra_tokens, shuffle_train=shuffle_train, get_val_mask_func=get_val_mask_func, ) @@ -160,57 +159,98 @@ def _collate_fn_recommend(self, batch: List[Tuple[List[int], List[float]]]) -> D class BERT4RecModelConfig(TransformerModelConfig): """BERT4RecModel config.""" - data_preparator_type: SessionEncoderDataPreparatorType = BERT4RecDataPreparator + data_preparator_type: TransformerDataPreparatorType = BERT4RecDataPreparator use_key_padding_mask: bool = True mask_prob: float = 0.15 class BERT4RecModel(TransformerModelBase[BERT4RecModelConfig]): """ - BERT4Rec model. + BERT4Rec model: transformer-based sequential model with bidirectional attention mechanism and + "MLM" (masked item in user sequence) training objective. + Our implementation covers multiple loss functions and a variable number of negatives for them. + + References + ---------- + Transformers tutorial: https://rectools.readthedocs.io/en/stable/examples/tutorials/transformers_tutorial.html + Advanced training guide: + https://rectools.readthedocs.io/en/stable/examples/tutorials/transformers_advanced_training_guide.html + Public benchmark: https://github.com/blondered/bert4rec_repro + Original BERT4Rec paper: https://arxiv.org/abs/1904.06690 + gBCE loss paper: https://arxiv.org/pdf/2308.07192 - n_blocks : int, default 1 + Parameters + ---------- + n_blocks : int, default 2 Number of transformer blocks. - n_heads : int, default 1 + n_heads : int, default 4 Number of attention heads. - n_factors : int, default 128 + n_factors : int, default 256 Latent embeddings size. - use_pos_emb : bool, default ``True`` - If ``True``, learnable positional encoding will be added to session item embeddings. 
- use_causal_attn : bool, default ``False`` - If ``True``, causal mask will be added as attn_mask in Multi-head Attention. Please note that default - BERT4Rec training task (MLM) does not match well with causal masking. Set this parameter to - ``True`` only when you change the training task with custom `data_preparator_type` or if you - are absolutely sure of what you are doing. - use_key_padding_mask : bool, default ``False`` - If ``True``, key_padding_mask will be added in Multi-head Attention. dropout_rate : float, default 0.2 Probability of a hidden unit to be zeroed. - session_max_len : int, default 32 - Maximum length of user sequence that model will accept during inference. - train_min_user_interactions : int, default 2 - Minimum number of interactions user should have to be used for training. Should be greater than 1. mask_prob : float, default 0.15 Probability of masking an item in interactions sequence. - dataloader_num_workers : int, default 0 - Number of loader worker processes. - batch_size : int, default 128 - How many samples per batch to load. + session_max_len : int, default 100 + Maximum length of user sequence. + train_min_user_interactions : int, default 2 + Minimum number of interactions user should have to be used for training. Should be greater + than 1. loss : {"softmax", "BCE", "gBCE"}, default "softmax" Loss function. n_negatives : int, default 1 Number of negatives for BCE and gBCE losses. gbce_t : float, default 0.2 Calibration parameter for gBCE loss. - lr : float, default 0.01 + lr : float, default 0.001 Learning rate. + batch_size : int, default 128 + How many samples per batch to load. epochs : int, default 3 - Number of training epochs. + Exact number of training epochs. + Will be omitted if `get_trainer_func` is specified. + deterministic : bool, default ``False`` + `deterministic` flag passed to lightning trainer during initialization. + Use `pytorch_lightning.seed_everything` together with this parameter to fix the random seed. 
+ Will be omitted if `get_trainer_func` is specified. verbose : int, default 0 Verbosity level. - deterministic : bool, default ``False`` - If ``True``, set deterministic algorithms for PyTorch operations. - Use `pytorch_lightning.seed_everything` together with this parameter to fix the random state. + Enables progress bar, model summary and logging in default lightning trainer when set to a + positive integer. + Will be omitted if `get_trainer_func` is specified. + dataloader_num_workers : int, default 0 + Number of loader worker processes. + use_pos_emb : bool, default ``True`` + If ``True``, learnable positional encoding will be added to session item embeddings. + use_key_padding_mask : bool, default ``True`` + If ``True``, key_padding_mask will be added in Multi-head Attention. + use_causal_attn : bool, default ``False`` + If ``True``, causal mask will be added as attn_mask in Multi-head Attention. Please note that default + BERT4Rec training task ("MLM") does not work with causal masking. Set this + parameter to ``True`` only when you change the training task with custom + `data_preparator_type` or if you are absolutely sure of what you are doing. + item_net_block_types : sequence of `type(ItemNetBase)`, default `(IdEmbeddingsItemNet, CatFeaturesItemNet)` + Type of network returning item embeddings. + (IdEmbeddingsItemNet,) - item embeddings based on ids. + (CatFeaturesItemNet,) - item embeddings based on categorical features. + (IdEmbeddingsItemNet, CatFeaturesItemNet) - item embeddings based on ids and categorical features. + pos_encoding_type : type(PositionalEncodingBase), default `LearnableInversePositionalEncoding` + Type of positional encoding. + transformer_layers_type : type(TransformerLayersBase), default `PreLNTransformerLayers` + Type of transformer layers architecture. + data_preparator_type : type(TransformerDataPreparatorBase), default `BERT4RecDataPreparator` + Type of data preparator used for dataset processing and dataloader creation. 
+ lightning_module_type : type(TransformerLightningModuleBase), default `TransformerLightningModule` + Type of lightning module defining training procedure. + get_val_mask_func : Callable, default ``None`` + Function to get validation mask. + get_trainer_func : Callable, default ``None`` + Function to get a custom lightning trainer. + If `get_trainer_func` is None, default trainer will be created based on `epochs`, + `deterministic` and `verbose` argument values. Model will be trained for the exact number of + epochs. Checkpointing will be disabled. + If you want to assign custom trainer after model is initialized, you can manually assign new + value to model `_trainer` attribute. recommend_batch_size : int, default 256 How many samples per batch to load during `recommend`. If you want to change this parameter after model is initialized, @@ -226,6 +266,7 @@ class BERT4RecModel(TransformerModelBase[BERT4RecModelConfig]): Used at predict_step of lightning module. Multi-device recommendations are not supported. If you want to change this parameter after model is initialized, + you can manually assign new value to model `recommend_device` attribute. recommend_n_threads : int, default 0 Number of threads to use in ranker if GPU ranking is turned off or unavailable. If you want to change this parameter after model is initialized, @@ -234,24 +275,6 @@ If ``True`` and HAS_CUDA ``True``, set use_gpu=True in ImplicitRanker.rank. If you want to change this parameter after model is initialized, you can manually assign new value to model `recommend_use_gpu_ranking` attribute. - trainer : Trainer, optional, default ``None`` - Which trainer to use for training. - If trainer is None, default pytorch_lightning Trainer is created. - item_net_block_types : sequence of `type(ItemNetBase)`, default `(IdEmbeddingsItemNet, CatFeaturesItemNet)` - Type of network returning item embeddings.
- (IdEmbeddingsItemNet,) - item embeddings based on ids. - (CatFeaturesItemNet,) - item embeddings based on categorical features. - (IdEmbeddingsItemNet, CatFeaturesItemNet) - item embeddings based on ids and categorical features. - pos_encoding_type : type(PositionalEncodingBase), default `LearnableInversePositionalEncoding` - Type of positional encoding. - transformer_layers_type : type(TransformerLayersBase), default `PreLNTransformerLayers` - Type of transformer layers architecture. - data_preparator_type : type(SessionEncoderDataPreparatorBase), default `BERT4RecDataPreparator` - Type of data preparator used for dataset processing and dataloader creation. - lightning_module_type : type(SessionEncoderLightningModuleBase), default `SessionEncoderLightningModule` - Type of lightning module defining training procedure. - get_val_mask_func : Callable, default None - Function to get validation mask. """ config_class = BERT4RecModelConfig @@ -261,34 +284,34 @@ def __init__( # pylint: disable=too-many-arguments, too-many-locals n_blocks: int = 2, n_heads: int = 4, n_factors: int = 256, - use_pos_emb: bool = True, - use_causal_attn: bool = False, - use_key_padding_mask: bool = True, dropout_rate: float = 0.2, - epochs: int = 3, - verbose: int = 0, - deterministic: bool = False, - recommend_batch_size: int = 256, - recommend_accelerator: str = "auto", - recommend_devices: tp.Union[int, tp.List[int]] = 1, - recommend_n_threads: int = 0, - recommend_use_gpu_ranking: bool = True, + mask_prob: float = 0.15, session_max_len: int = 100, - n_negatives: int = 1, - batch_size: int = 128, + train_min_user_interactions: int = 2, loss: str = "softmax", + n_negatives: int = 1, gbce_t: float = 0.2, lr: float = 0.001, + batch_size: int = 128, + epochs: int = 3, + deterministic: bool = False, + verbose: int = 0, dataloader_num_workers: int = 0, - train_min_user_interactions: int = 2, - mask_prob: float = 0.15, - trainer: tp.Optional[Trainer] = None, + use_pos_emb: bool = True, + 
use_key_padding_mask: bool = True, + use_causal_attn: bool = False, item_net_block_types: tp.Sequence[tp.Type[ItemNetBase]] = (IdEmbeddingsItemNet, CatFeaturesItemNet), pos_encoding_type: tp.Type[PositionalEncodingBase] = LearnableInversePositionalEncoding, transformer_layers_type: tp.Type[TransformerLayersBase] = PreLNTransformerLayers, - data_preparator_type: tp.Type[SessionEncoderDataPreparatorBase] = BERT4RecDataPreparator, - lightning_module_type: tp.Type[SessionEncoderLightningModuleBase] = SessionEncoderLightningModule, - get_val_mask_func: tp.Optional[tp.Callable] = None, + data_preparator_type: tp.Type[TransformerDataPreparatorBase] = BERT4RecDataPreparator, + lightning_module_type: tp.Type[TransformerLightningModuleBase] = TransformerLightningModule, + get_val_mask_func: tp.Optional[ValMaskCallable] = None, + get_trainer_func: tp.Optional[TrainerCallable] = None, + recommend_batch_size: int = 256, + recommend_accelerator: str = "auto", + recommend_devices: tp.Union[int, tp.List[int]] = 1, + recommend_n_threads: int = 0, + recommend_use_gpu_ranking: bool = True, ): self.mask_prob = mask_prob @@ -318,21 +341,20 @@ def __init__( # pylint: disable=too-many-arguments, too-many-locals recommend_n_threads=recommend_n_threads, recommend_use_gpu_ranking=recommend_use_gpu_ranking, train_min_user_interactions=train_min_user_interactions, - trainer=trainer, item_net_block_types=item_net_block_types, pos_encoding_type=pos_encoding_type, lightning_module_type=lightning_module_type, get_val_mask_func=get_val_mask_func, + get_trainer_func=get_trainer_func, ) def _init_data_preparator(self) -> None: - self.data_preparator: SessionEncoderDataPreparatorBase = self.data_preparator_type( + self.data_preparator: TransformerDataPreparatorBase = self.data_preparator_type( session_max_len=self.session_max_len, n_negatives=self.n_negatives if self.loss != "softmax" else None, batch_size=self.batch_size, dataloader_num_workers=self.dataloader_num_workers, 
train_min_user_interactions=self.train_min_user_interactions, - item_extra_tokens=(PADDING_VALUE, MASKING_VALUE), mask_prob=self.mask_prob, get_val_mask_func=self.get_val_mask_func, ) diff --git a/rectools/models/nn/constants.py b/rectools/models/nn/constants.py new file mode 100644 index 00000000..fafb8da9 --- /dev/null +++ b/rectools/models/nn/constants.py @@ -0,0 +1,16 @@ +# Copyright 2025 MTS (Mobile Telesystems) +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +PADDING_VALUE = "PAD" +MASKING_VALUE = "MASK" diff --git a/rectools/models/nn/item_net.py b/rectools/models/nn/item_net.py index 1c9c9ee8..d2f88146 100644 --- a/rectools/models/nn/item_net.py +++ b/rectools/models/nn/item_net.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -19,7 +19,7 @@ import typing_extensions as tpe from torch import nn -from rectools.dataset import Dataset +from rectools.dataset.dataset import Dataset, DatasetSchema from rectools.dataset.features import SparseFeatures @@ -35,6 +35,11 @@ def from_dataset(cls, dataset: Dataset, *args: tp.Any, **kwargs: tp.Any) -> tp.O """Construct ItemNet from Dataset.""" raise NotImplementedError() + @classmethod + def from_dataset_schema(cls, dataset_schema: DatasetSchema, *args: tp.Any, **kwargs: tp.Any) -> tpe.Self: + """Construct ItemNet from Dataset schema.""" + raise NotImplementedError() + def get_all_embeddings(self) -> torch.Tensor: """Return item embeddings.""" raise NotImplementedError() @@ -219,6 +224,12 @@ def from_dataset(cls, dataset: Dataset, n_factors: int, dropout_rate: float) -> n_items = dataset.item_id_map.size return cls(n_factors, n_items, dropout_rate) + @classmethod + def from_dataset_schema(cls, dataset_schema: DatasetSchema, n_factors: int, dropout_rate: float) -> tpe.Self: + """Construct ItemNet from Dataset schema.""" + n_items = dataset_schema.items.n_hot + return cls(n_factors, n_items, dropout_rate) + class ItemNetConstructor(ItemNetBase): """ @@ -306,3 +317,22 @@ def from_dataset( item_net_blocks.append(item_net_block) return cls(n_items, item_net_blocks) + + @classmethod + def from_dataset_schema( + cls, + dataset_schema: DatasetSchema, + n_factors: int, + dropout_rate: float, + item_net_block_types: tp.Sequence[tp.Type[ItemNetBase]], + ) -> tpe.Self: + """Construct ItemNet from Dataset schema.""" + n_items = dataset_schema.items.n_hot + + item_net_blocks: tp.List[ItemNetBase] = [] + for item_net in item_net_block_types: + item_net_block = item_net.from_dataset_schema(dataset_schema, n_factors, dropout_rate) + if item_net_block is not None: + item_net_blocks.append(item_net_block) + + return cls(n_items, item_net_blocks) diff --git a/rectools/models/nn/sasrec.py b/rectools/models/nn/sasrec.py index 21f96c69..d010d21d 100644 --- 
a/rectools/models/nn/sasrec.py +++ b/rectools/models/nn/sasrec.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -17,20 +17,20 @@ import numpy as np import torch -from pytorch_lightning import Trainer from torch import nn from .item_net import CatFeaturesItemNet, IdEmbeddingsItemNet, ItemNetBase from .transformer_base import ( - PADDING_VALUE, - SessionEncoderDataPreparatorType, - SessionEncoderLightningModule, - SessionEncoderLightningModuleBase, + TrainerCallable, + TransformerDataPreparatorType, TransformerLayersType, + TransformerLightningModule, + TransformerLightningModuleBase, TransformerModelBase, TransformerModelConfig, + ValMaskCallable, ) -from .transformer_data_preparator import SessionEncoderDataPreparatorBase +from .transformer_data_preparator import TransformerDataPreparatorBase from .transformer_net_blocks import ( LearnableInversePositionalEncoding, PointWiseFeedForward, @@ -39,7 +39,7 @@ ) -class SASRecDataPreparator(SessionEncoderDataPreparatorBase): +class SASRecDataPreparator(TransformerDataPreparatorBase): """Data preparator for SASRecModel.""" train_session_max_len_addition: int = 1 @@ -75,8 +75,8 @@ def _collate_fn_train( def _collate_fn_val(self, batch: List[Tuple[List[int], List[float]]]) -> Dict[str, torch.Tensor]: batch_size = len(batch) x = np.zeros((batch_size, self.session_max_len)) - y = np.zeros((batch_size, 1)) # until only leave-one-strategy - yw = np.zeros((batch_size, 1)) # until only leave-one-strategy + y = np.zeros((batch_size, 1)) # Only leave-one-strategy is supported for losses + yw = np.zeros((batch_size, 1)) # Only leave-one-strategy is supported for losses for i, (ses, ses_weights) in enumerate(batch): input_session = [ses[idx] for idx, weight in enumerate(ses_weights) if weight == 0] @@ -190,55 +190,96 @@ def forward( class 
SASRecModelConfig(TransformerModelConfig): """SASRecModel config.""" - data_preparator_type: SessionEncoderDataPreparatorType = SASRecDataPreparator + data_preparator_type: TransformerDataPreparatorType = SASRecDataPreparator transformer_layers_type: TransformerLayersType = SASRecTransformerLayers use_causal_attn: bool = True class SASRecModel(TransformerModelBase[SASRecModelConfig]): """ - SASRec model. + SASRec model: transformer-based sequential model with unidirectional attention mechanism and + "Shifted Sequence" training objective. + Our implementation covers multiple loss functions and a variable number of negatives for them. - n_blocks : int, default 1 + References + ---------- + Transformers tutorial: https://rectools.readthedocs.io/en/stable/examples/tutorials/transformers_tutorial.html + Advanced training guide: + https://rectools.readthedocs.io/en/stable/examples/tutorials/transformers_advanced_training_guide.html + Public benchmark: https://github.com/blondered/bert4rec_repro + Original SASRec paper: https://arxiv.org/abs/1808.09781 + gBCE loss and gSASRec paper: https://arxiv.org/pdf/2308.07192 + + Parameters + ---------- + n_blocks : int, default 2 Number of transformer blocks. - n_heads : int, default 1 + n_heads : int, default 4 Number of attention heads. - n_factors : int, default 128 + n_factors : int, default 256 Latent embeddings size. - use_pos_emb : bool, default ``True`` - If ``True``, learnable positional encoding will be added to session item embeddings. - use_causal_attn : bool, default ``True`` - If ``True``, causal mask will be added as attn_mask in Multi-head Attention. Please note that default - SASRec training task ("Shifted Sequence") does not work without causal masking. Set this - parameter to ``False`` only when you change the training task with custom - `data_preparator_type` or if you are absolutely sure of what you are doing. 
- use_key_padding_mask : bool, default ``False`` - If ``True``, key_padding_mask will be added in Multi-head Attention. dropout_rate : float, default 0.2 Probability of a hidden unit to be zeroed. - session_max_len : int, default 32 + session_max_len : int, default 100 Maximum length of user sequence. train_min_user_interactions : int, default 2 - Minimum number of interactions user should have to be used for training. Should be greater than 1. - dataloader_num_workers : int, default 0 - Number of loader worker processes. - batch_size : int, default 128 - How many samples per batch to load. + Minimum number of interactions user should have to be used for training. Should be greater + than 1. loss : {"softmax", "BCE", "gBCE"}, default "softmax" Loss function. n_negatives : int, default 1 Number of negatives for BCE and gBCE losses. gbce_t : float, default 0.2 Calibration parameter for gBCE loss. - lr : float, default 0.01 + lr : float, default 0.001 Learning rate. + batch_size : int, default 128 + How many samples per batch to load. epochs : int, default 3 - Number of training epochs. + Exact number of training epochs. + Will be ignored if `get_trainer_func` is specified. + deterministic : bool, default ``False`` + `deterministic` flag passed to lightning trainer during initialization. + Use `pytorch_lightning.seed_everything` together with this parameter to fix the random seed. + Will be ignored if `get_trainer_func` is specified. verbose : int, default 0 Verbosity level. - deterministic : bool, default ``False`` - If ``True``, set deterministic algorithms for PyTorch operations. - Use `pytorch_lightning.seed_everything` together with this parameter to fix the random state. + Enables progress bar, model summary and logging in default lightning trainer when set to a + positive integer. + Will be ignored if `get_trainer_func` is specified. + dataloader_num_workers : int, default 0 + Number of loader worker processes. 
+ use_pos_emb : bool, default ``True`` + If ``True``, learnable positional encoding will be added to session item embeddings. + use_key_padding_mask : bool, default ``False`` + If ``True``, key_padding_mask will be added in Multi-head Attention. + use_causal_attn : bool, default ``True`` + If ``True``, causal mask will be added as attn_mask in Multi-head Attention. Please note that default + SASRec training task ("Shifted Sequence") does not work without causal masking. Set this + parameter to ``False`` only when you change the training task with custom + `data_preparator_type` or if you are absolutely sure of what you are doing. + item_net_block_types : sequence of `type(ItemNetBase)`, default `(IdEmbeddingsItemNet, CatFeaturesItemNet)` + Type of network returning item embeddings. + (IdEmbeddingsItemNet,) - item embeddings based on ids. + (CatFeaturesItemNet,) - item embeddings based on categorical features. + (IdEmbeddingsItemNet, CatFeaturesItemNet) - item embeddings based on ids and categorical features. + pos_encoding_type : type(PositionalEncodingBase), default `LearnableInversePositionalEncoding` + Type of positional encoding. + transformer_layers_type : type(TransformerLayersBase), default `SASRecTransformerLayers` + Type of transformer layers architecture. + data_preparator_type : type(TransformerDataPreparatorBase), default `SASRecDataPreparator` + Type of data preparator used for dataset processing and dataloader creation. + lightning_module_type : type(TransformerLightningModuleBase), default `TransformerLightningModule` + Type of lightning module defining training procedure. + get_val_mask_func : Callable, default ``None`` + Function to get validation mask. + get_trainer_func : Callable, default ``None`` + Function to get custom lightning trainer. + If `get_trainer_func` is None, default trainer will be created based on `epochs`, + `deterministic` and `verbose` argument values. Model will be trained for the exact number of + epochs. 
Checkpointing will be disabled. + If you want to assign custom trainer after model is initialized, you can manually assign new + value to model `_trainer` attribute. recommend_batch_size : int, default 256 How many samples per batch to load during `recommend`. If you want to change this parameter after model is initialized, @@ -263,24 +304,6 @@ class SASRecModel(TransformerModelBase[SASRecModelConfig]): If ``True`` and HAS_CUDA ``True``, set use_gpu=True in ImplicitRanker.rank. If you want to change this parameter after model is initialized, you can manually assign new value to model `recommend_use_gpu_ranking` attribute. - trainer : Trainer, optional, default ``None`` - Which trainer to use for training. - If trainer is None, default pytorch_lightning Trainer is created. - item_net_block_types : sequence of `type(ItemNetBase)`, default `(IdEmbeddingsItemNet, CatFeaturesItemNet)` - Type of network returning item embeddings. - (IdEmbeddingsItemNet,) - item embeddings based on ids. - (CatFeaturesItemNet,) - item embeddings based on categorical features. - (IdEmbeddingsItemNet, CatFeaturesItemNet) - item embeddings based on ids and categorical features. - pos_encoding_type : type(PositionalEncodingBase), default `LearnableInversePositionalEncoding` - Type of positional encoding. - transformer_layers_type : type(TransformerLayersBase), default `SasRecTransformerLayers` - Type of transformer layers architecture. - data_preparator_type : type(SessionEncoderDataPreparatorBase), default `SasRecDataPreparator` - Type of data preparator used for dataset processing and dataloader creation. - lightning_module_type : type(SessionEncoderLightningModuleBase), default `SessionEncoderLightningModule` - Type of lightning module defining training procedure. - get_val_mask_func : Callable, default None - Function to get validation mask. 
""" config_class = SASRecModelConfig @@ -290,33 +313,33 @@ def __init__( # pylint: disable=too-many-arguments, too-many-locals n_blocks: int = 2, n_heads: int = 4, n_factors: int = 256, - use_pos_emb: bool = True, - use_causal_attn: bool = True, - use_key_padding_mask: bool = False, dropout_rate: float = 0.2, session_max_len: int = 100, - dataloader_num_workers: int = 0, - batch_size: int = 128, + train_min_user_interactions: int = 2, loss: str = "softmax", n_negatives: int = 1, gbce_t: float = 0.2, lr: float = 0.001, + batch_size: int = 128, epochs: int = 3, - verbose: int = 0, deterministic: bool = False, + verbose: int = 0, + dataloader_num_workers: int = 0, + use_pos_emb: bool = True, + use_key_padding_mask: bool = False, + use_causal_attn: bool = True, + item_net_block_types: tp.Sequence[tp.Type[ItemNetBase]] = (IdEmbeddingsItemNet, CatFeaturesItemNet), + pos_encoding_type: tp.Type[PositionalEncodingBase] = LearnableInversePositionalEncoding, + transformer_layers_type: tp.Type[TransformerLayersBase] = SASRecTransformerLayers, # SASRec authors net + data_preparator_type: tp.Type[TransformerDataPreparatorBase] = SASRecDataPreparator, + lightning_module_type: tp.Type[TransformerLightningModuleBase] = TransformerLightningModule, + get_val_mask_func: tp.Optional[ValMaskCallable] = None, + get_trainer_func: tp.Optional[TrainerCallable] = None, recommend_batch_size: int = 256, recommend_accelerator: str = "auto", recommend_devices: tp.Union[int, tp.List[int]] = 1, recommend_n_threads: int = 0, recommend_use_gpu_ranking: bool = True, - train_min_user_interactions: int = 2, - trainer: tp.Optional[Trainer] = None, - item_net_block_types: tp.Sequence[tp.Type[ItemNetBase]] = (IdEmbeddingsItemNet, CatFeaturesItemNet), - pos_encoding_type: tp.Type[PositionalEncodingBase] = LearnableInversePositionalEncoding, - transformer_layers_type: tp.Type[TransformerLayersBase] = SASRecTransformerLayers, # SASRec authors net - data_preparator_type: 
tp.Type[SessionEncoderDataPreparatorBase] = SASRecDataPreparator, - lightning_module_type: tp.Type[SessionEncoderLightningModuleBase] = SessionEncoderLightningModule, - get_val_mask_func: tp.Optional[tp.Callable] = None, ): super().__init__( transformer_layers_type=transformer_layers_type, @@ -344,11 +367,11 @@ def __init__( # pylint: disable=too-many-arguments, too-many-locals recommend_n_threads=recommend_n_threads, recommend_use_gpu_ranking=recommend_use_gpu_ranking, train_min_user_interactions=train_min_user_interactions, - trainer=trainer, item_net_block_types=item_net_block_types, pos_encoding_type=pos_encoding_type, lightning_module_type=lightning_module_type, get_val_mask_func=get_val_mask_func, + get_trainer_func=get_trainer_func, ) def _init_data_preparator(self) -> None: @@ -357,7 +380,6 @@ def _init_data_preparator(self) -> None: n_negatives=self.n_negatives if self.loss != "softmax" else None, batch_size=self.batch_size, dataloader_num_workers=self.dataloader_num_workers, - item_extra_tokens=(PADDING_VALUE,), train_min_user_interactions=self.train_min_user_interactions, get_val_mask_func=self.get_val_mask_func, ) diff --git a/rectools/models/nn/transformer_base.py b/rectools/models/nn/transformer_base.py index f3d4c14d..1de6d3ed 100644 --- a/rectools/models/nn/transformer_base.py +++ b/rectools/models/nn/transformer_base.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -12,8 +12,12 @@ # See the License for the specific language governing permissions and # limitations under the License. 
+import io import typing as tp +from collections.abc import Callable from copy import deepcopy +from pathlib import Path +from tempfile import NamedTemporaryFile import numpy as np import torch @@ -23,14 +27,14 @@ from pytorch_lightning import LightningModule, Trainer from rectools import ExternalIds -from rectools.dataset import Dataset +from rectools.dataset.dataset import Dataset, DatasetSchema, DatasetSchemaDict, IdMap from rectools.models.base import ErrorBehaviour, InternalRecoTriplet, ModelBase, ModelConfig from rectools.models.rank import Distance, ImplicitRanker from rectools.types import InternalIdsArray from rectools.utils.misc import get_class_or_function_full_path, import_object from .item_net import CatFeaturesItemNet, IdEmbeddingsItemNet, ItemNetBase, ItemNetConstructor -from .transformer_data_preparator import SessionEncoderDataPreparatorBase +from .transformer_data_preparator import TransformerDataPreparatorBase from .transformer_net_blocks import ( LearnableInversePositionalEncoding, PositionalEncodingBase, @@ -38,12 +42,10 @@ TransformerLayersBase, ) -PADDING_VALUE = "PAD" - -class TransformerBasedSessionEncoder(torch.nn.Module): +class TransformerTorchBackbone(torch.nn.Module): """ - Torch model for recommendations. + Torch model for encoding user sessions based on transformer architecture. Parameters ---------- @@ -115,6 +117,19 @@ def construct_item_net(self, dataset: Dataset) -> None: dataset, self.n_factors, self.dropout_rate, self.item_net_block_types ) + def construct_item_net_from_dataset_schema(self, dataset_schema: DatasetSchema) -> None: + """ + Construct network for item embeddings from dataset schema. + + Parameters + ---------- + dataset_schema : DatasetSchema + RecTools schema with dataset statistics. 
+ """ + self.item_model = ItemNetConstructor.from_dataset_schema( + dataset_schema, self.n_factors, self.dropout_rate, self.item_net_block_types + ) + @staticmethod def _convert_mask_to_float(mask: torch.Tensor, query: torch.Tensor) -> torch.Tensor: return torch.zeros_like(mask, dtype=query.dtype).masked_fill_(mask, float("-inf")) @@ -234,14 +249,14 @@ def forward( # #### -------------- Lightning Model -------------- #### # -class SessionEncoderLightningModuleBase(LightningModule): +class TransformerLightningModuleBase(LightningModule): # pylint: disable=too-many-instance-attributes """ - Base class for lightning module. To change train procedure inherit + Base class for transfofmers lightning module. To change train procedure inherit from this class and pass your custom LightningModule to your model parameters. Parameters ---------- - torch_model : TransformerBasedSessionEncoder + torch_model : TransformerTorchBackbone Torch model to make recommendations. lr : float Learning rate. @@ -249,32 +264,38 @@ class SessionEncoderLightningModuleBase(LightningModule): Loss function. adam_betas : Tuple[float, float], default (0.9, 0.98) Coefficients for running averages of gradient and its square. - data_preparator : SessionEncoderDataPreparatorBase + data_preparator : TransformerDataPreparatorBase Data preparator. verbose : int, default 0 Verbosity level. - train_loss_name : str, default "train/loss" + train_loss_name : str, default "train_loss" Name of the training loss. - val_loss_name : str, default "val/loss" + val_loss_name : str, default "val_loss" Name of the training loss. 
""" def __init__( self, - torch_model: TransformerBasedSessionEncoder, + torch_model: TransformerTorchBackbone, + model_config: tp.Dict[str, tp.Any], + dataset_schema: DatasetSchemaDict, + item_external_ids: ExternalIds, + data_preparator: TransformerDataPreparatorBase, lr: float, gbce_t: float, - data_preparator: SessionEncoderDataPreparatorBase, - loss: str = "softmax", - adam_betas: tp.Tuple[float, float] = (0.9, 0.98), + loss: str, verbose: int = 0, - train_loss_name: str = "train/loss", - val_loss_name: str = "val/loss", + train_loss_name: str = "train_loss", + val_loss_name: str = "val_loss", + adam_betas: tp.Tuple[float, float] = (0.9, 0.98), ): super().__init__() + self.torch_model = torch_model + self.model_config = model_config + self.dataset_schema = dataset_schema + self.item_external_ids = item_external_ids self.lr = lr self.loss = loss - self.torch_model = torch_model self.adam_betas = adam_betas self.gbce_t = gbce_t self.data_preparator = data_preparator @@ -283,6 +304,8 @@ def __init__( self.val_loss_name = val_loss_name self.item_embs: torch.Tensor + self.save_hyperparameters(ignore=["torch_model", "data_preparator"]) + def configure_optimizers(self) -> torch.optim.Adam: """Choose what optimizers and learning-rate schedulers to use in optimization""" optimizer = torch.optim.Adam(self.torch_model.parameters(), lr=self.lr, betas=self.adam_betas) @@ -301,8 +324,8 @@ def predict_step(self, batch: tp.Dict[str, torch.Tensor], batch_idx: int) -> tor raise NotImplementedError() -class SessionEncoderLightningModule(SessionEncoderLightningModuleBase): - """Lightning module to train SASRec model.""" +class TransformerLightningModule(TransformerLightningModuleBase): + """Lightning module to train transformer models.""" def on_train_start(self) -> None: """Initialize parameters with values from Xavier normal distribution.""" @@ -332,9 +355,16 @@ def training_step(self, batch: tp.Dict[str, torch.Tensor], batch_idx: int) -> to def _calc_custom_loss(self, batch: 
tp.Dict[str, torch.Tensor], batch_idx: int) -> torch.Tensor: raise ValueError(f"loss {self.loss} is not supported") - def on_validation_epoch_start(self) -> None: - """Get item embeddings before validation epoch.""" - self.item_embs = self.torch_model.item_model.get_all_embeddings() + def on_validation_start(self) -> None: + """Save item embeddings""" + self.eval() + with torch.no_grad(): + self.item_embs = self.torch_model.item_model.get_all_embeddings() + + def on_validation_end(self) -> None: + """Clear item embeddings""" + del self.item_embs + torch.cuda.empty_cache() def validation_step(self, batch: tp.Dict[str, torch.Tensor], batch_idx: int) -> tp.Dict[str, torch.Tensor]: """Validate step.""" @@ -437,13 +467,13 @@ def _calc_gbce_loss( loss = self._calc_bce_loss(logits, y, w) return loss - def on_predict_epoch_start(self) -> None: + def on_predict_start(self) -> None: """Save item embeddings""" self.eval() with torch.no_grad(): self.item_embs = self.torch_model.item_model.get_all_embeddings() - def on_predict_epoch_end(self) -> None: + def on_predict_end(self) -> None: """Clear item embeddings""" del self.item_embs torch.cuda.empty_cache() @@ -499,8 +529,8 @@ def _serialize_type_sequence(obj: tp.Sequence[tp.Type]) -> tp.Tuple[str, ...]: ), ] -SessionEncoderLightningModuleType = tpe.Annotated[ - tp.Type[SessionEncoderLightningModuleBase], +TransformerLightningModuleType = tpe.Annotated[ + tp.Type[TransformerLightningModuleBase], BeforeValidator(_get_class_obj), PlainSerializer( func=get_class_or_function_full_path, @@ -509,8 +539,8 @@ def _serialize_type_sequence(obj: tp.Sequence[tp.Type]) -> tp.Tuple[str, ...]: ), ] -SessionEncoderDataPreparatorType = tpe.Annotated[ - tp.Type[SessionEncoderDataPreparatorBase], +TransformerDataPreparatorType = tpe.Annotated[ + tp.Type[TransformerDataPreparatorBase], BeforeValidator(_get_class_obj), PlainSerializer( func=get_class_or_function_full_path, @@ -529,8 +559,23 @@ def _serialize_type_sequence(obj: tp.Sequence[tp.Type]) 
-> tp.Tuple[str, ...]: ), ] -CallableSerialized = tpe.Annotated[ - tp.Callable, + +ValMaskCallable = Callable[[], np.ndarray] + +ValMaskCallableSerialized = tpe.Annotated[ + ValMaskCallable, + BeforeValidator(_get_class_obj), + PlainSerializer( + func=get_class_or_function_full_path, + return_type=str, + when_used="json", + ), +] + +TrainerCallable = Callable[[], Trainer] + +TrainerCallableSerialized = tpe.Annotated[ + TrainerCallable, BeforeValidator(_get_class_obj), PlainSerializer( func=get_class_or_function_full_path, @@ -543,7 +588,7 @@ def _serialize_type_sequence(obj: tp.Sequence[tp.Type]) -> tp.Tuple[str, ...]: class TransformerModelConfig(ModelConfig): """Transformer model base config.""" - data_preparator_type: SessionEncoderDataPreparatorType + data_preparator_type: TransformerDataPreparatorType n_blocks: int = 2 n_heads: int = 4 n_factors: int = 256 @@ -570,8 +615,9 @@ class TransformerModelConfig(ModelConfig): item_net_block_types: ItemNetBlockTypes = (IdEmbeddingsItemNet, CatFeaturesItemNet) pos_encoding_type: PositionalEncodingType = LearnableInversePositionalEncoding transformer_layers_type: TransformerLayersType = PreLNTransformerLayers - lightning_module_type: SessionEncoderLightningModuleType = SessionEncoderLightningModule - get_val_mask_func: tp.Optional[CallableSerialized] = None + lightning_module_type: TransformerLightningModuleType = TransformerLightningModule + get_val_mask_func: tp.Optional[ValMaskCallableSerialized] = None + get_trainer_func: tp.Optional[TrainerCallableSerialized] = None TransformerModelConfig_T = tp.TypeVar("TransformerModelConfig_T", bound=TransformerModelConfig) @@ -590,12 +636,12 @@ class TransformerModelBase(ModelBase[TransformerModelConfig_T]): # pylint: disa config_class: tp.Type[TransformerModelConfig_T] u2i_dist = Distance.DOT i2i_dist = Distance.COSINE - train_loss_name: str = "train/loss" - val_loss_name: str = "val/loss" + train_loss_name: str = "train_loss" + val_loss_name: str = "val_loss" def __init__( # 
pylint: disable=too-many-arguments, too-many-locals self, - data_preparator_type: SessionEncoderDataPreparatorType, + data_preparator_type: TransformerDataPreparatorType, transformer_layers_type: tp.Type[TransformerLayersBase] = PreLNTransformerLayers, n_blocks: int = 2, n_heads: int = 4, @@ -609,7 +655,7 @@ def __init__( # pylint: disable=too-many-arguments, too-many-locals batch_size: int = 128, loss: str = "softmax", n_negatives: int = 1, - gbce_t: float = 0.5, + gbce_t: float = 0.2, lr: float = 0.001, epochs: int = 3, verbose: int = 0, @@ -620,11 +666,11 @@ def __init__( # pylint: disable=too-many-arguments, too-many-locals recommend_n_threads: int = 0, recommend_use_gpu_ranking: bool = True, train_min_user_interactions: int = 2, - trainer: tp.Optional[Trainer] = None, item_net_block_types: tp.Sequence[tp.Type[ItemNetBase]] = (IdEmbeddingsItemNet, CatFeaturesItemNet), pos_encoding_type: tp.Type[PositionalEncodingBase] = LearnableInversePositionalEncoding, - lightning_module_type: tp.Type[SessionEncoderLightningModuleBase] = SessionEncoderLightningModule, - get_val_mask_func: tp.Optional[tp.Callable] = None, + lightning_module_type: tp.Type[TransformerLightningModuleBase] = TransformerLightningModule, + get_val_mask_func: tp.Optional[ValMaskCallable] = None, + get_trainer_func: tp.Optional[TrainerCallable] = None, **kwargs: tp.Any, ) -> None: super().__init__(verbose=verbose) @@ -659,18 +705,14 @@ def __init__( # pylint: disable=too-many-arguments, too-many-locals self.pos_encoding_type = pos_encoding_type self.lightning_module_type = lightning_module_type self.get_val_mask_func = get_val_mask_func + self.get_trainer_func = get_trainer_func - self._init_torch_model() self._init_data_preparator() + self._init_trainer() - if trainer is None: - self._init_trainer() - else: - self._trainer = trainer - - self.lightning_model: SessionEncoderLightningModuleBase - self.data_preparator: SessionEncoderDataPreparatorBase - self.fit_trainer: Trainer + self.lightning_model: 
TransformerLightningModuleBase + self.data_preparator: TransformerDataPreparatorBase + self.fit_trainer: tp.Optional[Trainer] = None def _check_devices(self, recommend_devices: tp.Union[int, tp.List[int]]) -> None: if isinstance(recommend_devices, int) and recommend_devices != 1: @@ -682,19 +724,22 @@ def _init_data_preparator(self) -> None: raise NotImplementedError() def _init_trainer(self) -> None: - self._trainer = Trainer( - max_epochs=self.epochs, - min_epochs=self.epochs, - deterministic=self.deterministic, - enable_progress_bar=self.verbose > 0, - enable_model_summary=self.verbose > 0, - logger=self.verbose > 0, - enable_checkpointing=False, - devices=1, - ) + if self.get_trainer_func is None: + self._trainer = Trainer( + max_epochs=self.epochs, + min_epochs=self.epochs, + deterministic=self.deterministic, + enable_progress_bar=self.verbose > 0, + enable_model_summary=self.verbose > 0, + logger=self.verbose > 0, + enable_checkpointing=False, + devices=1, + ) + else: + self._trainer = self.get_trainer_func() - def _init_torch_model(self) -> None: - self._torch_model = TransformerBasedSessionEncoder( + def _init_torch_model(self) -> TransformerTorchBackbone: + return TransformerTorchBackbone( n_blocks=self.n_blocks, n_factors=self.n_factors, n_heads=self.n_heads, @@ -708,13 +753,22 @@ def _init_torch_model(self) -> None: pos_encoding_type=self.pos_encoding_type, ) - def _init_lightning_model(self, torch_model: TransformerBasedSessionEncoder) -> None: + def _init_lightning_model( + self, + torch_model: TransformerTorchBackbone, + dataset_schema: DatasetSchemaDict, + item_external_ids: ExternalIds, + model_config: tp.Dict[str, tp.Any], + ) -> None: self.lightning_model = self.lightning_module_type( torch_model=torch_model, + dataset_schema=dataset_schema, + item_external_ids=item_external_ids, + model_config=model_config, + data_preparator=self.data_preparator, lr=self.lr, loss=self.loss, gbce_t=self.gbce_t, - data_preparator=self.data_preparator, 
verbose=self.verbose, train_loss_name=self.train_loss_name, val_loss_name=self.val_loss_name, @@ -728,10 +782,18 @@ def _fit( train_dataloader = self.data_preparator.get_dataloader_train() val_dataloader = self.data_preparator.get_dataloader_val() - torch_model = deepcopy(self._torch_model) + torch_model = self._init_torch_model() torch_model.construct_item_net(self.data_preparator.train_dataset) - self._init_lightning_model(torch_model) + dataset_schema = self.data_preparator.train_dataset.get_schema() + item_external_ids = self.data_preparator.train_dataset.item_id_map.external_ids + model_config = self.get_config() + self._init_lightning_model( + torch_model=torch_model, + dataset_schema=dataset_schema, + item_external_ids=item_external_ids, + model_config=model_config, + ) self.fit_trainer = deepcopy(self._trainer) self.fit_trainer.fit(self.lightning_model, train_dataloader, val_dataloader) @@ -771,7 +833,7 @@ def _recommend_u2i( user_embs = np.concatenate(session_embs, axis=0) user_embs = user_embs[user_ids] - item_embs = self.get_item_vectors() + item_embs = self.get_item_vectors_tensor().detach().cpu().numpy() ranker = ImplicitRanker( self.u2i_dist, @@ -796,19 +858,18 @@ def _recommend_u2i( all_target_ids = user_ids[user_ids_indices] return all_target_ids, all_reco_ids, all_scores - def get_item_vectors(self) -> np.ndarray: + def get_item_vectors_tensor(self) -> torch.Tensor: """ Compute catalog item embeddings through torch model. Returns ------- - np.ndarray + torch.Tensor Full catalog item embeddings including extra tokens. 
""" self.torch_model.eval() with torch.no_grad(): - item_embs = self.torch_model.item_model.get_all_embeddings().detach().cpu().numpy() - return item_embs + return self.torch_model.item_model.get_all_embeddings() def _recommend_i2i( self, @@ -820,7 +881,7 @@ def _recommend_i2i( if sorted_item_ids_to_recommend is None: sorted_item_ids_to_recommend = self.data_preparator.get_known_items_sorted_internal_ids() - item_embs = self.get_item_vectors() + item_embs = self.get_item_vectors_tensor().detach().cpu().numpy() # TODO: i2i recommendations do not need filtering viewed and user most of the times has GPU # We should test if torch `topk`` is faster @@ -839,7 +900,7 @@ def _recommend_i2i( ) @property - def torch_model(self) -> TransformerBasedSessionEncoder: + def torch_model(self) -> TransformerTorchBackbone: """Pytorch model.""" return self.lightning_model.torch_model @@ -847,7 +908,6 @@ def torch_model(self) -> TransformerBasedSessionEncoder: def _from_config(cls, config: TransformerModelConfig_T) -> tpe.Self: params = config.model_dump() params.pop("cls") - params["trainer"] = None return cls(**params) def _get_config(self) -> TransformerModelConfig_T: @@ -855,3 +915,58 @@ def _get_config(self) -> TransformerModelConfig_T: params = {attr: getattr(self, attr) for attr in attrs if attr != "cls"} params["cls"] = self.__class__ return self.config_class(**params) + + @classmethod + def _model_from_checkpoint(cls, checkpoint: tp.Dict[str, tp.Any]) -> tpe.Self: + """Create model from loaded Lightning checkpoint.""" + model_config = checkpoint["hyper_parameters"]["model_config"] + loaded = cls.from_config(model_config) + loaded.is_fitted = True + dataset_schema = checkpoint["hyper_parameters"]["dataset_schema"] + dataset_schema = DatasetSchema.model_validate(dataset_schema) + + # Update data preparator + item_external_ids = checkpoint["hyper_parameters"]["item_external_ids"] + loaded.data_preparator.item_id_map = IdMap(item_external_ids) + 
loaded.data_preparator._init_extra_token_ids() # pylint: disable=protected-access + + # Init and update torch model and lightning model + torch_model = loaded._init_torch_model() + torch_model.construct_item_net_from_dataset_schema(dataset_schema) + loaded._init_lightning_model( + torch_model=torch_model, + dataset_schema=dataset_schema, + item_external_ids=item_external_ids, + model_config=model_config, + ) + loaded.lightning_model.load_state_dict(checkpoint["state_dict"]) + + return loaded + + def __getstate__(self) -> object: + if self.is_fitted: + if self.fit_trainer is None: + raise RuntimeError("Model that was loaded from checkpoint cannot be saved without being fitted again") + with NamedTemporaryFile() as f: + self.fit_trainer.save_checkpoint(f.name) + checkpoint = Path(f.name).read_bytes() + state: tp.Dict[str, tp.Any] = {"fitted_checkpoint": checkpoint} + return state + state = {"model_config": self.get_config()} + return state + + def __setstate__(self, state: tp.Dict[str, tp.Any]) -> None: + if "fitted_checkpoint" in state: + checkpoint = torch.load(io.BytesIO(state["fitted_checkpoint"]), weights_only=False) + loaded = self._model_from_checkpoint(checkpoint) + else: + loaded = self.from_config(state["model_config"]) + + self.__dict__.update(loaded.__dict__) + + @classmethod + def load_from_checkpoint(cls, checkpoint_path: tp.Union[str, Path]) -> tpe.Self: + """Load model from Lightning checkpoint path.""" + checkpoint = torch.load(checkpoint_path, weights_only=False) + loaded = cls._model_from_checkpoint(checkpoint) + return loaded diff --git a/rectools/models/nn/transformer_data_preparator.py b/rectools/models/nn/transformer_data_preparator.py index ca35d43a..a13e6451 100644 --- a/rectools/models/nn/transformer_data_preparator.py +++ b/rectools/models/nn/transformer_data_preparator.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); 
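The new `__getstate__`/`__setstate__` pair above makes fitted models picklable by embedding a Lightning checkpoint in the pickle state, while unfitted models round-trip through their config; `_model_from_checkpoint` then rebuilds the model from config and loads the weights. A pure-Python sketch of that scheme (stub class and hypothetical field names; `pickle.dumps` stands in for writing a real Lightning checkpoint):

```python
import pickle


class CheckpointedModel:
    """Sketch of the pickling scheme above: a fitted model serializes as
    checkpoint bytes, an unfitted one as its config dict."""

    def __init__(self, config: dict) -> None:
        self.config = dict(config)
        self.is_fitted = False
        self.state_dict: dict = {}

    def fit(self) -> "CheckpointedModel":
        self.state_dict = {"weights": [0.1, 0.2]}  # placeholder for trained weights
        self.is_fitted = True
        return self

    def __getstate__(self) -> dict:
        if self.is_fitted:
            # Mirrors saving a checkpoint to a temporary file and embedding
            # its raw bytes in the pickle state.
            checkpoint = pickle.dumps({"model_config": self.config, "state_dict": self.state_dict})
            return {"fitted_checkpoint": checkpoint}
        return {"model_config": self.config}

    def __setstate__(self, state: dict) -> None:
        if "fitted_checkpoint" in state:
            # Mirrors _model_from_checkpoint: rebuild from config, then load weights.
            checkpoint = pickle.loads(state["fitted_checkpoint"])
            restored = CheckpointedModel(checkpoint["model_config"])
            restored.state_dict = checkpoint["state_dict"]
            restored.is_fitted = True
        else:
            restored = CheckpointedModel(state["model_config"])
        self.__dict__.update(restored.__dict__)


model = CheckpointedModel({"n_blocks": 2}).fit()
roundtrip = pickle.loads(pickle.dumps(model))
```

Storing the config inside the checkpoint is what lets `load_from_checkpoint` restore a model directly from a Lightning checkpoint path without any pre-built instance.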
# you may not use this file except in compliance with the License. @@ -28,6 +28,8 @@ from rectools.dataset.features import SparseFeatures from rectools.dataset.identifiers import IdMap +from .constants import PADDING_VALUE + class SequenceDataset(TorchDataset): """ @@ -81,7 +83,7 @@ def from_interactions( return cls(sessions=sessions, weights=weights) -class SessionEncoderDataPreparatorBase: +class TransformerDataPreparatorBase: """ Base class for data preparator. To change train/recommend dataset processing, train/recommend dataloaders inherit from this class and pass your custom data preparator to your model parameters. @@ -106,12 +108,13 @@ class SessionEncoderDataPreparatorBase: train_session_max_len_addition: int = 0 + item_extra_tokens: tp.Sequence[Hashable] = (PADDING_VALUE,) + def __init__( self, session_max_len: int, batch_size: int, dataloader_num_workers: int, - item_extra_tokens: tp.Sequence[Hashable], shuffle_train: bool = True, train_min_user_interactions: int = 2, n_negatives: tp.Optional[int] = None, @@ -127,7 +130,6 @@ def __init__( self.batch_size = batch_size self.dataloader_num_workers = dataloader_num_workers self.train_min_user_interactions = train_min_user_interactions - self.item_extra_tokens = item_extra_tokens self.shuffle_train = shuffle_train self.get_val_mask_func = get_val_mask_func @@ -145,7 +147,7 @@ def n_item_extra_tokens(self) -> int: return len(self.item_extra_tokens) def process_dataset_train(self, dataset: Dataset) -> None: - """TODO""" + """Process train dataset and save data.""" raw_interactions = dataset.get_raw_interactions() # Exclude val interaction targets from train if needed @@ -198,8 +200,7 @@ def process_dataset_train(self, dataset: Dataset) -> None: self.train_dataset = Dataset(user_id_map, item_id_map, dataset_interactions, item_features=item_features) self.item_id_map = self.train_dataset.item_id_map - extra_token_ids = self.item_id_map.convert_to_internal(self.item_extra_tokens) - self.extra_token_ids = 
dict(zip(self.item_extra_tokens, extra_token_ids)) + self._init_extra_token_ids() # Define val interactions if self.get_val_mask_func is not None: @@ -213,6 +214,10 @@ def process_dataset_train(self, dataset: Dataset) -> None: val_interactions = pd.concat([val_interactions, val_targets], axis=0) self.val_interactions = Interactions.from_raw(val_interactions, user_id_map, item_id_map).df + def _init_extra_token_ids(self) -> None: + extra_token_ids = self.item_id_map.convert_to_internal(self.item_extra_tokens) + self.extra_token_ids = dict(zip(self.item_extra_tokens, extra_token_ids)) + def get_dataloader_train(self) -> DataLoader: """ Construct train dataloader from processed dataset. @@ -304,7 +309,7 @@ def transform_dataset_u2i(self, dataset: Dataset, users: ExternalIds) -> Dataset interactions = dataset.interactions.df users_internal = dataset.user_id_map.convert_to_internal(users, strict=False) items_internal = dataset.item_id_map.convert_to_internal(self.get_known_item_ids(), strict=False) - interactions = interactions[interactions[Columns.User].isin(users_internal)] # todo: fast_isin + interactions = interactions[interactions[Columns.User].isin(users_internal)] interactions = interactions[interactions[Columns.Item].isin(items_internal)] # Convert to external ids @@ -358,7 +363,6 @@ def _collate_fn_val( self, batch: tp.List[tp.Tuple[tp.List[int], tp.List[float]]], ) -> tp.Dict[str, torch.Tensor]: - """TODO""" raise NotImplementedError() def _collate_fn_recommend( diff --git a/rectools/models/nn/transformer_net_blocks.py b/rectools/models/nn/transformer_net_blocks.py index 0c1a1de5..81fc54e2 100644 --- a/rectools/models/nn/transformer_net_blocks.py +++ b/rectools/models/nn/transformer_net_blocks.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
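The `_init_extra_token_ids` helper factored out above converts the class-level `item_extra_tokens` (now defaulting to the padding token) into internal ids once the item id map is known, caching them in a dict. A tiny sketch of that mapping, with a minimal `IdMap` stand-in and `"PAD"` assumed as the padding value (the real constant lives in `rectools.models.nn.constants`):

```python
import typing as tp

PADDING_VALUE = "PAD"  # assumption for illustration


class IdMap:
    """Minimal stand-in for rectools.dataset.identifiers.IdMap."""

    def __init__(self, external_ids: tp.Sequence[tp.Hashable]) -> None:
        self.external_ids = list(external_ids)

    def convert_to_internal(self, externals: tp.Sequence[tp.Hashable]) -> tp.List[int]:
        return [self.external_ids.index(e) for e in externals]


class DataPreparator:
    # Class attribute, as in the diff: no longer passed through __init__.
    item_extra_tokens: tp.Sequence[tp.Hashable] = (PADDING_VALUE,)

    def __init__(self, item_id_map: IdMap) -> None:
        self.item_id_map = item_id_map
        self._init_extra_token_ids()

    def _init_extra_token_ids(self) -> None:
        # Extra tokens occupy the leading internal ids, before catalog items.
        extra_token_ids = self.item_id_map.convert_to_internal(self.item_extra_tokens)
        self.extra_token_ids = dict(zip(self.item_extra_tokens, extra_token_ids))


prep = DataPreparator(IdMap([PADDING_VALUE, "item_1", "item_2"]))
```

Making the helper a separate method is what allows `_model_from_checkpoint` above to re-initialize the extra token ids after restoring the item id map from a checkpoint.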
@@ -11,6 +11,7 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. + import typing as tp import torch diff --git a/rectools/models/serialization.py b/rectools/models/serialization.py index 48bcd867..91844187 100644 --- a/rectools/models/serialization.py +++ b/rectools/models/serialization.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2024-2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/rectools/utils/config.py b/rectools/utils/config.py index 10c74705..80013f7f 100644 --- a/rectools/utils/config.py +++ b/rectools/utils/config.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2024-2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/rectools/version.py b/rectools/version.py index 529bc448..e9fe619b 100644 --- a/rectools/version.py +++ b/rectools/version.py @@ -1,4 +1,4 @@ -# Copyright 2022-2024 MTS (Mobile Telesystems) +# Copyright 2022-2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/tests/dataset/test_dataset.py b/tests/dataset/test_dataset.py index 7d1f9dea..d1b3b421 100644 --- a/tests/dataset/test_dataset.py +++ b/tests/dataset/test_dataset.py @@ -1,4 +1,4 @@ -# Copyright 2022-2024 MTS (Mobile Telesystems) +# Copyright 2022-2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -15,6 +15,7 @@ # pylint: disable=attribute-defined-outside-init import typing as tp +from collections.abc import Hashable from datetime import datetime import numpy as np @@ -24,6 +25,8 @@ from rectools import Columns from rectools.dataset import Dataset, DenseFeatures, Features, IdMap, Interactions, SparseFeatures +from rectools.dataset.dataset import AnyFeatureName, _serialize_feature_name +from rectools.dataset.features import DIRECT_FEATURE_VALUE from tests.testing_utils import ( assert_feature_set_equal, assert_id_map_equal, @@ -60,6 +63,25 @@ def setup_method(self) -> None: columns=[Columns.User, Columns.Item, Columns.Weight, Columns.Datetime], ), ) + self.expected_schema = { + "n_interactions": 6, + "users": { + "n_hot": 3, + "id_map": { + "size": 3, + "dtype": "|O", + }, + "features": None, + }, + "items": { + "n_hot": 3, + "id_map": { + "size": 3, + "dtype": "|O", + }, + "features": None, + }, + } def assert_dataset_equal_to_expected( self, @@ -85,12 +107,16 @@ def test_construct_with_extra_cols(self) -> None: expected = self.expected_interactions expected.df["extra_col"] = self.interactions_df["extra_col"] assert_interactions_set_equal(actual, expected) + actual_schema = dataset.get_schema() + assert actual_schema == self.expected_schema def test_construct_without_features(self) -> None: dataset = Dataset.construct(self.interactions_df) self.assert_dataset_equal_to_expected(dataset, None, None) assert dataset.n_hot_users == 3 assert dataset.n_hot_items == 3 + actual_schema = dataset.get_schema() + assert actual_schema == self.expected_schema @pytest.mark.parametrize("user_id_col", ("id", Columns.User)) @pytest.mark.parametrize("item_id_col", ("id", Columns.Item)) @@ -133,6 +159,36 @@ def test_construct_with_features(self, user_id_col: str, item_id_col: str) -> No assert_feature_set_equal(dataset.get_hot_user_features(), expected_user_features) assert_feature_set_equal(dataset.get_hot_item_features(), expected_item_features) + expected_schema = { + 
"n_interactions": 6, + "users": { + "n_hot": 3, + "id_map": { + "size": 3, + "dtype": "|O", + }, + "features": { + "kind": "dense", + "names": ["f1", "f2"], + }, + }, + "items": { + "n_hot": 3, + "id_map": { + "size": 3, + "dtype": "|O", + }, + "features": { + "kind": "sparse", + "names": [["f1", DIRECT_FEATURE_VALUE], ["f2", 20], ["f2", 30]], + "cat_feature_indices": [1, 2], + "cat_n_stored_values": 3, + }, + }, + } + actual_schema = dataset.get_schema() + assert actual_schema == expected_schema + @pytest.mark.parametrize("user_id_col", ("id", Columns.User)) @pytest.mark.parametrize("item_id_col", ("id", Columns.Item)) def test_construct_with_features_with_warm_ids(self, user_id_col: str, item_id_col: str) -> None: @@ -441,3 +497,28 @@ def test_filter_dataset_interactions_df_rows_with_features( assert new_user_features.names == old_user_features.names assert_sparse_matrix_equal(new_item_features.values, old_item_features.values[kept_internal_item_ids]) assert new_item_features.names == old_item_features.names + + +class TestSerializeFeatureName: + @pytest.mark.parametrize( + "feature_name, expected", + ( + (("feature_one", "value_one"), ("feature_one", "value_one")), + (("feature_one", 1), ("feature_one", 1)), + ("feature_name", "feature_name"), + (True, True), + (1.0, 1.0), + (1, 1), + (np.array(["feature_name"])[0], "feature_name"), + (np.array([True])[0], True), + (np.array([1.0])[0], 1.0), + (np.array([1])[0], 1), + ), + ) + def test_basic(self, feature_name: AnyFeatureName, expected: Hashable) -> None: + assert _serialize_feature_name(feature_name) == expected + + @pytest.mark.parametrize("feature_name", (np.array([1]), [1], np.array(["name"]), np.array([True]))) + def test_raises_on_incorrect_input(self, feature_name: tp.Any) -> None: + with pytest.raises(TypeError): + _serialize_feature_name(feature_name) diff --git a/tests/models/nn/__init__.py b/tests/models/nn/__init__.py index 61e2ca1b..64b1423b 100644 --- a/tests/models/nn/__init__.py +++ 
b/tests/models/nn/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/tests/models/nn/test_bert4rec.py b/tests/models/nn/test_bert4rec.py index d522fc43..57b03bad 100644 --- a/tests/models/nn/test_bert4rec.py +++ b/tests/models/nn/test_bert4rec.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -23,12 +23,13 @@ from rectools.columns import Columns from rectools.dataset import Dataset from rectools.models import BERT4RecModel -from rectools.models.nn.bert4rec import MASKING_VALUE, PADDING_VALUE, BERT4RecDataPreparator +from rectools.models.nn.bert4rec import BERT4RecDataPreparator from rectools.models.nn.item_net import IdEmbeddingsItemNet from rectools.models.nn.transformer_base import ( LearnableInversePositionalEncoding, PreLNTransformerLayers, - SessionEncoderLightningModule, + TrainerCallable, + TransformerLightningModule, ) from tests.models.data import DATASET from tests.models.utils import ( @@ -36,10 +37,10 @@ assert_second_fit_refits_model, ) -from .utils import leave_one_out_mask +from .utils import custom_trainer, leave_one_out_mask -class TestBERT4RecModelConfiguration: +class TestBERT4RecModel: def setup_method(self) -> None: self._seed_everything() @@ -95,14 +96,18 @@ def dataset_devices(self) -> Dataset: return Dataset.construct(interactions_df) @pytest.fixture - def trainer(self) -> Trainer: - return Trainer( - max_epochs=2, - min_epochs=2, - deterministic=True, - accelerator="cpu", - enable_checkpointing=False, - ) + def get_trainer_func(self) -> TrainerCallable: + def get_trainer() -> Trainer: + return Trainer( + max_epochs=2, + min_epochs=2, + 
deterministic=True, + accelerator="cpu", + enable_checkpointing=False, + devices=1, + ) + + return get_trainer @pytest.mark.parametrize( "accelerator,n_devices,recommend_accelerator", @@ -217,14 +222,16 @@ def test_u2i( expected_gpu_1: pd.DataFrame, expected_gpu_2: pd.DataFrame, ) -> None: - trainer = Trainer( - max_epochs=2, - min_epochs=2, - deterministic=True, - devices=n_devices, - accelerator=accelerator, - enable_checkpointing=False, - ) + def get_trainer() -> Trainer: + return Trainer( + max_epochs=2, + min_epochs=2, + deterministic=True, + devices=n_devices, + accelerator=accelerator, + enable_checkpointing=False, + ) + model = BERT4RecModel( n_factors=32, n_blocks=2, @@ -236,7 +246,7 @@ def test_u2i( deterministic=True, recommend_accelerator=recommend_accelerator, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer, ) model.fit(dataset=dataset_devices) users = np.array([10, 30, 40]) @@ -284,7 +294,7 @@ def test_u2i_losses( self, dataset_devices: Dataset, loss: str, - trainer: Trainer, + get_trainer_func: TrainerCallable, expected: pd.DataFrame, ) -> None: model = BERT4RecModel( @@ -299,7 +309,7 @@ def test_u2i_losses( deterministic=True, mask_prob=0.6, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer_func, loss=loss, ) model.fit(dataset=dataset_devices) @@ -337,7 +347,7 @@ def test_u2i_losses( ), ) def test_with_whitelist( - self, dataset_devices: Dataset, trainer: Trainer, filter_viewed: bool, expected: pd.DataFrame + self, dataset_devices: Dataset, get_trainer_func: TrainerCallable, filter_viewed: bool, expected: pd.DataFrame ) -> None: model = BERT4RecModel( n_factors=32, @@ -349,7 +359,7 @@ def test_with_whitelist( epochs=2, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer_func, ) model.fit(dataset=dataset_devices) users =
np.array([10, 30, 40]) @@ -408,7 +418,7 @@ def test_with_whitelist( def test_i2i( self, dataset: Dataset, - trainer: Trainer, + get_trainer_func: TrainerCallable, filter_itself: bool, whitelist: tp.Optional[np.ndarray], expected: pd.DataFrame, @@ -423,7 +433,7 @@ def test_i2i( epochs=2, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer_func, ) model.fit(dataset=dataset) target_items = np.array([12, 14, 17]) @@ -440,7 +450,7 @@ def test_i2i( actual, ) - def test_second_fit_refits_model(self, dataset_hot_users_items: Dataset, trainer: Trainer) -> None: + def test_second_fit_refits_model(self, dataset_hot_users_items: Dataset, get_trainer_func: TrainerCallable) -> None: model = BERT4RecModel( n_factors=32, n_blocks=2, @@ -449,7 +459,7 @@ def test_second_fit_refits_model(self, dataset_hot_users_items: Dataset, trainer batch_size=4, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer_func, ) assert_second_fit_refits_model(model, dataset_hot_users_items, pre_fit_callback=self._seed_everything) @@ -479,7 +489,7 @@ def test_second_fit_refits_model(self, dataset_hot_users_items: Dataset, trainer ), ) def test_recommend_for_cold_user_with_hot_item( - self, dataset_devices: Dataset, trainer: Trainer, filter_viewed: bool, expected: pd.DataFrame + self, dataset_devices: Dataset, get_trainer_func: TrainerCallable, filter_viewed: bool, expected: pd.DataFrame ) -> None: model = BERT4RecModel( n_factors=32, @@ -491,7 +501,7 @@ def test_recommend_for_cold_user_with_hot_item( epochs=2, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer_func, ) model.fit(dataset=dataset_devices) users = np.array([20]) @@ -570,7 +580,6 @@ def data_preparator(self) -> BERT4RecDataPreparator: batch_size=4, dataloader_num_workers=0, train_min_user_interactions=2, - item_extra_tokens=(PADDING_VALUE, MASKING_VALUE), 
shuffle_train=True, mask_prob=0.5, ) @@ -618,7 +627,6 @@ def test_get_dataloader_train_for_masked_session_with_random_replacement( batch_size=14, dataloader_num_workers=0, train_min_user_interactions=2, - item_extra_tokens=(PADDING_VALUE, MASKING_VALUE), shuffle_train=True, mask_prob=0.5, ) @@ -642,6 +650,15 @@ def test_get_dataloader_recommend( for key, value in actual.items(): assert torch.equal(value, recommend_batch[key]) + +class TestBERT4RecModelConfiguration: + def setup_method(self) -> None: + self._seed_everything() + + def _seed_everything(self) -> None: + torch.use_deterministic_algorithms(True) + seed_everything(32, workers=True) + @pytest.fixture def initial_config(self) -> tp.Dict[str, tp.Any]: config = { @@ -672,13 +689,18 @@ def initial_config(self) -> tp.Dict[str, tp.Any]: "pos_encoding_type": LearnableInversePositionalEncoding, "transformer_layers_type": PreLNTransformerLayers, "data_preparator_type": BERT4RecDataPreparator, - "lightning_module_type": SessionEncoderLightningModule, + "lightning_module_type": TransformerLightningModule, "mask_prob": 0.15, "get_val_mask_func": leave_one_out_mask, + "get_trainer_func": None, } return config - def test_from_config(self, initial_config: tp.Dict[str, tp.Any]) -> None: + @pytest.mark.parametrize("use_custom_trainer", (True, False)) + def test_from_config(self, initial_config: tp.Dict[str, tp.Any], use_custom_trainer: bool) -> None: + config = initial_config + if use_custom_trainer: + config["get_trainer_func"] = custom_trainer model = BERT4RecModel.from_config(initial_config) for key, config_value in initial_config.items(): @@ -686,12 +708,18 @@ def test_from_config(self, initial_config: tp.Dict[str, tp.Any]) -> None: assert model._trainer is not None # pylint: disable = protected-access + @pytest.mark.parametrize("use_custom_trainer", (True, False)) @pytest.mark.parametrize("simple_types", (False, True)) - def test_get_config(self, simple_types: bool, initial_config: tp.Dict[str, tp.Any]) -> None: - 
model = BERT4RecModel(**initial_config) - config = model.get_config(simple_types=simple_types) + def test_get_config( + self, simple_types: bool, initial_config: tp.Dict[str, tp.Any], use_custom_trainer: bool + ) -> None: + config = initial_config + if use_custom_trainer: + config["get_trainer_func"] = custom_trainer + model = BERT4RecModel(**config) + actual = model.get_config(simple_types=simple_types) - expected = initial_config.copy() + expected = config.copy() expected["cls"] = BERT4RecModel if simple_types: @@ -701,16 +729,22 @@ def test_get_config(self, simple_types: bool, initial_config: tp.Dict[str, tp.An "pos_encoding_type": "rectools.models.nn.transformer_net_blocks.LearnableInversePositionalEncoding", "transformer_layers_type": "rectools.models.nn.transformer_net_blocks.PreLNTransformerLayers", "data_preparator_type": "rectools.models.nn.bert4rec.BERT4RecDataPreparator", - "lightning_module_type": "rectools.models.nn.transformer_base.SessionEncoderLightningModule", + "lightning_module_type": "rectools.models.nn.transformer_base.TransformerLightningModule", "get_val_mask_func": "tests.models.nn.utils.leave_one_out_mask", } expected.update(simple_types_params) + if use_custom_trainer: + expected["get_trainer_func"] = "tests.models.nn.utils.custom_trainer" - assert config == expected + assert actual == expected + @pytest.mark.parametrize("use_custom_trainer", (True, False)) @pytest.mark.parametrize("simple_types", (False, True)) def test_get_config_and_from_config_compatibility( - self, simple_types: bool, initial_config: tp.Dict[str, tp.Any] + self, + simple_types: bool, + initial_config: tp.Dict[str, tp.Any], + use_custom_trainer: bool, ) -> None: dataset = DATASET model = BERT4RecModel @@ -723,6 +757,8 @@ def test_get_config_and_from_config_compatibility( } config = initial_config.copy() config.update(updated_params) + if use_custom_trainer: + config["get_trainer_func"] = custom_trainer def get_reco(model: BERT4RecModel) -> pd.DataFrame: return 
model.fit(dataset).recommend(users=np.array([10, 20]), dataset=dataset, k=2, filter_viewed=False) diff --git a/tests/models/nn/test_item_net.py b/tests/models/nn/test_item_net.py index 5984a5da..f4891e9a 100644 --- a/tests/models/nn/test_item_net.py +++ b/tests/models/nn/test_item_net.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/tests/models/nn/test_sasrec.py b/tests/models/nn/test_sasrec.py index 2c5a5def..fc87f1e9 100644 --- a/tests/models/nn/test_sasrec.py +++ b/tests/models/nn/test_sasrec.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -14,7 +14,6 @@ # pylint: disable=too-many-lines -import os import typing as tp from functools import partial @@ -23,18 +22,18 @@ import pytest import torch from pytorch_lightning import Trainer, seed_everything -from pytorch_lightning.loggers import CSVLogger from rectools import ExternalIds from rectools.columns import Columns from rectools.dataset import Dataset, IdMap, Interactions from rectools.models import SASRecModel from rectools.models.nn.item_net import CatFeaturesItemNet, IdEmbeddingsItemNet -from rectools.models.nn.sasrec import PADDING_VALUE, SASRecDataPreparator, SASRecTransformerLayers +from rectools.models.nn.sasrec import SASRecDataPreparator, SASRecTransformerLayers from rectools.models.nn.transformer_base import ( LearnableInversePositionalEncoding, - SessionEncoderLightningModule, - TransformerBasedSessionEncoder, + TrainerCallable, + TransformerLightningModule, + TransformerTorchBackbone, ) from tests.models.data import DATASET from tests.models.utils import ( @@ -43,7 +42,7 @@ ) from tests.testing_utils import 
assert_id_map_equal, assert_interactions_set_equal -from .utils import leave_one_out_mask +from .utils import custom_trainer, leave_one_out_mask class TestSASRecModel: @@ -144,30 +143,18 @@ def dataset_hot_users_items(self, interactions_df: pd.DataFrame) -> Dataset: return Dataset.construct(interactions_df[:-4]) @pytest.fixture - def trainer(self) -> Trainer: - return Trainer( - max_epochs=2, - min_epochs=2, - deterministic=True, - accelerator="cpu", - enable_checkpointing=False, - ) - - @pytest.fixture - def get_val_mask_func(self) -> partial: - def get_val_mask(interactions: pd.DataFrame, val_users: ExternalIds) -> pd.Series: - rank = ( - interactions.sort_values(Columns.Datetime, ascending=False, kind="stable") - .groupby(Columns.User, sort=False) - .cumcount() - + 1 + def get_trainer_func(self) -> TrainerCallable: + def get_trainer() -> Trainer: + return Trainer( + max_epochs=2, + min_epochs=2, + deterministic=True, + accelerator="cpu", + enable_checkpointing=False, + devices=1, ) - val_mask = (interactions[Columns.User].isin(val_users)) & (rank <= 1) - return val_mask - val_users = [10, 30] - get_val_mask_func = partial(get_val_mask, val_users=val_users) - return get_val_mask_func + return get_trainer @pytest.mark.parametrize( "accelerator,devices,recommend_accelerator", @@ -267,14 +254,16 @@ def test_u2i( expected_cpu_2: pd.DataFrame, expected_gpu: pd.DataFrame, ) -> None: - trainer = Trainer( - max_epochs=2, - min_epochs=2, - deterministic=True, - devices=devices, - accelerator=accelerator, - enable_checkpointing=False, - ) + def get_trainer() -> Trainer: + return Trainer( + max_epochs=2, + min_epochs=2, + deterministic=True, + devices=devices, + accelerator=accelerator, + enable_checkpointing=False, + ) + model = SASRecModel( n_factors=32, n_blocks=2, @@ -286,7 +279,7 @@ def test_u2i( deterministic=True, recommend_accelerator=recommend_accelerator,
item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer, ) model.fit(dataset=dataset_devices) users = np.array([10, 30, 40]) @@ -332,7 +325,7 @@ def test_u2i_losses( self, dataset: Dataset, loss: str, - trainer: Trainer, + get_trainer_func: TrainerCallable, expected: pd.DataFrame, ) -> None: model = SASRecModel( @@ -345,7 +338,7 @@ def test_u2i_losses( epochs=2, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer_func, loss=loss, ) model.fit(dataset=dataset) @@ -372,7 +365,7 @@ def test_u2i_losses( def test_u2i_with_key_and_attn_masks( self, dataset: Dataset, - trainer: Trainer, + get_trainer_func: TrainerCallable, expected: pd.DataFrame, ) -> None: model = SASRecModel( @@ -385,7 +378,7 @@ def test_u2i_with_key_and_attn_masks( epochs=2, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer_func, use_key_padding_mask=True, ) model.fit(dataset=dataset) @@ -412,7 +405,7 @@ def test_u2i_with_key_and_attn_masks( def test_u2i_with_item_features( self, dataset_item_features: Dataset, - trainer: Trainer, + get_trainer_func: TrainerCallable, expected: pd.DataFrame, ) -> None: model = SASRecModel( @@ -425,7 +418,7 @@ def test_u2i_with_item_features( epochs=2, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet, CatFeaturesItemNet), - trainer=trainer, + get_trainer_func=get_trainer_func, use_key_padding_mask=True, ) model.fit(dataset=dataset_item_features) @@ -463,7 +456,7 @@ def test_u2i_with_item_features( ), ) def test_with_whitelist( - self, dataset: Dataset, trainer: Trainer, filter_viewed: bool, expected: pd.DataFrame + self, dataset: Dataset, get_trainer_func: TrainerCallable, filter_viewed: bool, expected: pd.DataFrame ) -> None: model = SASRecModel( n_factors=32, @@ -474,7 +467,7 @@ def test_with_whitelist( epochs=2, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet,), - 
trainer=trainer, + get_trainer_func=get_trainer_func, ) model.fit(dataset=dataset) users = np.array([10, 30, 40]) @@ -533,7 +526,7 @@ def test_with_whitelist( def test_i2i( self, dataset: Dataset, - trainer: Trainer, + get_trainer_func: TrainerCallable, filter_itself: bool, whitelist: tp.Optional[np.ndarray], expected: pd.DataFrame, @@ -547,7 +540,7 @@ def test_i2i( epochs=2, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer_func, ) model.fit(dataset=dataset) target_items = np.array([12, 14, 17]) @@ -564,7 +557,7 @@ def test_i2i( actual, ) - def test_second_fit_refits_model(self, dataset_hot_users_items: Dataset, trainer: Trainer) -> None: + def test_second_fit_refits_model(self, dataset_hot_users_items: Dataset, get_trainer_func: TrainerCallable) -> None: model = SASRecModel( n_factors=32, n_blocks=2, @@ -573,7 +566,7 @@ def test_second_fit_refits_model(self, dataset_hot_users_items: Dataset, trainer batch_size=4, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer_func, ) assert_second_fit_refits_model(model, dataset_hot_users_items, pre_fit_callback=self._seed_everything) @@ -603,7 +596,7 @@ def test_second_fit_refits_model(self, dataset_hot_users_items: Dataset, trainer ), ) def test_recommend_for_cold_user_with_hot_item( - self, dataset: Dataset, trainer: Trainer, filter_viewed: bool, expected: pd.DataFrame + self, dataset: Dataset, get_trainer_func: TrainerCallable, filter_viewed: bool, expected: pd.DataFrame ) -> None: model = SASRecModel( n_factors=32, @@ -614,7 +607,7 @@ def test_recommend_for_cold_user_with_hot_item( epochs=2, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer_func, ) model.fit(dataset=dataset) users = np.array([20]) @@ -656,7 +649,7 @@ def test_recommend_for_cold_user_with_hot_item( ), ) def test_warn_when_hot_user_has_cold_items_in_recommend( - 
self, dataset: Dataset, trainer: Trainer, filter_viewed: bool, expected: pd.DataFrame + self, dataset: Dataset, get_trainer_func: TrainerCallable, filter_viewed: bool, expected: pd.DataFrame ) -> None: model = SASRecModel( n_factors=32, @@ -667,7 +660,7 @@ def test_warn_when_hot_user_has_cold_items_in_recommend( epochs=2, deterministic=True, item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, + get_trainer_func=get_trainer_func, ) model.fit(dataset=dataset) users = np.array([10, 20, 50]) @@ -701,58 +694,7 @@ def test_raises_when_loss_is_not_supported(self, dataset: Dataset) -> None: def test_torch_model(self, dataset: Dataset) -> None: model = SASRecModel() model.fit(dataset) - assert isinstance(model.torch_model, TransformerBasedSessionEncoder) - - @pytest.mark.parametrize( - "verbose, is_val_mask_func, expected_columns", - ( - (0, False, ["epoch", "step", "train/loss"]), - (1, True, ["epoch", "step", "train/loss", "val/loss"]), - ), - ) - def test_log_metrics( - self, - dataset: Dataset, - tmp_path: str, - verbose: int, - get_val_mask_func: partial, - is_val_mask_func: bool, - expected_columns: tp.List[str], - ) -> None: - logger = CSVLogger(save_dir=tmp_path) - trainer = Trainer( - default_root_dir=tmp_path, - max_epochs=2, - min_epochs=2, - deterministic=True, - accelerator="cpu", - logger=logger, - log_every_n_steps=1, - enable_checkpointing=False, - ) - model = SASRecModel( - n_factors=32, - n_blocks=2, - session_max_len=3, - lr=0.001, - batch_size=4, - epochs=2, - deterministic=True, - item_net_block_types=(IdEmbeddingsItemNet,), - trainer=trainer, - verbose=verbose, - get_val_mask_func=get_val_mask_func if is_val_mask_func else None, - ) - model.fit(dataset=dataset) - - assert model.fit_trainer.logger is not None - assert model.fit_trainer.log_dir is not None - - metrics_path = os.path.join(model.fit_trainer.log_dir, "metrics.csv") - assert os.path.isfile(metrics_path) - - actual_columns = list(pd.read_csv(metrics_path).columns) - assert 
actual_columns == expected_columns + assert isinstance(model.torch_model, TransformerTorchBackbone) class TestSASRecDataPreparator: @@ -787,17 +729,11 @@ def dataset(self) -> Dataset: @pytest.fixture def data_preparator(self) -> SASRecDataPreparator: - return SASRecDataPreparator( - session_max_len=3, - batch_size=4, - dataloader_num_workers=0, - item_extra_tokens=(PADDING_VALUE,), - n_negatives=1, - ) + return SASRecDataPreparator(session_max_len=3, batch_size=4, dataloader_num_workers=0) @pytest.fixture def data_preparator_val_mask(self) -> SASRecDataPreparator: - def get_val_mask(interactions: pd.DataFrame, val_users: ExternalIds) -> pd.Series: + def get_val_mask(interactions: pd.DataFrame, val_users: ExternalIds) -> np.ndarray: rank = ( interactions.sort_values(Columns.Datetime, ascending=False, kind="stable") .groupby(Columns.User, sort=False) @@ -805,7 +741,7 @@ def get_val_mask(interactions: pd.DataFrame, val_users: ExternalIds) -> pd.Serie + 1 ) val_mask = (interactions[Columns.User].isin(val_users)) & (rank <= 1) - return val_mask + return val_mask.values val_users = [10, 30] get_val_mask_func = partial(get_val_mask, val_users=val_users) @@ -813,7 +749,6 @@ def get_val_mask(interactions: pd.DataFrame, val_users: ExternalIds) -> pd.Serie session_max_len=3, batch_size=4, dataloader_num_workers=0, - item_extra_tokens=(PADDING_VALUE,), get_val_mask_func=get_val_mask_func, ) @@ -964,25 +899,36 @@ def initial_config(self) -> tp.Dict[str, tp.Any]: "pos_encoding_type": LearnableInversePositionalEncoding, "transformer_layers_type": SASRecTransformerLayers, "data_preparator_type": SASRecDataPreparator, - "lightning_module_type": SessionEncoderLightningModule, + "lightning_module_type": TransformerLightningModule, "get_val_mask_func": leave_one_out_mask, + "get_trainer_func": None, } return config - def test_from_config(self, initial_config: tp.Dict[str, tp.Any]) -> None: - model = SASRecModel.from_config(initial_config) + 
@pytest.mark.parametrize("use_custom_trainer", (True, False)) + def test_from_config(self, initial_config: tp.Dict[str, tp.Any], use_custom_trainer: bool) -> None: + config = initial_config + if use_custom_trainer: + config["get_trainer_func"] = custom_trainer + model = SASRecModel.from_config(config) - for key, config_value in initial_config.items(): + for key, config_value in config.items(): assert getattr(model, key) == config_value assert model._trainer is not None # pylint: disable = protected-access + @pytest.mark.parametrize("use_custom_trainer", (True, False)) @pytest.mark.parametrize("simple_types", (False, True)) - def test_get_config(self, simple_types: bool, initial_config: tp.Dict[str, tp.Any]) -> None: - model = SASRecModel(**initial_config) - config = model.get_config(simple_types=simple_types) + def test_get_config( + self, simple_types: bool, initial_config: tp.Dict[str, tp.Any], use_custom_trainer: bool + ) -> None: + config = initial_config + if use_custom_trainer: + config["get_trainer_func"] = custom_trainer + model = SASRecModel(**config) + actual = model.get_config(simple_types=simple_types) - expected = initial_config.copy() + expected = config.copy() expected["cls"] = SASRecModel if simple_types: @@ -992,16 +938,22 @@ def test_get_config(self, simple_types: bool, initial_config: tp.Dict[str, tp.An "pos_encoding_type": "rectools.models.nn.transformer_net_blocks.LearnableInversePositionalEncoding", "transformer_layers_type": "rectools.models.nn.sasrec.SASRecTransformerLayers", "data_preparator_type": "rectools.models.nn.sasrec.SASRecDataPreparator", - "lightning_module_type": "rectools.models.nn.transformer_base.SessionEncoderLightningModule", + "lightning_module_type": "rectools.models.nn.transformer_base.TransformerLightningModule", "get_val_mask_func": "tests.models.nn.utils.leave_one_out_mask", } expected.update(simple_types_params) + if use_custom_trainer: + expected["get_trainer_func"] = "tests.models.nn.utils.custom_trainer" - assert 
config == expected + assert actual == expected + @pytest.mark.parametrize("use_custom_trainer", (True, False)) @pytest.mark.parametrize("simple_types", (False, True)) def test_get_config_and_from_config_compatibility( - self, simple_types: bool, initial_config: tp.Dict[str, tp.Any] + self, + simple_types: bool, + initial_config: tp.Dict[str, tp.Any], + use_custom_trainer: bool, ) -> None: dataset = DATASET model = SASRecModel @@ -1014,6 +966,8 @@ def test_get_config_and_from_config_compatibility( } config = initial_config.copy() config.update(updated_params) + if use_custom_trainer: + config["get_trainer_func"] = custom_trainer def get_reco(model: SASRecModel) -> pd.DataFrame: return model.fit(dataset).recommend(users=np.array([10, 20]), dataset=dataset, k=2, filter_viewed=False) diff --git a/tests/models/nn/test_transformer_base.py b/tests/models/nn/test_transformer_base.py new file mode 100644 index 00000000..df6f2c25 --- /dev/null +++ b/tests/models/nn/test_transformer_base.py @@ -0,0 +1,204 @@ +# Copyright 2025 MTS (Mobile Telesystems) +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import os +import typing as tp +from tempfile import NamedTemporaryFile + +import pandas as pd +import pytest +import torch +from pytorch_lightning import Trainer, seed_everything +from pytorch_lightning.callbacks import ModelCheckpoint +from pytorch_lightning.loggers import CSVLogger + +from rectools import Columns +from rectools.dataset import Dataset +from rectools.models import BERT4RecModel, SASRecModel, load_model +from rectools.models.nn.item_net import IdEmbeddingsItemNet +from rectools.models.nn.transformer_base import TransformerModelBase +from tests.models.utils import assert_save_load_do_not_change_model + +from .utils import custom_trainer, leave_one_out_mask + + +class TestTransformerModelBase: + def setup_method(self) -> None: + torch.use_deterministic_algorithms(True) + + @pytest.fixture + def trainer(self) -> Trainer: + return Trainer( + max_epochs=3, min_epochs=3, deterministic=True, accelerator="cpu", enable_checkpointing=False, devices=1 + ) + + @pytest.fixture + def interactions_df(self) -> pd.DataFrame: + interactions_df = pd.DataFrame( + [ + [10, 13, 1, "2021-11-30"], + [10, 11, 1, "2021-11-29"], + [10, 12, 1, "2021-11-29"], + [30, 11, 1, "2021-11-27"], + [30, 12, 2, "2021-11-26"], + [30, 15, 1, "2021-11-25"], + [40, 11, 1, "2021-11-25"], + [40, 17, 1, "2021-11-26"], + [50, 16, 1, "2021-11-25"], + [10, 14, 1, "2021-11-28"], + [10, 16, 1, "2021-11-27"], + [20, 13, 9, "2021-11-28"], + ], + columns=Columns.Interactions, + ) + return interactions_df + + @pytest.fixture + def dataset(self, interactions_df: pd.DataFrame) -> Dataset: + return Dataset.construct(interactions_df) + + @pytest.mark.parametrize("model_cls", (SASRecModel, BERT4RecModel)) + @pytest.mark.parametrize("default_trainer", (True, False)) + def test_save_load_for_unfitted_model( + self, model_cls: tp.Type[TransformerModelBase], dataset: Dataset, default_trainer: bool, trainer: Trainer + ) -> None: + config = { + "deterministic": True, + "item_net_block_types": 
(IdEmbeddingsItemNet,), # TODO: add CatFeaturesItemNet + } + if not default_trainer: + config["get_trainer_func"] = custom_trainer + model = model_cls.from_config(config) + + with NamedTemporaryFile() as f: + model.save(f.name) + recovered_model = load_model(f.name) + + assert isinstance(recovered_model, model_cls) + original_model_config = model.get_config() + recovered_model_config = recovered_model.get_config() + assert recovered_model_config == original_model_config + + seed_everything(32, workers=True) + model.fit(dataset) + seed_everything(32, workers=True) + recovered_model.fit(dataset) + + self._assert_same_reco(model, recovered_model, dataset) + + def _assert_same_reco(self, model_1: TransformerModelBase, model_2: TransformerModelBase, dataset: Dataset) -> None: + users = dataset.user_id_map.external_ids[:2] + original_reco = model_1.recommend(users=users, dataset=dataset, k=2, filter_viewed=False) + recovered_reco = model_2.recommend(users=users, dataset=dataset, k=2, filter_viewed=False) + pd.testing.assert_frame_equal(original_reco, recovered_reco) + + @pytest.mark.parametrize("model_cls", (SASRecModel, BERT4RecModel)) + @pytest.mark.parametrize("default_trainer", (True, False)) + def test_save_load_for_fitted_model( + self, model_cls: tp.Type[TransformerModelBase], dataset: Dataset, default_trainer: bool, trainer: Trainer + ) -> None: + config = { + "deterministic": True, + "item_net_block_types": (IdEmbeddingsItemNet,), # TODO: add CatFeaturesItemNet + } + if not default_trainer: + config["get_trainer_func"] = custom_trainer + model = model_cls.from_config(config) + model.fit(dataset) + assert_save_load_do_not_change_model(model, dataset) + + @pytest.mark.parametrize("model_cls", (SASRecModel, BERT4RecModel)) + def test_load_from_checkpoint( + self, + model_cls: tp.Type[TransformerModelBase], + tmp_path: str, + dataset: Dataset, + ) -> None: + model = model_cls.from_config( + { + "deterministic": True, + "item_net_block_types": (IdEmbeddingsItemNet,), 
# TODO: add CatFeaturesItemNet + } + ) + model._trainer = Trainer( # pylint: disable=protected-access + default_root_dir=tmp_path, + max_epochs=2, + min_epochs=2, + deterministic=True, + accelerator="cpu", + devices=1, + callbacks=ModelCheckpoint(filename="last_epoch"), + ) + model.fit(dataset) + + assert model.fit_trainer is not None + if model.fit_trainer.log_dir is None: + raise ValueError("No log dir") + ckpt_path = os.path.join(model.fit_trainer.log_dir, "checkpoints", "last_epoch.ckpt") + assert os.path.isfile(ckpt_path) + recovered_model = model_cls.load_from_checkpoint(ckpt_path) + assert isinstance(recovered_model, model_cls) + + self._assert_same_reco(model, recovered_model, dataset) + + @pytest.mark.parametrize("model_cls", (SASRecModel, BERT4RecModel)) + @pytest.mark.parametrize("verbose", (1, 0)) + @pytest.mark.parametrize( + "is_val_mask_func, expected_columns", + ( + (False, ["epoch", "step", "train_loss"]), + (True, ["epoch", "step", "train_loss", "val_loss"]), + ), + ) + def test_log_metrics( + self, + model_cls: tp.Type[TransformerModelBase], + dataset: Dataset, + tmp_path: str, + verbose: int, + is_val_mask_func: bool, + expected_columns: tp.List[str], + ) -> None: + logger = CSVLogger(save_dir=tmp_path) + trainer = Trainer( + default_root_dir=tmp_path, + max_epochs=2, + min_epochs=2, + deterministic=True, + accelerator="cpu", + devices=1, + logger=logger, + enable_checkpointing=False, + ) + get_val_mask_func = leave_one_out_mask if is_val_mask_func else None + model = model_cls.from_config( + { + "verbose": verbose, + "get_val_mask_func": get_val_mask_func, + } + ) + model._trainer = trainer # pylint: disable=protected-access + model.fit(dataset=dataset) + + assert model.fit_trainer is not None + assert model.fit_trainer.logger is not None + assert model.fit_trainer.log_dir is not None + has_val_mask_func = model.get_val_mask_func is not None + assert has_val_mask_func is is_val_mask_func + + metrics_path = 
os.path.join(model.fit_trainer.log_dir, "metrics.csv") + assert os.path.isfile(metrics_path) + + actual_columns = list(pd.read_csv(metrics_path).columns) + assert actual_columns == expected_columns diff --git a/tests/models/nn/test_transformer_data_preparator.py b/tests/models/nn/test_transformer_data_preparator.py index 7e8edafd..cd11a620 100644 --- a/tests/models/nn/test_transformer_data_preparator.py +++ b/tests/models/nn/test_transformer_data_preparator.py @@ -1,4 +1,4 @@ -# Copyright 2024 MTS (Mobile Telesystems) +# Copyright 2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -19,8 +19,7 @@ from rectools.columns import Columns from rectools.dataset import Dataset, IdMap, Interactions -from rectools.models.nn.sasrec import PADDING_VALUE -from rectools.models.nn.transformer_data_preparator import SequenceDataset, SessionEncoderDataPreparatorBase +from rectools.models.nn.transformer_data_preparator import SequenceDataset, TransformerDataPreparatorBase from tests.testing_utils import assert_id_map_equal, assert_interactions_set_equal from ..data import INTERACTIONS @@ -66,7 +65,7 @@ def test_from_interactions( assert all(actual_list == expected_list for actual_list, expected_list in zip(actual.weights, expected_weights)) -class TestSessionEncoderDataPreparatorBase: +class TestTransformerDataPreparatorBase: @pytest.fixture def dataset(self) -> Dataset: @@ -111,12 +110,11 @@ def dataset_dense_item_features(self) -> Dataset: return ds @pytest.fixture - def data_preparator(self) -> SessionEncoderDataPreparatorBase: - return SessionEncoderDataPreparatorBase( + def data_preparator(self) -> TransformerDataPreparatorBase: + return TransformerDataPreparatorBase( session_max_len=4, batch_size=4, dataloader_num_workers=0, - item_extra_tokens=(PADDING_VALUE,), ) @pytest.mark.parametrize( @@ -147,7 +145,7 @@ def data_preparator(self) -> 
SessionEncoderDataPreparatorBase: def test_process_dataset_train( self, dataset: Dataset, - data_preparator: SessionEncoderDataPreparatorBase, + data_preparator: TransformerDataPreparatorBase, expected_interactions: Interactions, expected_item_id_map: IdMap, expected_user_id_map: IdMap, @@ -161,7 +159,7 @@ def test_process_dataset_train( def test_raises_process_dataset_train_when_dense_item_features( self, dataset_dense_item_features: Dataset, - data_preparator: SessionEncoderDataPreparatorBase, + data_preparator: TransformerDataPreparatorBase, ) -> None: with pytest.raises(ValueError): data_preparator.process_dataset_train(dataset_dense_item_features) @@ -190,7 +188,7 @@ def test_raises_process_dataset_train_when_dense_item_features( def test_transform_dataset_u2i( self, dataset: Dataset, - data_preparator: SessionEncoderDataPreparatorBase, + data_preparator: TransformerDataPreparatorBase, expected_interactions: Interactions, expected_item_id_map: IdMap, expected_user_id_map: IdMap, @@ -231,7 +229,7 @@ def test_transform_dataset_u2i( def test_tranform_dataset_i2i( self, dataset: Dataset, - data_preparator: SessionEncoderDataPreparatorBase, + data_preparator: TransformerDataPreparatorBase, expected_interactions: Interactions, expected_item_id_map: IdMap, expected_user_id_map: IdMap, diff --git a/tests/models/nn/utils.py b/tests/models/nn/utils.py index 7a74aebb..0aef8bad 100644 --- a/tests/models/nn/utils.py +++ b/tests/models/nn/utils.py @@ -13,6 +13,7 @@ # limitations under the License. 
import pandas as pd +from pytorch_lightning import Trainer from rectools import Columns @@ -24,3 +25,14 @@ def leave_one_out_mask(interactions: pd.DataFrame) -> pd.Series: .cumcount() ) return rank == 0 + + +def custom_trainer() -> Trainer: + return Trainer( + max_epochs=3, + min_epochs=3, + deterministic=True, + accelerator="cpu", + enable_checkpointing=False, + devices=1, + ) diff --git a/tests/models/test_serialization.py b/tests/models/test_serialization.py index 49c55ce2..ce95af17 100644 --- a/tests/models/test_serialization.py +++ b/tests/models/test_serialization.py @@ -76,6 +76,7 @@ def test_load_model(model_cls: tp.Type[ModelBase]) -> None: model.save(f.name) loaded_model = load_model(f.name) assert isinstance(loaded_model, model_cls) + assert not loaded_model.is_fitted class CustomModelConfig(ModelConfig): diff --git a/tests/models/utils.py b/tests/models/utils.py index e66d823a..8310f51f 100644 --- a/tests/models/utils.py +++ b/tests/models/utils.py @@ -1,4 +1,4 @@ -# Copyright 2022-2024 MTS (Mobile Telesystems) +# Copyright 2022-2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -14,12 +14,14 @@ import typing as tp from copy import deepcopy +from tempfile import NamedTemporaryFile import numpy as np import pandas as pd from rectools.dataset import Dataset from rectools.models.base import ModelBase +from rectools.models.serialization import load_model def _dummy_func() -> None: @@ -32,10 +34,14 @@ def assert_second_fit_refits_model( pre_fit_callback = pre_fit_callback or _dummy_func pre_fit_callback() - model_1 = deepcopy(model).fit(dataset) + model_1 = deepcopy(model) + pre_fit_callback() + model_1.fit(dataset) pre_fit_callback() - model_2 = deepcopy(model).fit(dataset) + model_2 = deepcopy(model) + pre_fit_callback() + model_2.fit(dataset) pre_fit_callback() model_2.fit(dataset) @@ -72,6 +78,32 @@ def get_reco(model: ModelBase) -> pd.DataFrame: assert recovered_model_config == original_model_config +def assert_save_load_do_not_change_model( + model: ModelBase, + dataset: Dataset, + check_configs: bool = True, +) -> None: + + def get_reco(model: ModelBase) -> pd.DataFrame: + users = dataset.user_id_map.external_ids[:2] + return model.recommend(users=users, dataset=dataset, k=2, filter_viewed=False) + + with NamedTemporaryFile() as f: + model.save(f.name) + recovered_model = load_model(f.name) + + assert isinstance(recovered_model, model.__class__) + + original_model_reco = get_reco(model) + recovered_model_reco = get_reco(recovered_model) + pd.testing.assert_frame_equal(recovered_model_reco, original_model_reco) + + if check_configs: + original_model_config = model.get_config() + recovered_model_config = recovered_model.get_config() + assert recovered_model_config == original_model_config + + def assert_default_config_and_default_model_params_are_the_same( model: ModelBase, default_config: tp.Dict[str, tp.Any] ) -> None: diff --git a/tests/test_compat.py b/tests/test_compat.py index 4dd53345..5f2780ff 100644 --- a/tests/test_compat.py +++ b/tests/test_compat.py @@ -1,4 +1,4 @@ -# Copyright 2022-2024 MTS (Mobile Telesystems) +# Copyright 
2022-2025 MTS (Mobile Telesystems) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
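
The `leave_one_out_mask` helper used as `get_val_mask_func` throughout these tests can be sketched standalone. This is a minimal reproduction of the pattern, assuming plain string column names (`"user_id"`, `"datetime"`) in place of the `rectools.Columns` constants used in the actual module:

```python
import pandas as pd


def leave_one_out_mask(interactions: pd.DataFrame) -> pd.Series:
    # Rank each user's interactions from newest to oldest; the stable sort
    # keeps the original order for ties on datetime.
    rank = (
        interactions.sort_values("datetime", ascending=False, kind="stable")
        .groupby("user_id", sort=False)
        .cumcount()
    )
    # Mark only the most recent interaction of each user as validation.
    return rank == 0


interactions = pd.DataFrame(
    {
        "user_id": [10, 10, 30],
        "item_id": [13, 11, 12],
        "datetime": pd.to_datetime(["2021-11-30", "2021-11-29", "2021-11-26"]),
    }
)
mask = leave_one_out_mask(interactions)
print(mask.tolist())  # [True, False, True]
```

Returning a boolean mask (rather than splitting the frame) lets the data preparator keep validation rows out of training sequences while still using them as targets, which is why the hypothetical `get_val_mask` fixture above converts it with `.values` where an `np.ndarray` is expected.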