# Introducing Self-Attention to Target Attentive Graph Neural Networks
PyTorch implementation of TAGNN++, the model presented in the paper "Introducing Self-Attention to Target Attentive Graph Neural Networks" (accepted at AISP '22).
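At a high level, the idea is to augment a target-attentive GNN over the session graph with a Transformer-style self-attention block over the session's item embeddings. The snippet below is a minimal, illustrative PyTorch sketch of that pattern, not the repository's actual module; the class and parameter names are ours, chosen to mirror the `--hiddenSize`, `--nhead`, `--feedforward`, and `--layer` training flags.

```python
import torch
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    """Illustrative Transformer encoder block over session item embeddings.

    Hypothetical sketch: hidden_size, nhead, feedforward, and layers
    correspond conceptually to the training script's flags.
    """

    def __init__(self, hidden_size=100, nhead=2, feedforward=256, layers=1):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_size,
            nhead=nhead,
            dim_feedforward=feedforward,
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)

    def forward(self, item_embeddings):
        # item_embeddings: (seq_len, batch, hidden_size), e.g. the node
        # embeddings produced by the gated GNN over the session graph.
        return self.encoder(item_embeddings)

# Example: sessions of 10 items, batch of 32, 100-d embeddings.
x = torch.randn(10, 32, 100)
out = SelfAttentionBlock()(x)
print(out.shape)  # torch.Size([10, 32, 100])
```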
```
python3 train.py [-h] [--dataset DATASET] [--defaults DEFAULTS] [--batchSize BATCHSIZE]
                 [--hiddenSize HIDDENSIZE] [--nhead NHEAD] [--layer LAYER]
                 [--feedforward FEEDFORWARD] [--epoch EPOCH] [--lr LR]
                 [--lr_dc LR_DC] [--lr_dc_step LR_DC_STEP] [--l2 L2]
                 [--patience PATIENCE] [--validation]
                 [--valid_portion VALID_PORTION]
```
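For example, to train on Diginetica with a held-out validation split (the flag values here are illustrative, not recommended settings):

```
python3 train.py --dataset diginetica --batchSize 100 --hiddenSize 100 --epoch 30 --lr 0.001 --validation --valid_portion 0.1
```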
Arguments:

```
-h, --help            Show this help message and exit
--dataset DATASET     Name of the dataset: diginetica | yoochoose1_64
--defaults DEFAULTS   Use the default configuration
--batchSize BATCHSIZE
                      Batch size
--hiddenSize HIDDENSIZE
                      Hidden state dimensions
--nhead NHEAD         Number of self-attention heads
--layer LAYER         Number of self-attention (Transformer encoder) layers
--feedforward FEEDFORWARD
                      Dimension of the feed-forward network in the attention block
--epoch EPOCH         Number of epochs to train
--lr LR               Learning rate
--lr_dc LR_DC         Decay factor applied to the learning rate
--lr_dc_step LR_DC_STEP
                      Number of steps between learning rate decays
--l2 L2               L2 penalty (weight decay)
--patience PATIENCE   Early stopping patience
--validation          Hold out part of the training set for validation
--valid_portion VALID_PORTION
                      Portion of the training set to split into the validation set
```
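The `--lr`, `--lr_dc`, `--lr_dc_step`, and `--l2` flags describe a standard step-decay training setup; in PyTorch this maps naturally onto `torch.optim.lr_scheduler.StepLR`. A sketch under that assumption (not necessarily the script's exact code):

```python
import torch

model = torch.nn.Linear(100, 100)  # stand-in for the actual model

# --lr sets the initial learning rate, --l2 the weight-decay penalty.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)

# --lr_dc_step epochs between decays, --lr_dc the multiplicative decay factor.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

for epoch in range(30):  # --epoch
    # ... one training pass over the data would go here ...
    scheduler.step()
```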
This code was developed with Python 3.6. Dependencies:

- Python (3.6.x)
- PyTorch (1.7.x)
- CUDA (10.2)
- cuDNN (7.6.5)
- networkx (2.5.1)
- numpy (1.19.5)
For the original source of AGC (Adaptive Gradient Clipping), and for further tweaks:

```
git clone https://github.com/vballoli/nfnets-pytorch.git
```
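For reference, AGC (introduced in the NFNets paper) rescales a parameter's gradient whenever the ratio of gradient norm to parameter norm exceeds a threshold. Below is a minimal pure-PyTorch sketch of that rule, simplified to whole-tensor norms (the real method is unit-wise); the repository above provides a maintained, more complete implementation:

```python
import torch

def adaptive_gradient_clip(parameters, clipping=0.01, eps=1e-3):
    """Clip gradients so that ||g|| / max(||w||, eps) <= clipping.

    Simplified sketch of AGC using whole-tensor norms for brevity.
    """
    for p in parameters:
        if p.grad is None:
            continue
        param_norm = p.detach().norm().clamp_(min=eps)
        grad_norm = p.grad.detach().norm()
        max_norm = clipping * param_norm
        if grad_norm > max_norm:
            # Scale the gradient down to the allowed norm, in place.
            p.grad.detach().mul_(max_norm / (grad_norm + 1e-6))

# Usage: call after loss.backward() and before optimizer.step():
# adaptive_gradient_clip(model.parameters())
```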
![SBR Task](assets/SBR_Task.png)
For further details, contact Sai Mitheran via LinkedIn or email.
To cite our paper:

```
@article{mitheran2021improved,
  title={Improved Representation Learning for Session-based Recommendation},
  author={Sai Mitheran and Abhinav Java and Surya Kant Sahu and Arshad Shaikh},
  year={2021},
  journal={arXiv preprint arXiv:2107.01516}
}
```