TINTOlib is a state-of-the-art Python library that transforms tidy data (also known as tabular data) into synthetic images, enabling the application of advanced deep learning techniques, including Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs), to traditionally structured data. This transformation bridges the gap between tabular data and powerful vision-based machine learning models, unlocking new possibilities for tackling regression, classification, and other complex tasks.
Citing TINTO: If you use TINTO in your work, please cite the SoftwareX paper:
```bibtex
@article{softwarex_TINTO,
  title   = {TINTO: Converting Tidy Data into Image for Classification with 2-Dimensional Convolutional Neural Networks},
  journal = {SoftwareX},
  author  = {Manuel Castillo-Cara and Reewos Talla-Chumpitaz and Raúl García-Castro and Luis Orozco-Barbosa},
  volume  = {22},
  pages   = {101391},
  year    = {2023},
  issn    = {2352-7110},
  doi     = {10.1016/j.softx.2023.101391}
}
```
And the use case developed in the Information Fusion paper:
```bibtex
@article{inffus_TINTO,
  title   = {A novel deep learning approach using blurring image techniques for Bluetooth-based indoor localisation},
  journal = {Information Fusion},
  author  = {Reewos Talla-Chumpitaz and Manuel Castillo-Cara and Luis Orozco-Barbosa and Raúl García-Castro},
  volume  = {91},
  pages   = {173-186},
  year    = {2023},
  issn    = {1566-2535},
  doi     = {10.1016/j.inffus.2022.10.011}
}
```
Input data formats (two options):
- Pandas DataFrame
- Files with the format shown in the example below (e.g., a CSV such as the IRIS dataset)

Runs on Linux, Windows, and macOS systems.

Compatible with Python 3.7 or higher.
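To get started, TINTOlib can be installed from PyPI (this assumes the package is published under the name `TINTOlib`, as the project's PyPI link suggests):

```shell
# Install TINTOlib from PyPI (package name assumed from the project's PyPI page)
pip install TINTOlib
```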
| Models | Class | Hyperparameters |
|---|---|---|
| TINTO | `TINTO()` | `problem`, `normalize`, `verbose`, `pixels`, `algorithm`, `blur`, `submatrix`, `amplification`, `distance`, `steps`, `option`, `times`, `train_m`, `zoom`, `random_seed` |
| IGTD | `IGTD()` | `problem`, `normalize`, `verbose`, `scale`, `fea_dist_method`, `image_dist_method`, `error`, `max_step`, `val_step`, `switch_t`, `min_gain`, `zoom`, `random_seed` |
| REFINED | `REFINED()` | `problem`, `normalize`, `verbose`, `hcIterations`, `n_processors`, `zoom`, `random_seed` |
| BarGraph | `BarGraph()` | `problem`, `normalize`, `verbose`, `pixel_width`, `gap`, `zoom` |
| DistanceMatrix | `DistanceMatrix()` | `problem`, `normalize`, `verbose`, `zoom` |
| Combination | `Combination()` | `problem`, `normalize`, `verbose`, `zoom` |
| SuperTML | `SuperTML()` | `problem`, `normalize`, `verbose`, `pixels`, `feature_importance`, `font_size`, `random_seed` |
| FeatureWrap | `FeatureWrap()` | `problem`, `normalize`, `verbose`, `size`, `bins`, `zoom` |
| BIE | `BIE()` | `problem`, `normalize`, `verbose`, `precision`, `zoom` |
The TINTOlib Crash Course repository provides a comprehensive crash course on using TINTOlib, a Python library designed to transform tabular data into synthetic images for machine learning tasks. It includes slides and Jupyter notebooks that demonstrate how to apply state-of-the-art vision models like Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) to problems such as regression and classification, using TINTOlib for data transformation.
The repository also features Hybrid Neural Networks (HyNNs), where one branch is an MLP designed to process tabular data, while another branch, either a CNN or a ViT, handles the synthetic images. This architecture leverages the strengths of both data formats for enhanced performance on complex machine learning tasks. It is ideal for those looking to integrate image-based deep learning techniques into tabular data problems.
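The two-branch HyNN idea can be illustrated with a framework-free forward pass. The following is only a sketch in plain NumPy (the feature dimensions, the number of filters, and fusion by concatenation are illustrative assumptions, not TINTOlib code): an MLP branch embeds the tabular row, a small convolution-plus-pooling branch embeds the synthetic image, and the two embeddings are concatenated before the output head.

```python
import numpy as np

rng = np.random.default_rng(42)

# One toy sample: 4 tabular features and its 20x20 synthetic image.
x_tab = rng.normal(size=(1, 4))
x_img = rng.normal(size=(20, 20))

# --- MLP branch: tabular row -> 8-dimensional embedding ---
W_tab = rng.normal(size=(4, 8))
h_tab = np.maximum(x_tab @ W_tab, 0.0)            # ReLU, shape (1, 8)

# --- CNN-style branch: four 3x3 filters + ReLU + global average pooling ---
kernels = rng.normal(size=(4, 3, 3))
feats = []
for k in kernels:
    conv = np.array([[np.sum(x_img[i:i + 3, j:j + 3] * k)
                      for j in range(18)] for i in range(18)])
    feats.append(np.maximum(conv, 0.0).mean())    # one pooled feature per filter
h_img = np.array(feats).reshape(1, 4)             # shape (1, 4)

# --- Fusion: concatenate both embeddings, then a linear output head ---
fused = np.concatenate([h_tab, h_img], axis=1)    # shape (1, 12)
W_out = rng.normal(size=(12, 1))
y_hat = fused @ W_out                             # e.g. a regression output
print(fused.shape, y_hat.shape)                   # (1, 12) (1, 1)
```

In a real HyNN the branches would be trained jointly in a deep learning framework; the sketch only shows how the tabular and image representations meet at the fusion layer.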
For example, the following table shows a classic example of the IRIS CSV dataset as it should look for a run:

| sepal length | sepal width | petal length | petal width | target |
|---|---|---|---|---|
| 4.9 | 3.0 | 1.4 | 0.2 | 1 |
| 7.0 | 3.2 | 4.7 | 1.4 | 2 |
| 6.3 | 3.3 | 6.0 | 2.5 | 3 |
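Since TINTOlib also accepts a Pandas DataFrame directly, the same IRIS-style table can be built in memory. A small sketch (column names follow the table above, with the target as the last column):

```python
import pandas as pd

# IRIS-style rows from the table above; the target label is the last column.
df = pd.DataFrame(
    {
        "sepal length": [4.9, 7.0, 6.3],
        "sepal width":  [3.0, 3.2, 3.3],
        "petal length": [1.4, 4.7, 6.0],
        "petal width":  [0.2, 1.4, 2.5],
        "target":       [1, 2, 3],
    }
)
print(df.shape)  # (3, 5)
```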
The following example shows how to create 20x20 images with characteristic pixels, i.e. without blurring. Since no other parameters are indicated, the following default values are used:
- Image size: 20x20 pixels
- Blurring: no blurring is applied
- Seed: random seed set to 20
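A minimal usage sketch of the example above. The constructor arguments mirror the hyperparameters table; the image-generation method name may differ between TINTOlib versions, so treat `generateImages` and the argument values here as assumptions and check the ReadTheDocs API reference for your installed version.

```python
# Hedged sketch: assumes TINTOlib is installed and that TINTO exposes a
# generateImages(data, folder) method, as in earlier TINTOlib releases.
from TINTOlib.tinto import TINTO

# 20x20 characteristic-pixel images (blur disabled), seed 20 -- the defaults
# described above, written out explicitly for clarity.
model = TINTO(problem="supervised", pixels=20, blur=False, random_seed=20)

# 'data' may be a CSV path or a Pandas DataFrame with the target last;
# the synthetic images are written to the given output folder.
model.generateImages("iris.csv", "iris_images/")
```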
- TINTOlib Crash Course
- For more detailed information, refer to the TINTOlib ReadTheDocs.
- GitHub repository: TINTOlib.
- Available on PyPI.
TINTOlib is available under the Apache License 2.0.