Added DLSR using AutoEncoders #1113

Merged 3 commits on Jun 13, 2024
@@ -0,0 +1,48 @@
# Deep Learning Super-Resolution
Welcome to the Deep Learning Super-Resolution project! This project uses autoencoders to enhance the resolution of low-resolution or damaged images. Autoencoders are neural networks that learn efficient encodings of input data, which makes them well suited to image super-resolution tasks.


## Introduction
This project aims to develop a super-resolution model using autoencoders. By training the model on a dataset of paired high-resolution and low-resolution images, the autoencoder learns to reconstruct high-quality images from degraded ones. The trained model can enhance the resolution of low-resolution or damaged images, making it useful for applications such as photo restoration, medical imaging, and surveillance footage, where improved visual quality and detail matter.


## Installation
To set up the project, follow these steps:

* Clone the repository: Download the project files from GitHub to your local machine, either with a Git command or by downloading the ZIP file from the repository page. Ensure you have Git installed if you use the command line.

* Create a virtual environment: Set up a virtual environment to manage the project's dependencies without affecting your system-wide Python installation and to avoid conflicts between packages used in different projects. You can create one with venv or conda.

* Install dependencies: Use the provided requirements.txt file to install all necessary Python libraries, including those for deep learning, image processing, and data handling, with a package manager such as pip. Example commands for all three steps follow.
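
The commands below are one way to carry out these steps on Linux/macOS. The checkout path is taken from this repository's layout; the virtual-environment name and the assumption that requirements.txt sits in the project folder are illustrative.

```bash
# Clone the repository and enter the project folder
# (path taken from the Project-Guidance repository layout).
git clone https://github.com/Kushal997-das/Project-Guidance.git
cd "Project-Guidance/Machine Learning and Data Science/Basic/Super Resolution using Deep Learning"

# Create and activate a virtual environment (the name "venv" is arbitrary).
python -m venv venv
source venv/bin/activate   # on Windows: venv\Scripts\activate

# Install the dependencies; assumes requirements.txt is in this folder.
pip install -r requirements.txt
```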


## Usage
### Preprocessing Data
Before training the model, the image data must be preprocessed for consistency and efficiency:

* Prepare your data: Place your low-resolution and high-resolution image pairs in the appropriate directories within the project folder.

* Run the preprocessing script: The preprocess_data.py script prepares the images for training by resizing them, normalizing pixel values, and splitting the data into training and validation sets, ensuring the images are in the correct format and scale for the autoencoder to learn effectively. A minimal sketch of this step is shown below.
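
As a rough illustration, a preprocessing pass along these lines could look as follows. This is a minimal sketch, not the project's actual preprocess_data.py: the data/low_res and data/high_res directories, the 256x256 target size, and the preprocessed.npz output file are all assumptions. Low-resolution inputs are upscaled to the target size so input and output shapes match.

```python
import glob
import os

import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

LOW_RES_DIR, HIGH_RES_DIR = "data/low_res", "data/high_res"
SIZE = (256, 256)  # low-res inputs are upscaled so input and target shapes match

def load_images(directory):
    """Load every image in `directory`, resize it, and scale pixels to [0, 1]."""
    paths = sorted(glob.glob(os.path.join(directory, "*")))
    return np.array([
        np.asarray(Image.open(p).convert("RGB").resize(SIZE), dtype=np.float32) / 255.0
        for p in paths
    ])

low, high = load_images(LOW_RES_DIR), load_images(HIGH_RES_DIR)

# Split the paired images into training and validation sets.
x_train, x_val, y_train, y_val = train_test_split(low, high, test_size=0.2, random_state=42)
np.savez("preprocessed.npz", x_train=x_train, x_val=x_val, y_train=y_train, y_val=y_val)
```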

### Training the Model
To train the autoencoder model, follow these steps:

* Configure the training script: The train_model.py script loads the preprocessed data, defines the autoencoder architecture, and trains the model over several epochs. Review and adjust its parameters, such as the number of epochs, batch size, and learning rate, to match your requirements.

* Run the training script: Execute the script to start training; the trained model is saved in the models/ directory. Training feeds the autoencoder pairs of low-resolution and high-resolution images so that it learns the low-to-high mapping by minimizing the reconstruction error. This typically runs for many epochs and needs significant computational power, preferably a GPU. A training sketch is shown below.
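
A minimal Keras training sketch consistent with the preprocessing example above. The layer widths, number of epochs, batch size, and learning rate are illustrative defaults rather than the project's exact configuration.

```python
import os

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Load the arrays written by the preprocessing sketch above.
data = np.load("preprocessed.npz")
x_train, y_train = data["x_train"], data["y_train"]
x_val, y_val = data["x_val"], data["y_val"]

def build_autoencoder(input_shape=(256, 256, 3)):
    """Convolutional autoencoder: stride-2 convs compress, transposed convs upsample."""
    inputs = layers.Input(shape=input_shape)
    # Encoder: 256x256 -> 64x64 feature maps.
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)
    # Decoder: 64x64 -> 256x256.
    x = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    return models.Model(inputs, outputs)

model = build_autoencoder()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")  # reconstruction error
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50, batch_size=16)

os.makedirs("models", exist_ok=True)
model.save("models/autoencoder.keras")
```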

### Enhancing Images
Once the model is trained, you can use the enhance_image.py script to enhance new low-resolution images. Enhancing images with the trained model is straightforward:

* Prepare the images for enhancement: Place the low-resolution images you want to enhance in a specific directory.

* Run the enhancement script: The enhance_image.py script loads the trained model and applies it to the input images, producing higher-resolution versions. The output images are saved so you can compare the enhanced results with the original low-resolution ones. A sketch of this step follows.
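
A sketch of the enhancement step, reusing the assumptions from the earlier examples (256x256 input size and the saved-model path); the to_enhance/ and enhanced/ directory names are placeholders.

```python
import glob
import os

import numpy as np
from PIL import Image
from tensorflow import keras

model = keras.models.load_model("models/autoencoder.keras")

os.makedirs("enhanced", exist_ok=True)
for path in glob.glob(os.path.join("to_enhance", "*")):
    # Match the model's input: upscale to 256x256, scale to [0, 1], add a batch axis.
    img = Image.open(path).convert("RGB").resize((256, 256))
    x = np.asarray(img, dtype=np.float32)[np.newaxis] / 255.0
    y = model.predict(x)[0]  # predicted high-resolution image, values in [0, 1]
    out = Image.fromarray((np.clip(y, 0.0, 1.0) * 255).astype(np.uint8))
    out.save(os.path.join("enhanced", os.path.basename(path)))
```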

## Model Architecture
The autoencoder model used in this project consists of two main components:

* Encoder: Compresses the input image into a lower-dimensional representation. The encoder is designed to capture the essential features of the input image while reducing its dimensionality. It typically consists of several convolutional layers that progressively downsample the input image.

* Decoder: Reconstructs the high-resolution image from the compressed representation. The decoder takes the lower-dimensional representation produced by the encoder and reconstructs it back to the original image dimensions. It typically consists of several deconvolutional (or transposed convolutional) layers that progressively upsample the compressed representation to the desired high resolution.

The model is trained by minimizing the reconstruction error between the high-resolution images and the decoder's reconstructions: the encoder and decoder weights are adjusted to shrink the difference between each original high-resolution image and its reconstruction. Over successive epochs, the model gradually improves its ability to enhance image resolution. This objective is formalized below.
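
In symbols, with encoder E, decoder D, low-resolution inputs x_i, high-resolution targets y_i, and N training pairs, training minimizes the mean squared reconstruction error (a standard formulation; the project may use a different loss):

```math
\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \left\lVert y_i - D\left(E(x_i)\right) \right\rVert_2^2
```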
@@ -0,0 +1,4 @@
# Unsplash dataset

The dataset is too large to be shared in this repository.
Please download it from https://github.com/unsplash/datasets.


1 change: 1 addition & 0 deletions Machine Learning and Data Science/README.md
@@ -24,6 +24,7 @@
| 19. | [Single Neural Network](https://github.com/Kushal997-das/Project-Guidance/tree/main/Machine%20Learning%20and%20Data%20Science/Basic/Single%20Neural%20Network) | 20. | [Titanic Survival Prediction](https://github.com/Kushal997-das/Project-Guidance/tree/main/Machine%20Learning%20and%20Data%20Science/Basic/Titanic_Survival_Prediction) | 21. | [Uber Analysis](https://github.com/Kushal997-das/Project-Guidance/tree/main/Machine%20Learning%20and%20Data%20Science/Basic/Uber%20Analysis) |
| 22. | [Walmart Analysis](https://github.com/Kushal997-das/Project-Guidance/tree/main/Machine%20Learning%20and%20Data%20Science/Basic/Walmart%20Analysis) | 23. | [Wine Quality Prediction](https://github.com/Kushal997-das/Project-Guidance/tree/main/Machine%20Learning%20and%20Data%20Science/Basic/Wine%20Quality%20Prediction) | 24. | [Water Quality Data EDA](https://github.com/Kushal997-das/Project-Guidance/tree/main/Machine%20Learning%20and%20Data%20Science/Basic/Water%20Quality%20Data%20EDA) |
| 25. | [Calorie Burnt Prediction](https://github.com/Seersha9802/Project-Guidance/tree/main/Machine%20Learning%20and%20Data%20Science/Basic/Calorie%20Burnt%20Prediction) |
| 26. | [Super Resolution Using Deep Learning](https://github.com/Kushal997-das/Project-Guidance/tree/main/Machine%20Learning%20and%20Data%20Science/Basic/Super%20Resolution%20using%20Deep%20Learning) |


<br>