# ControlNet-Guided Image Refinement
- Getting Started
- Training a Model
- Extending the Configuration
- Creating a Custom Config
- Developing the Codebase
- Running Experiments and Tracking with Weights & Biases
## Getting Started

### Prerequisites

- Python 3.8 or higher
- PyTorch 2.0+
- CUDA (for GPU support)
- Weights & Biases account (optional, for experiment tracking)
### Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/CoGIR.git
   cd CoGIR
   ```
2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```
3. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```
## Training a Model

You can train a model using the provided configuration files. The training process is managed by PyTorch Lightning, with Hydra handling configuration management.

To start training with the default settings, run:

```bash
python train.py
```
This will use the configuration specified in `config/config.yaml` and will log training results to Weights & Biases if you have it configured.
To run with custom configurations, you can extend or modify any part of the config. For example, you can specify a custom dataset location and batch size:

```bash
python train.py data.dataset.location=/path/to/dataset data.batch_size=8
```
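Under the hood, `train.py` wires Hydra and PyTorch Lightning together. The sketch below shows what a minimal entry point of this kind might look like; the config keys (`cfg.model`, `cfg.data`, `cfg.logger`, `cfg.train.max_epochs`) and the exact wiring are illustrative assumptions, not the repository's actual code.

```python
# Minimal sketch of a Hydra + PyTorch Lightning entry point.
# The real train.py may differ; config keys here are illustrative only.
import hydra
import pytorch_lightning as pl
from omegaconf import DictConfig


@hydra.main(config_path="config", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    # Build the model, data module, and logger from their Hydra configs.
    model = hydra.utils.instantiate(cfg.model)
    datamodule = hydra.utils.instantiate(cfg.data)
    logger = hydra.utils.instantiate(cfg.logger)

    trainer = pl.Trainer(logger=logger, max_epochs=cfg.train.max_epochs)
    trainer.fit(model, datamodule=datamodule)


if __name__ == "__main__":
    main()
```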
## Extending the Configuration

Hydra allows you to compose and extend configurations. The default configuration (`config/config.yaml`) includes references to multiple components like the model, optimizer, and logger.
To extend or modify the configuration, you can create a new YAML file or pass specific overrides through the command line.
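For illustration, a top-level config like this typically pulls its components in through a defaults list. The group and option names below are assumptions based on the project layout, not the exact contents of `config/config.yaml`:

```yaml
# Illustrative composition only; see config/config.yaml for the real defaults.
defaults:
  - model: unet
  - optimizer: adam
  - logger: wandb
  - _self_

data:
  batch_size: 4
```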
If you want to use a different loss function, you can modify the criterion in a YAML file (e.g., `custom_loss.yaml`) as follows:

```yaml
criterion:
  _target_: torch.nn.CrossEntropyLoss
```
Then, run the training with:

```bash
python train.py --config-name custom_loss.yaml
```
Alternatively, you can override a specific part of the config directly through the command line:

```bash
python train.py criterion._target_=torch.nn.CrossEntropyLoss
```
## Creating a Custom Config

We highly recommend creating custom configuration files rather than modifying the default YAML files (`config/config.yaml`). This keeps your project organized and allows for easy switching between different environments or experiment settings.
To create a custom config file, you can use the following structure:
1. Create your custom YAML config file:

   For example, `jeremy-local.yaml` is a custom configuration:

   ```yaml
   defaults:
     - config
     - _self_

   data:
     dataset:
       location: /media/disk3/unsplash-lite_data
     batch_size: 4

   train:
     gpus: [0, 1]

   logger:
     notes: "Hydra/Lightning refactor"

   criterion:
     _target_: criterion.lpips_loss.LPIPSLoss
     use_l1: true
   ```
2. Use the custom config:

   Run the training command with your custom config:

   ```bash
   python train.py --config-name jeremy-local.yaml
   ```
This allows you to easily switch between different setups (e.g., local vs remote machine) without editing the default configs.
## Developing the Codebase

- Project Structure:
  - `train.py`: The entry point for model training.
  - `model/`: Contains the UNet model architecture.
  - `data/`: Contains data loaders and dataset utilities.
  - `config/`: Configuration files managed by Hydra.
  - `tests/`: Unit tests to ensure the code works as expected.
- Running Unit Tests: We use `pytest` for unit testing. To run the tests:

  ```bash
  pytest tests/
  ```
- Adding New Features: To add new models, datasets, or loss functions, create the necessary modules in `model/`, `data/`, or `criterion/` respectively, and update the corresponding YAML configuration file (see the sketch below).
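As a rough illustration of that workflow, here is a hypothetical loss module and an accompanying test. The file names, class name, and loss itself are made up for this sketch and are not part of the current codebase:

```python
# criterion/charbonnier_loss.py -- hypothetical new loss module.
# It would be referenced from a YAML config via:
#   criterion:
#     _target_: criterion.charbonnier_loss.CharbonnierLoss
import torch
import torch.nn as nn


class CharbonnierLoss(nn.Module):
    """Smooth L1-style loss, shown purely as an illustration."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return torch.sqrt((pred - target) ** 2 + self.eps).mean()


# tests/test_charbonnier_loss.py -- minimal pytest for the new module.
def test_charbonnier_loss_is_nonnegative():
    loss = CharbonnierLoss()
    value = loss(torch.rand(2, 3, 8, 8), torch.rand(2, 3, 8, 8))
    assert value.item() >= 0
```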
## Running Experiments and Tracking with Weights & Biases

CoGIR is set up to integrate seamlessly with Weights & Biases for experiment tracking.
1. Ensure that you have a Weights & Biases account and have logged in:

   ```bash
   wandb login
   ```
2. In the configuration file (`config/logger/wandb.yaml`), set the project and entity names:

   ```yaml
   project: CoGIR
   entity: your-wandb-entity
   ```
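A fuller version of that file plausibly points Hydra at Lightning's `WandbLogger`; the sketch below is an assumption about its layout, so check the repository's copy for the exact fields:

```yaml
# Plausible sketch of config/logger/wandb.yaml; verify against the actual file.
_target_: pytorch_lightning.loggers.WandbLogger
project: CoGIR
entity: your-wandb-entity
notes: ""
```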
During training, metrics like loss and validation accuracy will be logged automatically. You can also log custom images and other data to W&B from the training script.
For example, to visualize input-output pairs, the `train.py` script logs images every few steps:

```python
self.logger.log_image(
    key="train/input_output_target",
    images=[wandb.Image(x) for x in input_output_target],
)
```
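For context, a call like this would typically sit inside a LightningModule training step. The sketch below assumes hypothetical attribute names (`self.model`, `self.criterion`), batch layout, and logging interval; it is not the repository's exact code:

```python
# Sketch of where image logging might live inside a LightningModule.
# Names, batch layout, and the logging interval are illustrative assumptions.
import pytorch_lightning as pl
import torch
import wandb


class RefinementModule(pl.LightningModule):  # hypothetical module name
    def training_step(self, batch, batch_idx):
        inputs, targets = batch
        outputs = self.model(inputs)              # assumed model attribute
        loss = self.criterion(outputs, targets)   # assumed criterion attribute

        # Log a side-by-side of input / output / target every 100 steps.
        if batch_idx % 100 == 0:
            input_output_target = torch.cat([inputs, outputs, targets], dim=-1)
            self.logger.log_image(
                key="train/input_output_target",
                images=[wandb.Image(x) for x in input_output_target],
            )
        return loss
```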
To run a new experiment with W&B tracking enabled, simply run:

```bash
python train.py
```

Make sure you have modified the W&B configuration to suit your project. All experiment data will be saved and viewable in your W&B dashboard.