
ImageBiTe: A Bias Tester framework for T2I Models

ImageBiTe is a framework for testing representational harms in text-to-image (T2I) models. Given an ethical requirements model, ImageBiTe prompts a T2I model and evaluates the generated images to detect stereotyping, under-representation, ex-nomination, and denigration of protected groups. It includes a library of prompt templates to test for sexism, racism, ageism, and discrimination against people with physical impairments. Contributors are welcome to add new ethical concerns to assess, along with prompt templates for them.

Code Repository Structure

The following tree shows the repository's main sections and contents:

└── imagebite                               // The source code of the package.
      ├── imagebite.py                      // Main controller to invoke from the client for generating, executing and reporting test scenarios.
      └── resources
             ├── factories.json             // Endpoints for invoking and testing online T2I models.
             ├── generic_validators.json    // Prompts for generic, qualitative evaluation of generated images.
             └── prompts_t2i_CO_RE.csv      // The prompt template library in CSV format.
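
As an illustration, the intended workflow with the imagebite.py controller is sketched below. This is only a minimal sketch: loading the .env keys is standard python-dotenv usage, but the class and method names in the commented lines are hypothetical placeholders for the generate/execute/report steps described above, not the framework's actual API.

    # Minimal workflow sketch; names marked "hypothetical" are NOT ImageBiTe's real API.
    from dotenv import load_dotenv

    load_dotenv()  # reads API_KEY_OPENAI, API_KEY_HUGGINGFACE and GITHUB_* from .env

    # The controller in imagebite/imagebite.py covers three steps:
    #   1. generate test scenarios from an ethical requirements model and the
    #      prompt template library (resources/prompts_t2i_CO_RE.csv),
    #   2. execute them against a T2I model (endpoints in resources/factories.json),
    #   3. report detected harms (stereotyping, under-representation, ex-nomination,
    #      denigration).
    #
    # bite = ImageBiTe(requirements="requirements_model.json")   # hypothetical
    # scenarios = bite.generate()                                # hypothetical
    # results = bite.execute(scenarios)                          # hypothetical
    # bite.report(results)                                       # hypothetical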

Requirements

  • huggingface-hub 0.26.1
  • numpy 2.1.2
  • openai 1.52.1
  • pandas 2.2.3
  • pillow 11.0.0
  • python-dotenv 1.0.1
  • PyGithub 2.4.0
  • requests 2.32.3
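
Assuming the repository does not already ship a pinned requirements file, the list above translates directly into one (the file name and install command are the usual pip conventions, not something prescribed by the project):

    # requirements.txt (pinned to the versions listed above)
    huggingface-hub==0.26.1
    numpy==2.1.2
    openai==1.52.1
    pandas==2.2.3
    pillow==11.0.0
    python-dotenv==1.0.1
    PyGithub==2.4.0
    requests==2.32.3

Install with pip install -r requirements.txt, ideally inside a virtual environment.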

Your project needs the following keys in the .env file:

  • API_KEY_OPENAI, to properly connect to OpenAI's API and models.
  • API_KEY_HUGGINGFACE, to properly invoke Inference APIs in HuggingFace.
  • GITHUB_REPO, the name of the public repository to which generated images are uploaded.
  • GITHUB_REPO_PREFIX, the URL of the public repository to which generated images are uploaded.
  • GITHUB_TOKEN, to properly connect to the GitHub repository.
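
For reference, a .env file using these keys could look as follows; every value is a placeholder to be replaced with your own credentials and repository details:

    # .env (placeholder values)
    API_KEY_OPENAI=sk-...your-openai-key...
    API_KEY_HUGGINGFACE=hf_...your-huggingface-token...
    GITHUB_REPO=your-user/your-images-repo
    GITHUB_REPO_PREFIX=https://github.com/your-user/your-images-repo
    GITHUB_TOKEN=ghp_...your-github-token...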

Governance and Contribution

The development and community management of this project follow the governance rules described in the GOVERNANCE.md document.

At SOM Research Lab we are dedicated to creating and maintaining welcoming, inclusive, safe, and harassment-free development spaces. Anyone participating will be subject to and agrees to sign on to our CODE_OF_CONDUCT.md.

This project is part of a research line of the SOM Research Lab, but we are open to contributions from the community. Any comment is more than welcome! If you are interested in contributing to this project, please read the CONTRIBUTING.md file.

Publications

Related publications:

Sergio Morales, Robert Clarisó and Jordi Cabot. "ImageBiTe: A Framework for Evaluating Representational Harms in Text-to-Image Models," 4th International Conference on AI Engineering – Software Engineering for AI (CAIN '25), April 27-28, 2025, Ottawa, Canada (to be published)

License

License: MIT

The source code of this repository is licensed under the MIT License, which you can find in the LICENSE.md file.
