SAFE-MEME

SAFE-MEME is a novel framework for fine-grained hate speech detection in memes. It comes in two variants: (a) SAFE-MEME-QA, which follows a question-answering (Q&A) approach, and (b) SAFE-MEME-H, which uses hierarchical classification to assign each meme one of three labels: explicit, implicit, or benign.
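
To make the hierarchical variant concrete, here is a minimal Python sketch of how a three-way label (explicit / implicit / benign) can be produced by a two-stage decision. The two-stage split and all names below are assumptions for illustration and are not taken from this repository's code; the boolean inputs stand in for the outputs of the underlying stage-wise classifiers.

    # A minimal sketch of the hierarchical decision described above, assuming a
    # two-stage split (hateful vs. benign, then explicit vs. implicit). The names
    # below are illustrative placeholders, not this repository's actual API.
    from enum import Enum

    class MemeLabel(Enum):
        EXPLICIT = "explicit"   # overtly hateful content
        IMPLICIT = "implicit"   # hate conveyed indirectly
        BENIGN = "benign"       # no hateful content

    def classify_hierarchically(is_hateful: bool, is_explicit: bool) -> MemeLabel:
        # Stage 1: separate hateful memes from benign ones.
        if not is_hateful:
            return MemeLabel.BENIGN
        # Stage 2: split hateful memes into explicit vs. implicit.
        return MemeLabel.EXPLICIT if is_explicit else MemeLabel.IMPLICIT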

General Instructions:
