# SAFE-MEME

SAFE-MEME is a framework for fine-grained hate-speech detection in memes. It comes in two variants: (a) SAFE-MEME-QA, which uses a question-answering (Q&A) approach, and (b) SAFE-MEME-H, which uses hierarchical classification to assign each meme to one of three classes: explicit, implicit, or benign.
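The hierarchical variant can be pictured as a two-stage decision: first hateful vs. benign, then explicit vs. implicit for the hateful cases. The sketch below is only an illustration with placeholder scores; the function name, thresholds, and score inputs are hypothetical stand-ins, not the models defined by this repository.

```python
# Hypothetical sketch of a SAFE-MEME-H-style hierarchical decision.
# `hate_score` and `explicit_score` stand in for the outputs of two
# classifier stages; the 0.5 thresholds are illustrative assumptions.

def classify_meme(hate_score: float, explicit_score: float) -> str:
    """Stage 1: hateful vs. benign; Stage 2: explicit vs. implicit."""
    if hate_score < 0.5:        # Stage 1: meme judged not hateful
        return "benign"
    if explicit_score >= 0.5:   # Stage 2: hate expressed overtly
        return "explicit"
    return "implicit"           # hateful, but conveyed indirectly

print(classify_meme(0.2, 0.9))  # → benign
print(classify_meme(0.8, 0.9))  # → explicit
print(classify_meme(0.8, 0.1))  # → implicit
```

The actual stages in SAFE-MEME-H are learned models over the meme's image and text; this snippet only shows the shape of the three-way hierarchical routing.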

## General Instructions