Containerized Azure Function app integrated with Azure AI Content Safety to moderate text or image content.
The containerized Azure Function app posts text or images (jpg, jpeg, bmp, png and tiff) to the Azure AI Content Safety service, which analyzes the content and returns a severity level for each of the Hate, SelfHarm, Sexual and Violence categories. Severity ranges from 0 to 6; the higher the value, the less safe the content is to publish.
A sample API response is shown below.
[
  {
    "category": "Hate",
    "severity": 0
  },
  {
    "category": "SelfHarm",
    "severity": 0
  },
  {
    "category": "Sexual",
    "severity": 0
  },
  {
    "category": "Violence",
    "severity": 2
  }
]
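A minimal sketch of how a caller might interpret this response. The helper names (`flagged_categories`, `is_safe_to_publish`) and the severity threshold of 2 are assumptions for illustration, not part of the function app.

```python
# Sketch: decide whether content is publishable from the severity list
# returned by Azure AI Content Safety. Helper names and the threshold
# value are hypothetical; tune the threshold to your moderation policy.
from typing import Dict, List

SEVERITY_THRESHOLD = 2  # assumption: block anything at severity 2 or above

def flagged_categories(analysis: List[Dict], threshold: int = SEVERITY_THRESHOLD) -> List[str]:
    """Return the categories whose severity meets or exceeds the threshold."""
    return [item["category"] for item in analysis if item["severity"] >= threshold]

def is_safe_to_publish(analysis: List[Dict], threshold: int = SEVERITY_THRESHOLD) -> bool:
    """Content is safe only if no category is flagged."""
    return not flagged_categories(analysis, threshold)

# The sample response from above: only Violence reaches severity 2.
sample = [
    {"category": "Hate", "severity": 0},
    {"category": "SelfHarm", "severity": 0},
    {"category": "Sexual", "severity": 0},
    {"category": "Violence", "severity": 2},
]
print(flagged_categories(sample))  # → ['Violence']
```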
Set up an Azure AI Content Safety resource manually via the Azure Portal (see "What is Azure AI Content Safety? - Azure AI services | Microsoft Learn"), then populate the local settings file with the "Key" and "Endpoint" values from the portal. For the PoC you can use the Free tier.
Function app local settings file:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "ContentSafetyApiKey": "{key from portal}",
    "ContentSafetyApiEndpoint": "https://{ your content safety from portal}.cognitiveservices.azure.com/"
  }
}
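Entries under "Values" are surfaced to the function host as environment variables, so the key and endpoint can be read the same way at runtime. A minimal sketch (the stand-in values and the `2023-10-01` API version are assumptions for illustration):

```python
import os

# Stand-in values so the sketch is self-contained; in the real function
# host these are populated from local.settings.json / app settings.
os.environ.setdefault("ContentSafetyApiKey", "test-key")
os.environ.setdefault("ContentSafetyApiEndpoint", "https://example.cognitiveservices.azure.com/")

key = os.environ["ContentSafetyApiKey"]
endpoint = os.environ["ContentSafetyApiEndpoint"].rstrip("/")

# Text-analysis REST route; api-version is an assumption, check the
# Content Safety docs for the version your resource supports.
url = f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01"
print(url)
```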
Also, install Docker Desktop if it is not already on your machine, so you can run the function app inside a Docker container.
To set up the Cognitive Services account via Terraform, set the kind attribute to "ContentSafety" and sku_name to "F0" for the free tier, or "S0" to keep costs low on a paid tier.
resource "azurerm_resource_group" "ai-example" {
  name     = "ai-example-resources"
  location = "West Europe"
}

resource "azurerm_cognitive_account" "ai-example" {
  name                = "ai-example-account"
  location            = azurerm_resource_group.ai-example.location
  resource_group_name = azurerm_resource_group.ai-example.name
  kind                = "ContentSafety"
  sku_name            = "F0"

  tags = {
    Acceptance = "Test"
  }
}
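To wire the Terraform-created resource into the local settings file above, you can export the endpoint and key as outputs; `azurerm_cognitive_account` exposes `endpoint` and `primary_access_key` attributes. A sketch:

```terraform
# Surface the values needed for ContentSafetyApiEndpoint / ContentSafetyApiKey.
output "content_safety_endpoint" {
  value = azurerm_cognitive_account.ai-example.endpoint
}

output "content_safety_key" {
  value     = azurerm_cognitive_account.ai-example.primary_access_key
  sensitive = true
}
```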
For more details, please see the additional Azure AI Content Safety resources below.