Content Safety Studio & Cognitive Services
I working through the AI900 labs and am getting the following error when trying to execute some of the sample Content Safety Studio queries. Your account does not have access to this resource, please contact your resource owner to get access. I have…
Error "Subdomain does not map to a resource".
I am doing "Microsoft Learn Challenge | Ignite Edition: Build trustworthy AI solutions on Microsoft Azure" and in the module "Moderate content and detect harm in Azure AI Foundry by using Content Safety" in the "Exercise-Text…
Content Safety availability in SEA
Is "Content Safety" available on Azure's Data Centers in Singapore? I don’t see it in the table. Thanks!
How to remove content filtering from Azure Open AI Deployment
Hello, I would like to ask for clarification regarding the removal of content filtering in an OpenAI deployment. I currently have two default profiles and have created one with all settings set to "low." However, I would like to completely…
Does azure-ai-contentsafety client library have functionality to call promptShield and protectedMaterialDetection endpoint? If not when will it come?
Does azure-ai-contentsafety client library have functionality to call promptShield and protectedMaterialDetection end? If not when will it come? I want to call Azure AI content safety services' promptShield and protectedMaterial endpoint through Azure…
Does azure-ai-contentsafety client library have functionality to call promptShield and protectedMaterialDetection endpoint? If not when will it come?
Does azure-ai-contentsafety client library have functionality to call promptShield and protectedMaterialDetection endpoint? If not when will it come? I want to call Azure AI content safety's promptShield and protectedMaterialDetection endpoint through…
How to remove trained custom categories in Azure AI Studio
I've been working with custom categories in Azure AI Studio for content safety and have successfully trained some models. However, I've run into a limitation where each Content Safety Resource only allows a certain number of custom categories. Currently,…
How to fix ai azure content safety customised cateogries training data failed issue
I have been trying to create AI azure content safety custom categories. And I have uploaded training data, and which seems fine (attaching the training data for your reference) but after 5-6 hours I am getting an following error. I have tried multiple…
How can I disable the Content Filter in Azure Open AI
I am writing to express my frustration and disappointment with the persistent issues I have been facing regarding the Content Filter functionality in OpenAI. The filter is malfunctioning, particularly for Spanish-language content. For example, it flagged…
DALL-E 3 Error: Content Policy Violation on Image Generation
When attempting to generate an image using the prompt "depict goku versus naruto," the following error occurs: openai.BadRequestError: Error code: 400 - {'error': {'code': 'content_policy_violation', 'inner_error': {'code':…
Response is being content_filtered even though every thing is safe and not filtered
I'm getting this response when I call Azure open AI service: {'choices': [{'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity':…
Getting the error when I try to evaluate quality, risk and safety metrics in Azure AI Studio.
Getting error : "unsupported operand type(s) for +: 'NoneType' and 'NoneType'", when calculating quality, risk and safety metrics in Azure AI Studio. This worked fine until last week, facing errors from the previous 2-3 days.
Why does GPT-4-O in azure openAI randomly return inappropriate content errors when the content is appropriate?
good morning, i'm using gpt4-o with azure openai in the US2 region, i've observed that it often returns inappropriate content errors knowing that my content is not at all inappropriate, the same request can be considered as inappropriate content at time…
Does no one care about these absurd content restriction policies?
Look at the prompt, even the simplest prompt can be filtered, and it shows that no rules have been violated. This is obviously a very serious bug, which almost makes Dall-E completely unusable on Azure, and also affects the gpt model. I have reported…
Unable to moderate images using custom categories rapid api
I have created an image-based incident in azure custom categories rapid api and trained it well with lot of image samples and deployed it. After deploying i have tested it with a sample image it is not giving proper moderation result. Despite giving a…
Microsoft Phishing URL Detection
Hi, I am building a simple app for educational purposes. I was wondering if there is a service in Azure that I can use, to which I can pass a URL to it and get a response that tells me whether it is phishing. Thanks in advance for any help. Regards, Mani
Azure Services for Video Content Moderation
I have used several Azure services for content moderation, including Azure AI Computer Vision, Azure AI Content Safety, and Azure Video Indexer. Among these, only Video Indexer supports video moderation directly, but it also provides features like…
Custom content filters and blocklists for GPT 4o in Azure OpenAI
I have GPT 4o deployed in North Central US region in my Azure OpenAI resource. I have created a custom content filter with custom blocklists (using instructions from https://zcusa.951200.xyz/en-us/azure/ai-services/openai/how-to/use-blocklists) via…
Account Access Issue
When I run Content Safety example, I meet the type of error. Error Your account does not have access to this resource, please contact your resource owner to get access. How can I resolve it?
how to get 100% accuracy in groundedness
need to know about an accuracy in azure content safety with respect to groundedness