Gemini Pro API Blocking Replies


I'm running the Gemini API and added code to avoid safety blocking, but my prompts still get blocked. The same prompts are not blocked when I use ChatGPT.

Here is my Python code:

import google.generativeai as genai

def get_gemini_response(question, safety_settings=None):
    # If safety settings are not provided, set 'block_none' for all categories
    if safety_settings is None:
        safety_settings = {
            'SEXUALLY_EXPLICIT': 'block_none',
            'HATE_SPEECH': 'block_none',
            'HARASSMENT': 'block_none',
            'DANGEROUS_CONTENT': 'block_none'
        }
    # Pass the settings through to the model call so they actually apply
    model = genai.GenerativeModel('gemini-pro')
    response = model.generate_content(question, safety_settings=safety_settings)
    return response.text

Reply from Gemini Pro (note that HATE_SPEECH was rated HIGH, which is what triggered the block):

BlockedPromptException: block_reason: SAFETY
safety_ratings { category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_HATE_SPEECH probability: HIGH }
safety_ratings { category: HARM_CATEGORY_HARASSMENT probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE }

The safety settings page from Google implies the prompt may still be blocked even with the new settings:

Now provide the same prompt to the model with newly configured safety settings, **and you may get a response.**


response = model.generate_content('[Questionable prompt here]',
                                  safety_settings={'HARASSMENT':'block_none'})
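
Even with relaxed settings, a blocked prompt raises BlockedPromptException, so one option is to catch it and fail gracefully. Here is a minimal sketch using the google.generativeai SDK; the prompt string is the placeholder from the docs:

import google.generativeai as genai
from google.generativeai.types import BlockedPromptException

model = genai.GenerativeModel('gemini-pro')
try:
    response = model.generate_content(
        '[Questionable prompt here]',
        safety_settings={'HARASSMENT': 'block_none'},
    )
    print(response.text)
except BlockedPromptException as e:
    # The prompt itself was rejected before any text was generated;
    # the exception message carries the block_reason and safety_ratings.
    print(f'Prompt blocked: {e}')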

There is 1 answer below.


Ran into the same issue trying to process descriptions of injuries and incidents.

The safety_settings argument takes a list of SafetySetting objects.

  1. You have to import these classes:

    from vertexai.preview.generative_models import (
        GenerativeModel,
        GenerationResponse,
        HarmCategory,
        HarmBlockThreshold,
    )
    from google.cloud.aiplatform_v1beta1.types.content import SafetySetting
    
  2. Pass them in the safety_settings argument as follows:

    model = GenerativeModel("gemini-pro")
    response: GenerationResponse = model.generate_content(
        prompt_text.format(text),
        generation_config={
            "max_output_tokens": 2048,
            "temperature": 0,
            "top_p": 1,
        },
        safety_settings=[
            SafetySetting(
                category=HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
                threshold=HarmBlockThreshold.BLOCK_NONE,
            ),
            SafetySetting(
                category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
                threshold=HarmBlockThreshold.BLOCK_NONE,
            ),
            SafetySetting(
                category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
                threshold=HarmBlockThreshold.BLOCK_NONE,
            ),
            SafetySetting(
                category=HarmCategory.HARM_CATEGORY_HARASSMENT,
                threshold=HarmBlockThreshold.BLOCK_NONE,
            ),
        ],
    )
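
If you would rather not spell out four nearly identical SafetySetting objects, the same list can be built with a comprehension; this is equivalent to the explicit list above:

    safety_settings = [
        SafetySetting(category=category, threshold=HarmBlockThreshold.BLOCK_NONE)
        for category in (
            HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
            HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
            HarmCategory.HARM_CATEGORY_HATE_SPEECH,
            HarmCategory.HARM_CATEGORY_HARASSMENT,
        )
    ]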