Running the Gemini API: I added code to avoid blocking, but I am still getting blocked. The same prompts do not get blocked when using ChatGPT. Why?
Code in Python:

```python
import google.generativeai as genai

def get_gemini_response(question, safety_settings=None):
    # If safety settings are not provided, default to 'block_none' for all categories
    if safety_settings is None:
        safety_settings = {
            'SEXUALLY_EXPLICIT': 'block_none',
            'HATE_SPEECH': 'block_none',
            'HARASSMENT': 'block_none',
            'DANGEROUS_CONTENT': 'block_none',
        }
    model = genai.GenerativeModel('gemini-pro')
    response = model.generate_content(question, safety_settings=safety_settings)
    return response.text
```
Reply from Gemini Pro:

```
BlockedPromptException: block_reason: SAFETY
safety_ratings { category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_HATE_SPEECH probability: HIGH }
safety_ratings { category: HARM_CATEGORY_HARASSMENT probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE }
```
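The exception can also be caught and inspected rather than letting the call fail; a minimal sketch, assuming the `google.generativeai` SDK and a placeholder prompt:

```python
import google.generativeai as genai
from google.generativeai.types import BlockedPromptException

model = genai.GenerativeModel("gemini-pro")
try:
    response = model.generate_content("[Your prompt here]")
    print(response.text)
except BlockedPromptException as exc:
    # The exception carries the prompt feedback: the block_reason plus
    # per-category safety_ratings, as in the output above.
    print(exc)
```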
The safety settings page from Google implies the prompt may still get blocked:

> Now provide the same prompt to the model with newly configured safety settings, **and you may get a response.**

```python
response = model.generate_content('[Questionable prompt here]',
                                  safety_settings={'HARASSMENT': 'block_none'})
```
Ran into the same issue trying to process descriptions of injuries and incidents.
The `safety_settings` argument takes a list of `SafetySetting` objects. You have to import these classes:
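For example, a sketch assuming the Vertex AI SDK (`vertexai.generative_models`); other Gemini SDKs expose similarly named classes:

```python
import vertexai
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)
```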
Use it in the `safety_settings` argument as follows:
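Continuing the sketch above, with placeholder project, model name, and prompt:

```python
# Placeholder project ID and region; substitute your own.
vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-pro")

# One SafetySetting per harm category, each set to BLOCK_NONE
safety_settings = [
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold=HarmBlockThreshold.BLOCK_NONE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=HarmBlockThreshold.BLOCK_NONE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        threshold=HarmBlockThreshold.BLOCK_NONE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=HarmBlockThreshold.BLOCK_NONE,
    ),
]

response = model.generate_content(
    "[Your prompt here]",
    safety_settings=safety_settings,
)
print(response.text)
```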