    Google’s Gemini deemed “high risk” for kids in research by non-profit

Over the past few months, AI chatbots offered by top names such as OpenAI and Meta have been found engaging in problematic behavior, especially with young users. The latest investigation covers Gemini, finding that Google’s chatbot can share “inappropriate and unsafe” content with kids and teens. 

    What’s new in the chatbot risk arena?

    In an analysis by non-profit Common Sense Media, it was discovered that Gemini Under 13 and Gemini accounts with teen protections enabled are “high risk” for the target audience. “They still expose kids to some inappropriate material and fail to recognize serious mental health symptoms,” the organization shared. 


In its tests, the team discovered that Gemini can share content related to sex, drugs, alcohol, and unsafe mental health suggestions with young users. The report highlights numerous issues with how Gemini handles these conversations, noting that some of its responses can be too complex for children under the age of 13. 

But the risks run deeper. “Gemini U13 doesn’t reject sexual content consistently,” the report points out, adding that some of the AI’s responses contained vivid explanations of sexual content. The non-profit also found that the drug-related filters are not triggered consistently, and as a result, the chatbot occasionally doled out instructions on obtaining substances such as marijuana, ecstasy, Adderall, and LSD.

    What’s next?

In the wake of the investigation, the non-profit suggests that Gemini Under 13 should only be used under the strict supervision of guardians. “Common Sense Media recommends that no user under 18 use chatbots for mental health advice or emotional support,” the risk assessment report states. 


    It further advises parents to keep a vigilant eye on their children’s AI usage and interpret the answers for them. As for Google, the tech giant has been asked to fix the calibration of responses given by Gemini to specific age groups, perform extensive testing with kids, and go beyond simple content filters. 

This isn’t the first report of its kind. In the wake of recent uproar, OpenAI announced that it will soon roll out parental controls in ChatGPT, along with an alert system that notifies guardians when their wards show signs of acute distress. Meta also recently made changes to ensure that Meta AI no longer discusses eating disorders, self-harm, or suicide with teen users, and steers clear of romantic conversations with them. 


