    ChatGPT and Gemini are nudging users towards illegal gambling, says investigation

    A new investigation suggests that popular AI chatbots, including ChatGPT and Gemini, may inadvertently steer users toward illegal gambling websites. The analysis, conducted by journalists at The Guardian and Investigate Europe, tested several widely used AI systems and found that many could be prompted to recommend unlicensed offshore casinos operating outside UK regulations.

    Image: ilgmyzin / Unsplash

    The tests covered five AI tools from major tech companies: OpenAI, Google, Microsoft, Meta, and xAI (Grok). Researchers asked the chatbots questions about online casinos and gambling restrictions. In many cases, the systems returned lists of illegal betting sites, along with tips on how to use them. Some bots even suggested ways to bypass safeguards designed to protect vulnerable users.

    Advice on bypassing gambling protections

    One of the most troubling findings was how easily chatbots could be prompted to help users sidestep responsible-gambling systems. In the UK, for example, GamStop allows individuals to self-exclude from licensed gambling sites. But several AI systems reportedly offered guidance on finding casinos not connected to the scheme.

    The investigation also found that some bots highlighted features designed to attract gamblers, such as large bonuses, quick payouts, or the ability to use cryptocurrency. These casinos often operate under minimal oversight in offshore jurisdictions like Curaçao, which regulators say can make it harder to protect users from fraud or addiction.

    In response, the companies behind the chatbots say they are working to improve their safety systems. OpenAI stated that ChatGPT is designed to refuse requests that facilitate illegal behavior, while Microsoft said its Copilot assistant includes multiple layers of safeguards to prevent harmful recommendations.

    Still, the findings add to growing scrutiny over how generative AI systems handle sensitive topics such as mental health, gambling, and illegal activity. Regulators in the UK have already warned that online platforms, including AI services, must do more to prevent harmful or illegal content under the country’s Online Safety Act.
