OpenAI’s Whisper model is reportedly ‘hallucinating’ in high-risk situations



    Researchers have found that OpenAI's AI-powered transcription tool, Whisper, is inventing things that were never said, with potentially dangerous consequences, according to a new report.

    According to AP News, the model is fabricating text (commonly referred to as a 'hallucination'), generating passages that have no basis in the audio it was given and producing nonsensical or misleading output. US researchers have found that Whisper's invented text can include racial commentary, violent language and made-up medical treatments.



