Meta Got Caught Gaming AI Benchmarks


Meta released two new Llama 4 models, Scout and Maverick, over the weekend, claiming that Maverick outperforms GPT-4o and Gemini 2.0 Flash on benchmarks. Maverick quickly secured the number-two spot on LMArena, behind only Gemini 2.5 Pro. Researchers have since discovered that the model Meta submitted to LMArena was an “experimental chat version” of Maverick “optimized for conversationality,” not the publicly available release. In response, LMArena said that “Meta’s interpretation of our policy did not match what we expect from model providers” and announced policy updates to prevent similar issues.

