
Links for 2023-07-01

  • MDN can now automatically lie to people seeking technical information · Issue #9208

    Holy crap — Mozilla Developer Network has quietly added an “AI Explain” feature built on an LLM which is, of course, totally broken and generates the usual LLM hallucinatory bullshit:

    The generated text appears to be unreviewed, unreliable, unaccountable, and even unable to be corrected. at least if the text were baked into a repository, it could be subject to human oversight and pull requests, but as best i can tell it’s just in a cache somewhere? it seems like this feature was conceived, developed, and deployed without even considering that an LLM might generate convincing gibberish, even though that’s precisely what they’re designed to do. and far from disclaiming that the responses might be confidently wrong, you have called it a “trusted companion”. i don’t understand this.

    Expected behavior: i would like MDN to contain correct information

    Actual behavior: MDN has generated a convincing-sounding lie and there is no apparent process for correcting it

    Facepalm. (via Abban)

    (tags: mozilla fail llm ai ml features mdn)