Links for 2023-03-31

  • A misleading open letter about sci-fi AI dangers ignores the real risks

    This essay is spot-on about the recent AI open letter from the Future of Life Institute, which calls for “a 6-month pause on training language models ‘more powerful than’ GPT-4”:

    Over 1,000 researchers, technologists, and public figures have already signed the letter. The letter raises alarm about many AI risks: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” We agree that misinformation, impact on labor, and safety are three of the main risks of AI. Unfortunately, in each case, the letter presents a speculative, futuristic risk, ignoring the version of the problem that is already harming people. It distracts from the real issues and makes it harder to address them. The letter has a containment mindset analogous to nuclear risk, but that’s a poor fit for AI. It plays right into the hands of the companies it seeks to regulate.
    Couldn’t agree more.

    (tags: ai scifi future risks gpt-4 regulation)
