Bert Hubert: “governments should robustly and enthusiastically fund research into climate engineering [i.e. geoengineering]. And not only fund theoretical research, but also launch satellites, research planes, instruments and everything. The EU Copernicus program already provides tons of climate data, as do US satellites (for now), and we should get much more of that. Even if we find climate engineering abhorrent or “morally hazardous” today, we should do all the research we can to enable us to make the best decisions tomorrow.”
Allison Parrish is making great work.
Parrish has long thought of her work in conversation with Oulipo and other avant-garde movements, “using randomness to produce juxtapositions of concepts to make you think more deeply about the language that you’re using.” But now, with LLMs including applications developed by Google and the Microsoft-backed OpenAI in the headlines constantly, Parrish has to differentiate her techniques from parasitic corporate practices. “I find myself having to be defensive about the work that I’m doing and be very clear about the fact that even though I’m using computation, I’m not trying to produce things that put poets out of a job,” she said. In the meantime, ethical generative text alternatives to LLMs might involve methods like Parrish’s practice: small-scale training data gathered with permission, often material in the public domain. “Just because something’s in the public domain doesn’t necessarily mean that it’s ethical to use it, but it’s a good starting point,” Parrish told me. … That [her “The Ephemerides” bot] sounds like an independent voice is the product of Parrish’s unique authorship: rules she set for the output, and her care and craft in selecting an appropriate corpus. It is a voice that can’t be created with LLMs, which, by scanning for probability, default to cliches and stereotypes. “They’re inherently conservative,” Parrish said. “They encode the past, literally. That’s what they’re doing with these data sets.”
via Waxy, a search engine that exclusively searches discussion forums
Fantastic quote, this:
The keynote speaker at the Royal Society was another Google employee: Geoffrey Hinton, who for decades has been a central figure in developing deep learning. As the conference wound down, I spotted him chatting with Bostrom in the middle of a scrum of researchers. Hinton was saying that he did not expect A.I. to be achieved for decades. “No sooner than 2070,” he said. “I am in the camp that is hopeless.” “In that you think it will not be a cause for good?” Bostrom asked. “I think political systems will use it to terrorize people,” Hinton said. Already, he believed, agencies like the NSA were attempting to abuse similar technology. “Then why are you doing the research?” Bostrom asked. “I could give you the usual arguments,” Hinton said. “But the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air — an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”