This is a good article on FB’s disastrous situation, which would be bad enough were it not endangering our societies. Despite warnings from Google and others, they switched their engagement optimization tactics to rely heavily on machine learning, which (as noted elsewhere) devolves into something thoroughly inscrutable:
“It developed an internal tool known as FBLearner Flow that made it easy for engineers without machine learning experience to develop whatever models they needed at their disposal. By one data point, it was already in use by more than a quarter of Facebook’s engineering team in 2016. Many of the current and former Facebook employees I’ve spoken to say that this is part of why Facebook can’t seem to get a handle on what it serves up to users in the news feed. Different teams can have competing objectives, and the system has grown so complex and unwieldy that no one can keep track anymore of all of its different components. […]

“64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features. […]

These phenomena are far worse in regions that don’t speak English because of Facebook’s uneven coverage of different languages. […]

When the war in Tigray[, Ethiopia] first broke out in November, [AI ethics researcher Timnit] Gebru saw the platform flounder to get a handle on the flurry of misinformation. […]

When fake news, hate speech, and even death threats aren’t moderated out, they are then scraped as training data to build the next generation of [language models]. And those models, parroting back what they’re trained on, end up regurgitating these toxic linguistic patterns on the internet.”

What. A. Mess.
‘A community-contributed collection of software-related incident reports’ — this looks like it’ll be a great resource.
Looks like this is disinformation produced by an Aston Martin-affiliated lobbyist/PR company — the true figure is 18,000 miles.