Simon McGarr: “If you think that a public affairs show has failed to address a matter with proper balance, you can (Tweet) say it to the breeze or complain. There is a process to follow to make an effective complaint 1) complain to broadcaster 2) complain to BAI if unhappy with response.” Thread with more details, and yet more at https://twitter.com/IrishTV_films/status/927172642544783360
I am 100% behind this. There’s so much potential for hidden bias and unethical discrimination in careless AI/ML deployment.
While AI holds significant promise, the rapid push to integrate these systems into high-stakes domains is exposing serious challenges. In criminal justice, a team at ProPublica, and multiple academics since, have investigated how an algorithm used by courts and law enforcement to predict recidivism in criminal defendants may be introducing significant bias against African Americans. In a healthcare setting, a study at the University of Pittsburgh Medical Center observed that an AI system used to triage pneumonia patients was missing a major risk factor for severe complications. In education, teachers in Texas successfully sued their school district over being evaluated by a ‘black box’ algorithm, which was exposed to be deeply flawed. This handful of examples is just the start; there is much more we do not yet know. Part of the challenge is that the industry currently lacks standardized methods for testing and auditing AI systems to ensure they are safe and not amplifying bias. Yet early-stage AI systems are being introduced simultaneously across multiple areas, including healthcare, finance, law, education, and the workplace. These systems are increasingly being used to predict everything from our taste in music, to our likelihood of experiencing mental illness, to our fitness for a job or a loan.
‘an essay on YouTube, children’s videos, automation, abuse, and violence, which crystallises a lot of my current feelings about the internet through a particularly unpleasant example from it. […] What we’re talking about is very young children […] being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.’