Links for 2018-01-15

  • The likely user interface which led to Hawaii’s false-alarm incoming-ballistic-missile alert on Saturday 2018-01-13

    @supersat on Twitter: “In case you’re curious what Hawaii’s EAS/WEA interface looks like, I believe it’s similar to this. Hypothesis: they test their EAS authorization codes at the beginning of each shift and selected the wrong option.” This is absolutely classic enterprisey, government-standard web UX — a dropdown template selection and an easily-misclicked pair of tickboxes to choose test or live mode.

    (tags: testing ux user-interfaces fail eas hawaii false-alarms alerts nuclear early-warning human-error)

  • The Death of Microservice Madness in 2018

    Quite a good set of potential gotchas, which I’ve run into myself, including: ‘Real world systems often have poorly defined boundaries’; ‘The complexities of state are often ignored’; ‘The complexities of communication are often ignored’; ‘Versioning can be hard’; ‘Microservices can be monoliths in disguise’.

    (tags: architecture devops microservices services soa coding monoliths state systems)

  • Do algorithms reveal sexual orientation or just expose our stereotypes?

    ‘A study claiming that artificial intelligence can infer sexual orientation from facial images caused a media uproar in the Fall of 2017. […] Michal Kosinski, who co-authored the study with fellow researcher Yilun Wang, initially expressed surprise, calling the critiques “knee-jerk” reactions. However, he then proceeded to make even bolder claims: that such AI algorithms will soon be able to measure the intelligence, political orientation, and criminal inclinations of people from their facial images alone.’ ‘In [this paper], we have shown how the obvious differences between lesbian or gay and straight faces in selfies relate to grooming, presentation, and lifestyle — that is, differences in culture, not in facial structure. […] We’ve demonstrated that just a handful of yes/no questions about these variables can do nearly as good a job at guessing orientation as supposedly sophisticated facial recognition AI. Therefore — at least at this point — it’s hard to credit the notion that this AI is in some way superhuman at “outing” us based on subtle but unalterable details of our facial structure.’

    (tags: culture facial-recognition ai papers facial-structure sexual-orientation lgbt computer-vision)

  • Shanzhai 山寨 China & its Contents

    As he drinks Sino-coffee for around RMB 10, Comrade X might well be wearing the latest ‘ZARE’ couture while watching the TV news streaming on his HiPhone.[2] Back in Guangdong, his girlfriend — a sales consultant at a small stall in one of Shenzhen’s many wholesale electronics markets — sports a ‘high-end replica’ 高仿 Louis Vuitton bag and makes a living selling ‘domestically produced’ 国产 and ‘smuggled’ 水货 smartphones. The imitation products that festoon the couple’s lives are part of ‘shanzhai 山寨 China’. Shanzhai, a word that roughly means ‘mass-produced imitation goods’, has created a Chinese landscape that is littered with products derided by the media, Chinese and international, as ‘copycat’, ‘guerrilla counterfeits’ and ‘knockoffs’, all the work of thieves.[3] Those who feel that their intellectual property and copyright have been infringed by shanzhai producers describe the products as ‘rubbish’, ‘piracy in disguise’ and ‘hooligan’.[4] Regardless of such righteous outrage, shanzhai — the producers, the products and the mentality — continues to flourish as an essential, quasi-legitimate shadow dimension of the Chinese economy. And, in practical terms, shanzhai products give disenfranchised ‘non-consumers’ of the orthodox economy — that is, people who would like to own but can’t afford the ‘original’ products — cut-price access to high-end technologies, as well as offering aspirational shoppers consumer satisfaction.

    (tags: shanzhai china fakes consumerism hiphone smartphones copycat knockoffs imitation consumption)

  • Don Norman on “Human Error”, RISKS Digest Volume 23 Issue 07 2003

    It is far too easy to blame people when systems fail. The result is that over 75% of all accidents are blamed on human error. Wake up people! When the percentage is that high, it is a signal that something else is at fault — namely, the systems are poorly designed from a human point of view. As I have said many times before (even within these RISKS mailings), if a valve failed 75% of the time, would you get angry with the valve and simply continue to replace it? No, you might reconsider the design specs. You would try to figure out why the valve failed and solve the root cause of the problem. Maybe it is underspecified, maybe there shouldn’t be a valve there, maybe some change needs to be made in the systems that feed into the valve. Whatever the cause, you would find it and fix it. The same philosophy must apply to people.

    (tags: don-norman ux ui human-interface human-error errors risks comp.risks failures)
