Skip to content

Justin Mason's Weblog Posts

Links for 2023-03-28

Comments closed

Links for 2023-03-27

  • What Will Transformers Transform? – Rodney Brooks

    This is a great essay on GPT and LLMs:

    Roy Amara, who died on the last day of 2007, was the president of a Palo Alto based think tank, the Institute for the future, and is credited with saying what is now known as Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” This has been a common problem with Artificial Intelligence, and indeed of all of computing. In particular, since I first became conscious of the possibility of Artificial Intelligence around 1963 (and as an eight year old proceeded to try to build my own physical and intelligent computers, and have been at it ever since), I have seen these overestimates many many times.
    I think that GPTs will give rise to a new aphorism (where the last word might vary over an array of synonymous variations): “If you are interacting with the output of a GPT system and didn’t explicitly decide to use a GPT then you’re the product being hoodwinked.” I am not saying everything about GPTs is bad. I am saying that, especially given the explicit warnings from OpenAI, that you need to be aware that you are using an unreliable system. Using an unreliable system sounds awfully unreliable, but in August 2021 I had a revelation at TED in Monterey, California, when Chris Anderson (the TED Chris), was interviewing Greg Brockman, the Chairman of Open AI about an early version of GPT. He said that he regularly asked it questions about code he wanted to write and it very quickly gave him ideas for libraries to use, and that was enough to get him started on his project. GPT did not need to be fully accurate, just to get him into the right ballpark, much faster than without its help, and then he could take it from there. Chris Anderson (the 3D robotics one, not the TED one) has likewise opined (as have responders to some of my tweets about GPT) that using ChatGPT will get him the basic outline of a software stack, in a well tread area of capabilities, and he is many many times more productive than with out it. So there, where a smart person is in the loop, unreliable advice is better than no advice, and the advice comes much more explicitly than from carrying out a conventional search with a search engine. The opposite of useful can also occur, but again it pays to have a smart human in the loop. Here is a report from the editor of a science fiction magazine which pays contributors. He says that from late 2022 through February of 2023 the number of submissions to the magazine increased by almost two orders of magnitude, and he was able to determine that the vast majority of them were generated by chatbots. He was the person in the loop filtering out the signal he wanted, human written science fiction, from vast volumes of noise of GPT written science fiction. Why should he care? Because GPT is an auto-completer and so it is generating variations on well worked themes. But, but, but, I hear people screaming at me. With more work GPTs will be able to generate original stuff. Yes, but it will be some other sort of engine attached to them which produces that originality. No matter how big, and how many parameters, GPTs are not going to to do that themselves. When no person is in the loop to filter, tweak, or manage the flow of information GPTs will be completely bad. That will be good for people who want to manipulate others without having revealed that the vast amount of persuasive evidence they are seeing has all been made up by a GPT. It will be bad for the people being manipulated. And it will be bad if you try to connect a robot to GPT. GPTs have no understanding of the words they use, no way to connect those words, those symbols, to the real world. A robot needs to be connected to the real world and its commands need to be coherent with the real world. Classically it is known as the “symbol grounding problem”. GPT+robot is only ungrounded symbols. It would be like you hearing Klingon spoken, without any knowledge other than the Klingon sound stream (even in Star Trek you knew they had human form and it was easy to ground aspects of their world). A GPT telling a robot stuff will be just like the robot hearing Klingonese. My argument here is that GPTs might be useful, and well enough boxed, when there is an active person in the loop, but dangerous when the person in the loop doesn’t know they are supposed to be in the loop. [This will be the case for all young children.] Their intelligence, applied with strong intellect, is a key component of making any GPT be successful.

    (tags: gpts rodney-brooks ai ml amaras-law hype technology llms future)

  • Employees Are Feeding Sensitive Business Data to ChatGPT

    How unsurprising is this? And needless to say, a bunch of that is being reused for training:

    In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential information, client data, source code, or regulated information to the LLM.  In one case, an executive cut and pasted the firm’s 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient’s name and their medical condition and asked ChatGPT to craft a letter to the patient’s insurance company.

    (tags: chatgpt openai ip privacy data-protection security)

  • GitHub Copilot is open to remote prompt-injection attacks

    GitHub Copilot is also based on a large language model. What does indirect prompt injection do to it? Again, we demonstrate that, as long as an attacker controls part of the context window, the answer is: pretty much anything. Attackers only have to manipulate the documentation of a target package or function. As you reference and use them, this documentation is loaded into the context window based on complex and ever-changing heuristics. We show […] how importing a synthetic library can lead Copilot to introduce subtle or not-so-subtle vulnerabilities into the code generated for you.

    (tags: injection copilot security exploits github llms chatgpt)

Comments closed

Links for 2023-03-26

Comments closed

Links for 2023-03-24

  • Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow

    What we have here is an early sign we’re stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail. It’s a laughable situation but one with potentially serious consequences. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, a miasma that is impossible to map completely or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.

    (tags: google ai ml microsoft openai chatgpt trust spam misinformation disinformation)

Comments closed

Links for 2023-03-23

  • Vatican flag SVG on Wikimedia Commons was incorrect for 5 years, and widely copied

    In 2017 a Wikimedia Commons user changed the inside of the tiara to red because that’s how it appears on the Vatican Coat of Arms. But this assumption turned out to be faulty, because the official flag spec sheet uses different colors than the Coat of Arms. The mistake was quickly noticed by an anonymous IP who wrote an extensive and well-researched explanation of the error on the file’s talk page. Unfortunately, nobody read it, and the mistake lived on for 5 years before another user noticed it and reverted the file.

    (tags: wikipedia wikimedia commons vatican flags oops)

  • ThumbHash

    “A very compact representation of an image placeholder. Store it inline with your data and show it while the real image is loading for a smoother loading experience.”

    (tags: graphics images webdev compression lossy thumbnails)

Comments closed

Links for 2023-03-22

  • new LFP batteries will unlock cheaper electric vehicles

    Lithium ferrous phosphate (LFP) batteries, the type to be produced at the new [Ford] plant are a lower-cost alternative to the nickel- and cobalt-containing batteries used in most electric vehicles in the US and Europe today. While the technology has grown in popularity in China, Ford’s factory, developed in partnership with the Chinese battery giant CATL, marks a milestone in the West. By cutting costs while also boosting charging speed and extending lifetime, LFP batteries could help expand EV options for drivers. 

    (tags: lfp technology ev cars batteries renewable-energy)

  • You Broke Reddit: The Pi-Day Outage : RedditEng

    Quality post-mortem writeup of last week’s Reddit outage. tl;dr: an in-place Kubernetes upgrade broke it. We use blue/green deployments — with two separate parallel k8s clusters — in order to avoid this risk, as k8s upgrades are very very risky in our experience; tiny “minor” changes often seem to cause breakage.

    (tags: k8s kubernetes outages reddit ops post-mortems)

  • Superb thread on effective AI regulation

    from Baldur Bjarnason:

    First, you clarify that for the purposes of Section 230 protection (or similar), whoever provides the AI as a service is responsible for its output as a publisher. If Bing Chat says something offensive then Microsoft would be as liable as if it were an employee; You’d set a law requiring tools that integrate generative AI to attach disclosures to the content. Gmail/Outlook should pop up a notice when you get an email that their AI generated. Word/Docs should have metadata fields and notices when you open files that have used built-in AI capabilities. AI chatbots have to disclose that they are bots. Copilot should add a machine-parsable code comment. You could always remove the metadata, but doing so would establish an intent to deceive; Finally, you’d mandate that all training data sets be made opt-in (or that all of its contents are released under a permissive license) and public. Heavy fines for non-disclosure. Heavy fines for violating opt-in. Even heavier fines for lying about your training data set. Make every AI model a “vegan” model. Remove every ethical and social concern about the provenance and rights regarding the training data.
    I think #3 in particular is the most important of all.

    (tags: ai regulation data-privacy training llm ethics)

  • Bing Chat is still vulnerable to hidden prompt injection attacks

    happily parses hidden text in webpages, acting on information there that isn’t visible to human viewers. Related: , where Matteo Contrini demonstrated an attack to turn it into a scammer with prompt injection.

    (tags: bing-chat bing chatgpt openai prompt-injection exploits attacks hidden-text)

Comments closed

Links for 2023-03-20

  • Pop Culture Pulsar: Origin Story of Joy Division’s Unknown Pleasures Album Cover

    Great dig into the CP1919 pulsar signal plot that was used for “Unknown Pleasures”:

    This plotting of sequences like this, it started just a little bit earlier when we were looking at potentially drifting subpulses within the major pulse itself. So, the thought was, well, is there something like this peak here, which on the next pulse moves over here, and then moves over here, and over there. Actually, would be moving this way in that case – either way. I think Frank Drake and I published a paper in Science Magazine on exactly that issue – suggesting there might be drifting subpulses within the major pulse, which would then get back to the physics of what was causing the emission in the first place. So, then the thought was, well let’s plot out a whole array of pulses, and see if we can see particular patterns in there. So that’s why, this one was the first I did – CP1919 – and you can pick out patterns in there if you really work at it. But I think the answer is, there weren’t any that were real obvious anyway. I don’t really recall, but my bet is that the first one of these that I did, I didn’t bother to block out the stuff, and I found that it was just too confusing. So then, I wrote the program so that I would block out when a hill here was high enough, then the stuff behind it would stay hidden. And it was pretty easy to do from a computer perspective.

    (tags: design joy-division music science physics pulsars astronomy cp1919 dataviz)

  • moyix/gpt-wpre: Whole-Program Reverse Engineering with GPT-3

    This is a little toy prototype of a tool that attempts to summarize a whole binary using GPT-3 (specifically the text-davinci-003 model), based on decompiled code provided by Ghidra. However, today’s language models can only fit a small amount of text into their context window at once (4096 tokens for text-davinci-003, a couple hundred lines of code at most) — most programs (and even some functions) are too big to fit all at once. GPT-WPRE attempts to work around this by recursively creating natural language summaries of a function’s dependencies and then providing those as context for the function itself. It’s pretty neat when it works! I have tested it on exactly one program, so YMMV.

    (tags: gpt-3 reverse-engineering ghidra decompilation reversing llm)

Comments closed

Links for 2023-03-16

Comments closed

Links for 2023-03-15

  • Cat6a FTP Tool-Less Keystone Module

    For future use — CAT6A cable endpoints which don’t require tricky crimping: “no crimp tool required at all, very much worth the extra cost, and they clip into the wall sockets or a patch panel … you can do them with your fingers and a flush snips to get rid of the ends after you push the wires in” says Adam C on ITC Slack, at

    (tags: cat6a wiring home networking cables via:itc)

Comments closed

Links for 2023-03-14

  • Infra-Red, In Situ (IRIS) Inspection of Silicon


    This post introduces a technique I call “Infra-Red, In Situ” (IRIS) inspection. It is founded on two insights: first, that silicon is transparent to infra-red light; second, that a digital camera can be modified to “see” in infra-red, thus effectively “seeing through” silicon chips. We can use these insights to inspect an increasingly popular family of chip packages known as Wafer Level Chip Scale Packages (WLCSPs) by shining infrared light through the back side of the package and detecting reflections from the lowest layers of metal using a digital camera. This technique works even after the chip has been assembled into a finished product. However, the resolution of the imaging method is limited to micron-scale features.

    (tags: electronics hardware reversing bunnie-huang infrared x-ray-vision silicon)

Comments closed

Links for 2023-03-09

  • Seabirds are not at risk from offshore wind turbines

    At least according to this survey by Swedish power giant Vattenfall:

    The movements of herring gulls, gannets, kittiwakes, and great black-backed gulls were studied in detail from April to October, when bird activity is at its height. (This study only looked at four bird species, but Vattenfall says the model can and should be applied to more types of seabirds and to onshore wind farms as well.) The study’s findings: Not a single collision between a bird and a rotor blade was recorded.

    (tags: seabirds birds safety wind-turbines offshore-wind renewables wildlife)

  • Metformin, a new drug to prevent long covid

    ‘Over a thousand people with mild-to-moderate Covid were randomly assigned to 2 weeks of metformin (500 mg pills, 1 on day 1, twice a day for 4 days, then 500 mg in AM and 1000 mg in PM for 9 days) or placebo. There was a 42% reduction of subsequent Long Covid as you can see by the event curve below, which corresponds to an absolute decrease of 4.3%, from 10.6% reduced to 6.3%.’ Still no use for _treating_ long COVID though.

    (tags: covid-19 long-covid metformin drugs papers)

Comments closed

Links for 2023-03-03

Comments closed

Links for 2023-03-02

  • ChatGPT for r/BuyItForLife

    This is actually really effective; the past 3 years of product recommendations from r/BuyItForLife, queryable using ChatGPT (via valen)

    (tags: via:valen ai recommendations search products reviews)

  • Hundreds of residents vent anger over ‘entirely pointless’ hydrogen heating trial

    Greenwashing grey hydrogen as a “renewable” means of keeping home gas heating alive is not going well in Whitby:

    Influential energy analyst Michael Liebreich and University of Cambridge mechanical engineering professor David Cebon drew attention to the now-37 independent studies showing that hydrogen boilers would require about five times more renewable energy than heat pumps — likely making them significantly more expensive to run. “This trial is entirely pointless in terms of proving whether hydrogen is the most cost-effective way of decarbonising homes,” Liebreich told the audience. “Every single systems analysis from every single expert who is not paid for by the gas industry or the heating industry has concluded that hydrogen plays little or no role. “The thing that it’s intended to do, though, is maintain the debate and discussion and the delay [of decarbonisation]. If you’re running a gas network organisation, as our next speaker [Cadent head of strategy, Angela Needle] does, what you really want is to continue to harvest profits off that. If you invest today in a gas distribution network, you get to charge 6% per year for 45 years on that investment and that’s until 2068.”

    (tags: hydrogen h2 grey-hydrogen greenwashing gas natural-gas heating homes decarbonisation)

Comments closed

Links for 2023-03-01

  • Nokia G22

    This is a decent product — “Nokia has announced one of the first budget Android smartphones designed to be repaired at home allowing users to swap out the battery in under five minutes, in partnership with iFixit.” I’ve been planning to buy a more repairable phone for my next iteration, so it’s either this or a Fairphone.

    (tags: android hardware nokia phones right-to-repair repair ifixit)

  • copyright-respecting AI model training

    Alex J Champandard is thinking about how AI model training can be done in a copyright-respecting and legal fashion:

    With the criticism of web-scale datasets, it’s legitimate to ask the question: “What models are trained with best-in-class Copyright practices?” Answer: StyleGAN and FFHQ 100% transparent dataset, clear copyright, opt-in licensing, model respects terms.

    (tags: copyright legal rights ip ai ml models training stylegan ffhq flickr)

  • The tech tycoon martyrdom charade

    Anil Dash:

    It’s impossible to overstate the degree to which many big tech CEOs and venture capitalists are being radicalized by living within their own cultural and social bubble. Their level of paranoia and contrived self-victimization is off the charts, and is getting worse now that they increasingly only consume media that they have funded, created by their own acolytes. In a way, it’s sort of like a “VC Qanon”, and it colors almost everything that some of the most powerful people in the tech industry see and do — and not just in their companies or work, but in culture, politics and society overall. We’re already seeing more and more irrational, extremist decision-making that can only be understood through this lens, because on its own their choices seem increasingly unfathomable.

    (tags: vc tech anil-dash radicalization politics us-politics)

Comments closed

Links for 2023-02-20

  • Better Thermostat

    Interesting smart home component for Home Assistant —

    This custom component will add crucial features to your climate-controlling TRV (Thermostatic Radiator Valves) to save you the work of creating automations to make it smart. It combines a room-temperature sensor, window/door sensors, weather forecasts, or an ambient temperature probe to decide when it should call for heat and automatically calibrate your TRVs to fix the imprecise measurements taken in the radiator’s vicinity.
    So basically if you have smart TRVs and a room temperature sensor, you can drive that as a pair.

    (tags: thermostat smart-home home-assistant heating trvs)

Comments closed

Links for 2023-02-16

Comments closed

Links for 2023-02-14

  • a COVID-aware activity tracker

    Interesting thought experiment regarding chronic disease, long COVID, ME/CFS etc: ‘what might be in a convalescence mode, or a rest mode? And while I’m thinking of that, there’s a separate need, I think (hey! validate through research!) for, I don’t know, a chronic illness mode, because convalescence and rest are different things with different qualities distinct from the requirements and needs of people with long-term chronic illnesses. Some people who responded to my thinking-out-loud thread shared that you can use sleep tracking as a way to inform the spoons-for-the-day.’

    (tags: apple fitness accessibility convalescence chronic-disease activity-tracking long-covid me)

Comments closed

Links for 2023-02-13

  • A New Drug Switched Off My Appetite. What’s Left? | WIRED

    How long is it before there’s an injection for your appetites, your vices? Maybe they’re not as visible as mine. Would you self-administer a weekly anti-avarice shot? Can Big Pharma cure your sloth, lust, wrath, envy, pride? Is this how humanity fixes climate change—by injecting harmony, instead of hoping for it at Davos?

    (tags: mounjaro food eating weight calories future)

  • Silicon Valley tech companies are the real paperclip maximizers

    Another good Ted Chiang article —

    Elon Musk spoke to the National Governors Association and told them that “AI is a fundamental risk to the existence of human civilization.” […] This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies. Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

    (tags: superintelligence ted-chiang silicon-valley capitalism ai future civilization paperclip-maximisers)

Comments closed

Links for 2023-02-08

Comments closed

Links for 2023-02-02

Comments closed

Links for 2023-01-31

  • Study of 500,000 Medical Records Links Viruses to Alzheimer’s Again And Again

    While not demonstrating a causal link, the correlations are pretty striking — good argument for greatly increasing vaccination rates for many viral diseases.

    Around 80 percent of the viruses implicated in brain diseases were considered ‘neurotrophic’, which means they could cross the blood-brain barrier. “Strikingly, vaccines are currently available for some of these viruses, including influenza, shingles (varicella-zoster), and pneumonia,” the researchers write. “Although vaccines do not prevent all cases of illness, they are known to dramatically reduce hospitalization rates. This evidence suggests that vaccination may mitigate some risk of developing neurodegenerative disease.” The impact of viral infections on the brain persisted for up to 15 years in some cases. And there were no instances where exposure to viruses was protective.

    (tags: viruses health medicine vaccines vaccination alzheimers parkinsons diseases)

Comments closed

Links for 2023-01-30

Comments closed

Links for 2023-01-24

  • CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism

    CNET used an AI to generate automated content for their site, and are definitely in the “finding out” stage from the looks of things:

    All told, a pattern quickly emerges. Essentially, CNET’s AI seems to approach a topic by examining similar articles that have already been published and ripping sentences out of them. As it goes, it makes adjustments — sometimes minor, sometimes major — to the original sentence’s syntax, word choice, and structure. Sometimes it mashes two sentences together, or breaks one apart, or assembles chunks into new Frankensentences. Then it seems to repeat the process until it’s cooked up an entire article. […] The question of exactly how CNET’s disastrous AI was trained may end up taking center stage as the drama continues to unfold. At a CNET company meeting late last week […] the outlet’s executive vice president of content and audience refused to tell staff — many of them acclaimed tech journalists who have written extensively about the rise of machine learning — what data had been used to train the AI. The legality of using data to train an AI without the consent of the people who created that data is currently being tested by several lawsuits against the makers of prominent image generators, and could become a flashpoint in the commercialization of the tech.

    (tags: ai cnet content seo spam llms plagiarism training-data)

  • omni-epd

    A Python module to abstract usage of several different types of EPD (electronic paper displays), including Inky and Waveshare hardware.

    (tags: epd inky waveshare e-paper displays hardware python linux)

  • pycasso

    “a picture frame to show you random AI art every day” — nice little epd/pi hack

    (tags: diy photos projects hacks epd e-paper ai art dall-e)

  • EC2 instance network error metrics

    looks like Amazon are now exposing a bunch of error metrics for their EC2 instance network drivers in Linux

    (tags: metrics ec2 ops drivers networking bandwidth errors)

Comments closed

Links for 2023-01-23

  • The bivalent vaccine booster outperforms

    Solid data now up for the bivalent BA.5 SARS-CoV-2 vaccine, says Eric Topol: “we now have extensive data that is quite encouraging — better and broader than expected — that I’m going to briefly review here”

    (tags: sars-cov-2 covid-19 vaccines eric-topol medicine health)

  • Long COVID: major findings, mechanisms and recommendations

    Current state of research into Long COVID, courtesy of Nature Reviews Microbiology.

    Long COVID is an often debilitating illness that occurs in at least 10% of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections. More than 200 symptoms have been identified with impacts on multiple organ systems. At least 65 million individuals worldwide are estimated to have long COVID, with cases increasing daily. Biomedical research has made substantial progress in identifying various pathophysiological changes and risk factors and in characterizing the illness; further, similarities with other viral-onset illnesses such as myalgic encephalomyelitis/chronic fatigue syndrome and postural orthostatic tachycardia syndrome have laid the groundwork for research in the field. In this Review, we explore the current literature and highlight key findings, the overlap with other conditions, the variable onset of symptoms, long COVID in children and the impact of vaccinations. Although these key findings are critical to understanding long COVID, current diagnostic and treatment options are insufficient, and clinical trials must be prioritized that address leading hypotheses.

    (tags: long-covid covid-19 health medicine reviews nature papers)

Comments closed

Links for 2023-01-11

  • This app will self-destruct: How Belarusian hackers created an alternative Telegram

    Great idea:

    When a 25-year-old activist from Minsk who goes by Pavlo was detained by Belarusian KGB security forces last summer, he knew they would search his phone, looking for evidence of his involvement in anti-government protests. The police officer asked for Pavlo’s password to Telegram, the most popular messenger app among Belarusian activists, which he gave him. The officer entered it and… found nothing. All secret chats and news channels had disappeared, and after a few minutes of questioning Pavlo was released. Pavlo’s secret? A secure version of Telegram, developed by a hacktivist group from Belarus called the Cyber Partisans. Partisan Telegram, or P-Telegram, automatically deletes pre-selected chats when someone enters the so-called SOS password.
    … after entering a fake [SOS] password, P-Telegram can automatically log out of the account, delete selected chats and channels, and even send a notification about the arrest of the account owners to their friends or families. P-Telegram also allows other activists to remotely activate the SOS password on the detainee’s phone. For this, they need to send a code word to any of the shared Telegram chats. Another feature on P-Telegram automatically takes photos of law enforcement officers on the front camera when they enter a fake password. “We warn users that this can be dangerous, as this photo will be stored on the phone, revealing that a person may use Partisan Telegram,” Shemetovets said.  Cyber Partisans are constantly updating their app, fixing bugs, and adding new features. They also regularly conduct independent audits to ensure that P-Telegram complies with all security measures. A recent audit by Open Technology Fund’s Red Team Lab proved that it is almost impossible for “casual observers without technical knowledge and specialized equipment” to identify the existence of P-Telegram on a device.

    (tags: p-telegram hacktivism security telegram messaging privacy activism duress-passwords)

Comments closed

Links for 2023-01-10

Comments closed

Links for 2023-01-09

  • A healthcare algorithm started cutting care, and no one knew why

    This is an absurd hellscape:

    Legal Aid filed a federal lawsuit in 2016, arguing that the state had instituted a new [healthcare] policy without properly notifying the people affected about the change. There was also no way to effectively challenge the system, as they couldn’t understand what information factored into the changes, De Liban argued. No one seemed able to answer basic questions about the process. “The nurses said, ‘It’s not me; it’s the computer,’” De Liban says. When they dug into the system, they discovered more about how it works. Out of the lengthy list of items that assessors asked about, only about 60 factored into the home care algorithm. The algorithm scores the answers to those questions, and then sorts people into categories through a flowchart-like system. It turned out that a small number of variables could matter enormously: for some people, a difference between a score of a three instead of a four on any of a handful of items meant a cut of dozens of care hours a month. (Fries didn’t say this was wrong, but said, when dealing with these systems, “there are always people at the margin who are going to be problematic.”) […] From the state’s perspective, the most embarrassing moment in the dispute happened during questioning in court. Fries was called in to answer questions about the algorithm and patiently explained to De Liban how the system works. After some back-and-forth, De Liban offered a suggestion: “Would you be able to take somebody’s assessment report and then sort them into a category?” […] Fries said he could, although it would take a little time. He looked over the numbers for Ethel Jacobs. After a break, a lawyer for the state came back and sheepishly admitted to the court: there was a mistake. Somehow, the wrong calculation was being used. They said they would restore Jacobs’ hours. “Of course we’re gratified that DHS has reported the error and certainly happy that it’s been found, but that almost proves the point of the case,” De Liban said in court. “There’s this immensely complex system around which no standards have been published, so that no one in their agency caught it until we initiated federal litigation and spent hundreds of hours and thousands of dollars to get here today. That’s the problem.”

    (tags: algorithms government health healthcare automation grim-meathook-future future)

Comments closed

Links for 2023-01-04

  • Turning Google smart speakers into wiretaps for $100k

    This is some very impressive work on reverse engineering a fairly advanced IoT device (the Google Home Mini), discovering and exploiting its security holes.

    I was recently rewarded a total of $107,500 by Google for responsibly disclosing security issues in the Google Home smart speaker that allowed an attacker within wireless proximity to install a “backdoor” account on the device, enabling them to send commands to it remotely over the Internet, access its microphone feed, and make arbitrary HTTP requests within the victim’s LAN (which could potentially expose the Wi-Fi password or provide the attacker direct access to the victim’s other devices). These issues have since been fixed.

    (tags: security google wiretapping exploits hacking iot reverse-engineering)

  • Infectiousness of SARS-CoV-2 breakthrough infections and reinfections during the Omicron wave | Nature Medicine

    This was an open question from earlier in the pandemic — does vaccination reduce transmission and infectiousness: ‘In our main analysis, we found that any COVID-19 vaccine reduced infectiousness by 22% (6–36%) and prior infection reduced infectiousness by 23% (3–39%). Hybrid immunity reduced infectiousness by 40% (20–55%).’

    (tags: immunity covid-19 infection transmission hybrid-immunity papers)

  • Caddy

    lhl likes Caddy:

    Caddy came up in conversation earlier today. It’s been my favorite reverse proxy/web server for the past few years because of how simple it is to setup and for it’s automagic LetsEncrypt setup. (This post is actually being pushed through Caddy on my fediverse server, and was basically the easiest part of the setup). For those interested, it performs pretty competitively with nginx: but IMO the main selling point (why I first installed it) was the automagic HTTPS setup:

    (tags: caddy reverse-proxies ops http https lets-encrypt servers)

Comments closed

Links for 2022-12-28


    A gateway bot from Twitter to Mastodon —

    One of the things I would miss here on Mastodon was all of the alerts from my local infrastructure and government twitter accounts. These will likely take a very long time to make the migration. With, you can create bot accounts that put those tweets in your Mastodon timeline.

    (tags: twitter mastodon gateways bots tweets)

Comments closed

Links for 2022-12-26

Comments closed

Links for 2022-12-20

Comments closed

Links for 2022-12-16

  • Digital scrapie

    “a hypothetical scenario in which a machine learning system trained on its own output becomes unable to function properly or make meaningful predictions”

    (tags: scrapie brains training ai ml feedback)

  • Clip retrieval

    Via ted byfield: “If you’ve wondered what AI-bots are ~thinking while they generate an image, here you go.” Reverse-engineering the training samples which Stable Diffusion et al are combining for a given text query, in the laion5B or laion_400m datasets

    (tags: ai clips laion ml stable-diffusion text2image)

Comments closed

Links for 2022-12-15

Comments closed