Skip to content


Links for 2023-03-22

  • new LFP batteries will unlock cheaper electric vehicles

    Lithium ferrous phosphate (LFP) batteries, the type to be produced at the new [Ford] plant are a lower-cost alternative to the nickel- and cobalt-containing batteries used in most electric vehicles in the US and Europe today. While the technology has grown in popularity in China, Ford’s factory, developed in partnership with the Chinese battery giant CATL, marks a milestone in the West. By cutting costs while also boosting charging speed and extending lifetime, LFP batteries could help expand EV options for drivers. 

    (tags: lfp technology ev cars batteries renewable-energy)

  • You Broke Reddit: The Pi-Day Outage : RedditEng

    Quality post-mortem writeup of last week’s Reddit outage. tl;dr: an in-place Kubernetes upgrade broke it. We use blue/green deployments — with two separate parallel k8s clusters — in order to avoid this risk, as k8s upgrades are very very risky in our experience; tiny “minor” changes often seem to cause breakage.

    (tags: k8s kubernetes outages reddit ops post-mortems)

  • Superb thread on effective AI regulation

    from Baldur Bjarnason:

    First, you clarify that for the purposes of Section 230 protection (or similar), whoever provides the AI as a service is responsible for its output as a publisher. If Bing Chat says something offensive then Microsoft would be as liable as if it were an employee; You’d set a law requiring tools that integrate generative AI to attach disclosures to the content. Gmail/Outlook should pop up a notice when you get an email that their AI generated. Word/Docs should have metadata fields and notices when you open files that have used built-in AI capabilities. AI chatbots have to disclose that they are bots. Copilot should add a machine-parsable code comment. You could always remove the metadata, but doing so would establish an intent to deceive; Finally, you’d mandate that all training data sets be made opt-in (or that all of its contents are released under a permissive license) and public. Heavy fines for non-disclosure. Heavy fines for violating opt-in. Even heavier fines for lying about your training data set. Make every AI model a “vegan” model. Remove every ethical and social concern about the provenance and rights regarding the training data.
    I think #3 in particular is the most important of all.

    (tags: ai regulation data-privacy training llm ethics)

  • Bing Chat is still vulnerable to hidden prompt injection attacks

    happily parses hidden text in webpages, acting on information there that isn’t visible to human viewers. Related: , where Matteo Contrini demonstrated an attack to turn it into a scammer with prompt injection.

    (tags: bing-chat bing chatgpt openai prompt-injection exploits attacks hidden-text)

Comments closed