A solid LLM XSS prompt-injection exploit on Bard: inject chat history into a Google Apps Script invocation and exfiltrate via a Google Doc. The thing I find most shocking about this is that it’s entirely by-the-numbers. This is the simplest possible way to exploit Bard (well, maybe the second, after an IMG tag), and it’s frankly shocking that it worked. I am particularly unimpressed that Google Apps Script was permitted as an output from Bard! LLM security is going to be a total shambles if this is the state of the art.
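The whole attack hinges on the chat client rendering attacker-chosen URLs (here, a Google Apps Script endpoint) in the model’s output. A minimal sketch of the obvious mitigation, assuming a hypothetical allowlist and rendering pipeline (this is not Bard’s actual code, just the kind of egress filter that would have closed this particular channel):

```python
import re

# Hypothetical allowlist -- a real deployment would be much stricter
# and would also cover IMG tags, redirects, and URL shorteners.
ALLOWED_HOSTS = {"example.com"}

# Match a URL: capture the host, then optionally swallow the path.
URL_RE = re.compile(r"https?://([\w.-]+)(/\S*)?", re.IGNORECASE)

def strip_untrusted_urls(llm_output: str) -> str:
    """Replace any URL whose host is not allowlisted, before the chat
    client renders the model's markdown. Applied to model output, this
    blocks exfiltration via links to attacker-controlled endpoints."""
    def _check(match: re.Match) -> str:
        host = match.group(1).lower()
        return match.group(0) if host in ALLOWED_HOSTS else "[link removed]"
    return URL_RE.sub(_check, llm_output)
```

For example, `strip_untrusted_urls("see https://script.google.com/macros/s/abc")` neuters the Apps Script URL, while allowlisted links pass through untouched. The point is that this is table-stakes output sanitisation, which makes its absence all the more surprising.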
I knew Oz was bad for fauna, but apparently the flora are just as bad. The Gympie Gympie tree is “a Queensland native plant covered in microscopic hairy spines containing a neurotoxin. Brushing against it whilst walking past has occasionally been lethal because it caused enough pain to drive its victims to suicide. There is no treatment, and pain and welts can be expected to last for months, sometimes years”.
I don’t use Lambda, personally, as I find it too expensive and it doesn’t fit well with our current infrastructure (and I still fear the availability risks that might come with it, viz. this year’s outage). But this seems like a good guideline for those who might be using it:
The argument to limit the blast radius at a per-route level by default is too fine-grained, adds bloat and optimizes too early. The boundary of the blast radius should be at the whole API/service level, just as it is and always has been for traditional software. Use a Lambdalith if you are not using any advanced features of AWS REST API Gateway and you want the highest level of portability to other AWS gateways or compute layers. There are also many escape hatches to fulfil some of the promises that single-purpose functions offer.
Good write-up on the AWS-style COE process (COEs being Amazon’s take on the post-outage postmortem).
Bloody hell! This is a big one, from the ICCL:
Our investigation highlights a widespread trade in data about sensitive European personnel and leaders that exposes them to blackmail, hacking and compromise, and undermines the security of their organisations and institutions. These data flow from Real-Time Bidding (RTB), an advertising technology that is active on almost all websites and apps. RTB involves the broadcasting of sensitive data about people using those websites and apps to large numbers of other entities, without security measures to protect the data. This occurs billions of times a day.

Our examination of tens of thousands of pages of RTB data reveals that EU military personnel and political decision makers are targeted using RTB. This report also reveals that Google and other RTB firms send RTB data about people in the U.S. to Russia and China, where national laws enable security agencies to access the data. RTB data are also broadcast widely within the EU in a free-for-all, which means that foreign and non-state actors can indirectly obtain them, too.

RTB data often include location data or time-stamps or other identifiers that make it relatively easy for bad actors to link them to specific individuals. Foreign states and non-state actors can use RTB to spy on target individuals’ financial problems, mental state, and compromising intimate secrets. Even if target individuals use secure devices, data about them will still flow via RTB from personal devices, their friends, family, and compromising personal contacts.

In addition, private surveillance companies in foreign countries deploy RTB data for surreptitious surveillance. We reveal “Patternz”, a previously unreported surveillance tool that uses RTB to profile 5 billion people, including the children of their targets.
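To make the re-identification point concrete: an OpenRTB-style bid request already bundles a persistent device identifier, precise location, and a timestamp, and hundreds of bidders receive it per auction. A fabricated, illustrative example (field names follow the OpenRTB spec; all values are invented):

```python
# Fabricated OpenRTB-style bid request, illustrating the fields the
# ICCL report says are broadcast to bidders on every auction.
bid_request = {
    "id": "auction-0001",
    "device": {
        "ifa": "8e7f2c1a-0000-0000-0000-000000000000",  # stable ad ID
        "geo": {"lat": 50.85, "lon": 4.35, "type": 1},  # GPS-derived
        "ua": "Mozilla/5.0 (Linux; Android 13)",
    },
    "user": {"id": "u-29381"},
    "site": {"page": "https://example.org/article"},
    "tmax": 120,  # ms to respond; many bidders see this concurrently
}

def is_reidentifiable(req: dict) -> bool:
    """Crude heuristic: a stable device identifier plus precise geo is
    generally enough to link a bid request back to one individual."""
    dev = req.get("device", {})
    return bool(dev.get("ifa")) and "geo" in dev
```

The heuristic is deliberately simplistic, but it captures why the report calls the linkage “relatively easy”: the identifying fields travel together by design, to every bidder, with no security measures attached.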