CNET used an AI to generate content for their site, and from the looks of things they are definitely in the “finding out” stage:
All told, a pattern quickly emerges. Essentially, CNET’s AI seems to approach a topic by examining similar articles that have already been published and ripping sentences out of them. As it goes, it makes adjustments — sometimes minor, sometimes major — to the original sentence’s syntax, word choice, and structure. Sometimes it mashes two sentences together, or breaks one apart, or assembles chunks into new Frankensentences. Then it seems to repeat the process until it’s cooked up an entire article.

[…]

The question of exactly how CNET’s disastrous AI was trained may end up taking center stage as the drama continues to unfold. At a CNET company meeting late last week […] the outlet’s executive vice president of content and audience refused to tell staff — many of them acclaimed tech journalists who have written extensively about the rise of machine learning — what data had been used to train the AI. The legality of using data to train an AI without the consent of the people who created that data is currently being tested by several lawsuits against the makers of prominent image generators, and could become a flashpoint in the commercialization of the tech.
A Python module to abstract usage of several different types of EPDs (electronic paper displays), including Inky and Waveshare hardware.
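The general shape of that kind of abstraction is a common display interface with one thin driver class per hardware family. A minimal sketch of the idea (all class and method names here are hypothetical illustrations, not the module's actual API):

```python
from abc import ABC, abstractmethod


class EPD(ABC):
    """Common interface for electronic paper displays (hypothetical)."""

    def __init__(self, width: int, height: int):
        self.width = width
        self.height = height

    @abstractmethod
    def display(self, image) -> None:
        """Push an image buffer to the panel."""


class InkyEPD(EPD):
    def display(self, image) -> None:
        # A real driver would hand the image to the Inky library here.
        print(f"Inky: drawing a {self.width}x{self.height} image")


class WaveshareEPD(EPD):
    def display(self, image) -> None:
        # A real driver would call the Waveshare EPD driver here.
        print(f"Waveshare: drawing a {self.width}x{self.height} image")


def load_display(name: str) -> EPD:
    """Pick a driver by name so calling code stays hardware-agnostic."""
    drivers = {"inky": InkyEPD, "waveshare": WaveshareEPD}
    return drivers[name](400, 300)
```

The payoff is that a project like a picture frame only ever calls `display()`, and swapping panels is a one-line config change.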
“a picture frame to show you random AI art every day” — nice little epd/pi hack
Looks like Amazon are now exposing a bunch of error metrics for their EC2 instance network drivers on Linux.
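On instances running the ENA driver, per-interface counters like these are readable via `ethtool`; something along these lines should show them (the interface name varies by instance, e.g. `eth0` or `ens5`, and the exact counter set depends on the driver version):

```shell
# Dump the ENA driver's per-interface statistics and pick out the
# "allowance exceeded" counters, which track packets shaped or dropped
# because the instance hit a network limit (bandwidth, PPS, conntrack).
ethtool -S eth0 | grep allowance
```

A nonzero `pps_allowance_exceeded` or `bw_in_allowance_exceeded`, for example, is a signal that the instance is being throttled at the network level rather than suffering an application problem.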