This is amazingly weird stuff. Fooling NNs with adversarial objects:
Here is a 3D-printed turtle that is classified as a “rifle” at every viewpoint by Google’s InceptionV3 image classifier, whereas the unperturbed turtle is consistently classified as “turtle”. We do this using a new algorithm for reliably producing adversarial examples that cause targeted misclassification under transformations like blur, rotation, zoom, or translation, and we use it to generate both 2D printouts and 3D models that fool a standard neural network at any angle. Our process works for arbitrary 3D models – not just turtles! We also made a baseball that is classified as an espresso at every angle! The examples still fool the neural network when we put them in front of semantically relevant backgrounds; for example, you’d never see a rifle underwater, or an espresso in a baseball mitt.
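The core trick is to optimize a single perturbation against the *expectation* of the attack objective over a whole distribution of transformations, rather than against one fixed view. Here is a minimal toy sketch of that idea; the linear “classifier”, circular shifts standing in for rotation/translation, and all the budgets and step sizes are illustrative assumptions, not the actual method or models used in the work quoted above.

```python
import numpy as np

# Toy sketch: find ONE perturbation whose adversarial effect survives an
# entire family of transformations, by ascending the gradient of the
# margin objective averaged over that family.
rng = np.random.default_rng(0)
n = 32
W = rng.normal(size=(2, n))        # stand-in linear "classifier": logits = W @ x
x = rng.normal(size=n)             # clean input
orig = int(np.argmax(W @ x))       # class the clean input currently gets
target = 1 - orig                  # adversarial target class
shifts = range(-3, 4)              # transformation family: circular shifts
eps = 2.0                          # L-infinity perturbation budget

delta = np.zeros(n)
for _ in range(200):
    # Gradient of the EXPECTED target-vs-original logit margin over the
    # transformation family. For this linear toy it is exact, since
    # logit_k(roll(x + delta, t)) = roll(W[k], -t) @ (x + delta).
    grad = np.mean([np.roll(W[target] - W[orig], -t) for t in shifts], axis=0)
    delta = np.clip(delta + 0.05 * grad, -eps, eps)   # projected gradient ascent

adv = x + delta

def avg_target_margin(v):
    """Target-minus-original logit margin, averaged over every transform."""
    return np.mean([(W @ np.roll(v, t))[target] - (W @ np.roll(v, t))[orig]
                    for t in shifts])
```

Averaging the gradient over the transformation family is the whole point: a perturbation tuned to a single viewpoint typically falls apart under rotation or blur, while one optimized in expectation keeps pushing toward the target class across the family, which is what lets a physical object stay adversarial from every angle.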
How did you solve this problem at Something Awful? You said you wrote a bunch of rules, but internet pedants will always find ways to get around them. The last rule says we can ban you for any reason. It’s the catch-all. We can ban you if it’s too hot in the room, we can ban you if we had a bad day, we can ban you if our finger slips and hits the ban button. That way people know that if they’re doing something that’s not technically breaking any rules but they’re obviously trying to push shit as far as they can, we can still ban them. But, unlike Twitter, we actually have what’s called the Leper’s Colony, which says what they did and shows their track record. Twitter just says, “You’re gone.”