Spammers “giving up” according to Google

According to this Wired story, Google reckons spammers are giving up on spam:

a remarkable trend is underfoot, according to Brad Taylor, a staff software engineer at Google: The number of spam attempts — that is, the number of junk messages sent out by spammers — is flat, and may even be declining for the first time in years.

Actually, this is a wilful misunderstanding of what the Googler in question really said, which was that ‘attempts to spam Gmail users have been leveling off over the last year and more recently, even declining slightly’. In other words, they didn’t make an observation about the state of the spam problem on an internet-wide basis — just about the “local” situation as it pertains to Gmail. Bad reporting there, Wired.

But, in passing…

David Berlind at ZDNet recently blogged a rather grumpy response to InfoWorld coverage of CEAS 2007. He raised a very important point:

If I could say something to the author of that story, it would be that so long as any anti-spam solution is not deployed universally throughout the Internet’s e-mail system (in other words, so long as some anti-spam tech is not a standard), that anti-spam solution actually makes the spam problem worse. You read that right. Worse. Proprietary anti-spam solutions make the global spam problem worse. They are digging us deeper into the hole that the Internet is already in because everyone who makes those solutions is under the false belief that “s/he who is finally successful at filtering out all spam while allowing the legitimate mail in wins.”

Google’s blog post is a case in point: ‘we’re keeping more spam out of your inbox than ever before, so more and more, you can use Gmail for things you enjoy without even realizing that the spam filter is there most of the time.’

That’s great — but it doesn’t help anyone except Gmail. It’s a myopic view of the spam problem, and David’s point stands.

(I disagree with his later conclusion that the only way forward is for Google, MS, AOL and Yahoo! to get together and ‘commit to jointly supporting the same technical solutions’ — when the usual BigCos get together, they tend to focus on their own priorities. Take what happened back in 2005 with nofollow for blog-spam — while it helped the search giants with their own overriding priority, which was to tweak their algorithms to filter out the spam on the search results page, it did nothing to slow the spam flood itself, which has continued unabated.)

We need more open-source, and open-data, anti-spam work.



  1. Posted November 30, 2007 at 14:58 | Permalink

    So what is the current state of the art in collaborative + open spam space? If I wanted to leave gmail and start contributing my locally generated spam knowledge to an open/collaborative database, what tools should I be using and what dataset should I be uploading my stuff to?

    (And on the blog side, is there any non-shit, open service alternative to Akismet, by the way?)

  2. Posted November 30, 2007 at 16:16 | Permalink

    hey Luis —

    I think the state of the art is probably — it’s the most open of the established services, and at least at one stage was open source (not sure if it still is).

    There’s a shortage of good open-source anti-spam tools. Meng Weng Wong’s may be a promising future option for an open/collaborative email reputation database….

  3. Posted November 30, 2007 at 19:31 | Permalink

    Here’s my URL reputation service, in progress: It’s designed to be easy to use from scripts, for filtering web comments and referer logs. However it does tend to pick up on a lot of spamvertised URLs, so I plan to start using it from SpamAssassin too.

  4. Posted November 30, 2007 at 20:56 | Permalink

    My Gmail spam folder has 2375 emails, and that’s after only about 3 or 4 days. I think I need to enable greylisting and SpamAssassin on my own mail server before the mail gets to Gmail. Even scanning 100 messages at a time, it takes ages to make sure there are no false positives — and false positives do happen.

    Postgrey, where are you again?
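    The greylisting technique Postgrey implements can be sketched roughly as follows — temporarily reject the first delivery attempt for an unseen (client IP, sender, recipient) triplet, then accept once a retry arrives after a delay. The function names, in-memory store, and 300-second window here are my illustrative assumptions, not Postgrey’s actual code or configuration:

    ```python
    import time

    GREYLIST_DELAY = 300       # seconds a first-time triplet must wait (assumed value)
    triplets = {}              # (client_ip, sender, recipient) -> first-seen timestamp

    def check(client_ip, sender, recipient, now=None):
        """Return 'tempfail' for a new triplet, 'accept' once it retries
        after the delay. Legitimate MTAs retry on a 4xx response; most
        spamware fires once and never comes back."""
        now = time.time() if now is None else now
        key = (client_ip, sender, recipient)
        first_seen = triplets.setdefault(key, now)
        if now - first_seen < GREYLIST_DELAY:
            return "tempfail"   # i.e. SMTP 450: try again later
        return "accept"
    ```

    A real implementation also needs persistence and expiry of stale triplets, which is most of what tools like Postgrey actually provide.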

  5. Posted November 30, 2007 at 21:02 | Permalink

    I’ve given up on checking for false positives (my Gmail spam folder gets about a thousand emails a day). Amusingly, the last time I found out about a false positive, it was to discover that Google had decided that all email from [email protected] was spam.

  6. Sandra
    Posted December 2, 2007 at 01:35 | Permalink

    ‘I’ve given up on checking for false positives’: me, too ;) Interesting website on how to use Gmail as a hosted spam filter.

  7. Posted December 3, 2007 at 03:18 | Permalink

    Well, I released X-Grey, a greylist implementation that currently works with both Sendmail and Postfix and can handle a huge load from multiple mail servers. (I work at a web hosting company that uses both Sendmail and Postfix for email, depending on the server, and we needed something that could work with both and handle a high volume of mail. The owner of the company was kind enough to let me GPL the resulting software.)

  8. Posted December 3, 2007 at 13:35 | Permalink

    Don — how does it work?

  9. Posted December 6, 2007 at 08:16 | Permalink

    Justin, for now the answer is “slowly” — it’s still on my slow, old VIA Epia-based home server that was supposed to be a backup file server, not a web server.

    The algorithm is designed to let anyone, even untrusted riff-raff, post a “good” or “bad” report about any URL, so you don’t need to preregister for an API key (see the “API” link on the page). The model is more like Wikipedia than Advogato, and deliberately different from PageRank. (It’s also designed so that it could use a simple DHT for storage, so it should scale pretty far.)
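    The open-reporting model described above — untrusted reporters voting “good” or “bad” per URL, with storage keyed purely by URL so a DHT could hold it — might look something like this. All names and the scoring rule are my own assumptions for illustration, not the actual service’s API:

    ```python
    from collections import defaultdict

    # url -> vote counts; anyone may report, no API key required
    reports = defaultdict(lambda: {"good": 0, "bad": 0})

    def report(url, verdict):
        """Record an untrusted 'good' or 'bad' report for a URL."""
        if verdict not in ("good", "bad"):
            raise ValueError("verdict must be 'good' or 'bad'")
        reports[url][verdict] += 1

    def score(url):
        """Fraction of 'bad' reports; 0.5 when the URL is unknown.
        Since lookups are keyed only by URL, the table maps naturally
        onto a distributed hash table."""
        votes = reports[url]
        total = votes["good"] + votes["bad"]
        return 0.5 if total == 0 else votes["bad"] / total
    ```

    A production version would of course need abuse-resistance on top of this (rate limits, decay, or reporter weighting), which is exactly where it diverges from preregistered, trust-metric designs like Advogato’s.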