
Month: May 2006

Blog Spam, and a ‘nofollow’ Post-Mortem

An interesting article on blog-spam countermeasures — Google’s embarrassing mistake. Quote:

I think it’s time we all agreed that the ‘nofollow’ tag has been a complete failure.

For those of you new to the concept, nofollow is a tag that blogs can add to hyperlinks in blog comments. The tag tells Google not to use that link in calculating the PageRank for the linked site. […]

Since its enthusiastic adoption a year and a half ago, by Google, Six Apart, WordPress, and of course the eminent Dave Winer, I think we can all agree that nofollow has done — nothing. Comment spam? Thicker than ever. It’s had absolutely no effect on the volume of spam. That’s probably because comment spammers don’t give a crap, because the marginal cost of spamming is so low. Also, nofollow-tagged links are still links, which means that humans can still click on them — and if humans can click, there’s a chance somebody might visit the linked sites after all.

I agree. At the time, I pointed at this comment from Mark Pilgrim:

Spammers have it in their heads now that weblog comments are a vector to exploit. They don’t look at individual results and tweak their software to stop bothering individuals. They write generic software that works with millions of sites and goes after them en masse. So you would end up with just as much spam, it would just be displayed with unlinked URLs.

Spammers don’t read blogs; they just write to them.

I still think he was spot on.

However, one part of the ‘Google’s embarrassing mistake’ article is a red herring — I think the chilling effect on “nonspam links” isn’t worth worrying about; as Jeremy Zawodny said, life’s too short to worry about dropping links purely in the hope of giving yourself PageRank. I don’t know if I really want links that people are leaving purely for that reason. ;)

In fact, I wouldn’t be surprised to hear that Google’s crawler starts treating “nofollow” links as mildly non-spammy in a future revision, due to their wide use in wikis, blogs etc.

To be honest, though — I don’t see the problem of blog-spam much anymore. As I said here:

[Weblog] comment spam should be a lot easier to deal with than SMTP spam. … With weblog comments, you control the protocol entirely, whereas with SMTP you’re stuck with an existing protocol and very little “wiggle room”.

On my WordPress weblog [ie. here] — which, admittedly, gets only about 1/4 of the traffic plasticbag.org does — I’ve instituted a very simple check stolen from Jeremy Zawodny: I include a form field which asks the comment poster for my first name, and if they fail to supply it, the comment is dropped. In addition, I’ve removed the form fields that allow posting directly, so every comment has to go through the preview step; this has the nice bonus of increasing comment quality, too.

Those are the only antispam measures I’m using there, and as a result of those two I get about 1 successful spam posted per week, which is a one-click moderation task in my email. That’s it.

The key is to not use the same measures as everyone else — if every weblog has a different set of protocols, with different form fields asking different simple questions, the only spammers that can beat that are the ones that write custom code for your site — or use human operators sitting down to an IE window.
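
To make that concrete, here is a rough sketch of the idea, written in Perl as a standalone CGI-style check rather than the actual WordPress hook; the field name and the expected answer are invented placeholders:

#!/usr/bin/perl
# rough sketch only -- not the real WordPress hook; the field name and the
# expected answer below are placeholders, to be varied per site
use strict;
use warnings;
use CGI;

my $q = CGI->new;

# the extra form field asks the poster a trivial, site-specific question
my $answer = lc($q->param('blog_owner_first_name') || '');

if ($answer ne 'yourfirstname') {    # site-specific expected answer
    print $q->header(-status => '403 Forbidden');
    print "Comment dropped: please answer the question on the form.\n";
    exit;
}

# ...otherwise fall through to the normal comment-posting code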

Trackbacks, however — turn that off. The protocol was designed poorly, with insufficient thought given to its abuse potential; there’s no point keeping it around, now that it’s a spam vector.

Finally, a “perfect” solution to blog spam, while allowing comments, is unachievable. There will always be one guy who’s going to sit down at a real web browser to hand-type a comment extolling the virtues of some product or another. The goal is to get it to a level where you get one of those per week, and it’s a one-click operation to discard them.

(Update: This story got Slashdotted! The poor server’s been up and down repeatedly — looks like it needs an upgrade. In the meantime, WP-Cache has proven worth its weight in gold; recommended…)

Retroactive Tagging With TagThe.Net

Hacky hack hack.

Ever since I enabled tags on taint.org, I’ve been mildly annoyed by the fact that there were thousands of older entries deprived of their folksonomic chunky goodness. A way to ‘retroactively tag’ those entries somehow would be cool.

Last week, Leonard posted a link on his linkblog to TagThe.net, a web service which offers a nifty REST API; simply upload a chunk of text, and it’ll suggest a few tags for that text, like this:

echo 'Hi there, I am a tag-suggesting robot' | curl "http://tagthe.net/api/?text=`urlencode`"
<?xml version="1.0" encoding="UTF-8"?>
<memes>
  <meme source="urn:memanage:BAD542FA4948D12800AA92A7FAD420A1" updated="Tue May 30 20:20:39 CEST 2006">
    <dim type="topic">
      <item>robot</item>
    </dim>
    <dim type="language">
      <item>english</item>
    </dim>
  </meme>
</memes>

This looked promising.

Anyway, I’ve now implemented this — it worked great! If you’re curious, here are the details of how I did it. It’s a bit hacky, since I’m only going to be doing this once — and very UNIXy and perlish, because that’s how I do these things — but maybe somebody will find it useful.

How I Retroactively Tagged taint.org

This weblog runs WordPress — so all the entries are stored in a MySQL database. I took the MySQL dump of the tables, and a quick script figured out that, of the 1600-odd posts, 1352 came from the pre-tag era and required tag inference. A mail to the TagThe.Net team established that they were happy with this level of usage.

I grepped the post IDs and text out of the SQL dump, threw those into a text file using the simple format ‘id=NNN text=SQLHTMLSTRING’ (where SQLHTMLSTRING was the nicely-escaped HTML text taken directly from the SQL dump), and ran them through this script.

That rendered the first 2k of each of those entries as a URL-encoded string, invoked the REST API with that, got the XML output, and extracted the tags into another UNIXy text-format output file. (It also added one tag for the ‘proto-tag’ system I used in the early days, where the first word of the entry was a single tag-style category name.)
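
For the curious, here is a much-simplified sketch of that stage. It is not the original script, and the input and output line formats here are assumptions made purely for illustration:

#!/usr/bin/perl
# simplified sketch of the tag-fetching stage -- not the original script;
# the 'id=... text=...' and 'id=... tags=...' line formats are assumed
use strict;
use warnings;
use LWP::UserAgent;
use URI::Escape qw(uri_escape);

my $ua = LWP::UserAgent->new(agent => 'retro-tagger/0.1');

while (<>) {
  # assumed input format: 'id=NNN text=SQLHTMLSTRING', one entry per line
  next unless /^id=(\d+)\s+text=(.*)$/;
  my ($id, $text) = ($1, $2);

  # only the first 2k of each entry gets sent to the API
  $text = substr($text, 0, 2048);

  my $resp = $ua->get('http://tagthe.net/api/?text=' . uri_escape($text));
  next unless $resp->is_success;

  # crude extraction of the <dim type="topic"> items; a real XML parser
  # would be more robust, but this matches the hacky spirit of the job
  my @tags;
  if ($resp->decoded_content =~ m{<dim type="topic">(.*?)</dim>}s) {
    @tags = ($1 =~ m{<item>(.*?)</item>}g);
  }

  print "id=$id tags=" . join(',', @tags) . "\n";
  sleep 1;   # be polite to the web service
}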

Next, I ran this script, which in turn took that intermediate output and converted it to valid PHP code, like so:

cat suggestedtags | ./taglist-to-php.pl  > addtags.php
scp addtags.php my.server:taint.org/wp-admin/

The generated page ‘addtags.php’ looks like this:

<?php
  require_once('admin.php');
  global $utw;
  $utw->SaveTags(997, array("music","all","audio","drm-free",
      "faq","lunchbox","destination","download","premiere","quote"));
  [...]
  $utw->SaveTags(998, array("software","foo","swf","tin","vnc"));
  $utw->SaveTags(999, array("oses","eek","longhorn","ram",
    "winsupersite","windows","amount","base","dog","preview","system"));
?>
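
In case it helps, the conversion script is roughly this shape; note this is a guessed reconstruction rather than the script’s exact contents, and the ‘id=NNN tags=a,b,c’ intermediate format is an assumption:

#!/usr/bin/perl
# guessed reconstruction of taglist-to-php.pl, not its exact contents;
# the 'id=NNN tags=a,b,c' intermediate format is an assumption
use strict;
use warnings;

print "<?php\n  require_once('admin.php');\n  global \$utw;\n";

while (<>) {
  next unless /^id=(\d+)\s+tags=(.*)$/;
  my ($id, $tags) = ($1, $2);
  my $list = join(',', map { '"' . $_ . '"' } split(/,/, $tags));
  print "  \$utw->SaveTags($id, array($list));\n";
}

print "?>\n";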

Once that page was in place, I just visited it in my (already logged in) web browser window, at http://taint.org/wp-admin/addtags.php, and watched as it gronked for a while. Eventually it stopped, and all those entries had been tagged. (If I wasn’t so hackish, I might have put in a little UI text here — but I didn’t.)

The results are very good, I think.

A success: http://taint.org/tag/research has picked up a lot of the interesting older entries where I discussed things like IBM’s Teiresias pattern-recognition algorithm. That’s spot on.

A minor downside: it’s not so good at nouns. This entry talks about Silicon Valley and geographical insularity, and mentions “Silicon Valley” prominently — one or both of those words would seem to be a good thing to tag with, but it missed them.

Still, that’s a minor issue — the tags it has suggested are generally very appropriate and useful.

Next, I need to find a way to auto-generate titles for the really old entries ;)

links for 2006-05-29

Web 2.0 and Open Source

A commenter at this post on Colm MacCarthaigh’s weblog writes:

I guess I still don’t understand how Open Source makes sense for the developers, economically. I understand how it makes sense for adapters like me, who take an app like Xoops or Gecko and customize it gently for a contract. Saves me hundreds of hours of labour. The down side of this is that the whole software industry is seeing a good deal of undercutting aimed at sales to small and medium sized commercial institutions.

Similarly, in the follow-up to the O’Reilly “web 2.0” trademark shitstorm, there have been quite a few comments along the lines of “it’s all hype anyway”.

I disagree with that assertion — and Joe Drumgoole has posted a great list of key Web 2.0 vs Web 1.0 differentiators, which nails down some key ideas about the new concepts, in a clear set of one-liners.

Both open source software companies, and “web 2.0” companies, are based on new economic ideas about software and the internet. There’s still quite a lot of confusion, fear and doubt about both, I think.

Open Source

As I said in my comment at Colm’s weblog — open source is a network effect. If you think of the software market as a single buyer and seller, with the seller producing software and selling to the buyer, it doesn’t make sense.

But that’s not the real picture of a software market. If you expand the view to something more realistic, a larger community of all sorts of people at all levels, interacting in a complex maze of conversations and transactions, then open source creates new opportunities.

Here’s one example, speaking from experience: as the developer of SpamAssassin, I found that open source made sense because I could never have competed with the big companies any other way.

If I had been considering it purely in terms of me (the seller) and a single customer (the buyer), I could have made an economic case for ‘proprietary SpamAssassin’ being viable — but that wasn’t the real situation. In reality there was me, the buyer, a few 800lb gorillas who could stomp all over any puny little underfunded Irish company I could put together, and quite a few other very smart people, whom I could never afford to employ, who were happy to help out on ‘open-source SpamAssassin’ for free.

Given this picture, I’m quite sure that I made the right choice by open-sourcing my code. Since then, I’ve basically had a career in SpamAssassin. In other words, my open source product allowed me to earn an income that I wouldn’t have had any other way.

It’s certainly not simple economics; it’s a risk, it’s complicated, and many people don’t believe it works — but it’s viable as an economic strategy for developers, in my experience. (I’m not sure how to make it work for an entire company, mind you, but for single developers it’s entirely viable.)

Web 2.0

Similarly — I feel some of the companies that have been tagged as “web 2.0” are using the core ideas of open source code, and applying them in other ways.

Consider Threadless, which encourages designers to make their designs available, essentially for free — the designer doesn’t get paid when their tee shirt is printed; they get entered into a contest to win prizes.

Or Upcoming.org, where event tracking is entirely user-contributed; there are no professional content writers scribbling reviews and leader text, just random people doing the same. For fun, wtf!

Or Flickr, where users upload their photos for free to create the social experience that is the site’s unique selling point.

In other words — these companies rely heavily on communities (or more correctly certain actors within the community) to produce part of the system — exactly as open source development relies on bottom-up community contribution to help out a little in places.

The alternative is the traditional, “web 1.0” style; it’s where you’re Bill Gates in the late ’90s, running a commercial software company from the top down.

  • You have the “crown jewels” — your source code — and the “users” don’t get to see it; they just “use”.
  • Then they get to pay for upgrades to the next version.
  • If you deal with users, it’s via your sales “channels” and your tech support call centre.
  • User forums are certainly not to be encouraged, since it could be a PR nightmare if your users start getting together and talking about how buggy your products are.
  • Developers (er, I mean “engineers”) similarly can’t go talking to customers on those forums, since they’ll get distracted and give away competitive advantage by accidentally leaking secrets.
  • Anyway, the best PR is the stuff that your PR staff put out — if customers talk to engineers they’ll just get confused by the over-technical messages!

Yeah, so, good luck with that. I remember doing all that back in the ’90s and it really wasn’t much fun being so bloody paranoid all the time ;)


(PS: The web2.0 companies aren’t using all of the concepts of open-source, of course — not all those web apps have their source code available for public reimplementation and cloning. I wish they were, but as I said, I can’t see how that’s entirely viable for every company. Not that it seems to stop the cloners, anyway. ;)

links for 2006-05-26

links for 2006-05-25

Pam on the AIDS/LifeCycle

My mate Pam is cycling in this year’s AIDS/LifeCycle — for a week from June 4 to 10, she’ll be cycling from San Francisco to LA, for charity. That’s 585 miles. Since she bought her bike to do this ride, she’s clocked up a terrifying 2040 miles. Blimey.

It’s for a good cause — go on, make a donation!

links for 2006-05-23

Poll: keep ‘Fixing Email Weblog’ in Planet Antispam?

I added the Fixing Email weblog to Planet Antispam a while back — however, I’m not entirely sure at this stage that its content (which seems to be primarily news syndication) fits with the “planet” concept (which is primarily intended for first-person posts).

So — quick poll. Let me know what you think, pro or con, Planet readers: should I remove the Fixing Email feed from that site?

Update: that was a pretty resounding ‘yes’. Done!

links for 2006-05-22

Dear Recruiters

Dear Recruiters,

If you’re going to (a) scrape my CV page from my website, then (b) spam me, unsolicited, offering to represent me for jobs I don’t want in places I don’t live, in explicit contravention of the terms of use [*] of that document — here’s a tip.

Don’t compound the problem by asking me to resend the document in bloody Microsoft Word format. FFS.

([*]: Those terms were, of course, added in an attempt to stem the tide of recruiter spam. Thanks to Colm MacCarthaigh for the idea…)

links for 2006-05-18

Bebo’s “Irish Invasion”

Reading this post at Piaras Kelly’s blog, I was struck by something — I never realised quite how bizarre the situation with Bebo is. If you check out the Google Trends ‘country’ tab, Ireland is the only country listed — meaning that search volume for “bebo” is infinitesimal, by comparison, elsewhere! (Update: Ireland was the only country listed, because the URL used limited it to Ireland only. However, the point is still valid when other countries are included, too ;)

It is also destroying Myspace as a search term on the Irish internet. (Update: also fixed)

As a US-based company, they must be mystified by all this attention — the Brazilian invasion of Orkut has nothing on this ;)

I’ll recycle a comment I made on Joe Drumgoole’s weblog as to why this happened:

My theory is that social networking systems, like Bebo, MySpace, LinkedIn, Friendster, Tribe.net, Orkut, Facebook etc. have all developed their own emergent specialisations. These are entirely driven by their users — although the sites can attempt to push or pull in certain directions (such as Friendster banning ‘non-person’ accounts), fundamentally the users will drive it. All of those sites have massively different user populations; Tribe has the Burning Man crowd, Friendster the daters, Orkut the Brazilians, etc.

Next, I think kids of school age form a set of small cliques. They don’t want to appear cool to friends thousands of miles away on the internet; they want to appear cool to their peer group in their local school. So all it takes is a group of influential ‘tastemakers’ — the alpha males and females in a year — to go onto Bebo, and it becomes the site for a certain school; given enough of that, it’ll spread to other schools, and soon Bebo becomes the SNS for the Irish school system. In other words, Irish kids couldn’t care less what US kids think of them; they want to be cool locally.

Also I think MySpace has a similar problem to Orkut — it’s already ‘owned’ by a population somewhere else, who are talking about stuff that makes little sense to Irish teenagers. As a result, it’s not being used as a social system here in Ireland; instead, it’s just used by musicians who want a cheap place to host a few tracks without having to set up their own website.

(Aside: part of the latter is driven by clueless local press coverage of the Arctic Monkeys — they have latched onto their success, put the cart before the horse, and decided that they were somehow ‘made’ by hosting music on MySpace, rather than by the attention of their fans. duh!)

links for 2006-05-17

5 Years of taint.org

Five years ago, on 15 May 2001, I started writing this weblog.

The subject matter started with a forward of something odd from the Forteana list — ‘Why Finns are sick of illnesses named after them’. I started the weblog to reduce the amount of forwards I was passing on by email to other groups — hence the preponderance of forteana posts early on.

Nowadays, by contrast, I try to write original ramblings^Wresearch for the main part of the site, and the occasional “fresh bits” I unearth elsewhere are kept separate, posted to the link-blog at del.icio.us/jm.

However, the real reason I started the thing was to act as an experiment in using WebMake as a blog platform — at least, that was the excuse. It worked quite successfully, for what it’s worth. But in mid-August 2005 I finally accepted that there weren’t enough hours in the day to maintain a weblogging CMS and its templates on top of everything else, that I didn’t really need to test WebMake’s abilities any more, and I switched to WordPress. I’m glad I did; WP is a great piece of software.

So what’s been the biggest hit on taint.org, by far? Here it is: http://taint.org/xfer/2004/kittens.jpg . Lots and lots of Google Image referrers, MySpace hotlinkers, etc. etc. ;) It’s a top hit for a GIS search for [kittens], I think.

Random stats, based on April’s logs:

  • The RSS 2.0 feed (the default) received about 81247 hits during April, the Atom feed 9921, and the RSS 1.0 rendering 7795. Format-wars-wise, that suggests people just use the default. ;)
  • Assuming the RSS reader apps average out to 1 HTTP GET every 30 minutes (as Bloglines and Apple’s reader do), that works out at roughly (98963 total feed hits / (30 days * 24 hours * 2 polls per hour)) = 68 subscribers.
  • In terms of the old style browser-using readership — there were 44926 hits on the front page using web browsers.
  • AWStats claims 2700 visits per day, from around 33000 visitors per month. I find the latter figure hard to believe.

After the front page and the feeds, the scraped RSS feeds at http://taint.org/scraped/ come second, Threadless beating out Perry Bible Fellowship by a little bit.

Top stories last month, based on hits:

  • http://taint.org/2006/04/29/230814a.html — Single-Letter Google Hits
  • http://taint.org/2006/01/20/220239a.html — the SweetheartsConnection.com Scam (still attracting comments from scammees!)
  • http://taint.org/2004/04/15/033025a.html — really outdated stats on GMail’s spam filtering accuracy
  • http://taint.org/2006/04/20/213624a.html — Automatically Invoking screen(1) on Remote Logins
  • http://taint.org/2006/04/15/134751a.html — Google Calendar
  • http://taint.org/2006/04/03/121837a.html — A Gotcha With perl’s “each()”
  • http://taint.org/2005/08/06/024026a.html — The Life of a SpamAssassin Rule
  • http://taint.org/2006/04/21/133432a.html — Phishing and Inept Banks
  • http://taint.org/2006/04/06/210519a.html — RSS Feeds for Events in Dublin
  • http://taint.org/2006/04/13/140841a.html — BT DSL’s Daily Disconnects

Technorati says there are 514 links from 105 sites. I still don’t know what the hell that means. ;)

Update: I’ve remembered that, before I started blogging at taint.org, I kept a diary at Advogato, which dates all the way back to March 2000!

Also, here are some pretty graphs from the graph-top-referers script:

The several slashdottings and a Boing Boinging are quite clear ;)

links for 2006-05-12

Link-blog Networking

Cool — del.icio.us just added a feature whereby you can now see who has you in their network, and, of course, you can further view their networks and see who’s in them.

This’d be great to produce social-network graphs, although I daresay Joshua mightn’t be so keen on the spidering load. ;) I’ve optimistically requested some form of dump, anyway.

The social networking aspect of link collection and link-blogging via del.icio.us is emerging nicely; I’m keen to see what’s next in the pipeline.

A few interesting things:

  • Almost everyone who’s using del.icio.us seriously for link collection — ie. applying some quality control thresholds, and bothering to write one-line descriptions, at least — has filled out their ‘network’ by now.

  • It’d be useful to have “groups”, so that we could assert things like “jm, boogah, n0wak, negatendo, tweebiscuit, leonardr, muckster and torrez form a group”. I’m sure that’d provide useful info, although it could probably be inferred anyway. (People are attempting to hack it by using a shared tag on all their postings, like the “irishblogs” tag, but that’s an awful misuse of tagging in my opinion ;)

  • Also, it’ll be interesting to see what’ll happen once Google Co-op figures out a way to incorporate the del.icio.us network data. To be honest, I’m very surprised it wasn’t already in there — it seems like a no-brainer… maybe some Y!/G corporate rivalry is getting in the way.

Anyway, in the meantime it’s producing lots of good fodder for my SpicyLinks feed.

SpicyLinks is an implementation of something that I mentioned in a comment on this weblog entry, regarding future methods of reading weblogs; in essence, it’s an automated blog aggregation summariser. It reads other people’s link-blogs, so I don’t have to, and reports the stuff that proves popular in my personal collection of sources.
(Credit where due: HotLinks provided much of the inspiration, but doesn’t support personalisation, hence the reimplementation.)

SpicyLinks is similar to Populicious, but that app really misses the point, in my opinion. I don’t particularly want to know what everyone is pointing at; I want to know what a selected set of trusted sources (with good taste!) are pointing at.

This aggregation is pretty similar to the del.icio.us ‘network’ feed, but with much lower volume, and a higher signal/noise ratio, attained by dropping the ‘one-off’ items that only one person is pointing at. Initially, that may seem like a major failure, since you miss the ‘fresh bits’ — but as long as you’ve got the right people in your source network, it actually works very well.
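
As a toy illustration of that thresholding (this is not SpicyLinks itself; the feed list and the ‘two or more sources’ cutoff are just assumptions), something like this does most of the work:

#!/usr/bin/perl
# toy link-blog aggregator: report only URLs that 2 or more sources link to;
# not SpicyLinks itself, just an illustration of the thresholding idea
use strict;
use warnings;
use LWP::UserAgent;

my @feeds = @ARGV;    # e.g. http://del.icio.us/rss/someuser ...
my $ua = LWP::UserAgent->new;
my %count;

for my $feed (@feeds) {
  my $resp = $ua->get($feed);
  next unless $resp->is_success;

  # crude <link> extraction; good enough for link-blog RSS feeds
  my %seen;
  for my $url ($resp->decoded_content =~ m{<link>\s*(.*?)\s*</link>}gs) {
    next if $seen{$url}++;    # count each source at most once per URL
    $count{$url}++;
  }
}

# drop the 'one-off' items that only a single source pointed at
for my $url (sort { $count{$b} <=> $count{$a} } keys %count) {
  print "$count{$url}\t$url\n" if $count{$url} >= 2;
}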

It’d be great if this was one of the features implemented in the del.icio.us ‘network’ system…

links for 2006-05-11

links for 2006-05-10

links for 2006-05-09

Script: new-referrer-rss

new-referrer-rss.pl – generate RSS feed of new referrer URLs from access_log

SYNOPSIS

new-referrer-rss nameofsite [source ...] > new-referrers.xml

DESCRIPTION

Given the name of a web site, and a selection of Apache combined log format ‘access_log’ files containing referrer URL data, this will generate an RSS feed containing the latest referrers.

The script should be run periodically with ‘fresh’ access_log data, from cron.
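
The script itself isn’t reproduced here, but a stripped-down version might look something like the following. This is a sketch only: the state file used to remember already-reported referrers, and the exact RSS output, are assumptions.

#!/usr/bin/perl
# stripped-down sketch of a new-referrer RSS generator; not the original
# script -- the ~/.new-referrers.seen state file is an assumption
use strict;
use warnings;

my ($site, @logs) = @ARGV;
die "usage: $0 nameofsite access_log ...\n" unless $site && @logs;

# load the set of referrers we've already reported
my $state = "$ENV{HOME}/.new-referrers.seen";
my %seen;
if (open my $fh, '<', $state) {
  while (<$fh>) { chomp; $seen{$_} = 1 }
  close $fh;
}

# scan the combined-format access_logs for referrers we haven't seen yet
my @new;
for my $log (@logs) {
  open my $fh, '<', $log or next;
  while (<$fh>) {
    next unless /"([^"]*)"\s+"[^"]*"\s*$/;   # referrer: second-to-last quoted field
    my $ref = $1;
    next if $ref eq '-' || $ref =~ /\Q$site\E/i;   # skip empty and self-referrers
    next if $seen{$ref}++;
    push @new, $ref;
  }
  close $fh;
}

# remember the new ones for the next cron run
if (@new) {
  if (open my $fh, '>>', $state) {
    print $fh "$_\n" for @new;
    close $fh;
  }
}

# emit a minimal RSS 2.0 feed of the newly-seen referrers on stdout
print qq{<?xml version="1.0"?>\n<rss version="2.0"><channel>\n};
print qq{<title>New referrers for $site</title>\n};
print qq{<link>http://$site/</link><description>new referrer URLs</description>\n};
for my $ref (@new) {
  (my $esc = $ref) =~ s/&/&amp;/g;
  $esc =~ s/</&lt;/g;
  $esc =~ s/>/&gt;/g;
  print qq{<item><title>$esc</title><link>$esc</link></item>\n};
}
print qq{</channel></rss>\n};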

Todd Underwood on BlueSecurity DDoS

Renesys Blog: The Bluesecurity Fiasco — in which Todd Underwood, CSO for Renesys Corporation, applies some real-world knowledge of how the internet works to the “timeline of events” press release, issued by BlueSecurity as part of their ongoing PR about the DDoS.

Judging by the comments at Slashdot, this really needs to be more widely read.

Here are some highlights:

The timeline from BlueSecurity […] is frustratingly vague. It uses phrases like ‘tampering with the Internet backbone using a technique called “Blackhole Filtering”.’ As Thomas Pogge, a philosophy professor of mine, used to say: that’s not even wrong yet. There is no “Internet backbone”, there is no technique known as “Blackhole Filtering”, and blackhole routing is not normally described as tampering. So the whole explanation is nonsense. […] Let’s clear one thing up for the press and everyone else: this event just wasn’t that interesting. The attack against bluesecurity was a run-of-the-mill denial of service attack.

His conclusion:

I believe that the PR engine from BS is in overdrive spinning this event as fast as they can. But the concrete facts being put out by them simply do not add up. In the process they seem to be doing two things: 1) trying to imply or state that someone at UUnet was bribed by a spammer. This is simply ridiculous. I know many of the people who work for UUnet and they are honest, hardworking and extraordinarily clever people. They would not be crooked, or stupid, enough to do such a thing and if they were, they would have been trivially caught by change-management procedures. Moreover, such a change at UUnet (or BTN) wouldn’t have caused the event BS claims to have witnessed anyway. Additionally, 2) BS is trying to deflect attention from the damage that they caused at Six Apart. It would be much better if they could just claim ignorance of the DOS, apologize and move on. I recognize that that isn’t going to happen, but it sure would make this whole thing easier to handle.

Well said.

Of course, this is pretty much immaterial — the people who are using Blue Frog, and vocally supporting Blue Security, don’t really care what happened. All they care about is that someone is taking some kind of direct action against spammers, in some way or another, and if there’s a little “friendly fire” and some bending of the truth, why, this is a war! What, do you support the spammers?

It’s disappointing — the amount of disinformation being successfully pumped out (and accepted!) on this story is massive.

Outside My Window Right Now

Bubba, now safely back in Dublin after his 8000-mile flight from LAX, is getting back into exploring his old manor.

Here he is, ignoring a very brave magpie. Judging by the way the magpie was brazenly hopping around him, cawing, and the way that Bubba was ignoring him, I suspect there may be a nest nearby….

links for 2006-05-04

London’s Oyster RFID card to become a full cashless payment system

Apparently, Transport For London are planning ‘e-money’ trials based on their remotely-readable Oyster RFID cards.

Combine that with the talk Kevin Mahaffey of Flexilis gave at Black Hat last year, where he demonstrated apparatus to extend RFID read range from 4-6 inches to approximately 50 feet, and things could get messy. ;)

The slides for that talk are available here (PDF); slide 20 specifically mentions the Hong Kong “Octopus” cashless-payment card.

links for 2006-05-03

Blue Frog List Leaked?

Blue Frog is an anti-spam service from Blue Security, a company which operates a “Do Not Email” list on the (optimistic) basis that spammers will vet their lists against it.

Reportedly, it’s been compromised. If this is true, I’m not surprised — as Dr. Aviel Rubin’s report to the FTC of May 2004 regarding a Do-Not-Email list notes:

The scrubbing approach [to running a D-N-E list] requires that a list of live email addresses exist. While the party owning that list may be well intentioned, it is unlikely that such a valuable list would not leak out. History is replete with insider attacks, as well as external break-ins to highly sensitive sites, such as the Pentagon computers. The Do Not Email Registry represents the kind of prize that attracts hackers. In this case, the prize has monetary value as well. Once the list is exposed, there is no way to undo it.

Also, it’s almost inevitable:

If this service were running for some time, it is more likely than not that the plaintext addresses would leak at some point, given the history of computer security incidents.

Update: it appears, according to this white paper, that the Blue Frog “Do Not Intrude” list is hashed, rather than plain-text. Rubin’s advice still applies:

Without hashing, a compromise of the registry database results in exposure of all of the registered email addresses. This is a total disaster. However, even exposure of a hashed list is a catastrophe. A spammer with a copy of a hashed list of email addresses is able to find out, for any email address, if the address is in the registry. The attacker simply hashes a candidate email address and sees if the hashed value is in the list. This is very powerful. […]

Hashing provides absolutely no security against a marketer who obtains a scrubbed list and uses that to sell the addresses that were scrubbed by the registry. Whether or not the list is hashed has no impact on a malicious marketer in the scrubbing approach.
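
To spell out the attack Rubin describes, here is a tiny sketch; the choice of MD5, and the file formats, are assumptions made purely for illustration:

#!/usr/bin/perl
# sketch of the dictionary attack on a leaked *hashed* do-not-email list;
# MD5 and the file formats are assumptions for illustration only
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

my ($hashed_list, $candidates) = @ARGV;
die "usage: $0 hashed_list candidate_addresses\n"
  unless $hashed_list && $candidates;

# load the leaked list of hashed registry entries
my %registered;
open my $hf, '<', $hashed_list or die "$hashed_list: $!";
while (<$hf>) { chomp; $registered{lc $_} = 1 }
close $hf;

# test every address the spammer already holds against the hashed list
open my $cf, '<', $candidates or die "$candidates: $!";
while (my $addr = <$cf>) {
  chomp $addr;
  print "$addr is on the do-not-email list\n"
    if $registered{ md5_hex(lc $addr) };
}
close $cf;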