Tag: open-source

OSSBarCamp this weekend

It’s two days until OSSBarCamp, a free open-source-focussed Bar Camp unconference at Kevin Street DIT, this Saturday. I’m looking forward to it — although unfortunately I missed the boat on giving a talk. (Unlike the traditional Bar Camp model, this is using a pre-booked talk system.)

Particularly if you’re working with open source in Ireland, you should come along!

I have high hopes for John Looney’s discussion of cloud computing and how it interacts with open source. Let’s hope he’s not too Google-biased in his definition of “cloud computing”. ;)

Also of interest — Fintan Boyle’s “An Introduction To Developing With Flex”. To be honest, I hadn’t even realised that Adobe Flex was now open source. Cool.

Closed phish data costing $326mm per year

Richard Clayton posted a very interesting article over at Light Blue Touchpaper; he notes:

Tyler Moore and I are presenting another one of our academic phishing papers today at the Anti-Phishing Working Group’s Third eCrime Researchers Summit here in Atlanta, Georgia. The paper “The consequence of non-cooperation in the fight against phishing” (pre-proceedings version here) goes some way to explaining anomalies we found in our previous analysis of phishing website lifetimes. The “take-down” companies reckon to get phishing websites removed within a few hours, whereas our measurements show that the average lifetimes are a few days.

When we examined our data […] we found that we were receiving “feeds” of phishing website URLs from several different sources — and the “take-down” companies that were passing the data to us were not passing the data to each other.

So it often occurs that take-down company A knows about a phishing website targeting a particular bank, but take-down company B is ignorant of its existence. If it is company B that has the contract for removing sites for that bank then, since they don’t know the website exists, they take no action and the site stays up.

Since we were receiving data feeds from both company A and company B, we knew the site existed and we measured its lifetime — which is much extended. In fact, it’s somewhat of a mystery why it is removed at all! Our best guess is that reports made directly to ISPs trigger removal.

They go on to estimate that ‘an extra $326 million per annum is currently being put at risk by the lack of data sharing.’

This is a classic example of how the proprietary mindset fails when it comes to dealing with abuse and criminal activity online. It would obviously be more useful for the public at large if the data were shared between organisations and published openly, but if you view your data feed as a key ingredient of your company’s proprietary “secret sauce” IP, you are not likely to publish and share it :(

The anti-phishing world appears to be full of this kind of stuff, disappointingly — probably because of the money-making opportunities available when providing services to big banks — but anti-spam isn’t free of it either.

Mark another one up for open source and open data…

(thanks to ryanr for the pic)

Links for 2008-07-22

ZSFA — I Want The Mutt Of Feed Readers: Zed recommends Newsbeuter. Must take a look.

We Want A Dead Simple Web Tablet For $200. Help Us Build It: having worked on a project to do just this, believe me, this is doomed. DOOMED.

Science Clouds: ‘compute cycles in the cloud for scientific communities .. allows you to provision customized compute nodes .. that you have full control over using a leasing model based on the Amazon’s EC2 service.’ Wonder if they’d like to give SA some time ;)

Firefox Download Evening

Happy Firefox Download Day — or rather, Firefox Download Evening!

It turns out that the “day” in question has been defined as a 24-hour period starting at 10am Pacific Time; rather than compensating for timezones around the world, they’ve simply picked one fixed, arbitrary window.

That’s 6pm Irish time, for example. At least I’m not one of the 57,000 Japanese pledgers, who’d be waiting up until 2am to kick off their download. It seems a little bizarre that so little leeway is provided for non-US downloaders, who are right now twiddling their thumbs, waiting, while their “day” passes.

Annoyingly, the main world record page simply says ‘the official date for the launch of Firefox 3 is June 17, 2008’ — no mention of a starting time or official timezone at all!

This is the top thread on their forum right now — in addition to the omission of an entire continent ;)

TypePad AntiSpam

TypePad AntiSpam looks pretty cool. I’ve been trying it out for the past week on taint.org and underseacommunity.com, with no false positives or false negatives so far (although, mind you, I don’t get much spam on those blogs anyway, fortunately). Both are WordPress blogs — I set up Akismet, got a TypePad API key, edited 3 lines in “wp-content/plugins/akismet/akismet.php”, and I was off.
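
Roughly speaking, the 3-line edit just repoints the Akismet plugin at TypePad’s Akismet-compatible API. Something along these lines should do it — a sketch only; the rest.akismet.com hostname and the api.antispam.typepad.com endpoint are what TypePad’s setup notes suggest, so double-check them against your copy of the plugin:

    # sketch only: repoint the stock Akismet plugin at TypePad AntiSpam's
    # Akismet-compatible API endpoint, keeping a backup of the original file
    cd wp-content/plugins/akismet
    sed -i.bak 's/rest\.akismet\.com/api.antispam.typepad.com/g' akismet.php

The TypePad key then goes in wherever the Akismet/WordPress.com key normally would.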

However, here’s the key bit, the bit I’m most excited about — /svn/antispam/trunk/, particularly the GPL v2 LICENSE file — a fully open source backend!

The backend is a perl app built on Gearman and memcached. It uses DSpam instead of SpamAssassin, but hey, you can’t have everything ;) Nice, clean-looking perl code, too. Here’s hoping I get some tuits RSN to get this installed locally…

BBC’s iPlayer — what a mess

I haven’t paid a whole lot of attention to the BBC’s “iPlayer” project, since, as a non-UK resident, I’m not allowed to use it anyway. But this interview at Groklaw with Mark Taylor, President of the UK Open Source Consortium, was really quite eye-opening. Here are some choice snippets.

On the management team’s Microsoft links:

The iPlayer is not what it claimed to be, it is built top-to-bottom on a Microsoft-only stack. The BBC management team who are responsible for the iPlayer are a checklist of senior employees from Microsoft who were involved with Windows Media. A gentleman called Erik Huggers who’s responsible for the iPlayer project in the BBC, his immediately previous job was director at Microsoft for Europe, Middle East & Africa responsible for Windows Media. He presided over the division of Windows Media when it was the subject of the European Commission’s antitrust case. He was the senior director responsible. He’s now shown up responsible for the iPlayer project.

On their attempts to bullshit the BBC Trust on the cross-platform issue:

In the consultations that the BBC Trust made, there were 10,000 responses from the public. And the overwhelming majority of them, over 80% — which is an unheard-of figure in these kind of things — said, we don’t like the platform. We don’t like it being single-platform. So it’s a big issue. And the BBC Trust said to us, “Why the vehemence? Why have people reacted this way?” And I explained the ‘Auntie’ analogy. It’s people don’t expect that from the BBC. It’s got this huge history of integrity, doing the right thing, standing up to bullies. (laughter) They’ve done this for a very long time. And people find that it’s surprising. And they said, “Yeah, but,” you know, the BBC guys said, “Well, trust us. This is going to be cross-platform.” And we said, “Well, how? It’s completely single-platform.” They say that, but we haven’t been able to find anyone who’s been able to explain how they’re going to achieve that at the moment, even though they’re entirely locked into one single platform.

(Aside: MS did this at one point with Internet Explorer — remember, there was some mystery team in Germany that supposedly had IE ported to Solaris, hence it qualified as ‘cross-platform’.)

On the architecture of the product:

Q: it’s a Verisign Kontiki architecture, it’s peer-to-peer, and in fact one of the more worrying aspects is that you have no control over your node. It loads at boot time under Windows, the BBC can use as much of your bandwidth as they please (laughter), in fact I think OFCOM … made some kind of estimate as to how many hundreds of millions of pounds that would cost everyone […]. There is a hidden directory called “My Deliveries” which pre-caches large preview files, it phones home to the Microsoft DRM servers of course, it logs all the iPlayer activity and errors with identifiers in an unencrypted file. Now, does this assessment agree with what you’ve looked at?

Mark Taylor: Yes.

Q: What are the privacy implications for an implementation like this?

Mark Taylor: Well, just briefly going back to the assessment thing, yes it does log precisely RSS and stuff like that and more importantly, anyone technically informed who’s had a look at it — even more importantly, the user’s assessment as well and — frankly horrified if you go and spend some time in the BBC iPlayer forums, it’s eye-opening to see the sheer horror of the users, some of them technically not — you know, relatively early-stage users — but when it gets explained to them by some of the longer-using users of it, it’s concentrated misery. (laughter)

[…]

it’s a remarkable thing with them as well, there’s a lot of pain going on in the user forums, and some of the main technical support questions in there are “how do I remove Kontiki from my computer?” See, it’s not just while iPlayer is running that Kontiki is going, it’s booted up. When the machine boots up, it runs in the background, and it’s eating people’s bandwidth all the time. (laughter) In the UK we still have massive amounts of people who’ve got bandwidth capping from their ISPs and we’ve got poor users on the online forums saying, “Well, my internet connection has just finished, my ISP tells me I’ve used up all of my bandwidth.”

Q: It uses up their quota, but they can’t throttle it, they can’t reduce it —

Mark Taylor: No, they can’t throttle it. […] It’s malware as well as spyware.

And to top this off, there’s a (frankly insane) budget of UKP 130,000,000 to build this — that’s $266,000,000 — for something that could be built better by just hiring the guys behind UKNova and simply negotiating with the rights-holders directly.

Holy crap. Talk about a technical disaster masquerading as a solution to a business problem…

Maximise value, not protection (fwd)

Here’s an excellent quote from the OpenGeoData weblog, really worth reproducing:

“We think the natural tendency is for producers to worry too much about protecting their intellectual property. The important thing is to maximise the value of your intellectual property, not to protect it for the sake of protection. If you lose a little of your property when you sell it or rent it, that’s just a cost of doing business, along with depreciation, inventory losses, and obsolescence.” — Information Rules, Carl Shapiro and Hal Varian, page 97.

Words to live by!

Web 2.0 and Open Source

A commenter at this post on Colm MacCarthaigh’s weblog writes:

I guess I still don’t understand how Open Source makes sense for the developers, economically. I understand how it makes sense for adapters like me, who take an app like Xoops or Gecko and customize it gently for a contract. Saves me hundreds of hours of labour. The down side of this is that the whole software industry is seeing a good deal of undercutting aimed at sales to small and medium sized commercial institutions.

Similarly, in the follow-up to the O’Reilly “web 2.0” trademark shitstorm, there have been quite a few comments along the lines of “it’s all hype anyway”.

I disagree with that assertion — and Joe Drumgoole has posted a great list of Web 2.0 vs Web 1.0 differentiators, which nails down some of the key ideas behind the new concepts in a clear set of one-liners.

Both open source software companies and “web 2.0” companies are based on new economic ideas about software and the internet. There’s still quite a lot of confusion, fear and doubt about both, I think.

Open Source

As I said in my comment at Colm’s weblog — open source is a network effect. If you think of the software market as a single buyer and seller, with the seller producing software and selling to the buyer, it doesn’t make sense.

But that’s not the real picture of a software market. If you expand the picture beyond that, to a more realistic picture of a larger community of all sorts of people at all levels, with various levels interacting in a more complex maze of conversation and transactions, open source creates new opportunities.

Here’s one example, speaking from experience. As the developer of SpamAssassin, open source made sense for me because I could never compete with the big companies any other way.

If I had been considering it in terms of me (the seller) and a single customer (the buyer), economically I could make a case for ‘proprietary SpamAssassin’ being viable — but that’s not the real situation; in reality there was me, the buyers, a few 800lb gorillas who could stomp all over any puny little underfunded Irish company I could put together, and quite a few other very smart people, whom I could never afford to employ, who were happy to help out on ‘open-source SpamAssassin’ for free.

Given this picture, I’m quite sure that I made the right choice by open sourcing my code. Since then, I’ve basically had a career in SpamAssassin; in other words, my open source product allowed me to earn income that I wouldn’t have had any other way.

It’s certainly not simple economics; it’s a risk, it’s complicated, and many people don’t believe it works — but it’s viable as an economic strategy for developers, in my experience. (I’m not sure how to make it work for an entire company, mind you, but for single developers it’s entirely viable.)

Web 2.0

Similarly — I feel some of the companies that have been tagged as “web 2.0” are using the core ideas of open source code, and applying them in other ways.

Consider Threadless, which encourages designers to make their designs available, essentially for free — the designer doesn’t get paid when their tee shirt is printed; they get entered into a contest to win prizes.

Or Upcoming.org, where event tracking is entirely user-contributed; there are no professional content writers scribbling reviews and leader text, just random people doing the same — for fun, wtf!

Or Flickr, where users upload their photos for free to create the social experience that is the site’s unique selling point.

In other words — these companies rely heavily on communities (or more correctly certain actors within the community) to produce part of the system — exactly as open source development relies on bottom-up community contribution to help out a little in places.

The alternative is the traditional “web 1.0” style; it’s where you’re Bill Gates in the late ’90s, running a commercial software company from the top down.

  • You have the “crown jewels” — your source code — and the “users” don’t get to see it; they just “use”.
  • Then they get to pay for upgrades to the next version.
  • If you deal with users, it’s via your sales “channels” and your tech support call centre.
  • User forums are certainly not to be encouraged, since it could be a PR nightmare if your users start getting together and talking about how buggy your products are.
  • Developers (er, I mean “engineers”) similarly can’t go talking to customers on those forums, since they’ll get distracted and give away competitive advantage by accidentally leaking secrets.
  • Anyway, the best PR is the stuff that your PR staff put out — if customers talk to engineers they’ll just get confused by the over-technical messages!

Yeah, so, good luck with that. I remember doing all that back in the ’90s, and it really wasn’t much fun being so bloody paranoid all the time ;)

(PS: The web 2.0 companies aren’t using all of the concepts of open source, of course — not all those web apps have their source code available for public reimplementation and cloning. I wish they were, but as I said, I can’t see how that’s entirely viable for every company. Not that it seems to stop the cloners, anyway. ;)

My ApacheCon Roundup

Back from ApacheCon!

I’ve got to say, I found it really useful this year. Last year, I was pretty new to the ASF, and found that my expectations of ApacheCon didn’t quite match reality; as a result it wasn’t exactly a rip-roaring success for me.

However, many details of how the ASF works — and how the conference itself works and is organised — are much clearer after you’ve spent some time lurking and absorbing practices in the meantime. (The visibility one gets into the process as a member of the ASF makes this a lot easier.)

Result: it was much more of a success for me this time around. Plenty of networking, putting faces to the names, hanging out, and discussing many aspects of our work.

The hackathon really worked out, too; while we didn’t produce a hell of a lot of code per se, it made for a good ‘developer summit’, and I think we established solid agreement on SpamAssassin’s short-term directions and goals. (Summary: rules, and faster.)

On top of that, I got to meet up with Colm MacCarthaigh and Cory Doctorow for discussion of Digital Rights Ireland. Looks like I’ll be spending a bit of time on that next year ;)

Finally: Solaris. On Monday night, I got to sit down with Daniel Price, one of the kernel engineers behind Solaris Zones, work through a quick demo of a bug I was running into with chroot(2) and zones on our rule-QA buildbot server, and watch as he visually traced it through the OpenSolaris kernel source on the web. From this — and from talking to Daniel — it’s pretty clear that things have changed at Sun. Pretty much the entire Solaris operating system is now a full-on open-source project; it’s not just a marketing gimmick. The source is up there on the web, that’s the source for the code they’re running now, and there are no half-assed ‘freeze it, cut out the good bits, and throw it over the wall’ fake-open-source tricks.

The concept of getting this level of access to Solaris source code and engineers would have blown my mind when I was Iona’s sysadmin back in the 1990s ;) I’m very impressed.

IFSO Seminar In Dublin

Passing this on for readers in Ireland — this sounds like an interesting event. From the FSFE-IE mailing list:

On the morning of Friday November 18th, IFSO is organising an event hosted by MEP Proinsias De Rossa about preventing software patents in the EU. Topics covered will be:

  • An analysis of the software patent directive;
  • a discussion of Free Software and computer security;
  • an introduction to IFSO/FSFE and their work;
  • the future of legislative obstacles to the development and distribution of software.

The event will be held in the European Parliament Office in Ireland, and spaces are limited. Participants are therefore asked to register their intent to attend. See here for more details.

OpenWRT vs Netgear MR814: no contest

Hardware: After a few weeks running OpenWRT on a Linksys WRT54G, here’s a status report.

Things that the new WRT54G running OpenWRT does a whole lot better than the Netgear MR814:

  • Baseline: obviously it doesn’t DDoS the University of Wisconsin, and it doesn’t lose the internet connection regularly, as noted in that prior post. I knew that already, though, so those aren’t really new wins.
  • It’s quite noticeably faster. I’ve seen it spike to double the old throughput rates, and it’s solid, too; less deviation in those rates.
  • It doesn’t break my work VPN. I wasn’t sure if it was the MR814 that was doing this, requiring an average of about 20 reconnects per day — now, I know it for a fact. I’ve had to reconnect my VPN connection about 4 times over the past week.
  • It doesn’t break the Gigafast UIC-741 USB wifi dongle I’m using on the MythTV box. Previously that would periodically disappear from the HAN. Again, I had this pegged as an issue with the driver for that; removing the MR814 from the equation has solved it, too, and it’s now running with 100% uptime so far.
  • It does traffic shaping with Wondershaper, so I can use interactive SSH, VNC, or remote desktop while downloading, even if it’s another machine on the HAN doing the download (see the sketch after this list).
  • It’s running linux — ssh’ing in, using ifconfig, and vi’ing shell scripts on my router is very, very nice.
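
For reference, the shaping setup is just the classic Wondershaper invocation; a minimal sketch, with an illustrative interface name and rates:

    # sketch: keep interactive traffic snappy by shaping the WAN interface
    # (vlan1 is assumed to be the WAN interface here; rates are in kbit/s,
    # set slightly below the real downlink/uplink speeds so the queue
    # builds on the router rather than at the ISP)
    wondershaper vlan1 2800 384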

Man, that MR814 was a piece of crud. ;) I can’t recommend OpenWRT enough…

Hackability as a selling point

Hardware: On my home network, I recently replaced my NetGear MR814 with a brand new Linksys WRT54G.

My top criteria for what hardware to buy for this job weren’t price, form factor, how pretty the hardware was, or even what features it had — instead, I bought it because it’s an extremely hackable router/NAT/AP platform. Thanks to a few dedicated reverse engineers, the WRT hardware can now be easily reflashed with a wide variety of alternative firmware distributions, including OpenWRT, a fully open-source distro that offers no UI beyond a command line.

Initially, I considered a few prettier UIs — HyperWRT, for example — since I didn’t want to have to spend days hacking on my router, of all things, looking stuff up in manuals, HOWTOs, and Google. In the end, I decided to give OpenWRT a spin first. I’m glad I did — it turned out to be a great decision.
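
(For anyone wondering what the reflash actually involves: the TFTP route is roughly the following — a sketch only, with an illustrative image filename, assuming the router’s boot_wait flag is enabled so the bootloader answers on 192.168.1.1 for a few seconds after power-on.)

    # sketch: push the OpenWRT image over TFTP during the boot window
    tftp 192.168.1.1
    tftp> binary
    tftp> put openwrt-wrt54g-squashfs.bin
    tftp> quit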

(There was one setup glitch, btw — OpenWRT now defaults to setting up WPA, but the documentation still claims that the default is no crypto, as it was previously.)

The flexibility is amazing; I can log in over SSH and run the iftop tool to see what’s going on on the network, which internal IPs are using how much bandwidth, how much bandwidth I’m really seeing going out the pipe, and get all sorts of low-level facts out of the device that I’d never see otherwise. I could even run a range of small servers directly on the router, if I wanted.
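
Concretely, that poking around looks something like this (a sketch — 192.168.1.1 and the br0 LAN bridge name are assumptions about this particular setup):

    # log in to the router and watch per-host bandwidth live
    ssh root@192.168.1.1
    iftop -i br0      # who is using how much bandwidth, right now
    ifconfig          # per-interface packet/byte counters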

Bonus: it’s rock solid. My NetGear device had a tendency to hang frequently, requiring a power cycle to fix; that bug went unfixed for nearly a year and a half, since NetGear had long since moved on to the next rev of cheapo home equipment and weren’t really bothering to support the MR814. I know this is cheap home equipment — which is why I was still muddling along with it — but that’s just ridiculous. None of that crap with the (similarly low-cost) WRT. OpenWRT also doesn’t contain code to DDoS NTP servers at the University of Wisconsin, which is a bonus, too. ;)

Sadly, I don’t think Cisco/Linksys realise how this hackability is making their market for them. In recent revisions of the product they’ve been plugging the security holes used to gain access to reflash the firmware (amazingly, you have to launch a remote command execution attack through an insecure CGI script!), turning off the ability to boot via TFTP, and gradually removing the ways to reflash the hardware. If they succeed, it appears the hackability market will have to find another low-cost router manufacturer to give our money to. (Update, June 2006: they have since split the product line into a reflashable Linux-based “L” model and a less hackable “S” model, so it appears they get this 100%. Great!)

Given that, it’s interesting to read this interview with Jack Kelliher of pcHDTV, a company making HDTV video capture cards:

Our market isn’t really the mass market. We were always targeting early adopters: videophiles, hobbyists, and students. Those groups already use Linux, and those are our customers.

Matthew Gast: The sort of people who buy Linksys APs to hack on the firmware?

Jack Kelliher: Exactly. The funny thing is that we completely underestimated the size of the market. When we were starting up the company, we went to the local Linux LUG and found out how many people were interested in video capture. Only about 2 percent were interested in video on Linux, so we thought we could sell 2,000 cards. (Laughs.) We’ve moved way beyond that!

Well worth a read. There’s some good stuff about ulterior motives for video card manufacturers to build MPEG decoding into their hardware, too:

The broadcast flag rules are conceptually simple. After the digital signal is demodulated, the video stream must be encrypted before it goes across a user accessible bus. User accessible is defined in an interesting way. Essentially, it’s any bus that a competent user with a soldering iron can get the data from. Video streams can only be decrypted right before the MPEG decode and playback to the monitor.

To support the broadcast flag, the video capture must have an encryptor, and the display card must have a decryptor. Because you can’t send the video stream across a user accessible bus, the display card needs to be a full MPEG decoder as well, so that unencrypted video never has to leave the card.

Matthew Gast: So the MPEG acceleration in most new video cards really isn’t really for my benefit? Is it to help the vendors comply with the broadcast flag?

Jack Kelliher: Not quite yet. Most video cards don’t have a full decoder, so they can’t really implement the broadcast flag. ATI and nVidia don’t have full decoders yet. They depend on some software support from the operating system, so they can’t really implement the broadcast flag. Via has a chipset with a full decoder, so it would be relatively easy for them to build the broadcast flag into that chipset.

Aha.
