At the other end of the spectrum is the impact this has had at the executive level. I don’t think there was ever real doubt that open source licenses were legal, enforceable documents; but I think the SCO lawsuit has shown that what felt like a solid case against Linux foundered not just on technical grounds (the claims were false) but because anyone trying to take on something with as much momentum as Linux is nearly guaranteed to fail. The array of resources, be they technical, archival, legal, etc., available to fight these suits was more than anyone could have predicted. It’s a good rebuke to the cynical but widespread notion that all it takes is a big pot of gold to litigate your competition out of existence or otherwise win a legal challenge. Good did prevail in the end. Hopefully it won’t make us too cocky, because the next challenge could be much harder to fight.
Q. Are the questions raised about the GNU General Public License likely to lead to significant changes in how open source software is created, improved and distributed?
A. If you’re referring to SCO’s challenges, I think it’s becoming clear that it’s all been hogwash. I suspect the claims that the GPL “violates the U.S. Constitution” will get recorded in some historical analysis of corporate Tourette’s syndrome. While there are slightly different interpretations of the GPL, they vary only by nuance; the basics are still well understood, and can be explained clearly enough that the FSF still hasn’t needed to resort to a lawsuit to compel compliance when they’ve been notified of a company possibly violating it. That may change – there may yet be another company interested in challenging it, or v3 of the GPL may shift the boundary between acceptable and unacceptable behavior enough to cause people to reconsider whether the GPL is right for them. But, today, it stands up pretty well.
On the other hand, I am seeing more and more people coming around to the Apache license or the BSD/MIT licenses. I see people buying the premise that you don’t need to compel improvements or bug fixes to also be released under the GPL – it’s better to entice a community into forming and sharing than to force unwilling participants to do so. The carrot vs. the stick, as I’ve said in a couple of circumstances. I want to see as many people using Open Source software as we can, directly or indirectly, embedded or up-front, and a license like Apache or BSD or MIT does the most for that; my premise is that a larger user base will necessarily mean a larger developer base.
Q. Tech stocks are posting gains on Wall Street, and there’s lots of buzz about Google and other upcoming Internet IPOs. As someone who’s been through the ups and downs of the Internet economy, what do you make of the current tech recovery and its sustainability?
A. Google, Salesforce.com, and other recent IPO filings have certainly caused people to ask whether it’s 1999 all over again. In some ways you’re seeing some of the impact here again in the Bay Area, where it’s getting harder to hire quality people, and rents are starting to go up again as well. So there’s the tendency to worry that this is just the pendulum swinging back towards “irrational exuberance.”
However, I think that so far, the companies that have filed all have much more credibility than the companies going out in 1999. It’s no longer tolerable to see a company IPO that isn’t already profitable, in a space that is potentially huge and that it has only started to grab.
Today’s IPO candidates appear to have some real justification for going public. If we start to see companies going out who don’t have a real revenue stream, who aren’t taking in more money than they’re spending, or who don’t have much of a business plan, then we can start to call it irrational again. But one would hope that today’s investors still have enough scars from 2000/2001 to avoid a repeat of the worst of it.
Q. Bring us up to speed on recent developments at CollabNet, which you founded with O’Reilly & Associates, with the goal of using open source methodology on corporate projects. How has the company fared through the tech recession, and how has it evolved?
A. We’re doing quite well today, but it’s been a hard road to get here. In 2001, right after 9/11, a series of orders cancelled at the last minute (after months of selling, negotiating, and contract writing) hit us hard, though we were able to keep most of our existing customers. In fact, most added more users, so our revenue kept climbing slowly.
That’s one of the beauties of the managed-services model: you set up an agreement, customers pay monthly based on usage with some baseline, and so long as you live up to your service level agreements and continue to improve the product over time, your revenue numbers become dependable and predictable. So this model *saved* us during the downturn.
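To make that arithmetic concrete, here’s a minimal sketch of baseline-plus-usage billing in Python. The shape of the contracts and all the figures are purely hypothetical, not CollabNet’s actual pricing:

    # A toy model of managed-services billing: fixed baseline plus per-user usage.
    # All figures are made up for illustration.
    def monthly_revenue(contracts):
        """Each contract pays a fixed monthly baseline plus a per-user charge."""
        return sum(baseline + users * per_user
                   for baseline, users, per_user in contracts)

    # (baseline $/month, active users, $/user/month) for three imagined customers
    contracts = [(2000, 50, 40), (5000, 200, 35), (1000, 10, 50)]
    print(monthly_revenue(contracts))  # 17500 – and it recurs every month

The point is that next month’s revenue is a simple function of contracts you already hold, which is exactly what made it dependable during the downturn.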
Unfortunately, we had to cut staff to make this happen. In 2001 we took the headcount down by a third. At the end of 2002 we raised another round of financing to last us through to where revenue would exceed expenses. At the beginning of 2003, there was much discussion among the executive staff about outsourcing and/or offshoring. We had a dedicated and productive engineering staff in the U.S., but the amount of stuff we *wanted* to do was huge – and customers were demanding new features constantly. I was skeptical of the model where you hand someone a spec and magically they write code for you. While looking into this, we met with a company named Enlite Technologies, which had a collaborative project management tool for the electronics-design market and the majority of its engineers in Chennai, India. We were considering outsourcing some work to them, but I really liked the founder (Gopinath Ganapathy) and the team he’d formed, and I wanted something much closer and more, er, collaborative – so we decided to merge. Our products were complementary, they had a great team in Chennai, and I figured it was time for us to become our own best use case, showing how our product could be used to build worldwide engineering teams, as many of our customers had done.
Since then, we’ve integrated the two teams very tightly. Engineers in each location are spread across the combined codebases, and they know each other on a first-name basis. We were the subject of a Salon article about this. No doubt the topic is controversial, and there are huge challenges to making an offshoring or outsourcing model actually work.
The open source model has a lot to do with making that possible. The principle of basing as much communication as possible on email or other mediated formats does a lot to bring down the barriers of time, distance, and culture. The transparency implicit in most open source development is a good way to build trust between teams and to make it efficient to debug someone else’s code. The mindset of constantly questioning, rebutting, and investigating is a healthy thing in engineering, yet is sometimes foreign to other cultures; again, open source leads the way in recognizing those kinds of activities as legitimate forms of technical discourse. So open source is still changing the world, but perhaps in an unexpected way.
Meanwhile, we’ve not forgotten our open source roots. We chose a hybrid model (like most software companies who contribute to open source projects) whereby much of the underpinning is stuff we give away, and we sell the layer on top that enterprises care about. Subversion has been our high note for the last two years in this space – it hit a 1.0 release in February to a lot of acclaim. As we move “up the stack” in what we do, look for possibly more projects in the future.
Q. Open source software is widely used on the web, but has been slower to make its way to the desktop. Is that likely to change soon?
A. I think it’s happening much faster than “slowly,” particularly if one looks outside the U.S. and Europe, and especially at the government and educational markets. The cost advantages of open source software are just the start of the reasons the international community is paying attention; the mature state of internationalization and localization in all the major components of the desktop has played a big role. Another big factor was China joining the WTO, and therefore no longer being able to be as flippant about pirating Windows as it perhaps was before.
The worldwide political situation also appears to be motivating companies and institutions outside the U.S. and Europe to look at ways to reduce the dependency on U.S. and European commercial software providers. I have a $1 bet with a Microsoft employee that says that their international revenue from MS Office in 2006 will be half of that in 2005, thanks nearly entirely to OpenOffice (which runs fine on Windows too, I might add). So I’m very bullish on desktop Linux outside the U.S. and Europe accelerating very quickly.
Within the U.S. and Europe, it’ll take longer; Novell, Sun, and others are pretty aggressively selling corporate enterprises on the idea of enterprise-wide Linux seats and making good cases for it; I think we’ll see some noteworthy deals soon. The consumers? I’m not sure. I think it’ll still be a while.
Many in the community are worried about XAML, as Miguel de Icaza was in your recent interview with him. That worries me too, and I think the Open Source community needs something equivalent to counter it in the same time frame. But developers don’t tend to jump on the latest-and-greatest; new paradigms like that take eons to roll out, so we’ll have a good shot at it, especially if the community works together on it. The good news is that once we’re there, it won’t really “matter” whether the operating system underneath is Windows, Mac, or Linux. The objective here isn’t to “eliminate MS”; it should be to “keep the future open.” I’m bullish on our ability to do that.
Q. Corporate IT departments tend to want to maintain control over costs and data at the same time, which is a challenge for application service providers. Is the ASP model gaining traction?
A. Good question. That’s our delivery model for our suite of developer tools, and we do have to make the case that our ability to store and manage the application and your data is cheaper and more reliable than what a company’s own IT department can provide. If you can have a conversation with the right person and talk about specifics, the cost advantages are overwhelming, and most IT shops can’t even deliver 95% uptime, let alone 99% (which is our minimum contracted guarantee, though our actual uptimes are usually 100%). Our operational infrastructure has been audited by some pretty strict customers, and we’re able to do all the usual VPN/leased line/SSL kinds of things people expect and need. Most bandwidth concerns go away when we show that our data center tends to have better latency and throughput to a company’s branch offices than they themselves have to each other, which is the typical case.
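To put those percentages in perspective, here’s a quick back-of-the-envelope calculation (a Python sketch, approximating a month as 30 days):

    # How much downtime per month each uptime level actually permits.
    HOURS_PER_MONTH = 30 * 24  # ~720 hours

    for uptime in (0.95, 0.99, 0.999):
        downtime = HOURS_PER_MONTH * (1 - uptime)
        print(f"{uptime:.1%} uptime allows ~{downtime:.1f} hours of downtime/month")

    # Output:
    # 95.0% uptime allows ~36.0 hours of downtime/month
    # 99.0% uptime allows ~7.2 hours of downtime/month
    # 99.9% uptime allows ~0.7 hours of downtime/month

A 95% shop can be dark for nearly a full work-week every month; at 99% it’s well under a day.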
What’s hard to counter is the emotional argument around where the data sits. We counter that by using off-site bonded tape-backup storage companies, so that if we were to disappear one day the customer can still get their data. We also show them how to export their data from the various tools, if they want to keep a local copy. We use bog-standard database formats, and CVS’s data format is obviously not proprietary, nor is Subversion’s (a big part of the justification for making SVN open source). So there are ways to answer the concern, it seems. I can say that the percentage of sales opportunities that disappear because we don’t have a version customers can run on their own has declined significantly over the last few years. And when Salesforce.com IPOs, they’ll answer these questions for the market, too.
Q. Back in 2000, in an e-mail exchange with Dave Winer, you wrote that “The world needs less software … It is the only way that the software world can attack complexity, which is the number one problem with software today.” Four years down the road, how would you assess the progress that’s been made? Do we still need less software?
A. I always worry in these interviews that my words will come back to haunt me! I think we’ve done very poorly on this mark, frankly. Look at the state of Wiki software, for example. You’ve got dozens and dozens of Wiki applications, written in lots of different languages, and while some of them are really good, it “feels” like some of them could be *great* if a couple of them just got together and worked as a larger team. Of course, this doesn’t seem like a problem to the programmers – I bet most of these started as small projects that grew very quickly, whose authors felt it would be easier to start their own than to figure out how someone else’s worked. But this is a disaster for the end users, who feel obligated to figure out the differences between these applications before deciding which one to install. Given the usual paucity of documentation, that gets painful really quickly.
Not every space is this broken – I’m amazed at how well the OpenOffice, Gnome, and Mono communities are able to work together on projects and avoid this incoherence problem. We do fairly well on this in Apache, though there are a few examples of overlapping projects where the developer communities are split more for personality reasons than functionality-related reasons.
I suppose sorting through this all is an opportunity for open source businesses to get started and provide value, deciding which apps are best of breed, improving them, integrating them … and maybe that’s how this plays out in the long run, where the Open Source community essentially becomes the wild and wacky and non-proprietary R&D laboratory for the world’s software services companies, who then reduce the chaos into something suitable for end-users.
While I still sense tragedy in what feels to an outsider like duplicated effort, if you look at the whole software market as an ecology ruled by Darwinian evolutionary rules, then this diversity could be seen as a healthy thing, just like the diversity of the island finches Darwin wrote about. But that diversity has to be sustainable: the divergent populations need to be able to feed themselves and procreate. So an open source ecosystem with lots of overlapping but *sustainable* projects probably isn’t too bad. One where every project gets about halfway there, then development stops because it’s “good enough” and everyone moves on, seems lame to me.
Q. You’ve worked with some of the better-known projects and companies of the Internet era. Which has been the most fun, and which has been the most satisfying?
A. Oh man, that’s like asking a parent which child she or he likes the most. Not fair! In every job, be it Wired, Organic, C2Net, or CollabNet, I’ve been lucky enough to feel like I was doing something that in some way, large or small, could change the world. In every job I’ve been able to meet really cool and really wacky people all over the map, from lots of different fields. I keep finding new things that motivate me in any situation; I don’t think I’d allow myself to get that bored. I think I’m proudest of all of CollabNet, and the impact we’re having – and are going to have – in helping people (and companies and institutions) find the practical, pragmatic, but deeply revolutionary advantages of changing how they think about software and community.