In the course of our discussion of the proposed Mozilla CA certificate policy, Ian Grigg asked about the existing Mozilla policy on handling security bugs and how we tried to forge a compromise between people advocating full disclosure of security bugs and people who were opposed to it. (Ian was interested in this because he and Adam Shostack have been blogging on the “economics of disclosure.”) Looking back at the Google archives of the discussions we had, I found some material that I thought was worth revising, reprinting, and commenting upon, especially for people who are not aware of how the current Mozilla policy came to be.
First, some background: As you may recall, the Bugzilla bug tracking system used in the Mozilla project originated in the bug tracking system developed and used within Netscape by developers working on Netscape Communicator and other products. After the Mozilla project was created, the Netscape developers used the public Bugzilla system for bugs that were common to both Mozilla and Netscape Communicator. (Netscape continued to use its internal bug tracking system “BugSplat!” for bugs in non-Mozilla code.)
In order to support this dual use, the Bugzilla system had the ability to mark bugs as “Netscape Confidential” if they contained Netscape-proprietary information. Over time this mechanism was used to support other purposes, most notably to hide security-related bug reports that were deemed too sensitive to release to public view. Eventually people outside of Netscape became more aware of this practice, and a controversy erupted in March 2000, sparked by Mike Shaver’s post “Security bugs and disclosure” and including the inevitable Slashdot article and discussion.
My thoughts at the time were as noted below, and I stand by them today. (I’ve revised my original post slightly to bring the contents up to date, including referring to Firefox and Thunderbird as example applications.)
Some of the people on Slashdot and elsewhere focused on the issue of AOL/Netscape keeping bugs to itself, and saw this as just another example of AOL trying to control the Mozilla project for its own benefit. To my mind the true debate is not about whether AOL (or any other corporate contributor) should keep Mozilla-related security bugs to itself. The Mozilla project is a public open source project, and Bugzilla is a resource for that project as a whole, not just for AOL or anybody else. If the Bugzilla database is going to contain information that is not available to everyone in the Mozilla project then the justifications for imposing such restrictions have to be pretty compelling.
Security-related bugs are a case where I believe you can’t justify restricting Mozilla-related bug information to a single vendor or entity (whether AOL or Sun or IBM or Google or even the Mozilla Foundation). For example, another company basing their own products on Mozilla code has just as much claim as AOL or others to have access to Mozilla security bug information in order to protect and provide for their own customers. A similar argument would apply in the case of developers creating their own version of Mozilla for distribution—for example, if the MathML developers were to create and distribute a custom MathML-enabled version of Firefox to the mathematical community.
So I don’t think the real argument is about restricting access to security bugs only to AOL or other major corporate contributors. Rather I think the debate is about a) whether security bugs in Bugzilla should be fully public or restricted to some smaller group; and b) if restricted to a smaller group, how that group should be chosen. (Or to put it another, and I think better, way: how could any particular individual get themselves admitted to that smaller group?)
I also think some people misinterpreted Bruce Schneier’s remarks on publicizing vulnerabilities as his being in favor of full disclosure under all circumstances. Schneier wrote “In general, I am in favor of the full-disclosure movement,” but then later went on to write “I believe in giving the vendor advance notice” to allow them some reasonable amount of time to fix the problem. (Schneier’s idea of “reasonable” appears to be more than a week but at most a month.) So in my opinion Schneier can’t be represented as advocating for an absolute requirement that vendors fully and publicly disclose security bugs as soon as they receive reports of them; otherwise what would be the point of giving the vendor advance notice?
As I understand it, Schneier in effect was saying that the vendor of a software product is in a special position relative to anyone else, and is justified to some degree in concealing information about security-related bugs until they can be fixed. This is not a justification that allows concealing security bugs forever, but it does allow concealing them for some reasonable period.
The problem with applying Schneier’s argument in this case (and here we’re getting into what I think are the real issues) is that in the Mozilla project we’re not dealing with proprietary software supplied by a single identifiable vendor; we’re talking about an open source project where in effect anyone and everyone can potentially be a Mozilla “vendor.” In the proprietary world vendors are special because a) only they can fix bugs and b) only they are (ultimately) responsible for supporting users of their software. In the open source world anyone can potentially fix bugs, and anyone can distribute versions of the software to end users. So either there is no “vendor” in Schneier’s sense, or anyone can be a “vendor”; in either case it’s hard to see how Schneier’s idea of “giving the vendor advance notice” would apply.
The discussion in the previous paragraph leads directly to two different arguments for full disclosure of security bugs, arguments that in my opinion are reasonable and deserve to be addressed.
The first argument goes something like this: given that a) anyone can (potentially) fix security bugs in an open source project like Mozilla, and b) we collectively have an obligation to maximize the chances that security bugs will be fixed, it follows that we have an obligation to immediately and fully disclose information on security bugs to as many people as possible, because only in that way can we maximize the probability that the problems will be fixed.
I don’t accept that particular argument in the general case, because it doesn’t take into account the fact that with security bugs there are real risks, and that disclosing details of bugs can potentially increase those risks. These risks go beyond the software simply not working, or even simple denial-of-service attacks (for example, malicious people putting up web sites designed to crash Firefox); with major security bugs there is a risk that users’ personal data could be compromised and altered, and that their systems could be subverted for malicious purposes (e.g., through trojans).
If you expose details of such security bugs to more people you increase the probability that they will be fixed but you also increase the probability that they will be exploited by people previously unaware of the bugs. (And here I should say that I don’t accept as a general truth the statement that everyone who can and will exploit a bug already knows about it by the time it’s reported to the developers.) In my opinion those risks (of the bugs being exploited vs. not being fixed) need to be balanced; I can’t say exactly what the balance point is, but I believe it’s reasonable to assume that there’s some point in disclosure (below disclosing to everyone) beyond which you could significantly increase the risk of exploitation without significantly increasing the chance that the bug will be fixed.
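To make this concrete, here is a toy numerical sketch (in Python) of the kind of trade-off I have in mind. Every number and functional form in it is a made-up illustration, not a measurement of the Mozilla project; it simply assumes that each additional person told about a bug independently has some small chance of fixing it and some much smaller chance of leaking or exploiting it.

```python
# Toy model of the disclosure trade-off described above.
# All parameters are hypothetical illustrations, not real data.

def p_fixed(n, q=0.05):
    """Chance that at least one of n informed people produces a fix
    during the disclosure window, assuming each does so independently
    with probability q."""
    return 1 - (1 - q) ** n

def p_exploited(n, r=0.002):
    """Chance that widening the audience to n people leads to an
    exploit before a fix ships, again assuming independence."""
    return 1 - (1 - r) ** n

if __name__ == "__main__":
    print(f"{'audience':>8}  {'P(fixed)':>8}  {'P(exploited)':>12}")
    for n in (5, 20, 100, 1000, 100000):
        print(f"{n:>8}  {p_fixed(n):>8.3f}  {p_exploited(n):>12.3f}")
```

With these made-up parameters the chance of getting a fix is essentially saturated once a hundred or so capable people know about the bug, while the chance of exploitation keeps climbing as the audience grows toward “everyone”; that crossover is the balance point I’m referring to, even though in reality we can’t compute it.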
Of course, we don’t know what that point is exactly. We can’t say for certain, for example, that there is a group of exactly 10, or 20, or 100, or 1,000 Mozilla developers that is the optimum audience for Mozilla-related security bug information (i.e., because no one else outside that group is likely to fix those bugs). But in my opinion we still have an obligation to maximize the chances that bugs will be fixed. So if we accept the idea of limiting access to security bug information, then in my opinion we also have an obligation to a) ensure that people who have the ability to fix Mozilla-related security bugs are able to join the group without undue hassle, and b) put some sort of reasonable time limit on how long security bug information is not publicly disclosed. These two policies help ensure that the initial group includes the people most likely to be able to fix the bug, and that the bug will likely be fixed in any event even if the initial group is unable to do so.
The second argument for full disclosure goes as follows: There are system administrators and other people who are responsible for a user community that would be using Firefox, Thunderbird, and related software and who have the means, the knowledge, and the motivation to help fix Mozilla-related security problems. Don’t they have a reasonable claim to be able to view information on reported Mozilla-related security problems, arising from the responsibility that they owe to their users, and that we owe to them as representatives of those users?
I think the answer has to be, yes, they do have some claim to view those security bug reports. If you accept that argument, then one can go on to make the subsequent argument that, since anyone in the world could potentially be in the position of having some reasonable claim on seeing Mozilla-related security bug reports, the only justifiable policy is to make the bug reports fully public as soon as they are received into Bugzilla.
Again, I don’t believe that this argument for full disclosure is fully convincing. I believe that the population of sysadmins supporting Mozilla-related software, distributors of Mozilla-based products, and other people responsible for end users is a relatively small subset of the total Firefox/Thunderbird/etc. user population, and you can make a case for limiting information on security bugs to that subset. Of course, as with the case of people who can fix security bugs, we have no foolproof way of determining exactly who should be in that group. Since in my opinion we still have an obligation to sysadmins, Firefox/Thunderbird distributors, etc., we should take the same approach as we do for developers: provide some reasonable way for motivated people to become part of the “inner” group allowed to see security bug reports, and set some reasonable time limits on how long information is restricted to that group.
Dan Veditz gave another argument for less than full disclosure of security bug reports: that mandating full and immediate disclosure for security bug reports placed into Bugzilla was likely to encourage Mozilla vendors (including AOL, but also potentially others) to bypass the Bugzilla mechanisms for handling security-related bugs, by handling that information internally and not making it available to other interested parties in the Mozilla project.
I believe that it’s in everyone’s interest that Bugzilla be used as a common repository for bug information by all parties involved with Mozilla development. I also believe that when deciding on a policy you have to consider the likely consequences of adopting it. Even though mandating absolute full disclosure of security bugs can be justified by arguments I myself could potentially accept (e.g., the arguments I’ve given above), I believe that trying to force full disclosure is likely to lead to a situation where in practice we end up with less disclosure rather than more.
Therefore I’m willing to make the compromise of limiting full disclosure of security bugs in some way, as long as we follow the general guidelines mentioned above:
- Information on security bugs is not limited to any particular vendor.
- There is some reasonable way for people to apply and be approved for access to the information on the same basis as the others already “in the know.”
- There is some mechanism to make full public disclosure of the information after a reasonable amount of time.
I concluded my original post by writing “I’ll leave it to others to make more detailed proposals on how this might be accomplished.” As it happens, I ended up being one of the people called upon to create those more detailed proposals. Luckily Mitch Stoltz or Asa Dotzler (I can’t remember which) came up with a neat “policy hack” that eventually enabled a consensus policy to be created: allow the creation of confidential bug reports for security vulnerabilities, but allow those bug reports to be “opened up” by the person who originally reported the bug.
My job then was just to flesh out the proposed policy and provide justifications for it that other people could accept (or at least could potentially accept). The final policy bears some traces of those justifications, but for the full story you need to go back to the discussions themselves. (Unfortunately these discussions were spread out over multiple threads, and I haven’t been able to track all of them down, but people with a truly obsessive interest in this topic can check out the threads “Security bugs and disclosure,” “Security disclosure - let’s resolve this,” “Handling Mozilla security vulnerabilities,” “Security announce group,” and the comments on draft 6, draft 7, and draft 8 of the proposed policy.)
A final comment: When I reread what I’d written back in the year 2000 I noticed something I hadn’t noticed the first time, namely that the three arguments I addressed for and against full disclosure fell into three different classes: The first argument (about maximizing the probability of fixing security bugs) is essentially a technical argument that reduces the question of full disclosure to an optimization problem. The second argument (about what the Mozilla project owes vendors, sysadmins, and others) is an argument about what is fair, and hence at its root is a political argument. The final argument from Dan Veditz (about the likelihood of Netscape pulling security bugs out of Bugzilla) is an argument that urges us to look at the consequences of the policies we adopt.
Having realized this, I also realized that these three types of arguments matched up with the intellectual “filters” discussed by the ecologist Garrett Hardin in his book Filters Against Folly. In order for us to penetrate to the depth of intellectual controversies, Hardin claimed that
What we need most is a categorization . . . of the methods whereby people express and test statements. We need to be acutely aware of the virtues and shortcomings of the tools of the mind. In the light of that awareness, many public controversies can be resolved. [p. 20, paperback edition]
Hardin then categorized these methods into three different “filters for reducing reality to manageable simplicity,” the “literate,” “numerate,” and “ecolate,” with three associated questions:
- Literacy: “What are the words?”
- Numeracy: “What are the numbers?”
- Ecolacy: “And then what?”
(“Literacy,” “literate,” “numeracy,” and “numerate” are of course common terms; “ecolacy” and “ecolate” are Hardin’s rarely used neologisms for knowledge and analysis of causes and effects in a connected system of organisms and related entities.)
Hardin may have been accused of follies of his own (for example, his ideas on population reduction and aid to the poor), but leaving all that aside I think that in Filters Against Folly he elucidated a key truth: that making good policy requires taking into account multiple kinds of arguments, as well as considering the consequences of whatever policies we might create. To that end I suggest to those studying the “economics of disclosure” that we also have to study the “politics of disclosure” and the “ecology of disclosure.”
Where I differ from Hardin is in his contention that through use of his approach controversies can be resolved. I believe instead that in the real world the best we can hope for is to manage controversies well enough to get some work done; I hope to write more about this topic in a later post.
UPDATED: Michael Krax and Jacek Piskozub separately wrote me to comment on one major issue with regard to the handling of Mozilla security bugs: that although security-sensitive bugs in Bugzilla can be kept confidential, the actual code fixes for such bugs are publicly visible as soon as they’re checked into the Mozilla CVS repository. This has at least two implications:
First, even though the bug itself may remain confidential, attackers can potentially identify the underlying security vulnerability by tracking CVS checkins; this lessens the case for keeping the bug confidential past that point. (Or to put it another way, one can make a good argument that security-sensitive bugs should be publicly disclosed as soon as fixes have been found and checked in.)
Second, since security bugs can be potentially identified once fixes have been checked in, any delay in producing a new release incorporating such fixes raises the risk that users will suffer from exploits of the bugs. (Users could of course download nightly releases, but this isn’t a suitable solution for the vast majority of users.)
Neither of these issues is, strictly speaking, a problem with the Mozilla security policy itself; however, they do illustrate some of the practical concerns associated with implementing it.
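For concreteness, here is a minimal sketch (in Python) of the first concern above: how an observer might triage a public stream of checkin records for likely security fixes. The record format, keywords, and example data below are all hypothetical illustrations of the idea, not a description of any real Mozilla tooling or repository.

```python
# Hypothetical sketch: triaging a public checkin feed for likely
# security fixes. The record format, keywords, and sample data are
# illustrative assumptions, not real Mozilla infrastructure.

SUSPICIOUS_WORDS = ("overflow", "crash", "security", "exploit", "bounds")

def looks_like_security_fix(checkin):
    """Flag a checkin record (a dict with 'message' and 'bug_is_public'
    keys) if its log message mentions a telltale word, or if it refers
    to a bug that is not publicly visible -- a common giveaway when the
    bug is confidential but the fix is not."""
    message = checkin["message"].lower()
    if any(word in message for word in SUSPICIOUS_WORDS):
        return True
    if checkin.get("bug_is_public") is False:
        return True
    return False

# Made-up example records:
checkins = [
    {"message": "Fix buffer overflow in image decoder (bug 12345)",
     "bug_is_public": False},
    {"message": "Update release notes", "bug_is_public": True},
]
for c in checkins:
    if looks_like_security_fix(c):
        print("Worth a closer look:", c["message"])
```

Crude as it is, this kind of screening illustrates why keeping the bug report confidential buys relatively little once the fix is publicly visible, and why promptly shipping a release that incorporates the fix matters so much.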