Vulnerability Disclosure FAQ

Published: 2019-07-31

Last updated: 2021-11-29

Project Zero follows Google’s vulnerability disclosure policy on all of our vulnerability reports. What does this mean exactly, and why do we do things this way? This document explains how Project Zero currently handles vulnerability disclosure, and answers some of the questions we receive about our disclosure policy.

What is Project Zero's 90-day disclosure deadline policy?

When Project Zero finds a new vulnerability, we send a detailed technical description of the issue to the relevant vendor or open source project. This initial vulnerability report includes the following statement:

"This bug is subject to a 90 day disclosure deadline. If a fix for this issue is made available to users before the end of the 90-day deadline, this bug report will become public 30 days after the fix was made available. Otherwise, this bug report will become public at the deadline."

Our expectation is that the developer will fix the security vulnerability and make a patch available to users within 90 days. If so, Project Zero will release details about the vulnerability 30 days after the fix is made available to users. In some cases the vendor may agree to publish details at an earlier date (for example, if they want to align disclosure to an official security bulletin release, or if the technical details are already public due to normal development practices).

What happens if a patch isn't broadly available after 90 days?

If the patch is expected to arrive within 14 days of the deadline expiring, then Project Zero may offer an extension. We implemented a 14-day grace extension after receiving some good feedback from vendors. There had been some awkward timing in the past, for example, where a vendor's scheduled monthly patch release was due two days after the deadline expiry date, and we agreed that a clearly defined grace extension is a reasonable compromise for this situation. Note, however, that the 14-day grace period overlaps with the 30-day patch uptake window, such that any vulnerability fixed within the grace period will still be publicly disclosed on day 120 at the latest (30 days after the original 90-day deadline).

If we don't think a fix will be ready within 14 days, then we will use the original 90-day deadline as the time of disclosure. That means we grant a 14-day grace extension only when there's a commitment by the developer to ship a fix within the 14-day grace period.
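To make the timeline concrete, here is a minimal sketch of the disclosure-date calculation as described above (90-day deadline, optional 14-day grace extension, 30-day patch uptake window). This is an illustration only, not actual Project Zero tooling; the function and parameter names are our own, and edge cases (such as a grace commitment being missed) are simplified.

    from datetime import date, timedelta
    from typing import Optional

    # Simplified model of the policy described above.
    DEADLINE_DAYS = 90
    GRACE_DAYS = 14
    UPTAKE_DAYS = 30

    def disclosure_date(reported: date, fixed: Optional[date],
                        grace_granted: bool = False) -> date:
        """Return the date on which the report would become public."""
        deadline = reported + timedelta(days=DEADLINE_DAYS)
        hard_stop = deadline + timedelta(days=UPTAKE_DAYS)  # day 120 at the latest

        if fixed is not None and fixed <= deadline:
            # Fixed within 90 days: disclose 30 days after the fix shipped.
            return fixed + timedelta(days=UPTAKE_DAYS)
        if (grace_granted and fixed is not None
                and fixed <= deadline + timedelta(days=GRACE_DAYS)):
            # Fixed during the 14-day grace period: still capped at day 120.
            return min(fixed + timedelta(days=UPTAKE_DAYS), hard_stop)
        # No fix in time: disclose at the original 90-day deadline.
        return deadline

    # Example: reported Jan 1, fixed on day 100 under a grace extension.
    # Disclosure lands on day 120 (30 days after the original deadline).
    print(disclosure_date(date(2021, 1, 1), date(2021, 4, 11), grace_granted=True))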

How does Project Zero publicly disclose a vulnerability?

Our vulnerability discoveries are tracked in the Project Zero issue tracker:

https://bugs.chromium.org/p/project-zero/issues/list

Initially, all of our bug reports are restricted so that only Project Zero team members can see the technical content. When it's time to disclose, we "derestrict" access to the issue tracker entry for the bug, which means the technical description of the vulnerability will become publicly accessible.

If the disclosure happens because of a missed deadline, the "Deadline-Exceeded" label is used. If the 14-day grace extension was applied, the bug will have the "Deadline-Grace" label.

What proportion of vulnerabilities are fixed before the 90-day deadline?

As of November 29, 2021, we have 1,806 vulnerabilities with a 90-day deadline in a "New" or "Fixed" state in our issue tracker, and 70 vulnerabilities have been disclosed without a patch being available to users. That means that over the total lifetime of Project Zero, 96.1% of issues have been fixed under deadline.

If we limit the analysis to the time period where grace extensions were an option (from February 13, 2015 to present) then we have 1,617 "New" or "Fixed" issues. Of these, 1,375 were fixed within 90 days, and a further 197 issues were fixed within the 14-day grace period. That leaves 45 vulnerabilities that were disclosed without a patch being available to users, meaning 97.2% of our issues are fixed under deadline.
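For readers who want to reproduce the arithmetic, here is a quick back-of-the-envelope check of the figures above (a sketch in Python, using only the numbers quoted in this section):

    # Back-of-the-envelope check of the fix-rate figures quoted above.

    # All-time figures (as of November 29, 2021).
    total_all_time = 1806
    missed_all_time = 70
    print(f"{(total_all_time - missed_all_time) / total_all_time:.1%}")  # 96.1%

    # Figures since grace extensions became an option (February 13, 2015).
    total_since_grace = 1617
    fixed_within_90 = 1375
    fixed_in_grace = 197
    missed = total_since_grace - fixed_within_90 - fixed_in_grace  # 45
    print(missed, f"{(fixed_within_90 + fixed_in_grace) / total_since_grace:.1%}")  # 45 97.2%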

Why are disclosure deadlines necessary?

We were concerned that patches were taking a long time to be developed and released to users, and we felt that disclosure deadlines set up the right balance of incentives.

Software vendors have responded to disclosure deadlines in ways that other approaches were historically unable to achieve. Prior to Project Zero, our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. Coordinated vulnerability disclosure is premised on the idea that any public disclosure prior to a fix being released unnecessarily exposes users to malicious attacks, and so the vendor should always set the time frame for disclosure.

We used this model of disclosure for over a decade, and the results weren't particularly compelling. Many fixes took over six months to be released, while some of our vulnerability reports went unfixed entirely! We were optimistic that vendors could do better, but we weren't seeing the improvements to internal triage, patch development, testing, and release processes that we knew would provide the most benefit to users.

But why do slow patch timelines matter? If you assume that only the vendor and the reporter have knowledge of the vulnerability, then the issue can be fixed without urgency. However, we increasingly have evidence that attackers are finding (or acquiring) many of the same vulnerabilities that defensive security researchers are reporting.

We can't know for sure when a security bug we have reported has previously been found by an attacker (recent attempts to quantify the rate of bug collision can be found here and here), but we know that it happens regularly enough to factor into our disclosure policy. We think that our policy introduces an appropriate level of urgency into the vulnerability remediation process.

Essentially, disclosure deadlines are a way for security researchers to set expectations and provide a clear incentive for vendors and open source projects to improve their vulnerability remediation efforts. We tried to calibrate our disclosure timeframes to be ambitious, fair, and realistically achievable.

While every vulnerability disclosure policy has certain pros and cons, Project Zero has concluded that a 90-day disclosure deadline policy is currently the best option available for user security. Based on our experiences with using this policy for multiple years across hundreds of vulnerability reports, we can say that we're very satisfied with the results. No one on Project Zero is happy when a deadline is missed, but a consistent and fair approach to enforcing disclosure deadlines goes a long way.

For example, we observed a 40% faster response time from one software vendor when comparing bugs reported against the same target over a 7-year period, while another software vendor doubled the regularity of their security updates in response to our policy.

Have there been any cases where an exception to the disclosure deadline policy has been given?

Yes, in 6 out of 1,797 cases the disclosure deadlines for Project Zero's issues were extended by Google:

  1. Issue 837 -- "task_t considered harmful", 145 days
  2. Issue 1272 -- "Spectre and Meltdown", 216 days
  3. CVE-2020-1027 -- "In-the-Wild Series: Windows Exploits", 23 days (actively exploited issue under a 7-day deadline)
  4. Issue 2105 -- "In-the-Wild Series: October 2020 0-day discovery", 101 days (actively exploited issue under a 7-day deadline)
  5. Issue 2107 -- "In-the-Wild Series: October 2020 0-day discovery", 98 days (actively exploited issue under a 7-day deadline)
  6. Issue 2108 -- "In-the-Wild Series: October 2020 0-day discovery", 98 days (actively exploited issue under a 7-day deadline)

Doesn't disclosing a vulnerability when there's no fix endanger users?

The answer is counterintuitive at first: disclosing a small number of unfixed vulnerabilities doesn't meaningfully increase or decrease attacker capability. Our "deadline-based" disclosures have a neutral short-term effect on attacker capability.

We certainly know that there are groups and individuals that are waiting to use public attacks to harm users (like exploit kit authors), but we also know that the cost of turning a typical Project Zero vulnerability report into a practical real-world attack is non-trivial.

Since Project Zero typically discloses only one part of an exploit chain, attackers need to perform substantial additional research and development to complete the exploit and make it reliable. Any attacker with the resources and technical skills to turn a bug report into a reliable exploit chain would usually be able to build a similar exploit chain even if we had never disclosed the bug. They would either have the ability to find and exploit their own 0day vulnerabilities, or have access to a range of other interchangeable bugs (e.g. other fixed/disclosed bugs from the past weeks/months).

Also, the window of exposure between disclosure and a fix being released is very small, i.e., a patch usually arrives shortly after a deadline is missed, and the attacker's risk of detection increases rapidly from the point of disclosure.

For any attackers that are willing to exploit publicly disclosed bugs (despite the increased risk of failure or detection), there currently seem to be two alternative options that are preferred for their cost-effectiveness:

  1. Waiting for disclosed bugs that require only a small amount of additional research and development (design flaws and logic bugs, or other easily exploitable conditions); or
  2. Waiting for a fully developed and reliable exploit to be leaked (typically when a targeted exploit attempt using 0day is detected).

All of this means that there isn't a substantial difference between deadline-enforced disclosures and our normal post-patch disclosures in terms of the observed rates of "opportunistic reuse" by attackers. If most bugs are fixed in a reasonable timeframe (i.e. less than 90 days), then we are only enforcing the deadline on a very small number of unfixed cases. And if disclosing a handful of unfixed vulnerabilities doesn't substantially help attackers in the short term, but does lead to the demonstrated long-term benefits of shortened patch timelines and more frequent patching cycles, then it would follow that a deadline-based disclosure policy is good for user security overall.

Why do you disclose technical details about a bug after it's fixed?

We think there's tremendous long-term benefit in publishing details about our research methodologies and results. We use discussions about vulnerabilities and exploits to drive a pipeline of work on structural improvements to software and hardware security: attack surface reduction, exploit mitigations, improved sandboxing, fixing bug classes, and improving the state of public security research.

We're also big believers in the educational benefits of sharing our results and insights, and we hope that our blog posts and issue tracker reports can provide a pathway for new researchers to join the security community. Additionally we want to share our insights and areas of focus with other security experts in order to drive attention towards important attack surfaces, and to encourage more researchers to share their own results.

Information about how a modern exploit works is extremely valuable, and increasingly there are incentives for offensive practitioners to withhold this information from other security researchers, developers, and the public. To counter this shift towards privately held attack research, we think that encouraging high-quality public research on modern attacks is a key part of building a better ecosystem of well-informed defenders.

Why do you release information about the vulnerability so quickly after a fix is released?

With the 2021 policy update, we release information about the vulnerability 30 days after the vendor has made a fix available (assuming they did so within the 90-day deadline). We hope that this is a good incentive for vendors to provide timely fixes to users, and to improve their ecosystem's patch adoption rates.

We've seen that attackers spend time analyzing security patches in order to learn about vulnerabilities (both through source code review and binary reverse engineering), and they quickly establish the full details even if the vendor and researcher attempt to withhold technical data.

Since the utility of information about vulnerabilities is very different for defenders vs attackers, we don't expect that defenders can typically afford to do the same depth of analysis as attackers. The feedback that we get from defenders is that they want more information about the risks that they and their users face.

The information that we release can commonly be used by defenders to immediately improve their defenses and to test the accuracy of bug fixes, and it can always be used to make informed decisions about patch adoption or short-term mitigations.

Timely information also generates a level of momentum and excitement in the security research community. We aim to harness this to drive follow-up research and to motivate discussions about long-term structural improvements to security.

How do you decide who to report a vulnerability to?

We think that vulnerability reports should be communicated directly to the vendor or open source project that is responsible for developing the fix. Generally we use an official point of contact for security bug reports (e.g., an email address or issue tracker) and we follow each project's documented process for handling security bugs until a bug is fixed or a disclosure deadline has passed.

Sometimes we get asked to share our vulnerability reports with third parties, such as organizations that are affected by the vulnerability. By default we decline these requests, and we generally ask that vendors refrain from sharing our vulnerability reports with third parties unnecessarily. We have observed several unintended outcomes from vulnerability sharing under embargo arrangements, such as: increased risk of leaks, slower patch release cycles, and inconsistent criteria for inclusion.

Do you ever help software vendors or open source projects fix the issues you report?

Absolutely! We want to be involved as much as possible in the patch development process, and encourage vendors to collaborate with our researchers to make sure patches are correct and complete. We often directly suggest a source code patch that will resolve the underlying bug, but for complex cases we will typically work with the software maintainer to develop and verify a correct fix.

Project Zero researchers are always available to provide feedback during the patch development process—an extra pair of eyes on a security patch can make a big difference, so we encourage vendors to reach out to our researchers if they have any questions or ideas that they'd like to discuss further. There have been several occasions where the initial patch was incomplete or inadvertently introduced another vulnerability, and we’ve happily worked with the maintainer/vendor to come up with a correct fix.

We often include additional guidance about opportunities for code hardening, attack surface reduction, design improvements, testing and so on. This often results in structural improvements above-and-beyond an individual bug fix. Collaborating on these structural improvements is a specific goal for Project Zero, and is seen as an important long-term component of our work.

Would you recommend other security researchers use a disclosure deadline policy?

Yes, we'd encourage other security researchers to use disclosure deadlines as well.

We think that industry practices will improve as more researchers start to include timeline expectations in their bug reports. There are many good reasons why a security researcher might choose not to adopt a disclosure deadline policy on their bug reports, but overall we've seen many positive outcomes from adopting disclosure deadlines and we can certainly recommend it to other security researchers.

We understand that some software vendors have chosen to prioritize Project Zero's vulnerability reports at the expense of other vulnerability reports that don't have a specific disclosure timeline. As more security researchers apply deadlines, we expect software vendors to prioritize bug fixes based on overall impact and to invest appropriately so that all important security issues can be fixed in a timely manner. We think that would be a step in the right direction for user security.

What do you do if a vendor says a bug is invalid, or says that they cannot, or will not, fix it?

If we report an issue and the vendor indicates that they won't be issuing a patch, then we derestrict the technical details in our issue tracker (i.e., make the issue publicly available for discussion) with a status of "WontFix" and include an additional technical assessment of the developer's response.

In essence we shift from treating the bug report as a vulnerability (where the rules of vulnerability disclosure apply) and instead begin to treat the issue as a non-security bug (where there are typically no restrictions on public discussion). We think this incentivizes vendors to perform high-quality triaging of our bug reports, and we've seen a significant improvement in the quality of the triage we receive in response to this approach.

Software maintainers have been very good at assessing the security risk of the issues we report to them, and it's rare that Project Zero and a developer disagree about the severity of an issue.

So is a publicly available source code patch a "fix" even if there's no build for it?

We think a public source code patch is usually equivalent to a public disclosure, even if it's not clearly marked as a security-relevant change. There's a good amount of research that supports this, such as Barth et al.'s "How Open Should Open Source Be?" (link) or Aubizzierre's "Unearthing the World's Best Bugs" (link). We also have experience at Project Zero with analyzing security patches, so we have a good sense of what is technically feasible here, and we know that attackers have an incentive to perform this analysis against high-profile targets.

We've reported vulnerabilities in dozens of different open source projects, and we've noticed all projects handle security fixes in a slightly different way. Some prefer immediately releasing security patches as soon as they're ready, while others try for a more coordinated approach. Open source projects and their user communities are in the best position to choose how to disseminate patches, but our view is that once a patch is public we can start to discuss the vulnerability in more detail with the wider security community.

Why does Project Zero release proof-of-concept exploit code? Doesn't this help attackers?

The primary argument against releasing proof-of-concept exploit code is that malicious parties can quickly repurpose our research into an attack that harms users. While this may occur when “full chain” exploits are released, in almost all cases our proof-of-concept code is not immediately repurposable for an attack — i.e., substantial additional research and development will be required before an exploit can be used in the wild.

On the flip side, we think there are some benefits to giving defenders concrete data on what an exploit might look like for any particular bug — it can assist network administrators in prioritizing patch deployment, it gives security experts the ability to validate, understand, mitigate and detect some attacks, and it provides public, real-world data to effectively drive the future of secure software development.

Project Zero has publicly announced the existence of a bug prior to the 90-day deadline in the past. Isn't this a type of disclosure that goes against your own policy?

From a business perspective, a disclosure at any level of detail can have a range of serious consequences. From a technical and user risk perspective, however, the level of detail shared is important to factor in.

In most cases we don't think that announcing the existence of a vulnerability is equivalent to a detailed vulnerability disclosure. All software of sufficient complexity will contain vulnerabilities, so saying things like "I just reported a vulnerability in the Android media server" isn't materially useful information for an attacker. It's common that software vendors give early notification of upcoming advisories, and other security researchers have had good success with announcing high-level summaries of pending publications.

One concern we've heard from vendors about announcements like this is that customers will often contact their software provider to inquire about the status of a fix or potential mitigations, and this can increase costs.

Project Zero doesn't currently announce the existence of pending vulnerability fixes, but we're keeping a close eye on how other researchers approach this, and we may experiment with early notifications again in the future if there's sufficient interest in this approach.

Are vulnerabilities that are being actively exploited "in the wild" handled differently?

Yes. Google has a different policy for how to handle vulnerabilities that have been discovered "in the wild", i.e., vulnerabilities that are being actively exploited to harm users. It is described in this Google Online Security blog post from 2013:

"Based on our experience, however, we believe that more urgent action -- within 7 days -- is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised."

Google expects that vendors will address an actively exploited vulnerability within 7 days. This is in contrast to the 90-day time period used for vulnerabilities that are not categorically known to be under active exploitation.

Are hardware vulnerabilities treated differently to software vulnerabilities in your disclosure policy?

For the time being, we intend to apply the same disclosure policy for both hardware and software issues. These cases are rare and often discussed at length, and we have historical precedent for enforcing disclosure deadlines on both hardware and software issues. Each of these discussions has been unique and valuable, and so we think it's too early to reset our expectations specifically for hardware vendors.

All of the systems that we research have different pre-existing constraints and capabilities, and we have observed legacy architectural and process issues that can make timely patch development incredibly challenging for hardware vendors. However, we don't think that resolving hardware security issues in a timely manner is impossible or infeasible, and instead, it appears that our disclosure policy has been effective at motivating increased investment in hardware security. Similar to our software vulnerability reporting, we're excited to see the results from our hardware vulnerability reporting over time.

Does Google have early access to Project Zero's detailed technical vulnerability reports?

Typically only Project Zero team members (who are Google employees) and a small number of security engineers working inside the team on “20% projects” have access to Project Zero's vulnerability reports prior to public disclosure. An obvious exception is when Google is the recipient of our reports: i.e. we discover vulnerabilities in Chrome, Android, and other Google-supported software, and in those cases we follow Google's standard external bug reporting procedures and follow the same processes that a non-Google security researcher would experience.

There's a temptation to "short circuit" the normal bug fixing process for third-party software that Google relies on, i.e. to give Google's products and services a head start, but we like the idea of setting the same expectations for everyone. While this can make things a little awkward at the office sometimes, in the end it encourages Google to further invest in having great procedures and relationships in place with upstream projects and vendors, and that's a good thing. Most patched security issues aren't discovered by Project Zero, so having a "special case" for our team's findings wouldn't change much in practice, and getting the fundamentals of good patch management right is much more important in the long run.
