The first Pwnium competition held earlier this year exceeded our expectations. We received two submissions of such complexity and quality that both of them won Pwnie Awards at this year’s Black Hat industry event. Most importantly, we were able to make Chromium significantly stronger based on what we learned.
We’re therefore going to host another Pwnium competition, called... Pwnium 2. It will be held on Oct 10th, 2012 at the Hack In The Box 10 year anniversary conference in Kuala Lumpur, Malaysia.
This time, we’ll be sponsoring up to $2 million worth of rewards at the following reward levels:
- $60,000: “Full Chrome exploit”: Chrome / Win7 local OS user account persistence using only bugs in Chrome itself.
- $50,000: “Partial Chrome exploit”: Chrome / Win7 local OS user account persistence using at least one bug in Chrome itself, plus other bugs. For example, a WebKit bug combined with a Windows kernel bug.
- $40,000: “Non-Chrome exploit”: Flash / Windows / other. Chrome / Win7 local OS user account persistence that does not use bugs in Chrome. For example, bugs in one or more of Flash, Windows or a driver.
- $Panel decision: “Incomplete exploit”: An exploit that is not reliable, or an incomplete exploit chain. For example, code execution inside the sandbox but no sandbox escape; or a working sandbox escape in isolation. For Pwnium 2, we want to reward people who get “part way” as we could definitely learn from this work. Our rewards panel will judge any such works as generously as we can.
Exploits should be demonstrated against the latest stable version of Chrome. Chrome and the underlying operating system and drivers will be fully patched and running on an Acer Aspire V5-571-6869 laptop (which we’ll be giving away to the best entry). Exploits should be served from a password-authenticated and HTTPS Google property, such as App Engine. The bugs used must be novel, i.e. not known to us or fixed on trunk. Please document the exploit.
You may have noticed that we’ve compressed the reward levels closer together for Pwnium 2. This is in response to feedback, and reflects that any local account compromise is very serious. We’re happy to make the web safer by any means -- even rewarding vulnerabilities outside of our immediate control.
Another well-received piece of feedback from the first Pwnium was that more notice would have been nice. Accordingly, we’re giving about two months’ notice. We hope this gives enough time for the security community to craft more beautiful works, which we’d be more than happy to reward and celebrate.
The Chromium Vulnerability Rewards Program was created to help reward the contributions of security researchers who invest their time and effort in helping us make Chromium more secure. We’ve been very pleased with the response: Google’s various vulnerability reward programs have kept our users protected and netted more than $1 million in total rewards for security researchers. Recently, we’ve seen a significant drop-off in externally reported Chromium security issues. This signals to us that bugs are becoming harder to find, as the efforts of the wider community have made Chromium significantly stronger.
Therefore, we’re making the following changes to the reward structure:
- Adding a bonus of $1,000 or more on top of the base reward for bugs in stable areas of the code base—see below for an example. By “stable”, we mean that the defect rate appears to be low and we think it’s harder to find a security bug in the area.
- Adding a bonus of $1,000 or more on top of the base reward for serious bugs which impact a significantly wider range of products than just Chromium. For example, certain open source parsing libraries—see below for an example.
The rewards panel has always reserved the right to reward at our discretion. At times, rewards have reached the $10,000 level for particularly significant contributions. An extraordinary contribution could be a sustained level of bug finding, or even one individual impressive report. Examples of individual items that might impress the panel include:
- Nvidia / ATI / Intel GPU driver vulnerabilities. High or critical severity vulnerabilities in the respective Windows drivers, demonstrated and triggered from a web page. Submissions on Chrome OS would also be interesting. Chrome OS typically runs on a device with an Intel GPU.
- Local privilege escalation exploits in Chrome OS via the Linux kernel. Chrome OS has a stripped-down kernel, so a working exploit against it would certainly be worth examining. We reserve the right to reward more generously if the exploit works inside our “setuid sandbox” and / or our fast-evolving “seccomp BPF sandbox”.
- Serious vulnerabilities in IJG libjpeg. For well over a decade, there hasn’t been a serious vulnerability against IJG libjpeg. Can one be found?
- 64-bit exploits. Any working code execution exploit on a 64-bit Chrome release. Sandbox escape not required.
- Renderer to browser exploit. Any working browser code execution exploit, starting from the assumed precondition of full code execution inside a normal web renderer or PPAPI process.
Aside from the new bonuses, it’s worth recapping some details of the existing reward structure that aren’t as widely known:
- Our reward program covers vulnerabilities in Adobe Flash as well as other well-known software such as the Linux kernel, various open-source libraries and daemons, X windows, etc.
- Our base reward is $2,000 for well-reported UXSS bugs, covering both the Chromium browser and also Adobe Flash. (With the new reward bonus for exploitability, UXSS rewards will likely become $4,000.)
- Our reward program already includes a bonus of $500 to $1,000 when the reporter becomes a more involved Chromium community member and provides a peer-reviewed patch.
- We have always considered rewards for regressions affecting our Beta or Dev channel releases. It’s a big success to fix security regressions before they ship to the Stable channel.
To illustrate how the new reward bonuses will work, we’re retroactively applying the bonuses to some older, memorable bugs:
- $1,000 to Atte Kettunen of OUSPG for bug 104529 (new total: $2,000). We believe that our PDF component is one of the more secure (C++) implementations of PDF, hence the $1,000 top-up.
- $3,000 to Jüri Aedla for bug 107128 (new total: $4,000). There is a $1,000 bonus because this bug affects many projects via core libxml parsing, and we added a $2,000 bonus for exploitability: this is a heap-based buffer overflow involving user-controlled data with a user-controlled length.
We’re more excited than ever to work with the community and reward their efforts.
Windows and Linux only. Thanks to a sharp focus, Google Chrome engineers are able to work on just a few features at a time, rather than dozens, delivering a stable rather than clunky web experience. Now, according to a recent blog post, the latest final build of the Google Chrome 21 web browser improves something [...]
Last year, we posted on the Google Online Security Blog about our desire to end mixed scripting vulnerabilities. A “mixed scripting” vulnerability affects HTTPS websites that are improperly implemented; these vulnerabilities are serious because they eliminate most of the security protections afforded by HTTPS. All web browsers have historically taken it upon themselves to try and work around these bugs by informing or protecting users in some way.
With the recent release of Chrome 21, we’ve taken several steps forward:
- We continue to protect end users by blocking mixed scripting conditions by default, but we now do it in a way that is less intrusive. This change minimizes “security dialog fatigue” and reduces the likelihood that users will expose themselves to risk by clicking through the warning.
- We’ve improved resistance to so-called “clickjacking” attacks. Electing to run any mixed script is now a two-click process.
- We now silently block mixed scripting conditions for websites that opt in to the HSTS security standard. This is the strongest default protection available.
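For reference, a site opts into the HSTS standard by sending a Strict-Transport-Security response header over HTTPS; the max-age value below is illustrative:

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
```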
If you visit a non-HSTS web site with a mixed scripting condition, a new shield icon in the omnibox (to the right, next to the star) indicates that Chrome’s protection has kicked in.
You can click on the shield to see the option to run the mixed script, but we don’t recommend it. Instead, if you see the shield icon, we recommend contacting the website owners to make sure they know they may have a security vulnerability.
It has been an interesting journey to get to this point. For about a year, we blocked mixed scripting by default on Chrome’s Dev and Beta channel releases. Rolling out the block to Stable was more challenging because of widespread mixed scripting across the web. To move forward, we turned blocking on for certain web sites, starting with google.com. Later, we reached out to and then collaborated with twitter.com and facebook.com to opt them into blocking, too. All these websites hold themselves to a high standard of security, so this approach worked well. We later took the additional step of opting in sites to mixed script blocking for any site using the HSTS standard.
We bit the bullet and let full mixed script blocking for all sites hit Stable back in Chrome 19. Predictably, we uncovered a range of buggy web sites, and some users were confused about the “infobar” warning displayed by the older versions of Chrome.
Fortunately—and no doubt driven by the high visibility of this warning—some prominently affected websites were able to deploy quick fixes to resolve their mixed scripting vulnerabilities. This work aligns with one of our Core Security Principles: Make the web safer for everyone. Unfortunately, the warning confused some users, which conflicts with another principle: Don’t get in the way. (We’re sorry for any temporary disruption.)
With Chrome 21, we believe we’ve achieved a good balance between top-flight protection for end users, a pleasant UI experience, and notifications that help buggy websites improve their security.
Adobe told us they were only working on Flash for Windows and Mac PCs and it turns out they were serious. There will be no native Flash for Google's Android 4.1.
According to reports, Google and Asus will be releasing a new 7-inch tablet running Android 4.1, Jelly Bean, at this week's Google Input/Output conference.
Chrome: Ford KeyFree is a Chrome extension that automatically logs you into your Google, Facebook, and Twitter accounts when your phone is near your computer, then logs you out when you walk away. It's made by the car company; it looks awesome.
When we wrapped up our recent Pwnium event, we praised the creativity of the submissions and resolved to provide write-ups on how the two exploits worked. We already covered Pinkie Pie’s submission in a recent post, and this post will summarize the other winning Pwnium submission: an amazing multi-step exploit from frequent Chromium Security Reward winner Sergey Glazunov.
From the start, one thing that impressed us about this exploit was that it involved no memory corruption at all. It was based on a so-called “Universal Cross-Site Scripting” (or UXSS) bug. The UXSS bug in question (117226) was complicated and actually involved two distinct bugs: a state corruption and an inappropriate firing of events. Individually there was a possible use-after-free condition, but the exploit -- perhaps because of various memory corruption mitigations present in Chromium -- took the route of combining the two bugs to form a “High” severity UXSS bug. However, a Pwnium prize requires demonstrating something “Critical”: a persistent attack against the local user’s account. A UXSS bug alone cannot achieve that.
The exploit still had a long way to go, though, as there are plenty of additional barriers:
- chrome://chromewebdata does not have any sensitive APIs associated with it.
- chrome://a is not same-origin with chrome://b.
- chrome://* origins only have privileges when the backing process is tagged as privileged by the browser process, and this tagging only happens as a result of a top-level navigation to a chrome:// URL.
- The sensitive chrome://* pages generally have CSPs applied that prevent the UXSS bug in question.
The exploit became extremely creative at this point. To get around the defenses, the compromised chrome://chromewebdata origin opened a window to chrome://net-internals, which had an iframe in its structure. Another WebKit bug -- the ability to replace a cross-origin iframe (117583) -- was used to run script that navigated the popped-up window, simply “back” to chrome://net-internals (117417). This caused the browser to reassess the chrome://net-internals URL as a top-level navigation -- granting limited WebUI permissions to the backing process as a side-effect (117418).
As you can imagine, it took us some time to dissect all of this. Distilling the details into a blog post was a further challenge; we’ve glossed over the use of the UXSS bug to bypass pop-up window restrictions. The UXSS bug was actually used three separate times in the exploit. We also omitted details of various other lockdowns we applied in response to the exploit chain.
What’s clear is that Sergey certainly earned his $60k Pwnium reward. He chained together a whopping 14[*] bugs, quirks and missed hardening opportunities. Looking beyond the monetary prize, Sergey has helped make Chromium significantly safer. Besides fixing the array of bugs, we’ve landed hardening measures that will make it much tougher to abuse chrome:// WebUI pages in the future.
Just over two months ago, Chrome sponsored the Pwnium browser hacking competition. We had two fantastic submissions, and successfully blocked both exploits within 24 hours of their unveiling. Today, we’d like to offer an inside look into the exploit submitted by Pinkie Pie.
So, how does one get full remote code execution in Chrome? In the case of Pinkie Pie’s exploit, it took a chain of six different bugs in order to successfully break out of the Chrome sandbox.
Pinkie’s first bug (117620) used Chrome’s prerendering feature to load a Native Client module on a web page. Prerendering is a performance optimization that lets a site provide hints for Chrome to fetch and render a page before the user navigates to it, making page loads seem instantaneous. To avoid sound and other nuisances from preloaded pages, the prerenderer blocks plug-ins from running until the user chooses to navigate to the page. Pinkie discovered that navigating to a pre-rendered page would inadvertently run all plug-ins—even Native Client plug-ins, which are otherwise permitted only for installed extensions and apps.
Of course, getting a Native Client plug-in to execute doesn’t buy much, because the Native Client process’ sandbox is even more restrictive than Chrome’s sandbox for HTML content. What Native Client does provide, however, is a low-level interface to the GPU command buffers, which are used to communicate accelerated graphics operations to the GPU process. This allowed Pinkie to craft a special command buffer to exploit an integer underflow bug (117656) in the GPU command decoding.
The issue was that if size_of_buffer was smaller than sizeof(uint32), the subtraction would wrap around to a huge value, which was then used as input to a second size calculation.
That second calculation overflowed in turn, making its result zero instead of a value at least equal to sizeof(uint32). Using this, Pinkie was able to write eight bytes of his choice past the end of his buffer. The buffer in this case is one of the GPU transfer buffers, which are mapped in both processes’ address spaces and used to transfer data between the Native Client and GPU processes. The Windows allocator places the buffers at relatively predictable locations, and the Native Client process can directly control their size as well as certain object allocation ordering. So, this afforded quite a bit of control over exactly where an overwrite would occur in the GPU process.
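A minimal sketch of the arithmetic described above, assuming the two computations took roughly this shape (all names are hypothetical; “>>> 0” forces JavaScript numbers into C-style unsigned 32-bit wraparound):

```javascript
// Hypothetical reconstruction of the two buggy size computations.
// ">>> 0" coerces each result to an unsigned 32-bit integer, mirroring
// the wraparound of size_t / uint32 arithmetic in the real C++ code.

// Step 1: compute the bytes remaining after a uint32 header. If
// sizeOfBuffer < 4 (sizeof(uint32)), the subtraction wraps to a huge value.
function remainingBytes(sizeOfBuffer) {
  return (sizeOfBuffer - 4) >>> 0;
}

// Step 2: round that size up to a multiple of 4. For inputs near 2^32,
// the "+ 3" overflows, so the result becomes 0 instead of at least 4.
function roundedSize(n) {
  return ((n + 3) & 0xFFFFFFFC) >>> 0;
}

// A 2-byte buffer "underflows" to roughly 4 GB, which then rounds to 0.
const huge = remainingBytes(2); // 4294967294
const zero = roundedSize(huge); // 0
```

The zero result defeats the bounds check that assumed a minimum size, enabling the out-of-bounds write described above.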
The next thing Pinkie needed was a target that met two criteria: it had to be positioned within range of his overwrite, and the first eight bytes needed to be something worth changing. For this, he used the GPU buckets, which are another IPC primitive exposed from the GPU process to the Native Client process. The buckets are implemented as a tree structure, with the first eight bytes containing pointers to other nodes in the tree. By overwriting the first eight bytes of a bucket, Pinkie was able to point it to a fake tree structure he created in one of his transfer buffers. Using that fake tree, Pinkie could read and write arbitrary addresses in the GPU process. Combined with some predictable addresses in Windows, this allowed him to build a ROP chain and execute arbitrary code inside the GPU process.
The GPU process is still sandboxed well below a normal user, but it’s not as strongly sandboxed as the Native Client process or the HTML renderer. It has some rights, such as the ability to enumerate and connect to the named pipes used by Chrome’s IPC layer. Normally this wouldn’t be an issue, but Pinkie found that there’s a brief window after Chrome spawns a new renderer where the GPU process could see the renderer’s IPC channel and connect to it first, allowing the GPU process to impersonate the renderer (bug 117627).
Even though Chrome’s renderers execute inside a stricter sandbox than the GPU process, there is a special class of renderers that have IPC interfaces with elevated permissions. These renderers are not supposed to be navigable by web content, and are used for things like extensions and settings pages. However, Pinkie found another bug (117417) that allowed an unprivileged renderer to trigger a navigation to one of these privileged renderers, and used it to launch the extension manager. So, all he had to do was jump on the extension manager’s IPC channel before it had a chance to connect.
Once he was impersonating the extensions manager, Pinkie used two more bugs to finally break out of the sandbox. The first bug (117715) allowed him to specify a load path for an extension from the extension manager’s renderer, something only the browser should be allowed to do. The second bug (117736) was a failure to prompt for confirmation prior to installing an unpacked NPAPI plug-in extension. With these two bugs Pinkie was able to install and run his own NPAPI plug-in that executed outside the sandbox at full user privilege.
So, that’s the long and impressive path Pinkie Pie took to crack Chrome. All the referenced bugs were fixed some time ago, but some are still restricted to ensure our users and Chromium embedders have a chance to update. However, we’ve included links so when we do make the bugs public, anyone can investigate in more detail.
In an upcoming post, we’ll explain the details of Sergey Glazunov’s exploit, which relied on roughly 10 distinct bugs. While these issues are already fixed in Chrome, some of them impact a much broader array of products from a range of companies. So, we won’t be posting that part until we’re comfortable that all affected products have had an adequate time to push fixes to their users.
Web browsers are big, complicated pieces of software that are extremely difficult to secure. In the case of Chrome, it’s an even more interesting challenge as we contend with a codebase that evolves at a blisteringly fast pace. All of this means that we need to move very quickly to keep up, and one of the ways we do so is with a scaled out fuzzing infrastructure.
Chrome’s fuzzing infrastructure (affectionately named "ClusterFuzz") is built on top of a cluster of several hundred virtual machines running approximately six-thousand simultaneous Chrome instances. ClusterFuzz automatically grabs the most current Chrome LKGR (Last Known Good Revision), and hammers away at it to the tune of around fifty-million test cases a day. That capacity has roughly quadrupled since the system’s inception, and we plan to quadruple it again over the next few weeks.
With that kind of volume, we’d be overloaded if we just automated the test case generation and crash detection. That’s why we’ve automated the entire fuzzing pipeline, including the following processes:
- Managing test cases and infrastructure - To run at maximum capacity we need to generate a constant stream of test cases, distribute them across thousands of Chrome instances running on hundreds of virtual machines, and track the results.
- Analyzing crashes - The only crashes we care about for security purposes are the exploitable ones. So we use Address Sanitizer to instrument our Chrome binaries and provide detailed reports on potentially exploitable crashes.
- Minimizing test cases - Fuzzer test cases are often very large files—usually as much as several hundred kilobytes each. So we take the generated test cases and distill them down to the few, essential pieces that actually trigger the crash.
- Identifying regressions - The first step in getting a crash fixed is figuring out where it is and who should fix it. So this phase tracks the crash down to the range of changes that introduced it.
- Verifying fixes - To verify when a crash is actually fixed, we run the open crash cases against each new LKGR build.
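The regression-identification step above can be sketched as a binary search over the revision history (a simplified illustration; the real ClusterFuzz logic also copes with missing builds and flaky crashes):

```javascript
// Given revisions in commit order, with revisions[0] known good and the
// last revision known bad, find the first revision whose build crashes.
// "crashes" is a caller-supplied predicate that builds and runs the test case.
function firstBadRevision(revisions, crashes) {
  let lo = 0;                    // highest index known good
  let hi = revisions.length - 1; // lowest index known bad
  while (hi - lo > 1) {
    const mid = Math.floor((lo + hi) / 2);
    if (crashes(revisions[mid])) {
      hi = mid; // crash reproduces: the regression is at or before mid
    } else {
      lo = mid; // no crash: the regression landed after mid
    }
  }
  return revisions[hi];
}
```

Each iteration halves the suspect range, so even a window of thousands of revisions collapses to a single culprit in a dozen or so builds.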
In addition to manageability, this level of scale and automation provides a very important additional benefit. By aggressively tracking the Chrome LKGR builds, ClusterFuzz is evolving into a real-time security regression detection capability. To appreciate just what that means, consider that ClusterFuzz has detected 95 unique vulnerabilities since we brought it fully online at the end of last year. In that time, 44 of those vulnerabilities were identified and fixed before they ever had a chance to make it out to a stable release. As we further refine our process and increase our scale, we expect potential security regressions in stable releases to become increasingly less common.
Just like Chrome itself, our fuzzing work is constantly evolving and pushing the state of the art in both scale and techniques. In keeping with Chrome’s security principles, we’re helping to make the web safer by upstreaming the security fixes into projects we rely upon, like WebKit and FFmpeg. As we expand and improve ClusterFuzz, users of those upstream projects will continue to benefit.
I may be old-fashioned in this regard, but I prefer websites and companies to know as little about me as possible, unless the information is used for a service that I make active use of. I do not mind Amazon knowing that I’m an adult male, as this blocks recommendations and offers aimed at a female audience on the site.
The method used tests if certain extensions are installed in the browser, which is different from listing all installed extensions. Here are the technical details on how this can be done:
Every add-on has a manifest.json file. From an http[s]:// page you can try to load a script cross-scheme from a chrome-extension:// URL -- in this case, the manifest file. You just need to put the add-on’s unique ID into the URL. If the extension is installed, the manifest will load and the onload event will fire. If not, the onerror event is there for you.
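The technique described above can be sketched as follows (the extension ID format is illustrative, and the DOM part only runs in a browser):

```javascript
// Build the cross-scheme URL for an extension's manifest.
function probeUrl(extensionId) {
  return 'chrome-extension://' + extensionId + '/manifest.json';
}

// Try to load the manifest as a script. The onload event firing means the
// resource exists, i.e. the extension is installed; onerror means it is not.
function detectExtension(extensionId, onResult) {
  const s = document.createElement('script');
  s.src = probeUrl(extensionId);
  s.onload = function () { onResult(true); };
  s.onerror = function () { onResult(false); };
  document.head.appendChild(s);
}
```

A page would call detectExtension with a list of known extension IDs and collect the results, much like the CSS history-sniffing attacks probed a list of known URLs.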
You may still remember the CSS History Leak issue, where a list of popular web addresses was used on websites to find out if a visitor had visited those sites in the past. The principle is the same; only the execution is different.
A proof-of-concept page has been created that Chrome users can visit for a demonstration. Chrome users without extensions installed, or other browser users, are not affected by this at all.
This has two implications. First a privacy one, as websites can use the information for a variety of purposes. They can for instance test if an adblocker is installed, or social networking, shopping or pregnancy extensions. Security is the other one. Malicious websites could check if add-ons with known vulnerabilities are installed that are no longer maintained by the author.
According to information posted in the comment section, add-ons installed from a custom-packaged extension file, or loaded unpacked, are not recognized by the script.
Identified as CVE-2011-3046, the discovered vulnerability is described as “UXSS and bad history navigation”, with no additional details revealed.
Having said that, the latest stable build of Google Chrome (17.0.963.78) also fixes earlier reported issues with Flash games and videos.
Security contests prove to be useful.
Just as some might have thought that Google’s Chrome sandboxing feature is bulletproof, Sergey Glazunov, a security researcher who has found quite a few vulnerabilities in the past, has enriched his life with a $60k reward, received for a “Full Chrome” exploit that bypassed the sandbox feature. Although Google Chrome previously withstood various attacks in Pwn2Own and similar contests, this time it was the first to fail.
Justin Schuh, a member of Chrome’s security team, said, “It was an impressive exploit. It required a deep understanding of how Chrome works. This is not a trivial thing to do. It’s very difficult, and that’s why we’re paying $60,000.”
The second exploit was executed by a team from VuPen Security, which took about six weeks to write and test it. According to Chaouki Bekrar, the co-founder of VuPen Security, they wanted to demonstrate that Chrome is not as unbreakable as some might have thought.
While details about the exploits were not revealed, he said, “We had to use two vulnerabilities. The first one was to bypass DEP and ASLR on Windows and a second one to break out of the Chrome sandbox. It was a use-after-free vulnerability in the default installation of Chrome [which] worked against the default installation so it really doesn’t matter if it’s third-party code anyway.”
The keyword here is “up to”.
At the contest, called Pwnium, attendees will be asked to exploit the Google Chrome web browser and, in return, will be rewarded as follows:
$60,000 – “Full Chrome exploit”
$40,000 – “Partial Chrome exploit”
$20,000 – “Consolation reward, Flash / Windows / other”
So where does this $1 million reward come from? Well, Google will be giving away money not just to the first two or three hackers, but to pretty much everyone who manages to compromise the web browser’s security.
As simple as that.
Security is one of our core values, alongside speed, stability and simplicity. From day one, we’ve designed Chrome’s extension system with security in mind. Since we launched the extension system, the state of the art in web security has advanced with technologies like Content-Security-Policy (CSP). Extension developers have been able to opt into these features, and now we’re enabling these security features by default.
Unfortunately, securing extensions with CSP by default is incompatible with the legacy extension system. We’re passionate about extension compatibility, so we’re going to make this change gradually to minimize pain for users and developers.
Users can continue to install extensions that are available in the store regardless of whether they are secured with CSP or not. This means they will not lose any of the functionality they've added to Chrome.
Developers will be able to choose when to enable the new behavior. To ease the transition, we've introduced a new manifest_version attribute in the extension manifest in Chrome 18 (currently in beta). When a developer updates his or her extension to use manifest_version 2, Chrome will enforce the following CSP policy by default:
script-src 'self'; object-src 'self'
This policy imposes the following restrictions on extensions:
- Extensions can no longer use inline scripts, such as <script>doSomething();</script>. Instead, extensions must use out-of-line scripts loaded from within their package, such as <script src="script.js"></script>.
- Extensions can no longer use eval(). Note: If you’re using eval to parse JSON today, we suggest using JSON.parse instead.
- Extensions can load plug-ins, such as SWF files, only from within their package or from a whitelist of HTTPS hosts.
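For illustration, a minimal manifest opting into the new behavior might look like the following (the name, version, and explicit policy line are placeholders; omitting content_security_policy yields the same default policy):

```json
{
  "name": "Example Extension",
  "version": "1.0",
  "manifest_version": 2,
  "content_security_policy": "script-src 'self'; object-src 'self'"
}
```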
These defenses are extremely effective: adopting one of the recommended CSPs would prevent 96% (49 out of 51) of the core extension vulnerabilities we found.
For most extensions, updating them to manifest_version 2 will require the developer to move inline scripts out-of-line and to move scripts loaded from the network into the extension package. Developers are not required to update their extensions to manifest_version 2 immediately, but, over time, more of the extension ecosystem will encourage developers to update their extensions. For example, at some point, we’ll likely start requiring new extensions uploaded to the web store to use manifest_version 2. You can find a complete list of changes and more details about CSP in the extension documentation.
Introduced years ago, Do Not Track allows users to opt out of tracking by advertising, social and other web sites that collect such data.
However, it’s not coming anytime soon. According to the report, Google Chrome is likely to introduce the Do Not Track feature by the end of this year, which is 8-10 months away.
Susan Wojcicki, Google’s senior vice president said, “This agreement will not solve all the privacy issues users face on the Web today. However, it represents a meaningful step forward in privacy controls for users. We look forward to making this happen.”
HTTPS Everywhere Keeps Your Personal Information Safe on Over 1,400 Sites, Available for Firefox and Chrome
Chrome/Firefox: HTTPS Everywhere is a simple extension that, with just a one-click installation, can seriously increase your security on over 1,400 web sites by encrypting your connection.
This year at the CanSecWest security conference, we will once again sponsor rewards for Google Chrome exploits. This complements and extends our Chromium Security Rewards program by recognizing that developing a fully functional exploit is significantly more work than finding and reporting a potential security bug.
The aim of our sponsorship is simple: we have a big learning opportunity when we receive full end-to-end exploits. Not only can we fix the bugs, but by studying the vulnerability and exploit techniques we can enhance our mitigations, automated testing, and sandboxing. This enables us to better protect our users.
While we’re proud of Chrome’s leading track record in past competitions, the fact is that not receiving exploits means that it’s harder to learn and improve. To maximize our chances of receiving exploits this year, we’ve upped the ante. We will directly sponsor up to $1 million worth of rewards in the following categories:
$60,000 - “Full Chrome exploit”: Chrome / Win7 local OS user account persistence using only bugs in Chrome itself.
$40,000 - “Partial Chrome exploit”: Chrome / Win7 local OS user account persistence using at least one bug in Chrome itself, plus other bugs. For example, a WebKit bug combined with a Windows sandbox bug.
$20,000 - “Consolation reward, Flash / Windows / other”: Chrome / Win7 local OS user account persistence that does not use bugs in Chrome. For example, bugs in one or more of Flash, Windows or a driver. These exploits are not specific to Chrome and will be a threat to users of any web browser. Although not specifically Chrome’s issue, we’ve decided to offer consolation prizes because these findings still help us toward our mission of making the entire web safer.
All winners will also receive a Chromebook.
We will issue multiple rewards per category, up to the $1 million limit, on a first-come-first-served basis. There is no splitting of winnings or “winner takes all.” We require each set of exploit bugs to be reliable, fully functional end to end, disjoint, of critical impact, present in the latest versions and genuinely “0-day,” i.e. not known to us or previously shared with third parties. Contestants’ exploits must be submitted to and judged by Google before being submitted anywhere else.
Originally, our plan was to sponsor as part of this year’s Pwn2Own competition. Unfortunately, we decided to withdraw our sponsorship when we discovered that contestants are permitted to enter Pwn2Own without having to reveal full exploits (or even all of the bugs used!) to vendors. Full exploits have been handed over in previous years, but it’s an explicit non-requirement in this year’s contest, and that’s worrisome. We will therefore be running this alternative Chrome-specific reward program. It is designed to be attractive -- not least because it stays aligned with user safety by requiring the full exploit to be submitted to us. We guarantee to send non-Chrome bugs to the appropriate vendor immediately.
Drop by our table at CanSecWest to participate and check the latest news.
It's no secret that there's big money to be made in violating your privacy. Companies will pay big bucks to learn more about you, and service providers on the web are eager to get their hands on as much information about you as possible.
It’s hard for us to believe, but it’s been just over two years since we first announced the Chromium Security Rewards Program.
We’ve been delighted with the program’s success; we’ve issued well over $300,000 of rewards across hundreds of qualifying bugs, all of which we promptly fixed. It also helped inspire a wave of similar efforts from companies across the web, including Google’s own vulnerability reward program for web properties, which has also been a big hit.
We’ve been fascinated by the variety and ingenuity of bugs submitted by dozens of researchers. We’ve received bugs in roughly every component, ranging from system software (Windows kernel / Mac OS X graphics libraries / GNU libc) to Chromium / WebKit code and to popular open source libraries (libxml, ffmpeg). Chromium is a more stable and robust browser thanks to the efforts of the wider security community.
Today we’re expanding the scope of the Chromium program to formally include more items that deserve recognition:
- High-severity Chromium OS security bugs are now in scope. Chromium OS includes much more than just the Chromium browser, so we’re rewarding security bugs across the whole system, as long as they are high severity and present when “developer mode” is switched off. Examples of issues that may generate a reward could include (but are not limited to):
- Renderer sandbox escapes via Linux kernel bugs.
- Memory corruptions or cross-origin issues inside the Pepper Flash plug-in.
- Serious cross-origin or memory corruption issues in default-installed apps, extensions or plug-ins.
- Violations of the verified boot path.
- Web- or network-reachable vulnerabilities in system libraries, daemons or drivers.
- We may elect to issue “bonuses” ranging from $500 to $1000 if a bug reporter takes on fixing the bug they have found themselves. For eligibility, this process involves working with the Chromium community to produce a peer reviewed patch. These bonuses are granted on top of the base reward, which typically runs between $500 and $3133.70.
- The base reward for a well-reported and significant cross-origin bug (for example a so-called UXSS or “Universal XSS”) is now $2000.
Perhaps most importantly, this program reflects several of our core security principles: engaging the community, building defense in depth, and particularly making the web safer for everyone.
Related to this third core principle, we’re particularly excited by all the work that has been done on shared components. For example, a more robust WebKit not only helps users of two major desktop browsers, but also a variety of tablet and mobile browsers.