With Millions Paid in Hacker Bug Bounties, Is the Internet Any Safer?

Security researcher “Pinkie Pie” demonstrated the exploit he developed for attacking Google’s Chrome browser earlier this year. Photo: Kim Zetter/Wired

The night before the end of Google’s Pwnium contest at the CanSecWest security conference this year in Vancouver, a tall teen dressed in khaki shorts, tube socks and sneakers was hunkered down on a hallway bench at the Sheraton hotel, hacking away at his laptop.

With a $60,000 cash prize on the line, the teen, who goes by the hacker handle “Pinkie Pie,” was working hard to get his exploit for the Chrome browser stabilized before the close of the competition.

The only other contestant, a Russian university student named Sergey Glazunov, had already made off with one $60,000 prize for a zero-day exploit that attacked 10 different bugs.

Finally, with just hours to go before the end of the three-day competition, Pinkie Pie achieved his goal and dropped his exploit, a beauty of a hack that ripped through six zero-day vulnerabilities in Chrome and slipped out of the browser’s security sandbox.

Google called both hacks “works of art” and, within 24 hours of receiving each submission, had patched all of the bugs they exploited. Within days, the company had also added new defensive measures to Chrome to ward off similar attacks in the future.

Google’s Pwnium contest is a new addition to the company’s year-round bug bounty programs, launched in 2010, which pay independent security researchers to find and report security vulnerabilities in its Chrome browser and web properties.

Vendor bounty programs like Google’s have been around since 2004, when the Mozilla Foundation launched the first modern pay-for-bugs plan for its Firefox browser. (Netscape tried a bounty program in 1995, but the idea didn’t spread at the time.) In addition to Google and Mozilla, Facebook and PayPal have launched bug bounty programs, and even the crafts site Etsy recently got into the game with a program that pays not only for newly reported bugs but also, retroactively, for previously reported ones, as thanks to researchers who contributed to the site’s security before the bounty program began.

The Mozilla Foundation has paid out more than $750,000 since launching its bounty program; Google has paid out more than $1.2 million.

But some of the biggest vendors, which might be expected to have bounty programs, don’t. Microsoft, Adobe and Apple are just three software makers that have been criticized for not paying independent researchers for the bugs they find, even though the companies benefit greatly from the free work done by those who uncover and disclose security vulnerabilities.

Microsoft says its new BlueHat security program, which pays prizes of $50,000 and $250,000 to security pros who can devise defensive measures for specific kinds of attacks, is better than paying for bugs.

“I don’t think that filing and rewarding point issues is a long-term strategy to protect customers,” Microsoft security chief Mike Reavey said recently.

All of which raises the question: Eight years on, have bug bounty programs made browsers and web services more secure? And is there any way to really test that proposition?

Security Science

There’s no scientific method for determining whether software is more secure than it used to be, and no way to know how much a bounty program has improved the security of a particular piece of software, as opposed to other measures undertaken by its maker. Security isn’t just about patching bugs; it’s also about adding defensive measures, such as browser sandboxes, that mitigate entire classes of bugs. The combination of the two is what makes software more secure.

But everyone interviewed for this story says the anecdotal evidence strongly supports the conclusion that bounty programs have indeed improved the security of software. More than that, the programs have yielded security benefits that go far beyond the individual bugs they’ve helped fix.

In the most obvious sense, bounty programs make software more secure simply by reducing the number of security holes hackers can attack.

“There’s a finite number of bugs in these products, so every time you can knock out a bunch of them, you’re in a better place,” says top security researcher Charlie Miller, who’s responsible for finding a number of high-profile vulnerabilities in Apple’s iPhone and other products.

But one of the biggest indications that bounty programs have improved security is the decreasing number of bug reports that come in, according to Google.

“It’s a hard measurement to take, but we’re seeing a fairly sustained drop-off in the number of incoming reports we’re receiving for the Chromium program,” says Chris Evans, an information security engineer at Google who leads the company’s Chromium vulnerability rewards program as well as its new Pwnium contest, launched this year.

Google has its own internal fuzzing program to uncover security vulnerabilities, and the rate at which that team is finding bugs has dropped, too, Evans says. Google recently asked some of its best outside bug hunters why bug reports had declined and was told that vulnerabilities were simply “harder to find” these days. Bugs that are harder for researchers to find are also harder for hackers to find.

Bounty programs also improve security by encouraging researchers to disclose bugs responsibly, that is, to pass the information to vendors first so they can release a patch to customers before the details are publicly disclosed. And the programs help mend the fractious relationship that has long existed between researchers and vendors.

In 2009, Miller and fellow security researchers Alex Sotirov and Dino Dai Zovi launched a “No More Free Bugs” campaign to protest freeloading vendors who weren’t willing to pay for the valuable service bug hunters provided, and to call attention to the fact that researchers were often punished by vendors for trying to do a good deed.