An Intentional Mistake: The Anatomy of Google’s Wi-Fi Sniffing Debacle

Photo: jadjadjad /Flickr

Google’s public account of how it came to secretly intercept Americans’ data sent over unencrypted Wi-Fi routers during a two-year period doesn’t quite mesh with what the search giant told federal regulators.

And if Google had its way, the public would never have learned that the software on Google’s Street View mapping cars was “intended” to collect payload data from open Wi-Fi networks.

A Federal Communications Commission document disclosed Saturday showed for the first time that the software in Google’s Street View mapping cars was “intended” to collect Wi-Fi payload data, and that engineers had even transferred the data to an Oregon storage facility. Google tried to keep that and other damning aspects of the Street View debacle from public review, the FCC said.

Google accompanied its responses to the FCC inquiry “with a very broad request for confidential treatment of the information it submitted,” the FCC said in a letter to Google announcing that it would remove most of the redactions from the commission’s public report and other documents surrounding the debacle.

The FCC document unveiled Saturday is an unredacted version of an FCC finding, which was published last month with dozens of lines blacked out. The report said that Google could not be held liable for wiretapping, despite a federal judge holding otherwise.

The unredacted FCC report refers to a Google “design document” written by an engineer who crafted the Street View software to collect so-called payload data, which includes telephone numbers, URLs, passwords, e-mail, text messages, medical records, video and audio files sent over open Wi-Fi networks.

The engineer is referred to as “Engineer Doe” in the report, though he was identified on Sunday as Marius Milner, a well-known figure in the Wi-Fi hacking community. The document says the software Milner used collected 200 gigabytes of data via Street View cars between 2008 and 2010:

“The design document showed that, in addition to collecting data that Google could use to map the location of wireless access points, Engineer Doe intended to collect, store, and analyze payload data from unencrypted Wi-Fi networks. The design document notes that ‘[w]ardriving can be used in a number of ways,’ including ‘to observe typical Wi-Fi usage snapshots.’ In a discussion of ‘Privacy Considerations,’ the design document states, ‘A typical concern might be that we are logging user traffic along with sufficient data to precisely triangulate their position at a given time, along with information about what they were doing.’ That statement plainly refers to the collection of payload data because MAC addresses, SSIDs, signal-strength measurements, and other information used to map the location of wireless access points would reveal nothing about what end users ‘were doing.’ Engineer Doe evidently intended to capture the content of Wi-Fi communications transmitted when Street View cars were in the vicinity, such as e-mail and text messages sent to or from wireless access points. Engineer Doe identified privacy as an issue but concluded that it was not a significant concern because the Street View cars would not be ‘in proximity to any given user for an extended period of time,’ and ‘[n]one of the data gathered … [would] be presented to end users of [Google’s] services in raw form.’ Nevertheless, the design document listed as a ‘to do’ item, ‘[D]iscuss privacy considerations with Product Counsel.’ That never occurred. The design document also states that the Wi-Fi data Google gathered ‘[would] be analyzed offline for use in other initiatives,’ and that ‘[a]nalysis of the gathered data [was] a non-goal (though it [would] happen).’”

The majority of those words were originally blacked out at Google’s request, but the commission subsequently concluded, after the report was filed, that much of the material should be made publicly available because “[d]isclosure of this information may cause commercial embarrassment, but that is not a basis for requesting confidential treatment.”

Rewind to May 2010, when Google announced the Street View debacle:

So how did this happen? Quite simply, it was a mistake. In 2006 an engineer working on an experimental Wi-Fi project wrote a piece of code that sampled all categories of publicly broadcast WiFi data. A year later, when our mobile team started a project to collect basic WiFi network data like SSID information and MAC addresses using Google’s Street View cars, they included that code in their software—although the project leaders did not want, and had no intention of using, payload data.

While those sentences are technically true, one would have no idea from reading them that the payload-slurping software was intentionally included and that project leaders had been informed, in detail, about the software. (Google’s unnamed project manager claims not to have read Milner’s design document.)

In fact, a 2010 editorial from the Electronic Frontier Foundation shows that even experts read Google’s blog post to mean the sensitive data was collected through an honest mistake by code-reusing engineers, rather than, as the FCC report makes clear, through an engineering team’s intentional choice that the managers tasked with oversight entirely missed.

“[T]he company admitted that its audit of the software deployed in the Street View cars revealed that the devices actually had been inadvertently collecting content transmitted over non-password protected Wi-Fi networks…. Penalties for wiretapping electronic communications in the federal Electronic Communications Privacy Act (ECPA) only apply to intentional acts of interception, yet Google claims it collected the content by accident,” wrote then-EFF attorney Jennifer Granick.

Google also demanded that the FCC black out passages revealing that several engineers had access to the Street View code, and that payload data was reviewed by engineers on at least two occasions. The unredacted FCC report also showed that Google’s supervision of the Street View project was “minimal.”

“In October 2006, Engineer Doe shared the software code and a ‘design document’ explaining his plans with other members of the Street View project. The design document identified ‘Privacy Considerations’ and recommended review by counsel, but that never occurred. Indeed, it appears that no one at the company carefully reviewed the substance of Engineer Doe’s software code or the design document,” the unredacted document said.

Google management said publicly it did not realize it was sniffing packets of data on unsecured Wi-Fi networks in about a dozen countries until German privacy authorities began questioning what data Google’s Street View mapping cars were collecting. Google, along with other companies, uses databases of Wi-Fi networks and their locations to augment or replace GPS when determining the location of a computer or mobile device.

Google initially stored “all Wi-Fi data in machine-readable format” on hard disks in each Street View car, but “the Company ultimately transferred the data to servers at a Google data center in Oregon,” the unredacted report revealed.

The FCC originally released a heavily redacted version of its investigation into the Street View debacle last month,
fining the company $25,000 for stonewalling the investigation.

But the report had black bars over the key findings. The FCC followed procedures that allow companies to withhold business-related confidential information from the public. So, at Google’s request, it initially redacted its report, known as a “notice of apparent liability,” according to an e-mail from Tammy Sun, an FCC spokeswoman.

However, the FCC did not agree with Google’s “broad requests for confidential treatment” and was moving to uncensor its report, which required giving Google an opportunity to protest the decision.

So Google decided to preempt the FCC.

On Saturday, a dumping ground day for news, Google forwarded a virtually unredacted version of the report to the Los Angeles Times. The FCC posted its mostly unredacted version of the document on its website three days later.

Google declined to be interviewed for this story. Instead, it released a canned statement attributable to “a Google spokesperson.”

“We decided to voluntarily make the entire document available except for the names of individuals. While we disagree with some of the statements made in the document, we agree with the FCC’s conclusion that we did not break the law. We hope that we can now put this matter behind us.”

Both the redacted and unredacted FCC reports concluded that, between 2008 and 2010, “Google’s Street View cars collected names, addresses, telephone numbers, URLs, passwords, e-mail, text messages, medical records, video and audio files, and other information from internet users in the United States.”

But, the commission said, Google did not engage in illegal wiretapping because the data was flowing, unencrypted, over open radio waves.

The commission found that legal precedent — and engineer Milner’s invocation of the Fifth Amendment — meant Google was off the hook for wiretapping. The FCC agreed with Google that its actions did not amount to wiretapping because the unencrypted Wi-Fi signals were “readily accessible to the general public.”

According to the Wiretap Act, amended in 1986, it’s not considered wiretapping “to intercept or access an electronic communication made through an electronic communication system that is configured so that such electronic communication is readily accessible to the general public.”

But U.S. District Judge James Ware, who is presiding over about a dozen lawsuits accusing Google of wiretapping Americans, ruled last year that Google could be held liable for wiretapping damages.

Judge Ware said that the exemption did not cover open, unencrypted Wi-Fi networks and instead applied only to “traditional radio services” like police scanners. The lawsuits have been stayed pending the outcome of Google’s appeal.

Researchers: Skype Ignored Location-Tracking Vulnerability for More Than a Year

Skype learned more than a year ago about a privacy vulnerability that would allow someone to identify the IP address and possibly the geographic location of a user, but left it unfixed, according to researchers who say they notified the company in 2010.

Stevens Le Blond, formerly a researcher at the French computer science institute Inria and now at the Max Planck Institute for Software Systems, told the CIO Journal that he and fellow researchers at the Polytechnic Institute of New York University disclosed the vulnerability to Skype in November 2010 and published their findings in October 2011. They were therefore surprised to find the vulnerability still unfixed last week, after someone posted a script online showing how Skype could be exploited to uncover the local and remote IP addresses of users.

When asked about the researchers’ disclosure, Skype, which is owned by Microsoft, repeated only what Skype had told reporters last week when a different exploit also exposing IP addresses was published. Adrian Asher, director of product security for Skype, said at the time that Skype was “investigating reports of a new tool that captures a Skype user’s last known IP address. This is an ongoing, industry-wide issue faced by all peer-to-peer software companies.”

“By calling it a ‘new tool’ it means they don’t have to respond as urgently,” Le Blond told the Journal. “It makes it seem like they just found out.”

The researchers found that they were able to uncover the IP address of Skype users, and their city location, by conducting a masked call to a user. The call could be made in a way that would prevent a notification from popping up on the user’s screen and prevent the call from appearing in a user’s call history.

Once the call was made, the researchers obtained the IP address from information that Skype automatically sends to the caller. By repeating a call every hour, they could actually map a user’s movement to determine if they moved between cities. In this way, they surreptitiously tracked the city-level location of 10,000 Skype users for two weeks.
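The analysis step the researchers describe — turning hourly IP-to-city lookups into a movement trail — can be sketched in a few lines. The function name and sample data below are illustrative only, not taken from the researchers’ tooling:

```python
def city_transitions(samples):
    """Given chronological (timestamp, city) observations, one per hourly
    probe, return apparent moves as (time_seen, from_city, to_city)."""
    moves = []
    for prev, cur in zip(samples, samples[1:]):
        if prev[1] != cur[1]:
            moves.append((cur[0], prev[1], cur[1]))
    return moves

# Hypothetical geolocated results of four hourly masked-call probes.
observations = [
    ("09:00", "Paris"),
    ("10:00", "Paris"),
    ("11:00", "Lyon"),
    ("12:00", "Lyon"),
]
print(city_transitions(observations))  # one Paris-to-Lyon move at 11:00
```

Repeated over two weeks and 10,000 users, this simple comparison is all the post-processing the city-level tracking requires; the hard part is the masked call that harvests the IP address in the first place.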

They decided to check if the vulnerability had been fixed after someone released information anonymously on Pastebin last week that showed how to exploit a patched version of Skype 5.5 to obtain an IP address in a different manner that doesn’t require a masked call.

The technique involves enabling debug logging, doing a search on active users as if to add them as a contact, and then viewing their vcard, or contact information card, which will generate an IP address in the logs. Using IP address research tools, someone could then track the location of the IP address to a city.
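The final log-mining step above amounts to scanning the debug output for address-like strings. The excerpt below is fabricated, since the real Skype 5.5 debug-log format is not documented here; only the presence of peer IPs matters:

```python
import re

# Fabricated stand-in for a client debug log produced after viewing a
# target's vcard; the format is an assumption, not Skype's actual output.
log = """\
-> vcard request for user: target_user
noise line with no address
peer candidate 203.0.113.7:40123 reachable
"""

# Extract IPv4-looking strings, the way the Pastebin technique mined
# the debug output; these would then be fed to IP geolocation tools.
ips = sorted(set(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", log)))
print(ips)  # → ['203.0.113.7']
```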

Keith Ross, one of the researchers who notified Skype in 2010, told the CIO Journal that Skype had likely not fixed the problem because it may be “deeply embedded in the code” and require “heavy restructuring” to resolve.

Phishers Offer Fake Storage Upgrades

Co-Author: Ayub Khan

Customers of popular email service providers have been a common target of phishers for identity-theft purposes. Phishers are constantly devising new bait strategies in the hope of stealing user email addresses and passwords. In April 2012, Symantec observed phishing pages that mimicked popular email services in an attempt to dupe users with attractive storage plans.

Customers were flooded with fake offers of free additional storage space for services such as email, online photo albums, and documents. In the first example, the phishing site was titled “Welcome to New [BRAND NAME] Quota Verification Page”. According to the bogus offer, the additional storage plan ranged from 20 GB to 1 TB per year, at no extra cost. The phishing page boasted that the free additional storage plan would help customers avoid losing data or being unable to send and receive email because of exhausted storage space. It also stated that the plan would auto-renew each year and that the customer could cancel at any time by returning to the same page:


To buy time and avoid customer suspicion when the bogus offer failed to materialize, the phishers indicated that customers would be contacted 30 days before renewal, and that the upgrade would take effect within 24 hours. After user credentials were entered, the phishing page redirected to a page confirming that the upgrade had been initiated and completed, then redirected back to the legitimate service’s website:

Similar phishing pages were observed spoofing other email services. The phishing site in this second example was titled “Obtain Free Additional Storage”. The same bait was used here as well:


To gain customer trust, the email address field was auto-populated on the fake page; the address is also carried in the query string. A closer look at these scams makes it evident that they are targeted attacks. By randomizing the email address in the query string of the phishing URL, the phishers can aim the same phishing page at multiple users. Below is the URL format:

http://*****/[email protected]&[email protected]

Internet users are advised to follow best practices to avoid phishing attacks:

  • Do not click on suspicious links in email messages.
  • Avoid providing any personal information when answering an email.
  • Never enter personal information in a pop-up page or screen.
  • When entering personal or financial information, ensure the website is encrypted with an SSL certificate by looking for the padlock, ‘https’, or the green address bar.
  • Frequently update your security software (such as Norton Internet Security 2012) which protects you from online phishing.

U.S. Appeals Court Clears Torture Memo Author

Protesters confront John Yoo, a constitutional law professor at the University of California, Berkeley, as he makes his way to a classroom on Monday, Aug. 17, 2009, in Berkeley, Calif. Photo: Noah Berger/Associated Press

A federal appeals court said Wednesday that John Yoo, the George W. Bush administration lawyer who wrote memos used to rationalize torture of suspected terrorists, cannot be sued by enemy combatants who claim they were tortured.

Yoo, who was the deputy assistant attorney general in the Justice Department’s Office of Legal Counsel from 2001 to 2003, was sued by Jose Padilla, the so-called “dirty bomber.” Padilla, an American citizen, claims Yoo’s internal legal opinions paved the way for his harsh interrogation while he was secretly held without charges at a Navy brig in South Carolina for more than three years.

A federal judge said in 2009 that Padilla, who was convicted of terror-related charges, could sue Yoo for damages because his lawsuit “has alleged sufficient facts to satisfy the requirement that Yoo set in motion a series of events that resulted in the deprivation of Padilla’s constitutional rights.” Yoo’s memos concluded that techniques such as prolonged sleep deprivation, binding in stress positions, and waterboarding did not amount to torture.

Hearing Yoo’s appeal, a three-judge panel of the 9th U.S. Circuit Court of Appeals agreed with Yoo’s contention that he should be immune from the suit because it was not clearly established that harsh treatment was unconstitutional. Padilla claims he “suffered gross physical and psychological abuse” by government authorities, which included death threats, psychotropic drugs, shackling and manacling, and being subjected to noxious fumes and constant surveillance.

“It was not ‘beyond debate’ at that time that Padilla — who was not a convicted prisoner or criminal defendant, but a suspected terrorist designated an enemy combatant and confined to military detention by order of the president — was entitled to the same constitutional protections as an ordinary convicted prisoner or accused criminal,” Judge Raymond Fisher wrote for the 3-0 appeals court.

Fisher added that, even today, “it remains murky whether an enemy combatant detainee may be subjected to conditions of confinement and methods of interrogation that would be unconstitutional if applied in the ordinary prison and criminal settings.”

Yoo, now a University of California, Berkeley law professor, said in an e-mail that Padilla now “will need to find a new hobby for his remaining time in prison.”

The appellate court’s decision, he said, “confirms that this litigation has been baseless from the outset. For several years, Padilla and his attorneys have been harassing the government officials he believes to have been responsible for his detention and ultimately conviction as a terrorist.”

Padilla, of Brooklyn, was originally charged in connection with an al-Qaeda plot to unleash a radioactive “dirty bomb” in the United States. Padilla is serving a 17-year sentence after being convicted of unrelated charges of conspiring to commit murder overseas.