1&1 Running Nearly Seven Years Out of Date Version of phpMyAdmin

Two weeks ago we posted about FatCow running an over six years out of date version of phpMyAdmin on their servers. In the post we mentioned that was the most out of date software we had seen in a long time, but that dubious distinction has now been taken by 1&1 and the nearly seven years out of date version of phpMyAdmin they use. They are running phpMyAdmin 2.6.4-pl3, which was released on October 22, 2005. The subsequent version, a security update, was released on November 15, 2005.

1&1 tells their customers it is important to keep software up to date to avoid being hacked:

One way to avoid attacks, is to make sure to keep your programs
and scripts up-to-date. Check regularly for security warnings and
make sure to install security patches as they become available.

They obviously don’t follow their own advice, though they claim that they do:

1&1 system administrators work hard to make sure that our 1&1 servers are protected from known vulnerabilities by keeping all programs and services up-to-date.

phpMyAdmin provides a page listing all of the security announcements for the software (something other software developers should also be providing). In 2005, three serious security vulnerabilities were found that probably impact the version of phpMyAdmin 1&1 is running. The version probably also contains most, if not all, of the 16 serious severity security issues, plus the one considered “quite dangerous,” fixed in 2006 and 2007 that we counted as impacting the version used by FatCow. And it probably contains more vulnerabilities that were fixed in later years.

New Zealand Intel Agency Investigated for Unlawful Spying on Kim Dotcom

The legal case against Megaupload founder Kim Dotcom continues to spiral out of control as New Zealand’s Prime Minister announced this week that an inquiry into unlawful government spying on Dotcom and others has been launched.

Prime Minister John Key announced on Monday that an inspector general has launched an investigation into allegations that a government intelligence service had illegally intercepted the communications of Dotcom and other individuals targeted in the case.

The wiretapping was allegedly done by the Government Communications Security Bureau, or GCSB, as part of a controversial January raid on the Dotcom mansion. The GCSB intercepted communications in an effort to help the New Zealand police locate individuals who were being sought for arrest in the Megaupload case, according to Key.

Dotcom and co-defendant Bram van der Kolk, as well as their families, are all New Zealand residents and were reportedly targeted in the communications interceptions.

By law the GCSB is required to obtain warrants to intercept communications involving New Zealand citizens and residents, and to have the prime minister sign off on such warrants before conducting the surveillance. But Key said he was not asked to approve warrants in this case, nor was he briefed on the GCSB operation beforehand.

Key said he was “quite shocked” when he found out last week that the GCSB had committed the unlawful communications intercepts.

“I expect our intelligence agencies to operate always within the law,” he said in a statement released by his office. “Their operations depend on public trust.”

In an interview with Radio New Zealand (.mp3), Dotcom’s U.S. lawyer, Ira Rothken, said he and the Megaupload founder were “deeply concerned that there appears to be allegations of domestic spying on residents that bypasses the judicial process and the checks and balances on that.”

Rothken and Dotcom found out about the spying and inquiry only through a press release issued by Key’s office.

“We look forward to this inquiry and appreciate the Prime Minister doing the right thing,” Rothken said in the interview.

The GCSB, which proclaims on its website that it has “Mastery of Cyberspace for the security of New Zealand,” is in charge of conducting foreign telecommunications and internet intelligence, as well as ensuring the integrity and confidentiality of government communications.

Dotcom and the Megaupload case have become a cause célèbre in New Zealand after his mansion was raided by 70 heavily armed police officers who arrived via helicopters.

The Megaupload founder and his three co-defendants, van der Kolk, Mathias Ortmann and Finn Batato, are currently out on bail in New Zealand, awaiting a hearing next March to determine if they should be extradited to the U.S. to face charges of secondary copyright infringement for operating file-sharing websites.

If found guilty, the four could face up to 20 years in prison and million-dollar fines.

But so far, Dotcom has scored multiple legal victories in the campaign to defend himself against what the U.S. calls the biggest copyright infringement case in history.

A court has already ruled that warrants used to conduct the raid on his residence were unlawful. And a New Zealand judge also declared that the FBI acted illegally when it cloned the data on computer hard disks seized from Dotcom’s residence in the raid and sent them to the U.S.

Via Twitter, Dotcom expressed surprise that a government intelligence agency was involved in his copyright case and said that he welcomed the government inquiry into the spying, but that he thought the inquiry into wrongdoing should be extended to his entire case.

The joys and hazards of multi-process browser security

Web browsers with some form of multi-process model are becoming increasingly common. Depending on the exact setup, there can be significant consequences for security posture and exploitation methods.

Spray techniques

Probably the most significant security effect of multi-process models is their impact on spraying. Spraying, of course, is a technique where parts of a process's heap or address space are filled with data helpful for exploitation. It's sometimes useful to spray the heap with a certain pattern of data, or spray the address space in general with executable JIT mappings, or both.
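
To make this concrete, here's a minimal, hypothetical sketch of the classic data-pattern spray from JavaScript -- no real shellcode, just a placeholder value -- showing how cheaply script can fill a large slice of the heap with chosen bytes:

    // Hypothetical sketch of data-pattern heap spraying from script.
    // "\u9090" is a placeholder value, not meaningful shellcode.
    var chunk = "\u9090";
    while (chunk.length < 0x8000) chunk += chunk; // grow to ~64 KB of pattern
    var spray = [];
    for (var i = 0; i < 1000; i++) {
      // Appending the index forces a fresh allocation each iteration
      // rather than letting the engine share one string.
      spray.push(chunk + i);
    }
    // The heap now holds on the order of 64 MB of predictable data, so a
    // stray pointer dereference is likely to land on attacker-chosen bytes.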

In the good ol' days, when every part of the browser and all the plug-ins were run in the same process, there were many possible attack permutations:
  • Spray Java JIT pages to exploit a browser bug.
  • Spray Java JIT pages to exploit a Flash bug.
  • Spray Flash JIT pages to exploit a browser bug.
  • Spray Java JIT pages to exploit Java.
  • You could even spray browser JS JIT pages to exploit Java if you wanted to ;-)
  • ...etc.

Since the good ol' days, various things happened to lock all this down:
  • The Java plug-in was rearchitected so that it runs out-of-process in most browsers.
  • IE and Chromium placed page limits on JavaScript-derived JIT pages (covered a little in the famous Accuvant paper).
  • Firefox introduced its out-of-process plug-ins feature (for some plug-ins, most notably Flash), and Chromium has run all plug-ins out-of-process since its first release.

The end result is trickier exploitation, although it's worth noting that one worrisome combination remains: IE still runs Flash in-process, and this has been abused by attackers in many of the recent IE 0days.

One-shot vs. multi-shot

The terms "one-shot" and "multi-shot" have long been used in the world of server-side exploitation. "One-shot" refers to a service that is dead after just one crash -- so your exploit had better be reliable! "Multi-shot" refers to a service whereby it remains running after your lousy exploit causes a crash. This could be because the service has a parent process that launches new children if they die or it could simply be because the service is launched by a framework that automatically restarts dead services.

Although moving to a multi-process browser is generally a very positive thing for security and stability, you do run the risk of introducing "multi-shot" attacks.

In other words, let's say your exploit isn't 100% reliable. Wouldn't it be nice if you could just use a bit of JavaScript to run the exploit over and over in a child process until it works? Perhaps you simply weren't able to defeat ASLR and you're in the situation where you have a 1/256 chance of your hard-coded address being correct. Again, this could be brute-forced in a "multi-shot" attack.
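
The arithmetic is what makes this attractive: with a 1/256 guess, the chance of success within n attempts is 1 - (255/256)^n, which crosses 50% at roughly 177 tries and 99% at roughly 1,180. A hypothetical driver loop might look like this (attempt is a made-up stand-in for running the exploit once in a freshly spawned child):

    // Hypothetical multi-shot driver; attempt(cb) is a made-up stand-in
    // for launching one exploit attempt in a fresh child process and
    // reporting whether the child survived. For a 1/256 guess,
    // P(success within n tries) = 1 - (255/256)^n.
    function bruteForce(attempt, tries) {
      tries = tries || 1;
      attempt(function (succeeded) {
        if (succeeded) {
          console.log("landed after " + tries + " tries");
        } else {
          bruteForce(attempt, tries + 1); // host respawns the crashed child
        }
      });
    }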

The most likely "multi-shot" attacks are against plug-ins that are run out-of-process, or against browser tabs, if browser tabs can have separate processes.

These attacks can be defended against by limiting the rate of child process crashes or spawns. Chromium deploys some tricks in this area.
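
As a sketch of what such a limit could look like (with made-up window and threshold values -- Chromium's real heuristics differ), a sliding-window counter on child crashes suffices:

    // Sketch of one possible defense: a sliding-window cap on child
    // respawns. The window and threshold are made-up values; Chromium's
    // actual heuristics differ.
    var crashTimes = [];
    var WINDOW_MS = 60000; // consider the last minute
    var MAX_CRASHES = 3;   // allow at most 3 respawns per window

    function mayRespawn(now) {
      now = now || Date.now();
      while (crashTimes.length && now - crashTimes[0] > WINDOW_MS) {
        crashTimes.shift(); // forget crashes outside the window
      }
      if (crashTimes.length >= MAX_CRASHES) {
        return false; // refuse to respawn; show an error page instead
      }
      crashTimes.push(now);
      return true;
    }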

Broker escalation

Once an attack has gained code execution inside a sandbox, there are various directions it might go next. It might attack the OS kernel. Or for the purposes of this discussion, it might attack the privileged broker. The privileged broker typically runs outside of the sandbox, so any memory corruption vulnerability in the broker is a possible avenue for sandbox escape.

To attack such a memory corruption bug, you'll likely need to defeat DEP / ASLR in the broker process. An interesting question is: how far along are you already, by virtue of code execution in the sandboxed process? Obviously, you know the full memory map layout of the compromised sandboxed process.

The answer is that it depends on your OS and the way the various processes relate to each other. The situation is not ideal on Windows: due to the way the OS works, certain system-critical DLLs are typically located at the same address across all processes. So ASLR in the broker process is already compromised to an extent, no matter how the sandboxed processes are created. I found this interesting.

The situation is better on Linux, where each process can have a totally different address space layout, including system libraries, executable, heap, etc. This is taken advantage of by the Chromium "zygote" process model for the sandboxed processes. So a compromise of a sandboxed process does not give any direct details about the address space layout of the broker process. There may be ways to leak it, but not directly, and /proc certainly isn't mapped in the sandboxed context! All this is another reason I recommend 64-bit Linux running Chrome as a browsing platform.
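
You can see this per-process randomization on Linux for yourself. This little Node.js demo (run outside any sandbox, since, as noted, /proc isn't available inside one) execs the same one-liner twice and compares the first mapping in /proc/self/maps:

    // Linux-only demo (Node.js, run outside any sandbox) that each fresh
    // exec gets its own randomized layout: run the same one-liner twice
    // and compare the first mapping in /proc/self/maps.
    var execFileSync = require("child_process").execFileSync;

    var oneLiner =
      'console.log(require("fs").readFileSync("/proc/self/maps", "utf8").split("\\n")[0])';
    var a = execFileSync(process.execPath, ["-e", oneLiner]).toString();
    var b = execFileSync(process.execPath, ["-e", oneLiner]).toString();
    console.log(a.trim());
    console.log(b.trim());
    console.log(a === b ? "same layout" : "different layouts (per-exec ASLR)");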
