Oct 20 2009

Chromium and Linux sandboxing

It was great to talk to so many people about Chromium security at HITB Malaysia. I was quite amused to be at a security conference and have a lot of conversations like:

Me: What browser do you use?
Other: Google Chrome.
Me: Why is that?
Other: Oh, it's so much faster.
Me: Oh, you saw that awesome JSNES, huh? (http://benfirshman.com/projects/jsnes/)

It's a sobering reminder that users -- and even security experts -- often make decisions based on things like speed and stability. It was similar with vsftpd. I set out to build the most secure FTP server, but usage took off unexpectedly because of its speed and scalability.

Julien talked about his clever Linux sandboxing trick that is used in the Chromium Linux port. One component of the sandbox is an empty chroot() jail, but setting up such a jail is a pain on many levels. The problems and solutions are as follows:
  • chroot() needs root privilege. Therefore, a tiny setuid wrapper binary has been created to execute sandboxed renderer processes. Some people will incorrectly bleat and moan about any new setuid binary, but the irony is that it is required to make your browser more secure. Also, a setuid binary can be made small and careful. It will only execute a specific trusted binary (the Chromium renderer) inside an empty jail.

  • exec()ing something from inside an empty jail is hard, because your view of the filesystem is empty. You could include copies of the needed dynamic libraries or a static executable, but both of these are a maintenance and packaging nightmare. This is where Julien's clever tweak comes in. By using the clone() flag CLONE_FS, and sharing the FS structure between a trusted / privileged thread and the exec()ed renderer, the trusted thread can call chroot() and have it affect the unprivileged, untrusted renderer process post-exec (see the sketch after this list). Neat, huh?

  • Other tricks, such as the CLONE_NEWPID and CLONE_NEWNET flags, are used or will be used to prevent a compromised renderer from sending signals to other processes and from accessing the network.
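
To make the trick concrete, here is a minimal sketch of the CLONE_FS mechanism. It is not the actual Chromium code: it assumes it is run as root (in the real sandbox the setuid wrapper supplies that privilege) and that /var/empty exists and is an empty directory, and it uses a pair of pipes for synchronization instead of exec()ing a real renderer.

    /* Sketch: a privileged helper shares its fs_struct with an untrusted
     * child via CLONE_FS, then chroot()s it into an empty jail on request. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char child_stack[64 * 1024];
    static int to_parent[2];  /* child -> parent: "jail me now" */
    static int to_child[2];   /* parent -> child: "you are jailed" */

    static int untrusted(void *arg)
    {
        char c;
        (void)arg;
        /* The real sandbox execve()s the renderer here; dynamic linking and
         * any file opens must happen before the jail slams shut. */
        write(to_parent[1], "C", 1);  /* ask the trusted side to chroot us */
        read(to_child[0], &c, 1);     /* wait until it has done so */
        /* The shared fs_struct has been chroot()ed out from under us. */
        printf("child: /etc/passwd %s\n",
               access("/etc/passwd", F_OK) ? "is gone -- jailed"
                                           : "still visible");
        return 0;
    }

    int main(void)
    {
        char c;
        if (pipe(to_parent) || pipe(to_child))
            return 1;
        /* CLONE_FS shares one fs_struct (root, cwd, umask) between parent
         * and child. A hardened version could also pass CLONE_NEWPID and
         * CLONE_NEWNET here to cut off signals and network access. */
        pid_t pid = clone(untrusted, child_stack + sizeof(child_stack),
                          CLONE_FS | SIGCHLD, NULL);
        if (pid < 0)
            return 1;
        read(to_parent[0], &c, 1);    /* child is ready to be jailed */
        /* /var/empty is just an example of an empty directory. This call
         * jails *both* processes, since the fs_struct is shared. */
        if (chroot("/var/empty") || chdir("/"))
            return 1;
        write(to_child[1], "C", 1);
        waitpid(pid, NULL, 0);
        return 0;
    }

The point to notice is that the untrusted side never needs the privilege to chroot() itself: the trusted thread does it on its behalf through the shared FS structure, after the exec() has already happened.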

Finally, it is worth noting that the relationship between sandboxing and risk on the web is widely misunderstood. The current Chromium sandbox mitigates Critical risk bugs down to High risk bugs. This may be enhanced in the future. Since any bugs within the sandbox are still High risk, I of course take them very seriously and fix them as a priority. But that lowering of a certain percentage of your bugs away from Critical risk is really key. The vast majority of web pwnage out there is enabled by Critical risk bugs (i.e. full compromise of the user account => ability to install malware), so mitigating any of these down to High is a huge win. It's easy to get excited about any security bug, but we as an industry really need to get more practical and concentrate on the ones actually causing damage.

Attacking this point from another angle: any complicated software will inevitably have bugs, and a certain subset of bugs are security bugs. Note that any web browser is certainly a complicated piece of software :) Therefore, any web browser is always going to have security bugs. And indeed, IE, Opera, Firefox, Safari and Chrome all issue regular security updates. For some reason, the media reports on each and every patch as if it were a surprising or significant event. The real question, of course, is what you do in the face of the above realization. The Chromium story is two powerful mitigations: sandboxing to reduce severity away from Critical, and a very fast and agile update system to close any window of risk.
Oct 19 2009

vsftpd-2.2.1 released

Nothing too exciting, just two regressions fixed: "pasv_address" should work again, and SSL data connections should no longer fail after a long previous transfer or an extended idle period.
Oct 14 2009

Google shares malware samples with hacked site admins


Google has rolled out a feature that provides webmasters of compromised sites with samples of malicious code and other detailed information to help them clean up.

The search giant has long scanned websites for malware while indexing the world wide web. When it detects outbreaks, it includes language in search results that warns the site may be harmful and passes that information along so the Google Chrome, Mozilla Firefox, and Apple Safari browsers can more prominently warn users. Google also provides administrators a private list of infected pages so they can be cleaned up.

Now, Google will give additional detail by offering samples of malicious code that criminal hackers may have injected into a website. In some cases, the service will also identify the underlying cause of the malicious code. Admins of compromised websites will get the information automatically when logging in to Google’s Webmaster Tools.

“While it is important to protect users, we also know that most of these sites are not intentionally distributing malware,” Google’s Lucas Ballard wrote in announcing the new feature. “We understand the frustration of webmasters whose sites have been compromised without their knowledge and who discover that their site has been flagged.”

Over the past few years, a variety of studies have concluded that the majority of malware being foisted on web surfers comes from legitimate sites that have been compromised. Web applications that don’t properly vet text entered into search boxes and other website fields are one of the chief causes. Sloppy password hygiene by webmasters and compromises of website administration tools are two others.

The new feature will allow webmasters to view the malicious JavaScript, HTML, or Adobe Flash that has been injected into a site and provide the exact URL where it’s found. Ballard cautioned that the information should be considered a starting point in the process of cleaning the sullied site.

“If the underlying vulnerability is not identified and patched, it is likely that the site will be compromised again,” he said.

Source: The Register

Oct 14 2009

Nat Probe


This little but very useful program tries to send ICMP packets out of the LAN and detects all the hosts that allow it. With this you can find holes in your (or your company's) network, or in others: for example, hosts that allow p2p connections.

Explanation
When we use a gateway, we send packets with the destination IP of the target, but the destination MAC on the Ethernet frame is the MAC of the gateway. If we instead send such a packet to each of the different MACs on the LAN, we can tell which host is a gateway by which MAC sends back a response.
Sometimes we discover more than one box configured as a gateway; generally this is a misconfiguration, and such a box will respond with an ICMP redirect. To the probe this makes no difference, because the script only checks whether the MAC responds at all. A rough sketch of the idea follows.
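
As a sketch of the technique (this is not the natprobe code itself, and the interface name, MAC addresses and IP addresses are hypothetical placeholders): craft an ICMP echo request whose IP destination is off the LAN, but address the Ethernet frame to a candidate MAC. Repeat for every MAC gathered earlier, e.g. from an ARP sweep. It assumes Linux, root privilege, and a raw AF_PACKET socket.

    #define _GNU_SOURCE
    #include <arpa/inet.h>
    #include <net/ethernet.h>
    #include <net/if.h>
    #include <netinet/ip.h>
    #include <netinet/ip_icmp.h>
    #include <netpacket/packet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Standard Internet checksum (RFC 1071). */
    static unsigned short csum(void *buf, int len)
    {
        unsigned long sum = 0;
        unsigned short *p = buf;
        while (len > 1) { sum += *p++; len -= 2; }
        if (len) sum += *(unsigned char *)p;
        sum = (sum >> 16) + (sum & 0xffff);
        sum += (sum >> 16);
        return (unsigned short)~sum;
    }

    int main(void)
    {
        /* All of these are placeholders: adjust for your own LAN. */
        const char *ifname = "eth0";
        unsigned char src_mac[6]  = {0x00,0x11,0x22,0x33,0x44,0x55}; /* ours */
        unsigned char cand_mac[6] = {0x00,0xaa,0xbb,0xcc,0xdd,0xee}; /* candidate */
        const char *src_ip = "192.168.1.50";
        const char *dst_ip = "203.0.113.1";   /* any off-LAN address */

        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETHERTYPE_IP));
        if (fd < 0) { perror("socket (requires root)"); return 1; }

        unsigned char frame[sizeof(struct ether_header) +
                            sizeof(struct iphdr) + sizeof(struct icmphdr)];
        struct ether_header *eth = (struct ether_header *)frame;
        struct iphdr *ip = (struct iphdr *)(eth + 1);
        struct icmphdr *icmp = (struct icmphdr *)(ip + 1);

        /* Ethernet: the destination is the *candidate* MAC, not the MAC
         * of our configured gateway. */
        memcpy(eth->ether_dhost, cand_mac, 6);
        memcpy(eth->ether_shost, src_mac, 6);
        eth->ether_type = htons(ETHERTYPE_IP);

        /* IP: the destination is an address outside the LAN. */
        memset(ip, 0, sizeof(*ip));
        ip->version = 4;
        ip->ihl = 5;
        ip->ttl = 64;
        ip->protocol = IPPROTO_ICMP;
        ip->tot_len = htons(sizeof(struct iphdr) + sizeof(struct icmphdr));
        ip->saddr = inet_addr(src_ip);
        ip->daddr = inet_addr(dst_ip);
        ip->check = csum(ip, sizeof(*ip));

        /* ICMP echo request, no payload. */
        memset(icmp, 0, sizeof(*icmp));
        icmp->type = ICMP_ECHO;
        icmp->un.echo.id = htons(0x4e50);  /* arbitrary identifier */
        icmp->checksum = csum(icmp, sizeof(*icmp));

        struct sockaddr_ll addr = {0};
        addr.sll_family = AF_PACKET;
        addr.sll_protocol = htons(ETHERTYPE_IP);
        addr.sll_ifindex = if_nametoindex(ifname);
        addr.sll_halen = 6;
        memcpy(addr.sll_addr, cand_mac, 6);

        if (sendto(fd, frame, sizeof(frame), 0,
                   (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("sendto");
        close(fd);
        return 0;
    }

A real probe would then sniff for an echo reply or an ICMP redirect coming back from the candidate MAC; either response means that host is willing to route packets off the LAN.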

Source: http://code.google.com/p/natprobe/