Category: two-factor

May 31 2013

Someday you may ditch your two-factor authenticator for an electronic tattoo

Electronic “tattoos” and pills that turn your body into an authenticator are two next steps in password protection that Motorola is working on, as described at a session Wednesday at AllThingsD’s D11 conference. Regina Dugan, senior vice president of the Advanced Technology and Projects group at Motorola Mobility, showed off two wearable-computing methods that remove the security token from the two-factor equation.

The electronic tattoos described must strike a balance between the “mechanical mismatch” of hard, rigid machines and soft, pliable humans, Dugan said. The “tattoo” Dugan wore, which appeared to be more like a sticker on her left wrist, uses “islands of high-performance silicon connected by accordion-like structures” that allow the tattoo to flex and move with her skin to stay on and remain functional. Presumably, the silicon and wires would eventually be embedded into the skin to make the user a proper bionic human.

The pill, on the other hand, turns one’s entire body into an authenticator. Dugan described the pill as a vitamin-sized “reverse potato battery” that uses stomach acid as the electrolyte to power a switch. As the switch pulses on and off, it “creates an 18-bit EKG-like signal in your body, and your body becomes the authenticator,” Dugan said.


May 30 2013

iCloud users take note: Apple two-step protection won’t protect your data

A diagram showing how Apple's two-step verification works.

If you think your pictures, contacts, and other data are protected by the two-step verification protection Apple added to its iCloud service in March, think again. According to security researchers in Moscow, the measure helps prevent fraudulent purchases made with your Apple ID but does nothing to augment the security of files you store.

To be clear, iCloud data is still secure so long as the password locking it down is strong and remains secret. But in the event that your account credentials are compromised—which is precisely the eventuality Apple's two-factor verification is intended to protect against—there's nothing stopping an adversary from accessing data stored in your iCloud account. Researchers at ElcomSoft—a developer of sophisticated software for cracking passwords—made this assessment in a blog post published Thursday.

"In its current implementation, Apple’s two-factor authentication does not prevent anyone from restoring an iOS backup onto a new (not trusted) device," ElcomSoft CEO Vladimir Katalov wrote. "In addition, and this is much more of an issue, Apple’s implementation does not apply to iCloud backups, allowing anyone and everyone knowing the user’s Apple ID and password to download and access information stored in the iCloud. This is easy to verify; simply log in to your iCloud account, and you’ll have full information to everything stored there without being requested any additional logon information."


Mar 21 2013

Apple follows Google, Facebook, and others with two-step authentication

Apple has finally responded to increasing online security threats by introducing two-step authentication for iCloud. Like Google and other companies that already employ two-step authentication, Apple's system provides an extra layer of security on top of the existing iCloud password when users try to access their accounts from unrecognized devices. iCloud users can set up two-step authentication on their Apple IDs today by going to the Apple ID website and clicking the "Password and Security" tab.

Apple walks you through the process on its Apple ID management site.

For Apple, this means an authentication code is either sent via SMS to a phone number or displayed within the Find My iPhone app (if you have it installed) whenever you try to log in from somewhere new. A potential attacker will therefore have a much harder time getting into your iCloud account without physical access to the "trusted" device that receives the code. (Users are prompted to set up at least one trusted device when they turn on two-step authentication, though you can have more than one if you like.) Currently, two-step authentication is available to iCloud users in the US, UK, Australia, Ireland, and New Zealand.
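Stripped of Apple's specifics, a second-factor flow of this sort amounts to issuing a short-lived random code out of band (SMS or app push) and accepting it exactly once. Here is a minimal server-side sketch in Python; this is not Apple's implementation, and the code length, expiry window, and function names are all assumptions made for illustration:

```python
import secrets
import time

CODE_TTL = 300   # assumed expiry window, in seconds
_pending = {}    # account id -> (code, expiry time)

def issue_code(account: str) -> str:
    """Generate a 4-digit one-time code to push to the trusted device."""
    code = f"{secrets.randbelow(10_000):04d}"
    _pending[account] = (code, time.time() + CODE_TTL)
    return code

def verify_code(account: str, attempt: str) -> bool:
    """One-shot check: the pending code is consumed whether or not it matches."""
    code, expiry = _pending.pop(account, (None, 0.0))
    return (code is not None
            and time.time() < expiry
            and secrets.compare_digest(code, attempt))
```

The important design choices are that the code is consumed on first use (so a keylogged code is worthless afterwards) and that comparison uses `secrets.compare_digest` to avoid timing side channels.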

One of the benefits of setting this up on your iCloud account is that you'll no longer have to rely on security questions—which are inherently insecure—in order to regain access to your account if you lose your password. The downside (if you consider it that) is that once you set up two-step authentication, Apple will no longer be able to reset your password for you should you lose or forget it. This is what ended up biting Wired editor Mat Honan in the behind when his various accounts were compromised—hackers were able to gather enough personal information from Honan's e-mail and Amazon accounts to trick Apple support into resetting his iCloud password, giving them free rein to remotely wipe his iPhone, iPad, and MacBook.


Jun 08 2011

FLAMING RETORT – Three words for RSA. Promptness. Clarity. Openness.

What a lot of fuss RSA’s security breach has caused! And what a lot of fear and uncertainty and doubt still surrounds it!

In case you haven’t been following the story, it began in mid-March 2011, when RSA admitted that its security had been breached and that “certain information [was] extracted from RSA’s systems.” Some of that information was specifically related to RSA’s SecurID products; the CEO admitted that “this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation.”

The CEO, Arthur Coviello, also assured everybody that the company was “very actively communicating this situation to RSA customers.”

I thought this was a good start, even though it raised more questions than it answered.

An admission and an apology go a long way – provided that they are quickly followed by genuinely useful information which explains how the problem arose, what holes it introduced, how those holes can be closed, and what is being done to prevent anything like it from happening again.

But RSA’s version of “very actively communicating” with its customers didn’t go that way. We still don’t really know what happened. We don’t know what holes were opened up because of the attack. And RSA customers still can’t work out for themselves what sort of risk they’re up against. They have to assume the worst.

What we do know is that US engineering giant Lockheed Martin subsequently suffered an attempted break-in. Lockheed stated that the data stolen from RSA was a “contributing factor” to its own attack, and RSA’s Coviello agreed:

[O]n Thursday, June 2, 2011, we were able to confirm that information taken from RSA in March had been used as an element of an attempted broader attack on Lockheed Martin, a major U.S. government defense contractor. Lockheed Martin has stated that this attack was thwarted.

Additionally, as I reported yesterday, RSA is offering to replace SecurID tokens for at least some of its customers.

What’s fanning the flames in the technosphere is this: why would replacing your existing tokens with more of the same from RSA make any difference?

Because RSA has offered to replace tokens, speculation seems to be that the crooks who broke into RSA got away with a database linking tokens to customers in such a way that tokens for each company could be cloned. With that database, an attacker would only need to work out which employee had which token in order to produce the right “secret number” sequence.

That, the theory goes, lets you mount an effective attack. It goes something like this.

To tie a token to a user, use a keylogger to grab one or more of the user’s token codes, along with his username, network password, and token PIN. (The token PIN is essentially a password for the token itself.)

You can’t reuse the token code, of course – that’s why the person you’re attacking chose to use tokens in the first place – but you can use it to match the keylogged user with a token number sequence in your batch of cloned customer tokens.

So you now have a soft-clone of the user’s token. And, thanks to the keylogger, you have their username, password and PIN. Bingo. Remote login.
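Under this theory, the attacker's matching step is just a lookup: generate each cloned token's expected code for the moment the keylogger fired, and see which one agrees with the captured value. A minimal sketch in Python, using an RFC 6238-style HMAC one-time code as a stand-in for SecurID's proprietary algorithm (the seed database, serials, and function names are all invented for illustration):

```python
import hashlib
import hmac
import struct

def totp(seed: bytes, timestep: int, digits: int = 6) -> str:
    """RFC 6238-style code generation; a stand-in for SecurID's real algorithm."""
    msg = struct.pack(">Q", timestep)
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical stolen database: token serial -> secret seed
stolen_seeds = {
    "token-001": b"seed-for-alice-0001",
    "token-002": b"seed-for-bob---0002",
    "token-003": b"seed-for-carol-0003",
}

def match_keylogged_code(code: str, timestep: int) -> list:
    """Return serials of stolen seeds whose expected code matches the capture."""
    return [serial for serial, seed in stolen_seeds.items()
            if totp(seed, timestep) == code]
```

Once the captured code singles out a serial, the attacker has the soft-clone: from then on they can compute that token's codes themselves, and the keylogged username, password, and PIN complete the login.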

I don’t accept this speculation as complete.

Even if it were the method used in the Lockheed attack, why should I accept that it's a sufficient explanation? And even if it were, why should I accept – in the absence of any other information from RSA – that the same thing won't happen again? Is RSA now offering to stop retaining data that makes it possible for an intruder into its network to compromise mine? Why was it retaining such data in the first place?

More confusingly, if the only practicable attack requires an attacker to keylog the PIN of a user’s token, why is the entire SecurID product range considered at risk?

RSA sells tokens in which the PIN is entered on the token itself, which is equipped with a tiny keypad. Those PINs can’t be keylogged.

So why isn’t RSA stating that its more upmarket tokens are safe? Users of those devices could immediately relax. Or is RSA unwilling to make those claims because there are other potential attacks against its devices which might be mounted by attackers equipped with the stolen data?

Perhaps this token-to-customer mapping database theory is a red herring? After all, there might be other trade secrets the attackers made off with which would facilitate other sorts of attack.

For example, a cryptanalytical report might show how to clone tokens without any customer-specific data. Or confidential engineering information might suggest how to extract cryptographic secrets from tokens without triggering any tamper-protection, allowing them to be cloned with just brief physical access.

In short, the situation is confused because RSA hasn’t attempted to remove our confusion.

It’s no good having mandatory data breach disclosure laws if all they teach us is to admit we had a breach. We also need to convey information of obvious practical value to all affected parties. I’ll repeat my earlier list. When disclosing breaches, we need to explain:

* How the problem arose.

* What holes it introduced. (And what it did not.)

* How those holes can be closed.

* What is being done to prevent it from happening again.

Three words. Promptness. Clarity. Openness.

PS: Lockheed Martin makes the world’s most desirable vehicle. Here it is at Avalon airport, near Geelong in Australia. That’s what I call a flying kangaroo!