Category Archives: two-factor

Someday you may ditch your two-factor authenticator for an electronic tattoo

Electronic “tattoos” and pills that turn your body into an authenticator are two next steps in password protection that Motorola is working on, as described at a session Wednesday at AllThingsD’s D11 conference. Regina Dugan, senior vice president of the Advanced Technology and Projects group at Motorola Mobility, showed off two “wearable computing”-oriented methods that remove the security tokens from the two-factor equation.

The electronic tattoos described must strike a balance between the “mechanical mismatch” of hard, rigid machines and soft, pliable humans, Dugan said. The “tattoo” Dugan wore, which appeared to be more like a sticker on her left wrist, uses “islands of high-performance silicon connected by accordion-like structures” that allow the tattoo to flex and move with her skin to stay on and remain functional. Presumably, the silicon and wires would eventually be embedded into the skin to make the user a proper bionic human.

The pill, on the other hand, turns one’s entire body into an authenticator. Dugan described the pill as a vitamin “reverse potato battery” that uses stomach acid as the electrolyte to power a switch. As the switch pulses on and off, it “creates an 18-bit EKG-like signal in your body, and your body becomes the authenticator,” Dugan said.


iCloud users take note: Apple two-step protection won’t protect your data

A diagram showing how Apple's two-step verification works.

If you think your pictures, contacts, and other data are protected by the two-step verification protection Apple added to its iCloud service in March, think again. According to security researchers in Moscow, the measure helps prevent fraudulent purchases made with your Apple ID but does nothing to augment the security of files you store.

To be clear, iCloud data is still secure so long as the password locking it down is strong and remains secret. But in the event that your account credentials are compromised—which is precisely the eventuality Apple's two-factor verification is intended to protect against—there's nothing stopping an adversary from accessing data stored in your iCloud account. Researchers at ElcomSoft—a developer of sophisticated software for cracking passwords—made this assessment in a blog post published Thursday.

"In its current implementation, Apple’s two-factor authentication does not prevent anyone from restoring an iOS backup onto a new (not trusted) device," ElcomSoft CEO Vladimir Katalov wrote. "In addition, and this is much more of an issue, Apple’s implementation does not apply to iCloud backups, allowing anyone and everyone knowing the user’s Apple ID and password to download and access information stored in the iCloud. This is easy to verify; simply log in to your iCloud account, and you’ll have full information to everything stored there without being requested any additional logon information."


Apple follows Google, Facebook, and others with two-step authentication

Apple has finally responded to increasing online security threats by introducing two-step authentication for iCloud. Like Google and other companies that already employ two-step authentication, Apple's system provides an extra layer of security on top of the existing iCloud password when users try to access their accounts from unrecognized devices. iCloud users can set up two-step authentication on their Apple IDs today by going to the Apple ID website and clicking the "Password and Security" tab.

Apple walks you through the process on its Apple ID management site.

For Apple, this means an authentication code is either sent via SMS to a phone number or found within the Find My iPhone app (if you have it installed) whenever you try to log in from somewhere new. This means that a potential attacker will have a harder time getting into your iCloud account without having physical access to your "trusted" device receiving the code. (Users are prompted to set up at least one trusted device when they turn on two-step authentication, though you can have more than one if you like.) Currently, two-step authentication is available to iCloud users in the US, UK, Australia, Ireland, and New Zealand.
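The mechanics of such a scheme can be sketched in a few lines. The helper names, the four-digit code, and the five-minute lifetime below are illustrative assumptions, not Apple's actual parameters:

```python
import hmac
import secrets
import time

# Hypothetical sketch of an SMS-style two-step flow; not Apple's design.
CODE_TTL = 300  # seconds a code remains valid (assumed)

def issue_code():
    """Generate a short numeric code and note when it was issued."""
    code = f"{secrets.randbelow(10 ** 4):04d}"
    return code, time.time()

def verify_code(submitted, expected, issued_at, now=None):
    """Accept the code only if it matches and has not expired."""
    now = time.time() if now is None else now
    if now - issued_at > CODE_TTL:
        return False
    # Constant-time comparison avoids leaking digits via timing.
    return hmac.compare_digest(submitted, expected)

code, issued = issue_code()
assert verify_code(code, code, issued)
assert not verify_code(code, code, issued, now=issued + CODE_TTL + 1)
```

In a real deployment the code would be stored server-side against the pending login attempt and delivered out of band, via SMS or the Find My iPhone app.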

One of the benefits to setting this up on your iCloud account is that you'll no longer have to rely on security questions—which are inherently insecure—in order to gain access to your account if you lose your password. The downside (if you consider it that) is that once you set up two-step authentication, Apple will no longer be able to reset your password for you should you lose or forget it. This is what ended up biting Wired editor Mat Honan in the behind when his various accounts were compromised—hackers were able to gather enough personal information from Honan's e-mail and Amazon accounts to trick Apple support into resetting his iCloud password, giving them free rein to remotely wipe his iPhone, iPad, and MacBook.


FLAMING RETORT – Three words for RSA. Promptness. Clarity. Openness.

What a lot of fuss RSA’s security breach has caused! And what a lot of fear and uncertainty and doubt still surrounds it!

In case you haven’t been following the story, it began in mid-March 2011, when RSA admitted that its security had been breached and that “certain information [was] extracted from RSA’s systems.” Some of that information was specifically related to RSA’s SecurID products; the CEO admitted that “this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation.”

The CEO, Arthur Coviello, also assured everybody that the company was “very actively communicating this situation to RSA customers.”

I thought this was a good start, even though it raised more questions than it answered.

An admission and an apology go a long way – provided that they are quickly followed by genuinely useful information which explains how the problem arose, what holes it introduced, how those holes can be closed, and what is being done to prevent anything like it from happening again.

But RSA’s version of “very actively communicating” with its customers didn’t go that way. We still don’t really know what happened. We don’t know what holes were opened up because of the attack. And RSA customers still can’t work out for themselves what sort of risk they’re up against. They have to assume the worst.

What we do know is that US engineering giant Lockheed Martin subsequently suffered an attempted break-in. Lockheed stated that the data stolen from RSA was a “contributing factor” to its own attack, and RSA’s Coviello agreed:

[O]n Thursday, June 2, 2011, we were able to confirm that information taken from RSA in March had been used as an element of an attempted broader attack on Lockheed Martin, a major U.S. government defense contractor. Lockheed Martin has stated that this attack was thwarted.

Additionally, as I reported yesterday, RSA is offering to replace SecurID tokens for at least some of its customers.

What’s fanning the flames in the technosphere is this: why would replacing your existing tokens with more of the same from RSA make any difference?

Because RSA has offered to replace tokens, speculation seems to be that the crooks who broke into RSA got away with a database linking tokens to customers in such a way that tokens for each company could be cloned. With that database, an attacker would only need to work out which employee had which token in order to produce the right “secret number” sequence.

That, the theory goes, lets you mount an effective attack. It goes something like this.

To tie a token to a user, use a keylogger to grab one or more of the user’s token codes, along with his username, network password, and token PIN. (The token PIN is essentially a password for the token itself.)

You can’t reuse the token code, of course – that’s why the person you’re attacking chose to use tokens in the first place – but you can use it to match the keylogged user with a token number sequence in your batch of cloned customer tokens.

So you now have a soft-clone of the user’s token. And, thanks to the keylogger, you have their username, password and PIN. Bingo. Remote login.
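SecurID's algorithm is proprietary, but the public TOTP scheme (RFC 6238) illustrates the same principle: anyone holding a token's seed can compute exactly the codes the token displays. A minimal sketch, with a made-up seed standing in for a leaked per-token record:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time code: HMAC-SHA1 over the time-step
    counter, dynamically truncated to a short decimal code."""
    counter = struct.pack(">Q", t // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# Hypothetical stolen seed; SecurID's real seed format differs.
stolen_seed = b"12345678901234567890"
now = int(time.time())
# The "soft-clone" yields the same code the victim's token shows.
assert totp(stolen_seed, now) == totp(stolen_seed, now)
```

This is why the seed database is the crown jewels: the code changes every 30 seconds, but the secret that generates it never does.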

I don’t accept this speculation as complete.

Even if it was the method used in the Lockheed attack, why would I accept that it’s a sufficient explanation? And even if it were, why would I accept – in the absence of any other information from RSA – that the same thing won’t happen again? Are they now offering to stop retaining data which makes it possible for an intruder into their network to compromise mine? Why would they insist on doing that anyway?

More confusingly, if the only practicable attack requires an attacker to keylog the PIN of a user’s token, why is the entire SecurID product range considered at risk?

RSA sells tokens in which the PIN is entered on the token itself, which is equipped with a tiny keypad. Those PINs can’t be keylogged.

So why isn’t RSA stating that its more upmarket tokens are safe? Users of those devices could immediately relax. Or is RSA unwilling to make those claims because there are other potential attacks against its devices which might be mounted by attackers equipped with the stolen data?

Perhaps this token-to-customer mapping database theory is a red herring? After all, there might be other trade secrets the attackers made off with which would facilitate other sorts of attack.

For example, a cryptanalytical report might show how to clone tokens without any customer-specific data. Or confidential engineering information might suggest how to extract cryptographic secrets from tokens without triggering any tamper-protection, allowing them to be cloned with just brief physical access.

In short, the situation is confused because RSA hasn’t attempted to remove our confusion.

It’s no good having mandatory data breach disclosure laws if all they teach us is to admit we had a breach. We also need to convey information of obvious practical value to all affected parties. I’ll repeat my earlier list again. When disclosing breaches, we need to explain:

* How the problem arose.

* What holes it introduced. (And what it did not.)

* How those holes can be closed.

* What is being done to prevent it from happening again.

Three words. Promptness. Clarity. Openness.

PS: Lockheed Martin makes the world’s most desirable vehicle. Here it is at Avalon airport, near Geelong in Australia. That’s what I call a flying kangaroo!

Facebook announces new security features – but do they go far enough?

Facebook has just published an article entitled Keeping You Safe from Scams and Spam. It’s all about improving security on its network.

In the past, Facebook has seemed curiously reluctant to do anything which might impede traffic.

After all, Facebook’s revenue doesn’t come from protecting you, the user. It comes from the traffic you generate whilst using the site.

So this latest announcement is a welcome sign, since some of the new security features prevent or actively discourage you from doing certain things on the Facebook network. Let’s hope that everyone at Facebook has accepted that reduced traffic from safer users will almost certainly give the company higher value in the long term.

But do Facebook’s new security features go far enough? Let’s look them over.

* Partnership with Web of Trust (WOT)

WOT is a Finnish company whose business is based around community site ratings. You tell WOT if you think a site is bad; WOT advises you as you browse what other people have said about the sites you visit.

Community block lists aren’t a new idea – they’ve been used against both email-borne spam and dodgy websites for years – and they aren’t perfect. Here’s what I said about them at the VB2006 conference in Montreal:

[C]ommunity-based block lists can help, and it is suggested that they can be very responsive if the community is large and widespread. (If just one person in the entire world reports a [dodgy] site, everyone else can benefit from this knowledge.)

But the [cybercriminals] can react nimbly, too. For example, using a network of botnet-infected PCs, it would be a simple matter to 'report' that a slew of legitimate sites were bogus. Correcting errors of this sort could take the law-abiding parts of the community a long time, and render the block list unusable until it is sorted out. Alternatively, the community might need to make it tougher to get a [site] added to the list, to resist false positives. This would render the service less responsive.

Another problem with a block list based on “crowd wisdom” is that it can be difficult for sites which were hacked and then cleaned up to get taken off the list. Users will willingly report bad sites, but are rarely prepared to affirm good ones.
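The responsiveness-versus-false-positive trade-off described above can be sketched as a toy model; the threshold value and class name below are invented for illustration:

```python
from collections import defaultdict

# Toy model: a higher report threshold resists botnet-driven false
# reports but makes the block list less responsive.
REPORT_THRESHOLD = 3  # hypothetical tuning knob

class CommunityBlockList:
    def __init__(self, threshold=REPORT_THRESHOLD):
        self.threshold = threshold
        self.reports = defaultdict(set)  # site -> set of distinct reporters

    def report(self, site, reporter):
        self.reports[site].add(reporter)

    def is_blocked(self, site):
        return len(self.reports[site]) >= self.threshold

bl = CommunityBlockList()
bl.report("scam.example", "alice")
assert not bl.is_blocked("scam.example")  # one report is not enough
bl.report("scam.example", "bob")
bl.report("scam.example", "carol")
assert bl.is_blocked("scam.example")
```

Counting distinct reporters rather than raw reports blunts a single bot's influence, but a botnet with many identities defeats that too, which is exactly the attack the quoted passage warns about.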

False positives, in fact, have already been a problem for Facebook’s own bad-link detector, which is also mentioned in the announcement. Naked Security has had its own articles blocked on Facebook simply for mentioning the name of a scam site.

In short, the effectiveness, accuracy and coverage of the WOT partnership remains to be evaluated. But I approve of the deal. It’s a step forward by Facebook. However, Facebook’s own bad-link detector could do with improvement.

* Clickjacking protection

Facebook introduced some anti-clickjacking measures a while ago. It’s a good idea. If you’re trying to Like a page known to be associated with acquiring Likes through clickjacks, Facebook won’t blindly accept the click. You’ll have to re-confirm it.

Again, I approve of this. But in my opinion, it’s not going far enough. It would be much better if Facebook popped up a confirmation dialog every time you Liked something, so that the “blind Likes” triggered by clickjacking would neither work nor go unnoticed. (Indeed, this popup dialog would be a great place for users to report clickjacks to the WOT community block list!)

That’s not going to happen. Facebook wants Liking to be easy – really easy – as it helps to generate lots of traffic. A popup for every Like almost certainly wouldn’t get past Facebook’s business development managers. Not yet, at any rate. But if we all keep asking, perhaps they’ll see the value?

* Self-XSS

This is a geeky way of saying “Pasting JavaScript into your own address bar.”

We’ve already reported on the potential danger of doing this. When you put JavaScript in your address bar, you implicitly give it permission to run as if it were part of the page you just visited. That’s always a risky proposition. Facebook is adding protection against this behaviour.

Facebook also says it’s working with browser makers on this problem. That’s good.

Perhaps all browsers should simply disallow JavaScript in the address bar by default? It’s a useful feature, but the sort of user who might need it would surely be technically savvy enough to turn it on when needed.

* Login approvals

Facebook’s final announcement is what it describes as two factor authentication (2FA). Facebook will optionally send you an SMS every time someone logs in from “a new or unrecognised device”. (Facebook doesn’t say how it defines “new”, or how it recognises devices.)

This is a useful step, and will make stolen Facebook passwords harder to abuse. In the past, you would only see Facebook’s “login from new or unrecognised device” warning next time you used the site, by which time it might have been too late.

The new feature means that you’ll get warnings about unauthorised access attempts pushed to you. Furthermore, the crooks won’t be able to login because they won’t have the magic code in the SMS which is needed to proceed.

It’s a pity Facebook isn’t offering an option to require 2FA every time you log in. It would be even nicer if they added a token-based option (and they’d be welcome to charge a reasonable amount for the token) for the more security-conscious user.

A token would also allow users to enjoy the benefits of 2FA without sharing their mobile phone number with Facebook – something they might be unwilling to do after Facebook’s controversial flirtation, earlier this year, with letting app developers get at your address and phone number.

In summary

Where does this leave us?

Good work. I’m delighted that Facebook is getting more visibly involved in boosting the security of its users. But there’s still a long way to go.

In particular, this latest announcement doesn’t address any of the issues in Naked Security’s recent Open Letter to Facebook. Those issues represent more general problems which still need attention: Privacy by default, Vetted app developers, and HTTPS for everything.

(If you use Facebook and want to learn more about spam, malware, scams and other threats, you should join the Sophos Facebook page where we have a thriving community of over 80,000 people.)

Facebook’s two-factor authentication announcement raises questions

Amid mounting criticism of Facebook’s attitude to its users’ privacy and safety, the social network has announced that it is introducing a two-factor authentication system in an attempt to prevent unauthorised logins to accounts.

The idea is that if you log into your Facebook account from a computer or mobile device that Facebook doesn’t recognise as one that you have used before to access the website, then you’ll have to enter a code to confirm you are who you say you are.


I’m glad to see Facebook introduce what sounds like an additional layer of protection for users, at least for those users who choose to enable the option. Two-factor authentication doesn’t address the many other Facebook privacy and safety concerns that are troubling users, but it’s no bad thing.

Unfortunately the short mention of the feature on Facebook’s blog leaves some questions unanswered.

    1. How can users enable the option? My guess is that users will find the option, once it has been rolled out to their accounts, under Account / Account settings / Account security, but it would have been nice if Facebook had told people. None of the Facebook accounts I have checked so far appear to have received the option, so I cannot confirm.

    2. How often will the code change? It would be sensible if the code changed each time someone tries to access your Facebook account from an unknown computer, but Facebook doesn’t say in its blog post.

    3. How will users receive the code? Again, Facebook doesn’t say. But my guess is that Facebook will send you the code via an SMS message to your mobile phone. That means, of course, that you have to trust Facebook with your mobile phone number which privacy-conscious people may be understandably wary of doing.

    The one-time password system announced by Facebook last October also relied upon SMS messages – which raised some valid safety concerns.

So, it sounds like it may be a case of swings and roundabouts. A win for security and privacy on one hand is a loss on the other, as you have to trust Facebook with your phone number.

Remember, Facebook has been wanting your mobile phone number for some time and hasn’t been above using scare tactics to get you to hand it over.

I, for one, won’t be handing over my mobile phone number to Facebook in exchange for this two-factor authentication system.

I might, however, have considered signing up for a small hardware token that I could keep on my keychain, and rely upon it to produce a one-time code that can be entered at login alongside my username and password.

You may have seen such devices being offered by online banks and some of the major online games like World of Warcraft.

Of course, such authentication devices cost money and require infrastructure changes at the website’s end, but – hey! – if Facebook introduced something like that they could potentially charge a small amount of money for those users who want to take a stronger line on their privacy and online safety.

If you’re a member of Facebook don’t forget to join the Sophos Facebook page to stay up-to-date with the latest security news.

Update: Naked Security follower Neil Adam raises the valid point that you probably wouldn’t want a hardware authentication fob for every website you log into. If we did, we’d probably all have very lumpy trouser pockets.
