Category Archives: university of california

Treasury Dept: Tor a Big Source of Bank Fraud

A new report from the U.S. Treasury Department found that a majority of bank account takeovers by cyberthieves over the past decade might have been thwarted had affected institutions known to look for and block transactions coming through Tor, a global communications network that helps users maintain anonymity by obfuscating their true location online.

The findings come in a non-public report obtained by KrebsOnSecurity that was produced by the Financial Crimes Enforcement Network (FinCEN), a Treasury Department bureau responsible for collecting and analyzing data about financial transactions to combat domestic and international money laundering, terrorist financing and other financial crimes.

In the report, released on Dec. 2, 2014, FinCEN said it examined some 6,048 suspicious activity reports (SARs) filed by banks between August 2001 and July 2014, searching the reports for those involving one of more than 6,000 known Tor network nodes. Investigators found 975 hits corresponding to reports totaling nearly $24 million in likely fraudulent activity.
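The core of FinCEN's analysis is a simple cross-reference: match the IP addresses recorded in SAR filings against a list of known Tor nodes. A minimal sketch of that kind of matching might look like the following; the record format, file layout and addresses here are invented for illustration, not taken from the report.

```python
# Sketch of cross-referencing suspicious-activity IPs against a list of
# known Tor nodes, in the spirit of the FinCEN analysis. File names,
# record formats and addresses are hypothetical.

def load_ips(path):
    """Read one IP address per line, ignoring blanks and comments."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

def match_sars_to_tor(sar_records, tor_nodes):
    """Return the SAR records whose source IP is a known Tor node."""
    return [rec for rec in sar_records if rec["ip"] in tor_nodes]

tor_nodes = {"198.51.100.7", "203.0.113.42"}          # stand-in node list
sars = [
    {"id": "SAR-001", "ip": "198.51.100.7", "amount": 12500},
    {"id": "SAR-002", "ip": "192.0.2.10",   "amount": 900},
]
hits = match_sars_to_tor(sars, tor_nodes)
print(len(hits), sum(r["amount"] for r in hits))  # 1 12500
```

The real exercise is the same set-membership test run over 6,048 filings against more than 6,000 node addresses.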

“Analysis of these documents found that few filers were aware of the connection to Tor, that the bulk of these filings were related to cybercrime, and that Tor related filings were rapidly rising,” the report concluded. “Our BSA [Bank Secrecy Act] analysis of 6,048 IP addresses associated with the Tor darknet [link added] found that in the majority of the SAR filings, the underlying suspicious activity — most frequently account takeovers — might have been prevented if the filing institution had been aware that their network was being accessed via Tor IP addresses.”

Tables from the FinCEN report.

FinCEN said it was clear from the SAR filings that most financial institutions were unaware that the IP address where the suspected fraudulent activity occurred was in fact a Tor node.

“Our analysis of the type of suspicious activity indicates that a majority of the SARs were filed for account takeover or identity theft,” the report noted. “In addition, analysis of the SARs filed with the designation ‘Other’ revealed that most were filed for ‘Account Takeover,’ and at least five additional SARs were filed incorrectly and should have been ‘Account Takeover.'”

The government also notes that there has been a fairly recent and rapid rise in the number of SAR filings over the last year involving bank fraud tied to Tor nodes.

“From October 2007 to March 2013, filings increased by 50 percent,” the report observed. “During the most recent period — March 1, 2013 to July 11, 2014 — filings rose 100 percent.”

While banks may be able to detect and block more fraudulent transactions by paying closer attention to or outright barring traffic from Tor nodes, such an approach is unlikely to have a lasting impact on fraud, said Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley.

“I’m not surprised by this: Tor is easy for bad actors to use to isolate their identity,” Weaver said. “Yet blocking all Tor will do little good, because there are many other easy ways for attackers to hide their source address.”
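For institutions that do want visibility into this traffic, the screening itself is not the hard part; the Tor Project publishes lists of exit-node addresses that could feed such a check. Here is a minimal sketch that flags, rather than silently blocks, logins arriving from a supplied exit list; the class, addresses and risk labels are invented for illustration.

```python
# Minimal sketch of the kind of screening a bank could apply at login
# time: flag sessions whose source IP appears on a current list of Tor
# exit nodes. The addresses and labels here are illustrative only.

class TorScreen:
    def __init__(self, exit_nodes):
        self.exit_nodes = set(exit_nodes)   # refreshed periodically in practice

    def assess(self, ip):
        """Return a risk label for a login from the given source IP."""
        return "review" if ip in self.exit_nodes else "allow"

screen = TorScreen(["198.51.100.7"])
print(screen.assess("198.51.100.7"))  # review
print(screen.assess("192.0.2.10"))    # allow
```

Flagging for manual review rather than refusing the connection outright is one way to get the fraud signal without the collateral damage to legitimate Tor users that Weaver warns about.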

Earlier this summer, the folks who maintain the Tor Project identified this problem — that many sites and even ISPs are increasingly blocking Tor traffic because of its abuse by fraudsters — as an existential threat to the anonymity network. The organization used this trend as a rallying cry for Tor users to consider lending their brainpower to help the network thrive in spite of these threats.

“A growing number of websites treat users from anonymity services differently. Slashdot doesn’t let you post comments over Tor, Wikipedia won’t let you edit over Tor, and Google sometimes gives you a captcha when you try to search (depending on what other activity they’ve seen from that exit relay lately),” wrote Tor Project Leader Roger Dingledine. “Some sites like Yelp go further and refuse to even serve pages to Tor users.”

Dingledine continued:

“The result is that the Internet as we know it is siloing. Each website operator works by itself to figure out how to handle anonymous users, and generally neither side is happy with the solution. The problem isn’t limited to just Tor users, since these websites face basically the same issue with users from open proxies, users from AOL, users from Africa, etc.”

Weaver said the problem of high volumes of fraudulent activity coming through the Tor Network presents something of a no-win situation for any website dealing with Tor users.

“If you treat Tor as hostile, you cause collateral damage to real users, while the scum use many easy workarounds.  If you treat Tor as benign, the scum come flowing through,” Weaver said. “For some sites, such as Wikipedia, there is perhaps a middle ground. But for banks? That’s another story.”

Silk Road Lawyers Poke Holes in FBI’s Story

New court documents released this week by the U.S. government in its case against the alleged ringleader of the Silk Road online black market and drug bazaar suggest that the feds may have some ‘splaining to do.

The login prompt and CAPTCHA from the Silk Road home page.

Prior to its disconnection last year, the Silk Road was reachable only via Tor, software that protects users’ anonymity by bouncing their traffic between different servers and encrypting the traffic at every step of the way. Tor also lets anyone run a Web server without revealing the server’s true Internet address to the site’s users, and this was the very technology that the Silk Road used to obscure its location.

Last month, the U.S. government released court records claiming that FBI investigators were able to divine the location of the hidden Silk Road servers because the community’s login page employed an anti-abuse CAPTCHA service that pulled content from the open Internet — thus leaking the site’s true Internet address.

But lawyers for alleged Silk Road captain Ross W. Ulbricht (a.k.a. the “Dread Pirate Roberts”) asked the court to compel prosecutors to prove their version of events.  And indeed, discovery documents reluctantly released by the government this week appear to poke serious holes in the FBI’s story.

For starters, the defense asked the government for the name of the software that FBI agents used to record evidence of the CAPTCHA traffic that allegedly leaked from the Silk Road servers. The government essentially responded (PDF) that it could not comply with that request because the FBI maintained no records of its own access, meaning that the only record of their activity is in the logs of the seized Silk Road servers.

The response that holds perhaps the most potential to damage the government’s claim comes in the form of a configuration file (PDF) taken from the seized servers. Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley, explains the potential significance:

“The IP address listed in that file — 62.75.246.20 — was the front-end server for the Silk Road,” Weaver said. “Apparently, Ulbricht had this split architecture, where the initial communication through Tor went to the front-end server, which in turn just did a normal fetch to the back-end server. It’s not clear why he set it up this way, but the document the government released in 70-6.pdf shows the rules for serving the Silk Road Web pages, and those rules are that all content – including the login CAPTCHA – gets served to the front end server but to nobody else. This suggests that the Web service specifically refuses all connections except from the local host and the front-end Web server.”

Translation: Those rules mean that the Silk Road server would deny any request from the Internet that wasn’t coming from the front-end server, and that includes the CAPTCHA.
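In other words, the rule Weaver describes is an IP allow-list on the back end. A toy sketch of that behavior follows; the front-end address comes from the released configuration file, but the code itself is a simplification for illustration, not the server's actual configuration.

```python
# Sketch of the access rule Weaver describes: the back end serves the
# Silk Road content (including the login CAPTCHA) only to the local
# host and the front-end server, refusing everyone else. The front-end
# IP is from the released file; the logic is a simplification.

ALLOWED_SOURCES = {"127.0.0.1", "62.75.246.20"}  # localhost + front end

def serve(request_ip):
    """Return an (HTTP status, body) pair for a request from request_ip."""
    if request_ip not in ALLOWED_SOURCES:
        return 403, "Forbidden"
    return 200, "login page with CAPTCHA"

print(serve("62.75.246.20"))  # front end gets the page
print(serve("8.8.8.8"))       # everyone else is refused
```

Under a rule like this, a non-Tor request from an FBI workstation would get the refusal, never the CAPTCHA, which is precisely the hole the defense is pointing at.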

“This configuration file was last modified on June 6, so on June 11 — when the FBI said they [saw this leaky CAPTCHA] activity — the FBI could not have seen the CAPTCHA by connecting to the server while not using Tor,” Weaver said. “You simply would not have been able to get the CAPTCHA that way, because the server would refuse all requests.”

The FBI claims that it found the Silk Road server by examining plain text Internet traffic to and from the Silk Road CAPTCHA, and that it visited the address using a regular browser and received the CAPTCHA page. But Weaver says the traffic logs from the Silk Road server (PDF) that also were released by the government this week tell a different story.

“The server logs which the FBI provides as evidence show that, no, what happened is the FBI didn’t see a leakage coming from that IP,” he said. “What happened is they contacted that IP directly and got a PHPMyAdmin configuration page.” See this PDF file for a look at that PHPMyAdmin page. Here is the PHPMyAdmin server configuration.

But this is hardly a satisfying answer to how the FBI investigators located the Silk Road servers. After all, if the FBI investigators contacted the PHPMyAdmin page directly, how did they know to do that in the first place?

“That’s still the $64,000 question,” Weaver said. “So both the CAPTCHA couldn’t leak in that configuration, and the IP the government visited wasn’t providing the CAPTCHA, but instead a PHPMyAdmin interface. Thus, the leaky CAPTCHA story is full of holes.”

Many in the Internet community have officially called baloney [that's a technical term] on the government’s claims, and these latest apparently contradictory revelations from the government are likely to fuel speculation that the government is trying to explain away some not-so-by-the-book investigative methods.

“I find it surprising that when given the chance to provide a cogent, on-the-record explanation for how they discovered the server, they instead produced a statement that has been shown inconsistent with reality, and that they knew would be inconsistent with reality,” Weaver said. “Let me tell you, those tin foil hats are looking more and more fashionable each day.”

Buying Battles in the War on Twitter Spam

The success of social networking community Twitter has given rise to an entire shadow economy that peddles dummy Twitter accounts by the thousands, primarily to spammers, scammers and malware purveyors. But new research on identifying bogus accounts has helped Twitter to drastically deplete the stockpile of existing accounts for sale, and holds the promise of driving up costs for both vendors of these shady services and their customers.

Image: Twitterbot.info

Twitter prohibits the sale and auto-creation of accounts, and the company routinely suspends accounts created in violation of that policy. But according to researchers from George Mason University, the International Computer Science Institute and the University of California, Berkeley, Twitter traditionally has done so only after these fraudulent accounts have been used to spam and attack legitimate Twitter users.

Seeking more reliable methods of detecting auto-created accounts before they can be used for abuse, the researchers approached Twitter last year for the company’s blessing to purchase credentials from a variety of Twitter account merchants. Permission granted, the researchers spent more than $5,000 over ten months buying accounts from at least 27 different underground sellers.

In a report to be presented at the USENIX security conference in Washington, D.C. today, the research team details its experience in purchasing more than 121,000 fraudulent Twitter accounts of varying age and quality, at prices ranging from $10 to $200 per one thousand accounts.

The research team quickly discovered that nearly all fraudulent Twitter account merchants employ a range of countermeasures to evade the technical hurdles that Twitter erects to stymie the automated creation of new accounts.

“Our findings show that merchants thoroughly understand Twitter’s existing defenses against automated registration, and as a result can generate thousands of accounts with little disruption in availability or instability in pricing,” the paper reads. “We determine that merchants can provide thousands of accounts within 24 hours at a price of $0.02 – $0.10 per account.”

SPENDING MONEY TO MAKE MONEY

For example, to fulfill orders for fraudulent Twitter accounts, merchants typically pay third-party services to help solve those squiggly-letter CAPTCHA challenges. I’ve written here and here about these virtual sweatshops, which rely on low-paid workers in China, India and Eastern Europe who earn pennies per hour deciphering the puzzles.

The Twitter account sellers also must verify new accounts with unique email addresses, and they tend to rely on services that sell cheap, auto-created inboxes at Hotmail, Yahoo and Mail.ru, the researchers found. “The failure of email confirmation as a barrier directly stems from pervasive account abuse tied to web mail providers,” the team wrote. “60 percent of the accounts were created with Hotmail, followed by yahoo.com and mail.ru.”

Bulk-created accounts at these Webmail providers are among the cheapest of the free email providers, probably because they lack additional account creation verification mechanisms required by competitors like Google, which relies on phone verification. Compare the prices at this bulk email merchant: 1,000 Yahoo accounts can be had for $10 (1 cent per account), and the same number of Hotmail accounts go for $12. In contrast, it costs $200 to buy 1,000 Gmail accounts.

Finally, the researchers discovered that Twitter account merchants very often spread their new account registrations across thousands of Internet addresses to avoid Twitter’s IP address blacklisting and throttling. They concluded that some of the larger account sellers have access to large botnets of hacked PCs that can be used as proxies during the registration process.

“Our analysis leads us to believe that account merchants either own or rent access to thousands of compromised hosts to evade IP defenses,” the researchers wrote.
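For context, the per-IP throttling that this proxy rotation is designed to defeat can be as simple as a sliding-window counter. The sketch below is hypothetical; Twitter's actual limits and mechanism are not public.

```python
# A minimal sliding-window signup throttle of the kind the merchants'
# proxy rotation evades. The limit and window values are invented.

from collections import defaultdict, deque

class SignupThrottle:
    def __init__(self, limit=3, window_secs=3600):
        self.limit = limit
        self.window = window_secs
        self.seen = defaultdict(deque)   # ip -> timestamps of recent signups

    def allow(self, ip, now):
        q = self.seen[ip]
        while q and now - q[0] > self.window:
            q.popleft()                  # drop events outside the window
        if len(q) >= self.limit:
            return False                 # too many recent signups from this ip
        q.append(now)
        return True

t = SignupThrottle(limit=2, window_secs=60)
print(t.allow("203.0.113.5", 0))    # True
print(t.allow("203.0.113.5", 10))   # True
print(t.allow("203.0.113.5", 20))   # False: third signup inside the window
print(t.allow("203.0.113.5", 90))   # True: earlier events have expired
```

Spreading registrations across thousands of compromised hosts keeps each individual address under any such threshold, which is exactly why botnet access is valuable to the merchants.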

Damon McCoy, an assistant professor of computer science at GMU and one of the authors of the study, said the top sources of the proxy IP addresses were computers in developing countries like India, Ukraine, Thailand, Mexico and Vietnam. “These are countries where the price to buy installs [installations of malware that turns PCs into bots] is relatively low,” McCoy said.

PAYPAL, DOUBLE-DIPPING AND STOCKPILING

The researchers paid for most of the accounts using PayPal, which means that most Twitter account sellers accept credit cards. They also found that freelance merchants selling accounts via Fiverr.com and other sellers not associated with a static Web site were the most likely to resell credentials to other buyers — essentially trying to sell the same accounts to multiple buyers. This was possible because the researchers made the decision not to change the passwords of the accounts they purchased.

One of 27 merchants the researchers studied who were selling mass-registered Twitter accounts.

They found that bulk-created Twitter accounts sold via Fiverr merchants were also among the shortest lived: 57 percent of the accounts purchased from Fiverr sellers were cancelled during the time of their analysis. In contrast, Web storefronts like buyaccs[dot]com (pictured at left) had only five percent of their purchased accounts eventually detected as fraudulent.

Turns out, most of the Twitter account merchants stockpile huge quantities of accounts in advance of their sale; the researchers determined that the average age of accounts for sale was about 30 days, while some sellers routinely marketed accounts that were more than a year old. For these latter merchants, “pre-aged” accounts appeared to be a proud selling point, although the researchers said they found little correlation between the age of an account and its ability to outlive others after purchase.

THE TAKEDOWN

Twitter did not respond to multiple requests for comment. One of the researchers named in the paper — Berkeley grad student Kurt Thomas — was a Twitter employee at the time of the study; he also deferred comment to Twitter. But the other researchers say they had full cooperation from Twitter to test the efficacy of their merchant profiling techniques. They focused on building unique signatures that could be used to identify accounts registered by each of the 27 merchants they studied, based on qualities such as browser user agent strings, submission timing, signup flow and similarly-named accounts.
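A toy illustration of the idea behind those per-merchant signatures follows: each signature combines a browser user-agent string, a username naming pattern, and signup timing, and a registration is attributed to a merchant when it matches on all of them. The specific features, patterns and thresholds here are invented; the paper's actual signatures are richer.

```python
# Toy per-merchant registration signatures in the spirit of the study:
# user-agent, naming pattern, and signup timing. All values invented.

import re

MERCHANT_SIGS = {
    "merchant_A": {
        "user_agent": "Mozilla/4.0 (compatible; MSIE 6.0)",
        "name_pattern": re.compile(r"^[a-z]{5}\d{4}$"),  # e.g. karen1982
        "max_signup_secs": 2.0,  # automated flows finish implausibly fast
    },
}

def matches(signup, sig):
    """True if a registration fits every feature of a merchant signature."""
    return (signup["user_agent"] == sig["user_agent"]
            and sig["name_pattern"].match(signup["username"]) is not None
            and signup["form_secs"] <= sig["max_signup_secs"])

signup = {"user_agent": "Mozilla/4.0 (compatible; MSIE 6.0)",
          "username": "karen1982", "form_secs": 0.4}
hits = [m for m, sig in MERCHANT_SIGS.items() if matches(signup, sig)]
print(hits)  # ['merchant_A']
```

Requiring several independent features to line up at once is what keeps such signatures from sweeping in legitimate users, and it is also why small changes by the merchants can break them, as the researchers later observed.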

Vern Paxson, a professor of computer sciences at UC Berkeley and a key researcher at the International Computer Science Institute, said that in cooperation with Twitter the group analyzed the total fraction of all suspended accounts that appeared to originate from the 27 merchants they tracked. They found that at its peak, the underground marketplace was responsible for registering 60% of all accounts that would go on to be suspended for spamming. During more typical periods of activity, the merchants they tracked contributed 10–20% of all spam accounts.

Following Twitter's mass suspension of accounts, buyaccs.com alerts customers that it is "temporarily not selling twitter accounts."

Paxson said that when Twitter went back and applied the group’s merchant signatures to all of the Twitter accounts registered during the ten months of the study, they were able to disable 95 percent of all fraudulent accounts registered by those 27 merchants, including those previously sold but not yet suspended for spamming. Only .08 percent of those accounts that were cancelled asked to be unsuspended, and the researchers believe that 93 percent of those requests were performed by fraudulent accounts abusing the unsuspend process.

Immediately after Twitter suspended the accounts, the researchers placed 16 new orders for accounts from the 10 sellers with the largest stockpiles; of the 14,067 accounts they purchased, 90 percent were dead on arrival due to Twitter’s previous intervention.

“There was a fair amount of confusion on the [black hat hacker] forums about what Twitter was doing,” Paxson said. When the researchers requested working replacements, one of the merchants responded: “All of the stock got suspended….Not just mine…..It happened with all of the sellers….Don’t know what twitter has done….”

Within a few weeks, however, the bigger merchants were back in business, and the templates the researchers built to detect accounts registered by the various merchants began to show their age: Of the 6,879 accounts they purchased two weeks after Twitter’s intervention, only 54 percent were suspended on arrival.

Nevertheless, Paxson said Twitter is actively working to integrate their techniques into its real-time detection framework to help prevent abuse at signups. The trick, he says, is finding a sustainable way to fine-tune their merchant signatures going forward and continue to stay ahead in the arms race.

“We would love to keep doing this, but the hard part is you kind of have to keep doing the buys, and that’s a lot of work,” Paxson said. “The signatures we have created so far are definitely useful to them and they’ve gotten a lot of traction out of it already in actively suspending accounts that match those signatures. But as soon as the account merchants get wise, they change things slightly and our signatures no longer match.”

As such, the paper concludes that a long term disruption of the fraudulent Twitter account marketplace requires both increasing the cost of account registration — perhaps through additional hurdles such as phone verification – and integrating at-signup time abuse classification into the account registration process.

For more on this research, see: Trafficking Fraudulent Accounts: The Role of the Underground Market in Twitter Spam and Abuse (PDF).

Android keylogging with no access to keystrokes?

July and August – summer in the Northern Hemisphere, especially in Nevada and California – often produce some interesting and unusual computer security research.

This is when the media-meets-hacker-in-Vegas two-ring circus of Black Hat and DEFCON (BH/D) takes place.

It’s also when you can attend the academically-styled, but in many ways much groovier, USENIX Security Symposium.

(BH/D probably has way more beards per capita than USENIX events, but USENIX beards are the ones to watch. BH/D is to beards as the United States is to Olympic medals in athletics. USENIX is Usain Bolt.)

We’ve already reported on various intriguing work presented at BH/D. There was Charlie Miller hacking Macbook batteries, and Jay Radcliffe attacking insulin pumps.

Artem Dinaburg took a somewhat chasmic leap of faith to suggest that DRAM errors might be exploited by typosquatters, whilst the gloriously-named triumvirate of Markus, Mlodzianowski and Rowley had fun with juicejacking.

Because it was August when we wrote about these, a handful of our readers complained that both the research and our reporting were nothing more than ‘silly season’ trivia. Maybe.

But if I might mix a metaphor for a moment, it’s only silly season stuff until someone loses an eye.

With that in mind, here’s an interesting paper from the USENIX HotSec ’11 workshop, by Liang Cai and Hao Chen from the University of California, Davis.

Talk titles are another aspect in which USENIX events outshine their Vegas cousins, since they tend to be written for the reader rather than for the media, like this one: TouchLogger: Inferring Keystrokes on Touch Screen from Smartphone Motion.

I won’t do more than give the briefest of summaries here – if you want to do it justice, you can read or even watch the whole paper for yourself.

Simply explained, the authors decided to see if they could guess what you’d typed on your mobile phone by looking only at the data stream from the motion sensors as you pecked at the on-screen keys.

The experiment was rather limited, using a dedicated, full-screen application with a numeric keypad. This allowed the researchers to record what keys you’d actually typed, as well as how you’d typed them.

The results were satisfactory, but far from excellent: the average accuracy was just 70%. One key – the one, as it happens – was correctly diagnosed 80% of the time. But the seven was misidentified almost half the time.
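To make the premise concrete, here is a deliberately simplified illustration of the TouchLogger idea: classify a tap by comparing its motion-sensor reading against the average reading previously recorded for each key. The paper's actual features (device angles) and classifier are more sophisticated; the numbers below are invented.

```python
# Simplified nearest-centroid illustration of inferring a tapped key
# from motion-sensor readings. Feature values are invented examples.

def centroid(samples):
    """Mean (pitch, roll) of the training taps recorded for one key."""
    n = len(samples)
    return (sum(s[0] for s in samples) / n, sum(s[1] for s in samples) / n)

def guess_key(reading, training):
    """Return the key whose training centroid is nearest to this reading."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    centroids = {key: centroid(taps) for key, taps in training.items()}
    return min(centroids, key=lambda k: dist2(reading, centroids[k]))

training = {                       # (pitch, roll) per recorded tap
    "1": [(0.10, -0.30), (0.12, -0.28)],
    "7": [(0.45, 0.20), (0.43, 0.22)],
}
print(guess_key((0.11, -0.29), training))  # 1
print(guess_key((0.44, 0.21), training))   # 7
```

The point of the exercise is that none of this requires keyboard access: the accelerometer stream alone, which most platforms hand out freely, carries enough signal to make educated guesses.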

So I don’t expect to see this technique used by cybercrooks any time soon, if at all. But the research is nevertheless not just ‘silly season’ stuff.

One of the goals of the authors was to give us a clear and practical security reminder: operating system data which, during design, seems to be of low sensitivity and of little value to an attacker, may turn out to be no such thing.

In particular, the authors point out that most smartphone operating systems deliberately prevent applications from reading from the keyboard unless they are active, visible and have focus. This is a sensible security precaution. But the paper reminds us that we can’t simply assume that this is enough, on its own, to prevent a background keylogger of the sort we’re used to on operating systems such as Windows or Linux.

In short, this sort of ‘silly season’ research is not silly at all.

By making regular attempts to expect the unexpected, research like this helps stop us getting bogged down in a rut of security assumptions.

And the researchers get to have some fun with computer science at the same time.


