Category Archives: Security

iSniff-GPS – Passive Wifi Sniffing Tool With Location Data

iSniff GPS is a passive Wi-Fi sniffing tool that captures SSID probes, ARP and mDNS (Bonjour) packets broadcast by nearby iPhones, iPads and other wireless devices. The aim is to collect data that can be used to identify each device and determine previous geographical locations, based solely on information each device discloses...
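The core idea behind sniffing SSID probes — pulling the SSID information element out of an 802.11 probe-request frame's tagged parameters — can be sketched in plain Python. This is an illustration of the frame format only, not iSniff GPS's actual code, and the sample bytes are fabricated:

```python
def extract_ssid(tagged_params):
    """Walk 802.11 tagged parameters (element ID, length, value triples)
    and return the SSID (element ID 0), or None if absent."""
    i = 0
    while i + 2 <= len(tagged_params):
        elem_id = tagged_params[i]
        length = tagged_params[i + 1]
        value = tagged_params[i + 2 : i + 2 + length]
        if elem_id == 0:  # element ID 0 is the SSID
            return value.decode("utf-8", errors="replace")
        i += 2 + length  # skip to the next element
    return None

# Fabricated probe-request tagged parameters: an SSID element
# (id=0, len=8, "HomeWifi") followed by a supported-rates element (id=1).
params = bytes([0, 8]) + b"HomeWifi" + bytes([1, 4, 0x82, 0x84, 0x8B, 0x96])
print(extract_ssid(params))  # -> HomeWifi
```

Devices probing for remembered networks leak these SSIDs in the clear, which is what lets a passive listener map a device to places it has been.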

Read the full post at darknet.org.uk

Year of the RAT: China’s malware war on activists goes mobile

Activists involved in Hong Kong's "Umbrella Revolution" have been targeted by remote access malware for Android and iOS that can eavesdrop on their communications—and do a whole lot more.

Malware-based espionage targeting political activists and other opposition is nothing new, especially when it comes to opponents of the Chinese government. But there have been few attempts at hacking activists more widespread and sophisticated than the current wave of spyware targeting the mobile devices of members of Hong Kong’s “Umbrella Revolution.”

Over the past few days, activists and protesters in Hong Kong have been targeted by mobile device malware that gives an attacker the ability to monitor their communications. What’s unusual about the malware, which has been spread through mobile message “phishing” attacks, is that it has targeted and successfully infected both Android and iOS devices.

The sophistication of the malware has led experts to believe that it was developed and deployed by the Chinese government. But Chinese-speaking hackers have a long history of using this sort of malware, referred to as remote access Trojans (RATs), as have other hackers around the world for a variety of criminal activities aside from espionage. It’s not clear whether this is an actual state-funded attack on Chinese citizens in Hong Kong or merely hackers taking advantage of a huge social engineering opportunity to spread their malware. But whoever is behind it is well-funded and sophisticated.


Free Mobile Apps = Compromises On User Safety?

Free mobile apps may introduce security risks that need to be addressed. While businesses need to find ways to monetize when consumers are not ready to pay directly for an app, monetization mechanisms that involve the use of user data should be legal, secure and an informed choice. A bigger discussion follows.

80% of apps were free in 2011; 95% of apps are expected to be free by 2017

In the last few years, mobile apps have seen general downward pressure on pricing. A Flurry Analytics report on app pricing shows that while 80% of apps were free in 2011, the share of free apps had increased to 90% as of 2013. Even paid apps showed lower revenue per app—in 2011, 15% of paid apps were priced close to $0.99; by 2013 only 6% of apps held that price point as free apps increased. In a press release early this year, Gartner also confirmed this trend, saying that 95% of all apps (across all operating systems) would be free by 2017.

So how do app developers make money on their apps?

There are three specific trends:

  1. Freemium route with in-app purchases – This is a growing trend. App developers bifurcate their feature set between free and paid. The idea is to hook users with a free offering and then upsell users who want access to a richer feature set in the paid version. In some cases, individual app activities and enticements are available through in-app purchases.
  2. In-app advertisements – Many app developers embed various kinds of advertisements in their app through the use of ad-libraries. Every impression or click earns revenue for the app developer. There are many ad-libraries available, including one from Google.
  3. Sponsorships – This is only relevant for a very small group of app developers. In this case the entire cost of the app’s engineering and operations is covered by an outside sponsor. For example, Subway sponsored the ING New York City Marathon app.

However, we have seen some worrying trends:

  • Over-aggressive ad-libraries – Some of the ad-libraries that app developers normally use for monetization were found to be over-aggressive in collecting user details. A couple of these ad-libraries collected details of a user’s calendar, tracked their location, logged last-call details, and more, going well beyond the normal realm of an ad-library. There was also a one-off case of Yahoo! ad-libraries delivering potential scareware to consumers.
  • Willful encroachment of user privacy – Some apps have questionable privacy policies and sell user data to marketing companies without users’ explicit permission. Other apps, such as Path, deliberately uploaded users’ contact lists without explicit permission.
  • Embedding risky URLs – Between April and June 2014, McAfee analyzed approximately 733k apps. Almost 95k of those (12%) were found to contain at least one risky URL. While in a small number of cases this may have been willful insertion, it can largely be attributed to developer ignorance and a lack of strict quality controls in the app development process.
  • Weak implementation by app developers – Recently, Credit Karma and Fandango were charged by the FTC for exposing sensitive user data by failing to implement secure communications between the device and their servers. Specifically, they did not use SSL when transferring sensitive user data over the Internet.
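
The Credit Karma and Fandango failures came down to transmitting sensitive data without properly validated TLS. A minimal sketch of the safe default in Python, using only the standard library — `ssl.create_default_context()` turns on certificate verification and hostname checking out of the box:

```python
import ssl
import http.client

# create_default_context() enables the protections the charged apps
# effectively disabled: the server certificate chain must validate,
# and the certificate must match the hostname being contacted.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname

# Any connection built with this context refuses servers whose
# certificate or hostname does not validate. (Constructing the
# connection object does not open a socket yet.)
conn = http.client.HTTPSConnection("example.com", context=ctx)
```

The common bug is the opposite pattern — setting `verify_mode` to `CERT_NONE` or disabling hostname checks to silence certificate errors during development, then shipping that configuration.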

What can be done to address this situation?

Many of the action items clearly lie in the hands of app developers. While app monetization will increasingly rely on the alternate means documented earlier, a lack of focus on user privacy and safety will blow up on app developers who are not cautious (as happened with Path, Credit Karma and Fandango). App developers could consider the following four suggestions:

  1. Be extremely cautious of ad-libraries with past incidents – Look for past privacy violations by any ad-library you are considering integrating into your app. Remember that an ad-library may not improve your monetization, but a single bad ad-library may destroy your reputation or get you into legal trouble. Always read through the privacy policies of ad-libraries to understand how they plan to use user data.
  2. Implement the three principles of safe privacy – Inform, consent and control. Always inform users about what you plan to do with their data, for example by encouraging them to read your app’s privacy policy. Get explicit consent from users for the use of their personal data, and allow users to control the information submitted through your app.
  3. Check URL reputation before adding a URL to your app – Embedding public-facing URLs without validating their security status may put users at risk. An app developer can use McAfee’s free URL verification service to validate a web link before using it in an app.
  4. Follow privacy-aware development practices – An app developer should be aware of secure coding practices and ensure that privacy needs are met. An excellent book written by McAfee privacy experts can be used for reference: http://www.amazon.com/The-Privacy-Engineers-Manifesto-Getting/dp/1430263555.
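
The inform/consent/control principles in point 2 reduce to a simple rule: no personal data leaves the device without an explicit, revocable grant. A minimal, hypothetical sketch (the class and function names are ours, not from any real SDK):

```python
class ConsentRegistry:
    """Track explicit, revocable user consent per data category."""

    def __init__(self):
        self._grants = set()

    def grant(self, category):
        """Record that the user explicitly opted in ("consent")."""
        self._grants.add(category)

    def revoke(self, category):
        """Let the user withdraw consent at any time ("control")."""
        self._grants.discard(category)

    def allowed(self, category):
        return category in self._grants

def upload(registry, category, payload):
    """Refuse to transmit any data category without a recorded grant."""
    if not registry.allowed(category):
        raise PermissionError(f"no consent recorded for {category!r}")
    return ("sent", category, payload)

reg = ConsentRegistry()
reg.grant("contacts")
print(upload(reg, "contacts", ["alice"]))  # ('sent', 'contacts', ['alice'])
reg.revoke("contacts")
# upload(reg, "contacts", ["alice"]) would now raise PermissionError
```

Making the transmission path itself check consent — rather than trusting each feature to remember — is what would have prevented a Path-style bulk contact upload.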

The post Free Mobile Apps = Compromises On User Safety? appeared first on McAfee.

Silk Road Lawyers Poke Holes in FBI’s Story

New court documents released this week by the U.S. government in its case against the alleged ringleader of the Silk Road online black market and drug bazaar suggest that the feds may have some ‘splaining to do.

The login prompt and CAPTCHA from the Silk Road home page.

Prior to its disconnection last year, the Silk Road was reachable only via Tor, software that protects users’ anonymity by bouncing their traffic between different servers and encrypting the traffic at every step of the way. Tor also lets anyone run a Web server without revealing the server’s true Internet address to the site’s users, and this was the very technology the Silk Road used to obscure its location.

Last month, the U.S. government released court records claiming that FBI investigators were able to divine the location of the hidden Silk Road servers because the community’s login page employed an anti-abuse CAPTCHA service that pulled content from the open Internet — thus leaking the site’s true Internet address.

But lawyers for alleged Silk Road captain Ross W. Ulbricht (a.k.a. the “Dread Pirate Roberts”) asked the court to compel prosecutors to prove their version of events. And indeed, discovery documents reluctantly released by the government this week appear to poke serious holes in the FBI’s story.

For starters, the defense asked the government for the name of the software that FBI agents used to record evidence of the CAPTCHA traffic that allegedly leaked from the Silk Road servers. The government essentially responded (PDF) that it could not comply with that request because the FBI maintained no records of its own access, meaning that the only record of their activity is in the logs of the seized Silk Road servers.

The response that holds perhaps the most potential to damage the government’s claim comes in the form of a configuration file (PDF) taken from the seized servers. Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and the University of California, Berkeley, explains the potential significance:

“The IP address listed in that file — 62.75.246.20 — was the front-end server for the Silk Road,” Weaver said. “Apparently, Ulbricht had this split architecture, where the initial communication through Tor went to the front-end server, which in turn just did a normal fetch to the back-end server. It’s not clear why he set it up this way, but the document the government released in 70-6.pdf shows the rules for serving the Silk Road Web pages, and those rules are that all content – including the login CAPTCHA – gets served to the front end server but to nobody else. This suggests that the Web service specifically refuses all connections except from the local host and the front-end Web server.”

Translation: Those rules mean that the Silk Road server would deny any request from the Internet that wasn’t coming from the front-end server, and that includes the CAPTCHA.
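
The access rules Weaver describes — serve everything, including the CAPTCHA, only to the local host and the front-end server — would look roughly like this in nginx. The documents do not say which web server software Silk Road actually ran, so this is an illustration of the refusal behavior, not the real configuration; the IP is the front-end address from the released file:

```nginx
server {
    listen 80;

    # Only the local host and the Tor-facing front-end server
    # (62.75.246.20) may fetch any content, CAPTCHA included.
    allow 127.0.0.1;
    allow 62.75.246.20;
    deny  all;
}
```

Under rules like these, a request from an arbitrary FBI machine on the open Internet would get a 403 refusal, never the CAPTCHA — which is the crux of Weaver's objection.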

“This configuration file was last modified on June 6, so on June 11 — when the FBI said they [saw this leaky CAPTCHA] activity — the FBI could not have seen the CAPTCHA by connecting to the server while not using Tor,” Weaver said. “You simply would not have been able to get the CAPTCHA that way, because the server would refuse all requests.”

The FBI claims that it found the Silk Road server by examining plain text Internet traffic to and from the Silk Road CAPTCHA, and that it visited the address using a regular browser and received the CAPTCHA page. But Weaver says the traffic logs from the Silk Road server (PDF) that also were released by the government this week tell a different story.

“The server logs which the FBI provides as evidence show that, no, what happened is the FBI didn’t see a leakage coming from that IP,” he said. “What happened is they contacted that IP directly and got a PHPMyAdmin configuration page.” See this PDF file for a look at that PHPMyAdmin page. Here is the PHPMyAdmin server configuration.

But this is hardly a satisfying answer to how the FBI investigators located the Silk Road servers. After all, if the FBI investigators contacted the PHPMyAdmin page directly, how did they know to do that in the first place?

“That’s still the $64,000 question,” Weaver said. “So both the CAPTCHA couldn’t leak in that configuration, and the IP the government visited wasn’t providing the CAPTCHA, but instead a PHPMyAdmin interface. Thus, the leaky CAPTCHA story is full of holes.”

Many in the Internet community have officially called baloney [that's a technical term] on the government’s claims, and these latest apparently contradictory revelations from the government are likely to fuel speculation that the government is trying to explain away some not-so-by-the-book investigative methods.

“I find it surprising that when given the chance to provide a cogent, on-the-record explanation for how they discovered the server, they instead produced a statement that has been shown inconsistent with reality, and that they knew would be inconsistent with reality,” Weaver said. “Let me tell you, those tin foil hats are looking more and more fashionable each day.”

Hacked Snapchat accounts use native chat feature to spread diet pill spam

Compromised Snapchat accounts have sent out photo messages of a box of Garcinia Cambogia, followed by a chat message with a suspicious link containing ‘groupon.com’ in the URL.

Copyright © 2014. Powered by WordPress & Romangie Theme.