Georgia’s lax voting security exposed just in time for crucial special election

(credit: Verified Voting)

To understand why many computer scientists and voting rights advocates don't trust the security of many US election systems, consider the experience of Georgia-based researcher Logan Lamb. Last August, after the FBI reported hackers were probing voter registration systems in more than a dozen states, Lamb decided to assess the security of voting systems in his state.

According to a detailed report published Tuesday in Politico, Lamb wrote a simple script that would pull documents off the website of Kennesaw State University’s Center for Election Systems, which, under contract with Georgia, tests and programs voting machines for the entire state. By accident, Lamb's script uncovered a breach whose scope should concern Republicans and Democrats alike. Reporter Kim Zetter writes:

Within the mother lode Lamb found on the center’s website was a database containing registration records for the state’s 6.7 million voters; multiple PDFs with instructions and passwords for election workers to sign in to a central server on Election Day; and software files for the state’s ExpressPoll pollbooks — electronic devices used by poll workers to verify that a voter is registered before allowing them to cast a ballot. There also appeared to be databases for the so-called GEMS servers. These Global Election Management Systems are used to prepare paper and electronic ballots, tabulate votes and produce summaries of vote totals.

The files were supposed to be behind a password-protected firewall, but the center had misconfigured its server so they were accessible to anyone, according to Lamb. “You could just go to the root of where they were hosting all the files and just download everything without logging in,” Lamb says.

And there was another problem: The site was also using a years-old version of Drupal — content management software — that had a critical software vulnerability long known to security researchers. “Drupageddon,” as researchers dubbed the vulnerability, got a lot of attention when it was first revealed in 2014. It would let attackers easily seize control of any site that used the software. A patch to fix the hole had been available for two years, but the center hadn’t bothered to update the software, even though it was widely known in the security community that hackers had created automated scripts to attack the vulnerability back in 2014.

Lamb was concerned that hackers might already have penetrated the center’s site, a scenario that wasn’t improbable given news reports of intruders probing voter registration systems and election websites; if they had breached the center’s network, they could potentially have planted malware on the server to infect the computers of county election workers who accessed it, thereby giving attackers a backdoor into election offices throughout the state; or they could possibly have altered software files the center distributed to Georgia counties prior to the presidential election, depending on where those files were kept.

Lamb privately reported the breach to university officials, the report notes. But he learned this March that the critical Drupal vulnerability had been fixed only on the HTTPS version of the site. What's more, the same mother lode of sensitive documents remained exposed. The findings suggested that the center had been operating for years outside the security oversight of both the university and the Georgia Secretary of State's office.


How to avoid a disastrous recovery

Every chief information officer worries about the health and resiliency of their data center and about ensuring the continuity of the business in the event of a disaster. Many go as far as to hold periodic tests to discover and mitigate vulnerabilities.

Netflix has gone even further with its Simian Army, a set of tools that randomly injects failures into its production environment to test its resiliency against all manner of outages. And though cloud computing has provided a wealth of options for ensuring business continuity in the event of natural or manmade interruptions, disaster recovery (DR) is your last line of defense when every business continuity procedure and plan fails.
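As a rough sketch of that chaos-testing idea (not Netflix's actual tooling), the Python snippet below randomly terminates instances from a hypothetical fleet so a team can confirm the service keeps running without them:

```python
import random

# Hypothetical inventory of redundant instances behind a single service.
instances = ["app-01", "app-02", "app-03", "app-04"]


def terminate(instance: str) -> None:
    """Placeholder for a real termination call (cloud API, orchestrator, etc.)."""
    print(f"Terminating {instance} to verify the service survives its loss")


def chaos_round(fleet: list[str], kill_fraction: float = 0.25) -> list[str]:
    """Randomly remove a fraction of the fleet, mimicking a surprise failure."""
    victims = random.sample(fleet, max(1, int(len(fleet) * kill_fraction)))
    for victim in victims:
        terminate(victim)
    return [i for i in fleet if i not in victims]


if __name__ == "__main__":
    surviving = chaos_round(instances)
    print("Still serving traffic from:", surviving)
```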

With outages costing enterprises up to $60 million a year, according to IHS Markit, DR planning is a critical component of every data center strategy, even if the data center is in the cloud.

Furthermore, there are now regulations that require companies to have a DR plan in place. For instance, the Federal Financial Institutions Examination Council (FFIEC) has guidelines about the maximum allowable downtime for IT systems based on how critical downtime is to the business. If a disaster arises and a company isn’t prepared for it, the company can face fines and legal penalties in addition to the loss of service, data, and customer goodwill.

The ultimate goal of DR planning is to move “cold” data, complete copies of the data center frozen at a point in time, to the most cost-effective location available that still allows recovery within a meaningful SLA if and when it is needed. These copies are then continually updated to ensure that any subsequent changes to the production environment are replicated to the DR environment.
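As a minimal sketch of that replication loop, assuming a hypothetical production directory and DR target path, the snippet below copies across only the files that have changed since the last run; real deployments would use replication features built into the storage or backup product rather than plain file copies:

```python
import shutil
from pathlib import Path

# Hypothetical locations; in practice the target would be an offsite or cloud mount.
PRODUCTION = Path("/data/production")
DR_COPY = Path("/mnt/dr-site/production-copy")


def replicate_changes(source: Path, target: Path) -> int:
    """Copy files that are new or newer than the DR copy; return how many changed."""
    changed = 0
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        dest_file = target / src_file.relative_to(source)
        if not dest_file.exists() or src_file.stat().st_mtime > dest_file.stat().st_mtime:
            dest_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dest_file)  # copy2 preserves timestamps
            changed += 1
    return changed


if __name__ == "__main__":
    print(f"Replicated {replicate_changes(PRODUCTION, DR_COPY)} changed files to the DR copy")
```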

Before moving forward with DR planning, organizations must look at industry-specific regulations such as HIPAA or the Sarbanes-Oxley Act to determine the right hosting infrastructure for their data. For example, strict data sovereignty and security requirements may prevent an organization from saving personal data to the cloud if that data would ever leave its country of residence.

After evaluating these requirements, the CIO may find that a hybrid cloud offers the best balance of cost and risk for the organization. Where “cold” data was previously moved to tape for offsite storage, cloud-based cold storage provides cost-effective retention of data and quicker recovery in the event of a disaster.

Implementing a hybrid IT infrastructure in which data is backed up to the cloud, private or public, enables IT to continue to control and align the appropriate levels of data performance, protection, and security across all environments. By replicating data to the cloud and/or other physical sites, organizations can quickly resume operations at that facility when a primary-site outage occurs.
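One simple way to keep that promise honest, sketched below with a hypothetical DR path and a one-hour recovery point objective, is to routinely check how stale the replicated copy is before counting on it for failover:

```python
import time
from pathlib import Path

# Hypothetical DR mount and a one-hour recovery point objective (RPO).
DR_COPY = Path("/mnt/dr-site/production-copy")
RPO_SECONDS = 60 * 60


def newest_mtime(root: Path) -> float:
    """Return the most recent modification time found under the DR copy."""
    return max((f.stat().st_mtime for f in root.rglob("*") if f.is_file()), default=0.0)


def within_rpo(root: Path, rpo: int) -> bool:
    """True if the DR copy has been updated within the recovery point objective."""
    age = time.time() - newest_mtime(root)
    print(f"DR copy is {age / 60:.1f} minutes old (RPO is {rpo / 60:.0f} minutes)")
    return age <= rpo


if __name__ == "__main__":
    if not within_rpo(DR_COPY, RPO_SECONDS):
        print("Replication is lagging; investigate before trusting this copy for failover")
```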

Even in the absence of natural disasters, one threat wreaking havoc on sensitive enterprise data today is ransomware: malware that holds the victim’s data hostage until a ransom is paid. However, organizations with a backup/DR solution as simple as snapshot management software can use it to combat ransomware as part of the DR plan.

The concept is rooted in user-driven data recovery: because snapshots are read-only, an outside party cannot encrypt them. The protection runs in the background for added reassurance and removes the need to pay cyber criminals who take data hostage, since users have a point-in-time copy from which to restore their uncompromised data.
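The sketch below illustrates the idea rather than any vendor's product: it keeps timestamped, read-only point-in-time copies of a hypothetical data directory so a later restore can roll back to a copy taken before the infection. Real snapshot software works at the filesystem or storage-array level rather than copying files.

```python
import shutil
import stat
import time
from pathlib import Path

# Hypothetical paths: live data and a directory holding point-in-time snapshots.
LIVE_DATA = Path("/data/production")
SNAPSHOT_ROOT = Path("/snapshots/production")


def take_snapshot(source: Path, snapshot_root: Path) -> Path:
    """Copy the live data into a timestamped snapshot and mark every file read-only."""
    snapshot = snapshot_root / time.strftime("%Y%m%d-%H%M%S")
    shutil.copytree(source, snapshot)
    for f in snapshot.rglob("*"):
        if f.is_file():
            f.chmod(stat.S_IREAD)  # read-only: malware on the client can't rewrite it
    return snapshot


def restore_snapshot(snapshot: Path, target: Path) -> None:
    """Roll the live data back to a known-good point in time."""
    if target.exists():
        shutil.rmtree(target)
    shutil.copytree(snapshot, target)
    for f in target.rglob("*"):
        if f.is_file():
            f.chmod(stat.S_IREAD | stat.S_IWRITE)  # restored copy is writable again


if __name__ == "__main__":
    snap = take_snapshot(LIVE_DATA, SNAPSHOT_ROOT)
    print(f"Snapshot stored at {snap}; call restore_snapshot(snap, LIVE_DATA) if needed")
```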

These days it’s rarely a matter of if a disaster will strike, but when. Organizations must create and test a comprehensive DR plan to avoid lost productivity, reputation, and revenue.

By understanding the threats to their data, taking compliance regulations into careful consideration and creating an all-encompassing DR strategy, organizations will be well positioned to quickly recover operations and avoid the consequences of downtime from any disaster.


This article was written by Mike Elliott from Information Management and was legally licensed through the NewsCred publisher network. Please direct all licensing questions to [email protected].


Fileless malware targeting US restaurants went undetected by most AV

(credit: Carol Von Canon)

Researchers have detected a brazen attack on restaurants across the United States that uses a relatively new technique to keep its malware undetected by virtually all antivirus products on the market.

Malicious code used in so-called fileless attacks resides almost entirely in computer memory, a feat that prevents it from leaving the kinds of traces that are spotted by traditional antivirus scanners. Once the sole province of state-sponsored spies casing the highest-value targets, these in-memory techniques are becoming increasingly common in financially motivated attacks. They typically rely on widely used administrative and security-testing tools such as PowerShell, Metasploit, and Mimikatz, which feed a series of malicious commands to targeted computers.

FIN7, an established hacking group with ties to the Carbanak Gang, is among the converts to this new technique, researchers from security firm Morphisec reported in a recently published blog post. The dynamic link library file it's using to infect Windows computers in an ongoing attack on US restaurants would normally be detected by just about any AV program had the file been written to a hard drive. But because the file contents are piped into computer memory using PowerShell, the file wasn't detected by any of the 56 most widely used AV programs, according to a VirusTotal query conducted earlier this month.
