Researchers have revealed a never-before-seen piece of cross-platform malware that has infected a wide range of Linux and Windows devices, including small office routers, FreeBSD boxes, and large enterprise servers.
Black Lotus Labs, the research arm of security firm Lumen, is calling the malware Chaos, a word that repeatedly appears in function names, certificates, and file names it uses. Chaos emerged no later than April 16, when the first cluster of control servers went live in the wild. From June through mid-July, researchers found hundreds of unique IP addresses representing compromised Chaos devices. Staging servers used to infect new devices have mushroomed in recent months, growing from 39 in May to 93 in August. As of Tuesday, the number reached 111.
Black Lotus has observed interactions with these staging servers from both embedded Linux devices and enterprise servers, including one in Europe that was hosting an instance of GitLab. There are more than 100 unique samples in the wild.
“The potency of the Chaos malware stems from a few factors,” Black Lotus Labs researchers wrote in a Wednesday morning blog post. “First, it is designed to work across several architectures, including: ARM, Intel (i386), MIPS and PowerPC—in addition to both Windows and Linux operating systems. Second, unlike largescale ransomware distribution botnets like Emotet that leverage spam to spread and grow, Chaos propagates through known CVEs and brute forced as well as stolen SSH keys.”
CVEs are the designations used to track specific security vulnerabilities. Wednesday’s report referred to only a few, including CVE-2017-17215 and CVE-2022-30525, which affect devices sold by Huawei and Zyxel, respectively, and CVE-2022-1388, an extremely severe vulnerability in load balancers, firewalls, and network inspection gear sold by F5. SSH infections using password brute-forcing and stolen keys also allow Chaos to spread from machine to machine inside an infected network.
Chaos also has various capabilities, including enumerating all devices connected to an infected network, running remote shells that allow attackers to execute commands, and loading additional modules. Combined with the ability to run on such a wide range of devices, these capabilities have led Black Lotus Labs to suspect Chaos “is the work of a cybercriminal actor that is cultivating a network of infected devices to leverage for initial access, DDoS attacks and crypto mining,” company researchers said.
Black Lotus Labs believes Chaos is an offshoot of Kaiji, botnet software for Linux-based AMD64 and i386 servers used to perform DDoS attacks. Since coming into its own, Chaos has gained a host of new features, including modules for new architectures, the ability to run on Windows, and the ability to spread through vulnerability exploitation and SSH key harvesting.
Infected IP addresses indicate that Chaos infections are most heavily concentrated in Europe, with smaller hotspots in North and South America and Asia-Pacific.
Black Lotus Labs researchers wrote:
Over the first few weeks of September, our Chaos host emulator received multiple DDoS commands targeting roughly two dozen organizations’ domains or IPs. Using our global telemetry, we identified multiple DDoS attacks that coincide with the timeframe, IP and port from the attack commands we received. Attack types were generally multi-vector leveraging UDP and TCP/SYN across multiple ports, often increasing in volume over the course of multiple days. Targeted entities included gaming, financial services and technology, media and entertainment, and hosting. We even observed attacks targeting DDoS-as-a-service providers and a crypto mining exchange. Collectively, the targets spanned EMEA, APAC and North America.
One gaming company was targeted for a mixed UDP, TCP and SYN attack over port 30120. Beginning September 1 – September 5, the organization received a flood of traffic over and above its typical volume. A breakdown of traffic for the timeframe before and through the attack period shows a flood of traffic sent to port 30120 by approximately 12K distinct IPs – though some of that traffic may be indicative of IP spoofing.
A few of the targets included DDoS-as-a-service providers. One markets itself as a premier IP stressor and booter that offers CAPTCHA bypass and “unique” transport layer DDoS capabilities. In mid-August, our visibility revealed a massive uptick in traffic roughly four times higher than the highest volume registered over the prior 30 days. This was followed on September 1 by an even larger spike of more than six times the normal traffic volume.
The two most important things people can do to prevent Chaos infections are to keep all routers, servers, and other devices fully updated and to use strong passwords and FIDO2-based multifactor authentication whenever possible. A reminder to small office router owners everywhere: Most router malware can’t survive a reboot. Consider restarting your device every week or so. Those who use SSH should always use a cryptographic key for authentication.
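For those who use SSH, disabling password logins entirely removes the brute-force vector Chaos relies on. Here is a minimal sketch of the relevant server-side settings, assuming OpenSSH (directives go in /etc/ssh/sshd_config):

```
# /etc/ssh/sshd_config: require key-based authentication only
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```

Restart the SSH service (for example, systemctl restart sshd) only after confirming a working key is in place, or you risk locking yourself out.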
Businesses are a prime target for cybercriminals, regardless of their size, industry, or location.
In this graphic sponsored by Global X ETFs, we’ve visualized the largest corporate hacks of 2021, as measured by ransom size. The full list is also tabulated below.
Victim | Country | Industry | Amount paid or requested (USD)
Microsoft | U.S. | Technology | Undisclosed
Kia Motors | South Korea | Automotive | $20M*
Bombardier | Canada | Aviation | Undisclosed
CNA Financial | U.S. | Financial Services | $40M
Harris Federation | UK | Education | $8M*
Colonial Pipeline | U.S. | Energy | $4.4M
Brenntag | Germany | Chemicals | $4.4M
JBS | Canada | Food | $11M
Kaseya | U.S. | Technology | $70M*
Accenture | U.S. | Technology | $50M*
Acer | Taiwan | Technology | $50M*
*Requested but not paid in full. Source: Microsoft (2021), CRN (2021)
Continue reading below for details on some of these extraordinary hacks.
Energy: Colonial Pipeline Co.
The Colonial Pipeline ransomware attack was the largest ever cyberattack on an American oil infrastructure target.
On May 7, hackers took down the company’s billing system and threatened to release stolen data if a ransom was not paid. During negotiations, the company halted its pipelines, resulting in gas shortages across the Southeastern United States.
It’s been reported that Colonial Pipeline promptly paid a ransom of $4.4 million in bitcoin (based on prices at the time). The FBI managed to retrieve some of these bitcoins, but their exact method was not revealed.
Technology: Accenture
Accenture, one of the world’s largest IT consultants, fell victim to a ransomware attack in August of 2021. While this may seem ironic, it further proves that any business, regardless of industry, can be susceptible to hackers.
“There was no impact on Accenture’s operations, or on our client’s systems. As soon as we detected the presence of this threat, we isolated the affected servers.” – Accenture spokesperson
The hack was traced back to LockBit, which claims to have stolen several terabytes of data from Accenture’s servers. A $50 million ransom was demanded, though it’s unknown whether the company actually made any payments.
Automotive: Kia Motors
Kia’s American business fell victim to a ransomware attack in February by a group called DoppelPaymer. Hackers threatened to release stolen data within 2 to 3 weeks if a ransom of $20 million (in bitcoin) was not paid.
This hack affected various systems including the Kia Owner Portal, Kia Connect (a mobile app for Kia owners), and internal programs used by dealerships. This also prevented buyers from picking up their new cars.
Kia denied it was hacked, but the timing of the ransom note and Kia’s service outages was suspicious. According to the FBI, DoppelPaymer has been responsible for numerous attacks since 2020. Victims include U.S. police departments, community colleges, and even a hospital in Germany.
Food: JBS
JBS, one of the world’s largest meat processing companies, experienced disruptions at its North American facilities in May. Shortly after, the company confirmed it had paid hackers a ransom of $11 million in bitcoin.
“This was a very difficult decision to make for our company and for me personally.” – Andre Nogueira, CEO, JBS USA
This attack, along with the Colonial Pipeline hack, represents an alarming trend of critical industries being targeted. For context, JBS claims an annual IT budget of over $200 million and more than 850 IT personnel globally. The group responsible for the attack is REvil, a now-defunct hacking group based in Russia.
Increased Spending on the Menu
The rising frequency and sophistication of corporate hacks is a major threat to organizations worldwide. In fact, recent research from PricewaterhouseCoopers found that 69% of businesses predict a rise in their future cybersecurity spending.
The Global X Cybersecurity ETF is a passively managed solution that can be used to gain exposure to the rising adoption of cybersecurity technologies. Click the link to learn more.
From building new features to tuning your rational application developer setup, the test server role is an essential part of any software development project. But testing applications isn’t something you learn in a lecture; it’s something you need to practice with actual code.
And even though there are plenty of online tutorials and articles on how to use the test server role, there’s a good chance your application won’t make it past the initial build stage without some help from the developer.
Our goal throughout this article is to provide you with a foundation on which to build your own test server application.
From there, we’ll be covering:
- How the test server works
- What its components are used for
- How to set them up correctly
- How to debug problems
- Corresponding tests
- The benefits of using the test server app as your base
If you follow along, you’ll have no problem understanding every concept and algorithm behind our examples. And if you miss some important points, don’t worry: our examples assume no previous knowledge of the test server role or computer technology.
Read through everything we cover and implement in the end-of-article guide before moving on to the rest of this article’s content.
What is a rational application developer?
A rational application developer is a software control plane that lets you execute automated tests against your application. By writing the test code yourself, you can verify that your application delivers its intended functionality.
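To make that concrete, here is a minimal sketch of what automated test code can look like, written in Python with pytest; the add_order function is a hypothetical stand-in for real application code:

```python
# test_orders.py: a minimal automated-test sketch (hypothetical example).
# In a real project, add_order would live in your application code and
# be imported here; it is defined inline to keep the sketch runnable.
import pytest

def add_order(item: str, quantity: int) -> dict:
    """Toy stand-in for the application code under test."""
    if quantity < 1:
        raise ValueError("quantity must be positive")
    return {"status": "accepted", "item": item, "quantity": quantity}

def test_add_order_returns_confirmation():
    # The test encodes the expected behavior of the application.
    confirmation = add_order("widget", 2)
    assert confirmation["status"] == "accepted"
    assert confirmation["quantity"] == 2

def test_add_order_rejects_invalid_quantity():
    # Invalid input should raise rather than silently create an order.
    with pytest.raises(ValueError):
        add_order("widget", -1)
```

Running `pytest test_orders.py` executes both checks automatically, which is the core idea: encode the expected behavior once, then re-verify it on every build.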
Test programs are usually deployed as centrally managed parts of your application code. While test code is a great way to ensure that your app behaves consistently with the rest of the web stack, it can be tricky to set up and maintain. For this reason, it’s wise to hire a professional test engineer with a strong understanding of test server architecture and usage.
With a rational application developer, you don’t need to be an experienced web developer to create clean, repeatable code.
Instead, you need to know how to write tests in order to execute automated, complex tasks. As an application developer, you can leverage the test server role to execute and test code that would otherwise need to be created manually.
You can find a number of online and print resources on how to use the test server role, but our examples assume no previous knowledge of it.
How does the test server work?
When a business fully accepts an order, it will typically send a request to the customer service department for approval. At the same time, the developer is responsible for building and testing the application; their job is to write code that lets the customer perform the requested actions.
To ensure that your code meets requirements, you’ll need to know exactly how the system works. As the developer, you’ll have access to an array of tools that can help you collect data, write code, and test your application.
Some of these tools will be architecture-agnostic, meaning you can use them in your production applications without changing anything else about the application. Others will require a specific language or platform to operate. Depending on what kind of application you’re working on, you may want to use a different set of tools to get the most information out of them.
What is an application developer looking for when developing their application?
The development of an application is a critical step in the software lifecycle. It takes the developer from concept to functioning software and, finally, to practicality. Building and maintaining an application relies on the following core skills:
- Effective user experience – the responsibility of the customer support team. It includes understanding the customer’s specific needs and designing a user experience that meets or exceeds expectations.
- Effective architecture – the responsibility of the system architect. It includes understanding the dependencies and relationships between components in your application.
- Effective testability – the customer experience team will test your application using pull requests, ensuring that the code written by the application developer matches the expected level of organization.
The advantages of using the test server app as your base
The test server role offers many benefits beyond the obvious one of making your application testable and repeatable. These will vary depending on your business and application, but they’re all worth weighing when building a test server app.
Here are some of the most important ones:
- The user experience is the same as the in-house product. Test server apps are highly reliable, OS-independent, and don’t require special setup or installation.
- The setup is exactly the same as the in-house product. Apps created with the test server role automatically connect to the support team when a customer orders.
- Apps are verified for security. Apps created with the test server role are vetted by a variety of security authorities, such as Google, Amazon, Facebook, and Twitter.
The benefits of using the test server app as your base
As the single source of truth for your application, the test server role helps maximize its impact. By letting your users know exactly what is happening inside your application, you’ll have more clarity when deciding how to proceed with their orders.
The test server role reduces uncertainty and improves testability by providing a consistent, automated test environment. Because verification runs against a single source of truth, there’s no middleman or human error involved, and you can set the app up for repeatability and efficiency. With the test server app as your base, you won’t have to deal with inconsistencies in the user experience, inaccuracies in your code, or other common issues that can hurt your business.
The Pros and Cons of Using the Test Server Role
The test server role has a lot of inherent pros, but there are also some disadvantages you should expect. Here are the most significant ones:
- It’s not a ready-to-use product. You won’t be able to use the test server in a production application without making changes.
- It’s not a standalone app. You won’t be able to use the test server in your application until you have a full-blown app that works with it.
Every year, billions of credentials appear online, be it on the dark web, clear web, paste sites, or in data dumps shared by cybercriminals. These credentials are often used for account takeover attacks, exposing organizations to breaches, ransomware, and data theft.
While CISOs are aware of growing identity threats and have multiple tools in their arsenal to help reduce the potential risk, the reality is that existing methodologies have proven largely ineffective. According to the 2022 Verizon Data Breach Investigations Report, over 60% of breaches involve compromised credentials.
Attackers use techniques such as social engineering, brute force, and purchasing leaked credentials on the dark web to compromise legitimate identities and gain unauthorized access to victim organizations’ systems and resources.
Adversaries often leverage the fact that some passwords are shared among different users, making it easier to breach multiple accounts in the same organization. Some employees reuse passwords. Others use a shared pattern in their passwords among various websites. An adversary can use cracking techniques and dictionary attacks to overcome password permutations by leveraging a shared pattern, even if the password is hashed. The main challenge to the organization is that hackers only need a single password match to break in.
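To illustrate the mechanics, here is a toy Python sketch of a dictionary attack that applies common pattern mutations to a small wordlist and compares each candidate against a made-up, unsalted SHA-256 hash; real attackers use large leaked wordlists and dedicated rule engines such as hashcat’s:

```python
# Toy dictionary attack with simple pattern mutations. The wordlist and
# target hash are made up for this example; real attacks use large
# leaked wordlists and far richer mutation rules.
import hashlib

def mutations(base):
    # Common shared patterns: capitalization plus year/symbol suffixes.
    for word in (base, base.capitalize()):
        for suffix in ("", "1", "123", "2022", "!", "2022!"):
            yield word + suffix

def crack(target_sha256, wordlist):
    for base in wordlist:
        for candidate in mutations(base):
            if hashlib.sha256(candidate.encode()).hexdigest() == target_sha256:
                return candidate
    return None

target = hashlib.sha256(b"summer2022!").hexdigest()  # stand-in leaked hash
print(crack(target, ["winter", "password", "summer"]))  # -> summer2022!
```

Even this crude six-suffix rule set turns one base word into a dozen candidates, which is why a single shared pattern can undermine an entire organization’s password policy.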
To effectively mitigate their exposure, given current threat intelligence, organizations need to focus on what is exploitable from the adversary’s perspective.
Here are five steps organizations should take to mitigate credentials exposure:
Gather Leaked Credentials Data
To start addressing the problem, security teams need to collect data on credentials that have been leaked externally in various places, from the open web to the dark web. This can give them an initial indication of the risk to their organization, as well as the individual credentials that need to be updated.
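One freely available starting point is checking passwords against the Have I Been Pwned “Pwned Passwords” corpus via its k-anonymity range API, which never sends the full password hash off-machine. A minimal Python sketch:

```python
# Check a password against the Have I Been Pwned "Pwned Passwords"
# corpus via the k-anonymity range API: only the first five characters
# of the SHA-1 hash are ever sent over the network.
import hashlib
import requests

def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    # The response is one "SUFFIX:COUNT" entry per line.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(breach_count("Password123"))  # how many breach corpora contain it
```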
Analyze the Data
From there, security teams need to identify the credentials that could actually lead to security exposures. An attacker would take the username and password combinations (either cleartext or hashed), then try to use them to access services or systems. Security teams should use similar techniques to assess their own risk (a small validation sketch follows this list). This includes:
Checking if the credentials allow access to the organization’s externally exposed assets, such as web services and databases
Attempting to crack captured password hashes
Validating matches between leaked credential data and the organization’s identity management tools, such as Active Directory
Manipulating the raw data to increase the achieved number of compromised identities. For example, users commonly use the same password patterns. Even if the leaked credentials do not allow access to external-facing assets or match Active Directory entries, it may be possible to find additional matches by testing variations.
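As a hedged illustration of the first check, here is a small Python sketch that replays leaked username/password pairs against an externally exposed endpoint protected by HTTP basic auth; the URL and credential pairs are hypothetical, and this should only ever be run against assets you own and are authorized to test:

```python
# Replay leaked credential pairs against an externally exposed asset to
# see which are still valid. The URL and pairs below are hypothetical;
# only test systems you are authorized to assess.
import requests

def still_valid(leaked_pairs, url="https://portal.example.com/api/me"):
    confirmed = []
    for username, password in leaked_pairs:
        resp = requests.get(url, auth=(username, password), timeout=10)
        if resp.status_code == 200:  # 401/403 would mean the pair is stale
            confirmed.append(username)
    return confirmed

pairs = [("jdoe", "Summer2022!"), ("asmith", "hunter2")]  # from a dump
print(still_valid(pairs))  # usernames whose leaked passwords still work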
Mitigate Credential Exposures
After validating the leaked credentials to identify actual exposures, organizations can take targeted action to mitigate the risk of an attacker doing the same. For instance, they could erase inactive leaked accounts in Active Directory or initiate password changes for active users.
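As a sketch of what that mitigation can look like in practice (the hostname, bind account, and DNs are placeholders), both actions can be scripted against Active Directory over LDAP, for example with Python’s ldap3 library:

```python
# Sketch: disable a stale leaked account and force a password reset for
# an active one, over LDAP with the ldap3 library. Hostname, bind
# account, and DNs are placeholders; run only with authorization.
from ldap3 import Server, Connection, MODIFY_REPLACE

server = Server("ldaps://dc01.example.com")
conn = Connection(server, user="EXAMPLE\\svc-identity",
                  password="...", auto_bind=True)

stale_dn = "CN=J Doe,OU=Users,DC=example,DC=com"
# userAccountControl 514 = NORMAL_ACCOUNT (512) + ACCOUNTDISABLE (2)
conn.modify(stale_dn, {"userAccountControl": [(MODIFY_REPLACE, [514])]})

active_dn = "CN=A Smith,OU=Users,DC=example,DC=com"
# pwdLastSet = 0 means "user must change password at next logon"
conn.modify(active_dn, {"pwdLastSet": [(MODIFY_REPLACE, [0])]})

conn.unbind()
```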
Reevaluate Security Processes
After direct mitigation, security teams should evaluate whether their current processes are safe and make improvements where possible. For instance, if they are dealing with many matched leaked credentials, they may recommend changing the entire password policy across the organization. Similarly, if inactive users are found in Active Directory, it may be beneficial to revisit the employee offboarding process.
Repeat Automatically
Attackers are continuously adopting new techniques. Attack surfaces change, with new identities being added and removed on a routine basis. Similarly, humans will always be prone to accidental mistakes. As a result, a one-time effort to find, validate, and mitigate credential exposures is not enough. To achieve sustainable security in a highly dynamic threat landscape, organizations must continuously repeat this process.
However, resource-constrained security teams cannot afford to manually perform all these steps on a sufficient cadence. The only way to effectively manage the threat is to automate the validation process.
Pentera offers one way for organizations to automatically emulate attackers’ techniques, attempting to exploit leaked credentials both externally and inside the network. To close the validation loop, Pentera provides insights into full attack paths, along with actionable remediation steps that allow organizations to efficiently maximize their identity strength.
To find out how Pentera can help you reduce your organization’s risk of inadvertent credential exposure, contact us today to request a demo.
Around 40% of ethical hackers recently surveyed by the SANS Institute said they can break into most environments they test, if not all. Nearly 60% said they need five hours or less to break into a corporate environment once they identify a weakness.
The SANS ethical hacking survey, done in partnership with security firm Bishop Fox, is the first of its kind and collected responses from over 300 ethical hackers working in different roles inside organizations, with different levels of experience and specializations in different areas of information security. The survey revealed that on average, hackers would need five hours for each step of an attack chain: reconnaissance, exploitation, privilege escalation and data exfiltration, with an end-to-end attack taking less than 24 hours.
The survey highlights the need for organizations to improve their mean time to detect and mean time to contain, especially considering that ethical hackers are restricted in the techniques they’re allowed to use during penetration testing or red team engagements. Using black hat techniques, as criminals do, would significantly improve the success rate and speed of an attack.
Hackers find exploitable weaknesses in only a few hours
When asked how much time they typically need to identify a weakness in an environment, 57% of the polled hackers indicated ten or fewer hours: 16% responded six to ten hours, 25% three to five hours, 11% one to two hours and 5% less than an hour. It’s also worth noting that 28% responded that they didn’t know, which could be because of multiple reasons and not necessarily because it would take them more than ten hours.
One possibility is that many ethical hackers don’t keep track of how much time perimeter discovery and probing might take because it is not an important metric for them or a time-sensitive matter. Many factors could influence this, from the size of the environment and number of assets to their preexisting familiarity with the tested environment.
Over two-thirds of the questioned hackers indicated that they work or worked in the past as members of internal security teams and half said they served as consultants for offensive security providers. Almost 90% of respondents held an information security certification and the top specializations among them were network security, internal penetration testing, application security, red-teaming, and cloud security. Code-level security, IoT security and mobile security were less common at 30% prevalence or less.
“Our data shows that the majority of respondents with application security, network security, and internal pen testing experience were able to find an exploitable exposure within five hours or less,” Matt Bromiley, a SANS digital forensics and incident response instructor, said in the report.
Around 58% indicated that they needed five hours or less to exploit a weakness once found, with 25% saying between one and two hours and 7% less than an hour. When asked to rank different factors that lead to exposures, the majority indicated third-party connections, the rapid pace of application development and deployment, adoption of cloud infrastructure, remote work, and mergers and acquisitions.
In terms of the types of exposures they encounter most often, misconfigurations took the top spot, followed by vulnerable software, exposed web services, sensitive information exposure, and authentication or access control issues.
“We also asked our respondents with cloud security experience how often they encountered improperly configured or insecure cloud/IaaS assets,” Bromiley said. “There’s an even split between ‘half the time’ and ‘more often than not.’ It’s only small percentages at either end that rarely see (4.6%) or always see (8%) misconfigured public cloud or IaaS assets. These stats support an unfortunate truth that … organizations develop and deploy applications that expose vulnerabilities, insecurities, and improper configurations for adversaries to take advantage of.”
Privilege escalation and lateral movement also happen quickly
The under five-hour time frame seemed to prevail across all other stages of an attack, with 36% of respondents reporting they could escalate privileges and move laterally through the environment within three to five hours after the initial intrusion, while 20% estimated they could do it in two or fewer hours. This remained consistent when it came to data collection and exfiltration with 22% of respondents indicating it would take them three to five hours, 24% between one and two hours and 16% less than an hour.
“We see a consistent theme of adversaries able to perform intrusion actions within a five-hour window,” Bromiley said in the survey report. “Whether it’s lateral movement, privilege escalation, or data exfiltration, security teams should be measuring their ability to proactively identify and detect and respond as quickly as possible.”
When it comes to the average time required to complete an end-to-end attack, most respondents (57%) indicated a time frame of less than 24 hours with another 23% saying they don’t know.
Good detection and response methods are effective
One piece of potentially good news for security teams is that only 38% of respondents indicated that they could “more often than not” successfully pivot to a new attack method that could bypass the defenses that blocked their initial attack vector. This suggests that having good detection and prevention methods in place pays off in blocking intrusion attempts, especially since criminals typically take the path of least resistance and move on to an easier target when they don’t succeed.
Furthermore, 59% of respondents said they rely on open-source tools in their intrusions and 14% said they use public exploit packs. Only 6% use private exploits and 7% use custom tools they wrote themselves. This means security teams could get a lot of value from focusing on defending against known, public tools and exploits. Unfortunately, three-quarters of respondents indicated that only a few or some organizations have detection and response capabilities in place that are effective at stopping attacks. Almost 50% said that organizations are moderately or highly incapable of detecting and preventing cloud-specific and application-specific attacks.