Know thy enemy: thinking like a hacker can boost cybersecurity strategy

As group leader for Cyber Adversary Engagement at MITRE Corp., Maretta Morovitz sees value in getting to know the enemy – she can use knowledge about cyber adversaries to distract, trick, and deflect them and develop strategies to help keep threat actors from getting whatever they’re after.

That could mean placing decoys and lures that exploit attackers’ expectations about what they will find when they first hack into an environment, she says. Or it could mean deliberately disorienting them by creating scenarios that don’t match those expectations. “It’s about how to drive defenses by knowing how the adversaries actually behave,” says Morovitz, who is also group leader for MITRE Engage, a cyber adversary engagement framework.

The concept of understanding one’s adversary is not new. The sixth-century BCE military strategist Sun Tzu promoted the idea “Know thy enemy” in his still-famous work The Art of War. Nor is its application in cybersecurity new. Ethical hacking, which dates back decades, is partially based on acting as threat actors would to find weak spots within enterprise IT environments.

Similarly, enterprise security leaders have long worked to identify their likely adversaries and what they might be after. However, their ability to delve into the hacker mindset has been limited both by available resources and knowledge and by conventional strategies, which stressed perimeter defenses first and then graduated defenses that give the highest protection to the most valuable assets.

Hacker thinking helps shape security strategy

Now security experts – at MITRE and elsewhere – advocate for CISOs and their security teams to use threat intel, security frameworks, and red team skills to think like a hacker and, more importantly, to use that insight to shape security strategies. This, they say, means considering attackers’ motives and mentalities, which in turn influence their levels of persistence, the pathways they may take, and what exactly they want – all of which could be different from, or broader than, what defenders assume. That insight should then shape the direction of a defense-in-depth strategy; it should be used to create a truly threat-driven security program.

“If you’re not thinking like a hacker, you’re not able to take the actions that are right for your environment. But the more you know about the threats, the more effective you can be in applying that technology,” says Jim Tiller, global CISO for Nash Squared and Harvey Nash USA.

The 2022 Ethical Hacking Survey, an inaugural survey on the topic from security training association SANS, speaks to those points, with report writers saying that they “aimed to understand the intricacies of how attackers think, the tools they use, their speed, their specialization, their favorite targets, etc.”

The report further notes that “these insights are critical to investment decisions across an increasingly complex attack surface that is becoming more difficult to protect. Oftentimes, we see organizations that invest in security technologies that mitigate a wide range of threats leave commonly attacked ports and protocols wide open. Adversaries will choose the path of least resistance or the one they are most familiar with – and far too often, these are the same. Overlooked or assumed safety presents too much of a risk.”

Benefiting from a hacker’s perspective

Like Morovitz, the SANS report calls out the value of having a “bird’s-eye view of an adversary – whether sanctioned or not,” noting that it “can be a guiding light for security analysts and decision makers alike.” Research, however, has found that many security teams don’t have that insight, nor do they seek it out.

“There is a misconception security teams have about how hackers target our networks,” says Alex Spivakovsky, who as vice-president of research at security software maker Pentera has studied this topic. “Today, many security teams hyperfocus on vulnerability management and rush to patch [common vulnerabilities and exposures] as quickly as possible because, ultimately, they believe that the hackers are specifically looking to exploit CVEs. In reality, it doesn’t actually reduce their risk significantly, because it doesn’t align with how hackers actually behave.”

Spivakovsky, an experienced penetration tester who served with the Israel Defense Forces unit responsible for collecting signal intelligence (SIGINT) and code decryption, says hackers operate like a business, seeking to minimize resources and maximize returns. In other words, they generally want to put in as little effort as possible to achieve maximum benefit.

He says hackers typically follow a certain path of action: once they breach an IT environment and have an active connection, they collect such data as usernames, IP addresses, and email addresses. They use those to assess the maturity of the organization’s cybersecurity posture. Then they start doing deeper dives, looking for open ports, areas with poor protection such as end-of-life systems and resources that aren’t properly managed. “And now that hackers understand the operating systems running, they will start to understand if there’s something exploitable to launch a hacking campaign,” Spivakovsky says.

Hackers are adaptable in the search for poor security hygiene

“Hackers don’t generally approach organizations only looking to exploit CVEs, or any one tactic, for that matter. Instead, they are very adaptable to the different opportunities that present themselves while they are interacting with the organization,” he says. “As a process, hackers engage in a broad discovery and enumeration process, examining the organization for indicators of poor security hygiene. These could be factors like the lack of a web application firewall, the presence of too many anonymously accessible services, or any number of other indicators.”

“If there aren’t any attractive elements, the likelihood of breaking in decreases substantially. However, if something sparks their interest, they look to escalate the attack from there.”
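
The kind of externally visible hygiene check Spivakovsky describes can be sketched in a few lines. This is a hypothetical illustration – the port list, software names, and indicator strings below are assumptions for the example, not output from any real scanner:

```python
# Hypothetical sketch: flag the poor-hygiene indicators an opportunistic
# attacker might look for on a discovered host. All names are illustrative.

EOL_SYSTEMS = {"windows-server-2008", "centos-6", "php-5"}

# Ports commonly probed because they are so often left exposed.
RISKY_PORTS = {21: "ftp", 23: "telnet", 445: "smb", 3389: "rdp"}

def hygiene_findings(host):
    """Return a list of poor-hygiene indicators for a discovered host.

    `host` is a dict like:
      {"open_ports": [22, 3389], "software": ["centos-6"], "waf": False}
    """
    findings = []
    for port in host.get("open_ports", []):
        if port in RISKY_PORTS:
            findings.append(f"exposed {RISKY_PORTS[port]} on port {port}")
    for name in host.get("software", []):
        if name in EOL_SYSTEMS:
            findings.append(f"end-of-life software: {name}")
    if not host.get("waf", False):
        findings.append("no web application firewall detected")
    return findings

host = {"open_ports": [443, 3389], "software": ["centos-6"], "waf": False}
for finding in hygiene_findings(host):
    print(finding)
```

The more of these indicators a host accumulates, the more it “sparks interest” – which is exactly the external view Spivakovsky suggests defenders adopt.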

That’s why, Spivakovsky says, organizations should evaluate their enterprise security not from their own perspectives but from that of a hacker.

“What attracts hackers today is how it looks externally,” he adds. “So CISOs [must ask]: am I making myself an easy target and how so?”

Understanding the hacker mindset and motivation

Others say it’s also important to understand why hackers want to target organizations – and why they might want to come after yours. “Are you just a target for ransomware? Or do you have the secret formula for Coke? And if I’m a criminal, how can I best take advantage of this to make the money I can or cause the most damage I can?” says Tiller, the CISO with Nash Squared.

This gets into motivations and mindsets, which security chiefs can also use to refine their security strategies.

The goal is to focus on identifying adversaries or adversarial groups and determining their intent, says Adam Goldstein, an associate professor at Champlain College and academic director at the Leahy Center for Digital Forensics & Cybersecurity. “Is it disruption? Is it financial gain, is it intellectual property theft? Achieving [access to] resources for other goals? And are they mission-focused so they’ll keep trying and trying and trying no matter how strong the defenses are? Or are they looking for opportunities? Having this big picture of understanding all the different adversaries and what their intents are can help you identify the different types of risk.”

Such inquiry matters, Goldstein says, as it often challenges faulty assumptions and sometimes reveals to enterprise leaders that they’re bigger targets than they realized. Such analysis could have helped universities breached nearly a decade ago by foreign adversaries who targeted faculty for their connections to US political figures and institutions.

“They used techniques to target and acquire communications – emails and documents – that were not of monetary value, were not research documents. It was really focused on gaining access to correspondence that could potentially be of value in an international political landscape with some espionage element as well. That really caught the higher ed community off guard.” It also eventually shifted security strategy within the higher education community, Goldstein adds.

Not taking the hacker viewpoint can leave security gaps

Despite such anecdotes, though, security experts say many enterprise security departments aren’t incorporating a hacker viewpoint into their strategies and defenses. “We’re still seeing attacks and breaches in areas [organizations] didn’t consider,” says Chris Thompson, global adversary services lead at IBM X-Force Red.

Thompson says he sees organizations that engage in penetration testing to comply with regulations but don’t assess the range of reasons for which they could be targeted in the first place. Take a telecommunications company, for example, he says. It may be targeted by hackers looking for a financial payoff through a ransomware attack, which typically means they’re looking for easy targets. But if that telco is also supporting police communications, it could also be targeted by more persistent threat actors who are seeking to cause disruption.

“That’s why we tell clients to really challenge assumptions. What assumptions have you made about what paths an attacker would take? Validate them by getting a red team in to challenge those assumptions,” he says, adding that CISOs and their teams must weigh the fact that hackers have “the same access to all the security blogs and training and tools out there” that they do.

Operationalizing and leveraging hackerthink

Not surprisingly, security teams face challenges in cultivating the capacity to think like a hacker and in using the insights the exercise generates. Security leaders must commit resources to the task, and those resources are typically people rather than tools and technologies that can be deployed and left to run – a tall order for resource-strapped security teams and security organizations struggling to find talent, Morovitz says.

Moreover, CISOs may find it challenging to get funding for such activities as it’s difficult to demonstrate the returns on them. “It’s hard for organizations to wrap their minds around something that doesn’t have a lot of alerts. And even though the alerts they do get will be high-fidelity alerts, it’s still hard to prove value,” Morovitz explains, adding that some of the tools that support these activities are relatively expensive.

Security teams may also find it challenging to shift their own skill sets from defense – for example, identifying and closing vulnerabilities – to offense. As Tiller says, “It’s a very difficult thing to do because it’s a criminal mindset. And people who are in defensive industry, the white hats, they may not always be thinking of the willingness [that hackers have] to be low and slow.”

Still, it’s worth training blue teams in some red team skills, experts say.

Organizations also now have access to a growing list of resources to help them make this shift. Those resources include NIST frameworks, MITRE Engage, and MITRE ATT&CK. Additionally, there’s threat intelligence from vendors; Information Sharing and Analysis Centers (ISACs); and academic, government, and similar entities.

Furthermore, there’s an emerging class of technologies supporting red team-type work, Morovitz says.

Morovitz notes that organizations doing such work are tight-lipped about their activities, as they don’t want to give away any advantages their work may be generating, but she points to conference agenda items on the hacker mindset as evidence that more security teams are trying to think like hackers as a way to inform their strategies.

And there are indeed advantages, she and other experts say, in making this shift to a hacker mindset.

“Understanding the hackers’ approach,” Spivakovsky says, “would help reorient our security priorities to be more effective.”

12 steps to building a top-notch vulnerability management program

Security executives have long known the importance of addressing vulnerabilities within their IT environments.

And other executives in the C-suite have also come around to the criticality of this task, given the number of high-profile breaches that have happened as a result of unpatched systems.

Recent news should put to rest any lingering doubts about the importance of this task.

The US Federal Trade Commission, for example, in early January put the business community on notice about addressing Log4j, writing in an online post that “the duty to take reasonable steps to mitigate known software vulnerabilities implicates laws including, among others, the Federal Trade Commission Act and the Gramm Leach Bliley Act. It is critical that companies and their vendors relying on Log4j act now, in order to reduce the likelihood of harm to consumers, and to avoid FTC legal action.”

The FTC has good reason to warn about such issues: Reports consistently find unpatched known vulnerabilities remain one of the top attack vectors.

Consider figures from the Ransomware Spotlight Year End 2021 Report from security firms Ivanti, Cyber Security Works and Cyware. The report tallied 65 new vulnerabilities tied to ransomware in 2021, a 29% increase over the previous year, and counted a total of 288 known vulnerabilities associated with ransomware.

Despite such findings, many organizations lack a formal vulnerability management program. A 2020 survey from the SANS Institute, a cybersecurity training and certification organization, found that nearly 37% have either only an informal approach or no program at all.

Experienced security leaders agree that vulnerability management should not be handled on an ad hoc basis or through informal methods. Rather, it should be programmatic to enforce action, accountability, and continuous improvement.

To that end, these experts offered 12 steps for building a top-notch vulnerability management program:

1. Assemble a team

“Before you buy anything, do any processes, or create procedures, you need to build a team,” says Daniel Floyd, who as CISO of BlackCloak oversees its SOC, threat intelligence platform, and penetration testing and digital forensics teams.

In addition to assigning the security and IT workers who typically handle vulnerability management and patching, Floyd recommends including other key stakeholders, such as business-side employees who can speak to the impact the organization faces when systems are taken down for rebooting, so the team can understand how its work affects others.

2. Keep a current, comprehensive inventory of assets

Another foundational element for any effective vulnerability management program is an up-to-date asset inventory with a process to ensure that it remains as current and comprehensive as possible. “It’s definitely something that everyone knows about but it’s an area that’s really difficult,” Floyd says, particularly in today’s environments, with their physical assets, remote employee connections, and IoT components as well as cloud, SaaS, and open source elements.

But the hard work is critical, says Alex Holden, CISO with Hold Security and a member of the ISACA Emerging Trends Working Group. “It all has to be taken into account, so when something new comes up, you’ll know if it’s something you have to fix.”

3. Develop an ‘obsessive focus on visibility’

With a comprehensive asset inventory in place, Salesforce SVP of information security William MacMillan advocates taking the next step and developing an “obsessive focus on visibility” by “understanding the interconnectedness of your environment, where the data flows and the integrations.”

“Even if you’re not mature yet in your journey to be programmatic, start with the visibility piece,” he says. “The most powerful dollar you can spend in cybersecurity is to understand your environment, to know all your things. To me that’s the foundation of your house, and you want to build on that strong foundation.”

4. Be more aggressive with scanning

Vulnerability scanning is another foundational element within a solid cybersecurity program, yet experts say many organizations that are regularly running scans still fail to identify problems because they’re not being thorough enough. “Where I think people are falling down is in coverage,” Floyd says.

Consequently, high-performing vulnerability management programs have adopted more aggressive scanning practices incorporating multiple scanning options. Floyd, for example, says he believes teams should include credentialed scans for a more thorough search of weak configurations and missing patches in addition to running the more commonly used agent-based and network scanning.
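
The payoff of combining scan types can be illustrated with simple set arithmetic. The finding labels below are invented for illustration; the point is that each technique surfaces issues the others miss:

```python
# Illustrative sketch: each scanning technique sees different weaknesses,
# so combining them widens coverage. The finding sets are assumptions,
# not real scanner output.

network_scan = {"open-telnet", "expired-tls-cert"}
agent_scan = {"missing-os-patch", "expired-tls-cert"}
credentialed_scan = {"weak-local-admin-password", "missing-os-patch"}

# Union of the three result sets: the full picture no single scan gives.
combined = network_scan | agent_scan | credentialed_scan

print(len(network_scan), "findings from network scanning alone")
print(len(combined), "findings when scan types are combined")
```

Note that the credentialed scan is the only technique in this toy example that catches the weak local admin password – the kind of configuration weakness Floyd says credentialed scans exist to find.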

5. Have documented, deliberate workflows

Mature, well-established vulnerability management programs have documented, deliberate workflows that lay out what happens and who is responsible for what, MacMillan says.

“Larger, complex businesses understand [security vulnerabilities] are an existential threat and that they have to move past the ad hoc stage pretty quickly and lay out what needs to happen in a deliberate and focused way,” he explains.

Security teams everywhere can benefit from following that best practice and establishing those workflows, adding automation wherever possible.

Furthermore, MacMillan says teams should develop a common operating picture, with the same data and threat intelligence available to all team members working on vulnerability management.

“Everyone should operate from that common operating picture, and they all should synch,” he adds.

6. Establish, track KPIs

“To validate the effectiveness of your controls and to prove to management that it’s effective, it’s good to have metrics that report on the performance of your vulnerability management program,” says Niel Harper, ISACA board director and CISO for a large global company.

He says organizations could use any of the commonly used key performance indicators—such as percentage of critical vulnerabilities remediated on time and percentage of critical vulnerabilities not remediated on time—to measure current state and track improvement over time.

Other KPIs to use could include percentage of assets inventoried, time to detect, mean time to repair, number of incidents due to vulnerabilities, vulnerability re-open rate and number of exceptions granted.

As Harper explains: “All those will present management with an idea of how well your vulnerability management program is performing.”
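
The on-time-remediation and mean-time-to-repair figures Harper mentions are straightforward to compute once each finding carries a detection date and a fix date. A minimal sketch, with an assumed 30-day SLA and illustrative records:

```python
from datetime import date, timedelta

# Hypothetical sketch of two common KPIs. Each record is a critical
# vulnerability with a detection date and an optional fix date; the
# field names and the 30-day SLA are illustrative assumptions.
vulns = [
    {"detected": date(2022, 3, 1), "fixed": date(2022, 3, 10)},
    {"detected": date(2022, 3, 5), "fixed": date(2022, 4, 20)},
    {"detected": date(2022, 3, 8), "fixed": None},  # still open
]
SLA = timedelta(days=30)  # remediation window for critical findings

# KPI 1: percentage of critical vulnerabilities remediated on time.
on_time = sum(1 for v in vulns
              if v["fixed"] and v["fixed"] - v["detected"] <= SLA)
pct_on_time = 100 * on_time / len(vulns)

# KPI 2: mean time to repair, over closed findings only.
closed = [v for v in vulns if v["fixed"]]
mttr = sum((v["fixed"] - v["detected"]).days for v in closed) / len(closed)

print(f"critical vulns remediated on time: {pct_on_time:.0f}%")
print(f"mean time to repair: {mttr:.1f} days")
```

Trending these numbers month over month is what turns them into the management-facing story Harper describes.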

7. Benchmark

Tracking KPIs can indicate whether your own vulnerability management program is improving over time, but you’ll need to measure against other companies’ efforts to determine whether your program exceeds or falls short compared to others, Harper says.

“Benchmarking helps you to understand how you’re performing against your peers and competitors, and it also provides assurances to management that your vulnerability management program is effective,” he says. “It can also serve as a differentiator in the marketplace, which you could even use to drive the top line.”

Harper says managed service providers often have data that security teams can use for this exercise.

8. Make someone responsible and accountable for success

To have a true vulnerability management program, multiple experts say organizations must make someone responsible and accountable for its work and ultimately its successes and failures.

“It has to be a named position, someone with a leadership job but separate from the CISO because the CISO doesn’t have the time for tracking KPIs and managing teams,” says Frank Kim, founder of ThinkSec, a security consulting and CISO advisory firm, and a SANS Fellow.

Kim says larger enterprises often have enough vulnerability management work to have someone take on this role full time, but smaller and midsize companies that don’t require a full-time manager should still make this accountability work an official part of someone’s job.

“Because if you don’t give responsibility to that one person,” Kim says, “that’s where you get everyone pointing fingers at each other.”

9. Align incentives to program improvement, successes

Assigning responsibility for the program is one step, but Kim and others say organizations should also establish incentives such as bonuses tied to improving KPIs.

“And incentivize not only the teams responsible for doing the patching but the stakeholders across the organization,” Floyd says, whether those incentives are in the way of extra compensation, bonus days off, or other forms of recognition. “It’s about incentivizing and celebrating successes. It shows that this needs to be a priority.”

10. Create a bug bounty program

Salesforce awarded ethical hackers more than $2.8 million in 2021 for identifying security issues in its products, seeing this bug bounty program as an important part of managing vulnerabilities, MacMillan says.

MacMillan recommends other organizations implement bug bounty programs as part of their vulnerability management efforts. “It’s an effective way to surface problems,” he says.

Others agree. Holden, for example, says smaller organizations can set up an internal bug bounty program that rewards employees who find vulnerabilities or work with external parties or cybersecurity companies offering such services to draw on a larger pool of expertise.

11. Set expectations and adjust them over time

The number of publicly disclosed computer security flaws on the Common Vulnerabilities and Exposures (CVE) list continues to grow, with the number of new ones added annually having increased nearly every year during the past decade. There were 4,813 CVEs in 2011; in 2020 there were 11,463, according to an analysis from Kenna Security.

Given the volume, experts agree that organizations must prioritize which vulnerabilities pose the greatest risks to them so they can address those first.

Peter Chestna, CISO of North America for Checkmarx, concurs, but he also says organizations should be upfront and clear about priorities and focus their vulnerability management program on those vulnerabilities that they actually plan to address.

For example, if an organization only plans to address vulnerabilities that are rated high, why even scan for low-risk ones? Chestna says that approach can drain resources and distract teams from high-priority work, making it more likely that they miss critical issues.

“Instead, set the rules you want to follow (they have to be rules you can actually follow) and then follow them,” he says, adding that this helps organizations better focus on risk reduction. “And when we get really good at those highest priorities, then talk about opening up the flood gates.”
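
Chestna’s advice – encode the rules you can actually follow, then apply them uniformly – might look like this in practice. The severity labels and the cutoff are illustrative assumptions:

```python
# Minimal sketch of rule-based prioritization: keep only the findings
# the program has committed to remediating. Labels and cutoff are
# illustrative assumptions, not a standard scoring scheme.

SEVERITY_RANK = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def actionable(findings, minimum="high"):
    """Filter scan findings down to the severities the program will fix."""
    cutoff = SEVERITY_RANK[minimum]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= cutoff]

scan = [
    {"id": "CVE-2021-44228", "severity": "critical"},
    {"id": "CVE-2020-0001", "severity": "low"},
    {"id": "CVE-2021-34527", "severity": "high"},
]
for f in actionable(scan):
    print(f["id"], f["severity"])
```

Widening the rules later – Chestna’s “opening up the flood gates” – is then just lowering `minimum` once the team consistently clears the higher tiers.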

12. Report on the program’s performance to stakeholders, the board

In addition to keeping stakeholders within the organization informed about any patching work that could impact their access to systems, experts say the security department should report on the vulnerability management program’s overall performance—framed in business terms around risk and risk reduction.

“This is something you should actually be reporting to your board,” Floyd adds. “Hold yourself accountable.”

Who is your biggest insider threat?

Penetration testing has shown cybersecurity manager David Murphy just how problematic people can be.

In his career, he has seen people pick up and use dropped thumb drives, give up passwords over the phone and, yes, even click on simulated phishing links.

He has also seen the real-world consequences of such actions.

Murphy, manager of cybersecurity at Schneider Downs, a certified public accounting and business advisory firm, says he once investigated the root cause of a ransomware attack at a company and traced the incident back to a worker who had clicked on an invoice for pickles.

“It was unrelated to anything in his job duties. It was unrelated to anything the company does. The only reason it was clicked was because he was in the mode of opening everything. He was an insider risk just waiting to happen,” says Murphy, a former consultant for the National Security Agency (NSA) Computer Network Operations Team.

According to the 2022 Cost of Insider Threats Global study from Ponemon Institute, the overall number of insider threat incidents jumped by 44% in the past two years.

The report found that negligent insiders were the root cause of 56% of incidents, and they cost on average $484,931 per incident.

The report found that malicious or criminal insiders cost even more: $648,062 on average, with malicious or criminal insiders behind 26% of incidents.

Meanwhile, credential theft accounted for 18% of incidents in 2022, up from 14% of incidents in 2020.

Taking a multilayered approach

Security experts say simulated phishing attacks can help identify individuals who continue to click without thinking. But it’s much harder to figure out who might be vulnerable to a sophisticated social engineering attack based on information scraped from LinkedIn, who might be disgruntled enough to sell their credentials to criminal syndicates, or who has meticulous cyber hygiene when working on a laptop but isn’t suspicious of a phony text message.

Ferreting out those weak links takes a lot more work and requires the use of multiple tools in the corporate toolbox, not just the security one.

As Murphy says: “To find those insider risks, you don’t rely on one particular point.” He says, for example, that he might not be suspicious of an intern driving a Porsche, but he would be if that intern were working late nights alone and trying to access restricted accounts.

That approach fits with current security thinking.

As CISOs know, security today requires a multilayered approach that increasingly incorporates information about the users themselves. User behavior analytics, a zero trust policy, and the principle of least privilege all speak to that point, as each approach takes into account the individual user, his or her role, and his or her typical activities when considering access levels and security risks.

But some security experts are thinking beyond that and considering what personas within their organization are weak links, how to identify them, and how best to minimize their risk.

“What’s important is for the program to identify potential risk on an ongoing basis and create weightings around risk areas so when something does pop to the surface, they know to take a look,” says Jason Dury, director in cybersecurity open source solutions at Guidehouse.
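
The weighting Dury describes can be sketched as a simple scored model: assign a weight to each risk area and let unusual combinations “pop to the surface.” The signal names and weights here are purely illustrative assumptions:

```python
# Hypothetical sketch of risk-area weighting for insider-risk triage.
# Signals and weights are invented for illustration; a real program
# would tune these with HR and legal input.

WEIGHTS = {
    "after_hours_access": 2.0,
    "restricted_resource_attempts": 3.0,
    "failed_phishing_tests": 1.5,
    "recent_role_change": 1.0,
}

def risk_score(signals):
    """Weighted sum of observed risk-area signal counts for one person."""
    return sum(WEIGHTS[name] * count for name, count in signals.items())

# Murphy's example: late-night access plus restricted-account attempts.
intern = {"after_hours_access": 4, "restricted_resource_attempts": 2}
print(risk_score(intern))
```

No single signal is damning on its own; the point, as Murphy and Dury both note, is that the combination crossing a threshold is what tells analysts “to take a look.”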

A slew of potential threats

The ability to detect insider threats as well as those individuals who are either the weakest links or pose the biggest risks (depending on your perspective) is much more complicated today than it was even a decade ago, says Sarb Sembhi, CISO and CTO at Virtually Informed Ltd.

Sembhi acknowledges that data loss prevention software, network scanning tools, identity and access management platforms, and the zero trust methodology all together can significantly lower the risk of a careless or malicious insider doing harm.

But, he says, like all else in security they’re not a complete guarantee against insider threats.

Consider, he says, the risk that the internet of things presents to organizations. An employee could bring in a seemingly innocuous IoT device—a printer, perhaps—not realizing he or she is introducing an unsecured internet connection into the enterprise. “These devices are more of an insider threat than perhaps humans would be,” adds Sembhi, a member of the ISACA Emerging Trends Working Group.

Remote work further complicates the insider threat issue, as does the trend toward increasing tolerance for business units deploying their own technology, he says. Others note those factors, too, observing, for example, that a malicious or criminal insider working remotely could use a cell phone to photograph sensitive information, knowing there’s no one around to see.

Adam Goldstein, an assistant professor of cybersecurity at Champlain College and the academic director of its Leahy Center for Digital Forensics & Cybersecurity, says CISOs can categorize individuals who present additional risk into several distinct groups.

To start, he says remote workers in general can be considered a more vulnerable group. “[Workers] are on their personal machines, and there’s a different level of oversight in both what they’re doing on their computer but also in their connection to their company and their coworkers and the like,” he says.

The busiest employees as well as the ones doing multiple roles also create more risk, he says. “Being stretched thin can force people to take shortcuts they wouldn’t normally take, or have to jump into tasks or systems that they haven’t had the time to adequately train for, or have that depth of support they need,” he says.

Add to that the class of workers who still struggle to understand the technologies they use and the controls in place as well as those “who prioritize personal convenience over diligence and security,” he says.

Goldstein adds: “Those are some of the unintentional challenges that may not have anything to do with an employee’s motivations or skillset but can cause security issues.”

At the same time, Goldstein says bad actors continue to evolve their strategies, making it more likely that even a cautious individual could fall victim to a scam and expose the organization.

“A sophisticated attacker who is attempting social engineering-type attacks or coming up with schemes can catch anybody if it’s particularly well executed or if someone is distracted that day,” he says.

Bad actors have also found ways to make it easier for disgruntled or malicious employees to take action, creating channels that allow workers to sell their credentials or other organizational assets, Goldstein says. “And the risk to the insider is much less than it used to be, because they can make it look like a phishing attack, making it much harder to trace it back to that individual,” he adds.

Furthermore, those who are vulnerable due to personal factors may turn to such options, Goldstein says.

Michael Ebert, a partner in Guidehouse’s cybersecurity practice, says he worked with a company that experienced such a case, which came to light when law enforcement alerted the organization to an employee selling information. The worker had appropriate levels of access for her job but was pressured by a friend and accomplice who saw the opportunity to make quick money.

Actions to take

Such incidents highlight why CISOs should consider personas as part of their security strategy. As Ebert says: “People get caught in situations and do stupid things.” Given that reality, Ebert and others say executives should think of that potential before someone actually takes action and puts the organization at risk.

Yet he and others acknowledge that CISOs have limited ability on this front—especially if they’re working on their own.

Ebert notes, for example, that the employee in the law enforcement case had passed the company’s initial background check as well as the subsequent background checks it runs on employees every two years.

“A lot of organizations do background checks and other work during the hiring process to ensure that folks, before they join, meet certain requirements and have [security] training. But it can be hard to do with existing employees who may be going through transitions in their personal lives or develop different feelings about the organization and their role in it,” Goldstein says.

Companies in highly regulated industries have a leg up here, Goldstein says, as compliance requirements have forced security and human resources departments to work more closely to identify workers who could pose threats and to put in place the appropriate policies and procedures for dealing with such situations.

But Goldstein acknowledges that such work is a heavy lift and a task that can raise ethical questions in many organizations.

“So how do you balance protecting organizational assets and not stepping into a big brother-type approach of monitoring employees?” he asks.

Goldstein advises CISOs to run tabletop drills that involve insider threats. “Ask: What if [hackers] got this person’s credentials? And then present that to the C-suite to help them understand what the risk is.”

Dury goes further, saying that CISOs should work with other department heads—particularly HR—to identify and understand what behaviors or activities could indicate someone is a risk. “Every corporate function has a role,” he adds. “This type of program should not be done in a silo.”

But Dury and others also caution against weighting security measures too much toward the risks any particular persona or role presents.

Rather, they say, consider the potential scenarios and assess the layers of controls to ensure they’re as effective as possible at preventing any individual, regardless of motivation or circumstances, from inflicting harm.

“You have to look at individuals, and doing that extra analysis into the risk of individuals can help you understand what the risk is and whether the controls in place are adequate or whether there are areas where more investments could be made,” Goldstein explains.

Ebert points again to the law enforcement case to highlight this point, noting that, based on what he saw, the company likely could have prevented or limited the damage had they better monitored the employee’s activities.

8 keys to more effective vulnerability management

CISOs preach the need to get security fundamentals right, yet many still struggle to build a rock-solid vulnerability management program.

They can be stymied by the volume of vulnerabilities that need attention, or the pace required to address them, or the resources required to be effective.

Consider, for instance, the challenges that security teams had in addressing the Log4j vulnerabilities. A recent survey from (ISC)², a nonprofit association of certified cybersecurity professionals, found that 52% of respondents spent weeks or more than a month remediating Log4j.

Granted, the scope of the flaw is significant, but experts still say that the figure, along with other research and their own observations, shows many organizations are still maturing the processes they use to identify, prioritize, and fix security problems within the software they have.

The following best practices can help on that journey toward building an effective and efficient vulnerability management program.

Know your environment

Security experts stress the need for CISOs to have an accurate inventory of the tech environment they need to protect; this helps them know whether known and newly identified vulnerabilities exist within their technology stack.

That, however, remains easier said than done.

“Everyone says they have an inventory, but they usually need to go a little deeper. They don’t know what’s running under the covers. That continues to be the biggest challenge,” says Jorge Orchilles, a certified instructor with cybersecurity training firm SANS Institute, CTO of security tech company Scythe, and co-creator of the C2 Matrix project. “We’re getting compromised on things we simply didn’t know we had.”

He says he has seen mature security operations account for the major components of their environment yet overlook smaller elements and the code itself, an oversight that can—and, in fact, has—left critical vulnerabilities unpatched.

Orchilles advises security leaders to ensure that they have a detailed record of their tech environment, one that includes all components such as programming libraries (something that proved essential for organizations patching the Log4j vulnerability). Moreover, he says CISOs must be diligent about updating that record “whenever you put a new system out there.”
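As a minimal illustration of that “go deeper” advice, an inventory can enumerate the libraries actually installed in an environment rather than relying on a top-level asset list. The sketch below is Python-specific, and the known-vulnerable package list is invented for illustration, not a real advisory feed:

```python
# Sketch: enumerate every Python library installed in this environment
# and flag any that match a known-vulnerable name/version pair.
# The VULNERABLE dict is illustrative, not real advisory data.
from importlib import metadata

VULNERABLE = {
    "log4j-shim": "2.14.1",  # hypothetical package/version pair
}

def inventory():
    """Return {package: version} for every installed distribution."""
    return {(d.metadata["Name"] or "unknown").lower(): d.version
            for d in metadata.distributions()}

def flag_vulnerable(installed):
    """Return only the installed packages matching the vulnerable list."""
    return {name: ver for name, ver in installed.items()
            if VULNERABLE.get(name) == ver}

if __name__ == "__main__":
    inv = inventory()
    print(f"{len(inv)} packages installed")
    for name, ver in flag_vulnerable(inv).items():
        print(f"VULNERABLE: {name}=={ver}")
```

The same idea applies per language: what matters is that the record covers dependencies, not just hosts and applications.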

Have a real program (not just ad hoc work)

Scanning for vulnerabilities and remediating whatever turns up may seem sufficient, but advisors say that ad hoc approach is both inefficient and inadequate.

For example, security teams could spend valuable time patching vulnerabilities that pose a limited threat to their organizations instead of prioritizing a high-risk issue. Or they may get bogged down in other projects and postpone vulnerability management work until their schedules free up.

To prevent such scenarios, CISOs should have a programmatic approach to vulnerability management, one that incorporates their organization’s tolerance for risk as well as its processes for prioritizing, remediating, and mitigating identified vulnerabilities, says Bryce Austin, who as CEO of TCE Strategy serves as a cybersecurity expert, risk consultant, and fractional CISO.

The program should also establish how often the organization performs vulnerability scans, and it should include schedules tied to vendors’ patch release dates.
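As one illustration of tying scans to vendor patch cycles, a schedule can be computed rather than maintained by hand. The sketch below derives the second Tuesday of a month, modeled on Microsoft’s Patch Tuesday; the cadence itself is an assumed policy choice, not something the program mandates:

```python
# Sketch: compute a recurring scan/patch date anchored to a vendor's
# release cadence (here, the second Tuesday of the month, as with
# Microsoft's Patch Tuesday).
import calendar
from datetime import date

def second_tuesday(year, month):
    """Return the date of the second Tuesday of the given month."""
    cal = calendar.Calendar()
    tuesdays = [d for d in cal.itermonthdates(year, month)
                if d.weekday() == 1 and d.month == month]  # Tuesday == 1
    return tuesdays[1]

print(second_tuesday(2022, 6))  # -> 2022-06-14
```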

A good vulnerability management program should have defined processes and policies, a chartered team, and governance, adds Farid Abdelkader, managing director of technology risk, IT audit & cybersecurity services at consulting firm Protiviti and president of the New York Metropolitan chapter of ISACA.

Abdelkader also advises CISOs to determine what “good looks like” using key performance indicators that can show how well they’re performing, identify areas for improvement, and then indicate progress over time.
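Those indicators need not be elaborate. Here is a sketch of two common ones, mean time to remediate and SLA compliance, computed over a hand-written set of remediation records; the field names and SLA thresholds are assumptions, not a standard schema:

```python
# Sketch: two common vulnerability-management KPIs over hypothetical
# remediation records. Field names and SLA day counts are assumptions.
from datetime import date

records = [
    {"severity": "critical", "opened": date(2022, 3, 1), "closed": date(2022, 3, 4)},
    {"severity": "high",     "opened": date(2022, 3, 1), "closed": date(2022, 3, 20)},
    {"severity": "critical", "opened": date(2022, 4, 2), "closed": date(2022, 4, 30)},
]

SLA_DAYS = {"critical": 7, "high": 30}  # illustrative policy targets

def mean_time_to_remediate(recs):
    """Average days from discovery to fix."""
    days = [(r["closed"] - r["opened"]).days for r in recs]
    return sum(days) / len(days)

def sla_compliance(recs):
    """Fraction of vulnerabilities fixed within their severity's SLA."""
    met = sum((r["closed"] - r["opened"]).days <= SLA_DAYS[r["severity"]]
              for r in recs)
    return met / len(recs)

print(f"MTTR: {mean_time_to_remediate(records):.1f} days")
print(f"SLA compliance: {sla_compliance(records):.0%}")
```

Tracked over time, the same two numbers show whether the program is improving, stalling, or slipping.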

Moreover, organizations with mature vulnerability management programs have a process for reporting their activities to enterprise executives so that they understand the importance of the program as well as its track record, says Austin, author of Secure Enough? 20 Questions on Cybersecurity for Business Owners and Executives.

That, he notes, helps ensure there’s effective oversight and that vulnerability management is treated like any other business risk within the organization.

Customize to the organization’s own risks

New vulnerabilities are constantly being identified. Combine those with the number of existing known vulnerabilities, and the volume of issues to be fixed becomes nearly impossible to tackle. So it’s critical to have a way to pinpoint the vulnerabilities that matter most and to prioritize remediation work, Abdelkader says.

“Understand the criticality of an incident. [Ask] what happens if there’s a breach? How does that impact the data? Or if systems go down? What kind of impact would that have to our business, our customers, or our reputation?” Abdelkader says. “Understand the true risk to those assets and the actual risk of those things happening.”

That work can be guided by the classifications—high, medium, low—offered by vulnerability scans, vendors, and other security outlets, but the process should account for the organization’s tolerance for risk, its technical environment, its industry, etc.

“The definition of critical must be interpreted,” Austin explains. “It has to be in the context of your company, of your critical assets and resources, your data, how much exposure a computer or system has to a critical threat.”

As he points out, a vulnerability present in an isolated, internal-only system poses a different level of risk than one in an internet-facing system; thus, each one should get a different level of prioritization for remediation that corresponds with its own risk.
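That contextual interpretation can be expressed as a simple scoring rule: weight the raw CVSS base score by the asset’s exposure and business criticality. The multipliers below are illustrative policy choices, not part of the CVSS standard:

```python
# Sketch: adjust a raw CVSS base score by asset context, so the same
# flaw ranks differently on an internet-facing system than on an
# isolated internal one. Weights are illustrative, not standardized.
EXPOSURE_WEIGHT = {"internet": 1.0, "internal": 0.6, "isolated": 0.3}
CRITICALITY_WEIGHT = {"crown-jewel": 1.0, "standard": 0.7, "low-value": 0.4}

def priority(cvss_base, exposure, asset_criticality):
    """Return a 0-10 contextual priority score."""
    return round(cvss_base
                 * EXPOSURE_WEIGHT[exposure]
                 * CRITICALITY_WEIGHT[asset_criticality], 1)

# The same 9.8 "critical" ranks very differently depending on context:
print(priority(9.8, "internet", "crown-jewel"))  # -> 9.8
print(priority(9.8, "isolated", "low-value"))    # -> 1.2
```

Even a crude rule like this forces the conversation Austin describes: what counts as critical for this company, on this system.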

That customization and prioritization based on an organization’s own risk profile doesn’t always happen, though, he adds. “I see a lot of vulnerability management programs start with a list of what the vulnerability scans find vs. what the critical risks to the organization actually are and what the company really cares about.”

Revisit risks and priorities

Establishing the organization’s risk tolerance as well as creating a process for prioritizing work are both essential for a strong vulnerability management program. But those tasks can’t be viewed as one and done.

They should be revisited at least annually as well as anytime there’s a major change within the organization or its IT environment, Austin adds.

Use frameworks, systems

There’s no need to reinvent the wheel for vulnerability management tasks, because various organizations have developed frameworks and other systems to aid CISOs in managing them, says Jon Baker, co-founder and acting director of research and development of MITRE Engenuity’s Center for Threat-Informed Defense.

“These provide ways for defenders to look at vulnerabilities and understand how an adversary might use them, so you can use them to prioritize vulnerabilities and your responses,” Baker says.

MITRE, for one, has its Common Vulnerabilities and Exposures (CVE) system, which since 1999 has provided information on publicly known vulnerabilities and exposures (just as its name states) and has associated specific versions of code bases with those vulnerabilities.

There’s also NIST Special Publication 800-30, which organizations can use for conducting risk assessments.

Then there’s the Common Vulnerability Scoring System (CVSS), an open framework that organizations can use for assessing the severity of security vulnerabilities so they can be prioritized according to the level of threat.

MITRE also has its ATT&CK framework (which leverages CVE) that organizations can use to prioritize the vulnerabilities that need their attention as part of a comprehensive threat-informed defense strategy.

Consider vulnerabilities introduced by direct suppliers, third parties

As CISOs know, the Log4j vulnerability was so problematic in part because the Log4j library is so prevalent, existing in countless applications developed both by enterprise IT teams and by software vendors.

That threat, which surfaced in late 2021, also made clear to CISOs the need to understand, assess, prioritize, and mitigate vulnerabilities that exist within their vendors’ products and/or are introduced by third parties, Baker says.

He acknowledges that enterprise security teams run into challenges here, as they often may not know what vulnerabilities exist within vendor solutions and may not even be able to run vulnerability scans on those systems.

“We really lack transparency into the code and tools that are leveraged by the systems we rely on,” he says, noting that the software bill of materials (SBOM), the list of components in a piece of software, can provide some visibility in some cases.
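Where an SBOM does exist, extracting the component list is straightforward. The sketch below reads a minimal hand-written CycloneDX-format document (the real format carries many more fields) and checks it for a library of interest:

```python
# Sketch: pull the component list out of a minimal CycloneDX-format
# SBOM (JSON) and check it for a library of interest. The document
# here is hand-written for illustration, not real vendor output.
import json

sbom_json = """{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "jackson-databind", "version": "2.13.0"}
  ]
}"""

def components(sbom):
    """Return {name: version} for every component in the SBOM."""
    return {c["name"]: c["version"] for c in sbom.get("components", [])}

sbom = json.loads(sbom_json)
comps = components(sbom)
if "log4j-core" in comps:
    print(f"ships log4j-core {comps['log4j-core']}")
```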

Baker says he advises CISOs to review the agreements they have with vendors regarding the vendors’ role in managing vulnerabilities within their products and then, if necessary, seek to insert contractual language that limits the chances of a bug going unnoticed or unfixed.

“That’s part of your vulnerability management program: understanding how providers and third parties are tracking and prioritizing and patching vulnerabilities,” he says.

Establish checks and balances

Another best practice: Don’t assign vulnerability management to the IT team. Rather, security experts say the CISO should have a dedicated individual or team tasked with identifying vulnerabilities and prioritizing fixes as well as overseeing execution of remediation and mitigation.

“They need someone working in healthy tension with the teams doing the actual patching, because it is too easy for the infrastructure people, whose job is going to be made harder by more patching, not less, not to be rigorous with the vulnerability scans [and the follow-up work]. It’s just human nature,” Austin says. “Any self-policing function is much, much more vulnerable to apathy or, worst case, corruption. You need checks and balances.”

Others agree, noting that CISOs can opt for a managed security service provider (MSSP) to run their vulnerability management program and then work with internal infrastructure, engineering, and/or devops teams to execute the patches and handle any needed downtime and required testing.

Invest in tools, teams

Security experts stress that effective vulnerability management, like all else in security, needs the right people, processes, and technology.

They note that many organizations have pieces of all those but don’t always have all three working effectively together. For example, security teams typically have scanning tools but may not have introduced the automation needed to efficiently handle the workload, Baker says.

Moreover, Orchilles says CISOs and their organizations must commit to providing the resources required to make those teams successful “so it’s not a fire drill every month.”

Yes, he says, such advice might seem intuitive, but it isn’t always followed. For example, he has seen CISOs invest in a new tool but not in the staff needed to run the technology, the training needed to maximize its use, or the change management required.

“Tools won’t work without all the rest of that,” he adds.