Penetration testing has shown cybersecurity manager David Murphy just how problematic people can be.
In his career, he has seen people pick up and use dropped thumb drives, give up passwords over the phone and, yes, even click on simulated phishing links.
He has also seen the real-world consequences of such actions.
Murphy, manager of cybersecurity at Schneider Downs, a certified public accounting and business advisory firm, says he once investigated the root cause of a ransomware attack at a company and traced the incident back to a worker who had clicked on an invoice for pickles.
“It was unrelated to anything in his job duties. It was unrelated to anything the company does. The only reason it was clicked was because he was in the mode of opening everything. He was an insider risk just waiting to happen,” says Murphy, a former consultant for the National Security Agency (NSA) Computer Network Operations Team.
The Ponemon Institute's 2022 Cost of Insider Threats Global Report bears out the danger. It found that negligent insiders were the root cause of 56% of incidents, at an average cost of $484,931 per incident.
Malicious or criminal insiders were behind 26% of incidents and cost even more: $648,062 on average.
Meanwhile, credential theft accounted for 18% of incidents in 2022, up from 14% in 2020.
Taking a multilayered approach
Security experts say simulated phishing attacks can help identify individuals who continue to click without thinking. But it’s much harder to figure out who might be vulnerable to a sophisticated social engineering attack based on information scraped from LinkedIn, who might be disgruntled enough to sell their credentials to criminal syndicates, or who has meticulous cyber hygiene when working on a laptop but isn’t suspicious of a phony text message.
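Identifying repeat clickers from simulated phishing campaigns comes down to counting click events per user across campaigns. A minimal sketch of that aggregation, assuming a hypothetical `(user, campaign_id)` event feed from a phishing-simulation platform:

```python
from collections import Counter

def flag_repeat_clickers(click_events, threshold=2):
    """Count simulated-phishing clicks per user and return those who
    clicked in `threshold` or more campaigns (hypothetical data shape)."""
    clicks = Counter(user for user, _campaign in click_events)
    return sorted(user for user, n in clicks.items() if n >= threshold)

# Example feed: alice clicked in three campaigns, bob in two, carol in one.
events = [
    ("alice", "q1"), ("bob", "q1"),
    ("alice", "q2"), ("carol", "q2"),
    ("alice", "q3"), ("bob", "q3"),
]
print(flag_repeat_clickers(events))  # → ['alice', 'bob']
```

The simple count catches habitual clickers; the harder cases the experts describe below need signals that no single tool collects.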
Ferreting out those weak links takes a lot more work and requires the use of multiple tools in the corporate toolbox, not just the security one.
As Murphy says: “To find those insider risks, you don’t rely on one particular point.” He says, for example, that he might not be suspicious of an intern driving a Porsche, but he would be if that intern were also working late nights alone and trying to access restricted accounts.
That approach fits with current security thinking.
As CISOs know, security today requires a multilayered approach that increasingly incorporates information about the users themselves. User behavior analytics, a zero trust policy, and the principle of least privilege all speak to that point, as each approach takes into account the individual user, their role, and their typical activities when considering access levels and security risks.
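The principle of least privilege reduces to a simple check: each role is granted only the resources its duties require, and anything outside that set is denied. A minimal sketch, with hypothetical roles and resource names:

```python
# Hypothetical role-to-permission map: each role gets only the
# resources its duties require (least privilege).
ROLE_PERMISSIONS = {
    "intern": {"wiki", "ticketing"},
    "accountant": {"wiki", "ticketing", "ledger"},
    "admin": {"wiki", "ticketing", "ledger", "user_admin"},
}

def is_allowed(role: str, resource: str) -> bool:
    """Deny by default; unknown roles get no access."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("intern", "ledger"))      # → False: outside the intern's role
print(is_allowed("accountant", "ledger"))  # → True
```

In a real deployment the denied attempts themselves become a signal: an intern repeatedly probing the ledger is exactly the pattern Murphy describes.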
But some security experts are thinking beyond that and considering what personas within their organization are weak links, how to identify them, and how best to minimize their risk.
“What’s important is for the program to identify potential risk on an ongoing basis and create weightings around risk areas so when something does pop to the surface, they know to take a look,” says Jason Dury, director in cybersecurity open source solutions at Guidehouse.
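The weighting approach Dury describes can be sketched as a weighted sum of observed risk signals per user, surfacing anyone whose score crosses a review threshold. The risk areas, weights, and threshold below are hypothetical placeholders:

```python
# Illustrative weights per risk area (hypothetical values — a real
# program would tune these to the organization's own risk areas).
WEIGHTS = {
    "phish_clicks": 3.0,
    "after_hours_logins": 2.0,
    "denied_access_attempts": 4.0,
}

def risk_score(signals: dict) -> float:
    """Weighted sum of observed risk signals for one user."""
    return sum(WEIGHTS.get(area, 0.0) * count for area, count in signals.items())

def surfaced(users: dict, threshold: float = 10.0) -> list:
    """Users whose score pops above the review threshold."""
    return [user for user, signals in users.items() if risk_score(signals) >= threshold]

users = {
    "intern_a": {"after_hours_logins": 5, "denied_access_attempts": 2},  # score 18.0
    "analyst_b": {"phish_clicks": 1},                                    # score 3.0
}
print(surfaced(users))  # → ['intern_a']
```

A score crossing the threshold is a cue to take a look, as Dury puts it, not a verdict; the human review stays in the loop.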
A slew of potential threats
The ability to detect insider threats as well as those individuals who are either the weakest links or pose the biggest risks (depending on your perspective) is much more complicated today than it was even a decade ago, says Sarb Sembhi, CISO and CTO at Virtually Informed Ltd.
Sembhi acknowledges that data loss prevention software, network scanning tools, identity and access management platforms, and the zero trust methodology all together can significantly lower the risk of a careless or malicious insider doing harm.
But, he says, like everything else in security, they are not a complete guarantee against insider threats.
Consider, he says, the risk that the internet of things presents to organizations. An employee could bring in a seemingly innocuous IoT device—a printer, perhaps—not realizing he or she is introducing an unsecured internet connection into the enterprise. “These devices are more of an insider threat than perhaps humans would be,” adds Sembhi, a member of the ISACA Emerging Trends Working Group.
Remote work further complicates the insider threat issue, as does the growing tolerance for business units deploying their own technology, he says. Others note those factors too, citing, for example, that a malicious or criminal insider working remotely could use a cell phone to photograph sensitive information, knowing there’s no one around to see.
Adam Goldstein, an assistant professor of cybersecurity at Champlain College and the academic director of its Leahy Center for Digital Forensics & Cybersecurity, says CISOs can categorize individuals who present additional risk into several distinct groups.
To start, he says remote workers in general can be considered a more vulnerable group. “[Workers] are on their personal machines, and there’s a different level of oversight in both what they’re doing on their computer but also in their connection to their company and their coworkers and the like,” he says.
The busiest employees as well as the ones doing multiple roles also create more risk, he says. “Being stretched thin can force people to take shortcuts they wouldn’t normally take, or have to jump into tasks or systems that they haven’t had the time to adequately train for, or have that depth of support they need,” he says.
Add to that the class of workers who still struggle to understand the technologies they use and the controls in place as well as those “who prioritize personal convenience over diligence and security,” he says.
Goldstein adds: “Those are some of the unintentional challenges that may not have anything to do with an employee’s motivations or skillset but can cause security issues.”
At the same time, Goldstein says bad actors continue to evolve their strategies, making it more likely that even a cautious individual could fall victim to a scam and expose the organization.
“A sophisticated attacker who is attempting social engineering-type attacks or coming up with schemes can catch anybody if it’s particularly well executed or if someone is distracted that day,” he says.
Bad actors have also found ways to make it easier for disgruntled or malicious employees to take action, creating channels that allow workers to sell their credentials or other organizational assets, Goldstein says. “And the risk to the insider is much less than it used to be, because they can make it look like a phishing attack, making it much harder to trace it back to that individual,” he adds.
Furthermore, employees made vulnerable by personal circumstances may turn to such options, Goldstein says.
Michael Ebert, a partner in Guidehouse’s cybersecurity practice, says he worked with a company that experienced such a case, which came to light when law enforcement alerted the organization to an employee selling information. The worker had appropriate levels of access for her job but was pressured by a friend and accomplice who saw the opportunity to make quick money.
Actions to take
Such incidents highlight why CISOs should consider personas as part of their security strategy. As Ebert says: “People get caught in situations and do stupid things.” Given that reality, Ebert and others say executives should think of that potential before someone actually takes action and puts the organization at risk.
Yet he and others acknowledge that CISOs have limited ability on this front—especially if they’re working on their own.
Ebert notes, for example, that the employee in the law enforcement case had passed the company’s initial background check as well as the subsequent background checks it runs on employees every two years.
“A lot of organizations do background checks and other work during the hiring process to ensure that folks, before they join, meet certain requirements and have [security] training. But it can be hard to do with existing employees who may be going through transitions in their personal lives or develop different feelings about the organization and their role in it,” Goldstein says.
Companies in highly regulated industries have a leg up here, Goldstein says, as compliance requirements have forced security and the human resources departments to work more closely to identify workers who could pose threats and to have the appropriate policies and procedures for dealing with such situations.
But Goldstein acknowledges that such work is a heavy lift and a task that can raise ethical questions in many organizations.
“So how do you balance protecting organizational assets and not stepping into a big brother-type approach of monitoring employees?” he asks.
Goldstein advises CISOs to run tabletop drills that involve insider threats. “Ask: What if [hackers] got this person’s credentials? And then present that to the C-suite to help them understand what the risk is.”
Dury goes further, saying that CISOs should work with other department heads—particularly HR—to identify and understand what behaviors or activities could indicate someone is a risk. “Every corporate function has a role,” he adds. “This type of program should not be done in a silo.”
But Dury and others also caution against weighting security measures too much toward the risks any particular persona or role presents.
Rather, they say, consider the potential scenarios and assess the layers of controls to ensure they’re as effective as possible in preventing any individual—regardless of motivations or circumstances—from inflicting harm.
“You have to look at individuals, and doing that extra analysis into the risk of individuals can help you understand what the risk is and whether the controls in place are adequate or whether there are areas where more investments could be made,” Goldstein explains.
Ebert points again to the law enforcement case to highlight this point, noting that, based on what he saw, the company likely could have prevented or limited the damage had it better monitored the employee’s activities.