Information security vs cyber security: distinguishing the expertise

It’s vital for organisations to understand where differences lie, and where common ground may reside.

David Steele, managing director and principal security consultant at SecuriCentrix, identifies the differences between information security and cyber security

The terms ‘information security’ (often shortened to infosec) and ‘cyber security’ are often used interchangeably, but they should be viewed as distinct areas of expertise within organisations.

The overarching field is really information security, which covers all information, be it physical or electronic. Cyber security, meanwhile, is a sub-speciality dealing specifically with the access and protection of electronic data.

What is information security?

When we think of data in this age of tech, we generally think of electronic information. But valuable information still exists in the physical realm — on paper and in files, locked in warehouses, or on removable disks, for example.

Information security looks chiefly at protecting the integrity of information — all information, be it physical or in the form of bits and bytes in a database. Its focus is on guarding the integrity, confidentiality, and availability of data, including securing the physical environment where information is held.

NIST defines information security as: “The protection of information and information systems from unauthorised access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability”.

A good information security strategy should include a range of controls, covering procedural controls, access controls, and controls embedded within technology.

Procedural controls help detect and prevent security risks in the physical environment, covering areas such as filing cabinets, access to data centres, and computer systems. They can also stretch to awareness-building and education on issues such as compliance and response plans.

Access controls determine who is allowed access to which data, and can cover both physical and virtual access.

Technical controls include tools like multi-factor authentication for users, firewalls, and antivirus software.
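
As a small illustration of one such technical control, the sketch below verifies a time-based one-time password (TOTP) as a second authentication factor. It is a minimal sketch, assuming the third-party pyotp library; the secret and code values are placeholders.

```python
# Minimal sketch of a technical control: verifying a time-based one-time
# password (TOTP) as a second authentication factor.
# Assumes the third-party `pyotp` library; values are illustrative only.
import pyotp

def verify_second_factor(shared_secret: str, submitted_code: str) -> bool:
    """Return True if the user's six-digit code matches the expected TOTP."""
    totp = pyotp.TOTP(shared_secret)
    # valid_window=1 tolerates one 30-second step of clock drift
    return totp.verify(submitted_code, valid_window=1)

# Example: a secret provisioned at enrolment, then a code typed in at login
# secret = pyotp.random_base32()
# verify_second_factor(secret, "123456")
```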

What is cyber security?

NIST’s definition of cyber security can be distilled into a single nugget: “The ability to protect or defend the use of cyberspace from cyber attacks”.

But if you’re in the mood for the full definition, here it is too: “The prevention of damage to, protection of, and restoration of computers, electronic communications systems, electronic communications services, wire communication, and electronic communication, including information contained therein, to ensure its availability, integrity, authentication, confidentiality, and nonrepudiation.”

While it’s considered just one area of information security, cyber security has become the most important: as the value of data increases, so does its desirability.

It’s no secret that criminal activity around data theft is on the rise, with attacks ranging from aggressive hacking into systems to more subversive approaches: luring people in with emotional manipulation to gain access to data, or dumping malicious software onto devices and making off with the data.

Cyber security deals with information that exists in digital form, which is what makes it a distinct sub-speciality within information security: its remit is the protection of digital information, systems, and networks.

Cyber security is about ensuring that all data in cyberspace is kept safe. A solid cyber security strategy will look at network security, application security, cloud security, and critical security infrastructure.

  • Network security aims to protect the network from unauthorised access, possible interference, or interruption of service.
  • Application security is about protecting and patching applications to ensure neither code nor data is breached or stolen.
  • Cloud security refers to policies, controls, and technologies to protect the virtualised IP, data, services and cloud-based infrastructure.
  • Critical security infrastructure covers the underlying systems, such as virus scanners, intrusion prevention software, anti-malware tools, and so on.

Common ground

The protection of confidentiality and integrity, and the provision of availability, are areas of cross-over between infosec and cyber security.

Integrity refers to guarding data against improper modification or destruction, and ensuring information non-repudiation and authenticity over its lifecycle. Confidentiality is about safeguarding the personal privacy aspect of data, maintaining restrictions on access and disclosure. Ensuring availability means that data can be accessed in a timely and reliable manner for its intended use.

Both areas take the physical security of assets into account. The concern with physical access to data assets or paper-based information is a generally relevant security matter. With electronic data, the tools to control access are perhaps more sophisticated than a padlock, but physical access to the technology needs to be controlled too.

Although all data is a highly valuable resource, organisations hold a hierarchy of more and less valuable information. Both infosec and cyber security, as areas of expertise, focus on protecting the information that is shared.

But for those who work in information security, the main concern is to shield company data from any sort of unauthorised access. For the cyber security specialists though, the aim is to protect particularly sensitive data from unauthorised electronic access. Data is prioritised, and cyber security experts will determine how best to protect what is most important.

This plays into the cyber risk management strategy that organisations develop to protect and monitor data activity, the design of which must involve both areas.

Written by David Steele, managing director and principal security consultant at SecuriCentrix

Related:

Overcoming the biggest cyber security staff challenges — Andrew Rose, resident CISO EMEA at Proofpoint, discusses the biggest cyber security staff challenges facing organisations, and how to overcome them.

How to boost internal cyber security training — This article will explore how organisations can boost their cyber security training initiatives to ensure staff are sufficiently equipped with the right skills.


How the Online Safety Bill can make the Internet safer for all

It may be a good starting point, but aspects of the new legislation need ironing out.

Martin Wilson, CEO of Digital Identity Net, discusses how the Online Safety Bill can help to make the Internet safer for all users

The Online Safety Bill is a landmark piece of legislation designed to lay down in law a set of rules about how online platforms should behave to better protect their customers and users.

It aims to:

  • prevent the spread of illegal content and activity such as images of child abuse, terrorist material and hate crimes, including racist abuse;
  • protect children from harmful material;
  • protect adults from legal – but harmful – content.

The bill was introduced in the House of Commons on the 17th March 2022, having been scrutinised by the joint parliamentary committee over several months and reviewed by the Department for Digital, Culture, Media and Sport (DCMS).

Even before its introduction, various parts of the bill were drip-fed via the media, such as measures to protect people from anonymous trolls, protect children from pornography and stamp out illegal content. Each development was met with intense scrutiny.

And since its introduction, this has continued, with many current and former politicians, tech execs and business leaders sharing their views on the bill, which the UK government has described as ‘another important step towards ending the damaging era of tech self-regulation’.

But the big question is: will the bill protect people online and hold tech giants to account?

Important step towards a safer internet

The bill has broadly been accepted as a good starting point for proposed updates to rules that have needed to be changed for a long time. These rules are now much clearer and, therefore, should be easier to police.

At last, big tech will be held accountable as the bill imposes a duty of care on social media platforms to protect users from harmful content, at the risk of a substantial fine brought by Ofcom, the communications industry regulator implementing the act.

It’s a step towards making the Internet a safer, collaborative place for all users, rather than leaving it in its current ‘Wild West’ state, where many people are vulnerable to abuse, fraud, violence and in some cases even loss of life.

Lacking clarity

When you get into the nitty gritty of it, there is some language that could be tightened and issues which need ironing out.

For example, the bill needs to be more specific about the balance between freedom of speech and how people are protected from online abuse.

While fraud is mentioned, it is often lost amongst the headline-catching underage access to porn and abuse. Fraud is an epidemic in the UK and needs to be a central part of the bill.

An initial issue I had with the earlier version of the bill is that it positions algorithms that can spot and deal with abusive content as the main solution. This does not prevent the problem; it merely enables action to be taken after the event.

Arguably in recognition of this, the UK Government recently added the introduction of user verification on social media. This will enable people to choose to see only content from users who have verified they are who they say they are, which is welcome.

But the Government isn’t clear on what those accounts look like and its suggestions on how people can verify their identity are flawed. The likes of passports and sending a text to a smartphone simply aren’t fit for the digital age.

Account options

In my view, there should be three account options for social media users:

  • Anonymous accounts — available for those who need them, e.g. whistleblowers, journalists or people under threat. There will still be a minority who use this for nefarious reasons, but that is a necessary price to pay to maintain anonymity for those who need it. These bad actors would be the focus of AI used to identify and remove content and to hold the platforms to account.
  • Verified, orthonymous accounts — accounts that use a real name online (e.g. LinkedIn) and are linked to a verified person.
  • Verified, pseudonymous accounts — accounts that use an online name that does not necessarily identify the actual user to peers on the network (e.g. some Twitter accounts), but are linked to a verified identity through the services of an independent third-party provider.

Leaving identification in the hands of the social media platforms would only enable them to further exploit personal information for their own gain, and would not engender the security and trust a person needs to use such a service. The beauty of this approach is that it remains entirely voluntary: it is in the control of each individual to choose whether to verify themselves or continue to engage in the anonymous world we currently live in.

We expect that most users would choose to interact only with verified accounts if such a service were available, so the abuse and bile from anonymous, unverified accounts could be turned off. After all, who doesn’t want a nicer Internet where there are no trolls or scammers?

Verifying users

In terms of verification, the solution is a simple one. Let’s look to digital identity systems, which let people prove who they are without laborious and potentially unreliable manual identity checks.

Using data from the banks, which have already verified 98% of the UK adult population, social media firms can ensure their users are who they say they are, while users share only the data they want to, so protecting their privacy. This system can also protect underage people from age-restricted content.
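
To illustrate the principle of sharing only the data a service needs, the sketch below models a hypothetical identity assertion from a bank-backed provider. No real provider’s API is implied; the class and field names are invented for the example.

```python
# Illustrative only: a hypothetical assertion returned by a bank-backed
# identity provider. No real provider API is implied.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class IdentityAssertion:
    subject_id: str                 # opaque, provider-issued identifier
    verified: bool                  # matched to a bank-verified identity
    attributes: dict = field(default_factory=dict)  # only consented attributes

def can_register(assertion: IdentityAssertion) -> bool:
    """The relying service checks the claims it needs and nothing more."""
    return assertion.verified and assertion.attributes.get("over_18", False)

# The user shares an age flag, not a date of birth or home address
assertion = IdentityAssertion("opaque-123", True, {"over_18": True})
print(can_register(assertion))  # True
```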

Such digital identity systems already exist in countries such as Belgium, Norway and Sweden and have seen strong adoption and usage for a range of use cases. There is of course no suggestion that such a service will eradicate online abuse all on its own, but it would certainly be a big step in the right direction.

The Online Safety Bill is certainly a progressive move. While this type of legislation is being discussed in different countries, the UK is now leading the charge, and its approach is consistent with those being considered around the world.

However, the Government can’t win this fight on its own. It needs buy-in from social media firms, banks, businesses and consumers. Through collaboration and adopting the right tools, we can help make the Internet and social media platforms a safer place for all.

Written by Martin Wilson, CEO of Digital Identity Net

Related:

What the new Online Safety Bill updates will mean for users — As the updated Online Safety Bill looks to make the UK “the safest place to go online”, we explore what the new legislation could mean for users.

Could social media networks pave the way towards stronger authentication? — John Gilbert, general manager UK&I at Yubico, discusses whether social media networks could pave the way towards stronger authentication.

The security implications of the hybrid working mega-trend

Now more than ever, having zero trust infrastructure in place is essential.

With hybrid working looking set to continue long term across the tech industry, Kevin Peterson, senior cyber security strategist at Xalient, explores the security implications that could come with this mega-trend

Remote and hybrid working patterns have extended the corporate world into every home and user device, and as the global pandemic recedes, this is a trend that is here for the long term. In fact, it is hard to overstate the pace and extent of digital transformation undergone by the enterprise environment in the past two years. As 2022 rolls on, the daily working experience for employees looks very different to the way it looked before the pandemic.

Why “the network” has become irrelevant

Now that the hybrid environment has evolved, employees can be anywhere: in the office, at home, on a train or in a coffee shop. From a security point of view, locking down the enterprise perimeter and securing network access is no longer what matters; to some extent the network has become almost irrelevant. Instead, the focus is now on securing applications. At the same time, organisations need to harness the power of applications, and employees need to be highly productive, with fast and easy access to the applications they need to do their jobs. This is not only essential; it is foundational to becoming a modern digitised business. To enable this environment, businesses need reliable network access from the edge to the core, and security based on a zero trust model, to ensure robust, efficient and secure access to essential business applications from wherever employees are located.

As enterprises have accelerated their digital transformation initiatives, the number of possible attack vectors has grown, as digital systems need to have multiple access points for customers, partners, and employees, and this has created a vastly expanded attack surface. As a result, cyber crime has escalated, and a record-breaking number of data breaches of increasing sophistication and severity are taking place year-on-year.

Operating on a zero trust basis

The stark reality is that this new hybrid workforce brings an increasing level of risk. With work happening at home, in the office, and almost anywhere, and cyber attacks surging, security must be the same no matter who, what, when, where and how business applications are being accessed. Now that the security control organisations once had has quite literally left the building, it is critical that each and every connection operates on a zero trust basis. Cyber security leaders have historically called this “default deny”, which it still is. Only now, thanks to cloud platforms that tie user and device identity into the equation, the controls to make it a reality are both scalable and elegant.

What we mean by zero trust is that organisations effectively eliminate implicit trust from their IT systems, and this is replaced or embodied by the maxim ‘never trust, always verify’. In practice this means only trust those who have appropriate authority to access. Zero trust recognises that internal and external threats are pervasive, and the de facto elimination of the traditional network perimeter requires a different security approach. Every device, user, network, and application flow should be checked to remove excessive access privileges and any other potential threat vectors.
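
As a rough sketch of what “never trust, always verify” can look like in code, the example below evaluates every request against user identity, device posture and the specific application, regardless of the network it arrives from. The attribute names and policy are invented for illustration; real deployments typically delegate this to an identity-aware proxy or policy engine.

```python
# Rough sketch of a per-request zero trust decision. Attribute names and
# policy rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_passed: bool
    device_compliant: bool   # e.g. disk encrypted, security agent healthy
    application: str

# Least-privilege entitlements: which users may reach which applications
ENTITLEMENTS = {
    "alice": {"payroll", "email"},
    "bob": {"email"},
}

def authorize(req: AccessRequest) -> bool:
    """Default deny: grant access only when every check passes."""
    return (
        req.mfa_passed
        and req.device_compliant
        and req.application in ENTITLEMENTS.get(req.user_id, set())
    )

# The decision is identical whether the request comes from the office LAN,
# home broadband or coffee-shop Wi-Fi: the network is not part of the check.
print(authorize(AccessRequest("alice", True, True, "payroll")))  # True
print(authorize(AccessRequest("bob", True, False, "email")))     # False
```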

Nevertheless, working with a remote workforce isn’t a new concept. There are plenty of visionary enterprise organisations that have been thinking about this issue for a long time, but sophisticated solutions haven’t always been available. In the past, enterprises relied on Virtual Private Networks (VPNs) to help, albeit minimally, solve user trust issues. But now, the time is right to re-think enterprise security models in light of the modern security solutions that are available which can be implemented easily and cost-effectively.

Rewind to the security backstory

Ultimately, any high-level security model really breaks down into a trust issue: who and what can I trust? The answer involves the employee, the devices, and the applications the employee is trying to connect to. In the middle is the network, but today, more often than not, the network is the internet. Think about it: employees sit in coffee shops and log on over public networks to access their email.

So now what organisations are looking for is a secure solution for their applications, devices, and users.

Every trusted or ‘would-be trusted’ end-user computing device has security software installed on it by the enterprise IT department. That software makes sure the device, and the user on the device, are validated, so the device becomes the proxy for talking to the applications on the corporate network. The challenge now lies in securing the application itself.

Today’s cloud infrastructure connects the user directly to the application, so there is no need for the user to connect via an enterprise server or network. The client is always treated as an outsider, even while sitting in a corporate office. The servers never even see the client’s real IP address (because they don’t need to), and even data centre firewalls are of far less value, as the zero trust model and expertly applied policies and controls are now exponentially better.

Death to the VPN

In this new construct the VPN dies, thanks to Zero Trust Network Access (ZTNA), and networks become simplified with lower operational running costs, thanks to SD-WAN.

So, does the old client VPN truly die? Yes, it does! The reason is that we are now only concerned with what we trust: the user, their device, and the destination. Notice that “the network” isn’t part of that. Why? Because we don’t trust users or their devices any more on the corporate network than we do on public networks. So even when connected to a LAN port on the desk, they have the same seamless security posture and always-on application access that they would if they were on public Wi-Fi.

Just as film is no longer used for taking pictures, VPNs are no longer the future for application access. Everyone now sees that the real need is not for users to access networks, but rather just to access the applications as though they are all cloud accessible. That’s the zero trust-based future for us all.

New thinking

Most enterprises realise that it is time to enhance remote access strategies and eliminate sole reliance on perimeter-based protection, with employees instead connecting from a zero trust standpoint. However, most organisations will find that their zero trust journey is not an overnight accomplishment – particularly if they have legacy systems or mindsets that don’t transition well to this model. That said, many companies are moving all or part of their workloads to cloud and, thus, greenfield environments. Those are the perfect places to start that journey and larger organisations, with complex IT environments and legacy systems, might see the road to zero trust as a multi-phase, multi-year initiative.

This is where organisations can work with partners, to assist with implementing security controls and zero trust models in the cloud by utilising a framework. This framework would provide a firm security foundation to underpin digital transformation initiatives, helping organisations take their first steps towards becoming a zero trust connected enterprise. It would do this by addressing common areas of compromise between a user or device and the application or data source being accessed or consumed. And this is achieved wherever the users, devices, data and applications are located.

In today’s hybrid environment, implementing a zero trust approach enables organisations to start to really drive down the risk factors, while ensuring the enterprise is future-proofed for 21st century business. With cyber threats only set to escalate, this peace of mind is essential.

Written by Kevin Peterson, senior cyber security strategist at Xalient

Related:

How businesses can combat data security and GDPR issues when working remotely — Oliver Rowe, managing director of Fusion Communications, discusses how businesses can combat data security and GDPR issues when working remotely

Zero trust: the five reasons CIOs should care — Tony Scott, board member at ColorTokens and former federal CIO of the US Government, identifies five reasons why chief information officers (CIOs) should care about zero trust.

Implementing a data backup strategy with agility in mind

When it comes to backing up your data, what worked a few years ago won’t be sufficient now.

On World Backup Day, industry experts discuss the need for agility in your data backup strategy

It’s World Backup Day. Here comes the annual reminder – back up your data to avoid losing it and making yourself an April Fool. Yes, you’ve heard it before. But what is different this year is the scale of change and uncertainty we’ve had to adapt to in the face of a global health crisis, regulatory developments, and now war in Europe.

To help you navigate your way to a successful backup strategy, we have asked industry experts to pinpoint the main challenges IT teams are facing, and to shed light on which approaches are winning.

The need for inclusivity

We create and consume data on levels that were until recently unimaginable. Most of this data is unstructured and exposed to threats such as disk failure, human error, and malware. Backup technology vendors have adapted to this by exploiting the power of cloud computing to build scalable products and services which look very different to traditional backup solutions.

Omdia analyst Roy Illsley said: “The backup market has, over the past few years, seen a shift from mostly backing up VMs, to being more diverse and able to support different types of workloads from SaaS to cloud-native. Backup today is more inclusive.”

It stands to reason. The data we produce is more varied and complex. So, backup firms have to accommodate a wider range of business needs. Fred Lherault, CTO at Pure Storage, expanded: “With modern approaches and technology such as open source, containerisation, DevOps — what is backed up has changed.

“For example, in the old world, an application was made of a few servers and maybe one or two big databases. Today, apps are running in microservices, they’re containerised, so organisations can’t point to one server and say this is where the app lies: back up there. Now, organisations need to have a data protection solution which understands the application make up, containerisation, and Kubernetes. It must be able to back up not just the data but also the container orchestration configuration and images that were used to deploy it.”
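
As a minimal sketch of that idea, assuming the official kubernetes Python client, a local kubeconfig and an illustrative namespace name, the orchestration configuration and image references can be captured alongside the data so the application itself can be redeployed.

```python
# Minimal sketch: capture container orchestration configuration alongside
# the data. Assumes the official `kubernetes` Python client and a local
# kubeconfig; the namespace name is a placeholder.
import json
from kubernetes import client, config

def export_app_manifests(namespace: str) -> dict:
    """Snapshot deployment specs and the container images they reference."""
    config.load_kube_config()
    apps = client.AppsV1Api()
    snapshot = {}
    for dep in apps.list_namespaced_deployment(namespace).items:
        images = [c.image for c in dep.spec.template.spec.containers]
        snapshot[dep.metadata.name] = {
            "replicas": dep.spec.replicas,
            "images": images,
        }
    return snapshot

# Stored next to the data backup so the application, not just its data,
# can be restored.
print(json.dumps(export_app_manifests("myapp"), indent=2))
```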

But according to StorOne’s CMO George Crump, backup software itself has seen significant innovation as now “modern software solutions can transfer data at much finer levels of granularity thanks to block-level and change-block backups.”

Crump explained: “The increase in granularity means that IT can increase the frequency of protection events to reduce RPO.”
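
To make the idea concrete, here is a toy illustration of change-block tracking; the 4 MiB block size is an arbitrary choice for the example. By hashing fixed-size blocks and comparing them with the previous run, only changed blocks need to be copied, which is what makes frequent protection events, and therefore lower RPOs, practical.

```python
# Toy illustration of change-block tracking: hash fixed-size blocks and
# copy only those that changed since the last backup run.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB; an arbitrary choice for the example

def block_hashes(path: str) -> list[str]:
    """SHA-256 digest of each fixed-size block in the file."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_blocks(previous: list[str], current: list[str]) -> list[int]:
    """Indices of blocks that are new or differ from the previous backup."""
    return [
        i for i, h in enumerate(current)
        if i >= len(previous) or previous[i] != h
    ]

# Only the returned block indices would be read again and sent to the
# backup target, so protection runs can happen far more frequently.
```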

Considering remote working and cloud

The sudden increase in employees working from home left organisations scrambling to keep control of their data. Lherault said: “The trend for remote work and bring-your-own-device has made organisations implement thorough remote desktop and SaaS strategies, which help them ensure people don’t have data on their laptop which isn’t being backed up.”

The tech stack to support remote working has shifted, too. Teams, often working asynchronously, are now collaborating through Slack, Zoom, Google Docs, and so on, replacing in-person communications.

“The shift towards remote work has accelerated cloud adoption by several years,” says Veniamin Simonov, director of product management at NAKIVO, “which, in turn, has contributed to the adoption of cloud-based solutions, including backup ones.”

Krista Macomber, senior analyst on data protection and multi-cloud data management at Evaluator Group, elaborated on the impact of moving towards cloud-based services: “We see backup software being developed on container/microservices architectures, so that they are more suited for delivery in the cloud.”

The flip side, Macomber said, is that, “cloud resources also need to be protected, so we’re seeing an emphasis on protection for SaaS applications like Office 365, for example.”

Clumio’s co-founder and CEO, Poojan Kumar, lays out how cloud adoption has transformed backup technologies in three areas. “First, cloud backup technologies are facing unprecedented scales. One example is Amazon S3. You need billions of object scale and PB of data scale.

“Secondly, cloud backup technologies are facing deployments across many regions while providing a single pane of glass. Simplicity has to transcend regional boundaries.

“Thirdly, cloud backup technologies have to be delivered as-a-service, not a piece of software that needs to be managed by customers on an ongoing basis.”

Managing backup costs

But has the rising adoption of the cloud model affected the cost of backup? “Absolutely,” said Lherault, because “there may now need to be multiple data protection solutions to support the disparate environments. Organisations need to balance this with repatriation costs as bringing data back on-premise can be prohibitively expensive.”

Crump insisted: “Cloud adoption does not decrease the cost of backup.” Experts have slightly diverged opinions on the question, but most point to consumption models; for Clumio’s Kumar, “the cloud model requires everything including backup to be consumption oriented. Pay for what you consume: no less, no more. No fixed costs. No licenses. No software to manage. No software to run. Not having to manage a backup software also frees up valuable IT resources that can now be dedicated to efforts that are core to your business.”

Lherault agrees, saying flexible consumption models “enable organisations to pay for consumption based on how often backups are performed and how much data is stored. This avoids an up-front outlay for capacity which will go largely unused for years.”

While cloud backup has advantages, “many organisations turn to more traditional tape backup instead,” says Peter Donnelly, director of products at ATTO Technology, who highlights several advantages including “long-term costs, data privacy and security, process assurance and control and, in some instances, regulatory considerations.” Lherault identifies data sovereignty as another advantage of on-premises backup: “it means organisations know exactly where their data is, which lowers compliance and regulatory risk.”

Effects on cyber security

The fast climb of cloud adoption, combined with a more distributed workforce, has had an impact on cyber security, an area that traditional backup vendors are increasingly turning towards. With their workforce based outside the ‘castle walls’, companies are suddenly more vulnerable, and the new remote model seems to have “broadened the attack surface, making it quite challenging to maintain cyber security policies,” NAKIVO’s Simonov says.

Spectra Logic’s product marketing manager Eric Polet explains that “attaining as close as possible to fool-proof data protection in today’s times, requires a tactical upshift in approach that extends beyond purely focusing on data protection, but also a thorough consideration of the organisation’s ability to be ‘data resilient’.

“The latter refers to the ability of data to ‘spring back’ once compromised, achieved by smartly leveraging cloud, tape and/or disk, managed by data lifecycle management software.”

Curtis Anderson, senior software architect at Panasas, expands on cyber attacks in HPC environments: “Historically, as long as their systems were separated from the corporate LAN and the internet, HPC users didn’t worry about security.

“This meant that they could avoid the accompanying overhead of security solutions and drive as much performance into their HPC installations as possible. However, this is no longer the case due to two key trends: first, with the expansion of HPC use cases into manufacturing, big data analytics, AI, and ML, organisations need to integrate their HPC appliances into the rest of their infrastructures. Second, hackers are looking for more data-rich targets. Government labs, manufacturers, and other HPC environments, are often of national importance.”

For all the above reasons, backup vendors are widening the reach of their products by including cyber security features and enabling their customers to put their data in air-gapped, vendor-managed storage vaults.

“The need to provide ransomware protection has made backup and security more aligned and in demand,” according to Illsley, though he adds that “the rise of cloud and the multi-availability zone approach has led customers to a false sense of data security; they believe the cloud providers include this, but in reality, they do not.”

Are on-premise measures still needed?

So, is there still a need for on-premises backup solutions in this era of cloud adoption? According to Polet, “in the past, backup technologies always included hardware such as tape. With the adoption of cloud and the advancement of technology, today’s backup technologies can include any combination of cloud integration as well as hardware and software.”

Donnelly makes the point that “aside from being a cost-effective solution, tape’s primary advantage is its physical immutability; data on a tape drive is not physically connected to a network when at rest, providing an ‘air gap’ that keeps backup data untouched during, say, a ransomware attack.”

Paul Speciale, Scality CMO, agrees: “the key to swift and simple recovery is immutable backups. By making data immutable, i.e. impervious to deletion or modification, it is therefore also protected from malicious encryption and safe in the event of a cyber attack.

“AWS S3 Object Lock creates a virtual air gap, enabling IT teams to create modern WORM systems, but with a key advantage that it stays online and accessible for normal (authorised) access, as opposed to offline data.”
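
As a hedged sketch of that approach, assuming boto3 and a bucket created with Object Lock enabled (the bucket and key names are placeholders), each backup object can be written with a compliance-mode retention date so it cannot be overwritten or deleted until that date passes.

```python
# Hedged sketch: write a backup object under S3 Object Lock so it cannot be
# modified or deleted before the retention date. Assumes boto3 and a bucket
# created with Object Lock enabled; names are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

def write_immutable_backup(bucket: str, key: str, data: bytes, days: int = 30) -> None:
    retain_until = datetime.now(timezone.utc) + timedelta(days=days)
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=data,
        ObjectLockMode="COMPLIANCE",             # WORM: retention cannot be shortened
        ObjectLockRetainUntilDate=retain_until,
    )

# write_immutable_backup("example-backup-bucket", "db/2022-03-31.dump", b"...")
```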

The future of backup

So what does the future hold for backup technologies and vendors? “Edge is the next battleground,” claimed Illsley. The explosion of sensor-based technology and AI is driving demand for data at the edge. As always, backup will need to go where the data goes.

And so, Illsley explained, “how to protect it and understand the data and the relevance of it to backup is the next challenge.”

Evaluator Group’s Macomber predicted “an ongoing emphasis on protection for Kubernetes and container environments, as well as SaaS applications.” She added that “data protection vendors will continue to invest in the sophistication of their AI/ML and vaulting/air gapping capabilities for ransomware resiliency.”

According to Aron Brand, CTERA’s CTO, “the term backup is becoming obsolete, as dedicated backup solutions that are built on the principle of periodical backups and user-initiated, time-consuming restore sessions, are being phased out for next-generation protected storage solutions that offer ransomware detection and instant recovery capabilities.”

More innovation is expected in backup technologies and solutions. Kumar believes “with the cloud, we can finally deliver on services running on top of your backup data to solve more advanced use cases like global search, data classification, and even machine learning and analytics.” Simonov added that “better automation and AI-based technologies will assist in detecting and preventing threats.”

Brand added: “To be honest, I don’t like the name World Backup Day. I prefer to think of this day as World Disaster Recovery Day.

“The truth is that most organisations are giving way too much weight to backup and giving too little thought to their recovery process. You need to have a plan for how you’re going to recover your data and systems in the event of a disaster.”

Indeed, today is the day to prevent data loss. Or, rather, the day to remind ourselves to do it regularly! HYCU’s founder and CEO Simon Taylor said: “This World Backup Day, we are reminded of the global reality that backup and recovery and data protection are even more important than ever.

“Both have quickly become not just another piece of the technology puzzle or check box on a compliance list; they have become an important last line of defence for businesses, employees, and families.”

Written by A3 Communications

Why fraud is getting more sophisticated

Fraud is continuing to rise online, with evolving methods catching users out.

Dimitrie Dorgan, senior fraud specialist at Onfido, explores why fraud is getting more sophisticated, and how organisations can prevent it

Fraudsters have come a long way in the digital age, assisted by a unique context. Over the last five years, consumers and businesses have increased the adoption and provision of digital services, and cyber criminals have had more opportunities than ever to conduct nefarious activities. In fact, there have been growing concerns over a ‘fraud epidemic’ engulfing the UK, accelerated by the economic impact following the global pandemic.

While a rise in the volume of fraud is widely discussed, its sophistication has also increased. So, as consumers and businesses have and continue to adapt to a more digital-first world, how have fraudsters raised their game?

The changing face of fraud

In 2020, at the height of the pandemic, online fraud was dominated by opportunists taking advantage of an unprecedented situation. A surge in basic or ‘unsophisticated’ fraud attacks indicated a rise in first-time fraudsters. As a global identity verification company, we saw attackers trying their luck and failing, whether because the information on an identity document didn’t match the sign-up details or because the document itself failed data validation.

In the business world, the equivalent shift at the start of the pandemic meant pivoting models or embracing more digital tactics to stay ahead or stay afloat. This attitude was reflected in the activity of attackers looking to take advantage on a bigger scale. We saw opportunists target specific marketing events, such as when a business offers sign-up bonuses or when there was a spike in a certain market, for example, crypto. A sign of this type of fraud is receiving large numbers of the same document type or repeated information, such as an email address.

To combat the rise in such fraud tactics, businesses formulated a more organised response, whether by adopting technology more strategically or catching up with concepts such as hybrid working. Unfortunately, this evolution was mirrored by the fraudsters.

While the general identity fraud rate remained consistently high last year, Onfido’s 2022 Fraud Report found a concerning 57% year-on-year increase in sophisticated fraud, perpetrated by criminal groups and fraud rings. It took some time for these organised fraudsters to adapt, but now that they have, we are seeing an increase in more coordinated attempts and more advanced techniques.


Getting inside the mind of the fraudster

Generally speaking, fraudsters take the path of least resistance. This means they are likely to target documents, such as ID cards or passports, which are easier to replicate. As demonstrated by amateur criminals early in the pandemic, there was a high volume of fraud attempts, but obvious errors, such as missing letters, gave the fraudulent documents away. In fact, the average ID fraud rate increased from 4.1% in October 2019 to 5.8% in October 2020.

Last year, National Identity Cards were the most frequently attacked document type, but this year, passports have moved firmly into the crosshairs. ID cards typically include personal information on both the front and back of the document, whereas a passport includes this information on only one page. This points to a shift in the behaviour of fraudsters, who are increasingly targeting one-sided documents and looking for attacks that require less effort for maximum reward.

Signs of sophisticated fraud

In comparison to the efforts of amateurs, sophisticated fraud is more difficult to detect and often undertaken by criminal gangs who run large scale operations. The threat landscape over the last year has been defined by an increase in the number of these organised attacks, as the perpetrators have the resources to conduct advanced tactics such as deepfakes, 2D and 3D masks, or even coercion.

Often, sophisticated fraud can be indicated by fraudsters re-using the same information on multiple occasions. For example, techniques like creating verified accounts with fake documents, to then use in subsequent attacks, point to more organised activity. Subtler signs, such as incorrect fonts, the wrong photo printing technique, or imitated security features, can only be picked up by advanced document analysis.

Criminal gangs and fraud rings are most commonly behind these types of attacks, as a higher volume of attempts – perhaps enhanced by automation – means a higher chance of success.

Particular tell-tale signs, such as the same background appearing in every submitted ID photo or selfie, can also indicate that fraudsters are attempting to attack a business en masse.
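
One simple way to surface that kind of signal, sketched below with invented field names and an arbitrary threshold, is to count how often the same document number, email address or photo fingerprint reappears across sign-ups and flag the clusters for review.

```python
# Illustrative sketch: flag sign-ups that re-use the same document number,
# email address or photo fingerprint. Field names and the threshold are
# invented for the example.
from collections import Counter

def repeated_values(signups: list[dict], field: str, threshold: int = 3) -> set[str]:
    """Values of `field` seen at least `threshold` times across sign-ups."""
    counts = Counter(s[field] for s in signups if s.get(field))
    return {value for value, count in counts.items() if count >= threshold}

signups = [
    {"email": "a@example.com", "doc_number": "X123"},
    {"email": "b@example.com", "doc_number": "X123"},
    {"email": "c@example.com", "doc_number": "X123"},
]
print(repeated_values(signups, "doc_number"))  # {'X123'}
```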


Translating knowledge into prevention

Understanding criminal tactics is crucial to protecting against them. Data discrepancies, Photoshop templates and document duplication are more common with sophisticated fraud rings, so cross-referencing information such as country of origin or passport number with other data in the document will often highlight mistakes.
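
One concrete, simplified form of that cross-referencing is to recompute a field’s check digit from the passport’s machine-readable zone (MRZ) and compare the field against the printed value. The sketch below uses the publicly documented ICAO 9303 check-digit scheme; the helper names are illustrative.

```python
# Simplified sketch of cross-referencing document data: recompute the
# ICAO 9303 check digit for a machine-readable zone (MRZ) field and compare
# the MRZ value against the printed (visual inspection zone) value.
def mrz_check_digit(value: str) -> int:
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(value):
        if ch.isdigit():
            v = int(ch)
        elif ch.isalpha():
            v = ord(ch.upper()) - ord("A") + 10   # A=10 ... Z=35
        else:
            v = 0                                  # '<' filler counts as zero
        total += v * weights[i % 3]
    return total % 10

def document_number_consistent(mrz_number: str, mrz_digit: str, printed_number: str) -> bool:
    return (mrz_check_digit(mrz_number) == int(mrz_digit)
            and mrz_number.replace("<", "") == printed_number)

# Sample document number from the ICAO 9303 specification
print(mrz_check_digit("L898902C3"))  # 6
```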

Traditionally, businesses rely on knowledge-based authentication or signals, such as device IP, phone number or credit databases, to trust a new user. However, these can expose them to fraud because mass data breaches have left huge amounts of personally identifiable information available for sale on the dark web.

To combat this, businesses need to have robust authentication methods in place. For example, layering identity processes helps them build strong assurance of their users’ real identities. And as fraudsters get more advanced, combining an individual’s ID card with their physical biometrics is more effective than document checks alone – particularly when videos are included in the verification process to prove the attempt is being carried out by a ‘present’ human.
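
In practice, a layered decision might look roughly like the sketch below; the check names, score and threshold are invented for illustration rather than taken from any particular product.

```python
# Illustrative sketch of layered verification: a document check, a facial
# biometric match and a liveness check each contribute to one decision.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    document_genuine: bool     # forensic document analysis passed
    face_match_score: float    # similarity between selfie and document photo
    liveness_passed: bool      # video shows a live, present person

def approve(result: VerificationResult, face_threshold: float = 0.85) -> bool:
    """Approve only when every layer passes; any single failure rejects."""
    return (result.document_genuine
            and result.face_match_score >= face_threshold
            and result.liveness_passed)

print(approve(VerificationResult(True, 0.92, True)))   # True
print(approve(VerificationResult(True, 0.92, False)))  # False
```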

As we’ve seen from the analysis of criminal habits, fraudsters are more likely to move on to an easier target if faced with robust defences. It’s been a busy year for them – with 17% more breaches in the first nine months of 2021 than in the entirety of 2020 – but as they adapt, a strong defence increasingly means spotting more advanced techniques, rather than simply holding firm against a deluge.

Written by Dimitrie Dorgan, senior fraud specialist at Onfido