HP releases Wolf Connect solution for secure remote PC management

HP Inc. has announced the launch of HP Wolf Connect, a new IT management solution that provides resilient and secure connections to remote PCs. The solution enables IT teams to manage PCs remotely even if they are powered down or offline and was showcased at HP’s Amplify Partner Conference. HP Wolf Connect uses a cellular-based network that helps teams manage a dispersed hybrid workforce, reducing the time and effort needed to resolve support tickets, securing data from loss or theft, and optimizing asset management, the vendor said. The release comes as businesses face ongoing challenges in securing and managing the hybrid workforce.

HP Wolf Connect can locate, lock, and erase a PC remotely

HP Wolf Protect and Trace with Wolf Connect is the world’s first software service capable of locating, locking, and erasing a PC remotely, even when it’s turned off or disconnected from the internet, HP Inc. claimed. This capability protects sensitive data on the move and helps lower IT costs by reducing the need for PC remediation or replacement.

“Hybrid work has made remote management at scale more complex, yet more essential,” said Ian Pratt, global head of security for personal systems at HP. “The cloud has helped but hasn’t solved IT’s ability to manage devices that are powered down or offline. HP Wolf Connect’s highly resilient connection opens new doors to remote device management, enabling efficient and effective management of dispersed workforces.” This is particularly crucial in industries where devices may contain personally identifiable information or intellectual property, he added.

Securing hybrid workers will be “more difficult” this year

New HP research suggested that 82% of global IT and security leaders operating a hybrid work model have gaps in their organization’s security posture. Of 1,492 IT and security leaders surveyed, 61% said protecting their hybrid workers will be more difficult over the coming year, while 70% said hybrid work increases the risk of lost or stolen devices. Beyond PC loss and theft, endpoints (laptops, PCs, or printers) continue to face serious threats from ransomware and are ground zero for attacks on hybrid workers, HP said.

Two-thirds of those polled said the greatest cybersecurity weakness is the potential for hybrid employees to be compromised, with phishing, ransomware, and attacks via unsecured home networks cited as the top risks. Furthermore, 65% said it is challenging to update threat detection measures to reflect the behavior of hybrid employees, making it harder to spot attacks, while 76% agreed that application isolation is key to protecting hybrid worker devices, although only 23% are currently benefiting from using it.

“The shift to hybrid work requires a move away from old perimeter-focused thinking. Adopting hardware-enforced security features and protection above, in, and below the OS — such as application isolation — will be key for protecting users without impinging on the freedoms that hybrid work allows,” Pratt said.

Only 39% of businesses have infrastructure to support secure hybrid working

Just 39% of global businesses have the infrastructure in place (zero trust-based or VPN) to support a secure hybrid working environment, while a further 35% either haven’t started implementing one or have no plans to, according to a new report from Zscaler. The security vendor surveyed 1,900 senior IT decision-makers across EMEA, AMS, and APAC. It found that 54% of IT leaders believe VPNs or perimeter firewalls are ineffective at protecting against cyberattacks and provide poor visibility into application traffic for hybrid workforces, with 47% of organizations with a VPN-based architecture stating it is too complex to administer different security for on-premises and remote employees. Meanwhile, 52% of respondents said that a zero-trust architecture would help tackle inconsistent access experiences for on-premises and cloud-based applications/data.

France bans TikTok, all social media apps from government devices

The French government has banned TikTok and all other “recreational apps” from phones issued to its employees. The Minister of Transformation and the Public Service, Stanislas Guerini, said in a statement that recreational applications do not have sufficient levels of cybersecurity and data protection to be deployed on government equipment. The prohibition applies immediately and uniformly, although exemptions may be granted on an exceptional basis for professional needs such as the institutional communication of an administration, the statement read.

The move follows the banning of TikTok on government/senior official devices in the US, UK, and other countries on the grounds that user data from the app (owned by Beijing-based company ByteDance) could end up in the hands of the Chinese government, posing national security risks. France’s banning of all social media apps goes further than other countries, whose bans currently forbid TikTok specifically.

France joins international partners by banning TikTok on data security grounds

“In recent weeks, several of our European and international partners have adopted measures restricting or prohibiting the downloading and installation of the TikTok application by their administrations,” Guerini said. After an analysis of the issues, in particular security, the government decided to ban the downloading and installation of recreational applications on professional telephones provided to public officials. “These applications can therefore constitute a risk to the protection of the data of these administrations and their public officials,” Guerini claimed.

The Interministerial Digital Department (DINUM) will ensure the implementation of this instruction, in close collaboration with the National Agency for Information Systems Security (ANSSI).

TikTok bans continue across the globe, CEO says app poses no national security risk

In the last two weeks, TikTok has been banned from both UK government and parliament devices/networks, with US federal and state government TikTok bans having been in place for several weeks. Meanwhile, TikTok CEO Shou Zi Chew has disputed that the app is an “agent of China” and argued that TikTok poses no risk to national security in a US Congress hearing. Over more than five hours, Chew was pressed by members of the House Committee on Energy and Commerce on topics such as TikTok’s content moderation practices and the company’s surveillance of journalists.

Chew argued that TikTok parent company ByteDance prioritizes the safety of its young users, highlighting the firm’s intention to protect US user data by storing information on servers maintained and owned by server giant Oracle. Several committee members reportedly found some of Chew’s answers evasive.

8 strange ways employees can (accidentally) expose data

Employees are often warned about the data exposure risks associated with the likes of phishing emails, credential theft, and weak passwords. However, they can risk leaking or exposing sensitive information about themselves, the work they do, or their organization without even realizing it. This risk frequently goes unexplored in cybersecurity awareness training, leaving employees oblivious to the risks they can pose to the security of data which, if exposed, could be exploited both directly and indirectly to target workers and businesses for malicious gain.

Here are eight unusual, unexpected, and relatively strange ways employees can accidentally expose data, along with advice for addressing and mitigating the risks associated with them.

1. Eyeglass reflections expose screen data on video conferencing calls

Video conferencing platforms such as Zoom and Microsoft Teams have become a staple of remote/hybrid working. However, new academic research has found that bespectacled video conferencing participants may be at risk of accidentally exposing information via the reflection of their eyeglasses.

In a paper titled Private Eye: On the Limits of Textual Screen Peeking via Eyeglass Reflections in Video Conferencing, a group of researchers at Cornell University revealed a method of reconstructing screen text exposed via participants’ eyeglasses and other reflective objects during video conferences. Using mathematical modeling and human subject experiments, the research explored the extent to which webcams leak recognizable textual and graphical information gleaned from eyeglass reflections.

“Our models and experimental results in a controlled lab setting show it is possible to reconstruct and recognize with over 75% accuracy on-screen texts that have heights as small as 10 mm with a 720p webcam,” the researchers wrote. “We further applied this threat model to web textual contents with varying attacker capabilities to find thresholds at which text becomes recognizable.” The 20-participant study found that present-day 720p webcams are sufficient for adversaries to reconstruct textual content on big-font websites, while the evolution toward 4K cameras will tip the threshold of text leakage to reconstruction of most header texts on popular websites.

Such capabilities in the hands of a malicious actor could potentially threaten the security of some confidential and sensitive data. The research proposed near-term mitigations including a software prototype that users can use to blur the eyeglass areas of their video streams. “For possible long-term defenses, we advocate an individual reflection testing procedure to assess threats under various settings and justify the importance of following the principle of least privilege for privacy-sensitive scenarios,” the researchers added.

2. LinkedIn career updates trigger “new hire SMS” phishing attacks

On professional networking site LinkedIn, it’s common for people to post upon starting a new role, updating their profile to reflect their latest career move, experience, and place of work. However, this seemingly innocuous act can open new starters to so-called “new hire SMS” phishing attacks, whereby attackers scour LinkedIn for new job posts, look up a new hire’s phone number on a data brokerage site, and send SMS phishing messages pretending to be a senior executive from within the company, trying to trick them during the first weeks of their new job.

As detailed by social engineering expert and SocialProof Security CEO Rachel Tobac, these messages typically ask for gift cards or bogus money transfers, but they have been known to request login details or sensitive decks. “I’ve seen an increase in the new hire SMS phish attack method recently,” she wrote on Twitter, adding that it has become so common that most organizations she works with have stopped announcing new hires on LinkedIn and recommend that new starters limit posts about their new roles.

These are good mitigative steps for reducing the risks of new hire SMS phishing scams, Tobac stated, and security teams should also educate new employees about these attacks, outlining what genuine communication from the firm will look like and what methods will be used. She also recommended providing employees with DeleteMe to remove their contact details from data brokerage sites.

3. Social media, messaging app pictures reveal sensitive background info

Users may not associate posting pictures on their personal social media and messaging apps as posing a risk to sensitive corporate information, but as Dmitry Bestuzhev, most distinguished threat researcher at BlackBerry, tells CSO, accidental data disclosure via social apps such as Instagram, Facebook, and WhatsApp is a very real threat. “People like taking photos but sometimes they forget about their surroundings. So, it’s common to find sensitive documents on the table, diagrams on the wall, passwords on sticky notes, authentication keys and unlocked screens with applications open on the desktop. All that information is confidential and could be put to use for nefarious activities.”

It’s easy for employees to forget that, on an unlocked screen, it’s simple to spot which browser they use, which antivirus product they run, and so on, Bestuzhev adds. “This is all valuable information for attackers and can so easily be exposed in photos on Instagram, Facebook, and WhatsApp status updates.”

Keiron Holyome, VP UKI, Eastern Europe, Middle East, and Africa at BlackBerry, emphasizes the importance of security education and awareness about this issue. “Companies can’t stop employees taking and sharing photos, but they can highlight the risks and cause employees to stop and think about what they are posting,” he says.

4. Data ingestion script mistypes result in incorrect database use

Speaking to CSO, Tom Van de Wiele, principal threats and technology researcher at WithSecure, says his team has handled some unusual cases whereby a simple mistype of an IP address or URL for a data ingestion script has led to the wrong database being used. “This then results in a mixed database that needs to be sanitized or rolled back before the backup process kicks in or else the organization might have a PII [personally identifiable information] incident that violates GDPR,” he adds. “Companies deal with data mixing incidents on a regular basis and sometimes the operations are irreversible if a succession of failures occurs too far back in the past.”

Van de Wiele therefore advises security teams to leverage the authentication aspect of TLS where possible. “This will lower the risk of mistaken identity of servers and databases but understand that the risk cannot be fully eliminated – so act and prepare accordingly by making sure you have logs in place that are acted upon as part of a larger detection and monitoring strategy. That includes successful as well as unsuccessful events,” he adds.

Van de Wiele also advocates enforcing strict rules, processes, awareness, and security controls on how and when to use production/pre-production/staging/testing environments. “This will result in fewer data mixing incidents, less impact when dealing with real production data and ensures that any kind of update or change as a result of the discovery of a security issue can be tested thoroughly in pre-production environments.” Naming servers so that they can be distinguished from each other, rather than going overboard with abbreviations, is another useful tip, as is performing security testing in production, he says. “Invest in detection and monitoring as one of the compensating controls for this and test to make sure detection works within expectations.”
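The guardrails Van de Wiele describes can be approximated in code. The sketch below is a hypothetical illustration (not WithSecure tooling, and all host names are invented): an ingestion script refuses to write to any target that is not on an explicit allowlist, so a one-character typo fails loudly instead of silently mixing databases.

```python
# Hypothetical guard for a data ingestion script: refuse to proceed unless the
# configured target host is on an explicit allowlist of approved databases.
ALLOWED_TARGETS = {
    "prod-db.internal.example.com",     # hypothetical production host
    "staging-db.internal.example.com",  # hypothetical staging host
}

def check_ingestion_target(host: str) -> str:
    """Return the normalized host if it is an approved target, else raise."""
    normalized = host.strip().lower()
    if normalized not in ALLOWED_TARGETS:
        raise ValueError(f"refusing to ingest into unapproved target: {host!r}")
    return normalized

# A correct target passes; a one-character typo is rejected before any write:
check_ingestion_target("prod-db.internal.example.com")        # OK
# check_ingestion_target("prod-db.internal.exmaple.com")      # raises ValueError
```

Combined with TLS server authentication and the logging Van de Wiele recommends, this kind of check turns a mistyped address into a visible failure rather than a PII incident.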

5. Certificate transparency logs expose rafts of sensitive data

Certificate transparency (CT) logs allow users to navigate the web with a higher degree of trust and allow administrators and security professionals to detect certificate anomalies and verify trust chains quickly. However, because of the nature of these logs, all the details in a certificate are public and stored forever, says Art Sturdevant, VP of technical operations at Censys. “A quick audit of Censys’ certificates data shows usernames, emails, IP addresses, internal projects, business relationships, pre-release products, organizational structures, and more. This information can be used by attackers to footprint the company, compile a list of valid usernames or email addresses, target phishing emails and, in some cases, target development systems, which may have fewer security controls, for takeover and lateral movement.”

Since the data in a CT log is forever, it’s best to train developers, IT admins, etc. to use a generic email account to register certificates, Sturdevant adds. “Administrators should also train users on what goes into a CT log so they can help avoid accidental information disclosure.”
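To illustrate the kind of audit Sturdevant describes, the hypothetical sketch below filters certificate subject-alternative names, as they might appear in public CT log entries, for names hinting at non-public infrastructure. The sample data and the keyword heuristic are invented for illustration; real reconnaissance and defensive audits use far richer signals.

```python
import re

# Sample subject-alternative names as they might appear in public CT log
# entries (hypothetical data for illustration only).
ct_entries = [
    "www.example.com",
    "jenkins-dev.internal.example.com",  # leaks an internal build system
    "nimbus-staging.example.com",        # leaks a pre-release environment
    "mail.example.com",
]

# Crude heuristic: flag names that hint at non-public infrastructure.
SENSITIVE_HINTS = re.compile(r"(internal|dev|staging|test|vpn|jenkins)", re.I)

def flag_sensitive(names):
    """Return the subset of names that match the sensitive-hint heuristic."""
    return [n for n in names if SENSITIVE_HINTS.search(n)]

print(flag_sensitive(ct_entries))
# ['jenkins-dev.internal.example.com', 'nimbus-staging.example.com']
```

Running the same kind of audit against your own domains shows exactly what an attacker could footprint from certificates your organization has already published.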

6. “Innocent” USB hardware becomes a backdoor for attackers

Employees may be inclined to purchase and use their own hardware such as USB fans or lamps with their corporate laptops, but CyberArk malware research team leader Amir Landau warns that these seemingly innocent gadgets can be used as backdoors to a user’s device and the wider business network. Such hardware attacks typically have three main attack vectors, he says:

  • “Malicious-by-design hardware, where devices come with pre-installed malware on them, with one example known as BadUSB. BadUSBs can be purchased very easily on AliExpress, or people can make their own with open-source tools, such as USB Rubber Ducky, from any USB device.”
  • Next are worm infections – also called replication through removable media – where USB devices are infected by worms, such as USBferry and Raspberry Robin.
  • Third are compromised hardware supply chains. “As part of a supply chain attack, bad software or chips are installed inside legitimate hardware, like in the case of the malicious microchips inserted into motherboards which ended up in servers used by Amazon and Apple in 2018.”

Detecting these kinds of attacks at the endpoint is difficult, but antivirus and endpoint detection and response can, in some cases, protect against threats by monitoring the execution flow of extended devices and validating code integrity policies, Landau says. “Privileged access management (PAM) solutions are also important due to their ability to block the USB ports to unprivileged users and prevent unauthorized code.”

7. Discarded office printers offer up Wi-Fi passwords

When an old office printer stops working or is replaced by a newer model, employees could be forgiven for simply discarding it for recycling. If this is done without first wiping data such as Wi-Fi passwords, it can open an organization up to data exposure risks.

Van de Wiele has seen this firsthand. “Criminals extracted the passwords and used them to log onto the network of the organization in order to steal PII,” he says. He advises encrypting data at rest and in use/transit and ensuring an authentication process exists to protect the decryption key for endpoint devices in general. “Make sure removable media are under control, that data is always encrypted, and that recovery is possible through a formal process with the necessary controls in place.”

8. Emails sent to personal accounts leak corporate, customer information

Avishai Avivi, CISO at SafeBreach, recounts an incident where a non-malicious email sent by an employee for the purpose of training almost led to the exposure of data including customers’ Social Security numbers. “As part of the training of new associates, the training team took a real spreadsheet that contained customers’ SSNs, and simply hid the columns containing all the SSNs. They then provided this modified spreadsheet to the trainees. The employee was looking to continue training at home, and simply emailed the spreadsheet to his personal email account,” he tells CSO.

Thankfully, the firm had a reactive data loss prevention (DLP) control monitoring all employee emails, which detected the existence of multiple SSNs in the attachment, blocked the email, and alerted the SOC. However, it serves as a reminder that sensitive information can be exposed by even the most genuine, benevolent of actions.

“Rather than relying on reactive controls, we should have had better data classification preventative controls that would have indicated the movement of real SSN data from the production environment into a file in the training department, a control which would have stopped the employee from even attempting to email the attachment out to a personal email account,” Avivi says.
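Controls of the kind Avivi describes often start with simple pattern matching. The toy sketch below is not SafeBreach’s or any vendor’s actual DLP engine (production systems add checksum validation, context, and classification labels to limit false positives); it merely shows how outbound text containing multiple SSN-shaped strings could be flagged.

```python
import re

# Toy DLP check: match 9-digit US Social Security numbers in the common
# XXX-XX-XXXX form. Real engines validate area/group numbers and context.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssns(text: str, threshold: int = 1) -> bool:
    """Flag text containing at least `threshold` SSN-shaped strings."""
    return len(SSN_PATTERN.findall(text)) >= threshold

# Hypothetical attachment content, as in the training-spreadsheet incident:
attachment = "name,ssn\nJane Doe,123-45-6789\nJohn Roe,987-65-4321\n"
print(contains_ssns(attachment, threshold=2))  # True -> block email, alert SOC
```

A preventative version of the same idea would run at the point of data classification, before the spreadsheet ever reached the training team.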

New York-barred attorneys required to complete cybersecurity, privacy, and data protection training

New York-barred attorneys will be required to complete one continuing legal education (CLE) credit hour of cybersecurity, privacy, and data protection training as part of their biennial learning requirement beginning July 1, 2023. New York is the first jurisdiction to stipulate this specific requirement as the state aims to emphasize the technical competence duty of lawyers to meet professional, ethical and contractual obligations to safeguard client information.

Lawyers have ethical obligations and professional responsibilities around cybersecurity

A New York Courts document outlined a new category of CLE credit – Cybersecurity, Privacy and Data Protection – that has been added to the CLE Program Rules. This category is defined in the CLE Program Rules 22 NYCRR 1500.2(h) and clarified in the Cybersecurity, Privacy, and Data Protection FAQs and Guidance document. “Providers may issue credit in cybersecurity, privacy, and data protection to attorneys who complete courses in this new category on or after January 1, 2023,” it stated. It also noted changes to both Experienced and Newly Admitted Attorney Biennial CLE requirements to include one credit hour of training in cybersecurity, privacy and data protection.

The new requirements are based on fresh rules around cybersecurity, privacy, and data protection for legal practitioners, effective from January 2023. “Cybersecurity, privacy and data protection-ethics must relate to lawyers’ ethical obligations and professional responsibilities regarding the protection of electronic data and communication,” it read. These may include:

  • Sources of lawyers’ ethical obligations and professional responsibilities and their application to electronic data and communication
  • Protection of confidential, privileged, and proprietary client and law office data and communication
  • Client counseling and consent regarding electronic data, communication and storage protection policies, protocols, risks, and privacy implications
  • Security issues related to the protection of escrow funds
  • Inadvertent or unauthorized electronic disclosure of confidential information, including through social media, data breaches and cyberattacks
  • Supervision of employees, vendors and third parties as it relates to electronic data and communication

Furthermore, cybersecurity, privacy, and data protection-general must relate to the practice of law and may include, among other things:

  • Technological aspects of protecting client and law office electronic data and communication
  • Vetting and assessing vendors and other third parties relating to policies, protocols and practices on protecting electronic data and communication
  • Applicable laws relating to cybersecurity and data privacy
  • Law office cybersecurity, privacy and data protection policies and protocols

Increasing cybersecurity, data protection concentration of legal regulators

Jonathan Armstrong, lawyer and partner at compliance firm Cordery, tells CSO that there is an increasing focus on cybersecurity, data protection, and privacy standards among legal regulators. “The [UK] Solicitors Regulation Authority (SRA), for example, had a cybersecurity break out session last week at the COLP/COFA conference for law firm compliance officers. I think it could catch on in other countries,” he says.

Similar requirements in the UK (and EU) have come under the spotlight recently with the Information Commissioner’s Office (ICO) investigating data security issues at law firms. “This happened in the ACS:Law case where there was an ICO fine first and then an SRA suspension for the lawyer involved. More recently, we’ve had the ICO fine for Tuckers, which also mentioned SRA obligations in the Enforcement Notice. The ICO noted Tuckers’ failure to comply with the SRA code of conduct but has not applied any increase to the penalty percentage of 3.25% in this instance.”

Opsera’s GitCustodian detects vulnerable data in source code

DevOps orchestration platform provider Opsera has announced the launch of GitCustodian, a new Software-as-a-Service (SaaS) product that detects and reports vulnerable data in code repositories including GitLab, GitHub, and Bitbucket.

GitCustodian scans the code repositories for vulnerable data and alerts security and DevOps teams so that they can prevent vulnerabilities from leaking into production, protecting software development pipelines. Once vulnerabilities are found, the solution automates the remediation process for any uncovered secrets or other sensitive artifacts, Opsera says.

The release comes at a time of heightened awareness around data leaks in source code repositories. In April, GitHub revealed that attackers had used stolen authorization tokens to download private data stored on the platform.

GitCustodian provides “proactive visibility”

Opsera notes that many software developers unknowingly keep sensitive data (e.g., passwords, certificates, keys) in source code repositories, which, if pushed to production, is at risk of being exposed to cyber attackers. GitCustodian was designed to provide proactive visibility into vulnerable data in source code repositories and help security and DevOps teams address it early in the continuous integration/continuous delivery (CI/CD) process, the company says. Teams receive a centralized snapshot of any vulnerable secrets and other sensitive artifacts at risk across version control systems. According to Opsera, GitCustodian’s key features and benefits include:

  • Secrets detection based on multiple algorithms and industry-standard profiles.
  • Source code repository scanning.
  • Ability to add proactive secrets governance to existing CI/CD workflows.
  • Secure storage for secrets and keys via a built-in vault.
  • Collaboration enablement that notifies impacted teams.
  • Insights and analytics with actionable insights and compliance reporting.

Speaking to CSO, Kumar Chivukula, Co-Founder and CTO of Opsera, explains that GitCustodian works in three main ways. “One, GitCustodian helps companies scan their source code management (SCMs) for catching and watching secrets with a dashboard tracking the violators and highlighting the source of the problem. Two, whether you use an Opsera or existing pipeline, you can add a guardrail to scan the pipeline for secrets before the pipeline continues. Most enterprises need to have an option to catch secrets before they deploy into production or a customer environment. Three, when a secret is exposed, we give you the option to add secrets into our built-in Vault, directly allowing you to add secrets in a vault as a parameter and not disclose them in plain text.”
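Opsera has not published its detection algorithms, but secrets scanners of this kind commonly combine keyword patterns with entropy checks, flagging values that look random enough to be real credentials. The following minimal sketch illustrates that general technique only; it is not GitCustodian’s implementation, and the threshold is an arbitrary heuristic.

```python
import math
import re

# Match assignments like API_KEY = "..." regardless of case; capture the value.
KEYWORD = re.compile(
    r"(api[_-]?key|secret|token|password)\s*=\s*['\"]([^'\"]+)['\"]", re.I
)

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random keys score higher than words."""
    counts = {c: s.count(c) for c in set(s)}
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def scan_line(line: str):
    """Return a likely secret found in the line, or None."""
    m = KEYWORD.search(line)
    if m and shannon_entropy(m.group(2)) > 3.5:  # arbitrary heuristic threshold
        return m.group(2)
    return None

print(scan_line('API_KEY = "9f8a7b6c5d4e3f2a1b0c"'))  # flagged: high entropy
print(scan_line('password = "password"'))             # low entropy, skipped
```

Running a check like this as a pipeline guardrail, as Chivukula describes, stops a flagged commit before it reaches production.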

GitCustodian is available for existing and new customers, with pricing based on the number of repos and number of users.

All software vulnerabilities lead back to insecure code

Industry analysts recognize the security risks and complexities surrounding source code, along with the need for modern businesses to implement effective strategies for detecting and managing source code vulnerabilities. “The way all software vulnerabilities make their way into the world is through source code,” Fernando Montenegro, Senior Principal Analyst at Omdia, tells CSO. “The possible issues with vulnerable code in production run the gamut from simple denial of service through to full-blown data breaches. The moment vulnerable software is exposed in production, it creates not only a new attack surface for a potential attacker, but also adds to the ‘technical debt’ that organizations accumulate over time.” The impact can be significant for companies, up to and including public disclosures and regulatory fallout such as fines, he adds.

“Making efforts to remove vulnerabilities before they leak into production should be extremely high on any security executive’s priority list,” Montenegro says. Janet Worthington, Senior Analyst at Forrester, agrees. “To ensure that code deployed to production is secure, organizations must make use of security scanning tools that look for security weaknesses in the source code and known vulnerabilities in the open source and third-party libraries that developers pack into their applications,” she tells CSO. “Integrating and automating security scanning tools as part of your CI/CD pipeline provides developers with feedback while the code is still fresh in their mind.”

This has taken on greater significance since the outbreak of the COVID-19 pandemic and the mass adoption of digital transformation, adds Omdia Senior Principal Analyst Rik Turner. “The rate at which development teams are pushing code into production has accelerated and will continue to do so,” he tells CSO. “With one of the foundations of the agile development process being the reusable componentry that was pioneered by the service-oriented architecture revolution, ever more pre-written and freely available open-source components are being included in the apps developers are writing, so if they come with vulnerabilities, they’re going straight into the apps too.”