Following reports of network downtime after a cyberattack in March, Partnership HealthPlan of California has since confirmed the Hive ransomware group stole a trove of health information ahead of the ransomware deployment. Reports show 854,913 patients were impacted.
As previously reported, PHC faced a long period of computer system disruptions immediately following the attack and was working with third-party forensic specialists to recover the network. The incident also disrupted PHC’s ability to receive or process treatment authorization requests, the forms used to secure pre-approved funding for treatment.
At the time, multiple reports claimed Hive was behind the attack, after a dark web posting of data proofs allegedly exfiltrated from PHC. The listing was soon removed, but screenshots showed proofs containing approximately 850,000 unique records, or about 400GB of data.
The official breach notice from PHC confirms the attack was deployed on March 19 and that its investigation found evidence the hacker accessed or stole patient data from the network that same day.
The stolen data could include patient names, Social Security numbers, driver’s licenses, Tribal IDs, medical record numbers, treatments, diagnoses, prescriptions, medical data, health insurance details, patient portal credentials, and other sensitive information.
PHC is still working to identify the information contained in the stolen files and exactly which patients were affected. All impacted patients will receive two years of credit monitoring services.
Unfortunately, PHC is included in the spate of healthcare data breach lawsuits filed within the last six months. For the California health plan, a law firm filed a lawsuit on behalf of patient “John Doe” on May 17.
The law firm is currently soliciting other patients to join the suit. As noted in an earlier SC Media report, these advertisements are increasingly common but ethically questionable, given the Supreme Court’s ruling on actual harm and the fact that the sector is so heavily targeted that the majority of providers are at risk of a breach.
Cooper University Health reports breach from December
Cooper University Health Care is just now informing an undisclosed number of current and former patients that their data was accessed or likely stolen after an email hack in December 2021. Cooper is a health system with sites across southern New Jersey and the Delaware Valley.
The almost six-month delay in notification should serve as a reminder that the Health Insurance Portability and Accountability Act requires patients to be notified of breaches to their health information within 60 days of discovery and without undue delay — not at the close of a lengthy forensic analysis.
Cooper first “learned of unusual activity” within an employee’s email account on Dec. 13, 2021. The accounts were quickly secured and an investigation was launched with support from an outside cybersecurity team.
The investigation confirmed an employee email account was hacked on Nov. 24, 2021, several weeks before it was discovered. The potentially stolen data could include names, dates of birth, provider names, diagnoses, treatment information, billing and claims data, and medical record numbers.
Hack, data theft at Val Verde medical center impacts 87K patients
The personal and protected health information tied to 86,562 patients of Val Verde Regional Medical Center in Texas was stolen after a “network disruption” on March 10.
Upon discovery, VVRMC secured the network and launched an investigation with support from third-party digital forensics experts. The post-mortem determined that a threat actor was able to access or acquire “certain files” during the security incident. The medical center also contacted the FBI and is cooperating with the bureau’s investigation.
The impacted data included patient names, Social Security numbers, dates of birth, medical information, health insurance details, and other data. All patients will receive free identity monitoring services.
Notably, VVRMC apologized for the timing of the notification: “While the extensive data identification and processing was lengthy and time-consuming, it was a necessary process that helped us thoroughly identify the impacted individuals.” But the notice appears to have been sent within the 60-day HIPAA requirement.
VVRMC has since bolstered its security measures to prevent a recurrence.
Email hack impacts 90K Alameda Health patients
California-based Alameda Health System recently notified the Department of Health and Human Services that an email hack compromised the data belonging to 90,000 patients.
There are currently no public breach notices detailing the incident. However, the notice comes less than two years after the health system reported another email hack that wasn’t discovered for nearly two months. It should serve as a reminder for provider organizations to learn from past mistakes to avoid regulatory issues and protect patient privacy.
SAC Health reports paper records theft affecting 150K
In one of the largest thefts of paper records reported in recent years, Social Action Community Health System recently notified 149,940 patients that their information was stolen after a break-in at its off-site storage facility. The notice comes after SAC Health sent notice to 28,000 patients following the hack of its vendor, Netgain, in 2020.
SAC Health was notified of the incident on March 4: a burglar stole six boxes of paper documents from the facility. The provider has been conducting its own investigation and cooperating with local law enforcement. It has since confirmed the theft included data tied to patients who visited SAC in 1997 and between 2006 and 2020.
The information stored in the stolen containers could include contact details, dates of birth, and diagnosis codes. All patients will receive complimentary credit monitoring services. SAC Health is currently assessing its policies and procedures for paper document storage.
Allwell Behavioral hack impacts 30K patients
A “data security incident” at Allwell Behavioral Health in Georgia likely led to the theft of protected health information tied to 29,972 patients.
The subsequent investigation found that an attacker first gained access to a computer system used to store quality assurance information on March 2. The incident was detected three days later. During that time, the actor was able to take “an undetermined number of files containing client information.”
The stolen data was related to treatments and could include patient names, dates of birth, SSNs, contact information, treatment activity and dates, locations, and payer details. All impacted patients will receive free identity theft protection services.
Allwell has since upgraded its IT and computer systems to bolster security and prevent further unauthorized access.
It’s no secret that third-party apps can boost productivity, enable remote and hybrid work, and are, overall, essential in building and scaling a company’s work processes.
Much like clicking on an attachment in the earlier days of email, connecting an app to a Google Workspace or Microsoft 365 environment has become an innocuous-seeming process that people don’t think twice about. Simple actions that users take, from creating an email to updating a contact in the CRM, can trigger several other automatic actions and notifications in the connected platforms.
As seen in the image below, the OAuth mechanism makes it incredibly easy to interconnect apps, and many don’t consider the possible ramifications. When these apps and other add-ons for SaaS platforms ask for access permissions, they are usually granted without a second thought, presenting more opportunities for bad actors to gain access to a company’s data. This puts companies at risk of supply chain attacks, API takeovers, and malicious third-party apps.
OAuth mechanism permission request
When it comes to local machines and executable files, organizations already have built-in controls that enable security teams to block problematic programs and files. The same needs to be true for SaaS apps.
OAuth 2.0 has greatly simplified authentication and authorization, and offers a fine-grained delegation of access rights. Represented in the form of scopes, an application asks for the user’s authorization for specific permissions. An app can request one or more scopes. Through approval of the scopes, the user grants these apps permissions to execute code to perform logic behind the scenes within their environment. These apps can be harmless or as threatening as an executable file.
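To make the scope mechanism concrete, here is a minimal sketch of how an app declares the permissions it wants during the OAuth 2.0 authorization request. The endpoint, client ID, and scope strings below are illustrative placeholders, not any specific vendor's values.

```python
from urllib.parse import urlencode

def build_auth_url(client_id, redirect_uri, scopes):
    """Build an OAuth 2.0 authorization request URL.

    The space-delimited `scope` parameter is where the app declares
    exactly which permissions it is asking the user to grant.
    """
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(scopes),
    }
    return "https://accounts.example.com/o/oauth2/auth?" + urlencode(params)

# A read-only scope and a far more powerful write scope look nearly
# identical in the consent prompt, which is why users rarely notice.
url = build_auth_url(
    "my-client-id",
    "https://app.example.com/callback",
    ["https://www.googleapis.com/auth/gmail.readonly"],
)
print(url)
```

The only visible difference between a harmless grant and a dangerous one is often a single word inside that `scope` value.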
Best Practices to Mitigate Third Party App Access Risk
To secure a company’s SaaS stack, the security team needs to be able to identify and monitor all that happens within their SaaS ecosystem. Here’s what a security team can share with employees and handle themselves to mitigate third party app access risk.
1 — Educate the employees in the organization
The first step in cybersecurity always comes back to raising awareness. Once employees become more aware of the risks and dangers that these OAuth mechanisms present, they will be more hesitant to grant them access. Organizations should also create a policy that requires employees to submit requests before connecting third-party apps.
2 — Gain visibility into third-party access for all business-critical apps
Security teams should gain visibility into every business-critical app and review all the third-party apps that have been integrated with them, across all tenants. One of the first steps in shrinking the threat surface is gaining an understanding of the full environment.
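A simple way to picture this inventory step: once grant records have been exported from each tenant's admin console, group them by app to see where every integration is connected. The record layout below is a hypothetical example, not any vendor's real schema.

```python
from collections import defaultdict

# Hypothetical export of OAuth grants pulled from each tenant's
# admin API; field names are illustrative only.
grants = [
    {"tenant": "us-east", "app": "MailMergePro", "user": "alice@corp.com"},
    {"tenant": "us-east", "app": "CalendarSync", "user": "bob@corp.com"},
    {"tenant": "eu-west", "app": "MailMergePro", "user": "carol@corp.com"},
]

def apps_by_tenant(grants):
    """Build an inventory: which third-party apps are connected where."""
    inventory = defaultdict(set)
    for g in grants:
        inventory[g["app"]].add(g["tenant"])
    return inventory

inventory = apps_by_tenant(grants)
for app, tenants in sorted(inventory.items()):
    print(f"{app}: connected in {len(tenants)} tenant(s)")
```

An app that shows up in multiple tenants is a single point of failure worth reviewing first.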
3 — Map the permissions and access levels requested by the connected third party apps
Once the security team knows which third-party apps are connected, it should map the permissions and the type of access that each third-party app has been given. From there, the team will be able to see which third-party apps present a higher risk, based on the scopes granted. Being able to differentiate between an app that can read and an app that can write will help the security team prioritize which needs to be handled first.
In addition, the security team should map which users granted these permissions. For example, a high-privileged user, someone who has sensitive documents in their workspace, who grants access to a third-party app can present a high risk to the company, one that needs to be remediated immediately.
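The read-versus-write triage described above can be sketched as a rough scoring rule. The scope strings and keyword markers here are illustrative assumptions; real scope catalogs are much larger and vendor-specific.

```python
# Keywords that typically indicate a write-capable scope (illustrative).
WRITE_MARKERS = ("write", "modify", "compose", "send", "admin")

def scope_risk(scope: str) -> str:
    """Rough triage: write-capable scopes outrank read-only ones."""
    name = scope.rsplit("/", 1)[-1].lower()
    if any(marker in name for marker in WRITE_MARKERS):
        return "high"
    if "readonly" in name or name.startswith("read"):
        return "low"
    return "medium"

def app_risk(scopes):
    """An app is as risky as its most powerful granted scope."""
    order = {"low": 0, "medium": 1, "high": 2}
    return max((scope_risk(s) for s in scopes), key=order.get, default="low")
```

A real SSPM tool would also weigh who granted the scope, but even this crude ranking separates the apps that can only look from the apps that can act.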
4 — Use an automated approach to handle third-party app access
SaaS Security Posture Management (SSPM) solutions can automate the discovery of third-party apps. The right SSPM solution, like Adaptive Shield, has built-in logic that maps out all the third-party apps with access to the organization’s SSPM-integrated apps. This visibility and oversight empowers security teams: whether a company has 100 or 600 apps, it can easily stay in control, monitor, and secure its SaaS stack.
The Bigger SaaS Security Picture
Third-party app access is just one component of the SaaS Security Posture Management picture.
Most existing cybersecurity solutions still do not offer adequate protection or a convenient way to monitor a company’s SaaS stack, let alone the communications between their known apps and platforms, leaving companies vulnerable and unable to effectively know or control which parties have access to sensitive corporate or personal data.
Organizations need to be able to see all the configurations and user permissions of each and every app, including all the third-party apps that have been granted access by users. This way, security teams can retain control of the SaaS stack, remediate any issues, block apps with excessive privileges, and mitigate their risk.
bleepingcomputer.com – Security researchers have discovered a new Microsoft Office zero-day vulnerability that is currently exploited in phishing attacks to install malware on devices simply by opening a Word document.
Tweeted by @precision_mats https://twitter.com/precision_mats/status/1531281874882019328
Verizon is dealing with an incident where a hacker captured a database containing company employee data, including the full names of workers as well as their ID numbers, email addresses, and phone numbers. Motherboard reported that the database is legitimate, as the anonymous hacker contacted them last week, and they were able to verify the data by calling some of the numbers.
“These employees are idiots,” the hacker told Motherboard via chat. The hacker is seeking $250,000 in exchange for not leaking the database and said they are in contact with Verizon.
A Verizon spokesperson contacted Motherboard confirming the incident, saying, “A fraudster recently contacted us threatening to release readily available employee directory information in exchange for payment from Verizon. We do not believe the fraudster has any sensitive information and we do not plan to engage with the individual further. As always, we take the security of Verizon data very seriously and we have strong measures in place to protect our people and systems.”
The hacker claims they nabbed the database by social engineering their way into remotely connecting to a Verizon employee’s computer. The hacker’s account, in an email sent to Vice, is that they posed as internal support, coerced the Verizon employee to allow remote access, and then launched a script that copied data from the computer.
The information that was stolen could still be harmful. If you’ve ever had to get support from a carrier over the phone, you might have had to deal with the different departments that handle activating your SIM card. If a hacker poses as an employee and spoofs their number as one from the database, they could continue to use social engineering for SIM swapping fraud. The technique has been used frequently over the years as attackers manipulated accounts through carriers like T-Mobile and AT&T to steal cryptocurrency or gain access to social media accounts, including one belonging to former Twitter CEO Jack Dorsey.
Four high severity vulnerabilities have been disclosed in a framework used by pre-installed Android System apps with millions of downloads.
The issues, now fixed by its Israeli developer MCE Systems, could have potentially allowed threat actors to stage remote and local attacks or be abused as vectors to obtain sensitive information by taking advantage of their extensive system privileges.
“As it is with many of pre-installed or default applications that most Android devices come with these days, some of the affected apps cannot be fully uninstalled or disabled without gaining root access to the device,” the Microsoft 365 Defender Research Team said in a report published Friday.
The weaknesses, which range from command-injection to local privilege escalation, have been assigned the identifiers CVE-2021-42598, CVE-2021-42599, CVE-2021-42600, and CVE-2021-42601, with CVSS scores between 7.0 and 8.9.
The vulnerabilities were discovered and reported in September 2021 and there is no evidence that the shortcomings are being exploited in the wild.
Microsoft didn’t disclose the complete list of apps that use the vulnerable framework in question, which is designed to offer self-diagnostic mechanisms to identify and fix issues impacting an Android device.
This also meant that the framework had broad access permissions, including to audio, camera, power, location, sensor data, and storage, to carry out its functions. Coupled with the issues identified in the service, Microsoft said this could permit an attacker to implant persistent backdoors and take control of the device.
Some of the affected apps are from large international mobile service providers such as Telus, AT&T, Rogers, Freedom Mobile, and Bell Canada.
Additionally, Microsoft recommends that users look out for the app package “com.mce.mceiotraceagent” — an app that may have been installed by mobile phone repair shops — and remove it from their phones if found.
The susceptible apps, although pre-installed by the phone providers, are also available on the Google Play Store and are said to have passed the app storefront’s automatic safety checks without raising any red flags, because the checks were not engineered to look for these issues, a gap that has since been rectified.