Microsoft assigns CVE to Snipping Tool bug, pushes patch to Store

Last week was aCropalypse week, where a bug in the Google Pixel image cropping app made headlines, and not just because it had a funky name.

(We formed the opinion that the name was a little bit OTT, but we admit that if we’d thought of it ourselves, we’d have wanted to use it for its word-play value alone, even though it turns out to be harder to say out loud than you might think.)

The bug was the kind of programming blunder that any coder could have made, but that many testers might have missed:

Image cropping tools are very handy when you’re on the road and you want to share an impulse photo, perhaps involving a cat, or an amusing screenshot, perhaps including a wacky posting on social media or a bizarre ad that popped up on a website.

But quickly-snapped pics or hastily-grabbed screenshots often end up including bits that you don’t want other people to see.

Sometimes, you want to crop an image because it simply looks better when you chop off any extraneous content, such as the graffiti-smeared bus stop on the left hand side.

Sometimes, however, you want to edit an image out of decency, such as cutting out details that could hurt your own (or someone else’s) privacy by revealing your location or situation unnecessarily.

The same is true for screenshots, where the extraneous content might include the content of your next-door browser tab, or the private email directly below the amusing one, which you need to cut out in order to stay on the right side of privacy regulations.

Be aware before you share

Simply put, one of the primary reasons for cropping photos and screenshots before you send them out is to get rid of content that you don’t want to share.

So, like us, you probably assumed that if you chopped bits out of a photo or screenshot and hit [Save], then even if the app kept a record of your edits so you could revert them later and recover the exact original…

…those chopped-off bits would not be included in any copies of the edited file that you chose to post online, email to your chums, or send to a friend.

The Google Pixel Markup app, however, didn’t quite do that, leading to a bug denoted CVE-2023-21036.

When you saved a modified image over the old one, and then opened it back up to check your changes, the new image would appear in its cropped form, because the cropped data would be correctly written over the start of the previous version.

Anyone testing the app itself, or opening the image to verify it “looked right now” would see its new content, and nothing more.

But the data written at the start of the old file would be followed by a special internal marker to say, “You can stop now; ignore any data hereafter”, followed entirely incorrectly by all the data that used to appear thereafter in the old version of the file.

As long as the new file was smaller than the old one (and when you chop the edges off an image, you expect the new version to be smaller), at least some chunks of the old image would escape at the end of the new file.

Traditional, well-behaved image viewers, including the very tool you just used to crop the file, would ignore the extra data, but deliberately-coded data recovery or snooping apps might not.
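
Neither Google’s nor Microsoft’s actual code is published, but the underlying blunder is easy to sketch in Java (the file name example.dat, the class name and the byte values below are made up purely for illustration): if you open an existing file for in-place writing instead of truncating it first, everything in the old file beyond the end of your new data quietly survives.

  import java.io.RandomAccessFile;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.util.Arrays;

  public class CroppedButNotTruncated {
      public static void main(String[] args) throws Exception {
          Path img = Path.of("example.dat");

          // Pretend this is the original, uncropped file: 1000 bytes of image data.
          byte[] original = new byte[1000];
          Arrays.fill(original, (byte) 'X');
          Files.write(img, original);                  // file is now 1000 bytes long

          // Unsafe save: open the existing file for writing WITHOUT truncating it,
          // then overwrite only the start with the smaller "cropped" version.
          byte[] cropped = new byte[200];
          Arrays.fill(cropped, (byte) 'Y');
          try (RandomAccessFile raf = new RandomAccessFile(img.toFile(), "rw")) {
              raf.write(cropped);                      // bytes 200..999 of the old file remain
          }
          System.out.println("After unsafe save: " + Files.size(img) + " bytes");  // 1000

          // Safe save: Files.write() truncates the file by default, so nothing
          // from the old version survives past the end of the new data.
          Files.write(img, cropped);
          System.out.println("After safe save:   " + Files.size(img) + " bytes");  // 200
      }
  }

In both the Android and the Windows cases, the fix amounts to the second pattern: truncate (or replace outright) the old file before writing the new content.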

Pixel problems repeated elsewhere

Google’s buggy Pixel phones were apparently patched in the March 2023 Android update, and although some Pixel devices received this month’s updates two weeks later than usual, all Pixels should now be up-to-date, or can be force-updated if you perform a manual update check.

But this class of bug, namely leaving data behind in an old file that you overwrite by mistake, instead of truncating its old content first, could in theory appear in almost any app with a [Save] feature, notably including other image-cropping and screenshot-trimming apps.

And it wasn’t long before both the Windows 11 Snipping Tool and the Windows 10 Snip & Sketch app were found to have the same flaw:

You could crop a file quickly and easily, but if you did a [Save] over the old file and not a [Save As] to a new file, where there would be no previous content to leave behind, a similar fate would await you.

The low-level causes of the bugs are different, not least because Google’s software is a Java-style app and uses Java libraries, while Microsoft’s apps are written in C++ and use Windows libraries, but the leaky side-effects are identical.

As our friend and colleague Chester Wisniewski quipped in last week’s podcast, “I suspect there may be a lot of talks in August in Las Vegas discussing this in other applications.” (August is the season of the Black Hat and DEF CON events.)

What to do?

The good news for Windows users is that Microsoft has now assigned the identifier CVE-2023-28303 to its own flavour of the aCropalypse bug, and has uploaded patched versions of the affected apps to the Microsoft Store.

In our own Windows 11 Enterprise Edition install, Windows Update showed nothing new or patched that we needed since last week, but manually updating the Snipping Tool app via the Microsoft Store updated us from 11.2302.4.0 to 11.2302.20.0.

We’re not sure what version number you’ll see if you open the buggy Windows 10 Snip & Sketch app, but after updating from the Microsoft Store, you should be looking for 10.2008.3001.0 or later.

Microsoft considers this a low-severity bug, on the grounds that “successful exploitation requires uncommon user interaction and several factors outside of an attacker’s control.”

We’re not sure we quite agree with that assessment, because the problem is not that an attacker might trick you into cropping an image in order to steal parts of it. (Surely they’d just talk you into sending them the whole file without the hassle of cropping it first?)

The problem is that you might follow exactly the workflow that Microsoft considers “uncommon” as a security precaution before sharing a photo or screenshot, only to find that you unintentionally leaked into a public space the very data you thought you had chopped out.

After all, the Microsoft Store’s own pitch for the Snipping Tool describes it as a quick way to “save, paste or share with other apps.”

In other words: Don’t delay, patch it today.

It only takes a moment.


Dangerous Android phone 0-day bugs revealed – patch or work around them now!

Google has just revealed a fourfecta of critical zero-day bugs affecting a wide range of Android phones, including some of its own Pixel models.

These bugs are a bit different from your usual Android vulnerabilities, which typically affect the Android operating system (which is Linux-based) or the applications that come along with it, such as Google Play, Messages or the Chrome browser.

The four bugs we’re talking about here are known as baseband vulnerabilities, meaning that they exist in the special mobile phone networking firmware that runs on the phone’s so-called baseband chip.

Strictly speaking, baseband is a term used to describe the primary, or lowest-frequency, parts of an individual radio signal, in contrast to a broadband signal, which (very loosely) consists of multiple baseband signals adjusted into numerous adjacent frequency ranges and transmitted at the same time in order to increase data rates, reduce interference, share frequency spectrum more widely, complicate surveillance, or all of the above.

The word baseband is also used metaphorically to describe the hardware chip and the associated firmware that is used to handle the actual sending and receiving of radio signals in devices that can communicate wirelessly.

(Somewhat confusingly, the word baseband typically refers to the subsystem in a phone that handles connecting to the mobile telephone network, but not to the chips and software that handle Wi-Fi or Bluetooth connections.)

Your mobile phone’s modem

Baseband chips typically operate independently of the “non-telephone” parts of your mobile phone.

They essentially run a miniature operating system of their own, on a processor of their own, and work alongside your device’s main operating system to provide mobile network connectivity for making and answering calls, sending and receiving data, roaming on the network, and so on.

If you’re old enough to have used dialup internet, you’ll remember that you had to buy a modem (short for modulator-and-demodulator), which you plugged either into a serial port on the back of your PC or into an expansion slot inside it; the modem would connect to the phone network, and your PC would connect to the modem.

Well, your mobile phone’s baseband hardware and software is, very simply, a built-in modem, usually implemented as a sub-component of what’s known as the phone’s SoC, short for system-on-chip.

(You can think of an SoC as a sort of “integrated integrated circuit”, where separate electronic components that used to be interconnected by mounting them in close proximity on a motherboard have been integrated still further by combining them into a single chip package.)

In fact, you’ll still see baseband processors referred to as baseband modems, because they still handle the business of modulating and demodulating the sending and receiving of data to and from the network.

As you can imagine, this means that your mobile device isn’t just at risk from cybercriminals via bugs in the main operating system or one of the apps you use…

…but also at risk from security vulnerabilities in the baseband subsystem.

Sometimes, baseband flaws allow an attacker not only to break into the modem itself from the internet or the phone network, but also to break into the main operating system (moving laterally, or pivoting, as the jargon calls it) from the modem.

But even if the crooks can’t get past the modem and onwards into your apps, they can almost certainly do you an enormous amount of cyberharm just by implanting malware in the baseband, such as sniffing out or diverting your network data, snooping on your text messages, tracking your phone calls, and more.

Worse still, you can’t just look at your Android version number or the version numbers of your apps to check whether you’re vulnerable or patched, because the baseband hardware you’ve got, and the firmware and patches you need for it, depend on your physical device, not on the operating system you’re running on it.

Even devices that are in all obvious respects “the same” – sold under the same brand, using the same product name, with the same model number and outward appearance – might turn out to have different baseband chips, depending on which factory assembled them or which market they were sold into.

The new zero-days

Google’s recently discovered bugs are described as follows:

[Bug number] CVE-2023-24033 (and three other vulnerabilities that have yet to be assigned CVE identities) allowed for internet-to-baseband remote code execution. Tests conducted by [Google] Project Zero confirm that those four vulnerabilities allow an attacker to remotely compromise a phone at the baseband level with no user interaction, and require only that the attacker know the victim’s phone number.

With limited additional research and development, we believe that skilled attackers would be able to quickly create an operational exploit to compromise affected devices silently and remotely.

In plain English, an internet-to-baseband remote code execution hole means that criminals could inject malware or spyware over the internet into the part of your phone that sends and receives network data…

…without getting their hands on your actual device, luring you to a rogue website, persuading you to install a dubious app, waiting for you to click the wrong button in a pop-up warning, giving themselves away with a suspicious notification, or tricking you in any other way.

18 bugs, four kept semi-secret

There were 18 bugs in this latest batch, reported by Google in late 2022 and early 2023.

Google says that it is disclosing their existence now because the agreed time has passed since they were reported (Google’s timeframe is usually 90 days, or close to it), but for the four bugs above, the company is not disclosing any details, noting that:

Due to a very rare combination of level of access these vulnerabilities provide and the speed with which we believe a reliable operational exploit could be crafted, we have decided to make a policy exception to delay disclosure for the four vulnerabilities that allow for internet-to-baseband remote code execution

In plain English: if we were to tell you how these bugs worked, we’d make it far too easy for cybercriminals to start doing really bad things to lots of people by sneakily implanting malware on their phones.

In other words, even Google, which has attracted controversy in the past for refusing to extend its disclosure deadlines and for openly publishing proof-of-concept code for still-unpatched zero-days, has decided to follow the spirit of its Project Zero responsible disclosure process, rather than sticking to the letter of it.

Google’s argument for generally sticking to the letter and not the spirit of its disclosure rules isn’t entirely unreasonable. By using an inflexible algorithm to decide when to reveal details of unpatched bugs, even if those details could be used for evil, the company argues that complaints of favouritism and subjectivity can be avoided, such as, “Why did company X get an extra three weeks to fix their bug, while company Y did not?”

What to do?

The problem with bugs that are announced but not fully disclosed is that it’s difficult to answer the questions, “Am I affected? And if so, what should I do?”

Apparently, Google’s research focused on devices that used a Samsung Exynos-branded baseband modem component, but that doesn’t necessarily mean that the system-on-chip would identify or brand itself as an Exynos.

For example, Google’s recent Pixel devices use Google’s own system-on-chip, branded Tensor, but both the Pixel 6 and Pixel 7 are vulnerable to these still-semi-secret baseband bugs.

As a result, we can’t give you a definitive list of potentially affected devices, but Google reports (our emphasis):

Based on information from public websites that map chipsets to devices, affected products likely include:

  • Mobile devices from Samsung, including those in the S22, M33, M13, M12, A71, A53, A33, A21s, A13, A12 and A04 series;
  • Mobile devices from Vivo, including those in the S16, S15, S6, X70, X60 and X30 series;
  • The Pixel 6 and Pixel 7 series of devices from Google; and
  • Any vehicles that use the Exynos Auto T5123 chipset.

Google says that the baseband firmware in both the Pixel 6 and Pixel 7 was patched as part of the March 2023 Android security updates, so Pixel users should ensure they have the latest patches for their devices.

For other devices, different vendors may take different lengths of time to ship their updates, so check with your vendor or mobile provider for details.

In the meantime, these bugs can apparently be sidestepped in your device settings, if you:

  • Turn off Wi-Fi calling.
  • Turn off Voice-over-LTE (VoLTE).

In Google’s words, “turning off these settings will remove the exploitation risk of these vulnerabilities.”

If you don’t need or use these features, you may as well turn them off anyway until you know for sure what modem chip is in your phone and if it needs an update.

After all, even if your device turns out to be invulnerable or already patched, there’s no downside to not having things you don’t need.




Firefox 111 patches 11 holes, but not 1 zero-day among them…

Heard of cricket (the sport, not the insect)?

It’s much like baseball, except that batters can hit the ball wherever they like, including backwards or sideways; bowlers can hit the batter with the ball on purpose (within certain safety limits, of course – it just wouldn’t be cricket otherwise) without kicking off a 20-minute all-in brawl; there’s almost always a break in the middle of the afternoon for tea and cake; and you can score six runs at a time as long as you hit the ball high and far enough (seven if the bowler makes a mistake as well).

Well, as cricket enthusiasts know, 111 runs is a superstitious score, considered inauspicious by many – the cricketer’s equivalent of Macbeth to an actor.

It’s known as a Nelson, though nobody actually seems to know why.

Today therefore sees Firefox’s Nelson release, with version 111.0 coming out, but there doesn’t seem to be anything inauspicious about this one.

Eleven individual patches, and two batches-of-patches

As usual, there are numerous security patches in the update, including Mozilla’s usual combo-CVE vulnerability numbers for potentially exploitable bugs that were found automatically and patched without waiting to see if a proof-of-concept (PoC) exploit was possible:

  • CVE-2023-28176: Memory safety bugs fixed in Firefox 111 and Firefox ESR 102.9. These bugs were shared between the current version (which includes new features) and the ESR version, short for extended support release (security fixes applied, but with new features frozen since version 102, nine releases ago).
  • CVE-2023-28177: Memory safety bugs fixed in Firefox 111 only. These bugs almost certainly only exist in new code that brought in new features, given that they didn’t show up in the older ESR codebase.

These bags-of-bugs have been rated High rather than Critical.

Mozilla admits that “we presume that with enough effort some of these could have been exploited to run arbitrary code”, but no one has yet figured out how to do so, or even if such exploits are feasible.

None of the other eleven CVE-numbered bugs this month were worse than High; three of them apply to Firefox for Android only; and no one has yet (so far as we know) come up with a PoC exploit that shows how to abuse them in real life.

Two notably interesting vulnerabilities appear amongst the 11, namely:

  • CVE-2023-28161: One-time permissions granted to a local file were extended to other local files loaded in the same tab. With this bug, if you opened a local file (such as downloaded HTML content) that wanted access, say, to your webcam, then any other local file you opened afterwards would magically inherit that access permission without asking you. As Mozilla noted, this could lead to trouble if you were looking through a collection of items in your download directory – the access permission warnings you’d see would depend on the order in which you opened the files.
  • CVE-2023-28163: Windows Save As dialog resolved environment variables. This is another keen reminder to sanitise thine inputs, as we like to say. In Windows commands, some character sequences are treated specially, such as %USERNAME%, which gets converted to the name of the currently logged-on user, or %PUBLIC%, which denotes a shared directory, usually in C:\Users. A sneaky website could use this as a way to trick you into seeing and approving the download of a filename that looks harmless but lands in a directory you wouldn’t expect (and where you might not later realise it had ended up), as illustrated by the sketch just after this list.
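
Mozilla’s advisory doesn’t publish the affected dialog code, but you can get a feel for the problem from this small, entirely hypothetical Java sketch, which expands %NAME% tokens roughly the way the buggy Save As handling resolved them (the expand() helper and the sample filename are invented for illustration; this is not Firefox’s code or the real Windows API):

  import java.util.regex.Matcher;
  import java.util.regex.Pattern;

  public class ExpandLikeWindows {
      // Replace %NAME% tokens with environment variable values, if they exist.
      static String expand(String filename) {
          Matcher m = Pattern.compile("%([^%]+)%").matcher(filename);
          StringBuilder out = new StringBuilder();
          while (m.find()) {
              String value = System.getenv(m.group(1));
              m.appendReplacement(out, Matcher.quoteReplacement(
                  value != null ? value : m.group(0)));   // leave unknown names untouched
          }
          m.appendTail(out);
          return out.toString();
      }

      public static void main(String[] args) {
          // A suggested filename that looks harmless in the download prompt...
          String suggested = "%PUBLIC%\\report-%USERNAME%.html";
          // ...but on Windows expands to a concrete path such as
          // C:\Users\Public\report-yourname.html before the file is saved.
          System.out.println(expand(suggested));
      }
  }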

What to do?

Most Firefox users will get the update automatically, typically after a random delay to stop everyone’s computer downloading at the same moment…

…but you can avoid the wait by manually using Help > About (or Firefox > About Firefox on a Mac) on a laptop, or by forcing an App Store or Google Play update on a mobile device.

(If you’re a Linux user and Firefox is supplied by the maker of your distro, do a system update to check for the availability of the new version.)


SHEIN shopping app goes rogue, grabs price and URL data from your clipboard

Chinese “fast fashion” brand SHEIN is no stranger to controversy, not least because of a 2018 data breach that its then-parent company Zoetop failed to spot, let alone to stop, and then handled dishonestly.

As Letitia James, Attorney General of the State of New York, said in a statement at the end of 2022:

SHEIN and [sister brand] ROMWE’s weak digital security measures made it easy for hackers to shoplift consumers’ personal data. […]

[P]ersonal data was stolen and Zoetop tried to cover it up. Failing to protect consumers’ personal data and lying about it is not trendy. SHEIN and ROMWE must button up their cybersecurity measures to protect consumers from fraud and identity theft.

At the time of the New York court judgment, we expressed surprise at the apparently modest $1.9 million fine imposed, considering the reach of the business:

Frankly, we’re surprised that Zoetop (now SHEIN Distribution Corporation in the US) got off so lightly, considering the size, wealth and brand power of the company, its apparent lack of even basic precautions that could have prevented or reduced the danger posed by the breach, and its ongoing dishonesty in handling the breach after it became known.


Snoopy app code now revealed

What we didn’t know, even as this case was grinding through the New York judicial system, was that SHEIN was adding some curious (and dubious, if not actually malicious) code to its Android app that turned it into a basic sort of “marketing spyware tool”.

That news emerged earlier this week when Microsoft researchers published a retrospective analysis of version 7.9.2 of SHEIN’s Android app, from early 2022.

Although that version of the app has been updated many times since Microsoft reported its dubious behaviour, and although Google has now added some mitigations into Android (see below) to help you spot apps that try to get away with SHEIN’s sort of trickery…

…this story is a strong reminder that even apps that are “vetted and approved” into Google Play may operate in devious ways that undermine your privacy and security – as in the case of those rogue “Authenticator” apps we wrote about two weeks ago.



The Microsoft researchers didn’t say what piqued their interest in this particular SHEIN app.

For all we know, they may simply have picked a representative sample of apps with high download counts and searched their decompiled code automatically for intriguing or unexpected calls to system functions in order to create a short list of interesting targets.

In the researchers’ own words:

We first performed a static analysis of the app to identify the relevant code responsible for the behavior. We then performed a dynamic analysis by running the app in an instrumented environment to observe the code, including how it read the clipboard and sent its contents to a remote server.

SHEIN’s app is designated as having 100M+ downloads, which is a fair way below super-high-flying apps such as Facebook (5B+), Twitter (1B+) and TikTok (1B+), but up there with other well-known and widely-used apps such as Signal (100M+) and McDonald’s (100M+).

Digging into the code

The app itself is enormous, weighing in at 93 MBytes in APK form (an APK file, short for Android Package, is essentially a compressed ZIP archive) and 194 MBytes when unpacked and extracted.
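
Because an APK really is just a ZIP archive under the hood, you can peek inside one with nothing more than standard ZIP tooling; here’s a tiny Java sketch (the default filename app.apk is simply a placeholder for whatever APK you happen to have) that lists the compiled Dalvik code and signing metadata entries inside the package:

  import java.util.zip.ZipEntry;
  import java.util.zip.ZipFile;

  public class ListApkEntries {
      public static void main(String[] args) throws Exception {
          String apkPath = (args.length > 0) ? args[0] : "app.apk";   // placeholder name
          try (ZipFile apk = new ZipFile(apkPath)) {
              apk.stream()                              // every entry in the archive
                 .map(ZipEntry::getName)
                 .filter(n -> n.endsWith(".dex") || n.startsWith("META-INF/"))
                 .forEach(System.out::println);         // e.g. classes.dex, META-INF/CERT.RSA
          }
      }
  }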

It includes a sizeable chunk of library code in a set of packages with a top-level name of com.zzkko (ZZKKO was the original name of SHEIN), including a set of utility routines in a package called com.zzkko.base.util.

Those base utilities include a function called PhoneUtil.getClipboardTxt() that will grab the clipboard using standard Android coding tools imported from android.content.ClipboardManager.

Searching the SHEIN/ZZKKO code for calls to this utility function shows it’s used in just one place, a package intriguingly named com.zzkko.util.MarketClipboardPhaseLinker.

As explained in Microsoft’s analysis, this code, when triggered, reads in whatever happens to be in the clipboard, and then tests to see if it contains both :// and $, as you might expect if you’d copied and pasted a search result involving someone else’s website and a price in dollars.
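
Microsoft’s report shows the decompiled code itself; purely as an illustration (this is not SHEIN’s code, the method name below is invented, and on Android the text would come from ClipboardManager.getPrimaryClip() rather than a plain string parameter), the test boils down to something like this:

  public class ClipboardCheckSketch {
      // On Android the app obtains the text via ClipboardManager.getPrimaryClip();
      // here it is just a method parameter, so the check itself can run anywhere.
      static boolean looksLikeUrlWithPrice(String clip) {
          return clip != null && clip.contains("://") && clip.contains("$");
      }

      public static void main(String[] args) {
          System.out.println(looksLikeUrlWithPrice("https://example.com/widget $9.99"));  // true
          System.out.println(looksLikeUrlWithPrice("just some notes to self"));           // false
      }
  }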

If the test succeeds, then the code calls a function compiled into the package with the unimaginative (and presumably auto-generated) name k(), sending it a copy of the snooped-on text as a parameter.

As you can see, even if you’re not a programmer, that uninteresting function k() packages the sniffed-out clipboard data into a POST request, which is a special sort of HTTP connection that tells the server, “This is not a traditional GET request where I’m asking you to send me something, but an upload request in which I’m sending data to you.”

The POST request in this case is uploaded to the URL https://api-service.shein.com/marketing/tinyurl/phrase, with HTTP content that would typically look something like this:


 POST /marketing/tinyurl/phrase
 Host: api-service.shein.com
 . . .
 Content-Type: application/x-www-form-urlencoded

 phrase=...encoded contents of the parameter passed to k()...

As Microsoft graciously noted in its report:

Although we’re not aware of any malicious intent by SHEIN, even seemingly benign behaviors in applications can be exploited with malicious intent. Threats targeting clipboards can put any copied and pasted information at risk of being stolen or modified by attackers, such as passwords, financial details, personal data, cryptocurrency wallet addresses, and other sensitive information.

Dollar signs in your clipboard don’t invariably denote price searches, not least because the majority of countries in the world have currencies that use different symbols, so a wide range of personal information could be siphoned off this way…

…but even if the data grabbed did indeed come from an innocent and unimportant search that you did elsewhere, it would still be no one else’s business but yours.

URL encoding is generally used when you want to transmit URLs as data, so they can’t be mixed up with “live” URLs that are supposed to be visited, and so that they won’t contain any illegal characters. For example, spaces aren’t allowed in URLs, so they’re converted in URL data into %20, where the percent sign means “special byte follows as two hexadecimal characters”, and 20 is the hexadecimal ASCII code for space (32 in decimal). Likewise, a special sequence such as :// will be translated into %3A%2F%2F, because a colon is ASCII 0x3A (58 in decimal) and a forward slash is 0x2F (47 in decimal). The dollar sign comes out as %24 (36 in decimal).
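
You can reproduce those escape sequences with Java’s standard URLEncoder class, which performs exactly this sort of form-style encoding (the sample strings are ours; note the one quirk that spaces come out as + rather than %20, although both decode back to a space at the other end):

  import java.net.URLEncoder;
  import java.nio.charset.StandardCharsets;

  public class UrlEncodingDemo {
      public static void main(String[] args) {
          // ':' becomes %3A, '/' becomes %2F and '$' becomes %24, as described above
          System.out.println(URLEncoder.encode("://",   StandardCharsets.UTF_8));   // %3A%2F%2F
          System.out.println(URLEncoder.encode("$9.99", StandardCharsets.UTF_8));   // %249.99

          // Form encoding turns spaces into '+' rather than %20
          System.out.println(URLEncoder.encode("price is $9.99", StandardCharsets.UTF_8));
          // prints: price+is+%249.99
      }
  }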

What to do?

According to Microsoft, Google’s response to this kind of behaviour in otherwise-trusted apps – what you might think of as “unintentional betrayal” – was to beef up Android’s clipboard handling code.

Presumably, making clipboard access permissions very much stricter and more restrictive would have been a better solution in theory, as would being more rigorous with Play Store app vetting, but we’re assuming that these responses were considered too intrusive in practice.

Loosely speaking, the more recent the version of Android you have (or can upgrade to), the more restrictively the clipboard is managed.

Apparently, in Android 10 and later, an app can’t read the clipboard at all unless it’s running actively in the foreground.

Admittedly, this doesn’t help much, but it does stop apps you’ve left idle and perhaps even forgotten about from snooping on your copying-and-pasting all the time.

Android 12 and later will pop up a warning message to say “XYZ app pasted from your clipboard”, but apparently this warning only appears the first time it happens for any app (which might be when you expected it), not on subsequent clipboard grabs (when you didn’t).

And Android 13 automatically wipes out the clipboard every so often (we’re not sure how often that actually is) to stop data you might have forgotten about lying around indefinitely.

Given that Google apparently doesn’t intend to control clipboard access as strictly as you might hope, we’ll repeat Microsoft’s advice here, which runs along the lines of, “If you see something, say something… and vote with your feet, or at least your fingers”:

Consider removing applications with unexpected behaviors, such as clipboard access […] notifications, and report the behavior to the vendor or app store operator.

If you have a fleet of company mobile devices, and you haven’t yet adopted some form of mobile device management and anti-malware protection, why not take a look at what’s on offer now?



S3 Ep123: Crypto company compromise kerfuffle [Audio + Text]

LEARNING FROM OTHERS

The first search warrant for computer storage. GoDaddy breach. Twitter surprise. Coinbase kerfuffle. The hidden cost of success.


With Doug Aamoth and Paul Ducklin

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG. Crypto company code captured, Twitter’s pay-for-2FA play, and GoDaddy breached.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin

And it is episode 123, Paul.

We made it!


DUCK. We did!

Super, Doug!

I liked your alliteration at the beginning…


DOUG. Thank you for that.

And you’ve got a poem coming up later – we’ll wait with bated breath for that.


DUCK. I love it when you call them poems, Doug, even though they really are just doggerel.

But let’s call it a poem…


DOUG. Yes, let’s call it a poem.


DUCK. All two lines of it… [LAUGHS]


DOUG. Exactly, that’s all you need.

As long as it rhymes.

Let’s start with our Tech History segment.

This week, on 19 February 1971, what is believed to be the first warrant in the US to search a computer storage device was issued.

Evidence of theft of trade secrets led to the search of computer punch cards, computer printout sheets, and computer memory bank and other data storage devices magnetically imprinted with the proprietary computer program.

The program in question, a remote plotting program, was valued at $15,000, and it was ultimately determined that a former employee who still had access to the system had dialled in and usurped the code, Paul.


DUCK. I was amazed when I saw that, Doug, given that we’ve spoken recently on the podcast about intrusions and code thefts in many cases.

What was it… LastPass? GoDaddy? Reddit? GitHub?

It really is a case of plus ça change, plus c’est la même chose, isn’t it?

They even recognised, way back then, that it would be prudent to do the search (at least of the office space) at night, when they knew that the systems would be running but the suspect probably wouldn’t be there.

And the warrant actually states that “experts have made us aware that computer storage can be wiped within minutes”.


DOUG. Yes, it’s a fascinating case.

This guy that went and worked for a different company, still had access to the previous company, and dialled into the system, and then accidentally, it seems, printed out punch cards at his old company while he was printing out paper of the code at his new company.

And the folks at the old company were like, “What’s going on around here?”

And then that’s what led to the warrant and ultimately the arrest.


DUCK. And the other thing I noticed, reading through the warrant, that the cop was able to put in there…

…is that he had found a witness at the old company who confirmed that this chap who’d moved to the new company had let slip, or bragged about, how he could still get in.

So it has all the hallmarks of a contemporary hack, Doug!

[A] the intruder made a blunder which led to the attack being spotted, [B] didn’t cover his tracks well enough, and [C] he’d been bragging about his haxxor skills beforehand. [LAUGHS]

As you say, that ultimately led to a conviction, didn’t it, for theft of trade secrets?

Oh, and the other thing of course, that the victim company didn’t do is…

…they forgot to close off access to former staff the day they left.

Which is still a mistake that companies make today, sadly.


DOUG. Yes.

Aside from the punch cards, this could be a modern day story.


DUCK. Yes!


DOUG. Well, let’s bring things into the modern, and talk about GoDaddy.

It has been hit with malware, and some of the customer sites have been poisoned.

This happened back in December 2022.

They didn’t come out and say in December, “Hey, this is happening.”

GoDaddy admits: Crooks hit us with malware, poisoned customer websites


DUCK. Yes, it did seem a bit late, although you could say, “Better late than never.”

And not so much to go into bat for GoDaddy, but at least to explain some of the complexity of looking into this…

… it seems that the malware that was implanted three months ago was designed to trigger intermittent changes to the behaviour of customers’ hosted web servers.

So it wasn’t as though the crooks came in, changed all the websites, made a whole load of changes that would show up in audit logs, got out, and then tried to profit.

It’s a little bit more like what we see in the case of malvertising, which is where you poison one of the ad networks that a website relies on, for some of the content that it sometimes produces.

That means every now and then someone gets hit up with malware when they visit the site.

But when researchers go back to have a look, it’s really hard for them to reproduce the behaviour.

[A] it doesn’t happen all the time, and [B] it can vary, depending on who you are, where you’re coming from, what browser you’re using…

…or even, of course, if the crooks recognise that you’re probably a malware researcher.

So I accept that it was tricky for GoDaddy, but as you say, it might have been nice if they had let people know back in December that there had been this “intermittent redirection” of their websites.


DOUG. Yes, they say the “malware intermittently redirected random customer websites to malicious sites”, which is hard to track down if it’s random.

But this wasn’t some sort of really advanced attack.

They were redirecting customer sites to other sites where the crooks were making money off of it…


DUCK. [CYNICAL] I don’t want to disagree with you, Doug, but according to GoDaddy, this may be part of a multi-year campaign by a “sophisticated threat actor”.


DOUG. [MOCK ASTONISHED] Sophisticated?


DUCK. So the S-word got dropped in there all over again.

All I’m hoping is that, given that there’s not much we can advise people about now because we have no indicators of compromise, and we don’t even know whether, at this remove, GoDaddy has been able to come up with what people could go and look for to see if this happened to them…

…let’s hope that when their investigation (which they’ve told the SEC, the Securities and Exchange Commission, they’re still conducting) finishes, there’ll be a bit more information, and that it won’t take another three months.

Given not only that the redirects happened three months ago, but also that it looks as though this may be down to essentially one cybergang that’s been messing around inside their network for as much as three years.


DOUG. I believe I say this every week, but, “We will keep an eye on that.”

All right, more changes afoot at Twitter.

If you want to use two-factor authentication, you can use text messaging, you can use an authenticator app on your phone, or you can use a hardware token like a Yubikey.

Twitter has decided to charge for text-messaging 2FA, saying that it’s not secure.

But as we also know, it costs a lot to send text messages to phones all over the world in order to authenticate users logging in, Paul.

Twitter tells users: Pay up if you want to keep using insecure 2FA


DUCK. Yes, I was a little mixed up by this.

The report, reasonably enough, says, “We’ve decided, essentially, that text-message based, SMS-based 2FA just isn’t secure enough”…

…because of what we’ve spoken about before: SIM swapping.

That’s where crooks go into a mobile phone shop and persuade an employee at the shop to give them a new SIM, but with your number on it.

So SIM swapping is a real problem, and it’s what caused the US government, via NIST (the National Institute of Standards and Technology), to say, “We’re not going to support this for government-based logins anymore, simply because we don’t feel we’ve got enough control over the issuing of SIM cards.”

Twitter, bless their hearts (Reddit did it five years ago), said it’s not secure enough.

But if you buy a Twitter Blue badge, which you’d imagine implies that you’re a more serious user, or that you want to be recognised as a major player…

…you can keep on using the insecure way of doing it.

Which sounds a little bit weird.

So I summarised it in the aforementioned poem, or doggerel, as follows:


  Using texts is insecure 
    for doing 2FA. 
  So if you want to keep it up, 
    you're going to have to pay.

DOUG. Bravo!


DUCK. I don’t quite follow that.

Surely if it’s so insecure that it’s dangerous for the majority of us, even lesser users whose accounts are perhaps not so valuable to crooks…

…surely the very people who should at least be discouraged from carrying on using SMS-based 2FA would be the Blue badge holders?

But apparently not…


DOUG. OK, we have some advice here, and it basically boils down to: Whether or not you pay for Twitter Blue, you should consider moving away from text-based 2FA.

Use a 2FA app instead.


DUCK. I’m not as vociferously against SMS-based 2FA as most cybersecurity people seem to be.

I quite like its simplicity.

I like the fact that it does not require a shared secret that could be leaked by the other end.

But I am aware of the SIM-swapping risk.

And my opinion is, if Twitter genuinely thinks that its ecosystem is better off without SMS-based 2FA for the vast majority of people, then it should really be working to get *everybody* off SMS-based 2FA…

…especially including Twitter Blue subscribers, not treating them as an exception.

That’s my opinion.

So whether you’re going to pay for Twitter Blue or not, whether you already pay for it or not, I suggest moving anyway, if indeed the risk is as big as Twitter makes it out to be.


DOUG. And just because you’re using app-based 2FA instead of SMS-based 2FA, that does not mean that you’re protected against phishing attacks.


DUCK. That’s correct.

It’s important to remember that the greatest defence you can get via 2FA against phishing attacks (where you go to a clone site and it says, “Now put in your username, your password, and your 2FA code”) is when you use a hardware token-based authenticator… like, as you said, a Yubikey, which you have to go and buy separately.

The idea there is that the authenticator doesn’t just print out a code that you then dutifully type in on your laptop, where it might be sent to the crooks anyway.

So, if you’re not using the hardware key-based authentication, then whether you get that magic six-digit code via SMS, or whether you look it up on your phone screen from an app…

…if all you’re going to do is type it into your laptop and potentially put it into a phishing site, then neither app-based nor SMS-based 2FA has any particular advantage over the other.


DOUG. Alright, be safe out there, people.

And our last story of the day is Coinbase.

Another day, another cryptocurrency exchange breached.

This time, by some good old fashioned social engineering, Paul?

Coinbase breached by social engineers, employee data stolen


DUCK. Yes.

Guess what came into the report, Doug?

I’ll give you a clue: “I spy, with my little eye, something beginning with S.”


DOUG. [IRONIC] Oh my gosh!

Was this another sophisticated attack?


DUCK. Sure was… apparently, Douglas.


DOUG. [MOCK SHOCKED] Oh, my!


DUCK. As I think we’ve spoken about before on the podcast, and as you can see written up in Naked Security comments, “‘Sophisticated’ usually translates as ‘better than us’.”

Not better than everybody, just better than us.

Because, as we pointed out in the video for last week’s podcast, no one wants to be seen as the person who fell for an unsophisticated attack.

But as we also mentioned, and as you explained very clearly in last week’s podcast, sometimes the unsophisticated attacks work…

…because they just seem so humdrum and normal that they don’t set off the alarm bells that something more diabolical might.

The nice thing that Coinbase did is they did provide what you might call some indicators of compromise, or what are known as TTPs (tactics, techniques and procedures) that the crooks followed in this attack.

Just so you can learn from the bad things that happened to them, where the crooks got in and apparently had a look around and got some source code, but hopefully nothing further than that.

So firstly: SMS based phishing.

You get a text message and it has a link in the text message and, of course, if you click it on your mobile phone, then it’s easier for the crooks to disguise that you’re on a fake site because the address bar is not so clear, et cetera, et cetera.

It seemed that that bit failed because they needed a two-factor authentication code that somehow the crooks weren’t able to get.

Now, we don’t know…

…did they forget to ask because they didn’t realise?

Did the employee who got phished ultimately realise, “This is suspicious. I’ll put in my password, but I’m not putting in the code.”

Or were they using hardware tokens, where the 2FA capture just didn’t work?

We don’t know… but that bit didn’t work.

Now, unfortunately, that employee didn’t, it seems, call it in and tell the security team, “Hey, I’ve just had this weird thing happen. I reckon someone was trying to get into my account.”

So, the crooks followed up with a phone call.

They called up this person (they had some contact details for them), and they got some information out of them that way.

The third telltale was that they were desperately trying to get this person to install a remote access program on their say-so.


DOUG. [GROAN]


DUCK. And, apparently, the programs suggested were AnyDesk and ISL Online.

It sounds as though the reason they tried both of those is that the person must have baulked, and in the end didn’t install either of them.

By the way, *don’t do that*… it’s a very, very bad idea.

A remote access tool basically bumps you out of your chair in front of your computer and screen, and plops the attacker right there, “from a distance.”

They move their mouse; it moves on your screen.

They type at their keyboard; it’s the same as if you were typing at your keyboard while logged in.

And then the last telltale that they had in all of this is presumably someone trying to be terribly helpful: “Oh, well, I need to investigate something in your browser. Could you please install this browser plugin?”

Whoa!

Alarm bells should go off there!

In this case, the plugin they wanted is a perfectly legitimate plugin for Chrome, I believe, called “Edit This Cookie”.

And it’s meant to be a way that you can go in and look at website cookies, and website storage, and delete the ones that you don’t want.

So if you go, “Oh, I didn’t realise I was still logged into Facebook, Twitter, YouTube, whatever, I want to delete that cookie”, that will stop your browser automatically reconnecting.

So it’s a good way of keeping track of how websites are keeping track of you.

But of course it’s designed so that you, the legitimate user of the browser, can basically spy on what websites are doing to try and spy on you.

But if a *crook* can get you to install that, when you don’t quite know what it’s all about, and they can then get you to open up that plugin, they can get a peek at your screen (and take a screenshot if they’ve got a remote access tool) of things like access tokens for websites.

Those cookies that are set because you logged in this morning, and the cookie will let you stay logged in for the whole day, or the whole week, sometimes even a whole month, so you don’t have to log in over and over again.

If the crook gets hold of one of those, then any username, password and two-factor authentication you have kind-of goes by the board.

And it sounds like Coinbase were doing some kind of XDR (extended detection and response).

At least, they claimed that someone in their security team noticed that there was a login for a legitimate user that came via a VPN (in other words, disguising your source) that they would not normally expect.

“That could be right, but it kind-of looks unusual. Let’s dig a bit further.”

And eventually they were actually able to get hold of the employee who’d fallen for the crooks *while they were being phished, while they were being socially engineered*.

The Coinbase team convinced the user, “Hey, look, *we’re* the good guys, they’re the bad guys. Break off all contact, and if they try and call you back, *don’t listen to them anymore*.”

And it seems that that actually worked.

So a little bit of intervention goes an awful long way!


DOUG. Alright, so some good news, a happy ending.

They made off with a little bit of employee data, but it could have been much, much worse, it sounds like?


DUCK. I think you’re right, Doug.

It could have been very much worse.

For example, if they got loads of access tokens, they could have stolen more source code; they could have got hold of things like code-signing keys; they could have got access to things that were beyond just the development network, maybe even customer account data.

They didn’t, and that’s good.


DOUG. Alright, well, let’s hear from one of our readers on this story.

Naked Security reader Richard writes:

Regularly and actively looking for hints that someone is up to no good in your network doesn’t convince senior management that your job is needed, necessary, or important.

Waiting for traditional cybersecurity detections is tangible, measurable and justifiable.

What say you, Paul?


DUCK. It’s that age-old problem that if you take precautions that are good enough (or better than good enough, and they do really, really well)…

…it kind-of starts undermining the arguments that you used for applying those precautions in the first place.

“Danger? What danger? Nobody’s fallen over this cliff for ten years. We never needed the fencing after all!”

I know it’s a big problem when people say, “Oh, X happened, then Y happened, so X must have caused Y.”

But it’s equally dangerous to say, “Hey, we did X because we thought it would prevent Y. Y stopped happening, so maybe we didn’t need X after all – maybe that’s all a red herring.”


DOUG. I mean, I think that XDR and MDR… those are becoming more popular.

The old “ounce of prevention is worth a pound of cure”… that might be catching on, and making its way upstairs to the higher levels of the corporation.

So we will hopefully keep fighting that good fight!


DUCK. I think you’re right, Doug.

And I think you could argue also that there may be regulatory pressures, as well, that make companies less willing to go, “You know what? Why don’t we just wait and see? And if we get a tiny little breach that we don’t have to tell anyone about, maybe we’ll get away with it.”

I think people are realising, “It’s much better to be ahead of the game, and not to get into trouble with the regulator if something goes wrong, than to take unnecessary risks for our own and our customers’ business.”

That’s what I hope, anyway!


DOUG. Indeed.

And thank you very much, Richard, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email [email protected], you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH. Stay secure!

[MUSICAL MODEM]