Twitter tells users: Pay up if you want to keep using insecure 2FA

Twitter has announced an intriguing change to its 2FA (two-factor authentication) system.

The change will take effect in about a month’s time, and can be summarised very simply in the following short piece of doggerel:

    Using texts is insecure
        for doing 2FA,
    So if you want to keep it up
        you're going to have to pay.

We said “about a month’s time” above because Twitter’s announcement is somewhat ambiguous with its dates-and-days calculations.

The product announcement bulletin, dated 2023-02-15, says that users with text-message (SMS) based 2FA “have 30 days to disable this method and enroll in another”.

If you include the day of the announcement in that 30-day period, this implies that SMS-based 2FA will be discontinued on Thursday 2023-03-16.

If you assume that the 30-day window starts at the beginning of the next full day, you’d expect SMS 2FA to stop on Friday 2023-03-17.

However, the bulletin says that “after 20 March 2023, we will no longer permit non-Twitter Blue subscribers to use text messages as a 2FA method. At that time, accounts with text message 2FA still enabled will have it disabled.”

If that’s strictly correct, then SMS-based 2FA ends at the start of Tuesday 21 March 2023 (in an undisclosed timezone), though our advice is to take the shortest possible interpretation so you don’t get caught out.
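If you want to check the competing readings for yourself, a couple of lines of Python date arithmetic reproduce the dates above (the announcement date is the one in Twitter’s bulletin; everything else follows from it):

```python
from datetime import date, timedelta

announcement = date(2023, 2, 15)   # date on Twitter's product bulletin

# Reading 1: the announcement day itself counts as day 1 of the 30.
last_day_inclusive = announcement + timedelta(days=29)   # 2023-03-16, a Thursday

# Reading 2: the 30-day window starts at the beginning of the next full day.
last_day_exclusive = announcement + timedelta(days=30)   # 2023-03-17, a Friday

print(last_day_inclusive, last_day_exclusive)
```

Neither reading gets you to the bulletin’s own cutoff of 20 March 2023, which is the ambiguity we’re grumbling about.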

SMS considered insecure

Simply put, Twitter has decided, as Reddit did a few years ago, that one-time security codes sent via SMS are no longer safe, because “unfortunately we have seen phone-number based 2FA be used – and abused – by bad actors.”

The primary objection to SMS-based 2FA codes is that determined cybercriminals have learned how to trick, cajole or simply to bribe employees in mobile phone companies to give them replacement SIM cards programmed with someone else’s phone number.

Legitimately replacing a lost, broken or stolen SIM card is obviously a desirable feature of the mobile phone network, otherwise you’d have to get a new phone number every time you changed SIM.

But the apparent ease with which some crooks have learned the social engineering skills to “take over” other people’s numbers, usually with the very specific aim of getting at their 2FA login codes, has led to bad publicity for text messages as a source of 2FA secrets.

This sort of criminality is known in the jargon as SIM-swapping, but it’s not strictly any sort of swap, given that a phone number can only be programmed into one SIM card at a time.

So, when the mobile phone company “swaps” a SIM, it’s actually an outright replacement, because the old SIM goes dead and won’t work any more.

Of course, if you’re replacing your own SIM because your phone got stolen, that’s a great security feature, because it restores your number to you, and ensures that the thief can’t make calls on your dime, or listen in to your messages and calls.

But if the tables are turned, and the crooks are taking over your SIM card illegally, this “feature” turns into a double liability, because the criminals start receiving your messages, including your login codes, and you can’t use your own phone to report the problem!

Is this really about security?

Is this change really about security, or is it simply Twitter aiming to simplify its IT operations and save money by cutting down on the number of text messages it needs to send?

We suspect that if the company really were serious about retiring SMS-based login authentication, it would compel all its users to switch to what it considers more secure forms of 2FA.

Ironically, however, users who pay for the Twitter Blue service, a group that seems to include high-profile or popular users whose accounts we suspect are much more attractive targets for cybercriminals…

…will be allowed to keep using the very 2FA process that’s not considered secure enough for everyone else.

SIM-swapping attacks are difficult for criminals to pull off in bulk, because a SIM swap often involves sending a “mule” (a cybergang member or “affiliate” who is willing or desperate enough to risk showing up in person to conduct a cybercrime) into a mobile phone shop, perhaps with fake ID, to try to get hold of a specific number.

In other words, SIM-swapping attacks often seem to be premeditated, planned and targeted, based on an account for which the criminals already know the username and password, and where they think that the value of the account they’re going to take over is worth the time, effort and risk of getting caught in the act.

So, if you do decide to go for Twitter Blue, we suggest that you don’t carry on using SMS-based 2FA, even though you’ll be allowed to, because you’ll just be joining a smaller pool of tastier targets for SIM-swapping cybergangs to attack.

Another important aspect of Twitter’s announcement is that although the company is no longer willing to send you 2FA codes via SMS for free, and cites security concerns as a reason, it won’t be deleting your phone number once it stops texting you.

Even though Twitter will no longer need your number, and even though you may have originally provided it on the understanding that it would be used specifically for the purpose of improving login security, you’ll need to remember to go in and delete it yourself.

What to do?

  • If you already are, or plan to become, a Twitter Blue member, consider switching away from SMS-based 2FA anyway. As mentioned above, SIM-swapping attacks tend to be targeted, because they’re tricky to do in bulk. So, if SMS-based login codes aren’t safe enough for the rest of Twitter, they’ll be even less safe for you once you’re part of a smaller, more select group of users.
  • If you are a non-Blue Twitter user with SMS 2FA turned on, consider switching to app-based 2FA instead. Please don’t simply let your 2FA lapse and go back to plain old password authentication if you’re one of the security-conscious minority who has already decided to accept the modest inconvenience of 2FA into your digital life. Stay out in front as a cybersecurity trend-setter!
  • If you gave Twitter your phone number specifically for 2FA messages, don’t forget to go and remove it. Twitter won’t be deleting any stored phone numbers automatically.
  • If you’re already using app-based authentication, remember that your 2FA codes are no more secure than SMS messages against phishing. App-based 2FA codes are generally protected by your phone’s lock code (because the code sequence is based on a “seed” number stored securely on your phone), and can’t be calculated on someone else’s phone, even if they put your SIM into their device. But if you accidentally reveal your latest login code by typing it into a fake website along with your password, you’ve given the crooks all they need anyway, whether that code came from an app or via a text message.
  • If your phone loses mobile service unexpectedly, investigate promptly in case you’ve been SIM-swapped. Even if you aren’t using your phone for 2FA codes, a crook who’s got control over your number can nevertheless send and receive messages in your name, and can make and answer calls while pretending to be you. Be prepared to show up at a mobile phone store in person, and take your ID and account receipts with you if you can.
  • If you haven’t set a PIN code on your phone SIM, consider doing so now. A thief who steals your phone probably won’t be able to unlock it, assuming you’ve set a decent lock code. Don’t make it easy for them simply to eject your SIM and insert it into another device to take over your calls and messages. You’ll only need to enter the PIN when you reboot your phone or power it up after turning it off, so the effort involved is minimal.

By the way, if you’re comfortable with SMS-based 2FA, and are worried that app-based 2FA is sufficiently “different” that it will be hard to master, remember that app-based 2FA codes generally require a phone too, so your login workflow doesn’t change much at all.

Instead of unlocking your phone, waiting for a code to arrive in a text message, and then typing that code into your browser…

…you unlock your phone, open your authenticator app, read off the code from there, and type that into your browser instead. (The numbers typically change every 30 seconds so they can’t be re-used.)

PS. The free Sophos Intercept X for Mobile security app (available for iOS and Android) includes an authenticator component that works with almost all online services that support app-based 2FA. (The system generally used is called TOTP, short for time-based one-time password.)

Sophos Authenticator with one account added. (Add as many as you want.)
The countdown timer shows you how long the current code is still valid for.
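If you’re curious how those ever-changing codes are computed, the TOTP system mentioned above is simple enough to sketch in a few lines of Python: an HMAC-SHA-1 of the current 30-second time slice, keyed with the secret seed stored on your phone, then truncated to six (or eight) digits. The Base32 key below is the published test key from RFC 6238, not a real account secret:

```python
# Minimal TOTP (RFC 6238) sketch: HMAC-SHA-1 over the 30-second counter,
# then "dynamic truncation" (RFC 4226) down to a short decimal code.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, now=None, digits=6, step=30):
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if now is None else now) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

RFC_TEST_KEY = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"   # ASCII "12345678901234567890"
print(totp(RFC_TEST_KEY, now=59, digits=8))         # RFC 6238 test vector: 94287082
```

Apart from the truncation details, that’s essentially all an authenticator app does: same seed, same clock, same code at both ends, which is why the server can verify your code without any message needing to be sent to your phone.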

VMWare user? Worried about “ESXi ransomware”? Check your patches now!

Cybersecurity news, in Europe at least, is currently dominated by stories about “VMWare ESXi ransomware” that is doing the rounds, literally and (in a cryptographic sense at least) figuratively.

CERT-FR, the French government’s computer emergency response team, kicked off what quickly turned into a mini-panic at the tail end of last week, with a bulletin entitled simply: Campagne d’exploitation d’une vulnérabilité affectant VMware ESXi (Cyberattack exploiting a VMWare ESXi vulnerability).

Although the headline focuses directly on the high-level danger, namely that any remotely exploitable vulnerability typically gives attackers a path into your network to do something, or perhaps even anything, that they like…

…the first line of the report gives the glum news that the something the crooks are doing in this case is what the French call rançongiciel.

You probably don’t need to know that logiciel is the French word for “software” to guess that the word stem ranço- came into both modern French (rançon) and English (ransom) from the Old French word ransoun, and thus that the word translates directly into English as ransomware.

Back in the Middle Ages, one occupational hazard for monarchs in time of war was getting captured by the enemy and held for a ransoun, typically under punitive terms that effectively settled the conflict in favour of the captors.

These days, of course, it’s your data that gets “captured” – though, perversely, the crooks don’t actually need to go to the trouble of carrying it off and holding it in a secure prison on their side of the border while they blackmail you.

They can simply encrypt it “at rest”, and offer to give you the decryption key in return for their punitive ransoun.

Ironically, you end up acting as your own jailer, with the crooks needing to hold onto just a few secret bytes (32 bytes, in this case) to keep your data locked up in your very own IT estate for as long as they like.

Good news and bad news

Here’s the good news: the current burst of attacks seems to be the work of a boutique gang of cybercriminals who are relying on two specific VMWare ESXi vulnerabilities that were documented by VMware and patched about two years ago.

In other words, most sysadmins would expect to have been ahead of these attackers since early 2021 at the latest, so this is very definitely not a zero-day situation.

Here’s the bad news: if you haven’t applied the needed patches in the extended time since they came out, you’re not only at risk of this specific ransomware attack, but also at risk of cybercrimes of almost any sort – data stealing, cryptomining, keylogging, database poisoning, point-of-sale malware and spam-sending spring immediately to mind.

Here’s some more bad news: the ransomware used in this attack, which you’ll see referred to variously as ESXi ransomware and ESXiArgs ransomware, seems to be a general-purpose pair of malware files, one being a shell script, and the other a Linux program (also known as a binary or executable file).

In other words, although you absolutely need to patch against these old-school VMWare bugs if you haven’t already, there’s nothing about this malware that inextricably locks it to attacking only via VMWare vulnerabilities, or to attacking only VMWare-related data files.

In fact, we’ll just refer to the ransomware by the name Args in this article, to avoid giving the impression that it is either specifically caused by, or can only be used against, VMWare ESXi systems and files.

How it works

According to CERT-FR, the two vulnerabilities that you need to look out for right away are:

  • CVE-2021-21974 from VMSA-2021-0002. ESXi OpenSLP heap-overflow vulnerability. A malicious actor residing within the same network segment as ESXi who has access to port 427 may be able to trigger [a] heap-overflow issue in [the] OpenSLP service resulting in remote code execution.
  • CVE-2020-3992 from VMSA-2020-0023. ESXi OpenSLP remote code execution vulnerability. A malicious actor residing in the management network who has access to port 427 on an ESXi machine may be able to trigger a use-after-free in the OpenSLP service resulting in remote code execution.

In both cases, VMWare’s official advice was to patch if possible, or, if you needed to put off patching for a while, to disable the affected SLP (service location protocol) service.

VMWare has a page with long-standing guidance for working around SLP security problems, including script code for turning SLP off temporarily, and back on again once you’re patched.

The damage in this attack

In this Args attack, the warhead that the crooks are apparently unleashing, once they’ve got access to your ESXi ecosystem, includes the sequence of commands below.

We’ve picked the critical ones to keep this description short:

  • Kill off running virtual machines. The crooks don’t do this gracefully, but by simply sending every vmx process a SIGKILL (kill -9) to crash the program as soon as possible. We assume this is a quick-and-dirty way of ensuring all the VMWare files they want to scramble are unlocked and can therefore be re-opened in read/write mode.
  • Export an ESXi filesystem volume list. The crooks use the esxcli storage filesystem list command to get a list of ESXi volumes to go after.
  • Find important VMWare files for each volume. The crooks use the find command on each volume in your /vmfs/volumes/ directory to locate files from this list of extensions: .vmdk, .vmx, .vmxf, .vmsd, .vmsn, .vswp, .vmss, .nvram and .vmem.
  • Call a general-purpose file scrambling tool for each file found. A program called encrypt, uploaded by the crooks, is used to scramble each file individually in a separate process. The encryptions therefore happen in parallel, in the background, instead of the script waiting for each file to be scrambled in turn.
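That file-discovery stage can be turned around and used defensively, to enumerate which files on a volume an Args-style attack would go after. Here’s a minimal Python equivalent of the malware’s find step; this is our own illustrative sketch, not the attackers’ code, using the extension list given above:

```python
import pathlib

# Extension list targeted by the Args script, as described above.
TARGET_EXTS = {".vmdk", ".vmx", ".vmxf", ".vmsd", ".vmsn",
               ".vswp", ".vmss", ".nvram", ".vmem"}

def files_at_risk(volume_root):
    """Recursively list files whose extensions match the malware's target list."""
    return sorted(p for p in pathlib.Path(volume_root).rglob("*")
                  if p.is_file() and p.suffix.lower() in TARGET_EXTS)
```

Point it at each volume under /vmfs/volumes/ (or at a mounted copy) to see exactly what an attack of this sort would have scrambled.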

Once the background encryption tasks have kicked off, the malware script changes some system files to make sure you know what to do next.

We don’t have our own copies of any actual ransom notes that the Args crooks have used, but we can tell you where to look for them if you haven’t seen them yourself, because the script:

  • Replaces your /etc/motd file with a ransom note. The name motd is short for message of the day, and your original version is moved to /etc/motd1, so you could use the presence of a file with that name as a crude indicator of compromise (IoC).
  • Replaces any index.html files in the /usr/lib/vmware tree with a ransom note. Again, the original files are renamed, this time to index1.html. Files called index.html are the home pages for any VMWare web portals you might open in your browser.
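Those two renamed files make for a quick, if crude, compromise check. The sketch below is our own illustration based on the paths named above, not code from the attack; the esxi_root parameter is ours, so you can point it at a live host’s filesystem or a mounted image:

```python
import os

def args_iocs(esxi_root="/"):
    """Look for the leftover files the Args script is reported to create."""
    hits = []
    # Original message-of-the-day gets moved aside to /etc/motd1.
    if os.path.isfile(os.path.join(esxi_root, "etc", "motd1")):
        hits.append(os.path.join(esxi_root, "etc", "motd1"))
    # Original web-portal home pages get renamed to index1.html.
    vmware_tree = os.path.join(esxi_root, "usr", "lib", "vmware")
    for dirpath, _dirs, files in os.walk(vmware_tree):
        if "index1.html" in files:
            hits.append(os.path.join(dirpath, "index1.html"))
    return hits
```

An empty result doesn’t prove you’re clean, of course, but any hits are worth investigating at once.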

From what we’ve heard, the ransoms demanded are in Bitcoin, but vary both in the exact amount and the wallet ID they’re to be paid into, perhaps to avoid creating obvious payment patterns in the BTC blockchain.

However, it seems that the blackmail payment is typically set at about BTC 2, currently just under US$50,000.


The encryptor in brief

The encrypt program is, effectively, a standalone, one-file-at-a-time scrambling tool.

Given how it works, however, there is no conceivable legitimate purpose for this file.

Presumably to save time while encrypting, given that virtual machine images are typically many gigabytes, or even terabytes, in size, the program can be given parameters that tell it to scramble some chunks of the file, while leaving the rest alone.

Loosely speaking, the Args malware does its dirty work with a function called encrypt_simple() (in fact, it’s not simple at all, because it encrypts in a complicated way that no genuine security program would ever use), which goes something like this.

The values of FILENAME, PEMFILE, M and N below can be specified at runtime on the command line.

Note that the malware contains its own implementation of the Sosemanuk cipher algorithm, though it relies on OpenSSL for the random numbers it uses, and for the RSA public-key processing it does:

  1. Generate PUBKEY, an RSA public key, by reading in PEMFILE.
  2. Generate RNDKEY, a random, 32-byte symmetric encryption key.
  3. Go to the beginning of FILENAME.
  4. Read in M megabytes from FILENAME.
  5. Scramble that data using the Sosemanuk stream cipher with RNDKEY.
  6. Overwrite those same M megabytes in the file with the encrypted data.
  7. Jump forwards N megabytes in the file.
  8. GOTO 4 if there is any data left to scramble.
  9. Jump to the end of FILENAME.
  10. Use RSA public key encryption to scramble RNDKEY, using PUBKEY.
  11. Append the scrambled decryption key to FILENAME.
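The read-scramble-skip loop in steps 3 to 8 is easy to model. The sketch below is a reconstruction for illustration only: the keystream function is a caller-supplied stand-in for Sosemanuk (which isn’t in Python’s standard library), and the RSA key-wrapping of steps 9 to 11 is omitted:

```python
import os

def skip_scramble(path, keystream, m_bytes, n_bytes):
    """Encrypt m_bytes, skip n_bytes, repeat to end of file, in place.

    keystream(n) must return n bytes of cipher keystream; the real
    malware uses Sosemanuk here, keyed with a random 32-byte key.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        pos = 0
        while pos < size:
            f.seek(pos)
            chunk = f.read(m_bytes)            # step 4: read M bytes
            f.seek(pos)                        # step 6: overwrite in place
            f.write(bytes(b ^ k for b, k in zip(chunk, keystream(len(chunk)))))
            pos += m_bytes + n_bytes           # step 7: jump forward N bytes
```

Because each scrambled region is simply XORed with keystream, running the same function again with the same keystream would decrypt it; the real attack prevents that by throwing away the random key after RSA-wrapping it.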

In the script file we looked at, where the attackers invoke the encrypt program, they seem to have chosen M to be 1MByte, and N to be 99MBytes, so that they only actually scramble 1% of any files larger than 100MBytes.

This means they get to inflict their damage quickly, but almost certainly leave your VMs unusable, and very likely unrecoverable.

Overwriting the first 1MByte typically makes an image unbootable, which is bad enough, and scrambling 1% of the rest of the image, with the damage distributed throughout the file, represents a huge amount of corruption.

That degree of corruption might leave some original data that you could extract from the ruins of the file, but probably not much, so we don’t advise relying on the fact that 99% of the file is “still OK” as any sort of precaution, because any data you recover this way should be considered good luck, and not good planning.

If the crooks keep the private-key counterpart to the public key in their PEMFILE secret, there’s little chance that you could ever decrypt RNDKEY, which means you can’t recover the scrambled parts of the file yourself.

Thus the ransomware demand.

What to do?

Very simply:

  • Check you have the needed patches. Even if you “know” you applied them right back when they first came out, check again to make sure. You often only need to leave one hole to give attackers a beachhead to get in.
  • Revisit your backup processes. Make sure that you have a reliable and effective way to recover lost data in a reasonable time if disaster should strike, whether from ransomware or not. Don’t wait until after a ransomware attack to discover that you are stuck with the dilemma of paying up anyway because you haven’t practised restoring and can’t do it efficiently enough.
  • If you aren’t sure or don’t have time, ask for help. Companies such as Sophos provide both XDR (extended detection and response) and MDR (managed detection and response) that can help you go beyond simply waiting for signs of trouble to pop up on your dashboard. It’s not a copout to ask for help from someone else, especially if the alternative is simply never having time to catch up on your own.

Tracers in the Dark: The Global Hunt for the Crime Lords of Crypto


We talk to renowned cybersecurity author Andy Greenberg about his tremendous new book, Tracers in the Dark.

Hear Andy’s thoughtful commentary on cybercrime, law enforcement, anonymity, privacy, and whether we really need a “war against cryptography” – codes and ciphers that the government can easily crack if it thinks there’s an emergency – to cement our collective online security.

You can also listen directly on Soundcloud.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


PAUL DUCKLIN. Hello, everybody.

Welcome to this very, very special episode of the Naked Security podcast, where we have the most amazing guest: Mr. Andy Greenberg, from New York City.

Andy is the author of a book I can very greatly recommend, with the fascinating title Tracers in the Dark: The Global Hunt for the Crime Lords of Cryptocurrency.

So, Andy, let’s start off…

..what made you write this book in the first place?

It seems fascinatingly complicated!

ANDY GREENBERG.  Yes, well, thank you, Paul.

I guess [LAUGHS]… I’m not sure if that’s a compliment?

DUCK.  Oh, it is, it is!

ANDY.  Thank you.

So, I’ve covered this world of hackers, and cybersecurity, and encryption for about 15 years now.

And around, let’s see – I guess 2010 – I started working on a book, a different book, that was about the cypherpunk movement in the 1990s…

…and the ways that it gave rise to the modern internet, but also to things like WikiLeaks, and other kinds of encryption, anonymity tools, and ultimately what we now call the dark web, I suppose.

And I’ve always been fascinated with the ways, on this beat, that anonymity can play this fascinating, dramatic role – and allow people to become someone else, or to reveal in secret who they truly are.

And as I dug into this cypherpunk world, around 2010 and 2011, I came upon this thing that seemed to be a new phenomenon in that world of online anonymity – which was Bitcoin.

I wrote, I think, the first print magazine piece about Bitcoin for Forbes magazine in 2011.

I interviewed one of the first Bitcoin developers, Gavin Andresen, for that piece.

And Gavin and many others at the time were describing Bitcoin as a kind-of anonymous digital cash for the internet.

You could actually use this new invention, Bitcoin, to put unmarked bills in a briefcase, basically, and send it across the internet to anyone in the world.

And, being the kind of reporter I am, I’m interested in the subversive and sometimes criminal, sometimes politically motivated… I don’t know, the underhanded and dark corners of the internet.

I just saw how this would enable a new world of… yes, people seeking financial privacy, but also money laundering, and drug dealing online, and all of this that would come to pass in the next few years.

But what I didn’t foresee is that, ten years later or so, it would be by then apparent that Bitcoin is actually the *opposite* of anonymous.

I mean, that is the big surprise, and the big reveal.

For me, it was a kind of slow-motion epiphany to realise that cryptocurrency was actually *extremely* traceable.

It was the opposite of this “anonymous cash for the internet” that many people once thought it was.

And the result, I think, was that it served as a kind of trap for many people seeking financial privacy… and criminals, over that decade.

And as I realised the extent of this… I fully realised it in 2020 or so.

I began, at the same time, to see that this one company, Chainalysis, a blockchain-analysis Bitcoin cryptocurrency tracing firm, was being namechecked in one US Department of Justice announcement after another in all of these major busts.

And so I started talking to Chainalysis, and then to their customers and law enforcement, and slowly realised that there had been this one small group of detectives that had figured this out much earlier than me.

They had started actually tracing Bitcoins years earlier, and had used this incredibly powerful investigative technique to go on this spree of one massive cybercriminal bust after another…

…using cryptocurrency as this surprise trap that had been laid for so many people on the dark web, and in the cybercriminal world as a whole.

DUCK.  Now, I suppose we shouldn’t really be surprised at that, should we, as you explain in the book?

Because the whole idea, at least of the Bitcoin blockchain, is that it is, by design, entirely and utterly public and irrevocable.

That’s how it can work as a ledger that is equivalent to something that would normally be held privately and individually by your bank.

It doesn’t actually have your name on it, but it has a magic identifier that, once tied to you, can’t really be cut loose…

…if there’s other evidence to say, “Yes, long-hexadecimal-string-of-stuff is Andy Greenberg, and here’s why.”

Now try denying it!

So, I think you’re right.

This idea that it’s *possible* to trade anonymously with Bitcoin – I think was taken by very many people to mean that it is fundamentally anonymous and ever-untraceable.

But the world is not like that, is it?

ANDY.  I sometimes look back on my 2011 self, and in that piece for Forbes, I *did* write that Bitcoin was potentially untraceable.

And I sort of scold myself, “How could you be such an idiot?”

The whole idea of Bitcoin is that there’s a blockchain that records every transaction.

But then I remind myself that even Satoshi Nakamoto, the mysterious creator of Bitcoin (whoever he, she or they are), in their first email to a cryptography mailing list introducing the idea of Bitcoin…

…listed among its features that participants can be anonymous.

That was a feature of Bitcoin as Satoshi described it.

So I think there’s always been this idea that Bitcoin, if it’s not anonymous, at least is pseudonymous, that you can hide behind the pseudonym of your Bitcoin address, and that if you can’t figure out somebody’s address, you can’t figure out their transactions.

I guess we all should have known… I should have known, and maybe even Satoshi should have known, that, given this massive corpus of data, there would be patterns in it that allow people to identify clusters of addresses that all belong to one person or service.

Or to follow the money from one address to another to find interesting giveaways in this massive collection of data.

The biggest giveaway of all is when you cash in or cash out at a cryptocurrency exchange that has Know-Your-Customer [KYC] requirements, as almost all of them do now.

They have your identity, so if somebody can just subpoena that exchange, then they have your actual driver’s licence in hand.

And any illusion of anonymity just completely backfires.

So that is the story, I think, of how Bitcoin’s anonymity turned out to be the opposite.
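For readers wondering what “identifying clusters of addresses” looks like in practice: the classic approach in the academic work Andy mentions is the common-input-ownership heuristic, in which any addresses that co-sign inputs to the same transaction are assumed to share an owner, and a union-find structure merges the groups. A minimal sketch, with made-up transactions:

```python
def cluster_addresses(transactions):
    """Common-input-ownership heuristic via union-find.

    transactions is an iterable of lists of input addresses; any two
    addresses appearing as inputs to the same transaction are merged
    into one cluster (assumed single owner).
    """
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for inputs in transactions:
        for addr in inputs:
            find(addr)                      # register every address seen
        for addr in inputs[1:]:
            union(inputs[0], addr)          # merge co-signing addresses

    clusters = {}
    for addr in parent:
        clusters.setdefault(find(addr), set()).add(addr)
    return list(clusters.values())
```

Real-world tracing layers many more heuristics (and exchange subpoenas) on top of this, but even this simple rule collapses millions of “pseudonymous” addresses into a much smaller number of entities.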

DUCK.  Andy, do you think, perhaps, though, that there’s nothing wrong with Satoshi Nakamoto saying, “You *can* be anonymous when you use Bitcoin?”

I think what’s wrong is that lots of people assume that because technology *can* let you do something that is desirable for your privacy, therefore, *however you use it*, it always will.

And the original idea of Bitcoin didn’t include exchanges, did it?

And so there wouldn’t be any exchanges that would take a copy of your driving licence if Bitcoin were used in its original sort of cypherpunk way, as far as I can see…

ANDY.  Well, I certainly don’t blame Satoshi for not predicting the entire cryptocurrency economy, including the ways that exchanges would interface with the traditional finance world.

It’s all incredibly complex economics; Bitcoin was brilliant enough as it is.

But I do think that it’s more than just, “You *can* be anonymous with Bitcoin if you’re careful, but most people are not careful.”

It turns out, I think, that the possibility, no matter how smart you are, of using Bitcoin anonymously is vanishingly small.

Also, there is the property of blockchain *that it is forever*.

So, if you use the kind of smartest ideas of the day to try to avoid any of these patterns that reveal your transactions on the blockchain, but then someone years later figures out a new trick to identify transactions…

…then you’re still screwed.

They can go back in time, and use their new ideas to foil your cutting-edge anonymity tricks from years earlier.

DUCK.  Absolutely.

With a bank fraud you can imagine you *could* get lucky, couldn’t you?

That just when you’re about to be investigated, years later, you find the bank’s had a data security disaster, and they’ve lost all their backups and, oh, they can’t recover the data…

With the blockchain, that ain’t never going to happen! [LAUGHS]

Because everybody’s got a copy, and that’s a requirement for the system to work as it does.

So, once locked in, always locked in: it can never be lost.

ANDY.  That’s the thing!

To be anonymous with cryptocurrency, you truly have to be perfect – perfect for all time.

And to catch someone who’s trying to be anonymous with cryptocurrency slipping up, you just have to be smart, and persistent, and work on it for years, which is what, first, Chainalysis…

…actually, first was academic researchers like Sarah Meiklejohn at the University of California at San Diego, who, as I document in the book, came up with a lot of these techniques.

But then Chainalysis, this startup that’s now almost a nine-billion-dollar unicorn, selling polished cryptocurrency tracing tools to law enforcement agencies.

And now, all of these law enforcement agencies that have professional Bitcoin tracers – their savvy, their know-how in doing this, is just growing by leaps and bounds.

And I think it’s almost just a better rule to say, “No, you cannot be anonymous with cryptocurrency,” that it is fully transparent.

That’s a safer way to operate, almost.

To be fair, Satoshi Nakamoto said participants *can* be anonymous… but it turns out that the only participant who has *remained* anonymous is Satoshi Nakamoto.

And that is, in part, because very few people have that other-worldly restraint that Satoshi had to amass a million Bitcoins and then never spend them or move them.

If you do that… yes, I think you can perhaps be anonymous.

But if you ever want to use your cryptocurrency, or to put it in a liquid form where you can spend it, then I think you’re toast.

DUCK.  Yes, because there are some amazing things that have happened, one of which you allude to because it was in the works just at the end of the book…

…[LAUGHS] what I call the Crocodile Lady and her husband: Heather Morgan and Ilya Lichtenstein.

They’re alleged to have somehow received a whole load of cryptocoins from a cryptocurrency bank robbery against Bitfinex.

In their cases, they received stolen cryptocurrencies in vast quantities, so that they could quite literally have been billionaires *if they could have cashed it out*.

But when bust, they still had the vast majority of that stuff sitting around.

So it seems that, in a lot of cryptocurrency crimes, your eyes can be a lot bigger than your stomach.

You may live the high life a little bit… the Crocodile Lady and her husband, it does seem they were living quite a flash lifestyle.

But when they were bust, what was the amount?

It was more than $3 billion worth of Bitcoins that they had, but couldn’t cash out.

ANDY.  The Department of Justice said that they seized $3.6 billion from them.

That was the biggest seizure not just of cryptocurrency in history, but of money in the history of the Department of Justice.

In fact, as I document in the book… actually, one of these happened after the book, but the IRS criminal investigators, who are the main subjects of this book, have now pulled off the first, second, and third-biggest seizures of money in American criminal justice history, by following cryptocurrency and seizing Bitcoins.

Your point is absolutely right, which is that cryptocurrency is easy to steal, it turns out… that is, I think, one of its big drawbacks for the businesses, like exchanges, that have to hold sometimes billions of dollars in a kind of digital safe.

But then if you do steal it, if you pull off one of these massive heists – and two of the three of the cases that we’re discussing are actually people who stole money from the Silk Road dark web drug market…

DUCK.  Yes [LAUGHS]… when you steal from a crook, it’s still a crime, eh?

ANDY.  [LAUGHS] Yes, unfortunately – for those crooks, anyway.

DUCK.  One of the most intriguing bits for me in the book was somebody that you identify as “Individual X”, only because that’s the way they were identified by the court.

This individual had stolen 70,000 Bitcoins, and was busted, and basically gave them back… sort-of in return for getting let off.

They didn’t get prosecuted, they didn’t go to prison, they didn’t – I imagine – even get a criminal record.

And they were never named.

ANDY.  That’s right.

DUCK.  So that seems like an almost unreadable mystery, doesn’t it?

If we look forward a few years, now that Bitcoin’s… what, in the last year, it’s gone down to about a third of its value; Ether is down to about a third; Monero is about half.

Do you think that that gambit of saying, “I’ll give the money back, let me off” would have worked if the prices were reversed, and what they were handing back was now worth a fraction of what it was when it was stolen?

Or do you think that Individual X was lucky because what they had to hand back was actually worth much more than when they stole it?

ANDY.  I think it’s the latter.

Individual X stole that money while the Silk Road was still online…

DUCK.  Wow!

So that would have been when BTC was, what, hundreds [of dollars] then?

ANDY.  Yes, probably, or thousands at most – Silk Road went offline in 2013, when Bitcoin had just broken through $1000, if I remember.

This person (I don’t want to say “guy” – who knows who Individual X is?) sat on these 70,000 Bitcoins for seven years, ultimately…

…probably, exactly as you said, just terrified to move them or cash them out for fear of being caught.

DUCK.  Yes, can you imagine?

“Hey, I’m a millionaire!”

“Hey, I’m a *billionaire*!”

“Oh, golly, but where am I going to get my rent money?”

[LAUGHS] Shouldn’t laugh….

ANDY.  As you say – like the hand stuck in the cookie jar!

The hand just gets bigger and bigger until it’s all-consuming, and you cannot move it, you can’t get it out.

In fact, even without trying to get it out, IRS criminal investigators found it through other means, including the seizure of the BTC-e exchange, which was a kind-of money-laundering, criminal Bitcoin exchange.

DUCK.  That was a rogue exchange that basically did as little as is humanly possible along the Know Your Customer front?

“Ask no questions, tell no lies,” that kind of thing?

Is that right?

ANDY.  Yes, exactly.

That was another surprise for many users who believed that, “Maybe I can use BTC-e a little bit and not get caught, because that doesn’t have Know Your Customer, that doesn’t co-operate with law enforcement.”

But, nonetheless, when that exchange was busted and its servers seized, that provided more clues to the IRS.

That helped, in fact, to figure out who Individual X was… I don’t know who they are, but the government does.

And to knock on his or her door and say, “Hey, hand over a billion dollars or you’re going to jail,” and that’s exactly what happened.

Now, poor James Zhong is a very similar case.

Silk Road drugs market hacker pleads guilty, faces 20 years inside

He seems to have taken 50,000 Bitcoins from the Silk Road, probably around the same time, and then held onto them for even longer.

And then, a year after Individual X, Zhong got a knock on his door…

Similarly, they had traced the money, even though he had just left it sitting on a USB drive in a popcorn tin under the floorboards of his closet.

In his case, he did not manage to make a deal somehow, and he’s being criminally charged.

DUCK.  *And* he has given the money back, obviously?

[WRY LAUGH] Aaaargh!

ANDY.  He was a Bitcoin billionaire, and now is facing criminal charges… and never got to even spend his loot.

The Bitfinex case, I don’t know… I have less sympathy for them because they truly were trying to launder a massive theft from a legitimate business.

And they did, I think, launder some of it.

They tried several different clever techniques.

They put the money through…. I mean, this is all alleged, I should say; they’re still innocent until proven guilty, this couple in New York.

But they tried to put the money through the AlphaBay dark web market as a kind of laundering technique, thinking that would be a black box that law enforcement would not be able to see through.

But then AlphaBay was busted and seized.

That’s perhaps the biggest story I tell in the book, the most exciting cloak-and-dagger story: how they tracked down the kingpin of AlphaBay in Bangkok and arrested him.

DUCK.  Yes… spoiler alert, that’s where the helicopter gunships come in!


ANDY.  Yes, and much more!

I mean, that story is one of the craziest that I will probably tell in my career…

But then, also, this New York money-laundering couple tried to put some of the money through Monero, a cryptocurrency that is advertised as a privacy coin, a potentially truly untraceable cryptocurrency.

And yet, in the IRS documents where they describe how they caught this couple in New York, they show how they continued to follow the money, even after it’s exchanged for Monero.

So that was a sign to me that perhaps even Monero – this newer, “untraceable” cryptocurrency – is a bit traceable too, to some degree.

And perhaps this trap persists… that even coins that are designed to outstrip Bitcoin in terms of their anonymity are not all they’re cracked up to be.

Although I should say that Monero people hate it when I even say this out loud, and I don’t know how that worked…

…all I can say is that it looks very possible that Monero tracing was used in that case.

DUCK.  Well, there could be some operational security blunders that the Crocodile Lady and her husband made as well, that kind of tied it all together.

So, Andy, I’d like to ask you, if I may…

Thinking of cryptocurrency tokens like Monero, which, as you say, is meant to be more privacy-focused than Bitcoin because it inherently, if you like, joins transactions together.

And then there’s also Zcash, designed by cryptography experts specifically using technology known in the jargon as zero-knowledge proofs, which is at least supposed to work so that neither side can tell who the other is, yet it’s still impossible to double-spend…

With all eyes on these much more privacy-focused tokens, where do you think the future is going?

Not just for law enforcement, but where do you think it might drag our legislators?

There’s certainly been a fascination for decades, amongst sometimes very influential parliamentarians, to say, “You know what, this encryption thing, it’s actually a really, really bad idea!”

“We need backdoors; we need to be able to break it; somebody has to ‘think of the children’; et cetera, et cetera.”

ANDY.  Well, it’s interesting to talk about crypto backdoors and the legal debate over encryption that even law enforcement can’t crack.

I think that, in some ways, the story of this book shows that that is often not necessary.

I mean, the criminals in this book were using traditional encryption – they were using Tor and the dark web, and none of that was cracked to bust them.

Instead, investigators followed the money and *that* turned out to be the backdoor.

It’s an interesting parable, and a good example of how, very often, there is a side-channel in criminal operations, this “other leak” of information that, without cracking the main communications, offers a way in…

…and doesn’t necessitate any kind of backdoor in Tor, or the dark web, or Signal, or hard disk encryption, or whatever.

In fact, speaking of ‘thinking of the children’, one of the last major stories that I dig deeply into in the book is the bust of the Welcome To Video market for child sexual abuse videos that accepted cryptocurrency.

And as a result, the IRS investigators at the centre of the book were able to track down and arrest 337 people around the world who used that market.

It was the biggest bust of what we call child sexual abuse materials, by some measures, in history…

…all based on cryptocurrency tracing.

DUCK.  And they didn’t need to do anything that you would really consider privacy-violating, did they?

They quite literally followed the money, in a trail of evidence that was public by design.

And in conjunction, admittedly, with warrants and subpoenas from places where the money popped out, and where internet connections were made, they were able to identify the people involved…

…and largely to avoid trampling on millions of people who had absolutely no connection with the case whatsoever.

ANDY.  Yes!

I think that it is an example of a way to do… it is, in some ways, mass surveillance – but mass surveillance in a way that nonetheless does not require weakening anybody’s security.

I guess that cryptocurrency users, and people who believe in the power of cryptocurrency for enabling activists, and dissidents, and journalists, and money transmissions to countries like Ukraine, that need injections of money for survival…

They would argue that, nonetheless, we need to fix cryptocurrency to make it as untraceable as we once thought it might be.

And that’s where we get into the new, I would say *a* new, crypto-war over cryptocurrency.

We’re just starting to see the beginning of that with tools like Monero and Zcash, as you said.

I do think that there will probably still be surprises about the ways that Monero can be traced.

I’ve seen a leaked Chainalysis document where they told Italian law enforcement… it’s a presentation in Italian to the Italian police from Chainalysis, where they say that they can trace Monero, in the majority of cases, to find a usable lead.

I don’t know how they do that, but it does seem like it’s probabilistic more than definitive.

Now I don’t think a lot of people understand – that is often enough for law enforcement to get a subpoena, to start subpoenaing cryptocurrency exchanges, just based on a probabilistic guess.

They can just check every possibility, if there are few enough of them.

DUCK.  Andy, I’m conscious of time, so I’d like to finish up now by just asking you one final question, and that is…

In ten years’ time, do you see yourself being in a position where you’ll be able to write a book like this one, but where the “unravelling” parts are even more fascinating, complicated, exciting, and amazing?

ANDY.  I tried, with this book, *not* to make too many predictions.

And, in fact, the book begins with this “mea culpa” that ten years ago I believed exactly the wrong thing about Bitcoin.

So nobody should listen to any ten-year prediction that I have!


But the simplest prediction to make, that *has* to be true, is that this cat-and-mouse game will still be going on in ten years.

People will still be using cryptocurrency thinking that they have outsmarted the tracers…

…and the tracers will still be coming up with new tricks to prove them wrong.

The stories, as you say, will, I think, be much more convoluted because they’ll be dealing with these cryptocurrencies like Monero, that build in vast mix-networks, and Zcash, that have zero-knowledge proofs.

But it does seem that there will always be some way – and maybe not even cryptocurrency, but in some other side channel… as I was saying, there will be a new one that unravels the whole thing.

But there’s no question that this cat-and-mouse game will go on.

DUCK.  And I’m sure there’ll be another Tigran Gambaryan sometime in the future for you to interview?

ANDY.  Well, I do think the game of anonymity…

…it does favour the Tigran Gambaryans of the world.

They, as I said, just have to be persistent and smart.

But the mice in this cat-and-mouse game have to be perfect.

And no one is perfect.

DUCK.  Absolutely.

ANDY.  So, if I do have to make a prediction…

…then I would just place my bet on the cats, on the Tigran Gambaryans of the world.

DUCK.  [LAUGHS] Andy, thank you so much.

Before we go, why don’t you tell our listeners where they can get your book?

ANDY.  Yes, thanks, Paul!

The book is called “Tracers in the Dark: The Global Hunt for the Crime Lords of Cryptocurrency.”

[ISBN 978-0-385-54809-0]

And it’s available at all the normal places books are sold.

But if you go to, then you can just find links to a bunch of places.

DUCK.  Andy, thank you so much for your time.

It was as fascinating talking to you and listening to you as it was reading your book.

I recommend it to anybody who wants a galloping read that is nevertheless detailed and insightful about how law enforcement works…

…and, importantly, why criminal convictions for cybercrimes often only happen years after the crime occurred.

The devil really is in the details.

ANDY.  Thank you, Paul.

It’s been a super-fun conversation.

I’m just glad you enjoyed the book!

DUCK.  Excellent!

Thanks to everybody who listened.

And, as always: Until next time, stay secure!


OpenSSH fixes double-free memory bug that’s pokable over the network

The open source operating system distribution OpenBSD is well-known amongst sysadmins, especially those who manage servers, for its focus on security over speed, features and fancy front-ends.

Fittingly, perhaps, its logo is a puffer fish – inflated, with its spikes ready to repel any wily hackers who might come along.

But the OpenBSD team is probably best known not for its entire distro, but for the remote access toolkit OpenSSH that was written in the late 1990s for inclusion in the operating system itself.

SSH, short for secure shell, was originally created by Finnish computer scientist Tatu Ylönen in the mid-1990s in the hope of weaning sysadmins off the risky habit of using the Telnet protocol.

The trouble with Telnet

Telnet was remarkably simple and effective: instead of connecting physical wires (or using a modem over a telephone line) to make a teletype connection to remote servers, you used a TELetype NETwork connection instead.

Basically, the data that would usually flow back and forth over a dedicated serial connection or dial-up phone line was sent and received over the internet, using a packet-switched TCP network connection instead of a circuit-switched point-to-point link.

Same familiar login system, cheaper connections, no need for dedicated data lines!

The giant flaw in Telnet, of course, was its total lack of encryption, so that sniffing out your exact terminal session was trivial, allowing crackers to see every command you typed (even the mistakes you made, and all the times you hit [Backspace]), and every byte of output produced…

…and, of course, your username and password at the start of the session.

Anyone on your network path could not only easily reconstruct your sysadmin sessions in real time on their own screen, but probably also tamper with your session by modifying the commands you sent to the remote server and faking the replies coming back so you didn’t notice the subterfuge.

They could even set up an imposter server, lure you to it, and make it surprisingly difficult for you to spot the deception.

Strong encryption FTW

Ylönen’s SSH aimed to add a layer of strong encryption and authentication to each end of a Telnet-like session, creating a secure shell (that’s what the name stands for, if you’ve ever wondered, although almost everyone just calls it ess-ess-aitch these days).

It was an instant hit, and the protocol was quickly adopted by sysadmins everywhere.

OpenSSH soon followed, as we mentioned above, first appearing in late 1999 as part of the OpenBSD 2.6 release.

The OpenBSD team wanted to create a free, reliable, open-source implementation of the protocol that they and anyone else could use, without any of the licensing or commercial complications that had encumbered Ylönen’s original implementation in the years immediately after its release.

Indeed, if you run the Windows SSH server and connect to it from a Linux computer right now, you’ll almost certainly be relying on the OpenSSH implementation at both ends.

The SSH protocol is also used in other popular client-server services including SCP and SFTP, short for secure copy and secure FTP respectively. SSH loosely means, “connect Securely and run a command SHell at the other end”, typically for interactive logins, because the Unix program for a command shell is usually /bin/sh. SCP is similar, but for CoPying files, because the Unix file-copy command is generally called /bin/cp, and SFTP is named in much the same way.

OpenSSH isn’t the only SSH toolkit in town.

Other well-known implementations include: libssh2, for developers who want to build SSH support right into their own applications; Dropbear, a stripped-down SSH server from Australian coder Matt Johnston that’s widely found on so-called IoT (Internet of Things) devices such as home routers and printers; and PuTTY, a popular, free collection of SSH-related tools for Windows from indie open-source developer Simon Tatham in England.

But if you’re a regular SSH user, you’ve almost certainly connected to at least one OpenSSH server today, not least because most contemporary Linux distributions include it as their standard remote access tool, and Microsoft offers both an OpenSSH client and an OpenSSH server as official Windows components these days.

Double-free bug fix

OpenSSH version 9.2 just came out, and the release notes report as follows:

This release contains fixes for […] a memory safety problem. [This bug] is not believed to be exploitable, but we report most network-reachable memory faults as security bugs.

The bug affects sshd, the OpenSSH server (the -d suffix stands for daemon, the Unix name for the sort of background process that Windows calls a service):

sshd(8): fix a pre-authentication double-free memory fault introduced in OpenSSH 9.1. This is not believed to be exploitable, and it occurs in the unprivileged pre-auth process that is subject to chroot(2) and is further sandboxed on most major platforms.

A double-free bug means that a memory block you already returned to the operating system to be re-used in other parts of your program…

…will later get handed back again by a part of the program that no longer actually “owns” that memory, but doesn’t know it doesn’t.

(Or handed back deliberately at the prompting of code that is trying to provoke the bug on purpose in order to turn a vulnerability into an exploit.)

This can lead to subtle and hard-to-unravel bugs, especially if the system marks the freed-up block as available when the first free() happens, later allocates it to another part of your code when it asks for memory via malloc(), and then marks the block free once again when the superfluous call to free() appears.

That leaves you in the sort of situation you experience when you check into a hotel that says, “Oh, good news! We thought we were full up, but another guest just decided to check out early, so you can have their room.”

Even if the room is neatly cleaned and prepared for new occupants when you go in, and thus looks as though it was properly allocated for your exclusive use, you still have to trust that the previous guest’s keycard did indeed get correctly cancelled, and that their “early checkout” wasn’t a cunning ruse to sneak back later the same day and steal your laptop.

Bug fix for bug fix

Ironically, if you look at the recent OpenSSH code history, you’ll see that OpenSSH had a modest bug in a function called compat_kex_proposal(), used to check what sort of key-exchange algorithm to use when setting up a connection.

But fixing that modest bug introduced a more severe vulnerability instead.

By the way, the presence of the bug in a part of the software that’s used during the setup of a connection is what makes this a so-called network-reachable pre-authentication vulnerability (or pre-auth bug for short).

The double-free bug happens in code that needs to run after a client has initiated a remote session, but before any key-agreement or authentication has taken place, so the vulnerability can, in theory, be triggered before any passwords or cryptographic keys have been presented for validation.

In OpenSSH 9.0, compat_kex_proposal looked something like this (greatly simplified here):

char* compat_kex_proposal(char* suggestion) {

   if (condition1) { return suggestion; }
   if (condition2) { suggestion = allocatenewstring1(); }
   if (condition3) { suggestion = allocatenewstring2(); }
   if (isblank(suggestion)) { error(); }
   return suggestion;
}

The idea is that the caller passes in their own block of memory containing a text string suggesting a key-exchange setting, and gets back either an approval to use the very suggestion they sent in, or a newly-allocated text string with an updated suggestion.

The bug is that if condition 1 is false but conditions 2 and 3 are both true, the code allocates two new text strings, but only returns one.

The memory block allocated by allocatenewstring1() is never freed up, and when the function returns, its memory address is lost forever, so there’s no way for any code to free() it in future.

That block is essentially abandoned, causing what’s known as a memory leak.

Over time, this could cause trouble, perhaps even forcing the server to shut down to recover from memory overload.

In OpenSSH 9.1, the code was updated in an attempt to avoid allocating two strings but abandoning one of them:

/* Always returns pointer to allocated memory, caller must free. */

char* compat_kex_proposal(char* suggestion) {

   char* previousone = NULL;

   if (condition1) { return newcopyof(suggestion); }
   if (condition2) { suggestion = allocatenewstring1(); }
   if (condition3) {
      previousone = suggestion;
      suggestion  = allocatenewstring2(); }
   free(previousone);
   if (isblank(suggestion)) { error(); }
   return suggestion;
}

This has the double-free bug, because if condition 1 and condition 2 are both false, but condition 3 is true, then the code allocates a new string to send back as its answer…

…but incorrectly frees up the string that the caller originally passed in, because the function allocatenewstring1() never gets called to update the variable suggestion.

The passed-in suggestion string is memory that belongs to the caller, and that the caller will therefore free up themselves later on, leading to the double-free danger.

In OpenSSH 9.2, the code has become more cautious, keeping track of all three possible memory blocks used: the original suggestion (memory owned by someone else), and two possible new strings that might be allocated on the way:

/* Always returns pointer to allocated memory, caller must free. */

char* compat_kex_proposal(char* suggestion) {

   char* newone = NULL; char* newtwo = NULL;

   if (condition1) { return newcopyof(suggestion); }
   if (condition2) { newone = allocatenewstring1(); }
   if (condition3) {
      newtwo = allocatenewstring2();
      free(newone);
      newone = newtwo; }
   if (isblank(newone)) { error(); }
   return newone;
}


If condition 1 is true, a new copy of the passed-in string is used, so the caller can later free() their passed-in string’s memory whenever they like.

If we get past condition 1, and condition 2 is true but condition 3 is false, then the alternative suggestion created by allocatenewstring1() gets returned, and the passed-in suggestion string is left alone.

If condition 2 is false and condition 3 is true, then a new string gets generated and returned, and the passed-in suggestion string is left alone.

If both condition 2 and condition 3 are true, then two new strings get allocated along the way; the first one gets freed up because it’s not needed; the second one is returned; and the passed-in suggestion string is left alone.

You can RTxM to confirm that if you call free(newone) when newone is NULL, then “no operation is performed”, because it’s always safe to free(NULL). Nevertheless, lots of programmers still robustly guard against it with code such as if (ptr != NULL) { free(ptr); }.

What to do?

As the OpenSSH team suggests, exploiting this bug will be hard, not least because of the limited privileges that the sshd program has while it’s setting up the connection for use.

Nevertheless, they reported it as a security hole because that’s what it is, so make sure you’ve updated to OpenSSH 9.2.

And if you’re writing code in C, remember that no matter how experienced you get, memory management is easy to get wrong…

…so take care out there.

(Yes, Rust and its modern friends will help you to write correct code, but sometimes you will still need to use C, and even Rust can’t guarantee to stop you writing incorrect code if you program injudiciously!)

S3 Ep120: When dud crypto simply won’t let go [Audio + Text]


Latest episode – listen now.

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


DOUG.   Busts, shutdowns, Samba, and GitHub.

All that, and more, on the Naked Security podcast.


Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do today, Sir?

DUCK.   I’m very well, Douglas.

DOUG.   Let us start the show with our Tech History segment – this is an interesting one.

This week, on 01 February 1982, the Intel 80286 16-bit microprocessor was introduced, and went on to become a mainstay in IBM PC/AT computers for years.

Interestingly, Intel didn’t expect the 286 to be used for personal computers, and designed a chip with multitasking and multi-user systems in mind.

DUCK.   Its primary use, as you say, was the PC/AT, the “Advanced Technology” computer from IBM, which was basically designed to run DOS.

Although DOS is limited to 1MB of RAM (or 640KB RAM and the rest ROM), you could have extra memory, and you could use it for things like…

…remember HIMEM.SYS, and RAM caches, all of that stuff?

Except that because Intel had security in mind, bless their hearts, when they designed the 286…

…once you had switched from the mode where it ran like an 8086 into the super-powerful so-called “protected mode”, *you couldn’t switch back*.

Once you flipped into the mode that let you access your HIMEM or your RAMDISK, you were stuck.

You couldn’t go back and carry on running DOS!

And IBM actually jury-rigged their PC – you sent this special command to (believe it or not) the keyboard controller, and the keyboard controller basically rebooted the CPU.

Then, when the CPU started up again, the BIOS said, “Oh, that’s not a true reboot, that’s a sneaky ‘switch back illegally to real mode’ reboot,” [LAUGHTER] and it went back to where you were in DOS.

So the problem is, it was super-inefficient.

The other thing with the 286, even though it could access 16MB RAM in total, is that, just like the 8086, it could only work on a maximum of 64KB at a time.

So the 64-kilobyte limit was still basically wired into the DNA of that 286 microprocessor.

It was majestically and needlessly, as it turned out, complicated.

It’s kind of like a product that was super-cool, but didn’t really fit a need in the market at the time, sadly.

DOUG.   Well, let’s start in on our first stories.

We have a two-pack – it’s crime time.

Let’s talk about shutdowns and lock-ups, starting with the FBI shutting down the Hive ransomware servers at long last.

That’s good news!

Hive ransomware servers shut down at last, says FBI

DUCK.   It does seem so, doesn’t it, Doug?

Although we need to say, as we always do, essentially, that “cybercrime abhors a vacuum”.

Sadly, other operators steam in when one lot get busted…

…or if all that happens is that their servers get taken down, and the actual people operating them don’t get identified and arrested, typically what happens is they keep their heads below the parapet for a little while, and then they just pop up somewhere else.

Sometimes they reinvent the old brand, just to thumb their nose at the world.

Sometimes they’d come back with a new name.

So the thing with Hive – it turns out that the FBI had infiltrated the Hive ransomware gang, presumably by taking over some sysadmin’s account, and apparently that happened in the middle of 2022.

But, as we have said on the podcast before, with the dark web, the fact that you have someone’s account and you can log in as them…

…you still can’t just look up the IP number of the server you’re connecting to, because the dark web is hiding that.

So it seems that, for the first part of this operation, the FBI weren’t actually able to identify where the servers were, although apparently they were able to get free decryption keys for quite a number of people – I think several hundred victims.

So that was quite good news!

And then, whether it was some operational intelligence blunder, whether they just got lucky, or… we don’t know, but it seems that eventually they did work out where the servers were, and bingo!


DOUG.   OK, very good.

And then our second of these crime stories.

We’ve got a Dutch suspect in custody, charged for not just personal data theft, but [DOOM-LADEN VOICE] “megatheft”, as you put it. Paul:

Dutch suspect locked up for alleged personal data megathefts

DUCK.   Yes!

It seems that his “job” was… he finds data, or buys data from other people, or breaks into sites and steals huge tranches of data himself.

Then he slices-and-dices it in various ways, and puts it up for sale on the dark web.

He was caught because the company that looks after TV licensing in Austria (a lot of European countries require you to have a permit to own and operate a TV set, which essentially funds national television)… those databases pretty much have every household, minus a few.

The Austrian authorities became aware that there was a database up for sale on the dark web that looked very much like the kind of data you’d get – the fields, and the way everything was formatted… “That looks like ours, that looks like Austrian TV licences. My gosh!”

So they did a really cool thing, Doug.

They did an undercover buy-back, and in the process of doing so, they actually got a good handle on where the person was: “It looks like this person is probably in Amsterdam, in the Netherlands.”

And so they got in touch with their chums in the Dutch police, and the Dutch were able to get warrants, and find out more, and do some raids, and bust somebody for this crime.

Perhaps unusually, they got the right from the court, essentially, to hold the guy incommunicado – it was all a secret.

He was just locked away, didn’t get bail – in fact, they’ve still got a couple more months, I think, that they can hold him.

So he’s not getting out.

I’m assuming they’re worried that [A] he’s got loads of cryptocurrency lying around, so he’d probably do a runner, and [B] he’d probably tip off all his compadres in the cyberunderworld.

It also seemed that he was making plenty of money out of it, because he’s also being charged with money laundering – the Dutch police claim to have evidence that he personally cashed out somewhere in the region of half-a-million euros of cryptocoins last year.

So there you are!

Quite a lot of derring-do in an investigation, once again.

DOUG.   Yes, indeed.

OK, this is a classic “We will keep an eye on that!” type of story.

In the meantime, we have a Samba logon bug that reminds us why cryptographic agility is so important:

Serious Security: The Samba logon bug caused by outdated crypto

DUCK.   It is a reminder that when the cryptographic gurus of the world say, “XYZ algorithm is no longer fit for purpose, please stop using it”, and the year is – shall we say – the mid 2000s…

…it’s well worth listening!

Make sure that there isn’t some legacy code that drags on, because you kind-of think, “No one will use it.”

This is a logon process in Microsoft Windows networking which relies on the MD5 hashing algorithm.

And the problem with the MD5 hashing algorithm is that it is much too easy to create two files that have exactly the same hash.

That shouldn’t happen!

For me to get two separate inputs that have exactly the same hash should take me, on my laptop, approximately 10,000 years…

DOUG.   Approximately! [LAUGHS]

DUCK.   More or less.

However, just for that article alone, using tools developed by a Dutch cryptographer for his Master’s thesis back in 2007, I created *ten* colliding MD5 hash-pair files…

…in a maximum of 14 seconds (for one of them) and a minimum of under half a second.

So, billions of times faster than it’s supposed to be possible.

You can therefore be absolutely sure that the MD5 hash algorithm *simply doesn’t live up to its promise*.

That is the core of this bug.

Basically, in the middle of the authentication process, there’s a part that says, “You know what, we’re going to create this super-secure authentication token from data supplied by the user, and using a secret key supplied by the user. So, what we’ll do is we’ll first do an MD5 hash of the data to make it nice and short, and then we’ll create the authentication code *based on that 128-bit hash*.”

In theory, if you’re an attacker, you can create alternative input data *that will come up with the same authentication hash*.

And that means you can convince the other end, “Yes, I *must* know the secret key, otherwise how could I possibly create the right authentication code?”

The answer is: you cheat in the middle of the process, by feeding in data that just happens to come up with the same hash, which is what the authentication code is based upon.
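That hash-first structure can be sketched in a few lines of Python. To be clear, this is a simplified illustration, not Samba’s actual Netlogon code – the helper name `make_auth_code` and the key and message values are made up for the example. The point it shows is that the keyed authentication code is computed over `MD5(data)`, so any second input that collides with the genuine one under MD5 gets the identical code, without the attacker ever knowing the key:

```python
import hashlib
import hmac

def make_auth_code(secret_key: bytes, data: bytes) -> bytes:
    # Step 1: compress the input with MD5 (the weak link).
    digest = hashlib.md5(data).digest()   # 128-bit hash of the data
    # Step 2: compute the keyed authentication code over the *hash*,
    # not over the original data itself.
    return hmac.new(secret_key, digest, hashlib.md5).digest()

key = b"shared secret"
msg = b"logon request for user alice"
code = make_auth_code(key, msg)

# The code depends only on MD5(msg): any input that produces the same
# 128-bit digest gets the same authentication code, key or no key.
assert code == hmac.new(key, hashlib.md5(msg).digest(), hashlib.md5).digest()
```

So an attacker who can manufacture an MD5 collision for the genuine data – which, as above, now takes seconds rather than millennia – can substitute their own input and still pass the check.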

The MD5 algorithm died years ago, but yet it lives on – and it shouldn’t!

So the fix is easy.

Samba just said, “What we’re going to do is, if you want to use this old algorithm, from now on, you will have to jump through hoops to turn it on. And if that breaks things, and if suddenly you can’t log into your own network because you were using weak security without realising it… that’s the price we’re all willing to pay.”

And I agree with that.

DOUG.   OK, it’s version 4.17.5 that now forces those two options, so head out there and pick that up if you haven’t already.

And last, but certainly not least, we’ve got code-signing certificates stolen from GitHub.

But there’s a silver lining here, fortunately:

GitHub code-signing certificates stolen (but will be revoked this week)

DUCK.   It’s been quite the few months for cloud breaches and potential supply chain attacks.

DOUG.   Seriously!

DUCK.   “Oh dear, stolen signing keys”… GitHub realised this had happened on 07 December 2022.

Now, hats off to them, they realised the very day after the crooks had got in.

The problem is that they hadn’t got in to wander around – it seems that their ability to get in was based on the fact that they could download private GitHub repositories.

This is not a breach of the GitHub systems, or the GitHub infrastructure, or how GitHub stores files – it’s just that GitHub’s code on GitHub… some of the stuff that was supposed to be private got downloaded.

And as we’ve spoken about before, when source code repositories that are supposed to be private get downloaded…

…the problem is that, surprisingly often, those repositories might have stuff in that you don’t want to make public.

For example, passwords to other services.

And, importantly, the code-signing keys – your signet ring, that you use to put your little seal in the wax of the program that you actually build.

Even if you’re an open source project, you’re not going to put your code-signing keys in the public version of the source code!

So that was GitHub’s fear: “Oh dear. We found the crooks almost immediately, but they came in, they grabbed the code, they went… thus, damage already done.”

It took them quite a long time, nearly two months, to figure out what they could say about this.

Or at least it took two months until they said anything about it.

And it sounds as though the only things that might have an effect on customers that did get stolen were indeed code-signing keys.

Only two projects were affected.

One is the source code editor known as “Atom”, GitHub Atom.

That was basically superseded in most developers’ lives by Visual Studio Code [LAUGHS], so the whole project got discontinued in the middle of 2022, and its last security update was December 2022.

So you probably shouldn’t be using Atom anyway.

And the good news is that, because they weren’t going to be building it any more, the certificates involved…

…most of them have already expired.

And in the end, GitHub found, I think, that there are only three stolen certificates that were actually still valid, in other words, that crooks could actually use for signing anything.

And those three certificates were all encrypted.

One of them expired on 04 January 2023, and it doesn’t seem that the crooks did crack that password, because I’m not aware of any malware that was signed with that certificate in the gap between the crooks getting in and the certificate expiring one month later.

There is a second certificate that expires the day we’re recording the podcast, Wednesday, 01 February 2023; I’m not aware of that one having been abused, either.

The only outlier in all of this is a code-signing certificate that, unfortunately, doesn’t expire until 2027, and that’s for signing Apple programs.

So GitHub has said to Apple, “Watch out for anything that comes along that’s signed with that.”

And from 02 February 2023, all of the code-signing certificates that were stolen (even the ones that have already expired) will be revoked.

So it looks as though this is a case of “all’s well that ends well.”

Of course, there’s a minor side-effect here, and that is that if you’re using the GitHub Desktop product, or if you’re still using the Atom editor, then essentially GitHub is revoking signing keys *for their own apps*.

In the case of the GitHub Desktop, you absolutely need to upgrade, which you should be doing anyway.

Ironically, because Atom is discontinued… if you desperately need to continue using it, you actually have to downgrade slightly to the most recent version of the app that was signed with a certificate that is not going to get revoked.

I may have made that sound more complicated than it really is…

…but it’s a bad look for GitHub, because they did get breached.

It’s another bad look for GitHub that code-signing certificates were included in the breach.

But it’s a good look for GitHub that, by the way they managed those certificates, most of them were no longer of any use.

Two of the three that could be dangerous will have expired by the time you listen to this podcast, and the last one, in your words, Doug, “they’re really keeping an eye on.”

Also, they’ve revoked all the certificates, despite the fact that there is a knock-on effect on their own code.

So, they’re essentially disowning their own certificates, and some of their own signed programs, for the greater good of all.

And I think that’s good!

DOUG.   Alright, good job by GitHub.

And, as the sun begins to set on our show for today, it’s time to hear from one of our readers.

Well, if you remember from last week, we’ve been trying to help out reader Steven roll his own USB-key-based password manager.

Based on his quandary, reader Paul asks:

Why not just store your passwords on a USB stick with hardware encryption and a keypad… in a portable password manager such as KeePass? No need to invent your own, just shell out a few bucks and keep a backup somewhere, like in a safe.

DUCK.   Not a bad idea at all, Doug!

I’ve been meaning to buy-and-try one of those special USB drives… you get hard-disk-sized ones (although they generally have SSDs inside these days), where there’s plenty of room for a keypad on the top of the drive.

But you even get USB-stick-sized ones, and they typically have two rows of five or six keys next to each other.

It’s not like those commodity USB drives that say, “Includes free encryption software”, where the software is on the stick and you can then install it on your computer.

The idea is that it’s like BitLocker or FileVault or LUKS, like we spoke about last week.

There’s a full-disk encryption layer *inside the drive enclosure itself*, and as soon as you unplug it, even if you don’t unmount it properly, if you just yank it out of the computer…

…when the power goes down, the key gets flushed from memory and the thing gets locked again.

I guess the burning question is, “Well, why doesn’t everyone just use those as USB keys, instead of regular USB devices?”

And there are two reasons: the first is that it’s a hassle, and the other problem is that they’re much, much more expensive than regular USB keys.

So I think, “Yes, that’s a great idea.”

The problem is, because they’re not mainstream products, I don’t have any I can recommend – I’ve never tried one.

And you can’t just go into the average PC shop and buy one.

So if any listeners have a brand, or a type, or a particular class of such product that they use and like…

…we’d love to hear about it, so do let us know!

DOUG.   OK, great… I love a little crowd-sourcing, people helping people.

Thank you very much, Paul, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email [email protected], comment on any one of our articles, or hit us up on social: @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…

BOTH.   Stay secure!