S3 Ep119: Breaches, patches, leaks and tweaks! [Audio + Text]

BREACHES, PATCHES, LEAKS AND TWEAKS

Latest episode – listen now.

You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Breaches, breaches, patches, and typios.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Daul Pucklin…

…I’m sorry, Paul!


DUCK.  I think I’ve worked it out, Doug.

“Typios” is an audio typo.


DOUG.  Exactly!


DUCK.  Yes… well done, that man!


DOUG.  So, what do typos have to do with cybersecurity?

We’ll get into that…

But first – we like to start with our This Week in Tech History segment.

This week, 23 January 1996, version 1.0 of the Java Development Kit said, “Hello, world.”

Its mantra, “Write once, run anywhere”, and its release right as the web’s popularity was really reaching a fever pitch, made it an excellent platform for web-based apps.

Fast-forward to today, and we’re at version 19, Paul.


DUCK.  We are!

Java, eh?

Or “Oak”.

I believe that was its original name, because the person who invented the language had an oak tree growing outside his office.

Let us take this opportunity, Doug, to clear up, for once and for all, the confusion that lots of people have between Java and JavaScript.


DOUG.  Ooooooh…


DUCK.  A lot of people think that they are related.

They’re not related, Doug.

They’re *exactly the same* – one is just the shortened… NO, I’M COMPLETELY KIDDING YOU!

Java is not JavaScript – tell your friends!


DOUG.  I was, like, “Where is this going?” [LAUGHS]


DUCK.  JavaScript basically got that name because the word Java was cool…

…and programmers run on coffee, whether they’re programming in Java or JavaScript.


DOUG.  Alright, very good.

Thank you for clearing that up.

And on the subject of clearing things up, GoTo, the company behind such products as GoToMyPC, GoToWebinar, LogMeIn, and (cough, cough) others, says that they’ve “detected unusual activity within our development environment and third party cloud storage service.”

Paul, what do we know?

GoTo admits: Customer cloud backups stolen together with decryption key


DUCK.  That was back on the last day of November 2022.

And the (cough, cough) that you mentioned earlier, of course, is GoTo’s affiliate/subsidiary, or company that’s part of their group, LastPass.

Of course, the big story over Christmas was LastPass’s breach.

Now, this breach seems to be a different one, from what GoTo has now come out and said.

They admit that the cloud service that ultimately got breached is the same one that is shared with LastPass.

But the stuff that got breached, at least from the way they wrote it, sounds to have been breached differently.

And it took until this week – nearly two months later – for GoTo to come back with an assessment of what they found.

And the news is not good at all, Doug.

Because a whole load of products… I’ll read them out: Central, Pro, join.me, Hamachi and RemotelyAnywhere.

For all of those products, encrypted backups of customer stuff, including account data, got stolen.

And, unfortunately, the decryption key for at least some of those backups was stolen with them.

So that means they’re essentially *not* encrypted once they’re in the hands of the crooks.

And there were two other products, which were Rescue and GoToMyPC, where so-called “MFA settings” were stolen, but were not even encrypted.

So, in both cases we have, apparently: hashed-and-salted passwords missing, and we have these mysterious “MFA (multifactor authentication) settings”.

Given that this seems to be account-related data, it’s not clear what those “MFA settings” are, and it’s a pity that GoTo was not a little bit more explicit.

And my burning question is…

…do those settings include things like the phone number that SMS 2FA codes might be sent to?

The starting seed for app-based 2FA codes?

And/or those backup codes that many services let you create a few of, just in case you lose your phone or your SIM gets swapped?

SIM swapper sent to prison for 2FA cryptocurrency heist of over $20m


DOUG.  Oh, yes – good point!


DUCK.  Or your authenticator program fails.


DOUG.  Yes.


DUCK.  So, if they are any of those, then that could be big trouble.

Let’s hope those weren’t the “MFA settings”…

…but the omission of the details there means that it’s probably worth assuming that they were, or might have been, in amongst the data that was stolen.


DOUG.  And, speaking of possible omissions, we’ve got the requisite, “Your passwords have leaked. But don’t worry, they were salted and hashed.”

But not all salting-and-hashing-and-stretching is the same, is it?

Serious Security: How to store your users’ passwords safely


DUCK.  Well, they didn’t mention the stretching part!

That’s where you don’t just hash the password once.

You hash it, I don’t know… 100,100 times, or 5000 times, or 50 times, or a million times, just to make it a bit harder for the crooks.

And as you say… yes, not all salting-and-hashing is made equal.

I think you spoke fairly recently on the podcast about a breach where there were some salted-and-hashed passwords stolen, and it turned out, I think, that the salt was a two-digit code, “00” to “99”!

So, 100 different rainbow tables is all you need…

…a big ask, but it’s do-able.

And where the hash was *one round* of MD5, which you can do at billions of hashes a second, even on modest equipment.
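To make that weakness concrete, here’s a minimal Python sketch of exactly the sort of exhaustive search Duck describes. The two-digit salt, the short lowercase passwords, and the function names are purely illustrative – this isn’t modelled on any real breach:

```python
from __future__ import annotations

import hashlib
import string
from itertools import product

def weak_hash(salt: str, password: str) -> str:
    # One round of MD5 over a two-digit salt: the weak scheme described above.
    return hashlib.md5((salt + password).encode()).hexdigest()

def crack(target_hash: str, length: int = 3) -> tuple[str, str] | None:
    # Exhaust every salt "00".."99" against every short lowercase password.
    for n in range(100):
        salt = f"{n:02d}"
        for letters in product(string.ascii_lowercase, repeat=length):
            password = "".join(letters)
            if weak_hash(salt, password) == target_hash:
                return salt, password
    return None
```

Even in pure Python, cracking a three-letter password this way finishes in a fraction of a second; dedicated GPU rigs get through billions of MD5 guesses per second, which is why a single unstretched round offers so little protection.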

So, just as an aside, if you’re ever unfortunate enough to suffer a breach of this sort yourself, where you lose customers’ hashed passwords, I recommend that you go out of your way to be definitive about what algorithm and parameter settings you are using.

Because it does give a little bit of comfort to your users about how long it might take crooks to do the cracking, and therefore how frenziedly you need to go about changing all your passwords!
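For readers wondering what salting, hashing *and* stretching look like together, here’s a hedged sketch using PBKDF2 from Python’s standard library. The algorithm choice and the 100,100 iteration count are illustrative (the latter simply echoes the figure mentioned above), not a claim about what any particular company used – and note that the record stores exactly the algorithm and parameter details Duck recommends disclosing:

```python
import hashlib
import hmac
import os

def hash_password(password: str, rounds: int = 100_100) -> dict:
    # Fresh random salt per user; PBKDF2-HMAC-SHA256 repeats ("stretches")
    # the hashing 'rounds' times to slow down offline guessing.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return {"algo": "pbkdf2_sha256", "rounds": rounds,
            "salt": salt.hex(), "hash": digest.hex()}

def verify_password(password: str, record: dict) -> bool:
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(),
        bytes.fromhex(record["salt"]), record["rounds"])
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate.hex(), record["hash"])
```

Because the salt is random and unique per user, two users with the same password get completely different hashes, which is what defeats precomputed rainbow tables.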


DOUG.  Alright.

We’ve got some advice, of course, starting with: Change all passwords that relate to the services that we talked about earlier.


DUCK.  Yes, that is something that you should do.

It’s what we would normally recommend when hashed passwords are stolen, even if they’re super-strongly hashed.


DOUG.  OK.

And we’ve got: Reset any app-based 2FA code sequences that you’re using on your accounts.


DUCK.  Yes, I think you might as well do that.


DOUG.  OK.

And we’ve got: Regenerate new backup codes.


DUCK.  When you do that with most services, if backup codes are a feature, then the old ones are automatically thrown away, and the new ones replace them entirely.


DOUG.  And last, but certainly not least: Consider switching to app-based 2FA codes if you can.


DUCK.  SMS codes have the advantage that there’s no shared secret; there’s no seed.

It’s just a truly random number that the other end generates each time.

That’s the good thing about SMS-based stuff.

As we said, the bad thing is SIM-swapping.

And if you need to change either your app-based code sequence or where your SMS codes go…

…it’s much, much easier to start a new 2FA app sequence than it is to change your mobile phone number! [LAUGHS]
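App-based codes are easy to reseed precisely because each code is computed from nothing more than a shared secret (the “seed”) and the current time, per RFC 6238: replace the seed and you replace the entire code sequence. Here’s a simplified illustration of the arithmetic – a sketch, not any vendor’s implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC the counter, then "dynamic truncation" to N digits.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: the counter is simply the number of 30-second steps so far.
    now = time.time() if at_time is None else at_time
    return hotp(secret, int(now // step), digits)
```

The published RFC 6238 test vectors confirm the maths: with the ASCII seed `12345678901234567890`, the 8-digit code at Unix time 59 comes out as 94287082.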


DOUG.  OK.

And, as I’ve been saying repeatedly (I might get this tattooed on my chest somewhere), we will keep an eye on this.

But, for now, we’ve got a leaky T-Mobile API responsible for the theft of…

(Let me check my notes here: [LOUD BELLOW OFF-MIC] THIRTY-SEVEN MILLION!?!??!)

37 million customer records:

T-Mobile admits to 37,000,000 customer records stolen by “bad actor”


DUCK.  Yes.

That’s a little bit annoying, isn’t it? [LAUGHTER]

Because 37 million is an incredibly large number… and, ironically, comes after 2022, the year in which T-Mobile paid out $500 million to settle issues relating to a data breach that T-Mobile had suffered in 2021.

Now, the good news, if you can call it that, is: last time, the data that got breached included things like Social Security Numbers [SSNs] and driving licence details.

So that’s really what you might call “high-grade” identity theft stuff.

This time, the breach is big, but my understanding is that it’s basic electronic contact details, including your phone number, along with date of birth.

That goes some way towards helping crooks with identity theft, but nowhere near as far as something like an SSN or a scanned photo of your driving licence.


DOUG.  OK, we’ve got some tips if you are affected by this, starting with: Don’t click “helpful” links in emails or other messages.

I’ve got to assume that a tonne of spam and phishing emails are going to be generated from this incident.


DUCK.  If you avoid the links, as we always say, and you find your own way there, then whether it’s a legitimate email or not, with a genuine link or a bogus one…

…if you don’t click the good links, then you won’t click the bad links either!


DOUG.  And that dovetails nicely with our second tip: Think before you click.

And then, of course, our last tip: Report those suspicious emails to your work IT team.


DUCK.  When crooks start phishing attacks, they generally don’t send their messages to just one person inside the company.


So, if the first person that sees a phish in your company happens to raise the alarm, then at least you have a chance of warning the other 49!


DOUG.  Excellent.

Well, for you iOS 12 users out there… if you were feeling left out from all the recent zero-day patches, have we got a story for you today!

Apple patches are out – old iPhones get an old zero-day fix at last!


DUCK.  We have, Doug!

I’m quite happy, because everyone knows I love my old iOS 12 phone.

We went through some excellent times, and on some lengthy and super-cool bicycle rides together until… [LAUGHTER]

…the fateful one where I got injured well enough to recover, and the phone got injured well enough that you can barely see through the cracks of the screen anymore, but it still works!

I love it when it gets an update!


DOUG.  I think this was when I learned the word prang.


DUCK.  [PAUSE] What?!

That’s not a word to you?


DOUG.  No!


DUCK.  I think it comes from the Royal Air Force in the Second World War… that was “pranging [crashing] a plane”.

So, there’s a ding, and then, well above a ding, comes a prang, although they both have the same sound.


DOUG.  OK, gotcha.


DUCK.  Surprise, surprise – after having no iOS 12 updates for ages, the pranged phone got an update…

…for a zero-day bug that was the mysterious bug fixed some time ago in iOS 16 only… [WHISPER] very secretively by Apple, if you remember that.


DOUG.  Oh, I remember that!

Apple pushes out iOS security update that’s more tight-lipped than ever


DUCK.  There was this iOS 16 update, and then some time later updates came out for all the other Apple platforms, including iOS 15.

And Apple said, “Oh, yes, actually, now we think about it, it was a zero-day. Now we’ve looked into it, although we rushed out the update for iOS 16 and didn’t do anything for iOS 15, it turns out that the bug only applies to iOS 15 and earlier.” [LAUGHS]

Apple patches everything, finally reveals mystery of iOS 16.1.2

So, wow, what a weird mystery it was!

But at least they patched everything in the end.

Now, it turns out, that old zero-day is now patched in iOS 12.

And this is one of those WebKit zero-days that sounds as though the way it’s been used in the wild is for malware implantation.

And that, as always, smells of something like spyware.

By the way, that was the only bug fixed in iOS 12 that was listed – just that one 0-day.

The other platforms got loads of fixes each.

Fortunately, those all seem to be proactive; none of them are listed by Apple as “actively being exploited.”

[PAUSE]

Right, let’s move on to something super-exciting, Doug!

I think we’re into the “typios”, aren’t we?


DOUG.  Yes!

The question I’ve been asking myself… [IRONIC] I can’t remember how long, and I’m sure other people are asking, “How can deliberate typos improve DNS security?”

Serious Security: How dEliBeRaTe tYpOs might imProVe DNS security


DUCK.  [LAUGHS]

Interestingly, this is an idea that first surfaced in 2008, around the time that the late Dan Kaminsky, who was a well-known security researcher in those days, figured out that there were some significant “reply guessing” risks to DNS servers that were perhaps much easier to exploit than people thought.

Where you simply poke replies at DNS servers, hoping that they just happen to match an outbound request that hasn’t had an official answer yet.

You just think, “Well, I’m sure somebody in your network must be interested in going to the domain naksec.test just about now. So let me send back a whole load of replies saying, ‘Hey, you asked about naksec.test; here it is’”…

…and they send you a completely fictitious server [IP] number.

That means that you come to my server instead of going to the real deal, so I basically hacked your server without going near your server at all!

And you think, “Well, how can you just send *any* reply? Surely there’s some kind of magic cryptographic cookie in the outbound DNS request?”

That means the server could notice that a subsequent reply was just someone making it up.

Well, you’d think that… but remember that DNS first saw the light of day in 1987, Doug.

And not only was security not such a big deal then, but there wasn’t room, given the network bandwidth of the day, for long-enough cryptographic cookies.

So DNS requests, if you go to RFC 1035, are protected (loosely speaking, Doug) by a unique identification number, hopefully randomly generated by the sender of the request.

Guess how long they are, Doug…


DOUG.  Not long enough?


DUCK.  16 bits.


DOUG.  Ohhhhhhhh.


DUCK.  That’s kind-of quite short… it was kind-of quite short, even in 1987!

But 16 bits is *two whole bytes*.

Typically the amount of entropy, as the jargon has it, that you would have in a DNS request (with no other cookie data added – a basic, original-style, old-school DNS request)…

…you have a 16-bit UDP source port number (although you don’t get to use all 16 bits, so let’s call it 15 bits).

And you have that 16-bit, randomly-chosen ID number… hopefully your server chooses randomly, and doesn’t use a guessable sequence.

So you have 31 bits of randomness.

And although 2^31 [just over 2 billion] is a lot of different requests that you’d have to send, it’s by no means out of the ordinary these days.

Even on my ancient laptop, Doug, sending 2^16 [65,536] different UDP requests to a DNS server takes an almost immeasurably short period of time.

So, 16 bits is almost instantaneous, and 31 bits is do-able.
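To sanity-check those figures, here’s the arithmetic in runnable form, using the bit counts quoted above for a cookie-less, old-school DNS exchange:

```python
# Guessing space an off-path attacker faces against old-school DNS.
port_bits = 15   # roughly 15 usable bits of UDP source-port randomness
id_bits = 16     # the 16-bit query ID from RFC 1035
total_bits = port_bits + id_bits

assert 2 ** id_bits == 65_536            # trivially brute-forceable on its own
assert 2 ** total_bits == 2_147_483_648  # ~2.1 billion: big, but do-able
```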

So the idea, way back in 2008 was…

What if we take the domain name you’re looking up, say, naksec.test, and instead of doing what most DNS resolvers do and saying, “I want to look up n-a-k-s-e-c dot t-e-s-t,” all in lowercase because lowercase looks nice (or, if you want to be old-school, all in UPPERCASE, because DNS is case-insensitive, remember)?

What if we look up nAKseC.tESt, with a randomly chosen sequence of lowercase, UPPERCASE, UPPERCASE, lower, et cetera, and we remember what sequence we used, and we wait for the reply to come back?

Because DNS replies are mandated to have a copy of the original request in them.

What if we can use some of the data in that request as a kind of “secret signal”?

By mashing up the case, the crooks will have to guess that UDP source port; they will have to guess that 16-bit identification number in the reply; *and* they will have to guess how we chose to miS-sPEll nAKsEc.TeST.

And if they get any of those three things wrong, the attack fails.
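The case-mashing trick itself (sometimes called “0x20 encoding”, after the single bit that differs between an ASCII letter’s upper- and lowercase forms) is simple to sketch. The function names here are hypothetical, and naksec.test is just the example domain from the discussion:

```python
import secrets

def randomize_case(name: str) -> str:
    # Flip each letter to UPPER or lower at random; dots and digits pass through.
    return "".join(
        ch.upper() if ch.isalpha() and secrets.randbits(1) else ch.lower()
        for ch in name
    )

def reply_is_plausible(query_sent: str, name_in_reply: str) -> bool:
    # DNS itself is case-insensitive, so a genuine server simply echoes our
    # exact mixed-case spelling; a blind forger must guess it, one bit per letter.
    return query_sent == name_in_reply
```

With naksec.test, ten letters mean 2^10 = 1024 extra combinations a forger has to guess on top of the port and ID – and, as noted above, the longer the domain name, the more extra bits you get.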


DOUG.  Wow, OK!


DUCK.  And Google decided, “Hey, let’s try this.”

The only problem is that in really short domain names (so they’re cool, and easy to write, and easy to remember), like Twitter’s t.co, you only get three characters that can have their case changed.

It doesn’t always help, but loosely speaking, the longer your domain name, the safer you’ll be! [LAUGHS]

And I just thought that was a nice little story…


DOUG.  As the sun begins to set on our show for today, we have a reader comment.

Now, this comment came on the heels of last week’s podcast, S3 Ep118.

S3 Ep118: Guess your password? No need if it’s stolen already! [Audio + Text]

Reader Stephen writes… he basically says:

I’ve been hearing you guys talk about password managers a lot recently – I decided to roll my own.

I generate these secure passwords; I could store them on a memory stick or sticks, only connecting the stick when I need to extract and use a password.

Would the stick approach be reasonably low risk?

I guess I could become familiar with encryption techniques to encode and decode information on the stick, but I can’t help feeling that may take me way beyond the simple approach I am seeking.

So, what say you, Paul?


DUCK.  Well, if it takes you way beyond the “simple” approach, then that means it’s going to be complicated.

And if it’s complicated, then that’s a great learning exercise…

…but maybe password encryption is not the thing where you want to do those experiments. [LAUGHTER]


DOUG.  I do believe I’ve heard you say before on this very programme several different times: “No need to roll your own encryption; there are several good encryption libraries out there you can leverage.”


DUCK.  Yes… do not knit, crochet, needlepoint, or cross-stitch your own encryption if you can possibly help it!

The issue that Stephen is trying to solve is: “I want to dedicate a removable USB drive to have passwords on it – how do I go about encrypting the drive in a convenient way?”

And my recommendation is that you should go for something that does full-device encryption [FDE] *inside the operating system*.

That way, you’ve got a dedicated USB stick; you plug it in, and the operating system says, “That’s scrambled – I need the passcode.”

And the operating system deals with decrypting the whole drive.

Now, you can have encrypted *files* inside the encrypted *device*, but it means that, if you lose the device, the entire disk, while it’s unmounted and unplugged from your computer, is shredded cabbage.

And instead of trying to knit your own device driver to do that, why not use one built into the operating system?

That is my recommendation.

And this is where it gets both easy and very slightly complicated at the same time.

If you’re running Linux, then you use LUKS [Linux Unified Key Setup].

On Macs, it’s really easy: you have a technology called FileVault that’s built into the Mac.

On Windows, the equivalent of FileVault or LUKS is called BitLocker; you’ve probably heard of it.

The problem is that if you have one of the Home versions of Windows, you can’t do that full-disk encryption layer on removable drives.

You have to go and spend the extra to get the Pro version, or the business-type Windows, in order to be able to use the BitLocker full-disk encryption.

I think that’s a pity.

I wish Microsoft would just say, “We encourage you to use it as and where you can – on all your devices if you want to.”

Because even if most people don’t, at least some people will.

So that’s my advice.

The outlier is that if you have Windows, and you bought a laptop, say, at a consumer store with the Home version, you’re going to have to spend a little bit of extra money.

Because, apparently, encrypting removable drives, if you’re a Microsoft customer, isn’t important enough to build into the Home version of the operating system.


DOUG.  Alright, very good.

Thank you, Stephen, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email [email protected], you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH.  Stay secure!

[MUSICAL MODEM]


Multi-million investment scammers busted in four-country Europol raid

Another day, another series of cryptocurrency scams…

…these, fortunately, brought to a halt, though sadly not before they’d defrauded “investors” around the globe to the tune of millions of dollars.

According to Europol, 216 people were questioned in Bulgaria, Cyprus, Germany and Serbia; 15 have already been arrested; 22 searches were conducted, including at four separate call centres; and about $1,000,000 in cryptocurrency was seized.

Law enforcement also confiscated €50,000 in cash; got hold of numerous electronic devices, presumably including laptops, servers, phones and backup devices; and towed away three vehicles.

As we’ve mentioned before, scammers’ cars are often at the show-off end of the vehicular spectrum, and thus worth lots of money, but also potentially include valuable forensic evidence from their numerous on-board computer systems.

All a pack of lies

These scammers used a well-known mechanism for drawing in their victims: start small, simulate regular and substantial gains via totally fictitious online reports, and use this bogus “success” as a lure to convince victims to invest more and more.

Europol notes that although most of the victims seem to be from Germany, where this investigation started, the scammers are known to have fleeced people worldwide, including in Switzerland, Australia and Canada.

Remember that in a scam of this sort, the criminals often allow victims to withdraw a percentage of their “gains”, as a way of convincing them that their investments really do have some sort of “liquidity” and aren’t just being swallowed up forever.

Of course, all they’re really doing is giving you a small fraction of your own money back, under the guise of an interest payment or some other gain in capital value.

Likewise, given that all the “gains” you are looking at are fictitious, concocted via a fake “trading” website that shows everyone’s investments booming, it’s easy for the crooks to pretend to pay you “incentives” for investing more, or to award “bonuses” if you help them draw new people into the scam.

When sufficiently many victims start demanding to withdraw their “investments” – or at least to access more funds than they originally put in – then the crooks know that the game is up…

…and at this point, they will typically cut and run, shutting down the scam site abruptly and vanishing into cyberspace with all the “investments” they’ve tricked people into handing over so far.

We’re guessing that in this case, because Europol describes the criminals as having four call centres, and as operating “fake cryptocurrency schemes” (note the use of the plural noun schemes), that when one fake website was shut down, another “investment opportunity” would soon spring up targeting new victims.

Post-scam scamming

We’ve even reported before on a cryptocoin scam, prevalent in South East Asia, where the crooks throw in a sting-at-the-end-of-the-sting.

These scammers, known as the CryptoRom gang, don’t simply break off contact and run away when a victim tries to withdraw all their “funds” – they try out a post-scam scam where they tell the victim that their withdrawal is on its way, except that it’s been frozen by the government for tax reasons.

The victim is presented with a tax bill, typically 20% of the “gain” they’ve made, so they’ll only be getting 80% of their “earnings” out.

Unfortunately, the scammers say, simply subtracting the 20% tax amount from the withdrawal (a method used by genuine tax authorities, commonly known as a withholding tax) isn’t an option, because of the “government freeze” on the funds.

The victim will need to pay in that 20% themselves – indeed, they’d jolly well better pay in quickly, the scammers claim, given that the “authorities” are now involved and looking for their share.

What was initially a love-your-victim attitude, aimed at praising them for their wise “investments” and congratulating them on their “success”…

…turns into a squeeze-as-hard-as-you can approach aimed at scaring victims into parting with a final lump sum that the criminals know full well they can’t afford, and may well leave them destitute or deeply in debt to friends and family.

Scam on top of scam on top of scam

As we’ve written before, some victims even experience a sting in the tail-of-the-tail of multi-layer scams like this.

Once you realise you’ve been scammed, whether the scammers pull the plug on you, or you pull the plug on them, you may “co-incidentally” be contacted by someone who sympathises with your plight (they may claim that this recently happened to them), and who knows just the thing for you to try next…

…a cryptocurrency recovery service!

Cryptocoins, by design, are largely unregulated, pseudo-anonymous, and typically hard or even impossible to trace and recover.

But cryptocoin recoveries do sometimes happen, occasionally in astonishing amounts and after lengthy periods.

At the end of 2022, for example, US Internal Revenue Service (IRS) investigators announced that they had tracked down and arrested an individual called James Zhong, of Gainesville, Georgia.

They allege that Zhong had stolen about 50,000 Bitcoins from the infamous Silk Road dark web market not long before it was shut down in 2013.

The investigators apparently recovered the majority of those Bitcoins, then worth well over $3 billion (yes, we do really mean $3000 million), that had been hidden for nearly a decade in a popcorn tin that they found under a pile of blankets in the corner of one of Zhong’s cupboards.

Sadly, if you go down this alleged “cryptocoin recovery service” rabbit hole, you aren’t going to get any money back, because you’re simply wandering into yet another level of the scam.

You will just be pouring yet more good money after bad, and your overall losses will be even more catastrophic.

What to do?

  • If it sounds too good to be true, it IS too good to be true. Talk is cheap, and the fact that these scammers apparently ran four call centres involving hundreds of people is a good reminder that you have no reason to trust anyone who contacts you unexpectedly.
  • Take your time when online talk turns from friendship to money. Some scammers use social media and dating sites to stalk and befriend potential victims in a more personal way than simply cold-calling thousands of people. Don’t be swayed by the fact that your new “friend” happens to have a lot in common with you, and don’t let yourself be mesmerised by their “investment advice”. It’s easy for scammers to pitch themselves as kindred spirits if they’ve studied your social networking or dating site profiles in advance.
  • Don’t be fooled because a scam website looks well-branded and professional. Setting up a website with live graphs, investment pages and “account” management tools is easier than you think. Crooks can readily copy official logos, taglines, branding and even JavaScript code from legitimate sites, and modify it to suit their malicious purposes.
  • Don’t let the scammers drive a wedge between you and your family. If scammers think your family are trying to get you out of trouble, they think nothing of deliberately turning you against your family as part of their scam. Alternatively, they may lure you with the promise of “bonuses” to draw your friends and family into the scam as well.

S3 Ep117: The crypto crisis that wasn’t (and farewell forever to Win 7) [Audio + Text]

THE CRYPTO CRISIS THAT WASN’T

With Doug Aamoth and Paul Ducklin



READ THE TRANSCRIPT


DOUG.  Call centre busts, cracking cryptography, and patches galore.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody – thank you for listening!

My name is Doug Aamoth; he is Paul Ducklin.

Paul, how do you do?


DUCK.  Very well, Douglas.


DOUG.  All right.

We like to start the show with a This Week in Tech History segment, and I have a twofer for you today – two things that went live this week.

One in 1863 and one in 2009.

Both exciting, one perhaps less controversial than the other.

We’re talking, of course, about the first stretch of the London Underground going into service in 1863, the first underground system of its kind.

And then we’ve got the Bitcoin floodgates opening in 2009, the first decentralised cryptocurrency of its kind.

Although we should pencil in an asterisk, because Bitcoin followed in the footsteps of such digital currencies as eCash and DigiCash in the 1980s and 1990s.


DUCK.  Yes, the latter was a rather different sort of “underground movement” to the first, wasn’t it?


DOUG.  [LAUGHS] Exactly, yes!


DUCK.  But you’re right… 160 years of the London Underground!


DOUG.  That’s amazing.

Let us talk about this…


DUCK.  [LAUGHS] You skipped the need to talk about Bitcoin and controversy…


DOUG.  Oh!


DUCK.  Let’s leave our listeners to ponder that one for themselves, Doug, because I think everyone has to have their own opinion about where Bitcoin led us… [LAUGHS]


DOUG.  And their own story.

I had a chance to buy it at $30 a coin and thought that was way too expensive.


DUCK.  Yes, Doug, but if you’d bought at $30, you would have sold at $60 and gone around patting yourself on the back and bragging to everybody.


DOUG.  Oh, not even $60!


DUCK.  Yes, exactly…


DOUG.  I’d have sold at $40. [LAUGHS]

And sticking with the subject of regret, there was a fake call centre in Ukraine that got busted:

Inside a scammers’ lair: Ukraine busts 40 in fake bank call-centre raid

This call centre looks nicer inside than some of the startups I’ve worked at.

So that’s something – this is a full infrastructure here.

What happened with this story, Paul?


DUCK.  Like you say, it looks like a nice little startup, but strangely, when you look at the photos provided by the Ukraine cyberpolice, no one seemed to have turned up for work that day.

And it wasn’t that they’d all gone on vacation. [LAUGHTER]

It was that all the people – and there were, I think, three founders and 37 staff, so this was a biggish boutique business…

…they were all in the next room getting arrested, Doug.

Because although it was a call centre, their primary goal was preying on victims in another country.

In fact, in this case, they were specifically targeting victims in Kazakhstan with banking scams.

Basically, where they call up and they’re talking to you using the same sort of language that the bank would, following a carefully planned script that convinces the person, or convinces sufficiently many of the people they’re calling.

Remember, they’ve got a long list, so they can deal with lots of hang-ups, but eventually they’ll convince someone that they really are talking to the bank.

And once the other end believes that they really are talking to the bank, then…

Everyone says, “Oh, they should have realised it was a scam; they should have known when they were asked to transfer the funds, when they were asked to read out 2FA codes, when they were asked to hand over passwords, when they were asked to disclose details about the account.”

But it’s easy to say that with hindsight…


DOUG.  And I think we’ve talked about this on prior shows – when people ask, “How could someone fall for this?”

Well, they make hundreds and hundreds of calls, but they only need to trick one person. (In this case, it looks like they defrauded about 18,000 people!)

So you don’t need a super-high hit rate based on your calls.

That’s what makes these so dangerous… once you get a victim on the line, and you get access to their bank account, you just start sucking the money right out.


DUCK.  Once someone genuinely believes that they *are* talking to the bank, and they’ve got a call centre person who’s “really” (apparently!) trying to help them – probably giving them better service, support, time, and compassion than any call centre they’ve called themselves lately…

Once the person has crossed that bridge, you can see why they might get drawn in.

And, of course, as soon as the crooks had enough personally identifiable information to fleece the person, they’d jump in and start sucking money out of their account, and moving it to other accounts they controlled…

…so they could then move it on immediately, out of the regular banking system, shoving it into cryptocurrencies.

And that was what they did, day in, day out.

I don’t have much compassion for people who don’t have much compassion for the victims of these scams, to be honest, Doug.

I think a lot of techies sometimes look down their noses: “How could a person fall for this phishing scam? It’s full of mistakes, it’s full of spelling errors, it’s badly punctuated, it’s got a weird URL in it.”

You know, life’s like that!

I can see why people do fall for this – it’s not difficult for a good social engineer to talk to someone in a way that sounds like they’re confirming security details, or like they’re going to say to you, “Let me just check with you that this really is your address”…

…but then, instead of *them* reading out your address, they’ll somehow wangle the conversation so *you* blurt it out first.

And then, “Oh, yes!” – they’ll just agree with you.

It’s surprisingly easy for someone who’s done this before, and who’s practised being a scammer, to lead the conversation in a way that makes you feel that it’s legitimate when it absolutely isn’t.

Like I said, I don’t think you should point any fingers or be judgmental about people who fall for this.

And in this case, 18,000 people went for… I think, an average of thousands of dollars each.

That’s a lot of money, a lot of turnover, for a medium-sized business of 40 people, isn’t it, Doug?


DOUG.  [WRY] That’s not too shabby… other than the illegality of it all.

We do have some advice in the article, much of which we’ve said before.

Certain things like…

Not believing anyone who contacts you out of the blue and says that they’re helping you with an investigation.

Don’t trust the contact details given to you by someone on the other end of the phone….


DUCK.  Exactly.


DOUG.  We’ve talked about Caller ID, how that can’t be trusted:

Voice-scamming site “iSpoof” seized, 100s arrested in massive crackdown

Don’t be talked into handing over your personal data in order to prove your identity – the onus should be on them.

And then, of course, don’t transfer funds to other accounts.


DUCK.  Yes!

Of course, we all need to do that at times – that’s the benefit of electronic banking, particularly if you live in a far-flung region where your bank has closed branches, so you can’t go in anymore.

And you do sometimes need to add new recipients, and to go through the whole process with passwords, and 2FA, and authentication, everything to say, “Yes, I do want to pay money to this person that I’ve never dealt with before.”

You are allowed to do that, but treat adding a new recipient with the extreme caution it deserves.

And if you don’t actually know the person, then tread very carefully indeed!


DOUG.  And the last bit of advice…

Instead of saying, “How could people fall for this?” – because *you* will not fall for this, look out for friends and family who may be vulnerable.


DUCK.  Absolutely.

Make sure that your friends and family know, if they have the slightest doubt, that they should Stop – Think – and Connect *with you first*, and ask for your assistance.

Never be pressurised by fear, or cajoling, or wheedling, or anything that comes from the other end.


DOUG.  Fear – cajoling – wheedling!

And we move on to a classic kerfuffle concerning RSA and the technology media…

…and trying to figure out whether RSA can be cracked:

RSA crypto cracked? Or perhaps not!


DUCK.  Yes, this was a fascinating paper.

I think there are 20-something co-authors, all of whom are listed as primary authors, main authors, on the paper.

It came out of China, and it basically goes like this…

“Hey, guys, you know that there are these things called quantum computers?

And in theory, if you have a super-powerful quantum computer with a million qubits (that’s a quantum binary storage unit, the equivalent of a bit, but for a quantum computer)… if you have a computer with a million qubits, then, in theory, you could probably crack encryption systems like the venerable RSA (Rivest – Shamir – Adleman).

However, the biggest quantum computer yet built, after years and years of trying, has just over 400 qubits. So we’re a long way short of having a powerful enough quantum computer to get this amazing speed-up that lets us crack things that we previously thought uncrackable.

However, we think we’ve come up with a way of optimising the algorithm so that you actually only need a few hundred qubits. And maybe, just maybe, we have therefore paved the way to cracking RSA-2048.”

2048 is the number of bits in the prime product that you use for RSA.

If you can take that product of two 1024-bit prime numbers, big prime numbers…

…*if* you can take that 2048-bit number and factorise it, divide it back into the two numbers that were multiplied together, you can crack the system.

And the theory is that, with conventional computers, it’s just not possible.

Not even a super-rich government could build enough computers that were powerful enough to do that work of factorising the number.

But, as I say, with this super-powerful quantum computer, which no one’s near building yet, maybe you could do it.

And what these authors were claiming is, “Actually we found a shortcut.”


DOUG.  Do they detail the shortcut in the paper, or are they just saying, “Here’s a theory”?


DUCK.  Well, the paper is 32 pages, and half of it is appendix, which has an even higher “squiggle factor” than the rest of the paper.

So yes, they’ve got this *description*, but the problem is they didn’t actually do it.

They just said, “Hypothetically, you might be able to do this; you may be able to do the other. And we did a simulation using a really stripped-down problem”… I think, with just a few simulated qubits.

They didn’t try it on a real quantum computer, and they didn’t show that it actually works.

And the only problem that they actually solved in “proving how quickly” (airquotes!) they could do it is a factorising problem that my own very-many-year-old laptop can solve anyway in about 200 milliseconds on a single core, using a completely unoptimised, conventional algorithm.

So the consensus seems to be… [PAUSE] “It’s a nice theory.”

However, we did speak – I think, in the last podcast – about cryptographic agility.

If you are in the United States, Congress says *in a law* that you need cryptographic agility:

US passes the Quantum Computing Cybersecurity Preparedness Act – and why not?

We collectively need it, so that if we do have a cryptographic algorithm which is found wanting, we can switch soon, quickly, easily…

…and, better yet, we can swap even in advance of the final crack being figured out.

And that specifically applies because of the fear of how powerful quantum computers might be for some kinds of cryptographic cracking problems.

But it also applies to *any* issue where we’re using an encryption system or an online security protocol that we suddenly realise, “Uh-oh, it doesn’t work like we thought – we can’t carry on using the old one because the bottom fell out of that bucket.”

We need to be not worrying about how we’re going to patch said bucket for the next ten years!

We need to be able to chuck out the old, bring in the new, and bring everyone with us.

That’s the lesson to learn from this.

So, RSA *doesn’t* seem to have been cracked!

There’s an interesting theoretical paper, if you have the very specialised mathematics to wade through it, but the consensus of other cryptographic experts seems to be along the lines of: “Nothing to see here yet.”


DOUG.  And of course, the idea is that if and when this does become crackable, we’ll have a better system in place anyway, so it won’t matter because we’re cryptographically agile.


DUCK.  Indeed.


DOUG.  Last but not least, let us talk about the most recent Patch Tuesday.

We’ve got one zero-day, but perhaps even bigger than that, we say, “Thanks for the memories, Windows 7 and Windows 8.1, we hardly knew ye.”

Microsoft Patch Tuesday: One 0-day; Win 7 and 8.1 get last-ever patches


DUCK.  Well, I don’t know about “hardly”, Doug. [LAUGHTER]

Some of us liked one of you a lot – so much that we didn’t want to give it up…

…and a lot of us, apparently, didn’t like the other *at all*.


DOUG.  Yes, kind of an awkward going-away party! [LAUGHS]


DUCK.  So much so that there never was a Windows 9, if you remember.

Somehow, a drained canal was placed between Windows 8.1 and Windows 10.

So, let’s not go into the details of all the patches – there are absolutely loads of them.

There’s one zero-day, which I think is an elevation of privilege, and that applies right from Windows 8.1 all the way to Windows 11 2022H2, the most recent release.

So that’s a big reminder that even if crooks are looking for vulnerabilities in the latest version of Windows, because that’s what most people are using, often those vulnerabilities turn out to be “retrofittable” back a long way.

In fact, I think Windows 7 had 42 CVE-numbered bugs patched; Windows 8.1 had 48.

And I think, as a whole, in all of the Windows products, there were 90 CVEs listed on their website, and 98 CVE-numbered bugs patched altogether, suggesting that about half of the bugs that were actually fixed (they all have CVE-2023- numbers, so they’re all recently discovered bugs)…

…about 50% of them go way back, if you want to go back that far.

So, for the details of all the fixes, go to news.sophos.com, where SophosLabs has published a more detailed analysis of Patch Tuesday.

January 2023 patch roundup: Microsoft tees up 98 updates


DUCK.  On Naked Security, the real thing we wanted to remind you about is…

…if you still have Windows 7, or you’re one of those people who still has Windows 8.1 (because somebody must have liked it), *you aren’t going to get any more security updates ever*.

Windows 7 had three years of “You can pay a whole lot of extra money and get extended security updates” – the ESU programme, as they call it.

But Windows 8.1? [LAUGHS]

The thing that gives credibility to that argument that they wanted to leave a dry ditch called Windows 9 between 8.1 and 10 is that Microsoft is now announcing:

“This extended support thing that we do, where we’ll happily take money off you for up to three years for products that are really ancient?

We’re not going to do that with Windows 8.1.”

So, at the same time as Windows 7 sails into the sunset, so does Windows 8.1.

So… if you don’t want to move on for your own sake, please do it for mine, and for Doug’s [LAUGHTER], and for everybody else’s.

Because you are not going to get any more security fixes, so there will just be more and more unpatched holes as time goes on.


DOUG.  All right!

We do have a comment on this article that we’d like to spotlight.

It does have to do with the missing Windows 9.

Naked Security reader Damon writes:

“My recollection of the reason there was no Windows 9 was to avoid poorly written version-checking code erroneously concluding that something reporting ‘Windows 9’ was Windows 95 or Windows 98.

That’s what I read at the time, anyway – I don’t know the veracity of the claim.”

Now, I had heard the same thing you did, Paul, that this was more of a marketing thing to add a little distance…


DUCK.  The “firebreak”, yes! [LAUGHS]

I don’t think we’ll ever know.

I’ve seen, and even reported in the article, on several of these stories.

One, as you say, it was the firebreak: if we just skip Windows 9 and we go straight to Windows 10, it’ll feel like we’ve distanced ourselves from the past.

I heard the story that they wanted a fresh start, and that the number wasn’t going to be a number anymore.

They wanted to break the sequence deliberately, so the product would just be called “Windows Ten”, and then it would get sub-versions.

The problem is that that story is kind of undermined by the fact that there’s now Windows 11! [LAUGHTER]

And the other problem with the “Oh, it’s because they might hear Windows 9 and think it’s Windows 95 when they’re doing version checking” is…

My recollection is that actually when you used the now-deprecated Windows function GetVersion() to find out the version number, it didn’t tell you “Windows Vista” or “Windows XP”.

It actually gave you a major version DOT minor version.

And amazingly, if I’m remembering correctly, Vista was Windows 6.0.

Windows 7, get this, was Windows 6.1… so there’s already plenty of room for confusion long before “Windows 9” was coming along.


DOUG.  Sure!


DUCK.  Windows 8 was “indows 6.2.

Windows 8.1 was essentially Windows 6.3.

But because Microsoft said, “No, we’re not using this GetVersion() command any more”, until this day (I put some code in the article – I tried it on the Windows 11 2022H2 release)…


unsigned int GetVersion(void);
int printf(const char* fmt,...);
 
int main(void) {
   unsigned int ver = GetVersion();
 
   printf("GetVersion() returned %08X:\n",ver);
   printf("%u.%u (Build %u)\n",ver&255,(ver>>8)&255,(ver>>16)&65535);
 
   return 0;
}

…to this day, unless you have a specially packaged, designed-for-a-particular-version-of-Windows executable installation, if you just take a plain EXE and run it, it will tell you to this day that you’ve got Windows 6.2 (which is really Windows 8):


GetVersion() returned 23F00206:
6.2 (Build 9200)

And, from memory, the Windows 9x series, which was Windows 95, Windows 98, and of course Windows Me, was actually version 4-dot-something.

So I’m not sure I buy this “Windows 9… version confusion” story.

Firstly, we would already have had that confusion when Windows Me came out, because it didn’t start with a “9”, yet it was from that series.

So products would already have had to fix that problem.

And secondly, even Windows 8 didn’t identify itself as “8” – it was still major version 6.

So I don’t know what to believe, Doug.

I’m sticking to the “drained and uncrossable emergency separation canal theory” myself!


DOUG.  All right, we’ll stick with that for now.

Thank you very much, Damon, for sending that in.

If you have an interesting story, comment, or question you’d like to submit, we’d love to read it on the podcast.

You can email [email protected], you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH.  Stay Secure!

[MUSICAL MODEM]


S3 Ep116: Last straw for LastPass? Is crypto doomed? [Audio + Text]

LAST STRAW FOR LASTPASS? IS CRYPTO DOOMED?

With Doug Aamoth and Paul Ducklin

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  LastPass again, fun with quantum computing, and cybersecurity predictions for 2023.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth.

He is Paul Ducklin.

Paul, let’s see if I remember how to do this…

It’s been a couple of weeks, but I hope you had a great holiday break – and I do have a post-holiday gift for you!

As you know, we like to be in the show with a This Week in Tech History segment.


DUCK.  Is this the gift?


DOUG.  This is the gift!

I believe you will be interested in this more than just about any other This Week in Tech History segment…

…this week, on 04 January 1972, the HP-35 Portable Scientific Calculator, a world first, was born.

Image from The Museum of HP Calculators.

Named the HP-35 simply because it had 35 buttons, the calculator was a challenge by HP’s Bill Hewlett to shrink down the company’s desktop-size 9100A scientific calculator so it could fit in his shirt pocket.

The HP-35 stood out for being able to perform trigonometric and exponential functions on the go, things that until then had required the use of slide rules.

At launch, it sold for $395, almost $2500 in today’s money.

And Paul, I know you to be a fan of old HP calculators…


DUCK.  Not *old* HP calculators, just “HP calculators”.


DOUG.  Just in general? [LAUGHS]

Yes, OK…


DUCK.  Apparently, at the launch, Bill Hewlett himself was showing it off.

And remember, this is a calculator that is replacing a desktop calculator/computer that weighed 20kg…

…apparently, he dropped it.

If you’ve ever seen an old HP calculator, they were beautifully built – so he picked it up, and, of course, it worked.

And apparently all the salespeople at HP built that into their repartee. [LAUGHS]

When they went out on the road to do demos, they’d accidentally (or otherwise) let their calculator fall, and then just pick it up and carry on regardless.


DOUG.  Love it! [LAUGHS]


DUCK.  They don’t make ’em like they used to, Doug.


DOUG.  They certainly don’t.

Those were the days – incredible.

OK, let’s talk about something that’s not so cool.


DUCK.  Uh-oh!


DOUG.  LastPass: we said we’d keep an eye on it, and we *did* keep an eye on it, and it got worse!

LastPass finally admits: Those crooks who got in? They did steal your password vaults, after all…


DUCK.  It turns out to be a long running story, where LastPass-the-company apparently simply did not realise what had happened.

And every time they scratched that rust spot on their car a little bit, the hole got bigger, until eventually the whole thing fell in.

So how did it start?

They said, “Look, the crooks got in, but they were only in for four days, and they were only in the development network. So it’s our intellectual property. Oh, dear. Silly us. But don’t worry, we don’t think they got into the customer data.”

Then they came back and said, “They *definitely* didn’t get into the customer data or the password vaults, because those aren’t accessible from the development network.”

Then they said, “W-e-e-e-e-e-l, actually, it turns out that they *were* able to do what’s known in the jargon as ‘lateral movement’. Based on what they stole in incident one, there was incident two, where actually they did get into customer information.”

So, we all thought, “Oh, dear, that’s bad, but at least they haven’t got the password vaults!”

And then they said, “Oh, by the way, when we said ‘customer information’, let us tell you what we mean. We mean a whole lot of stuff about you, like: who you are; where you live; what your phone and email contact details are; stuff like that. *And* [PAUSE] your password vault.”


DOUG.  [GASP] OK?!


DUCK.  And *then* they said, “Oh, when we said ‘vault’,” where you probably imagined a great big door being shut, and a big wheel being turned, and huge bolts coming through, and everything inside locked up…

“Well, in our vault, only *some* of the stuff was actually secured, and the other stuff was effectively in plain text. But don’t worry, it was in a proprietary format.”

So, actually your passwords were encrypted, but the websites and the web services and an unstated list of other stuff that you stored, well, that wasn’t encrypted.

So it’s a special sort of “zero-knowledge”, which is a phrase they’d used a lot.

[LONGISH SILENCE]

[COUGHS FOR ATTENTION] I left a dramatic pause there, Doug.

[LAUGHTER]

And *THEN* it turned out that…

…you know how they’ve been telling everybody, “Don’t worry, there’s 100,100 iterations of HMAC-SHA-256 in PBKDF2”?

Well, *maybe*.


DOUG.  Not for everyone!


DUCK.  If you had first installed the software after 2018, that might be the case.


DOUG.  Well, I first installed the software in 2017, so I was not privy to this “state-of-the-art” encryption.

And I just checked.

I did change my master password, but it’s a setting – you’ve got to go into your Account Settings, and there’s an Advanced Settings button; you click that and then you get to choose the number of times your password is tumbled…

…and mine was still set at 5000.

Between that, and getting the email on the Friday before Christmas, which I read; then clicked through to the blog post; read the blog post…

…and my impression of my reaction is as follows:

[VERY LONG TIRED SIGH]

Just a long sigh.


DUCK.  But probably louder than that in real life…


DOUG.  It just keeps getting worse.

So: I’m out!

I think I’m done…


DUCK.  Really?

OK.


DOUG.  That’s enough.

I had already started transitioning to a different provider, but I don’t even want to say this was “the last straw”.

I mean, there were so many straws, and they just kept breaking. [LAUGHTER]

When you choose a password manager, you have to assume that this is some of the most advanced technology available, and it’s protected better than anything.

And it just doesn’t seem like this was the case.


DUCK.  [IRONIC] But at least they didn’t get my credit card number!

Although I could have got a new credit card in three-and-a-quarter days, probably more quickly than changing all my passwords, including my master password and *every* account in there.


DOUG.  Ab-so-lutely!

OK, so if we have people out there who are LastPass users, if they’re thinking of switching, or if they’re wondering what they can do to shore up their account, I can tell them firsthand…

Go into your account; go to the general settings and then click the Advanced Settings tab, and see what the iteration count is.

You choose it.

So mine was set… my account was so old that it was set at 5000.

I set it to something much higher.

They give you a recommended number; I would go even higher than that.

And then it re-encrypts your whole account.

But like we said, the cat’s out of the bag… if you don’t change all your passwords, and they manage to crack your [old] master password, they’ve got an offline copy of your account.

So just changing your master password and just re-encrypting everything doesn’t do the job completely.


DUCK.  Exactly.

If you go in and your iteration count is still at 5000, that’s the number of times they hash-hash-hash-and-rehash your password before it’s used, in order to slow down password-guessing attacks.

That’s the number of iterations used *on the vault that the crooks now have*.

So even if you change it to 100,100…

…strange number: Naked Security recommends 200,000 [date: October 2022]; OWASP, I believe, recommends something like 310,000, so LastPass saying, “Oh, well, we do a really, really sort of gung-ho, above average 100,100”?

Serious Security: How to store your users’ passwords safely

I would call that somewhere in the middle of the pack – not exactly spectacular.

But changing that now only protects the cracking of your *current* vault, not the one that the crooks have got.


DOUG.  So, to conclude.

Happy New Year, everybody; you’ve got your weekend plans already, so “you’re welcome” there.

And I can’t believe I’m saying this again, but we will keep an eye on this.

Alright, we’ll stay on the cryptography train, and talk about quantum computing.

According to the United States of America, it’s time to get prepared, and the best preparation is…

[DRAMATIC] …cryptographic agility.

US passes the Quantum Computing Cybersecurity Preparedness Act – and why not?


DUCK.  Yes!

This was a fun little story that I wrote up between Christmas and New Year because I thought it was interesting, and apparently so did loads of readers because we’ve had active comments there… quantum computing is the cool thing, isn’t it?

It’s like nuclear fusion, or dark matter, or superstring theory, or gravitons, all that sort of stuff.

Everyone kind-of has an idea of what it’s about, but not many people really understand it.

And the idea of quantum computing, loosely speaking, is a way of constructing a sort-of analog computing device, if you like, that is able to do certain types of calculation in such a way that essentially all the answers appear immediately inside the device.

And the trick you now have is, can you collapse this… what’s called, I believe, a “superposition”, based on quantum mechanics.

Can you collapse it in such a way that what’s revealed is the actual answer that you wanted?

The problem for cryptography is: if you can build a device like this that is powerful enough, then essentially you’re massively parallelising a certain type of computation.

You’re getting all the answers at once.

You’re getting rid of all the wrong ones and extracting the right one instantly.

You can imagine how, for things like cracking passwords, if you could do that… that would be a significant advantage, wouldn’t it?

You reduce a problem that should have a complexity that is, say, two-to-the-power 128 to an equivalent problem that has a complexity on the order of just 128 [the logarithm of the first number].

And so, the fear is not just that today’s cryptographic algorithms might require replacing at some time in the future.

The problem is more like what is now happening with LastPass users.

That stuff we encrypted today, hoping it would remain secure, say, for a couple of years or even a couple of decades…

…during the lifetime of that password, might suddenly become crackable almost in an instant.

So, in other words, we have to make the change *before* we think that these quantum computers might come along, rather than waiting until they appear for the first time.

You’ve got to be ahead in order to stay level, as it were.

It’s not just enough to rest on our laurels.

We have to remain cryptographically agile so that we can adapt to these changes, and if necessary, so we can adapt proactively, well in advance.

And *that* is what I think they meant by cryptographic agility.

Cybersecurity is a journey, not a destination.

And part of that journey is anticipating where you’re going next, not waiting until you get there.


DOUG.  What a segue to our next story!

When it comes to predicting what will happen in 2023, we should remember that history has a funny way of repeating itself…

Naked Security 33 1/3 – Cybersecurity predictions for 2023 and beyond


DUCK.  It does, Doug.

And that is why I had a rather curious headline, where I was thinking, “Hey, wouldn’t it be cool if I could have a headline like ‘Naked Security 33 1/3’?

I couldn’t quite remember why I thought that was funny… and then I remembered it was Frank Drebin… it was ‘Naked *Gun* 33 1/3’. [LAUGHS]

That wasn’t why I wrote it… the 33 1/3 was a little bit of a joke.

It should really have been “just over 34”, but it’s something we’ve spoken about on the podcast at least a couple of times before.

The Internet Worm, in 1988 [“just over 34” years ago], relied on three main what-you-might-call hacking, cracking and malware-spreading techniques.

Poor password choice.

Memory mismanagement (buffer overflows).

And not patching or securing your existing software properly.

The password guessing… it carried around its own dictionary of 400 or so words, and it didn’t have to guess *everybody’s* password, just *somebody’s* password on the system.

The buffer overflow, in this case, was on the stack – those are harder to exploit these days, but memory mismanagement still accounts for a huge number of the bugs that we see, including some zero-days.

And of course, not patching – in this case, it was people who’d installed mail servers that had been compiled for debugging.

When they realised they shouldn’t have done that, they never went back and changed it.

And so, if you’re looking for cybersecurity predictions for 2023, there will be lots of companies out there who will be selling you their fantastic new vision, their fantastic new threats…

…and sadly, all of the new stuff is something that you have to worry about as well.

But the old things haven’t gone away, and if they haven’t gone away in 33 1/3 years, then it is reasonable to expect, unless we get very vigorous about it, as Congress is suggesting we do with quantum computing, that in 16 2/3 years time, we’ll still have those very problems.

So, if you want some simple cybersecurity predictions for 2023, you can go back three decades…


DOUG.  [LAUGHS] Yes!


DUCK.  …and learn from what happened then.

Because, sadly, those who cannot remember history are condemned to repeat it.


DOUG.  Exactly.

Let’s stay with the future here, and talk about machine learning.

But this isn’t really about machine learning, it’s just a good old supply chain attack involving a machine learning toolkit.

PyTorch: Machine Learning toolkit pwned from Christmas to New Year


DUCK.  Now, this was PyTorch – it’s very widely used – and this attack was on users of what’s called the “nightly build”.

In many software projects, you will get a “stable build”, which might get updated once a month, and then you’ll get “nightly builds”, which is the source code as the developers are working on it now.

So you probably don’t want to use it in production, but if you’re a developer, you might have the nightly build along with a stable build, so you can see what’s coming next.

So, what these crooks did is… they found a package that PyTorch depended upon (it’s called torchtriton), and they went to PyPI, the Python Package Index repository, and they created a package with that name.

Now, no such package existed, because it was normally just bundled along with PyTorch.

But thanks to what you could consider a security vulnerability, or certainly a security issue, in the whole dependency-satisfying setup for Python package management…

…when you did the update, the update process would go, “Oh, torchtriton – that’s built into PyTorch. Oh, no, hang on! There’s a version on PyPI, there’s a version on the public Package Index; I’d better get that one instead! That’s probably the real deal, because it’s probably more up to date.”


DOUG.  Ohhhhhhhh….


DUCK.  And it was more “up to date”.

It wasn’t *PyTorch* that ended up infected with malware, it was just that when you did the install process, a malware component was injected into your system that sat and ran there independently of any machine learning you might do.

It was a program with the name triton.

And basically what it did was: it read a whole load of your private data, like the hostname; the contents of various important system files, like /etc/passwd (which on Linux doesn’t actually contain password hashes, fortunately, but it does contain a complete list of users on the system); and your .gitconfig, which, if you’re a developer, probably says a whole lot of stuff about projects that you’re working on.

And most naughtily-and-nastily of all: the contents of your .ssh directory, where, usually, your private keys are stored.

It packaged up all that data and it sent it out, Doug, as a series of DNS requests.

So this is Log4J all over again.

You remember Log4J attackers were doing this?

Log4Shell explained – how it works, why you need to know, and how to fix it


DOUG.  Yes.


DUCK.  They were going, “I’m not going to bother using LDAP and JNDI, and all those .class files, and all that complexity. That’ll get noticed. I’m not going to try and do any remote code execution… I’m just going to do an innocent-looking DNS lookup, which most servers will allow. I’m not downloading files or installing anything. I’m just converting a name into an IP number. How harmful could that be?”

Well, the answer is that if I’m the crook, and I am running a domain, then I get to choose which DNS server tells you about that domain.

So if I look up, against my domain, a “server” (I’m using air-quotes) called SOMEGREATBIGSECRETWORD dot MYDOMAIN dot EXAMPLE, then that text string about the SECRETWORD gets sent in the request.

So it is a really, really annoyingly effective way of stealing (or, to use the militaristic jargon that cybersecurity likes, exfiltrating) private data from your network, in a way that many networks don’t filter.
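The general encoding trick can be sketched as follows. This is a hedged, defensive illustration (the domain name and payload are made up, and the real malware's encoding differed): secret bytes are hex-encoded and smuggled out as labels in an ordinary-looking DNS lookup for a domain the attacker controls.

```python
# Hedged sketch of DNS-based exfiltration encoding (defensive illustration;
# domain and payload are made up): secret bytes become hostname labels, so
# the data travels inside an innocent-looking DNS lookup.
import binascii

def encode_as_dns_name(secret: bytes, domain: str = "mydomain.example") -> str:
    hexed = binascii.hexlify(secret).decode("ascii")
    # Each DNS label is limited to 63 bytes, so chunk long payloads.
    chunks = [hexed[i:i + 60] for i in range(0, len(hexed), 60)]
    return ".".join(chunks + [domain])

print(encode_as_dns_name(b"alice"))  # 616c696365.mydomain.example
```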

And much worse, Doug: that data was encrypted (using 256-bit AES, no less), so the string-that-actually-wasn’t-a-server-name, but was actually secret data, like your private key…

…that was encrypted, so that if you were just looking through your logs, you wouldn’t see obvious things like, “Hey, what are all those usernames doing in my logs? That’s weird!”

You’d just see crazy, weird text strings that looked like nothing much at all.

So you can’t go searching for strings that might have escaped.

However: [PAUSE] hard-coded key and initialisation vector, Doug!

Therefore, anybody on your network path who logged it could, if they had evil intentions, go and decrypt that data later.

There was nothing involving a secret known only to the crooks.

The password you use to decrypt the stolen data, wherever it lives in the world, is buried in the malware – it’s five minutes’ work to go and recover it.

The crooks who did this are now saying, [MOCK HUMILITY] “Oh, no, it was only research. Honest!”

Yeah, right.

You wanted to “prove” (even bigger air-quotes than before) that supply chain attacks are an issue.

So you “proved” (even bigger air-quotes than the ones I just used) that by stealing people’s private keys.

And you chose to do it in a way that anybody else who got hold of that data, by fair means or foul, now or later, doesn’t even have to crack the master password like they do with LastPass.


DOUG.  Wow.


DUCK.  Apparently, these crooks, they’ve even said, “Oh, don’t worry, like, honestly, we deleted all the data.”

Well…

A) I don’t believe you. Why should I?


DOUG.  [LAUGHS]


DUCK.  And B) [CROSS] TOO. LATE. BUDDY.


DOUG.  So where do things stand now?

Everything’s back to normal?

What do you do?


DUCK.  Well, the good news is that if none of your developers installed this nightly build, basically between Christmas and New Year 2022 (the exact times are in the article), then you should be fine.

Because that was the only period that this malicious torchtriton package was on the PyPI repository.

The other thing is that, as far as we can tell, only a Linux binary was provided.

So, if you’re working on Windows and you don’t have the Windows Subsystem for Linux (WSL) installed, then I’m assuming this thing would just be so much harmless binary garbage to you.

It’s an ELF binary, not a PE binary, to use the technical terms, so it wouldn’t run.

And there are also a bunch of things that, if you’re worried, you can go and check for in the logs.

If you’ve got DNS logs, then the crooks used a specific domain name.

The reason that the thing suddenly became a non-issue (I think it was on 30 December 2022) is that PyTorch did the right thing…

…I imagine in conjunction with the Python Package Index, they kicked out the rogue package and replaced it essentially with a “dud” torchtriton package that doesn’t do anything.

It just exists to say, “This is not the real torchtriton package”, and it tells you where to get the real one, which is from PyTorch itself.

And this means that if you do download this thing, you don’t get anything, let alone malware.

We’ve got some Indicators of Compromise [IoCs] in the Naked Security article.

We have an analysis of the cryptographic part of the malware, so you can understand what might have got stolen.

And sadly, Doug, if you are in doubt, or if you think you might have got hit, then it would be a good idea, as painful as it’s going to be… you know what I’m going to say.

It’s exactly what you had to do with all your LastPass stuff.

Go and regenerate new private keys, or key pairs, for your SSH logins.

Because the problem is that what lots of developers do… instead of using password-based login, they use public/private key-pair login.

You generate a key pair, you put the public key on the server you want to connect to, and you keep the private key yourself.

And then, when you want to log in, instead of putting in a password that has to travel across the network (even though it might be encrypted along the way), you decrypt your private key locally in memory, and you use it to sign a message to prove to the server that you’ve got the matching private key… and it lets you in.
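The shape of that login flow can be sketched as a toy challenge-response exchange. Real SSH uses public-key signatures (e.g. Ed25519); the HMAC with a single shared credential below is a stand-in purely to keep the example runnable with the standard library. The point is the shape: a fresh random challenge gets signed, so no reusable password ever crosses the network.

```python
# Toy challenge-response sketch. Real SSH public-key auth uses asymmetric
# signatures; HMAC over a shared credential is a stand-in here so the
# example needs only the standard library.
import hashlib, hmac, os

credential = os.urandom(32)   # stands in for the key material

challenge = os.urandom(16)    # server picks a fresh random challenge
proof = hmac.new(credential, challenge, hashlib.sha256).digest()  # client "signs"

# Server checks the proof against its copy of the credential.
expected = hmac.new(credential, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(proof, expected))  # True
```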

The problem is that, if you’re a developer, a lot of the time you want your programs and your scripts to be able to do that private-key based login, so a lot of developers will have private keys that are stored unencrypted.


DOUG.  OK.

Well, I hesitate to say this, but we will keep an eye on this!

And we do have an interesting comment from an anonymous reader on this story who asks in part:

“Would it be possible to poison the crooks’ data cache with useless data, SSH keys, and executables that expose or infect them if they’re dumb enough to run them? Basically, to bury the real exfiltrated data behind a ton of crap they have to filter through?”


DUCK.  Honeypots, or fake databases, *are* a real thing.

They’re a very useful tool, both in cybersecurity research… letting the crooks think they’re into a real site, so they don’t just go, “Oh, that’s a cybersecurity company; I’m giving up”, and don’t actually try the tricks that you want them to reveal to you.

And also useful for law enforcement, obviously.

The issue is, if you wish to do it yourself, just make sure that you don’t go beyond what is legally OK for you.

Law enforcement might be able to get a warrant to hack back…

…but where the commenter said, “Hey, why don’t I just try and infect them in return?”

The problem is, if you do that… well, you might get a lot of sympathy, but in most countries, you would nevertheless almost certainly be breaking the law.

So, make sure that your response is proportionate, useful and most importantly, legal.

Because there’s no point in just trying to mess with the crooks and ending up in hot water yourself.

That would be an irony that you could well do without!


DOUG.  Alright, very good.

Thank you very much for sending that in, dear Anonymous Reader.

If you have an interesting story, comment, or question you’d like to submit, we’d love to read it on the podcast.

You can email [email protected], you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today.

Thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth reminding you, until next time, to…


BOTH.  Stay Secure!

[MUSICAL MODEM]


Naked Security 33 1/3 – Cybersecurity predictions for 2023 and beyond

It’s the last regular working weekday of 2022 (in the UK and the US, at least), in the unsurprisingly relaxed and vacationistic gap between Christmas and New Year…

…so you were probably expecting us to come up either with a Coolest Stories Of The Year In Review listicle, or with a What You Simply Must Know About Next Year (Based On The Coolest Stories Of The Year) thinly-disguised-as-not-a-listicle listicle.

After all, even technical writers like to glide into holiday mode at this time of year (or so we have been told), and nothing is quite as relaxed and vacationistic as putting old wine into new skins, mixing a few metaphors, and gilding a couple of lilies.

So we decided to do something almost, but not quite, entirely unlike that.

Those who cannot remember history…

We are, indeed, going to look forward by gazing back, but – as you might have guessed from the headline – we’re going to go further back than New Year’s Day 2022.

In truth, that mention of 33 1/3 is neither strictly accurate nor specifically a tribute to the late Lieutenant-Sergeant Frank Drebin, because that headline number should, by rights, have been somewhere between 34.16 and 34.19, depending on how you fractionalise years.

We’d better explain.

Our historical reference here goes back to 1988-11-02, which, as anyone who has studied the early history of computer viruses and other malware will know, was the day that the dramatic Internet Worm kicked off.

This infamous computer virus was written by one Robert Morris, then a student at Cornell, whose father, who also just happened to be called Robert Morris, was a cryptographer at the US National Security Agency (NSA).

You can only imagine the watercooler gossip at the NSA on the day after the worm broke out.

In case you’re wondering what the legal system thought of malware back then, and whether releasing computer viruses into the wild has ever been considered helpful, ethical, useful, thoughtful or lawful… Morris Jr. ended up on probation for three years, doing 400 hours of community service, and paying a fine of just over $10,000 – apparently the first person in the US convicted under the Computer Fraud and Abuse Act.

The Morris Worm is therefore within a year of 33 1/3 years old…

…and so, because 34.1836 common years is close enough to 33 1/3, and because we rather like the number 33 1/3, apparently a marketing-friendly choice of rotational speed for long-playing gramophone records nearly a century ago, that is the number we chose to sneak into the headline.

Not 33, not 34, and not the acutely factorisable and computer-friendly 32, but 33 1/3 = 100/3.

That’s a delightfully simple and precise rational fraction that, annoyingly, has no exact representation either in decimal or in binary. (1/3 = 0.333…₁₀ = 0.010101…₂)
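That representation gap is easy to see in Python, where exact rational arithmetic (via the standard library's `fractions` module) handles 1/3 perfectly while binary floating point can only approximate it:

```python
# 1/3 cannot be stored exactly as a binary float, though exact rational
# arithmetic handles it fine.
from fractions import Fraction

third = 1 / 3
print(third)                    # 0.3333333333333333 -- an approximation
print(Fraction(third))          # the exact binary value actually stored
print(Fraction(1, 3) * 3 == 1)  # True: rationals keep 1/3 exact
```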

Predicting the future

But we’re not really here to learn about the frustrations of floating point arithmetic, or that there are unexceptionable, human-friendly numbers that your computer’s CPUs can’t directly represent.

We said we’d make some cybersecurity predictions, so here goes.

We’re going to predict that in 2023 we will, collectively, continue to suffer from the same sort of cybersecurity trouble that was shouted from the rooftops more than 100010.010101…₂ years ago by that alarming, fast-spreading Morris Worm.

Morris’s worm had three primary self-replication mechanisms that relied on three common coding and system administration blunders.

You might not be surprised to find out that they can be briefly summarised as follows:

  • Memory mismanagement. Morris exploited a buffer overflow vulnerability in a popular-at-the-time system network service, and achieved RCE (remote code execution).
  • Poor password choice. Morris used a so-called dictionary attack to guess likely login passwords. He didn’t need to guess everyone’s password – just cracking someone’s would do.
  • Unpatched systems. Morris probed for email servers that had been set up insecurely, but never subsequently updated to remove the dangerous remote code execution hole he abused.
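The second of those blunders, the dictionary attack, can be sketched in a few lines. This is a hedged illustration with a modern hash and a made-up wordlist (the 1988 worm actually targeted Unix crypt() hashes, not SHA-256): hash each candidate word and compare it with the stored hash.

```python
# Hedged sketch of a dictionary attack (modern hash, made-up wordlist;
# the Morris Worm targeted Unix crypt() hashes): hash each candidate
# word and compare it against the stored password hash.
import hashlib

stored_hash = hashlib.sha256(b"dragon").hexdigest()  # one user's weak password
wordlist = ["password", "letmein", "dragon", "qwerty"]

cracked = next(
    (guess for guess in wordlist
     if hashlib.sha256(guess.encode()).hexdigest() == stored_hash),
    None,
)
print(cracked)  # dragon
```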

Sound familiar?

What we can infer from this is that we don’t need a slew of new cybersecurity predictions for 2023 in order to have a really good idea of where to start.

In other words: we mustn’t lose sight of the basics in a scramble to sort out only specific and shiny new security issues.

Sadly, those shiny new issues are important, too, but we’re also still stuck with the cybersecurity sins of the past, and we probably will be for at least another 16 2/3 years, or even longer.

What to do?

The good news is that we’re getting better and better at dealing with many of those old-school problems.

For example, we’re learning to use safer programming practices and safer programming languages, as well as to cocoon our running code in better behaviour-blocking sandboxes to make buffer overflows harder to exploit.

We’re learning to use password managers (though they have brought intriguing issues of their own) and alternative identity verification technologies, as well as, or instead of, relying on simple words that we hope no one will predict or guess.

And we’re not just getting patches faster from vendors (responsible ones, at least – the joke that the S in IoT stands for Security still seems to have plenty of life in it yet), but also showing ourselves willing to apply patches and updates more quickly.

We’re also embracing TLAs such as XDR and MDR (extended and managed detection and response respectively) more vigorously, meaning that we’re accepting that dealing with cyberattacks isn’t just about finding malware and removing it as needed.

These days, we’re much more inclined than we were a few years ago to invest time not only in looking out for known bad stuff that needs fixing, but also in ensuring that the good stuff that’s supposed to be there actually is there, and that it’s still doing something useful.

We’re also taking more time to seek out potentially bad stuff proactively, instead of waiting until the proverbial alerts pop automatically into our cybersecurity dashboards.

For a fantastic overview both of cybercrime prevention and incident response, why not listen to our latest holiday season podcasts, where our experts liberally share both their knowledge and their advice:

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.


Thanks for your support of the Naked Security community in 2022, and please accept our best wishes for a malware-free 2023!