Windows Password Hints: Big Deal?

Jonathan Claudius and Ryan Reynolds, white-hats in the security game, have discovered a registry key in Windows 7 and Windows 8 that contains password hints, the little reminders that pop up when you try to log in and make a mistake.  The hint is supposed to be an additional cue to a user’s recall, one that is meaningful to the user but not to anyone else.

There’s some debate about the significance of this discovery.  On one hand, the hint is freely available to anyone trying to log in to an account, so it’s not meant to be privileged knowledge.  Anyone can try to log in to any account, mash the keyboard to make a random guess at the password, and get that account’s hint in return.  On the other hand, the hint is crafted by the user with the goal of helping themselves remember something.  The amount and kind of information a user can put into a hint has the potential to make guessing much, much easier, and not just for the user.

Claudius and Reynolds’ discovery shows us that anyone or any program with access to the registry can read the hint.  How is this beneficial for an attacker?

For one, there’s no actual failed login attempt needed to get the hint.  If I walk up to a machine, click on someone’s account name and type in a random password to get the hint, the login attempt can be logged.  In the aftermath of an attack, that log entry can tie me to a particular place or network connection at a particular time.  Even if there hasn’t been an attack, it’s a possible sign that someone other than the intended user has probed the system.

The fact that the hint is stored essentially in plaintext is understandable given the way Microsoft intends it to work (it’s given out prior to authentication, so why bother encrypting it for storage?), but it’s a bad, bad, bad, bad, bad idea for security purposes.  A user could practically spell out their entire password in the plaintext hint and kneecap the security model altogether.
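
To make the “essentially plaintext” point concrete, here’s a rough sketch of the kind of read an attacker (or a forensic tool) could make.  The RID, the key path, and the value name are my best understanding of the researchers’ write-up rather than something I’ve verified on every Windows build, and reading the SAM hive normally requires SYSTEM-level rights, so treat this as an illustration, not a working exploit:

```python
import winreg

# Hypothetical RID for the target account; real code would enumerate the
# subkeys of HKLM\SAM\SAM\Domains\Account\Users to find it.
RID = "000003E8"

# Key path and value name as described in the researchers' write-up; reading
# the SAM hive normally requires SYSTEM-level privileges.
key_path = r"SAM\SAM\Domains\Account\Users" + "\\" + RID

key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path)
raw_hint, value_type = winreg.QueryValueEx(key, "UserPasswordHint")
winreg.CloseKey(key)

# The value is just the UTF-16-LE bytes of whatever the user typed:
# "encoded", but trivially reversible, i.e. effectively plaintext.
print(raw_hint.decode("utf-16-le").rstrip("\x00"))
```

The point isn’t the specific path; it’s that a single registry read and a one-line decode stand between a registry-level attacker and whatever the user typed into that hint box.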

How can this be fixed?  Well, ask the user to remember something before you give them the hint.  Show them a set of images, one of which was pre-selected by the user, and ask them to click their special image to reveal the hint.  Only after this soft authentication would the hint be decrypted and revealed.  The image would (or should) have absolutely no relationship to the password or to the hint.  This would be a small barrier for an attacker, but even a small barrier could profoundly reduce the incidence and success of casual or automated attacks.  The additional cognitive burden on the user should be very small, certainly much smaller than that of a password.
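
Here’s a toy sketch of the idea in Python, not anything Windows actually ships; the image names, the Fernet encryption, and the storage layout are all stand-ins of my own choosing, just to show the flow:

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

# Stand-ins for the thumbnails the login screen would show.
IMAGES = ["cat", "boat", "lamp", "tree", "drum", "kite"]

def enroll(hint: str, chosen_image: str) -> dict:
    """At account setup, the user picks one image and the hint is stored encrypted."""
    key = Fernet.generate_key()  # in a real system this key would be held by the OS
    return {
        "chosen": chosen_image,
        "key": key,
        "hint_token": Fernet(key).encrypt(hint.encode()),
    }

def request_hint(record: dict, clicked_image: str):
    """Soft authentication: decrypt and reveal the hint only after the right click."""
    if clicked_image != record["chosen"]:
        return None  # wrong image: no hint (and maybe log the attempt)
    return Fernet(record["key"]).decrypt(record["hint_token"]).decode()

record = enroll("the street I grew up on", "boat")
print(request_hint(record, "cat"))   # None
print(request_hint(record, "boat"))  # the street I grew up on
```

In a real implementation the key would be held by the OS rather than stored next to the ciphertext as it is here, but the gating idea is the same: wrong image, no hint.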

Some have called this discovery a non-issue, and in many ways this changes nothing about the Windows security model, but it does highlight some bad characteristics of the model itself.

Cliff’s Notes for Password Vulnerability

This article at Ars is a great introduction to the current state of password strength/vulnerability.

The gist is that password reuse is steadily increasing, brute-force and hash-cracking costs are plummeting, and password composition is pretty much as bad as it always was.  None of those trends is a big surprise, since they’ve held for at least the past 20 years, but it’s still disconcerting.

The article gives examples of several attack methods, including the trusty old dictionary attack (and not just Webster’s).

What I found really interesting was the focus on pattern analysis as a tool for reducing search space.  The idea is that you can use some piece of information about a site’s user population or about the site itself to predict patterns in user-generated passwords.  A site that requires one uppercase letter, one number, and one special character will have a different password pattern distribution than a site that requires a minimum of 10 letters with no common words.  Using this kind of a priori information isn’t the cool part, though…

In an attack on a large set of user-generated passwords, there will always be a large percentage that falls to easy patterns and simple dictionaries.  The cool part comes from using analysis of these broken passwords to inform your attack on the ones that didn’t break.  Say, for example, 10% of a password population was broken through an easy attack.  Just because that 10% was easily cracked doesn’t mean it’s wholly dissimilar from the 90% that wasn’t.  Both populations may share common patterns and differ only in length or in the size of their character set.  If we assume that the patterns found in the easy set also describe the uncracked set, we can focus attack resources on those patterns.
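
A toy version of that analysis might look like the sketch below.  It’s my own illustration of the idea, not how any particular cracking tool implements it: reduce each cracked password to a character-class mask, count the masks, then aim the remaining budget at candidates that fit the most popular ones.

```python
from collections import Counter

def mask(password: str) -> str:
    """Reduce a password to a character-class pattern:
    'u' = uppercase, 'l' = lowercase, 'd' = digit, 's' = everything else."""
    classes = []
    for ch in password:
        if ch.isupper():
            classes.append("u")
        elif ch.islower():
            classes.append("l")
        elif ch.isdigit():
            classes.append("d")
        else:
            classes.append("s")
    return "".join(classes)

# Passwords recovered in the cheap first pass (made-up examples).
cracked = ["Password1!", "Summer12!", "Monkey99!", "letmein", "Dragon77!"]

# Rank the patterns the easy set falls into...
pattern_counts = Counter(mask(p) for p in cracked)

# ...then spend the remaining budget on candidates that fit the top masks,
# betting that the uncracked majority leans on the same habits, just longer
# or with a slightly bigger character set.
for pattern, count in pattern_counts.most_common(3):
    print(pattern, count)
```

The example passwords are made up, but you can see how a mask like “capital letter, lowercase body, a couple of digits, one symbol at the end” would bubble to the top and shape the next round of guessing.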

Imagine playing a game of Battleship where the board is very large and you can use as many ships as you want.  Play against a large number of opponents.  You win some, you lose some.  Look at the games you won.  You can see patterns emerge in people’s tactics: how close together they place their ships, whether they clump them in groups or line them end to end, and so on.  The game board may be very large, but you have reasonable limits on your time to play… maybe you only want to spend 10 minutes on any one game, but you want to win as many games as possible.  If you know that the most popular tactic among your opponents is to line their ships end to end in a long line, you will try to find a ship, then continue attacking in a line until you sink all the ships.  If this attack is generally successful in 10-minute games, you may suppose that it will work even if you extend the play to 1 hour.  The same patterns may have been in use in the games you lost; perhaps there were simply too many ships to sink them all in 10 minutes.  By finding the common pattern(s) in the short games, you’ve increased your chances of winning longer games without having to play many, many long games to discover the pattern.

Your computational resources are finite, just like the amount of time you have to spend playing games of Battleship.  If you want to get rich hustling the underground Battleship circuit (hey, it could be a real thing), you want to win as many games as you can in a set amount of time.  If you want to be a 5up3r h4ck3r, you want to crack as many passwords as you can with a set amount of computational resources.

Okay, the analogy isn’t perfect, but you get the idea.


Microsoft Thinks Two-Factor Authentication Isn’t Important

Reps for Microsoft’s new Outlook.com service suggest that strong passwords and vague statements about R&D are enough to protect their users.

Mashable questioned Microsoft about Outlook.com’s security and was told that, unlike Google, Microsoft will not be implementing two-factor authentication.  Google doesn’t require two-factor authentication, but it does offer it to users on an opt-in basis.  Microsoft’s decree that they won’t even offer it as an opt-in is disappointing to say the least, and will very likely come back to haunt them in the coming months and years.

The tone of the MS rep’s comments gives the impression that two-factor auth is a sort of anachronism or secret handshake, something only a Spock-eared nerd or a snobby IT elitist would encumber himself with.  Whether they take issue with the two-factor concept in general or with Google’s implementation specifically is unclear.

The rep’s case boils down to two propositions:

  1. (Google’s?) Two-factor auth creates a bad user experience.
  2. Strong passwords and unspecified future schemes are secure enough.

Let’s examine these in more detail.

Google’s two-factor scheme works like this:

  • Joe Blow has a Google account which stores email, browser passwords, documents, and other private stuff
  • Joe tells Google that his password will be “JoeIsCool”, that his cell phone number is 867-555-5309, and that he wants to use two-factor authentication
  • If Joe accesses his Google account from his home computer or other trusted machine, he may be asked to enter his password, “JoeIsCool”, once a day or every other day; Joe enters the password and Google lets him in
  • If Joe wants to use his Google account from an internet cafe or untrusted computer, Google will ask him to enter his password, but it will also send a code to his cell phone; he must enter both tokens (password and one-time code) before Google will let him in
  • If Hacker Henry discovers Joe’s password, he isn’t able to break into Joe’s account, since he doesn’t have Joe’s phone and thus can’t receive the one-time code; what’s more, Joe is now alerted that someone is trying to break in (a simplified sketch of this flow follows the list)
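
Strip away the Google-specific parts and the mechanics are simple.  The sketch below is my own simplified version of an SMS second factor; the six-digit code, the five-minute expiry, and the send_sms placeholder are my choices, not details of Google’s implementation:

```python
import secrets
import time

CODE_TTL = 300   # seconds a one-time code stays valid (my choice, not Google's)
pending = {}     # user -> (code, expiry); a real service would persist this securely

def send_sms(user: str, code: str) -> None:
    """Placeholder for a carrier/SMS gateway."""
    print(f"[sms to {user}'s phone] your code is {code}")

def start_login(user: str, password_ok: bool, trusted_device: bool) -> bool:
    """First factor: the password.  On an untrusted device, also send a one-time code."""
    if not password_ok:
        return False
    if trusted_device:
        return True                        # password alone is enough here
    code = f"{secrets.randbelow(10**6):06d}"
    pending[user] = (code, time.time() + CODE_TTL)
    send_sms(user, code)
    return False                           # not in yet; waiting on the second factor

def finish_login(user: str, submitted: str) -> bool:
    """Second factor: the code that only Joe's phone received."""
    code, expiry = pending.pop(user, (None, 0.0))
    return submitted == code and time.time() < expiry

# Joe at an internet cafe: right password, but he isn't in until the code comes back.
start_login("joe", password_ok=True, trusted_device=False)
```

Joe never has to remember the code; the system hands it to him and asks for it right back.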

To me, the biggest usability barrier in this scheme is getting the initial user buy-in.  The user needs to know that the option exists, that it is important, that it is good, and that it is easy.  If the user opts in, use of the two-factor scheme is fairly straightforward: Joe attempts to log in from an untrusted computer, he enters his password correctly, he receives a text with a number in it, he enters the number, he’s in.

Think about it this way: he only needs to remember his password.  Whether he uses two-factor or not, he only needs to remember his password.  If he uses two-factor, the system itself gives him a token and asks for it back, so there is no additional memory burden.  Compare this to the most common method of enhanced authentication: testing your memory.  When I log on to my bank’s website from an untrusted computer, the site asks for the usual stuff and then tests my memory about certain things.  What is your favourite movie?  What was your first-grade teacher’s name?  What is your pet’s name?  These seem like simple questions to answer, but there be dragons here.  You may remember the facts just fine, but you must also reproduce the exact answer you gave when you created the account.  This gets into issues of case-sensitive tokens, ambiguous questions that can have several perfectly sensible but incorrect answers, and so on.
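
For what it’s worth, some of that case-sensitivity pain is avoidable on the site’s end.  Here’s a small sketch of normalizing answers before hashing them; it’s a generic illustration, not what my bank (or anyone else) actually does, and it can’t rescue a genuinely ambiguous question:

```python
import hashlib
import unicodedata

def normalize(answer: str) -> str:
    """Collapse case, accents, punctuation, and extra whitespace so that
    'Mrs. Smith', 'mrs smith', and ' MRS  SMITH ' all compare equal."""
    decomposed = unicodedata.normalize("NFKD", answer)
    kept = "".join(ch for ch in decomposed if ch.isalnum() or ch.isspace())
    return " ".join(kept.lower().split())

def store(answer: str) -> str:
    return hashlib.sha256(normalize(answer).encode()).hexdigest()

def check(stored: str, attempt: str) -> bool:
    return stored == hashlib.sha256(normalize(attempt).encode()).hexdigest()

h = store("Mrs. Smith")
print(check(h, "mrs smith"))  # True
print(check(h, "Smith"))      # False; still hostage to how the answer was originally worded
```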

How is that not a giant usability minefield?  Google prompts you with the exact token it wants, zero memory burden on the user (aside from the password, obviously) each and every time it asks you to authenticate.  A memory test, by comparison, is always a memory test; each time you are asked to remember something, you have to perform a search in your memory or in your little black book of things you can’t be bothered to actually remember.

One potential pitfall of Google’s two-factor is that the user must have access to his or her phone during authentication.  In 2012, I don’t see this as an unreasonable requirement, but there’s always that one day when you’ve lost your phone or its battery dies right when you need to log in.

So, what authentication scheme will Microsoft employ that is plenty secure and user friendly?  They aren’t saying.  The rep assures us that they’re pouring money and effort into R&D on this matter, but I can’t see them inventing a totally new scheme that satisfies ease of use and strong security.  I’m expecting a reliance on strong passwords (on pain of death, little user!), the usual memory tests, and perhaps something gimmicky tacked on… Something graphical?

Keep watching the skies…