No; iMessage isn’t intercept-proof.

*** (April 5, 2013) Update: TechDirt has a nice post about the whole affair. They summarize the counterarguments against the DEA memo and the original CNET story, and they line up quite nicely with mine 🙂 They also include snippets from Julian Sanchez that offer more details and some possible motives for this whole exercise. Woot!

Argh. This story is traveling around the OMGosphere. A DEA office sent an internal notice among its agents and investigators. The notice was meant to warn them about the inability of pen registers and trap and trace devices to log Apple iMessages. The devices in question work like the call list on your phone; every call you make and every call you receive are logged. Extend that idea to include SMS messages (mobile texts) and you get the idea. It’s a form of wiretapping, but it doesn’t necessarily include logging the content of the communication.

The DEA uses these devices to record evidence of contact and communication between suspects. If they’re logging the phone calls made and received by gang members, the record of their intercommunication history could be used in court to show collusion in criminal activity, for example. RICO Act type of stuff.

Most of this equipment is installed and maintained by the phone companies to meet their legal disclosure requirements; when an agency comes knocking and asks for a full bidirectional record of calls for a certain phone number, the company is required to produce it.

The DEA warning was issued because agents discovered that the communication records they received weren’t always complete. The missing events were iMessages sent between two Apple devices; two iPhones, an iPhone and an iPad, two iToilets, etc.

So, that means that Apple iMessages have unbreakable encryption and are so amazingly great that EVEN THE DEA CAN’T TRACK THEM! Right?



Internet, there are times when I want to hit you with an atomic hockey stick.

DEA foiled again!

Why are SMS messages logged while iMessages are not? A few reasons that have nothing to do with super Apple encryption framice plates.

1. SMS messages are handled by the phone company network. The capability to transport text messages between mobile phones is built right into the specifications of the mobile phone networks. When you send a mobile text message, the message protocol includes source and destination headers telling the tower where the message originated and who it’s for. The logging equipment at the phone company can simply take those headers and add them to the record.

2. iMessage is not a standard adopted by the mobile phone industry. Apple handles the routing of iMessages. When you send an iMessage from your iPhone — assuming you send it via mobile data and not Wifi — the cell tower treats it like a bunch of ordinary data packets; you might as well be uploading a photo or streaming some music. The packets will have source and destination headers of their own, but only to move the packets to an Apple server. The actual source and destination of the iMessage will be carried inside the data packets’ content, not exposed as cleartext metadata the way an SMS message carries its headers.

3. Pen registers and traps aren’t psychic. There are people in the world who think that a virus scanner is capable of identifying any kind of virus. Surprisingly, the scanner is not an oracle; it’s just pattern matching to a list of known patterns. Have you ever been bothered by anti-virus software begging you to update your virus definitions? The software needs to have the latest set of known virus patterns (or signatures) so that it can detect known threats. If the definitions haven’t been updated in 2 years, there are lots of new virussessesesesssii the scanner will miss. The wiretaps can work in a similar fashion. They can sit in the network and look for SMS-shaped things, voice call-shaped things, etc. They have been told how to identify those events; they don’t get a tingling spidey-sense when an SMS is nearby. It’s entirely possible that the wiretap equipment could be given an update allowing it to identify the signature of an iMessage, if not the ability to decode it. Depending on the iMessage spec, messages may have a structure that is observable even when encrypted; messages may have a specific preamble; all packets heading to a set of identified iMessage servers could be flagged, etc.

4. It is almost certain that Apple IS maintaining a log of iMessages in order to comply with legal requirements. If so ordered, they would be required by law to produce activity logs for individual iMessage accounts. In this case, the DEA agents simply weren’t aware that iMessage traffic wouldn’t be logged by the phone company. This wasn’t a triumph of Apple tech against evil government privacy violations. This was a case of temporary ignorance of modern communications tech.
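The contrast in points 1 and 2 can be sketched in a few lines of Python. Everything here is illustrative: the packet fields are stand-ins for real network schemas, and the address range is a placeholder for "known Apple servers", not a claim about actual tap equipment.

```python
# Toy sketch: an SMS envelope exposes source and destination in
# cleartext, so the tap logs them directly; an iMessage is just
# encrypted data bound for an Apple server, so the best the tap can do
# is flag it by destination (a signature that must be kept up to date,
# like virus definitions).
from ipaddress import ip_address, ip_network

APPLE_RANGES = [ip_network("17.0.0.0/8")]  # placeholder signature list

def log_event(packet, log):
    if packet.get("type") == "sms":
        # Metadata is right there in the envelope; no decryption needed.
        log.append({"from": packet["source"], "to": packet["destination"]})
    elif any(ip_address(packet["dst_ip"]) in net for net in APPLE_RANGES):
        # Flaggable as probable iMessage traffic, but the real source
        # and destination are inside the encrypted payload.
        log.append({"flag": "possible-imessage", "dst_ip": packet["dst_ip"]})

log = []
log_event({"type": "sms", "source": "555-0100", "destination": "555-0199"}, log)
log_event({"type": "data", "dst_ip": "17.172.34.30", "payload": b"..."}, log)
assert log[0]["to"] == "555-0199"
assert log[1]["flag"] == "possible-imessage"
```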

Thus endeth the lesson.


Google Play says your username and password don’t match?

UX designers and coders take note: nothing will frustrate your users more than being asked for login credentials and being told that they’re wrong.

This is especially true when the user (me) is trying to enter a long alphanumeric password on a tablet with a stylus. Every time the user sees “username and password don’t match”, they will naturally assume that they’ve hit an extra key or capitalized something accidentally, and will grumble to themselves as they try again. Things get even more fun when the password field is masked with stars to prevent shoulder surfing.

It’s pretty easy to humble your user this way. So easy, in fact, that you should spend time analyzing the user’s task to see if you’re asking them the right questions and giving them enough help…

Case in point: the Google Play Store. I have a very low cost (cheap) tablet on which I managed to load the Google Play packages. When asked to log in to my Google account, I received the very helpful response “username and password do not match”. I attempted to log in several times with my normal credentials and failed every time. There were any number of reasons for this to have failed (including the fact that my tablet was unsupported, ahem), but the real reason was ridiculous:

I use Google’s two-factor authentication.

Logging in to Google from a new computer usually means entering my username, password, and then a 6-digit number that is sent to my cellphone over SMS. If I enter the user/pass incorrectly, the error would be “username and password do not match.” If I enter the 6-digit number incorrectly, the error would be something like “incorrect PIN.” This is a straightforward proposition: enter your Google username, your Google password, and the PIN that Google sends to you; if you get something wrong, you entered the user/pass incorrectly, or you mistyped the PIN.

Google Play’s device login, however, doesn’t mention anything about PINs or two-factor authentication. A naive user, like myself, assumes that he must enter his normal Google username and his normal Google password. But that’s wrong. Normal username, yes, but you must enter your “application-specific password”.

What’s that? Rather than implementing the SMS PIN step, Google lets you create a sort of special password that you only use on mobile devices or desktop apps. There are many good reasons for doing this: it’s extra security against rogue apps or compromised devices (not exposing your main Google credentials), it saves developers using Google APIs from having to rework their products, and the application-specific password is made only of lower-case letters so that mobile users won’t have to fiddle with entering special characters.
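As a back-of-the-envelope check on that last point, lowercase-only doesn’t mean weak if the password is long and random: 16 random lowercase letters give 26^16 (roughly 4 × 10^22) possibilities. A minimal sketch of such a generator, as an illustration only and not Google’s actual scheme:

```python
import secrets
import string

def app_specific_password(length=16):
    """Random lowercase-only password: easy to type on mobile, still strong."""
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(length))

pw = app_specific_password()
assert len(pw) == 16 and pw.isalpha() and pw.islower()
```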

Good reasons, all of them. But it all falls apart at the user interface. Users are dependent on the UX designer to give them the information they need for the task. Failing to mention that “password” could mean “application-specific password” is a big omission. Google’s support site does mention the issue, and users of two-factor authentication are told in advance to expect this behaviour, but that doesn’t cut the mustard.

Now, back to my under-powered plastic tablet and its slight violations of terms of service…

Windows Password Hints: Big Deal?

Jonathan Claudius and Ryan Reynolds, white-hats in the security game, have discovered a registry key in Windows 7 and Windows 8 that contains password hints, the little reminders that pop up when you try to log in and make a mistake. The hint is supposed to be an additional cue to a user’s recall, one that is meaningful to the user but not to anyone else.

There’s some debate about the significance of this discovery. On one hand, the hint is freely available to anyone trying to log in to an account, so it’s not meant to be privileged knowledge. Anyone can try to log in to any account, mash the keyboard to make a random guess at the password, and get that account’s hint in return. On the other hand, the hint is crafted by the user with the goal of helping him/herself to remember something. The amount and kind of information they can put into a hint has the potential to make guessing much, much easier, and not just for the user.

Claudius and Reynolds’ discovery shows us that anyone or any program with access to the registry can read the hint. ¬†How is this beneficial for an attacker?
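As a concrete illustration: the hint is reportedly stored in the SAM hive as a value holding the text in hex-encoded UTF-16LE, so "reading" it is trivial once you can read the registry. The sample bytes below are made up for the example; pulling the real value would require privileged access to the registry on a live system or a copied hive.

```python
# Decode a password-hint value of the kind Claudius and Reynolds
# reported: raw UTF-16LE bytes, often shown as a hex string in registry
# tools. No decryption involved -- it's plaintext in a thin disguise.
def decode_hint(raw: bytes) -> str:
    return raw.decode("utf-16-le").rstrip("\x00")

# Hypothetical value as it might appear in a registry dump:
sample = bytes.fromhex("6d007900200064006f0067002700730020006e0061006d006500")
assert decode_hint(sample) == "my dog's name"
```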

For one, there’s no actual failed login attempt needed to get the hint. ¬†If I walk up to a machine, click on someone’s account name and type in a random password to get the hint, the login attempt can be logged. ¬†In the aftermath of an attack, that log entry can tie me to a particular place or network connection at a particular time. ¬†Even if there hasn’t been an attack, it’s a possible sign that someone other than the intended user has probed the system.

The fact that the hint is stored essentially in plaintext is understandable given the way Microsoft intends it to work (it’s given out prior to authentication, so why bother encrypting it for storage?), but it’s a bad, bad, bad, bad, bad idea for security purposes. ¬†A user could practically spell out their entire password in the plaintext hint and kneecap the security model altogether.

How can this be fixed?  Well, ask the user to remember something before you give them the hint.  Show them a set of images, one of which has been pre-selected by the user.  The user is asked to click their special image to reveal the hint.  Only after this soft authentication would the hint be decrypted and revealed.  The image would (or should) have absolutely no relationship to the password and no relationship to the hint.  This would be a small barrier for an attacker, but even a small barrier could profoundly reduce the incidence and success of casual or automated attacks.  The additional cognitive burden on the user should be very small, certainly much smaller than that of a password.
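A minimal sketch of that flow, assuming we derive a key from the identifier of the user’s pre-selected image and use it to unlock the stored hint. A SHA-256 XOR keystream stands in for a real cipher here; the image identifiers are hypothetical.

```python
# Soft authentication sketch: the hint is stored sealed, and only the
# correct image choice reproduces the keystream that unseals it.
import hashlib

def _keystream(image_id: str, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(f"{image_id}:{counter}".encode()).digest()
        counter += 1
    return out[:n]

def seal_hint(hint: str, image_id: str) -> bytes:
    data = hint.encode()
    return bytes(a ^ b for a, b in zip(data, _keystream(image_id, len(data))))

def reveal_hint(sealed: bytes, clicked_image_id: str) -> str:
    data = bytes(a ^ b for a, b in zip(sealed, _keystream(clicked_image_id, len(sealed))))
    return data.decode(errors="replace")

sealed = seal_hint("street I grew up on", image_id="cat-photo-7")
assert reveal_hint(sealed, "cat-photo-7") == "street I grew up on"
assert reveal_hint(sealed, "dog-photo-2") != "street I grew up on"
```

The attacker now has to guess the image too, and a wrong click yields garbage rather than the hint.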

Some have called this discovery a non-issue, and in many ways this changes nothing about the Windows security model, but it does highlight some bad characteristics of the model itself.

Cliff’s Notes for Password Vulnerability

This article at Ars is a great introduction to the current state of password strength/vulnerability.

The gist is that password reuse is steadily increasing, brute force and hash attack costs are plummeting, and password composition is pretty much as bad as it always was. No big surprise at any of those trends, probably because those trends have held for the past 20 years at least, but it’s still disconcerting.

The article gives examples of several attack methods, including the trusty old dictionary attack (and not just Webster’s).

What I found really interesting was the focus on pattern analysis as a tool for reducing search space. The idea is that you can use some piece of information about a site’s user population or about the site itself to predict patterns in user-generated passwords. A site that requires one uppercase, one number, and one special character will have a different password pattern distribution than a site that requires a minimum of 10 letters with no common words. Using this kind of a priori information isn’t the cool part, though…

In an attack on a large set of user-generated passwords, there will always be a large percentage that will fall to easy patterns and simple dictionaries. The cool part comes from using analysis of these broken passwords to inform your attack on the ones that didn’t break. Say, for example, 10% of a password population was broken through an easy attack. Just because that 10% was easily cracked, it doesn’t mean that they are wholly dissimilar to the 90% that weren’t cracked. Both populations may have similar common patterns and only differ in length or size of their character set. If we assume that the patterns used in the easy set also describe the uncracked set, we can focus attack resources on those patterns.
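The feedback loop above can be sketched as a simple mask analysis: reduce each cracked password to its structural pattern, then rank the patterns so the attack on the uncracked remainder tries the most popular shapes first. The cracked list here is a toy example.

```python
# Mask notation: U = uppercase, l = lowercase, d = digit, s = symbol.
from collections import Counter

def mask(password: str) -> str:
    out = []
    for ch in password:
        if ch.isupper():
            out.append("U")
        elif ch.islower():
            out.append("l")
        elif ch.isdigit():
            out.append("d")
        else:
            out.append("s")
    return "".join(out)

cracked = ["Password1", "Welcome1", "Summer99", "letmein", "Monkey12"]
ranking = Counter(mask(p) for p in cracked).most_common()
# Popular masks (capital letter first, digits last) float to the top
# and guide the candidate generation for the harder passwords.
```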

Imagine playing a game of Battleship where the board is very large and you can use as many ships as you want. Play against a large number of opponents. You win some, you lose some. Look at the games you won. You can see patterns emerge in people’s tactics: how close together they place their ships, whether they clump them in groups or line them end to end, etc. The game board may be very large, but you have reasonable limits on your time to play… maybe you only want to spend 10 minutes playing any one game, but you want to win as many games as possible. If you know that the most popular pattern used by your opponents is to line their ships end to end in a long line, you will try to find a ship, then continue attacking in a line until you sink all the ships. If this attack is generally successful in 10-minute games, you may suppose that it will work even if you extend the play to 1 hour. The pattern may have been in use in the games you lost — perhaps there were simply too many ships to have sunk them all in 10 minutes. By finding the common pattern(s) in the short games, you’ve increased the chances of winning longer games without having to play many, many long games to discover the pattern.

Your computational resources are finite, just like the amount of time you have to spend playing games of Battleship.  If you want to get rich hustling the underground Battleship circuit (hey, it could be a real thing), you want to win as many games as you can in a set amount of time.  If you want to be a 5up3r h4ck3r, you want to crack as many passwords as you can with a set amount of computational resources.

Okay, the analogy isn’t perfect, but you get the idea.



Microsoft Thinks Two-Factor Authentication Isn’t Important

Reps for Microsoft’s new service suggest that strong passwords and vague statements about R&D are enough to protect their users.

Mashable questioned Microsoft about the new service’s security and was told that, unlike Google, two-factor authentication will not be implemented. Google does not require use of two-factor authentication, but they do offer it to users on an opt-in basis. Microsoft’s decree that they won’t even offer an opt-in service is disappointing to say the least, and will very likely come back to haunt them in the months and years to come.

The tone of the MS rep’s comments gives the impression that two-factor auth is a sort of anachronism or secret handshake — something only a Spock-eared nerd or snobby IT elite would encumber himself with. Whether they take issue with the two-factor concept generally, or Google’s implementation specifically, is unclear.

The rep’s case boils down to two propositions:

  1. (Google’s?) Two-factor auth creates a bad user experience.
  2. Strong passwords and unspecified future schemes are secure enough.

Let’s examine these in more detail.

Google’s two-factor scheme works like this:

  • Joe Blow has a Google account which stores email, browser passwords, documents, and other private stuff
  • Joe tells Google that his password will be “JoeIsCool”, that his cell phone number is 867-555-5309, and that he wants to use two-factor authentication
  • If Joe accesses his Google account from his home computer or other trusted machine, he may be asked to enter his password, “JoeIsCool”, once a day or every other day; Joe enters the password and Google lets him in
  • If Joe wants to use his Google account from an internet cafe or untrusted computer, Google will ask him to enter his password, but it will also send a code to his cell phone; he must enter both tokens (password and one-time code) before Google will let him in
  • If Hacker Henry discovers Joe’s password, he isn’t able to break in to Joe’s account since he doesn’t have Joe’s phone and thus can’t receive the one-time code; what’s more, Joe is now alerted that someone is trying to break in
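Server-side, the one-time-code step in the flow above can be sketched like this. Code length, expiry, and storage are assumptions for the example, not Google’s actual parameters.

```python
# Issue a short-lived random code, send it out of band (SMS), and
# compare on entry with a constant-time check.
import secrets
import time

CODE_TTL_SECONDS = 300
_pending = {}  # user -> (code, issued_at); a real system would persist this

def issue_code(user: str) -> str:
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit, zero-padded
    _pending[user] = (code, time.time())
    return code  # in production this goes to the phone, never the browser

def verify_code(user: str, submitted: str) -> bool:
    code, issued = _pending.get(user, (None, 0))
    if code is None or time.time() - issued > CODE_TTL_SECONDS:
        return False
    return secrets.compare_digest(code, submitted)

sent = issue_code("joe")
assert verify_code("joe", sent)
```

Note the property the post emphasizes: the system hands the user the token and asks for it back, so nothing new has to be memorized.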

To me, the biggest usability barrier in this scheme is getting the initial user buy-in. The user needs to know that the option exists, that it is important, that it is good, and that it is easy. If the user opts in, use of the two-factor scheme is fairly straightforward; Joe attempts to log in from an untrusted computer, he enters his password correctly, he receives a text with a number in it, he enters the number, he’s in.

Think about it this way: he only needs to remember his password. Whether he uses two-factor or not, he only needs to remember his password. If he uses two-factor, the system itself gives him a token and asks for it back — no additional memory burden. Compare this to the most common method of enhanced authentication: testing your memory. When I log on to my bank’s website from an untrusted computer, the site asks for the usual stuff and then tests my memory about certain things. What is your favourite movie? What was your first-grade teacher’s name? What is your pet’s name? These seem like simple questions to answer, but there be dragons here. You may remember the facts just fine, but you must also reproduce the answer you gave when you created the account. This gets into issues of case-sensitive tokens, ambiguous questions that can have several perfectly sensible but incorrect answers, etc.

How is that not a giant usability minefield? Google prompts you with the exact token it wants, zero memory burden on the user (aside from the password, obviously) each and every time it asks you to authenticate. A memory test, by comparison, is always a memory test; each time you are asked to remember something, you have to perform a search in your memory or in your little black book of things you can’t be bothered to actually remember.

One potential pitfall of Google’s two-factor is that the user must have access to his or her phone during the authentication. In 2012, I don’t see this as an unreasonable requirement, but there’s always that one day when you’ve lost your phone or its battery has died right when you need to log in.

So, what authentication scheme will Microsoft employ that is plenty secure and user friendly? They aren’t saying. The rep assures us that they’re pouring money and effort into R&D on this matter, but I can’t see them inventing a totally new scheme that satisfies both ease of use and strong security. I’m expecting a reliance on strong passwords (on pain of death, little user!), the usual memory tests, and perhaps something gimmicky tacked on… Something graphical?

Keep watching the skies…

Voter Data Breach Update

Well well well, the privacy breach at Elections Ontario is worse than expected.

The Chief Elections Officer initially stated that the memory sticks were encrypted and that there was no evidence that the data had been accessed. Twenty-four hours later we’re told that they were not encrypted (!!!) but they’re still confident that the data have not been accessed…

Oh, and we’re now up to a possible 25 ridings affected, not just 24.

I wrote about the questionable logic behind the “we’re pretty sure nothing’s been accessed” concept in my last post, so I won’t go over it again.

Elections Ontario needs to improve their communication on this breach. I went to their official site to get more information about the affected ridings, but they are unsure (or unwilling to disclose) which 24 or 25 ridings were definitely affected out of a list of 49 potentially affected ridings. Okay, let’s see the list of 49 potentially affected ridings…

Oh, an Excel Spreadsheet

An Excel Spreadsheet???  Is it really that difficult for them to append the riding names to the press release, or are they just trying to make the information less obvious?

Oy vey.

Dragon ID’s mobile unlock by voice

A brief but interesting story on GigaOm.

Nuance, the company behind the Dragon family of voice recognition products, is promoting a mobile app called Dragon ID. The app acts as a replacement for standard user authorization schemes like PINs or swipe patterns by matching speech characteristics of a user against a known set of characteristics. It’s the old “My voice is my passport” idea that we (or at least I) saw in “Sneakers”; the user speaks a phrase into the device, the device checks to see if the user’s speech has the same x, y, and z as the real Mr. User, and accepts or rejects the attempt.

At first blush this looks like a UX winner. The user doesn’t have to remember any complicated passwords, PINs, or other meaningless tokens. And it would be impossible for the user to lose his authenticator, his voice, except by disease or injury.

But there are some security considerations that must be satisfied for this to be an acceptable gatekeeper for a mobile device. The most obvious weakness of this system would be to a replay attack: literally replaying a recording of the user authenticating. What countermeasures are used by Dragon ID to prevent such simple attacks? Presumably the audio recording is analyzed by Dragon ID to ensure that the voice is coming from a point directly in front of the device or headset microphone, but this would not be a robust defense. Can it detect artifacts of digital audio reproduction? Audio compression schemes like MP3? Does it emit a one-time audio watermark via the speaker during recording so that a replay would be easily detected? I’d certainly love to know.

Pattern matching is performed against an established set of phrases recorded by the user. This simplifies the task of matching a candidate audio sample’s characteristics against a known set of characteristics, but it presumably reduces the amount of work an attacker would need to put into making a passable authenticator. In a perfect world, the app would compose a unique phrase for each attempted authentication, each login, so that an attacker would have no real template for a “good guess”. The attacker would need to know about a user’s full range of accents, inflections, cadences, etc., in order to make a passable authenticator, and he would only get one shot at each phrase. With a known subset of authenticators (like a decent recording of one successful authentication attempt), the attacker knows what the phrase will be for any future attempt and that he will only have to polish it somehow for it to be acceptable.
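The per-attempt challenge idea could be as simple as this sketch: compose a fresh random phrase for every login so that a replayed recording of an old attempt never matches the current challenge. The word list is illustrative, and this says nothing about how Dragon ID actually works.

```python
import secrets

WORDS = ["harbor", "violet", "thunder", "maple", "lantern",
         "copper", "meadow", "falcon", "ember", "willow"]

def challenge_phrase(n_words: int = 4) -> str:
    """Fresh random phrase per attempt; an old recording won't match it."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))
```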

Phrases can be disabled by the user or disallowed by the device or the Dragon ID servers for too many failed attempts, but this raises a question about the resistance of the app to multiple attempts. The app surely only allows a certain number of attempts before either locking the device entirely, disabling a specific phrase, or forcing the user to authenticate with a password or some other non-voice token. But how does it track multiple attempts? The app is required to work even when the device is completely disconnected from voice or data networks, so there must be some form of device-resident logging. If the device’s memory is cloned before an attack, what prevents the attacker from reflashing the device into its previous state where the counter was at 0? There are plenty of memory locations on a device to store counter information, and more clever ways than a simple variable in a LoginAttempts.dat file. Is it possible to completely reset the state of the device to a set point such that an attacker could indefinitely attempt authentication?
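One partial defense is to keep the counter where a reflash can’t touch it, on a server, whenever the device has connectivity. A minimal sketch of the idea; the offline-only case remains the hard one, and the class here is purely illustrative:

```python
# Server-side attempt tracking: a device reflash resets local state but
# cannot roll back the server's count of failures for that device.
class AttemptTracker:
    def __init__(self, max_attempts=5):
        self.max_attempts = max_attempts
        self.failures = {}  # device_id -> failure count, held server-side

    def record_failure(self, device_id):
        self.failures[device_id] = self.failures.get(device_id, 0) + 1

    def is_locked(self, device_id):
        return self.failures.get(device_id, 0) >= self.max_attempts

tracker = AttemptTracker()
for _ in range(5):
    tracker.record_failure("tablet-123")
assert tracker.is_locked("tablet-123")      # count survives a device reflash
assert not tracker.is_locked("phone-456")
```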

Enlighten me.  I love this stuff.


Original article on GigaOm.