No; iMessage isn’t intercept-proof.

*** (April 5, 2013) Update: TechDirt has a nice post about the whole affair. They summarize the counterarguments against the DEA memo and the original CNET story, and they line up quite nicely with mine 🙂 They also include snippets from Julian Sanchez that offer more details and some possible motives for this whole exercise. Woot!

Argh. This story is traveling around the OMGosphere. A DEA office sent an internal notice to its agents and investigators, warning them that pen registers and trap and trace devices cannot log Apple iMessages. The devices in question work like the call list on your phone: every call you make and every call you receive is logged. Extend that idea to include SMS messages (mobile texts) and you get the idea. It’s a form of wiretapping, but it doesn’t necessarily include logging the content of the communication.

The DEA uses these devices to record evidence of contact and communication between suspects. If they’re logging the phone calls made and received by gang members, the record of their intercommunication history could be used in court to show collusion in criminal activity, for example. RICO Act type of stuff.

Most of this equipment is installed and maintained by the phone companies to meet their legal disclosure requirements; when an agency comes knocking and asks for a full bidirectional record of calls for a certain phone number, the company is required to produce it.

The DEA warning was issued because agents discovered that the communication records they received weren’t always complete. The missing events were iMessages sent between two Apple devices: two iPhones, an iPhone and an iPad, two iToilets, etc.

So, that means that Apple iMessages have unbreakable encryption and are so amazingly great that EVEN THE DEA CAN’T TRACK THEM! Right?



Internet, there are times when I want to hit you with an atomic hockey stick.

DEA foiled again!

Why are SMS messages logged while iMessages are not? A few reasons that have nothing to do with super Apple encryption framice plates.

1. SMS messages are handled by the phone company network. The capability to transport text messages between mobile phones is built right into the specifications of the mobile phone networks. When you send a mobile text message, the message protocol includes source and destination headers telling the tower where the message originated and who it’s for. The logging equipment at the phone company can simply take those headers and add them to the record.

2. iMessage is not a standard adopted by the mobile phone industry. Apple handles the routing of iMessages. When you send an iMessage from your iPhone — assuming you send it via mobile data and not Wifi — the cell tower treats it like a bunch of ordinary data packets; you might as well be uploading a photo or streaming some music. The packets have source and destination headers of their own, but those only serve to move the packets to an Apple server. The actual source and destination of the iMessage are part of the packets’ content, not exposed as cleartext metadata the way SMS headers are.

3. Pen registers and traps aren’t psychic. There are people in the world who think that a virus scanner is capable of identifying any kind of virus. Surprisingly, the scanner is not an oracle; it just matches traffic against a list of known patterns. Have you ever been bothered by anti-virus software begging you to update your virus definitions? The software needs the latest set of known virus patterns (or signatures) so that it can detect known threats. If the definitions haven’t been updated in 2 years, there are lots of new virussessesesesssii the scanner will miss. The wiretaps work in a similar fashion. They sit in the network and look for SMS-shaped things, voice call-shaped things, etc. They have been told how to identify those events; they don’t get a tingling spidey-sense when an SMS is nearby. It’s entirely possible that the wiretap equipment could be given an update allowing it to identify the signature of an iMessage, if not the ability to decode it. Depending on the iMessage spec, messages may have a structure that is observable even when encrypted; messages may have a specific preamble; all packets heading to a set of identified iMessage servers could be flagged; etc.

4. It is almost certain that Apple IS maintaining a log of iMessages in order to comply with legal requirements. If so ordered, they would be required by law to produce activity logs for individual iMessage accounts. In this case, the DEA agents simply weren’t aware that iMessage traffic wouldn’t show up in the phone company’s logs and that the data would be held by Apple instead. This wasn’t a triumph of Apple tech against evil government privacy violations. This was temporary ignorance of modern communications tech.
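The first three points can be sketched as a toy carrier-side traffic classifier: like a virus scanner, it only recognizes what it has signatures for. Everything here — the `SMS\x00` preamble, the server addresses, the packet layout — is invented for illustration; real pen register equipment and wire formats are far more involved.

```python
# Hypothetical Apple server addresses (RFC 5737 documentation IPs).
KNOWN_IMESSAGE_SERVERS = {"", ""}

def inspect_packet(packet):
    payload = packet["payload"]
    if payload.startswith(b"SMS\x00"):
        # SMS-shaped thing: source and destination ride in cleartext
        # headers, so the tap can log who texted whom.
        src, dst, _body = payload[4:].decode().split("|", 2)
        return {"type": "sms", "from": src, "to": dst}
    if packet["dst_ip"] in KNOWN_IMESSAGE_SERVERS:
        # Possible iMessage: flaggable by destination server, but the
        # real sender and recipient are inside the encrypted payload.
        return {"type": "possible-imessage",
                "from": packet["src_ip"], "to": packet["dst_ip"]}
    # No matching signature: the tap simply doesn't see an "event" here.
    return {"type": "unclassified"}

print(inspect_packet({"dst_ip": "",
                      "payload": b"SMS\x00+15551111|+15552222|hi"}))
print(inspect_packet({"src_ip": "", "dst_ip": "",
                      "payload": b"\x9f\x02..."}))
```

Until someone teaches the tap an iMessage signature, that second packet falls through to "unclassified" and never makes it into the record handed to the DEA.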

Thus endeth the lesson.


Windows Password Hints: Big Deal?

Jonathan Claudius and Ryan Reynolds, white-hats in the security game, have discovered a registry key in Windows 7 and Windows 8 that contains password hints, the little reminders that pop up when you try to log in and make a mistake. The hint is supposed to be an additional cue to a user’s recall, one that is meaningful to the user but not to anyone else.

There’s some debate about the significance of this discovery. On one hand, the hint is freely available to anyone trying to log in to an account, so it’s not meant to be privileged knowledge. Anyone can try to log in to any account, mash the keyboard to make a random guess at the password, and get that account’s hint in return. On the other hand, the hint is crafted by the user with the goal of helping him/herself to remember something. The amount and kind of information they can put into a hint has the potential to make guessing much, much easier, and not just for the user.

Claudius and Reynolds’ discovery shows us that anyone or any program with access to the registry can read the hint. How is this beneficial for an attacker?

For one, no actual failed login attempt is needed to get the hint. If I walk up to a machine, click on someone’s account name, and type in a random password to get the hint, that login attempt can be logged. In the aftermath of an attack, the log entry can tie me to a particular place or network connection at a particular time. Even if there hasn’t been an attack, it’s a possible sign that someone other than the intended user has probed the system. Reading the hint straight out of the registry leaves no such trail.

The fact that the hint is stored essentially in plaintext is understandable given the way Microsoft intends it to work (it’s given out prior to authentication, so why bother encrypting it for storage?), but it’s a bad, bad, bad, bad, bad idea for security purposes. A user could practically spell out their entire password in the plaintext hint and kneecap the security model altogether.

How can this be fixed? Well, ask the user to remember something before you give them the hint. Show them a set of images, one of which has been pre-selected by the user. The user is asked to click their special image to reveal the hint. Only after this soft authentication would the hint be decrypted and revealed. The image would (or should) have absolutely no relationship to the password and no relationship to the hint. This would be a small barrier for an attacker, but even a small barrier could profoundly reduce the incidence and success of casual or automated attacks. The additional cognitive burden on the user should be very small, certainly much smaller than that of a password.
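A minimal sketch of that image-gated idea, assuming the hint is stored encrypted under a key derived from the user’s chosen image. The hash-based stream cipher and names like `img_07` are invented for illustration; this is a toy, not production crypto.

```python
import hashlib

def _keystream(image_id, salt, length):
    # Derive a pseudorandom byte stream from the chosen image's identifier.
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(
            salt + image_id.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def encrypt_hint(hint, image_id, salt):
    data = hint.encode()
    return bytes(a ^ b for a, b in zip(data, _keystream(image_id, salt, len(data))))

def reveal_hint(ciphertext, clicked_image_id, salt):
    # Only the correct image choice reproduces the keystream.
    plain = bytes(a ^ b for a, b in
                  zip(ciphertext, _keystream(clicked_image_id, salt, len(ciphertext))))
    return plain.decode(errors="replace")

salt = b"per-user-salt"
stored = encrypt_hint("mother's maiden name", "img_07", salt)  # user picked img_07
print(reveal_hint(stored, "img_07", salt))  # right image: readable hint
print(reveal_hint(stored, "img_02", salt))  # wrong image: gibberish
```

The registry would then hold ciphertext instead of a plaintext hint, and a casual registry dump yields nothing readable — exactly the small barrier described above.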

Some have called this discovery a non-issue, and in many ways this changes nothing about the Windows security model, but it does highlight some bad characteristics of the model itself.

Microsoft Thinks Two-Factor Authentication Isn’t Important

Reps for Microsoft’s new service suggest that strong passwords and vague statements about R&D are enough to protect their users.

Mashable questioned Microsoft about the service’s security and was told that, unlike Google, they will not implement two-factor authentication. Google does not require use of two-factor authentication, but they do offer it to users on an opt-in basis. Microsoft’s decree that they won’t even offer an opt-in service is disappointing, to say the least, and will very likely come back to haunt them in the months and years to come.

The tone of the MS rep’s comments gives the impression that two-factor auth is a sort of anachronism or secret handshake — something only a Spock-eared nerd or snobby IT elite would encumber himself with. Whether they take issue with the two-factor concept generally, or Google’s implementation specifically, is unclear.

The rep’s case boils down to two propositions:

  1. (Google’s?) Two-factor auth creates a bad user experience.
  2. Strong passwords and unspecified future schemes are secure enough.

Let’s examine these in more detail.

Google’s two-factor scheme works like this:

  • Joe Blow has a Google account which stores email, browser passwords, documents, and other private stuff
  • Joe tells Google that his password will be “JoeIsCool”, that his cell phone number is 867-555-5309, and that he wants to use two-factor authentication
  • If Joe accesses his Google account from his home computer or other trusted machine, he may be asked to enter his password, “JoeIsCool”, once a day or every other day; Joe enters the password and Google lets him in
  • If Joe wants to use his Google account from an internet cafe or untrusted computer, Google will ask him to enter his password, but it will also send a code to his cell phone; he must enter both tokens (password and one-time code) before Google will let him in
  • If Hacker Henry discovers Joe’s password, he isn’t able to break in to Joe’s account since he doesn’t have Joe’s phone and thus can’t receive the one-time code; what’s more, Joe is now alerted that someone is trying to break in
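The flow above can be sketched in a few lines. The class name and the SMS stub are invented for illustration; a real implementation would add code expiry, rate limiting, and the trusted-machine grace period.

```python
import secrets

class TwoFactorLogin:
    def __init__(self, password, phone):
        self.password = password
        self.phone = phone
        self.pending_code = None

    def start(self, password_attempt, send_sms):
        # Factor #1: something you know.
        if password_attempt != self.password:
            return "wrong password"
        # Factor #2: something you have. A fresh one-time code goes to the phone.
        self.pending_code = f"{secrets.randbelow(10**6):06d}"
        send_sms(self.phone, self.pending_code)
        return "code sent"

    def finish(self, code_attempt):
        ok = self.pending_code is not None and code_attempt == self.pending_code
        self.pending_code = None  # single use, match or not
        return "logged in" if ok else "denied"

inbox = []
joe = TwoFactorLogin("JoeIsCool", "867-555-5309")
joe.start("JoeIsCool", lambda phone, code: inbox.append(code))
print(joe.finish(inbox[-1]))   # Joe has both factors -> logged in

henry = TwoFactorLogin("JoeIsCool", "867-555-5309")
henry.start("JoeIsCool", lambda phone, code: None)  # Henry knows the password...
print(henry.finish("not-the-code"))                 # ...but not the code -> denied
```

Note that the server generates and delivers the second token itself, which is exactly why the scheme adds no memory burden for Joe.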

To me, the biggest usability barrier in this scheme is getting the initial user buy-in. The user needs to know that the option exists, that it is important, that it is good, and that it is easy. If the user opts in, use of the two-factor scheme is fairly straightforward; Joe attempts to log in from an untrusted computer, he enters his password correctly, he receives a text with a number in it, he enters the number, he’s in.

Think about it this way: he only needs to remember his password. Whether he uses two-factor or not, he only needs to remember his password. If he uses two-factor, the system itself gives him a token and asks for it back — no additional memory burden. Compare this to the most common method of enhanced authentication: testing your memory. When I log on to my bank’s website from an untrusted computer, the site asks for the usual stuff and then tests my memory about certain things. What is your favourite movie? What was your first grade teacher’s name? What is your pet’s name? These seem like simple questions to answer, but there be dragons here. You may remember the facts just fine, but you must also reproduce the answer you gave when you created the account. This gets into issues of case-sensitive tokens, ambiguous questions that can have several perfectly sensible but incorrect answers, etc.

How is that not a giant usability minefield? Google prompts you with the exact token it wants: zero memory burden on the user (aside from the password, obviously) each and every time it asks you to authenticate. A memory test, by comparison, is always a memory test; each time you are asked to remember something, you have to perform a search in your memory or in your little black book of things you can’t be bothered to actually remember.

One potential pitfall of Google’s two-factor is that the user must have access to his or her phone during the authentication. In 2012, I don’t see this as an unreasonable requirement, but there’s always that one day when you’ve lost your phone or its battery died right when you need to log in.

So, what authentication scheme will Microsoft employ that is plenty secure and user friendly? They aren’t saying. The rep assures us that they’re pouring money and effort into R&D on this matter, but I can’t see them inventing a totally new scheme that satisfies ease of use and strong security. I’m expecting a reliance on strong passwords (on pain of death, little user!), the usual memory tests, and perhaps something gimmicky tacked on… Something graphical?

Keep watching the skies…

Voter Data Breach Update

Well well well, the privacy breach at Elections Ontario is worse than expected.

The Chief Electoral Officer initially stated that the memory sticks were encrypted and that there was no evidence that the data had been accessed. Twenty-four hours later we’re told that they were not encrypted (!!!) but they’re still confident that the data have not been accessed…

Oh, and we’re now up to a possible 25 ridings affected, not just 24.

I wrote about the questionable logic behind the “we’re pretty sure nothing’s been accessed” concept in my last post, so I won’t go over it again.

Elections Ontario needs to improve their communication on this breach. I went to their official site to get more information about the affected ridings, but they are unsure (or unwilling to disclose) which 24/25 were definitely affected out of a list of 49 potentially affected ridings. Okay, let’s see the list of 49 potentially affected ridings…

Oh, an Excel Spreadsheet

An Excel Spreadsheet???  Is it really that difficult for them to append the riding names to the press release, or are they just trying to make the information less obvious?

Oy vey.

Voter Personal Data Out in the Wild

I just saw an article on CBC about a “privacy breach” at Elections Ontario. A set of “memory sticks” has gone walkabout, taking with it the names, addresses, birth dates, and genders of registered voters in 24 provincial ridings. Oh dear.

The Chief Electoral Officer for the province says that the information was encrypted and that there is no evidence that the data were accessed.

Hold on. If the memory sticks are missing, and therefore not available for inspection, then you have no information whatsoever on the state of their contents. It is perfectly true that there is no evidence that the data were accessed… because you can’t check them to find out.

I could just as easily tell you that my autographed photo of Eddie Murphy is missing, and that there’s no evidence that someone else has crossed out my name and written “Bono” in permanent marker. Indeed, I have no evidence of the portrait’s Bonofication, but that absence of evidence is not evidence of absence. Damn you, Bono.

It would be nice if the Chief Electoral Officer gave out details of the encryption used on the memory sticks. Without defining the method being used, the term “encryption” could mean anything from symmetric key crypto (AES, Twofish, etc.) to giving the files inconspicuous names (not_voter_data.dat) and everything in between.

There was also mention that the data may indicate whether the individual voter actually cast a ballot in the last election.  I have no issue with them tracking participation, but that data should not be stored with information that could identify individuals.  Information that uniquely identifies individual people must be kept segregated.  There should be no master database that contains all data points, big and small.  At a minimum, sensitive data should be stored in one database, non-sensitive data in another.  Each person would have a unique, randomly-assigned identifier (a number, for example), and that would be the common link between the two databases.  If the voter registry was leaked, you would know who the voters were, but not know if they voted.  If the record of voter participation was leaked, you would know which unique identifiers were associated with someone who voted, but not have any information about who that person was.  Obviously there will be occasions when you need some information from both databases (accomplished with a JOIN, in SQL) but the resulting mix of personal and non-personal would be temporary, not something that you would store on a USB stick.

At least, that’s how it should be done. Let’s see what the morning briefing reveals…