I have my original Social Security card. I applied for it and got it when I was in high school and was about to get my first real job. Printed at the bottom is "FOR SOCIAL SECURITY AND TAX PURPOSES -- NOT FOR IDENTIFICATION". This "Not for Identification" business is often misconstrued. It is usually taken to mean that your Social Security number is not some kind of ID or identification number. That's not what is meant, because the very same card also says "For Social Security and Tax Purposes". What is actually meant is that the card itself is not to be used for identification. That's because the card itself was not designed to provide positive identification.
And that is a nice bridge. This post is an update to a previous post about Positive Identification (see http://sigma5.blogspot.com/2016/04/positive-identification.html). If you reread it you will find it links to still earlier posts. I have been weighing in on privacy issues for some time now. Although the trend continues, the details keep changing. And there have been several important changes since my last post over two years ago.
I spent some time in the post I linked to on the fingerprint identification system Apple had implemented. You placed your finger on the correct spot on the phone and it would read your fingerprint. If it recognized it the phone would unlock. The phone positively identified you by analyzing your fingerprint.
Apple has since moved on to facial recognition. Smart phones have had cameras in them for some time. Even my old fashioned flip-phone now comes equipped with a camera. My phone doesn't have enough computer power to do facial recognition but newer iPhones do. They take a picture and supplement it by measuring other characteristics of your face. If it's a match you have been positively identified and your phone unlocks.
It is important to recognize there are limitations. First of all, what the phone is doing is matching your current face to one that was identified to the phone during the phone's "setup" procedure. So the phone knows that, relatively speaking, you are you. But absolutely speaking it doesn't know who you are. Also, it is possible to fool the recognition system. This was true of the older fingerprint system and it continues to be true of the facial recognition system. But everybody expects that the process will be updated and enhanced as time goes by so that it becomes harder and harder to do this. Even now, fooling either system takes deliberate effort, a lot of skill, and a considerable amount of knowledge.
But the identification is only relative. The phone recognizes you as the owner because it has been told that you are the owner. But it doesn't know who you are. And the same is true more generally. The analysis I did pointing out that there is really no way the current system can absolutely tie a specific person to a specific birth certificate is still true. And using CODIS or some other DNA based system to create an absolute connection continues to get easier and easier from a technical perspective.
CODIS added 7 additional STRs in 2017, so it now uses 20. But nothing has moved from a political perspective. There are still tight restrictions on what gets put into a CODIS database. There has been no move to CODIS newborns, for instance. And if smartphone makers are thinking about using DNA for identification I don't know about it.
But there have been big developments on the identification front. These developments are with respect to relative identification. But they are so pervasive and extensive that they have rendered the difference between relative and absolute identification moot.
We have known for a long time that tech companies were collecting a lot of data about us. Google famously saves every search ever made. Initially this was supposed to be so they could analyze it and optimize their algorithms to give you a more useful answer. But it soon became apparent that they were not just using it as some sort of anonymous pile of data that helped in search optimization. They were using it to identify each and every one of us. They would then develop a profile of each of us which they would sell to advertisers. The idea was this would allow advertisers to narrowly target their marketing to just the people most likely to be interested in the product.
As an example of how this worked I once searched Amazon for shredders. I didn't need one but my mother, who did not have a computer, needed one. I soon started to notice that wherever I went on the web an ad for an Amazon shredder would soon follow. This behavior persisted even after I went back and bought a shredder from Amazon for my mother. That was modestly entertaining and little or no harm was done to me or anybody else. So this kind of behavior didn't seem all that bad either to me or pretty much anybody else. And that's the kind of mental model people had about what was going on.
Okay, so the NSA was sweeping up all this data. That was the government and they shouldn't be doing that sort of thing. At least so went the argument by myself and many others. (There were, of course, lots of people who were okay with this and other intrusive behavior by the government.) But my point is that if we put this sort of thing on a scale the NSA was generally considered closer to the "bad" end than people like Google. And a large number of people were of the opinion that it was all fine.
Then the 2016 election happened. And over time we have learned a lot more about what tech companies in general and Facebook in particular have been up to. And more and more people have become very angry. There is something in the business called an EULA, an End User License Agreement. We have all had to deal with them. They are long documents full of impenetrable legalese that even expert lawyers can often not make sense out of. In the backs of our minds we are all pretty sure that there is stuff in them that we would not like if we understood what it was and what it meant. But you can't get around EULAs. Everybody uses them so you can't just go to the next company.
And they are all bad to one extent or another so it is impossible from a practical point of view to go with the company that has the least onerous EULA. They are what is called a "coercive contract". At least one party (us) is effectively powerless in the negotiation. So we don't read them. We just click the "Agree" button and move on. We have all made a deal with the devil. If we are going to have access to these compelling tools and gadgets we are going to have to put up with a certain amount of stuff we would rather not have to. But if we sign up for Facebook, for instance, we expect the bad behavior to be confined to the relationship between us and Facebook. And we did after all "Agree" to Facebook's EULA.
But we have found out that it is far worse than we thought. We expect Facebook to use what it has learned about us to try to get us to sign up for more Facebook stuff. And we expect Facebook to sell profile information to advertisers so that Amazon can pester me with ads for shredders. But we expect it to stop at that. But it turns out it didn't.
Facebook has a program that allows companies to build and run applications within the Facebook environment. Those applications can harvest information. And the information is not limited to what we tell the application. A popular type of application is a cute quiz. "How much do you really know about Star Wars?", or about pop stars, or fashion, or cars, or whatever. Certainly these quizzes can be constructed to collect information that advertisers would find valuable and, therefore, pay money for. That sort of thing seems fair. But these cute applications (they are designed to be cute so that they will be popular so that lots of people will install them) are not limited to harvesting the data you provide while answering the quiz. It turns out that they get access to all the information Facebook has on you.
That's bad. I'm pretty sure it's legal because they would be idiots not to put the necessary language into their EULA. But this "they get all the data that Facebook has on you" degree of badness is just the first and least bad level of badness. It turns out they also get access to what Facebook knows about your friends. That's the second level of badness. And this behavior too is probably legal, because there is probably some language in the Facebook EULA permitting it. But it turns out there is a third level of badness.
Remember the bit about how I was getting those Amazon shredder ads everywhere? I did my search not in Google or Facebook but on Amazon's web site. So only Amazon knew I did the search. What's going on is that advertisers and the companies they do business with like Amazon and Facebook share data in networks. Amazon shared the information that I had done a search on shredders with its network partners and they placed "Amazon shredder" ads on web sites that I later visited.
It turns out that Facebook does the same thing. They are part of these information sharing networks so they have access to what's happening on sites that are far away from anything Facebook owns or operates. So Facebook has a profile on people like me who have NEVER had a Facebook account. And people like me have never signed an EULA with Facebook or any of the application providers Facebook hosts on their platform.
We have slowly found this out as revelations have trickled out as people have looked at how the 2016 election actually played out. Facebook has a "commercial" interface so that people who want to make a buck can build and run an application on the Facebook platform. But they also have an "educational" interface so that people doing research can also have access to the Facebook platform and Facebook data. This latter interface is given wider latitude due to its presumably non-commercial and beneficial intent.
A Cambridge University Don (professor - Cambridge is in the United Kingdom) took advantage and, as we now know, allowed a company called Cambridge Analytica to harvest vast amounts of data about Americans from Facebook. First it was data on 50 million people. Then it was data on 87 million people. The actual number is unknown and probably never will be known. But we know they sucked a vast amount of data out of Facebook. And we know we don't know where it all ended up. Facebook at one point asked for it all back. Fat chance.
And that's just Cambridge Analytica. There is certainly no technical reason dozens or hundreds of others could not have done the same thing. And we know that Cambridge Analytica was able to harvest data on users who signed up for one of the applications they put together. They all signed Cambridge's EULA. But we also know that this group numbers less than a million. We get to 50 and later 87 million because they were able to collect data on "friends" then "friends of friends" and so on. All these people at least signed the Facebook EULA. But were they also able to collect data on people like me, people who have never signed up for Facebook? The answer is unclear.
So it turns out that Facebook knows a lot about each of its users. The NSA would probably like to know as much about people as Facebook knows. So Facebook can positively identify its users. The positive identification is relative. They can't tie a specific user to a specific birth certificate. But they know so much about that person it doesn't matter. They can more positively identify a person than a bureaucrat at the bureau that issues driver's licenses, or voter registration cards, or passports can. They can do a better job of positively identifying a person than the government can.
And Facebook has taken all the heat. But the same is true of Google. Remember they have all that search history (and lots more). It is probably also true to a lesser extent of Apple and Microsoft and a number of other companies. (So far the spotlight has shone brightly on Facebook and left the others in the shadows.) We now have a completely new method of personal identification, one that I did not imagine as recently as two years ago. You can now be positively identified by your online profile.
Most of us now live on our smartphones. (Again, I am an exception.) Back in the stone age of personal computers Intel was going to put a serial number in their Pentium III chip. The privacy advocates of the day talked them out of it and treated this as a big victory for privacy, the ability to use computers and remain anonymous. But while they won this tiny skirmish they lost the war. There are hundreds of numbers on smartphones that can easily be accessed that provide a unique identification for a specific device.
Microsoft even pioneered a process for creating a GUID, a Globally Unique IDentifier. The process is designed so that, for all practical purposes, it never generates the same number twice. Microsoft uses GUIDs all over the place in their software. If you can get access, and it is easy to do, to any of these GUIDs you can uniquely identify a device like a PC (I do have and use those) or a tablet running a Microsoft application or a smartphone running a Microsoft application. And creating GUIDs is not that hard to do. So you can't avoid the problem by avoiding Microsoft.
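Just how easy unique identifiers are to come by can be seen from any modern language's standard library. Here is a minimal sketch in Python, whose built-in uuid module implements the same UUID standard that Microsoft's GUIDs follow (an illustration of the idea, not of any Microsoft-specific code):

```python
import uuid

# A version-4 UUID/GUID is built from 122 random bits, so in practice
# two independently generated values never collide.
a = uuid.uuid4()
b = uuid.uuid4()

print(a)            # e.g. 3f2504e0-4f89-41d3-9a0c-0305e82c3301
print(a != b)       # True: two draws from 2**122 possible values
print(len(str(a)))  # 36: 32 hex digits plus 4 hyphens
```

Any program that can write one of these values onto a device and read it back later has a stable handle on that specific device.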
You might just as well try to have a presence on the Internet without ever "Agree"ing to an EULA. Other vendors have figured out how to generate their equivalent of a GUID. Then there are all those numbers that behave like a serial number. Every network interface has a MAC address. It is effectively a serial number. Lots of software uses a "License key" or an "Activation key". They are both effectively serial numbers. IP addresses often behave like a serial number. The list goes on and on.
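The MAC address mentioned above is equally easy for software to reach. As a sketch (Python again), the standard uuid module will hand it over; note the caveat that on machines where no MAC can be found the call quietly falls back to a random number, so the result is not guaranteed to be a real hardware address:

```python
import uuid

# uuid.getnode() returns the MAC address of a network interface as a
# 48-bit integer. If no MAC is available it instead returns a random
# 48-bit number with the multicast bit set.
node = uuid.getnode()

print(f"{node:012x}")  # twelve hex digits, e.g. 'a4b1c2d3e4f5'
print(node < 2 ** 48)  # True: a MAC address is 48 bits
```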
Companies like Facebook can harvest GUIDs and MAC addresses and license/activation keys and tie a specific profile to a specific device or small list of devices and build up a positive identification. And they can and have done it better than the government.
I am pro-privacy. But I am also realistic. I have argued in a number of posts that when it comes to privacy the horse left the barn long ago and that there is no effective way to get the horse back in the barn. And even if you did, the horse is likely to escape no matter how much effort you put into horse-proofing the barn.
I think we need to accept the fact that privacy is not possible any more. That means we need laws, regulations, and social norms to constrain how we and our institutions and our businesses behave in a world where the technology exists that permits the powerful to peer pretty much anywhere they want. There is technology like encryption that can close some of the doors that let the powerful or just the technologically sophisticated in. This sort of thing is helpful and should be encouraged. But it doesn't protect us from those who have access to the inside like Facebook and whoever they license or enable.
This means that we must outlaw behavior that is technologically possible and often easy to do. We must also demand a high level of tolerance when it comes to what people are permitted to do. No matter who you are there are some behaviors other people engage in that you don't like, in some cases you don't like it a lot. As a society we must find ways to constrain the actions you are allowed to take in response to the behaviors you dislike. If you violate those constraints you need to be punished.
This at first seems like a new and unnatural way for people to behave. But a thousand or so years ago we all lived in small villages. It was then easy to look into a doorway as doorways frequently had no doors. But social mores constrained people from looking into doorways or acknowledging that certain behaviors were taking place in some public place. This was all enforced by shunning and other social actions. Society is now a global enterprise encompassing billions of people, millions of companies, and thousands of governments. Social norms alone are not going to work for us now.
At first blush it might sound like I am advancing a Libertarian agenda. And that is half right. Libertarians believe that legal prohibitions on behavior should be kept to a minimum. That's the part where my position coincides with the libertarian point of view. Where I differ is that there also needs to be legal prohibitions that outlaw violating the new norms. The government must step in, sometimes in a heavy handed way, to stop people, organizations, and institutions from doing things they want to do, namely going after people they disagree with and prohibiting behavior they don't like.
Facebook is a rich and powerful corporation with many fans. It will take a very powerful institution to be able to force them to change their behavior. They won't do it on their own. The only institution capable of doing that is a large powerful government doing intrusive things. And that kind of government is one Libertarians vehemently oppose.
We are a very divided society right now. And at its most fundamental level what divides us is our vision of how things are and how things should be. Until we come to a common vision the sort of things I am talking about are impossible. Even if a common vision were possible the issues I am talking about will be very hard to resolve. The most likely result of this current division is gridlock with no progress in any direction.
Maybe that is for the best. It gives all of us time to think about these issues and decide what we think about them and where we stand. But if recent events tell us anything they tell us that instead of thinking about these hard problems we will chase the next shiny object and the one after that. Then we will wonder how we got into another fine mess.
Wednesday, April 18, 2018
Sunday, May 21, 2017
Crypto: Offense or Defense?
Some people have always found it valuable to hide the contents of messages from others. A common method is Cryptography, or Crypto for short. Crypto methods date back to the ancient Romans and probably even further back than that. And for a long time writing was good enough in most cases. Most people couldn't read so whatever you wrote was safe from the prying eyes of a large percentage of the population. Only the elite members of society could read so only members of the elites figured into your calculations.
And the two elite groups who were most interested in Crypto were the military and the diplomats. Both were interested in communicating reliably with their friends while keeping their enemies in the dark. And this led to a variety of systems. Simple systems just scrambled the order of the letters or substituted one letter for another. But by the middle ages the most common method was the Nomenclator. It consisted of a long list of words or phrases organized into two columns. The word or phrase in one column replaced the corresponding word or phrase in the other column. The system was clunky so it was mostly used by diplomats who had embassies that employed code clerks. The military, who needed systems they could use in the field under combat conditions, pretty much stuck with letter substitution schemes.
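A letter-substitution scheme of the kind the military favored is simple enough to sketch in a few lines of Python. The key here is an arbitrary scrambled alphabet invented for the example, not a historical one:

```python
import string

# A monoalphabetic substitution cipher: every letter is replaced by the
# letter in the same position of a scrambled alphabet (the key).
PLAIN = string.ascii_uppercase
KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"  # made-up key for illustration

ENCODE = str.maketrans(PLAIN, KEY)
DECODE = str.maketrans(KEY, PLAIN)

def encrypt(message: str) -> str:
    return message.upper().translate(ENCODE)

def decrypt(message: str) -> str:
    return message.upper().translate(DECODE)

ciphertext = encrypt("ATTACK AT DAWN")
print(ciphertext)            # QZZQEA QZ RQVF
print(decrypt(ciphertext))   # ATTACK AT DAWN
```

Anyone holding the key table can run the substitution in reverse. Anyone without it faces, on paper, 26 factorial possible alphabets, though in practice frequency analysis and cribs make such systems easy to crack.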
The population of people who found Crypto a part of their life got wider with the introduction of the telegraph. Traveling representatives of companies needed to communicate over long distances and they didn't want competing companies to know what they were up to. So Nomenclators morphed into Telegraphic Codes. And there was another reason Telegraphic Codes became popular. They could save money. The coded message was cheaper to send than the "plain text", the term of art for the original message, because it was shorter. This got to be a hassle for the telegraph companies so they ended up restricting people to using one of a small number of approved "Commercial Codes". The telephone eventually doomed all this.
And up to this point all the work was being done by people. This restricted the options to things people could reliably do in a reasonable amount of time and with a reasonable amount of effort. That all changed with the introduction of Crypto machines in the 1930s. The most famous of these is the Enigma machine used by the Nazis during World War II. Mechanical Crypto machines quickly evolved into computer-based Crypto machines. But for a long time the use of Crypto was, with the exception of the Telegraphic Commercial Codes, restricted to the elites in general and the military and the diplomatic corps in particular.
That all changed when the general public got access to the Internet. By this time computers were very powerful and capable of implementing very powerful Crypto systems. And all of a sudden pretty much everybody used Crypto whether they knew it or not. You care whether your credit card transactions are secure and reliable or not. And that security and reliability depends critically on Crypto. Thus endeth the history lesson.
And so far I haven't said a word about the ostensible subject of this post. Here's where I start.
I am using the words "offensive" and "defensive" the way a military person would use them. If you are attacking the enemy you have gone on the offensive. If you are implementing measures to make it more difficult for the enemy to attack you, or for the attack to succeed, you are on the defensive.
So how does this translate into the world of Crypto? Well, if you are encrypting your messages you are making an attempt to protect them from the other guys. That is a defensive move. If you are attempting to decode the other guy's encrypted messages that is an offensive move. And there is a war going on here. One side may make a defensive move by deploying a new and hopefully improved Crypto system. The other side tries to counter this by upping their offensive game. One side typically has the advantage at any given point. But the "move - countermove" game goes on and on. It is commonly referred to in other contexts as an arms race.
I want to get at the question of whether we are striking the appropriate balance between offense and defense. And this question has been around for a long time. How much time and effort do you put into developing or enhancing the Crypto systems you use versus attempting to crack the other guy's Crypto systems? This question used to matter to ordinary people only at one remove. You usually had some investment in some army or another or in some government or another. So Crypto success for those you were invested in was a good thing and Crypto failure was a bad thing. Now the impact is more direct.
Recently we had a new computer virus outbreak. This was different. It was a "ransomware" attack. Just like other arms races virus attacks change over time. Originally a virus attack would wipe out data on your computer. Then virus attacks evolved into ones that stole data. Your credit card information (or military and diplomatic secrets) is very valuable if it can be gotten into the right hands. The value to the attackers of a successful ransomware attack is very direct. You pay them money.
And the core of the ransomware attack is Crypto. Your files get encrypted. Now if this was a movie or TV show at this point we would cut to a shot of one or more people frantically typing, typically onto laptops. This might be intercut with shots of photogenic arrays of computer screens or of worried people. All the while dramatic music would be thumping so we would know that something VERY IMPORTANT AND DRAMATIC was happening. But never fear. After not very long (we audience members get bored quickly) someone would shout something equivalent to "Eureka". The Crypto had been cracked and we were all saved. Happy endings all around.
But in the real world things didn't and don't go that way. Nobody cracked the virus. If you didn't send the ransom payment you never would be able to read the files that had been encrypted again. In short, the offense won and the defense lost. Why?
Looked at from another perspective this ransomware attack contains some good news. And the good news is "Crypto works". (That's something I have noted previously. See: http://sigma5.blogspot.com/2016/02/digital-privacy.html). So if Crypto works and (being the pedantic kind of guy I am I feel the need to repeat myself) it does, then why isn't it used more widely? And the answer to that question feeds directly into my thesis.
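One way to see that "Crypto works" is the one-time pad, the simplest system that is provably unbreakable. The sketch below (Python, with an invented message) XORs a message with a random key of the same length; without the key, every possible plaintext of that length is equally consistent with the ciphertext. (Real ransomware uses modern symmetric and public-key ciphers rather than a pad, but the practical lesson -- no key, no files -- is the same.)

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the corresponding key byte.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"the files you want back"
key = secrets.token_bytes(len(message))  # the attacker keeps this

ciphertext = xor(message, key)
recovered = xor(ciphertext, key)  # XORing twice with the same key undoes it

print(recovered == message)  # True -- but only because we held the key
```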
For a very long time the arms of the US government that deal in Crypto have chosen to invest a lot of effort in offensive Crypto and have criminally neglected defensive Crypto. Governments, including ours, keep deciding it's more fun to crack the other guy's systems than it is to make sure the other guy can't crack their own systems. They have convinced themselves that their own Crypto systems were unbreakable but that with the proper amount of effort the other guy's systems weren't. And more and more the arms of the US government have decided that literally any system that is not a US government system is an "other guy" system.
And there is a direct connection between the two. If everybody is using poor Crypto systems then it is much easier to crack them. Crypto systems have been cracked going all the way back to the Romans (and probably before). But somehow the fact that we have succeeded in cracking the other guy's systems (at least some of the time) does not lead to the obvious action of looking hard at our own systems.
There is a trap that governments have been falling into for millennia. "Our systems can't be cracked". And there is usually a good reason to believe this. There is a universal system for cracking Crypto systems. It is called the "brute force" approach and it consists of trying all the possibilities. Let's say that it takes a minute to try a possibility, a reasonable figure during the middle ages. Then if a person lives to be a hundred years old and never stops to eat or sleep they can try about fifty million possibilities in a lifetime. But let's say our system has a billion possibilities. Then it can't be cracked using a brute force approach. It was easy, even a thousand years ago, to come up with a Crypto system that allowed for a billion possibilities. So these systems were completely secure, right? Obviously not.
So what's the secret? The secret is what the British called a "crib", something a student would do to cheat on a test. The most obvious crib in the Crypto world is to steal the key. You now have not a billion possibilities to try but one. But cribs come in lots of different flavors. Let's say you could find something out or figure something out that reduces the possibilities from a billion to a thousand. Then the system can be cracked after less than 24 hours' worth of effort. Cribs that powerful are hard to come by. But cribs can be combined. And maybe they only reduce the list to ten thousand or a hundred thousand possibilities. That's still a big improvement. Governments tend to assume that they are crib-proof. But they rarely are. And the fact that they succeed in developing cribs with which to attack the other guy tends to not have the obvious effect, namely a thorough and careful review of their own Crypto systems.
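The arithmetic above is worth checking, and it checks out (a quick Python sketch using the same made-up numbers):

```python
# One try per minute, a hundred-year lifetime, no eating or sleeping.
tries_per_lifetime = 100 * 365 * 24 * 60
print(tries_per_lifetime)  # 52560000 -- "about fifty million"

# A billion-possibility system is therefore safe from brute force...
keyspace = 1_000_000_000
print(keyspace // tries_per_lifetime)  # 19 -- roughly 19 lifetimes of work

# ...until a crib cuts the keyspace down to a thousand possibilities.
cribbed_keyspace = 1_000
print(cribbed_keyspace / 60)  # ~16.7 hours: "less than 24 hours' worth"
```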
And the whole Enigma business with Bletchley Park and Magic and all the rest of it is a classic example of this. Lacking the appropriate cribs it turns out the Enigma machine couldn't be cracked. Enigma was used by many branches of the Nazi government. But messages were never cracked for many of those branches. There is a thing called "Cypher discipline". This is where you religiously follow all the proper procedures and protocols. Some Nazi departments were very careful and other departments were sloppy. But wait, there's more.
Bletchley was a British show but the Americans were heavily involved. And the Americans ran a parallel operation against the Japanese with considerable success. Again, some departments of the Japanese government were softer targets than others due in large measure to the degree of adherence to Cypher discipline. And one of the big beneficiaries of what was cracked was the US Navy. So did the Navy learn the obvious lesson and make sure they were using good Crypto and good Cypher discipline? Nope! The Japanese had a great deal of success cracking US Naval codes and using what they learned effectively.
So has anything changed since World War II? Yes! Things have gotten worse. Various Crypto responsibilities can be found in many parts of the US government. The NSA, officially the National Security Agency and unofficially "No Such Agency", is a big player in all this. And the NSA is all offense and no defense. It turns out that the basic code for the ransomware attack was stolen from the NSA. It is unclear whether the NSA developed it or just obtained it from elsewhere. But what they definitely did not do was notify Microsoft of the vulnerability the attack exploited so that a fix could be issued. Microsoft found out about the vulnerability when leakers posted an NSA list of vulnerabilities and the code that could be used to exploit them on the Internet. Microsoft immediately issued a fix but a lot of computers were left unprotected for one reason or another.
But wait, there's more. As I indicated above, there are lots of ways to do Crypto. For decades the NSA has seen it as their right to decide which systems people can use. And they want those systems to be easy for them to crack. Then some civilians came up with a system called RSA, which turns out to be secure in practice if no cribs are handy. And this was a Crypto system that the NSA could not control. (The government did standardize a pretty good conventional system, DES, at about the same time. But the NSA only signed off on DES after insisting that its key be shortened.)
And keeping good Crypto out of the hands of anybody but the US government has been a long standing policy of the US government, with the NSA often taking the lead. A couple of decades ago the "Clipper" computer chip was announced. Telephones and computers were supposed to use a Clipper chip to do their Crypto. But the Clipper came with a back door that the NSA, the FBI, and other government agencies could use. Fortunately, that proposal died quickly.
9/11 produced the USA Patriot Act. It in turn produced the most complete gag order in history. Agencies like the NSA and the FBI can ask you for any kind of data they want and you are forbidden from even disclosing that a request had been made. Companies like Google and the mobile phone companies were ordered to disgorge vast amounts of data about literally everyone. At the same time they were forbidden from even telling anyone about the existence of the order let alone its contents. This was all revealed by Edward Snowden. The Snowden revelations have caused these kinds of provisions to be dialed back but only to a modest extent. The main provisions are still in effect.
The FBI was in the news a few months back because they were asking Apple to hack their own phones. This is because newer versions of the iPhone use better and better Crypto to effectively keep the data on them private. Various government agencies, including but not limited to the FBI and the NSA, have repeatedly asked for legislation mandating back doors into consumer devices like phones. They have also asked for back doors into data centers run by Google, mobile phone companies, and others.
There is an obvious value in letting the appropriate agencies in the appropriate circumstances get access to the appropriate data. But it's the whole "appropriate" thing that is the problem. It turns out that you can't draw a bright line indicating where the boundary between appropriate and inappropriate should be. And even if you could the boundary is not a real boundary. If the appropriate agencies can get appropriate access then inappropriate agencies will also be able to get inappropriate access.
The news has been littered with these stories for the past few years. Credit card data gets stolen so routinely that it now hardly qualifies as news. And if the NSA can get into Iranian computers the North Koreans can get into the computes at Sony Pictures studio. And Russian hackers can get into the computers of the US State Department, campaign committees belonging to both the Democrats and the Republicans, and so on. Apparently the only place they couldn't get into was Hillary Clinton's home email server.
These systems could be much more secure. But various US government agencies have been doing what they can to keep them insecure. It is beneficial to these agencies for them to be able to get into the systems of other countries. But the cost is great because it means that our systems are vulnerable to other governments like Russia, China, and even the likes of Iran and North Korea. They are also vulnerable to criminals both domestic and international. It even means that our systems are vulnerable to amateurs interested in celebrity sex tapes, gossip, and the like. It's gotten to the point where even some kid who wants to cyberstalk another kid can break into a surprising number of places.
All of this is the cost of the policy pursued by so many in the government of keeping our online systems vulnerable. And the big problem is it is an unacknowledged cost. It affects us all in ways we notice and ways we don't. Is the benefit really worth the cost? I don't think so. Reasonable people may disagree with me. But the big problem is that almost nobody knows that this tradeoff is being made on out behalf. So they don't even know that it is a question that needs to be investigated.
And the two elite groups who were most interested in Crypto were the military and the diplomats. Both were interested in communicating reliably with their friends while keeping their enemies in the dark. And this led to a variety of systems. Simple systems just scrambled the order of the letters or substituted one letter for another. But by the Middle Ages the most common method was the Nomenclator. It consisted of a long list of words or phrases organized into two columns. The word or phrase in one column replaced the corresponding word or phrase in the other column. The system was clunky so it was mostly used by diplomats who had embassies that employed code clerks. The military, who needed systems they could use in the field under combat conditions, pretty much stuck with letter substitution schemes.
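The letter substitution schemes the military favored are simple enough to sketch in a few lines. This is purely illustrative (a toy, not anything the text claims any army actually fielded): the "key" is a shuffled alphabet, and anyone who knows it can reverse the substitution. Ciphers like this fall quickly to frequency analysis, which is exactly why Nomenclators and later machines displaced them.

```python
import random
import string

# Build a random letter-substitution table. The shuffled alphabet is
# the secret key; the seed is fixed only so this example is reproducible.
random.seed(42)
shuffled = list(string.ascii_uppercase)
random.shuffle(shuffled)
encode_table = str.maketrans(string.ascii_uppercase, "".join(shuffled))
decode_table = str.maketrans("".join(shuffled), string.ascii_uppercase)

plain = "ATTACK AT DAWN"
cipher = plain.translate(encode_table)     # encode: substitute each letter
recovered = cipher.translate(decode_table) # decode: reverse the substitution
print(cipher)
print(recovered)  # ATTACK AT DAWN
```

Note that the space passes through unchanged, which is one of many leaks of structure that makes this kind of system weak.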
The population of people who found Crypto a part of their life got wider with the introduction of the telegraph. Traveling representatives of companies needed to communicate over long distances and they didn't want competing companies to know what they were up to. So Nomenclators morphed into Telegraphic Codes. And there was another reason Telegraphic Codes became popular: they could save money. The coded message was cheaper to send than the "plain text", the term of art for the original message, because it was shorter. This got to be a hassle for the telegraph companies, so they ended up restricting people to using one of a small number of approved "Commercial Codes". The telephone eventually doomed all this.
And up to this point all the work was being done by people. This restricted the options to things people could reliably do in a reasonable amount of time and with a reasonable amount of effort. That all changed with the introduction of Crypto machines in the 1930s. The most famous of these is the Enigma machine used by the Nazis during World War II. Mechanical Crypto machines quickly evolved to become computer-based Crypto machines. But for a long time the use of Crypto was, with the exception of the Telegraphic Commercial Codes, restricted to the elites in general and the military and the diplomatic corps in particular.
That all changed when the general public got access to the Internet. By this time computers were very powerful and capable of implementing very powerful Crypto systems. And all of a sudden pretty much everybody used Crypto whether they knew it or not. You care whether your credit card transactions are secure and reliable or not. And that security and reliability depends critically on Crypto. Thus endeth the history lesson.
And so far I haven't said a word about the ostensible subject of this post. Here's where I start.
I am using the words "offensive" and "defensive" the way a military person would use them. If you are attacking the enemy you have gone on the offensive. If you are implementing measures to make it more difficult for the enemy to attack you, or for the attack to succeed, you are on the defensive.
So how does this translate into the world of Crypto? Well, if you are encrypting your messages you are making an attempt to protect them from the other guys. That is a defensive move. If you are attempting to decode the other guy's encrypted messages that is an offensive move. And there is a war going on here. One side may make a defensive move by deploying a new and hopefully improved Crypto system. The other side tries to counter this by upping their offensive game. One side typically has the advantage at any given point. But the "move - countermove" game goes on and on. It is commonly referred to in other contexts as an arms race.
I want to get at the question of whether we are striking the appropriate balance between offense and defense. And this question has been around for a long time. How much time and effort do you put into developing or enhancing your own Crypto systems versus attempting to crack the other guy's? Before, this question touched ordinary people only at one remove. You usually had some investment in some army or another or in some government or another. So Crypto success for those people you were invested in was a good thing and Crypto failure was a bad thing. Now the impact is more direct.
Recently we had a new computer virus outbreak. This was different. It was a "ransomware" attack. Just like other arms races virus attacks change over time. Originally a virus attack would wipe out data on your computer. Then virus attacks evolved into ones that stole data. Your credit card information (or military and diplomatic secrets) is very valuable if it can be gotten into the right hands. The value to the attackers of a successful ransomware attack is very direct. You pay them money.
And the core of the ransomware attack is Crypto. Your files get encrypted. Now if this was a movie or TV show at this point we would cut to a shot of one or more people frantically typing, typically onto laptops. This might be intercut with shots of photogenic arrays of computer screens or of worried people. All the while dramatic music would be thumping so we would know that something VERY IMPORTANT AND DRAMATIC was happening. But never fear. After not very long (we audience members get bored quickly) someone would shout something equivalent to "Eureka". The Crypto had been cracked and we were all saved. Happy endings all around.
But in the real world things didn't and don't go that way. Nobody cracked the virus. If you didn't send the ransom payment you never would be able to read the files that had been encrypted again. In short, the offense won and the defense lost. Why?
Looked at from another perspective this ransomware attack contains some good news. And the good news is "Crypto works". (That's something I have noted previously. See: http://sigma5.blogspot.com/2016/02/digital-privacy.html). So if Crypto works and (being the pedantic kind of guy I am I feel the need to repeat myself) it does, then why isn't it used more widely? And the answer to that question feeds directly into my thesis.
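The "Crypto works" point can be made concrete with a toy symmetric cipher. The sketch below is my own illustration, not how any real ransomware operates: it derives a keystream from SHA-256 of the key plus a block counter and XORs it with the data. The point it demonstrates is the one in the paragraph above: running the same operation with the key recovers the files; without the key, the encrypted bytes are just noise. Real systems use vetted ciphers like AES, never home-made constructions like this one.

```python
import hashlib
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with a keystream of SHA-256(key || block counter).
    Encryption and decryption are the same operation."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = data[i:i + 32]
        stream = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ s for b, s in zip(block, stream))
    return bytes(out)

key = secrets.token_bytes(32)       # the attacker keeps this
document = b"Quarterly report: all figures confidential."
locked = toy_cipher(key, document)  # what the victim is left with
unlocked = toy_cipher(key, locked)  # XOR twice with the same keystream
print(unlocked == document)         # True, but only with the key in hand
```

Without `key` there is no shortcut; that is the situation ransomware victims find themselves in.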
For a very long time the arms of the US government that deal in Crypto have chosen to invest a lot of effort in offensive Crypto and have criminally neglected defensive Crypto. Governments, including ours, keep deciding it's more fun to crack the other guy's systems than it is to make sure the other guy can't crack their own systems. They have convinced themselves that their own Crypto systems were unbreakable but that with the proper amount of effort the other guy's systems weren't. And more and more the arms of the US government have decided that literally any system that is not a US government system is an "other guy" system.
And there is a direct connection between the two. If everybody is using poor Crypto systems then it is much easier to crack them. Crypto systems have been cracked going all the way back to the Romans (and probably before). But somehow the fact that we have succeeded in cracking the other guy's systems (at least some of the time) does not lead to the obvious action of looking hard at our own systems.
There is a trap that governments have been falling into for millennia. "Our systems can't be cracked". And there is usually a good reason to believe this. There is a universal system for cracking Crypto systems. It is called the "brute force" approach and it consists of trying all the possibilities. Let's say that it takes a minute to try a possibility, a reasonable figure during the middle ages. Then if a person lives to be a hundred years old and never stops to eat or sleep they can try about fifty million possibilities in a lifetime. But let's say our system has a billion possibilities. Then it can't be cracked using a brute force approach. It was easy, even a thousand years ago, to come up with a Crypto system that allowed for a billion possibilities. So these systems were completely secure, right? Obviously not.
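The lifetime arithmetic in the paragraph above is easy to check: one guess per minute, a hundred years, no breaks for eating or sleeping.

```python
# One try per minute, a hundred years, no breaks.
minutes_per_year = 365 * 24 * 60           # 525,600 minutes in a year
lifetime_guesses = 100 * minutes_per_year  # about 52.6 million tries
keyspace = 1_000_000_000                   # a billion possibilities

print(f"guesses in a lifetime: {lifetime_guesses:,}")
print(f"fraction of keyspace covered: {lifetime_guesses / keyspace:.1%}")
```

A whole lifetime of nonstop guessing covers only about five percent of a billion-possibility keyspace, which is why such systems looked completely secure.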
So what's the secret? The secret is what the British called a "crib", something a student would do to cheat on a test. The most obvious crib in the Crypto world is to steal the key. You now have not a billion possibilities to try but one. But cribs come in lots of different flavors. Let's say you could find something out or figure something out that reduces the possibilities from a billion to a thousand. Then the system can be cracked after less than 24 hours' worth of effort. Cribs that powerful are hard to come by. But cribs can be combined. And maybe they only reduce the list to ten thousand or a hundred thousand possibilities. That's still a big improvement. Governments tend to assume that they are crib-proof. But they rarely are. And the fact that they succeed in developing cribs with which to attack the other guy tends to not have the obvious effect, namely a thorough and careful review of their own Crypto systems.
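The effect of a crib on the same one-guess-per-minute attacker can be tabulated directly:

```python
# Worst-case search time at one guess per minute, for keyspaces
# before and after cribs have whittled them down.
for possibilities in (1_000_000_000, 100_000, 10_000, 1_000):
    hours = possibilities / 60
    print(f"{possibilities:>13,} possibilities -> {hours:,.1f} hours worst case")
```

A billion possibilities is hopeless, but a crib that cuts the list to a thousand brings the worst case under seventeen hours, matching the "less than 24 hours" figure above. Even the weaker ten-thousand-possibility case is about a week of effort, well within reach of a motivated government.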
And the whole Enigma business with Bletchley Park and Magic and all the rest of it is a classic example of this. It turns out that, lacking the appropriate cribs, the Enigma machine couldn't be cracked. Enigma was used by many branches of the Nazi government, and for many of those branches messages were never cracked. There is a thing called "Cypher discipline". This is where you religiously follow all the proper procedures and protocols. Some Nazi departments were very careful. Others were sloppy, and it was the sloppy ones that supplied the cribs. But wait, there's more.
Bletchley was a British show but the Americans were heavily involved. And the Americans ran a parallel operation against the Japanese with considerable success. Again, some departments of the Japanese government were softer targets than others due in large measure to the degree of adherence to Cypher discipline. And one of the big beneficiaries of what was cracked was the US Navy. So did the Navy learn the obvious lesson and make sure they were using good Crypto and good Cypher discipline? Nope! The Japanese had a great deal of success cracking US Naval codes and using what they learned effectively.
So has anything changed since World War II? Yes! Things have gotten worse. Various Crypto responsibilities can be found in many parts of the US government. The NSA, officially the National Security Agency and unofficially "No Such Agency", is a big player in all this. And the NSA is all offense and no defense. It turns out that the basic code for the ransomware attack was stolen from the NSA. It is unclear whether the NSA developed it or just obtained it from elsewhere. But what they definitely did not do was notify Microsoft of the vulnerability the attack exploited so that a fix could be issued. Microsoft found out about the vulnerability when leakers posted an NSA list of vulnerabilities, and the code that could be used to exploit them, on the Internet. Microsoft immediately issued a fix but a lot of computers were left unprotected for one reason or another.
But wait, there's more. As I indicated above, there are lots of ways to do Crypto. For decades the NSA has seen it as their right to decide which systems people can use. And they want those systems to be easy for them to crack. Then some civilians came up with public-key Crypto, most famously a system called RSA, which turns out to be completely secure if no cribs are handy. And this was a Crypto system that the NSA could not control. Around the same time a civilian cipher developed at IBM was standardized as DES, a pretty good system that the NSA was able to influence but not suppress. Without that civilian work we likely wouldn't have gotten good public Crypto at all.
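The mechanics of RSA can be shown with textbook-sized numbers. This is a sketch only: the primes below are tiny and the resulting modulus can be factored instantly, whereas real RSA uses primes hundreds of digits long. The values are the standard small-number teaching example, not anything drawn from a deployed system, and the modular inverse via three-argument `pow` requires Python 3.8 or later.

```python
# Textbook RSA with deliberately tiny primes.
p, q = 61, 53
n = p * q                 # 3233: the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, chosen coprime to phi
d = pow(e, -1, phi)       # private exponent: e * d == 1 (mod phi)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(recovered)  # 65
```

The security rests on the fact that recovering `d` from the public `(e, n)` requires factoring `n`, which for real key sizes is infeasible without a crib.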
And this policy of doing their best to keep good Crypto out of the hands of anybody but the US government has been a long-standing policy of the US government with the NSA often taking the lead. A couple of decades ago the "Clipper" computer chip was announced. All computers were supposed to use a Clipper chip to do their Crypto. But the Clipper came with a back door that the NSA, the FBI, and other government agencies could use. Fortunately, that proposal died quickly.
9/11 produced the USA Patriot Act. It in turn produced the most complete gag order in history. Agencies like the NSA and the FBI can ask you for any kind of data they want and you are forbidden from even disclosing that a request has been made. Companies like Google and the mobile phone companies were ordered to disgorge vast amounts of data about literally everyone. At the same time they were forbidden from even telling anyone about the existence of the order, let alone its contents. This was all revealed by Edward Snowden. The Snowden revelations have caused these kinds of provisions to be dialed back, but only to a modest extent. The main provisions are still in effect.
The FBI was in the news a few months back because they were asking Apple to hack their own phones. This is because newer versions of the iPhone use better and better Crypto to effectively keep the data on them private. Various government agencies, including but not limited to the FBI and the NSA, have repeatedly asked for legislation mandating back doors into consumer devices like phones. They have also asked for back doors into data centers run by Google, mobile phone companies, and others.
There is an obvious value in letting the appropriate agencies in the appropriate circumstances get access to the appropriate data. But it's the whole "appropriate" thing that is the problem. It turns out that you can't draw a bright line indicating where the boundary between appropriate and inappropriate should be. And even if you could the boundary is not a real boundary. If the appropriate agencies can get appropriate access then inappropriate agencies will also be able to get inappropriate access.
The news has been littered with these stories for the past few years. Credit card data gets stolen so routinely that it now hardly qualifies as news. And if the NSA can get into Iranian computers, the North Koreans can get into the computers at Sony Pictures studio. And Russian hackers can get into the computers of the US State Department, campaign committees belonging to both the Democrats and the Republicans, and so on. Apparently the only place they couldn't get into was Hillary Clinton's home email server.
These systems could be much more secure. But various US government agencies have been doing what they can to keep them insecure. It is beneficial to these agencies for them to be able to get into the systems of other countries. But the cost is great because it means that our systems are vulnerable to other governments like Russia, China, and even the likes of Iran and North Korea. They are also vulnerable to criminals both domestic and international. It even means that our systems are vulnerable to amateurs interested in celebrity sex tapes, gossip, and the like. It's gotten to the point where even some kid who wants to cyberstalk another kid can break into a surprising number of places.
All of this is the cost of the policy pursued by so many in the government of keeping our online systems vulnerable. And the big problem is it is an unacknowledged cost. It affects us all in ways we notice and ways we don't. Is the benefit really worth the cost? I don't think so. Reasonable people may disagree with me. But the big problem is that almost nobody knows that this tradeoff is being made on our behalf. So they don't even know that it is a question that needs to be investigated.
Wednesday, October 12, 2016
Wikileaks Revisited
One of the earliest posts to this blog concerned WikiLeaks, the web site that posts classified information. Here's a link to the post: http://sigma5.blogspot.com/2010/11/wikileaks.html. It's a short post so I can unreservedly recommend you take a look at it.
As I said then "99% of everything that is classified is not classified because it needs to be. A large percentage of classification is CYA. Someone doesn't want the embarrassing stuff to leak out. Something is rotten in the state of Denmark and someone doesn't want the rottenness to be put on public display. An even greater source of unnecessary classification is bureaucratic." I'll leave it at that because I want to leave some reason for you to check out my previous post.
And in spite of the fact that the post was written almost six years ago, things have changed very little in the interim. And the little that has changed has changed for the worse. Back then I wrote "[i]t is still necessary for WikiLeaks to demonstrate that it is not just on some kind of anti-US jihad." WikiLeaks has not done that. Instead they have reinforced the case that all they do is engage in anti-US behavior.
And it's worse. The jihad is more narrowly targeted than that. They seem to be focused on embarrassing Democrats while leaving Republicans alone. My local newspaper has a story today about a recent trove of documents posted with the obvious intent of embarrassing Hillary Clinton and her allies. And WikiLeaks has been happy to publish material whose obvious source is the Russian government or Russian intelligence services.
Let me be clear. Putin is a demagogue who is actively hostile to what WikiLeaks purports to support and believe in. What the Hell does WikiLeaks think it is doing when it cooperates with these people? And we have a candidate who is running for President that seems to be a fan boy of Putin and Putin-style government. The WikiLeaks dumps seem to be aimed at supporting his candidacy.
When I posted my previous remarks WikiLeaks had not been around all that long. There seemed to be a pattern to their behavior (the previously mentioned anti-US slant) but it was too soon to say for sure. They have now been in business long enough to justify firm conclusions. WikiLeaks' primary mission seems to be to embarrass the US and, for the moment at least, to particularly embarrass the US Democratic Party.
I noted then that WikiLeaks was rumored to be sitting on a large trove of data that was about the US but not about the US government. If they actually had the data they were rumored to possess they chose not to publish it. There are bad governments all around the world. There are also far too many examples of good governments all around the world doing bad things. There are also many non-governmental entities all over the world that are engaged in bad behavior. It would be a good thing if more of this came to light. But WikiLeaks has made little or no effort to go after these other targets. I can't believe this is due to a lack of material. In fact, events have shown us that there is lots of other material available.
A recent example is the "Panama Papers" incident. Panama has very lax corporate governance laws. As a result many companies are registered in Panama so that they can be used to hide bad behavior. Early this year millions of documents from a Panamanian law firm specializing in this were leaked. The leaked information was quite revelatory and a useful contribution to public discourse. These documents were not leaked via WikiLeaks in spite of the fact that WikiLeaks was the obvious channel for this information.
The Edward Snowden NSA revelations were also not disclosed through WikiLeaks. Why? Because careful observers have noticed the strong anti-US and, judged by their inactions, pro-everybody-else bias of WikiLeaks.
I am still strongly of the opinion that there is far too much secrecy around. I was disappointed in 2010 with the actions of the Obama Administration in continuing or enhancing Bush Administration policies I disapprove of. I am still disappointed. But everybody does it. This is not to excuse the Obama Administration in particular, or the US more generally. It is to say that everybody's dirty laundry needs to be aired. First, there is a lot of dirt there that needs to be exposed. And secondarily, and ultimately equally importantly, it is important to be able to have the appropriate context within which to judge the actions of the Obama Administration and the US. Finally, it would be nice to generate some push back on unnecessary classification and unnecessarily delayed declassification.
And that's my segue into another topic, the Hillary Clinton emails. The discussion of this "controversy" totally lacks context. And it is rife with exaggeration and downright lies. Let's take a look at the actual facts. The most damning accusation comes from FBI director James Comey in a July 5 press release and accompanying public statement. He subsequently testified before Congress but the key elements did not change. (The press release can be found here: https://www.fbi.gov/news/pressrel/press-releases/statement-by-fbi-director-james-b-comey-on-the-investigation-of-secretary-hillary-clinton2019s-use-of-a-personal-e-mail-system).
According to Mr. Comey, 52 out of over 30,000 email chains contained information that was classified at the time the email was sent. How serious is this? Well, consider that about 2,000 additional emails were subsequently up-classified. This is when information that is initially considered unclassified is later classified by some bureaucrat at some agency. So how do we know what should be classified and what shouldn't? The short answer is "we don't". No individual or group of individuals knows with 100% certainty what should be classified.
Well, actually some bureaucrats think they know. In their opinion everything should be classified and should stay classified forever. And that's the world we live in. Secretary of State Clinton, a very busy person, was supposed to know that some person somewhere thought something was classified. And in this environment she was supposed to do her job. Secretary Clinton has since remarked that a number of the questionable emails concerned drones and drone strikes.
Is a drone strike a classified subject? You bet it is. To this day the US rarely publicly acknowledges that a strike occurred. And they certainly don't provide any details. ("Not publicly acknowledged" is bureaucratese for "it's classified".) And the program is run either by the CIA, a classified government agency, or highly classified parts of the Defense Department. There is very little about drones or drone strikes that is not considered classified by somebody. And a lot of the time one bureaucrat or another thinks that TOP SECRET, the highest level of classification, is the appropriate level of classification.
So what are you as a public official and representative of the US Government supposed to say and do when there is a strike and there are lots of pictures, video, etc. that provides absolute proof that the strike happened at a certain place at a certain time? This is a common situation in Afghanistan, Iraq, and a number of other places in the world. In many of these cases the US maintains total air superiority so there is no doubt as to which country is responsible for the drone strike.
Secretary of State Clinton (as she was at the time) is a public figure whose full time job is diplomacy. People, government officials, the press, random members of the general public, are going to ask her about drones in general, US drone strike policies and procedures, and specific drone strikes. Is she supposed to restrict her response to "no comment" or "I can't talk about that because it involves classified materials"? And if she does is she appropriately advancing the interests of the US and its allies, her job? If she had always gone the "no comment" route she would have been roundly and justifiably ridiculed by the very people who are now so exercised by her "sloppy" approach to classified material.
We have to depend on leaks for the most "damning" information supposedly contained in the emails. Three of them, we are told, contained pictures that included a "c" in the caption. Apparently this indicated that the image is classified. Seriously? Apparently so but still ridiculous, in my opinion. And, again going by leaks (because the base material is, you know, classified, but no one is going after the leakers) these apparently classified pictures were not at the top of the email. They were buried somewhere in the "chain".
We are all familiar with email chains. Someone sends an email. Then someone sends a reply that contains the original email. Then someone sends a reply to the reply and it contains both the reply and the original email. And so on. This is a common situation and most people most of the time do the same thing. We do NOT review the whole chain. We just review the email on the top of the chain. Oh, occasionally it will be necessary to dive down the chain for something. That's the justification for the chain.
And this behavior of only paying attention to the top of the email is what people using smartphones almost always do. You will be less than totally surprised to know that I sometimes write long emails. Now I am NOT talking about a chain, just a single long email. And I have long since lost count of the number of times it has become obvious that the person replying did not read the entire email.
And this is most common with people who use a smartphone to deal with their email. They pay attention to what's on the screen of their phone and rarely scroll down to see what else might be there. We know that the most common method Secretary of State Clinton used for dealing with emails was by using her Blackberry. So in all likelihood she never saw the pictures because they were off screen. And she most likely did not see the "c" that indicated they were classified.
I am just having a lot of trouble getting exercised about this. If there was anything that was really serious in any of the "inappropriately handled" emails we would know about it because someone would have leaked it in an effort to damage Ms. Clinton. It is telling that "charges" are based on the level of classification of one or another piece of material and not on the contents. This is the kind of technically wrong behavior that conservative accusers are exercised about when they start talking about "political correctness gone awry". Is the behavior a breach of the letter of the law or regulation? Yes. Is it a serious or important breach? No.
And then there is FBI Director Comey. Mr. Comey first attracted widespread attention as a deputy special counsel to the Senate Whitewater Committee. That operation was famous for a number of things. But one of them was that it leaked like a sieve. All investigations and internal deliberations are supposed to be kept completely secret until they are presented in public sessions of the committee. But the Whitewater investigation leaked prolifically. Apparently key people had absolutely no respect for the rules of secrecy they were supposed to maintain.
The Committee operated for years. The rampant leaking was noticed quickly and the committee was made aware of the problem. But the leaking never stopped. It didn't even slow down. And the leaking was biased strongly against President Bill Clinton's interests. So it was politically motivated. During this period Mr. Comey, who must have known what was going on and why, could have done the right thing. He could have exposed who was leaking or at least resigned. He did neither. And he was eventually rewarded by the George W. Bush administration with the post of Deputy Attorney General, and later with the directorship of the FBI.
Mr. Comey's remarks on Secretary Clinton's emails fall into two broad categories. Those that are essentially opinion and those which are grounded in regulations, policy, and law. The damning part of what he said all fell into the opinion part of what he had to say. When it came to regulations, law, and policy he was much more measured and had much more positive things to say about Secretary Clinton's actions. This behavior, going all in on the political stuff and taking a measured approach on the legal side, was entirely predicted by his history.
So what do we have? We have an environment where vast overclassification is the norm. And when bureaucrats are put into the spotlight their instinctive, and from their perspective reasonable, response is to classify and up the security level of anything for which there is the tiniest shred of justification. So they did. And a political hack makes a lot of insinuations before rendering a "not guilty" verdict. And the press covers all this ad nauseam because that's what they do. And, because no one sees it as in their interest or as their job (hello press!) to provide context, no context is provided.
If Mr. Trump's remarks about women are "locker room talk" then Ms. Clinton's email activities are the actions of a saint.
As I said then "99% of everything that is classified is not classified because it needs to be. A large percentage of classification is CYA. Someone doesn't want the embarrassing stuff to leak out. Something is rotten in the state of Denmark and someone doesn't want the rottenness to be put on public display. An even greater source of unnecessary classification is bureaucratic." I'll leave it at that because I want to leave some reason for you to check out my previous post.
And in spite of the fact that the post was written almost six years ago, things have changed very little in the interim. And the little that has changed has changed for the worse. Back then I wrote "[i]t is still necessary for WikiLeaks to demonstrate that it is not just on some kind of anti-US jihad." WikiLeaks has not done that. Instead they have reinforced the case that all they do is engage in anti-US behavior.
And it's worse. The jihad is more narrowly targeted than that. They seem to be focused on embarrassing Democrats while leaving Republicans alone. My local newspaper has a story today about a recent trove of documents posted with the obvious intent of embarrassing Hillary Clinton and her allies. And WikiLeaks has been happy to publish material whose obvious source is the Russian government or Russian intelligence services.
Let me be clear. Putin is a demagogue who is actively hostile to what WikiLeaks purports to support and believe in. What the Hell does WikiLeaks think it is doing when it cooperates with these people? And we have a candidate who is running for President that seems to be a fan boy of Putin and Putin-style government. The WikiLeaks dumps seem to be aimed at supporting his candidacy.
When I posted my previous remarks WikiLeaks had not been around all that long. There seemed to be a pattern to their behavior (the previously mentioned anti-US slant) but it was too soon to say for sure. They have now been in business long enough to justify firm conclusions. WikiLeaks' primary mission seems to be to embarrass the US and, for the moment at least, to particularly embarrass US Democratic party.
I noted then that WikiLeaks was rumored to be sitting on a large trove of data that was about the US but not about the US government. If they actually had the data they were rumored to possess they chose not to publish it. There are bad governments all around the world. There are also far too many examples of good governments all around the world doing bad things. There are also many non-governmental entities all over the world that are engaged in bad behavior. It would be a good thing if more of this came to light. But WikiLeaks has made little or no effort to go after these other targets. I can't believe this is due to a lack of material. In fact, events have shown us that there is lots of other material available.
A recent example is the "Panama Papers" incident. Panama has very lax corporate governance laws. As a result many companies are registered in Panama so that they can be used to hide bad behavior. Early this year millions of documents from a Panamanian law firm specializing in this were leaked. The leaked information was quite revelatory and a useful contribution to public discourse. These documents were not leaked via WikiLeaks in spite of the fact that WikiLeaks was the obvious channel for this information.
The Edward Snowden NSA revelations were also not disclosed through WikiLeaks. Why? Because careful observers have noticed the strong anti-US and, judged by their inactions, pro-everybody-else bias of WikiLeaks.
I am still strongly of the opinion that there is far too much secrecy around. I was disappointed in 2010 with the actions of the Obama Administration in continuing or enhancing Bush Administration policies I disapprove of. I am still disappointed. But everybody does it. This is not to excuse the Obama Administration in particular, or the US more generally. It is to say that everybody's dirty laundry needs to be aired. First, there is a lot of dirt there that needs to be exposed. Second, and ultimately just as important, we need the appropriate context within which to judge the actions of the Obama Administration and the US. Finally, it would be nice to generate some push back on unnecessary classification and unnecessarily delayed declassification.
And that's my segue into another topic, the Hillary Clinton emails. The discussion of this "controversy" totally lacks context. And it is rife with exaggeration and downright lies. Let's take a look at the actual facts. The most damning accusation comes from FBI director James Comey in a July 5 press release and accompanying public statement. He subsequently testified before congress but the key elements did not change. (The press release can be found here: https://www.fbi.gov/news/pressrel/press-releases/statement-by-fbi-director-james-b-comey-on-the-investigation-of-secretary-hillary-clinton2019s-use-of-a-personal-e-mail-system).
According to Mr. Comey, 52 out of over 30,000 email chains contained information that was classified at the time the email was sent. How serious is this? Well, consider that about 2,000 additional emails were subsequently up-classified. This is when information that is initially considered unclassified is later classified by some bureaucrat at some agency. So how do we know what should be classified and what shouldn't? The short answer is "we don't". No individual or group of individuals knows with 100% certainty what should be classified.
Well, actually some bureaucrats think they know. In their opinion everything should be classified and should stay classified forever. And that's the world we live in. Secretary of State Clinton, a very busy person, was supposed to know that some person somewhere thought something was classified. And in this environment she was supposed to do her job. Secretary Clinton has since remarked that a number of the questionable emails concerned drones and drone strikes.
Is a drone strike a classified subject? You bet it is. To this day the US rarely publicly acknowledges that a strike occurred. And they certainly don't provide any details. ("Not publicly acknowledged" is bureaucratese for "it's classified".) And the program is run either by the CIA, a classified government agency, or by highly classified parts of the Defense Department. There is very little about drones or drone strikes that is not considered classified by somebody. And a lot of the time one bureaucrat or another thinks that TOP SECRET, the highest level of classification, is the appropriate level.
So what are you, as a public official and representative of the US Government, supposed to say and do when there is a strike and there are lots of pictures, video, etc. that provide absolute proof that the strike happened at a certain place at a certain time? This is a common situation in Afghanistan, Iraq, and a number of other places in the world. In many of these cases the US maintains total air superiority so there is no doubt as to which country is responsible for the drone strike.
Secretary of State Clinton (as she was at the time) is a public figure whose full time job is diplomacy. People, government officials, the press, random members of the general public, are going to ask her about drones in general, US drone strike policies and procedures, and specific drone strikes. Is she supposed to restrict her response to "no comment" or "I can't talk about that because it involves classified materials"? And if she does is she appropriately advancing the interests of the US and its allies, her job? If she had always gone the "no comment" route she would have been roundly and justifiably ridiculed by the very people who are now so exercised by her "sloppy" approach to classified material.
We have to depend on leaks for the most "damning" information supposedly contained in the emails. Three of them, we are told, contained pictures that included a "c" in the caption. Apparently this indicated that the image is classified. Seriously? Apparently so but still ridiculous, in my opinion. And, again going by leaks (because the base material is, you know, classified, but no one is going after the leakers) these apparently classified pictures were not at the top of the email. They were buried somewhere in the "chain".
We are all familiar with email chains. Someone sends an email. Then someone sends a reply that contains the original email. Then someone sends a reply to the reply and it contains both the reply and the original email. And so on. This is a common situation and most people most of the time do the same thing. We do NOT review the whole chain. We just review the email on the top of the chain. Oh, occasionally it will be necessary to dive down the chain for something. That's the justification for the chain.
And this behavior of only paying attention to the top of the email is what people using smartphones almost always do. You will be less than totally surprised to know that I sometimes write long emails. Now I am NOT talking about a chain, just a single long email. And I have long since lost count of the number of times it has become obvious that the person replying did not read the entire email.
And this is most common with people who use a smartphone to deal with their email. They pay attention to what's on the screen of their phone and rarely scroll down to see what else might be there. We know that the most common method Secretary of State Clinton used for dealing with emails was by using her Blackberry. So in all likelihood she never saw the pictures because they were off screen. And she most likely did not see the "c" that indicated they were classified.
I am just having a lot of trouble getting exercised about this. If there was anything really serious in any of the "inappropriately handled" emails we would know about it, because someone would have leaked it in an effort to damage Ms. Clinton. It is telling that the "charges" are based on the level of classification of one or another piece of material and not on the contents. This is the kind of technically wrong behavior that conservative accusers get exercised about when they start talking about "political correctness gone awry". Is the behavior a breach of the letter of the law or regulation? Yes. Is it a serious or important breach? No.
And then there is FBI Director Comey. Mr. Comey first attracted widespread attention as a deputy special counsel to the Senate Whitewater Committee. That operation was famous for a number of things. But one of them was that it leaked like a sieve. All investigations and internal deliberations were supposed to be kept completely secret until they were presented in public sessions of the committee. But the investigation leaked prolifically. Apparently key people had absolutely no respect for the rules of secrecy they were supposed to maintain.
The Committee operated for years. The rampant leaking was noticed quickly and the committee was made aware of the problem. But the leaking never stopped. It didn't even slow down. And the leaking was biased strongly against President Bill Clinton's interests. So it was politically motivated. During this period Mr. Comey, who must have known what was going on and why, could have done the right thing. He could have exposed who was leaking or at least resigned. He did neither. He was subsequently made Deputy Attorney General by the George W. Bush administration and was eventually rewarded with the directorship of the FBI.
Mr. Comey's remarks on Secretary Clinton's emails fall into two broad categories. Those that are essentially opinion and those which are grounded in regulations, policy, and law. The damning part of what he said all fell into the opinion part of what he had to say. When it came to regulations, law, and policy he was much more measured and had much more positive things to say about Secretary Clinton's actions. This behavior, going all in on the political stuff and taking a measured approach on the legal side, was entirely predicted by his history.
So what do we have? We have an environment where vast overclassification is the norm. And when bureaucrats are put into the spotlight their instinctive, and from their perspective reasonable, response is to classify, and to raise the security level of, anything for which there is the tiniest shred of justification. So they did. And a political hack makes a lot of insinuations before rendering a "not guilty" verdict. And the press covers all this ad nauseam because that's what they do. And, because no one sees it as in their interest or as their job (hello press!) to provide context, no context is provided.
If Mr. Trump's remarks about women are "locker talk" then Ms. Clinton's email activities are the actions of a saint.
Wednesday, April 6, 2016
Positive Identification
"Who are you?"
"Jane Doe."
"Prove it!"
Some variation of the above dialog is now a common part of our lives. It is frequently boiled down to "Show me your picture ID." The picture ID contains a name, a picture, and typically other information. The name provides the answer to the question. The rest of the information on the ID and the fact that you possess it provides the proof. The connection between you and the ID is provided by the picture. Presumably you and your picture can be compared to see if there is a match.
There are variations. It is becoming more and more common for your smartphone to stand in for your picture ID. And the degree to which the "proof" actually validates your identity varies. Bartenders just want to know if you are old enough to drink legally. The TSA wants to be really sure you are not a terrorist. And then there is the sad situation with which the title of this post is most commonly associated. Someone may need to confirm the identity of a deceased person.
With the background established let's look at the process of positive identification as it was, as it is, and as it will soon be. The times they are a changing. Let's start with the "was" part and for that I want to go back a thousand years.
A thousand years ago almost everyone lived on a small farm or in a small village. Almost everyone farmed, fished, or was otherwise engaged in the process of growing and harvesting food. And at the time almost everyone was illiterate. Paper had not yet come into use in the West. The alternatives that existed at the time (e.g. parchment) were all extremely expensive and only available in tiny quantities. So in that environment how were people identified?
Most people spent their entire lives within a few miles of where they were born. Everyone knew everyone else in the neighborhood by sight. You saw people at home or on market days or at feast day events. And you saw them for their entire lifetime. At some point a woman would be pregnant. Then she would show up with a small child. The child would grow up, become a parent and die. And the community observed all this. Identification was not absolute but it was good enough for the situation. You knew who farmed what piece of land and who their children were.
And frankly positive identification was not that important. People were poor so they had few possessions and even fewer valuable ones. Land ownership was mostly governed by the "possession is nine points of the law" rule. There were no accurate surveys and all the land was probably technically owned by the local feudal lord anyhow.
Oh, there were foreigners. Someone from outside would occasionally wander by but this did not happen often. If the wanderer was a trader how much did it matter who they were? They showed up, traded, and moved on. The trade goods were important. The identity of the trader was not. The other group who would show up occasionally were members of the power elite. It might be a soldier or a priest. If a particular soldier was the top dog he became the local feudal lord. Other soldiers either worked for him or there was a power struggle. Eventually someone came out on top and the others ended up dead, part of the lord's operation, or they moved on. The feudal lord was in a position to assert his authority by means of his ability to kill or maim you.
So the locals tended to take him at his word as to who and what he was. The niceties of the law and whose authority was more legitimate tended to be less important than who won the power struggle. The other source of authority and power was the religious authorities, like priests. If there was a power struggle between religious factions the rules of engagement were different (less blood, more politicking) but who stayed and who was pushed out mostly depended on who was supported by the local feudal lord. And again, the peasantry tended to take whoever won at their word. So in this period positive identification had little real practical meaning.
Eventually paper got invented, the technology for making it cheaply spread broadly, and it became practical to keep paper records. This ushered in the era when marriages, births, and deaths started to become routinely recorded. For a long time the process was haphazard. A record might be maintained at the local church or in a family bible. How reliable was this information? One assumed that it was fairly reliable. But this assumption rested to a great extent on past practice.
Usually people in the community were around to testify to the accuracy of the information, at least until enough time had passed that all the eye witnesses had died. After that inertia set in. Records that had been accepted in the past continued to be accepted. Beyond that, old records came to be seen as accurate records, mostly because they were old. And it was certainly possible for a record to be fudged. A marriage could retroactively be added to a church register or a family bible. And the same process could be used to erase or alter entries. People went with these records as much because they were the only practical option as for any other reason.
Not that long ago governments started taking over responsibility. They started issuing birth certificates, wedding licenses, and certificates of death. And more people were born in a hospital with a physician in attendance. But in a certain sense the foundation the process rested on had not changed. Someone filled out a form. The information was only as dependable as its source, and its source was some person. The person might be the mother or the doctor or a hospital employee. And, in the case of the doctor or hospital employee, they might be relying on some stranger for the information they were entering. I suspect that most of the time little effort was made to corroborate it.
The piece of paper has now been replaced by a computer screen and the data no longer resides on a piece of paper in a file cabinet. It now resides in a computer file somewhere. And this highlights a fundamental problem. It's just data. And more problematic than that is this. How do we know that a particular birth certificate is actually the birth certificate of a particular person? The surprising answer is that we don't. But that can, and I expect that it will, change in the near future.
We have all been exposed to this sort of thing due to the "birther" controversy. A lot of people but most notably Donald Trump have spent a lot of time and gotten a lot of media coverage contending that President Obama was not born in Hawaii in 1961. A lot of their argument is nonsense. There is absolutely no doubt that a birth certificate was issued at the time and place the President contends it was.
An argument could be made that he is not the child that belongs to that birth certificate, that there were, in fact, two children. This argument is logically consistent but it is not the argument that birthers make. And there is a large body of evidence that there is only one child and he is that child. In fact the connection between this birth certificate and the President is much stronger than the connection between Donald J. Trump and any birth certificate. So the birther argument, such as it is, is about the wrong thing. A fundamental question exists. How do you definitively connect any person with any birth certificate? And the answer is that in almost all cases you can't.
In some places at some times a footprint (like a fingerprint but of the bottom of a foot instead of the fingertips) of the child was routinely put on the back of birth certificates. With such a birth certificate you can take a matching print of the foot of the individual in question and do a "fingerprint analysis" to see if it matches. If it does then you can definitively match a specific birth certificate to a specific individual.
But I have never heard of this comparison being attempted. As far as I can tell the "footprint on the birth certificate" procedure was never common and, in the cases where it was done, I know of no instances where a match was attempted later. It probably happened but it was never common enough to feature in crime fiction, for instance. It is easy to imagine Erle Stanley Gardner plugging it into a Perry Mason novel but he never did. The fact that it was never a common crime fiction motif is evidence that it was never a common practice.
I have both a passport (expired) and an "enhanced" driver's license. Both of these require a positive identification. Having been there and done that I know the drill. I show up, fill out some paperwork, provide a picture (or get one taken), and provide "positive identification". What's positive identification? Why a birth certificate, of course. So I hand over a piece of paper for examination by a bureaucrat. But what's the piece of paper? In my case I have the original actual birth certificate that was issued at the time of my birth and it's a pretty ordinary looking piece of paper. That's how it was done in the era that preceded the computerization of everything.
But I actually have two "birth certificates". One of them is the aforementioned piece of paper. The other is a "certified copy" of the piece of paper. It is something called a Photostat. A Photostat, as you can guess, is just a photograph, well actually a print of a photographic negative. The only thing special about it is that it is embossed with an official stamp and an "I attest that this is an authentic copy of . . ." statement followed by the signature of some obscure bureaucrat.
Let's say I wanted someone to impersonate me. I could keep my original birth certificate and give them the Photostatic version. They could use that as the basis of a scheme to identify themselves as me. So there could be two official me's running around. And, in fact, a minor variation of this used to be commonly done.
People who wanted to change their identity would search newspaper death notices for someone who was born about the same time they were but who had died young. They would then write the proper authority asking for a Photostatic copy of the birth certificate for this person. At the time this was a routine bureaucratic procedure that did not require any kind of special documentation. When it arrived they would then use the Photostatic birth certificate as the base on which to build up an entire false identity for themselves. If they picked their dead person properly their chance of being caught out was infinitesimally small.
Spies, crooks, people on the run for political reasons, etc. did this routinely in the '60s. You could even find "how to" manuals if you knew the right people. One thing that helped then was that most people did not get a Social Security card and number until they entered the job market in their middle to late teens. If you picked someone who had died at ten, say, your chances of fooling the Social Security Administration into issuing a card were very good. It is now much harder to pull this kind of thing off because we are all now surrounded by a much larger more complicated web of interconnection than we used to be.
Children now get issued a Social Security number at birth, for instance. And as big data spreads its tentacles it becomes harder and harder to pull something like this off without setting off an alarm somewhere. The federal witness protection people can still do it, but that's because they can change Social Security and other government records. None of this changes the fact that there is really no completely reliable way to positively connect a specific birth certificate to a specific person. But I believe that is going to change in the near future.
There now exists something called CODIS, the Combined DNA Index System. This is the database that is used to do DNA matches in crime scene and other law enforcement (e.g. missing persons) situations. The database contains over 12 million entries and continues to grow rapidly. There is a considerable amount of duplication so it doesn't represent that many distinct individuals, but the number of distinct individuals is somewhere in the millions. Each entry contains enough information to identify a single specific individual with a very high degree of confidence. Does that mean entries contain complete DNA sequences? Far from it. Instead each entry contains 14 numbers. Thirteen of the numbers are based on something called an STR, a Short Tandem Repeat. The specifics are complicated but the idea is simple.
An STR "locus" is a very short piece of DNA that varies wildly from person to person. There are a bunch of variations possible for each STR locus. The database contains the specific variation number found in the DNA of the entry for each of the 13 STR loci. The 14th number is based on a person's amelogenin gene. It has been included because the version of the amelogenin gene an individual has tells us whether that person is male or female.
Only a few percent of the population has a specific version of a specific STR locus. So different individuals are likely to have a different variation of the first STR locus. But they could just by luck have the same variation. But do they also have the same variant in the case of the second locus? Here too it is very unlikely that two different people have the same variant but it is possible. And so it goes. Scientists have done the math and the likelihood that two different people who are not identical twins would have the same variant of all thirteen STR loci is a really tiny number. It varies from case to case but it is unlikely that two non-twin individuals on Earth have the same variant of all thirteen STR loci. And just to decrease the chances even more there is a move afoot to add several more STR loci to the standard list.
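The arithmetic behind that claim is just repeated multiplication: if each locus match is independent and individually unlikely, the combined probability shrinks fast. Here is a minimal sketch; the per-locus frequencies are invented for illustration and are not real population data.

```python
# Toy random-match-probability calculation for a 13-locus STR profile.
# The per-locus genotype frequencies below are made-up illustrative
# numbers; real forensic work uses published population allele
# frequency tables.
locus_genotype_freqs = [
    0.08, 0.05, 0.11, 0.06, 0.09, 0.07, 0.10,
    0.04, 0.08, 0.06, 0.05, 0.09, 0.07,
]  # one frequency per STR locus (13 loci)

random_match_probability = 1.0
for freq in locus_genotype_freqs:
    # Loci are treated as independent, so probabilities multiply.
    random_match_probability *= freq

print(f"Chance an unrelated person matches all 13 loci: "
      f"about 1 in {1 / random_match_probability:,.0f}")
```

Even with these modest single-locus frequencies of 4 to 11 percent, the combined probability lands around one in a quadrillion, which is why a full 13-locus match is treated as effectively unique among non-twins.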
It turns out that the amount of DNA in all fourteen loci used in this process is a tiny fraction of your whole genome. It's way, way, way less than 1%. But it is enough to get the job done, namely deciding whether two DNA samples come from the same person. And the basic technology for this was developed more than a decade ago. In the meantime anything having to do with DNA has gotten a lot cheaper.
The original project to sequence the entire DNA of a single individual cost more than 3 billion dollars and took about a decade. Now the complete DNA of a single individual can be done for about 10 thousand dollars and it's getting cheaper every year. Scientists think the cost will drop to below a thousand dollars within the next few years. And that's what it costs to sequence everything. The cost to sequence enough DNA to tell one person from another costs way less than that and that cost is also dropping like a rock. And of equal importance the size of the gadget that does the CODIS sequencing is also getting smaller and smaller. And that opens up a lot of possibilities.
We as a society have been fighting over privacy for a long time now. Before the Revolutionary War, colonists decided they didn't like British soldiers searching people's homes any time they wanted to. They complained about it in the Declaration of Independence. After the War the US adopted the Fourth Amendment, outlawing "unreasonable searches and seizures".
When I was younger we were fighting the Cold War. The USSR was an "authoritarian dictatorship". The Nazis before them were also an authoritarian dictatorship. Both regimes were famous for requiring everybody to carry "papers" that had to be produced at any time and place any official wanted to examine them. So, since we were the good guys, we were all in favor of the opposite. Our citizens were able to move about freely and were not under any obligation to produce their papers. It was a point of differentiation between us and them. "Only authoritarian dictatorships require law abiding people to always carry identification documents as they go about their ordinary business."
Well, times have changed. The USSR is no more so apparently we no longer need to differentiate the behavior of our government from that of authoritarian dictatorships. It seems that we are now all supposed to be afraid of terrorists in our midst. And that means anyone who is suspicious (not a well dressed white person) had better have their papers on them at all times. And besides terrorists there is the ever present danger of rapist Mexicans or whoever else fits the "looks suspicious" profile. I am going to ignore the issue of whether this change is a good thing or a bad thing. Instead I am going to focus on the technicalities of how to positively identify people.
As I have discussed extensively above, the birth certificate is the foundation of identity for US born individuals. There is an elaborate system in the US for dealing with the foreign born that I am not going to get into. I will just note that in many cases it often ends up coming back to a birth certificate for these people too and move on. And, as I have also extensively elaborated on above, there is no way currently to definitively tie a specific individual to a specific birth certificate. And by now I think I have telegraphed where I am going pretty clearly. The thing that could tie the two together is CODIS style DNA information.
There is no technological impediment to doing this now. A sample sufficient to the task is easily obtained from a newborn. Blood works and only a drop is necessary. And the equipment needed to take the necessary measurements is relatively inexpensive and the process is relatively quick. So it is completely possible to CODIS characterize every newborn at birth. (As a side note it is also easy in most cases to CODIS characterize the mother and, if he is handy, the father at the same time.) And the amount of data is modest so it could easily be added to the birth certificate computer record. Once this is routinely done and some time has passed it becomes a simple process to prove that a specific individual is the one connected to a specific birth certificate. You just draw a drop of blood, run it through the CODIS process and see if the results match the information in the birth certificate record. None of this is beyond our current technical capability.
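The matching step itself would be trivial once the data existed. The sketch below assumes a birth certificate record that stores the variant number observed at each STR locus plus the amelogenin sex marker; the locus names and variant numbers are hypothetical placeholders, not real CODIS values.

```python
# Sketch of checking a fresh DNA sample against a birth certificate
# record, assuming each record stores one variant number per STR
# locus plus an amelogenin-based sex marker. All names and values
# here are illustrative placeholders.

def profiles_match(record: dict, sample: dict) -> bool:
    """True only if both profiles cover the same loci and every
    stored variant agrees exactly."""
    return record.keys() == sample.keys() and all(
        record[locus] == sample[locus] for locus in record
    )

birth_record = {"STR1": 7, "STR2": 12, "STR3": 9, "AMEL": "XY"}
new_sample   = {"STR1": 7, "STR2": 12, "STR3": 9, "AMEL": "XY"}

print(profiles_match(birth_record, new_sample))  # prints True
```

A real system would of course allow for typing errors and degraded samples, but the core comparison is nothing more than this exact-match test over a handful of numbers.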
But it is currently beyond our political capability. People do not want to be in the CODIS database. Part of this is due to the association between the CODIS database and criminality. But a lot of people see it as an invasion of privacy they are unwilling to put up with. They can be convinced to change their minds if there is great need, say a loved one is missing. But currently every state has policies that limit who goes into the CODIS database. Not even all criminals or suspects go in now. The details vary from state to state. Some states are restrictive and CODIS only a relatively small number of people. Others apply a broad brush and CODIS many more. But all states prohibit adding people without cause.
And the CODIS database is not the only DNA database in existence. People sign up with 23andme or other similar companies that do DNA analysis. The company tells them, for instance, where their ancestors are from. Various groups also collect DNA information for a number of different scientific reasons. But both the commercial and the scientific operations are careful to not sequence the DNA loci that CODIS uses. They just don't want to get tangled up in criminal investigations. And the people whose DNA ends up in these other databases like it that way.
But let me emphasize that this is a decision that is made for non-technical reasons. Companies like 23andme try to retain the original sample so that it can be reanalyzed as technology advances. So they could easily reanalyze the samples they still have and sequence the CODIS loci. The sequencing they already do is much more extensive than what the CODIS process requires. And if they did this their database could be used for CODIS-compatible searches. The number of people whose DNA could be CODIS matched would immediately jump substantially. But this is not really necessary. There is already a strong trend in place to keep expanding the CODIS pool. It is partly a result of technological considerations. It keeps getting quicker, cheaper, and easier to CODIS samples. And the people that run CODIS type databases keep coming up with more and more reasons to include more and more people in their collection programs.
I would think that intelligence agencies like the CIA would want to CODIS their employees and contractors. And how about soldiers? And how about law enforcement people? And, on the other side, how about foreigners entering our country? And how about people busted for minor offenses like speeding tickets, or people involved in divorces, or people filing for a business license, or people involved in food preparation, or, or, or? As the ease with which the process can be performed goes up and the cost comes down, the strength of the argument necessary to justify including an additional group gets less and less. And as this trend continues, at some point you will have twenty or thirty percent of the entire population in the database. At that point you might as well just put everyone in.
Consider that many crimes now go unsolved. There is DNA evidence available in many of these cases but it doesn't match any entries in the current CODIS database. If we had CODIS coverage of the entire population then it would go some way toward increasing the percentage of crimes that do get solved. This higher solution rate should lower the overall crime rate, right? And isn't lowering the crime rate a laudable goal? That is only the most obvious potential benefit to CODISing everybody. Other potential benefits are easy to come up with. Instead of listing them let me extrapolate a little ways into the future.
When I was younger pretty much all small transactions (i.e. buying a cup of coffee) were done with cash. Then people started using debit cards instead. There are now a lot of people who carry only a small amount of cash around. And as I write this we are transitioning to an even newer method, paying with our smartphones. Today it is rarely used (except at Starbucks). But that is because there are some kinks that need to be worked out. Not all smartphones work at all stores all the time. That's mostly because we have dueling incompatible payment systems fighting it out. And for business reasons each system makes sure that it is incompatible with any of the other systems. At some point that competition between systems will be made to stop. Then people will be able to use one application on whatever phone they like to buy stuff from whoever they want to. But that puts the identification issue front and center.
The simplest thing from a user standpoint is to always leave your phone unlocked. And far too many people do this because dealing with the security system is bothersome. But Apple came up with a trick. You put your thumb in the right place and the phone can validate your thumbprint. This can be done almost instantaneously. And this approach is now being copied by the other smartphone makers. I expect it to be universal within a few years. But I suspect that the thumbprint scheme is not really that secure. The phone only sees part of your thumb and in poor conditions. The vendor (e.g. Apple) does not want a bunch of false negatives (you put your thumb on your phone but it doesn't okay you) so I suspect that the phone calls anything that is even vaguely close a match.
But let's fast forward a few years. Currently the easiest way to do a CODIS analysis is with a drop of blood. But with a lot of effort even very tiny amounts of DNA can sometimes be used. In ideal circumstances the tiny amount of DNA that ends up in some fingerprints is enough. And it turns out that there are lots of cells on the surface of your skin that contain your DNA. (That's where the fingerprint DNA comes from.) These cells can be collected and processed without having to poke a hole in you, a process that is not very painful but "not very painful" is not the same as "not even noticeable". And it is easy to imagine harvesting a few cells from the surface of your finger in a way that is not even noticeable so let's imagine it.
Next imagine the CODIS analysis device being small enough and cheap enough to be incorporated into a smartphone. And, while we are at it, assume it can produce an accurate result in less than a second. Now we have everything we need to build a system right into our smartphones that is fully capable of positively validating that you are you. And it is quick enough so that it can be used routinely, perhaps a hundred or more times per day. That would definitely solve the positive identification issue for smartphone transactions.
I think that for better or worse this is the direction we are heading. I would like to say that it is not inevitable but I am concerned that the forces that are pushing in this direction are powerful enough to overwhelm any opposition I can currently foresee. I think most people will be of the opinion that it is no big deal. In the fight between Apple and the FBI over unlocking that iPhone (see http://sigma5.blogspot.com/2016/02/digital-privacy.html for more on this subject) that was the opinion of a large segment of the general public when they were surveyed on the subject.
They put it another way: "I've got nothing to hide so what's the problem?" That situation did not seem to directly affect them. They did not foresee the FBI or anyone else wanting to unlock their phone so it didn't seem personally important either way. In the case of what I am now taking about the direct connection is much more obvious. But there are also immediate benefits. "I can use my smartphone to pay for my coffee without having to worry about someone maxing out my credit cards if my phone gets stolen." (As a side note if smartphones used this system they would be useless to thieves and thieves would stop stealing them.)
Our privacy is continuously under assault. Technological advance keeps making it easier to invade our privacy and harder to protect against an invasion. If everyone ends up in a CODIS-type database and that database is routinely used to confirm our identification and if a truly positive identification is the norm then pretty much every nook and cranny of our lives will be stored away in one or more computer databases. It looks like this eliminates any technical barrier to the complete invasion of our privacy.
I'm sure at least some will continue to say "I've got nothing to hide." But that's not really true. You may think you have little or nothing to hide. But all of us have opinions and all of us lead our lives in certain ways. Bear in mind that whatever opinions you hold there are a large number of people who think you are wrong. And no matter how boring you think your lifestyle is there are lots of people who strongly disapprove of it.
Are you a girl who likes to wear pants? Are you a guy who likes to shave? There are people who are seriously unhappy with you. What religion do you follow? It doesn't matter. There are a lot of people who hate that religion, whichever one it is. Do you like city living or do you prefer the wide open spaces? Either way, there are people who are seriously unhappy with you. Those are all choices many people would find boring and unimportant. How about more controversial ones?
Do you drink? Have you ever had sex outside of marriage? Have you tried non-missionary sex? Have you smoked pot? How about other drugs? Even once? Have you ever broken a traffic law, driven drunk, or maybe driven after you have had only one or two? Have you ever skinny dipped or streaked or done anything else "young and stupid"? Have you ever stolen something, even accidentally?
The point is we have all done some embarrassing things, maybe even a lot of embarrassing things. And we have all done things some would disapprove of to the point that they would delight in harassing us about them. So we all have things to hide. Pretty much all of us have things we would prefer our parents, or our children, or our friends, or our coworkers, or the authorities, or our enemies, or random obnoxious people we don't know, don't know about. In other words, we all value our privacy.
In the past there have been practical or technological barriers we could hide behind. The tatters that remain of the old barriers are quickly being shredded. I have addressed the general issue of privacy before (see http://sigma5.blogspot.com/2013/12/privacy.html). I devoted roughly the last third of that post to what I thought should be done. I wrote that post over two years ago. The current topic only adds to the pressure that is moving us toward a world where there is no privacy. I recommend that post for my overall thinking on what should be done. Meanwhile there is a small piece of good news on the privacy front.
I linked to my blog post on the fight between the FBI and Apple above. At the time I wrote it no one knew how it would come out. But that specific situation has since been resolved. The FBI found a way to crack the phone that did not require the extraordinary cooperation that Apple was objecting to. That sounds like bad news but it's not. The phone that was cracked is an older model. Apple has upped its game with newer models. Whatever methods were used are unlikely to work (or at least will be much harder to pull off) on newer models. And in spite of various polls that were done at the time, it turns out that there is a market for secure phones. So Apple has promised to keep adding features to make each new generation of phones much harder to crack than the old generation. And remember, the phone the FBI was only able to crack with a great deal of difficulty is now a couple of generations old.
And various other technology companies are now jumping onto the "increased security" bandwagon. They are encrypting more and encrypting to a higher level of security. They are also changing how their products operate so that they no longer have a backdoor that lets them read unencrypted customer data. This means that if they are subpoenaed they can respond "sorry -- we can't read it either". And a side effect of this is that they can't sell or analyze detailed customer activity like they used to be able to do.
They can still do a metadata analysis. For instance they can figure out who you are interacting with. They can tell how often you are connecting up and how long you are staying connected. But they can't tell what you are doing while you are connected. This means that the data they can share with someone else, the government or another company, is much more limited than in the past. And that means it is much less valuable. And that means they will do less sharing in the future. And that is a modest step in the direction of more privacy. It is a small but very welcome development.
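To make "metadata analysis" concrete, here is a toy sketch. Even with all content encrypted, a provider that keeps connection logs (who, when, how long) can still build a contact profile. The log entries below are invented for illustration.

```python
# Toy metadata analysis: contact frequency and total time connected,
# derived from connection logs alone -- no message content needed.
from collections import Counter

# (contact, minutes connected) -- an invented log of encrypted sessions
log = [("alice", 12), ("bob", 3), ("alice", 45), ("carol", 7), ("alice", 5)]

contact_counts = Counter(who for who, _ in log)
total_minutes = sum(minutes for _, minutes in log)

print(contact_counts.most_common(1))  # [('alice', 3)] -- top contact
print(total_minutes)                  # 72
```

The point of the sketch: nothing in it required reading a single message, which is exactly why metadata is both less valuable than content and still far from worthless.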
"Jane Doe."
"Prove it!"
Some variation of the above dialog is now a common part of our lives. It is frequently boiled down to "Show me your picture ID." The picture ID contains a name, a picture, and typically other information. The name provides the answer to the question. The rest of the information on the ID and the fact that you possess it provides the proof. The connection between you and the ID is provided by the picture. Presumably you and your picture can be compared to see if there is a match.
There are variations. It is becoming more and more common for your smartphone to stand in for your picture ID. And the degree to which the "proof" actually validates your identification varies. Bartenders just want to know if you are old enough to drink legally. The TSA wants to be really sure you are not a terrorist. And then there is the sad situation with which the title of this post is most commonly associated. Someone may need to confirm the identity of a deceased person.
With the background established let's look at the process of positive identification as it was, as it is, and as it will soon be. The times they are a changing. Let's start with the "was" part and for that I want to go back a thousand years.
A thousand years ago almost everyone lived on a small farm or in a small village. Almost everyone farmed, fished, or was otherwise engaged in the process of growing and harvesting food. And at the time almost everyone was illiterate. Paper was not yet available in Europe. The alternatives that existed at the time (i.e. parchment) were all extremely expensive and only available in tiny quantities. So in that environment how were people identified?
Most people spent their entire lives within a few miles of where they were born. Everyone knew everyone else in the neighborhood by sight. You saw people at home or on market days or at feast day events. And you saw them for their entire lifetime. At some point a woman would be pregnant. Then she would show up with a small child. The child would grow up, become a parent and die. And the community observed all this. Identification was not absolute but it was good enough for the situation. You knew who farmed what piece of land and who their children were.
And frankly positive identification was not that important. People were poor so they had few possessions and even fewer valuable ones. Land ownership was mostly governed by the "possession is nine points of the law" rule. There were no accurate surveys and all the land was probably technically owned by the local feudal lord anyhow.
Oh, there were foreigners. Someone from outside would occasionally wander by but this did not happen often. If the wanderer was a trader how much did it matter who they were? They showed up, traded, and moved on. The trade goods were important. The identity of the trader was not. The other group who would show up occasionally were members of the power elite. It might be a soldier or a priest. If a particular soldier was the top dog he became the local feudal lord. Other soldiers either worked for him or there was a power struggle. Eventually someone came out on top and the others ended up dead, part of the lord's operation, or they moved on. The feudal lord was in a position to assert his authority by means of his ability to kill or maim you.
So the locals tended to take him at his word as to who and what he was. The niceties of the law and whose authority was more legitimate tended to be less important than who won the power struggle. The other source of authority and power were the religious authorities like priests. If there was a power struggle between religious factions the rules of engagement were different (less blood, more politicking) but who stayed and who was pushed out mostly depended on who was supported by the local feudal lord. And again, the peasantry tended to take whoever won at their word. So in this period positive identification had little real practical meaning.
Eventually paper got invented, the technology for making it cheaply spread broadly, and it became practical to keep paper records. This ushered in the era when marriages, births, and deaths started to become routinely recorded. For a long time the process was haphazard. A record might be maintained at the local church or in a family bible. How reliable was this information? One assumed that it was fairly reliable. But this assumption rested to a great extent on past practice.
Usually people in the community were around to testify to the accuracy of the information, at least until enough time had passed that all the eyewitnesses had died. After that inertia set in. Records that had been accepted in the past continued to be accepted. Beyond that, old records came to be seen as accurate records, mostly because they were old. And it was certainly possible for a record to be fudged. A marriage could retroactively be added to a church register or a family bible. And the same process could be used to erase or alter entries. People went with these records as much because they were the only practical option as for any other reason.
Not that long ago governments started taking over responsibility. They started issuing birth certificates, wedding licenses, and certificates of death. And more people were born in a hospital with a physician in attendance. But in a certain sense the foundation the process rested on had not changed. Someone filled out a form. The information was only as dependable as its source, and its source was some person. The person might be the mother or the doctor or a hospital employee. And, in the case of the doctor or hospital employee, they might be relying on some stranger for the information they were entering. I suspect that most of the time little effort was made to corroborate it.
The piece of paper has now been replaced by a computer screen and the data no longer resides on a piece of paper in a file cabinet. It now resides in a computer file somewhere. And this highlights a fundamental problem. It's just data. And more problematic than that is this. How do we know that a particular birth certificate is actually the birth certificate of a particular person? The surprising answer is that we don't. But that can, and I expect that it will, change in the near future.
We have all been exposed to this sort of thing due to the "birther" controversy. A lot of people, most notably Donald Trump, have spent a lot of time and gotten a lot of media coverage contending that President Obama was not born in Hawaii in 1961. A lot of their argument is nonsense. There is absolutely no doubt that a birth certificate was issued at the time and place the President contends it was.
An argument could be made that he is not the child that belongs to that birth certificate, that there were, in fact, two children. This argument is logically consistent but it is not the argument that birthers make. And there is a large body of evidence that there is only one child and he is that child. In fact the connection between this birth certificate and the President is much stronger than the connection between Donald J. Trump and any birth certificate. So the birther argument, such as it is, is about the wrong thing. A fundamental question exists. How do you definitively connect any person with any birth certificate? And the answer is that in almost all cases you can't.
In some places at some times a footprint (like a fingerprint but of the bottom of a foot instead of the fingertips) of the child was routinely put on the back of birth certificates. With such a birth certificate you can take a matching print of the foot of the individual in question and do a "fingerprint analysis" to see if it matches. If it does then you can definitively match a specific birth certificate to a specific individual.
But I have never heard of this comparison being attempted. As far as I can tell the "footprint on the birth certificate" procedure was never common and, in the cases where it was done, I know of no instances where a match was attempted later. It probably happened but it was never common enough to feature in crime fiction, for instance. It is easy to imagine Erle Stanley Gardner plugging it into a Perry Mason novel but he never did. The fact that it was never a common crime fiction motif is evidence that it was never a common practice.
I have both a passport (expired) and an "enhanced" driver's license. Both of these require a positive identification. Having been there and done that I know the drill. I show up, fill out some paperwork, provide a picture (or get one taken), and provide "positive identification". What's positive identification? Why a birth certificate, of course. So I hand over a piece of paper for examination by a bureaucrat. But what's the piece of paper? In my case I have the original actual birth certificate that was issued at the time of my birth and it's a pretty ordinary looking piece of paper. That's how it was done in the era that preceded the computerization of everything.
But I actually have two "birth certificates". One of them is the aforementioned piece of paper. The other is a "certified copy" of the piece of paper. It is something called a Photostat. A Photostat, as you can guess, is just a photograph, well actually a print of a photographic negative. The only thing special about it is that it is embossed with an official stamp and an "I attest that this is an authentic copy of . . ." statement followed by the signature of some obscure bureaucrat.
Let's say I wanted someone to impersonate me. I could keep my original birth certificate and give them the Photostatic version. They could use that as the basis of a scheme to identify themselves as me. So there could be two official me's running around. And, in fact, a minor variation of this used to be commonly done.
People who wanted to change their identity would search newspaper death notices for someone who was born about the same time they were but who had died young. They would then write the proper authority asking for a Photostatic copy of the birth certificate for this person. At the time this was a routine bureaucratic procedure that did not require any kind of special documentation. When it arrived they would then use the Photostatic birth certificate as the base on which to build up an entire false identity for themselves. If they picked their dead person properly their chance of being caught out was infinitesimally small.
Spies, crooks, people on the run for political reasons, etc. did this routinely in the '60s. You could even find "how to" manuals if you knew the right people. One thing that helped then was that most people did not get a Social Security card and number until they entered the job market in their middle to late teens. If you picked someone who had died at ten, say, your chances of fooling the Social Security Administration into issuing a card were very good. It is now much harder to pull this kind of thing off because we are all now surrounded by a much larger more complicated web of interconnection than we used to be.
Children now get issued a Social Security number at birth, for instance. And as big data spreads its tentacles it becomes harder and harder to pull something like this off without setting off an alarm somewhere. The federal witness protection people can still do it, but only because they can change Social Security and other government records. None of this changes the fact that there really is no completely reliable way to positively connect a specific birth certificate to a specific person. But I believe that is going to change in the near future.
There now exists something called CODIS, the Combined DNA Index System. This is the database that is used to do DNA matches in crime scene and other law enforcement (e.g. missing persons) situations. The database contains over 12 million entries and continues to grow rapidly. There is a considerable amount of duplication so it doesn't represent that many distinct individuals, but the number of distinct individuals is somewhere in the millions. Each entry contains enough information to identify a single specific individual with a very high degree of confidence. Does that mean entries contain complete DNA sequences? Far from it. Instead each entry contains 14 numbers. Thirteen of the numbers are based on something called an STR, a Short Tandem Repeat. The specifics are complicated but the idea is simple.
An STR "locus" is a very short piece of DNA that varies wildly from person to person. There are a bunch of possible variations at each STR locus. Each database entry records which specific variation appears in that person's DNA at each of the 13 STR loci. The 14th number is based on a person's Amelogenin gene. It has been included because the version of the Amelogenin gene an individual carries tells us whether that person is male or female.
Only a few percent of the population carries any given variant of a given STR locus. So two different individuals are likely to have different variants at the first STR locus. But they could, just by luck, have the same variant. But do they also have the same variant at the second locus? Here too it is very unlikely that two different people have the same variant, but it is possible. And so it goes. Scientists have done the math and the likelihood that two different people who are not identical twins would have the same variant at all thirteen STR loci is a really tiny number. It varies from case to case, but it is unlikely that any two non-twin individuals on Earth share the same variants at all thirteen STR loci. And just to decrease the chances even more there is a move afoot to add several more STR loci to the standard list.
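A back-of-the-envelope version of that math fits in a few lines of Python. The 5% per-locus figure below is an illustrative assumption, not real data; actual forensic calculations use published allele-frequency tables for each locus.

```python
# Back-of-the-envelope random-match probability for a 13-locus profile.
# The 5% per-locus figure is an illustrative assumption, not real data.
import math

# Chance that a random, unrelated person matches you at one STR locus.
per_locus_match_prob = [0.05] * 13

# STR loci are (nearly) independently inherited, so the chance that an
# unrelated person matches at ALL thirteen loci is the product.
random_match_prob = math.prod(per_locus_match_prob)

print(f"Chance of a full 13-locus match: about 1 in {1 / random_match_prob:.2e}")
```

Even with a generous one-in-twenty chance at every locus, the combined odds come out to roughly one in 10^17, which is why a full-profile match is treated as effectively unique among non-twins.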
It turns out that the amount of DNA in all fourteen loci used in this process is a tiny fraction of your whole genome. It's way, way, way less than 1%. But it is enough to get the job done, namely deciding if two DNA samples come from the same person or not. And the basic technology for this was developed more than a decade ago. In the meantime anything having to do with DNA has gotten a lot cheaper.
The original project to sequence the entire DNA of a single individual cost more than 3 billion dollars and took more than a decade. Now the complete genome of a single individual can be sequenced for about 10 thousand dollars and it's getting cheaper every year. Scientists think the cost will drop to below a thousand dollars within the next few years. And that's what it costs to sequence everything. Sequencing just enough DNA to tell one person from another costs way less than that, and that cost is also dropping like a rock. And of equal importance, the size of the gadget that does the CODIS sequencing is also getting smaller and smaller. And that opens up a lot of possibilities.
We as a society have been fighting over privacy for a long time now. Before the Revolutionary War colonists decided they didn't like British soldiers searching people's homes any time they wanted to. They complained about it in The Declaration of Independence. After the War the US adopted the Fourth Amendment, outlawing "unreasonable searches and seizures".
When I was younger we were fighting the Cold War. The USSR was an "authoritarian dictatorship". The Nazis before them were also an authoritarian dictatorship. Both regimes were famous for requiring everybody to carry "papers" that had to be produced at any time and in any place, whenever any official wanted to examine them. So, since we were the good guys, we were all in favor of the opposite. Our citizens were able to move about freely and were not under any obligation to produce their papers. It was a point of differentiation between us and them. "Only authoritarian dictatorships require law abiding people to always carry identification documents as they go about their ordinary business."
Well, times have changed. The USSR is no more so apparently we no longer need to differentiate the behavior of our government from that of authoritarian dictatorships. It seems that we are now all supposed to be afraid of terrorists in our midst. And that means anyone who is suspicious (not a well dressed white person) had better have their papers on them at all times. And besides terrorists there is the ever present danger of rapist Mexicans or whoever else fits the "looks suspicious" profile. I am going to ignore the issue of whether this change is a good thing or a bad thing. Instead I am going to focus on the technicalities of how to positively identify people.
As I have discussed extensively above, the birth certificate is the foundation of identity for US born individuals. There is an elaborate system in the US for dealing with the foreign born that I am not going to get into. I will just note that in many cases it often ends up coming back to a birth certificate for these people too and move on. And, as I have also extensively elaborated on above, there is no way currently to definitively tie a specific individual to a specific birth certificate. And by now I think I have telegraphed where I am going pretty clearly. The thing that could tie the two together is CODIS style DNA information.
There is no technological impediment to doing this now. A sample sufficient to the task is easily obtained from a newborn. Blood works and only a drop is necessary. And the equipment needed to take the necessary measurements is relatively inexpensive and the process is relatively quick. So it is completely possible to CODIS characterize every newborn at birth. (As a side note it is also easy in most cases to CODIS characterize the mother and, if he is handy, the father at the same time.) And the amount of data is modest so it could easily be added to the birth certificate computer record. Once this is routinely done and some time has passed it becomes a simple process to prove that a specific individual is the one connected to a specific birth certificate. You just draw a drop of blood, run it through the CODIS process and see if the results match the information in the birth certificate record. None of this is beyond our current technical capability.
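Once profiles exist as data, the matching step described above is trivial. Here is a minimal sketch, assuming a profile is stored as STR loci (each a pair of repeat counts, one allele from each parent) plus the Amelogenin sex marker. The three locus names shown are real CODIS core loci, but all the numbers are invented and a real profile would carry all thirteen.

```python
# Minimal sketch of matching a fresh sample against the STR profile
# stored on a birth-certificate record. All numbers are invented.
from typing import Dict

# A CODIS-style profile: STR loci, each a pair of repeat counts,
# plus the Amelogenin sex marker ("AMEL").
Profile = Dict[str, tuple]

birth_record: Profile = {
    "D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24),
    # ...a real profile would list ten more STR loci here...
    "AMEL": ("X", "Y"),
}

def is_same_person(record: Profile, sample: Profile) -> bool:
    """Declare a match only if every locus in the record agrees."""
    return all(
        sorted(sample.get(locus, ())) == sorted(alleles)
        for locus, alleles in record.items()
    )

# A fresh sample from the person named on the certificate matches:
print(is_same_person(birth_record, dict(birth_record)))  # True
```

The comparison is just equality on a handful of small numbers, which is why the hard part of the scheme is the politics and the sample collection, not the computation.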
But it is currently beyond our political capability. People do not want to be in the CODIS database. Part of this is due to the association between the CODIS database and criminality. But a lot of people see it as an invasion of privacy they are unwilling to put up with. They can be convinced to change their mind if there is great need, say a loved one is missing. But currently every state has restrictive policies that limit who goes into the CODIS database. Not even all criminals or suspects go in now. The details vary from state to state. Some have restrictive policies and CODIS only a relatively small number of people. Others apply a broad brush and CODIS many more. But all states prohibit adding people without cause.
And the CODIS database is not the only DNA database in existence. People sign up with 23andme or other similar companies that do DNA analysis. The company tells them, for instance, where their ancestors are from. Various groups also collect DNA information for a number of different scientific reasons. But both the commercial and the scientific operations are careful to not sequence the DNA loci that CODIS uses. They just don't want to get tangled up in criminal investigations. And the people whose DNA ends up in these other databases like it that way.
But let me emphasize that this is a decision that is made for non-technical reasons. Companies like 23andme try to retain the original sample so that it can be reanalyzed as technology advances. So they could easily reanalyze the samples they still have and sequence the CODIS loci. The sequencing they already do is much more extensive than what the CODIS process requires. And if they did this their database could be used for CODIS-compatible searches. The number of people whose DNA could be CODIS matched would immediately jump substantially. But this is not really necessary. There is already a strong trend in place to keep expanding the CODIS pool. It is partly a result of technological considerations. It keeps getting quicker, cheaper, and easier to CODIS samples. And the people that run CODIS type databases keep coming up with more and more reasons to include more and more people in their collection programs.
I would think that intelligence agencies like the CIA would want to CODIS their employees and contractors. And how about soldiers? And how about law enforcement people? And, on the other side, how about foreigners entering our country? And how about people busted for minor offenses like speeding tickets, or people involved in divorces, or people filing for a business license, or people involved in food preparation, or, or, or? As the ease with which the process can be performed goes up and the cost comes down, the strength of the argument necessary to justify including an additional group gets less and less. And as this trend continues, at some point you will have twenty or thirty percent of the entire population in the database. At that point you might as well just put everyone in.
Consider that many crimes now go unsolved. There is DNA evidence available in many of these cases but it doesn't match any entries in the current CODIS database. If we had CODIS coverage of the entire population then it would go some way toward increasing the percentage of crimes that do get solved. This higher solution rate should lower the overall crime rate, right? And isn't lowering the crime rate a laudable goal? That is only the most obvious potential benefit to CODISing everybody. Other potential benefits are easy to come up with. Instead of listing them let me extrapolate a little ways into the future.
When I was younger pretty much all small transactions (e.g. buying a cup of coffee) were done with cash. Then people started using debit cards instead. There are now a lot of people who carry only a small amount of cash around. And as I write this we are transitioning to an even newer method, paying with our smartphones. Today it is rarely used (except at Starbucks). But that is because there are some kinks that need to be worked out. Not all smartphones work at all stores all the time. That's mostly because we have dueling incompatible payment systems fighting it out. And for business reasons each system makes sure that it is incompatible with any of the other systems. At some point that competition between systems will be made to stop. Then people will be able to use one application on whatever phone they like to buy stuff from whoever they want to. But that puts the identification issue front and center.
The simplest thing from a user standpoint is to always leave your phone unlocked. And far too many people do this because dealing with the security system is bothersome. But Apple came up with a trick. You put your thumb in the right place and the phone can validate your thumbprint. This can be done almost instantaneously. And this approach is now being copied by the other smartphone makers. I expect it to be universal within a few years. But I suspect that the thumbprint scheme is not really that secure. The phone only sees part of your thumb and in poor conditions. The vendor (e.g. Apple) does not want a bunch of false negatives (you put your thumb on your phone but it doesn't okay you) so I suspect that the phone calls anything that is even vaguely close a match.
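That suspicion can be stated as a threshold tradeoff. The sketch below is purely illustrative (invented similarity scores, not Apple's actual matcher): a matcher reduces each scan to a similarity score, and wherever the vendor sets the unlock threshold trades false negatives (the owner locked out) against false accepts (an impostor let in).

```python
# Illustrative biometric threshold tradeoff. Scores are invented:
# partial, poor-quality scans mean even the owner's scores vary.

def unlocks(similarity: float, threshold: float) -> bool:
    """The phone unlocks when the scan's score clears the threshold."""
    return similarity >= threshold

owner_scores = [0.62, 0.85, 0.71]     # same thumb, varying scan quality
impostor_scores = [0.40, 0.58, 0.33]  # other people's thumbs

for threshold in (0.5, 0.8):
    false_neg = sum(not unlocks(s, threshold) for s in owner_scores)
    false_pos = sum(unlocks(s, threshold) for s in impostor_scores)
    print(f"threshold {threshold}: {false_neg} lockouts, "
          f"{false_pos} impostors admitted")
# threshold 0.5: 0 lockouts, 1 impostors admitted
# threshold 0.8: 2 lockouts, 0 impostors admitted
```

A vendor that hates annoying its customers will lean toward the loose threshold, which is exactly the "anything vaguely close is a match" behavior suspected above.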
But let's fast forward a few years. Currently the easiest way to do a CODIS analysis is with a drop of blood. But with a lot of effort even very tiny amounts of DNA can sometimes be used. In ideal circumstances the tiny amount of DNA that ends up in some fingerprints is enough. And it turns out that there are lots of cells on the surface of your skin that contain your DNA. (That's where the fingerprint DNA comes from.) These cells can be collected and processed without having to poke a hole in you. Drawing blood is not very painful, but "not very painful" is not the same as "not even noticeable". It is easy to imagine harvesting a few cells from the surface of your finger in a way that is not even noticeable, so let's imagine it.
Next imagine the CODIS analysis device being small enough and cheap enough to be incorporated into a smartphone. And, while we are at it, assume it can produce an accurate result in less than a second. Now we have everything we need to build a system right into our smartphones that is fully capable of positively validating that you are you. And it is quick enough so that it can be used routinely, perhaps a hundred or more times per day. That would definitely solve the positive identification issue for smartphone transactions.
I think that for better or worse this is the direction we are heading. I would like to say that it is not inevitable but I am concerned that the forces that are pushing in this direction are powerful enough to overwhelm any opposition I can currently foresee. I think most people will be of the opinion that it is no big deal. In the fight between Apple and the FBI over unlocking that iPhone (see http://sigma5.blogspot.com/2016/02/digital-privacy.html for more on this subject) that was the opinion of a large segment of the general public when they were surveyed on the subject.
They put it another way: "I've got nothing to hide so what's the problem?" That situation did not seem to directly affect them. They did not foresee the FBI or anyone else wanting to unlock their phone so it didn't seem personally important either way. In the case of what I am now talking about the direct connection is much more obvious. But there are also immediate benefits. "I can use my smartphone to pay for my coffee without having to worry about someone maxing out my credit cards if my phone gets stolen." (As a side note, if smartphones used this system they would be useless to thieves and thieves would stop stealing them.)
Our privacy is continuously under assault. Technological advance keeps making it easier to invade our privacy and harder to protect against an invasion. If everyone ends up in a CODIS-type database and that database is routinely used to confirm our identification and if a truly positive identification is the norm then pretty much every nook and cranny of our lives will be stored away in one or more computer databases. It looks like this eliminates any technical barrier to the complete invasion of our privacy.
I'm sure at least some will continue to say "I've got nothing to hide." But that's not really true. You may think you have little or nothing to hide. But all of us have opinions and all of us lead our lives in certain ways. Bear in mind that whatever opinions you hold there are a large number of people who think you are wrong. And no matter how boring you think your lifestyle is there are lots of people who strongly disapprove of it.
Are you a girl who likes to wear pants? Are you a guy who likes to shave? There are people who are seriously unhappy with you. What religion do you follow? It doesn't matter. There are a lot of people who hate that religion, whichever one it is. Do you like city living or do you prefer the wide open spaces? Either way, there are people who are seriously unhappy with you. Those are all choices many people would find boring and unimportant. How about more controversial ones?
Do you drink? Have you ever had sex outside of marriage? Have you tried non-missionary sex? Have you smoked pot? How about other drugs? Even once? Have you ever broken a traffic law, driven drunk, or maybe after you have had only one or two? Have you ever skinny dipped or streaked or done anything else "young and stupid"? Have you ever stolen something, even accidentally?
The point is we have all done some embarrassing things, maybe even a lot of embarrassing things. And we have all done things some would disapprove of to the point that they would delight in harassing us about them. So we all have things to hide. Pretty much all of us have things we would prefer our parents, or our children, or our friends, or our coworkers, or the authorities, or our enemies, or random obnoxious people we don't know, don't know about. In other words, we all value our privacy.
In the past there have been practical or technological barriers we could hide behind. The tatters that remain of the old barriers are quickly being shredded. I have addressed the general issue of privacy before (see http://sigma5.blogspot.com/2013/12/privacy.html). I devoted roughly the last third of that post to what I thought should be done. I wrote that post over two years ago. The current topic only adds to the pressure that is moving us toward a world where there is no privacy. I recommend that post for my overall thinking on what should be done. Meanwhile there is a small piece of good news on the privacy front.
I linked to my blog post on the fight between the FBI and Apple above. At the time I wrote it no one knew how it would come out. But that specific situation has since been resolved. The FBI found a way to crack the phone that did not require the extraordinary cooperation that Apple was objecting to. That sounds like bad news but it's not. The phone that was cracked is an older model. Apple has upped its game with newer models. Whatever methods were used are unlikely to work (or at least will be much harder to pull off) on newer models. And in spite of various polls that were done at the time it turns out that there is a market for secure phones. So Apple has promised to keep adding features to make each new generation of phones much harder to crack than the old generation. And remember, the phone that the FBI was only able to crack after a great deal of difficulty is now a couple of generations old.
And various other technology companies are now jumping onto the "increased security" bandwagon. They are encrypting more and encrypting to a higher level of security. They are also changing how their products operate so that they no longer have a backdoor that lets them read unencrypted customer data. This means that if they are subpoenaed they can respond "sorry -- we can't read it either". And a side effect of this is that they can't sell or analyze detailed customer activity like they used to be able to do.
They can still do a metadata analysis. For instance they can figure out who you are interacting with. They can tell how often you are connecting up and how long you are staying connected. But they can't tell what you are doing while you are connected. This means that the data they can share with someone else, the government or another company, is much more limited than in the past. And that means it is much less valuable. And that means they will do less sharing in the future. And that is a modest step in the direction of more privacy. It is a small but very welcome development.
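What metadata-only analysis looks like can be sketched concretely. The log format and names below are invented for illustration; the point is only that patterns of contact fall out of bare connection records even when the content is unreadable:

```python
# Sketch of what a provider can still learn from metadata alone:
# who talks to whom, how often, and for how long -- but not what was said.
# The log format here is invented for illustration.
from collections import defaultdict

connection_log = [
    # (caller, callee, duration_seconds) -- the content itself is encrypted
    ("alice", "bob", 120),
    ("alice", "bob", 300),
    ("alice", "carol", 45),
    ("bob", "carol", 600),
]

contacts = defaultdict(lambda: {"calls": 0, "seconds": 0})
for caller, callee, seconds in connection_log:
    contacts[(caller, callee)]["calls"] += 1
    contacts[(caller, callee)]["seconds"] += seconds

# Alice's contact pattern is visible even with no access to call content.
print(contacts[("alice", "bob")])
```

That tally is real information, which is why metadata still gets collected. But it is far less than a transcript, which is why it is far less valuable.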
Saturday, February 20, 2016
Digital Privacy
A story has recently broken about a fight between Apple Computer and the FBI. The context is the San Bernardino massacre which resulted in 14 deaths and many injuries. The perpetrators, Syed Farook and Tashfeen Malik, were dead within hours. So the "who" in "who done it" has been known for some time now. The only open questions have to do with how much help they got and from whom. There has been a lot of progress on that front too.
Enrique Marquez, a friend and neighbor, has been arrested. Among other things, he purchased some of the guns that were used by the perps. Literally hundreds of searches have been done and mountains of evidence have been seized. Online accounts of all kinds have been scrutinized. Even after all this effort there is more to be learned. A few days before I wrote this Syed's brother's house was searched. This was only the latest in a series of searches of his house.
As a result of all this effort the story is pretty much known. All that is left is to fill in some details. It is possible that a new major development could be unearthed in the future, say substantial participation by overseas terrorist groups. But the chances are small. And that brings us to the phone.
A tiny part of the mountain of seized evidence is a smart phone that belonged to one of the perps. It has been in FBI custody for some time now. But that hasn't stopped the FBI from being frustrated, literally. The phone is encrypted. The FBI has not been able to break or get around the encryption so they have not been able to access the contents of the phone. This literal frustration has not been for want of trying. At least that's the story from both the FBI and Apple Computer. The FBI has asked Apple for assistance and Apple has provided it. But the FBI now says Apple must take that assistance to a new level. And that's what the fight is about.
Before proceeding let me stop to make what I believe is an observation of monumental importance.
ENCRYPTION WORKS
Why is this so important? Have you seen an "action" movie or TV show any time in say the last 50 years? These shows often feature a scenario where encrypted data is critical, frequently a matter of life and death. Sometimes a good guy is trying to decrypt the bad guy's secret plan. Sometimes it is a bad guy trying to decrypt the good guy's security system so he can steal the secret formula or the invasion plans or whatever. Regardless, the scene is always handled in the same way.
A geek types away furiously while "action" visuals play out on screen and dramatic music (cue the "Mission Impossible" theme) plays underneath so we will know that important things are happening. This goes on for about 20 seconds of screen time which may represent perhaps a few hours or days of "mission" time. But we always have the "Aha" moment when the geek announces that the encryption has been cracked. And it never takes the geek more than a week to crack it. In fact it is common for the geek to only need a few minutes.
This is the pop culture foundation for a belief that is widespread and grounded in things that are a lot more solid than a TV script. We've all seen it over and over so it must be true. Any encryption system can be broken. All you need is the genius geek and perhaps a bunch of really cool looking equipment. People in the real world support this idea often enough for one reason or another that most people have no reason to doubt its veracity. But it is not true. And we know it is not true because the FBI has just told us. Let's look at why.
It starts with the fact that the FBI has publicly said that it has been unsuccessful in breaking Apple's encryption. This is in spite of the fact that they have had weeks in which to try and they have had a considerable amount of cooperation from Apple. But wait, there's more. Which government agency is the one with the most skill, equipment, and experience with encryption? The NSA (National Security Agency). It's literally what they do.
Before 9/11 it was possible to believe that the FBI and the NSA did not talk to each other. It was possible to believe this because it was true. But in the post-9/11 era those communications barriers were broken down and there is now close cooperation between the two agencies, especially on terrorism cases like this one. It is literally unbelievable that the FBI has not consulted with the NSA on this problem. And that means the NSA has also not been able to crack Apple's encryption either.
Let's say they had. Then the FBI could easily have covered this up by claiming that their own people had cracked the phone. Even if this was not believed it provides the standard "plausible deniability" that is commonly used in these situations. It doesn't matter if the official line is credible. It only matters that there is an official line that officials can pretend to believe. This is why I believe the NSA failed too. (For a counter-argument see below).
There is actually a lot of evidence that encryption works but it is the boring stuff that the media ignores. It gets dismissed as a "dog bites man" story. I worked in the computer department of a bank for a long time. They treated computer problems that could screw up data very seriously. "We are messing with people's money and people take their money very seriously." I then worked for a company that ran hospitals and clinics. After observing the culture there I remarked "If you want to see people who treat computer problems seriously, talk to bankers. They deal with money. Around here we only deal with life and death and that's not as serious." That's a cute way of highlighting that people take money very seriously. And every aspect of handling money now depends critically on encryption.
If even one of the common encryption systems used in the money arena could be cracked there is a lot of money to be made. Look at the amount of noise generated by people stealing credit card information. It has finally caused the credit card industry in the US to move from a '60s style magnetic stripe technology to a modern EMV chip based one. The important takeaway is that the hackers have never broken into a system by breaking the encryption. They have used what is generally referred to as a "crib". One of the most successful cribs goes by the name of Social Engineering. You call someone up (or email them or whatever) and talk them out of information you are not entitled to, like say a high powered user id and password. You use this information to break into the system.
Important data has been encrypted for many decades now. The DES standard was developed and implemented in the '70s. It is considered weak by modern standards because its 56 bit key is now short enough to brute force, but I know of no practical attack that breaks the algorithm itself. Still, the worry that it would become crackable was enough to cause everybody to move on. Something called triple-DES was shown to be harder to crack after double-DES was shown to provide no improvement. We have since moved on to other encryption standards.
A common one in the computer business is "Secure Sockets" (SSL, since succeeded by TLS). Any web site with a prefix of HTTPS uses it. It is now recommended for general use instead of being restricted to use only in "important" situations. The transition has resulted in some variation of a "show all content" message popping up with annoying frequency. That's because the web page is linking to a combination of secure (HTTPS) and insecure (HTTP) web sites.
If the basic algorithm (computer formula) is sound the common trick is to make the key bigger. DES used a 56 bit key. The triple-DES algorithm can be used with keys that are as long as 168 bits. Behind the scenes, HTTPS has been doing the same thing. Over time the keys it uses have gotten longer and longer. And a little additional length makes a big difference. Every additional bit literally doubles the number of keys that need to be tested in a "brute force" (try all the possible combinations) attack.
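The arithmetic is easy to check for yourself. The 56 and 168 bit figures below are the real DES and triple-DES key lengths from above; the attack rate is an arbitrary round number I picked for illustration:

```python
# Each added key bit doubles the brute-force search space.
# Key lengths are the real DES (56-bit) and triple-DES (168-bit) sizes;
# the guesses-per-second figure is a hypothetical round number.

def keyspace(bits):
    """Number of keys a brute-force attack must be prepared to try."""
    return 2 ** bits

# One extra bit literally doubles the work.
assert keyspace(57) == 2 * keyspace(56)

guesses_per_second = 10**9  # hypothetical fast attacker
seconds_per_year = 365 * 24 * 3600
des_years = keyspace(56) / guesses_per_second / seconds_per_year
tdes_years = keyspace(168) / guesses_per_second / seconds_per_year

print(f"56-bit DES at 1e9 guesses/s: about {des_years:.1f} years")
print(f"168-bit triple-DES: about {tdes_years:.1e} years")
```

At that rate 56 bits falls in a couple of years, which is why DES is retired, while 168 bits is out of reach by an astronomical margin. That asymmetry is the whole game.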
So piling on the bits fixes everything, right? No! It gets back to that crib thing. Let's say I have somehow gotten hold of your locked cell phone. What if I call you and say "I'm your mother and I need the key for your phone." Being a dutiful child you always do what your mother says so you give me the key. At this point it literally doesn't matter how long the key you use is. Actually no one would fall for so transparent a ploy but it illustrates the basic idea of Social Engineering. It boils down to tricking people into giving you information that you can use to get around their security.
If I can get your key I have effectively reduced your key length to zero. Cribs can be very complex and sophisticated but a good way to think of them is in terms of ways to reduce the effective key length. If I can find a crib that reduces the effective key length to ten bits that means a brute force attack only needs to try a little over a thousand keys to be guaranteed success. I once used a brute force approach to figure out the combination of a bicycle lock. The lock could be set to a thousand different numbers but only one opened it. It took a couple of hours of trying each possibility in turn but I eventually succeeded in finding that one number. Under ideal circumstances a computer can try a thousand possibilities in less than a second.
And Apple is well aware of this. So they added a delay to the process. It takes about a twelfth of a second to process a key. This means that no more than a dozen keys can be tried in a second. And the Apple key is more than ten bits in length. But wait. There's more. After entering a certain number of wrong keys in a row (the number varies with iPhone model and iOS version) the phone locks up. Under some circumstances the phone will even go so far as to wipe everything clean if too many wrong keys are tried in a row.
The FBI is not releasing the details of what they have tried so far. And Apple has not released the details of what assistance they have rendered so far. But this particular iPhone as currently configured is apparently impervious to a brute force attack. Whatever else the FBI has tried is currently a secret. So what the FBI is asking from Apple is for changes to the configuration. Specifically, they want the twelfth-of-a-second delay removed and they want the "lock up" and "wipe after a number of failed keys" features disabled. That, according to the FBI, would allow a medium speed brute force attack to be applied. Some combinations of iPhone and iOS version use relatively short key lengths so this would be an effective approach if the phone in question is one of them.
But Apple rightly characterizes this as a request by the FBI to build a crib into their phones. Another name for this sort of thing is a "back door". And we have been down this path before. In the '90s the NSA released specifications for something called a "Clipper chip". It was an encryption / decryption chip that appeared to provide a high level of security. It used an 80 bit key. That's a lot bigger than the 56 bit key used by DES so that's good, right? The problem is that the Clipper chip contained a back door that was supposed to allow "authorized security agencies" like the NSA to crack it fairly easily. The NSA requested that a law be passed mandating exclusive use of the Clipper chip. After vigorous push back on many fronts the whole thing was dropped a couple of years later without being implemented broadly.
We can also look to various statements made by current and former heads of various intelligence and law enforcement agencies. The list includes James Clapper (while he was Director of National Intelligence and since), former NSA director Keith Alexander, and others. They have all railed against encryption unless agencies like theirs are allowed back doors. Supposedly all kinds of horrible things will happen if these agencies can't read everything terrorists are saying. But so far there is no hard evidence that these back doors would be very helpful in the fight against terrorism. What they would be very helpful for is making it easy to invade the privacy of everybody. Pretty much nothing on the Internet was encrypted in the immediate post-9/11 period. Reading messages was helpful in some cases but the bad guys quickly learned how to make their messages hard to find and hard to read.
These agencies have swept up massive amounts of cell phone data. Again, mass data collection has not been shown to be important to thwarting terrorist plots. After they are on to a specific terrorist then going back and retrospectively reviewing who they have been in contact with has been helpful. And, by the way, that has already been done in the San Bernardino Massacre case. But the FBI argues that even after all these other things have been done they still desperately need to read the contents of this one cell phone. We have been told for more than a decade that the "collect everything" programs are desperately needed and are tremendously effective. The FBI's current request indicates that they are not all that effective and that means they were never needed as badly as we were told they were.
The FBI also argues that this will be a "one off" situation. Apple argues that once the tool exists its use will soon become routine. If cracking a phone is difficult, time consuming, and expensive even after the tool exists then the FBI may have a case. But if it is, what's to stop the FBI from demanding that Apple build a new tool that is easier, quicker, and cheaper to use? Once the first tool has been created the precedent has been set.
The fundamental question here is whether a right to privacy exists. The fourth amendment states:
"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."
A plain reading of the language supports the idea that a privacy right exists and that the mass collection of phone records, whether "metadata" or the contents of the actual conversation, is unconstitutional. The Supreme Court has so far dodged its responsibility by falling back on a "standing" argument. I think the standing argument (which I am not going to get into) is bogus but I am not a Supreme Court justice. And the case we are focusing on is clearly covered by the "probable cause . . ." language. The FBI can and has obtained a search warrant. The only problem they are having is the practical one of making sense of the data they have seized.
The problem is not with this specific case. It is with what other use the capability in question might be put to. We have seen our privacy rights almost completely obliterated in the past couple of decades. Technology has enabled an unprecedented and overwhelming intrusion into our privacy. It is possible to listen in on conversations in your home by bouncing a laser off a window. A small GPS tracking device can be attached to your car in such a way that it is nearly undetectable. CCTV cameras are popping up everywhere allowing your public movements to be tracked. Thermal imaging cameras and other technology can tell a lot about what is going on inside your house even if you are not making any noise and they can do this from outside your property line.
And that ignores the fact that we now live in a highly computerized world. Records of your checking, credit card, and debit card activity, all maintained by computer systems, make your life pretty much an open book. Google knows where you go on the Internet (and probably what you say in your emails). And more and more of us run more and more of our lives from our smart phones. Imagine comparing what you can find out from a smart phone with what you could have found out 200 years ago by rifling through someone's desk (their "papers"). Then a lot of people couldn't read. So things were done orally. And financial activity was done in cash so no paper record of most transactions existed. The idea that the contents of a smartphone should not be covered under "persons, papers, and effects" is ridiculous. Yet key loggers and other spyware software are available for any and all models of smart phones.
Apple was one of the first companies to recognize this. They were helped along by several high profile cases where location data, financial data, and other kinds of private data were easily extracted from iPhones. They decided correctly that the only solution that would be effective would be to encrypt everything and to do so with enough protections that the encryption could not be easily avoided. The FBI has validated the robustness of their design.
Technology companies have been repeatedly embarrassed in the last few years by revelations that "confidential" data was easily being swept up by security agencies and others. They too decided that encryption was the way to cut this kind of activity off. Hence we see the move to secure (HTTPS) web sites and to companies encrypting data as is moves across the Internet from one facility to another.
Security agencies and others don't like this. It makes it at least hard and possibly impossible to tap into these data streams. And, according to agency heads this is very dangerous. But these people are known and documented liars. And they have a lot of incentive to lie. It makes the job of their agency easier and it makes it easier for them to amass bureaucratic power. Finally, given that lying does not put them at risk for criminal sanctions (none of them have even been charged) and can actually enhance their political standing, why wouldn't they?
Here's a theory for the paranoid. Maybe the FBI/NSA successfully cracked the phone. But they decided that they could use this case to leverage companies like Apple into building back doors into their encryption technology. The Clipper case shows that this sort of thinking exists within these agencies. And agency heads are known to be liars. So this theory could be true. I don't think it is true but I can't prove that I am right. (I could if agency heads could actually be compelled to tell the truth when testifying under oath to Congress but I don't see that happening any time soon.)
The issue is at bottom about a trade off. The idea is that we can have more privacy but be less secure or we can have less privacy but be more secure. In my opinion, however, the case that we are more secure is weak to nonexistent and the case that we have lost a lot of valuable privacy and are in serious danger of losing even more is strong. I see the trade off in theory. But I don't see much evidence that as a practical matter the trade off actually exists in the real world. Instead I see us giving up privacy and getting nothing, as in no increase in security, back. In fact, I think our security is diminished as others see us behaving in a sneaky and underhanded way. That causes good people in the rest of the world to be reluctant to cooperate with us. That reduction in cooperation reduces our security. So I come down on the side of privacy and support Apple's actions.
In the end I expect some sort of deal will be worked out between the FBI and Apple. It will probably not be one that I approve of. It will erode our privacy a little or a lot and I predict that whatever information is eventually extracted from the phone will turn out to be of little or no value. And, as Tim Cook, the CEO of Apple, has stated, once the tool is built it will always exist for the next time and the time after that, ad infinitum. That is too high a cost.
Enrique Marquez, a friend and neighbor has been arrested. Among other things, he purchased some of the guns that were used by the perps. Literally hundreds of searches have been done and mountains of evidence has been seized. Online accounts of all kinds have been scrutinized. Even after all this effort there is more to be learned. A few days before I wrote this Syed's brother's house was searched. This was only the latest in a series of searches of his house.
As a result of all this effort the story is pretty much known. All that is left is to fill in some details. It is possible that a new major development could be unearthed in the future, say substantial participation by overseas terrorist groups. But the chances are small. And that brings us to the phone.
A tiny part of the mountain of seized evidence is a smart phone that belonged to one of the perps. It has been in FBI custody for some time now. But that hasn't stopped the FBI from being frustrated, literally. The phone is encrypted. The FBI has not been able to break or get around the encryption so they have not been able to access the contents of the phone. This literal frustration has not been for want of trying. At least that's the story from both the FBI and Apple Computer. The FBI has asked Apple for assistance and Apple has provided it. But the FBI now says Apple must take that assistance to a new level. And that's what the fight is about.
Before proceeding let me stop to make what I believe is an observation of monumental importance.
ENCRYPTION WORKS
Why is this so important? Have you seen an "action" movie or TV show any time in say the last 50 years? These shows often feature a scenario where encrypted data is critical, frequently a matter of life and death. Sometimes a good guy is trying to decrypt the bad guy's secret plan. Sometimes it is a bad guy trying to decrypt the good guy's security system so he can steal the secret formula or the invasion plans or whatever. Regardless, the scene is always handled in the same way.
A geek types away furiously while "action" visuals play out on screen and dramatic music (queue the "Mission Impossible" theme) plays underneath so we will know that important things are happening. This goes on for about 20 seconds of screen time which may represent perhaps a few hours or days of "mission" time. But we always have the "Aha" moment when the geek announces that the encryption has been cracked. And it never takes the geek more than a week to crack it. In fact it is common for the geek to only need a few minutes.
This is the pop culture foundation for a belief that is widespread and grounded in things that are a lot more solid then a TV script. We've all seen it over and over so it must be true. Any encryption system can be broken. All you need is the genius geek and perhaps a bunch of really cool looking equipment. People in the real world support this idea often enough for one reason or another that most people have no reason to doubt its veracity. But it is not true. And we know it is not true because the FBI has just told us. Let's look at why.
It starts with the fact that FBI has publicly said that it has been unsuccessful in breaking Apple's encryption. This is in spite of the fact that they have had weeks in which to try and they have had a considerable amount of cooperation from Apple. But wait, there's more. Which government agency is the one with the most skill, equipment, and experience with encryption? The NSA (National Security Agency). It's literally what they do.
Before 9/11 it was possible to believe that the FBI and the NSA did not talk to each other. It was possible to believe this because it was true. But in the post-9/11 era those communications barriers were broken down and there is now close cooperation between the two agencies, especially on terrorism cases like this one. It is literally unbelievable that the FBI has not consulted with the NSA on this problem. And that means the NSA has also not been able to crack Apple's encryption either.
Let's say they had. Then the FBI could easily have covered this up by claiming that their own people had cracked the phone. Even if this was not believed it provides the standard "plausible deniability" that is commonly used in these situations. It doesn't matter if the official line is credible. It only matters that there is an official line that officials can pretend to believe. This is why I believe the NSA failed too. (For a counter-argument see below).
There is actually a lot of evidence that encryption works but it is the boring stuff that the media ignores. It gets dismissed as a "dog bites man" story. I worked in the computer department of a bank for a long time. They treated computer problems that could screw up data very seriously. "We are messing with people's money and people take their money very seriously." I then worked for a company that ran hospitals and clinics. After observing the culture there I remarked "If you want to see people who treat computer problems seriously, talk to bankers. They deal with money. Around here we only deal with life and death and that's not as serious." That's a cute way of highlighting that people take money very seriously. And every aspect of handling money now depends critically on encryption.
If even one of the common encryption systems used in the money arena could be cracked there is a lot of money to be made. Look at the amount of noise generated by people stealing credit card information. It has finally caused the credit card industry in the US to move from a '60s style magnetic stripe technology to a modern EMV chip based one. The important takeaway is that the hackers have never broken into a system by breaking the encryption. They have used what is generally referred to as a "crib". One of the most successful cribs goes by the name of Social Engineering. You call someone up (or email them or whatever) and talk them out of information you are not entitled to, like say a high powered user ID and password. You use that information to break into the system.
Important data has been encrypted for many decades now. The DES standard was developed and implemented in the '70s. It is considered weak by modern standards, mostly because its 56 bit key is short enough that dedicated hardware was able to brute-force it publicly by the late '90s. But no one has found a practical shortcut attack on the algorithm itself. Even so, the worry that it would only get easier to crack was enough to cause everybody to move on. Something called triple-DES was adopted after double-DES was shown to provide essentially no improvement over single DES. We have since moved on to other encryption standards.
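The double-DES weakness comes from a "meet in the middle" attack. Here is a minimal sketch using a deliberately tiny toy cipher I made up (one byte, 8 bit keys), not real DES; the point is only that attacking the middle costs roughly two single-key searches instead of one squared search.

```python
# Toy "meet in the middle" demo: breaking double encryption with two
# 8-bit keys in ~2*256 trials instead of 256*256.  The cipher below is
# an invented one-byte toy, NOT real DES.

def enc(block, key):
    # a tiny invertible "round": works because 7 is invertible mod 256
    return ((block + key) * 7 + 3) % 256

def dec(block, key):
    inv7 = pow(7, -1, 256)  # multiplicative inverse of 7 mod 256
    return (((block - 3) * inv7) - key) % 256

plaintext = 42
k1, k2 = 17, 200
ciphertext = enc(enc(plaintext, k1), k2)  # "double" encryption

# Tabulate enc(P, a) for every possible first key, then check which
# second keys b make dec(C, b) land in the table.
middle = {enc(plaintext, a): a for a in range(256)}
candidates = [(middle[m], b) for b in range(256)
              if (m := dec(ciphertext, b)) in middle]

assert (k1, k2) in candidates  # the true key pair always survives
print(f"recovered {len(candidates)} candidate pairs with ~512 trial encryptions")
```

Because this toy cipher is linear, many equivalent key pairs survive; against a real block cipher a second known plaintext/ciphertext pair narrows the candidates to almost always one.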
A common one in the computer business is "Secure Sockets" (SSL, since superseded by TLS). Any web site whose address starts with HTTPS uses it. It is now recommended for general use instead of being restricted to "important" situations. The transition has resulted in some variation of a "show all content" message popping up with annoying frequency. That's because the web page mixes secure (HTTPS) and insecure (HTTP) content.
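For what it's worth, modern languages make the secure variant the default. A sketch using Python's standard ssl module (no network access needed) shows what an HTTPS client sets up before it will talk to a server at all:

```python
import ssl

# The default client context is what HTTPS libraries build on: it
# refuses the connection unless the server presents a valid certificate
# whose name matches the site you asked for.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate must check out
print(ctx.check_hostname)                    # and match the hostname
```

Both print True: certificate validation and hostname checking are on unless a programmer goes out of their way to turn them off.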
If the basic algorithm (computer formula) is sound the common trick is to make the key bigger. DES used a 56 bit key. The triple-DES algorithm can be used with keys that are as long as 168 bits. Behind the scenes, HTTPS has been doing the same thing. Over time the keys it uses have gotten longer and longer. And a little additional length makes a big difference. Every additional bit literally doubles the number of keys that need to be tested in a "brute force" (try all the possible combinations) attack.
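The doubling claim is easy to check with arithmetic. A sketch, assuming a hypothetical attacker who can test a billion keys per second:

```python
# Each extra bit doubles the number of possible keys, so brute-force
# time doubles too.
keys_per_second = 1_000_000_000  # hypothetical attacker speed

for bits in (56, 57, 80, 128):
    keys = 2 ** bits
    years = keys / keys_per_second / (3600 * 24 * 365)
    print(f"{bits}-bit key: {keys:.2e} keys, ~{years:.1e} years to exhaust")
```

At that rate a 56 bit key falls in a couple of years, but an 80 bit key already takes tens of millions of years, which is why modest increases in key length matter so much.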
So piling on the bits fixes everything, right? No! It gets back to that crib thing. Let's say I have somehow gotten hold of your locked cell phone. What if I call you and say "I'm your mother and I need the key for your phone." Being a dutiful child you always do what your mother says so you give me the key. At this point it literally doesn't matter how long the key you use is. Actually no one would fall for so transparent a ploy but it illustrates the basic idea of Social Engineering. It boils down to tricking people into giving you information that you can use to get around their security.
If I can get your key I have effectively reduced your key length to zero. Cribs can be very complex and sophisticated but a good way to think of them is in terms of ways to reduce the effective key length. If I can find a crib that reduces the effective key length to ten bits that means a brute force attack only needs to try a little over a thousand keys to be guaranteed success. I once used a brute force approach to figure out the combination of a bicycle lock. The lock could be set to a thousand different numbers but only one opened it. It took a couple of hours of trying each possibility in turn but I eventually succeeded in finding that one number. Under ideal circumstances a computer can try a thousand possibilities in less than a second.
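The bicycle lock story is exactly a brute force attack on a thousand-combination (roughly ten bit) key space. A sketch, with an invented secret combination:

```python
import itertools

SECRET = (7, 2, 9)  # the one setting that opens the lock (invented)

attempts = 0
for combo in itertools.product(range(10), repeat=3):  # 1000 possibilities
    attempts += 1
    if combo == SECRET:
        break

print(f"opened after {attempts} of 1000 attempts")
```

A person turning dials needs hours for this; a computer runs the same loop in well under a second, which is why effective key length is everything.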
And Apple is well aware of this. So they added a delay to the process. It takes about a twelfth of a second to process a key. This means that no more than a dozen keys can be tried in a second. And the Apple key is more than ten bits in length. But wait. There's more. After entering a certain number of wrong keys in a row (the number varies with iPhone model and iOS version) the phone locks up. Under some circumstances the phone will even go so far as to wipe everything clean if too many wrong keys are tried in a row.
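A little arithmetic shows why the throttle plus the lockout works. The numbers below are illustrative; the real lockout threshold varies by model and iOS version, as noted above:

```python
tries_per_second = 12      # Apple's ~1/12 second delay per attempt
passcode_space = 10_000    # a 4-digit numeric passcode, for illustration
lockout_after = 10         # assumed wrong-guess limit before lockup/wipe

minutes_without_lockout = passcode_space / tries_per_second / 60
print(f"{minutes_without_lockout:.0f} minutes to try every 4-digit code")
print(f"but the lockout stops an attacker after {lockout_after} guesses")
```

Even throttled, a short passcode would fall in minutes; it is the lockup and wipe features that actually stop the attack, which is exactly why the FBI wants them disabled.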
The FBI is not releasing the details of what it has tried so far. And Apple has not released the details of what assistance it has rendered so far. But this particular iPhone, as currently configured, is apparently impervious to a brute force attack. Whatever else the FBI has tried is currently a secret. So what the FBI is asking from Apple is for changes to the configuration. Specifically, they want the twelfth-of-a-second delay removed and they want the "lock up" and "wipe after a number of failed keys" features disabled. That, according to the FBI, would allow a medium speed brute force attack to be applied. Some combinations of iPhone and iOS version use relatively short key lengths so this would be an effective approach if the phone in question is one of them.
But Apple rightly characterizes this as a request by the FBI to build a crib into their phones. Another name for this sort of thing is a "back door". And we have been down this path before. In the '90s the NSA released specifications for something called a "Clipper chip". It was an encryption / decryption chip that appeared to provide a high level of security. It used an 80 bit key. That's a lot bigger than the 56 bit key used by DES so that's good, right? The problem is that the Clipper chip contained a back door that was supposed to allow "authorized security agencies" like the NSA to crack it fairly easily. The NSA requested that a law be passed mandating exclusive use of the Clipper chip. After vigorous push back on many fronts the whole thing was dropped a couple of years later without being implemented broadly.
We can also look to various statements made by current and former heads of various intelligence and law enforcement agencies. The list includes James Clapper (while he was Director of National Intelligence and since), former NSA director Keith Alexander, and others. They have all railed against encryption unless agencies like theirs are allowed back doors. Supposedly all kinds of horrible things will happen if these agencies can't read everything terrorists are saying. But so far there is no hard evidence that these back doors would be very helpful in the fight against terrorism. What they would be very helpful for is making it easy to invade the privacy of everybody. Pretty much nothing on the Internet was encrypted in the immediate post-9/11 period. Reading messages was helpful in some cases but the bad guys quickly learned how to make their messages hard to find and hard to read.
These agencies have swept up massive amounts of cell phone data. Again, mass data collection has not been shown to be important to thwarting terrorist plots. After they are on to a specific terrorist then going back and retrospectively reviewing who they have been in contact with has been helpful. And, by the way, that has already been done in the San Bernardino Massacre case. But the FBI argues that even after all these other things have been done they still desperately need to read the contents of this one cell phone. We have been told for more than a decade that the "collect everything" programs are desperately needed and are tremendously effective. The FBI's current request indicates that they are not all that effective and that means they were never needed as badly as we were told they were.
The FBI also argues that this will be a "one off" situation. Apple argues that once the tool exists its use will soon become routine. If cracking a phone remains difficult, time consuming, and expensive after the tool exists then the FBI may have a case. But if it is, what's to stop the FBI from demanding that Apple build a new tool that is easier, quicker, and cheaper to use? Once the first tool has been created the precedent has been set.
The fundamental question here is whether a right to privacy exists. The Fourth Amendment states:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
A plain reading of the language supports the idea that a privacy right exists and that the mass collection of phone records, whether "metadata" or the contents of the actual conversation, is unconstitutional. The Supreme Court has so far dodged its responsibility by falling back on a "standing" argument. I think the standing argument (which I am not going to get into) is bogus but I am not a Supreme Court justice. And the case we are focusing on is clearly covered by the "probable cause . . ." language. The FBI can and has obtained a search warrant. The only problem they are having is the practical one of making sense of the data they have seized.
The problem is not with this specific case. It is with what other use the capability in question might be put to. We have seen our privacy rights almost completely obliterated in the past couple of decades. Technology has enabled an unprecedented and overwhelming intrusion into our privacy. It is possible to listen in on conversations in your home by bouncing a laser off a window. A small GPS tracking device can be attached to your car in such a way that it is nearly undetectable. CCTV cameras are popping up everywhere allowing your public movements to be tracked. Thermal imaging cameras and other technology can tell a lot about what is going on inside your house even if you are not making any noise and they can do this from outside your property line.
And that ignores the fact that we now live in a highly computerized world. Records of your checking, credit card, and debit card activity, all maintained by computer systems, make your life pretty much an open book. Google knows where you go on the Internet (and probably what you say in your emails). And more and more of us run more and more of our lives from our smart phones. Imagine comparing what you can find out from a smart phone with what you could have found out 200 years ago by rifling through someone's desk (their "papers"). Back then a lot of people couldn't read, so things were done orally. And financial activity was done in cash, so no paper record of most transactions existed. The idea that the contents of a smartphone should not be covered under "persons, papers, and effects" is ridiculous. Yet keyloggers and other spyware are available for any and all models of smart phones.
Apple was one of the first companies to recognize this. They were helped along by several high profile cases where location data, financial data, and other kinds of private data were easily extracted from iPhones. They decided correctly that the only solution that would be effective would be to encrypt everything and to do so with enough protections that the encryption could not be easily avoided. The FBI has validated the robustness of their design.
Technology companies have been repeatedly embarrassed in the last few years by revelations that "confidential" data was easily being swept up by security agencies and others. They too decided that encryption was the way to cut this kind of activity off. Hence we see the move to secure (HTTPS) web sites and to companies encrypting data as it moves across the Internet from one facility to another.
Security agencies and others don't like this. It makes it at least hard and possibly impossible to tap into these data streams. And, according to agency heads this is very dangerous. But these people are known and documented liars. And they have a lot of incentive to lie. It makes the job of their agency easier and it makes it easier for them to amass bureaucratic power. Finally, given that lying does not put them at risk for criminal sanctions (none of them have even been charged) and can actually enhance their political standing, why wouldn't they?
Here's a theory for the paranoid. Maybe the FBI/NSA successfully cracked the phone. But they decided that they could use this case to leverage companies like Apple into building back doors into their encryption technology. The Clipper case shows that this sort of thinking exists within these agencies. And agency heads are known to be liars. So this theory could be true. I don't think it is, but I can't prove that I am right. (I could if agency heads could actually be compelled to tell the truth when testifying under oath to Congress, but I don't see that happening any time soon.)
The issue is at bottom about a trade-off. The idea is that we can have more privacy but be less secure, or we can have less privacy but be more secure. In my opinion, however, the case that we are more secure is weak to nonexistent, and the case that we have lost a lot of valuable privacy and are in serious danger of losing even more is strong. I see the trade-off in theory. But I don't see much evidence that as a practical matter the trade-off actually exists in the real world. Instead I see us giving up privacy and getting nothing, as in no increase in security, back. In fact, I think our security is diminished as others see us behaving in a sneaky and underhanded way. That causes good people in the rest of the world to be reluctant to cooperate with us. That reduction in cooperation reduces our security. So I come down on the side of privacy and support Apple's actions.
In the end I expect some sort of deal will be worked out between the FBI and Apple. It will probably not be one that I approve of. It will erode our privacy a little or a lot and I predict that whatever information is eventually extracted from the phone will turn out to be of little or no value. And, as Tim Cook, the CEO of Apple, has stated, once the tool is built it will always exist for the next time and the time after that, ad infinitum. That is too high a cost.