
Michael Gracie

A Public Key Infrastructure for Extensible Interoperability

An explanation for that title is in order. It is the culmination of what I call my “dark side”. Literally.

Eighteen months ago a colleague and I embarked on a brain-busting adventure: figure out a way to encrypt anything (or everything) on the web without installing any software – full-on security for the cloud. A few months later CryptML, an entirely new markup language whose sole purpose is hard encryption, was born. Since that time we’ve been developing a representative sample implementation, debugging more code than any human being should ever have to, and building an entire platform around what started as satisfaction of intellectual curiosity.

Could it be done? You betcha! We’re now working straight out of the National Security Agency’s playbook, using a collection of algorithmic components called Suite B. And we’re doing so without violating any patents.

While a few folks out there pass by these pages looking for help with mcrypt and Wireshark, most come by to hear about fly fishing. I’d like to keep it that way, so I won’t bore you with the extraneous details. Nevertheless, a paper I recently wrote, outlining some of the history of modern encryption and why I feel it is in the best interests of both the general public AND national security to adopt technology such as ours, was picked up by Military Information Technology Magazine and published in their April edition. Needless to say we’re pretty happy about that, and we greatly appreciate MITM’s consideration of what we’ve accomplished.

The introduction from that paper follows, and for those really interested, a link to the full piece over at the MITM website is included below as well.

The existing public key infrastructure was developed in the late ’70s and early ’80s as part of research coming out of academia. The systems and methods were quickly perceived as a revolutionary way to satisfy the secure data exchange needs of the scientific community, and later the federal government. Since that seminal period, advances in microcomputer technology have pushed communication channels, protocols, and hardware to the point where the convergence of voice, video and data is the norm. Secure communication using encryption, however, is still based on standards developed in decades past, with advancement centering primarily on new algorithms to replace those for which mathematical weaknesses are found.

Unfortunately, it is the seemingly never-ending advancement of computing horsepower, combined with its ever-falling price, that is enabling the discovery of the “flaws”. Unless computational speeds unexpectedly plateau, the costly cycle of adopting new platforms and devices to replace those built on ever-weakening encryption schemes will continue unabated.

What is needed is a completely new paradigm for the process of encoding, exchanging, decoding, and validating data – one that can completely replace the paradigm built for the closed-loop, point-to-point communications that existed before Internet Protocol (IP) use became pervasive.

The rest of the piece can be found here: http://www.military-information-technology.com/mit-archives/241-mit-2010-volume-14-issue-3-april/2793-public-key-infrastructure-for-interoperability.html – Public Key Infrastructure for Interoperability. If the link doesn’t work, just pick up a hard copy – available in the lobby of the Pentagon.

Special thanks go out to the editorial staff at MITM, as well as the other individuals who helped make this happen. If you are in the defense, healthcare information systems, or financial services fields and are interested in seeing our representative application, a hard-encrypted messaging tool, you can contact us here for an invitation.

UPDATE 10/12/16: The link above is no longer in service, so the original article has been reproduced in its entirety below.

——————————-

This article originally appeared in Military Information Technology 14.3.

Introduction

The existing public key infrastructure was developed in the late ’70s and early ’80s as part of research coming out of academia. The systems and methods were quickly perceived as a revolutionary way to satisfy the secure data exchange needs of the scientific community, and later the federal government. Since that seminal period, advances in microcomputer technology have pushed communication channels, protocols, and hardware to the point where the convergence of voice, video and data is the norm. Secure communication using encryption, however, is still based on standards developed in decades past, with advancement centering primarily on new algorithms to replace those for which mathematical weaknesses are found.

Unfortunately, it is the seemingly never-ending advancement of computing horsepower, combined with its ever-falling price, that is enabling the discovery of the “flaws”. Unless computational speeds unexpectedly plateau, the costly cycle of adopting new platforms and devices to replace those built on ever-weakening encryption schemes will continue unabated.

What is needed is a completely new paradigm for the process of encoding, exchanging, decoding, and validating data – one that can completely replace the paradigm built for the closed-loop, point-to-point communications that existed before Internet Protocol (IP) use became pervasive.

A brief explanation of key-based encryption

Key-based encryption generally falls into two categories – symmetric and asymmetric – and both are actually fairly simple concepts when the mathematics is removed from the description. Under symmetric encryption, there exists a single key that both “locks” and “unlocks” the data – both sender and receiver share that key before the data is transferred. In asymmetric encryption, two keys are required: one is used to encrypt (lock) the data, and is readily shareable by all participants, while another key is used to decode (unlock) the data.

With symmetric encryption, if sender and receiver want to keep communications between themselves alone, the connection must remain secure, particularly during the key sharing process. For this reason, symmetric encryption is most suitable for point-to-point networks such as those historically deployed in the military theatre. Asymmetric encryption, however, doesn’t require this pre-determined, ongoing trust. A party wishing to receive, say, simple text messages generates a publicly available key for encrypting those incoming messages, and can do so far in advance of the discrete communication. That public key is then distributed to everyone the potential data recipient needs to receive communications from, and the receiver is ultimately responsible for keeping it up to date. The secret, or private, key is the only key that can decode any inbound communications, and is generally kept secure by that recipient. Asymmetric encryption also allows other users to validate those self-generated keys, as well as determine when they are legitimately changed.
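To make the distinction concrete, here is a minimal sketch in Python using the open-source pyca/cryptography package (the choice of library is mine; the article does not prescribe any particular implementation). The symmetric half relies on a single shared AES key, while the asymmetric half encrypts to a recipient's public RSA key that only the matching private key can unlock.

# A minimal sketch of symmetric vs. asymmetric encryption, assuming the
# open-source Python "cryptography" package is installed.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Symmetric: one shared key both locks and unlocks the data.
shared_key = AESGCM.generate_key(bit_length=256)    # must reach both parties over a secure channel
nonce = os.urandom(12)
locked = AESGCM(shared_key).encrypt(nonce, b"meet at dawn", None)
unlocked = AESGCM(shared_key).decrypt(nonce, locked, None)

# Asymmetric: the public key may be handed to anyone; only the private key decodes.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()               # distributed freely, well in advance
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
locked2 = public_key.encrypt(b"meet at dawn", oaep)
unlocked2 = private_key.decrypt(locked2, oaep)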

Origin of today’s standards

In 1976 Whitfield Diffie and Martin Hellman published a paper entitled “New Directions in Cryptography” that specified an innovative method for key exchange. The process allowed communicators with no prior knowledge of each other to establish a shared key without the need for a secure network. While a number of cryptosystems have been developed since, Diffie-Hellman has become the de facto standard for key exchange, and part of the foundation of the public-key encryption in wide use today.
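As a purely illustrative aside, the exchange Diffie and Hellman described fits in a few lines of Python. The numbers below are toy values of my own choosing, far too small for real use, but the shape of the protocol is the same: only the blended values ever cross the wire.

# Toy Diffie-Hellman key exchange; real parameters run to thousands of bits.
p, g = 23, 5                        # prime modulus and generator, agreed in the open
a, b = 6, 15                        # Alice's and Bob's private values, never transmitted
A = pow(g, a, p)                    # Alice sends g^a mod p
B = pow(g, b, p)                    # Bob sends g^b mod p
alice_secret = pow(B, a, p)         # Alice computes (g^b)^a mod p
bob_secret = pow(A, b, p)           # Bob computes (g^a)^b mod p
assert alice_secret == bob_secret   # both arrive at the same shared key; an eavesdropper does not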

The National Security Agency adopted an improved derivative of Diffie-Hellman and married it with algorithms used for other parts of encrypted communications, forming a set of standards called Suite B. The openly published Suite B includes specifications for encrypting data, exchanging keys, signing messages, and validating received data.

Suite B’s base elements and the federal standards they are derived from are as follows:

– Encryption: Advanced Encryption Standard (AES) – FIPS 197 (with key sizes of 128 and 256 bits)
– Key Exchange: Elliptic Curve Diffie-Hellman – Draft NIST Special Publication 800-56 (using the curves with 256 and 384-bit prime moduli)
– Digital Signatures: Elliptic Curve Digital Signature Algorithm – FIPS 186-2 (using the curves with 256 and 384-bit prime moduli)
– Hashing: Secure Hash Algorithm – FIPS 180-2 (using SHA-256 and SHA-384)

Announced in early 2005, Suite B complies with policy guidance set out by the Committee on National Security Systems, and can be used for encrypting information up to the Top Secret level when the larger key sizes are used.
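For a concrete sense of how these pieces fit together in software, the same pyca/cryptography package used in the earlier sketch exposes each of the primitives listed above. The following is a rough Suite-B-style flow of my own construction, not a certified implementation: ECDH agreement on P-384, a SHA-384 based key derivation, AES-256 encryption, and an ECDSA signature.

# A rough Suite-B-style flow using the Python "cryptography" package;
# a sketch of the general pattern, not a certified Suite B implementation.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Key exchange: Elliptic Curve Diffie-Hellman over the P-384 curve.
ours = ec.generate_private_key(ec.SECP384R1())
theirs = ec.generate_private_key(ec.SECP384R1())
shared = ours.exchange(ec.ECDH(), theirs.public_key())

# Hashing: SHA-384 drives the key derivation step.
aes_key = HKDF(algorithm=hashes.SHA384(), length=32, salt=None, info=b"demo").derive(shared)

# Encryption: AES with a 256-bit key (GCM mode shown here).
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"classified payload", None)

# Digital signature: ECDSA over P-384 with SHA-384; verify() raises if the check fails.
signature = ours.sign(ciphertext, ec.ECDSA(hashes.SHA384()))
ours.public_key().verify(signature, ciphertext, ec.ECDSA(hashes.SHA384()))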

The Suite B specification has been submitted to the Internet Engineering Task Force (IETF) for inclusion in Internet Protocol Security (IPSec), a budding framework for securing data at the packet layer. IPSec is designed to encrypt all traffic crossing the internet, either by securing the individual packet being sent (transport mode) or by securing the actual route the data is sent over (tunnel mode, or virtual private networking). The latter methodology is in widespread commercial and governmental use today, but it suffers from many of the same limitations the underlying encryption infrastructure does.

Inherent weaknesses of the existing model

The existing model has aged. It was created while the personal computer was still in its infancy, and when widespread access to networks did not exist. Development burgeoned when the environment was simple, and the user base sophisticated. The surrounding conditions are now infinitely more complex, and the average user less so.

Encryption technology was developed before 3MHz desktop CPUs were commonplace – 3GHz is now the norm – and internet connections, if available at all, were at best dialup speeds. Hence, the approach to implementing it meant minimizing both the processing requirements and the amount of data that had to be transmitted. The tradeoff was that the software was inflexible, provided minimal features, and was difficult to update. What might have seemed like fair compensation for performance then is insignificant, maybe even a nuisance, today. Concerns about encryption overhead should now be relegated to only the most demanding applications, particularly across networks whose capacity has expanded a billion-fold or more since the 1970s.

Variants of Suite B are already used in a variety of hardware and software applications, but most are fixed with respect to the platform. In other words, change is hard to come by. If a computer scientist (a.k.a. hacker) is able to find a weakness in just one portion of the mathematics behind Suite B, entire systems must be changed for future security to be ensured. For example, the MD5 hashing algorithm was “cracked” in 2004, yet many systems, including the Secure Sockets Layer (SSL) certificate exchange that is the foundation of internet commerce, are still using MD5 because of the widespread switching costs that would result.

Existing key-based encryption works within an infrastructure that is relatively transparent to the sophisticated user. For those who are not extensively trained, however, even installing and properly configuring a desktop email client encryption plug-in can be an impossible task. Users must learn how to generate random key data, must know how and/or where to obtain an encryption key pair, must know how to generate a revocation certificate to change those keys, and must quickly comprehend complex web-of-trust issues before they can safely communicate in a secure environment. Further, various commercial software products contain components of encryption functionality, but such features are often de-emphasized both in the product and in the documentation. Stand-alone encryption software is much the same, and usually requires support in the form of a full-fledged systems administrator to operate.

Network in transition

While development of the public-key infrastructure remained relatively stagnant, the growth of the internet as a public medium pushed forth – and was pushed forth by – new protocols and languages. New communication standards were formed for the exchange of different data types, including:

– iLBC and G.711 for voice
– MPEG-4 and H.264 for video
– HTML and XML for text and data

These standards centered on usability and interoperability. Hardware and software manufacturers adopted them because the general public, the consumer market critical to their economic growth, could utilize them easily, and the protocols could be delivered with chosen measures of opacity. Newer, more sophisticated applications were developed on and around them.

The first email was sent in 1971 – less than a decade later public-key encryption entered its present stage. Meanwhile, in 1989 Tim Berners-Lee gave us the World Wide Web, and a few years later the browser was born. Core processes for delivering secure data may be relatively unchanged, but we now have streaming video on cellular phones to deal with. As data processing moves ever closer to the fully-distributed cloud computing model, leveraging the combination of open standards and tools built first for usability makes perfect sense both technologically and economically. Encryption schemes need to follow the same path, that of interoperability and extensibility.

Re-engineer the entire infrastructure

Adopting a new infrastructure for secure data exchange works for one simple reason: the rest of the network has already moved far past what exists today. Governmental bodies have already begun incorporating Commercial Off-The-Shelf (COTS) hardware into newly deployed systems. In most cases this hardware is already fully capable of interfacing with web services.

Embracing the base technology behind the commercial internet provides the ultimate in interoperability. Browsers, for example, are virtually omnipresent – and they can run on almost any hardware. In fact, much of the COTS network equipment utilized by the US military already contains browser software for configuration and management. By extension, all devices deployed in the theatre, whether baseband hardware, reachback equipment, or remote connections via handheld devices, could be exchanging data via the same web services, with little or no additional customization.

The World Wide Web was designed for constant change, hence the tools that interpret the data must be able to constantly adapt. Unlike the present encryption infrastructure, a new model is already available that can not only manage key exchange from point to point or amongst multiple users in disparate locations, but can also obtain keys from a variety of sources (including those under physical control) and switch those sources, in orderly or arbitrary fashion, as security procedures require. Further, applying web technologies to encryption services allows the user to update those services to keep up with changing mathematics. If an algorithm presently in use is deemed insecure, it can be replaced with another one immediately. Should the system user determine that interchanging algorithms in the middle of a conversation is necessary to comply with the classification of content being exchanged, it can be done on-the-quick-halt instead of after a new hardware requisition.
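The "swap the algorithm without swapping the hardware" idea can be illustrated with a deliberately simplified Python sketch. Nothing here is specific to our platform; the cipher in use is just an entry in a registry that a policy service could update mid-session, which is the whole point.

# A simplified illustration of algorithm agility: the active cipher is looked up
# by name, so it can be replaced without touching the calling code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

CIPHERS = {
    "aes-256-gcm": AESGCM,
    "chacha20-poly1305": ChaCha20Poly1305,   # drop-in alternative if AES were ever deprecated
}
active = "aes-256-gcm"                       # a policy update could flip this mid-conversation

def encrypt(key, message):
    cipher = CIPHERS[active](key)            # both ciphers here take a 32-byte key
    nonce = os.urandom(12)
    return nonce, cipher.encrypt(nonce, message, None)

key = os.urandom(32)
nonce, sealed = encrypt(key, b"rotate algorithms without new hardware")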

Web services are also designed to be interoperable not only with hardware, but also with software. While internet browsers are the de facto standard for accessing web services on the public internet, many application front-ends are ported to client-side software. This provides additional security in the form of source code audit-ability. In addition, users gain added control over access to virtual private networking services, certification, and other layers of security outside the realm of commercially available and/or open source software. A well-engineered service can be accessed via proprietary software without degrading either performance or the flexibility web services are known for.

Beneficial change

The free exchange of data across IP networks is not going away. Once it is ubiquitous – and it arguably already is in all but developing countries – those connected will find ever-expanding ways to leverage it. Encryption technology must make a generational leap just to catch up.

Adopting a web services approach to a new encryption infrastructure has several distinct advantages:

1) Usability – the growth of the commercial internet is the proof of concept; it is now a mainstay of day-to-day voice, video and data exchange, including commerce that makes up a measurable portion of gross domestic product;

2) Flexibility – as data exchange needs change, the platform can be quickly modified to comply with those needs with virtually no additional effort or cost – changes can also be made on-the-quick-halt;

3) Interoperability – web-based technologies run very efficiently on networking hardware, personal computers, cellular telephones and other devices; web services can operate seamlessly with new systems as well as hardware and software already deployed in the field; and most importantly…

4) Security – encryption as a web service means state-of-the-art defense against intrusion; combining it with existing standards such as Secure Sockets Layer and Virtual Private Networking makes it even more so.

Weighed against the benefits, the cost of such encryption schemes is insignificant. Training happens quickly, even for non-technical personnel, because familiarity with the required tools is practically a given. All equipment deployed in the field utilizes the same technology, resulting in multiple-mission capability – overall hardware needs are significantly reduced. And finally, as the base mathematics behind encryption are deemed inferior for the data they are to secure, new algorithms can replace the old immediately, versus recalling all equipment for update or dispersing technical expertise into the field to perform the task.

It is time to implement the next generation of encryption delivery, ensuring data security network-wide, now and for the foreseeable future.

——————————-

Spy agency helps Microsoft build Vista

It may have been a good move to get some hardcore security guys involved in the development of Vista, but a lot of people are going to question why Microsoft looked to the NSA, which has been under fire recently for spying on people at the request of the Bush Administration.

Adding fuel to the upcoming fire…

The Redmond, Wash., software maker declined to be specific about the contributions the NSA made to secure the Windows operating system.

Then again, maybe the idea was to position the upcoming operating system to be used by political bloggers, and/or throw a bone to the 110th Congress…

The NSA also declined to be specific but said it used two groups — a “red team” and a “blue team” — to test Vista’s security. The red team, for instance, posed as “the determined, technically competent adversary” to disrupt, corrupt or steal information. “They pretend to be bad guys,” Sager said. The blue team helped Defense Department system administrators with Vista’s configuration.

So the “blue team” were the good guys.

I guess I’m wondering whether the only team that will turn out bad or good is the Microsoft PR team, once the whole concept spins in or out of control.

UPDATE: Bruce Schneier asks: Is this a good idea or not?

Qwest serious about privacy, or just politics and PR?

Qwest was recently praised for ignoring a request from the NSA for data on its subscribers. They looked like good guys and gals. People purportedly rushed to get their services. Their employees certainly ran around town, chatting it up.

Fast forward.

Qwest is at it again, only this time the talk is heavy endorsement of mandatory data retention laws being proposed for ISPs. Several Colorado politicians who had previously jumped on the Qwest hero worship are endorsing (and in one case, sponsoring) said measures.

The local Rocky Mountain News had noted:

Qwest has done its share to reinvent the company in recent years, but it may have generated an unexpected windfall by rebuffing the National Security Agency.

So… now that they have all those subscribers, what are they going to do with all that extra data they want to retain? Let’s just hope they don’t pull “an AOL.”

***UPDATE***

Oops. Someone at a big telco has admitted they misspoke, which has to be a first: Qwest endorses a more reasonable local law, not the federal mandate.

Time for a telecom “trade”

The recently announced NSA/Telco data sharing fiasco is setting off a wave of lawsuits. Make jokes if you must, in light of the situation (which I don’t personally think is a big deal – I don’t get any phone calls anyway). But I do think – how timely!

The telcos are in the middle of a “net neutrality” fight, and I wish someone would properly communicate the bigger picture – the telcos certainly can’t.

The time is now – trade a quick sweep of this issue under the carpet in return for perpetually free access across the pipes.
(more…)

NSA’s Hands Deep in the Cookie Jar

The NSA has been snagged serving cookies to its website visitors’ computers, despite federal rules against the practice. The cookies expire when? 2035. Hmm. Who else does such things?

They had an excuse – an overlooked software upgrade. I wouldn’t be surprised if the Bush Administration now pins the whole spying fiasco on the NSA, citing a rogue macro in Word that screws up court orders.

No Child Left Behind working after all?

The folks I know in the education space, including some teachers, a few policy makers, and the higher-ups at a couple of for-profit institutions, have pissed and moaned about the Bush Administration’s No Child Left Behind program. I’ve heard funding is the big issue, but I can’t opine on the matter myself, as education just isn’t my “business.” But I see covert signs that NCLB is actually working.
(more…)