“We need end-to-end encryption…”
I have heard that statement many times from customers, colleagues, and the press, all quick to point out that it is necessary if consumers and companies are to do business on the Internet.
For a security practitioner, ironically, it is a very bad idea.
The Problem
Before you shoot me for saying it’s a bad idea, let me first define end-to-end encryption to set the backdrop for my argument.
End-to-end encryption means that a computer communicates with the server it is sending information to, or receiving information from, using some encryption technology along the entire path.
By way of example, VPN software on a laptop communicates with the internal VPN server to build an IPsec tunnel; the end-to-end encryption starts at the desktop VPN client and ends at the VPN server.
A second example is a web browser using SSL to carry traffic to and from a web site, where the end-to-end encryption starts at the browser and finishes at the web server.
A key point: the traffic is at its destination before it is evaluated, and therein lies the problem.
The Concern
Current best practice calls for a Defense-in-Depth security strategy. Defense-in-Depth recommends that more than one “layer” be used in the defense of the protected assets in question.
One example of an added layer in a network topology is filtering unnecessary traffic at an ingress network point through, say, IP access lists.
If a network is protected by access lists alone, protection is limited at best, given that so many attacks are conducted over “acceptable” or “trusted” ports.
Defense-in-Depth requires taking at least one additional step, frequently the use of a firewall.
Traffic that passes the first layer, having matched the allowed-traffic rules for the network’s designated IP characteristics, is then evaluated a second time at a firewall for protocol compliance, so as to catch exploits that rely on buffer overflows or overly long data requests.
Oftentimes a third step is implemented: as traffic traverses these security checkpoints, intrusion detection engines monitor the “knocking on the door” attempts and alert on various conditions.
Lastly, traffic checks occur on the destination host to ensure that the data matches what is expected before being processed.
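To make the layering concrete, below is a minimal Python sketch of the three network-side checks just described. The allowed ports, the request-line limit, and the IDS signature are illustrative assumptions, not rules from any real access list, firewall, or IDS product.

```python
# Illustrative defense-in-depth pipeline; ports, limits, and the signature
# are made-up examples, not real product configuration.

ALLOWED_PORTS = {80, 443}                 # layer 1: ingress access-list
MAX_REQUEST_LINE = 2048                   # layer 2: firewall protocol-compliance limit
IDS_SIGNATURE = b"/scripts/..%255c../"    # layer 3: a well-known IIS traversal probe


def passes_access_list(dst_port: int) -> bool:
    """Layer 1: drop anything not destined for an allowed port."""
    return dst_port in ALLOWED_PORTS


def passes_firewall(payload: bytes) -> bool:
    """Layer 2: crude HTTP sanity check - known method, bounded request line."""
    request_line = payload.split(b"\r\n", 1)[0]
    method = request_line.split(b" ", 1)[0]
    return method in {b"GET", b"POST", b"HEAD"} and len(request_line) <= MAX_REQUEST_LINE


def ids_alerts(payload: bytes) -> bool:
    """Layer 3: signature-style match on the cleartext payload."""
    return IDS_SIGNATURE in payload


def evaluate(dst_port: int, payload: bytes) -> str:
    if not passes_access_list(dst_port):
        return "dropped at access-list"
    if not passes_firewall(payload):
        return "blocked at firewall"
    if ids_alerts(payload):
        return "passed, but IDS raised an alert"
    return "delivered to host for its own final checks"


print(evaluate(80, b"GET /index.html HTTP/1.0\r\n\r\n"))   # delivered to host
print(evaluate(25, b"HELO attacker\r\n"))                  # dropped at access-list
```

The point of the sketch is simply that every layer after the access list needs to see data it can actually parse, which matters for what follows.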
“End-to-end” encryption circumvents some of these steps under the accepted definition.
Since the traffic has characteristics that allow it through the filtering layer (a TCP destination port of 443 for SSL, for example), the next two layers of protection are the intrusion detection engines and the firewall. But because the traffic is encrypted, those two technologies all too frequently cannot read it.
It is here that we undermine the defense-in-depth strategy, and here that end-to-end encryption as good practice takes on bad characteristics.
As a result, a new wave of simple attacks targets encrypted web sites, VPN servers, and extranet sites.
Why? Because the traditional methods for stopping these attacks are rendered useless by encryption. Traffic cannot be reviewed beyond the most basic level, such as the IP header data, which we have already established is not enough by itself.
The scenario plays out as follows: when web traffic was unencrypted, firewalls reviewed it for protocol compliance, header fields, and other characteristics. With that same web traffic tunneled over SSL, the payload (the same HTTP traffic as before, now encrypted) can no longer be analyzed.
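A small sketch of that difference, reusing the kind of signature match from the earlier example. The “ciphertext” here is just random bytes standing in for an SSL/TLS record, since a real record is equally opaque to a firewall or IDS that does not hold the session keys.

```python
import os

# Hypothetical IDS signature check on an HTTP payload (same idea as above).
def contains_signature(payload: bytes) -> bool:
    return b"/scripts/..%255c../" in payload

attack = b"GET /scripts/..%255c../winnt/system32/cmd.exe HTTP/1.0\r\n\r\n"

# In the clear, the firewall or IDS spots the exploit immediately.
print(contains_signature(attack))               # True

# Over SSL the middlebox sees only ciphertext; random bytes of the same
# length stand in for a TLS record here, and the signature never matches.
ciphertext_stand_in = os.urandom(len(attack))
print(contains_signature(ciphertext_stand_in))  # False (with overwhelming probability)
```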
In short, SSL in a very odd way is assisting the hacking community, not impeding it.
Another example is secure email. With S/MIME on the rise, message contents will be increasingly difficult to analyze until decrypted, and as a result will have to be analyzed and controlled on the desktop.
This change will undermine the mail-firewall virus scanners that currently aid us, which lowers our overall protection.
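As a small illustration of the gateway’s position, the sketch below (Python standard library only, with made-up sample messages) merely detects that a message is S/MIME-encrypted; at that point a mail-firewall scanner can do little more than note the fact and defer content inspection to the desktop, where the recipient’s private key lives.

```python
from email import message_from_bytes

def gateway_can_inspect(raw_message: bytes) -> bool:
    """Return False when the body is S/MIME ciphertext the gateway cannot read."""
    msg = message_from_bytes(raw_message)
    # application/pkcs7-mime is the content type S/MIME uses for enveloped
    # (encrypted) data; without the recipient's key the payload is opaque.
    return msg.get_content_type() != "application/pkcs7-mime"

# Made-up sample messages for illustration.
plain = b"Content-Type: text/plain\r\n\r\nQuarterly numbers attached."
smime = (b"Content-Type: application/pkcs7-mime; smime-type=enveloped-data\r\n"
         b"Content-Transfer-Encoding: base64\r\n\r\n"
         b"<opaque base64 ciphertext>")

print(gateway_can_inspect(plain))   # True  - scan at the gateway
print(gateway_can_inspect(smime))   # False - defer to the desktop
```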
The Solution(s)
As an advocate for best practice, I give end-to-end encryption a different meaning.
The encryption termination point (one “end”) is a device that peels off the encryption layer, perhaps only temporarily, and allows the previously encrypted payload to be analyzed as defense in depth dictates. If, and only if, the traffic is approved does the firewall allow it to continue. In short, the traffic is decrypted and the defense-in-depth methodology is applied as if the traffic had never been encrypted.
The change in architecture has a cost, namely that “another” device is involved, but the benefits outweigh the costs: the defense-in-depth strategy is enforced while the needed design confidence is maintained.
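A minimal sketch of that termination-point pattern, where decrypt(), inspect(), and reencrypt() are placeholders for whatever SSL/IPsec handling and scanning engines a given device actually uses; they are assumptions for illustration, not a real product’s API.

```python
from typing import Callable, Optional

def terminate_and_inspect(
    wire_bytes: bytes,
    decrypt: Callable[[bytes], bytes],    # peel off the SSL/IPsec layer
    inspect: Callable[[bytes], bool],     # firewall / IDS / AV checks on cleartext
    reencrypt: Callable[[bytes], bytes],  # restore encryption toward the back end
) -> Optional[bytes]:
    cleartext = decrypt(wire_bytes)       # traffic is readable only at this point
    if not inspect(cleartext):
        return None                       # non-compliant traffic is dropped here
    return reencrypt(cleartext)           # only approved traffic continues
```

The same structure covers all three of the device categories below; what changes is where, and whether, the re-encryption step runs.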
The current generation of devices and designs falls into three categories.
The first category terminates encrypted traffic on a front-end device, decrypts and analyzes the traffic in the clear, and, if it is approved, sends it on in the clear to the back-end devices.
Oftentimes this is the approach of the all-in-one vendors, whereby the traffic is terminated on a VPN and/or SSL tunnel, sent to the virus and IDS scanning engines in the device, and then passed into the back-end network in the clear.
The all-in-one device is, in effect, a security proxy. This design has strength in defense in depth, but weakness in defense in exposure, as the information is in the clear for too long.
The second category similarly terminates the traffic on a front-end device, decrypts and analyzes it in the clear, then re-encrypts the traffic on another device and sends it to the back end.
To address the weaknesses inherent in the all-in-one example above, network administrators work around them by creating minimalist networks (which is good) that meet the ideological goal of end-to-end. This approach has similar strength in defense in depth, and marginal (but better) strength in defense in exposure, since the traffic is re-encrypted.
The third category is still evolving in the technology industry today. Here, traffic is decrypted, analyzed, and re-encrypted all on the same device before being sent on.
This design ensures minimal exposure while retaining the multi-tiered security capabilities: the now-familiar strength in defense in depth, plus the highest strength in defense in exposure.
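The differences between the three categories can be restated as data; the exposure notes below simply summarize the qualitative comparison above and are not vendor claims or measurements.

```python
# The three designs, restated against the terminate_and_inspect() sketch
# above. These are the article's qualitative assessments, not measurements.

DESIGNS = {
    "1. front-end decrypts, forwards in the clear": {
        "re_encryption": "none",
        "exposure": "cleartext crosses the entire back-end network",
    },
    "2. front-end decrypts, a second device re-encrypts": {
        "re_encryption": "on a separate device",
        "exposure": "cleartext only on the link between the two devices",
    },
    "3. one device decrypts, inspects, re-encrypts": {
        "re_encryption": "on the same device, in the same memory space",
        "exposure": "cleartext never leaves the inspecting device",
    },
}

for design, traits in DESIGNS.items():
    print(f"{design}: exposure = {traits['exposure']}")
```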
Conclusion
Encrypted traffic cannot be analyzed by a firewall unless it is decrypted first, whether by design or by force.
The same traffic cannot be cleansed of viruses, worm signatures, or attack characteristics (an IIS URL-length overflow, for example) until it is decrypted on the host.
Clearly, traffic should never hit a multi-purpose operating system until after all of this happens.
End-to-end encryption is what we want, but not at the price we would have to pay. What we really want is protection of data during creation, transmission, processing, and storage: End-to-End Defense-in-Depth, which ensures that defense-in-depth best practices are not lost.
Without it, break-ins will increase, not decrease, and we lose again.
Kevin has testified as an expert witness before the Congressional High Tech Task Force, the Chairman of the Senate Armed Services Committee, and the Chairman of the House Ways and Means Committee. He has also served on infrastructure security boards and committees including the Disaster Recovery Workgroup for the Office of Homeland Security, and as a consultant to the Federal Trade Commission.
The Author gives permission to link, post, distribute, or reference this article for any lawful purpose, provided attribution is made to the author and to Information-Security-Resources.com