SSLsplit – Transparent and scalable SSL/TLS interception (roe.ch)
103 points by shayanbahal on Oct 18, 2014 | 34 comments


A few things for everyone to think about. This tool seems to be aimed more at the non-interactive side. There are several other good tools out there which do this, but most of them are not oriented towards performing software assessments. Webmitm works well for HTTP/HTTPS, as does Burp Suite (for transparent, non-proxy-aware stuff). There are a lot of tools in this space. There are very few that will let you interactively debug non-HTTP(S) TCP and SSL.

I released a tool with a colleague back in 2010 designed for assessing non-proxy-aware (mostly non-HTTP(S)) applications. It was aimed at getting at those hard-to-reach TCP and SSL-wrapped TCP apps that other proxies don't let you work with interactively. Mallory does the exact same on-the-fly cert generation, etc., as does Burp Suite, a professional-grade HTTP proxy that can operate in transparent mode. I much prefer Burp to webmitm for day-to-day work.

SSLsplit also supports more NAT mechanisms than Mallory and most other tools (which just tend to be iptables/Linux aware). That is one of the really nice pieces of this code.

We did add a few neat things to Mallory: SSH MiTM that, when it works, lets you open up your own PTYs on the back of the user's SSH session; a GUI that lets you do binary-level regex to play with traffic on the fly; HTTP plugins along with a Chrome extension for hijacking sessions; and some other fun things, which were mostly just demonstrations of what a MiTM proxy can do and be. Making a MiTM proxy protocol-aware can be very powerful. Mallory is still a little buggy and tricky to use, but it has served me well in performing blackbox app assessments for many years.

http://bitexploder.com/BlackHat-USA-2010-Umadas-Allen-Networ... (Check this out if you want an overview of how to configure a system to support a tool like SSLsplit or Mallory.)

https://bitbucket.org/IntrepidusGroup/mallory (reasonably up-to-date code; it's not the easiest thing to set up, though).


Overall, a very well written proxy. Just a few nitpicks from browsing the code:

https://github.com/droe/sslsplit/blob/master/pxyconn.c#L991-...

There is no guarantee that a space comes after the colon. These comparisons are better off one character shorter.
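For illustration, a minimal sketch of the safer comparison (the helper name is hypothetical, not from the sslsplit source): match only up to and including the colon, then skip any optional whitespace:

    #include <string.h>
    #include <strings.h>  /* strncasecmp */

    /* Return a pointer to the header value, or NULL if the line does not
     * match. name includes the colon, e.g. "Content-Length:". */
    static const char *
    header_value(const char *line, const char *name)
    {
        size_t n = strlen(name);
        if (strncasecmp(line, name, n) != 0)
            return NULL;
        line += n;
        while (*line == ' ' || *line == '\t')  /* whitespace after the colon is optional */
            line++;
        return line;
    }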

https://github.com/droe/sslsplit/blob/master/pxythrmgr.c#L12...

pthread_mutex_init can fail.


On Linux, pthread_mutex_init always returns 0. But yes, according to POSIX, the function might fail (the common response is generally to assert() in this case).
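A minimal sketch of that assert() approach (hypothetical names, nothing from the sslsplit tree):

    #include <assert.h>
    #include <pthread.h>

    static pthread_mutex_t mtx;

    static void
    init_locks(void)
    {
        int rv = pthread_mutex_init(&mtx, NULL);
        assert(rv == 0);  /* POSIX allows failure, e.g. EAGAIN or ENOMEM */
    }

Note that assert() compiles away under NDEBUG, so a production build would want an explicit check-and-abort instead.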


On Linux, pthread_mutex_init can fail and return ENOMEM.


No, it cannot, at least on non-obsolete glibc. http://fossies.org/linux/glibc/nptl/pthread_mutex_init.c#l_4...


You should refer to the man pages, not particular implementations.


Isn't it way easier to hijack the victim's DNS requests and redirect the connection to a similar-looking unknown domain which replicates the HTML and form fields of the originally requested website [1]? I think Metasploit can do something similar.

I mean, being in a position to use SSLstrip or SSLsplit or whatever means that you have already gained access to the victim's LAN.

That's why things like DNSSEC or DNSCrypt should be enabled by default on laptops that are often used on free WiFi, at Internet cafés, etc.

The funny thing is that, after all these years, there's no way to implement an ARP-poisoning mitigation policy that will protect all your clients no matter what OS they run (iOS/OSX, Linux/Android, BSD, Windows, embedded, etc.).

[1] What comes to mind is a Ruby Sinatra application which reads the targeted website's output and creates CSS/Haml output on the fly.


If you have a captive audience, DNS hijacking, etc. can still be much more difficult. Think malicious WiFi hotspots, where you can totally dominate everything about the victim's Internet access. But, generally, you are right: these tools require a certain level of access. I have never used one on an internal network assessment. I have done a small amount of ARP poisoning on occasion, which you could then use a tool like sslstrip with, but my needs were different :)


If you hijack DNS and the user tries to visit an HTTPS site, you still need to do some sort of SSL interception.


More than being an SSL interception tool, this is useful for finding privacy leaks, such as Mac OS X Yosemite sharing Spotlight searches with Apple. https://fix-macosx.com/


I'm not sure how this is "scalable". The "scalable" thing about tools like sslsniff was that earlier versions of SSL used a key-exchange mechanism where the MITM could cache the key negotiated between the client and server and passively log the encrypted data paired with the intercepted key. Decryption can then be done offline, in parallel. That scales, and the MITM component is resource-limited only by the rate at which it can exchange packets.

This requires an active attack, so the MITM is doing double duty for an intercepted session, since it has to pin two session keys together for each endpoint and decrypt/encrypt. Scaling that is harder: for larger networks you need processing power roughly proportional to the number of connections per second, and stuff like AES-NI only goes so far. Then you investigate cryptographic coprocessing hardware or GPU acceleration and discover that data-transfer speeds on commodity systems become a problem, and now to "scale" your attack you are building custom hardware that needs both the intense data-plane capacity of high-end switches and routers and the intense processing capability of HPC systems.

It's not fun. It's a hard and interesting challenge to write this kind of software, and I commend the authors for doing so, but I don't think that active attacks "scale", by definition. You're going to wind up with either a ridiculous amount of custom hardware or only intercepting some percentage of traffic.


As a sysadmin, that's horrifying to see exist (though I am sure plenty of blackhats have such tools).


Every second antivirus appliance, "threat management" device, and other traffic-scanning box has had SSL splicing for years. I worked on such a splicer for a large telecom equipment manufacturer back in 2005, and it wasn't exactly blackhat tech even then.


The attacker must install a CA certificate on the users' computers so he can strip/split their SSL/TLS sessions. Frankly, it's sysadmins who use this type of tool the most, for monitoring the browsing habits of employees.


No wonder the NSA puts a great deal of focus on hacking sysadmins. Once those individuals are hacked, the data floodgates open.


Really?

I wrote one of these a while ago, using a fake DNS rather than NAT. It's pretty trivial.

The trick is getting the clients to accept your CA cert.


How is that different from sslsniff/sslstrip or webmitm from dsniff?


Fundamentally it isn't all that different. One neat thing is the multiple NAT mechanisms supported. There are at least half a dozen tools I can think of that do this. But it is a very nice, clean, small, high-performance MiTM proxy written in C.


It would be really nice to see a write-up on ways to prevent an attacker from effectively using this to intercept and decrypt traffic... For example, what are a few measures you could take to better protect a standard nginx or Apache deployment?


Use HSTS. Never use HTTP even to redirect to HTTPS. That is one of the surest ways to prevent users from getting hijacked without knowing it.
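For reference, HSTS is a single response header; the max-age below (one year, in seconds) is just an example value:

    Strict-Transport-Security: max-age=31536000; includeSubDomains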

If you are developing any kind of client application where you control the certificate authorities, throw them all away. Use certificate pinning in the client and reject any connection that presents a certificate other than your known, pinned certs.
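A rough sketch of what that client-side check might look like with OpenSSL, assuming you pin the SHA-256 digest of the whole certificate (PINNED_SHA256 and the function name are placeholders):

    #include <string.h>
    #include <openssl/crypto.h>
    #include <openssl/sha.h>
    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    static const unsigned char PINNED_SHA256[SHA256_DIGEST_LENGTH] = { /* your cert's digest */ };

    /* Returns 1 if the peer's certificate matches the pinned digest. */
    static int
    cert_is_pinned(SSL *ssl)
    {
        X509 *cert = SSL_get_peer_certificate(ssl);
        if (cert == NULL)
            return 0;

        int match = 0;
        int len = i2d_X509(cert, NULL);  /* length of the DER encoding */
        if (len > 0) {
            unsigned char *der = OPENSSL_malloc(len);
            unsigned char *p = der;
            if (der != NULL && i2d_X509(cert, &p) == len) {
                unsigned char digest[SHA256_DIGEST_LENGTH];
                SHA256(der, (size_t)len, digest);
                match = (memcmp(digest, PINNED_SHA256, sizeof(digest)) == 0);
            }
            OPENSSL_free(der);
        }
        X509_free(cert);
        return match;
    }

(Pinning the SubjectPublicKeyInfo instead of the whole cert is friendlier to certificate rotation, but the idea is the same.)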

In a web application, keep an eye out for suddenly changing IP addresses. If the user's IP address for a session changes, terminate the session immediately and force the user to authenticate again. This is a common, if paranoid, measure. Alert the user that their IP address has changed; a security warning is sufficient. Most likely they would be getting this attack in a coffee shop or on some sort of untrusted Internet connection, so you have to tell them MiTM may be happening and they should be on alert. Defending from the server side is hard.
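The server-side check itself is trivial; a sketch (the session struct is hypothetical):

    #include <string.h>

    struct session {
        char bound_ip[46];  /* big enough for an IPv6 text address */
        /* ... other session state ... */
    };

    /* Nonzero means: terminate the session and force re-authentication. */
    static int
    session_ip_changed(const struct session *s, const char *current_ip)
    {
        return strcmp(s->bound_ip, current_ip) != 0;
    }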


How are you supposed to get them to HTTPS the first time if they type in http://site? I'd suggest rephrasing that to "use HSTS even if you also redirect HTTP to HTTPS". Of course it's not technically secure, but are you willing to annoy (and lose) users until browsers only use HTTPS? If you expect visitors only via links, then I suppose you could do away with HTTP, but I don't think any business that cares more about money than about making the world a better place (any public company and most private companies) would want to stop accepting connections on port 80 today.

Binding a session to a specific IP might be more secure for the typical case, but it will annoy the living crap out of fringe, but legitimate, users. As already mentioned, Tor users would bear most of the pain and suffering, but if your service supports Multipath TCP or something like it, a simplistic "store the connecting IP with the session" approach seems like it would cause those users to suffer too, although not as much.

It's also possible the client might be in a corporate environment with corporate SSL (MITM) proxies, and maybe there's a cluster of them so the public client IPs change for different connections. So you'd have to bind sessions to networks, not individual IPs, but what network size do you use? Then even odd MITM attacks that don't use the client's real public IP could use another IP on the same subnet as the real client to connect to the remote service, so how much do you actually gain?

If session cookies are httponly and secure, what's the reason for binding sessions to IPs?

If it were just Tor, you could set up a hidden service, and detect clients that are connecting from Tor exit IPs and redirect them to the hidden service. But it's not just Tor.


Binding a session to an IP is just a defense in depth measure. On its own it doesn't buy you a lot, but it is something and it can matter in a small set of circumstances. I would not recommend this as a generalized approach, but I was trying to answer the question.

Also, httponly and secure don't help you against a MITM that is ripping apart your SSL.

Also, if they type in http://, they simply don't get anything. If you serve even one thing over http://, you give a tool like sslstrip the foothold it needs. Again, this is appropriate for a subset of applications and situations, not a broad, consumer-oriented site.


If the MITM proxy can rip apart SSL, then SSL is broken (or the client trusts bad CAs). What's an example of where pinning an SSL session, one that's been established securely, to a client IP prevents an attack? If SSL is broken, trying to restrict sessions to the IP that initiated them is inadequate security, isn't it?

(Arguably it's useful to associate a session with a client IP for unencrypted HTTP connections, since in that case cookies can be intercepted passively on the wire and an attacker can try to reuse those cookies from clients with different IPs; knocking the real client off the LAN to steal that IP might not be desirable because it could be detected. But a basic assumption with properly functioning, secure SSL connections is that no third party can steal session cookies to begin with, other than due to a client or server compromise, which IP restrictions won't help with.)

What you're mitigating against seems to be only this: an attacker capable of MITMing SSL sessions and capable of stealing session cookies passively from legitimate sessions, but only able to establish connections to the real server from some other public IP, not the client's actual public IP. I don't understand how that could be the case.

Serving a blank page or rejecting connections to port 80 still doesn't help, does it? The security vulnerability appears whenever the client even attempts a port 80 connection, since a MITM proxy will happily accept it and MITM it to the real https site. Once a client gets the HSTS header that attack is prevented, but the first contact is always a risk, and I don't see how refusing to support http on the server mitigates that.


"Also, if they type in http://, they simply don't get anything"

If I were writing an SSL strip style tool, I'd make sure it listened on port 80 and 443, and then even if the end-site didn't support http, I'd still pick up the http connection and forward it on to the real site via HTTPS.

I don't see the value in not providing an HTTP redirect. A MITM can still pick up an HTTP connection, even if you're not listening for it yourself...


Because the user's browser will remember that it should be HTTPS. With HSTS, the user's browser will refuse to go to HTTP at all. That's how HSTS works: the browser remembers that the site has the HSTS setting.

HSTS also prevents overriding the bad-certificate warning. It is like anti-sslstrip. HSTS is harder to get around than this :)


I know how HSTS works. I was responding to the claim that offering an HTTP service which redirects to HTTPS somehow reduces security, even when HSTS is in use.


> In a web application, keep an eye out for suddenly changing IP addresses. If the user's IP address for a session changes, terminate the session immediately and force the user to authenticate again.

If you're on a mobile device, this may happen often. If you are connected to WiFi and travel out of range, you will transition to cellular data, with a new IP. I frequently walk to my sister's house, which is ~30 seconds away, looking at a website or app on my phone. During this walk I will transition from my WiFi, to cell, then to her WiFi. It also doesn't protect against your example threat of having your session stolen in a coffee shop, as all users behind the same NAT will have the same public IP, so the server can't distinguish between them.


It is certainly a trade-off. A consumer-friendly site/application could never get away with this. In a high-sensitivity application you can often get away with it (think large-corporation admin-type applications, AWS admin when a user is at his desktop, etc.).


Note that binding sessions to a particular IP is not recommended if your site is designed to be Tor-friendly, as their IP rotates every few minutes.


The conventional wisdom is that it's not possible to detect MITM attacks other than by using some trusted path to validate the credentials from the other end. But that's not quite true. When an attacker decrypts with one key and re-encrypts with another, the encrypted bit stream changes. Both ends now have different encrypted bits. If they can somehow compare them, a MITM attack can be detected.

One early secure telephone unit displayed a 2-digit number derived from the beginning crypto bits. Users were supposed to confirm, by voice, that both units showed the same number. An attacker would have to break into the voice stream and substitute voice words in a matching voice to prevent that detection.

This illustrates what's possible. It's possible to force a MITM attacker to do considerable work (perhaps an arbitrarily large amount of work) to prevent both ends from comparing notes on what crypto bits they have. More than that, the endpoints can force the attacker to have to construct an arbitrarily complex lie in the form of impersonation of content.

David Chaum (of DigiCash fame) claims to have developed a way to detect MITM attacks along these lines. See U.S. Patent Application #20060218636.

The USPTO rejected the patent application, because of prior art from Microsoft (U.S. Patent # 7,475,421) for "Automatic Re-Authentication", which is about recovering disconnected Microsoft Terminal Server sessions securely and is limited to re-connection. So Chaum's approach seems to be unencumbered by patents. However, his explanation is almost incomprehensible. There's a thesis that explains it, though.

http://www.cisa.umbc.edu/papers/theses/newton-thesis-2010.pd...

Even that is heavy going. Here's a simple explanation of the concept.

If both ends can compare some of their crypto bits, they can detect a naive MITM attack that's decrypting and re-encrypting at the transport level. A smarter MITM attack would detect and rewrite that exchange of info, even if it occurred at a higher protocol level, like HTTP. However, the effort required by the MITM attacker can be made very large, and the MITM attacker can be forced to introduce delay.

An example approach would be to send an HTML or XML document which contains an item showing the N crypto bits that both ends should be seeing. Wrap the entire document with an item which has a cryptographic hash of the entire document, and send the hash BEFORE sending the document content. If the attacker just sends the HTML/XML through unchanged, the attack will be detected. If they change the item containing the N crypto bits, the hash will be wrong and the attack will be detected. If they generate a fake document with suitable crypto bits and hash, they have to do so before they've seen the entire content of the actual document. If they buffer up the entire document so they can modify the document, they introduce delay equal to the transmission time for the entire document. It should be possible to detect that delay.

For concreteness, suppose the server does the SSL/TLS handshake, then sends the HTTP headers, which contain the cryptographic hash of the material to follow. This is done as fast as possible. The server then sends most of the rest of the document. But at the end of the document, it pauses until 3 seconds have elapsed from receipt of the HTTP request before sending the last hundred bytes or so. Most of the document will still render, although "onload" Javascript won't trigger yet.
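A rough sketch of that server-side timing trick (send_all() is a hypothetical write-loop helper, not a real API; the constants follow the description above):

    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define HOLDBACK     100   /* bytes withheld until the deadline */
    #define DEADLINE_SEC 3.0   /* measured from receipt of the request */

    void send_all(int fd, const void *buf, size_t n);  /* placeholder write loop */

    void
    send_timed_response(int fd, const char *hash_hdr,
                        const unsigned char *body, size_t len,
                        struct timespec t_request)
    {
        /* Headers first, containing the hash of the full body. */
        send_all(fd, hash_hdr, strlen(hash_hdr));

        /* Most of the document goes out immediately. */
        size_t head = len > HOLDBACK ? len - HOLDBACK : 0;
        send_all(fd, body, head);

        /* Hold the tail until DEADLINE_SEC after the request arrived. */
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        double elapsed = (now.tv_sec - t_request.tv_sec)
                       + (now.tv_nsec - t_request.tv_nsec) / 1e9;
        if (elapsed < DEADLINE_SEC)
            usleep((useconds_t)((DEADLINE_SEC - elapsed) * 1e6));

        send_all(fd, body + head, len - head);  /* the last ~100 bytes */
    }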

At the receiving end, the browser only allows the server 2 seconds from HTTP request to headers received. If the server can't keep up, the page will be treated as untrusted.

This provides considerable MITM protection. An attacker must be able to fake page content to defeat this.


It's an interesting idea, but it fails on slow connections, network problems, etc. As an attacker, I might even start by sending a request for something trivial (a favicon or something) before I receive anything.

Also, as an attacker, the effectiveness of your mitigation depends on what it is I want; by this point I have some plaintext and know what protocol you're using.


You can still detect that you are being intercepted by checking fingerprints and/or the green bar https://www.grc.com/ssl/ev.htm ... can't you?


You will detect it even sooner: the root certificate that signs the certificates for the MITM is not recognized by your browser (unless you manually installed it). You will get a nice, big, red warning on every modern browser. The tool doesn't claim otherwise. One useful example is testing your own app that communicates using SSL and seeing what's under the hood. In that case, you would install the root certificate manually so the app (or OS) wouldn't reject it.


It strikes me that this tool works as (or is perhaps designed to be?) a proof-of-concept for how things are done by those agencies that keep showing up in the news.

They will simply have (by hook or by crook) several of the root-cert private keys for the already-installed roots. There are a few dozen of them, and any one will do. Just fry up forged MITM certificates as needed. I doubt the target would even notice the tiny delay during the handshake.

As you say, no one checks fingerprints. I think fingerprints would only help if you had visited the site before without a MITM, and if your SSL layer flagged changes under ALL circumstances, remembering that the fried-up MITM cert will look 100% legit.

I wonder if there is a privacy-protecting way to share certs/IDs to detect these changes? (Or is there one already?)



