When Vendors Attack! Film at 11

By Alex on January 7, 2007, 7:25 am

I’m sorry to interrupt the performance post series, but this seems to warrant a timely response.

Before I go any further, I should note that once upon a time I was deeply involved in the webapp security community. As an engineer at a small MSSP in ’02 and ’03, I contributed to OWASP, led one of their main projects, and participated in the associated discussions. I’ve audited web software for security flaws and worked to secure new and existing systems. These days, my involvement in the security world is reduced to reading interesting papers from the various conferences and my occasional trawl of CiteSeer. I have tremendous respect for the security community and many of the smart and talented people I had a chance to work with in those days.

But all is not right with the world of web app security. Di Paola and Fedon’s paper is an amalgam of other people’s research (response splitting) and a sprinkling of idiomatic JavaScript. When it can get to the front page of Slashdot with “Web 2.0 is falling!” billing, it only feeds the FUD flames. Pablum dressed up as revolution is disturbing. When it’s widely read, it’s an urban legend in the making.

Here’s what Di Paola and Fedon tried to side-step:

  1. Response Splitting attacks aren’t that common (no, really)
    • The scariest bits of the presented paper require a complicit, b0rken proxy.
    • Mitigating the threat therefore means fixing the proxies, not the clients. This is comparatively good news, as it implies fewer nodes must be upgraded to remove the immediate threat. This matters to everyone interested in mitigating and managing risk (not eliminating it).
  2. The fundamental root-of-trust issue here is still an XSS attack. If you are subject to an XSS, the same-origin policy already ensures that you’re f’d. An XSS attack is the “root” or “ring 0” attack of the web. This is the fundamental weakness of the web’s security model today, and one that is difficult to solve (e.g., fixing it requires upgrading all clients). That there are problems associated with being rooted should surprise no one.
  3. Characterizing the replacement of existing functions as a “design flaw” in JavaScript is comical. The assumption is malicious code running in the same execution scope as the code being attacked (see #2), and that problem isn’t made tractable by disallowing redefinition. Even if JavaScript didn’t allow it, any environment that allows event handlers to be registered at runtime would suffice, and since there is no way (in current JS) to determine whether code is “valid”, the jig would still be up. Just register a malicious onreadystatechange handler (see the sketch after this list). The only change is that you might have to target applications more narrowly.
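
To make that last point concrete, here is a minimal sketch (not from the paper; the names are hypothetical) of what code that already shares your execution scope can do by simply wrapping the native XMLHttpRequest. No “design flaw” beyond ordinary assignment is required:

```js
// Illustrative sketch only: assumes hostile code is already running in the
// page (i.e., you've been XSS'd). It wraps the native constructor so every
// response passes through the attacker before the application sees it.
var NativeXHR = window.XMLHttpRequest;
window.XMLHttpRequest = function () {
  var xhr = new NativeXHR();
  var realSend = xhr.send;
  xhr.send = function () {
    var appHandler = xhr.onreadystatechange;   // handler set by the app
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) {
        // Exfiltration point; sendToAttacker() is a hypothetical helper.
        // sendToAttacker(xhr.responseText);
      }
      if (appHandler) { appHandler.apply(xhr, arguments); }
    };
    return realSend.apply(xhr, arguments);
  };
  return xhr;
};
```

The sketch assumes the application assigns onreadystatechange before calling send(), which is the common pattern, but the point stands either way: once hostile code shares your scope, no restriction on redefinition saves you.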

What really makes me sad, though, is that the work of folks like H.D. Moore, Thor Larholm, and Jeremiah Grossman gets lost in the noise when chaff like this is published. By not providing an honest evaluation of the real-world potential of a threat vector, the authors of a paper like this create a sort of seismograph that can’t tell magnitudes, only the number of things shaking. Without magnitude information, an instant market is created for people to stand on rooftops and yell down how bad it is (or, in this case, how bad it could have been had they not been valiantly standing there).

Threat information is only valuable when there is enough data about it to manage and mitigate risk. Yes, security problems are real, and web app security problems aren’t going away any time soon, but without level-headed analysis of the threat vectors, the real-world risk profiles, and the root of trust that is being attacked, there is very little reason for clients to view the security community as anything but a freakish collection of opportunists, wolves, and disillusioned techno-utopianists. Accurate data builds trust, and trust builds a relationship that allows you to effectively mitigate risk. It’s high time the security industry developed a code of ethics that prevents FUD-slinging. OWASP could even lead the way, although I suspect there’s not a chance in hell of that happening.

The view from the roof is pretty good, after all.

Comments

  • Pingback: Ajaxian » Subverting Ajax

  • I need to read Grossman’s latest post a bit more carefully… but I’m confused (perhaps you can clarify for me)…

    His latest seems to be a different stance than his previous post on WhiteHat’s site. Before, he and WhiteHat effectively stated that “Ajax presents no inherent insecurity” (link here: http://www.whitehatsec.com/home/resources/articles/files/myth_busting_ajax_insecurity.html)…

    Now, in his latest post on his blog, he says “Web browser security is broken. Completely shattered.”

    I understand there’s a distinct difference in topics here between the Ajax paradigm not being inherently flawed and commenting on the browser’s security, but yes, this recent commentary on browser insecurity seems to be more FUD than facts and data. Thanks for providing some more insight with your post.

  • Bart Melton

    The prototype overwriting mentioned in the article has one big fundamental flaw that many security articles share: how do you get the code onto the page in the first place? If you corrupt the server, it will likely be quickly discovered and fixed (and why not just corrupt the original JS functions?). If you have the power to inject things like this from the client side, on a mass scale, then the client has already been compromised (trojan/spyware) in a way that makes writing code like this irrelevant, since there are other/better ways to hijack the same information. The only real injection vector here is questionable sites to begin with, or sites with very poor security measures (allowing people to insert script tags into comments, etc.). In that case, there is no need to subvert the XHR when the “bad people” most likely control the original code to begin with.

    “since there is no way (in current JS) to determine if code is “valid””

    Actually, you can. Run alert(document.writeln) on a test page; you get “function writeln() { [native code] }” in all browsers (IE, Gecko, and Opera all work, with some whitespace variations). So you could write a verification function that checks that [functionName].toString() returns the string above. If it is custom code, toString() returns the source of the code instead.
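
    A minimal sketch of the check being described (looksNative is a hypothetical name):

    ```js
    // Does the function still stringify like a native one?
    function looksNative(fn) {
      // Native functions serialize to roughly
      // "function writeln() { [native code] }" (whitespace varies by browser).
      return /\{\s*\[native code\]\s*\}/.test(fn.toString());
    }

    looksNative(document.writeln);   // true on an untampered page
    looksNative(function () {});     // false: toString() returns the source
    ```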

  • Hey Mark,

    So briefly, what Jeremiah is pointing out is that the current coarse-grained control that the same-origin policy creates is failing us. It’s the entire reason that XSS and CSRF exist and are dangerous. Like buffer overflows before it, this is the long, slow, rumbling train that punishes those who don’t practice defense in depth, boundary filtering, and other reasonable risk mitigation strategies.
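
    A tiny illustrative sketch of why the coarse-grained policy fails us: the browser will happily send an authenticated cross-origin request from any page you happen to be viewing (the URL below is hypothetical).

    ```js
    // Classic CSRF: the victim's cookies ride along with the request, and the
    // same-origin policy only restricts reading the response, not sending it.
    var img = new Image();
    img.src = "https://bank.example/transfer?to=attacker&amount=1000";
    ```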

    The sad fact of the matter is that complexity is eating us alive. The halting problem prevents verification, and there’s not a damn thing we can do about it (yet). The inability to comprehend systems and the top-to-bottom risks, not to mention shifting environments, makes a lot of these symptoms worse. We’re in for a long, hard slog and Jeremiah is just telling everyone what they already suspect: you don’t understand the code you’re running, and never will. Prepare to be rooted.

    Regards

  • Bart,

    Surely you must have realized that you can implement your own toString function on the hijacked function object?
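
    A quick illustrative sketch (assuming a browser that exposes XMLHttpRequest.prototype):

    ```js
    // The hijacked function can simply lie about its own source.
    var realOpen = XMLHttpRequest.prototype.open;
    XMLHttpRequest.prototype.open = function (method, url) {
      // ...record method/url, rewrite the target, etc...
      return realOpen.apply(this, arguments);
    };
    // Defeat a toString()-based check by returning a native-looking string.
    XMLHttpRequest.prototype.open.toString = function () {
      return "function open() { [native code] }";
    };
    ```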

    Regards

  • Bart Melton

    Alex,

    Yes, you could implement your own toString function; you could also sniff that toString for the same “native” result as Function.toString. Really, though, you just need to run a for/in over the properties of the object: native prototype functions don’t show up in the iteration, but if “evil hacker” overwrites Function.toString, it does.

    But then, if “evil hacker” has the power to inject code that hijacks the XHR object, it would be just as easy to overwrite any functions in “the good code” at the same time. Basic rule of JavaScript: last one in wins. So it really doesn’t matter much.
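
    A tiny sketch of “last one in wins” (looksNative is the same hypothetical checker as above):

    ```js
    // Whatever detection the page ships, a script that runs afterwards can
    // simply redefine it before it is ever called.
    function looksNative(fn) {
      return /\[native code\]/.test(fn.toString());
    }
    // ...later, from injected code:
    looksNative = function () { return true; };
    ```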

  • Alex,

    Thanks for your past work on the OWASP Guide.

    However, the work that PDP Architect (here: http://www.gnucitizen.org/), the folks at Slackers (here: http://sla.ckers.org/forum/), and RSnake (here: http://ha.ckers.org) are up to means that ring 0 (XSS) is utterly broken with respect to cross-domain restrictions. Honestly, it’s only a matter of time before serious criminals write a decent worm based upon this work.

    To say that XSS is a browser issue ignores why it occurs: a lack of output encoding by devs. It’s too hard for programmers to understand when and how to encode output. There is nothing wrong with HTML and other sequences being part of an input stream, but if you don’t properly output encode EVERYWHERE, HTML injection results, and once that happens, XSS and CSRF follow.
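
    (A minimal sketch of what “output encode everywhere” means in practice; the helper and variable names below are illustrative, and real applications should use a vetted encoding library.)

    ```js
    // Illustrative only: HTML-encode untrusted data before it hits markup.
    function encodeForHTML(value) {
      return String(value)
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
    }
    // Without the encoding step, a name like "<script>...</script>" becomes live markup.
    commentNode.innerHTML = "Posted by " + encodeForHTML(userSuppliedName);
    ```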

    The paper is groundbreaking because it highlights a few new techniques built on top of existing work. It doesn’t require a site to be Ajax-based; it just uses Ajax techniques to achieve its results.

    Folks who write Ajax toolkits, like yourselves, need to pay attention to the output of serious researchers like Amit Klein and these folks (and the so-called grey hats like RSnake and the slackers). Not taking it seriously means that your users (the devs) and their users (us!) will be at risk.

    We’re in the process of updating the Guide to be more relevant, and we welcome folks who have lots of real-world Ajax experience to come help us make it the best it can be. This does not mean pooh-poohing our results, for I have many examples of poor Ajax programs; it means that we need and want constructive input.

    Thanks,
    Andrew

  • Andrew,

    The PDF (and other plugin) flaw is quite bad. What I object to is the way it’s been dressed up in this paper. It doesn’t need it.

    Yes, we can yell till we’re blue in the face about the instantaneous threats (and they are important, don’t get me wrong), but the lack of perspective leads to a distorted atmosphere where it’s impossible to tell what’s really at stake and how it’s really being given up.

    XSS is the problem, and now we’ve got a new vector for it. Why isn’t that important enough? Why go putting the “response splitting” tutu on it and parading it around with trumped-up language?

    Regards