Security in Ajax

September 25, 2008

Security in Ajax web applications is of growing importance. While the client-server model is very useful for architecting web applications, the web's security model is not client-server, but rather client-deputy-server. Understanding this security model is important for building secure web applications, and it is becoming even more important as we build mashups and web applications that utilize cross-site resources.

In a client-server model, the client acts on behalf of the user, and the server trusts the client to the degree that the user is authorized. In the client-deputy-server model, the deputy (the browser) acts on behalf of the user but treats the client (the web page and its JavaScript) with suspicion, taking responsibility for some aspects of security and limiting client-to-client interaction. By understanding the mechanisms of the deputy's boundaries, servers can appropriately participate in the security model, trusting the browser to act on behalf of the user. We will look at how to secure resources from being accessed by the wrong clients, and how to protect clients from malicious server code.

Protecting Resources

The first concern is how to protect server resources from rogue web pages. Most sites perform manual authentication and use cookies to maintain the authentication state, which is completely appropriate. Protected resources are only accessible if a cookie provides a token proving that the user has a given identity and the authority to retrieve the resource. However, browsers will send cookies regardless of which site initiated the request, so cookies should be understood to represent authentication, not authorization. If cookies alone are used to authorize resource access, protected resources can be utilized by other web sites: another site can spoof a request, and the browser will automatically attach the cookies. This is called a cross-site request forgery (CSRF) attack. There are a number of ways to protect against CSRF, usually through explicit token validation.

Explicit Token Validation

With explicit token validation, the server requires that the client provide validation of the authenticated session that is not spoofable, in order to protect against CSRF. One way to do this is double submission of cookies: the JavaScript reads the cookie value for the validation token (usually the session id) and includes that value explicitly in the request. Other sites cannot access the cookie through JavaScript, so this cannot be spoofed. Another approach is for the server to embed an explicit secret token in the web page to use for requests. Once again, other sites will not know this secret token and will be unable to spoof requests.

Applying the double cookie submission technique to Ajax requests is easy in Dojo. Because all Ajax/XHR requests go through dojo.xhr(), we can augment this function to always add an extra header containing the value of the session id cookie:

dojo.require("dojo.cookie"); // need to access cookies
var plainXhr = dojo.xhr; // save the standard XHR handler
dojo.xhr = function(method,args,hasBody) {
  args.headers = args.headers || {}; // make sure there is a headers object
  // Here we get the cookie session id and put it in a header
  // J2EE servers use "JSESSIONID", PHP uses "PHPSESSID"
  args.headers["X-Session-Verify"] = dojo.cookie("JSESSIONID");
  return plainXhr(method,args,hasBody); // fire the standard XHR function
};

Now, on the server, we should implement a check verifying that the X-Session-Verify header has the same value as our session id. If an unsafe request arrives without the X-Session-Verify header, or with the wrong value, it should be rejected, as it may have originated from a different site.
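A minimal sketch of that server-side check, written as framework-agnostic Node-style JavaScript (the header and cookie names match the client code above, but how you actually read headers and cookies depends on your server stack):

```javascript
// Minimal sketch of the server-side CSRF check. The header and cookie names
// match the Dojo client code above; reading them is stack-specific.
function isVerifiedRequest(headers, cookies) {
  var token = headers["x-session-verify"]; // HTTP header names are case-insensitive
  var sessionId = cookies["JSESSIONID"];
  // Reject when the header is missing or does not match the session cookie.
  return Boolean(token) && token === sessionId;
}
```

Unsafe (state-changing) requests that fail this check would typically get a 403 response.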

Referer Header Checked Validation

Referer header checked validation is an alternative to explicit token validation. With this approach, after authentication, the server validates authorization by using the cookie in combination with the Referer header to verify that the request was made from the correct web site. This technique must be used with great care: there are a number of exploits and ways to forge the Referer header. With a proper understanding of these security holes, however, it is possible to defend against them and use this technique.

First, the Referer header should only be used in combination with another authentication scheme (cookies or HTTP authentication); otherwise an HTTP request can easily be forged. Flash and Internet Explorer's XMLHttpRequest allow Referer header modification. XMLHttpRequest is only usable for same-origin requests, and Flash can be kept to same-origin requests if no cross-domain access is allowed via a crossdomain.xml file. Consequently, if you are using Referer-based validation, you should not allow Flash cross-domain access. As long as the same-origin site always has at least the same default authorization level as any other site (which is almost always the case), it is viable to use Referer header validation with proper consideration of these issues.

An important advantage of Referer-checked validation is that, unlike explicit token validation, it requires no extra action by the web page, since browsers add Referer headers automatically. An important caveat, however, is that you cannot verify requests that lack a Referer header; such requests could be from any web site and therefore cannot be trusted. If a user has turned off Referer headers in their browser, the site must either switch to explicit token validation or alert the user to the situation and ask them to turn Referer headers back on before granting access to resources.
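A hedged sketch of the server-side Referer check (the names here are illustrative, not from the article): allow a request only when the Referer header is present and its origin exactly matches a whitelist entry. Exact origin matching avoids the classic prefix-matching pitfall (e.g. https://example.com.evil.net passing a check for example.com):

```javascript
// Illustrative Referer whitelist check: reject absent or foreign-origin
// Referer headers. ALLOWED_ORIGINS would hold your own site's origin(s).
var ALLOWED_ORIGINS = ["https://example.com"];

function isAllowedReferer(referer) {
  if (!referer) {
    return false; // no Referer: could be from anywhere, so reject or fall back
  }
  var match = /^(https?:\/\/[^\/]+)/.exec(referer); // extract scheme://host
  return Boolean(match) && ALLOWED_ORIGINS.indexOf(match[1]) !== -1;
}
```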

JSON Hijacking

When explicit token validation is not used, Ajax requests may be spoofable and applications may be vulnerable to CSRF attacks. However, some applications use Ajax only for requests without side effects. In these situations it is not absolutely necessary to use explicit validation to protect against spoofed Ajax requests, but it may still be critical to prevent other sites from accessing protected resources (the responses to those requests). With JSON data, there are situations where these resources may be accessible through a technique called JSON hijacking: a rogue site can overwrite the Array constructor, request resources from your site via a script tag, and access the results. Several things to be aware of with JSON hijacking:

  • It is only possible to hijack JSON data whose root is an array. When the root is a primitive, no constructor is triggered. When the root is an object, the text is not valid JavaScript as a bare script and therefore cannot be parsed; no amount of environmental alteration can affect an unparseable script.
  • JSON hijacking is only a threat for resources that are protected solely through cookies or HTTP authentication.
  • JSON hijacking can be averted by explicit token validation. If you are using robust validation schemes and not relying solely on cookies for authorization to protected resources, you don’t need to worry about hijacking.

One approach for combating JSON hijacking is JSON prefixing, which involves simply prefixing all JSON data with {}&&. This renders the JSON syntactically invalid as a bare script, so it cannot be hijacked, yet the prefix does not affect the evaluation of the JSON as an expression. Another technique, commented JSON, requires that the client strip comments before evaluation and unfortunately introduces other security problems; with prefixed JSON, the client does not need to take any special measures. Prefixed JSON looks like:

{}&& ["some","json","data"]

The only alteration clients might need is when JSON validation is performed (Crockford's JSON library does validation): the validator would need to ignore the prefix. Dojo, however, does not perform JSON validation and therefore needs no changes to handle prefixed JSON. Prefixed JSON prevents JSON array hijacking, does not introduce any security concerns, and does not require any processing by clients. Still, this is somewhat of a hack, and I would strongly recommend using proper security measures (like explicit token validation) to authorize requests rather than relying on JSON modifications.
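The two properties of prefixed JSON can be demonstrated directly: as a bare script it is a syntax error ({} parses as an empty block, leaving && with no left operand), while in expression context ({} is a truthy object literal) evaluation yields the data unchanged:

```javascript
// Prefixed JSON: unhijackable as a script, transparent as an expression.
var prefixed = '{}&& ["some","json","data"]';

function evalPrefixedJson(text) {
  // Wrapping in parentheses forces expression context, as Dojo's JSON
  // evaluation does: ({} && [...]) evaluates to the array.
  return eval("(" + text + ")");
}
```

A bare eval(prefixed), by contrast, throws a SyntaxError, which is exactly what stops a hijacking script tag from executing it.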

HTTP Authentication

HTTP authentication is an alternative to manually processing authentication with a form; the browser's implementation of HTTP authentication can also maintain the authentication state. HTTP authentication is not commonly used because the authentication is done with a browser-provided UI, which does not allow for a themed interface or for additional buttons/UI options such as account sign-up and password recovery.

HTTP authentication follows the same rules as cookies when other sites access cross-site resources, and has the same CSRF vulnerability: if your site accesses your resources with HTTP authentication, another site can access those resources without needing to re-authenticate, because the browser remembers the initial authentication and provides the credentials regardless of the domain of the requester. Unfortunately, HTTP authentication information lives in a request header and cannot be read the way cookies can, so double cookie submission is not possible as an explicit token validation. If you are using HTTP authentication and want to limit cross-site access to your resources, you must use Referer header checked validation, or create a manual scheme for sending explicit tokens to the client.

The advantage of HTTP authentication is that you don’t need to build a user interface and it is a standards-based approach to authentication. This can be very beneficial for pure web services. HTTP authentication also provides a means for secure authentication in a non-SSL connection.

Enabling Other Sites to Access Your Resources

So far we have looked at how to prevent other sites from interacting with and accessing our resources, but there are certainly situations where we do want others to access our resources. We will now look at how to do this in a safe, controllable manner. There are several ways that other sites can safely access your data:

  • Proxied – The other site's server can effectively act as a proxy, fetching your resources server-to-server on behalf of its own Ajax clients.
  • JSONP – The other site may use a script tag with a callback parameter to retrieve your resources. You may put the JSON data in a callback to fulfill their request.
  • Cross-Site XHR – The W3C proposal for cross-site XHR (which is partly implemented by IE8’s XDomainRequest).
  • window.name – This is a new technique we have developed for securely loading cross-site data.
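The JSONP option can be sketched from the provider's side (the names here are illustrative): the server wraps the JSON payload in the callback name the consumer supplied. The callback name must be whitelisted, or a hostile "callback" parameter could inject arbitrary script into the response:

```javascript
// Illustrative JSONP response builder: wrap JSON data in the consumer's
// callback, rejecting callback names that could carry injected script.
function jsonpResponse(callbackName, data) {
  if (!/^[\w.$]+$/.test(callbackName)) {
    throw new Error("Invalid callback name");
  }
  return callbackName + "(" + JSON.stringify(data) + ");";
}
```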

Some of the resources that you wish to provide to other sites may be public, accessible to anyone; in these situations you may use any of these techniques freely. However, you may have protected resources that you conditionally provide to some sites if the user authorizes the access. The naive approach is for the requesting site to ask the user for their username and password. This is terrible for security, since the requesting site then holds the user's full credentials and has access to all of the user's data on the providing site. Much more secure techniques are available.

For example, suppose a fitness site can develop a workout plan for you based on the data from your last medical physical. When a user visits the fitness site, it retrieves information about the user's last physical (blood pressure, heart rate, etc.) from a medical records site, where the records are protected resources unavailable to the public. The most secure way to grant the fitness site access is to follow the principle of least privilege: only the blood data and body measurements are shared with the fitness site, with all other information withheld. The user gains privacy by not having to let the fitness site access his entire medical history. There are several options for negotiating access to the protected resource.

OAuth
The OAuth protocol can be used to negotiate access to a specific resource from the server without providing unlimited access. Both the fitness site and the medical records site must implement OAuth in order to use managed resource access. Once both sites implement this protocol, the fitness site can access the needed information without demanding full user credentials, and the medical records site can allow protected access to resources without forcing insecure distribution of user credentials.

JSONP Resource Protection

JSONP can be used in any browser, and an alternate form of fine-grained controlled resource access is available with it. Since JSONP is carried out by loading a script, web service providers can perform their own resource authorization without requiring cooperation from the consumer. When a site requests a resource using JSONP, the web service can return a script that first triggers a popup confirming that the user wants to allow the site to access the resource; after authorization, the script calls the callback function to provide the resource to the site. The web service may utilize OAuth for this authorization, but since it has JavaScript capabilities on the client, it can implement both the client and server sides of the OAuth negotiation, or use an alternate technique.
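A rough sketch of that authorization-wrapping response (hypothetical names; production code would carefully escape the site name and validate the callback): instead of returning the data directly, the service returns a script that asks the user first and only then hands the data to the consumer's callback.

```javascript
// Illustrative authorization-wrapping JSONP response: the returned script
// asks the user via confirm() before invoking the consumer's callback.
function authorizingJsonpResponse(callbackName, requestingSite, data) {
  return "if (confirm('Allow " + requestingSite + " to access this resource?')) {" +
    callbackName + "(" + JSON.stringify(data) + ");" +
    "}";
}
```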

window.name Resource Protection

The window.name protocol is most conducive to fine-grained resource access control, since it utilizes an iframe that can be properly sandboxed by the browser and allows intuitive in-page authorization interaction. This technique is described in detail in a separate article.

Protecting the Web Page

The previous discussion has been about how a web service can protect its resources. Now we will look at how a web page can protect itself from cross-site web services. JSONP is an efficient, cross-browser way to request resources from another site. However, JSONP works by loading a script from the target web service, and that script normally has unrestricted access to everything in the web page: it can access the requesting site's cookies and has full ability to manipulate the DOM.

Using XHR and loading the data as text is more secure; however, only newer/unreleased browsers support cross-site requests with XHR (or XDR). Another option is to use proxied requests: if your server will proxy requests, XMLHttpRequests can be sent to the origin server, which then requests the resources from other servers.

However, if you are using JSON data, more precautions are still necessary to request data from other servers securely. JSON data is parsed using eval, which allows arbitrary code execution, so the data should be validated to ensure that it contains only data, not executable code. Crockford's JavaScript library includes a JSON validator for these situations. JSON validation can alternately be performed on the server when using a proxy, and Dojo Secure can also validate JSON prior to evaluation.
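Validate-before-eval can be sketched with the RFC 4627 safety check popularized by Crockford's json2.js: neutralize escape sequences, replace strings and literals with a harmless token, and only eval if nothing but JSON punctuation is left over.

```javascript
// Crockford-style JSON safety check before eval: if anything other than
// JSON punctuation survives the token reductions, the text may contain
// executable code and is rejected.
function safeEvalJson(text) {
  var cleaned = text
    .replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, "@")
    .replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, "]")
    .replace(/(?:^|:|,)(?:\s*\[)+/g, "");
  if (/^[\],:{}\s]*$/.test(cleaned)) {
    return eval("(" + text + ")");
  }
  throw new SyntaxError("Unsafe JSON: possible executable code");
}
```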

Subspace
Subspace is a technique for sandboxing JSONP requests so that the loaded script cannot interact with the requesting site. Subspace works by creating two iframes, one from the same site as the requester and one from a different subdomain. A closure is passed to the first iframe, and then both iframes modify the document.domain property to a common domain. The closure is then passed to the second iframe, which makes the JSONP requests. This second iframe does not have access to the parent page because it is not in the same domain; however, the closure that was passed through can act as a mediator. This allows the browser to directly and securely access resources from other sites without using a proxy. However, Subspace is very complicated and has additional DNS/host name requirements as well.

Loading Cross-Site/Untrusted JavaScript

In some situations, the requesting page may want to load actual executable code/scripts and not just data. Widgets are a great example of objects loaded from another site that include JavaScript. In these situations, protecting the web page is more complicated because standard JavaScript has unrestricted access to its environment. However, it is possible to use a subset of JavaScript that limits its capabilities in a closely controlled object-capability model.

There are two main projects that aim to achieve a safe, controllable, object-capability form of JavaScript: Google Caja and ADsafe. Google Caja works by compiling JavaScript, rewriting most operations and actions to provide hooks for the Caja runtime to check, on every action, that the script does not exceed its set of capabilities. ADsafe instead defines a subset of JavaScript that prohibits operations that could violate its granted capabilities: the subset does not allow access to |this|, global variables (except those in a whitelist), the [] subscript operator, or a number of properties. ADsafe's approach has the advantage that it requires only validation, not compilation, and can therefore operate much faster and more efficiently.

Object-capability validation can be performed in conjunction with an XHR request (cross-site from the browser, or through a proxy): the XHR request retrieves the text of the script, the capability validator checks that it is compliant (there are no illegal references), and then the script is evaluated with eval. I am working on a compact ADsafe validator for Dojo.
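The retrieve-validate-eval flow can be illustrated with a deliberately oversimplified toy validator (the real ADsafe check is a full parse via JSLint, not a regex): reject scripts that mention a few banned capabilities, then evaluate the rest.

```javascript
// Toy subset validator, for illustration only: the real ADsafe check parses
// the script; this version merely rejects a few banned tokens before eval.
var BANNED = /\bthis\b|\beval\b|\bwindow\b|\bdocument\b|\[|\]|__proto__/;

function loadUntrusted(scriptText) {
  if (BANNED.test(scriptText)) {
    throw new Error("Script rejected by subset validator");
  }
  return eval(scriptText);
}
```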

In order for ADsafe scripts to be useful, they must be given sufficient privileges to carry out their tasks. In the case of widgets, the scripts must be given access to a limited part of the DOM. Unfortunately, it is not safe to simply pass a DOM element to a script: every DOM element has references to its parent elements, and a script could easily walk up the DOM tree to access any node on the page. Therefore, a DOM facade API must be created that safely allows an ADsafe script to access only a subset of the DOM.
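A sketch of the DOM facade idea, with a hypothetical API: wrap an element in an object exposing only a narrow, safe surface. Notably there is no parentNode, so sandboxed code cannot walk up the tree, and only text (never markup) can be written.

```javascript
// Hypothetical DOM facade: sandboxed code sees only setText/getText, with no
// way to reach parentNode, ownerDocument, innerHTML, or event hooks.
function makeDomFacade(element) {
  return {
    setText: function (text) {
      element.textContent = String(text); // data only, never HTML
    },
    getText: function () {
      return element.textContent;
    }
    // deliberately omitted: parentNode, ownerDocument, innerHTML, ...
  };
}
```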

Dojo Secure

Dojo 1.2 includes a new framework, Dojo Secure, for securely loading scripts, data, or widgets, validating the safety of the code, and providing secure access to the DOM. Dojo Secure provides an end-to-end security system that utilizes a registry for defining server support for secure loading mechanisms, uses an ADsafe-style subset to ensure only safe JavaScript can be executed, and includes a set of secure library functions and secure DOM access. It provides all the client-side tools necessary for building secure mashups that load untrusted widgets or data with proper protection.


  • Psy

    Great job. This article is incredibly useful as a quick look at the most-used techniques for protecting data queries. Just a quick question: does Dojo implement some kind of simple validation out of the box, or is the idea to implement it only when you need it?

  • Dojo Secure includes a safe JavaScript validator. You can use this validator directly, or, when you use the secure loader, it will be applied automatically.

  • Neville

    Nice article. Do you know if Dojo is planning formal support for OAuth, i.e., APIs for OAuth request signing, token management, etc.?

  • seoeun

    Great article. Just one question: if the system is distributed and clustered, what can serve as the explicit token instead of the session ID? The session ID could change if the request goes to another server in the distributed system.
