A significant amount of work on JavaScript toolkits and frameworks has centered around trying to fix, normalize, and optimize browser implementations. Doing so requires making many assumptions about what the problems are, how our tools will be used by developers, and what we expect of the future.

The assumptions made often turn out to be wrong. What’s worse is that these choices may prove to be correct for a very long time before coming back to bite us. During this period of blissful ignorance, toolkits can become tremendously popular and become a vital part of large, complex codebases.

Event Bubbling and Event Delegation

Event bubbling allows events originating from a child node to “bubble up” through its ancestors. This behavior led JavaScript developers to the loose design pattern of identifying the node we actually care about receiving events from – typically via CSS selector syntax – and then adding the listener to an ancestor of that node.
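A minimal sketch of the lookup at the heart of this pattern: starting from the node the event originated on, walk up the parent chain looking for a match, stopping at the node the listener was attached to. In a browser the predicate would typically be a CSS selector test such as `node.matches(selector)`; here it is a plain function, and plain objects stand in for DOM nodes, so the sketch stays self-contained.

```javascript
// Walk from the event's origin up toward the listener's node, returning
// the first ancestor (or the origin itself) that matches, or null.
function findDelegateTarget(target, root, matches) {
  for (let node = target; node; node = node.parentNode) {
    if (matches(node)) return node;
    if (node === root) break; // don't search above the listener's node
  }
  return null;
}

// Mock nodes standing in for <ul><li><span>…</span></li></ul> — a click
// on the <span> should be attributed to the enclosing <li>:
const ul = { tagName: 'UL', parentNode: null };
const li = { tagName: 'LI', parentNode: ul };
const span = { tagName: 'SPAN', parentNode: li };

findDelegateTarget(span, ul, n => n.tagName === 'LI'); // → the li node
```

The listener itself would be registered once on the `ul`, and every event bubbling through it would be run through this lookup.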

Once this pattern made its way into toolkits, a number of assumptions had to be made when designing the APIs we have today, originally revolving around both performance and efficiency.

Event delegation is one of the de facto ways of doing event handling. But is it the right methodology for all projects? In fact, the better question might be whether the assumptions each toolkit has made are right for your project. Knowing whether an API is right for your project depends on knowing what assumptions these tools are built on and understanding how each toolkit has interpreted them.


Let’s look at some assumptions that might be made in thinking through how to efficiently manage DOM events.

The native event registration mechanism is too slow

Unless you can come up with a secondary reason for an API to exist, do not create a new API. With the effort browser vendors are putting into their runtimes, it is all but guaranteed your implementation will one day be slower than the native one. At SitePen, we had a project that relied on the speed of an array splice. We discovered that in some cases manually downshifting indexes and the array length could yield a significant speed improvement, but we had no way of targeting a specific browser, browser version, or platform, and no practical run-time feature test to determine whether our implementation was faster than the native API.
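For concreteness, here is a sketch of the kind of hand-rolled alternative alluded to above: removing one element by shifting later indexes down and shrinking the array length, instead of calling `Array.prototype.splice` (which also allocates and returns an array of the removed elements). The details are illustrative, not SitePen's original code.

```javascript
// Remove the element at `index` in place, without splice():
function removeAt(arr, index) {
  for (let i = index; i < arr.length - 1; i++) {
    arr[i] = arr[i + 1]; // shift each later element down one slot
  }
  arr.length -= 1; // truncate the trailing duplicate
  return arr;
}

removeAt(['a', 'b', 'c', 'd'], 1); // → ['a', 'c', 'd']
```

Whether this beats the native call depends entirely on the engine and the workload, which is precisely the problem: the answer changes out from under you as runtimes improve.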

New native APIs will not emerge

Work carefully to gather enough information to fall back to a native implementation, either as it exists now or as it could conceivably exist in a perfect world. Another term for this is “future proofing”. In some cases you may end up with an API that has more required parameters than strictly necessary, but if that guarantees an easy transition to a significantly better native API, do it. A good example is querySelectorAll: browsers eventually shipped native support for an API that many developers had assumed would never arrive.
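The future-proofing idea can be sketched as a wrapper that prefers the native implementation when the environment provides it and falls back to a library routine otherwise. The fallback is passed in as a parameter here purely to keep the sketch self-contained and testable; real toolkits shipped their own selector engines as the fallback.

```javascript
// Prefer the native query API when present; otherwise use the fallback.
function query(selector, root, fallback) {
  if (typeof root.querySelectorAll === 'function') {
    return root.querySelectorAll(selector); // native path, once it exists
  }
  return fallback(selector, root); // pre-querySelectorAll path
}

// Mock environments standing in for a modern and a legacy browser:
const modern = { querySelectorAll: sel => ['native:' + sel] };
const legacy = {};

query('li', modern, sel => ['fallback:' + sel]); // → ['native:li']
query('li', legacy, sel => ['fallback:' + sel]); // → ['fallback:li']
```

Because the wrapper's signature already mirrors the eventual native API, callers did not have to change when the native path lit up.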

There is no performance penalty for uncommon use cases

Event delegation may manifest itself in several ways; the two outlying situations are a small number of events on a large number of nodes versus a large number of events on a small number of nodes. If you optimize the API for one of these outliers, you may create significant bottlenecks for the other. With event delegation, while we may only ever have to add an event listener to a single node, a complicated method of identifying the nodes where callbacks should fire can carry a disproportionate performance cost. This is especially true when a large number of events fire very quickly, such as mouse movement or scroll events.
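One common mitigation for the high-frequency case is to throttle the delegated handler so the potentially expensive matching logic runs at most once per interval rather than on every mousemove or scroll event. A minimal throttle sketch (a production version would usually also schedule a trailing call so the final event is not dropped):

```javascript
// Return a wrapped version of fn that runs at most once every ms milliseconds.
function throttle(fn, ms) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= ms) {
      last = now;
      fn.apply(this, args); // preserve this and arguments for the handler
    }
  };
}

// Usage sketch: wrap the expensive delegated handler before registering it,
// e.g. parent.addEventListener('mousemove', throttle(handler, 100));
```

This does not make the selector matching cheaper; it only bounds how often you pay for it.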


Conditions and context

When considering event delegation, it is easy to think that we only need to concern ourselves with user interaction. This could lead us to assume that nodes are always part of a document and then ask, why wouldn’t we just add a single event handler to the document object? But DOM events aren’t always the result of user interaction – we also have synthetic, custom, and loading events. If the nodes we want to listen to are not in the document yet, but the main listener is on the document object, we will never be notified. And if it is unclear from the API that the listener has been added to the document and not one of the passed parameters, it can be baffling to understand why this is happening.
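A toy model makes the failure mode concrete: bubbling only visits a node's ancestors, and a node that has not yet been appended has an ancestor chain that never reaches the document. Plain objects stand in for real DOM nodes here so the model runs anywhere.

```javascript
// Collect the chain of nodes an event bubbling from `target` would visit.
function bubblePath(target) {
  const path = [];
  for (let node = target; node; node = node.parentNode) path.push(node);
  return path;
}

const doc = { name: 'document', parentNode: null };
const attached = { name: 'div', parentNode: doc };  // already in the document
const detached = { name: 'div', parentNode: null }; // created, not yet appended

bubblePath(attached).includes(doc); // true  — a document-level listener fires
bubblePath(detached).includes(doc); // false — it never hears about this event
```

Any event dispatched on the detached node before it is appended is simply invisible to a listener registered on the document.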

Abstraction is required

If a toolkit were to offer an API for event handling that only supported delegation – requiring both a parent node and a selector to identify child nodes – there would be no way to add an event listener directly to a node. Even requiring CSS selector syntax imposes higher-order functionality where another selector syntax, or a simple predicate function, could serve just as well.
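Loosening the selector requirement can be as simple as normalizing the "match" argument: accept either a CSS selector string or a plain predicate function, so callers are not forced into selector syntax. The `toMatcher` name is hypothetical; `node.matches` is the standard DOM API, faked on the mock node below so the example runs outside a browser.

```javascript
// Normalize a selector string or predicate function into a predicate.
function toMatcher(match) {
  if (typeof match === 'function') return match; // already a predicate
  return node => typeof node.matches === 'function' && node.matches(match);
}

const byFunction = toMatcher(node => node.tagName === 'LI');
const bySelector = toMatcher('li');

// A mock node whose matches() recognizes only the 'li' selector:
const mockLi = { tagName: 'LI', matches: sel => sel === 'li' };

byFunction(mockLi); // true
bySelector(mockLi); // true
```

With this shape, a delegation API can support selectors without requiring them, and a direct-listener API falls out as the degenerate case where the predicate always returns true.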

Side effects will not happen

As we saw above, DOM event bubbling is what allows the event delegation pattern to exist in the first place. But the full specification also allows event bubbling to be canceled. Your implementation may involve passing the callback a custom event with a no-op stopPropagation method, or you may simply document that this can be a problem and limit the utility of your event delegation API. Both approaches have drawbacks, and if you decide to do something like attach the event handler to the document object, you amplify the side effects by adding many intermediate layers at which the event can be canceled.
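A toy simulation of bubbling with cancellation illustrates the side effect: a stopPropagation() call anywhere between the event's origin and the delegation root silently suppresses the delegated handler. This models the DOM event system rather than using it, so it runs anywhere; real DOM dispatch also has capture and target phases this sketch ignores.

```javascript
// Dispatch an event from `target` upward, running each node's handlers,
// until the chain ends or some handler calls stopPropagation().
function dispatch(target, handlers) {
  const event = { stopped: false, stopPropagation() { this.stopped = true; } };
  for (let node = target; node && !event.stopped; node = node.parentNode) {
    (handlers.get(node) || []).forEach(fn => fn(event));
  }
  return event;
}

const root = { parentNode: null };
const child = { parentNode: root };

let delegatedFired = false;
const handlers = new Map([
  [child, [e => e.stopPropagation()]],        // e.g. a widget's own handler
  [root, [() => { delegatedFired = true; }]], // our delegated listener
]);

dispatch(child, handlers);
delegatedFired; // false — the delegated listener never ran
```

Nothing in the delegated listener's own code is wrong; it is starved by a handler it knows nothing about, which is what makes this class of bug so hard to diagnose.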


Once code has been written, it is tempting to “set it and forget it”. But each year browsers improve in ways we cannot predict, and the assumptions we held when the code was written may prove to have been wrong despite our best efforts.


Why are you choosing event delegation for your project?

  • Is the native implementation too slow? That’s unlikely in modern browsers.
  • Are there better APIs to perform event delegation? Not yet – if you need event delegation, this is a good pattern.
  • Does the toolkit’s performance optimization match what your project needs? If it’s focused on an outlier, it may not.
  • Is there something about the toolkit’s implementation that won’t work for your project? Read the documentation; such limitations are usually noted.
  • Are there side effects? You may not discover them until you run into a bug, so keep the possibility in the back of your mind.

Because all design patterns risk becoming anti-patterns as people learn them without learning the assumptions made during their creation, you should ask these same questions for any new tool you employ in your project. Be especially careful if what you are doing seems like it is cutting a corner. With care and thoughtfulness, your projects will be the shining monuments you know they can be.