As we create and improve open source software and build applications for our customers, we’re constantly looking for ways to improve the software we write. Part of that is sifting through an often dizzying array of proposed and emerging standards to find those that feel efficient and ready for use. Here we’ll explore five emerging web standards that we’ve started using, or are strongly considering using, in future work.

CSS Variables / Custom Properties

Web engineers have been using variables to create and manage complex systems of CSS for over ten years, and they continue to be one of the main features driving demand for CSS preprocessors like Sass, LESS, and Stylus. Used well, they can greatly increase the maintainability of large codebases by standardizing and consolidating all values used for colors, fonts, padding, etc. Over time, preprocessor variables have converged on a set of shared features, by design or convention:

  • They are prefixed: e.g. with $ or @, to prevent conflicts with existing CSS keywords.
  • They are scoped: $bgColor defined within .container will be available in .container > .child, but not vice-versa.
  • They can be overridden:
    	$bgColor: blue;
    	.container {
    		$bgColor: red;
    		background-color: $bgColor; /* will be red */
    	}

Native CSS variables (or “custom properties”) have adopted all of these conventions, making the switch to native support easy and intuitive. CSS variables must be prefixed by two dashes: --, they are scoped to the selector in which they are defined and are inherited by its descendants, and they may be overridden within those descendants. For example:

:root {
	--bgColor: periwinkle;
}

.container {
	background-color: var(--bgColor); /* will be periwinkle */
}

.container .child {
	--bgColor: lime;
	background-color: var(--bgColor); /* will be lime */
}
There are also a few obvious differences in this example between CSS variables and preprocessor variables:

  • CSS variables must be wrapped in var() when used.
  • The prefix, --, is different from any prefix used in existing preprocessors, so the two can be used in tandem.
  • CSS variables must be defined within a selector, so the closest thing to a “global” scope is :root.
  • Preprocessors are unaware of the DOM, so they rely on nesting for inheritance. The values of CSS variables inherit down the DOM tree the same way any other inherited value does.

The other significant difference between preprocessor variables and CSS variables is not obvious in the above code: since they aren’t compiled down to static values, CSS variables may be updated in the browser at run-time. This means CSS variables can be read and written from JavaScript for use in calculations or animations, such as building animated accordions.
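
A minimal sketch of that interplay, assuming an element styled with var(--bgColor) as in the example above (the helper functions are hypothetical names, not a standard API):

```javascript
// Read a custom property: values resolve through the cascade,
// including anything inherited from ancestor elements
function getBgColor(element) {
	return getComputedStyle(element).getPropertyValue('--bgColor').trim();
}

// Write a custom property: any rule using var(--bgColor) on this
// element or its descendants updates immediately
function setBgColor(element, color) {
	element.style.setProperty('--bgColor', color);
}
```

Calling a setter like this on every frame of an animation loop is what makes effects like animated accordions possible.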

CSS variables are supported in Firefox, Chrome, and Safari, with partial support in the current version of Edge.

CSS Modules

Variables are not the only concept that has leaked from JavaScript to CSS. Especially in recent years, JavaScript developers have been turning their eye to CSS organization and, in an extension of Atwood’s Law, thinking “I could do that better.” There are some good reasons to believe that:

  • Unlike JavaScript, CSS classes all exist within a single global namespace.
  • Resolving conflicting styles is brittle and prone to unexpected behavior in large, compiled stylesheets, and is often only solved by escalating specificity.
  • Styles “leak” down the DOM hierarchy, and can break child element styles in unexpected ways.
  • Managing styles with JavaScript allows CSS rules to be based on run-time logic.

React in particular pioneered the idea that inline styles, controlled through JavaScript, could be the answer to all those problems (maybe not styles leaking to children, but it does reduce the number of cases where styles would conflict). However, it comes with its own set of drawbacks:

  • Pseudo-classes (e.g. :hover or :focus) are easily accomplished with CSS, but must be faked with JavaScript
  • Media queries are labor-intensive to recreate in JavaScript
  • Inline styles lose the ability to override with greater specificity, since they are already at the top of the specificity hierarchy
  • Toggling classes, CSS variables, and functions like calc() already solve most problems around dynamic styles
  • Performance: DOM weight is a thing, and CSS can be cached

CSS Modules are in some ways the CSS developer’s comeback to JavaScript developers intruding on their turf: a way to address criticisms and improve stylesheets without doing away with them entirely. CSS modules essentially boil down to locally-scoped CSS files that may be imported into JavaScript, and compile to unique class names.

For example, this:

import * as css from 'css/buttonComponent.css';

const buttonHTML = `<button class="${css.root}">${buttonText}</button>`;

/* css/buttonComponent.css */
.root {
	background-color: #ffffff;
	color: blue;
	border: 1px solid blue;
}

Would compile to something like this:

<button class="buttonComponent_root_abc2718">Button with modular CSS</button>
.buttonComponent_root_abc2718 {
	background-color: #ffffff;
	color: blue;
	border: 1px solid blue;
}

Since the classes contained in buttonComponent.css are locally scoped and in a clearly named CSS file, there is no longer any need for specific class names like .button. Instead, the recommended format is to use a single standardized “root” class name like .root or .normal, and then state-specific class names like .error, .success, or .disabled, all of which may be conditionally applied with JavaScript.
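
To make that conditional application concrete, here is a sketch that simulates the object a CSS Modules compiler hands back to JavaScript (the hashed class names are hypothetical):

```javascript
// Stand-in for the compiled output of buttonComponent.css;
// a real compiler generates these hashed names automatically
const css = {
	root: 'buttonComponent_root_abc2718',
	error: 'buttonComponent_error_def4669',
	disabled: 'buttonComponent_disabled_ghi7713'
};

// Apply the standardized root class, plus an optional state class
function buttonClasses(state) {
	return [css.root, state && css[state]].filter(Boolean).join(' ');
}
```

buttonClasses('error') yields both the root and error class names, while buttonClasses() yields the root class alone.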

CSS modules also solve the problem of brittle style overrides by discouraging the use of multiple classes in favor of composes. The composes keyword is similar to preprocessor directives like Sass’s @extend, except instead of compiling the styles together in the CSS, composes returns multiple namespaced class names in a predictable order. So, for example:

.root {
	display: inline-block;
	padding: 10px;
	color: blue;
}

.success {
	composes: root;
	color: green;
}

import * as css from 'css/buttonComponent.css';

// css.success = 'buttonComponent_root_abc2718 buttonComponent_success_pi3141'
const buttonHTML = `<button class="${css.success}">${buttonText}</button>`;

Which renders as:

<button class="buttonComponent_root_abc2718 buttonComponent_success_pi3141">Button with green text</button>

CSS also has one more powerful tool to increase the modularity of its styles: the all keyword. Separate from CSS modules, it can be used to reset all properties to their initial state, e.g. with .root { all: initial; }. Since this is a new CSS property rather than a pattern that relies on a build tool like Webpack or Browserify, support is still lacking in IE and Edge.

Async / Await / Can make your code great

JavaScript has long Promised to improve the handling of asynchronous code, with increasing success and the occasional (caught) error. A callback to the days before promises might look like the following nested cone of doom:

doAsyncFunction(function(result) {
	doSecondAsyncFunction(result, function(resultTwo) {
		doThirdAsyncFunction(resultTwo, function(resultThree) {
			// and so on…
		}, catchError);
	}, catchError);
}, catchError);

With promises, the above could be simplified into a chained set of .then() calls with a final .catch() at the end:

doAsyncFunction()
	.then(result => doSecondAsyncFunction(result))
	.then(resultTwo => doThirdAsyncFunction(resultTwo))
	// and so on…
	.catch(catchError);

The chained syntax is clearly cleaner and more fetching than the earlier pyramid of passed-in callbacks and error handlers. With async functions, however, developers need no longer await the day when writing asynchronous code will be as clear and intuitive as synchronous code. The initial example using async/await would look like this:

(async function() {
	try {
		const result = await doAsyncFunction();
		const resultTwo = await doSecondAsyncFunction(result);
		const resultThree = await doThirdAsyncFunction(resultTwo);

		return resultThree;
	} catch (error) {
		catchError(error);
	}
})();
Each await will pause code execution until its promise is resolved, and the whole set of code can be wrapped in a try/catch block, just as with synchronous code. The most important points to remember are that await may only be used inside an async function designated by that keyword, and the async function itself is asynchronous and will not block surrounding code.

While the triple promise example shows how three promises can be executed one after the other, Promise.all and Promise.race can be used in conjunction with async/await to run them concurrently:

async function doAsyncStuff() {
	const [resultOne, resultTwo, resultThree] = await Promise.all([
		doAsyncFunction(),
		doSecondAsyncFunction(),
		doThirdAsyncFunction()
	]);

	// do stuff with resultOne, resultTwo, and resultThree
}
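
Promise.race, by contrast, settles as soon as the first of its promises settles, which makes it useful for timeout guards. A sketch, assuming a hypothetical timeout() helper rather than any built-in:

```javascript
// Hypothetical helper: a promise that rejects after ms milliseconds
function timeout(ms) {
	return new Promise((resolve, reject) =>
		setTimeout(() => reject(new Error('timed out')), ms));
}

// Resolves with the promise's value, or rejects if the timer wins the race
async function withTimeout(promise, ms) {
	return Promise.race([promise, timeout(ms)]);
}
```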

Technically, await does not even need to be passed a promise, since it will wrap any non-promise value in Promise.resolve. Together with the fact that any promise resolution is pushed to the end of the call stack, this can result in some slightly odd but fun possibilities:

async function getAnswer() {
	const answer = await 42;
	console.log(`The answer to life, the universe, and everything is ${answer}`);
}

getAnswer();
console.log('Vogons blow up Earth');

// will log:
// "Vogons blow up Earth"
// "The answer to life, the universe, and everything is 42"

blockingElements / inert

Managing focus has been, and continues to be, a poorly-solved problem for any developer who has needed to create a modal and cares about accessibility. Focus should never be allowed to enter hidden or obscured DOM elements, but creating the proper behavior is painful and labor-intensive. The two basic options are to listen to the focus event and hijack focus whenever it attempts to leave the modal, or to manually remove all non-modal elements from the focus order by setting tabindex="-1". Both solutions usually end up relying on a large, fragile DOM query for focusable elements somewhere in their code.


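Such a query might look like the following sketch (the selector list is illustrative; real-world versions tend to accumulate more cases over time):

```javascript
// Illustrative list of "focusable" selectors; actual plugins vary
const FOCUSABLE_SELECTOR = [
	'a[href]',
	'button:not([disabled])',
	'input:not([disabled])',
	'select:not([disabled])',
	'textarea:not([disabled])',
	'[tabindex]:not([tabindex="-1"])'
].join(', ');

// Remove everything outside the modal from the tab order
function trapFocus(modal) {
	document.querySelectorAll(FOCUSABLE_SELECTOR).forEach(el => {
		if (!modal.contains(el)) el.setAttribute('tabindex', '-1');
	});
}
```
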
Even after focus has been dealt with, hidden sections should ideally have aria-hidden set to true, so they will not be read by assistive technology like screen readers.

Two specification proposals would solve the modal problem entirely: inert and blockingElements. The first, the HTML attribute inert, would remove a DOM tree from the focus order (as if all focusable elements received tabindex="-1"), as well as hiding it from assistive technology.

blockingElements would do the almost exact opposite: expose a stack of “blocking elements” that would effectively make all other DOM trees inert. As an example, if I were to have the following DOM structure:

<body>
	<div class="content">
		<div class="modal" inert> Modal content </div>
		<p> Other content, including links/buttons/etc </p>
	</div>
	<div class="sidebar"></div>
</body>

To open the dialog, I would remove the inert attribute and call document.$blockingElements.push(document.querySelector('.modal')), which would render not only direct sibling trees inert, but also siblings of parents and ancestors.
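
Put together, opening and closing the modal could be sketched like this (both APIs are proposals, so this assumes a polyfill; the function names are ours):

```javascript
// Sketch of the proposed inert attribute + $blockingElements stack
function openModal(modal) {
	modal.removeAttribute('inert');
	// Everything outside the modal's tree becomes inert
	document.$blockingElements.push(modal);
}

function closeModal(modal) {
	document.$blockingElements.remove(modal);
	modal.setAttribute('inert', '');
}
```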

Both inert and blockingElements are still proposals, so they are not natively supported in any current browser. However, polyfills are available for both that allow them to be used now.

IntersectionObserver

Watching for elements to scroll into view has long been the province of a proliferation of scrolling plugins using some sort of (hopefully throttled) scroll event listener. Now, the IntersectionObserver allows developers to create an observer with options and a callback to watch for elements scrolling into view with only vanilla JavaScript.

The IntersectionObserver is similar to other DOM observers like the MutationObserver, in that you create it with a callback and options, then call .observe on a DOM element. A simple implementation that updates every time the element’s intersection with the viewport increases by 10% might look like this:

const observerOptions = {
	root: null,
	rootMargin: '0px',
	threshold: [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
};

function intersectionCallback(entries) {
	entries.forEach((entry) => {
		const percent = Math.floor(entry.intersectionRatio * 100);
		console.log(`Element ${entry.target.id} scrolled ${percent}% into view`);
	});
}

const observer = new IntersectionObserver(intersectionCallback, observerOptions);

The options object consists of the following properties:

  • root: the element containing the scroll area (falls back to the document’s viewport if null)
  • rootMargin: can grow or shrink the area around the root used to compute intersections
  • threshold: takes an array of numbers indicating at which percentages of the target element’s visibility the callback should fire. E.g. [0, 0.5, 1] would fire when the element passes the 0%, 50%, and 100% visibility marks.

To cease observing a particular element (advisable if the callback is only needed the first time the element is scrolled past, or if it’s being used to watch a large number of elements scrolling into and out of view), simply call observer.unobserve(myElement);. To disconnect the entire observer, call observer.disconnect();.

A good use case for using unobserve() with IntersectionObserver would be a script to lazy-load images as they scroll into view:

function onImageIntersect(entries, observer) {
	entries.forEach(entry => {
		// Swap in the real source once the image scrolls into view,
		// assuming it is stored in a data-src attribute
		entry.target.src = entry.target.dataset.src;
		// The image only needs to load once, so stop watching it
		observer.unobserve(entry.target);
	});
}

const observer = new IntersectionObserver(onImageIntersect);

document.querySelectorAll('img').forEach(img => observer.observe(img));

Browser support is still trickling in, but it can be used without a polyfill in Chrome, Firefox, and Edge. IE and Safari lack native support at this time.