Augmented Reality on the Web in 2019

May 21, 2019

Augmented Reality (AR) interweaves digital information and media with our experience of the real world. In recent years Augmented Reality has become prominent in the consumer space in two major formats: head-mounted displays such as the Microsoft HoloLens and the Magic Leap, along with more widely available experiences on mobile devices. Mobile applications typically take hold of the device’s camera and impose digital artifacts onto the device’s viewport. Popular examples of mobile-based Augmented Reality include Pokémon Go and Snapchat Lenses, both of which meld the digital world with the tangible world.

Native mobile applications have begun to see some mainstream AR success, but the Web has been slower to embrace the technology. Traditionally the Web has not provided native Augmented Reality functionality, so work has focused on marker-based approaches implemented in client libraries. Marker-based tracking orients and places objects by identifying specific patterns (markers) in the scene, and many of the established JavaScript libraries take this approach to achieve AR.

There has been some recent traction in native Web-based approaches to Augmented Reality, arising from the 2016 push for native Virtual Reality (VR) functionality. The movement towards WebVR started with a report by the W3C, which later led to implementations in browsers like Firefox and Chrome in 2017.

The efforts behind WebVR and WebAR have since merged under the Immersive Web Community Group, and work is now moving toward a new API called the WebXR Device API. With this brief history of Augmented Reality on the Web, let’s explore the current state of affairs for developers in 2019.

Marker Based AR with JavaScript Libraries

One of the most popular AR libraries is ar.js, which has amassed over 12,000 stars on GitHub. Before going too deep into ar.js, it is important to understand its dependency on ARToolKit, a long-established and popular native cross-platform Augmented Reality library written in C/C++. ar.js wraps an Emscripten port, artoolkitjs, which is required as an ambient global dependency and must be loaded first. ARToolKit itself is no longer actively maintained, although a fork, artoolkitX, remains active. The state of support is therefore not intuitive or straightforward, but it is sufficient today to make AR possible in a browser.

ar.js supports three.js and A-Frame as rendering targets for Augmented Reality. Three.js is a full 3D graphics library for JavaScript, providing an imperative API for constructing 3D scenes on top of WebGL. A-Frame is a framework for building Virtual Reality experiences that takes a declarative approach using HTML, custom elements, and the DOM. Here is an example that looks for a Hiro marker (the default marker type):


  <!DOCTYPE html>
  <html>
    <head>
      <style>
        body {
          margin: 0px;
          overflow: hidden;
        }
      </style>
    </head>

    <body>

      <script src="https://jeromeetienne.github.io/AR.js/aframe/examples/vendor/aframe/build/aframe.min.js"></script>
      <script src='https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js'></script>

      <script>ARjs.Context.baseURL = 'https://jeromeetienne.github.io/AR.js/three.js/'</script>

      <!-- Create an A-Frame scene and enable tracking with ar.js -->
      <a-scene embedded arjs='trackingMethod: best;'>

        <!-- Create an A-Frame anchor for our 3D models and detect the marker -->
        <a-anchor hit-testing-enabled='true'>

          <!-- Provide a rotating Box and Torus Knot -->
          <a-box position='0 0.5 0' material='opacity: 0.5; side: double; color: red;'>
            <a-torus-knot radius='0.26' radius-tubular='0.05'>
              <a-animation attribute="rotation" to="360 0 0" dur="5000" easing="linear" repeat="indefinite"></a-animation>
            </a-torus-knot>
          </a-box>

        </a-anchor>

        <a-camera-static/>

      </a-scene>
    </body>
  </html>

ar.js also provides an interface for working with three.js in a similar manner. Overall ar.js is a great library, but it has a couple of known issues. For example, although ar.js is published on npm, it is not bundled as a module and so does not currently work well with module bundlers like webpack or Rollup; it must be loaded in a script tag and used as a global.
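In practice, that means the three.js flavour of ar.js is loaded via ordered script tags before your own code, much like the A-Frame example above. A minimal sketch is shown below; the `vendor/three.js` path is a placeholder for wherever you host three.js, and the ar.js build URL follows the project's GitHub Pages layout (check the ar.js repository for the current path). The `THREEx` globals named in the comment are the entry points the ar.js three.js build exposes.

```html
<!-- three.js must be present first: ar.js expects the THREE global -->
<script src="vendor/three.js"></script>
<!-- The ar.js three.js build then attaches its API to the THREEx global -->
<script src="https://jeromeetienne.github.io/AR.js/three.js/build/ar.js"></script>
<script>
  // Your application code can now use globals such as
  // THREEx.ArToolkitSource and THREEx.ArToolkitContext
</script>
```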

THREE AR is a new library I created which builds on ar.js and artoolkitjs to provide a modern library written in TypeScript. THREE AR bundles artoolkit along with the camera parameter binary data that artoolkit requires. Encapsulating these two items means that THREE AR has no external ambient dependencies other than three.js. THREE AR also takes a Promise-based approach, rather than passing around callbacks, allowing for more modern coding styles (for example, you could use it with async/await). Here is an example of how to create a basic scene using a simple pattern marker, in this case a Hiro marker (the default marker):


  import * as THREEAR from "threear";
  import * as THREE from "three";

  const renderer = new THREE.WebGLRenderer({
      antialias: true,
      alpha: true
  });

  renderer.setClearColor(new THREE.Color('lightgrey'), 0);
  renderer.setSize( window.innerWidth, window.innerHeight );
  renderer.domElement.style.position = 'absolute';
  renderer.domElement.style.top = '0px';
  renderer.domElement.style.left = '0px';
  document.body.appendChild( renderer.domElement );

  // Initialise the three.js scene and camera
  const scene = new THREE.Scene();
  const camera = new THREE.Camera();
  scene.add(camera);

  const markerGroup = new THREE.Group();
  scene.add(markerGroup);

  const source = new THREEAR.Source({ renderer, camera });

  THREEAR.initialize({ source: source }).then((controller) => {

      // Add a torus knot
      const geometry = new THREE.TorusKnotGeometry(0.3, 0.1, 64, 16);
      const material = new THREE.MeshNormalMaterial();
      const torus = new THREE.Mesh( geometry, material );
      torus.position.y = 0.5;
      markerGroup.add(torus);

      const patternMarker = new THREEAR.PatternMarker({
          patternUrl: 'patt.hiro', // the URL of the hiro pattern
          markerObject: markerGroup,
          minConfidence: 0.4 // The confidence level before the marker should be shown
      });

      controller.trackMarker(patternMarker);

      // run the rendering loop
      let lastTimeMilliseconds = 0;
      requestAnimationFrame(function animate(nowMsec){
          // keep looping
          requestAnimationFrame( animate );
          // measure time
          lastTimeMilliseconds = lastTimeMilliseconds || nowMsec-1000/60;
          const deltaMillisconds = Math.min(200, nowMsec - lastTimeMilliseconds);
          lastTimeMilliseconds = nowMsec;

          // call each update function
          controller.update( source.domElement );

          torus.rotation.y += deltaMillisconds/1000 * Math.PI
          torus.rotation.z += deltaMillisconds/1000 * Math.PI
          renderer.render( scene, camera );
      });

  });

Lastly, awe.js is a library from awe-media which offers similar functionality to ar.js. Unfortunately, awe.js does not appear to have been updated in the past two years. Interestingly, awe.js also has an example of interfacing with ARToolKit, although in a different form: it uses a JavaScript port of an ActionScript port of ARToolKit (port-ception!). Another feature awe.js supports is location-based markers, using device sensors to position an object.

Looking to the Future: WebXR

The future of native Augmented Reality in the browser is promising, if arguably difficult to follow. The focus at the moment is ongoing work on the WebXR specification, which supersedes the WebVR browser API specification shipped in 2017. The advent of ARCore (Android) and ARKit (iOS) sparked the push to bring AR to the Web, and a new specification was developed to align AR and VR interests.

Ultimately the WebXR specification aims to bring Virtual Reality, Augmented Reality, and Mixed Reality to the browser. The WebXR API requires capable hardware, such as ARCore-compatible devices or the Microsoft HoloLens.

At the moment there are some examples of how to use the WebXR API. Mozilla has created some demos that work for iOS ARKit using the Mozilla webxr-polyfill. Google is doing similar work, showing how it’s possible to use ARCore and Chrome Canary to access the WebXR API natively in the browser.

The WebXR API is still in flux, with the standard still being finalized. The WebXR group GitHub repository provides updated details on the WebXR specification including the WebXR draft specification. There are at least two notable polyfills for the WebXR API, with the official Immersive Web Community Group WebXR polyfill being the one that is actively supported and maintained.
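To experiment today, the official polyfill can be dropped into a page before any WebXR feature detection runs. A minimal sketch follows; the CDN path is illustrative (check the webxr-polyfill README for current installation instructions), while `new WebXRPolyfill()` is the documented way to activate it.

```html
<script src="https://cdn.jsdelivr.net/npm/webxr-polyfill@latest/build/webxr-polyfill.js"></script>
<script>
  // Instantiating the polyfill patches in navigator.xr where the
  // browser does not provide it natively
  const polyfill = new WebXRPolyfill();
</script>
```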

Let’s have a look at how the API might be used. The code samples for the examples are substantial and are abbreviated here to provide snippets of the core JavaScript API to illustrate how the WebXR API works for Augmented Reality functionality on the Web. This code is adapted from an example where the user puts digital sunflowers into a camera scene.


  function initXR() {
    
    // Code omitted for brevity
    
    // Detect that the browser has the WebXR API
    if (navigator.xr) {

      // Check that the Augmented Reality mode is available
      navigator.xr.supportsSessionMode('immersive-ar').then(() => {
        // Do something here, for example disable/enable a start button
      });
    }

  }

  function onRequestSession() {
    navigator.xr.requestSession({ mode: 'immersive-ar' })
        .then((session) => {
          xrButton.setSession(session);
          onSessionStarted(session);
        });
  }

  function onSessionStarted(session) {

    // Add event handlers for the end of the session, and a 
    // user selecting something in the scene
    session.addEventListener('end', onSessionEnded);
    session.addEventListener('select', onSelect);

    // Code omitted for brevity
    
    // Get hold of a reference space, bounded, unbounded, and stationary are the types
    session.requestReferenceSpace({ type: 'stationary', subtype: 'eye-level' }).then((refSpace) => {
      xrRefSpace = refSpace;
      session.requestAnimationFrame(onXRFrame);
    });

  }

  function onSelect(event) {

      // Fire a hit test
      let inputPose = event.frame.getInputPose(event.inputSource, xrRefSpace);
      if (!inputPose) {
        return;
      }

      if (inputPose.targetRay) {

        vec3.set(rayOrigin,
            inputPose.targetRay.origin.x,
            inputPose.targetRay.origin.y,
            inputPose.targetRay.origin.z);
        vec3.set(rayDirection,
            inputPose.targetRay.direction.x,
            inputPose.targetRay.direction.y,
            inputPose.targetRay.direction.z);
        
        // Perform a hit test into the real world
        event.frame.session.requestHitTest(rayOrigin, rayDirection, xrRefSpace).then((results) => {
          if (results.length) {
            // Place the object at the given location
            addARObjectAt(results[0].hitMatrix);
          }
        });
      }

  }

You can find the full code for this example on the Immersive Web Community Group’s examples repository as well as a video for this example.

It’s worth elaborating on a few concepts here. First, the reference space describes the kind of space the experience requires. The available options are bounded, unbounded, and stationary. bounded refers to experiences where the user will move, but not outside a predefined area. unbounded means there are no limits on the user’s spatial movement. Lastly, stationary refers to experiences where the user is seated or standing. The WebXR API GitHub repository provides a further guide explaining spatial tracking.
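To make the distinction concrete, here is a small hypothetical helper (not part of the specification) that maps a plain-English description of an experience onto the reference space options used by the draft API shape shown above. The experience names are our own labels, purely for illustration.

```javascript
// Hypothetical helper: choose reference space request options
// (draft WebXR API shape) for a given kind of experience.
function referenceSpaceOptionsFor(experience) {
  switch (experience) {
    case 'room-scale':
      // The user moves around, but within a predefined area
      return { type: 'bounded' };
    case 'world-scale':
      // No limits on the user's spatial movement
      return { type: 'unbounded' };
    case 'seated':
    default:
      // The user is seated or standing in place
      return { type: 'stationary', subtype: 'eye-level' };
  }
}

// e.g. session.requestReferenceSpace(referenceSpaceOptionsFor('seated'))
```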

In the example, the hit test (session.requestHitTest) fires a ray into the real-world scene and detects which real-world surface it hits, allowing us to place a virtual object at the intersection point of that ray. Again, you can find an in-depth guide to hit testing on the WebXR Device API GitHub repository.

This example provides a taste of the WebXR API and how to perform some core functionality like placing models/objects into a real world scene.

Conclusion

It is now possible to leverage Augmented Reality in Web applications using marker-based libraries. The future of Augmented Reality on the Web is evolving with the WebXR Device API, which can be used today via its polyfill. Bear in mind that even with the WebXR polyfill you will still need AR-enabled hardware (i.e. ARCore or ARKit devices), so it is important to consider which experience you want to build and the hardware available to your target users. Going forward we should see the API formalize and solidify, with broader adoption across modern browsers.

If you need help getting AR, VR, or XR working with your application, or finding the right approach for your next application, please contact us to discuss how we can help!

