This is part 2 of the Disciplined Engineering series. It focuses on improving confidence in the code your team produces. If you haven’t already, we recommend reading Part 1: Building an Engineering Process as a prerequisite for these practices. You can also check out Part 3: Refining Your Engineering Process.
Creating Code Confidence
Ensuring the quality and sustainability of large applications is challenging. One of the best ways to meet that challenge is to adopt tools and practices that improve your overall confidence in an application’s code. These tools reinforce common coding practices, helping engineers automatically write better code that is easy to understand and maintain, and freeing them to focus on solving problems and improving the overall quality of their applications. Without these tools, it is practically impossible to consistently and sustainably produce high-quality code.
Understanding which tools and practices are available to improve the team’s confidence in the code, and why they are valuable, will help you and your team use them to iteratively improve your own processes.
Testing is a fundamental practice for all forms of engineering. Some may argue for more or less testing or for a specific practice such as Test Driven Development (TDD). Regardless of methodology, unit testing represents the most baseline form of testing and the first step for many teams to instill confidence and quality in their software.
A good unit test validates a single unit of functionality within your code. Unit tests are the proof of your conjecture that, for a certain set of inputs, the code produces a certain set of outputs. There is no strict definition of a unit test—however, the best unit tests are narrow in scope. A good rule of thumb is to write a unit test for each branch of code. This separates the functionality of your software into atomic tests that are easy to understand and update.
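To make the "one test per branch" rule of thumb concrete, here is a minimal sketch. The `applyDiscount` function and its threshold are purely illustrative, and plain assertions stand in for a test runner such as Jest:

```typescript
// Hypothetical function with two branches: discounts only apply to orders over 100.
function applyDiscount(total: number, discount: number): number {
  if (total > 100) {
    return total - discount;
  }
  return total;
}

// A tiny assertion helper standing in for a real test framework.
function assertEquals(actual: number, expected: number, label: string): void {
  if (actual !== expected) {
    throw new Error(`${label}: expected ${expected}, got ${actual}`);
  }
}

// One narrow test per branch keeps each test atomic and easy to update.
// Branch 1: totals over 100 are discounted.
assertEquals(applyDiscount(150, 20), 130, 'applies discount over 100');
// Branch 2: totals at or below 100 are unchanged.
assertEquals(applyDiscount(80, 20), 80, 'skips discount at or below 100');
```

Each test exercises exactly one branch, so when a test fails it points directly at the behavior that broke.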
A unit test usually consists of three phases: setup, execution, and assertions. When setting up a piece of code for testing, it is important to isolate it from external pieces of code that don’t directly relate to what’s being tested. We do this by mocking the modules surrounding the code under test, which lets us control the output of the mocked code and monitor how the code under test behaves. Once the external code has been sufficiently mocked, the unit of code being tested is executed. Finally, assertions are made against the output of the code. The assertions should be written so that they strictly describe all of the expected output. The best way to do this is to always use static values in the test (as opposed to values generated from a function); this guarantees that the tested values are always what they are expected to be unless explicitly changed. The purpose of a unit test is to make you and your team more confident that the code works as intended, and your assertions need to reflect that.
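The three phases can be sketched as follows. The `Clock` dependency and `greeting` function are invented for illustration, and the mock is hand-rolled; in practice a framework utility such as Jest’s `jest.fn()` would play this role:

```typescript
// Code under test: formats a greeting using an injected clock dependency.
type Clock = { now(): Date };

function greeting(clock: Clock, name: string): string {
  const hour = clock.now().getHours();
  return hour < 12 ? `Good morning, ${name}` : `Good afternoon, ${name}`;
}

// 1. Setup: mock the external dependency so its output is controlled,
//    and record how the code under test uses it.
let calls = 0;
const mockClock: Clock = {
  now() {
    calls += 1;
    return new Date(2024, 0, 1, 9, 0, 0); // always 9 AM
  },
};

// 2. Execution: run the unit of code being tested.
const result = greeting(mockClock, 'Ada');

// 3. Assertions: compare against static values, never computed ones.
if (result !== 'Good morning, Ada') throw new Error(`unexpected: ${result}`);
if (calls !== 1) throw new Error('clock should be read exactly once');
```

Note that the expected string is written out literally rather than rebuilt from the inputs, so the assertion can only pass if the code genuinely produces the intended output.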
Once unit testing is in place, other forms of testing may be added to ensure the quality of your application. Functional testing emulates a user interacting with a piece of software. It shows how a system will usually function for customers and provides an opportunity for automatically testing a complete application. In general, functional tests should not cover every interaction that a user may have with a system. Instead, only the common or critical paths need to be tested. We recommend always setting up functional testing as part of any application. This will reduce friction for engineers when they need to add functional tests. If a functional test framework does not exist when a critical functional test needs to be added, then the burden of setting up the framework falls on the engineer—usually at the cost of significantly impacting the time it takes to complete their story. It is far better to address this risk early, set up functional testing for your application, and in turn encourage good practices. Again, the purpose of testing is to instill confidence that the code and overall system will perform as expected.
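The shape of a functional test can be illustrated with a toy example. The in-memory `LoginApp` below stands in for a real UI; an actual functional test would drive a browser with a tool such as Playwright or Selenium, but the end-to-end pattern is the same: act as the user would, then assert on what the user observes:

```typescript
// A toy application: an in-memory login flow standing in for a real UI.
class LoginApp {
  private loggedIn = false;

  submit(username: string, password: string): void {
    // Hypothetical credentials, for illustration only.
    if (username === 'grace' && password === 'hopper') {
      this.loggedIn = true;
    }
  }

  currentPage(): string {
    return this.loggedIn ? 'dashboard' : 'login';
  }
}

// Functional test: drive the critical path the way a user would,
// then assert on the outcome the user would see.
const app = new LoginApp();
if (app.currentPage() !== 'login') throw new Error('should start on login page');
app.submit('grace', 'hopper');
if (app.currentPage() !== 'dashboard') throw new Error('login path failed');
```

The test covers a single critical path—logging in—rather than every possible interaction, which keeps the functional suite fast and focused.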
There are many ways to test software beyond unit and functional testing. Additional testing can be useful for identifying specific issues in a piece of software. For example, accessibility testing is very important for identifying whether a web page is compatible with accessibility tools. Another example is load-testing backend systems to see how they operate under high load. These types of tests are often automated to ensure that the application you are developing always maintains a high level of quality.
A code coverage tool instruments an application’s code while it is being tested and identifies which branches, methods, and files have been executed. It provides a critical metric that correlates with the health and quality of your application. Code coverage gives engineers additional insight when writing tests, helping them cover all of the conditional branches of their code. Teams should aim for around 95% branch coverage (some branches are often inaccessible in certain browsers or environments).
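A greatly simplified sketch of what a coverage tool does under the hood: it rewrites the code so each branch records that it ran, then reports the fraction of branches exercised. Real tools such as Istanbul perform this instrumentation automatically; the hand-instrumented `instrumentedAbs` below is illustrative only:

```typescript
// Record of which branches have executed, keyed by a branch identifier.
const hits: Record<string, boolean> = {
  'abs:negative': false,
  'abs:non-negative': false,
};

// A hand-instrumented function: each branch marks itself as covered.
function instrumentedAbs(n: number): number {
  if (n < 0) {
    hits['abs:negative'] = true; // branch 1
    return -n;
  }
  hits['abs:non-negative'] = true; // branch 2
  return n;
}

// Branch coverage = executed branches / total branches, as a percentage.
function branchCoverage(): number {
  const branches = Object.values(hits);
  const covered = branches.filter(Boolean).length;
  return (covered / branches.length) * 100;
}

instrumentedAbs(-5); // only the negative branch runs
console.log(branchCoverage()); // 50 — the non-negative branch is untested
instrumentedAbs(3);
console.log(branchCoverage()); // 100 — every branch has been exercised
```

The 50% reading is exactly the kind of signal that points an engineer at the untested branch.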
An application’s branch coverage is one good indicator of the overall quality of a piece of software. Low code coverage is a warning that unknown issues almost surely exist within the application and that making modifications or changes will be inherently risky in areas without ample coverage. However, a high level of code coverage doesn’t guarantee high-quality software. Quality depends as much on coverage as on having well-written tests that fully assert the state of the application.
Code reviews are a discussion between two engineers to ensure that a piece of code follows good practices and is understandable by another engineer. Without code reviews, it is impossible for a team to ensure the quality and readability of a piece of software. A good code review should start by making sure that the submitter has followed the practices laid out by the team. Tests should exist, be easy to understand, and properly exercise the inputs and outputs of each piece of code. Effectively, a reviewer should be looking to maintain the high standards of the team and verify that the implemented solution and tests are robust.
Fundamentally, a code review should be checking for things that automation cannot easily verify. For example, an engineer should be able to read and understand the code being written. Code that is not readable or readily understandable makes adding features or changes costlier. Another example is whether sufficient testing has been done to ensure the quality of the application. These are topics that humans are much better at evaluating than an automation tool. Topics such as code style or branch coverage are better enforced by a computer and should be automated whenever possible.
Because communication is so central to a good code review, we recommend that the submitter and reviewer pair on code reviews. Reviews are faster and better when the person who authored the code is there to point out the most important sections and answer any questions the reviewer may have. Overall, this produces better communication and outcomes in less time.
Continuous integration (CI) is an automated system that builds a piece of software on every code commit. It gives engineers insight such as whether the application can build and pass all of its tests on a neutral system. In a way, CI acts as an automated gatekeeper of quality and standards. Continuous integration systems can also streamline your team’s development processes by automatically building and releasing artifacts, or deploying fully built sites or applications, from committed code. Systems like these are excellent ways to reduce friction and enhance communication between engineers and team members because automation handles the heavy lifting. Modern CI systems should run on every PR in order to identify failures as soon as possible, before commits make it to the master branch and issues become more costly to solve. There is much that these systems can achieve, from creating a simple build and verification pipeline to continuously deploying features to production. The best systems focus on standardizing your quality controls and processes around a central authority and a single source of truth that the entire team can rely on.
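As one concrete (and hypothetical) example, a minimal pipeline of this shape could be expressed as a GitHub Actions workflow—assuming a Node.js project with `lint`, `test`, and `build` npm scripts; the equivalent exists for any CI system:

```yaml
# Hypothetical GitHub Actions workflow: runs on every pull request and
# on every push to master, failing fast on lint, test, or build errors.
name: CI
on:
  pull_request:
  push:
    branches: [master]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
      - run: npm test
      - run: npm run build
```

Because the workflow runs on every PR, a failing lint rule or test blocks the merge before the problem ever reaches the master branch.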
Linting and Coding Conventions
Code that looks different and inconsistent is inefficient for humans to read and process. Proper use of code style makes it easy to find things that are out of place. There are a number of tools, called code linters and formatters, that help ensure the uniformity and readability of code. These tools effectively reduce the cognitive load of reading and writing software and eliminate the need to waste time addressing styling issues in code reviews.
Prettier is an automated tool that can be used to ensure a consistent style throughout your application. You can set up a style definition or use Prettier’s default style and run it against your code. Prettier will then go through all of your application’s code and automatically fix any code that doesn’t conform to the style. Prettier integrates well with most popular IDEs and can automatically apply styles on each save. You can also run Prettier on a git pre-commit hook using Husky. This ensures that all of the code being committed automatically follows a common style.
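One common arrangement (a sketch, not a prescription—exact scripts vary by Husky version) pairs Husky with lint-staged in `package.json`, so only staged files are formatted on each commit:

```json
{
  "scripts": {
    "format": "prettier --write .",
    "prepare": "husky"
  },
  "lint-staged": {
    "*.{js,ts,css,md}": "prettier --write"
  }
}
```

With this in place, a `.husky/pre-commit` hook that runs `npx lint-staged` reformats staged files automatically, so styling never needs to come up in a code review.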
Once linting and coding conventions have been adopted by your team, their use should be enforced by your continuous integration system. This ensures that all code conforms to the team’s agreed-upon standards, making it more readable and higher in quality. We recommend adding linting and formatting tools to all projects. They offer a huge return on investment because they are easy to add to any project, automatically address a wide range of usage and readability issues, and alleviate the need for engineers to address styling during code reviews.
Adding types to your application can greatly improve the developer experience as well. TypeScript improves most IDEs’ auto-completion of class attributes, object keys, function parameters, and more. This leads to better flow while developing software, as engineers are better assisted with the specific use and function of the modules they are using. It is especially useful for onboarding new engineers to a project, acting as a set of guide rails that assists in writing correct code.
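A small illustration of those guide rails, using invented names. Once `User` is declared, the IDE can auto-complete its fields inside `displayName`, and mistakes are caught at compile time instead of in production:

```typescript
// A typed module surface: the compiler and IDE can now auto-complete
// fields and flag incorrect calls before the code ever runs.
interface User {
  id: number;
  name: string;
  email?: string; // optional — the compiler forces callers to handle its absence
}

function displayName(user: User): string {
  // Auto-completion offers `id`, `name`, and `email` here; a typo such as
  // `user.nmae` would be a compile-time error rather than a runtime bug.
  return user.email ? `${user.name} <${user.email}>` : user.name;
}

console.log(displayName({ id: 1, name: 'Grace' })); // Grace
console.log(displayName({ id: 2, name: 'Alan', email: 'alan@example.com' })); // Alan <alan@example.com>
```

A new engineer calling `displayName` with a malformed object gets an immediate error from the compiler, which is exactly the guide-rail effect described above.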
One of the more challenging aspects of quality engineering occurs when everything is running smoothly, as there is nothing to remind the team of its importance. Measuring quality is one of the best ways to make your efforts tangible and identify areas of risk before they become a problem.
While there are no objective measurements of quality or sustainability, we can certainly measure second-order effects that indicate the overall health of an application. Code coverage (discussed above) is one major indicator of quality. It tells us how many untested branches of code exist in our application, and by extension reflects both the amount of risk and how difficult it is to add new features. Average team velocity is another indicator of quality. Teams that are able to maintain consistent velocity are usually able to do so because of good engineering practices, tooling, and support. When velocity slows or becomes choppy without reason, it is indicative of underlying quality issues. In addition to data-driven metrics, subjective metrics like asking engineers their overall release confidence can be useful in summarizing an engineer’s experience and may start a conversation on previously unknown risks and overall application health.
Measuring quality creates necessary feedback for your business and team: you know when risk exists, you can sustain good engineering practices, and you stay informed about the quality and sustainability of your application. We recommend all teams have a set of public metrics that speak to the current quality and risks of their software. It is an important way of informing stakeholders outside the team how the project is progressing, as well as giving your team an opportunity to celebrate progress and create conversations around concerns.
We’ve touched upon a number of tools and practices to help ensure the quality and sustainability of your engineering projects. There is no single trick for creating confidence around your software. Instead, it is about continuously refining your approach by automating common tasks and letting engineers focus on solving problems. All of the automation suggested in this article is focused on eliminating numerous minor decisions. The remaining engineering-led processes—code reviews and writing tests and features—all focus strictly on problem-solving. Our next article in the series, Refining Your Engineering Process, takes this focus further by streamlining problem-solving processes and establishing patterns around common work.
If you need help with improving your approach to engineering quality, we have consulted with countless teams on how to incorporate quality engineering practices into new and existing projects. Please contact us if you’d like to discuss what SitePen can do to help you improve your engineering quality and developer productivity.