Going Data-Driven

January 7, 2007 8:29 pm

Dylan’s last post on performance is only one in a series we’ll be running on the topic, and as promised, this post is all about the tools of the trade for Ajax app performance tuning and how to use them. Here at SitePen, we often get called into a project when the heat is really on: after most of the code is written and just before the hoped-for (or worse, already slipped) ship date. Needless to say, we like this situation even less than our clients do. We always prefer to be involved early enough in the development process to steer clients toward architecture decisions that will scale and perform better while still fitting the budget, but sometimes the damage is done. What then?

Let the data guide you.

It sounds simple and naive, but most developers who have been through their share of performance tuning crunches will have horror stories of hours or days lost to phantom performance “problems” that turned out to be nothing more than the hunch of one developer. It’s no use optimizing your database configuration or adding more expensive storage systems if the bottleneck is at the JavaScript or HTTP levels. Likewise, tuning your HTTP server to the hilt may have no effect if the bottleneck is storage contention or TCP fragmentation. Getting to a root cause requires defining the goal of a tuning project, testing each change to isolate causality, and keeping a log book handy and up-to-date.

It may be necessary to write custom tools to help you diagnose problems in some environments, but there’s a stable of tools that we always seem to fall back on here at SitePen and I’m going to do a quick run-through of what we do at each step when we’re analyzing webapp performance and scalability problems (note: they’re not the same thing!). Here’s the short list, and why we can’t live without them:

  • Firebug 1.0 Beta
    • To users, perceived performance is the only thing that matters, and that means that investigation should examine the system from the user’s perspective and work backward from there. There was a time when the Firefox TamperData extension ruled the roost for this, but no more. Now that page loading requests can be graphed inside of Firebug, things are getting a lot easier. Not only can the graph view show 404 requests and slow responses, it often lays bare the synchronous nature of script execution and requests, and the two-connections-per-host limit of HTTP. Generating “before” and “after” evaluations for clients has never been so easy.
  • Venkman and Firebug 1.0 Beta
    • Now that Firebug includes some profiling and debugging support, Venkman may finally be on the way out, but whichever tool you use, it’s highly valuable to be able to profile in-browser JavaScript performance at a function level. HTTP and server-side problems are often a source of perceived latency, but simple testing with full caches can easily point to client-side performance issues. Any logging or profiling system will impact overall page performance, so you should be using these tools to gather relative timing data. There’s a special mode for the Dojo package loader that can be used to get accurate function names and line numbers. While the timing information may not translate 100% across Firefox, IE, Opera, and Safari, the relative timings tend to hold.
  • dojo.profile
    • The dojo.profile module lets you do tic/toc timings of JavaScript code and produces a table showing averages and total timings. We use this to verify relative timings across browsers once Venkman/Firebug point out bottlenecks and to validate fixes in a cross-browser way.
  • Tsung and Apache Bench
    • As I noted earlier, HTTP-level performance problems can seriously impact application latency. Neither tool can pinpoint fundamental problems like outbound bandwidth saturation (making the system more scalable doesn’t matter if you can’t send more data across your link), but when the problem is one of scale and not instantaneous performance, these tools let you begin to validate assumptions. Apache Bench is great for testing balls-to-the-wall concurrency against a single script, but very often you’re more interested in full-app performance under more realistic workloads. While there are commercial tools available that can do this kind of load testing in a “real world” way, Tsung provides a highly capable “replay” proxy mode that will generate workloads that can be used to monitor system performance from a variety of angles. Since we’re most often interested in “how many users can it handle?” rather than “how many times a second can I request foo.php?”, Tsung is an invaluable ally. On the downside, Tsung requires an Erlang build, and its capture mode means running your traffic through a proxy.
  • bonnie++
    • Databases and web servers alike need good I/O performance, and bonnie++ lets us determine whether we’re getting anything like the theoretical disk performance out of a system. Knowing the “shape” of your workload is essential, and when remediating disk I/O issues I find that bonnie++ usually finds its way into my analysis.
    • Please, please, please make sure that your file systems are mounted with the noatime option.
  • “EXPLAIN” statements and slow-query logs
    • SQL is the ubiquitous abstraction that most of the web runs on, and every database system today can tell you how it executes what you request. A thousand other things can niggle your SQL server’s performance to death, but nothing should get done without slow-query logs and EXPLAIN output to guide you.
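
For the EXPLAIN and slow-query-log advice, here is a MySQL-flavored sketch — the table, column, and threshold are hypothetical, and other databases spell the configuration differently:

```sql
-- Ask the planner how it will satisfy a suspect query; a join type of ALL
-- with a large "rows" estimate usually means a missing index.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- In my.cnf, capture anything slower than two seconds for later analysis:
--   log-slow-queries = /var/log/mysql/slow.log
--   long_query_time  = 2
```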
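
The “special mode for the Dojo package loader” mentioned above is switched on from page configuration; treat the exact flag as an assumption to check against your Dojo version:

```javascript
// Hypothetical for your Dojo release: debug loading pulls each module in
// as its own <script> tag, so profilers report real file names and line
// numbers instead of one big eval()'d blob. Set before dojo.js loads.
var djConfig = { isDebug: true, debugAtAllCosts: true };
```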
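
The tic/toc pattern that dojo.profile implements can be sketched in a few lines of plain JavaScript. The names below are illustrative, not the actual dojo.profile API — think of this as the shape of the technique, usable anywhere:

```javascript
// Minimal tic/toc profiler sketching the pattern dojo.profile provides.
// start()/end() bracket a named section; dump() reports totals and averages.
var profile = {
  _timers: {},
  start: function (name) {
    var t = this._timers[name] || (this._timers[name] = { total: 0, count: 0 });
    t.tic = new Date().getTime();
  },
  end: function (name) {
    var t = this._timers[name];
    t.total += new Date().getTime() - t.tic;
    t.count += 1;
  },
  // Returns { name: { total, count, average } } for each timed section.
  dump: function () {
    var out = {};
    for (var name in this._timers) {
      var t = this._timers[name];
      out[name] = { total: t.total, count: t.count, average: t.total / t.count };
    }
    return out;
  }
};

// Usage: wrap the suspect code in start()/end() pairs.
profile.start("render");
for (var i = 0, s = ""; i < 10000; i++) { s += "x"; }
profile.end("render");
```

Because wall-clock resolution and overhead differ per browser, treat the numbers as relative — compare a section against itself before and after a change, not against another browser’s absolute figures.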
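
An Apache Bench run for the single-script case above might look like this (the URL and numbers are placeholders; pick a concurrency level that matches your real traffic):

```shell
# 1000 total requests, 50 in flight at a time, against one hypothetical URL.
# -n: total requests, -c: concurrency level
ab -n 1000 -c 50 http://localhost/foo.php
```

Watch the “Requests per second” and “Time per request” lines in the output, and re-run the identical command after each change so you’re comparing like with like.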
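
A typical bonnie++ invocation, with flags and paths that are illustrative rather than prescriptive:

```shell
# Benchmark a scratch directory; -s sets the test file size (use well over
# physical RAM so the page cache can't mask the disk), -u is the user to
# run as when invoked as root.
bonnie++ -d /mnt/scratch -s 4g -u nobody

# And the noatime plea, as an /etc/fstab line (device and mount point are
# examples): without noatime, every read also triggers a metadata write.
#   /dev/sda3  /var/lib/mysql  ext3  defaults,noatime  0 2
```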

System development and tuning need to go hand-in-hand, and expert help can clearly make a huge difference. The tools above are some of the most visible artifacts of the process, but it’s discipline in the process itself that’s paramount. Let the data guide you and everything else is likely to work out…assuming you know where the goal line is.

Performance tuning can easily drive you crazy if you don’t have a goal in mind. Without one, there will always be another tweak, another 3% to be eked out of the system. Given the marathon sessions that seem to lead nowhere, it’s important that developers doing performance work keep their eye on the ball and take a walk, take a nap, or just stop for the day when there’s been no forward progress for an hour or so. That, of course, means having a ball to keep an eye on. So before you start your tuning adventure (or call us in to help), you need to know what your budget is, what your responsiveness goals are, and what your scalability targets are.

For more thorough treatments of how to build things that both perform well and can be made to scale, I strongly recommend Cal Henderson’s “Building Scalable Web Sites”, Theo Schlossnagle’s “Scalable Internet Architectures”, and Jeremy Zawodny’s “High Performance MySQL”.

Next time: why the Dojo build system matters, why the x-domain package loader is awesome, and other stupid HTTP tricks.