A few weeks back, webserver request queueing came under heightened scrutiny
when rapgenius blasted Heroku for not using as much autotune as promised in
their “intelligent load balancing”. If you somehow missed
the write-up (or response), check it out for its great simulations of load
balancing strategies on Heroku.
What if you’re not running on Heroku? Well, the same wisdom still applies:
know how your application balances load, understand its concurrency model,
and measure its performance. Let’s explore how request queueing affects applications in the
non-PaaS world and what you can do about it.
Full-stack apps have full-stack problems
Rapgenius had been measuring server-side request latency as only the time
a request spent being processed in the app layer, leading to large
discrepancies between what their APM tools reported and what users
actually experienced. The missing... (more)
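The gap rapgenius hit is measurable: if the frontend stamps each request with
its arrival time, the app layer can compute how long the request sat in a
queue before processing began. Here's a minimal WSGI middleware sketch of
that idea; the X-Request-Start header, the nginx config line, and the environ
key are illustrative assumptions, not a prescribed setup.

```python
# Sketch: compute request queue time from a frontend-stamped header.
# Assumes nginx (or another frontend) adds the arrival time, e.g.:
#   proxy_set_header X-Request-Start "t=${msec}";
# where $msec is a Unix timestamp in seconds with millisecond precision.
import time


class QueueTimeMiddleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        header = environ.get('HTTP_X_REQUEST_START', '')
        if header.startswith('t='):
            stamped = float(header[2:])  # seconds since epoch
            queue_ms = (time.time() - stamped) * 1000.0
            # Stash it so the app (or a metrics pipeline) can report it
            # alongside app-layer latency instead of silently dropping it.
            environ['request.queue_time_ms'] = max(queue_ms, 0.0)
        return self.app(environ, start_response)
```

Note the clamp to zero: frontend and app hosts rarely have perfectly
synchronized clocks, so small negative values are expected noise.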
Many types of performance problems can result from the load created by
concurrent users of web applications, and all too often these scalability
bottlenecks go undetected until the application has been deployed in
production. Load-testing, the generation of simulated user requests, is a
great way to catch these types of issues before they get out of hand. Last
week I presented on load testing with Canonical's Corey Goldberg at
the Boston Python Meetup and thought the topic deserved blog
discussion as well.
In this two-part series, I'll walk through generating lo... (more)
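To make that concrete, here is the shape of a multi-mechanize virtual-user
script: a module defining a Transaction class whose run() method the load
generator invokes repeatedly, timing each call. The target URL is a
placeholder; the rest follows multi-mechanize's documented script format.

```python
# v_user.py -- a minimal multi-mechanize virtual user script.
# multi-mechanize instantiates Transaction once per virtual user and
# calls run() in a loop, recording each run's duration; custom_timers
# lets a script report named sub-timings of its own.
import time
import urllib2  # multi-mechanize runs on Python 2


class Transaction(object):
    def __init__(self):
        self.custom_timers = {}

    def run(self):
        start = time.time()
        resp = urllib2.urlopen('http://localhost:8000/')  # placeholder URL
        resp.read()
        latency = time.time() - start

        assert resp.code == 200, 'bad response: %s' % resp.code
        self.custom_timers['Homepage'] = latency
```

Point a project config at a directory of scripts like this and
multi-mechanize fans out concurrent virtual users, collecting every timing
for later analysis.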
Performance for end-users is the metric by which most businesses judge their
web applications' performance: is the responsiveness of the application an
asset or a liability to the business? Studies show that users are growing
more and more demanding, while average page loads are getting bigger and
bigger, more than doubling in weight since 2010. Combine that with frequent
releases and updates from marketing, and pretty soon the optimization job is
never quite done.
Ongoing monitoring of application performance from the end-user's perspective is
therefore critical; fortunately, ther... (more)
In part 1 of this article, we covered writing web app load tests using
multi-mechanize. This post picks up where that one left off and will
discuss how to gather interesting and actionable performance data from a
load test, using (of course) TraceView as an example.
The big problem we had after writing load tests was that timing data gathered
by multi-mechanize is inherently external to the application. This means it
can tell us the response times of requests when the app is under load but
doesn't identify bottlenecks or configuration problems. So we need to be
gathering a bi... (more)
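One way to see that limitation concretely: even when a script uses
custom_timers to break the externally observed time into phases, every
number is still measured from outside the process. A sketch, with a
placeholder host and port:

```python
# Sketch: splitting externally observed request time into phases.
# This can show *that* responses slow down under load, but not *where*
# the time goes inside the stack -- hence the need for server-side data.
import time
import httplib  # Python 2 stdlib, matching multi-mechanize's era


class Transaction(object):
    def __init__(self):
        self.custom_timers = {}

    def run(self):
        t0 = time.time()
        conn = httplib.HTTPConnection('localhost', 8000)  # placeholder
        conn.connect()
        t1 = time.time()
        conn.request('GET', '/')
        resp = conn.getresponse()
        resp.read()
        t2 = time.time()
        conn.close()

        self.custom_timers['Connect'] = t1 - t0
        self.custom_timers['Request_Response'] = t2 - t1
```

However finely you slice it, the breakdown stops at the socket; pinpointing
a slow query or a misconfigured worker pool takes instrumentation inside
the application itself.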
Our fundamental unit of performance data is the trace, an incredibly rich
view into the performance of an individual request moving through your web
application. Given the volume of this data and the diversity of any
individual trace’s contents, it’s important to have an interface for understanding
what exactly was going on when a request was served. How did it get handled?
What parts were slow, and what parts were anomalous?
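To give a feel for what those questions operate on, here is a deliberately
simplified, hypothetical model of a trace as entry/exit events across named
layers. It illustrates the concept only; it is not TraceView's actual data
format.

```python
# A hypothetical, stripped-down trace: entry/exit events across the
# layers a request passes through (webserver, app, database, ...).
# Summing time between matching events answers "what was slow?".
from collections import namedtuple

Event = namedtuple('Event', ['layer', 'label', 'timestamp_ms'])


def layer_durations(events):
    """Total wall-clock time spent between entry and exit of each layer."""
    entries, totals = {}, {}
    for e in events:
        if e.label == 'entry':
            entries[e.layer] = e.timestamp_ms
        elif e.label == 'exit' and e.layer in entries:
            start = entries.pop(e.layer)
            totals[e.layer] = totals.get(e.layer, 0.0) + e.timestamp_ms - start
    return totals


trace = [
    Event('nginx', 'entry', 0.0),
    Event('wsgi', 'entry', 4.2),
    Event('mysql', 'entry', 10.1),
    Event('mysql', 'exit', 48.7),
    Event('wsgi', 'exit', 55.0),
    Event('nginx', 'exit', 56.3),
]

# -> mysql ~38.6ms, wsgi ~50.8ms, nginx ~56.3ms: the query dominates.
print(layer_durations(trace))
```

Even this toy version shows why an interface matters: the durations are
inclusive (wsgi's time contains mysql's), so answering "what was actually
slow?" already requires structure, not just a flat list of timings.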
Over the past year, the TraceView team has been listening to your thoughts on
this topic as well as hatching some of our own. Today we get to share the
fruit of... (more)