Monday, 10 November 2014

Basic Metrics to Measure Performance and Identify Bottlenecks of a Web Server


To characterize your web server's performance, you need to measure both throughput and response time.
Throughput (say, HTTP operations per second) is a capacity metric, while response time (usually measured in milliseconds) gives you an idea of responsiveness for individual users.
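As a minimal sketch of collecting both metrics (here `op` is a stand-in callable for one HTTP request; a real harness would issue requests concurrently against the server):

```python
import time

def measure(op, n):
    """Run op() n times; return (throughput in ops/sec, mean response time in ms)."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n):
        t0 = time.perf_counter()
        op()
        latencies.append((time.perf_counter() - t0) * 1000.0)
    elapsed = time.perf_counter() - start
    return n / elapsed, sum(latencies) / n

# Example: time a stand-in "request" that just sleeps ~2 ms.
ops_per_sec, mean_ms = measure(lambda: time.sleep(0.002), 25)
```

A serial loop like this understates the server's capacity; it is only meant to show how the two numbers relate to the same timing data.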

Graphing throughput against response time frequently highlights some interesting trends. Up to the capacity limit of your server, throughput tends to increase along a flat response-time curve. Once the server reaches its maximum throughput, response time climbs sharply.
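This hockey-stick shape falls out of elementary queueing theory. As an illustrative sketch (an M/M/1 queue is an assumption here, not a model of any particular server), the mean response time is R = 1/(mu - lambda), where mu is the service rate and lambda is the arrival rate: nearly flat at low load, exploding as lambda approaches mu.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time (seconds) for an M/M/1 queue; load must stay below capacity."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable at or beyond capacity")
    return 1.0 / (service_rate - arrival_rate)

# A server with a service capacity of 100 ops/sec: response time barely
# moves at low throughput, then blows up as throughput nears capacity.
for lam in (10, 50, 90, 99):
    print(f"{lam} ops/sec -> {mm1_response_time(lam, 100) * 1000:.1f} ms")
```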


Poorly tuned servers display one of two phenomena (sometimes both): the response time increases proportionally with the throughput, or the response time remains constant while the throughput actually degrades. These curves generally indicate a poorly-designed algorithm at the heart of the server.


Before you begin your performance measurements, take some time to set your goals. Should your average response time come in under 50 ms? 500 ms? Should you measure the 90th or 95th percentile rather than the average? How many users do you need to support? The answers to these questions must come from an intimate understanding of your application, user population, and workload, typically held by your Business Analyst and Product teams.
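To see why a percentile target can be more meaningful than an average, here is a small sketch using the nearest-rank method (the latency samples are invented for illustration): a couple of slow outliers drag the mean far above what most users experience, while the percentiles expose the tail directly.

```python
import math

def percentile(samples, p):
    """p-th percentile via nearest-rank: smallest value with at least p% of samples at or below it."""
    ranked = sorted(samples)
    k = max(0, math.ceil(p / 100.0 * len(ranked)) - 1)
    return ranked[k]

latencies_ms = [12, 15, 14, 13, 210, 16, 14, 15, 13, 400]
avg = sum(latencies_ms) / len(latencies_ms)  # 72.2 ms, skewed by two outliers
p90 = percentile(latencies_ms, 90)           # 210 ms
p95 = percentile(latencies_ms, 95)           # 400 ms
```

Here an "average under 100 ms" goal passes even though one user in ten waited 210 ms or more, which is exactly the kind of behavior a 90th- or 95th-percentile goal catches.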
