Starting Performance Testing: What Metrics to Use?


    #9538
    Oliver
    Participant
    @oliver

    When examining a product from a performance standpoint, what are the key metrics required to evaluate the performance of the software?

    – Time to complete a test
    – Time to complete an action
    – Memory used by the system over time
    – Memory used per process over time
    – CPU usage
    – CPU allocation
    – Any others?

    What do you find useful?
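
    As a starting point, something as small as the sketch below can capture the first few items on that list (a minimal example, assuming the third-party psutil package; the measured action is a stand-in):

        # Minimal sketch (assumes the third-party psutil package): time one action
        # and sample the current process's memory and CPU around it.
        import time
        import psutil

        def measure(action):
            """Run `action` once; return (elapsed seconds, RSS bytes, CPU percent)."""
            proc = psutil.Process()          # the process under measurement (here: ourselves)
            proc.cpu_percent(interval=None)  # prime the CPU counter; the first call returns 0
            start = time.perf_counter()
            action()
            elapsed = time.perf_counter() - start
            return elapsed, proc.memory_info().rss, proc.cpu_percent(interval=None)

        if __name__ == "__main__":
            # Stand-in action; replace with the operation you actually care about.
            elapsed, rss, cpu = measure(lambda: sum(range(10_000_000)))
            print(f"elapsed={elapsed:.3f}s  rss={rss / 2**20:.1f} MiB  cpu={cpu:.0f}%")

    Repeating the sample on a timer gives you the "over time" variants of the memory metrics.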

    #9542
    Jesper
    Participant
    @jesper-lindholt-ottosen

    Have a look at the Performance Testing 101 series by Simon Knight on the Ministry of Testing Dojo: https://dojo.ministryoftesting.com/series/performance-testing-101-simon-knight
    #9547
    stefan
    Participant
    @ipstefan

    @Jesper
    That’s available for pro subscribers only. It costs at least ~25 pounds (or ~34) for one month’s access to that presentation.

    #9605
    Paul
    Participant
    @paulcoyne73

    Oliver, sorry to be picky, but it depends entirely on the risks you need to explore and the requirements that exist for the system.
    Clearly memory leaks and poor garbage collection can result in failure after a sustained load (or an unsustained trickle, if it’s a wrong ‘un). Then there are disk usage and network traffic (nobody wants an IP storm), and the things you’ll get from your database, like query plans and index usage. Mustn’t forget cumulative/combinatorial effects, e.g. where two systems sharing resources are going at it hammer and tongs. Individually (even in arithmetic combination) they could be great, but there could be deadly embraces, or at least enough contention that it makes no difference.
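
    The memory-leak point can be checked mechanically: sample the process's resident memory throughout a sustained run and fit a slope, since steady growth is a leak candidate. A rough sketch (my own illustration, assuming psutil; the PID is hypothetical):

        # Rough sketch: fit a least-squares slope to RSS samples taken during a
        # soak run. A persistently positive slope suggests a leak. Assumes psutil.
        import time
        import psutil

        def rss_slope(pid, samples=30, interval=2.0):
            """Return memory growth of process `pid` in bytes per second."""
            proc = psutil.Process(pid)
            xs, ys = [], []
            for i in range(samples):
                xs.append(i * interval)
                ys.append(proc.memory_info().rss)
                time.sleep(interval)
            mean_x = sum(xs) / len(xs)
            mean_y = sum(ys) / len(ys)
            num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            den = sum((x - mean_x) ** 2 for x in xs)
            return num / den

        if __name__ == "__main__":
            TARGET_PID = 1234  # hypothetical: PID of the system under test
            print(f"RSS growth: {rss_slope(TARGET_PID):.0f} bytes/sec")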

    #9635
    Geert
    Participant
    @geertpeeters

    It all depends on the kind of application.
    What software stack is used? Java? .NET? PHP? Embedded?

    A performance test is useless if you don’t measure system and VM metrics during the test itself.
    You need to be able to correlate performance drops over time with specific system calls (HTTP, REST, other interfaces).
    If the application is built on a Java stack, you can use http://jmeter-plugins.org/wiki/JMXMon/

    For general system monitoring you can use http://jmeter-plugins.org/wiki/PerfMon/
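
    If you can't run the PerfMon server agent, a rough stand-in is to log timestamped host metrics to a CSV during the run and line them up against the load tool's own timeline afterwards. A sketch (my own, again assuming psutil; the file name and intervals are arbitrary):

        # Sketch: timestamped system-metrics logger, to be correlated against
        # the load tool's results afterwards. Assumes the psutil package.
        import csv
        import time
        import psutil

        def monitor(path="sysmetrics.csv", interval=5.0, duration=600.0):
            end = time.time() + duration
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["epoch", "cpu_pct", "mem_pct", "disk_read_bytes", "disk_write_bytes"])
                while time.time() < end:
                    io = psutil.disk_io_counters()
                    writer.writerow([
                        int(time.time()),
                        psutil.cpu_percent(interval=None),
                        psutil.virtual_memory().percent,
                        io.read_bytes,
                        io.write_bytes,
                    ])
                    f.flush()  # keep the file usable even if the run is cut short
                    time.sleep(interval)

        if __name__ == "__main__":
            monitor(duration=60.0)  # short demo run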

    I haven’t tried them myself yet, as my previous performance tests were executed with developers monitoring the system and databases.

    Once I get the chance to execute performance tests again, I’ll connect to our JMX interface and the system with these plugins, so that I can see the impact while the test is running.
    I’ll update my post: http://www.nandu.be/test-domains/performance/performance-testing-with-detailed-reporting-in-jenkins/

    I’m not a guru in performance testing; I’ve just explained what I did on my blog.
    😉

    #9636
    ramesh
    Participant
    @rameshviyer

    Team,

    We need to understand two important factors:
    a. Metrics related to performance testing
    b. Metrics that assist performance testing maturity


    a. Metrics related to performance testing are in turn split into server-side and client-side metrics.
    Server-side metrics include CPU, memory, disk I/O and network, along with application-related metrics. For example, if the application is developed on the Microsoft stack with an IIS web server, look at ASP requests and ASP requests queued, along with back-end SQL Server metrics such as buffer cache hit ratio. Let me give more information on the SQL-related counters:
    1. SQLServer: Buffer Manager: Buffer cache hit ratio
    The buffer cache hit ratio counter represents how often SQL Server is able to find data pages in its buffer cache when a query needs a data page. The higher this number the better, because it means SQL Server was able to get data for queries out of memory instead of reading from disk. You want this number to be as close to 100 as possible. Having this counter at 100 means that 100% of the time SQL Server has found the needed data pages in memory. A low buffer cache hit ratio could indicate a memory problem.
    2. SQLServer: Buffer Manager: Page life expectancy
    The page life expectancy counter measures how long pages stay in the buffer cache in seconds. The longer a page stays in memory, the more likely SQL Server will not need to read from disk to resolve a query. You should watch this counter over time to determine a baseline for what is normal in your database environment. Some say anything below 300 (or 5 minutes) means you might need additional memory.
    3. SQLServer: SQL Statistics: Batch Requests/Sec
    Batch Requests/Sec measures the number of batches SQL Server is receiving per second. This counter is a good indicator of how much activity is being processed by your SQL Server box. The higher the number, the more queries are being executed on your box. Like many counters, there is no single number that can be used universally to indicate your machine is too busy. Today’s machines are getting more and more powerful all the time and therefore can process more batch requests per second. You should review this counter over time to determine a baseline number for your environment.
    4. SQLServer: SQL Statistics: SQL Compilations/Sec
    The SQL Compilations/Sec counter measures the number of times SQL Server compiles an execution plan per second. Compiling an execution plan is a resource-intensive operation. Compilations/Sec should be compared with the number of Batch Requests/Sec to get an indication of whether or not compilations might be hurting your performance. To do that, divide the number of batch requests per second by the number of compiles per second to get the ratio of batches executed per compile. Ideally you want one compile per every 10 batch requests.
    5. SQLServer: SQL Statistics: SQL Re-Compilations/Sec
    When an execution plan is invalidated by some significant event, SQL Server will re-compile it. The Re-Compilations/Sec counter measures the number of times a re-compile event was triggered per second. Re-compiles, like compiles, are expensive operations, so you want to minimize the number of re-compiles. Ideally you want to keep this counter at less than 10% of the number of Compilations/Sec.
    6. SQLServer: General Statistics: User Connections
    The user connections counter identifies the number of different users that are connected to SQL Server at the time the sample was taken. You need to watch this counter over time to understand your baseline user connection numbers. Once you have some idea of your high and low water marks during normal usage of your system, you can then look for times when this counter exceeds those marks. If the value of this counter goes down while the load on the system stays the same, you might have a bottleneck that is not allowing your server to handle the normal load. Keep in mind, though, that this counter value might go down simply because fewer people are using your SQL Server instance.
    7. SQLServer: Locks: Lock Waits / Sec: _Total
    In order to manage concurrent users on the system, SQL Server needs to lock resources from time to time. The lock waits per second counter tracks the number of times per second that SQL Server is not able to obtain a lock on a resource right away. Ideally you don’t want any request to wait for a lock, so you want to keep this counter at zero, or as close to zero as possible, at all times.
    8. SQLServer: Access Methods: Page Splits / Sec
    This counter measures the number of times SQL Server had to split a page when updating or inserting data per second. Page splits are expensive, and cause your table to perform more poorly due to fragmentation. Therefore, the fewer page splits you have the better your system will perform. Ideally this counter should be less than 20% of the batch requests per second.
    9. SQLServer: General Statistics: Processes Blocked
    The processes blocked counter identifies the number of blocked processes. When one process is blocking another process, the blocked process cannot move forward with its execution plan until the resource that is causing it to wait is freed up. Ideally you don’t want to see any blocked processes. When processes are being blocked you should investigate.
    10. SQLServer: Buffer Manager: Checkpoint Pages / Sec
    The checkpoint pages per second counter measures the number of pages written to disk by a checkpoint operation. You should watch this counter over time to establish a baseline for your systems. Once a baseline value has been established you can watch this value to see if it is climbing. If this counter is climbing, it might mean you are running into memory pressures that are causing dirty pages to be flushed to disk more frequently than normal.
    Client-side metrics: hits/sec, throughput, transaction response time, transactions/sec, etc. (See the sketch below for sampling a few of the server-side SQL counters programmatically.)
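
    All of the counters above live in SQL Server's sys.dm_os_performance_counters DMV, so they can be sampled from a test harness as well as from PerfMon. A hedged sketch (assumes the third-party pyodbc package; the connection string is hypothetical) that derives two of the numbers discussed above:

        # Sketch: sample a few of the counters above from sys.dm_os_performance_counters.
        # Assumes pyodbc; the connection string is hypothetical.
        import pyodbc

        CONN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;Trusted_Connection=yes"

        QUERY = """
        SELECT RTRIM(counter_name) AS name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('Buffer cache hit ratio', 'Buffer cache hit ratio base',
                               'Batch Requests/sec', 'SQL Compilations/sec')
        """

        with pyodbc.connect(CONN) as cn:
            rows = dict(cn.execute(QUERY).fetchall())

        # The hit ratio is stored as a value/base pair, not directly as a percentage.
        hit_ratio = 100.0 * rows["Buffer cache hit ratio"] / rows["Buffer cache hit ratio base"]

        # The */sec counters are cumulative since instance start; their ratio still
        # approximates the batches-per-compile figure (ideally ~10 or more).
        batches_per_compile = rows["Batch Requests/sec"] / rows["SQL Compilations/sec"]

        print(f"buffer cache hit ratio: {hit_ratio:.1f}%")
        print(f"batches per compile:    {batches_per_compile:.1f}")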

    b. Metrics that assist in testing maturity:
    Performance Scripting Productivity – PSP (operations/hour); Performance Scripts Re-usability – PSR (%); Performance Test Efficiency – PTE (%);
    Performance Execution Summary – PES (#)
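
    The post doesn't define these four, so the formulas below are my assumptions read off the units given (operations per hour and plain percentages), purely to illustrate the arithmetic:

        # Assumed formulas, inferred only from the units in the post; the
        # sample figures are made up.
        ops_scripted = 120   # operations scripted during the engagement
        hours_spent = 16
        scripts_reused = 18  # scripts reused from an earlier engagement
        scripts_total = 30

        psp = ops_scripted / hours_spent              # PSP, operations/hour
        psr = 100.0 * scripts_reused / scripts_total  # PSR, %

        print(f"PSP = {psp:.1f} ops/hour, PSR = {psr:.0f}%")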

    Thanks
    Ramesh

    #9651
    Eric
    Participant
    @ericatwork

    @oliver Lack of customers’ performance requirements is the major concern; this question is a backwards way of addressing the software development team’s inability to define their customers’ expectations. Benchmarking is the standard way of reporting the performance of software in its best light. A good place to start is with a simple stress test: load the system up until its breaking point is found. This type of test helps to determine the application infrastructure’s breaking point and assists in exposing traffic bottlenecks.
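
    A bare-bones version of that ramp-to-breaking-point idea (a sketch using only the standard library; the URL and thresholds are placeholders, not a prescription):

        # Sketch: double the number of concurrent workers until the error rate or
        # the average latency crosses a threshold. TARGET_URL is a placeholder.
        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint

        def one_request():
            start = time.perf_counter()
            try:
                with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
                    ok = resp.status == 200
            except OSError:  # URLError/HTTPError both derive from OSError
                ok = False
            return ok, time.perf_counter() - start

        def ramp(max_workers=256, requests_per_step=100):
            workers = 1
            while workers <= max_workers:
                with ThreadPoolExecutor(max_workers=workers) as pool:
                    results = list(pool.map(lambda _: one_request(), range(requests_per_step)))
                error_rate = sum(1 for ok, _ in results if not ok) / len(results)
                avg_latency = sum(t for _, t in results) / len(results)
                print(f"{workers:4d} workers: {error_rate:5.1%} errors, avg {avg_latency * 1000:.0f} ms")
                if error_rate > 0.05 or avg_latency > 2.0:  # arbitrary breaking-point criteria
                    print("breaking point reached")
                    break
                workers *= 2

        if __name__ == "__main__":
            ramp()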

    #9668
    Jesper
    Participant
    @jesper-lindholt-ottosen

    By the way:
    https://dojo.ministryoftesting.com/series/performance-testing-101-simon-knight
    mentioned above is in the free section of the MoT.
