Literature Review – Potential Article #4

Kai et al. (2014) present one of the first papers on performance testing across multiple web technologies, namely Node.js, PHP, and Python.

  • No testing code is provided to reproduce their findings; however, their server configuration changes are documented.
  • They provide a framework for performing systematic testing via both benchmarking and scenario testing.
  • However, their rationale for choosing the number of users to simulate doesn’t quite hold up – for example, they chose 500 users simply because the server would run into errors beyond that point.

Kai, L., Yining, M., & Zhi, T. (2014, 19–21 December). Performance comparison and evaluation of web development technologies in PHP, Python, and Node.js. Paper presented at the 2014 IEEE 17th International Conference on Computational Science and Engineering (CSE).

Literature Review – Potential Article #3

Križanić et al. (2010) present their experiences in identifying an appropriate tool for load testing and performance monitoring of AJAX-based applications.

  • They highlight that while almost every tool they surveyed supported HTTP and HTTPS, support for then-advanced web technologies such as AJAX was crucial.
  • They also note the higher level of technical expertise required to operate open-source load-testing tools relative to commercial tools; however, the licensing costs of the commercial tools outweighed their benefits.
  • They also flag JMeter and Grinder as being deficient in their results analysis relative to commercial tools.

Križanić, J., Grgurić, A., Mošmondor, M., & Lazarevski, P. (2010, 24–28 May). Load testing and performance monitoring tools in use with AJAX based web applications. Paper presented at MIPRO 2010, Proceedings of the 33rd International Convention.

Literature Review – Potential Article #2

Shaw (2000) presents a case study of performance testing an eLearning solution with HP LoadRunner.

  • A key finding of the study was that simply adding hardware to increase a server’s power is not the solution; in fact, the worst performance in the web application was recorded in the more powerful environment.
    • Identifying which part of the application is causing delays then becomes a necessity, as timing a server’s response only tells the developer the overall response time.
  • Further, leaving performance testing until late in development does not help developers release a scalable product, although the tests did identify performance problems, giving an ‘accurate indication of where the main problem lay and made it possible to quantify improvements in subsequent patches’.
    • The main competitor to the Web Application Performance Testing (WAPT) tool can identify the root cause of unresponsive applications (e.g. an un-optimised database vs. slow DNS response times).
    • Thus, a core requirement of the tool should be to allow developers to accurately diagnose their performance problems.


Shaw, J. (2000). Web Application Performance Testing — a Case Study of an On-line Learning Application. BT Technology Journal, 18(2), 79.

Literature Review – Potential Article #1

Bouch et al.’s (2000) work on quantifying the effects of latency on users’ perception of web application quality of service offers key lessons for the development of web applications.

In particular:

  • Users perceived load times of greater than 10 seconds as indicating an error in processing their request.
  • There is a relationship between perceived and objective quality of service, influenced by factors such as task type, method of page loading, and cumulative time spent on the website.


Bouch, A., et al. (2000). Quality is in the eye of the beholder: Meeting users’ requirements for Internet quality of service. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, The Hague, The Netherlands, ACM, 297–304.

Requirements for Project

Given that one of the most widely used applications in the performance-testing landscape is JMeter – a free, open-source tool for benchmarking web servers and web services – I’ll be focusing on what JMeter does poorly and trying to improve on that.

Non-requirement, but a recommendation:

It would be nice if the application were multi-platform in Java, but given my experience with Visual Studio, it is probably best that I stay in that environment and use C#.

  • Functional
    • The application should be able to visit any web application, follow a user-defined path of web pages, and report the results.
      • The application should be able to repeat the path a user-defined number of times before exiting the thread.
      • The application should be able to handle different OS/browser combinations (perhaps by changing the User-Agent header sent to the server).
    • The application needs to be able to simulate login requests via HTTP requests.
      • Given that logins are normally performed over HTTPS, it should handle that too.
    • The application should be able to operate in several modes – constant user load to simulate average usage, peak user load to simulate worst-case load, and sharp changes in user load.
    • The application needs to be multi-threaded – i.e. it will send the same series of requests on multiple threads to simulate a large number of users hitting a website at one time.
    • The application should support a list of pages to visit, to simulate a real user browsing the application (for example, home page -> products page -> login page -> shopping cart).
    • The application needs to produce a usable benchmark once testing is complete – e.g. response times, actual server responses, and summary statistics across all threads.
  • Non-functional
    • Scalability
      • The application should be able to either run with multiple instances of itself on a server, or use multi-threading, to simulate many users running at once.
    • Light-weight
      • Since a major bottleneck in benchmarking a server can be the machine running the benchmark itself, the application should be light on resource usage, leaving as much capacity as possible for the tests themselves.
    • Extensibility
      • The application will be written for open-source distribution, so it would be greatly useful if other developers could extend it, encouraging further development and adoption.
    • User friendly
      • The application should have a user-friendly interface – something JMeter lacks: it provides hundreds of options for testing servers, with no beginner-friendly way to run a test straight out of the box.
      • Side note: this could be provided via a command-line interface that asks a series of text questions before starting the test and saves the results to the desktop, or via a side panel that allows user definition of settings (similar to how JMeter does it, but more intuitive).
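The multi-threading, user-path, and user-agent requirements above can be sketched as a small prototype. This is an illustrative sketch only, not the planned implementation (which is intended to be in C#): it is Python standard library only, and the names `run_user` and `run_load_test` are hypothetical.

```python
import statistics
import threading
import time
import urllib.request


def run_user(base_url, path, repeats, user_agent, timings):
    """Simulate one user walking the page path `repeats` times."""
    for _ in range(repeats):
        for page in path:
            req = urllib.request.Request(
                base_url + page,
                headers={"User-Agent": user_agent},  # spoofed OS/browser identity
            )
            start = time.perf_counter()
            with urllib.request.urlopen(req, timeout=10) as resp:
                resp.read()  # drain the body so timing covers the full response
            timings.append(time.perf_counter() - start)


def run_load_test(base_url, path, users=10, repeats=3,
                  user_agent="Mozilla/5.0 (sketch)"):
    """Run `users` simulated users in parallel and return summary statistics."""
    timings = []  # list.append is atomic in CPython, so threads can share it
    threads = [
        threading.Thread(target=run_user,
                         args=(base_url, path, repeats, user_agent, timings))
        for _ in range(users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return {
        "requests": len(timings),
        "mean_s": statistics.mean(timings),
        "max_s": max(timings),
    }
```

A real tool would also need the HTTPS login step, configurable load shapes (constant vs. peak vs. spiking), and per-thread error reporting; those are omitted here for brevity.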

A look at the competitive landscape

  • Few other solutions exist; they all use a “virtual user”-style system hosted in AWS or another cloud, simulating actions via browser click events in Internet Explorer.