In days gone by, System Testing involved handing a test team a software system and a set of requirements, then having them manually and exhaustively execute test cases covering every requirement of the system.
The problem with this approach lies in the word ‘manually’. A test team could go to great lengths constructing complex test cases for a given release, yet have no way of recording them for automated replay at a later date. The consequences are clear: when the team is handed a new release of the software, it must not only devise test cases for the new functionality but also manually re-execute every test case devised for earlier releases as part of a regression pack. Test teams can opt not to run regression tests on every release, but that carries significant risk.
The shortcomings of manual testing have been widely recognised in recent years, and a number of systems, such as Selenium, have been developed that allow testers to record test cases through a web browser and replay them at any point. However, what if the system you want to test has no web front end and instead comprises a number of message-based web services? One solution is to employ developers capable of programmatically consuming the web services and programmatically specifying the expected responses.
However, if you have a test team capable of understanding and testing requirements but without the technical expertise to construct web service test cases, you have a problem: a problem my colleagues and I have faced in the past.
A couple of years ago my company, Bridgeall, was asked to deliver the solution that would administer the water market in Scotland. The main purpose of this system was to allow certain parties to send messages via web services to create data entities within the system, such as water meters and meter readings, and then to use this information to calculate wholesale billing targeted at the water retailers.
At an early stage it was obvious that system testing was going to be a challenge. We needed a way for non-technical testing staff to create repeatable web service test cases: test cases that submit web service requests and interrogate the responses. This was a challenging prospect. To write test cases that consume a web service, a tester generally needs to know a good deal of web service detail, such as endpoints and whether the service is SOAP or REST, and to have technical expertise in technologies such as HTTP and XPath.
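To make the barrier concrete, here is a minimal sketch of what such a hand-written test case involves. The operation name, namespace, and message fields are hypothetical, and a canned response stands in for a live service call, but the shape is representative: build a SOAP envelope, submit it, then interrogate the XML response for an expected value.

```python
# Sketch of a hand-written web service test case. The GetReading operation,
# its namespace, and the MeterId/Reading fields are all hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/meters"  # hypothetical service namespace

def build_request(meter_id: str) -> str:
    """Build a SOAP request envelope for a hypothetical GetReading operation."""
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
        f'<soap:Body><GetReading xmlns="{SVC_NS}">'
        f'<MeterId>{meter_id}</MeterId>'
        f'</GetReading></soap:Body></soap:Envelope>'
    )

def extract_reading(response_xml: str) -> str:
    """Pull the reading value out of the response with an XPath-style lookup."""
    root = ET.fromstring(response_xml)
    node = root.find(f".//{{{SVC_NS}}}Reading")
    assert node is not None, "response did not contain a Reading element"
    return node.text

# A canned response, standing in for what a live service would return.
canned = (
    f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
    f'<soap:Body><GetReadingResponse xmlns="{SVC_NS}">'
    f'<Reading>1042</Reading>'
    f'</GetReadingResponse></soap:Body></soap:Envelope>'
)

# The test case is then: submit the request, assert on the response.
assert extract_reading(canned) == "1042"
```

Every one of those details, namespaces, envelopes, XPath lookups, is exactly the kind of knowledge we could not assume a non-technical tester would have.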
The second challenge we faced when developing a testing strategy for these systems was how to test the calculation engine. Even if we developed a solution allowing a tester to produce a test case that consumed the web services and created the appropriate entities, how could we then guarantee that the invoice generated from those entities was accurate? The calculation algorithms are complex and vary depending on which entities exist; it was not feasible to expect a tester to manually calculate the charges that would be generated for a given scenario and use this as a comparison point.
The solution that we conceived and developed was smartTest, a web-based testing framework capable of testing message-based web services.
The smartTest system allows testers to create test cases that consume web services through a web UI. It achieves this by providing a screen for every message supported by the web service. These screens are not static: as new messages are supported by the web service, or existing ones change structure, smartTest dynamically generates an edit screen for them within its UI, allowing testers to include them in their test cases. Even for technical staff this is an extremely useful tool. I for one much prefer using the simple web UI to consume the web services rather than writing C# apps to submit SOAP requests and then programmatically interrogate the responses!
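The dynamic-screen idea can be sketched in a few lines. This is purely illustrative, not the smartTest implementation: given a description of a message's fields (which in the real system would come from the service's message definitions), a form is rendered for the tester, so a new message type automatically gains an edit screen without any hand-written UI code. The message and field names below are hypothetical.

```python
# Illustrative sketch of generating an edit screen from a message description.
# Field descriptions are (name, type) pairs; all names here are hypothetical.

def render_message_form(message_name: str, fields: list) -> str:
    """Render a simple HTML form for one web service message."""
    rows = []
    for name, ftype in fields:
        # Numeric fields get a numeric input; everything else is free text.
        input_type = "number" if ftype in ("int", "decimal") else "text"
        rows.append(
            f'<label>{name}<input type="{input_type}" name="{name}"></label>'
        )
    return f'<form data-message="{message_name}">' + "".join(rows) + "</form>"

# When a new message type appears, its screen appears too.
form = render_message_form(
    "SubmitMeterReading",
    [("MeterId", "string"), ("Reading", "int")],
)
assert 'data-message="SubmitMeterReading"' in form
assert 'type="number" name="Reading"' in form
```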
To solve the calculation engine problem we decided to compare the results generated by the calculation engine with those generated by a completely independent system, developed from the same requirements by an independent group of developers. The theory is simple: if both systems generate the same charges for an identical dataset, the test case passes; otherwise, it fails.
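The comparison step itself can be sketched as follows. This is a simplified illustration of the dual-implementation check, not our production code: the per-charge-line outputs of the two engines (represented here as hypothetical dictionaries keyed by charge line) are compared, and any discrepancy fails the test case.

```python
# Illustrative sketch of the dual-implementation check: run two independently
# developed calculation engines over the same dataset and compare line by line.

def compare_charges(primary: dict, independent: dict,
                    tolerance: float = 0.005) -> list:
    """Return a list of discrepancies; an empty list means the test passes."""
    issues = []
    for key in sorted(set(primary) | set(independent)):
        a, b = primary.get(key), independent.get(key)
        if a is None or b is None:
            issues.append(f"{key}: present in only one system ({a!r} vs {b!r})")
        elif abs(a - b) > tolerance:
            issues.append(f"{key}: {a} != {b}")
    return issues

# Hypothetical per-charge-line outputs from the two engines.
engine_a = {"meter_101/volumetric": 42.10, "meter_101/standing": 12.00}
engine_b = {"meter_101/volumetric": 42.10, "meter_101/standing": 12.00}

assert compare_charges(engine_a, engine_b) == []  # identical charges: pass
```

The strength of this design is that a defect must be independently reproduced by two separate teams before it can slip through unnoticed, which is far less likely than either team making the mistake alone.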
In summary, we have created a system that makes testing the web service and calculation elements of the Scottish water market software extremely simple. Both we and our client are very satisfied with the system and continue to use it to test every release. Furthermore, it has been designed generically enough to be adapted to other web-service-based systems, allowing others to enjoy the same benefits our testing team has.