A vendor had developed the new scheduler for the customer, and it was largely untested. The customer wanted the product evaluated and validated by an independent Quality Assurance Service Provider before presenting it to the public. Testing the product posed a challenge because there were no test cases or requirements specification documents from which to work.
The customer approached PreludeSys to conduct an independent evaluation of the new scheduler application.

Understanding the Test Requirements: PreludeSys met extensively with the customer to understand the customer's business process, the roles and actions of the various users, the expected behavior of the scheduler application, and the probable results.
Test Preparation: PreludeSys prepared a test plan and test cases to evaluate the scheduler system. The PreludeSys QA Team then prepared system and performance tests designed to evaluate the sequence of operations in scheduling and to check whether the product would fail any of the scenarios likely to occur in real-life use. These tests were supported by appropriate data to check the behavior of the product across valid and invalid data classes and at boundary values.
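Boundary-value and equivalence-class testing of this kind can be sketched as follows. The scheduler's actual API is not described in the case study, so the `is_valid_duration` rule and its 15-480 minute limits below are purely illustrative assumptions:

```python
def is_valid_duration(minutes):
    """Assumed rule for illustration: an appointment must last
    between 15 and 480 minutes, inclusive."""
    return 15 <= minutes <= 480

def run_boundary_tests():
    # Valid equivalence class: values inside the range,
    # including both boundary values themselves.
    valid = [15, 16, 120, 479, 480]
    # Invalid classes: values just outside each boundary,
    # plus clearly out-of-range values.
    invalid = [0, 14, 481, -30, 10_000]
    # Record the scheduler's verdict for every test value.
    return {m: is_valid_duration(m) for m in valid + invalid}

results = run_boundary_tests()
assert all(results[m] for m in (15, 480))      # boundaries accepted
assert not any(results[m] for m in (14, 481))  # just outside rejected
```

The interesting defects in such tests tend to cluster at the boundaries (off-by-one comparisons such as `<` instead of `<=`), which is why each boundary is probed from both sides.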
Test Execution and Defect Tracking: The test cases were executed per a predetermined test plan. The defects discovered during those tests were recorded in a collaborative defect-tracking tool and followed throughout their life cycle.
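Tracking a defect "throughout its life cycle" means recording each state it passes through from discovery to closure. A minimal sketch of such a record follows; the state names and linear transition order are generic examples, not the actual tool's workflow:

```python
# Generic defect life cycle; real tools add states such as
# "Reopened" or "Deferred" and allow non-linear transitions.
LIFECYCLE = ["New", "Assigned", "Fixed", "Verified", "Closed"]

class Defect:
    """One tracked defect with its full state history."""

    def __init__(self, defect_id, summary, severity):
        self.defect_id = defect_id
        self.summary = summary
        self.severity = severity
        self.status = "New"
        self.history = ["New"]  # every state the defect has held

    def advance(self):
        """Move the defect to the next state in the life cycle."""
        i = LIFECYCLE.index(self.status)
        if i < len(LIFECYCLE) - 1:
            self.status = LIFECYCLE[i + 1]
            self.history.append(self.status)
        return self.status

# Hypothetical usage: the defect ID and summary are invented.
d = Defect("DEF-001", "Overlapping bookings allowed", "High")
d.advance()  # -> "Assigned"
d.advance()  # -> "Fixed"
```

Keeping the full `history` list is what makes end-of-project reporting possible: it shows how long defects spent in each state, not just their final status.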
Reporting: At the end of the tests, results for items such as defect categorization and the behavior of the application under different levels of user load were summarized into reports. These reports and the defect-tracker data were presented to the customer for review.
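Summarizing recorded defects by category is a simple aggregation step. The sketch below shows one way to produce such a summary; the defect records and category names are invented for the example:

```python
from collections import Counter

# Invented sample records standing in for the defect-tracker export.
defects = [
    {"id": "DEF-001", "category": "Functional", "severity": "High"},
    {"id": "DEF-002", "category": "Performance", "severity": "Medium"},
    {"id": "DEF-003", "category": "Functional", "severity": "Low"},
]

def summarize(records):
    """Count defects per category for the summary report."""
    return Counter(r["category"] for r in records)

summary = summarize(defects)
# e.g. Counter({'Functional': 2, 'Performance': 1})
```

The same pattern extends to grouping by severity or by life-cycle state, which is how a defect-categorization table in the final report would typically be produced.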
Standards: PreludeSys followed the IEEE 829 standard for software test documentation in all of its testing activities.