Test Automation

In industrial practice, functional testing of enterprise applications starts with the identification of test scenarios from requirements and use cases. These scenarios are translated to manual test cases. A manual test case is a sequence of test steps written in natural language: each test step is essentially an instruction for the tester to perform an action on the application user interface or to verify some visible state of the user interface. Because tests are run repeatedly, they go through an automation process, in which they are converted to automated test scripts (or programs) that perform the test steps mechanically.
Conventional test-automation techniques, such as record-replay and keyword-driven automation, can be time-consuming, can require specialized skills, and can produce fragile scripts. To address these limitations, we have developed a technique and a tool, called ATA, for automating the test-automation task. The input to our technique is a sequence of test steps written in natural language, and the output is a test script that can drive the application without human intervention. Our technique is based on a novel combination of natural-language processing and backtracking exploration. Our key insight is to deal with the ambiguities in natural-language input by exploring a space of possible scripts to arrive at a desired one. This technology promises substantial productivity benefits in the testing of web applications; initial results have shown almost a twofold gain in productivity. ATA not only speeds up the initial conversion of manual tests to automated scripts but also generates scripts that are more resilient to changes than conventional test scripts, thereby reducing the costs associated with test-script maintenance.
- Automating test automation. S Thummalapenta, S Sinha, N Singhania, S Chandra. Proceedings of the 34th International Conference on Software Engineering (ICSE 2012), 2012.
- Efficiently scripting change-resilient tests. N Singhania, P Devaki, S Sinha, S Chandra, A Das, and S Mangipudi. Proceedings of the 20th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2012) Research Tool Demonstrations Track, to appear.
- Automating test automation. S Thummalapenta, S Sinha, D Mukherjee, S Chandra. Technical Report RI11014, IBM Research, 2011.
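The combination of natural-language interpretation and backtracking exploration described above can be illustrated with a small sketch. The word-overlap disambiguation, the page model, and all identifiers below are illustrative assumptions for exposition, not ATA's actual design:

```python
def candidates(step, elements):
    """Hypothetical disambiguation: a UI element is a candidate target for a
    step if its identifier shares a word with the step's text."""
    words = set(step.lower().split())
    return [el for el in elements if words & set(el.split("_"))]

# A toy page model: page name -> {element: page reached by acting on it}.
PAGES = {
    "start":    {"account_link": "dead_end", "account_button": "form"},
    "dead_end": {},
    "form":     {"save_button": "done"},
    "done":     {},
}

def interpret(steps, page, script=()):
    """Backtracking search for a concrete script realizing all the steps."""
    if not steps:
        return list(script)
    for el in candidates(steps[0], PAGES[page]):
        result = interpret(steps[1:], PAGES[page][el], script + (el,))
        if result is not None:
            return result  # this candidate lets the remaining steps succeed
    return None  # no candidate worked on this page: the caller backtracks
```

Here the ambiguous step "open account" matches both `account_link` and `account_button`; the search tries the link first, hits a dead end on the next step, backtracks, and commits to the button instead.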
Test-Case Generation

Functional specifications of enterprise applications are often given in the form of business rules. Business rules describe business logic, access control, or even navigational properties of an application's user interface. Therefore, business rules are a natural starting point for the design of functional tests.
The goal of our research is to automatically generate executable tests that drive an application along paths on which business rules apply (are "covered"), so that the application's conformance to the relevant rules can be validated. Test generation requires (1) the creation of adequate test data (to ensure that business rules are well tested) and (2) the synthesis of flows through the application user interface to exercise a rule.
To attain the goal, our research develops modeling techniques for capturing business rules in a formal representation that is amenable to automated analysis and that can drive the generation of adequate test data and test flows. We are also developing directed search techniques for synthesizing flows through the network of interconnected pages of web applications that cover business rules.
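The two ingredients above, a formal rule representation and a directed search over the network of pages, might be sketched as follows. The page graph, the rule encoding, and the hand-picked test data are all hypothetical; a real generator would solve for data satisfying the rule's condition rather than hard-coding it:

```python
from collections import deque

# Hypothetical page graph of a web application: page -> reachable pages.
PAGE_GRAPH = {
    "home":     ["catalog", "account"],
    "catalog":  ["cart"],
    "account":  [],
    "cart":     ["checkout"],
    "checkout": [],
}

# Hypothetical encoding of the rule "orders over 100 get free shipping",
# which applies on the checkout page.
RULE = {
    "page": "checkout",
    "condition": lambda data: data["order_total"] > 100,
}

def path_to(graph, start, goal):
    """Directed search (here plain BFS) for a flow reaching the rule's page."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def generate_test(rule):
    """Pair a UI flow that reaches the rule with data satisfying its condition."""
    flow = path_to(PAGE_GRAPH, "home", rule["page"])
    data = {"order_total": 150}  # hand-picked; a real generator solves for this
    return flow, data
```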
Test Repair

Test suites, once created, rarely remain static. Just like the application they are testing, they evolve throughout their lifetime. Test obsolescence is probably the best-known reason for test-suite evolution: test cases cease to work because of changes in the code and must be suitably repaired. Repairing existing test cases manually, however, can be time-consuming, especially for large test suites, which has motivated the recent development of automated test-repair techniques.
To develop effective repair techniques that are applicable in real-world scenarios, a fundamental prerequisite is a thorough understanding of how test cases evolve in practice. Without such knowledge, we risk developing techniques that are not widely applicable. In this research, we have developed a technique and a tool for studying test-suite evolution. Using the tool, we have conducted an extensive empirical study of how test suites evolve.
Based on the insights gained from the study, we are developing automated test-repair techniques that are grounded in real-world scenarios. A particularly interesting research problem, in this context, is the development of repair techniques that are intent-preserving, that is, techniques that ensure that the repaired tests preserve the "intent" behind the original tests. Often, tests can be repaired in different ways, and selecting the repair that most closely mirrors the original test intent is desirable. Our research is investigating to what extent test intent can be characterized formally and accurately.
This is joint research with Alex Orso.
- Understanding myths and realities of test-suite evolution. L Sales Pinto, S Sinha, and A Orso. Proceedings of the 20th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2012), to appear.
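One simple proxy for selecting among alternative repairs, offered purely as an illustration of the selection problem and not as our characterization of test intent, is to model tests as action sequences and prefer the candidate repair at minimal edit distance from the original test:

```python
def edit_distance(a, b):
    """Levenshtein distance over action sequences (single-row formulation)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete x
                                     dp[j - 1] + 1,    # insert y
                                     prev + (x != y))  # match or substitute
    return dp[-1]

def pick_repair(original, candidate_repairs):
    """Choose the candidate repair closest to the original action sequence."""
    return min(candidate_repairs, key=lambda c: edit_distance(original, c))
```

Edit distance is a purely syntactic measure; an intent-preserving technique would also have to account for the semantics of the actions, which is exactly what makes the problem interesting.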
Regression Testing

Modern software systems exhibit characteristics, such as heterogeneity and environment dependence, that complicate regression testing. As examples of heterogeneity, such systems can be implemented using multiple languages and technologies, and can be highly dynamic, involving runtime generation of code and other artifacts. As examples of environment dependence, they can interact with configuration files and databases. Existing regression-testing techniques are suited to homogeneous, static applications, and ignore changes in the environment that can affect application behavior. Our research develops effective regression test-selection techniques for complex, heterogeneous, and environment-dependent software systems for which existing techniques are ineffective or completely inapplicable.
- Regression testing in the presence of non-code changes. A Nanda, S Mani, S Sinha, M J Harrold, A Orso. Proceedings of the 4th International Conference on Software Testing, Verification and Validation (ICST 2011), pp. 21–30, 2011.
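The core of test selection in the presence of both code and non-code changes can be sketched as follows. The dependency map and the `config:`/`db:` naming scheme are assumptions for illustration; in practice such maps are computed by analyzing what each test executes and which environment artifacts it reads:

```python
# Hypothetical dependency map: each test depends on code entities and on
# environment artifacts (config keys, database tables) it touches.
DEPENDENCIES = {
    "test_login":    {"auth.check_password", "config:session_timeout"},
    "test_checkout": {"cart.total", "db:orders"},
    "test_search":   {"catalog.query"},
}

def select_tests(changes, deps=DEPENDENCIES):
    """Select every test whose code *or* environment dependencies changed."""
    return sorted(t for t, d in deps.items() if d & changes)
```

Because configuration keys and database tables appear in the same dependency sets as code entities, a change to `config:session_timeout` selects `test_login` even though no source file was modified.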