2023
Carving UI Tests to Generate API Tests and API Specification
R Yandrapally, S Sinha, R Tzoref-Brill, A Mesbah
Proceedings of the 45th International Conference on Software Engineering (ICSE 2023)
Abstract
Modern web applications make extensive use of API calls to update the UI state in response to user events or server-side changes. For such applications, API-level testing can play an important role, in between unit-level testing and UI-level (or end-to-end) testing. Existing API testing tools require API specifications (e.g., OpenAPI), which often may not be available or, when available, be inconsistent with the API implementation, thus limiting the applicability of automated API testing to web applications. In this paper, we present an approach that leverages UI testing to enable API-level testing for web applications. Our technique navigates the web application under test and automatically generates an API-level test suite, along with an OpenAPI specification that describes the application's server-side APIs (for REST-based web applications). A key element of our solution is a dynamic approach for inferring API endpoints with path parameters via UI navigation and directed API probing. We evaluated the technique for its accuracy in inferring API specifications and the effectiveness of the "carved" API tests. Our results on seven open-source web applications show that the technique achieves 98% precision and 56% recall in inferring endpoints. The carved API tests, when added to test suites generated by two automated REST API testing tools, increase statement coverage by 24% and 29% and branch coverage by 75% and 77%, on average. The main benefits of our technique are: (1) it enables API-level testing of web applications in cases where existing API testing tools are inapplicable and (2) it creates API-level test suites that cover server-side code efficiently, while exercising APIs as they would be invoked from an application's web UI, and that can augment existing API test suites.
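To make the endpoint-inference idea concrete, the following minimal Java sketch (hypothetical names; not the paper's implementation) shows how concrete request paths observed during UI navigation could be generalized into templated endpoints with path parameters, the kind of entries an inferred OpenAPI specification would list.

import java.util.*;

// Minimal sketch (hypothetical, not the paper's implementation): infer a
// templated endpoint from concrete request paths observed during UI navigation,
// generalizing /api/users/42 and /api/users/87 into /api/users/{id}.
public class EndpointInference {

    // Replace purely numeric path segments with a path-parameter placeholder.
    static String templatize(String path) {
        StringBuilder sb = new StringBuilder();
        for (String seg : path.split("/")) {
            if (seg.isEmpty()) continue;
            sb.append('/');
            sb.append(seg.matches("\\d+") ? "{id}" : seg);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> observed = Arrays.asList(
                "/api/users/42", "/api/users/87", "/api/users/42/orders");
        // Group observed calls by their inferred template.
        Map<String, List<String>> endpoints = new LinkedHashMap<>();
        for (String p : observed) {
            endpoints.computeIfAbsent(templatize(p), k -> new ArrayList<>()).add(p);
        }
        // Emit a minimal OpenAPI-like listing of the inferred endpoints.
        for (Map.Entry<String, List<String>> e : endpoints.entrySet()) {
            System.out.println(e.getKey() + "  (observed: " + e.getValue() + ")");
        }
    }
}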
2022
Automated Test Generation for REST APIs: No Time to Rest Yet
M Kim, Q Xin, S Sinha, A Orso
Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2022)
Abstract
Modern web services routinely provide REST APIs for clients to access their
functionality. These APIs present unique challenges and opportunities for
automated testing, driving the recent development of many techniques and tools
that generate test cases for API endpoints using various strategies.
Understanding how these techniques compare to one another is difficult, as they
have been evaluated on different benchmarks and using different metrics. To
fill this gap, we performed an empirical study aimed at understanding the landscape
in automated testing of REST APIs and at guiding future research in this area. We
first identified, through a systematic selection process, a set of 10
state-of-the-art REST API testing tools that included tools developed by both
researchers and practitioners. We then applied these tools to a benchmark of 20
real-world open-source RESTful services and analyzed their performance in terms
of code coverage achieved and unique failures triggered. This analysis allowed
us to identify strengths, weaknesses, and limitations of the tools considered
and of their underlying strategies, as well as implications of our findings for
future research in this area.
CrawLabel: Computing Natural-Language Labels for UI Test Cases
Y Liu, R Yandrapally, A Kalia, S Sinha, R Tzoref-Brill, A Mesbah
Proceedings of the 3rd ACM/IEEE International Conference on Automation of Software Test, 2022
Abstract
End-to-end test cases that exercise the application under test via its user interface (UI) are known to be hard for developers to read and understand; consequently, diagnosing failures in these tests and maintaining them can be tedious. Techniques for computing natural-language descriptions of test cases can help increase test readability. However, so far, such techniques have been developed for unit test cases; they are not applicable to end-to-end test cases.
In this paper, we focus on the problem of computing natural-language labels for the steps of end-to-end UI test cases for web applications. We present two techniques that apply natural-language processing to information available in the browser document object model (DOM). The first technique is an instance of a supervised approach in which labeling-relevant DOM attributes are ranked via manual analysis and fed into label computation. However, the supervised approach requires a training dataset; therefore, we propose a second, unsupervised technique that leverages probabilistic context-free grammar learning to compute dominant DOM attributes automatically. We implemented these techniques, along with two simpler baseline techniques, in a tool called CrawLabel (available as a plugin to Crawljax, a state-of-the-art UI test-generation tool for web applications) and evaluated their effectiveness on open-source web applications. Our results indicate that the supervised approach can achieve precision, recall, and F1-score of 83.38, 60.64, and 66.40, respectively. The unsupervised approach, although less effective, is competitive, achieving scores of 72.37, 58.12, and 59.77. We highlight key results and discuss the implications of our findings.
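As a rough illustration of attribute-based labeling (the attribute ranking below is assumed, not CrawLabel's actual model), a step label can be derived by picking the highest-ranked non-empty DOM attribute of the element a test step acts on:

import java.util.*;

// Minimal sketch (hypothetical attribute ranking): derive a natural-language
// label for a UI test step from the highest-ranked non-empty DOM attribute
// of the element the step acts on.
public class StepLabeler {

    // Labeling-relevant attributes in (assumed) priority order.
    static final List<String> RANKED_ATTRIBUTES =
            Arrays.asList("aria-label", "placeholder", "name", "title", "id");

    static String label(String action, Map<String, String> domAttributes) {
        for (String attr : RANKED_ATTRIBUTES) {
            String value = domAttributes.get(attr);
            if (value != null && !value.isEmpty()) {
                return action + " \"" + value + "\"";
            }
        }
        return action + " element"; // fallback when no attribute is informative
    }

    public static void main(String[] args) {
        Map<String, String> searchBox = new HashMap<>();
        searchBox.put("placeholder", "Search products");
        searchBox.put("id", "q");
        System.out.println(label("Type into", searchBox)); // Type into "Search products"
    }
}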
TackleTest: A Tool for Amplifying Test Generation via Type-Based Combinatorial Coverage
Rachel Tzoref-Brill, Saurabh Sinha, Antonio Abu Nassar, Victoria Goldin, Haim Kermany
Proceedings of the 15th International Conference on Software Testing, Verification and Validation (ICST 2022), Testing Tools Track
Abstract
We present TackleTest, an open-source tool for automatic generation of unit-level test cases for Java applications. TackleTest builds on top of two well-known test-generation tools, EvoSuite and Randoop, by adding a new combinatorial-testing-based approach for computing coverage goals that comprehensively exercises different parameter type combinations of the methods under test, at configurable interaction levels. We describe the tool architecture, the main tool components, and the combinatorial type-based testing technique.
TackleTest was developed in the context of application modernization at IBM, but it is also applicable as a general-purpose test-generation tool. We have evaluated TackleTest on several IBM-internal enterprise applications as well as on
a subset of the SF110 benchmark, and share our findings and lessons learned. Overall, TackleTest implements a new and complementary way of computing coverage goals for unit testing via a novel white-box application of combinatorial testing.
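A minimal sketch of the combinatorial idea (hypothetical types; not TackleTest's implementation): at interaction level 2, every pair of concrete parameter types for a method under test must appear together in at least one coverage goal. With two parameters this reduces to a cross product; more parameters would call for a covering array.

import java.util.*;

// Enumerate type combinations for a method's two parameters at interaction level 2.
public class TypeCombinations {

    public static void main(String[] args) {
        // Candidate concrete types for two parameters declared with interface types.
        List<String> collectionTypes = Arrays.asList("ArrayList", "LinkedList", "HashSet");
        List<String> numberTypes = Arrays.asList("Integer", "Double");

        // With two parameters, interaction level 2 is simply the cross product.
        List<String> goals = new ArrayList<>();
        for (String c : collectionTypes) {
            for (String n : numberTypes) {
                goals.add("methodUnderTest(" + c + ", " + n + ")");
            }
        }
        goals.forEach(System.out::println); // 6 coverage goals
    }
}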
2021
Mono2Micro: A Practical and Effective Tool for Decomposing Monolithic Java Applications to Microservices
A K Kalia, J Xiao, R Krishna, S Sinha, M Vukovic, D Banerjee
Proceedings of the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Industry Track, 2021
Abstract
In migrating production workloads to cloud, enterprises often face the daunting task of evolving monolithic applications toward a microservice architecture. At IBM, we developed a tool called Mono2Micro to assist with this challenging task. Mono2Micro performs spatio-temporal decomposition, leveraging well-defined business use cases and runtime call relations to create functionally cohesive partitioning of application classes. Our preliminary evaluation of Mono2Micro showed promising results.
How well does Mono2Micro perform against other decomposition techniques, and how do practitioners perceive the tool? This paper describes the technical foundations of Mono2Micro and presents results to answer these two questions. To answer the first question, we evaluated Mono2Micro against four existing techniques on a set of open-source and proprietary Java applications, using different metrics to assess the quality of decomposition and the tool's efficiency. Our results show that Mono2Micro significantly outperforms state-of-the-art baselines in specific metrics well-defined for the problem domain. To answer the second question, we conducted a survey of twenty-one practitioners in various industry roles who have used Mono2Micro. This study highlights several benefits of the tool, interesting practitioner perceptions, and scope for further improvements. Overall, these results show that Mono2Micro can provide a valuable aid to practitioners in creating functionally cohesive and explainable microservice decompositions.
2020
Mono2Micro: An AI-Based Toolchain for Evolving Monolithic Enterprise Applications to a Microservice Architecture
A K Kalia, J Xiao, C Lin, S Sinha, J Rofrano, M Vukovic, D Banerjee
Proceedings of the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Tool Demo Track, 2020
Abstract
Mono2Micro is an AI-based toolchain that provides recommendations
for decomposing legacy web applications into microservice
partitions. Mono2Micro consists of a set of tools that collect
static and runtime information from a monolithic application and
process the information using an AI-based technique to generate
recommendations for partitioning the application classes. Each partition represents a candidate microservice or a grouping of classes with similar business functionalities. Mono2Micro takes a temporo-spatial
clustering approach to compute meaningful and explainable
partitions. It generates two types of partition recommendations.
First, it computes business-logic-seams-based partitions that represent
a desired encapsulation of business functionalities. However,
such a recommendation may cut across data dependencies between
classes, and accommodating those dependencies could require significant
application updates. To address this, Mono2Micro computes natural-seams-based
partitions, which respect data dependencies. We describe the
set of tools that comprise Mono2Micro and illustrate them using a
well-known open-source JEE application.
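The following toy Java sketch (hypothetical traces; not Mono2Micro's clustering algorithm) illustrates the flavor of use-case-driven grouping: classes that are executed by the same set of business use cases fall into the same candidate partition.

import java.util.*;

// Group application classes by the set of business use cases that execute them,
// a coarse proxy for the functional cohesion the tool's clustering aims for.
public class UseCaseGrouping {

    public static void main(String[] args) {
        // Runtime traces: business use case -> classes observed during its execution.
        Map<String, Set<String>> traces = new LinkedHashMap<>();
        traces.put("CreateOrder", new TreeSet<>(Arrays.asList("OrderService", "OrderDao", "Cart")));
        traces.put("CancelOrder", new TreeSet<>(Arrays.asList("OrderService", "OrderDao")));
        traces.put("BrowseCatalog", new TreeSet<>(Arrays.asList("CatalogService", "ProductDao")));

        // Invert the traces: class -> set of use cases that exercise it.
        Map<String, Set<String>> signatures = new TreeMap<>();
        traces.forEach((useCase, classes) ->
                classes.forEach(c ->
                        signatures.computeIfAbsent(c, k -> new TreeSet<>()).add(useCase)));

        // Classes with identical use-case signatures form one candidate partition.
        Map<Set<String>, List<String>> partitions = new LinkedHashMap<>();
        signatures.forEach((cls, sig) ->
                partitions.computeIfAbsent(sig, k -> new ArrayList<>()).add(cls));
        partitions.forEach((sig, classes) ->
                System.out.println(classes + " <- used by " + sig));
    }
}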
Auto-Generation of Domain-Specific Systems: Cloud-Hosted DevOps for Business Users
S Sinha, T Astigarraga, R Hull, N Jean-Louis, V Sreedhar, H Chen, L Xue Hu, F Carpi, J Brusco and W Loach
Proceedings of IEEE CLOUD, 2020
Abstract
The wide use of spreadsheet-based solutions for business processes
illustrates the importance of giving business users simple mechanisms
for specifying and managing their processes. However,
spreadsheet-based solutions are hard to maintain, reuse, integrate,
and scale. This paper describes an approach for supporting "DevOps
for business users" that enables business-level users to manage the
full lifecycle of a large class of cloud-hosted business processes.
The approach builds on DevOps for software engineering, but removes
software engineers from the loop. Unlike general-purpose "low
code" business process management systems, the approach incorporates
aspects of a processing domain (e.g., billing) to create a DevOps
experience that business users can master easily.
In the approach, business users follow an agile
"specify-check-generate-deploy" methodology, enabling them to
rapidly and iteratively generate and operationalize cloud-hosted
processing systems, with little or no assistance from IT staff. We
demonstrate and evaluate the approach using a system built for the
billing application area, developed at IBM,
which provides technology support and maintenance services for numerous clients,
each with different billing needs and logic. The paper describes the
system, requirements, empirical evaluation of key components, and
lessons learned.
2015
Test Generation from Business Rules
S H Jensen, S Thummalapenta, S Sinha, and S Chandra
Proceedings of the 8th International Conference on Software Testing, Verification and Validation (ICST 2015), Best Paper Award, pp. 1--10
Abstract
Enterprise applications are difficult to test because their intended functionality is either not described precisely enough or described in cumbersome business rules. It takes a lot of effort on the part of a test architect to understand all the business rules and design tests that "cover" them, i.e., exercise all their constituent scenarios. Part of the problem is that it takes a complicated setup sequence to drive an application to a state in which a business rule can even fire. In this paper, we present a business rule modeling language that can be used to capture the functional specification of an enterprise system. The language makes it possible to build tool support for rule authoring, so that obvious deficiencies in rules can be detected mechanically. Most importantly, we show how to mechanically generate test sequences--i.e., test steps and test data--needed to exercise these business rules. To this end, we translate the rules into logical formulae and use constraint solving to generate test sequences. One of our contributions is to overcome scalability issues in this process, and we do this by using a novel algorithm for organizing search through the space of candidate sequences to discover covering sequences. Our results on three case studies show the promise of our approach.
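For illustration only (a made-up rule, not the paper's modeling language or constraint solver), a business rule can be viewed as a guard over application inputs, and covering it means finding test data that makes the guard fire:

// A business rule expressed as a guard, plus a brute-force search for test data
// that fires it -- the role the paper fills with constraint solving over formulae.
public class RuleCoverage {

    // Rule: premium customers with an order total above 1000 get a discount.
    static boolean ruleFires(double orderTotal, boolean premiumCustomer) {
        return orderTotal > 1000 && premiumCustomer;
    }

    public static void main(String[] args) {
        double[] candidateTotals = {0, 500, 1500};
        boolean[] candidatePremium = {false, true};
        for (double total : candidateTotals) {
            for (boolean premium : candidatePremium) {
                if (ruleFires(total, premium)) {
                    System.out.println("Covering test data: total=" + total
                            + ", premium=" + premium);
                    return;
                }
            }
        }
        System.out.println("No covering data found in candidate space");
    }
}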
Automated Modularization of GUI Test Cases
R Yandrapally, G Sridhara, and S Sinha
Proceedings of the 37th International Conference on Software Engineering (ICSE 2015)
Abstract
Test cases that drive an application under test via its graphical user interface (GUI) consist of sequences of steps that perform actions on, or verify the state of, the application user interface. Such tests can be hard to maintain, especially if they are not properly modularized---that is, common steps occur in many test cases, which can make test maintenance cumbersome and expensive. Performing modularization manually can take up considerable human effort. To address this, we present an automated approach for modularizing GUI test cases. Our approach consists of multiple phases. In the first phase, it analyzes individual test cases to partition test steps into candidate subroutines, based on how user-interface elements are accessed in the steps. This phase can analyze the test cases only or also leverage execution traces of the tests, which involves a cost-accuracy tradeoff. In the second phase, the technique compares candidate subroutines across test cases, and refines them to compute the final set of subroutines. In the last phase, it creates callable subroutines, with parameterized data and control flow, and refactors the original tests to call the subroutines with context-specific data and control parameters. Our empirical results, collected using open-source applications, illustrate the effectiveness of the approach.
2014
Robust Test Automation Using Contextual Clues
R Yandrapally, S Thummalapenta, S Sinha, and S Chandra
Proceedings of the International Symposium on Software Testing and Analysis (ISSTA), pp. 304--314, 2014
Abstract
Despite the seemingly obvious advantage of test automation, significant skepticism exists in the industry regarding its cost-benefit tradeoffs. Test scripts for web applications are fragile: even small changes in the page layout can break a number of tests, requiring the expense of re-automating them. Moreover, a test script created for one browser cannot be relied upon to run on a different web browser: it requires duplicate effort to create and maintain versions of tests for a variety of browsers. Because of these hidden costs, organizations often fall back to manual testing.
We present a fresh solution to the problem of test script fragility. Often, the root cause of test script fragility is that, to identify UI elements on a page, tools typically record some metadata that depends on the internal representation of the page in a browser. Our technique eliminates metadata almost entirely. Instead, it identifies UI elements relative to other prominent elements on the page. The core of our technique automatically identifies a series of contextual clues that unambiguously identify a UI element, without recording anything about the internal representation.
Empirical evidence shows that our technique is highly accurate in computing contextual clues, and outperforms existing techniques in its resilience to UI changes as well as browser changes.
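A toy Java sketch of the idea (hypothetical page model; not the paper's algorithm): instead of recorded metadata such as an id or XPath, the target input is located by its proximity to a prominent contextual clue, here the nearest visible label text.

import java.util.*;

// Locate a target input field by its proximity to the visible label "Username",
// rather than by any internal locator recorded from the browser's representation.
public class ContextualLocator {

    static class Element {
        String tag, text; int x, y;
        Element(String tag, String text, int x, int y) {
            this.tag = tag; this.text = text; this.x = x; this.y = y;
        }
    }

    // Return the input element closest (Euclidean distance) to the clue text.
    static Element locateInputNear(String clueText, List<Element> page) {
        Element clue = page.stream()
                .filter(e -> clueText.equals(e.text))
                .findFirst()
                .orElseThrow(() -> new NoSuchElementException(clueText));
        return page.stream()
                .filter(e -> e.tag.equals("input"))
                .min(Comparator.comparingDouble(
                        (Element e) -> Math.hypot(e.x - clue.x, e.y - clue.y)))
                .orElseThrow(() -> new NoSuchElementException("input"));
    }

    public static void main(String[] args) {
        List<Element> page = Arrays.asList(
                new Element("label", "Username", 10, 20),
                new Element("input", "", 120, 20),
                new Element("label", "Password", 10, 60),
                new Element("input", "", 120, 60));
        Element target = locateInputNear("Username", page);
        System.out.println("Found input at (" + target.x + "," + target.y + ")");
    }
}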
Software Services: A Research Roadmap
S Chandra, V S Sinha, S Sinha, K Ratakonda
Proceedings of the 36th International Conference on Software Engineering (ICSE), Future of Software Engineering, pp. 40--54, 2014
Abstract
Software services companies offer software development, testing and maintenance as a "service" to other organizations. As a thriving industry in its own right, software services offers certain unique research problems as well as different takes on research problems typically considered in software engineering research. In this paper, we highlight some of these research problems, drawing heavily upon our involvement with IBM Global Business Services organization over the past several years. We focus on four selected topics: how to organize people and the flow of work through people, how to manage knowledge at an organizational level, how to estimate and manage risk in a services engagement, and finally, testing services. These topics by no means cover all areas pertinent to software services; rather, they reflect ones in which we have personal perspectives to offer. We also share our experience in deployment of research innovations in a large service delivery organization.
Operational Abstractions of Model Transforms
V S Sinha, P Dhoolia, S Mani, and S Sinha
Proceedings of the 7th India Software Engineering Conference (ISEC), pp. 3:1--3:10, 2014
Abstract
Model transforms convert a model to another model or text. Transforms play a key role in model-driven engineering; therefore, it is essential for a transform user to understand the transform semantics. However, examining the transform code to obtain such understanding can be cumbersome and impractical. To address this, we present an operational abstraction of model transforms. The abstraction captures the essential transformation semantics in terms of the structure of the output and the influence of input-model elements on output fragments. Thus, the abstraction supports the transform user's perspective, which is focused on inputs and outputs, and is unconcerned with implementation details. We present an automated approach that uses a combination of selective path enumeration, forced execution of enumerated paths, and an offline trace-merging analysis to synthesize operational abstractions. We discuss different applications of the abstraction. We implemented our approach for XSLT-based model transforms and conducted empirical studies. Our results indicate that selective path enumeration can be highly effective in reducing the scope of path exploration, while ensuring that the abstraction is complete. Our user study illustrates that the abstraction can improve user efficiency in debugging faulty input models. Our final study shows the feasibility of using abstractions for two metamodel management tasks.
2013
Efficient and Flexible GUI Test Execution via Test Merging
P Devaki, S Thummalapenta, N Singhania, S Sinha
Proceedings of the International Symposium on Software Testing and Analysis (ISSTA), 2013
Abstract
As a test suite evolves, it can accumulate redundant tests. To address this problem, many test-suite reduction techniques, based on different measures of redundancy, have been developed. A more subtle problem, which can also cause test-suite bloat and which has not been addressed by existing research, is the accumulation of similar tests. Similar tests are not redundant by any measure, but they contain many common actions that are executed repeatedly, which, over a large test suite, can degrade execution time substantially.
We present a test merging technique for GUI tests. Given a test suite, the technique identifies the tests that can be merged and creates a merged test, which covers all the application states that are exercised individually by the tests, but with the redundant common steps executed only once. The key novelty in the merging technique is that it compares the dynamic states induced by the tests to identify a semantically meaningful interleaving of steps from different tests. The technique not only improves the efficiency of test execution, but also ensures that there is no loss in the fault-revealing power of the original tests. In the empirical studies, conducted using four open-source web applications and one proprietary enterprise web application, in which over 3300 test cases and 19600 test steps were analyzed, the technique reduced the number of test steps by 29% and the test-execution time by 39%.
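In the simplest case, merging amounts to sharing a common prefix of steps; the sketch below (Java, illustrative only, ignoring the dynamic-state comparison that the actual technique performs to find a semantically meaningful interleaving) shows that special case.

import java.util.*;

// Merge two tests so their common leading steps execute once, followed by each
// test's remaining steps. A real merge must also check state compatibility.
public class TestMerger {

    static List<String> merge(List<String> testA, List<String> testB) {
        int common = 0;
        while (common < testA.size() && common < testB.size()
                && testA.get(common).equals(testB.get(common))) {
            common++;
        }
        List<String> merged = new ArrayList<>(testA);          // prefix + rest of A
        merged.addAll(testB.subList(common, testB.size()));    // then rest of B
        return merged;
    }

    public static void main(String[] args) {
        List<String> a = Arrays.asList("open login page", "log in as admin",
                "open reports", "verify sales report");
        List<String> b = Arrays.asList("open login page", "log in as admin",
                "open settings", "verify email settings");
        merge(a, b).forEach(System.out::println); // login steps appear only once
    }
}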
Guided Test Generation for Web Applications
S Thummalapenta, K V Lakshmi, S Sinha, N Sinha, S Chandra
Proceedings of the 35th International Conference on Software Engineering (ICSE), pp. 162--171, 2013
Abstract
We focus on functional testing of enterprise applications with the goal of exercising an application's interesting behaviors by driving it from its user interface. The difficulty in doing this is focusing on the interesting behaviors among an unbounded number of behaviors.
We present a new technique for automatically generating tests that drive a web-based application along interesting behaviors, where the interesting behavior is specified in the form of "business rules." Business rules are a general mechanism for describing business logic, access control, or even navigational properties of an application's GUI. Our technique is black box, in that it does not analyze the application's server-side implementation, but relies on directed crawling via the application's GUI. To handle the unbounded number of GUI states, the technique includes two phases. Phase 1 creates an abstract state-transition diagram using a relaxed notion of equivalence of GUI states without considering rules. Next, Phase 2 identifies rule-relevant abstract paths and refines those paths using a stricter notion of state equivalence. Our technique can be much more effective at covering business rules than an undirected technique, developed as an enhancement of an existing test-generation technique. Our experiments showed that the former was able to cover 92% of the rules, compared to 52% of the rules covered by the latter.
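The two notions of state equivalence can be pictured as follows (hypothetical abstraction functions, not the paper's exact definitions): a relaxed abstraction keeps the crawl tractable, and a stricter one is applied when refining rule-relevant paths.

import java.util.*;

// Two notions of GUI-state equivalence used at different phases of the crawl.
public class StateEquivalence {

    // Relaxed: states are equivalent if they offer the same set of actionable widgets.
    static boolean relaxedEquals(Set<String> widgetsA, Set<String> widgetsB) {
        return widgetsA.equals(widgetsB);
    }

    // Strict: states are equivalent only if the serialized DOM matches exactly.
    static boolean strictEquals(String domA, String domB) {
        return domA.equals(domB);
    }

    public static void main(String[] args) {
        Set<String> s1 = new HashSet<>(Arrays.asList("searchBox", "addToCartButton"));
        Set<String> s2 = new HashSet<>(Arrays.asList("searchBox", "addToCartButton"));
        String dom1 = "<div><input id='q'/><button>Add</button><span>3 items</span></div>";
        String dom2 = "<div><input id='q'/><button>Add</button><span>4 items</span></div>";
        System.out.println(relaxedEquals(s1, s2)); // true: same actions available
        System.out.println(strictEquals(dom1, dom2)); // false: item counts differ
    }
}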
Efficient and Change-Resilient Test Automation: An Industrial Case Study
S Thummalapenta, P Devaki, S Sinha, S Chandra, S Gnanasundaram, D Nagaraj, S Sathishkumar
Proceedings of the 35th International Conference on Software Engineering (ICSE), Software Engineering in Practice Track, pp. 1002--1011, 2013
Abstract
Test automation, which involves the conversion of manual test cases to executable test scripts, is necessary to carry out efficient regression testing of GUI-based applications. However, test automation takes significant investment of time and skilled effort. Moreover, it is not a one-time investment: as the application or its environment evolves, test scripts demand continuous patching. Thus, it is challenging to perform test automation in a cost-effective manner.
At IBM, we developed a tool, called ATA, to meet this challenge. ATA has novel features that are designed to lower the cost of initial test automation significantly. Moreover, ATA has the ability to patch scripts automatically for certain types of application or environment changes.
How well does ATA meet its objectives in the real world? In this paper, we present a detailed case study in the context of a challenging production environment: an enterprise web application that has over 6500 manual test cases, comes in two variants, evolves frequently, and needs to be tested on multiple browsers in time-constrained and resource-constrained regression cycles. We measured how well ATA improved the efficiency in initial automation. We also evaluated the effectiveness of ATA's change-resilience along multiple dimensions: application versions, browsers, and browser versions. Our study highlights several lessons for test-automation practitioners as well as open research problems in test automation.
TestEvol: A Tool for Analyzing Test-Suite Evolution
L S Pinto, S Sinha, A Orso
Proceedings of the 35th International Conference on Software Engineering (ICSE), Tool Demonstrations Track, pp. 1303--1306, 2013
Abstract
Test suites, just like the applications they are testing, evolve throughout their lifetime. One of the main reasons for test-suite evolution is test obsolescence: test cases cease to work because of changes in the code and must be suitably repaired. There are several reasons why it is important to achieve a thorough understanding of how test cases evolve in practice. In particular, researchers who investigate automated test repair--an increasingly active research area--can use such understanding to develop more effective repair techniques that can be successfully applied in real-world scenarios. More generally, analyzing test-suite evolution can help testers better understand how test cases are modified during maintenance and improve the test evolution process, an extremely time consuming activity for any non-trivial test suite. Unfortunately, there are no existing tools that facilitate investigation of test evolution. To tackle this problem, we developed TestEvol, a tool that enables the systematic study of test-suite evolution for Java programs and JUnit test cases. This demonstration presents TestEvol and illustrates its usefulness and practical applicability by showing how TestEvol can be successfully used on real-world software and test suites.
2012
Understanding Myths and Realities of Test-Suite Evolution
L S Pinto, S Sinha, and A Orso
Proceedings of the 20th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE), 2012
Abstract
Test suites, once created, rarely remain static. Just like the application they are testing, they evolve throughout their lifetime. Test obsolescence is probably the best-known reason for test-suite evolution---test cases cease to work because of changes in the code and must be suitably repaired. Repairing existing test cases manually, however, can be extremely time consuming, especially for large test suites, which has motivated the recent development of automated test-repair techniques. We believe that, for developing effective repair techniques that are applicable in real-world scenarios, a fundamental prerequisite is a thorough understanding of how test cases evolve in practice. Without such knowledge, we risk developing techniques that may work well for only a small number of tests or, worse, that may not work at all in most realistic cases. Unfortunately, to date there are no studies in the literature that investigate how test suites evolve. To tackle this problem, in this paper we present a technique for studying test-suite evolution, a tool that implements the technique, and an extensive empirical study in which we used our technique to study many versions of six real-world programs and their unit test suites. This is the first study of this kind, and our results reveal several interesting aspects of test-suite evolution. In particular, our findings show that test repair is just one possible reason for test-suite evolution, whereas most changes involve refactorings, deletions, and additions of test cases. Our results also show that test modifications tend to involve complex, and hard-to-automate, changes to test cases, and that existing test-repair techniques that focus exclusively on assertions may have limited practical applicability. More generally, our findings provide initial insight on how test cases are added, removed, and modified in practice, and can guide future research efforts in the area of test-suite evolution.
Efficiently Scripting Change-Resilient Tests
S Thummalapenta, N Singhania, P Devaki, S Sinha, S Chandra, A Das and S Mangipudi
Proceedings of the 20th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE), Research Tool Demonstrations Track, 2012
Abstract
In industrial practice, test cases often start out as steps described in natural language and are intended to be executed by a human. Since tests are executed repeatedly, they go through an automation process, in which they are converted to automated test scripts (or programs) that perform the test steps mechanically. Conventional test-automation techniques can be time-consuming, require specialized skills, and can produce fragile scripts. To address these limitations, we present a tool, called ATA, for automating the test-automation task. Using a novel combination of natural-language processing, backtracking exploration, and learning, ATA can significantly improve tester productivity in automating manual tests. ATA also produces change-resilient scripts, which automatically adapt themselves in the presence of certain common types of user-interface changes.
Automating Test Automation
S Thummalapenta, S Sinha, N Singhania, S Chandra
Proceedings of the 34th International Conference on Software Engineering (ICSE), pp. 881--891, 2012
Abstract
Mention test case, and it conjures up the image of a script or a program that exercises a system under test. In industrial practice, however, test cases often start out as steps described in natural language. These are essentially directions a human tester needs to follow to interact with an application, exercising a given scenario. Since tests need to be executed repeatedly, such manual tests then have to go through test automation to create scripts or programs out of them. Test automation can be expensive in programmer time. We describe a technique to automate test automation. The input to our technique is a sequence of steps written in natural language, and the output is a sequence of procedure calls with accompanying parameters that can drive the application without human intervention. The technique is based on viewing the natural-language test steps as consisting of segments that describe actions on targets, except that there can be ambiguity in the action itself, in the order in which segments occur, and in the specification of the target of the action. The technique resolves this ambiguity by backtracking, until it can synthesize a successful sequence of calls. We present an evaluation of our technique on professionally created manual test cases for two open-source web applications as well as a proprietary enterprise application. Our technique could automate over 82% of the steps contained in these test cases with no human intervention, indicating that the technique can reduce the cost of test automation quite effectively.
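A toy sketch of the segmentation step (keyword patterns only; ATA's actual pipeline uses natural-language processing and backtracking over alternative interpretations): a manual step is split into an action, a target, and optional data, which map to a procedure call.

import java.util.*;
import java.util.regex.*;

// Split a manual test step into action, target, and optional data segments.
public class StepParser {

    static final Pattern ENTER =
            Pattern.compile("enter \"(.+)\" in the (.+)", Pattern.CASE_INSENSITIVE);
    static final Pattern CLICK =
            Pattern.compile("click the (.+)", Pattern.CASE_INSENSITIVE);

    static String parse(String step) {
        Matcher m = ENTER.matcher(step);
        if (m.matches()) {
            return "type(target=\"" + m.group(2) + "\", data=\"" + m.group(1) + "\")";
        }
        m = CLICK.matcher(step);
        if (m.matches()) {
            return "click(target=\"" + m.group(1) + "\")";
        }
        return "ambiguous: " + step; // a real technique would backtrack over candidates
    }

    public static void main(String[] args) {
        System.out.println(parse("Enter \"admin\" in the username field"));
        System.out.println(parse("Click the Login button"));
    }
}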
Identifying services from legacy batch applications
R Komondoor, V K Nandivada, S Sinha, J Field
Proceedings of the 5th India Software Engineering Conference (ISEC), pp. 13--22, 2012
Abstract
Transaction processing is a key constituent of the IT workload of commercial enterprises (e.g., banks, insurance companies). Even today, in many large enterprises, transaction processing is done by legacy "batch" applications, which run offline and process accumulated transactions. Developers acknowledge the presence of multiple loosely coupled pieces of functionality within individual applications. Identifying such pieces of functionality (which we call "services") is desirable for the maintenance and evolution of these legacy applications. This is a hard problem, which enterprises grapple with, and one without satisfactory automated solutions.
In this paper, we propose a novel static-analysis-based solution to the problem of identifying services within transaction-processing programs. We provide a formal characterization of services in terms of control-flow and data-flow properties, which is well-suited to the idioms commonly exhibited by business applications. Our technique combines program slicing with the detection of conditional code regions to identify services in accordance with our characterization. A preliminary evaluation, based on a manual analysis of three real business programs, indicates that our approach can be effective in identifying useful services from batch applications.
2011
Regression Testing in the Presence of Non-code Changes
A Nanda, S Mani, S Sinha, M J Harrold, A Orso
Proceedings of the 4th International Conference on Software Testing, Verification and Validation (ICST), pp. 21--30, 2011
Abstract
Regression testing is an important activity performed to validate modified software, and one of its key tasks is regression test selection (RTS)---selecting a subset of existing test cases to run on the modified software. Most existing RTS techniques focus on changes made to code components and completely ignore non-code elements, such as configuration files and databases, which can also change and affect the system behavior. To address this issue, we present a new RTS technique that performs accurate test selection in the presence of changes to non-code components. To do this, our technique computes traceability between test cases and the external data accessed by an application, and uses this information to perform RTS in the presence of changes to non-code elements. We present our technique, a prototype implementation of our technique, and a set of preliminary empirical results that illustrate the feasibility, effectiveness, and potential usefulness of our approach.
Execution Hijacking: Improving Dynamic Analysis by Flying off Course
P Tsankov, W Jin, A Orso, S Sinha
Proceedings of the 4th International Conference on Software Testing, Verification and Validation (ICST), pp. 200--209, 2011
Abstract
Typically, dynamic-analysis techniques operate on a small subset of all possible program behaviors, which limits their effectiveness and the representativeness of the computed results. To address this issue, a new paradigm is emerging: execution hijacking, consisting of techniques that explore a larger set of program behaviors by forcing executions along specific paths. Although hijacked executions are infeasible for the given inputs, they can still produce feasible behaviors that could be observed under other inputs. In such cases, execution hijacking can improve the effectiveness of dynamic analysis without requiring the (expensive) generation of additional inputs. To evaluate the usefulness of execution hijacking, we defined, implemented, and evaluated several variants of it. Specifically, we performed an empirical study where we assessed whether execution hijacking could improve the effectiveness of a common dynamic analysis: memory error detection. The results of the study show that execution hijacking, if suitably performed, can indeed improve dynamic analysis.
Outsourced, Offshored Software-Testing Practice: Vendor-Side Experiences
H Shah, S Sinha, M J Harrold
Proceedings of the 6th International Conference on Global Software Engineering (ICGSE), Best Paper Award, pp. 131--140, 2011
Abstract
In the era of globally distributed software engineering, the practice of outsourced, offshored software testing (OOST) has witnessed increasing adoption. Although there have been ethnographic studies of the development aspects of global software engineering and of the in-house practice of testing, there have been fewer studies of OOST, which, to succeed, can require dealing with unique challenges. To address this limitation of the existing studies, we conducted an ethnographically-informed study of three vendor testing teams involved in OOST practice and report its findings in this paper. Specifically, we studied how test engineers perform their tasks under deadline pressures, the challenges that they encounter, and their strategies for coping with the challenges. Our study provides insights into the differences and similarities between in-house testing and OOST, the influence of team structures on the degree of pressure experienced by test engineers in the OOST setup, and the factors that influence quality and productivity under OOST.
Entering the Circle of Trust: Developer Initiation as Committers in Open-source Projects
V S Sinha, S Mani, S Sinha
Proceedings of the 8th Working Conference on Mining Software Repositories (MSR), pp. 133--142, 2011
Abstract
The success of an open-source project depends to a large degree on the proactive and constructive participation by the developer community. An important role that developers play in a project is that of a code committer. However, code-commit privilege is typically restricted to the core group of a project. In this paper, we study the phenomenon of the induction of external developers as code committers. The trustworthiness of an external developer is one of the key factors that determines the granting of commit privileges. Therefore, we formulate different hypotheses to explain how the trust is established in practice. To investigate our hypotheses, we developed an automated approach based on mining code repositories and bug-tracking systems. We implemented the approach and performed an empirical study, using the Eclipse projects, to test the hypotheses. Our results indicate that, most frequently, developers establish trust and credibility in a project by contributing to the project in a non-committer role. Moreover, the employing organization of a developer is another factor--although a less significant one--that influences trust.
2010
Automated Support for Repairing Input-Model Faults
S Mani, V S Sinha, P Dhoolia, S Sinha
Proceedings of the IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 195--204, 2010
Abstract
Model transforms are a class of applications that convert a model to another model or text. The inputs to such transforms are often large and complex; therefore, faults in the models that cause a transformation to generate incorrect output can be difficult to identify and fix. In previous work, we presented an approach that uses dynamic tainting to help locate input-model faults. In this paper, we present techniques to assist with repairing input-model faults. Our approach collects runtime information for the failing transformation, and computes repair actions that are targeted toward fixing the immediate cause of the failure. In many cases, these repair actions result in the generation of the correct output. In other cases, the initial fix can be incomplete, with the input model requiring further repairs. To address this, we present a pattern-analysis technique that identifies correct output fragments that are similar to the incorrect fragment and, based on the taint information associated with such fragments, computes additional repair actions. We present the results of empirical studies, conducted using real model transforms, which illustrate the applicability and effectiveness of our approach for repairing different types of faults.
Debugging Model-Transformation Failures using Dynamic Tainting
P Dhoolia, S Mani, V S Sinha, S Sinha
Proceedings of the 25th European Conference on Object-Oriented Programming (ECOOP), pp. 26--51, 2010
Abstract
Model-to-text (M2T) transforms are a class of software applications that translate a structured input into text output. The input models to such transforms are complex, and faults in the models that cause an M2T transform to generate an incorrect or incomplete output can be hard to debug. We present an approach based on dynamic tainting to assist transform users in debugging input models. The approach instruments the transform code to associate taint marks with the input-model elements, and propagate the marks to the output text. The taint marks identify the input-model elements that either contribute to an output string, or cause potentially incorrect paths to be executed through the transform, which results in an incorrect or a missing string in the output. We implemented our approach for XSL-based transforms and conducted empirical studies. Our results illustrate that the approach can significantly reduce the fault search space and, in many cases, precisely identify the input-model faults. The main benefit of our approach is that it automates, with a high degree of accuracy, a debugging task that can be tedious to perform manually.
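The sketch below (a toy transform in Java, not the paper's instrumentation of XSL-based transforms) shows the core idea: every output fragment carries the taint mark of the input-model element it was derived from, so an incorrect fragment can be traced back to a candidate input-model fault.

import java.util.*;

// Each output fragment carries a taint mark naming the input-model element that
// produced it; a malformed fragment thus points back at the input-model fault.
public class TaintedTransform {

    static class Tainted {
        final String text, taint;
        Tainted(String text, String taint) { this.text = text; this.taint = taint; }
    }

    // A toy model-to-text transform: emit a getter declaration per model attribute.
    static List<Tainted> transform(Map<String, String> attributes) {
        List<Tainted> output = new ArrayList<>();
        attributes.forEach((name, type) ->
                output.add(new Tainted("public " + type + " get" + name + "();",
                        "input element: attribute '" + name + "'")));
        return output;
    }

    public static void main(String[] args) {
        Map<String, String> model = new LinkedHashMap<>();
        model.put("Id", "int");
        model.put("Name", "Strng"); // faulty type in the input model
        for (Tainted line : transform(model)) {
            System.out.println(line.text + "   // " + line.taint);
        }
        // The malformed getter's taint mark points at attribute 'Name' as the fault.
    }
}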
Automated Bug Neighborhood Analysis for Identifying Incomplete Bug Fixes
M Kim, S Sinha, C Gorg, H Shah, M J Harrold, M G Nanda
Proceedings of the 3rd International Conference on Software Testing, Verification and Validation (ICST), pp. 383--392, 2010
Abstract
Although many static-analysis techniques have been developed for automatically detecting bugs, such as null dereferences, fewer automated approaches have been presented for analyzing whether and how such bugs are fixed. Attempted bug fixes may be incomplete in that a related manifestation of the bug remains unfixed. In this paper, we characterize the "completeness" of attempted bug fixes that involve the flow of invalid values from one program point to another, such as null dereferences, in Java programs. Our characterization is based on the definition of a bug neighborhood, which is a scope of flows of invalid values. We present an automated analysis that, given two versions P and P' of a program, identifies the bugs in P that have been fixed in P', and classifies each fix as complete or incomplete. We implemented our technique for null-dereference bugs and conducted empirical studies using open-source projects. Our results indicate that, for the projects we studied, many bug fixes are not complete, and thus, may cause failures in subsequent executions of the program.
Making Defect-Finding Tools Work for You
M G Nanda, M Gupta, S Sinha, S Chandra, D Schmidt, P Balachandran
Proceedings of the 32nd International Conference on Software Engineering (ICSE) Software Engineering in Practice Track, pp. 99--108, 2010
Abstract
Given the high costs of software testing and fixing bugs after release, early detection of bugs using static analysis can result in significant savings. However, despite their many benefits, recent availability of many such tools, and evidence of a positive return-on-investment, static-analysis tools are not used widely because of various usability and usefulness problems. The usability inhibitors include the lack of features, such as capabilities to merge reports from multiple tools and view warning deltas between two builds of a system. The usefulness problems are related primarily to the accuracy of the tools: identification of false positives (or, spurious bugs) and uninteresting bugs among the true positives. In this paper, we present the details of an online portal, developed at IBM Research, to address these problems and promote the adoption of static-analysis tools. We report our experience with the deployment of the portal within the IBM developer community. We also highlight the problems that we have learned are important to address, and present our approach toward solving some of those problems.
BUGINNINGS: Identifying the Origins of a Bug
V S Sinha, S Sinha, S Rao
Proceedings of the 3rd India Software Engineering Conference (ISEC), Best Paper Award, pp. 3--12, 2010
Abstract
Information about the origins of a bug can be useful for many applications, such as the computation of defect age and fault-proneness of code components. Although there exists an extensive body of research that has investigated the applications of bug-fix information, the computation of bug-origin information and its applications have largely been ignored--the only known approach for identifying bug origins has limitations. In this paper, we present a new approach for identifying bug-introducing code changes. The key innovation of our approach is that it analyzes the effects of bug-fix code changes on program dependences and, based on these effects, identifies the bug-introducing version. Thus, unlike the existing approach, which is text-based, our technique partially takes into account the semantics of code changes. This can make it not only more accurate, but also applicable to a wider class of bug-fix code changes. To evaluate the feasibility and usefulness of our technique, we performed a preliminary empirical evaluation. Our results indicate that the technique can be more effective than the text-based approach, has an acceptable computational cost, and can potentially be useful in practice.
From Informal Process Diagrams to Formal Process Models
D Mukherjee, P Dhoolia, S Sinha, A J Rembert, M G Nanda
Proceedings of the 8th International Conference on Business Process Management (BPM), pp. 145--161, 2010
2009
Demystifying Model Transformations: An Approach Based on Automated Rule Inference
M G Nanda, S Mani, V S Sinha, S Sinha
Proceedings of the 24th ACM SIGPLAN Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA), pp. 341--360, 2009
Abstract
Model-driven development (MDD) is widely used to develop modern business applications. MDD involves creating models at different levels of abstractions. Starting with models of domain concepts, these abstractions are successively refined, using transforms, to design-level models and, eventually, code-level artifacts. Although many tools exist that support transform creation and verification, tools that help users in understanding and using transforms are rare. In this paper, we present an approach for assisting users in understanding model transformations and debugging their input models. We use automated program-analysis techniques to analyze the transform code and compute constraints under which a transformation may fail or be incomplete. These code-level constraints are mapped to the input model elements to generate model-level rules. The rules can be used to validate whether an input model violates transform constraints, and to support general user queries about a transformation. We have implemented the analysis in a tool called XYLEM. We present empirical results, which indicate that (1) our approach can be effective in inferring useful rules, and (2) the rules let users efficiently diagnose a failing transformation without examining the transform source code.
Fault Localization and Repair for Java Runtime Exceptions
S Sinha, H Shah, C Gorg, S Jiang, M Kim, M J Harrold
Proceedings of the International Symposium on Software Testing and Analysis (ISSTA), pp. 153--164, 2009
Abstract
This paper presents a new approach for locating and repairing faults that cause runtime exceptions in Java programs. The approach handles runtime exceptions that involve a flow of an incorrect value that finally leads to the exception. This important class of exceptions includes exceptions related to dereferences of null pointers, arithmetic faults (e.g., ArithmeticException), and type faults (e.g., ArrayStoreException). Given a statement at which such an exception occurred, the technique combines dynamic analysis (using stack-trace information) with static backward data-flow analysis (beginning at the point where the runtime exception occurred) to identify the source statement at which an incorrect assignment was made; this information is required to locate the fault. The approach also identifies the source statements that may cause this same exception on other executions, along with the reference statements that may raise an exception in other executions because of this incorrect assignment; this information is required to repair the fault. The paper also presents an application of our technique to null pointer exceptions. Finally, the paper describes an implementation of the null-pointer-exception analysis and a set of studies that demonstrate the advantages of our approach for locating and repairing faults in the program.
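A hypothetical example of the fault class the technique targets (the example is illustrative; it is not the analysis itself): the incorrect null assignment and the dereference that raises the exception are in different methods, and the approach connects the two.

// The null originates at the assignment in lookup() and flows to the dereference
// in printLength(). Given the dereference and the stack trace, the backward
// data-flow analysis would identify the assignment as the source statement to fix,
// and flag other dereferences reached by the same value.
public class NullFlowExample {

    static String lookup(String key) {
        if (key.startsWith("user:")) {
            return key.substring(5);
        }
        return null; // source statement: the incorrect (null) assignment
    }

    static void printLength(String key) {
        String name = lookup(key);
        System.out.println(name.length()); // NullPointerException raised here
    }

    public static void main(String[] args) {
        printLength("order:42");
    }
}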
Accurate Interprocedural Null-Dereference Analysis for Java
M G Nanda, S Sinha
Proceedings of the 31st International Conference on Software Engineering (ICSE), pp. 133--144, 2009
Abstract
Null dereference is a commonly occurring defect in Java programs, and many static-analysis tools identify such defects. However, most of the existing tools perform a limited interprocedural analysis. In this paper, we present an interprocedural path-sensitive and context-sensitive analysis for identifying null dereferences. Starting at a dereference statement, our approach performs a backward demand-driven analysis to identify precisely paths along which null values may flow to the dereference. The demand-driven analysis avoids an exhaustive program exploration, which lets it scale to large programs. We present the results of empirical studies conducted using large open-source and commercial products. Our results show that: (1) our approach detects fewer false positives, and significantly more interprocedural true positives, than other commonly used tools; (2) the analysis scales to large subjects; and (3) the identified defects are often deleted in subsequent releases, which indicates that the reported defects are important.
Efficient Testing of Service-Oriented Applications using Semantic Service Stubs
S Mani, V S Sinha, S Sinha, P Dhoolia, D Mukherjee, S Chakraborty
Proceedings of the IEEE International Conference on Web Services (ICWS), pp. 197--204, 2009
Abstract
Service-oriented applications can be expensive to test because services are hosted remotely, are potentially shared among many users, and may have costs associated with their invocation. In this paper, we present an approach for reducing the costs of testing such applications. The key observation underlying our approach is that certain aspects of an application can be tested using locally deployed semantic service stubs, instead of actual remote services. A semantic service stub incorporates some of the service functionality, such as verifying preconditions and generating output messages based on postconditions. We illustrate how semantic stubs can enable the client test suite to be partitioned into subsets, some of which need not be executed using remote services. We also present a case study that demonstrates the feasibility of the approach, and potential cost savings for testing. The main benefits of our approach are that it can (1) reduce the number of test cases that need to be run to invoke remote services, and (2) ensure that certain aspects of application functionality are well-tested before service integration occurs.
2007
Parametric Process Model Inference
S Sinha, G Ramalingam, R Komondoor
Proceedings of the 14th Working Conference on Reverse Engineering (WCRE), pp. 21--30, 2007
Abstract
Legacy applications can be difficult and time-consuming to understand and update due to the lack of modern abstraction mechanisms in legacy languages, as well as the gradual deterioration of code due to repeated maintenance activities. We present an approach for reverse engineering process model abstractions from legacy code. Such a process model can provide a quick initial understanding of an application, and can be a useful starting point for further program exploration. Our approach takes as input a user specification of interesting events, and creates a representation (i.e., a process model) that concisely depicts the occurrences of the events and the possible control-flow among them. The key features of our approach are the use of a logical data model of the program for specifying the events, and graph-projection techniques for creating the process model.
2006
Semantics-based Reverse Engineering of Object-Oriented Data Models
G Ramalingam, R Komondoor, J Field, S Sinha
Proceedings of the 28th International Conference on Software Engineering (ICSE), pp. 192--201, 2006
Abstract
We present an algorithm for reverse engineering object-oriented (OO) data models from programs written in weakly-typed languages like Cobol. These models, similar to UML class diagrams, can facilitate a variety of program maintenance and migration activities. Our algorithm is based on a semantic analysis of the program's code, and we provide a bisimulation-based formalization of what it means for an OO data model to be correct for a program.
Subsumption of Program Entities for Efficient Coverage and Monitoring
R Santelices, S Sinha, M J Harrold
Proceedings of the 3rd International Workshop on Software Quality Assurance (SOQUA), pp. 2--5, 2006
Abstract
Program entities such as branches, def-use pairs, and call sequences are used in diverse software-development tasks. Reducing a set of entities to a small representative subset through subsumption saves monitoring overhead, focuses the developer's attention, and provides insights into the complexity of a program. Previous work has solved this problem for entities of the same type, and only for some types. In this paper we introduce a novel and general approach for subsumption of entities of any type based on predicate conditions. We discuss applications of this technique, and address future steps.
2004
Classifying Data Dependences in the Presence of Pointers for Program Comprehension, Testing, and Debugging
A Orso, S Sinha, M J Harrold
ACM Transactions on Software Engineering and Methodology 13(2), 199--239, 2004
Abstract
Understanding data dependences in programs is important for many software-engineering activities, such as program understanding, impact analysis, reverse engineering, and debugging. The presence of pointers can cause subtle and complex data dependences that can be difficult to understand. For example, in languages such as C, an assignment made through a pointer dereference can assign a value to one of several variables, none of which may appear syntactically in that statement. In the first part of this article, we describe two techniques for classifying data dependences in the presence of pointer dereferences. The first technique classifies data dependences based on definition type, use type, and path type. The second technique classifies data dependences based on span. We present empirical results to illustrate the distribution of data-dependence types and spans for a set of real C programs. In the second part of the article, we discuss two applications of the classification techniques. First, we investigate different ways in which the classification can be used to facilitate data-flow testing. We outline an approach that uses types and spans of data dependences to determine the appropriate verification technique for different data dependences; we present empirical results to illustrate the approach. Second, we present a new slicing approach that computes slices based on types of data dependences. Based on the new approach, we define an incremental slicing technique that computes a slice in multiple steps. We present empirical results to illustrate the sizes of incremental slices and the potential usefulness of incremental slicing for debugging.
Automated Support for Development, Maintenance, and Testing in the Presence of Implicit Control Flow
S Sinha, A Orso, M J Harrold
Proceedings of the 26th International Conference on Software Engineering (ICSE), pp. 336--346, 2004
Abstract
Although object-oriented languages can improve programming practices, their characteristics may introduce new problems for software engineers. One important problem is the presence of implicit control flow caused by exception handling and polymorphism. Implicit control flow causes complex interactions, and can thus complicate software-engineering tasks. To address this problem, we present a systematic and structured approach for supporting these tasks, based on the static and dynamic analyses of constructs that cause implicit control flow. Our approach provides software engineers with information for supporting and guiding development and maintenance tasks. We also present empirical results to illustrate the potential usefulness of our approach. Our studies show that, for the subjects considered, complex implicit control flow is always present and is generally not adequately exercised.
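The following small Java example (not from the paper) shows the two kinds of implicit control flow the approach analyzes: the target of a polymorphic call is decided at run time, and a thrown exception transfers control to a caller's handler without any explicit branch at the throwing site.

public class ImplicitFlowExample {

    interface Parser { int parse(String s); }

    static class DecimalParser implements Parser {
        public int parse(String s) { return Integer.parseInt(s); }
    }

    static class HexParser implements Parser {
        public int parse(String s) { return Integer.parseInt(s, 16); }
    }

    public static void main(String[] args) {
        Parser[] parsers = { new DecimalParser(), new HexParser() };
        for (Parser p : parsers) {
            try {
                // Implicit control flow 1: which parse() runs depends on p's dynamic type.
                System.out.println(p.parse("ff"));
            } catch (NumberFormatException e) {
                // Implicit control flow 2: DecimalParser.parse throws, and control
                // reaches this handler without an explicit branch in main.
                System.out.println("not a decimal number: ff");
            }
        }
    }
}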
2001
Interprocedural Control Dependence
S Sinha, M J Harrold, G Rothermel
ACM Transactions on Software Engineering and Methodology 10(2), 209--254, 2001
Abstract
Program-dependence information is useful for a variety of applications, such as software testing and maintenance tasks, and code optimization. Properly defined, control and data dependences can be used to identify semantic dependences. To function effectively on whole programs, tools that utilize dependence information require information about interprocedural dependences: dependences that are identified by analyzing the interactions among procedures. Many techniques for computing interprocedural data dependences exist; however, virtually no attention has been paid to interprocedural control dependence. Analysis techniques that fail to account for interprocedural control dependences can suffer unnecessary imprecision and loss of safety. This article presents a definition of interprocedural control dependence that supports the relationship of control and data dependence to semantic dependence. The article presents two approaches for computing interprocedural control dependences, and empirical results pertaining to the use of those approaches.
Regression Test Selection for Java Software
M J Harrold, J A Jones, T Li, D Liang, A Orso, M Pennings, S Sinha, S A Spoon, A Gujarathi
Proceedings of the 16th ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), pp. 312--326, 2001
Abstract
Regression testing is applied to modified software to provide confidence that the changed parts behave as intended and that the unchanged parts have not been adversely affected by the modifications. To reduce the cost of regression testing, test cases are selected from the test suite that was used to test the original version of the software---this process is called regression test selection. A safe regression-test-selection algorithm selects every test case in the test suite that may reveal a fault in the modified software. This paper presents a safe regression-test-selection technique that, based on the use of a suitable representation, handles the features of the Java language. Unlike other safe regression-test-selection techniques, the presented technique also handles incomplete programs. The technique can thus be safely applied in the (very common) case of Java software that uses external libraries of components; the analysis of the external code is not required for the technique to select test cases for such software. The paper also describes RETEST, a tool that implements the technique, and presents empirical results showing that the regression-test-selection algorithm can be effective in reducing the size of the test suite.
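As a rough, hypothetical illustration of safe selection (not the paper's representation or algorithm), consider a change confined to one branch of a function and two existing tests:

```cpp
#include <cassert>

// Hypothetical example: suppose only the bulk-discount amount changed between
// versions (from 10 to 15). testSingle never reaches the modified statement
// and may safely be omitted; testBulk traverses it and must be re-run by any
// safe regression-test-selection technique.
int discount(int quantity) {
    if (quantity >= 10) {
        return 15;          // changed statement (returned 10 in the previous version)
    }
    return 0;               // unchanged
}

void testSingle() { assert(discount(1) == 0); }    // exercises only unchanged code
void testBulk()   { assert(discount(12) == 15); }  // exercises the changed statement: selected

int main() {
    testSingle();
    testBulk();
    return 0;
}
```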
Incremental Slicing based on Data-Dependence Types
A Orso, S Sinha, M J Harrold
Proceedings of the International Conference on Software Maintenance (ICSM), pp. 158--167, 2001
Abstract
Program slicing is useful for assisting with many software-maintenance tasks. The presence and frequent usage of pointers in languages such as C cause complex data dependences. To function effectively on such programs, slicing techniques must account for pointer-induced data dependences. Existing slicing techniques do not distinguish data dependences based on their types. This paper presents a new slicing technique, in which slices are computed based on types of data dependences. This new slicing technique offers several benefits and can be exploited in different ways, such as identifying subtle data dependences for debugging, computing reduced-size slices quickly for complex programs, and performing incremental slicing. This paper describes an algorithm for incremental slicing that increases the scope of a slice in steps, by incorporating different types of data dependences at each step. The paper also presents empirical results to illustrate the performance of the technique in practice. The results illustrate that incremental slices can be significantly smaller than complete slices. Finally, the paper presents a case study that explores the usefulness of incremental slicing for debugging.
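A hypothetical C++ sketch (not from the paper, and not its algorithm) of how a slice can grow in steps as pointer-induced dependences are incorporated:

```cpp
#include <iostream>

// Hypothetical example: slicing on the printed value of 'sum' in two steps.
// Step 1 follows only direct (non-pointer) data dependences and picks up
// 'sum = sum + 1' and the initialization of 'sum'; step 2 adds the dependences
// induced by the dereference '*p', pulling in the remaining statements.
int main() {
    int sum = 0;                // in the slice from step 1
    int bonus = 5;              // enters the slice only in step 2
    int* p = &sum;              // enters the slice only in step 2
    *p = *p + bonus;            // pointer-induced definition of sum: step 2
    sum = sum + 1;              // direct definition of sum: step 1
    std::cout << sum << '\n';   // slicing criterion: prints 6
    return 0;
}
```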
Effects of Pointers on Data Dependences
A Orso, S Sinha, M J Harrold
Proceedings of the 9th International Workshop on Program Comprehension (IWPC), pp. 39--49, 2001
Abstract
This paper presents a technique for computing and classifying data dependences that takes into account the complexities introduced by specific language constructs, such as pointers, arrays, and structures. The classification is finer-grained than previously proposed classifications. Moreover, unlike previous work, the paper presents empirical results that illustrate the distribution of data dependences for a set of C subjects. The paper also presents a potential application for the proposed classification, program slicing, and a technique that computes slices based on data-dependence types. This technique facilitates the use of slicing for program understanding because a user can either augment a slice incrementally, by incorporating data dependences based on their relevance, or focus on specific kinds of dependences. Finally, the paper presents a case study that shows how the incremental addition of data dependences allows for growing the size of the slices in steps.
2000
Analysis and Testing of Programs with Exception-Handling Constructs
S Sinha, M J Harrold
IEEE Transactions on Software Engineering 26(9), 849--871, 2000
Abstract
Analysis techniques, such as control flow, data flow, and control dependence, are used for a variety of software engineering tasks, including structural and regression testing, dynamic execution profiling, static and dynamic slicing, and program understanding. To be applicable to programs in languages such as Java and C++, these analysis techniques must account for the effects of exception occurrences and exception-handling constructs; failure to do so can cause the analysis techniques to compute incorrect results and, thus, limit the usefulness of the applications that use them. This paper discusses the effects of exception-handling constructs on several analysis techniques. The paper presents techniques to construct representations for programs with explicit exception occurrences (exceptions that are raised explicitly through throw statements) and exception-handling constructs. The paper presents algorithms that use these representations to perform the desired analyses. The paper also discusses several software engineering applications that use these analyses. Finally, the paper describes empirical results pertaining to the occurrence of exception-handling constructs in Java programs and their effect on some analysis tasks.
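A hypothetical C++ sketch (not from the paper) of the kind of exceptional control-flow path such representations must capture:

```cpp
#include <iostream>
#include <stdexcept>

// Hypothetical example: the definition 'status = -1' reaches the use of
// 'status' in the handler only along the exceptional path created by the
// throw statement; an analysis that ignores throw edges misses this
// definition-use pair.
int main() {
    int raw = -3;
    int status = 0;
    try {
        if (raw < 0) {
            status = -1;                            // definition reaching the handler
            throw std::runtime_error("bad value");  // explicit exception occurrence
        }
        status = 1;                                 // not executed on the exceptional path
    } catch (const std::runtime_error&) {
        std::cout << "status on error: " << status << '\n';  // use reached via the throw edge
    }
    return 0;
}
```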
1999
Criteria for Testing Exception-Handling Constructs in Java Programs
S Sinha, M J Harrold
Proceedings of the International Conference on Software Maintenance (ICSM), pp. 265--276, 1999
Abstract
Exception-handling constructs provide a mechanism for raising exceptions and a facility for designating protected code by attaching exception handlers to blocks of code. Despite the frequency of their occurrence, the behavior of exception-handling constructs is often the least understood and most poorly tested part of a program. The presence of such constructs introduces new structural elements, such as control-flow paths, in a program. To adequately test such programs, these new structural elements must be considered for coverage during structural testing. In this paper, we describe a class of adequacy criteria that can be used to test the behavior of exception-handling constructs. We present a subsumption hierarchy of the criteria, and illustrate the relationship of the criteria to those found in traditional subsumption hierarchies. We describe techniques for generating the testing requirements for the criteria using our control-flow representations. We also describe a methodology for applying the criteria to unit and integration testing of programs that contain exception-handling constructs.
System-Dependence-Graph-Based Slicing of Programs with Arbitrary Interprocedural Control Flow
S Sinha, M J Harrold, G Rothermel
Proceedings of the 21st International Conference on Software Engineering (ICSE), pp. 432--441, 1999
Abstract
Many algorithms for automating software engineering tasks require program slices. To be applicable to large software systems, these slices must be computed interprocedurally. Slicing techniques based on the system dependence graph (SDG) provide one approach for computing interprocedural slices, but these techniques are defined only for programs in which called procedures necessarily return to call sites. When applied to programs that contain arbitrary interprocedural control flow, existing SDG-based slicing techniques can compute incorrect slices; this limits their applicability. This paper presents an approach to constructing SDGs, and computing slices on SDGs, that accommodates programs with arbitrary interprocedural control flow. The main benefit of our approach is that it allows the use of the SDG-based slicing technique on a wide class of practical programs to which it did not previously apply.
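A hypothetical C++ sketch (not from the paper) of a call site to which control does not necessarily return, the situation that invalidates slicing techniques which assume every called procedure returns to its call site:

```cpp
#include <csetjmp>
#include <cstdio>

// Hypothetical example: the second call to 'parse' does not return to its call
// site; control transfers back to the setjmp point in main via longjmp. SDG
// construction that assumes every call returns would treat the statement after
// that call as always reachable, yielding incorrect slices.
static std::jmp_buf recover;

void parse(int token) {
    if (token < 0) {
        std::longjmp(recover, 1);   // bypasses the normal return to the call site
    }
    std::printf("parsed %d\n", token);
}

int main() {
    if (setjmp(recover) != 0) {
        std::printf("recovered from parse error\n");
        return 1;
    }
    parse(7);       // returns normally
    parse(-1);      // never returns; control resumes at setjmp in main
    std::printf("not reached when a parse fails\n");
    return 0;
}
```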
An Approach to Analyzing and Testing Component-Based Systems
M J Harrold, D Liang, S Sinha
International ICSE Workshop Testing Distributed Component-Based Systems, 1999
Abstract
Software testing and maintenance account for as much as two-thirds of the cost of software production. Program analysis techniques offer the potential to automate testing and maintenance tasks, and thereby reduce the cost of these tasks. An emerging paradigm of software development that promises to enhance software productivity and quality composes software from independently developed components. The nature of component-based software, however, introduces new problems for applying program analysis techniques to testing and maintenance of such software: the providers of software components develop and test the components independently of the applications that use the components; the users of software components analyze and test their applications without access to source code of the components used in those applications. This paper describes issues and challenges in applying analysis and testing techniques to component-based software and presents an approach to analyzing and testing component-based systems.
A Case Study: Productivity and Quality Gains using an Object-Oriented Framework
S A Mamrak, S Sinha
Software: Practice and Experience 29(6), 501--518, 1999
Abstract
The Neuro-Oncology Information System (NOIS) supports researchers and other personnel throughout the United States engaged in brain tumor research. Graphical user interfaces that allow data input into the NOIS have been evolving over several years. This paper describes the design and implementation of the NOIS Input Forms as they migrated from a procedural approach to a static object-oriented approach, and finally to a framework approach in which not only static components were reused, but also the patterns of interaction among the components. The paper documents a significant gain in productivity and quality that was realized when using the framework design paradigm.
1998
Analysis of Programs with Exception-Handling Constructs
S Sinha, M J Harrold
Proceedings of the International Conference on Software Maintenance (ICSM), pp. 348--357, 1998
Abstract
Analysis techniques, such as control flow, data flow, and control dependence, are used for a variety of maintenance tasks, including regression testing, dynamic execution profiling, and static and dynamic slicing. To be applicable to programs in languages such as Java and C++, however, these analysis techniques should, to the extent possible, account for the effects of exception occurrences and exception-handling constructs. The paper presents techniques to construct intraprocedural and interprocedural representations on which existing analysis techniques can be applied and demonstrates their applicability to several maintenance tasks.
Computation of Interprocedural Control Dependence
M J Harrold, G Rothermel, S Sinha
Proceedings of the International Symposium on Software Testing and Analysis (ISSTA), pp. 11--20, 1998
Abstract
Program dependence information is useful for a variety of software testing and maintenance tasks. Properly defined, control and data dependencies can be used to identify semantic dependencies. To function effectively on whole programs, tools that utilize dependence information require information about interprocedural dependencies: dependencies that exist because of interactions among procedures. Many techniques for computing data and control dependencies exist; however, in our search of the literature we found only one attempt to define and compute interprocedural control dependencies. Unfortunately, that approach can omit important control dependencies and can incorrectly identify control dependencies for a large class of programs. This paper presents a definition of interprocedural control dependence that supports the relationship of control and data dependence to semantic dependence, an efficient algorithm for calculating interprocedural control dependencies, and empirical results obtained by our implementation of the algorithm.