Co-located with The Sixth IEEE International Conference on Software Testing, Verification and Validation

March 18, 2013

Fourth International Workshop on

TESTing Techniques & Experimentation Benchmarks

for Event-Driven Software (TESTBEDS 2013)

Theme for 2013: Experimentation and Benchmarking

CONTACT: testbeds2013@cs.umd.edu

Workshop Program

08:00 - 09:00

Registration

09:00 - 09:15

Welcome and Opening Remarks

09:15 - 10:30

Keynote Address: A Journey of Test Scripts: From Manual to Adaptive and Beyond

Dr. Mark Grechanik, Assistant Professor, Department of Computer Science, University of Illinois at Chicago, USA.

Abstract: Test scripts are recorded sequences of actions to be performed against software applications to verify whether these applications behave as desired. Test scripts occupy a wide spectrum of possible implementations. At one extreme, test scripts are unstructured descriptions of actions in plain English that instruct the reader (i.e., a tester) how to interact manually with the Graphical User Interfaces (GUIs) of applications. Since manual black-box testing is tedious and laborious, test engineers create test scripts as programs to automate the testing process. These test scripts are programs that interact with applications by invoking methods of their interfaces and performing actions on their GUI objects. The extra effort that test engineers put into writing test scripts pays off when these scripts are run repeatedly. Unfortunately, releasing new versions of applications with modified interfaces often breaks the corresponding test scripts, thereby obliterating the benefits of test automation. At the other extreme, test scripts are intelligent programs that adapt to the applications they are meant to test. These adaptive test scripts can be repaired automatically to adjust to new interfaces of the application whose previous versions they were designed for, they can be reused on other similar applications, and they can be dynamically reconfigured if the environment in which they run changes. In this talk we argue that such adaptive scripts are the future of automated software testing.
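To make the contrast concrete, here is a minimal, purely illustrative Python sketch (the widget names and lookup strategy are our own toy example, not an excerpt from the keynote): a recorded script that pins a widget to a single identifier breaks when the interface changes, whereas a script with an adaptive lookup can fall back on other cues.

    def find_widget(gui, widget_id=None, label=None):
        # Try the recorded identifier first; if a new release renamed it,
        # fall back to the visible label instead of failing outright.
        for w in gui:
            if widget_id and w["id"] == widget_id:
                return w
        for w in gui:
            if label and w["label"] == label:
                return w
        raise LookupError("widget not found; the script needs repair")

    # Version 2 of a hypothetical application renamed "btn_ok" to
    # "btn_confirm" but kept the label "OK", so the lookup still succeeds.
    gui_v2 = [{"id": "btn_confirm", "label": "OK"}]
    print(find_widget(gui_v2, widget_id="btn_ok", label="OK"))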

Dr. Grechanik's research area is software engineering in general, with particular interests in software testing, evolution, and reuse. Dr. Grechanik has a unique blend of a strong academic background and long-term industry experience. He earned his Ph.D. in Computer Science from the Department of Computer Sciences of the University of Texas at Austin. In parallel with his academic activities, Dr. Grechanik has worked for over 20 years as a software consultant for dozens of startups and Fortune 500 companies. Dr. Grechanik is a recipient of best paper awards from competitive conferences, his research is funded by the NSF, and he holds many patents. His ideas have been implemented and used by different companies and organizations. Dr. Grechanik is a Senior Member of the ACM, a member of the IEEE, and he serves on the ACM SIGSOFT executive committee as the industry liaison.

10:30 - 11:00

Coffee Break

11:00 - 11:30

Full paper presentation: “Considering Context Events in Event-Based Testing of Mobile Applications” by Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana and Nicola Amatucci

11:30 - 12:00

Full paper presentation: “AutoQUEST - Automated Quality Engineering of Event-driven Software” by Steffen Herbold and Patrick Harms

12:00 - 12:20

Demo paper presentation: “Pattern Based GUI Testing Modeling Environment” by Tiago Monteiro and Ana Paiva

12:20 - 12:30

Closing Remarks

Workshop Overview & Goals

We’re doing this for the fourth time! TESTBEDS 2009, 2010, and 2011 were extremely successful, with several interesting talks and discussions. We’re doing this because testing of several classes of event-driven software (EDS) applications is becoming very important. Common examples of EDS include graphical user interfaces (GUIs), web applications, network protocols, embedded software, software components, and device drivers. An EDS takes internal/external events (e.g., commands, messages) as input (e.g., from users, other applications), changes its state, and sometimes outputs an event sequence. An EDS is typically implemented as a collection of event handlers designed to respond to individual events. Nowadays, EDS is gaining popularity because of the advantages this "event-handler architecture" offers to both developers and users. From the developer's point of view, the event handlers may be created and maintained fairly independently; hence, complex systems may be built from these loosely coupled pieces of code. In interconnected/distributed systems, event handlers may also be distributed, migrated, and updated independently. From the user's point of view, EDS offers many degrees of usage freedom. For example, in GUIs, users may choose to perform a given task by inputting GUI events (mouse clicks, selections, typing in text fields) in many different ways in terms of their type, number, and execution order, as the sketch below illustrates.
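As a rough illustration of this event-handler architecture, the following Python sketch (the class and event names are our own, not from any particular system) registers two loosely coupled handlers and dispatches user-chosen events to them:

    class EventDrivenApp:
        def __init__(self):
            self.handlers = {}  # event type -> handler function
            self.state = {}     # shared application state

        def register(self, event_type, handler):
            # Handlers are registered independently of one another.
            self.handlers[event_type] = handler

        def dispatch(self, event, payload=None):
            # Each incoming event is routed to its handler, which may
            # read and update the shared state.
            self.handlers[event](self.state, payload)

    app = EventDrivenApp()
    app.register("click", lambda s, p: s.update(clicks=s.get("clicks", 0) + 1))
    app.register("type", lambda s, p: s.update(text=p))

    # Users are free to order events as they like:
    app.dispatch("type", "hello")
    app.dispatch("click")
    print(app.state)  # {'text': 'hello', 'clicks': 1}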

Software testing is a popular QA technique employed during software development and deployment to help improve software quality. During software testing, test cases are created and executed on the software. One way to test an EDS is to execute each event individually and observe its outcome, thereby testing each event handler in isolation. However, the execution outcome of an event handler may depend on its internal state, the state of other entities (objects, event handlers), and/or the external environment. Its execution may lead to a change in its own state or that of other entities. Moreover, the outcome of an event's execution may vary based on the sequence of preceding events seen thus far. Consequently, in EDS testing, each event needs to be tested in different states. EDS testing therefore may involve generating and executing sequences of events and checking the correctness of the EDS after each event. Test coverage may be evaluated not only in terms of code, but also in terms of the event space of the EDS. Regression testing requires not only test selection, but also the repair of obsolete test cases. One goal of this workshop is to bring together researchers and practitioners to discuss some of these topics.
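A hypothetical Python sketch of this style of testing (the toy EDS, its event names, and the invariant are invented for illustration) might enumerate short event sequences and check an oracle after every step:

    from itertools import permutations

    class Counter:
        # A toy EDS with two event handlers over one piece of state.
        def __init__(self):
            self.value = 0

        def dispatch(self, event):
            if event == "inc":
                self.value += 1
            elif event == "reset":
                self.value = 0

    def run_sequence(events):
        app = Counter()
        for step, event in enumerate(events):
            app.dispatch(event)
            # The oracle is checked after every event, not only at the end.
            assert app.value >= 0, f"invariant broken at step {step} ({event})"
        return app.value

    # The same events in a different order may leave the EDS in a
    # different state, so the test enumerates orderings of a short
    # event sequence drawn from the event space.
    for seq in sorted(set(permutations(["inc", "inc", "reset"]))):
        print(seq, "->", run_sequence(seq))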

One of the biggest obstacles to conducting research in the field of EDS testing is the lack of freely available standardized benchmarks containing artifacts (software subjects and their versions, test cases, coverage-adequate test suites, fault matrices, coverage matrices, bug reports, change requests), tools (test-case generators, test-case replayers, fault seeders, regression testers), and processes (how an experimenter may use the tools and artifacts together) for experimentation [see http://comet.unl.edu for examples]. Another goal of this workshop is to promote the development of concrete benchmarks for EDS.

Important Dates

·       Submission of All Papers/Presentations: 21 January 2013 (extended from 14 January 2013)

·       Notification: 11 February 2013

·       Camera-Ready: 3 March 2013

·       Workshop: 18 March 2013

Submission

The workshop solicits submission of:

·       Full Papers (max 8 pages)

·       Position Papers (max 4 pages) [short papers that state a position or early idea and are meant to stimulate discussion]

·       Demo Papers (max 4 pages) [usually papers describing implementation-level details (e.g., tool, file format, structure) that are of interest to the community]

·       Industrial Presentations (2-page overview and 2 sample slides)

All submissions will be handled through http://www.easychair.org/conferences/?conf=testbeds2013.

Industrial presentations will be evaluated by at least two members of the Program Committee for relevance and soundness.

Each paper will be reviewed by at least three referees. Papers should be submitted as PDF files in standard IEEE two-column conference format (LaTeX, Word). The workshop proceedings will be published on this workshop web page. Papers accepted for the workshop will appear in the IEEE digital library, providing a lasting archived record of the workshop proceedings.

Organization

Organizers

·       Myra Cohen, University of Nebraska-Lincoln, USA

·       Atif M. Memon, University of Maryland, USA

Program Committee

·       Cristiano Bertolini, United Nations University International Institute for Software Technology, China

·       Zhenyu Chen, Nanjing University, China

·       Anna Rita Fasolino, Department of Computer Science and Automation, University of Naples Federico II, Italy

·       Mark Grechanik, University of Illinois at Chicago, USA

·       Mika Katara, Intel, Finland

·       Alessandro Marchetto, Fondazione Bruno Kessler, Italy

·       Leonardo Mariani, University of Milano Bicocca, Italy

·       Cu Nguyen, Fondazione Bruno Kessler, Italy

·       Ana Paiva, University of Porto, Portugal

·       Mauro Pezzè, University of Lugano, Switzerland

·       Tanja Vos, Universidad Politécnica de Valencia, Spain