Co-located with the IEEE International Conference on Software Testing, Verification and Validation

Please look at http://guitar.sourceforge.net for more work on automated GUI testing.

April 6, 2010

Second International Workshop on

TESTing Techniques & Experimentation Benchmarks

for Event-Driven Software (TESTBEDS 2010)

Theme for 2010: GUI-Based Applications and Rich Internet Applications

CONTACT: atif@cs.umd.edu

Workshop Program

08:00 - 09:00   Registration

09:00 - 09:15   Opening Remarks

09:15 - 10:30   Keynote Address: Perspectives in GUI Testing

Dr. Peter Santhanam is currently the Senior Manager of the Software Engineering department at the IBM T. J. Watson Research Center in Hawthorne, New York. Dr. Santhanam holds a B.Sc. from the University of Madras, India, an M.Sc. from the Indian Institute of Technology, Madras, India, an M.A. from Hunter College, The City University of New York, and a Ph.D. from Yale University. He joined IBM Research in 1985 and has been working in software engineering since 1993. His current portfolio covers tools and methodology for end-to-end software life-cycle activities. His personal interests include collaborative software development, holistic requirements capture, automated test generation, software metrics, and process improvement. He has published over fifty technical papers in journals and conferences. Dr. Santhanam is a member of the ACM and a senior member of the IEEE.

Abstract: As a leading provider of business software and supporting software engineering tools, with an active program in software engineering research, IBM has a unique understanding of the breadth and depth of the practical, technical, and business issues in this important area. This talk presents four different stakeholder perspectives on GUI testing:

·         The end user of the software application

·         The tester of the software application

·         The commercial tool vendor that provides a GUI testing framework

·         The software engineering researcher

While GUI testing has advanced significantly in the last decade, major challenges remain as we look to the future.

10:30 - 11:00   Coffee Break

11:00 - 12:30   Session 1

·         Using Methods & Measures from Network Analysis for GUI Testing (download pdf)

o   Ethar Elsaka, Walaa Eldin Moustafa, Bao Nguyen and Atif Memon

·         Performance Testing of GUI Applications (download pdf)

o   Milan Jovic and Matthias Hauswirth

·         A Framework for GUI Test Case Generation and Evaluation Based on Use Case Design (download pdf)

o   Cristiano Bertolini and Alexandre Mota

12:30 - 14:30   Lunch

14:30 - 16:00   Session 2

·         (Invited Talk) COMET: Community Event-based Testing (The COMET site went live on April 1, 2010. http://comet.unl.edu/)

o   Myra Cohen

·         On Modeling of GUI Test Profile (download pdf)

o   Lei Zhao and Kai-Yuan Cai

·         Message Broker using Asynchronous Method Invocation in Web Service and its Evaluation (download pdf)

o   Fan Bai and Tao Wang

16:00 - 16:30   Coffee Break

16:30 - 17:30   Session 3

·         Rich Internet Application Testing Using Execution Trace Data (download pdf)

o   Domenico Amalfitano, Anna Rita Fasolino and Porfirio Tramontana

·         Test Coverage Analysis of UML State Machines (download pdf)

o   Ricardo Ferreira, João P. Faria and Ana C. R. Paiva

17:30 - 18:00   Closing Remarks

Workshop Overview & Goals

With the tremendous success of TESTBEDS 2009, we are happy to announce this second workshop. As the participants of TESTBEDS 2009 noted in many interesting talks and discussions, testing of several classes of event-driven software (EDS) applications is becoming very important. Common examples of EDS include graphical user interfaces (GUIs), web applications, network protocols, embedded software, software components, and device drivers. An EDS takes internal/external events (e.g., commands, messages) as input (e.g., from users, other applications), changes its state, and sometimes outputs an event sequence. An EDS is typically implemented as a collection of event handlers designed to respond to individual events. Nowadays, EDS is gaining popularity because of the advantages this "event-handler architecture" offers to both developers and users. From the developer's point of view, the event handlers may be created and maintained fairly independently; hence, complex systems may be built from these loosely coupled pieces of code. In interconnected/distributed systems, event handlers may also be distributed, migrated, and updated independently. From the user's point of view, EDS offers many degrees of usage freedom. For example, in GUIs, users may choose to perform a given task by inputting GUI events (mouse clicks, selections, typing in text fields) in many different ways in terms of their type, number, and execution order.
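
To make the event-handler architecture described above concrete, the following minimal Java sketch shows loosely coupled handlers registered with a dispatcher. All class, method, and event names here (EventHandler, EventDispatcher, "Click", "Type") are hypothetical illustrations and are not taken from any particular GUI toolkit or testing framework; each handler responds to a single event type and may update shared state, and the same handlers can be triggered in any order.

import java.util.*;

// A minimal sketch of an event-handler architecture (all names are illustrative).
interface EventHandler {
    void handle(String eventType, Map<String, Object> state);
}

class EventDispatcher {
    private final Map<String, EventHandler> handlers = new HashMap<>();
    private final Map<String, Object> state = new HashMap<>();

    // Handlers are registered independently and may be added or replaced at any time.
    void register(String eventType, EventHandler handler) {
        handlers.put(eventType, handler);
    }

    // Each incoming event is routed to its handler, which may change the shared state.
    void dispatch(String eventType) {
        EventHandler handler = handlers.get(eventType);
        if (handler != null) {
            handler.handle(eventType, state);
        }
    }

    Map<String, Object> getState() {
        return state;
    }
}

public class EdsExample {
    public static void main(String[] args) {
        EventDispatcher gui = new EventDispatcher();

        // "Click" and "Type" stand in for GUI events such as mouse clicks and text entry.
        gui.register("Click", (event, state) -> state.put("buttonPressed", Boolean.TRUE));
        gui.register("Type", (event, state) -> state.put("text", "hello"));

        // Users may trigger the same handlers in many different orders.
        gui.dispatch("Type");
        gui.dispatch("Click");

        System.out.println(gui.getState());  // accumulated state, e.g. {buttonPressed=true, text=hello}
    }
}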

Software testing is a popular quality-assurance technique employed during software development and deployment to help improve software quality. During software testing, test cases are created and executed on the software. One way to test an EDS is to execute each event individually and observe its outcome, thereby testing each event handler in isolation. However, the execution outcome of an event handler may depend on its internal state, the state of other entities (objects, event handlers), and/or the external environment. Its execution may lead to a change in its own state or that of other entities. Moreover, the outcome of an event's execution may vary based on the sequence of preceding events seen thus far. Consequently, in EDS testing, each event needs to be tested in different states. EDS testing therefore may involve generating and executing sequences of events and checking the correctness of the EDS after each event. Test coverage may be evaluated not only in terms of code, but also in terms of the event space of the EDS. Regression testing requires not only test selection, but also the repair of obsolete test cases. The first major goal of this workshop is to bring together researchers and practitioners to discuss some of these topics.
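
To illustrate the event-sequence testing idea just described, the sketch below reuses the hypothetical EventDispatcher from the previous example; it enumerates all event sequences of length two over a small event set, executes each sequence on a fresh instance of the system under test, and checks a simple made-up oracle after every event. The seeded "fault" in the Close handler is purely illustrative.

import java.util.*;

// A minimal sketch of event-sequence testing for an EDS
// (reuses the hypothetical EventDispatcher defined above).
public class EventSequenceTest {

    // Hypothetical oracle: after any event, the state must not contain a null value.
    static boolean oracle(Map<String, Object> state) {
        return !state.containsValue(null);
    }

    // Builds a fresh instance of the (hypothetical) system under test for each test case.
    static EventDispatcher freshSystemUnderTest() {
        EventDispatcher gui = new EventDispatcher();
        gui.register("Click", (event, state) -> state.put("buttonPressed", Boolean.TRUE));
        gui.register("Type", (event, state) -> state.put("text", "hello"));
        gui.register("Close", (event, state) -> state.put("text", null));  // seeded "fault" for illustration
        return gui;
    }

    public static void main(String[] args) {
        List<String> events = Arrays.asList("Click", "Type", "Close");

        // Generate and execute all event sequences of length two; the same event
        // may behave differently depending on the events that preceded it.
        for (String first : events) {
            for (String second : events) {
                EventDispatcher gui = freshSystemUnderTest();
                List<String> sequence = Arrays.asList(first, second);
                boolean passed = true;
                for (String event : sequence) {
                    gui.dispatch(event);
                    // Check correctness after *each* event, not only at the end of the sequence.
                    if (!oracle(gui.getState())) {
                        passed = false;
                        break;
                    }
                }
                System.out.println(sequence + " -> " + (passed ? "pass" : "fail"));
            }
        }
    }
}

In practice, the event sequences would typically be derived from a model of the EDS rather than enumerated exhaustively, and the oracle would be far richer; the sketch only shows why each event needs to be exercised in multiple states.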

One of the biggest obstacles to conducting research in the field of EDS testing is the lack of freely available, standardized benchmarks for experimentation, containing artifacts (software subjects and their versions, test cases, coverage-adequate test suites, fault matrices, coverage matrices, bug reports, change requests), tools (test-case generators, test-case replayers, fault seeders, regression testers), and processes (how an experimenter may use the tools and artifacts together) [see http://www.cs.umd.edu/~atif/newsite/benchmarks.htm for examples]. The second major goal of this workshop is to promote the development of concrete benchmarks for EDS.

To provide focus, this event will only examine GUI-based applications and Rich Internet Applications, which share many testing challenges. As this workshop matures, we hope to expand to other types of EDS (e.g., general web applications).

Important Dates

·         Submission of Full Papers: Friday, 8 January, 2010; extended to Friday, 15 January, 2010

·         Notification: Friday, 26 February, 2010

·         Camera-Ready: Thursday, 18 March, 2010

·         Workshop: April 6, 2010

Submission

The workshop solicits submission of:

·         Full Papers (max 10 pages)

·         Position Papers (max 6 pages) [what is a position paper?]

·         Demo Papers (max 6 pages) [usually papers describing implementation-level details (e.g., tool, file format, structure) that are of interest to the community]

·         Industrial Presentations (slides)

All submissions will be handled through http://www.easychair.org/conferences/?conf=testbeds2010.

Industrial presentations are submitted in the form of presentation slides and will be evaluated by at least two members of the Program Committee for relevance and soundness.

Each paper will be reviewed by at least three referees. Papers should be submitted as PDF files in the standard IEEE two-column conference format (LaTeX, Word). The workshop proceedings will be published on this workshop web page. Papers accepted for the workshop will appear in the IEEE digital library, providing a lasting archived record of the workshop proceedings.

Organization

General Chair

·         Atif M Memon, University of Maryland, USA.

Program Committee

·         Fevzi Belli, University of Paderborn, Germany.

·         Renee Bryce, Utah State University, USA.

·         Kai-Yuan Cai, Beijing University of Aeronautics and Astronautics, China.

·         S.C. Cheung, Hong Kong University of Science and Technology, Hong Kong.

·         Myra Cohen, University of Nebraska – Lincoln, USA.

·         Anna Rita Fasolino, University of Naples Federico II, Italy.

·         Chin-Yu Huang, National Tsing Hua University, Taiwan.

·         Alessandro Marchetto, Fondazione Bruno Kessler–IRST, Trento, Italy.

·         Ana Paiva, University of Porto, Portugal.

·         Brian P Robinson, ABB Corporate Research, USA.

·         Qing Xie, Accenture Technology Labs, Chicago, USA.