Third International Workshop on TESTing Techniques & Experimentation Benchmarks for Event-Driven Software (TESTBEDS 2011)
Co-located with the IEEE International Conference on Software Testing, Verification and Validation
March 21, 2011 (Monday)
Theme for 2011: GUI-Based and Web Applications
08:00 - 09:00 | Registration
09:00 - 09:05 | Opening Remarks
09:05 - 10:00 | Keynote Address: Event-Based GUI Testing and Reliability Assessment -- A Critical Review

Fevzi Belli, Department of Electrical Engineering and Information Technology, University of Paderborn, Paderborn, Germany; belli@adt.upb.de

Abstract: It is widely accepted that graphical user interfaces (GUIs) strongly affect (positively or negatively) the quality and reliability of computer-based human-machine systems. Nevertheless, quantitative assessment of the reliability of GUIs is a relatively young research field. Existing software reliability assessment techniques attempt to statistically describe the software testing process and thereby determine, and thus predict, the reliability of the system under consideration (SUC). These techniques model the reliability of the SUC based on particular assumptions and preconditions about the probability distribution of the cumulative number of failures, the failure data observed, the form of the failure intensity function, etc. This keynote addresses notions of modeling and of positive and negative testing, which are important in testing event-based GUI applications. To this end, two event-based GUI testing methods, using event flow graphs and event sequence graphs, are reviewed in a comparative manner, outlining their primary aspects, related notions, and limitations. Based on these methods, concepts of software reliability are discussed, mainly considering their use in GUI testing.

Keywords: GUI modeling and testing; reliability modeling/assessment/prediction; event sequence graphs; event flow graphs.

Dr. Fevzi Belli is a professor of Software Engineering at the University of Paderborn, Germany. In 1978 he completed his PhD at Berlin Technical University on formal methods for verifying software systems and self-correction features in formal languages. He spent several years as a software engineer in Munich, writing programs to test other programs, before moving to the University of Paderborn in 1989. His interests and experience include software reliability/fault tolerance, model-based testing, and test automation.
10:00 - 10:30 | Coffee Break
10:30 - 12:00 | Session 1
·
Identifying Infeasible GUI Test Cases Using Support Vector Machines and Induced Grammars
Robert Gove, United States, University of Maryland
Jorge Faytong, United States, University of Maryland

Abstract: Model-based GUI software testing is an emerging paradigm for automatically generating test suites. In the context of GUIs, a test case is a sequence of events to be executed which may detect faults in the application. However, a test case may be infeasible if one or more of the events in the event sequence are disabled or made inaccessible by a previously executed event (e.g., a button may be disabled until another GUI widget enables it). These infeasible test cases terminate prematurely and waste resources, so software testers would like to modify the test suite execution to run only feasible test cases. Current techniques focus on repairing the test cases to make them feasible, but this relies on executing all test cases, attempting to repair them, and then repeating this process until a stopping condition has been met. We propose avoiding infeasible test cases altogether by predicting which test cases are infeasible using two supervised machine learning methods: support vector machines (SVMs) and grammar induction. We experiment with three feature extraction techniques and demonstrate the success of the machine learning algorithms for classifying infeasible GUI test cases in several subject applications. We further demonstrate a level of robustness in the algorithms when training and classifying test cases of different lengths.

·
Fevzi Belli, Germany, University of Paderborn
Mutlu Beyazıt, Germany, University of Paderborn
Nevin Güler, Turkey, University of Muğla

Abstract: Based on the keynote of TESTBEDS 2011, this presentation outlines the preliminary results of our work on methods for modeling graphical user interfaces (GUIs) and related frameworks for testing. The objective is to analyze how these models and techniques affect the failure data to be observed, the prerequisites to be met, and the software reliability assessment techniques to be selected. We expect that the quality of the reliability assessment process, and ultimately also the reliability of the GUI, depends on the methods used for modeling and testing the SUC. In order to gain some experimental insight into this problem, GUI testing frameworks based on event sequence graphs and event flow graphs were chosen as examples. A case study drawn from a large commercial web-based system is used to carry out the experiments and discuss the results.

·
Behind the Scenes: An Approach to Incorporate Context in GUI Test Case Generation
Stephan Arlt, Germany, University of Freiburg
Cristiano Bertolini, Macao, United Nations University
Martin Schäf, Macao, United Nations University

Abstract: Graphical user interfaces are a common way to interact with software. To ensure the quality of such software, it is important to test the possible interactions with its user interface. Testing user interfaces is a challenging task, as they can allow, in general, infinitely many different sequences of interactions with the software. As it is only possible to test a limited number of possible user interactions, it is crucial for the quality of user interface testing to identify relevant sequences and avoid improper ones. In this paper we propose a model that can be used for GUI testing. Our model is based on two observations. First, different user interactions commonly result in the execution of the same code fragments; that is, it is sufficient to test only interactions that execute different code fragments. Second, user interactions are context sensitive; that is, the control flow taken in a program fragment handling a user interaction depends on the order of some preceding user interactions. We show that these observations are relevant in practice and present a preliminary implementation that utilizes them for test case generation.
12:00 - 14:00 | Lunch
14:00 - 15:30 | Session 2
·
Steffen Herbold, Germany, Universität Göttingen
Uwe Bünting, Germany, Mahr GmbH Göttingen
Jens Grabowski, Germany, Universität Göttingen
Stephan Waack, Germany, Universität Göttingen

Abstract: Most software systems are operated using a Graphical User Interface (GUI). Therefore, bugs are often triggered by user interaction with the software's GUI. Hence, accurate and reliable GUI usage information is an important tool for bug fixing, as the reproduction of a bug is the first important step towards fixing it. To support bug reproduction, a generic, easy-to-integrate, non-intrusive GUI usage monitoring mechanism is introduced in this paper. As a supplement to the monitoring, a method for automatically replaying the monitored usage logs is provided. The feasibility of both is demonstrated through proof-of-concept implementations. A case study shows that the monitoring mechanism can be integrated into large-scale software products without significant effort and that the logs are replayable. Additionally, a usage-based end-to-end GUI testing approach is outlined, in which the monitoring and replaying play major roles.

·
Model-Based Testing with a General Purpose Keyword-Driven Test Automation Framework
Tuomas Pajunen, Finland, Tampere University of Technology
Tommi Takala, Finland, Tampere University of Technology
Mika Katara, Finland, Tampere University of Technology

Abstract: Model-based testing is a relatively new approach to software testing that extends test automation from test execution to test design, using automatic test generation from models. Effective use of the new approach requires new skills and knowledge, such as test modeling skills, but also good tool support. This paper focuses on the integration of the TEMA model-based graphical user interface test generator with a keyword-driven test automation framework, Robot Framework. Both tools are available as open source. The purpose of the integration was to enable the wide testing library support of Robot Framework to be used in online model-based testing. The main contribution of this paper is to present the integration, providing a base for future MBT utilization, but we also describe a short case study in which we experimented with the integration by testing a Java Swing GUI application, along with some experiences in using the framework for testing Web GUIs.

·
A GUI Crawling-Based Technique for Android Mobile Application Testing
Domenico Amalfitano, Italy, University of Naples Federico II
Anna Rita Fasolino, Italy, University of Naples Federico II
Porfirio Tramontana, Italy, Università degli Studi di Napoli Federico II

Abstract: As mobile applications become more complex, specific development tools and frameworks, as well as cost-effective testing techniques and tools, will be essential to assure the development of secure, high-quality mobile applications. This paper addresses the problem of automatic testing of mobile applications developed for the Google Android platform, and presents a technique for rapid crash testing and regression testing of Android applications. The technique is based on a crawler that automatically builds a model of the application GUI and obtains test cases that can be automatically executed. The technique is supported by a tool for both crawling the application and generating the test cases. In the paper we present an example of using the technique and the tool to test a real, small-size Android application, which preliminarily shows the effectiveness and usability of the proposed testing approach.
15:30 - 16:00 | Coffee Break
16:00 - 17:15 | Session 3
·
16:00-16:15 + 5 mins for discussion/questions: An Update on COMET (Community Event-based Testing)
Amanda Swearngin, United States, University of Nebraska - Lincoln
Myra Cohen, United States, University of Nebraska - Lincoln
Atif Memon, United States, University of Maryland

Abstract: TBD.

·
16:20-16:30 + 5 mins for discussion/questions: Model-Based Testcase Generation for Web Applications from a Textual Model
Arne-Michael Toersel, Germany, University of Appl. Sciences, Stralsund

Abstract: Model-based testing is a promising technique for test case design and is used in an increasing number of application domains. However, to gain efficiency advantages, intuitive domain-specific notations with comfortable tool support, as well as a high degree of automation in the whole testing process, are required. In this paper a model-based approach to black-box testing of web applications is presented. A control flow model of the application, augmented with data flow information, is used; the primary modeling notation is textual. The research prototype demonstrates the fully automated generation of ready-to-use test case scripts, including test oracles, for common test automation tools from the model. The prototype is evaluated in a basic case study.

·
16:35-16:50 + 5 mins for discussion/questions: An Industry Perspective
Brian P Robinson, ABB Corporate Research

Abstract: TBD.

·
16:55-17:10 + 5 mins for discussion/questions: Crowdsourcing and Web Configuration Fault Detection: An Overview
Cyntrica Eaton, Norfolk State University

Abstract: Detecting configuration faults is a problem in web application development, because end-users have expanded flexibility in web access options and the client configurations used to explore the web are highly varied. Engaging a community of users with varied configurations in the process of web configuration fault detection/correction could significantly improve the feasibility of comprehensive analysis. In this talk, I will discuss one approach to developing a community of contributors and experts who will collectively synthesize, fortify, and refine a knowledge base that enables detection, diagnosis, and correction of configuration faults.
17:15 - 17:30 | Closing Remarks
We're doing this for the third time! TESTBEDS 2009 and TESTBEDS 2010 were extremely successful, with several interesting talks and discussions.
We're doing this because testing of several classes of event-driven software (EDS) applications is becoming very important. Common examples of EDS include graphical user interfaces (GUIs), web applications, network protocols, embedded software, software components, and device drivers. An EDS takes internal/external events (e.g., commands, messages) as input (e.g., from users or other applications), changes its state, and sometimes outputs an event sequence. An EDS is typically implemented as a collection of event handlers designed to respond to individual events. EDS is gaining popularity because of the advantages this "event-handler architecture" offers to both developers and users. From the developer's point of view, the event handlers may be created and maintained fairly independently; hence, complex systems may be built from these loosely coupled pieces of code. In interconnected/distributed systems, event handlers may also be distributed, migrated, and updated independently. From the user's point of view, EDS offers many degrees of usage freedom. For example, in GUIs, users may choose to perform a given task by inputting GUI events (mouse clicks, selections, typing in text fields) in many different ways in terms of their type, number, and execution order.
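The event-handler architecture described above can be sketched minimally in Python. This is an illustrative sketch only; the `LoginForm` class, its event names, and its handlers are hypothetical examples, not any particular GUI framework's API:

```python
# Minimal sketch of an event-handler architecture: each handler is an
# independent piece of code; a dispatcher routes incoming events to
# handlers, which may change state and emit further (output) events.
class LoginForm:
    def __init__(self):
        self.username = ""
        self.logged_in = False
        # Event name -> handler: the handlers are loosely coupled and
        # could be added, removed, or updated independently.
        self.handlers = {
            "type_username": self.on_type_username,
            "click_login": self.on_click_login,
        }

    def dispatch(self, event, payload=None):
        """Route an incoming event to its handler; unknown events are ignored."""
        handler = self.handlers.get(event)
        return handler(payload) if handler else None

    def on_type_username(self, text):
        self.username = text  # handler changes internal state

    def on_click_login(self, _):
        # The outcome depends on state set by previously executed events.
        if self.username:
            self.logged_in = True
            return "login_ok"  # output event
        return "error_empty_username"


form = LoginForm()
form.dispatch("type_username", "alice")
print(form.dispatch("click_login"))  # -> login_ok
```

Note how the outcome of `click_login` depends on whether `type_username` was executed first; this is exactly the property that makes testing events in isolation insufficient.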
Software testing is a popular quality assurance technique employed during software development and deployment to help improve software quality. During software testing, test cases are created and executed on the software. One way to test an EDS is to execute each event individually and observe its outcome, thereby testing each event handler in isolation. However, the execution outcome of an event handler may depend on its internal state, the state of other entities (objects, event handlers), and/or the external environment. Its execution may lead to a change in its own state or that of other entities. Moreover, the outcome of an event's execution may vary based on the sequence of preceding events seen thus far. Consequently, in EDS testing, each event needs to be tested in different states. EDS testing therefore may involve generating and executing sequences of events, and checking the correctness of the EDS after each event. Test coverage may be evaluated not only in terms of code, but also in terms of the event space of the EDS. Regression testing requires not only test selection, but also repairing obsolete test cases. The first major goal of this workshop is to bring together researchers and practitioners to discuss some of these topics.
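The idea of testing events in sequence, with a correctness check after each event and coverage measured over the event space, can be sketched as follows. This is a simplified illustration under assumed names (the `Counter` subject, its events, and the all-pairs sequence generation are hypothetical), not any specific tool's algorithm:

```python
from itertools import product

# Hypothetical event-driven subject under test: two events, simple state.
class Counter:
    def __init__(self):
        self.value = 0

    def apply(self, event):
        if event == "inc":
            self.value += 1
        elif event == "reset":
            self.value = 0


EVENTS = ["inc", "reset"]


def run_sequence(seq):
    """Execute one event sequence on a fresh instance, checking a
    correctness condition (the oracle) after every event."""
    sut = Counter()
    for event in seq:
        sut.apply(event)
        assert sut.value >= 0, f"oracle violated after {event} in {seq}"
    return sut.value


# Event-space coverage: exercise every length-2 event sequence, not just
# each event once, since an event's outcome depends on what preceded it.
results = {seq: run_sequence(seq) for seq in product(EVENTS, repeat=2)}
print(results[("inc", "inc")])  # -> 2
```

Real EDS test generators derive such sequences from models (e.g., event flow graphs or event sequence graphs) rather than enumerating all pairs, but the structure, generate sequences, execute, check after each event, is the same.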
One of the biggest obstacles to conducting research in the field of EDS testing is the lack of freely available, standardized benchmarks for experimentation, containing artifacts (software subjects and their versions, test cases, coverage-adequate test suites, fault matrices, coverage matrices, bug reports, change requests), tools (test-case generators, test-case replayers, fault seeders, regression testers), and processes (how an experimenter may use the tools and artifacts together) [see http://comet.unl.edu for examples]. The second major goal of this workshop is to promote the development of concrete benchmarks for EDS.
To provide focus, this event will only examine GUI-based applications and Web applications, which share many testing challenges. As this workshop matures, we hope to expand to other types of EDS.
The workshop solicits submission of:
· Full Papers (max 10 pages)
· Position Papers (max 6 pages) [what is a position paper?]
· Demo Papers (max 6 pages) [usually papers describing implementation-level details (e.g., tool, file format, structure) that are of interest to the community]
· Industrial Presentations (slides)
All submissions will be handled through http://www.easychair.org/conferences/?conf=testbeds2011.
Industrial presentations are submitted in the form of presentation slides and will be evaluated by at least two members of the Program Committee for relevance and soundness.
Each paper will be reviewed by at least three referees. Papers should be submitted as PDF files in the standard IEEE two-column conference format (LaTeX, Word). The workshop proceedings will be published on this workshop's web page. Papers accepted for the workshop will appear in the IEEE digital library, providing a lasting archived record of the workshop proceedings.
· Atif M Memon, University of Maryland, USA.
· Cristiano Bertolini, Federal University of Pernambuco, Brazil.
· Zhenyu Chen, Nanjing University, China.
· Myra Cohen, University of Nebraska-Lincoln, USA.
· Cyntrica Eaton, Norfolk State University, USA.
· Anna-Rita Fasolino, University of Naples Federico II, Italy.
· Mark Grechanik, Accenture Labs, USA.
· Matthias Hauswirth, University of Lugano, Switzerland.
· Chin-Yu Huang, National Tsing Hua University, Taiwan.
· Ana Paiva, University of Porto, Portugal.
· Brian Robinson, ABB Inc., US Corporate Research, USA.