26-27 Aug 2019 in Tallinn, Estonia

Co-located with ESEC/FSE 2019 

About

The A-TEST workshop aims to provide a venue for researchers as well as industry to exchange and discuss trending views, ideas, state-of-the-art work in progress, and scientific results on automated testing.

Modern software teams seek a delicate balance between two opposing forces: striving for reliability and striving for agility. Software teams need tools to strike the right balance by increasing the development speed without sacrificing quality. Automated testing tools play an important role in obtaining this balance.

A-TEST has successfully run 9 editions since 2009. During the 2017 and 2018 editions, which were also co-located with ESEC/FSE, we introduced hands-on sessions in which testing tools can be studied in depth. Due to the many positive reactions we received, this year we will have them again!

Program

26th of August

9:00 – 9:30

Welcome

9:30 – 10:30

Keynote Martin Monperrus (KTH Royal Institute of Technology)

A Comprehensive Study of Pseudo-tested Methods

10:30 – 11:00

Coffee

11:00 - 12:30

Riccardo Coppola, Luca Ardito and Marco Torchiano. Fragility of Layout-based and Visual GUI test scripts: an assessment study on a Hybrid Mobile Application

Marc-Florian Wendland, Martin Schneider and Andreas Hoffmann. Extending UTP 2 with cascading arbitration specifications.

Alberto Martin-Lopez, Sergio Segura and Antonio Ruiz-Cortés. Test Coverage Criteria for RESTful Web APIs

12:30 – 14:00

LUNCH

14:00 – 15:30

Hands-on session

15:30 – 16:00

Coffee break

16:00 – 17:30

Hands-on session

Markus Borg. Test Visualization with a Game Engine

27th of August

9:30 – 10:30

Keynote Theo Ruys (Axini)

Effective Model Based Testing

10:30 – 11:00

Coffee

11:00 – 12:30

Rudolf Ramler, Claus Klammer and Thomas Wetzlmaier. Lessons Learned from Making the Transition to Model-Based GUI Testing.

Ana Turlea. Testing Extended Finite State Machines using NSGA-III

Marcus Kessel and Colin Atkinson. A Platform for Diversity-Driven Test Amplification.

12:30 – 14:00

LUNCH

14:00 - 15:30

Hands-on session

Franck Chauvel, Brice Morin and Enrique Garcia-Ceja. Amplifying Integration Tests with CAMP

15:30 – 16:00

Coffee break

16:00 – 17:30

Hands-on session

Franck Chauvel, Brice Morin and Enrique Garcia-Ceja. Amplifying Integration Tests with CAMP

Keynotes

Effective Model Based Testing

Theo Ruys (Axini)

In theory, model based testing (MBT) requires only four ingredients: a formal model, an implementation, an automated tool, and a means of connecting the tool to the implementation. In practice, however, every MBT project faces several engineering and management problems.
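
To make these four ingredients concrete, here is a minimal Java sketch (all names are invented for illustration and unrelated to the Axini tooling): a toy implementation, a hand-written model of its expected behaviour, a direct-call connection, and a small driver that generates and checks random test steps.

    import java.util.Random;

    public class MbtSketch {
        // Ingredient 1: the implementation under test (a toy bounded counter).
        static class Counter {
            private int value = 0;
            void inc()   { if (value < 3) value++; }
            void reset() { value = 0; }
            int  get()   { return value; }
        }

        public static void main(String[] args) {
            int model = 0;                // Ingredient 2: the formal model's state
            Counter sut = new Counter();  // Ingredient 3: the connection (direct calls)
            Random rnd = new Random(42);

            // Ingredient 4: the automated tool, here a driver that picks
            // transitions, applies them to model and SUT, and checks conformance.
            for (int step = 0; step < 100; step++) {
                if (rnd.nextBoolean()) {
                    model = Math.min(model + 1, 3);  // model transition: inc
                    sut.inc();
                } else {
                    model = 0;                       // model transition: reset
                    sut.reset();
                }
                if (sut.get() != model)
                    throw new AssertionError("SUT diverged from model at step " + step);
            }
            System.out.println("100 random steps conform to the model.");
        }
    }

Real MBT tools replace the hand-written model and driver with a modelling language and a generic test engine, which is exactly where the engineering questions below arise.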

For example, how do we construct a viable model on the basis of the specifications? What makes a good model? How do we keep the model as close to the specification and implementation as possible? How do we ensure that a future re-test of the implementation will be effortless? And when a bug is found in the implementation, how do we make sure we can replay it, and how do we mask it within the model so that the test process can continue?

Furthermore, the test engineer has to deal with incremental versions of the implementation, different versions of the models, different test purposes, the collection of issues found in the implementation, the collection of test executions, etc. Preferably, the MBT tool includes version management functionality for the models and the test executions. Still, the test engineer must take a rigorous approach to managing this abundance of data.

In this talk we will discuss the various engineering problems that arise when applying MBT in practice, and our approach to tackling them. We will compare classic MBT with incremental MBT, and argue why the emphasis on achieving high coverage of the model is overrated. The talk will be illustrated with a demo of the Axini Modelling Suite, Axini's MBT solution.

A Comprehensive Study of Pseudo-tested Methods

Martin Monperrus (KTH Royal Institute of Technology)

Pseudo-tested methods are defined as follows: they are covered by the test suite, yet no test case fails when the method body is removed, i.e., when all the effects of this method are suppressed. This intriguing concept was coined in 2016 by Niedermayr and colleagues, who showed that such methods are systematically present, even in well-tested projects with high statement coverage. This work presents a novel analysis of pseudo-tested methods. First, we run a replication of Niedermayr's study with 28K+ methods, enhancing its external validity thanks to the use of new tools and new study subjects. Second, we perform a systematic characterization of these methods, both quantitatively and qualitatively, with an extensive manual analysis of 101 pseudo-tested methods. The first part of the study confirms Niedermayr's results: pseudo-tested methods exist in all our subjects. Our in-depth characterization of pseudo-tested methods leads to two key insights: pseudo-tested methods are significantly less tested than the other methods; yet, for most of them, the developers would not pay the testing price to fix this situation. This calls for future work on targeted test generation to specify those pseudo-tested methods without spending developer time. See https://hal.archives-ouvertes.fr/hal-01867423/document
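
To make this definition concrete, consider the following minimal Java sketch (all class and method names are invented for illustration; deleting the method body by hand plays the role of the analysis tool's transformation):

    import java.util.ArrayList;
    import java.util.List;

    class Report {
        private final List<String> lines = new ArrayList<>();

        void add(String line) { lines.add(line); }

        // Pseudo-tested: executed by the test below, but replacing this
        // body with "void sanitize() {}" would make no test fail.
        void sanitize() { lines.removeIf(s -> s.trim().isEmpty()); }

        int size() { return lines.size(); }
    }

    class ReportTest {
        public static void main(String[] args) {
            Report r = new Report();
            r.add("total: 42");   // no blank lines are ever added ...
            r.sanitize();         // ... so sanitize() removes nothing here
            if (r.size() != 1)    // holds whether or not sanitize() has a body
                throw new AssertionError("expected 1 line");
            System.out.println("test passed");
        }
    }

The test executes sanitize(), so coverage marks it as tested, yet no assertion observes its effect: an automated analysis that strips the body would report sanitize() as pseudo-tested.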

Important dates

  • Submission deadline (extended):  7th of June 2019
  • Author notification:  24th of June 2019
  • Camera-ready: 1st of July 2019

Organization Committee

A-TEST TEAM

General Chair

Tanja E.J. Vos (Universidad Politecnica de Valencia, Open Universiteit)

Program Chairs

Wishnu Prasetya (Universiteit van Utrecht)

Sinem Getir (Humboldt-Universität zu Berlin)

Hands-on session chair

Ali Parsai (Universiteit van Antwerpen)

Publicity Chair

Pekka Aho (Open Universiteit)

Call for papers

Papers

We invite you to submit a paper on any topic related to automated software testing, and to present and discuss it at the workshop itself.

Full paper (up to 7 pages, including references) describing original and completed research.

Short paper (up to 4 pages, including references), for example:

- position statements - intended to generate discussion and debate.

- work-in-progress papers - describing novel work that has not yet reached full completion.

- technology transfer papers - describing university-industry co-operation.

- tool papers - describing a test tool. Note that accepted tool papers are expected to give a demo at the workshop.

Topics include:

  • Techniques and tools for automating test case design, generation, and selection, e.g. model-based, combinatorial, search-based, and symbolic approaches, machine learning, artificial intelligence.
  • New trends in the use of machine learning and artificial intelligence to improve test automation.
  • Test case optimization.
  • Test case evaluation and metrics.
  • Test case design, selection, and evaluation in emerging domains, e.g. graphical user interfaces, social networks, the cloud, games, security, cyber-physical systems.
  • Case studies that have evaluated an existing technique or tool on real systems (not only toy problems) to show the quality of the resulting test cases compared to other approaches.

Call for hands-on

A-TEST also offers an opportunity to introduce your novel testing technique or tool to its audience in an active hands-on session of 3 hours. This is an excellent opportunity to get more people involved in your technique/tool. You are invited to send us hands-on proposals (up to 2 pages) describing how you will conduct the session.

Submission

Papers and proposals will be submitted through EasyChair: https://easychair.org/conferences/?conf=atest2019

Each paper will be reviewed by at least three referees. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this workshop. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

All papers must be prepared in ACM Conference Format.

Papers accepted for the workshop will appear in the ACM digital library, providing a lasting archived record of the workshop proceedings.

Previous Editions

The A-TEST workshop has evolved over the years and has successfully run 9 editions since 2009. The first editions, which went by the name ATSE (2009 and 2011), took place at CISTI (Conference on Information Systems and Technologies, http://www.aisti.eu/). The three subsequent editions (2012, 2013 and 2014) took place at FEDCSIS (Federated Conference on Computer Science and Information Systems, http://www.fedcsis.org). In 2015 there was both an ATSE 2015 at SEFM and an A-TEST 2015 at FSE.

In 2016 we merged the two events at FSE, resulting in the 7th edition of A-TEST.

The 8th edition of A-TEST in 2017 was co-located with the 11th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2017, in Paderborn, Germany.

The 9th edition of A-TEST in 2018 was co-located with the Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2018, in Lake Buena Vista, Florida, United States.

Programme Committee

  • Julián García, University of Seville, Spain
  • Marc-Florian Wendland, Fraunhofer FOKUS, Germany
  • Rudolf Ramler, Software Competence Center Hagenberg, Austria
  • Filippo Ricca, Università di Genova, Italy
  • Markus Borg, RISE SICS, Sweden
  • Yvan Labiche, Carleton University, Canada
  • Maurizio Leotta, Università di Genova, Italy
  • Peter M. Kruse, Expleo, Germany
  • Sam Malek, University of California, Irvine, USA
  • Ana Paiva, University of Porto, Portugal
  • M.J. Escalona, University of Seville, Spain
  • Leire Etxeberria, Mondragon Unibertsitatea, Spain
  • Anna Rita Fasolino, University of Naples Federico II, Italy
  • Sai Zhang, Google, USA
  • Rafael Oliveira, USP, Brazil
  • João Faria, FEUP, INESC TEC, Portugal
  • Marcio Eduardo Delamaro, University of São Paulo, Brazil
  • Emil Alégroth, Blekinge Institute of Technology, Sweden
  • Ina Schieferdecker, Fraunhofer FOKUS/TU Berlin, Germany
  • Mariano Ceccato, Fondazione Bruno Kessler, Italy
  • Daniele Zuddas, Università della Svizzera italiana, Switzerland
  • Lydie Du Bousquet, LIG, France
  • Marko Van Eekelen, Radboud University, The Netherlands
  • Martin Deiman, Testwerk, The Netherlands
  • Domenico Amalfitano, University of Naples Federico II, Italy
  • Jan Tretmans, TNO, The Netherlands
  • Mika Rautila, VTT, Finland
  • Riccardo Coppola, Politecnico di Torino, Italy
  • Valentin Dallmeier, Saarland University, Germany
  • Denys Poshyvanyk, College of William and Mary, USA
  • José Carlos Maldonado, ICMC-USP, Brazil
  • Kevin Moran, College of William and Mary, USA