26-27 AUG 2019 IN TALLINN, ESTONIA

Co-located with ESEC/FSE 2019 

About

The A-TEST workshop aims to provide a venue for researchers and industry practitioners to exchange and discuss views, ideas, state-of-the-art work in progress, and scientific results on automated testing.

Modern software teams seek a delicate balance between two opposing forces: striving for reliability and striving for agility. Software teams need tools to strike the right balance by increasing the development speed without sacrificing quality. Automated testing tools play an important role in obtaining this balance.

A-TEST has successfully run 9 editions since 2009. The 2017 and 2018 editions, which were also co-located with ESEC/FSE, introduced hands-on sessions in which testing tools can be studied in depth. Thanks to the many positive reactions we received, this year we will have them again!

Program

26th of August

9:00 – 9:30

Welcome

9:30 – 10:30

Keynote: Martin Monperrus (KTH Royal Institute of Technology)

A Comprehensive Study of Pseudo-tested Methods

10:30 – 11:00

Coffee

11:00 – 12:30

Riccardo Coppola, Luca Ardito and Marco Torchiano. Fragility of Layout-based and Visual GUI test scripts: an assessment study on a Hybrid Mobile Application

Marc-Florian Wendland, Martin Schneider and Andreas Hoffmann. Extending UTP 2 with cascading arbitration specifications

Alberto Martin-Lopez, Sergio Segura and Antonio Ruiz-Cortés. Test Coverage Criteria for RESTful Web APIs

12:30 – 14:00

LUNCH

14:00 – 15:30

Wishnu Prasetya. AI-enhanced Testing (besides machine learning)

Marc-Florian Wendland. Demo of Components of a UTP2-based Test Automation Solution

15:30 – 16:00

Coffee break

16:00 – 17:30

Hands-on session

Markus Borg. Test Visualization with a Game Engine

27th of August

9:30 – 10:30

Ana Turlea. Testing Extended Finite State Machines using NSGA-III

Rudolf Ramler, Claus Klammer and Thomas Wetzlmaier. Lessons Learned from Making the Transition to Model-Based GUI Testing.

10:30 – 11:00

Coffee

11:00 – 12:30

Hands-on session

Franck Chauvel, Brice Morin and Enrique Garcia-Ceja. Amplifying Integration Tests with CAMP

12:30 – 14:00

LUNCH

14:00 – 15:30

Keynote: Theo Ruys (Axini)

Effective Model Based Testing

 

Marcus Kessel and Colin Atkinson. A Platform for Diversity-Driven Test Amplification.

15:30 – 16:00

Coffee break

16:00 – 17:30

Hands-on session

Franck Chauvel, Brice Morin and Enrique Garcia-Ceja. Amplifying Integration Tests with CAMP

Keynotes

Effective Model Based Testing

Theo Ruys (Axini)

In theory, model based testing (MBT) is only concerned with a formal model, an implementation, an automated tool and a means to connect the tool and the implementation. In practice, however, in every MBT project, there are several engineering and management problems to be dealt with.
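The ingredients named above (a model, an implementation, and a tool connecting the two) can be sketched in a few lines. The turnstile example below is purely illustrative and unrelated to the Axini tooling:

```python
import itertools

# The "formal model": a finite-state machine of a toy turnstile,
# given as a (state, input) -> next-state table.
MODEL = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

class Turnstile:
    """The implementation under test, written independently of the model."""
    def __init__(self):
        self.locked = True
    def coin(self):
        self.locked = False
    def push(self):
        self.locked = True

def run_mbt(sequence):
    """Drive the implementation with an input sequence, checking each
    observed state against the model's predicted state."""
    model_state = "locked"
    sut = Turnstile()
    for event in sequence:
        model_state = MODEL[(model_state, event)]  # step the model
        getattr(sut, event)()                      # step the implementation
        observed = "locked" if sut.locked else "unlocked"
        assert observed == model_state, f"divergence after {event!r}"

# Exhaustively exercise all input sequences up to length 3.
for seq in itertools.product(["coin", "push"], repeat=3):
    run_mbt(seq)
```

A real MBT tool generates such input sequences from the model automatically; the engineering questions below arise once the model and implementation evolve.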

For example, how does one construct a viable model from the specifications? What makes a good model? How do we keep the model as close to the specification and implementation as possible? How do we ensure that a future re-test of the implementation will be effortless? When a bug is found in the implementation, how do we make sure we can replay it? And how do we mask the bug within the model so that the test process can continue?

Furthermore, the test engineer has to deal with incremental versions of the implementation, different versions of the models, different test purposes, the collection of issues found in the implementation, the collection of test executions, etc. Preferably, the MBT tool includes version management functionality for the models and the test executions. Still, the test engineer has to apply a rigorous attitude when managing the abundance of data.

In this talk we will discuss the various engineering problems when applying MBT in practice, and our approach to tackle these problems. We will discuss classic MBT vs. incremental MBT. We will argue why the emphasis on achieving high coverage of the model is overrated. The talk will be illustrated with a demo of the Axini Modelling Suite, the MBT solution of Axini.

A Comprehensive Study of Pseudo-tested Methods

Martin Monperrus (KTH Royal Institute of Technology)

Pseudo-tested methods are defined as follows: they are covered by the test suite, yet no test case fails when the method body is removed, i.e., when all the effects of the method are suppressed. This intriguing concept was coined in 2016 by Niedermayr and colleagues, who showed that such methods are systematically present, even in well-tested projects with high statement coverage.
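As a minimal, hypothetical illustration (not drawn from the study's subjects), the following method is covered yet pseudo-tested, because the test exercises it without asserting anything about its effect:

```python
class Cache:
    """A toy class with a pseudo-tested method (hypothetical example)."""
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

def test_put():
    cache = Cache()
    cache.put("answer", 42)   # put() is covered by this test...
    assert cache is not None  # ...but the assertion is too weak:
                              # deleting put()'s body would not fail it.

test_put()  # passes whether or not put() actually stores anything
```

A tool such as Descartes detects this by replacing the body of put() with an empty one and observing that the suite still passes.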

This work presents a novel analysis of pseudo-tested methods. First, we run a replication of Niedermayr’s study with 28K+ methods, enhancing its external validity thanks to the use of new tools and new study subjects. Second, we perform a systematic characterization of these methods, both quantitatively and qualitatively with an extensive manual analysis of 101 pseudo-tested methods.

The first part of the study confirms Niedermayr’s results: pseudo-tested methods exist in all our subjects. Our in-depth characterization of pseudo-tested methods leads to two key insights: pseudo-tested methods are significantly less tested than the other methods; yet, for most of them, the developers would not pay the testing price to fix this situation. This calls for future work on targeted test generation to specify those pseudo-tested methods without spending developer time. 

See https://hal.archives-ouvertes.fr/hal-01867423/document

Important Dates

  • Submission deadline (extended):  7th of June 2019
  • Author notification:  24th of June 2019
  • Camera-ready: 1st of July 2019

Organization Committee

A-TEST TEAM

General Chair

Tanja E.J. Vos (Universidad Politecnica de Valencia, Open Universiteit)

 

Program Chairs

Wishnu Prasetya (Universiteit van Utrecht)

Sinem Getir (Humboldt-Universität zu Berlin)

 

Hands-on session chair

Ali Parsai (Universiteit van Antwerpen)

 

Publicity Chair

Pekka Aho (Open Universiteit)

Programme Committee

 

 

  • Julián García, University of Seville, Spain
  • Marc-Florian Wendland, Fraunhofer FOKUS, Germany
  • Rudolf Ramler, Software Competence Center Hagenberg, Austria
  • Filippo Ricca, Università di Genova, Italy
  • Markus Borg, RISE SICS, Sweden
  • Yvan Labiche, Carleton University, Canada
  • Maurizio Leotta, Università di Genova, Italy
  • Peter M. Kruse, Expleo, Germany
  • Sam Malek, University of California, Irvine, USA
  • Ana Paiva, University of Porto, Portugal
  • M.J. Escalona, University of Seville, Spain
  • Leire Etxeberria, Mondragon Unibertsitatea, Spain
  • Anna Rita Fasolino, University of Naples Federico II, Italy
  • Sai Zhang, Google, USA
  • Rafael Oliveira, USP, Brazil
  • João Faria, FEUP, INESC TEC, Portugal
  • Marcio Eduardo Delamaro, University of São Paulo, Brazil
  • Emil Alégroth, Blekinge Institute of Technology, Sweden
  • Ina Schieferdecker, Fraunhofer FOKUS/TU Berlin, Germany
  • Mariano Ceccato, Fondazione Bruno Kessler, Italy
  • Daniele Zuddas, Università della Svizzera italiana, Switzerland
  • Lydie Du Bousquet, LIG, France
  • Marko Van Eekelen, Radboud University, The Netherlands
  • Martin Deiman, Testwerk, The Netherlands
  • Domenico Amalfitano, University of Naples Federico II, Italy
  • Jan Tretmans, TNO, The Netherlands
  • Mika Rautila, VTT, Finland
  • Riccardo Coppola, Politecnico di Torino, Italy
  • Valentin Dallmeier, Saarland University, Germany
  • Denys Poshyvanyk, College of William and Mary, USA
  • José Carlos Maldonado, ICMC-USP, Brazil
  • Kevin Moran, College of William and Mary, USA