November 9, 2020. The workshop will be held VIRTUALLY.

Co-located with ESEC/FSE 2020 

About


The A-TEST workshop aims to provide a venue for researchers and industry practitioners to exchange and discuss trending views, ideas, state-of-the-art work in progress, and scientific results on automated testing.

Modern software teams seek a delicate balance between two opposing forces: striving for reliability and striving for agility. Teams need tools that help them strike the right balance by increasing development speed without sacrificing quality. Automated testing tools play an important role in achieving this balance.

A-TEST has successfully run 10 editions since 2009. During the 2017, 2018, and 2019 editions, which were also co-located with ESEC/FSE, we introduced hands-on sessions in which testing tools could be studied in depth. Thanks to the many positive reactions we received, we will offer them again this year, in the form of live online tutorials.

Program


Program to be defined.

List of accepted papers:

Navigation and Exploration in 3D-Game Automated Play Testing
I. S. W. B. Prasetya, Maurin Voshol, Tom Tanis, Adam Smits, Bram Smit, Jacco van Mourik, Menno Klunder, Frank Hoogmoed, Stijn Hinlopen, August van Casteren, Jesse van de Berg, Naraenda G.W.Y. Prasetya, Samira Shirzadehhajimahmood, and Saba Gholizadeh Ansari
(Utrecht University, Netherlands)

Comparing Transition Trees Test Suites Effectiveness for Different Mutation Operators
Hoda Khalil and Yvan Labiche
(Carleton University, Canada)

Fuzz4B: A Front-End to AFL Not Only for Fuzzing Experts
Ryu Miyaki, Norihiro Yoshida, Natsuki Tsuzuki, Ryota Yamamoto, and Hiroaki Takada
(Nagoya University, Japan)

Towards Automated Testing of RPA Implementations
Marina Cernat, Adelina Nicoleta Staicu, and Alin Stefanescu
(University of Bucharest, Romania; Cegeka, Romania)

Keynotes

Corina Pasareanu 

Corina Pasareanu is a distinguished researcher at NASA Ames and Carnegie Mellon University. She is affiliated with KBR and CMU’s CyLab and holds a courtesy appointment in Electrical and Computer Engineering. At Ames, she is developing and extending Symbolic PathFinder, a symbolic execution tool for Java bytecode. Her research interests include model checking and automated testing, compositional verification, model-based development, probabilistic software analysis, and autonomy and security. She is the recipient of several awards, including the ASE Most Influential Paper Award (2018), the ESEC/FSE Test of Time Award (2018), the ISSTA Retrospective Impact Paper Award (2018), recognition as an ACM Distinguished Scientist (2016), the ACM Impact Paper Award (2010), and the ICSE Most Influential Paper Award (2010). She has served as Program or General Chair for several conferences, including FM 2021, ICST 2020, ISSTA 2020, ESEC/FSE 2018, CAV 2015, ISSTA 2014, ASE 2011, and NFM 2009. She is currently an associate editor of the IEEE TSE journal.

SafeDNN: Understanding and Verifying Neural Networks

The SafeDNN project at NASA Ames explores new techniques and tools to ensure that systems that use Deep Neural Networks (DNN) are safe, robust and interpretable. Research directions we are pursuing in this project include: symbolic execution for DNN analysis, label-guided clustering to automatically identify input regions that are robust, parallel and compositional approaches to improve formal SMT-based verification, property inference and automated program repair for DNNs, adversarial training and detection, probabilistic reasoning for DNNs. In this talk I will highlight some of the research advances from SafeDNN.

Aldeida Aleti

Aldeida is a Senior Lecturer and 2013 Australian Research Council DECRA Fellow at the Faculty of Information Technology, Monash University, Australia, where she leads the software engineering discipline group. Aldeida’s research is in the areas of Software Engineering and Artificial Intelligence, with a particular focus on Search-Based Software Testing. She has published in top AI, optimisation, and software engineering venues, and has served as a PC member and on the organising committees of both SE and optimisation conferences, such as ASE, ICSE, GECCO, and IJCAI.

The effectiveness of automated software testing techniques

With the rise of AI-based systems, such as self-driving cars, Google search, and automated decision-making systems, new challenges have emerged for the testing community. Verifying such software systems is becoming an extremely difficult and expensive task, often constituting up to 90% of software expenses. Software in a self-driving car, for example, must operate safely in an infinite number of scenarios, which makes it extremely hard to find bugs in such systems. In this talk, I will explore some of these challenges and introduce our work, which aims at improving the bug-detection capabilities of automated software testing. First, I will present a framework that maps the effectiveness of automated software testing techniques by identifying software features that impact the ability of these techniques to achieve high code coverage. Next, I will introduce our latest work, which incorporates defect prediction information to improve the efficiency of search-based software testing at detecting software bugs.

Tool Demos

Diffblue Cover 

Peter Schrammel (Diffblue Ltd – University of Sussex, Brighton)

Diffblue Cover is a tool that automatically creates unit tests for Java code. Unlike the output of typical test generators, which often smells machine-generated, Diffblue Cover aims to write tests in an idiomatic coding style. The goal is to pass the Turing test: can you tell whether a test was written by a machine or by a developer?

This tutorial will introduce you to Diffblue Cover and show how to use it in your daily work on your Java code base. Diffblue Cover Community Edition is available as an IntelliJ plugin. Please install it on your machine prior to the tutorial so that you can actively follow along and interact: https://plugins.jetbrains.com/plugin/14946-diffblue-cover.


Scout 

Michel Nass, Emil Alégroth (Blekinge Institute of Technology)

Test automation is a staple of modern software development, but graphical user interface testing is associated with challenges that limit its use. In this demo we introduce Scout, a tool that combines and revises previous technologies into a common tool with the potential to mitigate the challenges encountered by previous GUI testing approaches.


Important Dates


  • Submission deadline (EXTENDED): July 27, 2020 (abstract) and July 31, 2020 (paper)
  • Author notification: September 4, 2020
  • Camera-ready: September 18, 2020

Organization Committee

A-TEST TEAM

General Chair

Sinem Getir Yaman (Humboldt-Universität zu Berlin)


Program Chair

Phu Nguyen (SINTEF, Norway)


Publicity Chair

Luca Ardito (Politecnico di Torino)

Online Tutorials Session Chair

Yannic Noller (National University of Singapore)


Web Chair

Riccardo Coppola (Politecnico di Torino)

Steering Committee


Tanja E.J. Vos (Universidad Politecnica de Valencia, Open Universiteit)

Wishnu Prasetya (Universiteit van Utrecht)

Pekka Aho (Open Universiteit)

Programme Committee


Emil Alégroth – Blekinge Institute of Technology

Domenico Amalfitano – University of Naples Federico II 

Markus Borg – RISE SICS AB

Mariano Ceccato – University of Verona

Márcio Eduardo Delamaro – Universidade de São Paulo

Lydie Du Bousquet – LIG

M.J. Escalona – University of Seville

Leire Etxeberria – Mondragon Unibertsitatea

João Faria – FEUP, INESC TEC

Anna Rita Fasolino – University of Naples Federico II

Onur Kilincceker – Mugla Sitki Kocman University

Yvan Labiche – Carleton University

Bruno Legeard – Smartesting

Maurizio Leotta – Università di Genova

Sam Malek – University of California, Irvine

Kevin Moran – College of William & Mary

Ana Paiva – University of Porto

Ali Parsai – University of Antwerp

Wishnu Prasetya – Utrecht University

Rudolf Ramler – Software Competence Center Hagenberg

Filippo Ricca – DIBRIS, Università di Genova

Martin Schneider – Fraunhofer FOKUS

Jan Tretmans – TNO – Embedded Systems Innovation

Man Zhang – Kristiania University College


Call for Papers


We invite you to submit a paper on any topic related to automated software testing, and to present and discuss it at the workshop itself.

Full paper (up to 7 pages, including references) describing original and completed research.

Short paper (up to 4 pages, including references) for example:

– position statements – intended to generate discussion and debate.

– work-in-progress – describing novel work that has not yet reached full completion.

– technology transfer paper – describing university-industry cooperation.

– tool paper – describing a test tool. Note that accepted tool papers are expected to give a demo at the workshop.

Topics include:

  • Techniques and tools for automating test case design, generation, and selection, e.g. model-based, combinatorial, search-based, and symbolic approaches, machine learning, and artificial intelligence.
  • New trends in the use of machine learning and artificial intelligence to improve test automation.
  • Test case optimization.
  • Test case evaluation and metrics.
  • Test case design, selection, and evaluation in emerging domains, e.g. graphical user interfaces, social networks, cloud, games, security, cyber-physical systems.
  • Case studies that have evaluated an existing technique or tool on real systems, not only toy problems, to show the quality of the resulting test cases compared to other approaches.
  • Search-based testing approaches.

Call for online tutorials

A-TEST also offers an opportunity to introduce your novel testing technique or tool to its audience in an online tutorial session. This is an excellent opportunity to get more people involved with your technique or tool. You are invited to submit proposals for sessions of 45 minutes to 1 hour. Proposals are limited to 2 pages and should include a topic description and your session schedule.


Submission

Papers and proposals will be submitted through EasyChair: 

https://easychair.org/conferences/?conf=atest2020

Each paper will be reviewed by at least three referees. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this workshop. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

All papers must be prepared in ACM Conference Format.

Papers accepted for the workshop will appear in the ACM digital library, providing a lasting archived record of the workshop proceedings.