The workshop will be held VIRTUALLY on August 23-24, 2021.

Co-located with ESEC/FSE 2021 

About

 

For the past eleven years, A-TEST has provided a venue for researchers and industry partners to exchange and discuss trending views, ideas, the state of the art, work in progress, and scientific results on automated test case design, selection, and evaluation.

The theme of the 2021 edition will be testing “Extended Reality” (XR) systems, i.e. advanced interactive systems such as Virtual Reality (VR) and Augmented Reality (AR) systems. XR systems have emerged in various domains, ranging from entertainment and cultural heritage to combat training and mission-critical applications.

We need novel testing technology for XR systems, based on AI techniques that provide learning and reasoning over a virtual world. XR developers need powerful test agents that automatically explore and test the correctness of their virtual worlds as they iteratively develop and refine them. This workshop will provide researchers and practitioners with a forum for exchanging ideas, experiences, understanding of the problems, visions for the future, and promising solutions to the problems in testing XR systems.

The 2021 edition of A-TEST is a joint workshop with LANGETI, organized as an online workshop of ESEC/FSE 2021.

Important Dates

  

  • Submission deadline: May 28, 2021
  • Author notification: July 1, 2021
  • Camera-ready: July 8, 2021
  • Workshop: August 23-24, 2021

Organization Committee

A-TEST TEAM

General Chairs

Sinem Getir Yaman (Humboldt-Universität zu Berlin and Ege University, Izmir, Turkey)

Rui Prada (INESC-ID and Instituto Superior Técnico, Universidade de Lisboa)

 

Program Chairs

Fitsum Meshesha Kifetew (Fondazione Bruno Kessler)

Nicolas Cardozo Alvarez (Universidad de los Andes)

 

Publicity Chair

Tanja E.J. Vos (Universidad Politecnica de Valencia and Open Universiteit)

Hands-on Session Chair

Kevin Moran (George Mason University)

Student Competition Chairs

Wishnu Prasetya (Utrecht University)

Joseph Davidson (GoodAI)

Web Chair

Pekka Aho (Open Universiteit)

Steering Committee

  

Tanja E.J. Vos (Universidad Politecnica de Valencia, Open Universiteit)

Wishnu Prasetya (Utrecht University)

Pekka Aho (Open Universiteit)

Programme Committee


Emil Alégroth – Blekinge Institute of Technology

Domenico Amalfitano – University of Naples Federico II 

Markus Borg – RISE SICS AB

Mariano Ceccato – University of Verona

Márcio Eduardo Delamaro – Universidade de São Paulo

Lydie Du Bousquet – LIG

M.J. Escalona – University of Seville

Leire Etxeberria – Mondragon Unibertsitatea

João Faria – FEUP, INESC TEC

Anna Rita Fasolino – University of Naples Federico II

Onur Kilincceker – Mugla Sitki Kocman University

Yvan Labiche – Carleton University

Bruno Legeard – Smartesting

Maurizio Leotta – Università di Genova

Sam Malek – University of California, Irvine

Kevin Moran – College of William & Mary

Ana Paiva – University of Porto

Ali Parsai – University of Antwerp

Wishnu Prasetya – Utrecht University

Rudolf Ramler – Software Competence Center Hagenberg

Filippo Ricca – DIBRIS, Università di Genova

Martin Schneider – Fraunhofer FOKUS

Jan Tretmans – TNO – Embedded Systems Innovation

Man Zhang – Kristiania University College

 

Call for Papers

 

We invite you to submit a paper on any topic related to automated software testing, and to present and discuss it at the event itself.

Full papers (up to 8 pages, including references) describing original and completed research.

Short papers (up to 4 pages, including references), for example:

  • Position papers (max. 2 pages) that analyse trends and raise issues of importance. Position papers are intended to generate discussion and debate during the workshop, and will be reviewed with respect to relevance and their ability to start fruitful discussions;
  • Work-in-progress papers (max. 4 pages) that describe novel, interesting, and promising work in progress that has not necessarily reached full completion;
  • Tool papers (max. 4 pages) presenting an academic testing tool in a way that could be presented to industry as a starting point for successful technology transfer;
  • Technology transfer papers (max. 4 pages) describing industry-academia cooperation.

Topics include:

  • Test case design, selection, and evaluation in Extended Reality (XR) systems (VR, AR, MR), but also in other emerging domains such as Graphical User Interfaces, Social Networks, the Cloud, Games, Security, and Cyber-Physical Systems.
  • Techniques and tools for automating test case design, generation, and selection, e.g., model-based, mutation, metamorphic, combinatorial, search-based, and symbolic approaches, chaos testing, and machine-learning testing.
  • New trends in the use of machine learning (ML) and artificial intelligence (AI) to improve test automation, and new approaches to test ML/AI techniques.
  • Test case optimization.
  • Test case evaluation and metrics.
  • Case studies and empirical studies that evaluate an existing technique or tool on real systems (not only toy problems), showing the quality of the resulting test cases compared to other approaches.
  • Experience/industry reports.

Call for hands-on

A-TEST also offers an opportunity to introduce your novel testing technique or tool to its audience in an active, 3-hour hands-on session. This is an excellent way to get more people involved in your technique or tool. You are invited to send us a hands-on proposal (up to 2 pages) describing how you will conduct the session.

Call for Participation – Student Competition

To encourage students’ (bachelor, master, or PhD) interest and involvement in themes around automated testing, we organize a student competition in which they can come up with their own algorithms or AI to solve a set of managed testing problems in the domain of 3D computer games. We will use a test-AI gym called Lab Recruits, provided by the iv4XR project: a 3D maze-like game that already provides a Java API letting an external test agent control the game and receive structured information about its state. New game levels can be conveniently defined or generated to provide challenges for AI to solve; in our case, we will focus on AIs that solve testing problems. A set of game levels will be defined as challenges, and participants are asked to write an automated test agent that automatically checks their correctness. Scoring can be based, e.g., on soundness, completeness (against mutations), and time consumption.
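To give a flavor of what such a test agent could look like, below is a minimal, self-contained Java sketch: the agent drives the game through a control API, observes the structured state, and checks a correctness property of the level. All type and method names here (GameEnvironment, Observation, interact, observe) are hypothetical stand-ins for illustration only; the actual Lab Recruits / iv4XR framework exposes its own, different Java API.

    // Minimal sketch of an automated test agent for a 3D game level.
    // All names below are hypothetical, NOT the real Lab Recruits / iv4XR API.
    import java.util.HashMap;
    import java.util.Map;

    public class TestAgentSketch {

        // Structured observation of the game state, as the agent receives it.
        record Observation(Map<String, Boolean> entityStates) {}

        // The control interface an external test agent would use.
        interface GameEnvironment {
            Observation observe();          // query structured game state
            void interact(String entityId); // act on an entity, e.g. press a button
        }

        // Stub level simulating one testable game rule:
        // pressing "button1" is supposed to open "door1".
        static class StubLevel implements GameEnvironment {
            private final Map<String, Boolean> state = new HashMap<>();
            StubLevel() { state.put("button1", false); state.put("door1", false); }
            public Observation observe() { return new Observation(new HashMap<>(state)); }
            public void interact(String id) {
                state.put(id, true);
                if (id.equals("button1")) state.put("door1", true); // the level's logic
            }
        }

        // The test agent: act, observe, then verify the level's correctness.
        public static void main(String[] args) {
            GameEnvironment game = new StubLevel();
            game.interact("button1");          // act: press the button
            Observation obs = game.observe();  // receive structured state
            boolean doorOpen = obs.entityStates().get("door1");
            System.out.println(doorOpen
                ? "PASS: door1 opens after pressing button1"
                : "FAIL: door1 did not open");
        }
    }

A competition agent would replace the stub with the real game connection and would typically also need exploration (navigating the maze to reach entities) before checking such properties; scoring then rewards agents that detect seeded (mutated) faults soundly, completely, and quickly.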

 

Submission

Papers and proposals will be submitted through EasyChair: 

https://easychair.org/conferences/?conf=atest2021

Each paper will be reviewed by at least three referees. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this workshop. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

All papers must be prepared in ACM Conference Format.

Papers accepted for the workshop will appear in the ACM Digital Library, providing a lasting archived record of the workshop proceedings.