The workshop will be held VIRTUALLY on 23 August 2021.

Co-located with ESEC/FSE 2021 

About


For the past eleven years, A-TEST has provided a venue for researchers and industry partners to exchange and discuss trending views, ideas, the state of the art, work in progress, and scientific results on automated test case design, selection, and evaluation.

The theme of the 2021 edition will be testing “Extended Reality” (XR) systems, i.e. advanced interactive systems such as Virtual Reality (VR) and Augmented Reality (AR) systems. XR systems have emerged in various domains, ranging from entertainment and cultural heritage to combat training and mission-critical applications.

We need novel testing technology for XR systems, based on AI techniques, that provides learning and reasoning over a virtual world. XR developers need powerful test agents to automatically explore and test the correct parameters of their virtual worlds as they iteratively develop and refine them. This workshop will provide researchers and practitioners with a forum for exchanging ideas, experiences, understanding of the problems, visions for the future, and promising solutions for testing XR systems.

The 2021 edition of A-TEST is organized as an online workshop of ESEC/FSE 2021, together with the iv4XR project.

Program (23rd August 2021)

All times are in CET.

09:00 – 09:15   Welcome, opening (A-Test organization)
09:15 – 10:15   Keynote: Rui Prada (INESC-ID, Lisbon). Automated Test Agents for XR Based Systems.

Abstract
Extended Reality (XR) systems, i.e. advanced interactive systems including Virtual Reality (VR) and Augmented Reality (AR), have emerged in various domains. The development and authoring of such systems is an iterative process that depends heavily on quality assurance processes involving user testing, to make sure that the resulting systems are correct and deliver a high-quality user experience. As the complexity of these systems keeps increasing, the XR industry now finds itself confronting a soaring engineering challenge: paradoxically, XR's fine-grained interactivity and high level of realism make such systems very hard and expensive to test. The iv4XR project aims to build novel verification and validation technology for XR systems, based on techniques from AI, to provide learning and reasoning over an interactive virtual world. The technology will allow XR developers to deploy powerful test agents to automatically explore and test the correct parameters of their virtual worlds as they iteratively develop and refine them. The iv4XR agents will support testing both functional properties of the system under test (SUT) and user experience (UX). We are using cognitive and socio-emotional AI to enable test agents to conduct automated assessment of dimensions of user experience, such as emotional state, and to check the impact of the SUT on different demographic and social profiles.


Bio
Rui Prada is an Associate Professor at Instituto Superior Técnico (IST), Universidade de Lisboa, and a Senior Researcher in the AI for People and Society Research Group at INESC-ID Lisbon. He was co-responsible for the creation of courses on Game Design and Development and for the creation of the Specialisation in Games of the Master Program in Information Systems and Computer Engineering at IST. He conducts research on socially intelligent agents, affective computing, human-agent (and robot) interaction, computer games, applied gaming, and game AI. He is co-author of the book “Design e Desenvolvimento de Jogos” and was one of the founding members of the Portuguese Society of Videogame Sciences. He is currently the coordinator of the EU-funded iv4XR project, which conducts research on automated testing of Extended Reality systems.

10:15 – 10:45   Technical paper: Samira Shirzadehhajimahmood, Wishnu Prasetya, Frank Dignum, Mehdi Dastani and Gabriele Keller. Using an Agent-based Approach for Robust Automated Testing of Computer Games | preprint

10:45 – 11:00       Coffee break

11:00 – 11:30  Technical paper: Riccardo Coppola, Luca Ardito and Marco Torchiano. Automated Translation of Android Context-dependent Gestures to Visual GUI Test Instructions
11:30 – 12:00  Technical paper: Stefan Fischer, Claus Klammer and Rudolf Ramler. Integrating Usage Monitoring for Continuous Evaluation and Testing in the UI of an Industry Application

12:00 – 13:00   Lunch break

13:00 – 14:00   Hands-on session: Kevin Moran. A Tutorial on the CrashScope Tool.

About CrashScope
Unique challenges arise when testing mobile applications due to their prevailing event-driven nature and complex contextual features (e.g. sensors, notifications). Current automated input generation approaches for Android apps are typically not practical for developers to use due to required instrumentation or platform dependence. To better support developers in mobile testing tasks, this tutorial presents an automated tool called CrashScope. The tool explores a given Android app using systematic input generation, according to several strategies, with the intrinsic goal of triggering crashes. When a crash is detected, CrashScope generates a record of the executed events that led to the crash, which can then be transformed into a report containing screenshots, detailed crash reproduction steps, the captured exception stack trace, and a fully replayable script that automatically reproduces the crash on a target device. This tutorial will focus on providing participants with an overview of the tool, hands-on experience executing a test application using CrashScope, and a discussion of how the tool could be extended to work with XR/AR/VR applications. The website for the hands-on session is located at https://sagelab.io/crashscope-tutorial/
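
As a rough illustration of the explore-and-capture idea described above, a much simplified crash-hunting loop might look like the sketch below. This is not CrashScope's actual implementation; the App interface and every name in it are hypothetical stand-ins invented for this example.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical driver interface for illustration only; CrashScope
    // itself drives real apps without requiring instrumentation.
    interface App {
        List<String> availableEvents();           // GUI events currently enabled
        void fire(String event) throws Exception; // throws when the app crashes
    }

    // Systematic exploration with crash capture: execute events according
    // to some strategy, record the event trace, and emit the trace as
    // reproduction steps when a crash is detected.
    class CrashExplorer {
        private final List<String> trace = new ArrayList<>();

        void explore(App app, int maxSteps) {
            for (int step = 0; step < maxSteps; step++) {
                List<String> events = app.availableEvents();
                if (events.isEmpty()) return;
                // Placeholder strategy: always fire the first enabled event.
                // CrashScope applies several systematic strategies instead.
                String event = events.get(0);
                trace.add(event);
                try {
                    app.fire(event);
                } catch (Exception crash) {
                    report(crash);
                    return;
                }
            }
        }

        private void report(Exception crash) {
            System.out.println("Crash detected: " + crash);
            System.out.println("Reproduction steps:");
            trace.forEach(e -> System.out.println("  - " + e));
        }
    }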

14:00 – 14:30  Technical paper: Filippo Ricca and Maurizio Leotta. Automated Generation of PO-based WebDriver Test Suites from Selenium IDE Recordings
14:30 – 15:00  Technical paper: Muhammad Nouman Zafar, Wasif Afzal and Eduard Enoiu. Towards a Workflow for Model-Based Testing of Embedded Systems

15:00 – 15:15    Coffee break

15:15 – 16:15  Panel discussion: Expanding Software Testing to XR Systems
16:15 – 17:00  Announcement of student competition winners 

Congratulations to the winners:
Winner: Team Rhobot (Rhodon van Tilburg) from Utrecht University, the Netherlands.
Runner-up: Team SquidGL (Gerjan Lugtenberg, Nout Heesink, Kayleigh Lieverse) from Utrecht University, the Netherlands. 

A-TEST Automated Testing of Computer Games, Student Competition 2021

Call for Participation – Student Competition

To encourage students' (bachelor, master, or PhD) interest and involvement in themes around automated testing, we organize a student competition where they can come up with their own algorithms or AI to solve a set of managed testing problems in the domain of 3D computer games. We will use a test-AI gym called JLabGym, linked to a 3D maze-like game called Lab Recruits. JLabGym provides Java APIs to let an external test agent control the game and receive structured information about its state. New game levels can be conveniently defined or generated to provide challenges for AI to solve, though in our case we will focus on AIs for solving testing problems. Contestants will be challenged to devise (and implement) a generic algorithm to automate a certain type of testing task for the game Lab Recruits. A set of game levels will be produced as challenges (some will be provided as examples) to benchmark the submitted solutions. Scoring will be based, e.g., on soundness, completeness (against some pre-defined oracles), and time consumption.
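
To give a feel for the setup, the sketch below shows what a simple exploration-and-oracle loop of a test agent could look like. Note that the GameEnv interface and all of its method names are hypothetical stand-ins invented for illustration, not the actual JLabGym API; consult the JLabGym documentation on GitHub for the real interface.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical stand-in for the game-facing API; the real JLabGym
    // interface differs.
    interface GameEnv {
        Set<String> visibleEntities();     // ids of entities the agent can see
        boolean moveTo(String entityId);   // navigate to an entity; false if unreachable
        void interact(String entityId);    // e.g. press a button, open a door
        boolean isActive(String entityId); // oracle query, e.g. "is this door open?"
    }

    // A naive exploration agent: visit every reachable entity, interact
    // with it, and check a simple oracle after each interaction.
    class ExplorerAgent {
        private final GameEnv env;
        private final Set<String> visited = new HashSet<>();

        ExplorerAgent(GameEnv env) { this.env = env; }

        void run() {
            Deque<String> frontier = new ArrayDeque<>(env.visibleEntities());
            while (!frontier.isEmpty()) {
                String target = frontier.pop();
                if (!visited.add(target)) continue;     // already tested
                if (!env.moveTo(target)) continue;      // unreachable for now
                env.interact(target);
                // Oracle: after interacting, the entity should be active.
                if (!env.isActive(target)) {
                    System.out.println("Oracle violated at entity: " + target);
                }
                frontier.addAll(env.visibleEntities()); // newly revealed entities
            }
        }
    }

A competitive solution would replace the naive frontier strategy with smarter navigation and level-specific oracles, which is exactly the design space the competition asks contestants to explore.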

  • Prizes. Winner: 500 euro + game keys for the game Space Engineers + participation in the production of an upcoming game of one of our sponsors. Runner-up: 250 euro and Space Engineers game keys. Third: Space Engineers game keys.
  • Registration. Register your team here.
  • Submitting your project: edit your registration entry (above) by adding a link from which we can download a zip of your project. It should be a Maven project. Make sure that we can build your project. Deadline: July 31, 2021; extended to August 7, 2021.
  • More instructions for the competition can be found at https://github.com/iv4xr-project/JLabGym/blob/master/docs/contest/contest2021.md
  • Regulation: here.

Important Dates


  • Paper Submission deadline: May 28, 2021; extended to June 7, 2021 
  • Author notification: July 1, 2021
  • Camera-ready: July 8, 2021
  • Student Competition Submission: July 31, 2021; extended to August 7, 2021, 24:00 AoE.
  • Workshop: August 23, 2021

Organization Committee

A-TEST TEAM

General Chairs

Sinem Getir Yaman (Humboldt-Universität zu Berlin and Ege University, Izmir, Turkey)

Rui Prada (INESC-ID and Instituto Superior Técnico, Universidade de Lisboa)


Program Chairs

Fitsum Meshesha Kifetew (Fondazione Bruno Kessler)

Nicolas Cardozo Alvarez (Universidad de los Andes)


Publicity Chair

Tanja E.J. Vos (Universidad Politecnica de Valencia and Open Universiteit)

Hands-on Session Chair

Kevin Moran (George Mason University)

Student Competition Chairs

Wishnu Prasetya (Utrecht University)

Joseph Davidson (Good AI)

Web Chair

Pekka Aho (Open Universiteit)

Steering Committee


Tanja E.J. Vos (Universidad Politecnica de Valencia, Open Universiteit)

Wishnu Prasetya (Utrecht University)

Pekka Aho (Open Universiteit)

Programme Committee


Rui Abreu (FEUP)
Nuno Antunes (University of Coimbra)
Alexandre Bergel (University of Chile)
Carlos Bernal-Cárdenas (The College of William and Mary)
Matteo Biagiola (Università della Svizzera italiana)
Mariano Ceccato (University of Verona)
Coen De Roover (Vrije Universiteit Brussel)
Frank Dignum (Utrecht University)
Julian Dolby (IBM)
Mattia Fazzini (University of Minnesota)
Raihana Ferdous (Fondazione Bruno Kessler)
Carlos Gavidia (Pontificia Universidad Católica del Perú)
Onur Kilincceker (Mugla Sitki Kocman University)
Sam Malek (University of California, Irvine)
Alexandra Mendes (INESC TEC & Universidade da Beira Interior)
Fabio Palomba (University of Salerno)
Ali Parsai (Flanders Make)
Davide Prandi (Fondazione Bruno Kessler)
Wishnu Prasetya (Utrecht University)
Gordana Rakic (Faculty of Science, Novi Sad)
Gerson Sunyé (Université de Nantes)
Michele Tufano (College of William and Mary)
Ekincan Ufuktepe (University of Missouri – Columbia)


Call for Papers


We invite you to submit a paper on any topic related to automated software testing, and to present and discuss it at the event itself. This call for papers is also available in PDF format.

Full paper (up to 8 pages, including references) describing original and completed research.

Short paper (up to 4 pages, including references), for example:

  • Position papers (max. 2 pages) that analyse trends and raise issues of importance. Position papers are intended to generate discussion and debate during the workshop, and will be reviewed with respect to relevance and their ability to start fruitful discussions;
  • Work-in-progress papers (max. 4 pages) that describe novel, interesting, and promising work in progress that has not necessarily reached full completion;
  • Tool papers (max. 4 pages) presenting an academic testing tool in a way that it can be presented to industry as the start of a successful technology transfer;
  • Technology transfer papers (max. 4 pages) describing industry-academia cooperation.

Topics include:

  • Test case design, selection, and evaluation in Extended Reality systems (VR, AR, MR), but also in other emerging domains such as Graphical User Interfaces, Social Networks, the Cloud, Games, Security, and Cyber-Physical Systems.
  • Techniques and tools for automating test case design, generation, and selection, e.g., model-based approaches, mutation approaches, metamorphic approaches, combinatorial-based approaches, search-based approaches, symbolic-based approaches, chaos testing, machine learning testing.
  • New trends in the use of machine learning (ML) and artificial intelligence (AI) to improve test automation, and new approaches to test ML/AI techniques.
  • Test case optimization.
  • Test case evaluation and metrics.
  • Case studies and empirical studies that evaluate an existing technique or tool on real systems, not only toy problems, to show the quality of the resulting test cases compared to other approaches.
  • Experience/industry reports.

Call for hands-on

A-TEST also offers an opportunity to introduce your novel testing technique or tool to its audience in an active hands-on session of 3 hours. This is an excellent opportunity to get more people involved in your technique/tool. You are invited to send us hands-on proposals (up to 2 pages) describing how you will conduct the session.

Submission

Papers and proposals will be submitted through EasyChair: 

https://easychair.org/conferences/?conf=atest21

Each paper will be reviewed by at least three referees. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this workshop. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

All papers must be prepared in ACM Conference Format.

Papers accepted for the workshop will appear in the ACM digital library, providing a lasting archived record of the workshop proceedings.