
The workshop will be held in Singapore

November 17-18, 2022

Co-located with ESEC/FSE 2022 

About

For the past twelve years, the Workshop on Automating Test Case Design, Selection and Evaluation (A-TEST) has provided a venue for researchers and industry members alike to exchange and discuss trending views, ideas, state of the art, work in progress, and scientific results on automated testing.

Following the success of the past years, the 13th edition of A-TEST will continue to be co-located with and organised at ESEC/FSE 2022. A-TEST 2022 is planned to take place over two days preferably in person in Singapore (if the COVID-19 situation allows).

Testing is currently the most important and most widely used quality assurance technique in the software industry. However, the complexity of software, and hence the effort required to develop and test it, keeps increasing.

Even though many test automation tools are currently available to aid test planning and control as well as test case execution and monitoring, all of these tools share a similarly passive philosophy towards test case design, selection of test data, and test evaluation: they leave these crucial, time-consuming and demanding activities to the human tester. This is not without reason; test case design and test evaluation through oracles are difficult to automate with the techniques available in current industrial practice. The domain of possible inputs (potential test cases), even for a trivial method, program, model, user interface or service, is typically too large to be explored exhaustively. Consequently, one of the major challenges of test case design is selecting test cases that are effective at finding flaws without requiring an excessive number of tests to be executed. Automating the entire test process requires new thinking that goes beyond test design or specific test execution tools. These are the problems that this workshop aims to address.
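To make the scale of the problem concrete, the minimal Python sketch below (purely illustrative; the clamp function and the boundary values are our own assumptions, not workshop material) contrasts the exhaustive input domain of a trivial function with a small, deliberately selected test set:

    def clamp(value: int, low: int, high: int) -> int:
        """Trivial function under test: clamp 'value' into the range [low, high]."""
        return max(low, min(value, high))

    # Exhaustive testing: for three 32-bit integer parameters the input domain
    # already has (2**32)**3 = 2**96 combinations, far too many to enumerate,
    # let alone execute.
    print(f"Exhaustive domain size: {(2**32) ** 3:.3e}")

    # Selective testing: a handful of boundary values already exercises the
    # interesting behaviours (below, inside, above and on the edges of the range).
    selected_inputs = [(-(2**31), 0, 10), (0, 0, 10), (5, 0, 10),
                       (10, 0, 10), (2**31 - 1, 0, 10)]
    for value, low, high in selected_inputs:
        assert low <= clamp(value, low, high) <= high
    print(f"Selected test cases executed: {len(selected_inputs)}")

Even this toy example illustrates why the challenge is to choose a small, effective test set rather than to enumerate the input domain.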

The 2022 edition of A-TEST is organised as a co-located workshop of ESEC/FSE 2022.

Venue

 
The workshop will be organised in a hybrid format. Please see the ESEC/FSE’22 website for registration details.

ATEST Student Competition 2022 – Develop a Source Code Analysis Testing Tool

This competition is sponsored by FrontEndART (https://frontendart.com). The prizes are €500 for the winning team and €250 for the runner-up¹.

  1. Introduction
    To involve students in the testing community, and to promote attention to software testing and security, we organise a student competition.
  2. Call For Participation: Develop a Source Code Analysis Testing Tool
    Last year there was a zero-day vulnerability in a popular Java logging framework called Log4j (CVE-2021-44228).
    Researchers concluded that the vulnerability had gone unnoticed since 2013. With the vulnerability it was possible to execute arbitrary code, and the Log4j framework was widely used in all kinds of software.
    Many vulnerabilities are present in software, so the challenge is to find them at an early stage. Communicating about vulnerabilities also helps the software world gain better knowledge and understanding. There are all kinds of external resources you can use to scan a software folder for vulnerabilities. In case you find a vulnerable piece of software, you need to anonymise it and publish it to an open-source GitHub repository.
    The goal of the project is therefore to automatically scan a set of source projects and publish your findings to an open GitHub repository. The findings need to be ordered according to the OWASP Top 10 2021 Web Application Security Risks.
    You also need to be able to map the vulnerabilities and dependencies on a chart. You can run the designed software on your own platform, but cloud services (for example Heroku) are also allowed. For ease of use, it is preferable to package your solution in a Docker container. The solution needs to accept wildcards (the location of, for example, a GitHub repository). A minimal, purely illustrative sketch of one possible pipeline is given after this list.

     

    1. Fair game play
      By participating, each member of the team adheres to the following rules to create a fair game:
      All submissions are the team members' own work and have been created without external support.
      All members are enrolled at a university as students and can and will provide proof if asked for by the judges.
      All members must adhere to the ACM Code of Ethics and Professional Conduct.
    2. Assessment of the submissions
      The submissions will be independently assessed by industry experts in both cyber security and software testing. The submitted code will be run on two disclosed repositories and one undisclosed repository.
    3. Repositories
      The following repositories will be used by the judges to test the submissions:
      https://github.com/latex3/latex3.
      https://github.com/humanmade/S3-Uploads.
      The judges will also use one undisclosed repository to run the code against. You may also run your code against other open source repositories and provide us with the link to the repository.
    4. Scoring rubrics
      The submissions will be assessed using the scoring rubric presented in Table 1.
  3. Sponsoring & Prizes
    This competition is sponsored by FrontEndART (https://frontendart.com). The prizes are €500 for the winning team and €250 for the runner-up.
  4. Dates And Ways To Submit Your Solution
    The submission deadline is Sunday the fourth of November, at 24:00h (AoE). Submissions can be made by sending an e-mail to niels.doorn@ou.nl; you will then be provided with a URL to upload all your files.
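As a purely illustrative sketch (not an official reference solution), the Python outline below shows one possible way to structure the scan-and-report pipeline described in the call. The repository list, the placeholder run_scanner function and the report file name are assumptions for the example; an actual submission would plug in a real analysis tool and publish the anonymised report to an open GitHub repository.

    import json
    import subprocess
    import tempfile
    from collections import defaultdict
    from pathlib import Path

    # OWASP Top 10 2021 categories, used to order the findings in the report.
    OWASP_TOP_10_2021 = [
        "A01:2021 Broken Access Control",
        "A02:2021 Cryptographic Failures",
        "A03:2021 Injection",
        "A04:2021 Insecure Design",
        "A05:2021 Security Misconfiguration",
        "A06:2021 Vulnerable and Outdated Components",
        "A07:2021 Identification and Authentication Failures",
        "A08:2021 Software and Data Integrity Failures",
        "A09:2021 Security Logging and Monitoring Failures",
        "A10:2021 Server-Side Request Forgery",
    ]

    def run_scanner(project_dir: Path) -> list[dict]:
        """Placeholder: invoke your analysis tool(s) here and return findings as
        dicts with at least an 'owasp_category' and a 'description' key."""
        return []

    def scan_repositories(repo_urls: list[str], report_path: Path) -> None:
        findings_by_category = defaultdict(list)
        for url in repo_urls:
            with tempfile.TemporaryDirectory() as workdir:
                # Clone the project to analyse (a shallow clone keeps it fast).
                subprocess.run(["git", "clone", "--depth", "1", url, workdir],
                               check=True)
                for finding in run_scanner(Path(workdir)):
                    findings_by_category[finding["owasp_category"]].append(finding)
        # Order the report according to the OWASP Top 10 2021 categories.
        report = {category: findings_by_category.get(category, [])
                  for category in OWASP_TOP_10_2021}
        report_path.write_text(json.dumps(report, indent=2))

    if __name__ == "__main__":
        scan_repositories(["https://github.com/humanmade/S3-Uploads"],
                          Path("owasp_report.json"))

Packaging such a script in a Docker container, as suggested above, keeps the entry point identical whether it runs on your own platform or on a cloud service.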

¹Note: these prizes may be subject to change
Authors’ addresses: Niels Doorn, Open Universiteit, The Netherlands, niels.doorn@ou.nl; Jeroen Pijpker, NHL Stenden, The Netherlands, jeroen.pijpker@nhlstenden.com.

Table 1. Scoring rubric

Functionality / efficiency
  0-25%: There are unnecessary fields and methods, a lot of duplicate code, and methods are not atomic.
  25-50%: There are unnecessary fields and methods. There is also duplicate code in the application.
  50-75%: There are no unnecessary fields, but there are some unnecessary methods (or vice versa).
  75-100%: There are no unused fields or methods (disregarding mutators and accessors), no duplicate code, and no unnecessary methods.

Number of vulnerabilities found
  0-25%: 0-2
  25-50%: 3-4
  50-75%: 5-6
  75-100%: 7 or more

Performance
  0-25%: A lot of performance issues.
  25-50%: Some performance issues.
  50-75%: Code performance is normal.
  75-100%: Code is optimized for performance.

Ease of use
  0-25%: Unable to execute the code.
  25-50%: Difficult to run the code.
  50-75%: The code can be run in a Docker container, with some problems.
  75-100%: The code can be run in a Docker container and starts without problems.

Visualising / presentation of the results
  0-25%: There is no presentation of the results.
  25-50%: There is some presentation of the results.
  50-75%: There is some presentation of the results and some ordering.
  75-100%: There is presentation of the results and some ordering.

OWASP Top 10 functionality
  0-25%: No OWASP Top 10 functionality.
  25-50%: Some OWASP Top 10 functionality.
  50-75%: OWASP Top 10 functionality.
  75-100%: OWASP Top 10 functionality and ordering.

Use of different algorithms (AI)
  0-25%: No AI.
  25-50%: Some AI.
  50-75%: AI makes sense.
  75-100%: Meaningful use of AI.

Comments
  0-25%: No comments.
  25-50%: Too much text in comments, unnecessary comments, bad styling.
  50-75%: Some comments, but with bad styling.
  75-100%: Neat and meaningful comments that make sense.
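Regarding the "Visualising / presentation of the results" and "OWASP Top 10 functionality" rows above, and the earlier requirement to map vulnerabilities and dependencies on a chart, the small Python sketch below (with made-up counts, purely illustrative) shows one possible way to plot findings per dependency, grouped by OWASP Top 10 category, using matplotlib:

    import matplotlib.pyplot as plt

    # Illustrative, made-up data: number of findings per dependency,
    # already grouped by OWASP Top 10 2021 category.
    findings = {
        "A03:2021 Injection": {"log4j-core": 3, "commons-text": 1},
        "A06:2021 Vulnerable and Outdated Components": {"jackson-databind": 2},
    }

    labels, counts = [], []
    for category in sorted(findings):  # stable ordering by category identifier
        for dependency, count in sorted(findings[category].items()):
            labels.append(f"{dependency}\n({category.split()[0]})")
            counts.append(count)

    # Horizontal bar chart: one bar per dependency, labelled with its category.
    plt.barh(labels, counts)
    plt.xlabel("Number of findings")
    plt.title("Findings per dependency, by OWASP Top 10 2021 category")
    plt.tight_layout()
    plt.savefig("findings_per_dependency.png")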

Program (17-18 November 2022)

17th November 2022. All times are in UTC+08:00 (Singapore)

09:00 – 09:15   Welcome, opening (A-Test organization)
09:15 – 10:15   Experience Studies and Industrial Applications | Chair: Beatriz Marín

(30 min) – An Agent-based Approach to Automated Game Testing: an Experience Report
Wishnu Prasetya, Fernando Ricos, Fitsum Meshesha Kifetew, Davide Prandi, Samira Shirzadehhajimahmood, Tanja E. J. Vos, Premek Paska, Karel Hovodska, Raihana Ferdous, Angelo Susi and Joseph Davidson

(30 min) – Automation of the Creation and Execution of System Level Hardware-In-Loop Tests through Model-Based Testing
Viktor Aronsson Karlsson, Ahmed Almasri, Eduard Paul Enoiu, Wasif Afzal and Peter Charbachi

10:15 – 11:00 | Coffee break

11:00 – 12:00   Test Automation Efficiency 1 | Chair: Ákos Kiss

(30 min) – KUBO: A Framework for Automated Efficacy Testing of Anti-Virus Behavioral Detection with Procedure-based Malware Emulation
Jakub Pruzinec, Quynh Nguyen, Adrian Baldwin, Jonathan Griffin and Yang Liu

(30 min) – Interactive Fault Localization for Python with CharmFL
Attila Szatmári, Qusay Idrees Sarhan and Árpád Beszédes

12:00 – 14:00 | Lunch break

14:00 – 15:30   Hands-on 1 | Chair: Niels Doorn

(90 min) – Interacting with Interactive Fault Localization Tools (part 1)
Ferenc Horváth, Gergő Balogh, Attila Szatmári, Qusay Idrees Sarhan, Béla Vancsics and Árpád Beszédes

15:30 – 16:00 | Coffee break

16:00 – 17:30   Hands-on 2 | Chair: Niels Doorn

(90 min) – Interacting with Interactive Fault Localization Tools (part 2)
Ferenc Horváth, Gergő Balogh, Attila Szatmári, Qusay Idrees Sarhan, Béla Vancsics and Árpád Beszédes


18th November 2022. All times are in UTC+08:00 (Singapore)

09:00 – 10:30   Keynote | Chair: Ákos Kiss

(90 min) – Mutation Testing in Evolving Systems
Michail Papadakis

10:30 – 11:00 | Coffee break

11:00 – 12:30   Test Automation Efficiency 2 | Chair: Niels Doorn

(30 min) – An Online Agent-based Search Approach in Automated Computer Game Testing with Model Construction
Samira Shirzadehhajimahmood, Wishnu Prasetya, Frank Dignum and Mehdi Dastani

(30 min) – OpenGL API Call Trace Reduction with the Minimizing Delta Debugging Algorithm
Daniella Bársony

(30 min) – Iterating the Minimizing Delta Debugging Algorithm
Dániel Vince

12:30 – 14:00 | Lunch break

14:00 – 15:00   Best Practices for Testing | Chair: Beatriz Marín

(30 min) – Guidelines for GUI testing maintenance: a linter for test smell detection
Tommaso Fulcini, Giacomo Garaccione, Riccardo Coppola, Luca Ardito and Marco Torchiano

(30 min) – Academic search engines: constraints, bugs and recommendations
Zheng Li and Austen Rainer

15:00 – 15:30   Closing

(15 min) – Announcement of Student Competition Winners & Closing
Ákos Kiss, Beatriz Marín and Niels Doorn

Important Dates

 

  • Abstract Submission deadline: July 24th, 2022 (not mandatory)
  • Submission deadline: July 28th, 2022
  • Author notification: September 2nd, 2022
  • Camera-ready Submission: September 9th, 2022
  • Student Competition Submission: TBD
  • Workshop Notification: August 31st, 2022
  • Workshop: November 17-18, 2022

All dates are 23:59:59 AoE

Call for Papers

 

Authors are invited to submit papers to the workshop, and present and discuss them at the event on topics related to automated software testing. Paper submissions can be of the following types:

Full papers (max 8 pages, including references) describing original, complete, and validated research – either empirical or theoretical – in A-TEST related techniques, tools, or industrial case studies.

Work-in-progress papers (max. 4 pages) describing novel, interesting, and high-potential work in progress that has not necessarily reached full completion (e.g., is not completely validated).

Tool papers (max. 4 pages) presenting a testing tool in a way that it could be presented to industry as the start of a successful technology transfer.

Technology transfer papers (max. 4 pages) describing industry-academia cooperation.

Position papers (max. 2 pages) that analyse trends and raise issues of importance. Position papers are intended to generate discussion and debate during the workshop.

Topics of interest include, but are not limited to:

  • Techniques and tools for automating test case design, generation, and selection, e.g., model-based approaches, mutation approaches, metamorphic approaches, combinatorial based approaches, search-based approaches, symbolic-based approaches, chaos testing, machine learning testing.
  • New trends in the use of machine learning (ML) and artificial intelligence (AI) to improve test automation, and new approaches to test ML/AI-based systems.
  • Test case and test process optimization.
  • Test case evolution, repair, and reuse.
  • Test case evaluation and metrics.
  • Test case design, selection, and evaluation in emerging domains like graphical user interfaces, social networks, the cloud, games, security, cyber-physical systems, or extended reality.
  • Case studies and empirical studies that evaluate an existing technique or tool on real systems (not only toy problems), showing the quality of the resulting test cases compared to other approaches.
  • Experience/industry reports.
  • Education in (automating) testing.

Call for hands-on

A-TEST also offers an opportunity to introduce novel testing techniques or tools to the audience in active hands-on sessions. The strong focus on tools in this session complements the traditional conference style of presenting research papers, in which a deep discussion of technical components of the implementation is usually missing. Furthermore, presenting the actual tools and implementations of testing techniques in the hands-on sessions allows other researchers and practitioners to reproduce research results and to apply the latest testing techniques in practice. The invited proposals should be:
hands-on proposals (max. 2 pages) that describe how the session (with a preferred time frame of 3 hours) will be conducted.

Submission

All submissions must be in English and in PDF format. At the time of submission, all papers must conform to the ESEC/FSE 2022 Format and Submission Guidelines. A-TEST 2022 will employ a single-blind review process.

Papers and proposals will be submitted through EasyChair: 

https://easychair.org/conferences/?conf=atest22

Each submission will be reviewed by at least three members of the program committee. Full papers will be evaluated on the basis of originality, importance of contribution, soundness, evaluation, quality of presentation, and appropriate comparison to related work. Work-in-progress and position papers will be reviewed with respect to relevance and their ability to start up fruitful discussions. Tool and technology transfer papers will be evaluated based on improvement on the state-of-the-practice and clarity of lessons learned.

Submitted papers must not have been published elsewhere and must not be under review or submitted for review elsewhere during the duration of consideration. To prevent double submissions, the chairs may compare the submissions with related conferences that have overlapping review periods. The double submission restriction applies only to refereed journals and conferences, not to unrefereed pre-publication archive servers (e.g., arXiv.org). Submissions that do not comply with the foregoing instructions will be desk rejected without being reviewed.

All papers must be prepared in ACM Conference Format.

All accepted contributions will appear in the ACM Digital Library, providing a lasting archived record of the workshop proceedings. At least one author of each accepted paper must register and present the paper in person at A-TEST 2022 in order for the paper to be published in the proceedings.

Organization Committee

A-TEST TEAM

General Chairs

Ákos Kiss (University of Szeged, Hungary)

 

Program Co-Chairs

Beatriz Marín (Universitat Politècnica de València, Spain)

Mehrdad Saadatmand (RISE Research Institutes of Sweden, Sweden)

Student Competition Chairs

Niels Doorn (Open Universiteit, The Netherlands)

Jeroen Pijpker (NHL Stenden, The Netherlands)

Publicity & Web Chair

Antony Bartlett (TU Delft, The Netherlands)

All questions about submissions should be emailed to the workshop organisers at atest2022@easychair.org

Programme Committee

 

Rui Abreu INESC-ID & U.Porto
Domenico Amalfitano University of Naples Federico II
Nuno Antunes University of Coimbra
Alexandre Bergel University of Chile
Matteo Biagiola Università della Svizzera italiana
Markus Borg RISE SICS AB
Mariano Ceccato University of Verona
Marcio Eduardo Delamaro University of São Paulo
Julian Dolby IBM
Lydie du Bousquet LIG
Anna Rita Fasolino University of Naples Federico II
Mattia Fazzini University of Minnesota
Alessio Gambi IMC University of Applied Science Krems, Austria
Tamás Gergely Department of Software Engineering, University of Szeged
Mahshid Helali Moghadam RISE Research Institutes of Sweden
Onur Kilincceker Department of Computer Science, Antwerp
Maurizio Leotta Università di Genova
Alexandra Mendes FEUP
Ana Paiva University of Porto
Gordana Rakic Faculty of Science, Novi Sad
Rudolf Ramler Software Competence Center Hagenberg
Filippo Ricca DIBRIS, Università di Genova – Italy
Martin Schneider Fraunhofer
Gerson Sunyé Université de Nantes
Guilherme Travassos COPPE/UFRJ
Jan Tretmans TNO – Embedded Systems Innovation
Michele Tufano College of William and Mary