The workshop will be held in Singapore on 14–18 November 2022.

Co-located with ESEC/FSE 2022 

About

For the past twelve years, the Workshop on Automating Test Case Design, Selection and Evaluation (A-TEST) has provided a venue for researchers and industry members alike to exchange and discuss trending views, ideas, state of the art, work in progress, and scientific results on automated testing. Following the success of the past six years, the 13th edition of A-TEST will again be co-located with ESEC/FSE 2022. A-TEST 2022 is planned to take place over two days, preferably in person in Singapore (if the COVID-19 situation allows).

Aims and scope

Trends such as globalisation, standardisation, and shorter life-cycles place great demands on the flexibility of the software industry. In order to compete and cooperate on an international scale, a constantly decreasing time to market and an increasing level of quality are essential. Testing is currently the most important and most widely used quality assurance technique applied in industry. However, the complexity of software, and hence the effort required to develop and test it, keeps increasing.

Even though many test automation tools are currently available to aid test planning and control, as well as test case execution and monitoring, all these tools share a similarly passive philosophy towards test case design, selection of test data, and test evaluation.

They leave these crucial, time-consuming, and demanding activities to the human tester. This is not without reason: test case design and test evaluation through oracles are difficult to automate with the techniques available in current industrial practice. The domain of possible inputs (potential test cases), even for a trivial method, program, model, user interface, or service, is typically too large to be explored exhaustively. Consequently, one of the major challenges of test case design is selecting test cases that are effective at finding flaws without requiring an excessive number of tests to be executed. Automating the entire test process requires new thinking that goes beyond test design or specific test execution tools. These are the problems that this workshop aims to address.

Important Dates

  • Abstract submission deadline: July 24, 2022 (optional)
  • Submission deadline: July 28, 2022
  • Author notification: September 2, 2022
  • Camera-ready submission: September 9, 2022 (hard deadline)
  • Student competition submission: TBD
  • Workshop: TBD

All deadlines are 23:59:59 AoE (Anywhere on Earth).

Call for Papers

Authors are invited to submit papers on topics related to automated software testing, and to present and discuss them at the event. Paper submissions can be of the following types:

Full papers (max. 8 pages) describing original, complete, and validated research – either empirical or theoretical – on A-TEST-related techniques, tools, or industrial case studies.

Work-in-progress papers (max. 4 pages) that describe novel, interesting, and high-potential work in progress that has not necessarily reached full completion (e.g., is not yet fully validated).

Tool papers (max. 4 pages) presenting a testing tool in a way that it could be presented to industry as the start of a successful technology transfer.

Technology transfer papers (max. 4 pages) describing industry-academia co-operation.

Position papers (max. 2 pages) that analyse trends and raise issues of importance. Position papers are intended to generate discussion and debate during the workshop.

Topics of interest include, but are not limited to:

  • Techniques and tools for automating test case design, generation, and selection, e.g., model-based approaches, mutation approaches, metamorphic approaches, combinatorial-based approaches, search-based approaches, symbolic-based approaches, chaos testing, machine learning testing.
  • New trends in the use of machine learning (ML) and artificial intelligence (AI) to improve test automation, and new approaches to test ML/AI techniques.
  • Test case optimization.
  • Test case evaluation and metrics.
  • Test case design, selection, and evaluation in emerging domains such as graphical user interfaces, social networks, the cloud, games, security, cyber-physical systems, or extended reality.
  • Case studies and empirical studies that evaluate an existing technique or tool on real systems (not only toy problems), showing the quality of the resulting test cases compared to other approaches.
  • Experience/industry reports.
  • Education on (automated) testing.

Call for hands-on

A-TEST also offers an opportunity to introduce novel testing techniques or tools to the audience in active hands-on sessions. The strong focus on tools in these sessions complements the traditional conference style of presenting research papers, in which a deep discussion of the technical components of the implementation is usually missing. Furthermore, presenting the actual tools and implementations of testing techniques in the hands-on sessions allows other researchers and practitioners to reproduce research results and to apply the latest testing techniques in practice. Proposals should be:

Hands-on proposals (max. 2 pages) that describe how the session (with a preferred time frame of 3 hours) will be conducted.

Submission

All submissions must be in English and in PDF format. At the time of submission, all papers must conform to the ESEC/FSE 2022 Format and Submission Guidelines.

A-TEST 2022 will employ a single-blind review process. Papers and hands-on proposals must be submitted electronically through EasyChair.

Each submission will be reviewed by at least three members of the program committee. Full papers will be evaluated on the basis of originality, importance of contribution, soundness, evaluation, quality of presentation, and appropriate comparison to related work. Work-in-progress and position papers will be reviewed with respect to relevance and their ability to spark fruitful discussions. Tool and technology transfer papers will be evaluated based on improvement over the state of the practice and clarity of lessons learned.

Submitted papers must not have been published elsewhere and must not be under review or submitted for review elsewhere during the duration of consideration. To prevent double submissions, the chairs may compare the submissions with related conferences that have overlapping review periods. The double submission restriction applies only to refereed journals and conferences, not to un-refereed pre-publication archive servers (e.g., arXiv.org). Submissions that do not comply with the foregoing instructions will be desk rejected without being reviewed.

All accepted contributions will appear in the ACM Digital Library, providing a lasting archived record of the workshop proceedings. At least one author of each accepted paper must register and present the paper in person at A-TEST 2022 in order for the paper to be published in the proceedings.

Organization Committee


General Chair

Ákos Kiss (University of Szeged, Hungary)

 

Program Chairs

Beatriz Marín (Universitat Politècnica de València, Spain)

Mehrdad Saadatmand (RISE Research Institutes of Sweden, Sweden)

 

Student Competition Chairs

Niels Doorn (Open Universiteit, The Netherlands)

Jeroen Pijpker (NHL Stenden, The Netherlands)

Publicity & Web Chair

Antony Bartlett (TU Delft, The Netherlands)

All questions about submissions should be emailed to the workshop organizers at atest2022@easychair.org.