For the past thirteen years, the Workshop on Automating Test Case Design, Selection and Evaluation (A-TEST) has provided a venue for researchers and industry members alike to exchange and discuss trending views, ideas, state of the art, work in progress, and scientific results on automated testing.
This year, the 14th edition of A-TEST will be organized as an in-person workshop co-located with ASE 2023, in Kirchberg, Luxembourg, on September 15.
Testing is currently the most important and most widely used quality assurance technique in the software industry. However, the complexity of software, and hence the effort required to develop and test it, keeps increasing.
Even though many test automation tools are currently available to aid test planning and control as well as test case execution and monitoring, all of these tools share a similarly passive philosophy towards test case design, test data selection, and test evaluation: they leave these crucial, time-consuming, and demanding activities to the human tester. This is not without reason; test case design and test evaluation through oracles are difficult to automate with the techniques available in current industrial practice. The domain of possible inputs (potential test cases), even for a trivial method, program, model, user interface, or service, is typically too large to be explored exhaustively. Consequently, one of the major challenges in test case design is selecting test cases that are effective at finding flaws without requiring an excessive number of tests to be carried out. Automating the entire test process requires new thinking that goes beyond test design or specific test execution tools. These are the problems that this workshop aims to address.
| Time | Program |
| --- | --- |
| 08:45 – 10:00 | Session 1 – Opening and Keynote |
| 08:45 – 09:00 | Welcome |
| 09:00 – 10:00 | Keynote: Present to Future: Forging the Future of Quantum Software Testing, Shaukat Ali, Simula Research Laboratory and Oslo Metropolitan University |
| 10:00 – 10:30 | Coffee break |
| 10:30 – 12:00 | Session 2 – Technical papers |
| 10:30 – 11:00 | Evaluating the Effectiveness of Neuroevolution for Automated GUI-Based Software Testing, Daniel Zimmermann, Patrick Deubel, Anne Koziolek |
| 11:00 – 11:30 | Automated Test Case Generation for Service Composition from Event Logs, Sebastien Salva, Jarod Sue |
| 11:30 – 12:00 | Exploring Android Apps Using Motif Actions, Michael Auer, Gordon Fraser |
| 12:00 – 13:30 | Lunch |
| 13:30 – 15:00 | Session 3 – Technical papers |
| 13:30 – 14:00 | Improving AFLGo’s directed fuzzing by considering indirect function calls, Fabian Jezuita |
| 14:00 – 14:30 | Static Test Case Prioritization strategies for Grammar-based Testing, Moeketsi Raselimo, Lars Grunske, Bernd Fischer |
| 14:30 – 15:00 | Test Case Recommendations with Distributed Representation of Code Syntactic Features, Mosab Rezaei, Hamed Alhoori, Mona Rahimi |
| 15:00 – 15:30 | Coffee break |
| 15:30 – 17:00 | Session 4 – Technical papers and closure |
| 15:30 – 15:50 | Continuous Domain Input Abstraction and Fault Detection Capability in Combinatorial Testing, Yavuz Koroglu, Franz Wotawa |
| 15:50 – 16:10 | GUI-Based Software Testing: An Automated Approach Using GPT-4 and Selenium WebDriver, Daniel Zimmermann, Anne Koziolek |
| 16:10 – 16:30 | Chouette: An Automated Cross-Platform UI Crawler for Improving App Quality, Terrence Wong |
| 16:30 – 16:50 | An Empirical Study on the Adoption of Scripted GUI Testing for Android Apps, Ruizhen Gu, José Miguel Rojas |
| 16:50 – 17:00 | Closure |
Keynote: Present to Future: Forging the Future of Quantum Software Testing
Shaukat Ali, Simula Research Laboratory and Oslo Metropolitan University
Abstract: Currently, gate-based quantum computers are the most popular kind, developed by prominent companies such as IBM, Google, and IQM. These quantum computers are programmed using quantum circuits. Ensuring the correctness of these quantum circuits, i.e., that they perform their intended computations, is currently the focus of quantum software testing. First, this keynote will cover the latest developments in quantum software testing, such as coverage criteria, search-based testing, mutation testing, and metamorphic testing. Next, it will discuss some key future research directions in quantum software testing that deserve the community’s attention.
Biography: Shaukat Ali is a Chief Research Scientist, Research Professor, and Head of the Department at Simula Research Laboratory, Norway. He is also an adjunct associate professor at Oslo Metropolitan University, Oslo, Norway. He focuses on devising novel methods for verifying and validating classical and quantum software systems. He has led many national and European projects related to testing, search-based software engineering, model-based system engineering, and quantum software engineering.
- Abstract submission deadline: July 14, 2023 (optional)
- Submission deadline: July 23, 2023
- Author notification: August 11, 2023
- Camera-ready submission: August 18, 2023 (hard deadline)
- Workshop: September 15, 2023
All deadlines are 23:59:59 Anywhere on Earth (AoE).
Call for Papers
Authors are invited to submit papers on topics related to automated software testing, and to present and discuss them at the event. Paper submissions can be of the following types:
- Full papers (max. 8 pages, including references) describing original, complete, and validated research – either empirical or theoretical – in A-TEST related techniques, tools, or industrial case studies.
- Work-in-progress papers (max. 4 pages) that describe novel, interesting, and high-potential work in progress, but not necessarily reaching full completion (e.g., not completely validated).
- Tool papers (max. 4 pages) presenting a testing tool as it could be presented to industry, as the start of a successful technology transfer.
- Technology transfer papers (max. 4 pages) describing industry-academia co-operation.
- Position papers (max. 2 pages) that analyse trends and raise issues of importance. Position papers are intended to generate discussion and debate during the workshop.
Topics of interest include, but are not limited to:
- Techniques and tools for automating test case design, generation, and selection, e.g., model-based approaches, mutation approaches, metamorphic approaches, combinatorial-based approaches, search-based approaches, symbolic-based approaches, chaos testing, machine learning testing.
- New trends in the use of machine learning (ML) and artificial intelligence (AI) to improve test automation, and new approaches to automate the test of AI/ML-driven systems.
- Test case and test process optimization.
- Test case evolution, repair, and reuse.
- Test case evaluation and metrics.
- Test case design, selection, and evaluation in emerging domains like natural user interfaces, cloud, edge and IoT-based systems, cyber-physical systems, social networks, games, security, and extended reality.
- Case studies that have evaluated an existing technique or tool on real systems (not only toy problems), to show its benefits as compared to other approaches.
- Experience/industry reports.
- Education on software testing.
All submissions must be in English and in PDF format. At the time of submission, all papers must conform to the ASE 2023 Format and Submission Guidelines. A-TEST 2023 will employ a single-blind review process.
Contributions must be submitted through EasyChair: https://easychair.org/conferences/?conf=atest2023
Each submission will be reviewed by at least three members of the program committee. Full papers will be evaluated on the basis of originality, importance of contribution, soundness, evaluation, quality of presentation, and appropriate comparison to related work. Work-in-progress and position papers will be reviewed with respect to relevance and their potential to spark fruitful discussions. Tool and technology transfer papers will be evaluated based on improvement over the state of the practice and clarity of lessons learned.
Submitted papers must not have been published elsewhere and must not be under review or submitted for review elsewhere during the duration of consideration. To prevent double submissions, the chairs may compare the submissions with related conferences that have overlapping review periods. The double submission restriction applies only to refereed journals and conferences, not to unrefereed pre-publication archive servers (e.g., arXiv.org). Submissions that do not comply with the foregoing instructions will be desk rejected without being reviewed.
All accepted contributions will appear in the ACM Digital Library, providing a lasting archived record of the workshop proceedings. At least one author of each accepted paper must register and present the paper in person at A-TEST 2023 in order for the paper to be published in the proceedings.
All questions about submissions should be emailed to the workshop organisers at email@example.com
João Pascoal Faria (University of Porto, Portugal)
Anna Rita Fasolino (University of Naples Federico II, Italy)
Freek Verbeek (Open University, The Netherlands)
Student Competition Chair
Kevin Moran (George Mason University, USA)
Publicity & Web Chair
Bruno Lima (University of Porto, Portugal)
| Name | Affiliation |
| --- | --- |
| Ana Paiva | University of Porto |
| Bao N. Nguyen | University of Leeds |
| Domenico Amalfitano | University of Naples |
| Filippo Ricca | DIBRIS, Università di Genova |
| Gerson Sunyé | Université de Nantes |
| Gordana Rakic | University of Novi Sad |
| Guilherme Travassos | COPPE / Federal University of Rio de Janeiro |
| Jan Tretmans | TNO – Embedded Systems Innovation |
| José Campos | University of Porto |
| Lydie du Bousquet | Université Grenoble Alpes |
| Mahshid Helali Moghadam | Scania & Mälardalen University |
| Mariano Ceccato | University of Verona |
| Matteo Biagiola | Università della Svizzera italiana |
| Nico Naus | Virginia Tech |
| Onur Kilincceker | University of Antwerp |
| Porfirio Tramontana | University of Napoli Federico II |
| Rudolf Ramler | Software Competence Center Hagenberg |
| Tamás Gergely | University of Szeged |
The A-TEST workshop has evolved over the years, with 13 editions held since 2009. The first editions, under the name ATSE (2009 and 2011), took place at CISTI (Conference on Information Systems and Technologies, http://www.aisti.eu/). The three subsequent editions (2012, 2013, and 2014) took place at FEDCSIS (Federated Conference on Computer Science and Information Systems, http://www.fedcsis.org). In 2015, two events were held: ATSE 2015 at SEFM and A-TEST 2015 at FSE.
In 2016, the two events were merged at FSE, resulting in the 7th edition of A-TEST.
The 8th edition of A-TEST in 2017 was co-located with the 12th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2017, in Paderborn, Germany.
The 9th edition of A-TEST in 2018 was co-located with the 13th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2018, in Lake Buena Vista, Florida, United States.
The 10th edition of A-TEST in 2019 was co-located with the 14th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2019, in Tallinn, Estonia.
The 11th edition of A-TEST in 2020 was an online workshop due to the COVID-19 pandemic, co-located with the 15th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2020, which was also organized virtually.
The 12th edition of A-TEST in 2021 was an online workshop due to the COVID-19 pandemic, co-located with the 16th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2021, which was also organized virtually.
The 13th edition of A-TEST in 2022 was co-located with the 17th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2022, in Singapore.