The A-TEST workshop aims to provide a venue for researchers and practitioners from industry to exchange and discuss trending views, ideas, state-of-the-art work in progress, and scientific results on automated testing.
Modern software teams seek a delicate balance between two opposing forces: striving for reliability and striving for agility. Software teams need tools to strike the right balance by increasing the development speed without sacrificing quality. Automated testing tools play an important role in obtaining this balance.
A-TEST has successfully run 10 editions since 2009. During the 2017, 2018, and 2019 editions, which were co-located with ESEC/FSE, we introduced hands-on sessions in which testing tools could be studied in depth. Given the many positive reactions we received, this year we will hold them again, in the form of live online tutorials.
Corina Pasareanu is a distinguished researcher at NASA Ames and Carnegie Mellon University. She is affiliated with KBR and CMU’s CyLab and holds a courtesy appointment in Electrical and Computer Engineering. At Ames, she is developing and extending Symbolic PathFinder, a symbolic execution tool for Java bytecode. Her research interests include model checking and automated testing, compositional verification, model-based development, probabilistic software analysis, and autonomy and security. She is the recipient of several awards, including the ASE Most Influential Paper Award (2018), the ESEC/FSE Test of Time Award (2018), the ISSTA Retrospective Impact Paper Award (2018), recognition as an ACM Distinguished Scientist (2016), the ACM Impact Paper Award (2010), and the ICSE Most Influential Paper Award (2010). She has served as Program/General Chair for several conferences, including FM 2021, ICST 2020, ISSTA 2020, ESEC/FSE 2018, CAV 2015, ISSTA 2014, ASE 2011, and NFM 2009. She is currently an associate editor of IEEE Transactions on Software Engineering (TSE).
Aldeida is a Senior Lecturer and 2013 Australian Research Council DECRA Fellow at the Faculty of Information Technology, Monash University, Australia, where she leads the software engineering discipline group. Aldeida’s research is in the areas of Software Engineering and Artificial Intelligence, with a particular focus on Search-Based Software Testing. Aldeida has published in top AI, optimisation, and software engineering venues, and has served as a PC member and organising committee member at both SE and optimisation conferences, such as ASE, ICSE, GECCO, and IJCAI.
- Submission deadline (EXTENDED): July 27, 2020 (abstract) and July 31, 2020 (paper)
- Author notification: September 4, 2020
- Camera-ready: September 18, 2020
Emil Alégroth – Blekinge Institute of Technology
Domenico Amalfitano – University of Naples Federico II
Markus Borg – RISE SICS AB
Mariano Ceccato – University of Verona
Márcio Eduardo Delamaro – Universidade de São Paulo
Lydie Du Bousquet – LIG
M.J. Escalona – University of Seville
Leire Etxeberria – Mondragon Unibertsitatea
João Faria – FEUP, INESC TEC
Anna Rita Fasolino – University of Naples Federico II
Onur Kilincceker – Mugla Sitki Kocman University
Yvan Labiche – Carleton University
Bruno Legeard – Smartesting
Maurizio Leotta – Università di Genova
Sam Malek – University of California, Irvine
Kevin Moran – College of William & Mary
Rafael Oliveira – USP
Ana Paiva – University of Porto
Ali Parsai – University of Antwerp
Wishnu Prasetya – Utrecht University
Rudolf Ramler – Software Competence Center Hagenberg
Filippo Ricca – DIBRIS, Università di Genova
Martin Schneider – Fraunhofer FOKUS
Jan Tretmans – TNO – Embedded Systems Innovation
Man Zhang – Kristiania University College
Call for Papers
We invite you to submit a paper on any topic related to automated software testing, and to present and discuss it at the workshop itself.
Full paper (up to 7 pages, including references) describing original and completed research.
Short paper (up to 4 pages, including references) for example:
– position statements, intended to generate discussion and debate;
– work-in-progress papers, describing novel work that has not necessarily reached full completion;
– technology-transfer papers, describing university-industry cooperation;
– tool papers, describing a test tool. Note that authors of accepted tool papers are expected to give a demo at the workshop.
- Techniques and tools for automating test case design, generation, and selection, e.g., model-based, combinatorial, search-based, and symbolic approaches, machine learning, and artificial intelligence.
- New trends in the use of machine learning and artificial intelligence to improve test automation.
- Test case optimisation.
- Test case evaluation and metrics.
- Test case design, selection, and evaluation in emerging domains, e.g., graphical user interfaces, social networks, the cloud, games, security, and cyber-physical systems.
- Case studies that have evaluated an existing technique or tool on real systems, not only toy problems, to show the quality of the resulting test cases compared to other approaches.
- Search-based testing approaches.
Call for online tutorials
A-TEST also offers an opportunity to introduce your novel testing technique or tool to its audience in an online tutorial session. This is an excellent opportunity to get more people involved with your technique or tool. You are invited to submit proposals for sessions of 45 minutes to 1 hour. Proposals are limited to 2 pages and should include a description of the topic and a schedule for the session.
Papers and proposals will be submitted through EasyChair:
Each paper will be reviewed by at least three referees. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this workshop. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.
All papers must be prepared in ACM Conference Format.
Papers accepted for the workshop will appear in the ACM digital library, providing a lasting archived record of the workshop proceedings.
The A-TEST workshop has evolved over the years and has successfully run 10 editions since 2009. The first editions, which went by the name ATSE (2009 and 2011), took place at CISTI (Conference on Information Systems and Technologies, http://www.aisti.eu/). The three subsequent editions (2012, 2013, and 2014) were held at FEDCSIS (Federated Conference on Computer Science and Information Systems, http://www.fedcsis.org). In 2015 there were two events in the same year: ATSE 2015 at SEFM and A-TEST 2015 at FSE.
In 2016 we merged the events at FSE, resulting in the 7th edition of A-TEST.
The 8th edition of A-TEST in 2017 was co-located with the 12th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2017, in Paderborn, Germany.
The 9th edition of A-TEST in 2018 was co-located with the 13th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2018, in Lake Buena Vista, Florida, United States.
The 10th edition of A-TEST in 2019 was co-located with the 14th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2019, in Tallinn, Estonia.