The A-TEST workshop aims to provide a venue for researchers and industry practitioners to exchange and discuss trending views, ideas, state-of-the-art work in progress, and scientific results on automated testing.
Modern software teams seek a delicate balance between two opposing forces: striving for reliability and striving for agility. Software teams need tools that help them strike the right balance, increasing development speed without sacrificing quality. Automated testing tools play an important role in achieving this balance.
A-TEST has successfully run 10 editions since 2009. During the 2017, 2018, and 2019 editions, which were co-located with ESEC/FSE, we introduced hands-on sessions in which testing tools could be studied in depth. Thanks to the many positive reactions we received, this year we will offer them again, in the form of live online tutorials.
Program to be defined.
List of accepted papers:
Navigation and Exploration in 3D-Game Automated Play Testing
I. S. W. B. Prasetya, Maurin Voshol, Tom Tanis, Adam Smits, Bram Smit, Jacco van Mourik, Menno Klunder, Frank Hoogmoed, Stijn Hinlopen, August van Casteren, Jesse van de Berg, Naraenda G.W.Y. Prasetya, Samira Shirzadehhajimahmood, and Saba Gholizadeh Ansari
(Utrecht University, Netherlands)
Fuzz4B: A Front-End to AFL Not Only for Fuzzing Experts
Ryu Miyaki, Norihiro Yoshida, Natsuki Tsuzuki, Ryota Yamamoto, and Hiroaki Takada
(Nagoya University, Japan)
Towards Automated Testing of RPA Implementations
Marina Cernat, Adelina Nicoleta Staicu, and Alin Stefanescu
(University of Bucharest, Romania; Cegeka, Romania)
Corina Pasareanu is a distinguished researcher at NASA Ames and Carnegie Mellon University. She is affiliated with KBR and CMU’s CyLab and holds a courtesy appointment in Electrical and Computer Engineering. At Ames, she is developing and extending Symbolic PathFinder, a symbolic execution tool for Java bytecode. Her research interests include model checking and automated testing, compositional verification, model-based development, probabilistic software analysis, and autonomy and security. She is the recipient of several awards, including ASE Most Influential Paper Award (2018), ESEC/FSE Test of Time Award (2018), ISSTA Retrospective Impact Paper Award (2018), ACM Distinguished Scientist (2016), ACM Impact Paper Award (2010), and ICSE 2010 Most Influential Paper Award (2010). She has been serving as Program/General Chair for several conferences including: FM 2021, ICST 2020, ISSTA 2020, ESEC/FSE 2018, CAV 2015, ISSTA 2014, ASE 2011, and NFM 2009. She is currently an associate editor for the IEEE TSE journal.
SafeDNN: Understanding and Verifying Neural Networks
The SafeDNN project at NASA Ames explores new techniques and tools to ensure that systems that use Deep Neural Networks (DNN) are safe, robust, and interpretable. Research directions we are pursuing in this project include: symbolic execution for DNN analysis; label-guided clustering to automatically identify input regions that are robust; parallel and compositional approaches to improve formal SMT-based verification; property inference and automated program repair for DNNs; adversarial training and detection; and probabilistic reasoning for DNNs. In this talk, I will highlight some of the research advances from SafeDNN.
Aldeida is a Senior Lecturer and 2013 Australian Research Council DECRA Fellow at the Faculty of Information Technology, Monash University in Australia, where she leads the software engineering discipline group. Aldeida’s research is in the areas of Software Engineering and Artificial Intelligence, with a particular focus on Search-Based Software Testing. Aldeida has published in top AI, optimisation, and software engineering venues, and has served as a PC member and on organising committees at both SE and optimisation conferences, such as ASE, ICSE, GECCO, and IJCAI.
The effectiveness of automated software testing techniques
With the rise of AI-based systems, such as self-driving cars, Google Search, and automated decision-making systems, new challenges have emerged for the testing community. Verifying such software systems is becoming an extremely difficult and expensive task, often constituting up to 90% of software expenses. Software in a self-driving car, for example, must operate safely in an infinite number of scenarios, which makes it extremely hard to find bugs in such systems. In this talk, I will explore some of these challenges and introduce our work, which aims at improving the bug-detection capabilities of automated software testing. First, I will talk about a framework that maps the effectiveness of automated software testing techniques by identifying software features that impact the ability of these techniques to achieve high code coverage. Next, I will introduce our latest work, which incorporates defect prediction information to improve the efficiency of search-based software testing in detecting software bugs.
Peter Schrammel (Diffblue Ltd – University of Sussex, Brighton)
Diffblue Cover is a tool for automatically creating unit tests for Java code. Unlike the output of typical test generators, which smells machine-generated, Diffblue Cover aims to write tests in an idiomatic coding style. The goal is to pass the Turing test: can you distinguish whether a test has been written by a machine or by a developer?
This tutorial will introduce you to Diffblue Cover and show how to use it in your daily work on your Java code base. Diffblue Cover Community Edition is available as an IntelliJ plugin. Please install it on your machine prior to the tutorial so that you can actively follow along and interact: https://plugins.jetbrains.com/plugin/14946-diffblue-cover.
Michel Nass, Emil Alegroth (Blekinge Institute of Technology)
Test automation is a staple of modern software development, but graphical user interface testing is associated with challenges that limit its use. In this technical report we introduce Scout, a tool that combines and revises previous technologies into a common tool with the potential to mitigate the challenges encountered by previous GUI testing approaches.
- Submission deadline (EXTENDED): July 27, 2020 (abstract) and July 31, 2020 (paper)
- Author notification: September 4, 2020
- Camera-ready: September 18, 2020
Emil Alégroth – Blekinge Institute of Technology
Domenico Amalfitano – University of Naples Federico II
Markus Borg – RISE SICS AB
Mariano Ceccato – University of Verona
Márcio Eduardo Delamaro – Universidade de São Paulo
Lydie Du Bousquet – LIG
M.J. Escalona – University of Seville
Leire Etxeberria – Mondragon Unibertsitatea
João Faria – FEUP, INESC TEC
Anna Rita Fasolino – University of Naples Federico II
Onur Kilincceker – Mugla Sitki Kocman University
Yvan Labiche – Carleton University
Bruno Legeard – Smartesting
Maurizio Leotta – Università di Genova
Sam Malek – University of California, Irvine
Kevin Moran – College of William & Mary
Ana Paiva – University of Porto
Ali Parsai – University of Antwerp
Wishnu Prasetya – Utrecht University
Rudolf Ramler – Software Competence Center Hagenberg
Filippo Ricca – DIBRIS, Università di Genova
Martin Schneider – Fraunhofer FOKUS
Jan Tretmans – TNO – Embedded Systems Innovation
Man Zhang – Kristiania University College
Call for Papers
We invite you to submit a paper on any topic related to automated software testing, and to present and discuss it at the workshop itself.
Full paper (up to 7 pages, including references) describing original and completed research.
Short paper (up to 4 pages, including references), for example:
– position statement – intended to generate discussion and debate.
– work-in-progress paper – describing novel work that has not necessarily reached full completion.
– technology transfer paper – describing university-industry cooperation.
– tool paper – describing a test tool. Note that accepted tool papers are expected to give a demo at the workshop.
- Techniques and tools for automating test case design, generation, and selection, e.g. model-based, combinatorial, search-based, and symbolic approaches, machine learning, artificial intelligence.
- New trends in the use of machine learning and artificial intelligence to improve test automation.
- Test case optimization.
- Test case evaluation and metrics.
- Test case design, selection, and evaluation in emerging domains, e.g. graphical user interfaces, social networks, cloud, games, security, cyber-physical systems.
- Case studies that have evaluated an existing technique or tool on real systems, not only toy problems, to show the quality of the resulting test cases compared to other approaches.
- Search-based testing approaches.
Call for online tutorials
A-TEST also offers an opportunity to introduce your novel testing technique or tool to its audience in an online tutorial session. This is an excellent opportunity to get more people involved with your technique or tool. You are invited to submit proposals for sessions of 45 minutes to 1 hour. Proposals are limited to 2 pages and should include a topic description and your session schedule.
Papers and proposals will be submitted through EasyChair:
Each paper will be reviewed by at least three referees. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this workshop. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.
All papers must be prepared in ACM Conference Format.
Papers accepted for the workshop will appear in the ACM digital library, providing a lasting archived record of the workshop proceedings.
The A-TEST workshop has evolved over the years and has successfully run 10 editions since 2009. The first editions, which went by the name ATSE (2009 and 2011), took place at CISTI (Conference on Information Systems and Technologies, http://www.aisti.eu/). The three subsequent editions (2012, 2013, and 2014) were held at FedCSIS (Federated Conference on Computer Science and Information Systems, http://www.fedcsis.org). In 2015, there were two events: ATSE 2015 at SEFM and A-TEST 2015 at FSE.
In 2016 the two events were merged at FSE, resulting in the 7th edition of A-TEST.
The 8th edition of A-TEST in 2017 was co-located with the 12th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2017 in Paderborn, Germany.
The 9th edition of A-TEST in 2018 was co-located with the 13th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2018 in Lake Buena Vista, Florida, United States.
The 10th edition of A-TEST in 2019 was co-located with the 14th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2019 in Tallinn, Estonia.