7th Workshop on Automated Software Testing (A-TEST), 2016

Co-located with the ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE), 2016

About

The A-TEST workshop aims to provide a venue for researchers and industry practitioners to exchange and discuss trending views, ideas, state-of-the-art work in progress, and scientific results on automated testing.

Modern software teams seek a delicate balance between two opposing forces: striving for reliability and striving for agility. They need tools that help strike the right balance by increasing development speed without sacrificing quality. Automated testing tools play an important role in achieving this balance.

Important Dates

Paper submission deadline: July 15, 2016 (extended from July 1)
Notification: August 12, 2016
Camera-ready deadline: September 15, 2016
Workshop: November 18, 2016

Final Program

Session 1 (9:00 – 10:30)

  • Opening
  • Multilevel Coarse-to-Fine-Grained Prioritization For GUI And Web Applications
  • EventFlowSlicer: Goal Based Test Generation for Graphical User Interfaces
  • PredSym: Estimating Software Testing Budget for a Bug-free Release

Break (10:30 – 11:00)


Session 2 (11:00 – 12:30)

  • The Complementary Aspect of Automatically and Manually Generated Test Case Sets
  • Modernizing Hierarchical Delta Debugging
  • Complete IOCO Test Cases: A Case Study

Lunch (12:30 – 14:00)


Session 3 (14:00 – 15:30)

  • Model-Based Testing of Stochastic Systems with ioco Theory
  • Development and Maintenance Efforts Testing Graphical User Interfaces: A Comparison
  • MT4A: A No-Programming Test Automation Framework for Android Applications

Break (15:30 – 16:00)


Session 4 (16:00 – 17:00)

  • Mitigating (and Exploiting) Test Reduction Slippage
  • Automated Workflow Regression Testing for Multi-tenant SaaS: Integrated Support in Self-service Configuration Dashboard
  • Towards an MDE-based approach to test entity reconciliation applications

Organization Committee

A-TEST TEAM

General Chair

Tanja E.J. Vos (Universidad Politecnica de Valencia, Open Universiteit)


Program Chairs

Sigrid Eldh (Ericsson)

Wishnu Prasetya (Universiteit van Utrecht)

Programme Committee

M.J. Escalona

Wasif Afzal

Javier Dolado

Raquel Blanco

Valentin Dallmeier

Sheikh Umar Farooq

Peter M. Kruse

Topics

We invite you to submit a paper on topics related to the following, and to present and discuss it at the workshop:

  • Techniques and tools for automating test case design and selection, e.g. model-based, combinatorial, search-based, symbolic, or property-based approaches.
  • Test case/suite optimization.
  • Test case evaluation and metrics.
  • Test case design, selection, and evaluation in emerging test domains, e.g. graphical user interfaces, social networks, the cloud, games, security, or cyber-physical systems.
  • Case studies evaluated on real systems, not only toy problems.
  • Experiences with test technology transfer from universities to companies.

Submissions

Papers can be submitted through EasyChair (https://easychair.org/conferences/?conf=atest2016). We welcome the following types of papers:

Position paper (2 pages): intended to generate discussion and debate during the workshop.

Work-in-progress paper (4 pages): describing novel work in progress that has not necessarily reached completion.

Full paper (7 pages): describing original and completed research.

Tool demo (4 pages): describing your tool and your planned demo session.

Technology transfer paper (4 pages): describing a cooperation between university and industry.

Accepted Papers

  • Mitigating (and Exploiting) Test Reduction Slippage. Josie Holmes, Mohammad Amin Alipour and Alex Groce
  • Multilevel Coarse-to-Fine-Grained Prioritization For GUI And Web Applications. Dmitry Nurmuradov, Renee Bryce and Hyunsook Do
  • PredSym: Estimating Software Testing Budget for a Bug-free Release. Arnamoy Bhattacharyya and Timur Malgazhdarov
  • Modernizing Hierarchical Delta Debugging. Renáta Hodován and Ákos Kiss
  • Automated Workflow Regression Testing for Multi-tenant SaaS: Integrated Support in Self-service Configuration Dashboard. Majid Makki, Dimitri Van Landuyt and Wouter Joosen
  • The Complementary Aspect of Automatically and Manually Generated Test Case Sets. Tiago Bachiega, Daniel G. de Oliveira, Simone R. S. Souza, José C. Maldonado and Auri Marcelo Rizzo Vincenzi
  • EventFlowSlicer: Goal Based Test Generation for Graphical User Interfaces. Jonathan Saddler and Myra Cohen
  • Development and Maintenance Efforts Testing Graphical User Interfaces: A Comparison. Antonia Kresse and Peter M. Kruse
  • Complete IOCO Test Cases: A Case Study. Sofia Costa Paiva, Adenilso Simao, Mohammad Reza Mousavi and Mahsa Varshosaz
  • MT4A: A No-Programming Test Automation Framework for Android Applications. Tiago Coelho, Bruno Lima and João Faria
  • Towards an MDE-based approach to test entity reconciliation applications. J.G. Enríquez, Raquel Blanco, F.J. Domínguez-Mayo, Javier Tuya and M.J. Escalona
  • Model-Based Testing of Stochastic Systems with ioco Theory. Marcus Gerhold and Marielle Stoelinga

Format

All submissions must be in English and in PDF format. Papers must not exceed the page limits listed in the call for papers. At the time of submission, all papers must conform to the ACM Format and Submission Guidelines (http://www.acm.org/publications/article-templates/proceedings-template.html).

All authors of accepted papers will be asked to complete an electronic ACM copyright form and will receive further instructions for preparing their camera-ready versions. All accepted contributions will be published in the conference electronic proceedings and in the ACM Digital Library. Note that the official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of FSE 2016. The official publication date affects the deadline for any patent filings related to published work.

The names and ordering of authors in the camera-ready version cannot be modified from those in the submitted version, with no exceptions. The title can be changed only if required by the reviewers, and the new title must be accepted by the workshop chairs. At least one author of each accepted paper must register for the workshop and present the paper at A-TEST 2016 in order for the paper to be published in the proceedings.

Papers submitted in response to any of the above calls must not have been published elsewhere, and must not be under review or submitted for review elsewhere while under consideration. Specifically, authors are required to adhere to the ACM Policy and Procedures on Plagiarism (http://www.acm.org/publications/policies/plagiarism_policy) and the ACM Policy on Prior Publication and Simultaneous Submissions (http://www.acm.org/publications/policies/sim_submissions). All submissions are subject to the ACM Author Representations policy (http://www.acm.org/publications/policies/author_representations).