Co-located with ESEC/FSE 2018 

About

The A-TEST workshop aims to provide a venue for researchers as well as industry practitioners to exchange and discuss trending views, ideas, state-of-the-art work in progress, and scientific results on automated testing.

Modern software teams seek a delicate balance between two opposing forces: striving for reliability and striving for agility. Software teams need tools to strike the right balance by increasing the development speed without sacrificing quality. Automated testing tools play an important role in obtaining this balance.

A-TEST has successfully run eight editions since 2009. During the 2017 edition, which was also co-located with ESEC/FSE, we introduced hands-on sessions in which testing tools can be studied in depth. Due to the many positive reactions we received, we will have them again this year!

Call for papers

We invite you to submit a paper on any topic related to automated software testing, and to present and discuss it at the event itself. We welcome the following types of submissions:

  • Position paper (2 pages), intended to generate discussion and debate during the workshop.

  • Work-in-progress paper (4 pages), describing novel work in progress that has not necessarily reached full completion.

  • Full paper (7 pages), describing original and completed research.

  • Tool demo paper (4 pages), describing your tool and your planned demo session.

  • Technology transfer paper (4 pages), describing university-industry co-operation.

Papers will be submitted through EasyChair: https://easychair.org/conferences/?conf=atest2018

Each paper will be reviewed by at least three referees. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this workshop. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

All papers must be prepared in ACM Conference Format.

Papers accepted for the workshop will appear in the ACM digital library, providing a lasting archived record of the workshop proceedings.

Speakers

Keynote: Gregg Rothermel

Professor and Jensen Chair of Software Engineering

University of Nebraska-Lincoln


Improving Regression Testing in Continuous Integration Development Environments

In continuous integration development environments, software engineers frequently integrate new or changed code with the mainline codebase. Merged code is then regression tested to help ensure that the codebase remains stable and that continuing engineering efforts can be performed more reliably. Continuous integration is advantageous because it can reduce the amount of code rework that is needed in later phases of development, and speed up overall development time. From a testing standpoint, however, continuous integration raises several challenges.

Chief among these challenges are the costs, in terms of time and resources, associated with handling a constant flow of requests to execute tests. To help with this, organizations often utilize farms of servers to run tests in parallel, or execute tests "in the cloud", but even then, test suites tend to expand to utilize all available resources, and then continue to expand beyond that.

We have been investigating strategies for applying regression testing in continuous integration development environments more cost-effectively. Our strategies are based on two well-researched techniques for improving the cost-effectiveness of regression testing -- regression test selection (RTS) and test case prioritization (TCP). In the continuous integration context, however, traditional RTS and TCP techniques are difficult to apply, because these techniques rely on instrumentation and analyses that cannot easily be applied to fast-arriving streams of test suites.

We have thus created new forms of RTS and TCP techniques that utilize relatively lightweight analyses and can cope with the volume of test requests. To evaluate our techniques, we have conducted an empirical study on several large data sets. In this talk, I describe our techniques and the empirical results we have obtained in studying them.
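As a purely illustrative aside, one kind of lightweight analysis that scales to a constant stream of test requests is prioritization based on recent failure history rather than per-commit code instrumentation. The sketch below shows such a heuristic; the data structure, weights, and test names are hypothetical and do not represent the speaker's actual technique.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestRecord:
    name: str
    recent_outcomes: List[bool] = field(default_factory=list)  # True = failed in that run
    cycles_since_last_run: int = 0  # CI cycles since this test was last executed

def priority(test: TestRecord, window: int = 10) -> float:
    """Higher score = run earlier; combines recent failure rate with staleness (hypothetical weights)."""
    recent = test.recent_outcomes[-window:]
    failure_rate = sum(recent) / window
    staleness = min(test.cycles_since_last_run / window, 1.0)
    return 0.8 * failure_rate + 0.2 * staleness

def prioritize(batch: List[TestRecord]) -> List[TestRecord]:
    """Order a fast-arriving batch of test requests without any code analysis."""
    return sorted(batch, key=priority, reverse=True)

# Example: a small batch of incoming test requests.
batch = [
    TestRecord("test_login", [True, False, True], cycles_since_last_run=1),
    TestRecord("test_checkout", [False, False], cycles_since_last_run=7),
    TestRecord("test_search", [], cycles_since_last_run=3),
]
for t in prioritize(batch):
    print(t.name, round(priority(t), 2))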

Bio

Gregg Rothermel is Professor and Jensen Chair of Software Engineering at the University of Nebraska-Lincoln. He received his Ph.D. in Computer Science from Clemson University, working with Mary Jean Harrold, his M.S. in Computer Science from SUNY Albany, and his B.A. in Philosophy from Reed College. Prior to returning to academia, he was a software engineer and Vice President of Quality Assurance and Quality Control for Palette Systems, a manufacturer of CAD/CAM software.

Dr. Rothermel's research interests include software engineering and program analysis, with emphases on the application of program analysis techniques to problems in software maintenance and testing, end-user software engineering, and empirical studies. He is a co-founder of the ESQuaReD (Empirically-Based Software Quality Research and Development) Laboratory at the University of Nebraska-Lincoln. He is also a co-founder of the EUSES (End-Users Shaping Effective Software) Consortium, a group of researchers who, with National Science Foundation support, have led end-user software engineering research. He co-founded and leads the development of the Software-Artifact Infrastructure Repository (SIR), a repository of software-related artifacts that supports rigorous controlled experiments with program analysis and software testing techniques and has been utilized to date by more than 3500 persons from over 700 institutions around the world, supporting over 800 scientific publications. His research has been supported by NSF, DARPA, AFOSR, Boeing Commercial Airplane Group, Microsoft, and Lockheed Martin.

Dr. Rothermel is an IEEE Fellow and an ACM Distinguished Scientist. He is currently General Co-Chair for the 2020 ACM/IEEE International Conference on Software Engineering, serves as an Associate Editor for ACM Transactions on Software Engineering and Methodology, and is a member of the Editorial Boards of the Empirical Software Engineering Journal and the Software Quality Journal.

Previous positions include Associate Editor in Chief for IEEE Transactions on Software Engineering, General Chair for the 2009 International Symposium on Software Testing and Analysis, Program Co-Chair for the 2007 International Conference on Software Engineering, and Program Chair for the 2004 ACM International Symposium on Software Testing and Analysis.


Important dates

  • Submission deadline:  July 27th 2018
  • Author notification: August 24th 2018
  • Camera-ready: September 18th 2018

Organization Committee

A-TEST TEAM

General Chair

Wishnu Prasetya (Universiteit van Utrecht)

 

Industrial Chair

Sigrid Eldh (Ericsson)

 

Program Chairs

Tanja E.J. Vos (Universidad Politecnica de Valencia, Open Universiteit)

Sinem Getir (Humboldt-Universität zu Berlin)

 

Hands-on session chair

Ali Parsai (Universiteit van Antwerpen)

 

Publicity Chair

Pekka Aho (Open Universiteit)

 

Programme Committee

TBD

Previous Editions

The A-TEST workshop has evolved over the years and has successfully run eight editions since 2009. The first editions, which went by the name ATSE (2009 and 2011), took place at CISTI (Conference on Information Systems and Technologies, http://www.aisti.eu/). The three subsequent editions (2012, 2013, and 2014) took place at FEDCSIS (Federated Conference on Computer Science and Information Systems, http://www.fedcsis.org). In 2015 there were two events: ATSE 2015 at SEFM and A-TEST 2015 at FSE.

In 2016 we merged the two events at FSE, resulting in the 7th edition of A-TEST.

The 8th edition of A-TEST in 2017 was co-located with the 12th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2017, in Paderborn.