Test automation has been an acknowledged software engineering best practice for years. However, the topic involves more than the repeated execution of test cases that often first comes to mind. Simply running test cases using a unit testing framework is no longer enough for test automation to keep up with the ever-shorter release cycles driven by continuous deployment and technological innovations such as microservices and DevOps pipelines. Test automation now needs to rise to the next level by going beyond mere test execution. The NEXTA workshop will explore how to advance test automation to further contribute to software quality in the context of tomorrow's rapid release cycles. Take-aways for industry practitioners and academic researchers will encompass test case generation, automated test result analysis, test suite assessment and maintenance, and infrastructure for the future of test automation.
The NEXTA workshop will be organised in a fully virtual setting. Nevertheless, we want to stay as close as possible to the spirit of the workshop (i.e., encouraging discussions and staying active on social media). But what does this imply for you?
(1) Live but virtual.
We ask authors to give a live virtual presentation instead of playing a pre-recorded video. However, if you feel uncomfortable giving a virtual presentation, contact the organisers and we will gladly accommodate you.
(2) Date and time zone.
We assume the conference will run on Brazil time (UTC-3), which will impact the schedule.
(3) Platform.
The platform for the virtual event is still to be decided.
(4) Active Q&A.
Despite the virtual setting, we will do our best to emulate the spirit of a live workshop. Each presenter will be asked to serve as a discussant for another paper in the same session. A discussant reads the paper beforehand and prepares a series of questions to initiate the discussion. If you are uncomfortable serving as a discussant, contact the organisers.
NEXTA has a tradition of handing out awards to encourage interaction.
- Most Viral Tweet Award [#nexta21]
- Best Questions Award (New format this year)
- Best Presentation Award
Participants will be asked to vote for these awards.
All times are in UTC-3 (Brazil).
Welcome to NEXTA - Introduction to Workshop
Active Machine Learning to Test Autonomous Driving
Prof. Karl Meinke, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Sweden
Abstract: Autonomous driving represents a significant challenge to all software quality assurance techniques, including testing. Generative machine learning (ML) techniques including active ML have considerable potential to generate high-quality synthetic test data that can complement and improve on existing techniques such as hardware-in-the-loop and road testing.
AI-based Test Automation: A Grey Literature Analysis
Filippo Ricca*, Alessandro Marchetto** and Andrea Stocco***; *DIBRIS, Università di Genova, Italy, **Italy, ***Università della Svizzera italiana (USI), Switzerland
This paper provides the results of a survey of the grey literature concerning the use of artificial intelligence to improve test automation practices. We surveyed more than 1,200 sources of grey literature (e.g., blogs, white-papers, user manuals, StackOverflow posts) looking for highlights by professionals on how AI is adopted to aid the development and evolution of test code. Ultimately, we filtered 136 relevant documents from which we extracted a taxonomy of problems that AI aims to tackle, along with a taxonomy of AI-enabled solutions to such problems. Manual code development and automated test generation are the most cited problem and solution, respectively. The paper concludes by distilling the six most prevalent tools on the market, along with think-aloud reflections about the current and future status of artificial intelligence for test automation.
Flaky Mutants: Another Concern for Mutation Testing
Sten Vercammen, Serge Demeyer, Markus Borg* and Robbe Claessens, University of Antwerp, Belgium and *RISE, Sweden
Using Advanced Code Analysis for Boosting Unit Test Creation
Miroslaw Zielinski* and Rix Groenboom**, *Parasoft, Poland, **Parasoft, Netherlands
Unit testing is a popular testing technique, widespread in enterprise IT and in the embedded/safety-critical domain. In enterprise IT, unit testing is considered good practice and is frequently followed as an element of test-driven development. In the safety-critical world, many standards, such as ISO 26262, IEC 61508, and others, either directly or indirectly mandate unit testing. Regardless of the application area, unit testing is very time-consuming, and teams are looking for strategies to optimize their efforts. This is especially true in the safety-critical space, where demonstrating test coverage is required for certification. In this presentation, we share the results of our research on using advanced code analysis algorithms to augment the process of unit test creation. The discussion includes the automatic discovery of inputs and of responses from mocked components that maximize code coverage, as well as the automated generation of test cases.
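The core idea of coverage-driven test input discovery can be sketched in a few lines. The function under test, the branch labels, and the candidate inputs below are invented for illustration; this is not Parasoft's algorithm, just a minimal greedy selection over branch coverage:

```python
# Illustrative sketch: greedily keep test inputs that add branch coverage.
# The function under test and the candidate inputs are made up.

def classify(x):
    """Toy function under test; records which branch each input takes."""
    taken = set()
    if x < 0:
        taken.add("neg")
        result = "negative"
    elif x == 0:
        taken.add("zero")
        result = "zero"
    elif x % 2 == 0:
        taken.add("even")
        result = "even"
    else:
        taken.add("odd")
        result = "odd"
    return result, taken

def select_inputs(candidates):
    """Greedy set-cover: keep only inputs that cover new branches."""
    covered, selected = set(), []
    for x in candidates:
        _, branches = classify(x)
        if branches - covered:
            selected.append(x)
            covered |= branches
    return selected, covered

selected, covered = select_inputs([3, 5, -1, 0, 4, 8])
print(selected)  # [3, -1, 0, 4]: inputs 5 and 8 add no new branches
print(covered)
```

In a real tool the branch information would come from instrumentation or symbolic analysis of the code under test rather than from hand-written labels.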
QRTest: Automatic Query Reformulation for Information Retrieval Based Regression Test Case Prioritization
Maral Azizi, East Carolina University, United States
The most effective regression testing algorithms have long running times and often require dynamic or static code analysis, making them unsuitable for modern software development environments where the software delivery cadence can be under a minute. More recently, researchers have developed information retrieval-based (IR-based) techniques for prioritizing tests such that tests more similar to the code changes have a higher likelihood of finding bugs. The vast majority of these techniques are based on standard term similarity calculation, which can be imprecise. One reason for the low accuracy of these techniques is that the original query is often short and therefore does not return the relevant test cases. In such cases, the query needs reformulation. The current state of research lacks methods to increase the quality of queries in the regression testing domain. Our research addresses this problem: we conjecture that enhancing the quality of the queries can improve the performance of IR-based regression test case prioritization (RTP). Our empirical evaluation with six open-source programs shows that our approach improves the accuracy of IR-based RTP and increases the regression fault detection rate compared to common prioritization techniques.
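A minimal sketch of the IR-based prioritization idea the paper builds on: test cases are ranked by lexical similarity to a query derived from the code change, and reformulating (expanding) a short query can change the ranking. All test names, vocabularies, and queries below are invented; this is not QRTest itself:

```python
# Sketch: rank test cases by cosine similarity between their term vectors
# and a "query" built from the code change. Everything here is illustrative.
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bags of terms."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def prioritize(query_terms, test_suite):
    """Order test cases by descending similarity to the change query."""
    return sorted(test_suite, key=lambda name: -cosine(query_terms, test_suite[name]))

suite = {
    "test_login":   ["login", "user", "password", "session"],
    "test_cart":    ["cart", "add", "item", "checkout"],
    "test_payment": ["payment", "checkout", "card", "charge"],
}
# A short query extracted from a diff touching checkout code:
query = ["checkout"]
print(prioritize(query, suite))
# A reformulated (expanded) query pulls in related terms and sharpens the ranking:
expanded = ["checkout", "payment", "card"]
print(prioritize(expanded, suite))
```

With the one-term query, two tests tie on similarity; the expanded query separates them, which is the effect query reformulation is after.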
An Empirical Study of Parallelizing Test Execution Using CUDA Unified Memory and OpenMP GPU Offloading
Taghreed Bagies and Ali Jannesari, Iowa State University, United States
The execution of software testing is costly and time-consuming. To accelerate test execution, researchers have applied several methods to run tests in parallel. One method of parallelizing test execution is to use a GPU to distribute test case inputs among several threads running in parallel. In this paper, we investigate three programming models, CUDA Unified Memory, CUDA Non-Unified Memory, and OpenMP GPU offloading, to parallelize test execution and discuss the challenges of using these programming models. We use eleven benchmarks and parallelize their test suites using these models. We evaluate their performance in terms of execution time, analyze the results, and report the limitations of these programming models.
Advancing Test Automation Using Artificial Intelligence (AI)
Assoc. Prof. Jeremy Bradbury, Ontario Tech University, Canada
In recent years, software testing automation has been enhanced through the use of Artificial Intelligence (AI) techniques including genetic algorithms, machine learning, and deep learning. The use cases for AI in test automation range from providing recommendations to the complete automation of software testing activities. To demonstrate the breadth of application, I will present several recent examples of how AI can be leveraged to support automated testing in rapid release cycles. Furthermore, I will discuss my own successes and failures in using AI to advance test automation, as well as share the lessons I have learned.
Workshop Closing Remarks
Due to the COVID-19 pandemic, NEXTA 2021 will be organised as a fully virtual event.
NEXTA solicits contributions targeting all aspects of test automation, from initial test design to automated verdict analysis. Topics of interest include, but are not limited to, the following:
NEXTA accepts the following types of original papers:
Ericsson AB, Sweden
Linköping University, Sweden
Ericsson AB, Sweden
Queen's University Belfast, UK
University of Innsbruck, Austria
Program Committee (tentative)