4th IEEE Workshop on NEXt level of Test Automation 2021
NEXTA 2021

Workshop Summary

Test automation has been an acknowledged software engineering best practice for years. However, the topic involves more than the repeated execution of test cases that often comes first to mind. Simply running test cases using a unit testing framework is no longer enough for test automation to keep up with the ever-shorter release cycles driven by continuous deployment and technological innovations such as microservices and DevOps pipelines. Now test automation needs to rise to the next level by going beyond mere test execution. The NEXTA workshop will explore how to advance test automation to further contribute to software quality in the context of tomorrow's rapid release cycles. Take-aways for industry practitioners and academic researchers will encompass test case generation, automated test result analysis, test suite assessment and maintenance, and infrastructure for the future of test automation.

Practical

The NEXTA workshop will be organised in a fully virtual setting. Nevertheless, we want to stay as close as possible to the spirit of the workshop (i.e., encouraging discussions and staying active on social media). What does this imply for you?

(1) Live but virtual.
We ask authors to give a live virtual presentation instead of playing a pre-recorded video. However, if you feel uncomfortable giving a virtual presentation, contact the organisers and we will gladly accommodate you.

(2) Date and time zone.
We assume the conference will run on Brazil time (UTC-3), which impacts the workshop schedule below.

(3) Platform.
The platform for the virtual event has not yet been announced.

(4) Active Q&A.
Despite the virtual nature, we will do our best to emulate the spirit of a live workshop. Each presenter will be asked to serve as a discussant for another paper in the same session. A discussant reads the paper beforehand and prepares a series of questions to initiate the discussion. If you are uncomfortable serving as a discussant, contact the organisers.

(5) Awards.
NEXTA has a tradition of handing out awards to encourage interaction.
 - Most Viral Tweet Award [#nexta21]
 - Best Questions Award (new format this year)
 - Best Presentation Award
Participants will be asked to vote for these awards.

Program

All times are in UTC-3 (Brazil time).


      9.00-12.30 (virtual)

    Welcome to NEXTA - Introduction to the Workshop
     Chairs

    Active Machine Learning to Test Autonomous Driving
    Prof. Karl Meinke, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Sweden

    Abstract:
    Autonomous driving represents a significant challenge to all software quality assurance techniques, including testing. Generative machine learning (ML) techniques including active ML have considerable potential to generate high-quality synthetic test data that can complement and improve on existing techniques such as hardware-in-the-loop and road testing.
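
    A minimal sketch of the active ML idea may help frame the talk: an active learner repeatedly queries the scenario it is least certain about, labels it by running the (expensive) simulation, and folds the outcome back into its training data. The toy simulator, the distance-based uncertainty measure, and all names below are illustrative assumptions, not Prof. Meinke's actual method.

        import random

        # Toy stand-in for a driving simulator: returns True when the scenario
        # (speed in m/s, gap to the lead vehicle in m) exposes a failure. In a
        # real setting this is an expensive simulation or test-rig run.
        def run_scenario(speed, gap):
            return speed > 25 and gap < 10

        # Surrogate "model": the labelled scenarios observed so far.
        labelled = []

        def uncertainty(speed, gap):
            # Distance to the nearest labelled scenario; far-away points are
            # the ones the learner knows least about (a crude query strategy).
            if not labelled:
                return float("inf")
            return min(abs(speed - s) + abs(gap - g) for (s, g, _) in labelled)

        random.seed(0)
        candidates = [(random.uniform(0, 40), random.uniform(0, 50)) for _ in range(500)]

        # Active learning loop: query the most uncertain scenario, label it by
        # simulation, and add the result to the surrogate's training data.
        for _ in range(20):
            speed, gap = max(candidates, key=lambda c: uncertainty(*c))
            labelled.append((speed, gap, run_scenario(speed, gap)))

        print(f"failing scenarios found: {sum(1 for *_, f in labelled if f)} of {len(labelled)}")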

    AI-based Test Automation: A Grey Literature Analysis
    Filippo Ricca*, Alessandro Marchetto** and Andrea Stocco***; *DIBRIS, Università di Genova, Italy, **Italy, ***Università della Svizzera italiana (USI), Switzerland
    Abstract:
    This paper provides the results of a survey of the grey literature concerning the use of artificial intelligence to improve test automation practices. We surveyed more than 1,200 sources of grey literature (e.g., blogs, white papers, user manuals, StackOverflow posts) looking for highlights by professionals on how AI is adopted to aid the development and evolution of test code. Ultimately, we filtered 136 relevant documents from which we extracted a taxonomy of problems that AI aims to tackle, along with a taxonomy of AI-enabled solutions to such problems. Manual code development and automated test generation are the most cited problem and solution, respectively. The paper concludes by distilling the six most prevalent tools on the market, along with think-aloud reflections about the current and future status of artificial intelligence for test automation.

    Flaky Mutants: Another Concern for Mutation Testing
    Sten Vercammen, Serge Demeyer, Markus Borg* and Robbe Claessens; University of Antwerp, Belgium and *RISE, Sweden
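
    To illustrate the title's core notion: a flaky mutant is a mutant whose kill verdict varies between mutation-analysis runs because the test exercising it is nondeterministic. A minimal, self-contained sketch under assumed names (not taken from the paper):

        import random

        # System under test: retries an unreliable operation up to `attempts` times.
        def retry_fetch(attempts=3):
            for _ in range(attempts):
                if random.random() < 0.5:  # simulated unreliable dependency
                    return True
            return False

        # Mutant: boundary change attempts=3 -> attempts=1.
        def retry_fetch_mutant(attempts=1):
            for _ in range(attempts):
                if random.random() < 0.5:
                    return True
            return False

        def test(impl):
            # Nondeterministic test: passes iff the fetch eventually succeeds.
            return impl()

        # Re-running mutation analysis yields different kill verdicts for the
        # same mutant: with a single attempt the test fails on roughly half of
        # the runs, so the verdict flips from run to run.
        verdicts = [test(retry_fetch_mutant) for _ in range(20)]
        print("mutant killed in", verdicts.count(False), "of", len(verdicts), "runs")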

    Using Advanced Code Analysis for Boosting Unit Test Creation
    Miroslaw Zielinski* and Rix Groenboom**, *Parasoft, Poland, **Parasoft, Netherlands
    Abstract:
    Unit testing is a popular testing technique, widespread in both enterprise IT and embedded/safety-critical development. In enterprise IT, unit testing is considered good practice and is frequently followed as an element of test-driven development. In the safety-critical world, many standards, such as ISO 26262, IEC 61508, and others, either directly or indirectly mandate unit testing. Regardless of the application area, unit testing is very time-consuming, and teams are looking for strategies to optimize their efforts. This is especially true in the safety-critical space, where demonstration of test coverage is required for certification. In this presentation, we share the results of our research on using advanced code analysis algorithms to augment the process of unit test creation. The discussion includes the automatic discovery of inputs and of responses from mocked components that maximize code coverage, as well as the automated generation of test cases.
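
    As a rough illustration of the idea sketched in the abstract (not Parasoft's actual, proprietary algorithms): a naive search that samples the input value and the mocked component's response together, keeping only the cases that cover a new branch. All names are hypothetical.

        import random

        # Toy unit under test with three branches to cover.
        def classify(x, dep):
            if x < 0:
                return "negative"
            if dep.is_special(x):  # call into a collaborating component
                return "special"
            return "plain"

        class MockDep:
            # The response from the mocked component is part of the search
            # space, just like the input value itself.
            def __init__(self, answer):
                self.answer = answer
            def is_special(self, x):
                return self.answer

        random.seed(1)
        covered, kept = set(), []
        for _ in range(200):
            x = random.randint(-10, 10)
            mock = MockDep(random.choice([True, False]))
            branch = classify(x, mock)   # the branch label doubles as a coverage goal
            if branch not in covered:    # keep only coverage-increasing cases
                covered.add(branch)
                kept.append((x, mock.answer, branch))

        for x, answer, branch in kept:
            print(f"generated test: classify({x}, MockDep({answer})) -> {branch!r}")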

    QRTest: Automatic Query Reformulation for Information Retrieval Based Regression Test Case Prioritization
    Maral Azizi, East Carolina University, United States
    Abstract:
    The most effective regression testing algorithms have long running times and often require dynamic or static code analysis, making them unsuitable for modern software development environments where the time between software deliveries can be less than a minute. More recently, some researchers have developed information retrieval-based (IR-based) techniques that prioritize tests so that tests more similar to the code changes, and thus more likely to find bugs, are run first. The vast majority of these techniques are based on standard term-similarity calculation, which can be imprecise. One reason for their low accuracy is that the original query is often short and therefore does not retrieve the relevant test cases; in such cases, the query needs reformulation. The current state of research lacks methods to increase the quality of the query in the regression testing domain. Our research addresses this problem: we conjecture that enhancing the quality of the queries can improve the performance of IR-based regression test case prioritization (RTP). Our empirical evaluation with six open-source programs shows that our approach improves the accuracy of IR-based RTP and increases the regression fault detection rate compared to common prioritization techniques.
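
    A minimal sketch of the baseline that such techniques build on, with made-up data (the paper's actual approach is more elaborate): the code change acts as a query, tests are ranked by a crude TF-IDF similarity, and a short query is reformulated by expanding it with terms from the top-ranked test (pseudo-relevance feedback).

        import math
        from collections import Counter

        # Hypothetical test-case "documents" (identifiers, comments, etc.).
        tests = {
            "test_login_timeout": "login session auth retry",
            "test_cart_checkout": "cart checkout token payment",
            "test_token_refresh": "auth token refresh token expiry session",
        }

        def tfidf_score(query_terms, doc):
            # Crude TF-IDF-style similarity between a term list and a document.
            tf, n = Counter(doc.split()), len(tests)
            score = 0.0
            for t in set(query_terms):
                df = sum(1 for d in tests.values() if t in d.split())
                if df:
                    score += tf[t] * math.log(1 + n / df)
            return score

        def prioritize(query):
            return sorted(tests, key=lambda name: -tfidf_score(query, tests[name]))

        # Short query extracted from a code change: too few terms to rank well.
        query = ["token"]
        ranking = prioritize(query)
        print("initial ranking: ", ranking)

        # Reformulation via pseudo-relevance feedback: expand the query with
        # terms from the top-ranked test, then re-rank. The related login test
        # now outranks the unrelated cart test.
        print("expanded ranking:", prioritize(query + tests[ranking[0]].split()))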

    An Empirical Study of Parallelizing Test Execution Using CUDA Unified Memory and OpenMP GPU Offloading
    Taghreed Bagies and Ali Jannesari, Iowa State University, United States
    Abstract:
    The execution of software testing is costly and time-consuming. To accelerate test execution, researchers have applied several methods to run tests in parallel. One such method uses a GPU to distribute test case inputs among many threads running in parallel. In this paper, we investigate three programming models, CUDA Unified Memory, CUDA Non-Unified Memory, and OpenMP GPU offloading, to parallelize test execution, and we discuss the challenges of using these programming models. We use eleven benchmarks and parallelize their test suites with these models. We evaluate their performance in terms of execution time, analyze the results, and report the limitations of using these programming models.
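
    The paper targets GPUs; as a language-neutral analogue of the same decomposition (one test-case input per parallel worker), here is a CPU-side sketch using Python's multiprocessing. The unit under test and the expected values are illustrative.

        from multiprocessing import Pool

        # Unit under test: deliberately compute-heavy so parallel speedup matters.
        def collatz_steps(n):
            steps = 0
            while n != 1:
                n = 3 * n + 1 if n % 2 else n // 2
                steps += 1
            return steps

        def run_test(case):
            # One test case = (input, expected); returns a pass/fail verdict.
            n, expected = case
            return (n, collatz_steps(n) == expected)

        if __name__ == "__main__":
            # Test inputs paired with precomputed expected step counts.
            cases = [(6, 8), (7, 16), (27, 111), (97, 118)]
            # Distribute the test-case inputs among parallel workers, mirroring
            # how the GPU versions map one test input to one thread.
            with Pool(processes=4) as pool:
                for n, passed in pool.map(run_test, cases):
                    print(f"input {n}: {'PASS' if passed else 'FAIL'}")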

    Advancing Test Automation Using Artificial Intelligence (AI)
    Assoc. Prof. Jeremy Bradbury, Ontario Tech University, Canada
    Abstract:
    In recent years, software testing automation has been enhanced through the use of Artificial Intelligence (AI) techniques including genetic algorithms, machine learning, and deep learning. The use cases for AI in test automation range from providing recommendations to the complete automation of software testing activities. To demonstrate the breadth of application, I will present several recent examples of how AI can be leveraged to support automated testing in rapid release cycles. Furthermore, I will discuss my own successes and failures in using AI to advance test automation and share the lessons I have learned.

    Workshop Closing Remarks
    Chairs

Registration

Registration for the workshop (half day) should be done via the registration link on the ICST 2021 homepage (ICST2021 Registration).

Travel information

Due to the COVID-19 pandemic, NEXTA 2021 will be organised as a fully virtual event.

Important Dates

  • 31st of January, 2021 (hard deadline)

    Full-Paper Submission

  • 7th of February, 2021

    Notification

  • 26th of February, 2021

    Camera Ready

  • 16th of April, 2021

    Workshop

Call for Papers

NEXTA solicits contributions targeting all aspects of test automation, from initial test design to automated verdict analysis. Topics of interest include, but are not limited to, the following:

  • Test execution automation
  • Test case generation
  • Automatic test design generation
  • Analytics, learning and big data in relation to test automation
  • Automated management of testing aspects: progress, reporting, planning, etc.
  • Visualization of testing
  • Evolution of test automation
  • Test suite architecture and infrastructure
  • Test environment, simulation, and other contextual issues for automated testing
  • Test tools, frameworks, and general support for test automation
  • Testing in an agile and continuous integration context, and testing within DevOps
  • Orchestration of tests
  • Metrics, benchmarks, and estimation for any type of test automation
  • Any type of test technology relying on test automation
  • Process improvements and assessments related to test automation
  • Test automation maturity and experience reports on test automation
  • Automatic retrieval of test data and test preparation aspects
  • Maintainability, monitoring and refactoring of automated test suites
  • Training and education on automated testing
  • Automated testing for product lines and high-variability systems
  • Test automation patterns
  • Automated test oracles


NEXTA accepts the following types of original papers:

  • Technical Papers (max. 8 pages in IEEE format).
    Full papers presenting research results or industrial practices related to the next generation of test automation.

  • Tool Papers (max. 4 pages in IEEE format).
    Tool papers introduce tools that implement an approach to support the transition to the next generation of test automation. A tool paper submission must include either 1) a URL to a screencast of the tool in action, or 2) a runnable version of the tool for evaluation by the program committee.

  • Position and Experience Papers (max. 4 pages in IEEE format).
    Short papers introducing challenges, visions, positions or preliminary results within the scope of the workshop. Experience reports and papers on open challenges in industry are especially welcome.


Authors should submit a PDF version of their paper through the NEXTA 2021 paper submission site on EasyChair.
All accepted papers will be part of the ICST joint workshop proceedings published in the IEEE Digital Library.

The Call for Papers can be downloaded as text or as a PDF document.

Organisation

Sigrid Eldh

Ericsson AB, Sweden
General Chair

Kristian Sandahl

Linköping University, Sweden
Program Co-Chair

Sahar Tahvili

Ericsson AB, Sweden
Program Co-Chair

Vahid Garousi

Queen's University Belfast, UK
Program Co-Chair

Michael Felderer

University of Innsbruck, Austria
Program Co-Chair

Program Committee (tentative)

  • Sigrid Eldh, Ericsson AB, Sweden
  • Sahar Tahvili, Ericsson AB, Sweden
  • Kristian Sandahl, Linköping University, Sweden
  • Vahid Garousi, Queen's University Belfast, UK
  • Michael Felderer, University of Innsbruck, Austria
  • Serge Demeyer, University of Antwerp, Belgium
  • Pasqualina Potena, RISE Research Institutes of Sweden AB, Sweden
  • Kristian Wiklund, Ericsson AB, Sweden
  • Markus Borg, RISE Research Institutes of Sweden AB, Sweden
  • Tanja Vos, Universitat Politècnica de València, Spain and Open University of the Netherlands
  • Marc-Florian Wendland, Fraunhofer Fokus, Germany
  • Machiel van der Bijl, Axini, Netherlands
  • Francisco Gomez, University of Gothenburg and Chalmers University of Technology, Sweden
  • Karl Meinke, KTH Royal Institute of Technology, Sweden
  • Mika Mäntylä, University of Oulu, Finland
  • Magnus C Ohlsson, System Verification, Sweden
  • Pekka Aho, The Open University of the Netherlands
  • Rix Groenboom, Parasoft, Netherlands
  • Eduard Paul Enoiu, Mälardalen University, Sweden