Automatic Unit Test Generation for Java Classes

 
TestFul: Automatic Unit-Test Generation for Java Classes
Matteo Miraz, Pier Luca Lanzi
DEI – Politecnico di Milano (Italy)
 
Automated Test Generation
 
Software systems permeate (almost) every aspect of our life
Software is buggy
In 2002, the cost of software errors in the U.S. was
estimated at 60 billion USD
An adequate testing campaign might require up to half of
the software development effort
 
Several approaches to
automatically generate tests:
Symbolic Execution
Random Testing
Evolutionary Testing
 
Our Approach: TestFul
 
Focuses on Java Classes
Features may be state-dependent
 
Fitness Function:
Designed to maximize the number of features exercised
Composed of statement, branch, and def-use (DU) pair coverage
 
Efficiency Enhancements:
Local search to tackle difficult branches
Seeding to start from a better initial population
Fitness Inheritance to reduce the test execution costs
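As an illustration of the DU-pair criterion the fitness function rewards, consider a minimal, hypothetical state-dependent class (this is an assumed example, not TestFul's code): a test covers a DU pair when it executes a definition of a field and a later use of that same definition.

```java
// Hypothetical state-dependent class: pop() is only exercisable
// after push() has modified the internal state.
class BoundedStack {
    private final int[] items = new int[8];
    private int size = 0;                 // definition of `size`

    void push(int v) {
        if (size == items.length) throw new IllegalStateException("full");
        items[size++] = v;                // use + redefinition of `size`
    }

    int pop() {
        if (size == 0) throw new IllegalStateException("empty");
        return items[--size];             // use of `size`: completes a DU pair
    }
}

public class DuPairDemo {
    public static void main(String[] args) {
        BoundedStack s = new BoundedStack();
        s.push(42);                       // definition reached in push()...
        System.out.println(s.pop());      // ...paired with the use in pop(); prints 42
    }
}
```

A random test that never calls push() first can only reach the "empty" branch of pop(); a fitness function that counts covered DU pairs steers the search toward call sequences that exercise such state-dependent features.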
 
Design of the Experiment
 
We compare TestFul against tests written by our SE students
Software Engineering course, after the lectures on testing
Students were given 40 minutes to test a short, non-naive class
 
Students were assisted in writing the tests
 
The automated approaches were run on the same class for 10 minutes
 
The comparison was based on test quality
Measured with mutation score
We used the Mann–Whitney–Wilcoxon U statistical test
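The two measures can be sketched as follows (an illustrative sketch with made-up sample scores, not the deck's actual data or TestFul's code): mutation score is the fraction of generated mutants a suite kills, and the Mann–Whitney U statistic counts, over all pairs, how often one sample's value exceeds the other's (half credit for ties; no tie correction).

```java
// Illustrative sketch of the two measures used in the comparison.
public class ComparisonMetrics {

    // Mutation score: killed mutants / generated mutants.
    static double mutationScore(int killed, int total) {
        if (total <= 0) throw new IllegalArgumentException("no mutants");
        return (double) killed / total;
    }

    // Mann–Whitney U statistic for two samples
    // (0.5 credit for ties, no tie correction).
    static double uStatistic(double[] a, double[] b) {
        double u = 0;
        for (double x : a)
            for (double y : b)
                u += (x > y) ? 1.0 : (x == y ? 0.5 : 0.0);
        return u;
    }

    public static void main(String[] args) {
        // A suite that kills 85 of 100 mutants scores 0.85.
        System.out.println(mutationScore(85, 100));      // prints 0.85
        // Hypothetical mutation scores: tool-generated vs. student suites.
        double[] tool  = {0.90, 0.85, 0.88};
        double[] human = {0.70, 0.75, 0.65};
        // Every pairwise comparison favors the tool: U = 3 * 3 = 9.
        System.out.println(uStatistic(tool, human));     // prints 9.0
    }
}
```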
 
The experiment was inspired by:
S. Mouchawrab et al., "Assessing, Comparing, and Combining
State Machine-Based Testing and Structural Testing: A Series
of Experiments", IEEE Transactions on Software Engineering, 2010
 
Empirical Results
 
Empirical Results
Comparison with Mouchawrab et al.'s TSE study
Empirical Results (2)

«Container classes are the de facto benchmark
for testing software with internal state»
[ Arcuri 2010 ]
 
Bug #4275605
 
Why We Are Human Competitive
 
B) The result is equal to or better than a result that was
accepted as a new scientific result  […]
 
We compare TestFul with the TSE study of Mouchawrab et al.
C) The result is equal to or better than a result that was
placed into a database […]
TestFul found a bug in the Java Collections Framework
 
H) The result holds its own or wins a regulated competition
involving human contestants […]
TestFul produces better results than human beings
 
Reference: Matteo Miraz, "Evolutionary Testing of Stateful Systems: a
Holistic Approach", Ph.D. thesis, Politecnico di Milano, 2010