Evolution of Test Automation and Its Impact on Software Engineering

Achievements, open problems, and challenges for test automation
Jeff Offutt
George Mason University
cs.gmu.edu/~offutt
"Science can amuse and fascinate us, but it is engineering that changes the world."
-- Isaac Asimov
Pre-history: Manual testing
Before automation, testers entered inputs by hand
Inputs often arbitrary
Entirely system testing
Slow
Repeatability problems
Testers made errors
  o Mis-entered values
  o Missed erroneous results
(Still common among CS students)
 
Early (partial) automation
 
Tests designed by hand
Tests were sequences of actions and inputs
Sometimes executed all or partially by software
Results checked by human
Entirely system testing
Somewhat repeatable
Fewer errors
Slow & expensive
 
 
Test automation in the 1990s
 
System testing test automation tools
Often project- or company-specific
Test values usually designed by hand
Result checking usually by hand
Capture-replay tools for GUIs
Captured inputs and result screens from manual tests
Replayed tests when software changed
Many unit-level test frameworks
Not accessible to non-programmers
Most were research demonstration tools
Automatic checking of outputs via assertions
JUnit integrated the best ideas and simplified
 
 
Then this happened
 
[Bar chart: fault origin (%), fault detection (%), and unit cost (X) across lifecycle phases: requirements, design, programming / unit testing, integration testing, system testing, and post-deployment]
Software Engineering Institute; Carnegie Mellon University; Handbook CMU/SEI-96-HB-002
Evidence of RoI for unit testing
[The same bar chart, annotated: assume $1000 unit cost per fault, 100 faults]
Software Engineering Institute; Carnegie Mellon University; Handbook CMU/SEI-96-HB-002
Unit testing and automation
Unit tests are easier to automate
Thus … more automation
Faster to produce
Cheaper
More predictable
Less repetitive work
More modularity
Less re-design
Re-verification supports evolution
Better software reduces support costs
Abstraction test design model
[Diagram: from the software artifact, a model / structure is built at the abstract level (design); test requirements and input values are derived from it; at the concrete level (implementation), these become test cases, test scripts, and test results, judged pass / fail]
Raising our level of abstraction allows more time for creative work
Ammann & Offutt, Introduction to Software Testing (2nd), 2018

Elements of an automated test

@Test (expected = ClassCastException.class)    // Expected output
public void testMutuallyIncomparable () {
    List list = new ArrayList();               // Prefix (setup) values
    list.add("cat");
    list.add("dog");
    list.add(1);                               // Test case values
    Object obj = Min.min(list);                // Postfix values
}

(The labeled elements range from abstract to concrete.)
 
Test automation context model

[Diagram: test automation comprises three activities.
Generation (what): test values, created human-based or criteria-based.
Execution (how): human scripts, C/R (GUIs), script languages, frameworks (JUnit), continuous integration (DevOps).
Management: new test, change test, delete test. Fault localization & debugging?]

Achievements in test automation (generation)
(1) The mapping problem (abstract test to concrete test)
 
Mapping example—vending machine

Abstract test: [ 1, 3, 4, 1, 2, 4 ]
Refined abstract test: AddChoc, Coin, GetChoc, Coin, AddChoc
Concrete test (mapping):
1. addChoc();
2. addCoin(v);
3. chooseChoc(c); dispense();
4. addCoin(v);
5. addChoc();
Testers implement abstract tests by hand
But each transition maps to specific concrete actions
We can automate by assembling concrete test components
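
The assembly idea can be sketched in a few lines of Java. This is only a minimal illustration, not the framework from the Li & Offutt paper cited on the next slide; the VendingMachine stub, the coin value, and the chocolate choice are placeholder assumptions.

import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class AssembledVendingTest {

    // Stand-in for the system under test; the real class is not shown in the talk
    static class VendingMachine {
        void addChoc()         { System.out.println("addChoc()"); }
        void addCoin(int v)    { System.out.println("addCoin(" + v + ")"); }
        void chooseChoc(int c) { System.out.println("chooseChoc(" + c + ")"); }
        void dispense()        { System.out.println("dispense()"); }
    }

    // Each abstract transition name maps to one reusable concrete component
    static final Map<String, Consumer<VendingMachine>> COMPONENTS = Map.of(
        "AddChoc", vm -> vm.addChoc(),
        "Coin",    vm -> vm.addCoin(25),                       // assumed coin value v
        "GetChoc", vm -> { vm.chooseChoc(1); vm.dispense(); }  // assumed choice c
    );

    // Assemble a concrete test by running the component for each abstract transition
    static void runAbstractTest(List<String> abstractTest) {
        VendingMachine vm = new VendingMachine();
        for (String transition : abstractTest) {
            COMPONENTS.get(transition).accept(vm);
        }
    }

    public static void main(String[] args) {
        runAbstractTest(List.of("AddChoc", "Coin", "GetChoc", "Coin", "AddChoc"));
    }
}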
 
Assembling test components

[Diagram: a pool of test components C1 through C7; Test 1, Test 2, and Test 3 are each assembled from a subset of those components, and individual components (such as C1, C4, and C5) are reused across tests]
Each abstract test component must be mapped to real code in concrete tests many times
Li & Offutt, Test Automation Language Framework for Behavioral Models, A-MOST 2015
 
Achievements in test automation (generation)
(2) Test oracles

Briand, Di Penta, Labiche, Assessing and Improving State-Based Class Testing, TSE 2004
Barr, Harman, McMinn, Shahbaz, & Yoo, The Oracle Problem in Software Testing: A Survey, TSE 2015
 
1980s RIP model (fault & failure)

[Diagram: the test Reaches the fault (Reachability); the fault Infects the program state, creating an error (Infection); the error Propagates to the outputs, the final program state (Propagation)]
Morell, A Theory of Fault-Based Testing, 1990 (PhD diss. 1984)
Offutt, Automatic Test Data Generation, PhD diss. 1988
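
A small illustrative example (not from the talk) makes the three conditions concrete: the test input reaches the seeded fault and infects the execution state, but the error does not propagate to the returned output, so no output-checking oracle can see it.

public class RipExample {
    // Faulty method: should return the maximum of a and b.
    // The fault (a >= b written as a > b) only matters when a == b.
    static int max(int a, int b) {
        int result;
        if (a > b) {          // FAULT: should be a >= b
            result = a;
        } else {
            result = b;
        }
        return result;
    }

    public static void main(String[] args) {
        // Reachability: the input (3, 3) executes the faulty decision.
        // Infection: control takes the "else" branch where the correct program takes "then",
        //            so the internal execution state differs from the correct program's.
        // Propagation: the returned value is still 3, identical to the correct answer,
        //              so the infection never reaches the output and the fault stays hidden.
        System.out.println(max(3, 3));   // prints 3
    }
}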
 
RIP model assumptions

Implicit assumptions:
A human tester looked at the output state
If the output was wrong, the tester saw it
No false positives
No false negatives
 
RIP and test oracles

Automated execution of test oracles (assertions) broke that assumption
Practical testers found that some output states are hard to observe
False positives were common
Li & Offutt, Test oracle strategies for model-based testing, TSE 2017
 
What makes a good test oracle?

How do we model test oracles?
Do test oracles need to observe the entire output space?
  o Arbitrarily large
Do test oracles need to observe intermediate states?
Do test oracles always return the same answer?
How do we automate test oracles when we do not know what the correct output should be?
  o A range of results is acceptable
  o Stochastic software (concurrent, decision, AI & ML, games)
  o Non-functional requirements (timing, security, usability, …)
  o Not humanly computable (scientific modeling software)
Hundreds of results on these questions!
 
RIPR model

[Diagram: the RIP model extended with Revealability. The test Reaches the fault; the fault Infects the program state (error); the error Propagates to the outputs, the final program state. The test oracle observes only part of the final output state, and Reveals the failure only if the incorrect final output lies within what it observes]
 
Good test oracles (TO) can see

A blind test does not check the portion of the output that is incorrect

@Test
public void testTwoValues()
{
    list.addFront("dog");
    list.addFront("cat");
    Object obj = list.getFirst();
    assertTrue("Two values", obj.equals("cat"));
}
Passes:
[ cat ]
[ cat, cat ]
[ cat, null ]
Should check: [ cat, dog ]
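
A non-blind version of the same test, as a minimal sketch, asserts on the whole observable list rather than only its first element; size() and getLast() are assumed to exist on this hypothetical list class.

@Test
public void testTwoValuesChecked()
{
    list.addFront("dog");
    list.addFront("cat");
    // Check the whole observable output state, not just the first element
    assertEquals(2, list.size());            // assumes the list class has size()
    assertEquals("cat", list.getFirst());
    assertEquals("dog", list.getLast());     // assumes the list class has getLast()
}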

Blind tests are a serious problem in the software industry
Li & Offutt, Test oracle strategies for model-based testing, TSE 2017
Baral & Offutt, An Empirical Analysis of Blind Tests, ICST 2020

Good test oracles (TO)

Smoke (crash) tests are blind more than half of the time
Why waste a good test? Tests are expensive to design, to implement, and to run

@Test (expected = NullPointerException.class)
public void addOneValue()
{
    list.addFront("cat");
    Object obj = list.getFirst();
}
???
Should check: [ cat ]
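
A non-blind replacement, again assuming size() on the hypothetical list class, checks the state the test built instead of merely expecting an exception.

@Test
public void addOneValueChecked()
{
    list.addFront("cat");
    // Check the resulting state [ cat ] instead of only watching for a crash
    assertEquals(1, list.size());            // assumes the list class has size()
    assertEquals("cat", list.getFirst());
}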
 
Good TOs are consistent

Sometimes tests behave differently on different runs
Google says 16% of their tests are flaky
What makes a test flaky?
 
Concurrency
Asynchronous behavior
Random inputs
Resource leaks
 
Test order dependency
Collection class assumptions
Relying on external systems
Fowler, Eradicating non-determinism in tests, online 2011
Luo, Hariri, Eloussi, Marinov, An Empirical Analysis of Flaky Tests, FSE 2014
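
As a hedged illustration of one cause from the list above, test order dependency, these two JUnit tests share mutable static state: they pass in one execution order and fail in the other, and resetting the shared state before each test removes the flakiness.

import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.Before;
import org.junit.Test;

public class OrderDependentTests {
    // Shared mutable state: the source of the order dependency
    static List<String> cache = new ArrayList<>();

    // Uncommenting this setup method removes the flakiness
    // @Before
    // public void reset() { cache.clear(); }

    @Test
    public void addsOneEntry() {
        cache.add("cat");
        assertEquals(1, cache.size());   // fails if startsEmpty did not run first and entries remain
    }

    @Test
    public void startsEmpty() {
        assertEquals(0, cache.size());   // fails if addsOneEntry happened to run first
    }
}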
 
Achievements in test automation (management)
(3) Test evolution (smart tests)

Old style tests

Values invented by humans
Scripts were pieces of paper with steps:
1. Turn on computer
2. Type: "Run myProgram"
3. Enter name: "George P. Burdell"
4. Enter age: "-25"
Simple directions to humans
Slow! Error prone! Limited repeatability!
Almost impossible to integrate criteria
These tests are primitive single-cell organisms
 
Modern not-smart tests

Test values:
  o Created by a mix of humans and test data generators
  o Satisfy well-documented goals, test criteria, or specialized domain needs
Integrated into automated test scripts (e.g., JUnit)
Includes a small amount of brain power … these tests know what results to expect (e.g., assertions)
Fast, repeatable
These multi-cellular tests show preliminary signs of intelligence

Multicellular test model

[Diagram: a test built from prefix values, test values, postfix values, and expected results]
But a modern test does not know …
  o Why is it there?
  o When should it run?
  o When should it change?
  o When should it die?
Baral, Offutt, & Mulla, Self determination: A comprehensive strategy for making automated tests more effective and efficient, ICST 2021
 
Intelligent tests

Intelligent tests need self-awareness and self-determination!
Each test should encode traceability … what it covers
Tests should check what has changed, and rerun if necessary
Tests should alert the tester when they no longer match the software
Tests should delete themselves when no longer needed
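
One minimal sketch of the traceability idea, using a hypothetical @Covers annotation plus a stored fingerprint of the covered code: a runner could skip a test whose covered code is unchanged and alert the tester when the fingerprint no longer matches. This is an illustration, not the strategy from the Baral, Offutt, & Mulla paper.

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class SelfAwareTestSketch {

    // Hypothetical traceability annotation: what the test covers
    @Retention(RetentionPolicy.RUNTIME)
    @interface Covers {
        String element();      // e.g., a requirement ID or method signature
        String fingerprint();  // hash of the covered code when the test was written
    }

    @Covers(element = "ListADT.addFront(Object)", fingerprint = "3f2a9c")
    // @Test  (JUnit annotation omitted so the sketch stands alone)
    public void addFrontKeepsOrder() {
        // ... ordinary test body ...
    }

    // A runner could use the metadata like this:
    static void decide(String currentFingerprint, Covers covers) {
        if (currentFingerprint.equals(covers.fingerprint())) {
            System.out.println("Covered code unchanged: safe to skip " + covers.element());
        } else {
            System.out.println("Covered code changed: rerun, and alert the tester if the test now fails");
        }
    }

    public static void main(String[] args) throws Exception {
        Covers covers = SelfAwareTestSketch.class
            .getDeclaredMethod("addFrontKeepsOrder")
            .getAnnotation(Covers.class);
        decide("3f2a9c", covers);   // unchanged fingerprint, so the runner could skip the test
    }
}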
 
 
Code evolution—test changes

Original method:  concatNames(s1, s2): fName = s1; lName = s2; return lName + " " + fName
Changed method:   concatNames(s1, s2): fName = s1; lName = s2; return fName + " " + lName
Original test:    result = concatNames("Anita", "Borg"); assertTrue(result == "Borg Anita")
Changed test:     result = concatNames("Anita", "Borg"); assertTrue(result.contains("Anita") && result.contains("Borg"))

Challenges and problems in test automation
 
The Web changed everything

High quality became essential … reliability, usability, maintainability, …
Distribution became free … web deployment
Support became cheap … we search the Web
Continuous updates … "perfect out of the box" is outdated
The Web and agile processes resuscitated evolutionary design
Evolutionary design requires continuous (automated) testing
 
TA challenges and problems

Non-deterministic software: we cannot know the result a priori
Instead of "correct behavior," we need acceptable behavior
Examples: games, scientific modeling, non-deterministic, AI & ML, …
How to build automated tests?
  o Inputs are often very complicated
  o Test oracles—JUnit assertions are not enough
  o Metamorphic testing is a great idea—but we need more ideas
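
A hedged sketch of the metamorphic idea: even when we cannot state the correct output of a numeric routine, we can assert a relation that two related runs must satisfy. The stand-in mySin function and the tolerance are assumptions, not part of the talk.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MetamorphicSketch {

    // Pretend this is a numeric routine whose exact correct output we cannot compute by hand
    static double mySin(double x) {
        return Math.sin(x);   // stand-in implementation
    }

    // Metamorphic relation: sin(x) == sin(pi - x), checkable without knowing sin(x) itself
    @Test
    public void sineSymmetryRelation() {
        for (double x = 0.0; x < 1.5; x += 0.1) {
            assertEquals(mySin(x), mySin(Math.PI - x), 1e-12);
        }
    }
}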
 
TA challenges and problems

Test oracles: we need to automate generation
TOs are still usually created by hand
Old, current, and new approaches:
  o Formal specifications—lots of research, but limited use
  o Aggregating unit-level TOs into system-level TOs
  o Impact analysis to identify which parts of output state to check
  o Screen capturing approaches
  o Machine learning to "guess" expected results?
  o Leverage TDD tests to create expected results for similar tests?
  o Mutate test values, then design corresponding TO mutations?
 
TA challenges and problems

Testing mobile apps: many details are very different
Test automation tools (execution) are very slow
Most inputs are through the screen with funny gestures and auto-fills
  o Challenging to automate
Every mobile device has its own ecosystem
  o How to model these ecological communities?
  o How to test entanglements with other apps?
What does a test oracle look like in a mobile app?
 
TA challenges and problems

Non-behavioral properties
We need effective techniques to automate the testing of non-behavioral properties:
  o Timeliness … automatic execution of timely tests is complex
  o Accuracy … if software approximates a solution, how can we evaluate how accurate its results are?
  o Usability … human studies are usually done by hand … can we create a model that can empower automation?
  o Ethical and equitable behavior … can we define this generally and specifically enough to support automation?
 
TA challenges and problems

Smart tests: we have only scratched the surface
Can the smart test approach be expanded to arbitrary models and coverage?
Can NLP be used to connect requirements to code?
How to build a full test framework where tests are truly self-aware?
  o Agent-based software might be an effective model
Can such a framework be fully implemented into an IDE?
 
TA challenges and problems

Compiler-integrated testing: tests should run when we compile
When I learned to program … compile … integrate … run … repeat
Each step took multiple iterations as automated tools (compilers, IDEs, make) helped us find problems
IDEs should treat behavioral checking exactly the same as syntax checking:
  o Compile … integrate … generate tests … generate TOs … run tests
And the IDE should use automatic program repair to automatically fix many of the faults it finds
 
Takeaways

(1) Test automation has enormous impact
[Word cloud: test scripts, test oracles, capture replay, JUnit, agile & TDD, continuous integration, model-based testing, test suite management]
 
Takeaways

(2) Good models empower good research

Takeaways

(3) We have barely started!
Every single research result reveals two more problems we need to solve

Jeff Offutt
cs.gmu.edu/~offutt
"Computer science amuses and fascinates us, but automation changes the world."
(paraphrasing Isaac Asimov)


