SE 433/333 Software Testing & Quality Assurance

SE 433/333 Software Testing
& Quality Assurance
Dennis Mumaugh, Instructor
dmumaugh@depaul.edu
Office: CDM, Room 428
Office Hours: Tuesday, 4:00 – 5:30
May 30, 2017
SE 433: Lecture 10
1 of 87
Administrivia
Comments and feedback
Assignment 1-8 solutions are posted; Assignment 9 on Wednesday
Assignments and exams:
Assignment 9: Due May 30
Final exam: June 1-7
Take home exam/paper: June 6 [For SE433 students only]
Final examination
Will be on Desire2Learn.
June 1 to June 7
SE 433 – Class 10
Topics:
Test Plans
Interview questions: aka Review
Statistics and metrics.
Miscellaneous
Reading:
Pezze and Young,  Chapters 20, 23, 24
Articles on the Reading List
Assignments
Assignment 9: Due May 30
Final exam: June 1-7
Take home exam/paper: June 6 [For SE433 students only]
Thought for the Day
“Program testing can be a very effective way to show the
presence of bugs, but is hopelessly inadequate for
showing their absence.”
– Edsger Dijkstra
Fundamental Questions in Testing
When can we stop testing?
Test coverage
What should we test?
Test generation
Is the observed output correct?
Test oracle
How well did we do?
Test efficiency
Who should test your program?
Independent V&V
Test Plans
 
The most common question I hear about testing is:
“How do I write a test plan?”
This question usually comes up when the focus is on the
document, not the contents
It’s the contents that are important, not the structure
Good testing is more important than proper documentation
However – documentation of testing can be very helpful
Most organizations have a list of topics, outlines, or
templates
Test Plan
Objectives
To create a set of testing tasks.
Assign resources to each testing task.
Estimate completion time for each testing task.
Document testing standards.
Good Test Plans
Developed and Reviewed early.
Clear, Complete and Specific
Specifies tangible deliverables that can be inspected.
Staff knows what to expect and when to expect it.
Realistic quality levels for goals
Includes time for planning
Can be monitored and updated
Includes user responsibilities
Based on past experience
Recognizes learning curves
Standard Test Plan
ANSI / IEEE Standard 829-1983 is ancient but still used:
Test Plan
A document describing the scope, approach, resources, and
schedule of intended testing activities. It identifies test items,
the features to be tested, the testing tasks, who will do each
task, and any risks requiring contingency planning.
Many organizations are required to adhere to this standard
Unfortunately, this standard emphasizes documentation, not
actual testing – often resulting in a well documented vacuum
 
Test Planning and Preparation
Major testing activities:
Test planning and preparation
Execution (testing)
Analysis and follow-up
 Test planning:
Goal setting
Overall strategy
 Test preparation:
Preparing test cases & test suite(s)
   (systematic: model-based; our focus)
Preparing test procedure
Test Planning: Goal setting and strategic planning
Goal setting
Quality perspectives of the customer
Quality expectations of the customer
Mapping to internal goals and concrete (quantified) measurement.
Example: customer's correctness concerns => specific reliability
target
 Overall strategy, including:
Specific objects to be tested.
Techniques (and related models) to use.
Measurement data to be collected.
Analysis and follow-up activities.
Key: Plan the “whole thing”!
The test plan
Allocate resources
affects specific models and techniques chosen
simple models based on checklists and partitions require fewer resources
Generic steps and activities in test model construction
information source identification and data collection (in-field or
anticipated usage? code?)
analysis and initial model construction
model validation and incremental improvement
Types of Test Plans
Mission plan – tells why
Usually one mission plan per organization or group
Least detailed type of test plan
Strategic plan – tells what and when
Usually one per organization, or perhaps for each type of project
General requirements for coverage criteria to use
Tactical plan – tells how and who
One per product
More detailed
Living document, containing test requirements, tools, results and
issues such as integration order
Test documentation
 
Test documentation
Test plans – Outline how your application will be tested in
detail
Test Plan
What: a document describing the scope, approach, resources and
schedule of intended testing activities; identifies test items, the
features to be tested, the testing tasks, who will do each task and
any risks requiring contingency planning;
Who: Software Testing;
When: (planning)/design/coding/testing stage(s);
Test documentation
Test Plan (cont’d)
Why:
»
Divide responsibilities between teams involved; if more than one
Software Testing team is involved (i.e., manual / automation, or
English / Localization) – responsibilities between Software
Testing teams ;
»
Plan for test resources / timelines ;
»
Plan for test coverage;
»
Plan for OS / DB / software deployment and configuration models
coverage.
Software Testing role:
»
Create and maintain the document;
»
Analyze for completeness;
»
Have it reviewed and signed by Project Team leads/managers.
Test documentation
Test Case
What: a set of inputs, execution preconditions and expected
outcomes developed for a particular objective, such as exercising a
particular program path or verifying compliance with a specific
requirement;
Who: Software Testing;
When: (planning)/(design)/coding/testing stage(s);
Why:
»
Plan test effort / resources / timelines;
»
Plan / review test coverage;
»
Track test execution progress;
»
Track defects;
»
Track software quality criteria / quality metrics;
»
Unify Pass/Fail criteria across all testers;
»
Planned/systematic testing vs. Ad-Hoc.
Test documentation
Test Case (cont’d)
Five required elements of a Test Case:
»
ID – unique identifier of a test case;
»
Features to be tested / steps / input values – what you need to do;
»
Expected result / output values – what you are supposed to get
from application;
»
Actual result – what you really get from application;
»
Pass / Fail.
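The five required elements map naturally onto a small data type. This is an illustrative Java sketch, not a standard format; every name in it (TestCaseDemo, Verdict, the sample values) is made up for the example.

```java
// Illustrative sketch: the five required elements of a test case as a record.
// All names here are hypothetical, not from IEEE 829 or any tool.
public class TestCaseDemo {
    enum Verdict { PASS, FAIL }

    record TestCase(String id,          // unique identifier
                    String steps,       // features to be tested / steps / input values
                    String expected,    // expected result / output values
                    String actual,      // actual result from the application
                    Verdict verdict) {} // Pass / Fail

    // A test case passes exactly when the actual output matches the expected output.
    static Verdict judge(String expected, String actual) {
        return expected.equals(actual) ? Verdict.PASS : Verdict.FAIL;
    }

    public static void main(String[] args) {
        TestCase tc = new TestCase("TC-001", "login with valid password",
                                   "dashboard shown", "dashboard shown",
                                   judge("dashboard shown", "dashboard shown"));
        System.out.println(tc.id() + ": " + tc.verdict());
    }
}
```

Keeping the expected and actual results as separate fields is what lets the Pass/Fail criterion be uniform across all testers.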
Test documentation
Test Case (cont’d)
Optional elements of a Test Case:
»
Title – verbal description indicative of test case objective;
»
Goal / objective – primary verification point of the test case;
»
Project / application ID / title – for TC classification / better
tracking;
»
Functional area – for better TC tracking;
»
Bug numbers for Failed test cases – for better error / failure
tracking (ISO 9000);
»
Positive / Negative class – for test execution planning;
»
Manual / Automatable / Automated parameter etc. – for planning
purposes;
»
Test Environment.
Test documentation
Test Case (cont’d)
 Inputs:
»
Through the UI;
»
From interfacing systems or devices;
»
Files;
»
Databases;
»
State;
»
Environment.
Outputs:
»
To UI;
»
To interfacing systems or devices;
»
Files;
»
Databases;
»
State;
»
Response time.
 
 
Test documentation
Test Case (cont’d)
Format – follow company standards; if no standards – choose the
one that works best for you:
»
MS Word document;
»
MS Excel document;
»
Memo-like paragraphs (MS Word, Notepad, Wordpad).
Classes:
»
Positive and Negative;
»
Functional, Non-Functional and UI;
»
Implicit verifications and explicit verifications;
»
Systematic testing and ad-hoc;
Test documentation
Test Suite
A document specifying a sequence of actions for the execution of
multiple test cases;
Purpose: to put the test cases into an executable order, although
individual test cases may have an internal set of steps or
procedures;
Typically manual; if automated, it is typically referred to as a test script
(though manual procedures can also be a type of script);
Multiple Test Suites need to be organized into some sequence – this
defines the order in which the test cases or scripts are to be run,
what the timing considerations are, who should run them, etc.
 
 
Elements of a test plan 1
Title
Identification of software (incl. version/release #s)
Revision history of document (incl. authors, dates)
Table of Contents
Purpose of document, intended audience
Objective of testing effort
Software product overview
Relevant related document list, such as requirements,
design documents, other test plans, etc.
Relevant standards or legal requirements
Traceability requirements
Elements of a test plan 2
Relevant naming conventions and identifier conventions
Overall software project organization and personnel/contact-
info/responsibilities
Test organization and personnel/contact-info/responsibilities
Assumptions and dependencies
Project risk analysis
Testing priorities and focus
Scope and limitations of testing
Test outline - a decomposition of the test approach by test
type, feature, functionality, process, system, module, etc. as
applicable
Outline of data input equivalence classes, boundary value
analysis, error classes
Elements of a test plan 3
Test environment - hardware, operating systems, other
required software, data configurations, interfaces to other
systems
Test environment validity analysis - differences between the
test and production systems and their impact on test validity.
Test environment setup and configuration issues
Software migration processes
Software CM processes
Test data setup requirements
Database setup requirements
Outline of system-logging/error-logging/other capabilities,
and tools such as screen capture software, that will be used
to help describe and report bugs
Elements of a test plan 4
Discussion of any specialized software or hardware tools
that will be used by testers to help track the cause or source
of bugs
Test automation - justification and overview
Test tools to be used, including versions, patches, etc.
Test script/test code maintenance processes and version
control
Problem tracking and resolution - tools and processes
Project test metrics to be used
Reporting requirements and testing deliverables
Software entrance and exit criteria
Initial sanity testing period and criteria
Test suspension and restart criteria
Elements of a test plan 5
Personnel allocation
Personnel pre-training needs
Test site/location
Outside test organizations to be utilized and their purpose,
responsibilities, deliverables, contact persons, and
coordination issues
Relevant proprietary, classified, security, and licensing
issues.
Open issues
Appendix - glossary, acronyms, etc.
Test Plan Contents – System Testing
Introduction
Test items
Features tested
Features not tested
Test criteria
Pass / fail standards
Criteria for starting testing
Criteria for suspending testing
Requirements for testing
restart
Hardware and software
requirements
Responsibilities for severity
ratings
Staffing & training needs
Test schedules
Risks and contingencies
Approvals
Purpose
Target audience and application
Deliverables
Information included:
Test Plan Contents – Tactical Testing
Purpose
Outline
Test-plan ID
Introduction
Test reference items
Features that will be tested
Features that will not be tested
Approach to testing (criteria)
Criteria for pass / fail
Criteria for suspending testing
Criteria for restarting testing
Test deliverables
Testing tasks
Environmental needs
Responsibilities
Staffing & training needs
Schedule
Risks and contingencies
Approvals
 
Interview Questions and
Answers
http://www.careerride.com/Testing-frequently-asked-questions.aspx
Interview Questions
What is difference between QA, QC and Software Testing?
What is verification and validation?
Explain Branch Coverage and Decision Coverage.
What is pair-wise programming and why is it relevant to software
testing?
Why is testing software using concurrent programming hard? What are
races and why do they affect system testing?
Phase in detecting defect: During a software development project two
similar requirements defects were detected. One was detected in the
requirements phase, and the other during the implementation phase.
Why do we measure defect rates and what can they tell us?
What is Static Analysis?
What is difference between QA, QC and Software Testing?
Quality Assurance (QA): QA refers to the planned and systematic way of monitoring the quality of the process which is followed to produce a quality product. QA tracks the outcomes and adjusts the process to meet the expectation.
QA is not just testing.
Quality Control (QC): concerned with the quality of the product. QC finds the defects and suggests improvements. The process set by QA is implemented by QC. QC is the responsibility of the tester.
Software Testing: the process of ensuring that the product developed by the developer meets the user requirements. The motive for performing testing is to find bugs and make sure they get fixed.
Verification And Validation
What is verification and validation?
Verification: process of evaluating work-products of a development phase to determine whether they meet the specified requirements for that phase.
Validation: process of evaluating software during or at the end of the development process to determine whether it meets specified requirements.
Branch Coverage and Decision Coverage
Explain Branch Coverage and Decision Coverage.
Branch Coverage is testing performed in order to ensure that every branch of the software is executed at least once.
To perform branch coverage testing we take the help of the Control Flow Graph.
Decision coverage testing ensures that every decision-making statement is executed at least once.
Both decision and branch coverage testing are done to assure the
tester that no branch or decision-making statement will lead to
failure of the software.
To Calculate Branch Coverage:
Branch Coverage = Tested Decision Outcomes / Total Decision Outcomes
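The formula can be made concrete with a small worked sketch. The `classify` method and its inputs below are invented for illustration; each `if` contributes two decision outcomes (taken and not taken).

```java
// Sketch: applying Branch Coverage = tested decision outcomes / total decision outcomes.
public class BranchCoverageDemo {
    static double branchCoverage(int testedOutcomes, int totalOutcomes) {
        return (double) testedOutcomes / totalOutcomes;
    }

    // Example unit under test: two if-statements => 4 decision outcomes,
    // since each decision has a true outcome and a false outcome.
    static String classify(int x) {
        String s = "";
        if (x > 0)      s += "pos";
        if (x % 2 == 0) s += "even";
        return s;
    }

    public static void main(String[] args) {
        // Inputs 4 and -3 together exercise all four outcomes:
        //   x = 4:  (x > 0) true,  (x % 2 == 0) true
        //   x = -3: (x > 0) false, (x % 2 == 0) false
        classify(4); classify(-3);
        System.out.println(branchCoverage(4, 4)); // 1.0 => full branch coverage
        // A suite containing only x = 4 covers just the two "true" outcomes:
        System.out.println(branchCoverage(2, 4)); // 0.5
    }
}
```
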
Pair-Wise Programming
What is pair-wise programming and why is it relevant to software testing?
Concept used in Extreme Programming (XP)
Coding is the key activity throughout a software project
Life cycle and behavior of complex objects defined in test cases – again
in code
XP Practices
Testing – programmers continuously write unit tests; customers write
tests for features
Pair-programming –  all production code is written with two
programmers at one machine
Continuous integration – integrate and build the system many times
a day – every time a task is completed.
Mottos
Communicate intensively
Test a bit, code a bit, test a bit more
Testing and Concurrent Programming
Why is testing software using concurrent programming
hard? What are races and why do they affect system
testing?
Concurrency: “Two or more sequences of events occur in parallel”
Testing Concurrent Programs is Hard
Concurrency bugs triggered non-deterministically
Prevalent testing techniques ineffective
A race condition is a common concurrency bug
Two threads can simultaneously access a memory
location
At least one access is a write
Race Conditions
A race condition occurs when the value of a variable depends
on the execution order of two or more concurrent processes
(why is this bad?)
Example
procedure signup(person)
begin
    number := number + 1;
    list[number] := person;
end;

signup(joe) || signup(bill)
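The pseudocode above can be sketched in Java. This is an illustrative sketch, not from the slides: `signupUnsafe` reproduces the racy two-step update (two threads can interleave between the increment and the array write, losing a signup), and a `synchronized` variant shows one standard fix.

```java
// Sketch of the slide's signup race in Java: two threads increment a shared
// counter and write into a shared list. Names and sizes are illustrative.
public class SignupRace {
    static final String[] list = new String[16];
    static int number = 0;

    // Unsafe version: the read-modify-write of 'number' and the array store
    // are not atomic, so interleavings can overwrite a signup or skip a slot.
    static void signupUnsafe(String person) {
        number = number + 1;
        list[number] = person;
    }

    // Safe version: the whole update is one critical section.
    static synchronized void signupSafe(String person) {
        number = number + 1;
        list[number] = person;
    }

    // Run two threads that each perform 'signupsPerThread' safe signups.
    static int runSafe(int signupsPerThread) {
        number = 0;
        Thread a = new Thread(() -> { for (int i = 0; i < signupsPerThread; i++) signupSafe("joe"); });
        Thread b = new Thread(() -> { for (int i = 0; i < signupsPerThread; i++) signupSafe("bill"); });
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return number;
    }

    public static void main(String[] args) {
        System.out.println(runSafe(5)); // prints 10 -- deterministic once synchronized
    }
}
```

With `signupUnsafe` the final count depends on the interleaving, which is exactly why such bugs surface non-deterministically under test.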
What is Static Analysis?
The term "static analysis" is overloaded, but here we use it to
mean a collection of algorithms and techniques used to
analyze source code in order to automatically find bugs.
The idea is similar in spirit to compiler warnings (which can
be useful for finding coding errors) but to take that idea a
step further and find bugs that are traditionally found using
run-time debugging techniques such as testing.
Static analysis bug-finding tools have evolved over the last
several decades from basic syntactic checkers to those that
find deep bugs by reasoning about the semantics of code.
Defect Costs
Questions:
When you find one, how much will it cost to fix?
»
How much depends on when the defect was created vs.
when you found it?
Just how many do you think are in there to start with?!
The cost of fixing a defect rises exponentially by lifecycle phase
But this is simplistic
When were the defects injected?
Are all defects treated the same?
Do we reduce costs by getting better at fixing or at prevention?
 
Defect Costs
[Figure: the cost of fixing a defect rises with the lifecycle phase in which it is found.]
Software Quality Assurance
 
QA & Testing
Testing Phases
Unit
Integration
System
User Acceptance Testing
Testing Types
Black-box
White-box
Static vs. Dynamic Testing
Automated Testing
Pros and cons
Integration: 2 types
Top down
Bottom up
Quality Assurance (QA)
Definition – What does Quality Assurance (QA) mean?
Quality assurance (QA) is the process of verifying whether a
product meets required specifications and customer
expectations. QA is a process-driven approach that
facilitates and defines goals regarding product design,
development and production. QA's primary goal is tracking
and resolving deficiencies prior to product release.
The QA concept was popularized during World War II.
Software Quality Assurance
Software quality assurance (SQA) is a process that ensures that developed software meets and complies with defined or standardized quality specifications.
SQA is an ongoing process within the software development life cycle (SDLC) that routinely checks the developed software to ensure it meets desired quality measures.
Software Quality Assurance
The area of Software Quality Assurance can be broken
down into a number of smaller areas such as
Quality of planning,
Formal technical reviews,
Testing
and
Training.
Quality Control
“Quality must be built in at the design stage. It may be too late once plans are on their way.”
– W. Edwards Deming
Role of the SQA Group  – I
Form a Software Quality Assurance Group
Prepares an SQA plan for a project.
The plan identifies
»
evaluations to be performed
»
audits and reviews to be performed
»
standards that are applicable to the project
»
procedures for error reporting and tracking
»
procedures for change management
»
documents to be produced by the SQA group
»
amount of feedback provided to the software project team
Participates in the development of the project's software process description.
The SQA group reviews the process description for compliance with
organizational policy, internal software standards, externally imposed
standards (e.g., ISO-9001), and other parts of the software project
plan.
Role of the SQA Group  – II
Reviews software engineering activities to verify compliance with the defined software process.
identifies, documents, and tracks deviations from the process
and verifies that corrections have been made.
Audits designated software work products to verify compliance with those defined as part of the software process.
reviews selected work products; identifies, documents, and
tracks deviations; verifies that corrections have been made
periodically reports the results of its work to the project manager.
Ensures that deviations in software work and work products are documented and handled according to a documented procedure.
Records any noncompliance and reports to senior management.
Noncompliance items are tracked until they are resolved.
Statistical Software Quality Assurance
Statistical quality assurance implies the following steps:
Information about software defects is collected and
categorized.
An attempt is made to trace each defect to its underlying
cause (e.g., non-conformance to specifications, design
error, violation of standards, poor communication with the
customer).
Using the Pareto principle (80 percent of the defects can be
traced to 20 percent of all possible causes), isolate the 20
percent (the "vital few").
Once the vital few causes have been identified, move to
correct the problems that have caused the defects.
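The Pareto step above can be sketched in code. This is an illustrative Java sketch; the cause names and counts are invented, and `rankCauses`/`vitalFew` are hypothetical helper names.

```java
// Sketch: the Pareto step of statistical SQA -- group defects by underlying
// cause and isolate the "vital few" causes accounting for most defects.
import java.util.*;

public class ParetoDefects {
    // Causes sorted by descending defect count.
    static List<Map.Entry<String, Integer>> rankCauses(Map<String, Integer> defectsByCause) {
        List<Map.Entry<String, Integer>> ranked = new ArrayList<>(defectsByCause.entrySet());
        ranked.sort((a, b) -> b.getValue() - a.getValue());
        return ranked;
    }

    // Smallest prefix of top causes covering at least the given fraction of defects.
    static List<String> vitalFew(Map<String, Integer> defectsByCause, double fraction) {
        int total = defectsByCause.values().stream().mapToInt(Integer::intValue).sum();
        List<String> vital = new ArrayList<>();
        int covered = 0;
        for (Map.Entry<String, Integer> e : rankCauses(defectsByCause)) {
            vital.add(e.getKey());
            covered += e.getValue();
            if (covered >= fraction * total) break;
        }
        return vital;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = Map.of(
            "non-conformance to spec", 55, "design error", 25,
            "violation of standards", 12, "poor communication", 8);
        // Two of four causes already cover 80% of the defects here.
        System.out.println(vitalFew(counts, 0.80));
    }
}
```
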
Metrics
 
Software Reliability
Defined as the probability of failure free operation
of a computer program in a specified environment
for a specified time period
Can be measured directly and estimated using
historical and developmental data (unlike many
other software quality factors)
Software reliability problems can usually be traced
back to errors in design or implementation.
Software Reliability Metrics
Reliability metrics are units of measure for system
reliability
System reliability is measured by counting the
number of operational failures and relating these to
demands made on the system at the time of failure
A long-term measurement program is required to
assess the reliability of critical systems
Reliability Metrics - part 1
Probability of Failure on Demand (POFOD)
POFOD = 0.001
One out of every 1000 service requests results in failure
Rate of Fault Occurrence (ROCOF)
ROCOF = 0.02
Two failures for each 100 operational time units of
operation
Reliability Metrics - part 2
Mean Time to Failure (MTTF)
average time between observed failures (aka MTBF)
Availability = MTBF / (MTBF+MTTR)
MTBF = Mean Time Between Failure
MTTR = Mean Time to Repair
Reliability = MTBF / (1+MTBF)
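The metrics on these two slides reduce to simple ratios. A minimal Java sketch, with illustrative sample values:

```java
// Sketch: the reliability metrics from the last two slides as code.
public class ReliabilityMetrics {
    // POFOD: fraction of service requests that fail.
    static double pofod(int failures, int requests) {
        return (double) failures / requests;
    }

    // ROCOF: failures per operational time unit.
    static double rocof(int failures, double timeUnits) {
        return failures / timeUnits;
    }

    // Availability = MTBF / (MTBF + MTTR)
    static double availability(double mtbf, double mttr) {
        return mtbf / (mtbf + mttr);
    }

    public static void main(String[] args) {
        System.out.println(pofod(1, 1000));      // 0.001 -- the slide's POFOD example
        System.out.println(rocof(2, 100));       // 0.02  -- the slide's ROCOF example
        System.out.println(availability(98, 2)); // 0.98
    }
}
```
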
Time Units
Raw Execution Time
non-stop system
Calendar Time
If the system has regular usage patterns
Number of Transactions
demand type transaction systems
Defect Metrics
 
Defect Metrics
 
Why do we measure defects? Why do we track the defect
count when monitoring the execution of software projects?
What does this tell us?
Defect counts indicate how well the system is implemented
and how effectively testing is finding defects.
Defect Metrics
 
These are very important to the PM
Number of outstanding defects
Ranked by severity
»
Critical, High, Medium, Low
»
Showstoppers
Opened vs. closed
Defect Tracking
Fields
State: open, closed, pending
Date created, updated, closed
Description of problem
Release/version number
Person submitting
Priority: low, medium, high, critical
Comments: by QA, developer, other
Defect Metrics
 
Open Bugs (outstanding defects)
Ranked by severity
Open Rates
How many new bugs over a period of time
Close Rates
How many closed (fixed or resolved) over that same
period
Ex: 10 bugs/day
Change Rate
Number of times the same issue updated
Fix Failed Counts
Fixes that didn’t really fix (still open)
One measure of “vibration” in the project
Defect Distribution By Status And Phase
What is it?
Why is it important?
A persistently problematic section of code or unit within the program
may indicate some deeper concerns regarding the functionality of the
overall product.
Defect Rates
In general, defect rate is the number of defects over the
opportunities for errors during a specified time frame
Defect rate found during formal machine testing is usually
positively correlated with defect rate experienced in the field
Tracking defects and defect rates allows us to determine the quality
of the product and how mature it is.
Defect Metrics
 
Why do we measure defects? Why do we track the defect count when
monitoring the execution of software projects? What does this tell us?
Defect counts indicate how well the system is implemented and how
effectively testing is finding defects.
Low defect counts may mean that testing is not uncovering defects.
Defect counts that continue to be high over time may indicate a larger problem,
»
inaccurate requirements, incomplete design and coding,
premature testing, lack of application knowledge, or an inadequately
trained team.
Defect trends provide a basis for deciding when testing has
completed. When the number of defects found falls dramatically, given a
constant level of testing, the product is becoming stable and moving to
the next phase is feasible. Look at the next slides.
The Rayleigh Model
Represents the back-end formal testing phase
Special case of the Weibull distribution family which has
been widely used for reliability studies
Supported by a large body of empirical data, software
projects were found to follow a life cycle pattern described
by the Rayleigh curve
Used for both resource and staffing profiles and defect
discovery/removal patterns
The Rayleigh Distribution
If we graph defects over time, they will show a Rayleigh distribution:
Rc = (t / k²) · e^(−t² / (2k²))
where k is a constant representing the time at which defects peak.
Note the tail of the distribution.
We see this same curve in other areas as well, specifically in reliability and quality.
[Figure: defect frequency plotted against time, rising to a peak at t = k and then decaying.]
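The Rayleigh curve can be checked numerically. This sketch assumes the standard Rayleigh density, which peaks at t = k; the value of k and the "weeks" interpretation are illustrative.

```java
// Sketch: the Rayleigh defect-arrival curve, whose maximum is at t = k.
public class RayleighCurve {
    // Rayleigh density with mode (peak) at t = k.
    static double defectRate(double t, double k) {
        return (t / (k * k)) * Math.exp(-(t * t) / (2 * k * k));
    }

    public static void main(String[] args) {
        double k = 10.0; // defects peak at week 10 (illustrative)
        // The curve rises to its maximum at t = k, then decays with a long tail.
        System.out.println(defectRate(5, k)  < defectRate(10, k)); // true: still rising
        System.out.println(defectRate(20, k) < defectRate(10, k)); // true: past the peak
        System.out.println(defectRate(30, k) > 0);                 // true: the long tail
    }
}
```

The long tail is the practical point: defect discovery falls off after the peak but never reaches exactly zero, which is why "when to stop testing" is a judgment call.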
Stopping Testing
 
When do you stop?
Rarely are all defects “closed” by release
Shoot for all Critical/High/Medium defects
Often, occurs when time runs out
Final Sign-off (see also User Acceptance Testing)
By: customers, engineering, product mgmt.,
Validation Metrics
 
Metrics are Needed to Answer the Following Questions
How much time is required to find bugs, fix them, and verify
that they are fixed?
How much time has been spent actually testing the product?
How much of the code is being exercised?
Are all of the product’s features being tested?
How many defects have been detected in each software
baseline?
What percentage of known defects is fixed at release?
How good a job of testing are we doing?
Find-Fix Cycle Time
Find-Fix Cycle Time Includes Time Required to:
Find a potential bug by executing a test
Submit a problem report to the software engineering group
Investigate the problem report
Determine corrective action
Perform root-cause analysis
Test the correction locally
Conduct a mini code inspection on changed modules
Incorporate corrective action into new baseline
Release new baseline to system test
Perform regression testing to verify that the reported
problem is fixed and the fix hasn’t introduced new problems
Cumulative Test Time
The total amount of time spent actually testing the product
measured in test hours
Provides an indication of product quality
Is used in computing software reliability growth (the
improvement in software reliability that results from
correcting faults in the software)
Test Coverage Metrics
Code Coverage (How much of the code is being exercised?)
Segment coverage (percentage of segments hit)
»
Every (executable) statement is in some segment
»
A segment corresponds to an edge in a program’s directed graph
»
Segment coverage is especially useful during unit and integration
testing
»
Segment coverage is cumulative
»
A goal of 85% is a practical coverage value
Call-pair coverage (percentage of call pairs hit)
»
An interface whereby one module invokes another
»
A goal of 100% is a practical coverage value
Requirements coverage (Are all the product’s features being
tested?)
The percentage of requirements covered by at least one test
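All three coverage metrics above are a hit/total percentage. A minimal sketch: the 85% segment and 100% call-pair goals come from the slide, while the counts themselves are invented for illustration:

```python
def coverage_pct(hit: int, total: int) -> float:
    """Generic coverage percentage: items hit / total items * 100.
    An empty universe is treated as trivially covered."""
    return 100.0 * hit / total if total else 100.0

# Illustrative counts (not from the lecture):
segment_cov = coverage_pct(hit=170, total=200)     # 85.0 -- meets the 85% goal
call_pair_cov = coverage_pct(hit=48, total=50)     # 96.0 -- short of the 100% goal
requirements_cov = coverage_pct(hit=40, total=44)  # requirements with >= 1 test

assert segment_cov >= 85.0    # practical segment-coverage goal reached
assert call_pair_cov < 100.0  # call-pair goal not yet met
```

In a real project the hit/total counts would come from a coverage tool's report rather than hand-entered numbers.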
May 30, 2017
SE 433: Lecture 10
72 of 87
Quality Metrics
Defect removal percentage
What percentage of known defects is fixed at release?
[Number of bugs fixed prior to release/ Number of known bugs prior
to release] x 100
Defects reported in each baseline
Can be used to help make decisions regarding process
improvements, additional regression testing, and ultimate release of
the software
Defect detection efficiency
How well are we performing testing?
[Number of unique defects we find / (Number of unique defects we
find + Number of unique defects reported by customers)] x 100
Can be used to help make decisions regarding release of the final
product and the degree to which your testing is similar to actual
customer use
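The two formulas on this slide translate directly into code. A small sketch with invented defect counts:

```python
def defect_removal_pct(fixed_before_release: int, known_before_release: int) -> float:
    """Percentage of known defects fixed at release, per the slide:
    [bugs fixed prior to release / known bugs prior to release] * 100."""
    return 100.0 * fixed_before_release / known_before_release

def defect_detection_efficiency(found_by_team: int, found_by_customers: int) -> float:
    """Per the slide: [unique defects we find /
    (unique defects we find + unique defects reported by customers)] * 100."""
    return 100.0 * found_by_team / (found_by_team + found_by_customers)

# Illustrative numbers (not from the lecture):
print(defect_removal_pct(180, 200))          # 90.0
print(defect_detection_efficiency(180, 20))  # 90.0
```

A detection efficiency well below 100% suggests testing that differs from actual customer use, which is exactly the release-decision signal the slide describes.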
May 30, 2017
SE 433: Lecture 10
73 of 87
Coding and Testing Tools
 
May 30, 2017
SE 433: Lecture 10
74 of 87
Fundamental Questions in Testing
When can we stop testing?
Test coverage
What should we test?
Test generation
Is the observed output correct?
Test oracle
How well did we do?
Test efficiency
Who should test your program?
Independent V&V
May 30, 2017
SE 433: Lecture 10
75 of 87
Style checkers / Defect finders / Quality scanners
Compare code (usually source) to set of pre-canned “style”
rules or probable defects
Goal:
Make it easier to understand/modify code
Avoid common defects/mistakes, or patterns likely to lead to them
Some try to have a low false-positive (FP) rate
Don’t report something unless it’s a defect
May 30, 2017
SE 433: Lecture 10
76 of 87
Free Metric Tools for Java
JCSC
CheckStyle
JDepend
JavaNCSS – Non-Commented Source Statements counter
JMT
Eclipse plug-in
May 30, 2017
SE 433: Lecture 10
77 of 87
JCSC
JCSC is a tool that checks Java source code against a highly
configurable coding standard and flags potentially bad code.
The standard covers:
naming conventions for classes, interfaces, fields, parameters, etc.
the structural layout of the type (class/interface)
weaknesses in the code -- potential bugs -- such as empty
catch/finally blocks, switch without default, throwing of type
'Exception', slow code, etc.
It can be downloaded at: http://jcsc.sourceforge.net/
May 30, 2017
SE 433: Lecture 10
78 of 87
CheckStyle
Checkstyle is a development tool to help programmers write Java code that adheres to a coding standard.
It automates the process of checking Java code to spare humans of
this boring (but important) task.
This makes it ideal for projects that want to enforce a coding
standard.
Checkstyle is highly configurable and can be made to
support almost any coding standard.
It can be used as:
An ANT task.
A command line tool.
It can be downloaded at:  http://checkstyle.sourceforge.net/
May 30, 2017
SE 433: Lecture 10
79 of 87
Features
Javadoc Comments
Naming Conventions
Headers
Imports
Size Violations
Whitespace
Modifiers
Blocks
Coding Problems
Class Design
Duplicate Code
Metrics Checks
Miscellaneous Checks
Optional Checks
The things that Checkstyle can check for are:
May 30, 2017
SE 433: Lecture 10
80 of 87
Metrics Checks
BooleanExpressionComplexity
ClassDataAbstractionCoupling
ClassFanOutComplexity
CyclomaticComplexity
NPathComplexity
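As a rough illustration of what the last two checks measure (the formulas are McCabe's and Nejmeh's standard definitions, not taken from Checkstyle's documentation): cyclomatic complexity counts binary decision points plus one, while NPATH multiplies the path counts of sequential statements:

```python
def cyclomatic_complexity(decision_points: int) -> int:
    """McCabe cyclomatic complexity for a single-entry/single-exit
    routine: V(G) = number of binary decision points + 1."""
    return decision_points + 1

def npath_sequential_ifs(if_count: int) -> int:
    """NPATH for a sequence of independent if statements (no elses):
    each if doubles the number of acyclic execution paths."""
    return 2 ** if_count

# A method containing 3 if statements in sequence:
assert cyclomatic_complexity(3) == 4
assert npath_sequential_ifs(3) == 8
```

The contrast is the point: cyclomatic complexity grows linearly with decisions, NPATH exponentially, which is why the two checks have separate thresholds.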
May 30, 2017
SE 433: Lecture 10
81 of 87
Open Source Code Analyzers in Java
JDepend traverses Java class file directories and generates design quality metrics for each Java package.
JDepend allows you to automatically measure the quality of
a design in terms of its extensibility, reusability, and
maintainability to effectively manage and control package
dependencies.
 
http://java-source.net/open-source/code-analyzers
May 30, 2017
SE 433: Lecture 10
82 of 87
Open Source Code Analyzers in Java
http://java-source.net/open-source/code-analyzers
https://www.checkmarx.com/2014/11/13/the-ultimate-list-of-
open-source-static-code-analysis-security-tools/
May 30, 2017
SE 433: Lecture 10
83 of 87
Professional Ethics
 
If you can’t test it, don’t build it
Put quality first: Even if you lose the argument, you will gain respect
Begin test activities early
Decouple:
Designs should be independent of language
Programs should be independent of environment
Couplings are weaknesses in the software!
Don’t take shortcuts
If you lose the argument, you will gain respect
Document your objections
Vote with your feet
Don’t be afraid to be right!
May 30, 2017
SE 433: Lecture 10
84 of 87
Final Examination
 
Nobody expects the Spanish Inquisition!
– Monty Python
May 30, 2017
SE 433: Lecture 10
85 of 87
Final Examination
Final Examination will be on the Desire2Learn system from June 1 to June 7.
See important information about Taking Quizzes On-line.
Login to the Desire2Learn System (https://d2l.depaul.edu/).
Take the examination.
It will be made available Thursday, June 1.
You must take the exam by COB Wednesday, June 7.
Allow 3 hours (it should take about one hour if you are prepared); note:
books and notes are allowed but should be used sparingly.
See study guide on the web page or on D2L.
May 30, 2017
SE 433: Lecture 10
86 of 87
The End
May 30, 2017
SE 433: Lecture 10
87 of 87

More Related Content

giItT1WQy@!-/#giItT1WQy@!-/#giItT1WQy@!-/#giItT1WQy@!-/#giItT1WQy@!-/#giItT1WQy@!-/#giItT1WQy@!-/#giItT1WQy@!-/#