Understanding Software Quality and Testing in Modern Development
Software quality is a multifaceted concept that can be viewed from various perspectives, such as transcendental, user, manufacturer, product, and value-based views. In software development, quality of design and conformance play vital roles in creating a useful product that meets user needs and expectations. David Garvin's quality dimensions emphasize the importance of assessing conformance to quality standards. High-quality software benefits both the producer and the end user community by reducing maintenance efforts, bug fixes, and enhancing profitability.
Software Quality and Testing: Unit-V
Quality: At a pragmatic level, David Garvin of the Harvard Business School suggests that quality is a complex and multifaceted concept that can be described from five different points of view. The transcendental view argues that quality is something that you immediately recognize but cannot explicitly define. The user view sees quality in terms of an end user's specific goals: if a product meets those goals, it exhibits quality. The manufacturer's view defines quality in terms of the original specification of the product: if the product conforms to the spec, it exhibits quality. The product view suggests that quality can be tied to inherent characteristics (e.g., functions and features) of a product. Finally, the value-based view measures quality based on how much a customer is willing to pay for a product. In reality, quality encompasses all of these views and more. Quality of design refers to the characteristics that designers specify for a product; the grade of materials, tolerances, and performance specifications all contribute to the quality of design.
Software Quality: In software development, quality of design encompasses the degree to which the design meets the functions and features specified in the requirements model. Quality of conformance focuses on the degree to which the implementation follows the design and the resulting system meets its requirements and performance goals. Software quality can thus be defined as: an effective software process applied in a manner that creates a useful product that provides measurable value for those who produce it and those who use it. An effective software process establishes the infrastructure that supports any effort at building a high-quality software product. The management aspects of process create the checks and balances that help avoid project chaos, a key contributor to poor quality. Software engineering practices allow the developer to analyze the problem and design a solid solution, both critical to building high-quality software. Finally, umbrella activities such as change management and technical reviews have as much to do with quality as any other part of software engineering practice.
Software Quality Cont'd: A useful product delivers the content, functions, and features that the end user desires, but just as important, it delivers these assets in a reliable, error-free way. A useful product always satisfies those requirements that have been explicitly stated by stakeholders. In addition, it satisfies a set of implicit requirements (e.g., ease of use) that are expected of all high-quality software. By adding value for both the producer and user of a software product, high-quality software provides benefits for the software organization and the end-user community. The software organization gains added value because high-quality software requires less maintenance effort, fewer bug fixes, and reduced customer support. The end result is greater software product revenue, better profitability when an application supports a business process, and improved availability of information that is crucial for the business.
Garvin's Quality Dimensions: David Garvin suggests that quality should be considered by taking a multidimensional viewpoint that begins with an assessment of conformance and terminates with a transcendental view. Although Garvin's eight dimensions of quality were not developed specifically for software, they can be applied when software quality is considered:
Performance quality. Does the software deliver all content, functions, and features that are specified as part of the requirements model in a way that provides value to the end user?
Feature quality. Does the software provide features that surprise and delight first-time end users?
Reliability. Does the software deliver all features and capability without failure? Is it available when it is needed? Does it deliver functionality that is error-free?
Conformance. Does the software conform to local and external software standards that are relevant to the application? Does it conform to de facto design and coding conventions? For example, does the user interface conform to accepted design rules for menu selection or data input?
Durability. Can the software be maintained (changed) or corrected (debugged) without the inadvertent generation of unintended side effects? Will changes cause the error rate or reliability to degrade with time?
Garvin's Quality Dimensions Cont'd:
Serviceability. Can the software be maintained (changed) or corrected (debugged) in an acceptably short time period? Can support staff acquire all the information they need to make changes or correct defects? Douglas Adams [Ada93] makes a wry comment that seems appropriate here: "The difference between something that can go wrong and something that can't possibly go wrong is that when something that can't possibly go wrong goes wrong it usually turns out to be impossible to get at or repair."
Aesthetics. There is no question that each of us has a different and very subjective vision of what is aesthetic. And yet, most of us would agree that an aesthetic entity has a certain elegance, a unique flow, and an obvious presence that are hard to quantify but are evident nonetheless. Aesthetic software has these characteristics.
Perception. In some situations, you have a set of prejudices that will influence your perception of quality. For example, if you are introduced to a software product that was built by a vendor who has produced poor quality in the past, your guard will be raised and your perception of the current software product's quality might be influenced negatively. Similarly, if a vendor has an excellent reputation, you may perceive quality even when it does not really exist.
One can classify these into: (1) factors that can be directly measured (e.g., defects uncovered during testing), and (2) factors that can be measured only indirectly (e.g., usability or maintainability).
McCall's Quality Factors: McCall, Richards, and Walters propose a useful categorization of factors that affect software quality. These focus on three important aspects of a software product: its operational characteristics, its ability to undergo change, and its adaptability to new environments.
Correctness. The extent to which a program satisfies its specification and fulfills the customer's mission objectives.
Reliability. The extent to which a program can be expected to perform its intended function with required precision.
Efficiency. The amount of computing resources and code required by a program to perform its function.
Integrity. The extent to which access to software or data by unauthorized persons can be controlled.
Usability. The effort required to learn, operate, prepare input for, and interpret output of a program.
Maintainability. The effort required to locate and fix an error in a program.
Flexibility. The effort required to modify an operational program.
Testability. The effort required to test a program to ensure that it performs its intended function.
Portability. The effort required to transfer the program from one hardware and/or software system environment to another.
Reusability. The extent to which a program can be reused in other applications, related to the packaging and scope of the functions that the program performs.
Interoperability. The effort required to couple one system to another.
ISO 9126 Quality Factors: The ISO (International Organization for Standardization) 9126 standard was developed in an attempt to identify the key quality attributes for computer software. The standard identifies six key quality attributes:
Functionality. The degree to which the software satisfies stated needs, as indicated by the following sub-attributes: suitability, accuracy, interoperability, compliance, and security.
Reliability. The amount of time that the software is available for use, as indicated by the following sub-attributes: maturity, fault tolerance, and recoverability.
Usability. The degree to which the software is easy to use, as indicated by the following sub-attributes: understandability, learnability, and operability.
Efficiency. The degree to which the software makes optimal use of system resources, as indicated by the following sub-attributes: time behavior and resource behavior.
Maintainability. The ease with which repairs may be made to the software, as indicated by the following sub-attributes: analyzability, changeability, stability, and testability.
Portability. The ease with which the software can be transposed from one environment to another, as indicated by the following sub-attributes: adaptability, installability, conformance, and replaceability.
ISO 9126 is an international standard software quality model that provides a solid framework for assessing software.
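As a rough illustration (not part of the standard itself), the six attributes and their sub-attributes can be held in a simple data structure so an assessment checklist can report what has not yet been covered; the names below simply transcribe the list above, and the helper function is a hypothetical sketch:

```python
# A minimal sketch: the six ISO 9126 attributes and their sub-attributes as a
# lookup structure, plus a helper that flags sub-attributes an assessment has
# not yet covered. Requires Python 3.9+ for the builtin generic annotations.
ISO_9126 = {
    "functionality": ["suitability", "accuracy", "interoperability", "compliance", "security"],
    "reliability": ["maturity", "fault tolerance", "recoverability"],
    "usability": ["understandability", "learnability", "operability"],
    "efficiency": ["time behavior", "resource behavior"],
    "maintainability": ["analyzability", "changeability", "stability", "testability"],
    "portability": ["adaptability", "installability", "conformance", "replaceability"],
}

def uncovered(assessed: set[str]) -> dict[str, list[str]]:
    """Return, per attribute, the sub-attributes with no assessment yet."""
    return {
        attr: [s for s in subs if s not in assessed]
        for attr, subs in ISO_9126.items()
        if any(s not in assessed for s in subs)
    }

print(uncovered({"accuracy", "security", "operability"}))
```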
Targeted Quality Factors: To conduct your assessment, you'll need to address specific, measurable (or at least recognizable) attributes of the interface. Prepare a questionnaire; if the answer to most of these questions is "yes," it is likely that the user interface exhibits high quality.
Intuitiveness. The degree to which the interface follows expected usage patterns so that even a novice can use it without significant training.
Efficiency. The degree to which operations and information can be located or initiated.
Robustness. The degree to which the software handles bad input data or inappropriate user interaction.
Richness. The degree to which the interface provides a rich feature set.
The software quality dilemma:
Good Enough Software: Is it acceptable to produce "good enough" software? The answer must be yes, because major software companies do it every day.
The Cost of Quality: The argument goes something like this: we know that quality is important, but it costs us time and money, too much time and money to get the level of software quality we really want. The cost of quality includes all costs incurred in the pursuit of quality or in performing quality-related activities, plus the downstream costs of a lack of quality. To understand these costs, an organization must collect metrics to provide a baseline for the current cost of quality, identify opportunities for reducing these costs, and provide a normalized basis of comparison. The cost of quality can be divided into costs associated with prevention, appraisal, and failure.
Prevention costs include: 1) the cost of management activities required to plan and coordinate all quality control and quality assurance activities; 2) the cost of added technical activities to develop complete requirements and design models; 3) test-planning costs; and 4) the cost of all training associated with these activities.
Appraisal costs include activities to gain insight into product condition the first time through each process. Examples of appraisal costs include: 1) the cost of conducting technical reviews for software engineering work products; 2) the cost of data collection and metrics evaluation; and 3) the cost of testing and debugging.
Failure costs are those that would disappear if no errors appeared before or after shipping a product to customers. Failure costs may be subdivided into internal failure costs and external failure costs. Internal failure costs are incurred when you detect an error in a product prior to shipment. They include: 1) the cost required to perform rework (repair) to correct an error; 2) the cost that occurs when rework inadvertently generates side effects that must be mitigated; and 3) costs associated with the collection of quality metrics that allow an organization to assess the modes of failure. External failure costs are associated with defects found after the product has been shipped to the customer. Examples are complaint resolution, product return and replacement, help-line support, and labor costs associated with warranty work.
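To make the categories concrete, here is a toy computation (all figures invented) that rolls the four buckets up into a total cost of quality and shows what share the failure costs contribute:

```python
# Illustrative only, with made-up numbers: total cost of quality as the sum
# of prevention, appraisal, and (internal + external) failure costs.
costs = {
    "prevention": {"planning": 10_000, "training": 5_000, "test_planning": 8_000},
    "appraisal": {"reviews": 12_000, "metrics": 3_000, "testing": 20_000},
    "internal_failure": {"rework": 15_000, "side_effects": 4_000},
    "external_failure": {"complaints": 9_000, "returns": 6_000, "warranty": 11_000},
}

total = {cat: sum(items.values()) for cat, items in costs.items()}
cost_of_quality = sum(total.values())
failure_share = (total["internal_failure"] + total["external_failure"]) / cost_of_quality

print(total)
print(f"cost of quality: {cost_of_quality}, failure share: {failure_share:.0%}")
```

Tracking these numbers over several releases gives the normalized baseline of comparison that the text calls for.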
Risks: Poor quality leads to risks, some of them very serious.
Negligence and Liability: Work begins with the best of intentions on both sides, but by the time the system is delivered, things have gone bad. The system is late, fails to deliver desired features and functions, is error-prone, and does not meet with customer approval. Litigation ensues. In most cases, the customer claims that the developer has been negligent (in the manner in which it has applied software practices) and is therefore not entitled to payment. The developer often claims that the customer has repeatedly changed its requirements and has subverted the development partnership in other ways. In every case, the quality of the delivered system comes into question.
Quality and Security: As the criticality of Web-based systems and applications grows, application security has become increasingly important. Stated simply, software that does not exhibit high quality is easier to hack; as a consequence, low-quality software can indirectly increase the security risk, with all of its attendant costs and problems.
The Impact of Management Actions: Software quality is often influenced as much by management decisions as it is by technology decisions. Even the best software engineering practices can be undermined by poor business decisions and questionable project management actions: estimation decisions, scheduling decisions, and risk-oriented decisions. The software quality dilemma can best be summarized by Meskimen's Law: "There's never time to do it right, but always time to do it over again." It is hence advised that taking the time to do it right is almost never the wrong decision.
Achieving software quality: Software quality is the result of good project management and solid software engineering practice. Management and practice are applied within the context of four broad activities that help a software team achieve high software quality.
Software engineering methods: If you expect to build high-quality software, you must understand the problem to be solved. You must also be capable of creating a design that conforms to the problem while at the same time exhibiting characteristics that lead to software with the quality dimensions and factors discussed so far.
Project management techniques: The implications are clear: if a project manager uses estimation to verify that delivery dates are achievable, if schedule dependencies are understood and the team resists the temptation to use shortcuts, and if risk planning is conducted so problems do not breed chaos, software quality will be affected in a positive way.
Achieving software quality: Cont'd . . . Quality control actions: Quality control encompasses a set of software engineering actions that help to ensure that each work product meets its quality goals. Models are reviewed to ensure that they are complete and consistent. Code may be inspected in order to uncover and correct errors before testing commences. A series of testing steps is applied to uncover errors in processing logic, data manipulation, and interface communication. A combination of measurement and feedback allows a software team to tune the process when any of these work products fail to meet quality goals.
Achieving software quality: Cont'd . . . Software quality assurance: Quality assurance establishes the infrastructure that supports solid software engineering methods, rational project management, and quality control actions, all pivotal if you intend to build high-quality software. In addition, quality assurance consists of a set of auditing and reporting functions that assess the effectiveness and completeness of quality control actions. The goal of quality assurance is to provide management and technical staff with the data necessary to be informed about product quality, thereby gaining insight and confidence that actions to achieve product quality are working.
Software testing strategy: A software testing strategy is an outline that describes the testing approach across the software development cycle. Testing is a set of activities that can be planned in advance and conducted systematically. For this reason, a template for software testing (a set of steps into which you can place specific test-case design techniques and testing methods) should be defined for the software process.
Generic characteristics: To perform effective testing, you should conduct effective technical reviews; by doing this, many errors will be eliminated before testing commences. Testing begins at the component level and works outward toward the integration of the entire computer-based system. Different testing techniques are appropriate for different software engineering approaches and at different points in time. Testing is conducted by the developer of the software and an independent test group. Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
Initially, system engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behavior, performance, constraints, and validation criteria for software are established. Moving inward along the spiral, you come to design and finally to coding. (From Pressman, Chapter 17, Page 453, Fig. 17.1.)
Unit testing begins at the vertex of the spiral and concentrates on each unit (e.g., component, class, or WebApp content object) of the software as implemented in source code. Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture. Taking another turn outward on the spiral, you encounter validation testing, where requirements established as part of requirements modeling are validated against the software that has been constructed. Finally, you arrive at system testing, where the software and other system elements are tested as a whole. To test computer software, you spiral out in a clockwise direction along streamlines that broaden the scope of testing with each turn. The role of an independent test group (ITG) is to remove the inherent problems associated with letting the builder test the thing that has been built. Independent testing removes the conflict of interest that may otherwise be present.
Initially, tests focus on each component individually, ensuring that it functions properly as a unit; hence the name unit testing. Unit testing makes heavy use of testing techniques that exercise specific paths in a component's control structure to ensure complete coverage and maximum error detection. Next, components must be assembled or integrated to form the complete software package. Integration testing addresses the issues associated with the dual problems of verification and program construction. Test-case design techniques that focus on inputs and outputs are more prevalent during integration, although techniques that exercise specific program paths may be used to ensure coverage of major control paths. After the software has been integrated (constructed), a set of high-order tests is conducted. Validation criteria (established during requirements analysis) must be evaluated. Validation testing provides final assurance that software meets all informational, functional, behavioral, and performance requirements. (From Pressman, Chapter 17, Page 453, Fig. 17.2.)
Principles of Testing: There are seven principles in software testing:
1) Testing shows the presence of defects: Software testing can show that defects are present, but it cannot prove that the software is defect-free. Even multiple rounds of testing can never ensure that software is 100% bug-free; testing can reduce the number of defects but cannot remove all of them.
2) Exhaustive testing is not possible: Testing the functionality of software with all possible inputs (valid or invalid) and preconditions is known as exhaustive testing. Exhaustive testing is impossible: software can never be tested with every possible input. Only selected test cases can be run, on the assumption that the software will produce the correct output for the remaining cases as well.
3) Early testing: Test activities should start early in order to find defects sooner; a defect detected in the early phases of the SDLC is much less expensive to fix.
4) Defect clustering: In a project, a small number of modules can contain most of the defects. Applied to software testing, the Pareto Principle states that 80% of software defects come from 20% of modules (see the sketch after this list).
5) Pesticide paradox: Repeating the same test cases again and again will not find new bugs, so it is necessary to review the test cases and add or update them to find new bugs.
6) Testing is context dependent: The testing approach depends on the context of the software being developed; different types of software need different types of testing. For example, testing an e-commerce site is different from testing an Android application.
7) Absence-of-errors fallacy: If software is 99% bug-free but does not follow the user requirements, it is unusable. It is not enough for software to be 99% bug-free; it is also mandatory that it fulfill all the customer's requirements.
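Here is a minimal sketch of the defect-clustering check, with invented defect counts per module; it finds the smallest set of modules accounting for 80% of all defects:

```python
# A small sketch of the Pareto check: given defect counts per module
# (hypothetical data), find the smallest set of modules that accounts
# for 80% of all defects.
defects = {"auth": 120, "billing": 95, "ui": 30, "search": 12, "reports": 8, "export": 5}

threshold = 0.8 * sum(defects.values())
running, hot_modules = 0, []
for module, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    hot_modules.append(module)
    running += count
    if running >= threshold:
        break

print(hot_modules)  # ['auth', 'billing', 'ui'] -- a few modules dominate
```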
Strategic Issues:
1. Specify product requirements in a quantifiable manner long before testing commences.
2. State testing objectives explicitly.
3. Understand the users of the software and develop a profile for each user category.
4. Develop a testing plan that emphasizes rapid-cycle testing.
5. Build robust software that is designed to test itself.
6. Use effective technical reviews as a filter prior to testing.
7. Conduct technical reviews to assess the test strategy and test cases themselves.
8. Develop a continuous improvement approach for the testing process.
(Pages 455-456, Pressman)
Verification & Validation: Verification refers to the set of tasks that ensure that software correctly implements a specific function. Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements.
Verification: Are we building the product right?
Validation: Are we building the right product?
For example, verification would check the design document and correct a spelling mistake there; otherwise, the development team would build a button with the misspelled label carried over from the document. Once the code is ready, validation is done: a validation test might find that the button is not clickable!
Differences between Verification and Validation:
1. Verification is the process of finding out whether the software meets the specified requirements for a particular phase; validation checks whether the software meets the requirements and expectations of the customer.
2. Verification evaluates an intermediate product; validation evaluates the final product.
3. The objective of verification is to check whether the software is constructed according to the requirement and design specifications; the objective of validation is to check whether the specifications are correct and satisfy the business need.
4. Verification describes whether the outputs are as per the inputs; validation explains whether they are accepted by the user.
5. Verification is done before validation; validation is done after verification.
6. Plans, requirements, specifications, and code are evaluated during verification; the actual product or software is tested during validation.
7. Verification manually checks files and documents; validation is a computer-based check that executes the developed program.
8. Verification does not involve executing the code; validation always involves executing the code.
9. Verification uses methods like reviews, walkthroughs, inspections, and desk-checking; validation uses methods like black-box testing, white-box testing, and non-functional testing.
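To put the contrast in code terms, here is a toy sketch (all names hypothetical): verification inspects an artifact without running the product, while validation executes the built product as a user would.

```python
# A toy contrast: verification reviews an artifact (no code is run);
# validation exercises the running product.
SPEC = {"button_label": "Submit"}          # design document (artifact)

def verify_spec(spec: dict) -> list[str]:
    """Verification: review the design document itself -- nothing executes."""
    issues = []
    if spec["button_label"] != spec["button_label"].strip().capitalize():
        issues.append("label does not follow UI wording conventions")
    return issues

class Button:                               # the built product
    def __init__(self, label: str):
        self.label, self.enabled = label, True
    def click(self) -> str:
        return "submitted" if self.enabled else "ignored"

def validate_button():
    """Validation: exercise the running product against the user's need."""
    assert Button(SPEC["button_label"]).click() == "submitted", "Button is not clickable!"

print(verify_spec(SPEC))   # [] -- the document passes review
validate_button()          # the product passes the user-facing test
```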
Unit Testing: Unit testing focuses verification effort on the smallest unit of software design: the software component or module. Among the potential errors that should be tested when error handling is evaluated are: 1) the error description is unintelligible (impossible to understand); 2) the error noted does not correspond to the error encountered; 3) the error condition causes system intervention prior to error handling; 4) exception-condition processing is incorrect; 5) the error description does not provide enough information to assist in locating the cause of the error. (Figure: Unit-test environment.)
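To make the error-handling checklist concrete, here is a minimal sketch using Python's built-in unittest module; the withdraw function and its messages are hypothetical stand-ins for a component under test.

```python
# A sketch of unit-testing error handling with Python's builtin unittest.
import unittest

def withdraw(balance: float, amount: float) -> float:
    """Hypothetical component under test."""
    if amount <= 0:
        raise ValueError(f"amount must be positive, got {amount}")
    if amount > balance:
        raise ValueError(f"insufficient funds: balance {balance}, requested {amount}")
    return balance - amount

class WithdrawErrorHandling(unittest.TestCase):
    def test_error_message_is_intelligible_and_specific(self):
        # Guards against errors 1) and 5): the message must name the cause
        # and carry enough detail to locate it.
        with self.assertRaises(ValueError) as ctx:
            withdraw(10.0, 50.0)
        self.assertIn("insufficient funds", str(ctx.exception))
        self.assertIn("50.0", str(ctx.exception))

    def test_error_matches_condition_encountered(self):
        # Guards against error 2): a non-positive amount must not be
        # reported as an insufficient-funds problem.
        with self.assertRaises(ValueError) as ctx:
            withdraw(10.0, -1.0)
        self.assertIn("must be positive", str(ctx.exception))

if __name__ == "__main__":
    unittest.main()
```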
Integration Testing: "If they all work individually, why do you doubt that they'll work when we put them together?" The problem, of course, is putting them together: interfacing. Data can be lost across an interface; one component can have an inadvertent, adverse effect on another; sub-functions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; global data structures can present problems; and so on. Integration testing is a systematic technique for constructing the software architecture while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design. In the non-incremental ("big bang") approach, all components are combined in advance and the entire program is tested as a whole.
Integration Testing - Top-down integration: Top-down integration testing is an incremental approach to construction of the software architecture. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module, in either a depth-first or breadth-first manner. (Figure: Top-down integration.)
The integration process is performed in a series of five steps (a sketch of steps 1-3 follows the list):
1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth first or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing (discussed later in this section) may be conducted to ensure that new errors have not been introduced.
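Here is a minimal sketch of the idea in Python; process_order and its stubbed subordinates are hypothetical.

```python
# Top-down integration, step 1: the real main control module is exercised
# while its subordinates are replaced by stubs returning canned answers.
def stub_fetch_order(order_id):          # stub for a not-yet-integrated module
    return {"id": order_id, "total": 42.0}

def stub_charge_card(total):             # stub for a not-yet-integrated module
    return "APPROVED"

def process_order(order_id, fetch=stub_fetch_order, charge=stub_charge_card):
    """Main control module under test; subordinates are injected."""
    order = fetch(order_id)
    return {"order": order["id"], "payment": charge(order["total"])}

# Test the control logic first; later, replace each stub with the real
# component one at a time (depth- or breadth-first) and re-run the tests.
assert process_order(7) == {"order": 7, "payment": "APPROVED"}
```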
Integration Testing - Bottom-up integration: Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, the functionality provided by components subordinate to a given level is always available, and the need for stubs is eliminated. (Figure: Bottom-up integration testing.)
A bottom-up integration strategy may be implemented with the following steps (a sketch follows the list):
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software sub-function.
2. A driver (a control program for testing) is written to coordinate test-case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
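Here is a minimal sketch with hypothetical components: two low-level functions form a cluster, and a throwaway driver feeds them test cases before any higher-level control module exists.

```python
# Bottom-up integration: a cluster of low-level components plus a driver.
def parse_line(line: str) -> tuple[str, float]:             # low-level component
    name, value = line.split(",")
    return name.strip(), float(value)

def apply_tax(amount: float, rate: float = 0.2) -> float:   # low-level component
    return round(amount * (1 + rate), 2)

def driver():
    """Driver: coordinates test-case input and output for the cluster."""
    for line, expected in [("widget, 10.0", 12.0), ("gadget, 5.0", 6.0)]:
        name, amount = parse_line(line)
        assert apply_tax(amount) == expected, f"{name}: got {apply_tax(amount)}"
    print("cluster OK")

driver()  # removed once the cluster is combined upward with real callers
```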
Regression testing: Each time a new module is added as part of integration testing, the software changes: new data-flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects. Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture/playback tools. Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
The regression test suite (the subset of tests to be executed) contains three different classes of test cases (a sketch of how such a suite might be assembled follows):
1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on the software components that have been changed.
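This toy sketch assumes a hypothetical mapping from test names to the components they exercise; real projects would derive it from coverage data or a test-management tool.

```python
# Assembling the three classes of regression tests from a test-to-component map.
TESTS = {
    "test_smoke_checkout": {"cart", "payment", "email"},   # broad sample
    "test_payment_retry": {"payment"},
    "test_cart_totals": {"cart"},
    "test_email_receipt": {"email"},
}

def regression_suite(changed: set[str], sample: set[str]) -> set[str]:
    """Class 1: representative sample; classes 2 and 3: tests touching
    components affected by (or containing) the change."""
    affected = {name for name, comps in TESTS.items() if comps & changed}
    return sample | affected

print(regression_suite(changed={"payment"}, sample={"test_smoke_checkout"}))
# {'test_smoke_checkout', 'test_payment_retry'}
```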
Defect Management: A defect is an error or a bug in the application. A programmer can make mistakes or errors while designing and building the software; these mistakes mean that there are flaws in the software, and these flaws are called defects. Suppose the test team reports a problem as a defect, but the development team disagrees. In such a case, as a Test Manager, what will you do?
A) Agree with the test team that it is a defect.
B) Take the role of judge to decide whether the problem is a defect or not.
C) Agree with the development team that it is not a defect.
Defect Management: Defect management is a systematic process to identify and fix bugs. A defect management cycle contains the following stages:
1) Discovery of defect
2) Defect categorization
3) Fixing of defect by developers
4) Verification by testers
5) Defect closure
6) Defect reports at the end of the project
Defect reporting in software testing is a process in which test managers prepare and send a defect report to the management team for feedback on the defect management process and defect status. The management team then checks the defect report and sends feedback or provides further support if needed. Defect reporting helps to better communicate, track, and explain defects in detail.
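As a minimal sketch (all fields hypothetical; real teams would use a defect-tracking tool), a defect record and the status rollup a test manager might report could look like this:

```python
# A toy defect record and a report rollup.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Defect:
    defect_id: int
    summary: str
    severity: str     # e.g. "critical", "major", "minor"
    status: str       # e.g. "OPEN", "ASSIGNED", "FIXED", "CLOSED"

def status_report(defects: list[Defect]) -> dict[str, int]:
    """The rollup a test manager might send to management."""
    return dict(Counter(d.status for d in defects))

log = [
    Defect(1, "login fails on empty password", "critical", "FIXED"),
    Defect(2, "typo on help page", "minor", "OPEN"),
    Defect(3, "report totals off by one", "major", "ASSIGNED"),
]
print(status_report(log))   # {'FIXED': 1, 'OPEN': 1, 'ASSIGNED': 1}
```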
Defect Life Cycle (bug reporting and debugging): The Defect Life Cycle or Bug Life Cycle in software testing is the specific set of states that a defect or bug goes through in its entire life. The purpose of the defect life cycle is to easily coordinate and communicate the current status of a defect as it changes between assignees, and to make the defect-fixing process systematic and efficient.
Guidelines:
Make sure the defect life cycle to be used in your project is well documented and the entire team understands what each defect status exactly means.
Ensure that each member of the team is clearly aware of his or her responsibility regarding each defect and its current status.
Ensure that enough detail is entered with each status change. For example, do not simply drop a defect; provide a plausible reason for doing so.
Make sure the status of the defect in the defect tracking tool reflects the actual status. For example, do not ask for verification without changing the defect's status to FIXED.
Defect Life Cycle: (Figure: defect life cycle state diagram; the states are described below.)
OPEN / NEW: The defect is identified via a review or a test and reported (the beginning of a defect's journey).
ASSIGNED: The defect is assigned to a person who is tasked with its analysis or resolution.
DUPLICATE: If the defect is invalid because it is a duplicate of another one already reported, it is marked as a duplicate.
DROPPED / REJECTED: If the defect is invalid for various other reasons, it is dropped / rejected.
DEFERRED: If the defect is valid but it is decided to fix it in a future release instead of the current release, it is deferred. When the time comes, the defect is assigned again.
FIXED / RESOLVED / COMPLETED: After the defect is fixed, its status is changed so that its verification can begin. If the defect can't be fixed, it could be because of any of the following reasons:
Need More Information: More information, such as the exact steps to reproduce, is required to analyze and fix the defect.
Can't Reproduce: The defect cannot be reproduced, for reasons such as a change of environment or the defect somehow being already fixed.
Can't Fix: The defect cannot be fixed due to some other reason. Such a defect is either assigned to another person (in the hope that the other person will be able to fix it), deferred (in the hope that the delay in fixing won't hurt much), or dropped (when there is no hope for its resolution and the defect needs to be treated as a known issue).
REASSIGNED: If the fixed defect is, in fact, verified as not being resolved at all, or as being only partially resolved, it is reassigned. If there is time to fix the reassigned defect in the current release, it is fixed again; if it is decided to wait and fix it in a future release, it is deferred.
CLOSED / VERIFIED: If the defect is verified as being resolved indeed, it is closed. This is the happy ending. Hurray!
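A simplified sketch of the life cycle as an explicit transition table (it approximates the states above and omits some branches); encoding it this way lets a tracking tool reject illegal status changes, such as verifying a defect that was never marked FIXED:

```python
# Defect life cycle as a transition table; simplified from the slide above.
ALLOWED = {
    "OPEN": {"ASSIGNED", "DUPLICATE", "DROPPED", "DEFERRED"},
    "ASSIGNED": {"FIXED", "DUPLICATE", "DROPPED", "DEFERRED"},
    "DEFERRED": {"ASSIGNED"},
    "FIXED": {"REASSIGNED", "CLOSED"},
    "REASSIGNED": {"FIXED", "DEFERRED"},
    "DUPLICATE": set(), "DROPPED": set(), "CLOSED": set(),
}

def move(status: str, new_status: str) -> str:
    """Apply a status change, rejecting transitions the life cycle forbids."""
    if new_status not in ALLOWED[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

s = move("OPEN", "ASSIGNED")
s = move(s, "FIXED")
s = move(s, "CLOSED")       # the happy ending
```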
Unit-V Concludes!