Quality Assurance, Testing, and Implementation
Frequent Shopper Program, Part III

Quality Assurance Process and Procedures

While developing and implementing the Kudler Fine Foods Frequent Shopper Program, Smith Systems Consulting proposes to adhere to a well-planned, comprehensive quality assurance process that spans the entire lifecycle and ensures that the minimum standards of quality, as defined by Kudler Fine Foods in the functionality and performance requirements, are met.
This process goes well beyond debugging the software itself: the quality assurance process begins on day one of the project lifecycle and is completed only the day the software is retired. The goal and guiding principle of the quality assurance process is to ensure that the developing system does not deviate from the user requirements but meets users' needs exactly as specified.
Deviations and errors would be detected and resolved as early in the software development lifecycle (SDLC) as possible to avoid the far greater monetary and schedule costs incurred when errors are discovered and fixed later in the SDLC. Early detection would even prevent most errors from finding their way into the code at all. Quality assurance procedures, therefore, focus on detecting deviations and inconsistencies in the user requirements or in the way those requirements are being implemented.
Early on, quality assurance activities typically center on making correct design decisions and meeting user requirements so that, in the later stages or iterations of development, the software can be implemented with little difficulty. Later, during implementation, quality assurance activities consist mostly of testing. In the case of the Frequent Shopper Program, development stages usually overlap, meaning that quality assurance procedures normally conducted at distinct phases of a traditional SDLC would likewise overlap.
However, regardless of when the various quality assurance activities are carried out, the quality assurance procedures must be formally planned, and the plan must be firmly adhered to. Provisions should be made to ensure that no excuse is made to short-change or skip the quality assurance process. Management, for example, needs to be brought onboard for quality assurance to happen. Also, the standards need to be easily understandable and measurable. Principally, quality assurance procedures would include the following activities:

1. Defect Tracking
2. Technical Reviews
   a. Walkthroughs
   b. Inspections
3. Testing (described in the following section)

Defect tracking involves recording and tracking defects from detection to resolution (McConnell, 1998, p. 129). In technical reviews, whether formal or informal, developers receive constructive feedback from other developers. During analysis and design, and even during implementation, walkthroughs help ensure that the design model or the program being implemented is complete and accurate and, therefore, of higher quality. Inspections are similar to walkthroughs, though more formal.
The design model or code is reviewed by participants prior to the inspection group meeting, and the inspection meeting produces agreed-upon solutions for all discussed errors. Testing, as discussed next, forms a large part, though not necessarily the most important part, of the quality assurance process. While reviews help guarantee quality upstream, system testing helps guarantee software quality downstream. System testing, which aims to cover 100% of system functionality, is usually conducted by an independent testing team (McConnell, 1998, p. 34). Detected defects are then corrected by the developers. Taken alone, each of these quality assurance activities is effective, but nowhere near as effective as when done in conjunction with the others (McConnell, 1998, p. 135). The most effective testing and other quality assurance procedures are those that detect errors before the code is written; early detection saves a tremendous amount of time and money.

Testing Procedures

Testing has two main objectives:

1. Detect and fix errors
2. Verify correct operation of the software
Of greater concern to this paper is the correct operation of the Frequent Shopper Program at all levels: program, network, systems, and interfaces. Testing procedures that can help guarantee this include:

1. Drawing up a test plan
2. Writing test cases
3. Static testing
4. Functional testing
5. Structured (non-functional) testing
6. Performance testing

The first procedure that must be followed if testing is to accomplish these goals is drawing up a test plan, which would include an “overall test plan description” and “detailed test execution instructions” (Everett & McLeod, 2007, p. 9). The test plan should take each level of the system into consideration, including when the parts of the test should be conducted (the test schedule), what in particular should be tested, the test data (drawn up with the help of business analysts), the expected results, and so on. In effect, the test plan determines what will be tested and why it needs to be tested. Drawing up a test plan involves writing test cases for each development phase. Test cases describe how testing will be conducted and are usually based on use cases.
Just as with use cases, test cases become more and more detailed as the program, system, network, or interfaces are developed and details become evident. In fact, the fleshing-out of use cases can be leveraged to likewise flesh out test cases. In other words, both activities can be performed in conjunction, with use cases forming the principal foundation for test case development (Everett & McLeod, 2007, pp. 102-103). Test case execution proceeds from test cases for individual program pieces before progressing to larger and more general program modules (Everett & McLeod, 2007, p. 19). The test plan needs to take all elements of the information system into consideration to ensure 100% coverage: each element as a software module (the object of unit testing), each element in relation to other elements (interfaces; the object of integration testing), each element in communication with other elements (networks), all elements taken as a whole at the program level (the object of software system testing), and all elements taken as a whole at the overall system level (including every element of the information system, but without attending to the individual elements).
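As an illustrative sketch, a use-case-based test case can be captured in a simple record. The field names and the enrollment scenario below are hypothetical, not drawn from the actual Kudler design documents:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a test case record derived from a use case.
# All names and the sample scenario are illustrative assumptions.
@dataclass
class TestCase:
    case_id: str
    use_case: str                                # the use case this test is based on
    steps: list = field(default_factory=list)    # how the test is conducted
    expected_result: str = ""                    # what the tester should observe

enroll_test = TestCase(
    case_id="TC-001",
    use_case="UC-01 Enroll shopper in loyalty program",
    steps=[
        "Open the enrollment screen",
        "Enter a valid shopper name and email",
        "Submit the form",
    ],
    expected_result="Shopper record is created and a confirmation is shown",
)

print(enroll_test.case_id, "covers", enroll_test.use_case)
```

As the use case gains detail during development, the steps and expected result of its companion test case can be fleshed out in lockstep.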
Software can be tested as individual components (as in unit testing), in sets of integrated components (as in integration testing), or as a whole – the entire system without considering individual components (as in system testing). Unit testing, integration testing, and system testing can be applied to the diverse components and aspects of the information system. In other words, at the program level, the individual components of the program would be unit-tested by developers; the set of components of the program would be tested for integration; and the program as a whole would be system-tested.
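To make the program-level distinction concrete, here is a minimal unit-test sketch. The loyalty_points function and its one-point-per-whole-dollar rule are assumptions for illustration only; they do not come from the Kudler requirements:

```python
import unittest

# Hypothetical component under unit test: awards one loyalty point per
# whole dollar spent. The function name and rule are illustrative
# assumptions, not taken from the actual Frequent Shopper design.
def loyalty_points(purchase_total):
    if purchase_total < 0:
        raise ValueError("purchase total cannot be negative")
    return int(purchase_total)  # one point per whole dollar, fraction dropped

class LoyaltyPointsTest(unittest.TestCase):
    def test_whole_dollars(self):
        self.assertEqual(loyalty_points(25.00), 25)

    def test_fraction_rounds_down(self):
        self.assertEqual(loyalty_points(19.99), 19)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            loyalty_points(-5)

if __name__ == "__main__":
    # exit=False keeps the test run from terminating the interpreter.
    unittest.main(exit=False, argv=["loyalty_points_test"])
```

Developers would write tests of this kind for each individual component; integration and system tests would then exercise the assembled components.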
The same goes for the other levels of the system. A number of testing approaches could then be followed to test various aspects of the system. Static testing seeks to:

1. Reduce defects in the software under development by reducing documentation defects. Since the Frequent Shopper Program will be developed from this documentation, static testing aims to prevent errors from being introduced into the code and to prevent code development from going in the wrong direction by deviating from the user requirements; any code that deviates from requirements would have to be scrapped once the deviation is discovered (Everett & McLeod, 2007, p. 3).
2. Ensure correct software operation. If the new Frequent Shopper Program comes out and Kudler Fine Foods employees cannot figure out how to use the new system, they will want to revert to how things were before the new system, wasting much time and money (Everett & McLeod, 2007, p. 93).

Typical documents on which static tests can be run include user requirements, project plans, use cases, the test plan and test cases, end-user documents (such as user manuals or guides and help files), installation guides, and so on.
Just about any document produced during the SDLC can be the subject of static tests (Everett & McLeod, 2007, p. 94). Concretely, static testing might involve activities such as improving the readability of the documents and reviewing their content. Content review may include desk checking, document inspections, and document walkthroughs. These procedures involve making sure the document is complete, consistent throughout, and error-free.
These qualities of a document – being complete, error-free, and consistent – reduce the chance that documentation will be misinterpreted or be the direct cause of deviations that could, in turn, require very costly schedule recovery to get back on track. Functional testing seeks to “validate the software behavior against the business functionality documented in the software requirements and specifications” (Everett & McLeod, 2007, p. 99). A number of functional tests can be conducted in an effort to cover as much of the software's business functionality as possible.
These tests include user navigation testing, transaction screen testing, transaction flow testing, report screen testing, report flow testing, and database function testing. User navigation testing covers user access to the Web site or application (the login and logout functions) and users' ability to find their way around in the correct sequence and arrive at the parts of the site or application where they can accomplish their goals. This covers both intra-screen navigation (such as tabbing or mouse clicks within the same page or screen) and inter-screen navigation (moving from one page or screen to another).
Whether users actually accomplish their goals is not resolved by user navigation testing (Everett & McLeod, 2007, pp. 103-104). Transaction screen testing covers the user's ability to accomplish business goals once on a transaction page or screen (without concern for how the user got there). To test a transaction screen, the tester needs to make sure all controls (action buttons, lists, input data fields, radio buttons, etc.) operate as intended, according to the business requirements and the user and administrator guides. This also involves making sure that all input yields the expected output (Everett & McLeod, 2007, p. 104). Once the transaction screens have been tested, transaction flow testing attempts to determine whether a combination of correct navigation successfully achieves an intended business activity. In other words, having sequentially navigated through certain transaction screens, the user can expect to have completed a business activity successfully in its entirety (Everett & McLeod, 2007, pp. 104-105).
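A transaction-flow test of this kind can be sketched as follows. Each "screen" is modeled as a plain function updating shared state; the screen names and fields are hypothetical, not the actual Frequent Shopper screens:

```python
# Hypothetical sketch of a transaction-flow test: each "screen" updates
# shared state, and the flow test checks that navigating the screens in
# sequence completes the whole business activity end to end.

def login_screen(state, user):
    state["user"] = user

def purchase_screen(state, total):
    state["purchase_total"] = total

def confirm_screen(state):
    # The business activity completes only if the earlier screens ran.
    if "user" in state and "purchase_total" in state:
        state["confirmed"] = True

def test_purchase_flow():
    state = {}
    login_screen(state, "shopper42")
    purchase_screen(state, 19.99)
    confirm_screen(state)
    assert state.get("confirmed") is True, "flow did not complete the activity"

test_purchase_flow()
print("purchase flow completed end to end")
```

Each screen here would already have passed its own transaction screen tests; the flow test adds the check that the sequence as a whole delivers the business result.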
Similar to transaction screen testing, report screen testing tests the user's ability to retrieve and display data (rather than entering data, as on transaction screens) using report screens. The key is to focus on the retrieved and displayed information and analyze it for completeness and accuracy (Everett & McLeod, 2007, p. 105). For whatever report modalities the software supports, including printing reports and displaying them on the screen, report flow testing attempts to validate the report results (Everett & McLeod, 2007, p. 105).
Database functional testing seeks to answer the following: Can the database data be managed in accordance with the requirements? In other words, is the database design viable for the intended application, and can the application correctly maintain and manage its data? Once valid transaction flow screens and report flow screens are known, testers can verify the data flow from the transaction flow screens, through the database, to the report flow screens. If everything checks out, the application is correctly manipulating its database (Everett & McLeod, 2007, pp. 105-106).
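This transaction-to-report data flow check can be sketched with an in-memory database. The table, column names, and helper functions are illustrative assumptions, not the actual Frequent Shopper schema:

```python
import sqlite3

# Hedged sketch of database functional testing: data entered through a
# "transaction" path must come back unchanged through a "report" path.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (shopper TEXT, total REAL)")

def record_purchase(shopper, total):       # transaction-screen path
    conn.execute("INSERT INTO purchases VALUES (?, ?)", (shopper, total))
    conn.commit()

def purchase_report(shopper):              # report-screen path
    rows = conn.execute(
        "SELECT total FROM purchases WHERE shopper = ?", (shopper,)
    ).fetchall()
    return [total for (total,) in rows]

record_purchase("shopper42", 19.99)
record_purchase("shopper42", 5.50)

# Verify the data flow: what went in via transactions comes out via reports.
assert purchase_report("shopper42") == [19.99, 5.50]
print("database round trip verified")
```

A real test would run against the application's own transaction and report code rather than these stand-ins, but the shape of the check is the same.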
Regression testing involves rerunning all tests that have so far run successfully against the code, because new code, or changes made to existing code, can break code that once passed certain tests but may no longer pass them. If regression tests fail, the newly added or recently changed code is the likely cause, which helps isolate the faulty module. Regression tests are also run when new versions come out to ensure that a new version's added functionality does not interfere with the functionality already present in the current version.
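A minimal regression-suite sketch, assuming a hypothetical points-calculation function: every test that has passed so far is kept in the suite and rerun after each code change, and any failure points at the change that broke it:

```python
# Hypothetical function under regression test; the one-point-per-whole-
# dollar rule is an illustrative assumption.
def points_for_purchase(total):
    return int(total)

def test_small_purchase():
    assert points_for_purchase(5.00) == 5

def test_fraction_rounds_down():
    assert points_for_purchase(9.99) == 9

# Every test that has passed so far stays in the suite.
REGRESSION_SUITE = [test_small_purchase, test_fraction_rounds_down]

def run_regression_suite():
    failures = []
    for test in REGRESSION_SUITE:
        try:
            test()
        except AssertionError:
            failures.append(test.__name__)  # names point at the broken area
    return failures

# Rerun the full suite after any change; an empty list means no regressions.
print(run_regression_suite())
```

If a later change to points_for_purchase made test_fraction_rounds_down fail, the returned name would immediately narrow down where the regression was introduced.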
Failed regression tests in this case would indicate which tests within the test suite should be retired (Everett & McLeod, 2007, pp. 106-107). Other forms of functional testing include white box testing and black box testing. White box testing centers the testers' attention on the correctness of the code logic: its statements, flow-of-control structures, and so on. White box testing could include some or all of the following techniques: the statement coverage, branch coverage, compound coverage, path coverage, and loop coverage techniques (Everett & McLeod, 2007, pp. 107-110). Black box testing involves verifying the program's behavior in response to ordinary business activities based on the business requirements, use cases, the executable program, and its data. This testing is done by independent testers, not the developers themselves. Black box testing covers both expected behavior (positive testing) and unexpected behavior (negative testing). Black box testing techniques include the equivalence class, boundary value analysis, and expected results coverage techniques (Everett & McLeod, 2007, pp. 112-117).
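The boundary value analysis technique just mentioned can be sketched as follows, assuming a hypothetical 100-point reward threshold: test values at, just below, and just above each boundary named in the requirements, since off-by-one errors cluster there:

```python
# Hedged sketch of boundary value analysis. The 100-point reward
# threshold and the earns_reward function are illustrative assumptions.
REWARD_THRESHOLD = 100

def earns_reward(points):
    return points >= REWARD_THRESHOLD

# Boundary values around the threshold, paired with expected behavior.
boundary_cases = [
    (99, False),   # just below the boundary
    (100, True),   # on the boundary
    (101, True),   # just above the boundary
]

for points, expected in boundary_cases:
    assert earns_reward(points) == expected, f"failed at {points} points"
print("all boundary cases passed")
```

The same three-value pattern (below, at, above) would be repeated for every boundary the requirements define, and the negative cases double as negative tests of unexpected input.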
Because functional testing based on test cases does not cover non-functional and performance requirements, these two testing categories also need to be visited here. The new software will have to be adapted to the specific platform – including the hardware and operating system – on which it is destined to operate. Structured testing aims to ensure that the platform will accept the new software without incident. In other words, structured testing is performed on the platform components upon which the application will be installed (Everett & McLeod, 2007, p. 122).
Interface testing aims to test the interfaces between the various software platform components (including network data transfers, application APIs, and database requests) and the new program (in this case, the Frequent Shopper Program). This may involve, for example, verifying that data is actually being sent, that the data is being received, and that the response and/or returned data is a correct reply (Everett & McLeod, 2007, p. 123). The hardware configuration can be tested at the system level by verifying its “ability to access data, process data, communicate with other hardware, and handle forecast peak loads” (Everett & McLeod, 2007, p. 5). Structured testing also includes security behavior testing. These tests include testing user authentication at the various levels of security access (such as testing user ID and password combinations), testing data encryption and the security of transmitted data, and checking user session security. Installing software can be a simple process or a complex one. Installation testing ensures that the installation process is as easy and simple for the customer as possible and that the process works.
The production environment is simulated, and testers perform the installation using the provided documentation. Installation verification aids are often included within installation packages to help users verify a successful installation. All systems fail at some point. Backup and recovery testing attempts to ensure that the mechanisms put in place to recover from system failures work as intended. This is typically done by performing backups, inducing a system failure, and then recovering from the failure using only the backup files.
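The backup-failure-recovery cycle can be sketched with ordinary files. The data file, its contents, and the file-copy backup mechanism are illustrative; a real test would exercise the system's actual backup tooling:

```python
import os
import shutil
import tempfile

# Hedged sketch of backup and recovery testing: back up a data file,
# simulate a failure by deleting the original, then recover using only
# the backup and verify the content survived intact.
workdir = tempfile.mkdtemp()
data_file = os.path.join(workdir, "shoppers.dat")
backup_file = os.path.join(workdir, "shoppers.bak")

with open(data_file, "w") as f:
    f.write("shopper42,points=150\n")

shutil.copyfile(data_file, backup_file)   # 1. perform the backup
os.remove(data_file)                      # 2. induce the system failure
shutil.copyfile(backup_file, data_file)   # 3. recover using only the backup

with open(data_file) as f:
    assert f.read() == "shopper42,points=150\n"
print("recovery from backup verified")
shutil.rmtree(workdir)                    # clean up the temporary directory
```

The essential point is step 3: the recovery must succeed using nothing but the backup files, exactly as it would have to after a real failure.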
Smoke testing helps verify that a successfully installed program can be properly configured by attempting to set the most likely configuration combinations – other than the configuration test cases executed during development (Everett & McLeod, 2007, p. 126). Administration testing is the “functional testing of business support activities,” an extension of the functional testing of business activities conducted during system design and implementation (Everett & McLeod, 2007, p. 126). Administrative tests that pass can be saved as the starting point for the business function testing that depends on a correct administrative setup.
Otherwise, manually built system setup files (used to test business functions) can be used as the expected output of the administrative component tests. Performance testing attempts to make the program reveal its true production speed (response time under peak workload) by running it in an environment that is as close to the production environment as possible (Everett & McLeod, 2007, p. 129). Performance testers first need to determine which business transactions or activities need to be measured for performance.
They must then determine the peak transaction usage per group and when those peak timeframes occur. Next, they need to determine how many workload peaks should be tested. Finally, they need to formulate the steps required to duplicate each peak in a test environment. Once this preparation work is completed, they execute a workload ramp-up to the peak load, execute performance measurements at the peak, and execute a workload ramp-down from the peak (Everett & McLeod, 2007, p. 148). Increasingly more workload is introduced in conjunction with increasingly complex business transactions.
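The ramp-up, measure-at-peak, ramp-down sequence can be sketched as follows. The stand-in transaction and the workload sizes are assumptions for illustration; a real performance test would drive a production-like environment rather than a loop in one process:

```python
import time

# Hedged sketch of a performance measurement: ramp the workload up to a
# peak, measure mean response time at each step, and compare the peak
# measurement against the predefined performance standard.

def transaction():
    sum(range(10_000))  # stand-in for one business transaction

def mean_seconds_per_transaction(workload):
    start = time.perf_counter()
    for _ in range(workload):
        transaction()
    elapsed = time.perf_counter() - start
    return elapsed / workload

for workload in (10, 50, 100):  # ramp-up toward the peak (100 here)
    mean = mean_seconds_per_transaction(workload)
    print(f"workload {workload:>3}: {mean * 1000:.3f} ms/transaction")
# The measurement at the peak workload is the one held against the
# predefined standard; the ramp-down would mirror the ramp-up.
```

The predefined metrics discussed below are what give these numbers meaning: a peak-load measurement is only a pass or fail relative to an agreed standard.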
The use of predefined metrics and performance standards helps testers understand what is expected, performance-wise, of the particular program in the operating environment in which it runs. How quickly users typically lose patience while waiting is what usually sets the standard; for mission-critical systems, response time could determine whether a life is saved or lost. A performance workload plan also helps set the metrics on which performance testers can base their performance evaluations (Everett & McLeod, 2007, p. 131). Development methodology aside, any testing procedure must be based on certain strategic principles.
Scott Ambler (2006) lists these principles in his article, “Agile Testing Strategies”:

1. Test early – as early as possible (an idea discussed in Quality Assurance Process and Procedures). Many developers even adopt a test-first approach.
2. Test often and effectively – as often and as effectively as possible – to increase overall software quality and reduce the total cost of ownership despite higher upfront costs.
3. Test just enough – no more and no less than the circumstances dictate. Some software projects require more testing than others, especially those on which safety (which can involve the loss of life) depends.
4. Test with the help of other developers, not alone. The chances of discovering errors increase if a second or third set of eyes reviews the same design or code. Ambler advocates pair testing, which is similar to pair programming; the point is to have more than one tester work the model or code.

Implementation

Although important, choosing a programming language and writing code are not the most important implementation steps. Coming into implementation with a detailed system design that serves as a guide for developers to follow in implementing the system is far more crucial.
It is from this design that the program code is written. Before a language is selected, before a development environment is established, and before a single line of code is written, a proper amount of needs assessment, planning, user requirements analysis, high-level design, and detailed low-level design should be completed. Implementation methodologies vary in the amount of planning, analysis, and design that they require, but all except the most foolhardy require at least some understanding of the user requirements before code is written.
Implementation is a very busy stage of development, and it is typically the single most costly development phase (consuming up to one third of the cost) because it is the stage in which all the preparations come to fruition: the detailed component design specification is codified into an operational source code implementation, and the components' basic operation is validated (Scacchi, 2001, p. 2). Some of the implementation steps and procedures are as follows:

• Make sure all the required assets, personnel, and resources are available. These requirements may have changed since planning began.
• Make sure the budget can support the implementation.
Paying for the preliminary stages of development may have emptied the coffers.
• Produce a Request for Proposal (RFP) to be sent to vendors.
• Select the solution, basing the decision on the proposals, budget, schedule, IT personnel expertise, and inventory of resources.
• Produce an implementation strategy and/or implementation model.
• Select a programming language or languages and/or programming technologies, choosing the technology that will best respond to the program's needs.
• Choose a development environment – an Integrated Development Environment (IDE) such as Eclipse, NetBeans, or MS Visual Studio – depending on the languages used.
• Establish version control (such as using svn) to control software versioning and track all updates to the source code throughout implementation.
• Pre-establish programming conventions so that the efforts of the various programmers will produce consistent source code.
• Assign tasks and roles to team members.
• Write out an integration plan so that programmers will understand the implementation order and how modules will be tested for integration, and so that the integration team will follow set procedures.
• Review, and update if necessary, all implementation processes. Determine, for instance, the implementation order: whether the system will be implemented top-down, bottom-up, or in input-process-output (IPO) order – that is, following the data flow through the system, with the components that receive input developed first, then the components that process the data, and finally the components that output the data.
• Determine the development methodology (a traditional SDLC; Agile development such as Scrum, XP, or RUP; Test-Driven Development; or another methodology) depending on the circumstances, the experience of the personnel, the type of development being undertaken, the schedule, funding, and other factors.
• Verify and test. Developers unit test their code.
The test team, or the team members in charge of testing, need to conduct integration testing and other forms of functional, structured, or performance testing, according to the adopted development methodology.
• Implement the parts of the program that the customer needs before developing wants, would-likes, or would-be-nice functionality.
• Create system documentation, such as source code documentation.
• Write user manuals, user guides, installation guides, and help files.
• Train users to use the new system.
• Install the system.
• Convert data from the old system to the new system (such as from the old database to the new database), which also involves creating the new database.

References

Ambler, S. (2006, December 12). Agile testing strategies. Dr. Dobb's: The World of Software Development. Retrieved August 28, 2010, from http://www.drdobbs.com/tools/196603549;jsessionid=TDNX0XZEJREJTQE1GHPCKH4ATMY