1.0 Introduction
The best way to approach differences in testing methods is to distinguish Traditional methods from Empirical methods.

1.1 Defined Development Process
Traditional software project management techniques tend to concentrate on "command and control" methods, in which a detailed plan is produced and fixed during the inception phase of the project. A "command and control" method makes the unspoken assumption that the relationship between the inputs and outputs of a software development process is established and predictable, and that the scheduled outputs will still be applicable by the time they materialize.

1.2 Empirical Methods
The primary difference between defined processes (for example, Rational Unified Process) and empirical, agile processes (Scrum, XP, etc.) is that the empirical approach assumes the development process is unpredictable and chaotic. Agile controls are used to manage the unpredictability and control the risk; flexibility, responsiveness, and reliability are the claimed results. The classic application development dilemma has always been faster, better, cheaper. In today’s economy, budget pressures dominate application development; quality is recognized as a critical factor, but time to market still drives many project schedules.

1.3 Traditional Testing
Software requirements documents are established prior to the implementation of the code. Bugs are the result of programming errors: some arise from oversights made when writing source code or transcribing data incorrectly, while others result from interactions between different parts of a computer program. A software development process plan defines testing at the various phases of development:
1) Software
a. Unit Test
b. Component Test
c. CSCI Test (Formal Qualification Test – FQT)
d. Regression Testing
e. System Test and Integration (hardware and software integration)
f. Performance Test (running scenarios for speed and data crunching)

2) Hardware
a. Functional tests (Hardware functionality)
b. Environment Test (thermal cycling, EMI testing, etc.)
c. Acceptance Test (Software stubs, drivers and the real hardware)

1.4 Unit Test
A Unit Test is a procedure used to validate that a particular module of source code is working properly. The approach is to write test cases for all functions and methods so that whenever a change introduces a regression, it can be quickly identified and fixed. This style of testing is similar to the iterative development of Agile methods. Unit testing is done by the developers, not by end-users.
Common types of computer bugs:
1. Divide by zero
2. Infinite loops
3. Arithmetic overflow or underflow
4. Exceeding array bounds
5. Using an uninitialized variable
6. Accessing memory not owned (access violation)
7. Memory leak or handle leak
8. Stack overflow or underflow
9. Buffer overflow
10. Loss of precision in type conversion
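As a concrete illustration (a sketch, not taken from any particular project), here is a minimal Python unit test for a hypothetical `average` function; the second test case guards against the divide-by-zero bug listed above:

```python
import unittest

def average(values):
    """Return the arithmetic mean of a list of numbers.

    The explicit guard below prevents the divide-by-zero bug
    listed above when the input list is empty.
    """
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

class AverageUnitTest(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(average([2, 4, 6]), 4)

    def test_empty_input_is_rejected(self):
        # Without the guard, this call would raise ZeroDivisionError.
        with self.assertRaises(ValueError):
            average([])
```

Run with `python -m unittest <module>`; a failing assertion points directly at the regressed unit.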

1.5 Component Test
Component Testing assembles functions into a component or module and exercises the interactions of those functions with other modules. This type of testing is mostly done by the developers, not by end-users.
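A component test checks that assembled units cooperate, not just that each works in isolation. The sketch below (the parser and formatter functions are hypothetical) verifies the interaction between two small modules:

```python
import unittest

# Two small "modules": a parsing component and a formatting component.
def parse_csv_line(line):
    """Split one comma-separated record into stripped fields."""
    return [field.strip() for field in line.split(",")]

def format_record(fields):
    """Render parsed fields as a pipe-delimited record."""
    return " | ".join(fields)

class ComponentInteractionTest(unittest.TestCase):
    def test_parser_output_feeds_formatter(self):
        # Component test: the parser's output must be directly
        # consumable by the formatter.
        fields = parse_csv_line(" a, b ,c")
        self.assertEqual(format_record(fields), "a | b | c")
```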

1.6 CSCI Test
A Computer Software Configuration Item (CSCI) is individually managed and versioned. The objective is to provide confidence that the delivered system meets the requirements of the sponsors and users. CSCI Testing is the first time the software is tested against the Product Item Development Specification (PIDS), Interface Requirement Specification (IRS), and Software Requirement Specification (SRS), the documents that describe the functionality the vendor and customer have agreed upon. CSCI Testing is the formal qualification test that validates that the software requirements are met. This type of testing is mostly conducted by Test Engineering and witnessed by SQA, customers, and other teams.

1.7 Regression Testing
Regression testing is usually performed on the areas of code where changes have taken place, including changes made to fix defects found during FQT (validation), system, or performance tests.
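A regression test pins down a previously fixed defect so it cannot silently reappear in the changed area of code. A minimal sketch (the `clamp` function and its earlier boundary defect are hypothetical):

```python
import unittest

def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

class ClampRegressionTest(unittest.TestCase):
    def test_upper_boundary_stays_fixed(self):
        # Pins a (hypothetical) defect found during FQT, where a value
        # exactly at the upper boundary was mishandled.
        self.assertEqual(clamp(10, 0, 10), 10)

    def test_out_of_range_values(self):
        self.assertEqual(clamp(-1, 0, 10), 0)
        self.assertEqual(clamp(11, 0, 10), 10)
```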

1.8 System Test
System testing is conducted on a complete, integrated system to evaluate the system's compliance with the specified requirements. System testing falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic. It is the first time that the entire system is tested against the Functional Requirement Specification(s) (FRS) and integrated with the real hardware. System testing is intended to test up to and beyond the bounds (off-nominal conditions) defined in the software/hardware requirements specification(s).
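Because system testing is black-box, a test harness drives the system only through its external interface. The sketch below uses the Python interpreter's command line as a stand-in "system" to show the pattern: observe exit codes and output, never internal state:

```python
import subprocess
import sys
import unittest

class BlackBoxSmokeTest(unittest.TestCase):
    """Drives the system under test only through its public
    command-line interface; no knowledge of the inner design is used."""

    def test_nominal_behavior(self):
        result = subprocess.run(
            [sys.executable, "-c", "print(2 + 2)"],
            capture_output=True, text=True)
        self.assertEqual(result.returncode, 0)
        self.assertEqual(result.stdout.strip(), "4")

    def test_off_nominal_behavior(self):
        # Off-nominal path: the system must report failure via its
        # exit code rather than hang or crash silently.
        result = subprocess.run(
            [sys.executable, "-c", "raise SystemExit(2)"],
            capture_output=True, text=True)
        self.assertEqual(result.returncode, 2)
```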

1.8.1 Types of system tests
The following are the different types of testing that should be considered during system testing:
1. Functional testing
2. User interface testing
3. Usability testing
4. Compatibility testing
5. Model based testing
6. Error exit testing
7. User help testing
8. Security testing
9. Capacity testing
10. Performance testing
11. Sanity testing
12. Regression testing
13. Reliability testing
14. Recovery testing
15. Installation testing
16. Maintenance testing
17. Accessibility testing

2.0 User Testing Methods
The user interface is the aggregate of means by which users interact with a system. It provides inputs that allow the users to control the system and outputs that allow the system to inform the users. In the user-centered design paradigm, the product is designed with user participation throughout. Sometimes users even become members of the design team, a significant departure from traditional methods. User testing is an evaluative process for discovering specific information about a design through the eyes of others. The following is a list of different user testing methods and their implementation.

2.1 Ethnographic Study / Field Observation
Usability requirements are best observed and determined in the field. A traditional laboratory environment makes data collection and recording easy, but it removes the user and the product from the context of real-world use. Field observation is first about inquiry, that is, interviewing users about their jobs and use of the product, and second about watching people use the product in their own environment.
2.2 Inquiry
This technique is best used in the early stages of development, when learning about the issues surrounding the use of a product.
2.3 Surveys/Questionnaires
Surveys can be used at any stage of development and are often used after products ship to assess customer satisfaction with the product. They can identify usability issues that ideally would have been caught in-house before release.
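One widely used post-release questionnaire is the System Usability Scale (SUS): ten statements rated 1 to 5, with odd items worded positively and even items negatively. A minimal scoring sketch:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    responses, each on a 1 (strongly disagree) to 5 (strongly agree)
    scale.

    Odd-numbered items are positively worded and contribute
    (score - 1); even-numbered items are negatively worded and
    contribute (5 - score). The sum is scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = 0
    for i, score in enumerate(responses, start=1):
        if not 1 <= score <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (score - 1) if i % 2 == 1 else (5 - score)
    return total * 2.5
```

A fully neutral questionnaire (all 3s) scores 50; in practice, scores are compared across releases rather than read in isolation.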

2.4 Journaled Sessions
Journaled sessions bridge usability inquiry and usability testing: you observe people experiencing the product's user interface. They allow evaluators to perform usability evaluation across long distances without much overhead. This technique is best used in the early stages of development.
2.5 Screen Snapshots
Screen snapshots are a method in which snapshots of the screen are taken at different times during the user's execution of a task or series of tasks. This technique is best used in the early to middle stages of development.

2.6 Heuristic evaluation
In a heuristic evaluation, a small group of experts examines the interface and evaluates each element against a list of commonly accepted principles, or heuristics.
The ten general principles for user interface design known as "heuristics" were written by Jakob Nielsen and can be found in his article "10 Heuristics for User Interface Design".

2.7 Cognitive Walkthrough
Expert evaluators construct task scenarios from a specification or early prototype and then role play the part of a user working with that interface. Cognitive walkthroughs are good for the early stages of development.

2.8 Formal Usability Inspections
Software inspection methodology (code inspection) is adapted to usability evaluation. This technique is designed to reduce the time required to discover defects in a tight product design cycle and is best used during early stages of development.

2.9 Pluralistic Walkthroughs
Users, developers, and usability professionals step through a task scenario, discussing and evaluating each element and its interaction in a meeting setting. This technique is best used in the early stages of development.

2.10 Standards Inspections
Standards inspections ensure compliance with industry standards. The elements of the product are analyzed for their conformance to the applicable industry standards. This technique is best used in the middle stages of development.

2.11 Guideline Checklists
Guidelines and checklists help ensure that usability principles are considered in a design. Use guideline checklists when performing usability inspections, such as heuristic evaluations or consistency inspections. A small set of usability guidelines follows:
1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
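To make an inspection repeatable, the checklist above can be encoded as data and evaluated mechanically. A minimal sketch (the pass/fail findings mapping is illustrative):

```python
# The ten usability guidelines from the checklist above.
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def checklist_report(findings):
    """Given a mapping of guideline name -> True (passes) or False,
    return the guidelines that still need attention, in checklist
    order. Unrecorded guidelines are treated as not yet passed."""
    return [h for h in HEURISTICS if not findings.get(h, False)]
```

An inspector fills in the findings per screen; the report then lists exactly which guidelines remain open.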