1. Explain Testing Methods?

1. White Box
Also called ‘Structural Testing' or ‘Glass Box Testing', white box testing exercises the code itself, keeping the system specifications in mind. Because the inner workings of the program are examined, it is typically performed by the developers.
* Mutation Testing
A number of mutants of the same program are created with minor changes; for a given test case, none of the mutants' results should coincide with the result of the original program.


* Basic Path Testing
Testing is done based on flow graph notation, using cyclomatic complexity and graph matrices.


* Control Structure Testing
The flow of control through execution paths is considered for testing. It covers:
Condition testing: branch testing, domain testing.
Data flow testing.
Loop testing: simple, nested, concatenated, and unstructured loops.
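The mutation-testing idea above can be sketched in a few lines of Python. The function and its two mutants are hypothetical stand-ins, not a real mutation framework:

```python
# Minimal mutation-testing sketch (illustrative): create mutants of a
# function by swapping an operator, then confirm that the test case
# distinguishes ("kills") every mutant.

def original(a, b):
    return a + b          # program under test

def mutant_sub(a, b):
    return a - b          # mutant: '+' changed to '-'

def mutant_mul(a, b):
    return a * b          # mutant: '+' changed to '*'

def is_killed(mutant, test_input, expected):
    """A mutant is 'killed' when its output differs from the original's."""
    return mutant(*test_input) != expected

test_input = (3, 4)
expected = original(*test_input)   # 7
killed = [is_killed(m, test_input, expected)
          for m in (mutant_sub, mutant_mul)]
print(killed)  # [True, True]: this test case kills both mutants
```

Note that the input (2, 2) would fail to kill the multiplication mutant, since 2 + 2 == 2 * 2; choosing test data that distinguishes mutants is the whole point of the technique.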

2. Gray Box

3. Black Box
Also called ‘Functional Testing', as it concentrates on testing the functionality rather than the internal details of the code.
Test cases are designed based on the task descriptions.
* Comparison Testing
Test case results are compared with the results of a test oracle.


* Graph Based Testing
Cause-and-effect graphs are generated, and cyclomatic complexity is considered in deriving the test cases.


* Boundary Value Testing
Boundary values of the equivalence classes are considered and tested, since inputs at the boundaries are the ones that most often fail even when equivalence class testing passes.


* Equivalence class Testing
Test inputs are classified into equivalence classes such that one input validates all the input values in that class.
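Equivalence class and boundary value testing are easiest to see together. A minimal sketch, assuming a hypothetical age field that accepts values from 18 to 60:

```python
# Illustrative: validating an age field whose valid range is 18..60.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence classes: one representative input checks the whole class.
assert is_valid_age(30)        # valid class
assert not is_valid_age(10)    # invalid class: below the range
assert not is_valid_age(70)    # invalid class: above the range

# Boundary values: the edges of each class, where off-by-one
# defects (e.g. using < instead of <=) tend to cluster.
for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
    assert is_valid_age(age) == expected

print("all class and boundary checks passed")
```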


Gray Box Testing : Similar to black box testing, but the test cases, risk assessments, and test methods involved in gray box testing are developed based on knowledge of the internal data and flow structures.

2. What is the Automated Testing?

Automated testing is, at its simplest, removing the "human factor" and letting the computer do the thinking. This can range from integrated debug tests to much more intricate processes. The idea of these tests is to find bugs that are often very challenging or time-intensive for human testers to find. This sort of testing can save many man-hours and can be more "efficient" in some cases. But it costs more to ask a developer to write extra lines of code into the game (or an external tool) than it does to pay a tester, and there is always the chance that there is a bug in the bug-testing program itself. Reusability is another problem; you may not be able to transfer a testing program from one title (or platform) to another. And of course, there is always the "human factor" of testing that can never truly be replaced.

Other successful alternatives or variations: Nothing is infallible. Realistically, a moderate split of human and automated testing can rule out a wider range of possible bugs than relying solely on one or the other. Giving the testers limited access to any automated tools can often help speed up the test cycle.
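As a rough sketch of the idea, an automated battery runs checks unattended and collects failures for a human to review afterward. The checks below are trivial placeholders, not a real product's test suite:

```python
# Illustrative automated test battery: each check is a named,
# self-verifying function run without human intervention.
def checks():
    yield "addition", lambda: 2 + 2 == 4
    yield "string reverse", lambda: "abc"[::-1] == "cba"
    yield "empty list is falsy", lambda: not []

def run_suite():
    """Run every check and return the names of the failures."""
    return [name for name, check in checks() if not check()]

failures = run_suite()
print(failures)  # []: an empty list means the whole battery passed
```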

3. What is Functional Acceptance Simple Test?

The functional acceptance simple test (FAST) is run on each development release to check that key features of the program are appropriately accessible and functioning properly on at least one test configuration (preferably the minimum or common configuration). This test suite consists of simple test cases that check the lowest level of functionality for each command, to ensure that task-oriented functional tests (TOFTs) can be performed on the program. The objective is to decompose the functionality of a program down to the command level and then apply test cases to check that each command works as intended. No attention is paid to the combination of these basic commands, the context of the feature that is formed by these combined commands, or the end result of the overall feature. For example, FAST for a File/Save As menu command checks that the Save As dialog box displays. However, it does not validate that the overall file-saving feature works, nor does it validate the integrity of saved files.

4. Explain Structural System Testing Techniques?

Usage " To determine if the system can function when subject to large volumes.
" It includes testing of
input transactions
Internal tables
Disk Space
Out put
Communication
Computer capacity
Interaction with people.
Objectives " To simulate production environment
" Normal or above normal volumes of transactions can be processed through the transaction within expected time frame.
" Application system would be able to process larger volume of data.
" System capacity should have sufficient resources to meet expected turnaround time.
How to Use " It should simulate as closely as possible to production environment.
" Online system should be stress tested with users entering test data with normal or above normal pace.
" Batch system should be tested with huge volumes/ numbers of batches
" The test conditions should have error conditions.
" Transactions used in stress testing are obtained from following 3 sources :
Test data generators
Test transactions created by test group
Transactions which were previously used in production.
" In stress testing the system should run as it would in the production environment.
When to use " When there is uncertainty that system will work with huge volumes of data and without generating any faults.
" Attempt is made to break system with huge amount of data.
" Most commonly used technique to test for online transaction systems as other
techniques are not effective.
Examples " Sufficient disk space allocated
" Communication lines are adequate
Disadvantage " Amount of time taken to prepare for testing
" Amount of resources utilized during test execution.

5. Explain Levels of Testing?

1. Unit Testing.
* Unit testing is primarily carried out by the developers themselves.
* Deals with the functional correctness and completeness of individual program units.
* White box testing methods are employed.


2. Integration Testing.
* Integration Testing: Deals with testing when several program units are integrated.
* Regression Testing: A change of behavior due to modification or addition is called ‘regression'. Regression testing re-runs earlier tests to ensure that changes have not broken existing functionality.
* Incremental Integration Testing: Checks for bugs that are encountered when a module is integrated with the existing ones.
* Smoke Testing: A battery of tests that checks the basic functionality of the program. If it fails, the program is not sent for further testing.
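The smoke-test gate described above can be sketched as follows; the check names are hypothetical:

```python
# Illustrative smoke gate: a build proceeds to further testing only
# when every basic check in the battery passes.
def run_pipeline(smoke_results):
    """smoke_results maps check name -> pass/fail for one build."""
    failed = [name for name, ok in smoke_results.items() if not ok]
    if failed:
        return "build rejected: " + ", ".join(failed)
    return "build accepted for further testing"

print(run_pipeline({"launches": True, "loads config": True}))
# -> build accepted for further testing
print(run_pipeline({"launches": True, "loads config": False}))
# -> build rejected: loads config
```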


3. System Testing.
* System Testing - Deals with testing the whole program system for its intended purpose.
* Recovery Testing : The system is forced to fail, and how well it recovers from the failure is checked.
* Security Testing : Checks the capability of the system to defend itself from hostile attacks on programs and data.
* Load & Stress Testing : The system is tested at maximum load, and extreme stress points are identified.
* Performance Testing : Used to determine the processing speed.
* Installation Testing : Installation and uninstallation are checked on the target platform.


4. Acceptance Testing.
* User Acceptance Testing (UAT) ensures that the project satisfies the customer's requirements.
* Alpha Testing : Testing done by the client at the developer's site.
* Beta Testing : Testing done by end-users at their own sites.
* Long-Term Testing : Checks for faults that occur during long-term usage of the product.
* Compatibility Testing : Determines how well the product works across different hardware, software, and operating environments.

6. What is Release Acceptance Test?

The release acceptance test (RAT), also referred to as a build acceptance or smoke test, is run on each development release to check that each build is stable enough for further testing. Typically, this test suite consists of entrance and exit test cases plus test cases that check mainstream functions of the program with mainstream data. Copies of the RAT can be distributed to developers so that they can run the tests before submitting builds to the testing group. If a build does not pass a RAT test, it is reasonable to do the following:

* Suspend testing on the new build and resume testing on the prior build until another build is received.
* Report the failing criteria to the development team.
* Request a new build.

7. What is Deployment Acceptance Test?

The configuration on which the Web system will be deployed is often very different from the develop-and-test configurations. Testing efforts must take this into account in the preparation and writing of test cases for installation-time acceptance tests. This type of test usually includes the full installation of the applications to the targeted environments or configurations.

8. What is Ping tests?

Ping tests use the Internet Control Message Protocol (ICMP) to send a ping request to a server. If the ping returns, the server is assumed to be alive and well. The downside is that a Web server will usually continue to answer ping requests even when the Web-enabled application has crashed.
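The false-positive problem is easy to illustrate. The two probe functions below are invented stand-ins (not a real ICMP library): the host answers pings while the application itself is down, so only an application-level check catches the failure:

```python
# Illustrative: a ping only proves the host is reachable; an
# application-level probe must exercise the app itself.
def icmp_ping(host):
    # Stand-in: the host still answers pings after the app crashes.
    return True

def http_health_check(host):
    # Stand-in: the application-level probe detects the crash.
    return False

def server_healthy(host):
    # Relying on ping alone would report this server as healthy.
    return icmp_ping(host) and http_health_check(host)

assert icmp_ping("example.com")           # looks alive...
assert not server_healthy("example.com")  # ...but the app is down
print("ping gave a false positive; the health check caught it")
```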

9. Explain the Manual Support Testing Technique:

Usage " It involves testing of all the functions performed by the people while preparing the data and using these data from automated system.
Objectives " Verify - manual support documents and procedures are correct.
" Determine -
Manual support responsibility is correct
Manual support people are adequately trained.
Manual support and automated segment are properly interfaced.
How to Use " It involves:
Evaluation of adequacy of process
Execution of process
" Process evaluated in all segments of SDLC.
" Execution of the can be done in conjunction with normal system testing.
" Instead of preparing, execution and entering actual test transactions the clerical and supervisory personnel can use the results of processing from application system.
" It involves several iterations of process.
" To test people it requires testing the interface between the people and application system.
When to use " Verification that manual systems function properly should be conducted throughout the SDLC.
" Should not be done at later stages of SDLC.
" Best done at installation stage so that the clerical people do not get used to the actual system just before system goes to production.
Examples " Provide input personnel with the type of information they would normally receive from their customers and then have them transcribe that information and enter it in the computer.
" Users can be provided a series of test conditions and then asked to respond to those conditions. Conducted in this manner, manual support testing is like an examination in which the users are asked to obtain the answer from the procedures and manuals available to them.

10. Explain Intersystem Testing Technique:

Background " Application systems are frequently interconnected to other application system.
" The interconnection may be data coming from another application system, leaving for another application system or both.
" Frequently multiple systems (applications) sometimes called cycles or functions are involved.
Usage " To ensure interconnection between application functions correctly.
Objectives " Determining -
Proper parameters and data are correctly passed between the applications
Documentation for involved system is correct and accurate.
" Ensure Proper timing and coordination of functions exists between the application system.
How to Use " Operations of multiple systems are tested.
" Multiple systems are run from one another to check that they are acceptable and processed properly.
When to use " When there is change in parameters in application system
" The parameters, which are erroneous then risk associated to such parameters, would decide the extent of testing and type of testing.
" Intersystem parameters would be checked / verified after the change or new application is placed in the production.
Examples " Develop test transaction set in one application and passing to another system to verify the processing.
" Entering test transactions in live production environment and then using integrated test facility to check the processing from one system to another.
" Verifying new changes of the parameters in the system, which are being tested, are corrected in the document.
Disadvantage " Time consuming
" Cost may be expensive if system is run several times iteratively.


11. What is the Software Testing?

Software testing is more than just error detection:

Testing software is operating the software under controlled conditions, to (1) *verify* that it behaves "as specified"; (2) to *detect* *errors*, and (3) to *validate* that what has been specified is what the user actually wanted.

1. *Verification* is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements. [*Verification: Are we building the system right?*]

2. *Error Detection*: Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should.

3. *Validation* looks at the system correctness - i.e. is the process of checking that what has been specified is what the user actually wanted. [*Validation: Are we building the right system?*]

In other words, validation checks to see if we are building what the customer wants/needs, and verification checks to see if we are building that system correctly. Both verification and validation are necessary, but different components of any testing activity.

The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process of analysing a software item to detect the differences between existing and required conditions (that is defects/errors/bugs) and to evaluate the features of the software item. Remember: The purpose of testing is verification, validation and error detection in order to find problems - and the purpose of finding those problems is to get them fixed.

Software Testing

Testing involves operating a system or application under controlled conditions and evaluating the results. Every test consists of 3 steps:
Planning: The inputs to be given, the results to be obtained, and the process to follow are planned.
Execution: Preparing the test environment, completing the test, and determining the test results.
Evaluation: Comparing the actual test outcome with what the correct outcome should have been.
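The three steps above can be expressed as a minimal structure. The square-root system under test is just a placeholder:

```python
# Illustrative: one test as plan -> execute -> evaluate.
def run_test(test_input, expected, system_under_test):
    # Planning: the input and the expected result are fixed up front.
    plan = {"input": test_input, "expected": expected}
    # Execution: run the system and capture the actual outcome.
    actual = system_under_test(plan["input"])
    # Evaluation: compare the actual outcome with the expected one.
    return {"passed": actual == plan["expected"],
            "expected": plan["expected"],
            "actual": actual}

result = run_test(test_input=4, expected=2.0,
                  system_under_test=lambda x: x ** 0.5)
print(result["passed"])  # True
```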

12. How to execute a testing?

Usage:
* To determine whether the system achieves the desired level of proficiency in production.
* Used to verify response time, turnaround time, and design performance.
* Test execution can be done using a simulated system or the actual system.
* The system can be tested as a whole or in parts.

Objectives:
* To determine whether the system can meet specific performance criteria.
* Verify whether the system makes optimum use of hardware and software.
* Determine the response time to online user requests.
* Determine the transaction processing turnaround time.

How to Use:
* Can be performed in any phase of the SDLC.
* Used to evaluate a single aspect of the system.
* Executed in the following ways: using hardware and software monitors; simulating the functioning with a simulation model; or creating quick-and-dirty programs to evaluate the approximate performance of the completed system.

When to Use:
* Should be used early in the SDLC.
* Should be performed when it is known that the results can be used to make changes to the system structure.

Examples:
* Adequacy of transaction turnaround time.
* Optimum use of hardware and software.

13. What is Forced-Error Test?

The forced-error test (FET) consists of negative test cases that are designed to force a program into error conditions. A list of all error messages that the program issues should be generated. The list is used as a baseline for developing test cases. An attempt is made to generate each error message in the list. Obviously, tests to validate error-handling schemes cannot be performed until all the error handling and error messages have been coded. However, FETs should be thought through as early as possible. Sometimes, the error messages are not available. The error cases can still be considered by walking through the program and deciding how the program might fail in a given user interface, such as a dialog, or in the course of executing a given task or printing a given report. Test cases should be created for each condition to determine what error message is generated.
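A minimal FET sketch: the `open_config` function and its error messages are invented for illustration, standing in for a real program's baseline list of messages. Each negative case deliberately triggers one error condition and checks the message produced:

```python
# Illustrative forced-error test: drive the code into each known
# error condition and verify the exact message.
def open_config(path):
    if not path:
        raise ValueError("path must not be empty")
    if not path.endswith(".cfg"):
        raise ValueError("unsupported file type")
    return {"path": path}

# Baseline: every error message the program issues, with an input
# designed to force it.
error_cases = [
    ("", "path must not be empty"),
    ("settings.txt", "unsupported file type"),
]

for bad_input, expected_msg in error_cases:
    try:
        open_config(bad_input)
        raise AssertionError("error condition was not triggered")
    except ValueError as err:
        assert str(err) == expected_msg

print("every error condition was forced and produced its message")
```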

14. What is Exploratory Test?

Exploratory Tests do not involve a test plan, checklist, or assigned tasks. The strategy here is to use past testing experience to make educated guesses about places and functionality that may be problematic. Testing is then focused on those areas. Exploratory testing can be scheduled. It can also be reserved for unforeseen downtime that presents itself during the testing process.

15. How to write testing documentations?

Testing of reference guides and user guides checks that all features are reasonably documented. Every page of documentation should be keystroke-tested for the following errors:

* Accuracy of every statement of fact
* Accuracy of every screen shot, figure, and illustration
* Accuracy of the placement of figures and illustrations
* Accuracy of every tutorial, tip, and instruction
* Accuracy of marketing collateral (claims, system requirements, and screen shots)
* Accuracy of downloadable documentation (PDF, HTML, or text files)

16. How to Install/uninstall Test?

Web systems often require both client-side and server-side installs. Testing of the installer checks that installed features function properly--including icons, support documentation, the README file, and registry keys. The test verifies that the correct directories are created and that the correct system files are copied to the appropriate directories. The test also confirms that various error conditions are detected and handled gracefully.

Testing of the uninstaller checks that the installed directories and files are appropriately removed, that configuration and system-related files are also appropriately removed or modified, and that the operating environment is restored to its original state.

17. What is External Beta Testing?

External beta testing offers developers their first glimpse at how users may actually interact with a program. Copies of the program or a test URL, sometimes accompanied by a letter of instruction, are sent out to a group of volunteers who try out the program and respond to questions in the letter. Beta testing is black-box, real-world testing. Beta testing can be difficult to manage, and the feedback that it generates normally comes too late in the development process to contribute to improved usability and functionality. External beta-tester feedback may be reflected in a README file or deferred to future releases.

18. What is Unit Tests?

Unit tests are positive tests that evaluate the integrity of software code units before they are integrated with other software units. Developers normally perform unit testing. Unit testing represents the first round of software testing--when developers test their own software and fix errors in private.
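A minimal unit test using Python's standard unittest module; the `word_count` function is a stand-in for the code unit being checked in isolation:

```python
import unittest

# The unit under test (a stand-in for real application code).
def word_count(text):
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("unit tests run first"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Run the suite programmatically and report the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```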

19. What is Click-stream measurement tests?

Makes requests for a set of Web pages and records statistics about the response, including total page views per hour, total hits per week, total user sessions per week, and derivatives of these numbers. The downside is that if your Web-enabled application takes twice as many pages as it should for a user to complete his or her goal, the click-stream test makes it look as though your Web site is popular, while to the user your Web site is frustrating.
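The statistics themselves are simple aggregations over a request log. The log below is synthetic data for illustration; note how a three-page checkout inflates the page-view count without telling you anything about user frustration:

```python
from collections import Counter

# Synthetic request log: (hour, path) pairs, as a click-stream
# measurement tool might record them.
request_log = [
    (9, "/home"), (9, "/cart"), (9, "/checkout"),   # one 3-page session
    (10, "/home"), (10, "/home"),                   # two quick visits
]

views_per_hour = Counter(hour for hour, _ in request_log)
print(views_per_hour[9], views_per_hour[10])  # 3 2

# The 3 views at hour 9 came from a single user clicking through an
# overly long flow: high counts can mask friction.
```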

20. What is Web-Enabled Application Measurement Tests?

1. Mean time between failures, in seconds
2. Amount of time, in seconds, for each user session (sometimes known as a transaction)
3. Application availability and peak usage periods
4. Which media elements are most used (for example, HTML vs. Flash, JavaScript vs. HTML forms, Real vs. Windows Media Player vs. QuickTime)

21. What is System-Level Test?

System-level tests consist of batteries of tests that are designed to fully exercise a program as a whole and check that all elements of the integrated system function properly.

22. What is Functional System Testing?

System tests check that the software functions properly from end to end. The components of the system include: a database, Web-enabled application software modules, Web servers, Web-enabled application frameworks, Web browser software, TCP/IP networking routers, media servers to stream audio and video, and messaging services for email.

A common mistake of test professionals is to believe that they are conducting system tests while they are actually testing a single component of the system. For example, checking that the Web server returns a page is not a system test if the page contains only a static HTML page.

System testing is the process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It verifies proper execution of the entire set of application components including interfaces to other applications. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.

System testing checklist include question about:

* Functional completeness of the system or the add-on module
* Runtime behavior on various operating systems or different hardware configurations
* Installability and configurability on various systems
* Capacity limitation (maximum file size, number of records, maximum number of concurrent users, etc.)
* Behavior in response to problems in the programming environment (system crash, unavailable network, full hard-disk, printer not ready)
* Protection against unauthorized access to data and programs.

23. What is Scalability and Performance Testing?

Scalability and performance testing is the way to understand how the system will handle the load caused by many concurrent users. In a Web environment, concurrent use is measured simply as the number of users making requests at the same time.

Performance testing is designed to measure how quickly the program completes a given task. The primary objective is to determine whether the processing speed is acceptable in all parts of the program. If explicit requirements specify program performance, then performance tests are often performed as acceptance tests.

As a rule, performance tests are easy to automate. This makes sense above all when you want to make a performance comparison of different system conditions while using the user interface. The capture and automatic replay of user actions during testing eliminates variations in response times.

This type of test should be designed to verify response and execution times. Bottlenecks in a system are generally found during this stage of testing.
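A performance check reduces to timing a task and comparing the result against an explicit requirement. The task and the one-second threshold below are assumed figures for illustration:

```python
import time

# Illustrative performance test: measure elapsed wall-clock time for a
# task and compare it against a stated requirement.
def task():
    return sum(range(100_000))   # stand-in for the operation under test

REQUIREMENT_SECONDS = 1.0        # assumed acceptance threshold

start = time.perf_counter()
result = task()
elapsed = time.perf_counter() - start

print(elapsed < REQUIREMENT_SECONDS)  # True: within the requirement
```

Because the measurement is just a number, tests like this are easy to automate and re-run across system configurations, as noted above.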

24. What is the Load Testing?

The process of modeling application usage conditions and executing them against the application and system under test, in order to analyze the application and system and determine capacity, throughput, speed, transaction-handling capability, scalability, and reliability while under stress.

This type of test is designed to identify possible overloads to the system, such as too many users signed on to the system, too many terminals on the network, or a network system that is too slow.

Load testing is a simulation of how a browser will respond to intense use by many individuals. The Web sessions can be recorded live and set up so that the test can be run during peak times and also during slow times. The following are two different types of load tests:

Single session - A single session should be set up on a browser that will have one or multiple responses. The timing of the data should be put in a file. After the test, you can set up a separate file for report analysis.

Multiple session - A multiple session should be developed on multiple browsers with one or multiple responses. Multivariate statistical methods may be needed for a complex but general performance model.
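A multiple-session load driver can be sketched with a thread pool: N concurrent "sessions" each issue a request and record its timing for later report analysis. `fake_request` is a stand-in for a real browser round trip:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Illustrative load driver: concurrent sessions call a stand-in
# request function; per-request timings are collected.
def fake_request(session_id):
    start = time.perf_counter()
    time.sleep(0.01)   # stand-in for network + server response time
    return session_id, time.perf_counter() - start

# 10 sessions, up to 5 in flight at once.
with ThreadPoolExecutor(max_workers=5) as pool:
    timings = list(pool.map(fake_request, range(10)))

slowest = max(t for _, t in timings)
print(len(timings), "sessions completed; slowest took", round(slowest, 3), "s")
```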

When performing stress testing, looping transactions back on themselves so that the system stresses itself simulates stress loads, and may be useful for finding synchronization problems and timing bugs, Web priority problems, memory bugs, and Windows API problems. For example, you may want to simulate an incoming message that is then put out on a looped-back line; this in turn will generate another incoming message. Then you can use another system of comparable size to create the stress load.

Memory leaks are often found under stress testing. A memory leak occurs when a test leaves allocated memory behind and does not correctly return the memory to the memory allocation scheme. The test seems to run correctly, but after several iterations the available memory is reduced until the system fails.
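The iteration-by-iteration growth can be observed with Python's standard tracemalloc module. The leaky function below is deliberately contrived: it keeps a reference to every allocation, so traced memory grows on each pass:

```python
import tracemalloc

# Illustrative leak check: allocations that survive each iteration
# show up as steadily growing traced memory.
leak = []

def leaky_iteration():
    leak.append(bytearray(100_000))   # ~100 KB never returned

tracemalloc.start()
baseline, _ = tracemalloc.get_traced_memory()
for _ in range(10):
    leaky_iteration()
current, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

# 10 iterations x ~100 KB each; 500 KB of growth is well beyond noise.
grew = current - baseline > 500_000
print(grew)  # True: memory grows with every iteration
```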

Peak Load and Testing Parameters:

Determining your peak load is important before beginning the assessment of the Web site test. It may mean more than just using user requests per second to stress the system. There should be a combination of determinants, such as requests per second, processor time, and memory usage. There is also the consideration of the type of information that is on your Web page, from graphics and code processing, such as scripts, to ASP pages. Then it is important to determine what is fast and what is slow for your system. The type of connection can be a critical component here, such as T1 or T3 versus a modem hookup. After you have selected your threshold, you can stress your system to additional limits.

As a tester you need to set up test parameters to make sure you can log the number of users coming into and leaving the test. This should be started in a small way and steadily increased. The test should also begin by selecting a test page that may not have a large amount of graphics and steadily increasing the complexity of the test by increasing the number of graphics and image requests. Keep in mind that images will take up additional bandwidth and resources on the server but do not really have a large impact on the server's processor.

Another important item to remember is that you need to account for the length of time each user will spend on each page. As you test, you should keep a log to determine the approximate time spent on each page, whether it is 25 or 30 seconds. If each user spends at least 30 seconds on each page, that will produce a heightened load on the server as requests are queued; this can be analyzed as the test continues.


25. Explain Parallel Testing Technique:

Usage " To ensure that the processing of new application (new version) is consistent with respect to the processing of previous application version.
Objectives " Conducting redundant processing to ensure that the new version or application performs correctly.
" Demonstrating consistency and inconsistency between 2 versions of the application.
How to Use " Same input data should be run through 2 versions of same application system.
" Parallel testing can be done with whole system or part of system (segment).
When to use " When there is uncertainty regarding correctness of processing of new application where the new and old version are similar.
" In financial applications like banking where there are many similar applications the processing can be verified for old and new version through parallel testing.
Examples " Operating new and old version of a payroll system to determine that the paychecks from both systems are reconcilable.
" Running old version of application to ensure that the functions of old system are working fine with respect to the problems encountered in the new system.