1. What makes a good test engineer?
A good test engineer has a "test to break" attitude, takes the point of view of the customer, has a strong desire for quality, and pays attention to detail. Tact and diplomacy help maintain a cooperative relationship with developers, as does the ability to communicate with both technical and non-technical people. Previous software development experience is also helpful: it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming.
2. What is a constant?
In software, a constant is a meaningful name that represents a number or string that does not change. Its value remains the same, i.e. constant, throughout the execution of a program.
Why do developers use constants? If our code contains constant values that keep reappearing, or depends on certain numbers that are difficult to remember, using constants improves both the readability and the maintainability of the code.
To give an example, we declare a constant, call it "Pi", set it to 3.14159265, and use it throughout our code. Constants such as Pi, as the name implies, store values that remain constant throughout the execution of our program.
Keep in mind that, unlike variables, which can be read from and written to, constants are read-only. Although constants resemble variables, we cannot modify them or assign new values to them as we can with variables, but we can make constants public or private, and we can specify their data type.
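The answer above is language-agnostic; as a minimal sketch, here is how the Pi example might look in Python, where a constant is conventionally an uppercase name assigned once and can be marked read-only for type checkers with typing.Final (the circle functions are invented for illustration):

```python
from typing import Final

# By convention (and, for type checkers, via typing.Final), PI is treated as
# read-only: it is assigned once and never modified during execution.
PI: Final[float] = 3.14159265

def circle_area(radius: float) -> float:
    """Compute a circle's area, using the named constant for readability."""
    return PI * radius * radius

def circle_circumference(radius: float) -> float:
    """Reusing the same named constant keeps the code maintainable."""
    return 2 * PI * radius

if __name__ == "__main__":
    print(circle_area(2.0))
    print(circle_circumference(3.0))
```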
3. What is TestDirector?
TestDirector®, also known as Mercury TestDirector®, is a software tool for software QA professionals. Mercury TestDirector®, as the name implies, is a product of Mercury Interactive Corporation, 379 North Whisman Road, Mountain View, California 94043 USA.
Mercury's other products include the Mercury QuickTest Professional™, Mercury WinRunner™, also known as WinRunner™, and Mercury Business Process Testing™.
4. What is data integrity?
Data integrity is one of the six fundamental components of information security. Data integrity is the completeness, soundness, and wholeness of data that also complies with the intention of the data's creators.
Databases may store important data, including customer information, orders, and pricing tables. In databases, data integrity is achieved by preventing the accidental, deliberate, or unauthorized insertion, modification, or destruction of data.
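As one hypothetical illustration (the customers/orders tables and their columns are invented for this sketch), a relational database can enforce integrity with declarative constraints that reject data which would corrupt the tables; here is a small SQLite example in Python:

```python
import sqlite3

# A minimal sketch of database-enforced data integrity: NOT NULL, UNIQUE,
# CHECK and FOREIGN KEY constraints reject inserts that would break the data.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enable referential integrity checks

conn.execute("""
    CREATE TABLE customers (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE          -- no missing or duplicate emails
    )""")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),  -- no orphan orders
        price       REAL NOT NULL CHECK (price >= 0)            -- no negative prices
    )""")

conn.execute("INSERT INTO customers (id, email) VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO orders (id, customer_id, price) VALUES (1, 1, 9.99)")

try:
    # Violates the CHECK constraint, so the database refuses the insert.
    conn.execute("INSERT INTO orders (id, customer_id, price) VALUES (2, 1, -5)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```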
5. What is PDR - Peer Design Review?
PDR is an acronym. In the world of software QA or testing, it stands for "peer design review", informally known as "peer review".
6. What is a waiver?
In software QA, a waiver is an authorization to accept software that has been submitted for inspection, found to depart from specified requirements, but is nevertheless considered suitable for use "as is", or after rework by an approved method.
7. What are virtual addresses?
In virtual storage systems, virtual addresses are assigned to auxiliary storage locations. The use of virtual addresses allows those locations to be accessed as though they were part of main storage.
8. What is a version description document (VDD)?
Version description document (VDD) is a document that accompanies and identifies a given version of a software product. Typically the VDD includes the description and identification of the software, identification of the changes incorporated into this version, and the installation and operating information unique to this version of the software.
9. What is a document version?
A document version is an initial release (or complete re-release) of a document, as opposed to a revision resulting from issuing change pages to a previous release.
"Variants" are versions of a program. Variants result from the application of software diversity.
1. "Variable trace" is a (computer) record of the names and the values of variables accessed and/or changed during the execution of a computer program.
2. "Value trace" is same as variable trace. It is a (computer) record of the names and values of variables accessed and/or changed during the execution of a computer program.
"Utility" is a software tool designed to perform some frequently used support function. For example, one utility is a program to print files.
13. What is Interface Analysis?
Interface analysis checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.
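As a minimal sketch of the idea in Python (the "payment handler" rule and the function names are invented), interface analysis can be as simple as comparing each element's actual interface against a predefined rule and reporting inconsistencies:

```python
import inspect

# A hypothetical predefined rule: every payment handler must accept
# (amount, currency) in that order.
EXPECTED_PARAMS = ["amount", "currency"]

def charge_card(amount, currency):   # conforms to the interface
    ...

def refund(amount):                  # missing the 'currency' parameter
    ...

def check_interfaces(functions, expected_params):
    """Return a description of every function whose interface breaks the rule."""
    problems = []
    for func in functions:
        params = list(inspect.signature(func).parameters)
        if params != expected_params:
            problems.append(f"{func.__name__}: expected {expected_params}, found {params}")
    return problems

for problem in check_interfaces([charge_card, refund], EXPECTED_PARAMS):
    print(problem)
```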
The "user guide" is the same as the user manual. The user guide is a document that presents information necessary to employ a system or component to obtain the desired results. Typically, what is described are system and component capabilities, limitations, options, permitted inputs, expected outputs, error messages, and special instructions.
15. What is user-friendly software?
A computer program is "user friendly" when it is designed with ease of use as one of its primary objectives.
16. What is User documentation?
"User documentation" is a document that describes the way a software product or system should be used to obtain the desired results.
17. What is Upwardly compatible software?
"Upwardly compatible software" is software that is compatible with a later or more complex version of itself. For example, an upwardly compatible software is able to handle files created by a later version of itself.
18. What is the CMM?
CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.
19. Explain Configuration management?
Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts, and can easily adapt to your software tool and process needs.
20. What is a Test Configuration Manager?
Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.
21. What is a Database Administrator?
Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's software, apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.
22. What is a Test Build Manager?
Test Build Managers deliver current software versions to the test environment, install the application's software, apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware.
Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.
23. What is a QA engineer?
QA engineers are test engineers, but we do more than just testing. Good QA engineers understand the entire software development process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are important. We, QA engineers, are successful if people listen to us, if people use our tests, if people think that we're useful, and if we're happy doing our work. I would love to see QA departments staffed with experienced software developers who coach development teams to write better code, but I've never seen it. Instead of coaching, we QA engineers tend to be process people.
24. What are processes and procedures?
Detailed and well-written processes and procedures ensure that the correct steps are executed to facilitate the successful completion of a task. They also ensure that a process is repeatable.
A document describing the conduct and results of the testing carried out for a system or system component.
An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.
Another term for test harness.
A review that refers to content of the technical material being reviewed.
1. Documentation specifying the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, responsibilities, required resources, and any risks requiring contingency planning.
or
A formal or informal plan to be followed to assure the controlled testing of the product under test.
2. A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.
A chronological record of all relevant details about the execution of a test.
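As a small, hypothetical sketch in Python (the test identifier and messages are invented), such a chronological record can be nothing more than timestamped entries appended in the order events occur:

```python
import datetime

test_log = []  # chronological record of test execution details

def log_event(test_id: str, message: str) -> None:
    """Append a timestamped entry so the order of events is preserved."""
    timestamp = datetime.datetime.now().isoformat(timespec="seconds")
    test_log.append((timestamp, test_id, message))

log_event("TC-001", "test environment initialised")
log_event("TC-001", "step 1 executed, actual result matched expected result")
log_event("TC-001", "test passed")

for timestamp, test_id, message in test_log:
    print(timestamp, test_id, message)
```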
31. What is Test incident report?
A document reporting on any event that occurs during testing that requires further investigation.
32. What is Test documentation?
Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, and test report.
Design could mean many things, but it often refers to functional design or internal design. Good functional design is indicated by software whose functionality can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable; is robust, with sufficient error handling and status logging capability; and works correctly when implemented.
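As a minimal, hypothetical sketch of the "robust, with sufficient error handling and status logging" part of good internal design (the function and file format here are invented), consider:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orders")

def read_order_total(path: str) -> float:
    """Read an order total from a file, with error handling and status logging.

    Each failure mode is handled explicitly and the outcome is logged, which is
    the kind of robustness the description of good internal design refers to.
    """
    try:
        with open(path, "r", encoding="utf-8") as handle:
            total = float(handle.read().strip())
    except FileNotFoundError:
        logger.error("order file %s not found", path)
        return 0.0
    except ValueError:
        logger.error("order file %s did not contain a number", path)
        return 0.0
    logger.info("order total %.2f read from %s", total, path)
    return total

if __name__ == "__main__":
    print(read_order_total("missing_order.txt"))  # logs an error, returns 0.0
```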
The terms "test scenario" and "test case" are often used synonymously. Test scenarios are test cases or test scripts, and the sequence in which they are to be executed. Test scenarios are test cases that ensure that all business process flows are tested from end to end. Test scenarios are independent tests, or a series of tests that follow each other, where each of them dependent upon the output of the previous one. Test scenarios are prepared by reviewing functional requirements, and preparing logical groups of functions that can be further broken into test procedures. Test scenarios are designed to represent both typical and unusual situations that may occur in the application. Test engineers define unit test requirements and unit test scenarios. Test engineers also execute unit test scenarios. It is the test team that, with assistance of developers and clients, develops test scenarios for integration and system testing. Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios. Test procedures or scripts may cover multiple test scenarios.
1. Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.
A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
or
The definition of test case differs from company to company, engineer to engineer, and even project to project. A test case usually includes an identified set of information about observable states, conditions, events, and data, including inputs and expected outputs.
2. A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as a...
* Test case identifier;
* Test case name;
* Objective;
* Test conditions/setup;
* Input data requirements/steps, and
* Expected results.
Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.
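As a minimal sketch (the login test and its field values are invented), the particulars listed above can be captured in a simple Python structure so that test cases are recorded consistently:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # The particulars listed above, carried as plain fields.
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: dict
    steps: list = field(default_factory=list)
    expected_result: str = ""

login_test = TestCase(
    identifier="TC-LOGIN-001",
    name="Valid login",
    objective="Verify that a registered user can log in",
    setup="User 'demo' exists with password 'secret'",
    input_data={"username": "demo", "password": "secret"},
    steps=[
        "Open the login page",
        "Enter the username and password",
        "Click the Login button",
    ],
    expected_result="The user is taken to the home page",
)

print(login_test.identifier, "-", login_test.name)
for number, step in enumerate(login_test.steps, start=1):
    print(f"  step {number}: {step}")
print("  expected:", login_test.expected_result)
```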