* Scripts must retain independence from each other
--If there are script dependencies, consider a 2-tier approach in which the "batch" tier (sometimes called a Scenario) is independent and all dependencies are handled within the batch script
* Consider concurrent impact on data
--Similar test cases require different input data to run on different machines simultaneously
--Consider implementing machine-specific (C: drive) and machine-independent (LAN drive) data sources to feed data-driven tests
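The two-source idea above can be sketched as a small loader: prefer a machine-specific file, fall back to the shared LAN copy. This is a minimal Python sketch; the function name, CSV layout, and directory parameters are assumptions for illustration, not part of any tool.

```python
import csv
from pathlib import Path

def load_test_rows(case_name, local_dir, shared_dir):
    """Feed a data-driven test: prefer a machine-specific data file so
    identical scripts running on different machines consume different
    input rows, and fall back to the machine-independent LAN copy when
    no local override exists."""
    local = Path(local_dir) / f"{case_name}.csv"
    shared = Path(shared_dir) / f"{case_name}.csv"
    source = local if local.exists() else shared
    with open(source, newline="") as fh:
        return list(csv.DictReader(fh))
```

In practice `local_dir` would be a fixed C: drive path and `shared_dir` a UNC path; passing them as parameters just keeps the sketch testable.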
* Central Reporting Log (Cycle Execution Log) can be implemented to collect remote results of all machines in the test cycle
* By combining setup and restore functions into a State Navigation component, we can eliminate about 30%* of the steps required for each test case
* Such reduction affects those portions of the test cases that are most prone to rework as the AUT changes
* Can also perform routine tasks such as monitoring system resources, timestamping, and error recovery
* * Average based on 6 years' experience across a wide range of industries and applications
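The State Navigation idea above can be sketched as a thin wrapper that runs every test case between a shared setup and restore step. A minimal Python sketch; the class and method names are hypothetical, not a feature of any test tool.

```python
class StateNavigator:
    """Consolidates the setup and restore steps that would otherwise be
    repeated inside every test case. When the AUT's navigation changes,
    only this component needs rework, not the whole script base."""

    def __init__(self, setup, restore):
        self._setup = setup      # drives the AUT to the test's start state
        self._restore = restore  # returns the AUT to a known base state

    def run(self, test_case):
        """Run one test case between setup and restore, so the script
        itself contains only the verification steps."""
        self._setup()
        try:
            return test_case()
        finally:
            self._restore()  # always restore, even if the test fails
```

The restore call also doubles as error recovery: a failed test case cannot leave the AUT stranded in an unknown state for the next script.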
* Strive for the simplest scripting environment possible
--Move all complexities to the Developer/Guru
--Gated by the skills of the Developers/Gurus
* Open Architecture allows tremendous flexibility in customization
* Hide as much of the complexity of the framework as possible
--Automatically load harness components at tool load time
--Incorporate routine maintenance and special reporting needs into simple function calls
* Consider the maintainability and simplicity of the framework itself when making enhancements to it
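The "routine maintenance and special reporting needs into simple function calls" point above can be sketched as one harness call that hides timestamping, logging, and error handling from the scripter. A hedged Python sketch; `checkpoint` and its signature are assumptions for illustration.

```python
import logging
import time

log = logging.getLogger("harness")

def checkpoint(name, verify_fn):
    """The one simple call a test scripter writes; the harness hides
    the timestamping, result logging, and exception handling behind it."""
    start = time.time()
    try:
        ok = bool(verify_fn())
        status = "PASS" if ok else "FAIL"
    except Exception as exc:
        ok, status = False, f"ERROR: {exc}"
    log.info("%s %s (%.2fs)", name, status, time.time() - start)
    return ok
```

A scripter sees only `checkpoint("login window", verify)`; the complexity lives with the Developer/Guru, matching the skill split the slides describe.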
* Business Skilled (a.k.a. "Subject Matter Expert" or "SME"): Understands business needs in-depth, spotty knowledge of technology, no coding knowledge
* Tester Skilled: Understands discipline of testing and test development, spotty knowledge of scripting/coding
* Developer Skilled: Understands software development practices. Proficient coder. Usually NOT skilled in the discipline of testing!
* Guru: Understands software development practices in-depth at the strategic and tactical level. Also understands test practices in-depth. Gets the "Full-Picture".
* Supports Strategic QA Goals & Objectives
* Conceptual Simplicity & Streamlined Use
* Efficient and Effective Test Development, Execution, and Reporting
* Maintenance and Robustness Considerations (Scripts and Harness)
* Each Construct is Necessary, Sum of Constructs is Sufficient
* Poised for Expansion
* Matched to Team Skill Set
# A Plethora of "Architectural Frameworks" has emerged over the past several years
--General Purpose Frameworks: "One-Size-Fits-All"
--Technology-Specific (Telephony Interfaces, Multi-Platform Applications, etc.)
# Very difficult to anticipate all requirements in a "One-Size-Fits-All" model
--Unnecessary constructs (e.g.: may contain elaborate environment constructs for single-environment projects, etc.)
--Insufficient constructs (e.g.: May lack key Active X support for required third-party controls, etc.)
--Success of a framework is often hard to measure
* Harness and Tests should be under Configuration Management/Version Control
* Consider mechanisms to differentiate between a failed test (AUT is really broken) and a failed test case (test case is not implemented correctly)
* Build a little, Test a little
* Test Early/Test Often
* Consider implementing links to ancillary applications to assist in defect discovery (Boundschecker, Norton Utilities, etc.)
* Keep it simple!
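The "failed test vs. failed test case" distinction above can be sketched as a classification convention: a verification that returns false points at the AUT, while an unexpected exception usually means the test case itself is broken. A hedged Python sketch; the function and labels are hypothetical, not a tool feature.

```python
def classify_result(test_case):
    """Distinguish an AUT defect from a broken script: a failed
    verification means the AUT is suspect, while an unexpected
    exception usually means the test case is not implemented correctly."""
    try:
        ok = test_case()
    except Exception as exc:
        return ("TEST_CASE_ERROR", repr(exc))  # script bug: fix the test
    if ok:
        return ("PASS", None)
    return ("AUT_FAILURE", None)  # product bug: log a defect
```

Routing the two failure kinds to different queues keeps defect reports clean and tells the harness team which scripts need rework.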
* Determine continuum of unattended needs:
--Close monitoring of executing scripts
--Daily lab execution with periodic monitoring
--Overnight execution - little or no monitoring
* Harness robustness and error-recovery must be adequate for the level of unattended need
* Typically requires strategies and negotiations with Network Security to ensure testing needs and security needs are met
* Central Reporting Log (Cycle Execution Log) can additionally be implemented for remote status monitoring
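The Central Reporting Log above can be sketched as each machine appending its results to one shared file that a monitor watches. A minimal Python sketch assuming a CSV layout; the function name and columns are illustrative, and a LAN path or central database would serve equally well.

```python
import csv
import datetime
import socket
from pathlib import Path

def report_status(log_path, script, status):
    """Append one machine's result to the shared Cycle Execution Log so
    remote progress across the whole test cycle is visible in one place."""
    new_file = not Path(log_path).exists()
    with open(log_path, "a", newline="") as fh:
        writer = csv.writer(fh)
        if new_file:
            writer.writerow(["timestamp", "machine", "script", "status"])
        writer.writerow([
            datetime.datetime.now().isoformat(timespec="seconds"),
            socket.gethostname(),  # identifies which lab machine reported
            script,
            status,
        ])
```

Because every row carries a machine name and timestamp, the same log serves both end-of-cycle result collection and live status monitoring during overnight runs.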
# Mature Applications: Straightforward scripting
--AUT is stable: slight risk to the script base as the AUT changes
# New Applications (can expect changes in windows/objects/navigational paths)
--Considerable risk to existing script base as AUT changes
--Affects "granularity" of the test case
# If risk to existing script base is moderate or high
--Consider a "State Navigation" component to the Framework
--Allows navigational components of a test to be consolidated for ease of maintenance
--Must understand the "Anatomy of a Test Case"
► Tool should load with full knowledge of AUT
--For single AUT, harness-load command added to config (e.g.: TSLINIT for WinRunner or Settings --> Options --> Environment --> Startup Test)
--Loads harness, all AUT-specific extensions, GUI Map Files, etc.
► For Teams with Multiple Projects
--Embed a User Interface step (e.g.: "create_list_dialogue") to select AUT at tool startup
► Helpful Hint: Should also load knowledge and functions for any needed ancillary applications (Telnet sessions, Middleware apps, Rumba, etc.)
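The startup steps above can be sketched as a small loader: a single-AUT shop loads its one profile automatically, while a multi-project team is prompted to pick the AUT, echoing the `create_list_dialogue` step. A Python sketch of the idea only; the profile names and component lists are invented for illustration, and the real mechanism would be the tool's own startup test (e.g. TSLINIT).

```python
# Hypothetical AUT profiles: harness plus AUT-specific extensions,
# GUI maps, and ancillary-application helpers to load at tool startup.
AUT_PROFILES = {
    "Billing":    ["harness_core", "billing_gui_map", "telnet_helpers"],
    "OrderEntry": ["harness_core", "orderentry_gui_map", "middleware_helpers"],
}

def startup(aut=None, prompt=input):
    """Load everything the tool needs to have full knowledge of the AUT.
    With one profile, load it silently; with several, ask the user."""
    if aut is None:
        if len(AUT_PROFILES) == 1:
            aut = next(iter(AUT_PROFILES))
        else:
            aut = prompt(f"Select AUT {sorted(AUT_PROFILES)}: ")
    return list(AUT_PROFILES[aut])  # components the real tool would load
```

Keeping the profile table in one place means adding a new AUT is a data change, not a script change.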
► Typical mix: 2:8:1 (2 SMEs; 8 Testers; 1 Developer/Guru)
--Such "Stratified Teams" are most efficient: the required level of abstraction is raised for the typical test writer, and coding challenges are placed on the developer/guru
► SMEs should be trained in the discipline of testing
--Hierarchies and types of testing, Testability of requirements, what verifications constitute necessary and sufficient testing, etc.
► SMEs and Testers together constitute the "Test Scripters"
--Combination of Record and Function Generator (F7) Calls to start
► Developers/Gurus constitute "Test Harness Support"
--Work with scripters to write functions to streamline scripting
--Supports/expands the framework infrastructure
► Test Harness Structure/Directory Tree
--Framework Components Across AUTs
► Scripting Templates and Coding Standards
--Testing Hierarchy, Test Templates, Function Templates
► Configuration Management Tools, Policies, Procedures
--PVCS, ClearCase, MS SourceSafe, etc.
► Mechanisms to address peculiarities of specific technologies
--Functions for increased class support for Active X, etc.
► Data repository or Database
► Before implementing a framework, the Test Organization should clarify its Strategic Goals and Objectives.
--Specific Risks Identified & Mitigated
--Measurement Requirements (Metrics)
--Timeliness Goals (Test Cycle Turnaround Time)
--Test Coverage Requirements (Code, Window/Object, Req'ts)
--Team Skill Mix Requirements (e.g.: 1:10 Technical/Business)
--Test Maintenance Requirements
► If properly framed, the framework's major requirements should follow the Strategic Goals and Objectives of the Test Organization
► Integrated framework for testing designed to support tight Test Execution Cycle goals with verifiable levels of test coverage
► Considers the "Paraclete Relationship": the tight bond between the test system and the system under test
--Low maintenance requirements on the test framework per change in the AUT
► Wrapped around a central relational database (ODBC)
► Works on several tools offered by major vendors
► Central point of maintenance for all test artifacts