Sunday, November 23, 2008

What to analyze in test results, and how to make recommendations?

It depends on what sort of testing we have conducted, what types of report we have for a particular test result, what types of statistical diagram result from it, and more.

For example, the scalability index could show the average transactions per second (TPS) throughput. A transaction distribution chart could be useful to audit a test and validate the result by identifying unusually long or short response times or trends.

So, in a general load testing scenario, we have to analyze transactions per second (TPS) as each test operates; analyze the range of response times for the duration of each test use case; analyze the timing values for each transaction on each test node (often captured in a set of XML-formatted files); analyze CPU, network, and memory utilization on each test node for each test use case; and analyze the results of multiple test scenarios.
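As a rough sketch, the per-transaction analysis above can be done in a few lines of Python. The transaction names, timestamps, and response times below are invented sample data, not the output of any real load tool.

```python
from statistics import mean

# (transaction_name, start_second, response_time_ms) -- invented sample data
samples = [
    ("login", 0, 120), ("login", 0, 135), ("search", 1, 340),
    ("search", 1, 310), ("login", 2, 125), ("search", 2, 900),
]

# Throughput: transactions completed per second of test time
duration_s = max(s[1] for s in samples) - min(s[1] for s in samples) + 1
tps = len(samples) / duration_s

# Response-time range per transaction, to spot unusually long or short times
by_txn = {}
for name, _, rt in samples:
    by_txn.setdefault(name, []).append(rt)

for name, times in by_txn.items():
    print(f"{name}: min={min(times)}ms max={max(times)}ms avg={mean(times):.0f}ms")
print(f"throughput: {tps:.1f} TPS")
```

Running this flags the 900 ms "search" sample as an outlier against its 310-340 ms peers, which is exactly the kind of unusually long response time the audit is looking for.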

How to design test/ test case/ test plan?

Before designing any test, test cases, or test plan, we first have to think about the core point: how can we maximize the quality of the software? Given the limitations of human resources, budget, time, risk, market, and future requirements, everything we do has to revolve around 'quality' and follow that central theme.

The testing strategy to be undertaken might be driven by different factors, chiefly resource, time, and cost. Looking at the nature of the development and the requirements of the project, the testing strategy could vary from flexible to strict. We need to analyze the risk, and we need to know how to handle overall test management and error management, and how to execute the overall test cases.

Based on the size of the project, we have to choose whether to go for manual testing or automated testing. We should not go for automation unless it is really required, but we cannot avoid manual testing. If we choose automated testing, we need to think about its environment setup, technical resource requirements, tool update requirements, and the cost incurred from these activities. It all depends on how we formulate the strategy.

Thursday, November 20, 2008

Software Development Life Cycle

Software Development Life Cycle (SDLC) - the process by which an application or software product is developed.

Phases of SDLC -
  1. Project Initiation (Clients identify their business requirements and provide them to the software company)
  2. Requirement Analysis and Gathering (Business Analysts gather the requirements from clients to form the requirement document and Use Case document)
  3. System Design (The application design and layout are developed and documented by the design team)
  4. System Development (The application is coded and developed by the team of developers; the developers also perform unit tests after completing development)
  5. System Testing (The application is tested for functionality, navigation, security, and performance by the test team)
  6. User Acceptance Testing (The application is tested by business users to make sure the system is acceptable and meets business requirements)
  7. System Implementation/ Production (The application will be deployed in the production environment)
  8. System Maintenance (Bug fixes, Change Requests or New Requirements)
(Reference: http://www.learningdom.com)

Wednesday, October 15, 2008

What should we know as an ETL Tester?

First of all, we should know what ETL stands for: Extract, Transform, and Load. When data is fed into the system at the front end, it goes to the database after batch processing. Then the data is transferred to the ETL system. Following the business transformation rules, the ETL developer/tester runs jobs in the ETL tool; to run these jobs they use the ETL vendor's proprietary language, or Unix. Jobs are created as the business needs them. Once we run the jobs, the data from the source file gets transferred to the target file. We might also need a lot of basic and advanced SQL to map and verify that data, and to see whether it was transferred in accordance with the business transformation rules. Normally, we have to map data from the source file to the stage table, from the stage table to the development database table, and from the development database table to the production database table.
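The source-to-target verification described above can be sketched with SQL. Here SQLite stands in for the real databases, and the table and column names (source_file, stage_table, id, amount) are made up for illustration; a real check would run against the actual source and stage schemas.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE source_file (id INTEGER, amount REAL)")
cur.execute("CREATE TABLE stage_table (id INTEGER, amount REAL)")
cur.executemany("INSERT INTO source_file VALUES (?, ?)", [(1, 10.0), (2, 20.0), (3, 30.0)])
# Simulate a load that silently dropped one record
cur.executemany("INSERT INTO stage_table VALUES (?, ?)", [(1, 10.0), (2, 20.0)])

# Count check: row totals should match after the load
src_count = cur.execute("SELECT COUNT(*) FROM source_file").fetchone()[0]
stg_count = cur.execute("SELECT COUNT(*) FROM stage_table").fetchone()[0]

# Minus (EXCEPT) check: ids present in source but missing from stage
missing = cur.execute(
    "SELECT id FROM source_file EXCEPT SELECT id FROM stage_table ORDER BY id"
).fetchall()
print(f"source={src_count} stage={stg_count} missing={missing}")
```

The count check and the minus query together catch the dropped row (id 3), which is the basic pattern behind most ETL data validation.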

  • Tool: Informatica / Business Objects' Data Integrator (or other tools in the market). Actually, ETL tools are proprietary.
  • Should know how to test mappings
  • Should be familiar with at least one programming language or scripting language. As ETL languages are proprietary, it would be helpful to know some language in advance. Keep in mind that each ETL's proprietary language has interface to call SQL. Most of the ETL tools have interfaces to use the bulk loading/unloading tools associated with the major databases.
  • If we have Unix Platform, we should have knowledge of UNIX shell scripting
  • We should have a knowledge of SQL. SQL can be used to extract data in most of the cases.
  • Strong database (Oracle/SQL Server/Sybase whatever, one of them) knowledge
  • Knowledge of working with large data sets
  • Know how to analyse data quality
  • Generally, an ETL tester is involved in validating the source database, the target database, data extraction, data transformation, and data loading
  • In addition, an ETL tester creates UNIX shell scripts to access data and move data from production to development
  • Further, they test the different transformations (rules) that move data to the staging area, the ODS, and finally the target database, using Informatica or Data Integrator Designer.
  • Should know how and where to use DDL and DML Statements
  • Should be able to test ETL code

Although ETL tools are common in the market today, many data warehouse projects (especially in the past) did not use an external ETL tool. In those cases, a load utility is used to extract the data into flat files and populate the tables. So, in the absence of an ETL tool, the job of transforming and loading the staging and production tables is still possible using Transact-SQL, PL/SQL, or DB2 SQL.
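A minimal sketch of that tool-less approach, with SQLite standing in for Transact-SQL or PL/SQL; the staging/production tables and the transformation rule (uppercase the name, convert cents to dollars) are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE staging (cust_name TEXT, amount_cents INTEGER)")
cur.execute("CREATE TABLE production (cust_name TEXT, amount_dollars REAL)")
cur.executemany("INSERT INTO staging VALUES (?, ?)", [("alice", 1250), ("bob", 300)])

# Transform and load in a single INSERT ... SELECT: pure SQL, no ETL tool
cur.execute("""
    INSERT INTO production (cust_name, amount_dollars)
    SELECT UPPER(cust_name), amount_cents / 100.0 FROM staging
""")
rows = cur.execute("SELECT * FROM production ORDER BY cust_name").fetchall()
print(rows)  # [('ALICE', 12.5), ('BOB', 3.0)]
```

The INSERT ... SELECT statement is the core of the pattern: the transformation rule lives in the SELECT list, and the load is the INSERT, so one statement moves staging data into the production table.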

Thursday, October 2, 2008

Unix Platform

"vi is a visual editor (default) in Unix. Because of this we will be able to see the document we are editing. We can edit files really quickly on vi editor, as it is extremely economical with the keystrokes. Due to its different modes for inserting and issuing commands, it is much faster than most non-mode based editors." (ref: roxanne.org). So, the vi editor is basically used to create a new file or edit an existing file.

To open a vi editor, first we have to go to the Unix shell (black screen).

Commands:
vi filename (press Enter) - to open the vi editor with the given filename; we can see the filename at the bottom of the vi editor screen. When we first open a new file, we can observe blank lines starting with a tilde sign (~).
At the bottom of the screen, type :q! (press Enter) - to quit and come out of the vi editor without saving.
Before typing in the vi editor, we should know that there are 2 modes in vi, i.e. Insert Mode and Command Mode. In Insert Mode, typed input is recognized only as text and goes into the file that is open. In Command Mode, whatever we type is interpreted as a command.
When we first open the file, it will be in command mode. To go to insert mode we have to type 'i' or 'a'; to go back to command mode we have to hit 'Esc'.
To save (:w) anything we typed, we have to go back to command mode (hit 'Esc') before executing it. To save and exit at the same time, type ':wq' and press Enter.
To ADD a NEW LINE - type 'o', and it will put your cursor at the beginning of a new line below the current one. Uppercase 'O' will add a new line above the line where the cursor is blinking.
To navigate between lines - 'j' to go down, 'k' to go up, 'l' to go right, 'h' to go left
'^' will take your cursor to the beginning of the line. [We Should Be in Command Mode]
x - will delete the character under the cursor [We Should Be in Command Mode]
dw - will delete from the cursor to the end of the word; 'D' (Shift + d) deletes to the end of the line [We Should Be in Command Mode]
dd - will delete the entire line [We Should Be in Command Mode]
u - will undo the last change [We Should Be in Command Mode]
2dd - will delete 2 lines [We Should Be in Command Mode]
In vi, yanking is a kind of copy and paste. To do so, we have to be in command mode. We put the cursor on the first line we want to copy and type the number of lines followed by 'yy'; e.g. to copy 3 lines starting from the cursor's line, type 3yy, then go to the line after which we want to paste and type 'p'.

To insert a file - :r (filename)
To search forward - / e.g. if the word 'box' appears a number of times in the file and we want to search for 'box', at the bottom of the screen we type /box
To search backward - ? e.g. ?box
To repeat the last search - // (forward) or ?? (backward)
To find the next occurrence - n
To find the previous occurrence (the same search, in the opposite direction) - N
To undo the last change - u
To undo the changes on the current line - U
To reload the current document without saving any changes - :e!
To put back the nth-most-recent deletion - "np (e.g. "2p for the second-most-recent)
To go to the end of the file (say when we are at the beginning of the file) - just press colon : then $ and Enter
To go to the end of the line - just type $
To get back to the beginning of line - just type caret '^'
To replace a word in a line with another word - say the word 'budda', which is in the 3rd line of the file, has to be replaced by the word 'budhi'; then type :3s/budda/budhi and press ENTER (colon, s for substitute, forward slashes separating the word to be replaced from its replacement). If we want to do the same thing for the whole file rather than one particular line, we use % instead of the line number (3) and add /g at the end so that it acts globally. So the command would be :%s/budda/budhi/g and ENTER.
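As a rough analog (Python's re module, not vi itself), the difference between substituting on one line and substituting globally looks like this:

```python
import re

text = ["one budda here", "no match here", "budda and budda again"]

# :3s/budda/budhi  -> first occurrence on line 3 only (vi lines are 1-indexed)
line3 = re.sub("budda", "budhi", text[2], count=1)

# :%s/budda/budhi/g -> every occurrence on every line
all_lines = [re.sub("budda", "budhi", line) for line in text]

print(line3)      # budhi and budda again
print(all_lines)
```

Without the /g flag only the first match on the line changes, which is why the second 'budda' on line 3 survives in the single-line substitution.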

Thursday, September 18, 2008

For Quality Analyst with DATA ANALYST/MODELER; It is good to know the following...

  1. Conduct business and data analysis, and implement physical designs and ETL mappings for Data Warehouse and Business Intelligence solutions,
  2. Familiarity with Oracle database technologies,
  3. SQL and PL/SQL skills,
  4. Able to participate in analysis sessions with the user community to capture data requirements clearly, completely and correctly, and represent them in a formal and visual way through the data models,
  5. Understand Business Intelligence solutions,
  6. Strong knowledge of relational and multi-dimensional database architectures, database lifecycle design and application development,
  7. Data analysis/profiling and reverse engineering of data,
  8. Developing detailed mapping documents that will provide the reporting and data warehouse team with source to target data mapping,
  9. Create test strategy and test cases that will ensure the seamless implementation of the data warehouse,
  10. Conduct string and system testing according to developed test scenarios,
  11. Work with business users to define data requirements,
  12. Ability to develop, integrate, manage and promote physical data models and data model standards that source data from multiple origins (internal data sources, external data sources, third party data, etc.),
  13. Ability to conduct data analysis and data profiling,
  14. Read / parse PL/SQL procedures and packages,
  15. Ability to develop detailed mapping documents that will provide the data warehouse reporting and ETL teams with source to target data mapping which will include logical names, physical names, data types, corporate meta-data definitions, and translation rules,
  16. Ability to define, develop, and implement joint data testing strategies with the business and IT QA,
  17. Ability to define, develop and implement test cases, based on performed data analysis and data mappings,
  18. Ability to execute test cases to ensure accurate and appropriate use of data during the project system, integration test and UAT.

Wednesday, August 6, 2008

Testing Terms and Terminology

(Source: Anonymous)
A

Acceptance Testing
Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.

Accessibility Testing
Verifying that a product is accessible to people with disabilities (deaf, blind, mentally disabled, etc.).

Ad Hoc Testing

Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it. Also, we can call it Monkey Testing.

Agile Testing

Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.

Alpha Testing
Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Application Binary Interface (ABI)
A specification defining requirements for portability of applications in binary forms across different system platforms and environments.

Application Programming Interface (API)
A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

Automated Software Quality (ASQ)
The use of software tools, such as automated testing tools, to improve software quality.

Automated Testing
Testing employing software tools that execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

B

Backus-Naur Form
A metalanguage used to formally describe the syntax of a language.

Basic Block
A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing
A white box test case design technique that uses the algorithmic flow of the program to design tests.

Basis Set
The set of tests derived using basis path testing.

Baseline
The point at which some deliverable produced during the software engineering process is put under formal change control.

Beta Testing
Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

Binary Portability Testing
Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Black Box Testing: Functional Testing
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

  • Testing the features and operational behavior of a product to ensure they correspond to its specifications.
  • Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Bottom Up Testing
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested.

For further terms and terminology - Click Here (Thankful to ISTQB)

SQL

SQL (Structured Query Language) is simply a language that is used to access a database. By accessing the database, we can also manipulate the data (information) in it.

In the market we have many types of databases, such as Oracle, DB2, Informix, Microsoft SQL Server, Sybase, MS Access, and other database systems.

SQL (Structured Query Language) is the one language that we use to access and manipulate data in all of these database systems.

Tuesday, August 5, 2008

Test Director Overview

Application Testing is a complex process. Test Director helps organize and manage all phases of the testing process that encompasses the following steps:


  • Organize and manage testing requirements [SPECIFYING REQUIREMENTS]; where we define testing scope, create requirements, detail requirements, and analyze requirements.
  • Planning test; where we define test strategy, define test subjects, define tests, create requirement coverage, design test steps, automate tests, and analyze test plan.
  • Executing test [RUNNING TESTS]; where we create test sets, schedule runs, run tests, and analyze test results.
  • Tracking Defects, where we find defects, review new defects, repair open defects, test new build, and analyze defect data.

Q. How do you create tests from requirements in TD?

Once you have created the requirements tree, you use the requirement as a basis for defining the tests in your test plan tree and running tests in a test set.

There are 2 ways: (1) Convert requirement to tests, and (2) Generate a Test from requirements.

(1) Tools > Convert to Test > Convert All/ Convert Selected

(2) Right Click any requirement on Tree View ==> Generate Test.

Q. What are Test Director Projects?

TD projects are repositories that contain requirements, tests, testsets, test runs, defects, project documentation and customization information.

Q. What database applications are used to store and manage TD information?

MS Access, Sybase, MS SQL, Oracle

Q. What are the 4 modules of Test Director?

  1. Requirements
  2. Test Plan
  3. Test Lab
  4. Defects

Q. How do you clear 'History' in TD?

  1. Click TOOLS in the upper right corner --> Clear History
  2. Select the entity whose history you want to delete
  3. Select the field whose history you want to delete
  4. In 'Up to ...date', select a date.

SCHEDULING, REQUIREMENT, TEST PLANNING

  1. How do you schedule a test in Test Director?
  2. How do you specify requirements?
  3. How do you plan the tests?
  4. How do you run the tests?
  5. How do you do defect tracking?
  6. How do you filter records in TD?
  7. What information you put in requirements?
  8. How do you view requirement history?
  9. How do you view associated defects for a test requirement?
  10. How do you attach or open WR script from TD?
  11. How do you E-mail test?
  12. What is Test Management Process in Test Director?
  13. What is a Test Grid?
  14. What data column is displayed by Test Grid?
  15. What does Test Planning Involve?
  16. What is a Test Plan Tree?
  17. How do you associate defect with a Test?
  18. What is the relationship of requirement and tests?
  19. How do you design test steps in Test Plan module?
  20. How do you generate an automated test template?
  21. How do you add system test to a test plan tree?

TEST EXECUTION

  1. What are the stages of Test Execution?
  2. What are the elements of Execution Flow Diagram?
  3. How do you set the Test Set on Failure Rules?
  4. How do you add a Test Set in Test Lab Module?
  5. What is scheduling test runs?
  6. How do you run an automated test in TD?
  7. What should be considered when running tests on a remote host?

DEFECT TRACKING

  1. How do you track Defects?
  2. How do you add defects?
  3. What information do you include in Defect Reporting/ Tracking?
  4. What are the status of defects?
  5. How do you find matching defects?
  6. How do you update defects?

TEST DIRECTOR ANALYSIS

  1. What reports are available in Requirement Module?
  2. What reports are available in Test Plan Module?
  3. What reports are available in TEST Lab Module?
  4. What reports are available in Defects Module?
  5. How do you create a Report?
  6. Tell about generating graphs in TD.
  7. What are the different types of Graph in Requirement Module?
  8. What are the different types of Graph in Test Plan Module?
  9. What are the different types of Graph in Test Lab Module?
  10. What are the different types of Graph in Defects Module?
  11. What is Project Documents in TD?
  12. How do you launch Document Generator?

Answers


How do you schedule a test in Test Director?

Go to TEST LAB --> Execution Flow --> Right Click on any Test --> Test Run Schedule --> Time Dependency --> Run at any time /or/ Run at a specified time (date, e.g. 5/8/2005)


How do you specify requirements?

Defining Testing Scope ==> Create Requirements ==> Detail Requirements ==> Analyze Requirements


How do you plan the tests?

Create a Test Plan based on Testing Requirements

Define Testing Strategy (How? What?) ==> Define Test Subjects ==> Define Tests ==> Create Requirement Coverage (Link each test with a requirement) ==> Design Test Steps ==> Automate Tests ==> Analyze Test Plan


How do you run the tests?

Create Test Sets ==> Schedule Runs ==> Run Tests Manually/or/Automatedly ==> Analyze Test Results


How do you do defect tracking?

Add Defects ==> Review New Defects ==> Repair Open Defects ==> Test New Build ==> Analyze Defect Data


How do you filter records in TD?

  1. Click Set Filter/Sort button => Filter dialog box opens and displays filter tab
  2. Set a filter condition for a specific column
  3. Define the filter condition
  4. Ok [To add cross filters click Advanced Link]


What information you put in requirements?

Attachment, Author, Cover Status (Pass, Fail), Creation time/date, Modified, Name (Requirement Name), Priority, Product, Requirement ID, Reviewed, Type (hardware/software)


How do you view requirement history?

  1. Select a Requirement in the tree
  2. Click the History Tab


How do you view associated defects for a test requirement?

Select View >>> Associated Defect

OR, Right click a requirement >>> Associated Defect


How do you attach or open WR script from TD?

For attaching -> Go to TEST PLAN --> Attach ==> Open the script and the script gets attached. [You can attach file, URL, snapshot, and system info]

For opening WR script ==> From tree ==> Go to expand and find a WR script ==> copy from ==> browse to the script and open

Now, you can run the script from TD ==> by LAUNCH button


How do you E-mail test?

Right click any test on TREE VIEW ==> MAIL TESTS in TEST PLAN Module


What is Test Management Process in Test Director?

SPECIFY REQUIREMENTS (Analyze your application and determine your testing requirements) ==> PLAN TESTS (Create a test plan based on testing requirements) ==> EXECUTE TESTS (Create test sets and perform test runs) ==> TRACK DEFECTS (Report defects detected in your application and track how repairs are progressing)


What is a Test Grid?

A Test Grid displays all the tests in a TD Project. Each row displays a separate test record. Each column represents a separate data item.

To view the test grid, choose VIEW--> TEST GRID

Test Grid contains

  1. Test Grid Toolbar (Copy, Paste, Set Filter, Sort, Select Column, Find etc.)
  2. Grid Filter
  3. Description tab
  4. History tab


What data column is displayed by Test Grid?

Creation Date, Description, Designer, Estimated Development Time, Execution Status, Modified, Path, Status, Steps, Subject, Template, Test Name, Type.


What does Test Planning Involve? (What can you do in Test Plan Module?)

  1. Creating a Test Plan Tree
  2. Adding tests to a test plan tree
  3. Viewing the test plan tree
  4. Associating defects with a test
  5. Mailing test
  6. Finding tests in the test tree
  7. Sorting
  8. Modifying


What is a Test Plan Tree?

The typical application is too large to test as a whole. The Test Plan module enables you to divide your application according to functionality. You divide your application into units, or subjects, by creating a plan tree.


How do you associate defect with a Test?

View >>> Associate Defects

OR, right click the test and choose Associated Defects


What is the relationship of requirement and tests ?

It could be:

  1. 1 to 1
  2. 1 to many
  3. many to 1

You can do it both ways Requirement ==> to ==> Tests; OR, Test ==>to==> Requirements


How do you design test steps in Test Plan module?

You add steps to a test using Design Step Editor. To create a test step;

  1. Choose a test and click DESIGN STEPS tab
  2. Click NEW STEP button or Right Click DESIGN STEP TAB and choose NEW STEP
  3. Type a description and expected results
  4. To insert a PARAMETER, click INSERT PARAMETER button
  5. Click CLOSE to close Design Step Editor


How do you generate an automated test template?

  1. In the test plan tree, choose the manual test that you want to automate
  2. Click the DESIGN STEPS tab and click the GENERATE SCRIPT button
  3. Choose an automated test type to convert to: WR-automated, LR-scenario, or QuickTest-test.
  4. Click the TEST SCRIPT tab to view the test template
  5. Click LAUNCH button


How do you add system test to a test plan tree?

  1. Choose Subject Folder in the Test Plan Tree
  2. New Test Button OR Planning > NEW TEST
  3. In the Test Type: Select SYSTEM TEST
  4. In the TEST NAME Box, type a name of the test and OK

How does it look? Screen Shot View! (Source:Prakash Nepal's www.qaquestions.com)

Ref -HP/Mercury Interactive

Friday, July 25, 2008

How to facilitate all environmental changes to test environments

It includes build migrations, server bounces, database structure changes, and configuration changes.

How to manage/facilitate test platform build

Code deployment in testing environments

Source code control is used in development environments in which several programmers work with the same piece of code. Applications and databases that are created in a development environment must be deployed first in a test environment and then in a production environment (© Dejan Sunderic).

Source code control (or version control) is typically introduced in development environments in which more than one developer needs to work with the same piece of code.

It allows development organizations and their members to

  • Manage code centrally
  • Manage multiple versions of the same code
  • Track change history
  • Compare versions
  • Control whether developers can modify the same piece of code at the same time
  • Synchronize deployment of all modifications needed to implement a single feature or bug fix

While you develop your database (and application) in a development environment, you must deploy the database first in a test environment and then in a production environment. Initially, you need to deploy the complete database, but later you will have to update the database with design changes and hotfixes.

How to provide environment support, RCA Support Coordination / Change Control

What is a batch job?

A batch job is a program that is assigned to the computer to run without further user interaction.

For example:
- a printing request or an analysis of a Web site log
- in our daily life, when we turn on our PCs we can observe many pre-scheduled programs that run: antivirus, anti-spyware, and adware-removal programs, etc.

In larger commercial computers or servers, batch jobs are usually initiated by a system user. Some are defined to run automatically at a certain time.

Corporations use batch jobs to automate tasks that they need to perform on a regular basis. Batch jobs usually run during off-peak hours, when systems are not being used for online processing. (For example, jobs can run to update files, create printed reports, or purge files.)

In some computer systems, batch jobs are said to run in the background and interactive programs run in the foreground. In general, interactive programs are given priority over batch programs, which run during the time intervals when the interactive programs are waiting for user requests.

ref © searchdatacenter.com
ref © ibm

If QA is Assigned to Manage/Perform Code Migration

When QA has to work as a software configuration expert, he/she may need to properly migrate code through all phases of the software development lifecycle (development, QA, UAT, production). He/she needs to use whatever tools the organization has established for migration purposes; tools such as webMethods, ClearCase, CVS, and the MS Office tools are examples.

For this, they may also need some proficiency with UNIX tar files to facilitate the transfer, archiving, and unpacking of data. So it is necessary to understand the basic UNIX commands as well.

Other relevant experience could be with version control of software and release and change management experience in a software development environment.

Friday, July 18, 2008

Key

Primary Key:
The unique identifier for a row of data in a table is the primary key. This can be a single column or a combination of more than one column, in which case it is known as a composite key.

Secondary or Foreign Key:
A key column in a table that identifies records in a different table is called a secondary or foreign key.

Example:
Let us say that you have an ‘order’ table and an ‘item’ table. In the ‘item’ table, let us say that your primary key is the combination of ‘order_id’ and ‘item_number’. In this case, the ‘order_id’ which is the primary key of the ‘order’ table is a foreign key in the ‘item’ table.
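The order/item example can be sketched in SQLite. The extra columns (customer, product) are hypothetical, but the composite primary key and the foreign key follow the description above.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
con.execute('CREATE TABLE "order" (order_id INTEGER PRIMARY KEY, customer TEXT)')
con.execute("""
    CREATE TABLE item (
        order_id INTEGER,
        item_number INTEGER,
        product TEXT,
        PRIMARY KEY (order_id, item_number),           -- composite key
        FOREIGN KEY (order_id) REFERENCES "order"(order_id)
    )
""")
con.execute('INSERT INTO "order" VALUES (?, ?)', (1, "Smith"))
con.execute("INSERT INTO item VALUES (?, ?, ?)", (1, 1, "widget"))

# An item row pointing at a nonexistent order violates the foreign key
try:
    con.execute("INSERT INTO item VALUES (?, ?, ?)", (99, 1, "gadget"))
    fk_result = "accepted"
except sqlite3.IntegrityError:
    fk_result = "rejected: no such order_id"
print(fk_result)
```

The rejected insert shows the point of the foreign key: every item must belong to an existing order, while the (order_id, item_number) pair uniquely identifies each item row.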

How To Join Tables? Learn From These Simple Instances Below

We have a couple of Tables below. See what would happen if we do inner join and outer join. See the result...

A.   Table 1    Table 2
        1          1
        2          2
        3          4
        5          5
        8          6
        9          7
       11          8
       12         10
       13         11


B.   Table 1       Table 2
     SSN           SSN
     Last_Name     Apt #
     First_Name    Street
     Age           City
                   Zip


Inner Join:
Joining two tables on an equality condition is known as an Inner Join. Only the matching records from both tables get displayed.

In an Inner Join, only these numbers will be displayed: 1, 2, 5, 8, 11.

Inner Join: SQL> select a.SSN, a.Fname, b.city, b.zip from Table1 a, Table2 b
where a.SSN = b.SSN;

Outer Join:
Joining two tables to get the matching records from both, plus the non-matching records from one of them, is known as an Outer Join.

Left Outer Join:
In this join 1, 2, 3, 5, 8, 9, 11, 12, 13 will be displayed.

Right Outer Join:
In this join 1, 2, 4, 5, 6, 7, 8, 10, 11 will be displayed.

Outer Join:
SQL> select a.SSN, a.Fname, b.city, b.zip from Table1 a, Table2 b
where a.SSN (+) = b.SSN;
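The number lists above can be verified by loading the keys of Table 1 and Table 2 into SQLite and running an inner join and a left outer join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t1 (n INTEGER)")
cur.execute("CREATE TABLE t2 (n INTEGER)")
cur.executemany("INSERT INTO t1 VALUES (?)", [(n,) for n in [1, 2, 3, 5, 8, 9, 11, 12, 13]])
cur.executemany("INSERT INTO t2 VALUES (?)", [(n,) for n in [1, 2, 4, 5, 6, 7, 8, 10, 11]])

# Inner join: only keys present in both tables
inner = [r[0] for r in cur.execute(
    "SELECT t1.n FROM t1 INNER JOIN t2 ON t1.n = t2.n ORDER BY t1.n")]

# Left outer join: every key of Table 1, matched or not
left = [r[0] for r in cur.execute(
    "SELECT t1.n FROM t1 LEFT OUTER JOIN t2 ON t1.n = t2.n ORDER BY t1.n")]

print(inner)  # [1, 2, 5, 8, 11]
print(left)   # [1, 2, 3, 5, 8, 9, 11, 12, 13]
```

The inner join reproduces the list given for example A, and the left outer join returns every Table 1 key, matching the Left Outer Join list above.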

Mostly Used Simple Unix Commands For Testers

  1. To create a file ==> cat > filename
  2. To read a file ==> cat filename
  3. To append to an existing file ==> cat >> filename
  4. To copy a file ==> cp file1 file2
  5. To rename a file ==> mv file1 file2
  6. List your files ==> ls -a
  7. List all files (long listing) ==> ls -al
  8. List directories with contents ==> ls -R
  9. List directories without contents ==> ls -aFd
  10. To remove a directory ==> rmdir directoryname
  11. To remove a non-empty directory ==> rm -r directoryname

Traceability Matrix

A traceability matrix is a two-dimensional matrix that maps test cases to requirements.

Requirement #1  ====> covered by ==> Test Cases #1, #2
Requirement #2  ====> covered by ==> Test Cases #3, #4, #5
Requirement #3  ====> covered by ==> (none)
Requirement #4  ====> covered by ==> Test Cases #6, #7, #8, #9
Requirement #5  ====> covered by ==> Test Cases #10
Requirement #6  ====> covered by ==> (none)
Requirement #7  ====> covered by ==> Test Cases #11, #12
Requirement #8  ====> covered by ==> Test Cases #13, #14
Requirement #9  ====> covered by ==> (none)
Requirement #10 ====> covered by ==> Test Cases #15

A traceability matrix follows a one-to-many relationship: one requirement may be covered by many test cases. Requirements with no test cases against them represent coverage gaps.
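The matrix above can be sketched as a simple one-to-many mapping. In this hedged Python illustration (requirement and test-case labels are invented, mirroring the example), an empty list flags an uncovered requirement:

```python
# Sketch of a traceability matrix as a one-to-many mapping from
# requirements to test cases. Labels mirror the example above.
coverage = {
    "Req-1": ["TC-1", "TC-2"],
    "Req-2": ["TC-3", "TC-4", "TC-5"],
    "Req-3": [],            # no covering test case -- a coverage gap
    "Req-4": ["TC-6", "TC-7", "TC-8", "TC-9"],
    "Req-5": ["TC-10"],
    "Req-6": [],            # another gap
}

# Requirements with an empty test-case list are untested.
gaps = [req for req, cases in coverage.items() if not cases]
print(gaps)  # ['Req-3', 'Req-6']
```

In practice a tool such as Test Director maintains this mapping, but the underlying check for uncovered requirements is exactly this filter.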

Difference Between QA & QC


Quality Assurance


QA is oriented to Prevention.

QA activities focus on the development process itself. The process is evaluated, and improvements and changes to it are suggested. Audits are an example of a QA activity: they look at whether and how the process is being followed, and the end result may be suggested improvements or better compliance with the process. QA is process oriented. QA makes sure you are doing the right things, the right way. In a software environment, "QA" does not assure quality; rather, it assures that the process is being followed.

Quality Control

QC is oriented to Detection.

QC activities are work-product oriented. They evaluate the product, identify weaknesses, and suggest improvements; the product may be changed as a direct result of these activities. Software testing activities are examples of QC, since they usually result in changes to the product, and QC activities are often the starting point for QA activities. QC is product oriented, and software testing therefore falls under QC. QC makes sure that the results of what you have done are as expected. "QC" does not control quality; rather, it measures quality by verifying that what was implemented matches the requirements.

Full Testing Life Cycle/Test Process/Job Of A Tester

  1. Get Business Requirements/Functional Specification/Master Documents
  2. Prepare Test Plan
  3. Write Test Cases (Also, it may include USE CASES from the client)
  4. Conduct Testing (Manual and/or Automated) (Manual - various types of testing:
    Unit, Integration, System, Front End, Back End, Stress, Load, Performance, Volume, Security, etc.) (Automated - automated tools: WinRunner, LoadRunner, QTP, Rational, Silk, etc.)
  5. Find Bugs
  6. Bug Reporting - using Test Director, PVCS, a locally developed tool, or Excel sheets/Access tables
  7. Bugs to be fixed - by developers or by white box testers
  8. Regression Testing (Manual or Automated) [Again, this step may lead a tester to write more test cases or rewrite previous test cases]

Attributes Of A Good QA Engineer

A good QA Engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is valuable. Previous software development experience can help: it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess the high-risk areas of the application on which to focus testing effort when time is limited. A good QA Engineer also understands the entire software development process and how it fits into the business approach and goals of the organization. Excellent communication skills and the ability to understand various sides of an issue are important. In organizations in the early stages of implementing QA processes, patience is especially needed. An ability to find problems, as well as to see 'what's missing', is important for inspections and reviews. A person with the above abilities can be a good QA Engineer.

Wednesday, July 16, 2008

Software Test Plan

PROJECT INFORMATION
Application: Online Banking Account
Document Owner: The primary contact for questions regarding this document is: X
Author: S P, Sr. QA Analyst
Project Name: Online Account Management
Phone: (999) 854- 3015
Email: Sp@CFD.com

======================

Privacy Information: This document may contain sensitive information related to projects within C F B. This information should not be given to any person other than those involved in the Online Account Management Application, and it cannot be used for purposes other than those prescribed by the project guidelines. Any unauthorized use of this document may result in legal action.
Copyright © 2005

Revision History of the Test Plan

INTRODUCTION

APPLICATION OVERVIEW
The project involved developing the B's module for an online mortgage application, which allows customers to apply for a mortgage, check the status of their application, and use a mortgage calculator and many other mortgage tools. It gives customers interactive tools to help them decide whether to buy or rent a house, whether to refinance, and so on, taking interest rates, property rates, etc. into account, and presents graphical and tabular analyses of their inputs. It provides users with secure access and helps increase the efficiency and quality of the loan decisions taken by the company.

PURPOSE AND SCOPE OF THE Q / A TEST PLAN
The purpose of the test plan is to describe the scope, objective, focus and approach of the testing application on online access to their accounts. This document serves as a guideline for the methodology to be approached for the testing process either manual or automated, generating test cases, bug documentation and bug reporting. This test plan defines the approach to be adopted by the QA team with the responsibilities of each group outlined in the chapters to come.


ROLES / RESPONSIBILITIES OVERVIEW
The chart below lists the tasks associated with this project, outlining the roles and responsibilities of each participant. One or more participants are responsible for completing each task, and each task has exactly one participant who is held accountable. Some participants are consulted before a task is completed, and some are informed after it is completed. Assignments are made by the Project Manager and carried out under the QA Manager.

RESPONSIBILITIES
PROJECT LEAD CONTACTS
QUALITY ASSURANCE ENGINEERS

DOCUMENTATION
Listed below are pertinent documents for the project. It also provides the contact information of the person in charge of the document.

TEST ENVIRONMENT
HARDWARE
The Quality Assurance group will have control of one application server and of client interfaces separate from any used by non-test members of the project team. The QA group will be provided with six workstations, i.e., six computers running Windows 2000. The hardware requirements are listed below:

· Operating System: Windows 2000
· CPU: 2200 MHz (Pentium 4)
· RAM: 512 MB
· Disk Storage: 40 GB
· CD / DVD Drive: 56X Max
· Multimedia: 32-bit sound card with speakers
· Video Adaptor: 32 MB AGP Graphics
· Monitor: 17" color SVGA
· Removable Media: 1.44 MB 3.5" floppy, Zip 100 Drive
· Network Access: 10Base-T Ethernet or v.90 modem
· NIC: 3Com Ethernet 10/100


SOFTWARE
The Quality Assurance group will use the following software to test the project:
· LoadRunner 7.5
· WinRunner 7.0
· Test Director 6.02
· QTP 6.5

DATABASE
The Quality Assurance group will have control of one database separate from any used by non-test members of the project team. The name of the database will be Mortgage. This database will be accessible only by the authorized QA personnel.

OPERATING SYSTEM

BROWSERS

TESTING STRATEGY / APPROACH
Functional Testing
The purpose of functionality testing is to reveal defects related to the project components functionality. It aims to assist in ensuring that the components conform to the functional requirements. It determines whether the window or application functions correctly or not based on its specifications and relevant standard documentation. Both positive and negative testing will be performed to verify that the application responds correctly. The GUI testing will be performed using WinRunner 7.0.

Regression Testing
Regression testing means repeatedly testing the same application after fixes or modifications have been made to the application or its environment. It is difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated tools like WinRunner can be especially useful for this type of testing.

Acceptance Testing
The purpose of acceptance testing is to provide a minimal set of tests that ensure conformance to milestone acceptance criteria. The approach to acceptance testing is outlined below:
· The minimal functional requirement of each query is outlined.
· The minimal functionality of GUI controls (objects) is specified.
· Scripts in WinRunner are to be generated to cover the minimal functional and performance requirements.

Stress Testing
The purpose of stress testing is to reveal defects that arise in the system in extreme/non-normal operating environments such as low system resources, low network bandwidth and long duration.

Security Testing
This test confirms that unauthorized access to the application is denied. The testing will be done using the concepts of data-driven testing. Here the Database Administrator will provide a view with the usernames and passwords. Tests are to be generated to check which functionalities are available to a user, depending on the level of permission granted by the application.
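The permission-based checks described here can be sketched as follows. The roles and functionality names below are hypothetical, not taken from the application; the point is only that each test derives the expected outcome from the permission level granted:

```python
# Hypothetical sketch of the data-driven security check: which
# functionalities a user may reach depends on the permission level granted.
PERMISSIONS = {
    "admin":    {"view_account", "edit_account", "approve_loan"},
    "customer": {"view_account"},
}

def allowed(role, functionality):
    """Return True only when the role's permission set grants access."""
    return functionality in PERMISSIONS.get(role, set())

print(allowed("admin", "approve_loan"))     # True
print(allowed("customer", "approve_loan"))  # False
print(allowed("guest", "view_account"))     # False -- unknown roles denied
```

The data-driven part is that the PERMISSIONS table would be fed from the DBA's view rather than hard-coded, so the same test script covers every permission level.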

Back-End Testing
In this sort of testing we ensure that data does not get corrupted when moving from front end to back end or vice versa. We will use WinRunner to perform this test, though this can be done manually.

Usability Testing
This testing checks user-friendliness. Users who are not software programmers or testers will perform the tests. A listing of the test criteria is discussed later.

Performance Testing
Performance Testing is performed to ensure that the system provides acceptable response times (which should not exceed 5 seconds). The outline for performance testing is given below:
· The performance measurements to be carried out are memory, processor utilization, and execution/transaction time.
· Performance testing is to be done using LoadRunner.
· LoadRunner's online monitoring is to be used for extensive analysis.
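The 5-second threshold can be checked with a small timing harness. This is a hedged sketch only: fetch_account_page is a placeholder for the real request to the application under test, and in practice LoadRunner collects these timings across many virtual users.

```python
# Hypothetical timing harness for the 5-second response-time criterion.
# fetch_account_page stands in for the real request to the application.
import time

RESPONSE_TIME_LIMIT = 5.0  # seconds, per the plan's threshold

def fetch_account_page():
    # Placeholder: the real test would issue an HTTP request here.
    time.sleep(0.01)
    return "OK"

start = time.perf_counter()
result = fetch_account_page()
elapsed = time.perf_counter() - start

passed = (result == "OK") and (elapsed <= RESPONSE_TIME_LIMIT)
print(f"response time {elapsed:.3f}s -> {'PASS' if passed else 'FAIL'}")
```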

QUALITY ASSURANCE TEST PROCESS

COVERAGE
The QA testing effort is designed to test the completeness, consistency and accuracy of the product and application being tested. The Q / A test Plan ensures not only that the program functions as specified but that all internal and interface logic is tested.

REQUIREMENTS COVERAGE
This section identifies all program requirements that are covered in the test cases. The following types are included:
· All input editing, validation, and processing
· All outputs
· All decision points in the program
· Each entry and exit point
· All error messages
· All limits or constraints
· Data flow

TEST CASE IDENTIFIER NAMING CONVENTION(S)
Each test case will have a unique identifier.


TEST DATA REQUIREMENTS
The test data will come from the users performing the testing and from the computers and software under test.

Name of the Test Data: Mortgage
TEST DATA GENERATION
Test data from users and hardware/software will provide:
· Nominal and expected input data values to test the effect of normal data.
· Default and null data to test default processing.
· Critical values to validate calculations.
· Maximum and minimum values to test range processing.
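These categories can be generated mechanically for a numeric field. The sketch below uses a hypothetical 0-120 range with a default of 18 (e.g. an Age field); the just-out-of-range values support the corresponding negative tests:

```python
# Sketch of the test-data categories listed above for one numeric field.
# The range (0-120) and default (18) are hypothetical, e.g. an Age field.
def generate_test_values(minimum, maximum, nominal, default=None):
    """Group nominal, default/null, boundary, and out-of-range values."""
    return {
        "nominal": [nominal],                        # normal data
        "default_null": [default, None],             # default processing
        "boundaries": [minimum, maximum],            # range processing
        "out_of_range": [minimum - 1, maximum + 1],  # negative tests
    }

values = generate_test_values(minimum=0, maximum=120, nominal=35, default=18)
print(values["boundaries"])    # [0, 120]
print(values["out_of_range"])  # [-1, 121]
```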

TEST REPORTING

TEST REPORTING DOCUMENTS
Test reporting is done using Test Director and includes documents such as the Test Matrix, Test Cases, Test Defect Log, Test Incident/Bug Reports, and a Change Control Log.

TEST DATA RESULTS AND RETENTION
All test reporting will be logged to the Matrix, Test Cases, Test Defect Logs, Test Incident/Bug Reports and a Change Control Log when and where appropriate.

RELEASE NOTES
Release Notes may be provided to the Configuration Management Representative or Lead for the project. The Release Notes may or may not include the following for each module/program turnover:
· Date of scheduled iteration or final release
· Code freeze date
· Versions of software, libraries, hardware, database, etc. used
· Q / A Test Results
· Completed Matrix
· Build (make file) instructions
· Installation/Setup instructions
· Dependencies (other executables, database, browser, test drivers, etc.)
· Scheduled functionality for this iteration or release
· New versions of shared programs
· Change requests
· Open Problems/Problem bug reports
· Known limitations


QA TEST ENTRANCE / EXIT CRITERIA
ENTRANCE CRITERIA
· The Entrance Criteria specified by the system test controller (QA Manager) should be fulfilled before system test can commence. In the event that any criterion has not been achieved, the system test may commence if the Business Team and Test Controller are in full agreement that the risk is manageable.
· All developed code must be unit tested. Unit and Link Testing must be completed and signed off by the development team.
· The Business Analyst and Test Controller must sign off the system test plans.
· All human resources must be assigned and in place.
· All test hardware and environments must be in place and free for system test use.
· The acceptance tests must be completed, with a pass rate of not less than 80%.


EXIT CRITERIA
· The Exit Criteria detailed below must be achieved before the phase 1 software can be recommended for promotion to Operations Acceptance status. Furthermore, it is recommended that there be a minimum of two days' effort in final integration testing after the final fix/change has been retested.
· All high-priority errors from system test must be fixed and tested.
· If any medium- or low-priority errors are outstanding, the Business Analyst and Business Expert must sign off the implementation risk as acceptable.
· The Test Controller and Business Analyst must sign off the project integration test.
· The Business Expert must sign off the business acceptance test.


TEST EVALUATION
PASS / FAIL CRITERIA
In order for Q / A to begin testing, the project must be at an established phase or beta, so that one or more complete test cases can be run.
The pass/fail criterion is based on the test cases. Each row within a test case is marked as passed or failed. If any row within a test case fails, the entire test case fails; a test case passes only when every row passes.


RESULTS ANALYSIS
Analysis will be based on the pass/fail ratio across all test cases, as well as on the Matrix. The QA lead in charge of the project will perform the final analysis.


QA TEST PLAN CHANGE LOG
Use the chart below to document the QA Test Plan's original version and all modifications across iterations, for change control and traceability.

TEST CASE DESIGN

VALIDATION MATRIX
The validation matrix gives the status of all the test cases. It indicates whether each bug is open, fixed, or deferred, and provides the essential information about bugs and test cases, such as the test case author, the tester, and the developer to whom the bug has been assigned for fixing. The matrix also gives the severity level of each bug. For complete information about a bug and the tester's remarks, the corresponding test case can be checked. The matrix is outlined below:

Online Account Management Application Validation Matrix
P = Passed
F = Failed
Status: O = Open, F = Fixed, D = Deferred
Severity: U = Urgent, H = High, M = Medium, L = Low

QA TEST CASES

Area of Project: Test Case Identifier sequence

Requirements Satisfied:

TEST CASE DESCRIPTION:

INPUT: User provides input

OUTPUT:
CONTROL DATA: This data describes the type of computers used and the way they are set up for the testing.

ALL EXECUTIONS PASSED? Yes
No {Check appropriate box}
TEST INFORMATION: {Fill in the following information}

Tester Name Version Date Problems Logged Corrected
Testing comments / Recommendations
Test case author Date
Modification Dates

QA TEST MATRIX
The QA Test Matrix is the tabulated form of the test result data, accessible to the senior members of the project such as the QA Manager and Project Lead. This matrix is updated on a daily basis and shows the performance of the developers and testers.

GUI and Usability Checklist:
The GUI checklist has to cover the following points:

TESTING DELIVERABLES

SCHEDULE
The table below gives the deadlines for the various tasks associated with the testing of the Quote Agent Application.
Task                               Start Date   End Date
Test Environment Setup             03/27/04     04/05/04
WinRunner Installation             04/06/04     04/08/04
LoadRunner Installation            04/09/04     05/01/04
Test case generation               05/01/04     06/26/04
Functional Testing (Phase 1)       06/27/04     07/05/04
Regression Testing (Phase 1)       07/06/04     07/23/04
Acceptance Testing (Phase 1)       07/24/04     08/03/04
Stress Testing (Phase 1)           08/04/04     08/16/04
Security Testing (Phase 1)         08/17/04     09/03/04
Back-End Testing (Phase 1)         09/04/04     09/26/04
Functional Testing (Phase 2)       09/27/04     10/18/04
Regression Testing (Phase 2)       10/19/04     11/04/04
Acceptance Testing (Phase 2)       11/05/04     12/01/04
Stress Testing (Phase 2)           12/02/04     12/14/04
Security Testing (Phase 2)         12/15/04     01/05/05
Back-End Testing (Phase 2)         01/06/05     01/30/05
Test Release Notes First Draft     02/01/05     03/01/05
Test Release Notes Final Draft     03/02/05     03/29/05
Test Release Notes Approval        03/30/05     04/25/05

==========================================================================

APPENDIX

GUI Testing
GUI testing determines the appearance of the GUI objects on the window or application. We will check that the GUI objects do not overlap with each other and that their properties are maintained. In the present application we check the properties of various web objects.

Functionality Testing
Functionality testing determines whether the window or application functions correctly based on its specifications and relevant standard documentation. Both positive and negative testing will be performed to verify that the application responds correctly. The GUI functionality is to be checked using WinRunner 7.0.

Acceptance Testing
The purpose of acceptance testing is to provide a minimal set of tests that ensure conformance to milestone acceptance criteria. The approach to acceptance testing is outlined below:
· The minimal functional requirement of each query is outlined.
· The minimal functionality of GUI controls (objects) is specified.
· Scripts in WinRunner are to be generated to cover the minimal functional and performance requirements.

System Testing
This testing is to be carried out once integration of front end and back end is carried out. Using WinRunner, different values will be entered from the front end and the database will be checked to see correctness of data. The functionalities of the system as a whole will be tested and perfection of data read from the database will be the prime focus of this test.


Regression Testing
We retest the same application after fixes or modifications are made to the application, its components, or the environment. It is difficult to determine how much retesting is needed, especially near the end of the development cycle. Automated tools like WinRunner are especially useful for this testing methodology. The approach to regression testing is outlined below:
· The regression testing is to be done using WinRunner 7.0.
· The GUI testing should be regression based.
· Data-driven testing is to be done using WinRunner 7.0 to check the back-end performance of the SQL database.

Backend Testing
This sort of testing ensures that the data does not get corrupted in the process of being transferred from the front end to back end or vice-versa. We will be using WinRunner for this testing, although significant parts of it may be manually performed.

Security Testing
This test confirms that the software allows only authorized users to access the system. This testing will be done using the concepts of data-driven testing. Here the DB administrator provides the table of usernames and passwords. The tests generated will check which functionalities are available to a user, depending on the level of permission granted by the application.