Wednesday, November 24, 2010

UNIX Shell Scripting

Testing is one of the main parts of the SDLC (software development life cycle). The shell, which is a part of UNIX, can help to do the testing smartly and quickly. Here we will discuss some of the advantages of UNIX shell scripting for test automation, some shell commands for automated testing, and how to port the testing scripts to Windows.

Introduction to UNIX

UNIX is one of the most popular operating systems; it has many advantages:

Multitasking: UNIX is designed to do many things at the same time. In computing, multitasking is a method by which multiple tasks or processes share common processing resources. In a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is executing instructions for one task; multitasking solves this problem by scheduling, for example spooling one file to the printer while editing another file. This is important for users, as they don't need to wait for one application to end before starting a second one.

Multi-user: Multi-user is a term that describes an operating system or application software that allows concurrent access by multiple users of a computer. Time-sharing systems are multi-user systems: the computer can take the commands of a number of users to run programs, access files, and print documents at the same time.

Stability: One of the design goals of UNIX is robustness and stability. UNIX is stable by its very nature: it does not need periodic reboots to stay stable or to maintain performance levels, and it has no chronic memory-leak problems that make it freeze up or slow down. Continuous uptimes of hundreds of days, even more than a year, are common. Therefore it requires less administration and maintenance.

Performance: UNIX systems provide high performance on networks and workstations, and can handle large numbers of users at a time. It is possible to tune UNIX systems to meet performance needs ranging from embedded systems to symmetric multiprocessing systems.

Compatibility: UNIX can be installed on many types of hardware, including mainframe computers, supercomputers, and microcomputers. Linux, one of the popular variants of UNIX, runs on almost 25 processor architectures, including Alpha, VAX, Intel, PowerPC, etc. UNIX is also compatible with Windows for file sharing and the like via SMB (the Samba file server) and NFS (Network File System).

Security: UNIX is one of the most secure operating systems. Firewalls and a flexible file access permission system prevent access by unwanted visitors or viruses.

The shell is the 'command interpreter' for UNIX systems. It sits at the base of most user-level UNIX programs: every command invoked by the user is interpreted by the shell, which loads the necessary programs into memory. Being the default command interpreter on UNIX makes the shell a preferred choice for interacting with programs and writing glue code for test scripts.

Advantages of using Shell for test automation on UNIX

Following are some of the advantages of using the shell for test automation on UNIX.

Free: Most of the popular shells are free and open source, so there is no additional cost.

No additional software required: All UNIX systems have a default shell (bash/ksh/csh) already installed and configured, so there is no need to spend extra time setting one up. The shell is something very common to UNIX systems, so any UNIX user usually understands shell problems pretty well and can help resolve them.

Powerful: It provides plenty of programming constructs to develop scripts of simple or medium complexity.

Extensible: It is possible to extend shell scripts with additional useful commands and programs to extend their functionality. Shell scripts can be written with the default editors available (vi, emacs, etc.) and run and tested directly; no specialized tool is needed.

Color-highlighted reports: The shell can even generate color-highlighted reports of test case execution, which is a great help.

Portability: Shell scripts are portable to other UNIX platforms as well as to Windows via Cygwin, a shell environment for Windows that allows us to execute shell scripts there too.
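For instance, the color-highlighted reports mentioned above can be produced with ANSI escape sequences. A minimal sketch (the test names here are made up for the example):

```shell
#!/bin/sh
# Color-highlighted PASS/FAIL report lines using ANSI escape sequences.

GREEN='\033[0;32m'
RED='\033[0;31m'
RESET='\033[0m'

report() {
    # print the test name, then PASS in green or FAIL in red
    name=$1
    status=$2
    if [ "$status" = "PASS" ]; then
        printf "%-20s ${GREEN}PASS${RESET}\n" "$name"
    else
        printf "%-20s ${RED}FAIL${RESET}\n" "$name"
    fi
}

report "login_test"  PASS
report "logout_test" FAIL
```

Any terminal that understands ANSI colors will show PASS in green and FAIL in red.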

Shell Commands

For testing it is important to do test setup, test procedure steps, validation of actual results against expected results, cleanup steps to bring the application back to its original state, scheduling of tests, preparation of a test results log, and reporting of the test results. The shell has many commands which can help to automate these test activities.
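These activities can be sketched as a skeleton test script. The "application under test" below is just tr(1), standing in for a real program, and the file names are invented for the example:

```shell
#!/bin/sh
# Skeleton of an automated test script covering setup, execution,
# validation, cleanup, and logging.

LOG=./test_results.log
: > "$LOG"                      # start a fresh results log

setup() {
    # test setup: create the input data the test needs
    echo "hello" > input.txt
}

run_test() {
    # test procedure: run the command under test, capture actual output
    tr 'a-z' 'A-Z' < input.txt > actual.txt
}

validate() {
    # validation: compare the actual result with the expected result
    echo "HELLO" > expected.txt
    if cmp -s actual.txt expected.txt; then
        echo "uppercase_test: PASS" >> "$LOG"
    else
        echo "uppercase_test: FAIL" >> "$LOG"
    fi
}

cleanup() {
    # cleanup: bring the environment back to its original state
    rm -f input.txt actual.txt expected.txt
}

setup
run_test
validate
cleanup
cat "$LOG"                      # report the test results
```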

Following are some useful UNIX shell commands for automation.

Verification and setup testing: When we want to test installation/uninstallation and the like, we can effectively use the file verification functionality of the shell:
-f to check whether a file exists
-r to check whether a file is readable
-w to check whether a file is writable
-x to check whether a file is executable
We can also invoke external commands and check their return code for success/failure of execution using the predefined variable '$?'.
The availability of common looping constructs like 'for' and 'while' also makes the shell an obvious choice for automating installation/uninstallation testing, checking whether commands and programs execute successfully, and functional testing as well.
Most of the time we also need to set up some environment variables and proper links (the test environment); automating this task with the shell is a great help.
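Putting these together, a post-installation check might look like the sketch below. The file names and the TEST_HOME variable are hypothetical stand-ins for a real product's files:

```shell
#!/bin/sh
# Sketch of post-installation verification.  The files are created here
# only to keep the example self-contained.

touch app.cfg app.sh && chmod +x app.sh    # simulate an installed product

failures=0
for f in app.cfg app.sh; do                # 'for' loop over expected files
    if [ -f "$f" ] && [ -r "$f" ]; then    # -f: exists, -r: readable
        echo "$f: present and readable"
    else
        echo "$f: missing or unreadable"
        failures=$((failures + 1))
    fi
done

[ -x app.sh ] && echo "app.sh: executable" # -x: executable

# invoke an external command and check its return code via $?
ls app.cfg > /dev/null
rc=$?
echo "ls exit code: $rc"

# typical test-environment setup: export variables the tests rely on
export TEST_HOME=$PWD
```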
Interactive Application testing using expect
Expect is a program that talks to other interactive programs according to a script. In the script, we tell expect what to expect from the program and what response to send: the output from the program is input to the expect script, and the output of the expect script is input to the program. The expect script thus keeps expecting output from the program and keeps feeding input to it, automating the interactive program. Expect is generalized so that it can interact with any user-level command or program, and it can even talk to several programs at the same time. In general, expect is useful for running any program that requires interaction between the user and the program; all that is necessary is that the interaction can be characterized programmatically.
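As an illustration, an expect script for a hypothetical interactive program `myprog` might look like the one below. Since expect may not be installed everywhere, the shell commands here only write the script to a file:

```shell
#!/bin/sh
# Write a small expect script to disk.  "myprog" and its prompts are
# invented for the example; running the script requires the expect package.
cat > answer.exp <<'EOF'
#!/usr/bin/expect -f
# start the interactive program under expect's control
spawn ./myprog
# wait for each prompt, then send the scripted response
expect "Enter your name: "
send "tester\r"
expect "Continue? (y/n): "
send "y\r"
# wait for the program to exit
expect eof
EOF
chmod +x answer.exp
echo "wrote answer.exp"
```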

Executing shell scripts on Windows using Cygwin
Cygwin is a Linux-like environment for Windows. It consists of two parts:
- A DLL, cygwin1.dll, which acts as a Linux emulation layer providing Linux API functionality.
- A collection of tools which provide a Linux look and feel.

Cygwin is available under the GPL (GNU General Public License) and is free software. Cygwin gives us almost all the standard UNIX shells (bash, ksh, csh, etc.), so you can run most of your scripts on Windows as well. Thus Cygwin provides a lot of portability to shell scripts.

When not to use shell scripts for automated testing

It’s not a good idea to use shell scripts in the following cases:
- You need to generate or manipulate graphics or a GUI
- You need port or socket I/O
- You are writing complex applications that need type checking, function prototyping, etc.
- You need data structures like linked lists, trees, etc.
If any of the above is true, it’s a good idea to use a more powerful language like C, C++, Perl, or Python for test automation.

software testing - 2

III. At what stage of the life cycle does testing begin?

                    Testing as an activity no longer waits till the first line of code is written. In today’s world of ever increasing complexity, testing has to begin right at the conceptual stages. Experts are evolving techniques to conduct model tests on the requirements model, domain model and design model.

                   Testing starts from the beginning: we can start testing as early as the requirement specification stage of the project. Experience shows that close attention to the requirements at the beginning eliminates a lot of problems later.

When should testing start in a project?

                    The earlier you start the testing process, the better: the longer a defect stays in the process, the more expensive it is to fix. It is said that, on average, programmers create defects in every 100 lines of code (LOC). They create 12 to 20 defects per 100 LOC if the code is not structured and is poorly documented; these defect rates improve if the code is structured and documented. However, the rates only improve to 1 to 1.5 defects per 100 LOC for subsystems, programs, or modules after typical unit testing. Although the above metrics are alarming, we also know that 70 percent of the defects in most systems are introduced during the analysis and design phases, and 30 percent during coding. Using one of the most widely referenced metrics in the industry, we can expect the cost to fix requirements defects found late in system testing or after delivery to be two orders of magnitude or more greater than when they are found during requirements analysis.

How automation fits into the overall process of testing?

                      Test automation provides the answer to an organization’s business problems by minimizing the effort required for testing systems, while ensuring uniformity in the testing process. Using automation, tests can be run faster, in a consistent manner and with fewer overheads. Automation is the replacement or supplementation of manual testing with a suite of test programs. Benefits include increased software quality, reduced time-to-market, reusable test procedures, and reduced testing costs.

What is the basis for test cases?

                    We can base the test cases on documents like the System Requirement Specification (SRS), high-level design, low-level design, use cases, blueprints, etc. From the SRS and use cases you can create system test cases, from high-level design documents you can create integration test cases, and from low-level design documents you can create unit test cases.

What is an equivalence class?

                 In software testing, the input domain is usually too large for exhaustive testing; it is therefore partitioned into a finite number of sub-domains for the selection of test inputs. Each sub-domain is known as an equivalence class and serves as a source of at least one test input.

                Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single value per class. For example, a given function may have several classes of input that can be used for positive testing: if the function expects an integer and receives an integer as input, this is considered a positive test assertion. On the other hand, if a character or any input of a class other than integer is provided, this is considered a negative test assertion or condition.
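The integer example above can be sketched as shell test data, with one representative value per equivalence class. `is_integer` here is a hypothetical validator that accepts unsigned integers only:

```shell
#!/bin/sh
# One test input per equivalence class for an integer-expecting function.

is_integer() {
    case $1 in
        ''|*[!0-9]*) return 1 ;;   # empty, or has a non-digit: invalid class
        *)           return 0 ;;   # digits only: valid class
    esac
}

# positive assertion: a value from the valid class is accepted
if is_integer 42; then echo "42: accepted (positive test)"; fi

# negative assertion: a value from the invalid class is rejected
if ! is_integer abc; then echo "abc: rejected (negative test)"; fi
```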


The roles QA plays in the software lifecycle.

                    The Software QA Engineer works closely with the development team and provides technical leadership in testing to ensure the release of the highest quality products, including developing, maintaining, and implementing test strategies, plans, procedures, and cases. In addition, the Software QA Engineer drives the full software project lifecycle of each software project, from gathering requirements to post-implementation review, and continually introduces improvements to streamline QA and development processes and procedures.


What is a "bug?"

                   A fault in a program, which causes the program to perform in an unintended or unanticipated manner. See: anomaly, defect, error, exception, and fault.


How do you feel about cyclomatic complexity?

                  Cyclomatic complexity is a measure of the complexity of code, related to the number of independent paths through a piece of code. It determines the minimum number of test cases you need to exercise every path through the program.
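For example, a function with two independent decisions has a cyclomatic complexity of 2 + 1 = 3, so at least three test inputs are needed to exercise every path (the `classify` function below is invented for the illustration):

```shell
#!/bin/sh
# Cyclomatic complexity illustration: two decisions, so complexity 3,
# and three inputs are needed for full path coverage.

classify() {
    n=$1
    if [ "$n" -lt 0 ]; then        # decision 1
        echo negative
    elif [ "$n" -eq 0 ]; then      # decision 2
        echo zero
    else                           # remaining path
        echo positive
    fi
}

# one test input per path
classify -1
classify 0
classify 5
```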


IV. Types of testing:

            Based on project stage:
                          i) Unit testing
                         ii) Integration testing
                        iii) Alpha testing
                         iv) Beta testing
            Based on kind of testing:
                          i) GUI testing
                         ii) System testing
                        iii) Regression testing

What is unit testing?

           Testing of a module for typographic, syntactic, and logical errors, for correct implementation of its design, and for satisfaction of its requirements. Testing conducted to verify the implementation of the design for one software element (e.g., a unit or module) or a collection of software elements. Syn: component testing.
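A minimal sketch in shell: the `add` function stands in for the module under test, and a tiny assertion helper (both invented for the example) checks it against its design:

```shell
#!/bin/sh
# Unit-testing a single function in isolation.

add() {
    echo $(( $1 + $2 ))
}

assert_equals() {
    # minimal assertion helper: expected, actual, test name
    if [ "$1" = "$2" ]; then
        echo "PASS: $3"
    else
        echo "FAIL: $3 (expected $1, got $2)"
    fi
}

assert_equals 5 "$(add 2 3)"  "add 2 3"
assert_equals 0 "$(add -4 4)" "add -4 4"
```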


What is Regression testing?

                The complete retesting of a software system that has been modified to ensure that any bugs have been fixed, that no other previously working functions have failed as a result of the repairs, and that newly added features have not created problems with previous versions of the software. Also referred to as verification testing, regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.
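A simple way to automate such retesting is to re-run a whole suite of test scripts after every change. In the sketch below the two test scripts (one passing, one failing) are generated on the spot only to keep the example self-contained; a real suite would already exist on disk:

```shell
#!/bin/sh
# Regression-suite driver: re-run every test script and collect failures.

printf '#!/bin/sh\nexit 0\n' > test_ok.sh
printf '#!/bin/sh\nexit 1\n' > test_broken.sh
chmod +x test_ok.sh test_broken.sh

failed=0
for t in test_*.sh; do
    if ./"$t"; then               # a test passes when it exits 0
        echo "$t: PASS"
    else
        echo "$t: FAIL"
        failed=$((failed + 1))
    fi
done
echo "regression run complete: $failed failure(s)"
```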


Difference between Integration & System testing:

                        Integration testing is the testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems, and is conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used; a bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This method is contrary to the big-bang approach, where all source modules are combined and tested in one step.

                       System testing is the testing of a complete system prior to delivery. The purpose of system testing is to identify defects that will only surface when a complete system is assembled. That is, defects that cannot be attributed to individual components or the interaction between two components. System testing includes testing of performance, security, configuration sensitivity, startup and recovery from failure modes.



Validation and Verification.

                Verification – intended to show that software correctly implements a specific function; typically takes place at the end of each phase.

       Are we building the product right?

               Validation – intended to show that software as a whole satisfies the user requirements: typically uses black-box testing.

       Are we building the right product?

               Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

Difference between Black & White box testing:

                 Black box testing. A testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the external behaviors of the program are evaluated and analyzed.

                   White Box Testing (glass-box). Testing done under a structural testing strategy; it requires complete access to the object's structure, that is, the source code.


V.Types of automated tools:

     Functional Testing Tools

a)         WinRunner versions 7, 7.5 from Mercury Interactive
b)         Quick Test Professional (QTP) 8
c)         Rational Robot from IBM Rational
d)         SilkTest from Segue

      Load Testing

A)        Load Runner from Mercury interactive
B)        Webload from Radview
C)        Silk Performer from Segue

      Unit testing tools

a)         JUnit
b)         NUnit

      Test Coverage Tools

a)         Test Coverage
b)         Rational Purify

      Test Management Tools

                Test Director

      Defect Tracking Tools

                Rational Clear Quest
                Bugzilla

      Configuration Management Tools

                Rational Clear Case
                MS Visual SourceSafe
                               


VII. Why Software Testing?

- To get adequate trust and confidence in the product.

- To meet organizational goals like meeting requirements, satisfied customers, improved market share, zero defects, etc.

- Since the software can perform 100,000 correct operations per second, it has the same ability to perform 100,000 wrong operations per second if not tested properly.


Some Software Failures:

1. Computer Glitch Causes £22m Tax Error; Mar 22, 2002: The Inland
   Revenue has said that computer problems were responsible for an estimated
   134,000 basic rate taxpayers being overcharged by a total of £22m. Individual  
   taxpayers paid an average of £148 over the odds because of data transfer
   problems between the national insurance computer system and the PAYE
   system.

2. Yahoo Glitch strikes again; Mar 20, 2002: Parts of Yahoo were shut down
   on Tuesday following software problems encountered in the integrating of
   Yahoo Groups and Yahoo Clubs.

3. Microsoft’s Anti-Unix Site Crashes; Apr 3, 2002: A marketing Web site,
    part of a multi-million dollar campaign by Microsoft and Unisys to get
    customers to switch from UNIX, is turning into a major embarrassment.
    www.wehavethewayout.com was powered by UNIX kit, and when Microsoft
    switched it to Internet Information Server software, the site crashed
    completely.

4. Cisco Flaw Enables DoS Attack; Apr 2, 2002: Vulnerability in Cisco’s 
   Call Manager software can result in a memory leak in the computer telephony
   framework causing the server to crash, which could be used by a hacker to
   start a denial of service (DoS) attack.  The fault is most commonly seen when
   Call Manager systems are integrated with a directory such as Active Directory
   or Netscape.

VI. Standards

CMM and CMMI:

                Capability Maturity Models (CMMs) assist organizations in maturing their people, process, and technology assets to improve long-term business performance. The US-based Software Engineering Institute has developed CMMs for software, people, and software acquisition, and assisted in the development of CMMs for systems engineering and integrated product development. The latest development in this initiative is the CMM Integration (CMMI) Product Suite.

                 The purpose of Capability Maturity Model Integration (CMMI) is to provide guidance for improving your organization's processes and your ability to manage the development, acquisition, and maintenance of products and services. CMMI places proven practices into a structure that helps your organization assess its organizational maturity and process area capability, establish priorities for improvement, and guide the implementation of these improvements. Unlike the CMM, which was based on software development, CMMI is not restricted to software development; it encompasses the processes of the entire development organization.

What is Configuration management? Tools used?
               
                  Configuration management is the process of identifying and defining the deliverable product set in a system, controlling the release and change of these items throughout the system life cycle, recording and reporting the status of product items and change requests, and verifying the completeness and correctness of the product items. The common tools used are MS Visual SourceSafe, Rational ClearCase, CVS, etc.

software testing - 1

- Process of executing a program with the intent of finding errors

- Confirming that a system performs its intended functions correctly

- Establishing confidence that a system does what it is supposed to do

- The process of analyzing a system to detect the differences between existing and required conditions and to evaluate the features of the system (IEEE/ANSI, 1983 [Std 829-1983])

Roles of Quality Assurance Manager and Project Manager:

                  The Project Manager leads the entire team concerned with the project and is responsible for all aspects of the project. He has to conduct project-related activities according to the applicable corporate and regulatory policies and procedures, and also has to deliver the required products to satisfied customers/stakeholders. He has specific accountability for achieving all of the defined project objectives within the time and resources allocated, and performs the day-to-day management of the project. One or more assistant project managers with the same responsibilities over specific portions of the project may support the overall project manager, without diluting his or her responsibility.


                 The QA Manager will lead the QA team in the development and execution of Software Quality Assurance (SQA) test strategies and procedures to ensure the highest level of quality in new releases of business applications as well as new and enhanced systems architecture, focusing on the enhancement of group skills and the implementation of best practices in software and systems QA. Additionally, the QA Manager will be expected to supply the leadership to successfully introduce process improvements throughout the software development lifecycle, resulting in higher-quality releases in shorter development times.

                  QA Manager will oversee the development and execution of software test plans and analysis of results. They will also drive the implementation of test automation and measurement across the entire project, monitor the completion of tasks within time and cost constraints and ensure that technical objectives are met. Other responsibilities will include identifying and recommending changes to established practices and policies. The QA Manager will directly interface with all disciplines within the IT organization and business units.

Testing


What is testing?

         
            The process of detecting and identifying defects, where a defect is any variance between actual and expected results

Testing also involves:

-          Reporting the above and
-          Taking necessary corrective measures


What is Quality?

              The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.



Quality assurance (QA):

             Quality assurance is the planned and systematic activities implemented within the quality system and demonstrated as needed to provide adequate confidence that an entity will fulfill requirements for quality.





The difference between QA and testing:

               Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

               Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.

Role of QA in a development project:

                  The QA function plans and implements QA activities to ensure that the required processes and standards are followed. They collect and analyze project metric data, coordinate formal reviews and audits, and participate in informal reviews. They also maintain the noncompliance issues list under CM (Configuration Management) control, observe testing, inspect test reports, and verify deliverables for conformance to standards.

What is a process?

A Process may be defined as:

           'a particular course of action intended to achieve a result' or more specifically as
           a set of logically related tasks performed to achieve a defined business outcome

Types of software Testing

Software Testing Types:
Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.
White box testing – This testing is based on knowledge of the internal logic of an application’s code. Also known as glass box testing. Internal software and code workings should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.
Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.
Incremental integration testing – Bottom-up approach for testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per requirements. Black-box type testing geared to the functional requirements of an application.
System testing – The entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications and covers all combined parts of a system.
End-to-end testing – Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing – Testing to determine if a new software version is performing well enough to accept it for a major testing effort. If the application crashes in initial use, the system is not stable enough for further testing, and the build or application is sent back to be fixed.
Regression testing – Testing the application as a whole after the modification of any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used for this type of testing.
Acceptance testing – Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.
Load testing – A performance test to check system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.
Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as input beyond storage capacity, complex database queries, or continuous input to the system or database.
Performance testing – A term often used interchangeably with ’stress’ and ‘load’ testing: checking whether the system meets performance requirements. Various performance and load tools are used for this.
Usability testing – A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user may get stuck? Basically, system navigation is checked in this testing.
Install/uninstall testing – Testing of full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.
Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing – Can the system be penetrated by any hacking technique? Testing how well the system protects against unauthorized internal or external access, and checking that the system and database are safe from external attacks.
Compatibility testing – Testing how well software performs in a particular hardware/software/operating system/network environment and in different combinations of the above.
Comparison testing – Comparison of a product’s strengths and weaknesses with previous versions or other similar products.
Alpha testing – An in-house virtual user environment can be created for this type of testing. Testing is done toward the end of development; minor design changes may still be made as a result of such testing.
Beta testing – Testing typically done by end-users or others. This is the final testing before releasing the application for commercial purposes.

Gray Box Testing

Gray box testing is the combination of black box and white box testing. The intention of this testing is to find defects related to bad design or bad implementation of the system. In gray box testing, the test engineer is equipped with knowledge of the system and designs test cases or test data based on that system knowledge.
For example, consider a hypothetical case wherein you have to test a web application. The functionality of this web application is very simple: you just need to enter your personal details, like email and field of interest, on the web form and submit it. The server gets these details and, based on the field of interest, picks some articles and mails them to the given email address. Email validation happens at the client side using JavaScript.


In this case, in the absence of implementation details, you might test the web form with valid/invalid mail IDs and different fields of interest to make sure that the functionality is intact.

But if you know the implementation details, you know that the system makes the following assumptions:
  • The server will never get an invalid mail ID
  • The server will never send mail to an invalid ID
  • The server will never receive a failure notification for this mail


So as part of gray box testing, in the above example you will have a test case on clients where JavaScript is disabled. This could happen for any number of reasons, and if it does, validation cannot happen at the client side. In this case, the assumptions made by the system are violated, and:
  • The server will get an invalid mail ID
  • The server will send mail to an invalid mail ID
  • The server will receive a failure notification
Hope you understood the concept of gray box testing and how it can be used to create different test cases or data points based on the implementation details of the system.

Blackbox Testing

Probably this is what most of us practice, and it is the most widely used. This is also the type of testing that is closest to the customer experience. In this type of testing, the system is treated as a closed system, and the test engineer does not assume anything about how the system was created.
As a test engineer performing black box test cases, one thing you need to make sure of is that you do not make any assumptions about the system based on your own knowledge. Assumptions created in your mind because of system knowledge can harm the testing effort and increase the chances of missing critical test cases.

The only inputs for the test engineer in this type of testing are the requirement document and the functionality of the system, which you get by working with the system. The purpose of black box testing is to
  • Make sure that the system is working in accordance with the system requirements.
  • Make sure that the system is meeting user expectations.
In order to make sure that the purpose of black box testing is met, various techniques can be used for data selection, like
  • Boundary value analysis
  • Equivalence partitioning
Activities within every testing type can be divided into verification and validation. Within black box testing, the following activities will need verification techniques:
  • Review of the requirement and functional specification.
  • Review of the test plan and test cases.
  • Review of test data.
Test case execution falls under the validation space.
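The two data-selection techniques above can be sketched for a hypothetical input field that accepts values 1 to 100: boundary value analysis picks values just below, on, and just above each boundary, and each accepted/rejected range is an equivalence class (`in_range` is invented for the example):

```shell
#!/bin/sh
# Boundary value analysis for a field accepting 1..100.

in_range() {
    [ "$1" -ge 1 ] && [ "$1" -le 100 ]
}

# test just below, on, and just above each boundary
for v in 0 1 2 99 100 101; do
    if in_range "$v"; then
        echo "$v: accepted"
    else
        echo "$v: rejected"
    fi
done
```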