Software testing reveals defects and errors in code that need to be addressed. It can also be defined as the process of evaluating the functionality and correctness of software through analysis. The main methods of testing and integrating software products ensure application quality and consist of checking the specification, design, and code; assessing reliability; and performing validation and verification.
Methods
The main goal of software testing is to confirm the quality of a software package by systematically exercising applications under carefully controlled conditions, determining their completeness and correctness, and detecting hidden errors.
Methods of checking (testing) programs can be divided into static and dynamic.
The former include informal review, management review, technical review, inspection, walkthrough, and audit, as well as static analysis of data flow and control flow.
The dynamic techniques are as follows:
- White box testing. This is a detailed study of the internal logic and structure of a program. This requires knowledge of the source code.
- Black box testing. This technique does not require any knowledge about the internal operation of the application. Only the main aspects of the system that are unrelated or have little to do with its internal logical structure are considered.
- Gray box method. Combines the previous two approaches. Debugging with limited knowledge of the internal functioning of the application is combined with knowledge of the main aspects of the system.
Transparent testing
The white box method derives test scenarios from the control structure of the procedural design. By analyzing the internal workings of a piece of software, this technique can reveal implementation errors such as poor control flow in the code. These test methods are applicable at the unit, integration, and system levels. The tester must have access to the source code and use it to find out which block behaves inappropriately.
Testing programs using the white box method has the following advantages:
- helps detect errors in hidden code and remove superfluous lines;
- side effects can be observed and exploited;
- maximum coverage is achieved while writing test scenarios.
Disadvantages:
- it is an expensive process that requires a qualified tester;
- many paths will remain unexplored, since thoroughly checking every possible hidden error is very difficult;
- some of the missing code may go unnoticed.
White box testing is sometimes also called transparent or open-box testing, structural testing, logic-driven testing, or source-based testing.
The main varieties:
1) control-flow testing - a structural strategy that uses the program's control flow as a model and favors a larger number of simple paths over fewer, more complex ones;
2) branch testing - aimed at exercising each outcome (true or false) of every control statement, including compound decisions;
3) basis path testing - allows the tester to derive a measure of the logical complexity of the procedural design and use it to define a basis set of execution paths;
4) data-flow testing - a strategy that studies the control flow by annotating the graph with information about the declaration and use of program variables;
5) loop testing - focused entirely on the correct execution of loop constructs.
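As an illustration of branch testing from the list above, here is a minimal sketch in Python. The function and its tests are hypothetical, invented for this example; each assertion forces one outcome of each decision in the code:

```python
# Hypothetical example: white-box branch testing of a small function.

def classify(n: int) -> str:
    """Return 'negative', 'zero', or 'positive' for an integer."""
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"

# Branch testing: exercise every outcome (true/false) of each decision.
assert classify(-5) == "negative"   # first condition true
assert classify(0) == "zero"        # first false, second true
assert classify(7) == "positive"    # both conditions false
```

Because the tester can see the code, it is clear that exactly three tests are enough to cover every branch.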
Behavioral testing
Black box testing treats software as a "black box": information about the internal workings of the program is not taken into account; only the main aspects of the system are checked. In this case, the tester needs to know the system architecture but has no access to the source code.
The advantages of this approach:
- effective for large segments of code;
- ease of perception by the tester;
- the user's perspective is clearly separated from the perspective of the developer (the programmer and tester are independent of each other);
- faster test creation.
Black box testing of programs has the following disadvantages:
- only a limited number of test scenarios is actually executed, resulting in limited coverage;
- the lack of a clear specification makes it difficult to develop test cases;
- low efficiency.
Other names for this technique are behavioral, opaque, functional, and closed-box testing.
The following software testing methods can be attributed to this category:
1) equivalence partitioning, which can reduce the set of test data by dividing the input data of a software module into separate partitions;
2) boundary-value analysis, which focuses on checking boundaries or extreme values - minimum, maximum, erroneous, and typical values;
3) fuzzing, used to find implementation errors by feeding in distorted or semi-distorted data in automatic or semi-automatic mode;
4) cause-effect graphing - a technique based on building graphs and establishing relationships between an action and its causes: identity, negation, logical OR, and logical AND are the four main symbols expressing the interdependence between cause and effect;
5) orthogonal array testing, applied to problems with a relatively small input domain that nevertheless exceeds the possibilities of exhaustive testing;
6) all-pairs testing - a technique whose set of test values includes all possible discrete combinations of each pair of input parameters;
7) state-transition testing - useful for checking state machines and for navigating through a graphical user interface.
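To make the last item concrete, here is a minimal, hypothetical sketch of state-transition testing in Python: a toy door state machine whose transition table serves as the model, with one test per transition in the model:

```python
# Illustrative sketch of state-transition testing (the state machine
# and its event names are invented for this example).

TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def step(state: str, event: str) -> str:
    # An event with no defined transition leaves the state unchanged.
    return TRANSITIONS.get((state, event), state)

# Cover every transition in the model at least once.
assert step("closed", "open") == "opened"
assert step("opened", "close") == "closed"
assert step("closed", "lock") == "locked"
assert step("locked", "unlock") == "closed"
# A negative test: an undefined event must not change the state.
assert step("locked", "open") == "locked"
```

The same idea scales to GUI navigation, where screens are states and user actions are events.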
Black Box Testing: Examples
The black box technique is based on specifications, documentation, and software or system interface descriptions. In addition, it is possible to use models (formal or informal) that represent the expected behavior of the software.
Typically, this debugging method is used for user interfaces and requires interaction with the application by entering data and collecting results - from the screen, from reports or printouts.
The tester thus interacts with the software through inputs, acting on switches, buttons, or other interfaces. The choice of input data, the order in which it is entered, or the sequence of actions can lead to a gigantic total number of combinations, as the following example shows.
How many tests are needed to check all possible values for 4 checkboxes and one two-position field that sets the time in seconds? At first glance, the calculation is simple: 4 fields with two possible states give 2^4 = 16 combinations, which must be multiplied by the number of possible values from 00 to 99, that is, 1,600 possible tests.
Nevertheless, this calculation is wrong: the two-position field may also contain a space, i.e. it consists of two alphanumeric positions that may include letters, special characters, spaces, and so on. If the system is a 16-bit computer, there are 2^16 = 65,536 options for each position, yielding 65,536^2 = 4,294,967,296 test cases for the field alone; multiplied by the 16 flag combinations, this gives a total of 68,719,476,736 tests. Executed at a rate of one test per second, testing would take roughly 2,177 years. For 32-bit or 64-bit systems, the duration is even longer.
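The arithmetic above is easy to verify in a few lines of Python:

```python
# Reproducing the arithmetic from the example above.

naive = 2**4 * 100            # 4 checkboxes x 100 two-digit values
print(naive)                  # 1600

# Treating each of the two positions as a 16-bit value instead:
per_field = (2**16) ** 2      # 65,536 options per position, squared
total = 2**4 * per_field      # times the 16 checkbox combinations
print(total)                  # 68719476736

seconds_per_year = 60 * 60 * 24 * 365.25
print(round(total / seconds_per_year, 1))  # about 2177.6 years at 1 test/sec
```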
There is therefore a need to reduce this period to an acceptable value by applying techniques that reduce the number of test cases without reducing test coverage.
Equivalence partitioning
Equivalence partitioning is a simple method applicable to any variables present in the software, whether input or output values, character or numeric data, etc. It is based on the principle that all data from one equivalence partition will be processed in the same way, by the same instructions.
During testing, one representative is selected from each defined equivalent partition. This allows you to systematically reduce the number of possible test cases without losing the coverage of commands and functions.
Another consequence of such a partition is a reduction in the combinatorial explosion between the various variables and the associated reduction in test cases.
For example, for f(x) = (1/x)^(1/2), three data ranges are used - three equivalence partitions:
1. All positive numbers will be processed in the same way and should give the correct results.
2. All negative numbers will be processed in the same way, with the same erroneous result: the square root of a negative number is imaginary.
3. Zero will be processed separately and will give a "division by zero" error. This is a single value section.
Thus, we see three different partitions, one of which reduces to a single value. There is one "correct" partition giving reliable results and two "incorrect" ones giving erroneous results.
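These three partitions can be exercised with one representative value each. The sketch below uses Python's math.sqrt, where a negative argument raises a domain error and zero raises a division error:

```python
import math

# One representative value from each equivalence partition of
# f(x) = sqrt(1/x).

def f(x: float) -> float:
    return math.sqrt(1 / x)

# Partition 1: positive numbers -> valid result.
assert f(4) == 0.5

# Partition 2: negative numbers -> domain error (imaginary root).
try:
    f(-4)
    raise AssertionError("expected a domain error")
except ValueError:
    pass

# Partition 3: zero -> division-by-zero error (single-value partition).
try:
    f(0)
    raise AssertionError("expected division by zero")
except ZeroDivisionError:
    pass
```

Three tests stand in for the entire input domain without losing coverage of the three behaviors.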
Boundary-value analysis
Data processing at the boundaries of the equivalent partition may be performed differently than expected. The study of boundary values is a well-known way of analyzing software behavior in such areas. This technique allows you to identify the following errors:
- misuse of relational operators (<, >, =, ≠, ≥, ≤);
- off-by-one errors;
- problems in loops and iterations;
- wrong types or sizes of variables used to store information;
- artificial constraints associated with data and variable types.
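A minimal sketch of boundary-value analysis, assuming a hypothetical validator that accepts values from 1 to 100 inclusive; the tests probe each boundary and its neighbors, which is exactly where off-by-one and wrong-operator (< vs ≤) bugs hide:

```python
# Hypothetical validator: accepts integers from 1 to 100 inclusive.

def in_range(value: int) -> bool:
    return 1 <= value <= 100

# Test at and around each boundary of the valid partition.
assert in_range(0) is False     # just below the lower boundary
assert in_range(1) is True      # lower boundary
assert in_range(2) is True      # just above the lower boundary
assert in_range(99) is True     # just below the upper boundary
assert in_range(100) is True    # upper boundary
assert in_range(101) is False   # just above the upper boundary
```

Had the validator been written with `<` instead of `<=`, the tests at 1 or 100 would fail immediately.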
Translucent testing
The gray box method increases verification coverage, allowing you to focus on all levels of a complex system by combining white and black methods.
Using this technique, the tester must have knowledge of internal data structures and algorithms to develop test values. Examples of gray box testing methods are:
- architectural model;
- Unified Modeling Language (UML);
- state model (state machine).
In the gray box method, test cases are developed by studying module code with the white-box technique, while the actual testing is performed on the program's interfaces using the black-box technique.
Such testing methods have the following advantages:
- combination of the advantages of white and black box techniques;
- the tester relies on the interface and functional specification, not on the source code;
- the tester can create excellent test scenarios;
- verification is carried out from the point of view of the user, not the program designer;
- custom test scenarios can be created;
- objectivity.
Disadvantages:
- test coverage is limited because there is no access to the source code;
- the complexity of defect detection in distributed applications;
- many ways remain unexplored;
- if the software developer has already run the test, then further research may be redundant.
Another name for the gray box technique is translucent debugging.
The following test methods fall into this category:
1) orthogonal array testing - using a subset of all possible combinations;
2) matrix testing, which uses data on the state of the program;
3) regression testing, carried out whenever new changes are made to the software;
4) pattern testing, which analyzes the design and architecture of a well-built application.
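To illustrate the idea behind orthogonal array and all-pairs testing: instead of running every combination, a smaller suite is chosen that still covers every pair of parameter values. The parameters, values, and checking helper below are all hypothetical:

```python
from itertools import combinations, product

# Hypothetical configuration parameters under test.
params = {
    "browser": ["chrome", "firefox"],
    "os": ["linux", "windows"],
    "locale": ["en", "de"],
}

# Exhaustive testing needs 2*2*2 = 8 cases; this subset of 4 still
# covers every value pair across any two parameters.
suite = [
    {"browser": "chrome", "os": "linux", "locale": "en"},
    {"browser": "chrome", "os": "windows", "locale": "de"},
    {"browser": "firefox", "os": "linux", "locale": "de"},
    {"browser": "firefox", "os": "windows", "locale": "en"},
]

def covers_all_pairs(suite, params):
    """Check that every pair of values of any two parameters appears."""
    for p1, p2 in combinations(params, 2):
        needed = set(product(params[p1], params[p2]))
        seen = {(case[p1], case[p2]) for case in suite}
        if needed - seen:
            return False
    return True

print(covers_all_pairs(suite, params))  # True: 4 cases instead of 8
```

With more parameters the savings grow quickly, which is why pairwise suites are often generated by dedicated tools rather than by hand.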
Comparison of software testing methods
Using all dynamic methods leads to a combinatorial explosion of the number of tests that must be developed, implemented and carried out. Each technique should be used pragmatically, taking into account its limitations.
No single true method exists; there are only methods better suited to a specific context. Structural techniques make it possible to find useless or malicious code, but they are complex and hard to apply to large programs. Specification-based methods are the only ones that can identify missing code, but they cannot identify extraneous code. Some techniques are better suited than others to a particular testing level, class of errors, or context.
The comparison table below summarizes the main differences between the three dynamic testing techniques.
| Aspect | Black box method | Gray box method | White box method |
| --- | --- | --- | --- |
| Knowledge of the program's internals | Only external aspects are analyzed | Partial knowledge of the internal structure | Full access to the source code |
| Granularity of testing | Low | Average | High |
| Who performs the testing | End users, testers, and developers | End users, testers, and developers | Developers and testers |
| Basis for testing | External behavior and specifications | Database diagrams, data-flow diagrams, internal states, knowledge of the algorithm and architecture | The internal design is fully known |
| Coverage | Least exhaustive, requires the least time | Average | Potentially the most exhaustive; time-consuming |
| Data domains and internal boundaries | Checked only by trial and error | Can be checked if known | Tested best |
| Suitable for algorithm testing | No | No | Yes |
Automation
Automated software testing methods greatly simplify the verification process, regardless of the technical environment or software context. They are used in two cases:
1) to automate tedious, repetitive, or painstaking tasks, such as comparing files that are thousands of lines long, freeing up the tester's time to concentrate on more important points;
2) to perform or track tasks that cannot be easily accomplished by people, such as performance testing or analysis of response times, which can be measured in hundredths of a second.
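A sketch of the first kind of automation, using Python's standard difflib to compare two outputs. The sample lines are invented; in practice they would be read from the files under test:

```python
import difflib

# Hypothetical expected and actual outputs (in practice, read from files).
expected = ["status=ok", "count=42", "elapsed=0.31"]
actual = ["status=ok", "count=43", "elapsed=0.31"]

# Produce a unified diff so a human only reviews the differences.
diff = list(difflib.unified_diff(expected, actual,
                                 fromfile="expected", tofile="actual",
                                 lineterm=""))
for line in diff:
    print(line)
```

For files thousands of lines long, the tester inspects a handful of diff lines instead of the whole file.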
Test tools can be classified in different ways. The following division is based on the tasks they support:
- testing management, which includes support for project management, versions, configurations, risk analysis, tracking of tests, errors, defects and reporting tools;
- requirements management, which includes storing requirements and specifications, checking for completeness and ambiguity, their priority and traceability of each test;
- review and static analysis, including monitoring of flows and tasks, recording and storing comments, detecting defects and planned corrections, managing links to checklists and rules, tracking relationships between source documents and code, and static analysis that detects defects, enforces coding standards, analyzes structures and their dependencies, and computes metrics of the code and architecture; compilers, link analyzers, and cross-reference generators are also used;
- modeling, which includes tools for modeling business behavior and validating the created models;
- test development, which provides generation of expected data based on conditions and the user interface, models, and code; management of test data to create or modify files, databases, and messages; data verification based on management rules; and analysis of the statistics of conditions and risks;
- test execution via data entry through a graphical user interface, an API, or the command line, using comparators to help determine successful and failed tests;
- support for test environments, which makes it possible to replace missing hardware or software, including hardware simulators based on a subset of deterministic output, emulators of terminals, mobile phones, or network equipment, environments for checking languages, operating systems, and hardware by replacing missing components with stub drivers and mock modules, as well as tools for intercepting and modifying OS requests and for simulating constraints on CPU, RAM, ROM, or the network.
Perspective
As new technologies such as service-oriented architecture (SOA) continue to spread, software testing will evolve with them. Likely trends include:
- the use of useful metrics, such as specification coverage, model coverage, and code coverage, will guide project development;
- combinatorial tools will allow testers to determine the priority directions of debugging;
- testers will provide more visual and valuable services throughout the entire software development process;
- debuggers will be able to create tools and methods for testing software written in and interacting with various programming languages;
- debugging specialists will become more professionally trained.
New business-oriented methods of testing programs will take their place; the ways of interacting with systems and the information they provide will change, reducing risks and increasing the benefits of business changes.