Software Testing and QA Glossary: 118 Terms You Should Know

Maksym Babych

CEO

11 min

This glossary of essential software testing and QA terms and definitions covers over 110 concepts.

With straightforward explanations, this guide makes software testing terminology easy to understand and apply.

Whether you’re new to software testing or need a quick reference, this glossary will help you dive into testing and quality assurance in minutes.

A

  • A/B Testing: A method where two versions (A and B) are compared against each other to determine which one performs better.
  • Acceptance Testing: Formal testing conducted to determine whether a system satisfies its acceptance criteria, enabling the end user to decide whether to accept the system.
  • Accessibility Testing: The process of making sure the application is usable by people with disabilities, such as hearing or visual impairments (including color blindness), as well as by elderly users and other disadvantaged groups.
  • Ad Hoc Testing: An informal testing phase where the tester aims to break the system without following any structured testing procedures.
  • Agile Testing: Testing practice for projects using agile methodologies, emphasizing testing from the customer’s perspective.
  • Alpha Testing: A type of acceptance testing performed mostly by the in-house testers in an environment that closely resembles a real production environment.
  • Automation Testing: The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
  • Assertion: In software testing, an assertion is a statement that verifies the truth of a condition in the code to ensure the software works as expected (see the sketch just after this list).
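
To make the Assertion entry above concrete, here is a minimal Python sketch. The calculate_total function and its expected value are hypothetical placeholders; the point is simply that an assert statement fails loudly whenever its condition does not hold.

```python
def calculate_total(prices):
    """Hypothetical function under test: sums a list of item prices."""
    return sum(prices)

def test_calculate_total():
    result = calculate_total([10.0, 5.5, 4.5])
    # The assertion verifies the condition; if it is False, the test fails
    # and reports the message after the comma.
    assert result == 20.0, f"expected 20.0, got {result}"

test_calculate_total()  # passes silently when the assertion holds
```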

B

  • Black Box Testing: A testing technique where the application’s functionality is tested without peering into its internal structures or workings.
  • Boundary Testing: A testing process that focuses on values at the boundaries of the input domain (see the sketch after this list).
  • Baseline Testing: A method of testing where the performance of the current program is compared to its previous versions or to a standard.
  • Beta Testing: A phase of testing where a software product is distributed to a wide audience outside of the development team for real-world exposure and feedback.
  • Branch Testing: A testing method where each branch from each decision point is tested at least once.
  • Bug: Any unexpected or incorrect software behavior that deviates from what is defined in the requirement specifications.
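
The Boundary Testing entry above can be illustrated with a short Python sketch that probes values sitting exactly on and immediately outside the edges of an input domain. The is_valid_age validator and its 0 to 120 range are made-up assumptions.

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical validator: accepts ages from 0 to 120 inclusive."""
    return 0 <= age <= 120

def test_age_boundaries():
    # Values at and immediately around each boundary of the input domain.
    assert is_valid_age(0) is True      # lower boundary
    assert is_valid_age(-1) is False    # just below lower boundary
    assert is_valid_age(120) is True    # upper boundary
    assert is_valid_age(121) is False   # just above upper boundary

test_age_boundaries()
```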

C

  • Canary Testing: A strategy where a new feature or software version is rolled out to a small subset of users before it is made available to the entire user base.
  • Compatibility Testing: Testing whether the software is compatible with other system elements with which it should operate, e.g., browsers, operating systems, or hardware.
  • Code Coverage: A measure used to describe the degree to which the source code of a program is executed when a particular test suite runs.
  • Component Testing: Testing of individual software components or modules, typically by the developer in isolation from other system parts.
  • Concurrency Testing: Testing to ensure the software can handle the expected number of users accessing or interacting with the system simultaneously.
  • Checkpoint: In testing, a checkpoint is a specific point where certain conditions are checked to ensure the application is functioning correctly.
  • Cross-browser Testing: Testing web applications across multiple web browsers to ensure consistent behavior and functionality.

D

  • Data-driven Testing: A testing methodology where test scripts are executed and verified based on data values stored in one or more data sources or data files (see the sketch after this list).
  • Defect: A flaw in a software product that causes it to fail to perform its intended function.
  • Dependency Testing: Tests to ensure that all internal and external dependencies are working as expected.
  • Distributed Testing: Testing in which software components are tested under varying environments and systems to ensure their functionality across different configurations.
  • Dynamic Testing: Testing software through executing it to find defects.
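
As a sketch of the Data-driven Testing entry above, pytest's parametrize decorator feeds a small table of data rows into one test script; in a real project the rows would typically come from an external data file. The slugify function and its sample rows are hypothetical.

```python
import pytest

def slugify(title: str) -> str:
    """Hypothetical function under test: turns a title into a URL slug."""
    return title.strip().lower().replace(" ", "-")

# Each tuple is one data row; the same test body runs once per row.
@pytest.mark.parametrize("title, expected", [
    ("Hello World", "hello-world"),
    ("Smoke Testing", "smoke-testing"),
    ("  QA Glossary ", "qa-glossary"),
])
def test_slugify(title, expected):
    assert slugify(title) == expected
```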

E

  • Exploratory Testing: An approach to software testing that is concisely described as simultaneous learning, test design, and test execution.
  • End-to-End Testing: A methodology used to test whether the flow of an application is performing as designed from start to finish.
  • Entry Criteria: The set of conditions that permits a task to be performed, or in the case of testing, permits testing to begin.
  • Error Handling Testing: Testing how well a system detects, reports, and responds to error conditions such as invalid inputs or failed operations, without crashing or corrupting data.

F

  • Functional Testing: Testing that operations perform as expected according to the requirements.
  • Fuzz Testing: A software testing technique that provides invalid, unexpected, or random data as input to a computer program to find potential bugs (see the sketch after this list).
  • Failure Testing: Testing aimed at evaluating a system’s ability to withstand and recover from conditions that cause failure.
  • Field Testing: Testing done in the actual environment or “field” where the product will be used rather than in the testing environment.
  • Flow Testing: Testing that examines the execution flow of the software under test to find flaws in the control structures.
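
Here is a very rough sketch of the Fuzz Testing entry above: random, often malformed strings are thrown at a hypothetical parse_query function, and any unexpected exception is logged as a potential bug.

```python
import random
import string

def parse_query(text: str) -> dict:
    """Hypothetical function under test: parses 'key=value&key2=value2' strings."""
    pairs = [p for p in text.split("&") if p]
    return {k: v for k, v in (p.split("=", 1) for p in pairs if "=" in p)}

def fuzz_parse_query(iterations: int = 1000) -> None:
    alphabet = string.printable
    for _ in range(iterations):
        # Generate random, possibly invalid input of random length.
        payload = "".join(random.choices(alphabet, k=random.randint(0, 50)))
        try:
            parse_query(payload)
        except Exception as exc:  # any crash is a finding worth investigating
            print(f"Potential bug: input {payload!r} raised {exc!r}")

fuzz_parse_query()
```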

G

  • Generation Testing: Also known as back-to-back testing, where two versions of an application are tested together to identify differences.
  • Gherkin: A business-readable, domain-specific language used for writing automated tests, particularly in Behavior-Driven Development (BDD).
  • GUI Testing: Testing the graphical user interface of an application to ensure it meets specified requirements.
  • Gray Box Testing: A combination of black-box and white-box testing methodologies, testing a piece of software against its specification but using some knowledge of its internal workings.

H

  • Happy Path Testing: Testing the application’s typical or most commonly used paths, focusing on expected and straightforward scenarios.
  • Hardware Testing: The testing of a computer system’s physical components, such as processors, memory, and storage devices, to ensure they meet specifications and function correctly.
  • Hybrid Testing: A testing methodology combining elements of multiple testing approaches, such as combining black box and white box testing techniques.
  • High-Level Testing: Testing that focuses on the components or systems at a high level of abstraction, often ignoring the internal workings and focusing on the outputs given inputs.

I

  • Integration Testing: Testing in which individual software modules are combined and tested as a group to discover interface defects between modules.
  • Intelligent Test Agent: A software agent that uses artificial intelligence and machine learning techniques to automate the testing process, learn from past tests, and improve testing efficiency.
  • Incremental Testing: Testing where components or modules are tested individually as they are developed and then tested together to ensure the entire system works correctly.
  • Inspection: A formal review practice in which software documents are examined by an individual or a team for defects or improvements.
  • Interface Testing: Testing conducted to evaluate whether systems or components pass data and control correctly to one another.

J

  • Journey Testing: Testing that simulates a user’s journey through a software application, from start to finish, to ensure all user goals can be achieved without issues.
  • JIT Testing: Just-In-Time Testing – a practice in agile methodologies where testing is done continuously and just in time for the implementation of features.
  • Jitter Testing: Testing the stability and reliability of a system when it experiences delays in data transmission or processing.
  • JUnit Testing: A framework used for testing Java applications, allowing developers to write and run repeatable automated tests.

K

  • Keyword-Driven Testing: A functional test automation approach where test cases are defined by keywords or action words that describe the actions to be performed (see the sketch after this list).
  • Knockout Testing: Testing approach used to determine which component(s) or part(s) of a system are causing the overall system to fail.
  • Kit Testing: The process of testing a group of components or parts that are intended to function together within a larger system.
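
The Keyword-Driven Testing entry above can be sketched as a toy driver: the test case is just a table of action words and arguments, and a dispatcher maps each keyword onto an implementation. All names here (open_page, type_text, click) are hypothetical placeholders rather than a real framework.

```python
# Hypothetical action functions the keywords map onto.
def open_page(url):
    print(f"opening {url}")

def type_text(field, text):
    print(f"typing {text!r} into {field}")

def click(button):
    print(f"clicking {button}")

KEYWORDS = {"open": open_page, "type": type_text, "click": click}

# The test case is data: a sequence of (keyword, arguments) rows.
login_test = [
    ("open", ["https://example.com/login"]),
    ("type", ["username", "qa_user"]),
    ("type", ["password", "secret"]),
    ("click", ["Sign in"]),
]

def run_keyword_test(steps):
    for keyword, args in steps:
        KEYWORDS[keyword](*args)  # dispatch each action word to its implementation

run_keyword_test(login_test)
```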

L

  • Load Testing: Testing an application’s ability to perform under anticipated user loads, focusing on response times, throughput rates, and resource utilization levels (see the sketch after this list).
  • Localization Testing: Testing a software application to ensure it can be adapted to a specific locale or culture without engineering changes.
  • Loop Testing: A white-box testing technique that focuses specifically on the validity of loop constructs.
  • Linting: The process of running a program that will analyze code for potential errors.
  • Limit Testing: Testing the extremes of the input and output parameters, typically to observe the effects of failures.
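
A minimal sketch of the Load Testing entry above, assuming a hypothetical handle_request function stands in for a call to the system under test: a thread pool fires a batch of concurrent requests and the script reports simple response-time statistics.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request(i: int) -> float:
    """Hypothetical stand-in for one call to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)              # simulate the work of a single request
    return time.perf_counter() - start

def run_load_test(users: int = 50, requests_per_user: int = 4) -> None:
    total = users * requests_per_user
    with ThreadPoolExecutor(max_workers=users) as pool:
        durations = list(pool.map(handle_request, range(total)))
    print(f"requests: {total}")
    print(f"avg response: {statistics.mean(durations):.4f}s")
    print(f"p95 response: {sorted(durations)[int(0.95 * total)]:.4f}s")

run_load_test()
```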

M

  • Mutation Testing: A method of testing where the software is modified in small ways to ensure that tests can detect the changes, thereby ensuring the quality of the test cases.
  • Manual Testing: The process where testers manually operate and test the software to find defects without the help of automated testing tools.
  • Mock Testing: Involves using mock objects to simulate the behavior of complex, real objects in order to test modules that have external dependencies (see the sketch after this list).
  • Model-Based Testing: An approach to software testing where test cases are derived from a model that describes the functional aspects of the system.
  • Monkey Testing: A testing technique where the tester enters random inputs into the system to see if it crashes or behaves unexpectedly.
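
The Mock Testing entry above can be sketched with Python's standard unittest.mock: the external dependency (here a hypothetical payment gateway) is replaced by a mock so the checkout logic can be tested in isolation and its interaction with the dependency verified.

```python
from unittest.mock import Mock

def checkout(cart_total: float, gateway) -> str:
    """Function under test: depends on an external payment gateway."""
    response = gateway.charge(amount=cart_total)
    return "paid" if response["status"] == "ok" else "failed"

def test_checkout_with_mock_gateway():
    # The mock simulates the complex external dependency.
    gateway = Mock()
    gateway.charge.return_value = {"status": "ok"}

    assert checkout(42.0, gateway) == "paid"
    # The mock also records how it was called, so the interaction can be verified.
    gateway.charge.assert_called_once_with(amount=42.0)

test_checkout_with_mock_gateway()
```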

N

  • Negative Testing: Testing aimed at showing that a component or system does not work. It involves testing the system with incorrect or unexpected inputs (see the sketch after this list).
  • Non-Functional Testing: Testing the aspects of the software that may not be related to a specific function or user action, such as scalability, performance, and security.
  • Network Testing: Testing conducted to evaluate the network performance and security, including the throughput, bandwidth, data transfer rate, latency, and packet loss.
  • Non-Regression Testing: A type of testing to ensure new changes or improvements haven’t affected the existing features of the software.
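
As a small sketch of the Negative Testing entry above, the test below deliberately feeds invalid input to a hypothetical withdraw function and passes only if the function rejects it with the expected error.

```python
import pytest

def withdraw(balance: float, amount: float) -> float:
    """Hypothetical function under test."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_rejects_invalid_input():
    # Negative tests: incorrect or unexpected inputs must be rejected cleanly.
    with pytest.raises(ValueError):
        withdraw(100.0, -5.0)
    with pytest.raises(ValueError):
        withdraw(100.0, 500.0)
```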

O

  • Object-Oriented Testing: Testing techniques that are tailored to applications developed using object-oriented programming and principles, focusing on testing the object interactions.
  • OK/NOK: Terms used in testing to indicate whether a test case has passed (OK) or failed (NOK).
  • Operational Testing: Testing conducted to evaluate a system or component in its operational environment.
  • Orthogonal Array Testing: A systematic, statistical way of testing pair-wise interactions to identify problematic combinations of variables.
  • Output Comparison Testing: Testing where the outputs from a system are compared against expected outcomes or against the outputs of another system.
  • Open Box Testing: A synonym for white box testing, where the tester has full visibility of the software’s internal workings.

P

  • Peak Testing: Testing an application’s performance under maximum load and stress conditions to ensure it can handle high traffic or data processing demands.
  • Performance Testing: Evaluates the speed, responsiveness, and stability of a computer, network, software program, or device under a workload.
  • Penetration Testing: Also known as “pen testing” or “ethical hacking,” this is the practice of testing a computer system, network, or web application to find security vulnerabilities that an attacker could exploit.
  • Positive Testing: Testing the system by providing valid data as input to check whether it behaves as expected.
  • Path Testing: White box testing technique that involves testing all possible paths that can be taken in the program.
  • Postmortem Testing: Conducted after a project or phase is completed to learn what went well and what didn’t to improve future projects.

Q

  • Quality Assurance (QA): A way of preventing mistakes or defects in manufactured products and avoiding problems when delivering solutions or services to customers.
  • Quality Control (QC): The part of quality management focused on fulfilling quality requirements, typically through the operational techniques and activities used to verify that those requirements are met.
  • Quantitative Testing: Testing that is measurable and can be expressed in numbers, often used in performance testing to measure response times, throughput rates, etc.
  • Query Testing: Involves testing database queries for performance, accuracy, and reliability.
  • QuickCheck Testing: A form of automated specification-based testing where properties (expected behaviors) of the system are expressed as predicates, and the testing framework automatically generates test cases that try to falsify these properties (see the sketch after this list).
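
The QuickCheck Testing entry above describes property-based testing. In Python, a widely used analogue is the third-party hypothesis library (shown here as an illustrative assumption, not something named in this glossary): a property is written as a predicate over all inputs, and the framework generates cases that try to falsify it.

```python
from hypothesis import given, strategies as st

def encode(data: bytes) -> str:
    """Hypothetical function under test: reversible text encoding."""
    return data.hex()

def decode(text: str) -> bytes:
    return bytes.fromhex(text)

# Property: decoding an encoded value always returns the original input.
@given(st.binary())
def test_roundtrip(data):
    assert decode(encode(data)) == data
```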

R

  • Regression Testing: A type of software testing that ensures that previously developed and tested software still performs the same way after it is changed or interfaced with other software.
  • Risk-Based Testing (RBT): An approach to testing that prioritizes the tests of features and functions based on the risk of their failure—the greater the risk, the higher the priority.
  • Requirements Testing: Ensures that the system correctly implements the software requirements specification and that the requirements themselves meet the customer’s expectations.
  • Random Testing: Involves inputting random data into the system in order to check its behavior and observe if it crashes or fails.
  • Recovery Testing: Checks a system’s ability to recover from crashes, hardware failures, or other catastrophic problems.

S

  • Sanity Testing: A subset of regression testing to ensure that an attempt at fixing a bug or introducing a new feature hasn’t inadvertently caused additional problems.
  • Security Testing: Identifies threats to the system and measures its potential vulnerabilities so that the system does not stop functioning and cannot be exploited.
  • Smoke Testing: A preliminary test to reveal simple failures severe enough to, for example, reject a prospective software release (see the sketch after this list).
  • Stress Testing: Determines the stability of a system or entity under an extreme load that might cause the system to crash or fail.
  • System Testing: The testing of a complete and fully integrated software product to evaluate the system’s compliance with its specified requirements.
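
A rough sketch of the Smoke Testing entry above: a handful of quick checks against the most basic behavior of a hypothetical application factory, run first to decide whether a build is stable enough for deeper testing.

```python
def create_app() -> dict:
    """Hypothetical factory for the application under test."""
    return {"status": "running", "version": "1.4.2", "routes": ["/", "/login"]}

def test_smoke():
    # A shallow pass over the most critical behavior: does the build start
    # and expose its core entry points at all?
    app = create_app()
    assert app["status"] == "running"
    assert "/" in app["routes"]
    assert "/login" in app["routes"]

test_smoke()
```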

T

  • Test Case: A set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly.
  • Test Driven Development (TDD): A software development process that relies on the repetition of a very short development cycle: first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces code to pass that test, and finally refactors the new code to acceptable standards.
  • Test Plan: A document detailing the objectives, resources, and processes for a specific test for a software or hardware product.
  • Test Suite: A collection of test cases intended to test a behavior or a set of behaviors of a software program (see the sketch after this list).
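
To tie the Test Case and Test Suite entries above together, here is a small sketch using Python's standard unittest module: each test method is one test case, and the cases are grouped into a suite that runs as a unit. The add function is a hypothetical placeholder for real application code.

```python
import unittest

def add(a, b):
    """Hypothetical function under test."""
    return a + b

class AddTests(unittest.TestCase):
    # Each test method is a single test case: fixed inputs, expected outcome.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# A test suite is simply a collection of test cases run together.
suite = unittest.TestSuite()
suite.addTest(AddTests("test_positive_numbers"))
suite.addTest(AddTests("test_negative_numbers"))

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite)
```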

U

  • UI Testing: Testing the user interface of an application to ensure it is functioning properly and provides a good user experience.
  • Unit Testing: The process of testing individual units or components of software to determine if they are fit for use.
  • Upward Compatibility Testing: Testing to ensure that a software product is compatible with newer or future versions of the platform or environment.
  • Usability Testing: A method to evaluate how easy a software application is to use, focusing on the user’s experience and satisfaction.
  • User Acceptance Testing (UAT): The final phase of the software testing process where actual software users test the software to ensure it can handle required tasks in real-world scenarios, according to specifications.

V

  • Validation Testing: The process of evaluating software at the end of the software development process to ensure compliance with software requirements.
  • Visual Testing: Testing the visual aspects of an application’s user interface to ensure it appears correctly to users.
  • Virtual User Testing: Simulating multiple users interacting with the software concurrently to understand its behavior under load.
  • V-model: A software development model that emphasizes verification and validation processes, where each stage of development has a corresponding testing phase.
  • Vulnerability Testing: Identifying and measuring security vulnerabilities in a software application or network.

W

  • White Box Testing: A testing technique that involves looking at the structure of the code and designing tests based on the internal operations of the application.
  • Workflow Testing: Testing the sequences of processes that the system performs, focusing on user tasks and their interactions with the system.
  • Worst Case Testing: Testing the most demanding or stressful conditions under which a software product can operate to determine its limits.
  • Web Application Testing: Testing web applications to identify potential web-specific issues like compatibility, security, load time, and accessibility.
  • Walkthrough: A form of peer review where the author of a product or document walks a group through the product or document, and the group asks questions and makes comments.

Y

  • Yellow Box Testing: A term sometimes used to describe testing software with partial knowledge of its internal workings, situated between black-box and white-box testing.

Z

  • Zero-Day Testing: Testing how software handles newly discovered security vulnerabilities that have not yet been addressed or patched.
