False Positive / Negative in Software Testing
Automated tests verify the software under test and catch bugs. In this context, a positive result means that at least one test found a bug or a malfunctioning feature, while a negative result means that no test found a bug or malfunctioning feature in the code.
Ideally, all automated tests should give negative signals. However, in reality, some tests show False Positive or False Negative signals.
False Positive in Software Testing
In the context of automated software testing, a False Positive means that a test case fails while the software under test does not contain the bug that the test tries to catch. As a result of a false positive, test engineers spend time hunting down a bug that does not exist.
A false positive indicates a bug when there is none.
While false positive results have no impact on the software product itself, they waste engineers' time and erode trust. As a result, some engineers may lose faith in the test suite and start removing tests that produce false positives.
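To make this concrete, here is a minimal sketch (the fetch_report function and the timing threshold are hypothetical) of a test that can fail on a slow machine even though the code under test is correct:

```python
import time


def fetch_report():
    """Hypothetical function under test; assume it is functionally correct."""
    time.sleep(0.2)  # simulate some work
    return {"status": "ok"}


def test_fetch_report_is_fast():
    # On a loaded CI machine this timing assertion can fail even though
    # fetch_report() has no bug -- a classic source of false positives.
    start = time.monotonic()
    result = fetch_report()
    elapsed = time.monotonic() - start
    assert result["status"] == "ok"
    assert elapsed < 0.25  # brittle threshold, not a real product requirement
```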
False Negative in Software Testing
In the context of automated software testing, a False Negative means that a test case passes while the software contains the bug that the test was meant to catch. As a result of a false negative, bugs land in the production software and cause issues for the customers.
A false negative indicates no bug when there is one.
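As a minimal sketch (the apply_discount function is hypothetical), a weak assertion can let a real bug pass unnoticed:

```python
def apply_discount(price, percent):
    discount = price * percent / 100
    return price  # bug: should be `price - discount`


def test_apply_discount():
    # Passes even though the discount is never applied, because the assertion
    # only checks that *some* positive price comes back -- a false negative.
    assert apply_discount(100, 10) > 0
    # A stricter assertion would expose the bug:
    # assert apply_discount(100, 10) == 90
```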
Both false positives and false negatives are harmful. While a false positive wastes your time, a false negative lies to you and lets a bug remain in the software indefinitely. That said, false negatives get the worse press because they are more damaging and introduce a false sense of security.
False Negatives & False Positives in Static code analysis
Static code analysis is a software development practice that analyzes source code to identify potential errors, both semantic and syntactic, before the code is ever run. As such, both false negatives and false positives apply to this field as well. In static code analysis, a positive result is bad news; it suggests a defect in the source code. A false negative, however, is the worst outcome, since you remain unaware of the defect in the code.
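For illustration only (the functions below are hypothetical), a type checker such as mypy can report a defect before the code ever runs, while a less precise tool may also warn about code that is actually safe:

```python
from typing import Optional

NOW = 1_700_000_000  # assumed "current" Unix timestamp for the example


def days_since_login(last_login: Optional[int]) -> int:
    # A static type checker flags this line: `last_login` may be None,
    # so the subtraction can crash at runtime. True positive.
    return (NOW - last_login) // 86_400


def safe_days_since_login(last_login: Optional[int]) -> int:
    if last_login is None:
        return -1
    # The None case is handled above, yet a less precise analyzer may still
    # warn here -- a false positive from the tool's point of view.
    return (NOW - last_login) // 86_400
```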
False Negatives & False Positives in Dynamic code analysis
Dynamic code analysis evaluates software for quality and correctness by executing the program to detect defects, whereas static code analysis examines the code without running it. In the context of a code coverage tool, a positive result is good news, since it suggests that you have achieved the minimum desired code coverage. Conversely, a false positive in this context means you have not really covered some code area, but you think you have.
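A related pitfall, sketched below with a hypothetical normalize helper, is that a line can be executed, and therefore counted as covered, without its behaviour ever being verified, which creates the same false confidence:

```python
def normalize(name):
    return name.strip().lower()


def test_normalize_runs():
    # The body of normalize() is executed, so a coverage tool reports the
    # line as covered -- a positive signal -- yet nothing is asserted, so a
    # regression in normalize() would still go unnoticed.
    normalize("  Alice  ")


def test_normalize_verifies():
    # The same line is covered, but this time the result is actually checked.
    assert normalize("  Alice  ") == "alice"
```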
Cheatsheet
In summary, the context in which the terms positive and negative are used determines whether each is good or bad news, which makes the concept confusing. Luckily, there is an easy rule of thumb that helps you figure out which is worse, the false positive or the false negative.
If a positive is bad, a false negative is worse. If a positive is good, a false positive is worse!
Catch false negatives in tests
False positives are easy to catch. When a test fails, we can look at the root cause of the failure and decide whether it was a false positive or not. But what about false negatives? How do we catch them? Remember, a test with a false negative lies to you by not failing when it should fail.
In practice, one reason a test case passes can be a false negative. Should we lose our faith in automated software testing, then? The answer is no.
To catch false negatives in tests, one should practice a technique known as mutation testing.
Software engineers exercise mutation testing by changing the code to introduce a bug, and then running the test responsible for catching that bug. In this situation, a test that still passes is giving us a false negative.
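As a minimal sketch (is_adult and the tests are hypothetical; tools such as mutmut for Python or PIT for Java can automate the mutations), this is what a manual mutation looks like:

```python
def is_adult(age):
    return age >= 18  # mutation: change `>=` to `>` and re-run the tests


def test_is_adult():
    # Survives the mutation: it only checks a value far from the boundary,
    # so with `age > 18` it still passes -- a false negative revealed by
    # mutation testing.
    assert is_adult(30) is True


def test_is_adult_boundary():
    # Kills the mutant: with `age > 18` the first assertion fails.
    assert is_adult(18) is True
    assert is_adult(17) is False
```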
Avoid False Positives & False Negatives
As we discussed earlier, both false positive and false negative signals interrupt us, so wouldn’t it be better to avoid false positives and false negatives rather than hunting them down? In this section, we will go through some of the best practices to prevent false positives and false negatives.
- Keep automated tests simple and minimize the logic in your test code, and always remember that the test code itself is untested. The less logic you include in your test cases, the less chance of misbehavior from the test (see the sketch after this list).
- When writing test cases by hand, leverage open-source testing frameworks and libraries instead of bespoke ones, because open-source libraries are battle-tested and used by many companies.
- Tests are code, so they should be subject to code review by colleagues, since no one can guarantee that code is bug-free.
- A change in the source code should trigger a review of the companion test cases to prevent false negatives introduced by refactoring.
- Software engineers should practice mutation testing before committing the code for a feature or a bug fix.
- Your tests should only assert what matters to them. Often, redundant coverage results in false positives.
- A dedicated test environment helps reduce false positives. This environment should be accessible only to the test cases.
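To illustrate the first point about keeping logic out of tests (total_quantity is a hypothetical function under test), compare a test that re-implements part of the behaviour with a simpler, explicit one:

```python
def total_quantity(items):
    """Hypothetical function under test: sums the quantities of (name, qty) pairs."""
    return sum(qty for _, qty in items)


def test_total_with_logic():
    # The loop and the conditional re-implement part of the behaviour, so a
    # bug in the test can mask a bug in the code, or fail when the code is fine.
    items = [("apple", 2), ("pear", 3)]
    expected = 0
    for _, qty in items:
        if qty > 0:
            expected += qty
    assert total_quantity(items) == expected


def test_total_simple():
    # No branching, one explicit expected value: easier to trust and review.
    assert total_quantity([("apple", 2), ("pear", 3)]) == 5
```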
Summary
This article covered the concept of false positive and false negative results in the field of software testing. As we discussed, a false negative result is worse than a false positive, since a bug stays in the code indefinitely. We introduced a technique called mutation testing, with which test engineers can identify false negatives in their test suites. Finally, we listed some best practices to avoid false positive and false negative results in your tests.