This is Post 4 of 8 in the Eight Bad Testing Ideas series.
Bad testing idea: Assuming that your tests are only temporary.
The famed Y2K Bug has passed from cliché into cultural legend.
Decades ago, when storage space was at a premium, many developers stored years as “17” or “33” or “97” to save two characters. Eventually, however, the assumption that “17” meant “1917” was no longer valid. Hilarity ensued, except where billions were spent to make sure that hilarity did not ensue.
The Y2K Bug did not occur because software developers did not realize that someday dates beginning in “20” would be needed. It occurred because they did not realize that their software would still be in use so many years later.
This reasoning is hard to fault. What, realistically, is the likelihood that your web application for queueing certain kinds of ticket-processing requests will still be in use in 2046?
But the same mistake is easy to make in the context of test development: assuming that tests won’t last forever.
Somebody wrote this test to verify one particular thing, under one particular set of circumstances, as the result of a one-time fix:
Given the user logs into the system
When the user creates an invoice
And the user links an invoice to a purchase order
Then the system should properly apply the PR 771 fix and generate a “REQ_LINK_UPDATE” database touch with ACN “DB_TEST” applying condition code “-1” and transaction log limiter “51927374500”.
It is unlikely that the next poor test engineer who happens across this test, two years later, will have the slightest idea what it is supposed to do, even if they are familiar with all the systems in question. The magic numbers and magic constants, though they made perfect sense to the original developer, have rendered the test incomprehensible.
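For contrast, here is a minimal sketch of the same check with the magic values named and commented. Python is used only for illustration; the system under test is a stub, and every function and field name here is invented, so only the literal values come from the scenario above.

```python
# Hypothetical sketch: the same check as the scenario above, but with
# the magic values named and the missing context recorded in comments.
# The system under test is stubbed out; all names are illustrative.

PR_771_CONDITION_CODE = "-1"        # condition code applied by the PR 771 fix
PR_771_LOG_LIMITER = "51927374500"  # transaction log limiter set by PR 771
TEST_ACN = "DB_TEST"                # ACN reserved for test runs
EXPECTED_TOUCH = "REQ_LINK_UPDATE"  # database touch emitted on invoice/PO link


def link_invoice_to_purchase_order(acn):
    """Stub standing in for the real system; returns the database touch record."""
    return {
        "touch": EXPECTED_TOUCH,
        "acn": acn,
        "condition_code": PR_771_CONDITION_CODE,
        "log_limiter": PR_771_LOG_LIMITER,
    }


def test_pr_771_invoice_po_link_touch():
    """Regression test for the PR 771 fix: linking an invoice to a purchase
    order must generate a REQ_LINK_UPDATE touch carrying the fix's
    condition code and transaction log limiter."""
    touch = link_invoice_to_purchase_order(TEST_ACN)
    assert touch["touch"] == EXPECTED_TOUCH
    assert touch["acn"] == TEST_ACN
    assert touch["condition_code"] == PR_771_CONDITION_CODE
    assert touch["log_limiter"] == PR_771_LOG_LIMITER


test_pr_771_invoice_po_link_touch()
```

The values are the same; the difference is that two years from now, the comments and the docstring answer the question the original scenario could not: why these values matter.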
Depending upon your organization’s release cycle, you may push out new features quickly. This can, in turn, lead you to push out new automated tests just as quickly. In time, quick one-off tests for one particular bugfix, or specific validations of one particular use case, accumulate and clog up your test suite:
- Feb 17 01:15 authentication_scripts.txt
- Feb 23 04:44 data_entry_scripts.txt
- Mar 02 12:03 file_reporting_series_1_scripts.txt
- Mar 09 02:33 file_reporting_series_2_scripts.txt
- Mar 18 10:32 774.txt
- Mar 20 12:39 patch_for_bug_4412_script.txt
- Mar 28 12:03 quick_update_for_integration_problem_script.txt
- Mar 29 11:19 backup_jobs_scripts.txt
- Mar 30 11:20 brads_filesystem_fix_script.txt
- Apr 08 03:09 script_for_665.txt
- Apr 10 12:02 json_import_scripts.txt
- Apr 11 08:30 script_for_665a.txt
- Apr 29 02:49 792.txt
It may be hard to reconstruct, later, what pull request #774 was for. What if it fixed something vital? Is it obvious that 774.txt contains a must-pass core regression test? After all, it was a temporary fix that was supposed to be replaced by something more substantial. When the developer got some free time.
One final temptation to write tests for the short term is the Next Big Thing. The current system is temporary. It’s being replaced by the XYZ Project next year, which is much better and will solve all of the organization’s problems. So only small, temporary patches are necessary, just to keep it going in the interim. They don’t need to be systematic or documented. It doesn’t matter that much.
Just as soon as the new software is finished, nobody will use the old one anymore, right?
* Avoid magic numbers, magic constants, and over-specific test conditions. The next test engineer will thank you.
* Avoid specific one-time temporary tests for one particular case or fix. These accumulate into a grab bag of edge cases and over-particular validations. If it’s worth writing a test for, it’s worth writing, and documenting, a permanent test for.
* Avoid assuming that writing permanent, quality tests isn’t necessary in one particular case because future developments will render them unnecessary.
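As a sketch of the last two points: instead of a one-off script named after a ticket number, a permanent regression test can carry its own history in its name and docstring. Everything below is hypothetical; the behavior under test and its history are invented stand-ins used only for illustration.

```python
# Hypothetical sketch of a one-off script promoted to a permanent,
# documented regression test. The behavior and history are invented;
# the point is that the test explains itself to the next engineer.

def normalize_backup_path(path):
    """Stub for the system under test: collapse duplicate slashes in a path."""
    while "//" in path:
        path = path.replace("//", "/")
    return path


def test_backup_path_duplicate_slashes():
    """Permanent regression test.

    History: began life as a quick one-off script for a filesystem fix
    (think 'brads_filesystem_fix_script.txt'). Promoted to the permanent
    suite because backup jobs depend on this behavior; it must pass on
    every release.
    """
    assert normalize_backup_path("/backup//daily///job") == "/backup/daily/job"


test_backup_path_duplicate_slashes()
```

A descriptive test name and a docstring recording where the test came from cost a minute to write, and they are exactly what 774.txt was missing.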