Software Testing Is Not Hardware Testing

Best practices for medical device HW testing are not best practices for SW testing

Medical device development teams often ask me for help with software V&V. Even though they have their hardware development under control, these teams keep running into problems with software testing and the corresponding compliance documentation.  Frequently, it's because their company's quality system, explicitly or inadvertently, treats software testing the same as hardware testing.

But SW testing is special. There are four key differences between HW and SW testing that have important implications for quality system procedures:

  1. Test samples
  2. Number of tests
  3. Repeated testing
  4. Varieties of SW Verification

1 – Test Samples

This first area of difference is perhaps the most obvious–the number of test samples needed. 

When planning hardware design verification, one of the most important questions to answer is how many samples we need to test.  The number of hardware test samples is based on the sample-to-sample variation, the error in test measurements, the degree of risk associated with the design feature being tested, and the cost of the test samples.  For software, we only need to test a single sample (usually generated from an automated build).
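For context on how those hardware sample sizes are typically derived, here is a minimal sketch of one common statistical approach, the zero-failure "success-run" calculation for attribute (pass/fail) testing. Nothing here is specific to any particular device; it just shows why hardware sample counts add up quickly:

```python
import math

def success_run_sample_size(reliability: float, confidence: float) -> int:
    """Zero-failure sample size: smallest n with 1 - reliability**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Demonstrating 95% reliability at 95% confidence takes 59 passing samples.
print(success_run_sample_size(reliability=0.95, confidence=0.95))  # -> 59
```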

This difference has big implications for test management.  For example, if a team planning design verification of a hardware assembly needs 8 weeks and $500K to build the necessary set of test samples, they're going to be very careful before committing to the build.  They'll need to make sure that the design is finalized so the testing doesn't have to be repeated.  The software team, however, can create a new build every night for testing, and they can create as many copies of it as needed for tests performed in parallel.  This gives the software team far more flexibility in planning and iterating their software development and testing.

2 – Number of Tests

Working with dozens of medical device development projects, I've found that software testing usually consists of a large number of relatively simple tests, whereas hardware testing tends to consist of a small number of complex tests.  This leads to two very different approaches to test management and some additional regulatory requirements for software testing.  The software team needs a robust system to document links from software requirements to software tests (maintain a trace matrix) to ensure all requirements are tested.  They also need a system for tracking software defects to ensure that every software bug is tracked to resolution.  FDA guidances explicitly call for software traceability and software defect documentation in regulatory submissions but don't require the same for hardware.
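At its core, a trace matrix is just a mapping from requirements to the tests that verify them. A minimal sketch (the requirement and test IDs are hypothetical) shows how coverage gaps can be flagged automatically:

```python
# Minimal traceability sketch; the requirement and test IDs are hypothetical.
trace_matrix = {
    "SRS-001": ["TC-101", "TC-102"],  # requirement ID -> verifying test case IDs
    "SRS-002": ["TC-103"],
    "SRS-003": [],                    # no linked test yet: a coverage gap
}

# Flag any requirement that has no verifying test before sign-off.
untested = [req for req, tests in trace_matrix.items() if not tests]
if untested:
    print(f"Requirements without test coverage: {', '.join(untested)}")
```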

3 – Repeated Testing

Software testing is much more likely to need repeating than hardware testing because the software is more likely to be modified. Additionally, the complex interdependencies in software make it inherently more difficult to isolate the impact of changes, leading to extensive software regression testing.  Even a very small change to the code base can require repeating a significant portion of the software tests, well beyond the tests covering what was changed.

This need for repeated software testing provides a strong incentive for investing in test automation.  Even if the automated tests don’t cover every single software function (most software teams need to utilize a combination of manual and automated testing), they are the quickest and least expensive to perform. Consequently, automated tests are an effective tool for rapidly identifying new bugs before more expensive manual tests are performed.
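To make this concrete, here is a sketch of the kind of small, fast automated check that regression suites are built from. The alarm function and its limits are hypothetical; the example just shows a parametrized boundary-value test written for pytest:

```python
# test_alarm_limits.py -- run with: pytest test_alarm_limits.py
import pytest

def is_alarm_active(heart_rate: int, low: int = 40, high: int = 150) -> bool:
    """Hypothetical alarm rule: alarm when heart rate leaves [low, high]."""
    return heart_rate < low or heart_rate > high

@pytest.mark.parametrize("hr,expected", [
    (39, True),    # just below the low limit
    (40, False),   # boundary value: the low limit itself is in range
    (150, False),  # boundary value: the high limit itself is in range
    (151, True),   # just above the high limit
])
def test_alarm_limits(hr, expected):
    assert is_alarm_active(hr) == expected
```

Tests like this run in milliseconds, so repeating the entire suite after every change costs almost nothing once the automation is in place.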

4 – Varieties of SW Verification

The last key difference is the nature of design verification itself. While hardware design verification usually means testing the hardware against the hardware requirements, software design verification encompasses much more than just testing the software against the software requirements.  Software verification is multi-faceted, spanning multiple types of software testing as well as non-test verification activities.

This broader approach to software verification arose decades ago from experience with repeated software failures in medical devices on the market.  Even though those devices had undergone thorough software testing at the end of development, they still exhibited software defects in clinical use. The FDA and other regulators concluded that thorough testing alone is not sufficient to claim that medical device software is fully validated; validation also depends on verification activities during each stage of development.

To summarize, software teams now need to include steps to prevent defects throughout development (in addition to identifying and correcting defects through rigorous testing at the end of development).  And both FDA software guidances and the IEC 62304 medical device software standard describe multiple non-test activities for software verification, such as code reviews and design reviews. 

The table below summarizes software V&V in 4 broad categories from the lowest to the highest level: SW Unit Verification, SW Integration Testing, SW System Verification, and SW Validation*.  

These categories provide a good general framework for understanding regulatory requirements for software V&V. However, every product team still needs to carefully identify which test methods are most appropriate for their particular project.  Experienced SW test engineers will know that there are many more varieties of SW testing (functional testing, automated regression testing, UI testing, etc.) than shown in this table. However, I believe this 4-level framework is still a valuable tool for understanding how all those types of SW testing fit within the regulatory requirements.

| Category | Responsible (typical group) | Testing Goals & Examples |
|---|---|---|
| SW Unit Verification | SW Developers | Demonstrate that a particular section of code functions properly. Examples: SW unit testing, code reviews, static analysis |
| SW Integration Testing | SW Test Group | Demonstrate that SW units function properly together and with the associated hardware. Example: testing combinations of SW modules and HW |
| SW System Verification | SW Test Group | Demonstrate that the software fulfills all software requirements. Example: testing SW in full product (verification against SW requirements) |
| SW Validation* | Clinical Group, Human Factors Group | Demonstrate that the software meets user needs in a production environment. Example: testing full product in a clinical study or usability study with representative users |
*Note: This defines “Software Validation” as part of the overall design validation testing, but some medical device companies define “Software Validation” as just testing software against software requirements.

Managing Medical Device V&V Testing

Given these considerable differences between hardware and software testing, your company’s quality system procedures might require significant adjustment.  Here are some recommendations to fully support the specialized needs of software testing.

Develop test documentation specialized for software

First, don’t use a “one-size-fits-all” approach to test documentation across hardware, software, and system testing.  Define test documentation requirements and templates that are optimized for software testing.  The sheer number of software tests means that you must carefully weigh the documentation requirements in your quality system for software versus hardware testing.  Test documentation is likely to be a small fraction of the effort for hardware testing but can become a very large burden for software testing, particularly if it becomes a bottleneck for automated software testing.  Consider ways to efficiently compile, release, and repeatedly execute long software test protocols (e.g., avoid long lists of approvers on software test protocols).

Plan for automated software testing

Ensure your quality system supports automated testing and that inefficiencies in test documentation don’t negate the advantages of automated testing.  For example, consider ways to enable automated generation of test documentation (from validated software tools).
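As a sketch of what automated documentation generation can look like, the script below turns a hypothetical machine-readable results file into a reviewable test report. In practice, the tool and its output format would need to be validated per your quality system:

```python
import json
from datetime import date

# Hypothetical results payload; a real automated run might emit JSON like this.
results = json.loads("""
[
  {"id": "TC-101", "name": "Alarm low limit",       "status": "PASS"},
  {"id": "TC-102", "name": "Alarm high limit",      "status": "PASS"},
  {"id": "TC-103", "name": "Alarm silence timeout", "status": "FAIL"}
]
""")

# Render a simple text report suitable for review and archiving.
lines = [f"Software Test Report ({date.today().isoformat()})", ""]
lines += [f"{r['id']}  {r['status']:4}  {r['name']}" for r in results]
passed = sum(r["status"] == "PASS" for r in results)
failed = sum(r["status"] == "FAIL" for r in results)
lines += ["", f"Totals: {passed} passed, {failed} failed"]
print("\n".join(lines))
```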

Plan for repeated software testing

Expect the software team to be performing verification activities throughout software development, not just at the end of development.  Make it straightforward and flexible to repeat some or all software testing by streamlining your test management processes and utilizing appropriate tools for requirements management, test management, and defect management. For example, an Application Lifecycle Management (ALM) System is a very valuable tool for managing changes to requirements and tests and maintaining traceability between them.  And establish a formal software configuration management system early in development to always know exactly which version of the software was tested.
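Capturing the exact software version under test can be as simple as recording the source-control revision with each test run. A minimal sketch, assuming a git-based workflow:

```python
import subprocess

def tested_software_version() -> str:
    """Record the exact source revision under test (assumes a git checkout)."""
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # Flag uncommitted local changes so the test record is unambiguous.
    dirty = subprocess.run(
        ["git", "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return commit + ("-dirty" if dirty else "")

print(f"Software under test: {tested_software_version()}")
```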

Lastly, establish a formal defect tracking system early in development to ensure that no software bugs are overlooked.  Pro tip: use your defect tracking system for all defects (software, hardware, and system defects)–it will make it much easier to manage V&V and finalize the product for launch.  

Design Freeze

Beware of instituting an across-the-board “design freeze” in your design control procedures and product development process.  Carefully consider the HW and SW testing timelines and make allowances for SW changes late in development, where the cost of change is still manageable (compared to HW changes).  The right balance between flexibility in freezing the design and the cost of change will depend on your product architecture and risk profile.

The Big Picture

Software testing is not the same as hardware testing, and your company’s V&V procedures need to account for those differences.  The overall goal in all of these QMS considerations is to make software testing more efficient (while still maintaining regulatory compliance) and to minimize the cost of design changes.

For more information and guidance on optimizing your quality system for software development (including support for agile methods), take a look at some of my presentations on agile design controls. Or contact me directly with your software V&V questions.
