
Testing with Hardware

Where practical, tests with hardware should be automated. Let’s look at three kinds of tests we can create that interact with the hardware:

  • Automated hardware tests

  • Partially automated hardware tests

  • Automated hardware tests with external instrumentation

Automated Hardware Tests

Your embedded hardware will probably have areas that are well-suited for automated testing. Other areas of the hardware will probably need special instruments to test hardware functionality. Where possible, you should write tests that help you learn what the hardware does and give you confidence that the hardware is working. As the inevitable hardware changes happen, your tests can help you see when a new hardware design has problems. You might find that some of these tests are valuable during production and may want to include some of them in a built-in test sequence that ships with the product.

Let’s say your design uses a Common Flash-Memory Interface (CFI)-compliant device. There are operations we can use to interrogate the flash memory device to see whether it is responding properly. For example, when a 0x98 is written to flash offset 0x55, a CFI-compliant flash memory device will respond with Q, R, and Y when offsets 0x10, 0x11, and 0x12 are read, respectively. The device must be reset after the query by writing a 0xff. This simple test, run on the target, will pass if the device is responding properly. It’s not a thorough test, but it is a quick sanity test.

TEST(Flash, CheckCfiCommand)
{
    /* Put the device into CFI query mode */
    FlashWrite(0x55, 0x98);

    /* A CFI-compliant device answers 'Q', 'R', 'Y' at offsets 0x10-0x12 */
    CHECK(FlashRead(0x10) == 'Q');
    CHECK(FlashRead(0x11) == 'R');
    CHECK(FlashRead(0x12) == 'Y');

    /* Reset the device to exit query mode */
    FlashWrite(0, 0xff);
}

In the following story, the tests that software developers put together to test-drive their code became an invaluable tool for their hardware developer colleagues.

No Fear of Change
by Randy Coulman, embedded software development engineer

We started a new project involving several custom hardware devices, all with an embedded processor and FPGA. We resolved to avoid problems we had in the past with buggy FPGA designs and fixes breaking other features. As with most projects with concurrent hardware/software development, we needed to start developing the software well before the hardware was available. We had a mostly complete spec for the hardware.

We decided that the best approach was to write tests for the hardware. We started with the most foundational feature of the hardware and wrote tests for it. We called these hardware acceptance tests. Since we didn’t have hardware yet, we also wrote a simple simulation that would pass the test. We continued on this way, writing tests for features of the hardware and simulations that passed them. We used TDD to write unit tests for our software as we went along as well.

When the hardware became available, the integration effort was much shorter and simpler than it had been in the past. We encountered three types of problems:

  • Places where certain language constructs worked differently on the embedded processor than they did on our development platform

  • Places where the compiler was generating memory access code that the FPGA didn’t support

  • Places where we misinterpreted the hardware spec

Initially, the hardware acceptance tests were for the software team’s benefit. Over time, the EEs came to trust our tests more and more. We had implemented a set of automated builds that would compile the software on the desktop platform and run all the tests against our hardware simulation. The builds would then install the latest software and FPGA binary on our target hardware, run the hardware acceptance tests (and some others as well), and report the results. At the request of the EEs, we added a “sandbox build” where they could drop in a new FPGA binary and have the automated tests run against it. Once it was passing all of the tests, they would then deliver the binary for us to integrate into the system. This allowed them to verify their work even if they were working in the middle of the night while the software engineers were home in bed.

These hardware acceptance tests have caught several regressions in the FPGA, allowing our EEs to upgrade their toolset, recompile their designs, and be confident that they didn’t break anything. Overall, the integration effort was much less than in the past, and we’ve been able to continue to add new features over time with great confidence.

Partially Automated Hardware Tests

The LedDriver example, completed in the previous chapter, shows how a hardware-dependent piece of code can be tested outside the target. But how do you know it really turns on the right LEDs? The LedDriver thinks it is controlling LEDs, but any number of mistakes could lead to software that thinks it is doing the right thing but actually does nothing or possibly something harmful. So, you have to make sure the last inch of code, right next to the hardware, is right.

What kind of problems could we have with the LedDriver? It can be initialized with the wrong base address. It is possible that you misread the spec and the bits are inverted. It is possible that the schematic and the silk screen don’t match. Maybe some of the connections on the board are not right. You’re not just testing software; you’re testing an embedded system. So, to be sure that the LedDriver really turns on the right LED at the right time, you have to look at it!

This is a good application for a partially automated test. A partially automated test displays a cue prompting the operator to manually interact with the system or view some system output. In this case, we would verify that a specific LED is either on or off. This would be repeated for each LED. This could also be part of built-in test capability shipped with the product or used to support manufacturing.
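Here is a minimal sketch of what such a test might look like, assuming the LedDriver interface from the previous chapter and a hypothetical OperatorConfirms() helper that prints a prompt on the test console and waits for the operator to answer y or n. The exact function names and the prompt mechanism will depend on your driver and on what console the target provides.

/* Hypothetical sketch of a partially automated LED test. OperatorConfirms()
   is not part of the LedDriver; it is a helper written for tests like this. */
#include <stdio.h>

static int OperatorConfirms(const char * prompt)
{
    int answer;

    printf("\n%s (y/n): ", prompt);
    fflush(stdout);

    answer = getchar();
    while (answer != '\n' && getchar() != '\n')
        ;                              /* discard the rest of the input line */

    return answer == 'y' || answer == 'Y';
}

TEST(LedDriver, OperatorConfirmsLedTwoTurnsOnAndOff)
{
    LedDriver_TurnOn(2);
    CHECK(OperatorConfirms("Is LED 2 the only LED that is lit?"));

    LedDriver_TurnOff(2);
    CHECK(OperatorConfirms("Are all LEDs now off?"));
}

Repeating this for each LED, or looping over the LED numbers, gives a quick visual confirmation that the base address, the bit ordering, and the board wiring all line up with what the driver believes.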

Manual tests are more expensive to run than automated tests, but they can’t be completely avoided. If we are effective at minimizing the code that depends on the hardware, it is likely that the hardware-dependent code will not be changed very often. Consequently, the manual acceptance test will likely not need to be rerun as often. You will have to decide when these are run. A new hardware revision or changes to the hardware-dependent code would trigger a manual retest. You might also consider short and long versions of the partially automated tests, running the short one regularly and the longer one less frequently or when necessitated by change.

Tests with External Instrumentation

Special-purpose external test equipment can help automate hardware-dependent tests. This story was ahead of its time.

In the late 1980s, we developed a digital telecommunication monitoring system that monitored 1.544 Mbps (T1) signals. A major part of the behavior depended on a custom application-specific integrated circuit (ASIC). The ASIC monitored the T1 signals in real time; our embedded software interrogated the ASIC and reported performance information on demand and alarm conditions as they occurred. Testing this system required specialized test equipment to generate T1 signals and inject errors found in the real world.

After tiring of the manual tests that involved poking buttons on the T1 signal generator, our test engineer, Dee, dug into the instrument’s capabilities and discovered it could be controlled through a serial port. Dee started writing test scripts. Her scripts instructed the external signal generator to corrupt the digital transmission with a specific bit error rate and then interrogated the system under test to see whether it reported the correct diagnosis. Incrementally, she automated her manual procedure, growing the automated test suite each day. The regression test suite grew to have wide functional coverage. This practice allowed Dee to report defects to the group within one day of their introduction.
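In today’s terms, one of those scripts might have looked something like the sketch below. The SignalGenerator_InjectBitErrors(), DelayMilliseconds(), and T1Monitor_ReportsAlarm() names are hypothetical stand-ins for the instrument’s serial command interface and the monitoring system’s query interface, not real APIs.

/* Hypothetical sketch: command the external T1 signal generator over its
   serial port, then ask the system under test what it observed. All of the
   function and constant names here are illustrative. */
TEST(T1Monitoring, ReportsInjectedBitErrors)
{
    SignalGenerator_InjectBitErrors(1e-6);   /* corrupt the T1 signal            */
    DelayMilliseconds(2000);                 /* give the monitor time to see it  */

    CHECK(T1Monitor_ReportsAlarm(BIT_ERROR_RATE_ALARM));
}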

Dee was not popular at first, walking from the lab smiling with a fresh list of bugs. After a while, the development team met the challenge and grew to rely on the morning bug report. A healthy competition developed; developers worked diligently to produce bug-free code. They used the test scripts before releasing to check their work. The quality improved.

We had zero defects reported from our installed base of thousands of units. Many other products developed at about the same time, by teams using manual test practices, had long bug lists and expensive field retrofits. This test investment had a great return.