Publication Date: 05/1/2008

Saving Time and Costs with Smart Testing
(Figure: Traditional Test Flow)

Test engineers are an unpopular bunch. The pinnacle of achievement for a test engineer is to confirm one of two unsavory truths: either the production department is doing a lousy job (no one wants to hear this) or the test process was a waste of time. In that case, why was the product launch delayed and the board redesigned for test?

The principal reason for this unpopularity and, indeed, for the actual demise of many test departments is the failure to integrate with the production process as a whole. Test is often seen as a (marginally) necessary evil at best and is the first under the ax when budget cuts threaten. But this is a short-sighted approach; a properly managed and, above all, integrated approach to testing should present a significant time and cost saving to production.

Why? What? How? Where? These are the four questions every test engineer should ask when considering, proposing or analyzing a test strategy for a product.

  • Why do I want to test this?
  • What exactly do I want to test?
  • How do I test this?
  • Where is the best place to test this?

Too often the answers to these questions have been considered to be obvious, self-evident. "To make sure the product isn't faulty", "Everything!", "Buy the biggest, most expensive piece of test equipment I can get away with", and "As soon as it comes into my department" are not suitable answers.

A "Smart" Approach
We need an intelligent, "Smart" approach to test. We need to minimize costs (in fact, test should save money, not consume it); we need to test only those things that can fail and that it makes sense to test at each stage; and we need to do all of this in the fastest, lowest-cost way we can.

Why do I want to test this? This is much more than preventing faulty product from going out of the door (or, in our isolated approach, out of the test department). We should be focusing on detecting both real and latent, hidden faults, rapidly feeding the defect back to production so that it doesn't happen again and, ultimately, ensuring that, when the customer gets his product, it functions exactly as per the specification for at least the warranty period!
"Smart" Test Flow.

The final three questions can be answered by looking at the production and test process as a whole, making some intelligent assumptions and analyses and then implementing an integrated plan. Let's take a look at a typical production and test flow, let's say one where the value of in-circuit test (or flying probe test), combined with board-level functional and system-level functional test, is still appreciated.

Fault-Laden Boards
The fresh, assumed fault-laden boards come out of the manufacturing section and are then seized by ICT-level test (whether bed-of-nails or flying probe). At this stage absolutely everything that it is possible to test is tested. For a complex product, in the case of bed-of-nails ICT, the fixtures cost a small fortune, take weeks to manufacture, and must often be thrown away if a simple re-spin of the board is required. In the case of flying probe in this simple, traditional model, test times are invariably incompatible with manufacturing cycle times, forcing greater and greater investment in test or, worse, non-intelligent, ad hoc compromises over what exactly is tested (finding open pins on all the ICs takes too long, so let's not bother and hope we don't get any).

Then, at the next stage, board-level functional again tests everything that it is possible to test. Most often the board designer provides the biggest input as to what should be tested here, and he generally has zero knowledge of what has already been tested at ICT, has little interest in considering how long it takes to perform his tests (of course we need to time the 30-minute timeout function; it's a critical parameter) and cares little about the costs of performing these tests (yes, we do need a quad-channel 2,000 THz oscilloscope; how else can you see all four of those 3 ps pulses?). Finally, in this scenario, we start from square one at system-level functional test. The published product specification is king here. Every item in that specification (including that 30-minute timeout) must be tested all over again.

And, if we should actually find any faults at either of the functional stages, heads will roll in the ICT section. Because, "we all know", it's much more expensive to find these faults at this stage. Were you guys in ICT sleeping?

Do we begin to see why test is an unpopular, isolated, empire-building section?

So, let's look at an alternative to all this departmentalization. In a "Smart Test" approach we still have ICT (now most likely performed by flying probe), board-level functional and system-level functional test. However, the difference is that each stage of test works only on those items, those faults, that it makes sense to test at that stage. Of course, such a strategy can only work if the actual fault level is very low, i.e. the final yield "if there were no test" is very high. But, in the 21st century, we can safely assume that, if you had a 50 percent yield on conventional printed circuit boards, you were not going to be in business long enough to find the time to read this article.

Moreover, temporary excursions to lower yields can be met with a re-tuning of the strategy to catch more of these faults earlier in the cycle; flying probe is particularly suitable for this due to its high flexibility.

Now, what do we test at each stage? At flying probe we test three classes of potential faults. We test for "catastrophic faults": those faults that would cause smoke or rubble if the board or unit were powered up. We test for faults that cannot reasonably be detected at functional test: items like protection devices or fail-safe circuits, those parts not considered part of the actual functionality of the product. Finally, we test for those failures that have traditionally been a problem on this type of design.

Known Problem Areas
Comments from technicians such as "We always get problems with the solder joints on IC5" should be listened to, both to improve the process and to ensure that this common fault does not go undetected to a later test stage.

At functional board test we devise a test that would fail if any component not tested at ICT were faulty. Of course this is not an easy task, but we may start with the design-generated full tests and then simply remove those tests that would definitely be detected at ICT. For this, of course, we need good test coverage information from ICT. We don't concern ourselves with diagnostics on failures. We don't want to spend precious time designing a test procedure that analyzes the failures and attempts to find the fault down to component level; that is the job of the return arrow on the diagram from functional test to ICT. At ICT we also have a full test program that can be run on the functional failures to diagnose the faults. Or, we can be really "Smart" here and feed back the information on the failing section from the functional test and only test the components in that section.
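The subtraction described above is essentially a set operation on test coverage lists. A minimal sketch, where all test, section and component names are hypothetical:

```python
# Derive the functional test list by subtracting ICT coverage from the
# design-generated full test list. Names below are purely illustrative.
full_tests = {"power_rails", "reset_circuit", "uart_loopback",
              "adc_linearity", "timeout_30min"}
ict_covered = {"power_rails", "reset_circuit"}  # from the ICT coverage report

functional_tests = full_tests - ict_covered
print(sorted(functional_tests))
# ['adc_linearity', 'timeout_30min', 'uart_loopback']

# The "Smart" feedback loop: on a functional failure, re-test at ICT
# only the components in the failing section, not the full program.
section_components = {"uart_loopback": ["IC5", "R12", "C3"]}
failing_section = "uart_loopback"
print(section_components[failing_section])  # ['IC5', 'R12', 'C3']
```

The hard part in practice is not the subtraction itself but keeping the ICT coverage report accurate enough to trust it.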

Then, finally, at system-level functional test, we perform those tests necessary for ensuring customer acceptance. We certainly expect nothing to ever fail at this stage. For a contract manufacturing organization this test setup is often supplied by the OEM customer. As such it often has limited or no diagnosis of failures, just a "Red light" or a "Green light". That works just fine, since we don't ever expect this test to fail.

How does this strategy help? What we have here is an integrated approach based on costs. We all know the "rule of tens" axiom, which states that a fault found at each successive stage (board stuffing, ICT, functional test, customer) costs ten times as much to rectify as at the previous stage. But, if we have a 99 percent yield and we would need $10 million worth of ICT to achieve 99.9 percent, do we really care if the one faulty IC that would be let through costs $10 to rectify at functional test instead of $1 at ICT?
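The economics of that question can be sketched as a back-of-envelope calculation. All figures below are hypothetical, chosen only to mirror the argument above:

```python
# Rule-of-tens trade-off: accept rare escapes to functional test, or
# spend heavily on ICT to catch them earlier. Figures are illustrative.
boards = 100_000
escape_rate = 0.01          # 1 - 0.99 yield: faults missed by basic ICT
cost_fix_functional = 10.0  # "rule of tens": one stage later costs 10x
cost_fix_ict = 1.0          # what the same fix would have cost at ICT

# Option A: let the escapes through and rectify them at functional test.
cost_escapes = boards * escape_rate * cost_fix_functional
print(cost_escapes)  # 10000.0

# Option B: invest $10M in extra ICT capability to reach 99.9 percent.
extra_ict_investment = 10_000_000
print(cost_escapes < extra_ict_investment)  # True
```

Even paying the full ten-times penalty on every escape, option A is three orders of magnitude cheaper than the investment, which is the article's point: optimize the whole flow, not each stage in isolation.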

By using a flexible flying probe test at ICT we save large amounts of money on fixtures, fixture storage and maintenance; we become a much leaner department; we can begin testing a new design or re-spin hours after the design is approved instead of weeks; and the test department becomes popular again!

Contact: SPEA America, 2609 S SW Loop 323, Tyler, TX 75701; 903-595-4433; fax: 903-595-5003
