Finished chips coming in from the foundry are subject to a battery of tests. For those destined for critical systems in cars, those tests are particularly intensive and can add 5 to 10 percent to the cost of a chip. But do you really need to do every single test?
Engineers at NXP have developed a machine-learning algorithm that learns the patterns of test results and figures out which subset of tests is really needed and which ones they could safely do without. The NXP engineers described the approach at the IEEE International Test Conference in San Diego last week.
NXP makes a wide variety of chips with complex circuitry and advanced chip-making technology, including inverters for EV motors, audio chips for consumer electronics, and key-fob transponders to secure your car. These chips are tested with different signals at different voltages and at different temperatures in a test process called continue-on-fail. In that process, chips are tested in groups and are all subjected to the complete battery, even if some parts fail some of the tests along the way.
“We have to ensure stringent quality requirements in the field, so we have to do a lot of testing,” says Mehul Shroff, an NXP Fellow who led the research. But with much of the actual manufacturing and packaging of chips outsourced to other companies, testing is one of the few knobs most chip companies can turn to control costs. “What we were trying to do here is come up with a way to reduce test cost in a way that was statistically rigorous and gave us good results without compromising field quality.”
A Test Recommender System
Shroff says the problem has certain similarities to the machine learning-based recommender systems used in e-commerce. “We took the concept from the retail world, where a data analyst can look at receipts and see what items people are buying together,” he says. “Instead of a transaction receipt, we have a unique part identifier, and instead of the items that a consumer would purchase, we have a list of failing tests.”
The NXP algorithm then discovered which tests fail together. Of course, what’s at stake in whether a buyer of bread will also want butter is quite different from whether a test of an automotive part at a particular temperature means other tests don’t need to be done. “We need to have 100 percent or near 100 percent certainty,” Shroff says. “We operate in a different space with respect to statistical rigor compared to the retail world, but it’s borrowing the same concept.”
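NXP has not published its algorithm, but the market-basket idea it borrows can be illustrated with a minimal sketch: treat each part's list of failing tests like a receipt, and flag a test as a removal candidate when its failures are always accompanied by failures of another test (the `redundant_tests` function, the confidence threshold, and the example test names here are all illustrative assumptions, not NXP's actual method).

```python
from collections import defaultdict

def redundant_tests(failure_logs, min_conf=1.0):
    """Given one set of failing test names per part, flag tests whose
    failures are always (at confidence >= min_conf) accompanied by a
    failure of some other test. Illustrative sketch only, not NXP's
    actual algorithm."""
    fail_counts = defaultdict(int)   # how often each test fails
    pair_counts = defaultdict(int)   # how often two tests fail together
    for failing in failure_logs:
        for a in failing:
            fail_counts[a] += 1
            for b in failing:
                if a != b:
                    pair_counts[(a, b)] += 1

    redundant = set()
    for (a, b), together in pair_counts.items():
        # Confidence of the rule "b fails => a fails". If it meets the
        # threshold, b's failures are (nearly) a subset of a's, so b
        # adds little information -- unless a itself was already dropped.
        if together / fail_counts[b] >= min_conf and a not in redundant:
            redundant.add(b)
    return redundant
```

With the strict default threshold of 1.0, a test is only flagged when every part that failed it also failed the covering test; the article's point about near-100-percent certainty corresponds to keeping that threshold at or very close to 1.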
As rigorous as the results are, Shroff says they shouldn’t be relied upon on their own. You have to “make sure it makes sense from an engineering perspective and that you can understand it in technical terms,” he says. “Only then, remove the test.”
Shroff and his colleagues analyzed data obtained from testing seven microcontrollers and applications processors built using advanced chipmaking processes. Depending on which chip was involved, they were subject to between 41 and 164 tests, and the algorithm was able to recommend removing 42 to 74 percent of those tests. Extending the analysis to data from other types of chips led to an even wider range of opportunities to trim testing.
The algorithm is a pilot project for now, and the NXP team is looking to expand it to a broader set of parts, reduce the computational overhead, and make it easier to use.