Notes from the Wired

Diagnostic Test Generation for Fault Localization in Printed Neuromorphic Circuits

May 14, 2026 | 639 words | 3min read

Paper Title: Diagnostic Test Generation for Fault Localization in Printed Neuromorphic Circuits

Link to Paper: https://past.date-conference.com/proceedings-archive/2026/DATA/1197.pdf

Date: 2026

Paper Type: Test Generation, Fault Detection, Fault Localization, Neuromorphic Computing, Neural Networks, Printed Electronics

Short Abstract: Printed electronics enable cheap, flexible, and lightweight devices, but their unreliable manufacturing process makes them prone to defects. This paper proposes a diagnostic testing framework for printed neuromorphic circuits that not only detects faults but also localizes them more effectively, achieving significantly higher diagnostic coverage and reducing undetectable subcircuits compared to traditional methods.

1. Preliminaries

1.1. Background: What are printed neuromorphic circuits?

Printed electronics are circuits made by printing conductive and semiconductive materials onto flexible substrates. Examples include smart labels, disposable sensors, and wearable patches.

Unlike silicon chips, they can be fabricated cheaply and on demand with additive processes. But the printing process is far less precise than silicon lithography, so device parameters vary widely and defects are common.

The paper focuses on printed neuromorphic circuits (pNCs): printed analog circuits that implement neural network computations directly in hardware. These are built from reusable analog primitives such as resistive weighted-sum (crossbar) stages and nonlinear activation subcircuits.

1.2. The reliability problem

Printed manufacturing creates defects such as opens and shorts in printed traces and large deviations in printed device parameters.

Because these circuits are analog and densely interconnected, a single defect shifts the behavior of the whole network rather than flipping one logic value. Worse, different defects can produce nearly identical output deviations, so a wrong output says little about where the defect sits.

So debugging is hard.

1.3. Traditional ATPG vs this paper

Traditional chip testing uses automatic test pattern generation (ATPG) against a structural fault model (e.g., stuck-at faults).

ATPG generates inputs that reveal faults.

Example:

If input x = 1011 produces a wrong output, that test detects a fault.

But detection alone is insufficient.

Two different faults might produce exactly the same wrong output. Then you know that the circuit is faulty, but not which part of it caused the failure.

This paper extends ATPG into diagnostic test generation. The goal becomes:

Generate test inputs that make different faults behave differently.
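As a toy illustration (invented here, not from the paper): two hypothetical faults can be detectable at one test input yet indistinguishable there, while a different input separates them.

```python
# Toy illustration (not from the paper): detection vs. diagnosis.
# A "circuit" maps a test input x to an output; faults perturb it.

def fault_free(x):
    return 2.0 * x

def fault_a(x):      # hypothetical fault: gain reduced
    return 1.0 * x

def fault_b(x):      # hypothetical fault: constant output offset
    return 2.0 * x - 1.0

x = 1.0
# Both faults are DETECTED: faulty outputs differ from the fault-free one.
assert fault_a(x) != fault_free(x) and fault_b(x) != fault_free(x)
# But they are NOT DISTINGUISHED: both produce the same output at x = 1.
assert fault_a(x) == fault_b(x)

# A diagnostic test input separates them:
x2 = 2.0
assert fault_a(x2) != fault_b(x2)
```

A detection-only test set might contain only x = 1 and still flag both faults; a diagnostic test set must also include an input like x = 2.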

2. Method

2.1. Setup

They assume a simulation model of the circuit that can be evaluated both fault-free and under injected faults.

They model faulty versions of the circuit as a family of response functions:

$$ \{ y_f(\cdot) \}_{f \in F} $$

where $F$ is the set of modeled faults and $y_f$ is the circuit output under fault $f$.

They want to find a compact set of test inputs:

$$ X = \{ x_1, x_2, \ldots, x_m \} $$

that:

  1. detect faults
  2. distinguish faults from different primitives
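The setup above can be sketched as follows. This is a minimal stand-in, not the paper's simulator: the fault-free response and the fault models (gain/offset perturbations of a tanh-like primitive) are my assumptions.

```python
import numpy as np

# Sketch of the setup (names and fault models are mine, not the paper's):
# each fault f yields a perturbed response function y_f(x).

def y0(x):
    """Fault-free circuit response (toy stand-in: a smooth nonlinearity)."""
    return np.tanh(1.5 * x)

def make_faulty(gain, offset):
    """Model a fault as a perturbation of a primitive's parameters."""
    return lambda x: np.tanh(gain * x) + offset

# Fault universe F: one faulty response function per injected defect.
faults = [make_faulty(g, o) for g, o in [(0.5, 0.0), (1.5, 0.3), (3.0, 0.0)]]

# A compact test set X = {x_1, ..., x_m}.
X = np.linspace(-1.0, 1.0, 4)

# Response matrix: rows = faults, columns = test inputs.
R = np.array([[yf(x) for x in X] for yf in faults])
print(R.shape)  # (3, 4)
```

Everything downstream (the objective and the coverage metric) only needs these response functions evaluated at the candidate test inputs.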

2.2. Objective function

They optimize:

$$ L(X) = L_{det}(X) + \lambda L_{loc}(X) $$

where $L_{det}$ rewards fault detection, $L_{loc}$ rewards fault discrimination, and $\lambda$ balances the two terms.

The detection part tries to maximize:

$$ d(y_x^0, y_x^f) $$

meaning the distance between the fault-free output $y_x^0$ and the faulty output $y_x^f$ for a test input $x$. If the outputs differ strongly, the fault is easy to detect.

In addition, they want faults from different primitives to produce distinguishable outputs. They maximize:

$$ d(y_x^f, y_x^{f'}) $$

when $f$ and $f'$ belong to different primitives. So even when two faults corrupt the output in similar ways, the generated tests try to make their behaviors visibly different.

For optimization, they use Adam, updating the test inputs with gradients of $L(X)$.
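A hedged sketch of this optimization, with my own simplifications: squared distance for $d$, toy tanh-based fault models, a numerical gradient instead of autodiff, and a hand-rolled Adam loop. The loss is negated so minimizing it maximizes detection plus $\lambda$ times localization.

```python
import numpy as np

# Sketch loosely following L(X) = L_det(X) + lambda * L_loc(X);
# d is squared distance here (my choice, not necessarily the paper's).

def y0(x):
    return np.tanh(1.5 * x)

FAULTS = [lambda x: np.tanh(0.5 * x),        # toy fault models
          lambda x: np.tanh(1.5 * x) + 0.3,
          lambda x: np.tanh(3.0 * x)]
LAM = 1.0

def loss(X):
    """Negative of (detection + lambda * localization), to be minimized."""
    det = sum(np.sum((y0(X) - yf(X)) ** 2) for yf in FAULTS)
    loc = sum(np.sum((fi(X) - fj(X)) ** 2)
              for i, fi in enumerate(FAULTS)
              for fj in FAULTS[i + 1:])
    return -(det + LAM * loc)

def num_grad(f, X, h=1e-5):
    """Central-difference gradient (stand-in for autodiff)."""
    g = np.zeros_like(X)
    for i in range(X.size):
        e = np.zeros_like(X)
        e[i] = h
        g[i] = (f(X + e) - f(X - e)) / (2 * h)
    return g

# Minimal Adam loop over the test inputs X themselves.
X = np.linspace(-0.5, 0.5, 4)
m, v = np.zeros_like(X), np.zeros_like(X)
b1, b2, lr, eps = 0.9, 0.999, 0.05, 1e-8
L0 = loss(X)
for t in range(1, 201):
    g = num_grad(loss, X)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    X -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
assert loss(X) < L0  # the test inputs now separate the faults better
```

In the paper's setting the gradient would come from a differentiable circuit model rather than finite differences; the structure of the loop is the same.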

2.3. Diagnostic coverage

They define a new metric, diagnostic coverage (DiagCov), over pairs of primitives.

A primitive pair is detectable if some test reveals its faults at the output, and distinguishable if some test drives the two primitives' fault responses apart.

Then:

$$ \text{DiagCov} = \frac{ \text{distinguishable primitive pairs} }{ \text{detectable primitive pairs} } $$
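A minimal sketch of computing this metric from a response matrix. Assumptions are mine: each fault stands in for one primitive, and a fixed tolerance `TAU` decides when two responses "differ" (the paper's exact criterion may be different).

```python
import numpy as np

# Sketch of DiagCov (threshold TAU and per-fault granularity are my
# assumptions): a fault is detectable if its responses differ from the
# fault-free ones on some test; two detectable faults are distinguishable
# if their responses differ on some test.

TAU = 1e-3

def diag_cov(r0, R):
    """r0: fault-free responses over the tests; R: one row per fault."""
    detectable = [i for i, r in enumerate(R)
                  if np.max(np.abs(r - r0)) > TAU]
    pairs = [(i, j) for k, i in enumerate(detectable)
                    for j in detectable[k + 1:]]
    dist = sum(np.max(np.abs(R[i] - R[j])) > TAU for i, j in pairs)
    return dist / len(pairs) if pairs else 1.0

r0 = np.array([0.0, 0.5, 1.0])
R = np.array([[0.2, 0.5, 1.0],   # fault 1: detectable
              [0.2, 0.5, 1.0],   # fault 2: identical responses to fault 1
              [0.0, 0.9, 1.0]])  # fault 3: detectable and distinct
cov = diag_cov(r0, R)
print(cov)  # 2 of 3 detectable pairs distinguishable -> 0.666...
```

A detection-only test set can score high fault coverage while leaving DiagCov low, which is exactly the gap the paper targets.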

3. Results

They evaluate on printed neuromorphic circuit benchmarks.

Compared against:

  1. sensitivity-based data-driven testing
  2. a previous ATPG method (detection only)

Their method achieved significantly higher diagnostic coverage and left fewer undetectable subcircuits than both baselines.


Some Thoughts

