Notes from the Wired

Analog Printed Spiking Neuromorphic Circuit

June 29, 2025 | 1,205 words | 6min read

Info

It’s been a while since I listened to the lecture on Computer Organization, which covers circuits and related topics. As a result, I’m a bit rusty, and it’s more likely than usual that I got something wrong in this paper. Please keep that in mind while reading.

Paper Title: Analog Printed Spiking Neuromorphic Circuit
Link to Paper: https://core.ac.uk/download/pdf/596774544.pdf
Date: March 25, 2024
Paper Type: Neuromorphic Computing, SNN, Printed Electronics, Circuit Design
Short Abstract:
This paper introduces a novel design for an Analog Printed Spiking Neural Network (P-SNN) using printed electronics, aimed at creating a low-power and efficient neuromorphic system.

1. Introduction

Modern devices, such as small robots, IoT devices, and wearable technology, all require power-efficient and low-cost solutions. Traditional silicon-based circuits often fail to meet these requirements.

An alternative to these circuits could be neuromorphic computing. Neuromorphic computing uses Spiking Neural Networks (SNNs), which, in contrast to traditional Artificial Neural Networks (ANNs), are inspired by the biology of the brain and neurons. SNNs use spikes, making them event-driven and therefore more power-efficient, as they don’t require continuous energy consumption.

Additional research has been done on Printed Analog Artificial Neural Networks (P-ANNs), which, as the name suggests, use analog instead of digital signals and are printed directly as circuits.

The authors propose to combine the benefits of P-ANNs with SNNs, thereby introducing Printable Spiking Neural Networks (P-SNNs). To enable this, they design a special printable circuit suitable for implementing SNNs.

2. Background

Printed Electronics (PE) is an additive manufacturing process in which layers of materials are deposited on a substrate to create circuits and, eventually, entire devices. PE offers several advantages: it is inexpensive, flexible (i.e., compatible with a variety of materials), and highly scalable.

Two common printing techniques are inkjet printing and screen printing.

Printed Artificial Neural Networks (P-ANNs) are neural networks emulated using circuits composed of resistors and transistors created through printing techniques. The resistors in these networks are implemented using a structure known as a crossbar.
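A crossbar computes a matrix-vector product in analog: each input voltage drives a row, and each column current is the conductance-weighted sum of the inputs (Ohm's law plus Kirchhoff's current law). A minimal numeric sketch with made-up component values, not taken from the paper:

```python
import numpy as np

# Hypothetical 3x2 crossbar: conductances G[i, j] = 1/R[i, j] act as weights.
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.4, 0.1]])  # siemens (illustrative values)

V_in = np.array([0.3, 0.7, 0.5])  # row input voltages (volts)

# Each column current is the weighted sum of the row voltages:
# I_j = sum_i G[i, j] * V_in[i]  -- a matrix-vector product done in analog.
I_out = G.T @ V_in
```

This is why crossbars are attractive for neural networks: the multiply-accumulate happens "for free" in the physics of the resistors.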

3. Printed Spiking Neural Networks (P-SNN)

3.1 Implementation of a Printed Spiking Neuron

A typical P-SNN neuron consists of three main components: the synapses, the charge network, and the reset and discharge network.

The synapses can be modeled with the following equation:

$$ \frac{V_g^1}{R_w^0} + \frac{V_g^1 - V_{in}^1}{R_w^1} + \cdots + \frac{V_g^1 - V_{in}^N}{R_w^N} = 0 $$

Where \(V\) denotes a voltage and the \(R_w^i\) are the resistors (i.e., the synaptic weights).
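The equation is Kirchhoff's current law at the gate node, and it can be solved directly for \(V_g^1\): the node voltage is the conductance-weighted average of the inputs. A small sketch with illustrative resistor values (not the paper's):

```python
import numpy as np

def gate_voltage(v_in, r_w, r_w0):
    """Solve the synapse KCL equation for V_g^1.

    v_in : input voltages V_in^1 .. V_in^N
    r_w  : synaptic resistors R_w^1 .. R_w^N (the weights)
    r_w0 : grounding resistor R_w^0
    """
    v_in, r_w = np.asarray(v_in), np.asarray(r_w)
    num = np.sum(v_in / r_w)               # current injected by the inputs
    den = 1.0 / r_w0 + np.sum(1.0 / r_w)   # total conductance at the node
    return num / den

# Example: two inputs with hypothetical component values.
v_g = gate_voltage(v_in=[1.0, 0.5], r_w=[10e3, 20e3], r_w0=100e3)
```

Note how a larger \(R_w^i\) shrinks that input's conductance \(1/R_w^i\) and thus its contribution, which is exactly the "higher resistance means weaker weight" behavior described below.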

3.2 Understanding the Circuit

**Synapses** Each neuron has multiple synapses, represented by inputs \(V_{in}^1, V_{in}^2, \ldots, V_{in}^N\). These inputs are encoded as voltages — the stronger the input signal, the higher the voltage. Each synapse includes a resistor \(R_w^i\), which acts as the synaptic weight. A higher resistance allows less current to pass through, reducing the influence of that input. The synapses are connected in parallel, effectively summing the weighted inputs. The resistor \(R_w^0\) grounds the synaptic network, ensuring the voltage is defined even when no inputs are active.

**Charge Network** The charge network includes a control gate \(V_g^1\), which behaves like a valve. The higher the input voltage, the more current flows from the supply voltage \(V_{dd}\) onto a capacitor \(C_{in}\), which represents the neuron’s membrane. The capacitor stores charge, thus encoding the neuron’s current activation level. A ground connection ensures a defined state even when the capacitor is empty and no inputs are active.

**Reset and Discharge Network** Another gate, \(V_g^2\), manages the reset mechanism. When the neuron fires, this gate closes, discharging the capacitor through a transistor \(M_2\) to ground. This resets the neuron and prepares it for the next input spike. The signal from the charge network is passed through an amplifier to produce \(V_{out}\), and then further amplified to produce \(V_{out}'\), making the output usable for the next stage in the network.
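Taken together, the charge and reset networks implement integrate-and-fire dynamics: the capacitor charges toward \(V_{dd}\) at a rate gated by the input, and crossing a threshold triggers a spike and a discharge. A toy discrete-time simulation of this behavior, with made-up parameter values rather than the paper's circuit constants:

```python
def simulate_neuron(v_in, v_dd=1.0, v_th=0.6, tau=10.0):
    """Toy integrate-and-fire dynamics mimicking the charge/reset networks.

    Each step, the capacitor voltage moves toward v_dd at a rate gated by
    the input; crossing v_th emits a spike and discharges the capacitor.
    All parameter values are illustrative, not taken from the paper.
    """
    v_c = 0.0      # capacitor (membrane) voltage
    spikes = []
    for v in v_in:
        # Charge network: the input gates current from v_dd onto the capacitor.
        v_c += (v_dd - v_c) * v / tau
        if v_c >= v_th:
            spikes.append(1)   # neuron fires
            v_c = 0.0          # reset network discharges the capacitor
        else:
            spikes.append(0)
    return spikes

# A constant strong input produces a regular spike train.
spikes = simulate_neuron([0.9] * 50)
```

The event-driven nature mentioned in the introduction is visible here: between spikes the output stays silent, and a weak input simply never drives the capacitor over threshold.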

3.3 Training of P-SNN

Now that we’ve defined the model and circuit and seen how it functions, we need to train the model — that is, determine the input weights of the synapses \(R_w^1, \ldots, R_w^N\).

However, training directly on the physical circuit is difficult, since backpropagation is not feasible on the hardware itself. Instead, we use surrogate training: we build a digital model of the circuit, train it on a GPU, and, once training is complete, print the circuit with the learned weights fixed.

How does this work? The process consists of the following steps:

1. Rebuild the circuit in simulation software

Note: At this stage, we’re not training the neural network; instead, we’re generating a dataset that will be used for training a surrogate model.

2. Train a surrogate model

The goal is to approximate the mapping:

$$ V_g(t) \longrightarrow V_{\text{out}}(t) $$

3. Use the trained neuron in a larger P-SNN architecture

4. Train the full P-SNN on target data

5. Deploy by printing the trained network
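The five steps above can be sketched end to end. In this sketch, a cheap stand-in function plays the role of the circuit-level simulation and a polynomial least-squares fit plays the role of the learned surrogate neuron; both are placeholders for illustration, not the paper's actual tooling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: "simulate" the circuit to collect a dataset V_g(t) -> V_out(t).
# This stand-in nonlinearity is purely illustrative.
def circuit_sim(v_g):
    return np.tanh(3.0 * (v_g - 0.5))

v_g = rng.uniform(0.0, 1.0, 200)
v_out = circuit_sim(v_g)

# Step 2: fit a surrogate model to the simulated data (a polynomial
# least-squares fit stands in for the learned neuron model).
coeffs = np.polyfit(v_g, v_out, deg=7)
surrogate = np.poly1d(coeffs)

# Steps 3-4 would compose many surrogate neurons into a full P-SNN and
# train it on target data; step 5 maps the learned weights back onto
# printable resistor values.
err = np.max(np.abs(surrogate(v_g) - v_out))
```

The key property is that the surrogate is differentiable, so standard backpropagation can flow through it during network training even though the physical circuit itself is not differentiable in any usable way.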

4. Experiments

4.1 Setup

4.2 Results

5. Conclusion

This is the first Printable Spiking Neural Network (P-SNN): a spiking neural network implemented fully as a printable analog circuit. It achieves accuracy comparable to existing methods such as P-ANNs, while offering significantly improved power efficiency.
