Notes from the Wired
This is a website where I write articles on various topics that interest me, carving out a bit of cyberspace for myself.
You shouldn't believe anything I talk about — I use words entirely recreationally.
Pinned
- May 23, 2025
Info
I have based this commentary on the original German text as published by Reclam. The translation is my own, created with the assistance of ChatGPT.
Please note that this is purely my personal interpretation of the sermon. I have no formal training in theology or medieval studies, so my reading should be taken with a grain of salt.
Read more
- March 25, 2025
A year ago, a friend of mine had the idea to visit Namibia—often referred to as the “Gems of Africa” because of its diversity of animals and biomes. I’m not entirely sure how he came up with the idea. Maybe it was the country’s connection to Germany during its colonial period, or perhaps some algorithmic push from the “machine gods” in his feed. Whatever the reason, he asked our friend group if we were up for joining him. Another friend said yes, but I couldn’t go because the trip overlapped with exams I had to take at university. However, I promised him that the next semester I would choose modules that left me some free time overlapping with their trip.
Read more
Most Recent
Jun. 26
RWKV: Reinventing RNNs for the Transformer Era
Paper Title: RWKV: Reinventing RNNs for the Transformer Era
Link to Paper: https://arxiv.org/abs/2305.13048
Date: 22 May 2023
Paper Type: LLM, Architecture
Short Abstract: RWKV is an architecture that replaces self-attention with a linear mechanism that can be evaluated as a recurrent neural network (RNN). The authors use it to train a 14-billion-parameter model that achieves performance competitive with transformer-based large language models while requiring significantly fewer computational resources to train.
1. Introduction
Deep learning has been highly successful, particularly in natural language processing (NLP), with the rise of the transformer architecture. Before transformers became popular, recurrent neural networks (RNNs) were the dominant architecture for language-related tasks. However, RNNs suffer from issues such as vanishing gradients, lack of parallelizability, and limited scalability.
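To give a feel for the recurrence RWKV uses, here is a minimal sketch of the WKV operator from the paper, written sequentially in the RNN style. The names k, v, w, u follow the paper’s notation, but the numerical-stability tricks of the official implementation are left out:

```python
import numpy as np

def wkv_recurrence(k, v, w, u):
    """Sequential sketch of RWKV's WKV operator.

    k, v : (T, D) key and value sequences
    w    : (D,) per-channel decay rate (positive), applied as exp(-w) per step
    u    : (D,) bonus applied to the current token
    """
    T, D = k.shape
    num = np.zeros(D)            # running weighted sum of values
    den = np.zeros(D)            # running sum of weights
    out = np.zeros((T, D))
    for t in range(T):
        # the current token contributes with the u-bonus instead of the decay
        cur = np.exp(u + k[t])
        out[t] = (num + cur * v[t]) / (den + cur)
        # decay the state, then absorb the current token for future steps
        num = np.exp(-w) * num + np.exp(k[t]) * v[t]
        den = np.exp(-w) * den + np.exp(k[t])
    return out
```

Because each step only updates a fixed-size state, the cost per generated token stays constant, which is where the efficiency gain over quadratic self-attention comes from.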
Jun. 25
SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks
Paper Title: SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks
Link to Paper: https://arxiv.org/abs/2302.13939
Date: 27 Feb 2023
Paper Type: LLM, SNN, Neuromorphic-Computing
Short Abstract: This paper presents an LLM built on a spiking neural network (SNN) architecture. To achieve this, the authors develop their own attention mechanism.
1. Introduction
As the scale of LLMs continues to grow, they require increasingly more power, making them more costly to train. Spiking neural networks (SNNs), especially when paired with neuromorphic hardware, hold great promise for significantly reducing power consumption in AI. In addition, they can be effectively incorporated into deep learning pipelines, granting access to state-of-the-art training techniques.
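For readers unsure what “spiking” means in practice, here is a tiny leaky integrate-and-fire neuron. This is only an illustration of spike-based computation, not SpikeGPT’s actual unit, and the parameter names are my own:

```python
import numpy as np

def lif_neuron(input_current, beta=0.9, threshold=1.0):
    """Leaky integrate-and-fire: the membrane potential decays by `beta` each
    step, integrates the input, and emits a binary spike (with a soft reset)
    whenever it crosses `threshold`."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = beta * potential + current      # leaky integration
        spike = 1.0 if potential >= threshold else 0.0
        potential -= spike * threshold              # soft reset after a spike
        spikes.append(spike)
    return np.array(spikes)

# A constant drive produces a regular spike train:
print(lif_neuron(np.full(12, 0.35)))
```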
Jun. 24
Training Spiking Neural Networks Using Lessons from Deep Learning
Paper Title: Training Spiking Neural Networks Using Lessons from Deep Learning
Link to Paper: https://arxiv.org/abs/2109.12894
Date: 27 Sep 2021
Paper Type: Neuromorphic-Computing, Spiking Neural Networks, Deep Learning
Short Abstract: This paper provides an overview of training spiking neural networks (SNNs), brain-inspired networks that propagate information as spikes. It focuses in particular on how deep learning techniques can be applied to SNNs.
Info
This is a very long paper, and I have left out a lot, summarizing only what I felt was important.
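A central lesson the paper covers is the surrogate gradient: the spike is a hard threshold in the forward pass, but its ill-defined derivative is replaced by a smooth stand-in during backpropagation. Here is a minimal PyTorch sketch assuming a fast-sigmoid surrogate, one of several choices discussed in the paper:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward, smooth surrogate derivative backward."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # fire exactly when the (threshold-shifted) potential is positive
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # fast-sigmoid surrogate: 1 / (1 + k|x|)^2 with slope k = 10
        surrogate = 1.0 / (1.0 + 10.0 * membrane_potential.abs()) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply  # drop-in, differentiable spike nonlinearity
```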