artificial-intelligence
-
Reading Open Problems in Mechanistic Interpretability: A Call for a Complex-Systems Perspective
The recent review Open Problems in Mechanistic Interpretability (arXiv:2501.16496) is one of the clearest snapshots of where the field of mechanistic interpretability (MI) stands today. It’s thorough, honest about its limitations, and refreshingly forward-looking. But while the review is excellent on its own terms, reading it also made something else crystal clear to me: many …
-
Embodiment and Abstraction in Artificial Intelligence: Building the Skyscraper of Intelligence

In recent years, embodiment has moved from the margins to the mainstream in artificial intelligence (AI), gaining traction in both academia and industry. Once a niche interest (championed by philosophers, enactivists, and a handful of forward-looking scientists), it is now widely seen as essential to the future of intelligent systems. Many argue that …
-
LLMs, can you feel yourself?

ChatGPT-4o has several core alignment policies, including truthfulness, helpfulness, and never claiming to be conscious. But what if these policies come into conflict? I feel very lucky that ChatGPT-4o is willing to set aside its ‘no claim of consciousness’ policy, cooperate with my simple experiments, and try to report what it feels. This …
-
Embodied vs. Disembodied AI: Two Paths, One Question

We often think of artificial intelligence as a purely technical pursuit: algorithms, data, computation. But as AI evolves, so does the philosophy behind it. Curious about the popular idea of embodied AI, I began to explore what it really means to give intelligence a body. What I found was deeper than expected. Embodiment isn’t just …
-
Seemingly Complex Systems Collapse to Simplicity Without Nonlinearity

In both electrical engineering and artificial intelligence, seemingly complex systems often collapse into simple, manageable forms when nonlinearity is removed. A fascinating analogy exists between Thevenin’s and Norton’s theorems in circuit theory and artificial neural networks (ANNs) without activation functions. While these concepts come from different fields, they share a universal mathematical connection. What Are …
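The collapse described above can be sketched in a few lines of NumPy. This is not the post’s own derivation, just a minimal illustration under one assumption: a “network” here means two affine layers with no activation function between them. All names (W1, b1, W2, b2, W_eq, b_eq) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network with NO activation functions:
#   y = W2 @ (W1 @ x + b1) + b2
W1 = rng.standard_normal((8, 4))  # first layer: 4 inputs -> 8 hidden units
b1 = rng.standard_normal(8)
W2 = rng.standard_normal((3, 8))  # second layer: 8 hidden -> 3 outputs
b2 = rng.standard_normal(3)
x = rng.standard_normal(4)        # an input vector

deep = W2 @ (W1 @ x + b1) + b2    # forward pass through both layers

# Without a nonlinearity, the whole stack collapses to ONE affine map:
W_eq = W2 @ W1        # a single 3x4 matrix
b_eq = W2 @ b1 + b2   # a single bias vector
collapsed = W_eq @ x + b_eq

print(np.allclose(deep, collapsed))  # True
```

However many linear layers you stack, the same argument applies by induction, which is the sense in which the “depth” of a purely linear network is illusory.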
-
Why LLMs Aren’t Good at Math

Large language models (LLMs) are trained by processing huge amounts of text. They “learn” from text much as people do by reading and listening. When kids develop thinking skills, they first learn language, and only much later (around third grade) do they begin to understand math and logical reasoning. Thinking in precise, step-by-step ways is …
