How “System 2” Is Born
TL;DR: Rather than treating System 1 and System 2 as distinct cognitive systems, I argue that “System 2” describes a behavioral profile associated with early-stage use of structured mental tools. I suggest this reframing may be useful for thinking about how reasoning-like behaviors emerge in language models during training. Continue reading
-
How a Kid Makes a Rubik’s Cube From Scratch
A Supervisor’s Perspective. It started with a bet. Zimo had been watching too many YouTube videos. I was trying to teach him a lesson: “It’s easy when you just watch, totally different when you actually do it.” So I made a 1:100 bet with him. If he could make a Rubik’s cube (because he had Continue reading
-
Embodiment and Abstraction in Artificial Intelligence: Building the Skyscraper of Intelligence

In recent years, embodiment has moved from the margins to the mainstream in artificial intelligence (AI), gaining traction in both academia and industry. Once a niche interest — championed by philosophers, enactivists, and a handful of forward-looking scientists — it is now widely seen as essential to the future of intelligent systems. Many argue that Continue reading
-
Religions and Science

In the beginning, human beings tried to make sense of the world around them. They observed, guessed, discussed, and formed explanations for the things they didn’t understand, like the stars, the weather, life, and death. These early efforts became what we now call religions. They weren’t just belief systems; they were thoughtful attempts to understand the Continue reading
-
Why Indexing Makes Search Faster: A First-Principle Explanation
Imagine you have an array of 1,000,000 elements and need to find a specific value. Without an index, the only way to locate the value is to scan each element one by one. In the worst case, this requires checking all 1,000,000 elements, which is very slow. Continue reading
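The contrast the post sets up can be sketched in a few lines. Below is a minimal Python illustration (the million-element array, the target value, and the helper names are my own, not from the post): a linear scan touches up to every element, while keeping the data in sorted order — the simplest form of an index — lets binary search halve the candidate range at each step, so about 20 comparisons suffice for a million elements (log2 1,000,000 ≈ 20).

```python
import bisect

# Illustrative data: 1,000,000 sorted even integers standing in for an indexed column.
data = list(range(0, 2_000_000, 2))
target = 1_234_568

def linear_search(items, value):
    """Without an index: check each element one by one (up to len(items) steps)."""
    for i, item in enumerate(items):
        if item == value:
            return i
    return -1

def binary_search(items, value):
    """With a sorted 'index': halve the search range each step (~log2(n) steps)."""
    i = bisect.bisect_left(items, value)
    if i < len(items) and items[i] == value:
        return i
    return -1

# Both find the same position; binary search just gets there in ~20 comparisons
# instead of ~600,000.
assert linear_search(data, target) == binary_search(data, target)
```

Real database indexes use B-trees rather than flat sorted arrays, but the principle is the same: structure turns a full scan into a logarithmic number of comparisons.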
-
Reproducibility
Initially, I thought the term ‘reproducibility’ meant the same thing in IT and science, but I discovered the two meanings are fundamentally different. In IT, reproducibility ensures identical results by controlling randomness, such as setting a fixed random seed. In science, however, true reproducibility requires consistency across different conditions, meaning that fixing a seed can actually reduce Continue reading
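The IT sense of the distinction above can be shown concretely. A minimal Python sketch (the function names and parameters are illustrative, not from the post): fixing the seed makes two runs bit-identical, which is the IT notion of reproducibility, while the scientific notion would instead ask whether a conclusion survives across many different seeds.

```python
import random

def sample(seed):
    """Draw five 'random' integers from a generator with a fixed seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 100) for _ in range(5)]

# IT-style reproducibility: same seed, byte-for-byte identical output on every run.
run_a = sample(seed=42)
run_b = sample(seed=42)
assert run_a == run_b

# Science-style reproducibility would vary the conditions instead: e.g. check
# whether a summary statistic is stable across many different seeds.
means = [sum(sample(s)) / 5 for s in range(10)]
```

The point the post gestures at is visible here: pinning the seed guarantees the first kind of reproducibility while removing the variation the second kind depends on.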
-
Teach AI to Think in Math

In our previous discussion on Why LLMs Aren’t Good at Math, we explored some limitations of large language models (LLMs) in mathematical reasoning. But what exactly makes math different from natural language skills like reading, writing, listening, and speaking? One insight is that “the essence of math is about symmetry.” Here, symmetry goes beyond just Continue reading
-
Why LLMs Aren’t Good at Math

Large language models (LLMs) are trained by processing huge amounts of text. They “learn” from text much as people learn through reading and listening. When kids develop thinking skills, they first learn language, and only much later—around third grade or so—do they begin to understand math and logical reasoning. Thinking in precise, step-by-step ways is Continue reading

