Stephen Wolfram introduced the idea of Computational Irreducibility, which says that some systems are so complex that there's no shortcut to predicting what they'll do; you have to go step by step, like running a full simulation. He suggests that in these cases, there's no way to simplify the process.
I love this idea, but things get interesting if we look at it through the lens of information theory. In information theory, something that's truly random has no patterns and can't be compressed; it's at maximum entropy, like rolling a fair die. But if we can find patterns, then we can compress the data, which means it's not totally random after all, like rolling an unfair die.
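The fair-versus-unfair die contrast can be made concrete with Shannon entropy. A quick sketch (the 0.5-weighted face on the unfair die is just an illustrative choice): the fair die hits the maximum of log2(6) ≈ 2.585 bits per roll, while any bias pulls the entropy below that, which is exactly the slack a compressor can exploit.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Fair die: every face equally likely -> maximum entropy, incompressible.
fair = [1/6] * 6

# Unfair die: one face comes up half the time -> a pattern to exploit.
unfair = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]

print(f"fair die:   {shannon_entropy(fair):.3f} bits/roll")    # 2.585
print(f"unfair die: {shannon_entropy(unfair):.3f} bits/roll")  # 2.161
```

The gap between the two numbers is the per-roll savings an ideal compressor could achieve on the unfair die's output.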
This leads to the key point: if a system has patterns, which most systems do unless they are completely random, then maybe we can find ways to simplify its computation. That means it's not fully irreducible.
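The link between patterns and compressibility is easy to demonstrate directly. A small sketch using Python's standard `zlib` (the specific byte sequences are arbitrary examples): patterned data shrinks dramatically under compression, while data with no detectable structure barely shrinks at all.

```python
import os
import zlib

# 10,000 bytes with no patterns (cryptographically random).
random_data = os.urandom(10_000)

# 10,000 bytes with an obvious repeating structure.
patterned_data = bytes(i % 7 for i in range(10_000))

# A general-purpose compressor finds and exploits the repetition.
print(len(zlib.compress(random_data)))     # close to 10,000: incompressible
print(len(zlib.compress(patterned_data)))  # a tiny fraction of 10,000
```

In this analogy, a compressor that shortens the data plays the same role as a shortcut that simplifies the computation: both succeed only where structure exists.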
So, while Wolfram's idea is powerful, it may rest on an assumption that's too strict: that each step of the process adds new, non-redundant information that cannot be predicted or compressed from previous steps. In reality, most complex systems contain patterns we can exploit, making them at least partly reducible.
[I woke up at 7 a.m. on a Sunday morning, grabbed my phone, and mumbled my vague idea to ChatGPT—and wonderfully, ChatGPT organized my thoughts into a readable article. Then, after I fully woke up, I was able to revise the idea further. What an amazing time to be alive.]

