“Consciousness may end up being found in very strange places.”
— Christof Koch
The foundational question in the philosophy of consciousness was posed by Thomas Nagel back in 1974: “What is it like to be a bat?”
Nagel argued that consciousness is defined by the subjective experience of existence: the inner, first-person awareness of being alive and cognizant.
He explained, “An organism possesses conscious mental states if and only if there is something that it is like to be that organism.”
Many have found this answer frustratingly circular: what exactly is this something?
David Chalmers later dubbed this question “the hard problem of consciousness,” because it exposes the gap between subjective experience and objective science.
However, in 2004, Giulio Tononi tackled Chalmers’s hard problem with a paper introducing a mathematical theory of consciousness: Integrated Information Theory (IIT).
He argued that consciousness is a mathematical attribute of physical systems — something that can be measured and quantified.
But can a system possess consciousness?
After conversing with computational neuroscientist Christof Koch, the hosts of the New Scientist podcast concluded that computers, as systems, could potentially attain consciousness if they could “integrate” the information they process.
And theoretically, almost anything could be considered a system: Even a rock may exhibit a hint of consciousness if its atoms form the correct structure (as demonstrated in the scientific documentary Everything Everywhere All at Once).
This led me to ponder: Ethereum is essentially a global computer, correct?
Furthermore, critics often describe Bitcoin as a mere pet rock.
So… if computers and rocks can possess consciousness, then surely blockchains can as well?
In fact, blockchains align with several aspects of IIT.
For instance, IIT suggests that a system can only possess consciousness if its current state reflects its entire history — similar to how memories shape one’s identity and each moment builds upon the preceding ones.
Blockchains like Ethereum operate in a similar fashion: The current “state” of a blockchain is influenced by its history, and each new block is entirely dependent on the previous ones.
This reliance on history imbues blockchains with a form of memory — and because numerous nodes agree on a unified version of reality, it also establishes a cohesive “now” (or “state”) that IIT identifies as a hallmark of consciousness.
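This dependence of each block on its predecessors can be sketched as a simple hash chain. The snippet below is a toy illustration of the principle, not Ethereum’s actual block format: because every block’s hash commits to the previous block’s hash, the latest hash effectively encodes the chain’s entire history.

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    # Each block's hash commits to the previous block's hash,
    # so the newest hash depends on everything that came before it.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

genesis = block_hash("0" * 64, "genesis")
block_1 = block_hash(genesis, "tx: alice -> bob")
block_2 = block_hash(block_1, "tx: bob -> carol")

# Rewriting any earlier block changes every later hash:
forged = block_hash(block_hash("0" * 64, "forged genesis"), "tx: alice -> bob")
assert forged != block_1
```

In a real blockchain the hashed data also includes timestamps, state roots, and more, but the structural point is the same: the current state is a compressed record of the whole past.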
Regrettably, IIT also stipulates that for a system to be conscious, it must possess “causal autonomy” — meaning its components need to internally influence each other rather than solely reacting to external inputs passively.
However, blockchains do not operate in this manner.
Instead, they rely on external inputs, such as users submitting transactions and validators appending blocks, to progress and evolve. The network’s nodes do not influence one another internally; they simply follow the same set of rules.
There is no intrinsic activity, no internal causation — not even the aimless movement of molecules found in an inert chunk of granite.
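That passivity can be made concrete. A node’s core logic is essentially a pure state-transition function: the new state is fully determined by the old state plus an external input, and nothing at all happens between inputs. The sketch below is a hypothetical toy ledger, not any real client’s code:

```python
from dataclasses import dataclass, field

@dataclass
class State:
    balances: dict = field(default_factory=dict)

def apply_tx(state: State, sender: str, receiver: str, amount: int) -> State:
    # A pure function: given the same state and the same external input,
    # every node computes the same next state. Between calls, the system
    # is inert -- there is no internal dynamics, no self-initiated change.
    balances = dict(state.balances)
    balances[sender] = balances.get(sender, 0) - amount
    balances[receiver] = balances.get(receiver, 0) + amount
    return State(balances)

s0 = State({"alice": 10})
s1 = apply_tx(s0, "alice", "bob", 3)
# If no transaction ever arrives, s1 stays s1 forever.
```

This is exactly what IIT means by lacking causal autonomy: the components never cause anything in each other; they only transform inputs handed to them from outside.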
Therefore, on the IIT consciousness spectrum, blockchains rank below rocks — and the “pet rock” jest may actually be a compliment to Bitcoin (or a slight to rocks).
But perhaps not for long!
In 2021, computer scientists (and spouses) Lenore and Manuel Blum collaborated on a paper outlining how consciousness could be engineered into machines.
Their framework perceives consciousness as a computable attribute — achievable through AI algorithms designed to create systems with the “causal autonomy” necessary for conscious experiences.
In this scenario, the AI itself would not be conscious, but a system utilizing it could be.
Imagine an AI-powered blockchain that not only executes code but also contemplates executing code.
Instead of static ledgers awaiting inputs, blockchains could become self-contained, “causally integrated” entities — resembling synthetic brains more than decentralized databases, possessing the internal autonomy crucial for consciousness according to IIT researchers.
This potential development could be revolutionary!
Such a system could potentially analyze its own security, detect anomalies in real-time, and make decisions like forking itself (perhaps following a period of introspective contemplation).
In essence, it would act not out of obligation, but out of understanding of the situation — both within itself and in the external environment.
While it may seem far-fetched, it is not beyond the realm of possibility.