Computational irreducibility

{{Short description|Concept proposed by Stephen Wolfram}}

{{Multiple issues|
{{Primary sources|date=April 2020}}
{{Original research|date=April 2020}}
{{More citations needed|date=May 2019}}
}}

'''Computational irreducibility''' is the idea that certain computational processes cannot be simplified: the only way to determine their outcome is to carry out each step of the computation explicitly. It is one of the main ideas proposed by Stephen Wolfram in his 2002 book ''A New Kind of Science'', although the concept goes back to studies from the 1980s.<ref>{{Cite web |date=2022-06-06 |title=Multicomputational Irreducibility—Wolfram Physics Bulletins |url=https://bulletins.wolframphysics.org/2022/06/multicomputational-irreducibility/ |access-date=2025-03-23 |website=bulletins.wolframphysics.org |language=en}}</ref>

== The idea ==

{{Expand section|date=January 2022}}
Many physical systems are complex enough that they cannot be effectively measured. Even quite simple programs can exhibit a great diversity of behavior. Therefore, no model can predict, using only the initial conditions, exactly what will occur in a given physical system before the experiment is conducted. Because of this problem of undecidability in the formal language of computation, Wolfram terms the inability to "shortcut" a system (or "program"), or otherwise to describe its behavior in a simple way, "computational irreducibility". The idea implies that there are cases in which theoretical prediction is effectively impossible. Wolfram states that several natural phenomena are normally computationally irreducible.<ref>{{Cite web |title=Stephen Wolfram: A New Kind of Science {{!}} Online—Table of Contents |url=https://www.wolframscience.com/nks/ |access-date=2025-02-03 |website=www.wolframscience.com |language=en}}</ref>

Computational irreducibility explains why many natural systems are hard to predict or to simulate efficiently. The Principle of Computational Equivalence implies that such systems are computationally as powerful as any designed computer, so no simpler predictor can systematically outrun their behavior.
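The canonical illustration in ''A New Kind of Science'' is the elementary cellular automaton rule 30, whose center column Wolfram conjectures to be computationally irreducible. The following sketch (an illustrative Python rendering, not code from the book) makes the point concrete: the only way it obtains row ''t'' is by actually performing all ''t'' updates.

<syntaxhighlight lang="python">
def step(cells, rule=30):
    """One synchronous update of an elementary cellular automaton on a ring."""
    n = len(cells)
    return [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(width=63, steps=30, rule=30):
    cells = [0] * width
    cells[width // 2] = 1              # a single "on" cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)      # no shortcut: every step is computed

run()
</syntaxhighlight>

Running it prints rule 30's familiar chaotic triangle; nothing in the rule's definition suggests a formula that would let one jump directly to a later row.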

== Implications ==

* There is no easy theory for any behavior that seems complex.
* Features of complex behavior can be captured by models with simple underlying structures.
* A system built from simple underlying structures can still exhibit overall behavior that cannot be described by any reasonably "simple" law, as the sketch after this list illustrates.
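To make the contrast concrete, the sketch below (illustrative Python; the choice of examples is ours rather than Wolfram's) compares a computationally reducible process, for which a closed-form shortcut exists, with the center column of rule 30, for which no shortcut is known:

<syntaxhighlight lang="python">
def position_after(x0, t):
    """Reducible: x(t+1) = x(t) + 3 has the closed form x(t) = x0 + 3*t,
    so step t is reached in O(1) time without simulating steps 1..t-1."""
    return x0 + 3 * t

def rule30_center_bit(t):
    """Conjecturally irreducible: no formula for bit t of rule 30's center
    column is known, so the accepted method is to run all t steps."""
    width = 2 * t + 3                  # wide enough that the edges never
    cells = [0] * width                # influence the center within t steps
    cells[width // 2] = 1
    for _ in range(t):
        cells = [(30 >> (cells[i - 1] * 4 + cells[i] * 2
                         + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return cells[width // 2]
</syntaxhighlight>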

== Analysis ==

Navot Israeli and Nigel Goldenfeld found that some less complex systems behaved simply and predictably, and thus admitted approximations. More complex systems, however, remained computationally irreducible and unpredictable. It is unknown what conditions would allow complex phenomena to be described simply and predictably.
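Their approach searches for coarse-grainings: projections that lump blocks of cells into supercells while keeping the resulting coarse dynamics closed. The sketch below is a simplified, brute-force Python rendering of that idea, not code from their paper, and the example projection at the end is hypothetical; whether any particular projection passes the test is exactly what such a search determines.

<syntaxhighlight lang="python">
from itertools import product

def step(cells, rule):
    """One synchronous update of an elementary cellular automaton on a ring."""
    n = len(cells)
    return tuple((rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                 for i in range(n))

def coarse(cells, block, proj):
    """Project each block of `block` fine cells onto a single coarse cell."""
    return tuple(proj[cells[i:i + block]] for i in range(0, len(cells), block))

def consistent(rule, block, proj, n_blocks=3):
    """A projection yields closed coarse dynamics on this small ring only if
    any two fine states with the same coarse image still agree, coarsely,
    after `block` fine time steps (time is rescaled along with space)."""
    n = block * n_blocks
    image = {}
    for fine in product((0, 1), repeat=n):
        key = coarse(fine, block, proj)
        x = fine
        for _ in range(block):
            x = step(x, rule)
        val = coarse(x, block, proj)
        if image.setdefault(key, val) != val:
            return False   # same coarse state, diverging coarse futures
    return True

# Hypothetical example: a 2-cell block is "on" only when both cells are on.
proj = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
print(consistent(146, 2, proj))
</syntaxhighlight>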

== Compatibilism ==

Marius Krumm and Markus P. Müller tie computational irreducibility to compatibilism.<ref>Marius Krumm and Markus P. Müller, "Computational irreducibility and compatibilism: towards a formalization", [https://arxiv.org/pdf/2101.12033.pdf arXiv:2101.12033].</ref> They refine the concept via the intermediate requirement of ''computational sourcehood'', which demands an essentially full and almost-exact representation of the features associated with the problem or process being represented, together with a full, no-shortcut computation. The approach simplifies the conceptualization of the issue via the ''no shortcuts'' metaphor, which can be analogized to cooking: obtaining the desired dish requires all the ingredients of the recipe as well as adherence to the "cooking schedule". This parallels the profound distinction between similarity and identity.

== See also ==

== References ==
{{Reflist}}