Natural Science of Computation, Philosophy of Science

Building Better Credence: The “Generalized Ellsberg Paradox” in Quantum Physics?

All paradoxes have one thing in common: their underlying world model has one or more assumptions which are mistaken.

In our notions of probability, I attempt to clarify an epistemological distinction between space and time, and a further distinction between finite and infinite games. This is relevant to Popper's demarcation problem, and I conjecture (what I call) "Ellsberg-type paradox" extensions to infinite but computable games. It is worth noting that cosmological models, i.e. models of the early universe with initial conditions, can be infinite but computable, although not all are.

I look at two types of probability: probabilities of observation, that is, the claim that risk can be approximated with probability when choosing from an urn with a known or unknown distribution; and probabilities of explanation, i.e. cosmological models. I believe this work contains valuable insights, shining light on where degrees of credence in scientific theories should be up-regulated or down-regulated.


The Church-Turing thesis told us that anything we regard as computable, such as language with a universal grammar distributed over minds, is computable with a Turing machine. I think the actual generator of the Ellsberg paradox, and of Popper's demarcation problem, is that you cannot apply Bayesianism to the behaviour of an unknown Turing machine (i.e., in this case, the urn game).


Knowledge is finite. Infinities and unknown underlying distributions describing physical phenomena cause problems for probabilities. It is worth considering an extension of so-called "finite" urn-type games, with their finite game geometries, to infinite games, and of preferences for or against ambiguity in bets, a la what I call the Ellsberg experiments, after Daniel Ellsberg's PhD thesis. In those experiments, people were asked to choose between two urns: one with a known probability distribution, and one that is ambiguous as to the distribution of outcomes.
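
As a minimal sketch of that choice (the win-on-red payoff and the uniform prior over the ambiguous urn's composition are my own assumptions, not Ellsberg's setup verbatim), the following shows that under any symmetric prior the two urns have the same expected value, so the observed strict preference for the known urn is exactly what the "paradox" is about:

```python
# Toy model of the two-urn Ellsberg choice (illustrative only).
# Urn A: known 50 red / 50 black. Urn B: 100 balls, unknown red/black split.
# Assumption: you win on drawing red, and you hold a uniform prior over the
# possible compositions of the ambiguous urn.

from fractions import Fraction

def win_probability_known():
    """Chance of winning (drawing red) from the known 50/50 urn."""
    return Fraction(1, 2)

def win_probability_ambiguous():
    """Expected chance of winning from the ambiguous urn under a uniform
    prior over its possible compositions (0..100 red balls)."""
    prior = Fraction(1, 101)
    return sum(prior * Fraction(k, 100) for k in range(101))

if __name__ == "__main__":
    print(win_probability_known())      # 1/2
    print(win_probability_ambiguous())  # 1/2 as well
    # Under any symmetric prior the two bets have identical expected value,
    # yet Ellsberg's subjects strictly preferred the known urn for bets on
    # red AND for bets on black: ambiguity aversion that no single prior
    # can rationalize.
```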


This philosophical question is two-fold: how should you reason in an infinite setting as an individual (Turing machine) when Bayesianism breaks down; and further, how can we notice where and how we are assigning space-or-time cutoffs or delineations in the game (boundaries/payoff matrices), and evaluate either criterion in our latest models, in a clear enough fashion to assess the relative credence of rival scientific models?


The broader claim is that you could apply Bayesianism to the behavior of a specific, known Turing machine. Because of the halting problem, you cannot coherently apply Bayesianism across multiple unspecified Turing machines (coherently computable games) in the same ontology; this is an extension of Ellsberg's point into the framework of computability. Recasting the discussion into 'computable scenarios', the world where Bayesianism 'works' (Aumann's agreement theorem) is one where the multiple Turing machines doing Bayesianism are components of a single Turing machine.

The so-called measure problem in quantum cosmology, and the pathologies associated with it, are an illustration of this: cosmologies describing an infinite universe with conjectured boundaries, also known as initial conditions (i.e. payoff matrices in the finitely computable case). You do not "know the game" in an infinite universe or multiverse, only your constraints; initial conditions are lurking here again, as in Ellsberg's finite game. Further, taking the view from the personal self and subjecthood, Bayesianism (over scientific theory space) breaks down in the same manner from the perspective of the single scientist posing the theory.

Economists give us examples of finite games. Another way of saying what Ellsberg showed is that preferences for ambiguity over well-known, "spatially" discrete bets (urns) with the same bet parameters, i.e. boundaries, can be rationally consistent in the same manner as subjective preferences for known risk distributions. Infinite extensions of this principle might suggest, in some cases, that we ought to be neutral over scientific models built from combinations of computable Turing machines (a particular state of the world); but in practice, we often aren't.


Global prediction and theory credence in the sciences both contain embedded ambiguity preferences analogous to the urn models. Introducing a computational paradigm, the claim is that we should be able to declare when certain types of ambiguity in games (i.e. computable structures within theories) are present, and therefore say when we prefer, or ought to prefer, one theory over another, indeed drawing a bright line over model credence. Engaging from an 'all-knowing mathematician's perspective', we can then ask questions of the form: Which model is more likely, General Relativity or Special Relativity? And: how do we know it is not the big bang right now?

All "infinite games", a la scientific models, are delineated (i.e. given game boundaries) with respect to either space or time, and they explicitly require explanation that lends itself to a type of risk assessment when a sound agent attempts to calculate model confidence, or degree of credence, in evaluating the quality of those beliefs.


My claim is that you cannot really be a rational agent (a single Turing machine) calculating risk over an unknown underlying ontology (i.e. game, such as an urn with an unknown distribution). In a similar manner, you cannot have strong credence in your accuracy on global technological questions, or in the accuracy of your scientific world model, when there are infinite "spatial geometries" (not an urn) in the underlying framing of the problem/game/theory.

Where Bayesians Go Wrong

Aumann's agreement theorem is important for Bayesian ontologies; that is the real paradox here. Ellsberg's "paradox" is an empirical phenomenon highlighting the fact that our academic assumptions about rationality (or more precisely, Savage's assumptions, a la the Savage axioms) are probably not quite right. There is a further interesting question about whether the conundrum is removed when there is one underlying probability distribution of a given type, versus many.
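
For concreteness, here is my summary (not the author's wording) of the standard single-urn version of Ellsberg's observation, with 30 red balls and 60 black-or-yellow balls in unknown proportion, showing how the typical preference pattern contradicts any single subjective probability assignment, and hence Savage's sure-thing principle:

```latex
% Typical Ellsberg preferences over bets on a single draw from an urn with
% 30 red balls and 60 black-or-yellow balls in unknown proportion.
\begin{align*}
  \text{Bet I (win on red)} &\succ \text{Bet II (win on black)},\\
  \text{Bet IV (win on black or yellow)} &\succ \text{Bet III (win on red or yellow)}.
\end{align*}
% Under any single subjective probability assignment:
%   Bet I  \succ Bet II  implies  p(red) > p(black);
%   Bet IV \succ Bet III implies  p(black) + p(yellow) > p(red) + p(yellow),
%   i.e. p(black) > p(red).
% The two together are contradictory, so no single probability distribution
% rationalizes the commonly observed pair of preferences.
```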

Economic versus Physical Theory Preferences

An economist's notion of probability is that of experiments coming up one way or another, versus probabilities of particular theories eventually being falsified. These are questions I will call perspectives from local 'subjecthood' versus "a view from nowhere" in either space or time. The latter involves measures that are harder, or impossible, to specify (due to their infinite nature) than the discrete economic ones with an adversary, which we will get into later. (Look out for a future section: "Quantum Games".) Although unsatisfying to the questioner offering the finite game, it might also be said to be rational in a finite game to have no preference to play at all, including a game with a monetary payoff, when you do not know the ontology (probability distribution) of the underlying game.

I think of this disposition as the "declining-to-bet scientist". In other words: is it perfectly rational to decline to pick a ball from a finite urn, assuming of course that your utility for money is greater than zero, when you do not know the underlying ontology of the game (in this case, the distribution of balls)?

Explanation versus risk is relevant to scientific versus economic thought, respectively: how does explanation factor into probabilities, versus more straightforward games? Infinite ambiguous scenarios/games/bets, often Knightian ones, are distinct from Ellsberg scenarios, which depict finite ambiguous games, and always require explanation to gain plausibility.

Indeed, it matters whether or not we are explicit about the explanatory bias of a prediction. The assumption that the speed of light is constant is a different species altogether from the assumption that the information-carrying capacity at small scales is not infinite. The latter we don't know about; we judge it probably finite because most things are. This is a different kind of assumption from one that says: "here is a physical limit and there is no rival theory on the horizon". (Examples: the Bekenstein bound; the constancy of the speed of light.)

Betting in an Infinite World


Is it really all about the size of your own world bubble? In the limit, yes. What are the appropriate boundaries in cosmology ("infinite games"), and are we in an infinite or finite universe? The jury is still out on this one, albeit with strong empirical evidence pointing towards an infinite one. Ironically, though, this is the subject where we have the strongest reasons to suspect empirical evidence to be fundamentally unconvincing. How, then, do we find ourselves mathematically able to make bets at all about the broader world in which we are embedded?

So long as there is a clear "urn" geometry, i.e. a world model with a boundary, we are in a finite scenario, and statistical mechanics is preferable here. For instance, the problem with an infinite multiverse is that if you ask a simple question like "If you flip a coin, what's the probability it will come up heads?", normally you would say 50 percent. But in the context of the multiverse, the answer is that there are an infinite number of heads and an infinite number of tails. Since there is no unambiguous way of comparing infinities, there is no clear way of saying that some types of events are common and other types of events are rare. That leads to fundamental questions about the meaning of probability. And probability is crucial to physicists, because our basic theory is quantum theory, which is based on probabilities, so we had better know what they mean.
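
A minimal sketch of why the answer depends on the regularization (the two enumeration orders and the cutoffs below are arbitrary choices of mine, purely for illustration): both streams contain infinitely many heads and infinitely many tails, yet the "frequency of heads below the cutoff" converges to different values.

```python
# Illustration of the measure problem: the relative frequency of "heads"
# in an infinite collection depends on how you order (regularize) the
# collection before imposing a cutoff.

from itertools import islice, cycle

def alternating():
    # H, T, H, T, ...  -> frequency of heads under a cutoff tends to 1/2
    return cycle("HT")

def heads_first_order():
    # H, H, T, H, H, T, ...  the same infinite supply of heads and tails,
    # enumerated differently -> frequency under a cutoff tends to 2/3
    return cycle("HHT")

def heads_fraction(stream, cutoff):
    sample = list(islice(stream, cutoff))
    return sample.count("H") / cutoff

if __name__ == "__main__":
    for n in (10, 1_000, 100_000):
        print(n, heads_fraction(alternating(), n),
                 heads_fraction(heads_first_order(), n))
    # Both enumerations contain infinitely many heads and infinitely many
    # tails; only the choice of ordering and cutoff makes one answer "1/2"
    # and the other "2/3".
```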

Infinite models, "games from the outside" so to speak, imperfectly kludge towards anthropic selection effects in cosmology (where am I located in space and time, and how do I know?), versus what I will hereafter call the "local ambiguity of an agent" facing an unknown distribution in an urn, a la Ellsberg's empirical evidence for denying that the second Savage axiom (the sure-thing principle) is really held by a rational agent.

Local ambiguity is ambiguity over otherwise clearly defined boundaries (features of the game) in a possible bet or other transaction, albeit with an overall unknown distribution. Ellsberg showed us the possibility of distinguishing between conventional notions of risk (Bayesian versus frequentist) and ambiguity aversion more generally, over payoffs where it could still be rational to prefer ambiguity to a sure thing.

"Time-games" (i.e. the evolution of objects, technological betting, and bets on societal progress) versus "space-games" (i.e. probability urns or explanations of cosmological distributions) can both involve explanatory reasoning towards probabilities, where the boundaries are drawn in either time or space.

Below is an example of trying to construct probabilities in infinite time. In this example, time is the Ellsberg “urn”. In all “cutoff” schemes for an expanding infinite multiverse, a finite percentage of observers reach the cutoff during their lifetimes. Under most schemes, if a current observer is still alive five billion years from now, then the later stages of his life must somehow be “discounted” by a factor of around two compared to his current stages of life. For such an observer, Bayes’ theorem may appear to break down over this timescale due to anthropic selection effects; this hypothetical breakdown is sometimes called the “Guth-Vanchurin paradox”. One proposed resolution to the paradox is to posit a physical “end of time” that has a fifty percent chance of occurring in the next few billion years.
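
As a toy, hedged version of that cutoff argument (the exponential observer-production rate and the specific numbers are my own simplifying assumptions, not a result quoted from the cosmology literature): if observers are produced at a rate proportional to exp(γt) and a global time cutoff is imposed at T, then

```latex
% Toy cutoff calculation (illustrative assumptions only): observers are
% produced at a rate proportional to e^{\gamma t}, with a global cutoff at T.
\[
  \frac{\int_{T-\Delta t}^{T} e^{\gamma t}\,dt}{\int_{-\infty}^{T} e^{\gamma t}\,dt}
  \;=\; 1 - e^{-\gamma \Delta t}.
\]
% This fraction is of order unity whenever \Delta t is comparable to
% 1/\gamma, so a finite percentage of observers are caught by the cutoff
% during their lifetimes. If \gamma \Delta t \approx \ln 2 over a few
% billion years, experiences a time \Delta t later are weighted down by
% e^{-\gamma \Delta t} \approx 1/2, i.e. "discounted by a factor of around
% two", as quoted above.
```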

Another, overlapping, proposal is to posit that an observer no longer physically exists when it passes outside a given causal patch, similar to models where a particle is destroyed, or ceases to exist, when it falls through a black hole's event horizon.

Clarification of Ambiguity over Finite versus Infinite Games (where games include our models of physical systems)

Knightian uncertainty simply refers to uncertainty that we lack a clear, agreed-upon way to quantify, like our uncertainty about the existence of extraterrestrial life or about models of physics, as opposed to our uncertainty about the outcome of a coin toss. An agent in a state of Knightian uncertainty might describe its beliefs using a convex set of probability distributions rather than a single distribution; all models involving infinity are such models.
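
A minimal sketch of that representation (the particular distributions and payoffs are invented for illustration): beliefs are a convex "credal" set of distributions, and the agent reports lower and upper expectations rather than a single expected value.

```python
# Knightian beliefs as a convex (credal) set of probability distributions.
# The agent reports lower and upper expectations instead of a single
# expected value. All numbers here are invented for illustration.

def expectation(dist, payoff):
    """Expected payoff under one probability distribution."""
    return sum(p * x for p, x in zip(dist, payoff))

# Extreme points of the credal set over three outcomes; the full belief
# state is their convex hull.
extreme_points = [
    [0.5, 0.3, 0.2],
    [0.2, 0.3, 0.5],
]
payoff = [10.0, 0.0, -5.0]

# Expectation is linear, so its extrema over the convex hull are attained
# at the extreme points.
values = [expectation(p, payoff) for p in extreme_points]
lower, upper = min(values), max(values)
print(f"lower expectation {lower:.2f}, upper expectation {upper:.2f}")
# An ambiguity-averse (maxmin) agent acts on the lower expectation; a
# Bayesian would have to commit to a single point in the set.
```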

Here I make explicit the distinction between probability models from the "subject" (my choice of balls in particular urns) versus probability from the "object" (whether I exist counterfactually in world A versus world B, both of which rest on Bayesian reasoning and anthropics in infinite scenarios).

Here is an example of what I am calling infinite-in-time, followed by one that is infinite-in-space:


The Self-sampling assumption "SSA" is characterized by the ethos: "All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class."

For instance, if there is a coin flip that on heads will create one observer, while on tails they
will create two, then we have two possible worlds, the first with one observer, the second
with two. These worlds are equi-probable, hence the SSA probability of being the first (and
only) observer in the heads world is 1/2, that of being the first observer in the tails world is
1/2 x 1/2 = 1/4, and the probability of being the second observer in the tails world is also
1/4.

This is why SSA gives an answer of 1/2 probability of heads in the Sleeping Beauty problem. SSA is dependent on the choice of reference class. If the agents in the above example were in the same reference class as a trillion other observers, then the probability of being in the heads world, upon the agent being told they are in the Sleeping Beauty problem, is approximately 1/3. SSA implies the doomsday argument if the number of total observers in one's reference class is finite. And here is another example, of an infinite-in-space bet:

Self-indication Assumption “SIA”: All other things equal, an observer should reason as if
they are randomly selected from the set of all possible observers.


“Randomly selected” is weighted by the probability of the observers existing: under SIA you
are still unlikely to be an unlikely observer, unless there are a lot of them.
For instance, if there is a coin flip that on heads will create one observer, while on tails they
will create two, then we have three possible observers (1st observer on heads, 1st on tails,
2nd on tails), each existing with probability 0.5, so SIA assigns 1/3 probability to each.
Alternatively, this could be interpreted as saying there are two possible observers (the 1st observer, and the 2nd observer on tails), the first existing with probability one and the second existing with probability 1/2, so SIA assigns 2/3 to being the first observer and 1/3 to being the second, which is the same as the first interpretation.


This is why SIA gives an answer of 1/3 probability of heads in the Sleeping Beauty problem. Notice that, unlike SSA, SIA is not dependent on the choice of reference class, as long as the reference class is large enough to contain all subjectively indistinguishable observers. If the reference class is large, SIA makes it more likely that the agent is in that reference class, but this is compensated by the much reduced probability that the agent will be any particular agent in the larger reference class. Although this anthropic principle was originally designed as a rebuttal to the doomsday argument (by Dennis Dieks in 1992), it has general applications in the philosophy of anthropic reasoning, and Ken Olum has suggested it is important to the analysis of quantum cosmology.
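
To make the two rules concrete, here is a small sketch (my own framing of the coin-flip example above) that reproduces the SSA and SIA numbers, including the 1/2 versus 1/3 answers associated with the Sleeping Beauty problem:

```python
# SSA vs SIA on the coin-flip example: heads creates 1 observer,
# tails creates 2. The two worlds are equiprobable (1/2 each).

from fractions import Fraction

worlds = {"heads": 1, "tails": 2}          # number of observers per world
prior = {w: Fraction(1, 2) for w in worlds}

def ssa(worlds, prior):
    """SSA: split each world's prior equally among that world's own
    observers; returns the credence of being any one of them."""
    return {w: prior[w] / n for w, n in worlds.items()}

def sia(worlds, prior):
    """SIA: weight each observer by the prior of its world, then normalize
    over all possible observers; returns the credence of being in world w."""
    weights = {w: prior[w] * n for w, n in worlds.items()}
    total = sum(weights.values())
    return {w: weights[w] / total for w in worlds}

print(ssa(worlds, prior))  # heads observer: 1/2; each tails observer: 1/4
print(sia(worlds, prior))  # P(heads) = 1/3, P(tails) = 2/3
# These match the halfer (SSA) and thirder (SIA) answers to Sleeping Beauty.
```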

Adversarial Games versus Cooperative ones

To be continued: a discussion of quantum white-hat trade partners versus classical adversaries.
Other, miscellaneous topics: various generalizations of probability theory, of which the best known is the Dempster-Shafer theory of belief; free energy / negative entropy.

Natural Science of Computation

Towards A Natural Science of Computation

This begins a series of questions that I examine regarding the relationship between time and space.

I refer to what I consider physically realizable complexity classes, which I believe would be useful to consider for some of our most foundational computational questions in science today. Contemporary computer scientists refer to so-called complexity class hierarchies without regard for adjustments to these classes, for instance with respect to heat, and to their implementation in a real physical universe with finite resources.

The argument about physical computational complexity is pretty straightforward, and arises from the physics of reversible and quantum computation. Memory space is identified with negentropy: S_max − S, i.e. how far we are from maximum entropy. In a quantum computer in a pure state, S = 0, and memory space is just the number of qubits in the computer. If performing a logic operation generates any entropy at all, which is the case, for example, whenever there is some probability of making an error, then performing a logic operation must also use up some memory space, even if only a small amount (a fraction of a bit) per operation.


Consequently, the physical memory space taken up by a computation is proportional to the number of operations performed during the computation. It is not possible to use only polynomial physical memory space for a computation that takes exponential time. Hence PhyP = PhyPSPACE.
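
A hedged sketch of how that step goes (my own formalization, with the per-operation entropy cost ε treated as a free parameter rather than a measured quantity):

```latex
% Sketch of the space-time identification, assuming every logic operation
% dissipates at least \epsilon > 0 bits of negentropy (e.g. because of a
% nonzero error probability per operation).
\[
  \text{memory consumed after } T \text{ operations} \;\ge\; \epsilon\, T
  \qquad\Longrightarrow\qquad
  T \;\le\; \frac{S_{\max} - S}{\epsilon}.
\]
% Since a T-step computation also touches at most O(T) memory cells,
% physical time and physical memory are polynomially related under this
% assumption, which is the sense in which PhyP = PhyPSPACE.
```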

Further studies I am interested in here set aside the matter-and-energy paradigm, and ask whether our best cosmological theories can be recast in purely quantum and classical computational terms. I believe this delineation could conceivably have implications for our best scientific theories, and could inform, or even rule out, existing matter/energy models, if both known and new complexity classes (based on the character of physical law, i.e. adjustments for entropy dissipation, including with quantum logic gates) displayed certain equivalences.

For instance, the notion of risk in a game might be evaluated using frequencies or Bayes's rule. But there are games with unknown underlying distributions, and these might more effectively be considered not to be paradoxes when understood computationally. Take the Ellsberg paradox as an example: one might simply say, due to the halting problem, that it is not irrational to refuse to play a game with unknown parameters, i.e. a Turing machine or a distribution over unknown Turing machines, since the urn itself represents an unknown Turing machine. That is an example of a spatially finite game. Scientific models can contain within them the same sort of so-called paradoxes when not properly ontologized. We ask whether the theory of computation, when applied to betting containers, whether infinite ones (cosmological theories) or finite "urns" within space (versus time), can help rule out or shed light on irregularities previously thought to be paradoxes.
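
A minimal sketch of the computational reading (the toy programs and step budgets below are invented for illustration): to price a bet on the behaviour of an unknown program, you are forced to impose an arbitrary step-budget cutoff, because the halting problem rules out evaluating the bet exactly.

```python
# Betting on the behaviour of an unknown Turing machine (illustrative).
# To estimate "P(halts)" we must impose an arbitrary step budget; the
# halting problem guarantees that no finite budget settles every case,
# so the "distribution in the urn" is never fully known.

def run_with_budget(program, budget):
    """Run a toy program (a generator standing in for a Turing machine)
    for at most `budget` steps; True if it halts, None if undetermined."""
    gen = program()
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration:
            return True          # halted within budget
    return None                  # undetermined: budget exhausted

def halts_quickly():
    yield from range(10)         # halts after 10 steps

def loops_forever():
    while True:
        yield                    # never halts

programs = [halts_quickly, loops_forever]

for budget in (5, 50, 500):
    print(budget, [run_with_budget(p, budget) for p in programs])
# Every row still contains an undetermined case: the bet's expected value
# depends on how we treat those cases, i.e. on a cutoff we chose, not on
# the game itself.
```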

Claims 1 and 2 below can be thought of as different computational claims, requiring more explanation than I provide here.

Claim 1 – If PSPACE = BQP, time is essentially an illusion.

A language L is in BQP if and only if there exists a polynomial-time quantum Turing machine (or uniform family of quantum circuits) that decides L with bounded error probability (conventionally at most 1/3): quantum algorithms accepting decision problems in reasonable amounts of time. This is to say nothing of spatial constraints, for the moment.
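
For reference, the standard unconditional containments among the relevant classes are listed below (these are known results, not part of the claim); Claim 1 asserts that the middle containment collapses to an equality.

```latex
% Known containments among the relevant classes; none of the individual
% inclusions is known to be strict, except that P \subsetneq EXP by the
% time hierarchy theorem.
\[
  \mathsf{P} \;\subseteq\; \mathsf{BQP} \;\subseteq\; \mathsf{PSPACE}
  \;\subseteq\; \mathsf{EXP},
  \qquad \mathsf{P} \subsetneq \mathsf{EXP}.
\]
% Claim 1 would require BQP = PSPACE; Claim 2 below would require
% BQP = EXP, which, given BQP \subseteq PSPACE \subseteq EXP, would also
% force PSPACE = EXP.
```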

Everything in PSPACE is interconnected: a pre-established harmony, if you will, or the best of all possible worlds in Leibniz's sense. It is a form of being, but not in time: every bit's configuration can be selected in a manner which takes into account every other bit. PSPACE implies bits can be "fully optimized" in some manner. PSPACE means we have classical bits being laid down as the output of some exhaustive, possibly even quantum, search, when you take as a starting point that our universe is a quantum multiverse.

Morally speaking, rather than "what can be figured out is a matter of how much time you have", it is instead "what we can figure out is actually a matter of how much memory (or perhaps more realistically, free energy) we have". The content of memory is whatever is optimal for some agent's criterion. PSPACE is like a Parmenidean block universe. In such a world, "the past" doesn't ultimately leave a record.

Claim 2 – If EXP = BQP, Many Worlds is literal and Wigner's Friend can be run.

For this argument, please refer to an overview here.

More physical complexity class relationships will be considered in future letters.

Natural Science of Computation, Talks

How Turing Put Us Into a Simulation–And How to Get Out of it (Talk at Princeton)

The mathematician’s misconception: The denial that proof is physical. 

Reality, for as long as anyone can remember, has always been a mix of empiricism and rationalism. A few poison mushrooms and you're gone; generalize further to colors and tastes.

With every century that goes by, mathematicians (and physicists who are themselves mathematicians) contribute functional theories, perspectives that afford us the luxury of shedding our ignorance, if only a bit further, with time; theories which ultimately have implications both for how we see the world and for how we build within it.

Personally, I believe academic philosophy departments ought to be contributing to clarifying model delineations in the foundations of physics. The age of phenomenology without physics as legitimate philosophy is over, unless you're a yoga teacher (I happen to be one; I am also, just as much, an analytic philosopher). It would be fantastic if academic philosophers had to connect to ever more foundational theories regarding the nature or shape of reality, let alone how we are located in it. The way to do this is through the sciences.

In the popular mind, the latest Western philosophy has become a sneaking sense many people have in common today: that we are a sort of new-age, mediocre Übermensch, living variations on The Matrix movie scenario with our own cast of characters, or, more colloquially, "in a simulation."

Extending my point above, this belief makes some sense as an import from the mathematics of the 1930s (Gödel, von Neumann, et al.), with the foremost thinker being Alan Turing, who ushered in a new era of extensive, groundbreaking work on computability.

Problems with this Matrix conception of ourselves arise, and as with any good theory transitioning to something more accurate, the sense-making and the math alike lead to more questions than they answer. If we are a simulation, we are not a simulation of something at random. As Turing machines, we are specifically a simulation of a process whereby optimization processes are created and decision theory prevails. How do we find others like us? Where is the boundary of subjecthood? Is there only one of us, or multiple types of machines, trading off memory for compute resources (and vice versa) in the quantum mechanical computer that we call our universe, until the event horizon?

The bits that make up the world are placed there with shared memories from different perspectives. Qubits (quantum superpositions of states of the wave function in a quantum mechanical Hilbert space) are the full set of bits across different perspectives. Are all of the universe's bits (i.e. measured qubits) parts of qubits?

Today, we are attempting to understand whether hardware can be created, within the laws of physics that we live under, which allows true search through vast numbers of possible physics.

Do we have enough computation to look at some unspeakable number of universes and see what sorts of optimization targets emerge in those universes? Say you try to find yourself in the searches executed by those optimization targets. Inside a mathematical universe, can you find algorithms (Turing machines, optimization processes, evolutionary states, with various constraints on run-time), a la the Church-Turing thesis, that are thinking about you? The apparent "you", as it might be specified within a wave function, is not well-defined.

Artificial general intelligences (AGIs) are optimization processes; however, most optimizations do not preserve your existence. This does not seem to be a coincidence. We shouldn't have blundered into this much sentient leverage: by this I mean everyone who is alive, though not equally!

So, as the designer of a simulation, you want to be careful that whatever optimization emerges out of this world is about "you". Regardless of what "you" are, most optimizations probably don't preserve it. This talk is a speculation about looking for those computational processes, the ones that build AGIs, which have preferences that you can satisfy, as a kind of trade partner in a computational universe. Is there a natural structure to our search?