
Building Better Credence: The “Generalized Ellsberg Paradox” in Quantum Physics?

All paradoxes have one thing in common: their underlying world model has one or more assumptions which are mistaken.

Within our notions of probability, I attempt to clarify an epistemological distinction between space and time, and a further distinction between finite and infinite games. This is relevant to Popper’s demarcation problem, and I conjecture extensions of (what I call) “Ellsberg-type paradoxes” to infinite but computable games. It is worth noting that cosmological models, i.e. scientific models of the early universe with initial conditions, can be infinite but computable, although not all are.

I look at two types of probabilities: probabilities of observation, i.e. the claim that risk can be approximated with probability, as when choosing from an urn with a known or unknown distribution; and probabilities of explanation, i.e. cosmological models. I believe this work sheds light on where degrees of credence in scientific theories should be up-regulated or down-regulated.


The Church-Turing thesis tells us that anything we regard as computable, such as language with universal grammar distributed over minds, is computable by a Turing machine. I think the actual generator of both the Ellsberg paradox and Popper’s demarcation problem is that you cannot apply Bayesianism to the behaviour of an unknown Turing machine (in this case, the urn game).


Knowledge is finite. Infinities and unknown underlying distributions describing physical phenomena cause problems for probabilities. It is worth considering an extension of so-called “finite” urn-type games, with their finite game geometries, to infinite games in which agents hold preferences for or against ambiguity in bets. I have in mind what I call the Ellsberg experiments, after Daniel Ellsberg’s PhD thesis, in which people were asked to choose between two urns: one with a known probability distribution, and one that is ambiguous as to the distribution of its contents.
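To make the inconsistency concrete, here is a minimal sketch (my own illustration, using the classic two-urn framing rather than Ellsberg’s exact protocol; the function names are hypothetical). If you prefer the known urn when betting on red, your credence p for red in the ambiguous urn must be below 1/2; if you also prefer the known urn when betting on black, p must be above 1/2. No single p satisfies both:

```python
# Ellsberg two-urn sketch: the modal preference pattern cannot be rationalized
# by ANY single subjective probability p = P(red ball from the ambiguous urn).

def bet_value(p_win, payoff=100):
    """Expected value of a bet paying `payoff` if the named colour is drawn."""
    return p_win * payoff

# Known urn: 50 red, 50 black, so P(red) = P(black) = 0.5 exactly.
# Ambiguous urn: 100 balls, unknown mix; suppose the agent nonetheless holds
# a single credence p for red, as Savage's axioms would require.
for p in (i / 100 for i in range(101)):
    prefers_known_on_red = bet_value(0.5) > bet_value(p)        # needs p < 0.5
    prefers_known_on_black = bet_value(0.5) > bet_value(1 - p)  # needs p > 0.5
    if prefers_known_on_red and prefers_known_on_black:
        print(f"consistent credence found: p = {p}")
        break
else:
    print("no single credence p rationalizes the observed preference pattern")
```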


This philosophical question is two-fold: how should you reason in an infinite setting as an individual (a Turing machine) when Bayesianism breaks down? And further, how can we notice where and how we are assigning space-or-time cutoffs or delineations in the game (boundaries/payoff matrices), and evaluate either criterion in our latest models in a clear fashion, so as to assess the relative credence of rival scientific models?


The broader claim is that you can apply Bayesianism to the behavior of a specific, known Turing machine. Because of the halting problem, you cannot coherently apply Bayesianism across multiple non-specified Turing machines (coherently computable games) within the same ontology; this extends Ellsberg’s point into the framework of computability. Recasting the discussion into ‘computable scenarios’, the world where Bayesianism ‘works’ (per Aumann’s agreement theorem) is one where the multiple Turing machines doing Bayesianism are components of a single Turing machine.
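A minimal sketch of this claim (my own, under strong assumptions: the hypothesis space is a small, fully specified set of total programs, i.e. machines known to halt on every input; all names are hypothetical). Bayesian updating over such a set is well defined; over arbitrary unspecified programs, computing the likelihood P(data | machine) would require solving the halting problem, so the posterior is uncomputable:

```python
# Bayesian updating over a SPECIFIED, finite set of "urn programs", each a
# total function that deterministically emits the colour of the n-th draw.

def all_red(n):      return "red"
def alternating(n):  return "red" if n % 2 == 0 else "black"
def all_black(n):    return "black"

machines = {"all_red": all_red, "alternating": alternating, "all_black": all_black}
posterior = {name: 1 / len(machines) for name in machines}  # uniform prior

observations = ["red", "black", "red"]  # the data stream we actually see

for n, obs in enumerate(observations):
    # The likelihood is 1 if the machine's output matches and 0 otherwise,
    # and it is computable only because every hypothesis is known to halt.
    for name, m in machines.items():
        posterior[name] *= 1.0 if m(n) == obs else 0.0
    total = sum(posterior.values())
    posterior = {name: p / total for name, p in posterior.items()}

print(posterior)  # {'all_red': 0.0, 'alternating': 1.0, 'all_black': 0.0}
```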

The so-called measure problem in quantum cosmology, and the pathologies associated with it, are an illustration of this: cosmologies describe an infinite universe with conjectured boundaries, also known as initial conditions (the analogue of payoff matrices in the finitely computable case). You do not “know the game” in an infinite universe or multiverse; you know only your constraints, with initial conditions lurking here again, as in Ellsberg’s finite game. Further, taking the view from a personal self and subjecthood, Bayesianism (over scientific theory space) breaks down in the same manner from the perspective of the single scientist who is posing the theory.

Economists give us examples of finite games. Another way of saying what Ellsberg showed is that preferences for ambiguity over well-known, “spatially” discrete bets (urns) with the same bet parameters, i.e. boundaries, can be rationally consistent in the same manner as subjective preferences for known risk distributions. Infinite extensions of this principle might suggest, in some cases, that we ought to be neutral over scientific models comprising combinations of computable Turing machines (a particular state of the world), but in practice we often aren’t.


Global prediction and theory credence in the sciences both contain embedded ambiguity preferences analogous to those in the urn models. Introducing a computational paradigm, the claim is that we should be able to declare when certain types of ambiguity in games (i.e. computable structures within theories) are present, and therefore say when we do or ought to prefer one theory over another, indeed drawing a bright line over model credence. Engaging from an ‘all-knowing mathematician’s perspective’, we can then pose questions of the form: Which model is more likely, General Relativity or Special Relativity? And: how do we know it’s not the Big Bang right now?

All “infinite games”, i.e. scientific models, are delineated (given game boundaries) with respect to either space or time, and explicitly require explanation. That explanation lends itself to a type of risk assessment when a sound agent attempts to calculate model confidence, or degree of credence, in evaluating the quality of those beliefs.


My claim is that you cannot really be a rational agent (a single Turing machine) calculating risk over an unknown underlying ontology (i.e. game, such as an unknown distribution in an urn). In a similar manner, you cannot have strong credence in your accuracy on global technological questions, or in the accuracy of your scientific world model, when there are infinite “spatial geometries” (not an urn) in the underlying framing of the problem/game/theory.

Where Bayesians Go Wrong

Aumann’s agreement theorem is important for Bayesian ontologies, and that is the real paradox here: Ellsberg’s “paradox” is an empirical phenomenon highlighting the fact that our academic assumptions about rationality (more precisely, Savage’s assumptions, a la the Savage axioms) are probably not quite right. There is a further interesting question about whether the conundrum is removed when there is one underlying probability distribution of a given type rather than many.

Economic versus Physical Theory Preferences

Economists’ notions of probability concern experiments coming up one way or another, as opposed to probabilities of particular theories eventually being falsified. I will call these the perspectives of local ‘subjecthood’ versus “a view from nowhere” in either space or time. The latter involves measures that are harder, or impossible (due to their infinite nature), to specify than those of discrete economic games against an adversary, which we will get into later (look out for a future section: “Quantum Games”). Although unsatisfying to the questioner offering the finite game, it might also be said to be rational in a finite game not to have a preference to play at all, including a game with a monetary payoff, when you do not know the ontology (probability distribution) of the underlying game.

I think of this disposition as that of the “declining-to-bet scientist”. In other words, is it perfectly rational to decline to pick a ball from a finite urn, assuming of course that your utility for money is greater than none, when you do not know the underlying ontology of the game (in this case, the distribution of balls)?

Explanation versus risk maps onto scientific versus economic thought, respectively: how does explanation factor into probabilities, as opposed to more straightforward games? Infinite ambiguous scenarios/games/bets, often Knightian ones, are distinct from Ellsberg scenarios (which depict finite ambiguous games) and always require explanation to gain plausibility.

Indeed, it matters whether or not we are explicit about the explanatory bias of a prediction. The assumption that the speed of light is constant is a different species altogether from the assumption that the information-carrying capacity at small scales is not infinite. The latter we do not know about; we reason that it is probably finite because most things are. That is a different kind of assumption from one that says: “here is a physical limit and there is no rival theory on the horizon”. Compare the Bekenstein bound, a conjectured limit on the information contained in a bounded physical region, with the constancy of the speed of light, a limit for which no rival theory is on the horizon.

Betting in an Infinite World


Is it really all about the size of your own world bubble? In the limit, yes. What are the appropriate boundaries in cosmology (“infinite games”), and are we in an infinite or finite universe? The jury is still out on this one, albeit with strong empirical evidence pointing towards infinite. Ironically, though, this is the subject where we have the strongest reasons to suspect empirical evidence to be fundamentally unconvincing. How, then, do we find ourselves mathematically able to make bets at all about the broader world in which we are embedded?

So long as there is a clear “urn” geometry, i.e. a world model with a boundary, we are in a finite scenario, and statistical mechanics is preferable here. The problem with an infinite multiverse, for instance, is that if you ask a simple question like “if you flip a coin, what’s the probability it will come up heads?”, normally you would say 50 percent. But in the context of the multiverse, the answer is that there is an infinite number of heads and an infinite number of tails. Since there is no unambiguous way of comparing infinities, there is no clear way of saying that some types of events are common and other types of events are rare. That leads to fundamental questions about the meaning of probability. And probability is crucial to physicists, because our basic theory is quantum theory, which is based on probabilities; so we had better know what they mean.
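A minimal sketch (my own illustration) of why counting heads and tails fails here: both enumerations below contain infinitely many heads and infinitely many tails, yet the limiting frequency depends entirely on the cutoff/ordering scheme used to regulate the infinite collection:

```python
from itertools import islice

def fair_ordering():
    """Enumerate outcomes alternating H, T, H, T, ..."""
    while True:
        yield "H"
        yield "T"

def skewed_ordering():
    """Enumerate the same two infinite sets of outcomes, but two heads
    per tail: H, H, T, H, H, T, ..."""
    while True:
        yield "H"
        yield "H"
        yield "T"

def heads_frequency(ordering, cutoff):
    """Fraction of heads among the first `cutoff` outcomes of an ordering."""
    return list(islice(ordering(), cutoff)).count("H") / cutoff

for cutoff in (10, 1_000, 100_000):
    print(cutoff,
          heads_frequency(fair_ordering, cutoff),    # -> 0.5
          heads_frequency(skewed_ordering, cutoff))  # -> ~2/3
```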

Infinite models, “games from the outside” so to speak, kludge imperfectly towards anthropic selection effects in cosmology (where am I located in space and time, and how do I know?), versus what I will henceforth call the “local ambiguity of an agent” drawing from an unknown distribution in an urn, a la Ellsberg’s empirical evidence against the claim that the second Savage axiom (the sure-thing principle) is really held by rational agents.

Local ambiguity is ambiguity within otherwise clearly defined boundaries (features of the game) in a possible bet or other transaction, albeit with an overall unknown distribution. Ellsberg showed us how to distinguish conventional notions of risk (Bayesian versus frequentist) from ambiguity aversion more generally, over payoffs where it could still be rational to prefer ambiguity to a sure thing.

“Time-games” (i.e. the evolution of objects, technological betting, and bets on societal progress) and “space-games” (i.e. probability urns, or explanations of cosmological distributions) can both be explanatory reasoning towards probabilities, where the boundaries are drawn in either time or space.

Below is an example of trying to construct probabilities in infinite time. In this example, time is the Ellsberg “urn”. In all “cutoff” schemes for an expanding infinite multiverse, a finite percentage of observers reach the cutoff during their lifetimes. Under most schemes, if a current observer is still alive five billion years from now, then the later stages of his life must somehow be “discounted” by a factor of around two compared to his current stages of life. For such an observer, Bayes’ theorem may appear to break down over this timescale due to anthropic selection effects; this hypothetical breakdown is sometimes called the “Guth-Vanchurin paradox”. One proposed resolution to the paradox is to posit a physical “end of time” that has a fifty percent chance of occurring in the next few billion years.
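A hedged reconstruction of that factor-of-two discount (my own arithmetic; the growth rate Γ and the roughly five-billion-year half-life are assumptions for illustration, not figures from any specific measure proposal): if the number of observers counted under the cutoff grows exponentially with time, then the relative weight a cutoff measure assigns to an observer’s life stage at time t falls off exponentially,

$$ w(t) \propto e^{-\Gamma t}, \qquad \frac{w(t + T_{1/2})}{w(t)} = e^{-\Gamma T_{1/2}} = \frac{1}{2}, \quad T_{1/2} = \frac{\ln 2}{\Gamma}. $$

With T_{1/2} on the order of five billion years, stages of life five billion years from now carry half the measure of the current stage, which is the discount described above.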

Another, overlapping, proposal is to posit that an observer no longer physically exists when it passes outside a given causal patch, similar to models where a particle is destroyed or ceases to exist when it falls through a black hole’s event horizon.

Clarification of Ambiguity over Finite versus Infinite Games (where games include our models of physical systems)

Knightian uncertainty simply refers to uncertainty that we lack a clear, agreed-upon way to quantify, like our uncertainty about the existence of extraterrestrial life or about models of physics, as opposed to our uncertainty about the outcome of a coin toss. An agent in a state of Knightian uncertainty might describe its beliefs using a convex set of probability distributions rather than a single distribution; all models involving infinity are such models.
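A minimal sketch (my own, with hypothetical numbers) of beliefs as a convex set: the agent’s credence that a red ball is drawn is an interval rather than a point, and a maxmin agent in the style of Gilboa and Schmeidler evaluates each bet by its worst-case expected value over the set:

```python
def maxmin_value(payoff_red, payoff_black, p_red_interval):
    """Worst-case expected payoff over every distribution in the credal set.
    The expectation is linear in p, so the minimum sits at an endpoint."""
    lo, hi = p_red_interval
    return min(p * payoff_red + (1 - p) * payoff_black for p in (lo, hi))

credal_set = (0.3, 0.7)                          # Knightian beliefs about P(red)
bet_on_red = maxmin_value(100, 0, credal_set)    # worst case: p = 0.3 -> 30
bet_on_black = maxmin_value(0, 100, credal_set)  # worst case: p = 0.7 -> 30
known_fair_bet = 0.5 * 100                       # unambiguous urn -> 50

print(bet_on_red, bet_on_black, known_fair_bet)  # 30.0 30.0 50.0
# The agent strictly prefers the unambiguous urn for BOTH colours, exactly
# the Ellsberg pattern, yet its preferences are internally coherent.
```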

Here I make explicit the distinction between probability models from the “subject” (my choice of balls in particular urns) and probability from the “object” (whether I exist counterfactually in world A versus world B, both of which rest on Bayesian reasoning and anthropics in infinite scenarios).

Here is an example of what I am calling infinite-in-time reasoning, to be contrasted with an infinite-in-space example further below:


The Self-Sampling Assumption (SSA) is characterized by the ethos: “All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.”

For instance, if there is a coin flip that on heads will create one observer, while on tails they
will create two, then we have two possible worlds, the first with one observer, the second
with two. These worlds are equi-probable, hence the SSA probability of being the first (and
only) observer in the heads world is 1/2, that of being the first observer in the tails world is
1/2 x 1/2 = 1/4, and the probability of being the second observer in the tails world is also
1/4.

This is why SSA gives an answer of 1/2 for the probability of heads in the Sleeping Beauty problem. SSA is dependent on the choice of reference class: if the agents in the above example were in the same reference class as a trillion other observers, then the probability of being in the heads world, upon the agent being told they are in the Sleeping Beauty problem, is ≈ 1/3. SSA implies the doomsday argument if the total number of observers in one’s reference class is finite.
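A minimal sketch (my own) of the SSA arithmetic from the coin-flip example: weight each world by its prior, then split that world’s weight uniformly across the observers who actually exist in it:

```python
worlds = {
    "heads": {"prior": 0.5, "observers": ["obs1"]},
    "tails": {"prior": 0.5, "observers": ["obs1", "obs2"]},
}

# P(I am observer o in world w) = prior(w) / number of observers in w
ssa = {
    (w, o): d["prior"] / len(d["observers"])
    for w, d in worlds.items()
    for o in d["observers"]
}
print(ssa)
# {('heads', 'obs1'): 0.5, ('tails', 'obs1'): 0.25, ('tails', 'obs2'): 0.25}
# Summing over observers gives P(heads) = 0.5: the SSA "halfer" answer.
```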

A second example concerns what I am calling infinite-in-space bets. The Self-Indication Assumption (SIA) holds that, all other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.


“Randomly selected” is weighted by the probability of the observers existing: under SIA you
are still unlikely to be an unlikely observer, unless there are a lot of them.
For instance, if there is a coin flip that on heads will create one observer, while on tails they
will create two, then we have three possible observers (1st observer on heads, 1st on tails,
2nd on tails), each existing with probability 0.5, so SIA assigns 1/3 probability to each.
Alternatively, this could be interpreted as saying there are two possible observers (the 1st observer, and the 2nd observer on tails), the first existing with probability one and the second existing with probability 1/2, so SIA assigns 2/3 to being the first observer and 1/3 to being the second, which agrees with the first interpretation.


This is why SIA gives an answer of 1/3 probability of heads in the Sleeping Beauty problem. Notice that unlike SSA, SIA is not dependent on the choice of reference class, as long as the reference class is large enough to contain all subjectively indistinguishable observers: a large reference class makes worlds containing it more likely under SIA, but this is compensated by the much reduced probability that the agent is that particular agent within the larger class. Although this anthropic principle was originally designed as a rebuttal to the doomsday argument (by Dennis Dieks in 1992), it has general applications in the philosophy of anthropic reasoning, and Ken Olum has suggested it is important to the analysis of quantum cosmology.
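A matching sketch (my own) of the SIA arithmetic for the same coin flip: weight each (world, observer) slot by the probability that that observer exists, then normalize over all possible observers:

```python
worlds = {
    "heads": {"prior": 0.5, "observers": ["obs1"]},
    "tails": {"prior": 0.5, "observers": ["obs1", "obs2"]},
}

# Each observer-slot exists with the probability of its world.
weights = {
    (w, o): d["prior"]
    for w, d in worlds.items()
    for o in d["observers"]
}
total = sum(weights.values())    # 1.5
sia = {k: v / total for k, v in weights.items()}
print(sia)
# Each of the three slots gets ~0.333, so summing over observers gives
# P(heads) = 1/3: the SIA "thirder" answer.
```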

Adversarial Games versus Cooperative Ones

To be continued: a discussion of quantum white-hat trade partners versus classical adversaries.

Other miscellaneous topics to come: various generalizations of probability theory, of which the best known is the Dempster-Shafer theory of belief; free energy / negative entropy.