
Michele Reilly

Natural Science of Computation

Towards A Natural Science of Computation

This begins a series of questions that I examine regarding the relationship between time and space.

I refer to what I consider physically realizable complexity classes, which I believe would be useful for some of the most foundational computational questions in science today. Contemporary computer scientists work with complexity class hierarchies without adjusting those classes for, say, heat, or for their implementation in a real physical universe with finite resources.

The argument about physical computational complexity is fairly straightforward, and arises from the physics of reversible and quantum computation. Memory space is identified with negentropy, S_max − S: how far we are from maximum entropy. In a quantum computer in a pure state, S = 0, and memory space is just the number of qubits in the computer. If performing a logic operation generates any entropy at all (which is the case, for example, when there is some probability of making an error), then performing a logic operation must also use up some memory space, even if it is only a small amount, a fraction of a bit, per operation.


Consequently, the physical memory space taken up by a computation is proportional to the number of operations performed during the computation. It is not possible to use only polynomial physical memory space for a computation that takes exponential time. Hence PhyP = PhyPSPACE.
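
To make the scaling explicit, here is a minimal sketch of the argument, assuming each logic operation dissipates at least some fixed ε > 0 bits of entropy (ε is introduced here only for illustration; its value depends on the hardware and the error rate):

```latex
M \;=\; \frac{S_{\max} - S}{k_B \ln 2}
  \qquad \text{(free memory, in bits)}

S(T) \;\ge\; S(0) + \varepsilon\, k_B \ln 2 \cdot T
  \quad\Longrightarrow\quad
  M(0) - M(T) \;\ge\; \varepsilon\, T

\mathrm{SPACE} \;=\; \Omega(\varepsilon \cdot \mathrm{TIME})
```

A computation of exponential length therefore consumes exponential physical memory, which is the collapse PhyP = PhyPSPACE asserted above.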

Further studies I am interested in here set aside the matter-and-energy paradigm, and ask whether our best cosmological theories can be recast in purely quantum and classical computational terms. I believe this delineation could conceivably have implications for our best scientific theories, and could inform, or even rule out, existing matter/energy models, if both known and new complexity classes (adjusted for the character of physical law, i.e., for entropy dissipation, including in quantum logic gates) displayed certain equivalences.

For instance, the notion of risk in a game might be evaluated using frequencies or Bayes's rule. But there are games with unknown underlying distributions, and these might more effectively be considered not to be paradoxes when understood computationally. Take the Ellsberg paradox as an example: one might simply say, by appeal to the halting problem, that it is not irrational to refuse to play a game with unknown parameters, i.e., a Turing machine or a distribution over unknown Turing machines, since the urn itself represents an unknown Turing machine. That is an example of a spatially finite game. Scientific models can contain within them the same sort of so-called paradoxes when not properly ontologized. We ask whether the theory of computation, applied to betting containers, infinite ones (cosmological theories) versus finite "urns" within space (versus time), can help rule out or shed light on irregularities previously thought to be paradoxes.
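
To anchor the Ellsberg example numerically, here is a small sketch using the standard urn (30 red balls, 60 black or yellow in unknown proportion b, a fixed prize; the particular numbers are the textbook setup, not anything specific to the argument above). It checks whether any single assumed composition makes the commonly observed preferences, gamble A over B and gamble D over C, simultaneously expected-value maximizing:

```python
# Toy Ellsberg urn: 90 balls, 30 red, 60 split between black and yellow
# in an unknown proportion b. The typical preferences A > B and D > C
# cannot both maximize expected value for any single assumed b.

def expected_payoffs(b, prize=100):
    """Expected value of the four standard Ellsberg gambles,
    given b black balls (and 60 - b yellow balls) out of 90."""
    red, yellow, total = 30, 60 - b, 90
    return {
        "A: win on red":             prize * red / total,
        "B: win on black":           prize * b / total,
        "C: win on red or yellow":   prize * (red + yellow) / total,
        "D: win on black or yellow": prize * 60 / total,
    }

consistent = []
for b in range(61):                     # every admissible composition
    p = expected_payoffs(b)
    prefers_A = p["A: win on red"] > p["B: win on black"]
    prefers_D = p["D: win on black or yellow"] > p["C: win on red or yellow"]
    if prefers_A and prefers_D:
        consistent.append(b)

print("compositions b where A>B and D>C both hold:", consistent)  # -> []
```

No value of b works, which is the usual statement of the paradox; the computational reading above simply denies that the bettor owes the urn a single underlying distribution in the first place.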

Claims 1 and 2 below can be thought of as distinct computational claims, each requiring more explanation than I provide here.

Claim 1 – If PSPACE = BQP, time is essentially an illusion.

A language L is in BQP if and only if there exists a polynomial-time quantum Turing machine (or family of quantum circuits) that accepts L with bounded error probability: quantum algorithms accepting decision problems in reasonable amounts of time. This is to say nothing of spatial constraints, for the moment.
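
More formally (the 2/3 threshold is the usual convention; any constant bounded away from 1/2 gives the same class after amplification):

```latex
L \in \mathrm{BQP} \iff
\exists\,\{Q_n\}_{n\in\mathbb{N}} \text{ (polynomial-time uniform quantum circuits)} :
\begin{cases}
  x \in L \;\Rightarrow\; \Pr\big[Q_{|x|}(x)\ \text{accepts}\big] \ge \tfrac{2}{3},\\[2pt]
  x \notin L \;\Rightarrow\; \Pr\big[Q_{|x|}(x)\ \text{accepts}\big] \le \tfrac{1}{3}.
\end{cases}
```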

Everything in PSPACE is interconnected: a pre-established harmony, if you will, or the best of all possible worlds in Leibniz's sense of things. It is a form of being, but not in time: every bit's configuration can be selected in a manner that takes into account every other bit. PSPACE implies bits can be "fully optimized" in some manner. PSPACE means we have classical bits being laid down as the output of some exhaustive, and possibly even quantum, search, if you take as a starting point that our universe is a quantum multiverse.
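
One concrete way to see "every bit selected in a manner that takes into account every other bit" within polynomial memory is the canonical PSPACE-complete problem, TQBF. The sketch below (a minimal illustration, not anything specific to the claim above) evaluates a fully quantified Boolean formula by exhaustive search while reusing the same small workspace, so its memory footprint stays polynomial even though the search over configurations is exhaustive:

```python
# Minimal TQBF evaluator. `quantifiers` is a list like
# [("forall", "x"), ("exists", "y")]; `formula` is any predicate over a
# dict of variable assignments. Recursion depth (hence memory) is linear
# in the number of variables, even though the search is exhaustive.

def tqbf(quantifiers, formula, assignment=None):
    assignment = dict(assignment or {})
    if not quantifiers:
        return formula(assignment)
    (q, var), rest = quantifiers[0], quantifiers[1:]
    branches = (tqbf(rest, formula, {**assignment, var: b}) for b in (False, True))
    return all(branches) if q == "forall" else any(branches)

# Example: for all x there exists y with (x XOR y) -- true, since y can
# always be chosen to disagree with x.
print(tqbf([("forall", "x"), ("exists", "y")],
           lambda a: a["x"] != a["y"]))   # -> True
```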

Morally speaking, rather than "What can be figured out is a matter of how much time you have," it is instead "What we can figure out is actually a matter of how much memory (or, perhaps more realistically, free energy) we have." The content of memory is whatever is optimal for some agent's criterion. PSPACE is like a Parmenidean block universe. In such a world, "the past" doesn't ultimately leave a record.

Claim 2 – If EXP = BQP, Many Worlds is literal and Wigner's Friend can be run.

For this argument, please refer to an overview here.

More physical complexity class relationships will be considered in future letters.

Quantum Algorithms Research

Quantum Bit Commitment Might Not Be Impossible Using "Time Travel"

In 2011, Lloyd, Maccone, et al. derived the dynamical equations that a chronology-respecting system interacting with a closed timelike curve (CTC) would experience. They discuss the possibility of time travel in the absence of general-relativistic closed timelike curves, and investigate the implications of P-CTCs for enhancing the power of computation. [1]
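
Schematically (this is a sketch of the structure of those equations, not a substitute for the derivation in [1]), the chronology-respecting system's state ρ is transformed nonlinearly:

```latex
\rho \;\longrightarrow\; \frac{C\,\rho\,C^{\dagger}}{\operatorname{Tr}\!\big[C\,\rho\,C^{\dagger}\big]},
\qquad
C \;=\; \operatorname{Tr}_{\mathrm{CTC}}\!\big[\,U\,\big],
```

where U is the unitary coupling the system to the CTC degrees of freedom and the partial trace runs over those degrees of freedom. The normalization in the denominator is what makes the effective dynamics nonlinear, and it is this nonlinearity that underlies the enhanced computational power discussed in [1].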

Now Maccone, Lloyd, and I are considering game playing using this mathematical "time machine" model in order to remove bad Nash equilibria in Prisoner's Dilemma-type problems. The use of post-selected closed timelike curves ("P-CTCs") in game-theoretic settings may seem overpowered, but, as quantum physicists, it is a fun exercise to consider this question, one that involves cooperation with yourself, lending your most central decisions over to a quantum experiment. The age-old philosophical question of free will becomes centrally relevant, should it be possible to implement these closed timelike curves. Experiments using photons have already been done in a lab in Toronto. [2]

A scenario is the following, which I call post-selected games (for the moment). Alice and Bob play a game in which the rewards/penalties are only distributed if the result of a quantum measurement made at the end of the game is positive. To participate in the game, they ante up (i.e., pay a fee in order to play), so that they have an incentive for the post-selecting measurement to turn out positive. The post-selection allows effective P-CTCs: that is, conditioned on the measurement at the end being positive, A knows what B will do when she picks her own strategy, and vice versa. In the case of the Prisoner's Dilemma, this removes the bad Nash equilibrium, since if A defects, she knows that B will defect as well, while if A cooperates, she knows that B will also cooperate.
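
Here is a minimal classical toy model of such a post-selected game (the payoff matrix is the standard Prisoner's Dilemma; the assumption that the final measurement comes out positive only when the two moves agree is an illustrative stand-in for the actual quantum post-selection, not the protocol itself):

```python
# Toy post-selected Prisoner's Dilemma. Assumption (illustration only):
# the end-of-game measurement is "positive" only when Alice's and Bob's
# moves agree, so conditioning on a positive outcome perfectly
# correlates the two strategies.

PAYOFF = {  # (alice_move, bob_move) -> (alice_payoff, bob_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def postselected_value(alice_move):
    """Alice's payoff conditioned on the post-selection succeeding,
    i.e., restricted to outcomes where Bob's move matches hers."""
    outcomes = [(a, b) for (a, b) in PAYOFF if a == alice_move and a == b]
    return sum(PAYOFF[o][0] for o in outcomes) / len(outcomes)

for move in ("C", "D"):
    print(move, "->", postselected_value(move))
# C -> 3.0, D -> 1.0 : once the post-selection is conditioned on,
# cooperation strictly dominates, removing the bad Nash equilibrium.
```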

To this end, I have been looking at the quantum private query (QPQ) protocol primitive, since quantum private queries are natural maps for implementing anonymous quantum queries, whereby the agent being queried does not see or store the question, by construction of the quantum search. [3] I believe that using more general n-GHZ states ("quantum treasure maps") we could implement a contract that executes only if (and only when) all the involved parties decide to cooperate. This protocol might find a use-case application in the smart contracts of Ethereum. We could perhaps propose a quantum smart contract algorithm based on joint measurements of local quantum memories, i.e., quantum random access memories ("qRAMs").
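
As a small numerical sketch of why an n-party GHZ state can serve as a collective trigger (this shows only the correlation structure, not the QPQ primitive or any actual Ethereum integration; the measurement rule and the "executes only on unanimous 1s" condition are illustrative assumptions):

```python
import numpy as np

# Build an n-party GHZ state (|0...0> + |1...1>)/sqrt(2) and sample
# computational-basis measurements for all parties at once. Outcomes are
# perfectly correlated: every party sees 0 or every party sees 1, so a
# rule like "the contract executes only if all parties report 1" can
# only ever fire unanimously, never for a strict subset of the parties.

def sample_ghz_outcomes(n_parties, n_shots, seed=0):
    rng = np.random.default_rng(seed)
    state = np.zeros(2 ** n_parties)
    state[0] = state[-1] = 1 / np.sqrt(2)          # |00..0> + |11..1>
    probs = np.abs(state) ** 2                     # Born rule
    draws = rng.choice(len(state), size=n_shots, p=probs)
    # Unpack each sampled basis index into one bit per party.
    return [[(idx >> k) & 1 for k in range(n_parties)] for idx in draws]

shots = sample_ghz_outcomes(n_parties=4, n_shots=1000)
assert all(len(set(bits)) == 1 for bits in shots)  # always all-0 or all-1
executed = sum(all(bits) for bits in shots)
print(f"contract 'executes' in {executed}/1000 shots, always unanimously")
```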

References:

[1] S. Lloyd, L. Maccone, R. Garcia-Patron, V. Giovannetti, and Y. Shikano, Phys. Rev. D 84, 025007 (2011).
[2] S. Lloyd, L. Maccone, R. Garcia-Patron, V. Giovannetti, Y. Shikano, S. Pirandola, L. A. Rozema, A. Darabi, Y. Soudagar, L. K. Shalm, and A. M. Steinberg, Phys. Rev. Lett. 106, 040403 (2011).
[3] V. Giovannetti, S. Lloyd, and L. Maccone, "Quantum private queries: security analysis," IEEE Transactions on Information Theory 56(7), 3465-3477 (2010).

Natural Science of Computation, Talks

How Turing Put Us Into a Simulation – And How to Get Out of It (Talk at Princeton)

The mathematician’s misconception: The denial that proof is physical. 

Reality, for as long as anyone can remember, has always been a mix of empiricism and rationalism. A few poison mushrooms and you're gone; generalize further to colors and tastes.

With every century that goes by, mathematicians (and physicists who are themselves mathematicians) contribute functional theories, perspectives that afford us the luxury of shedding our ignorance, if only a bit further, and only with time; theories which ultimately have implications both for how we see the world and for how we build within it.

Personally, I believe academic philosophy departments ought to be contributing to clarifying model delineations in the foundations of physics. The age of phenomenology without physics as legitimate philosophy is over, unless you're a yoga teacher (I happen to be one; I am also, just as much, an analytic philosopher). It would be fantastic if academic philosophers had to connect to ever more foundational theories about the nature or shape of reality, let alone how we are located in it. The way to do this is through the sciences.

In the popular mind, the latest Western philosophy has become a sneaking sense many people share today: that we are a sort of new-age, mediocre Übermensch, living variations on the Matrix movie scenario with our own cast of characters, or, more colloquially, "in a simulation."

Extending my point above, this belief makes some sense as an import from the mathematics of the 1930s (Gödel, von Neumann, et al.), with the foremost thinker being Alan Turing, who ushered in a new era of extensive, groundbreaking work on computability.

Problems with this Matrix conception of ourselves arise, and, as with any good theory transitioning into something more accurate, the sense-making and the math alike lead to more questions than they answer. If we are a simulation, we are not a simulation of something at random. As Turing machines, we are specifically a simulation of a process whereby optimization processes are created and decision theory prevails. How do we find others like us? Where is the boundary of subjecthood? Is there only one of us, or are there multiple types of machines, trading memory for compute resources (and vice versa) in the quantum mechanical computer we call our universe, up until the event horizon?

The bits that make up the world are placed there with shared memories from different perspectives. Qubits (quantum superpositions of states of the wave function in a quantum mechanical Hilbert space) are the full set of bits across those perspectives. Are all of the universe's bits (i.e., measured qubits) parts of qubits?

Today, we are attempting to understand whether hardware can be created, within the set of physical laws we live under, that allows true search through vast numbers of possible physics.

Do we have enough computation to look at some unspeakable number of universes and see what sorts of optimization targets emerge in them? Say you try to find yourself in the searches executed by those optimization targets. Inside a mathematical universe, can you find algorithms (Turing machines, optimization processes, evolutionary states, with various constraints on run-time), à la the Church-Turing thesis, that are thinking about you? The apparent "you," as it might be specified within a wave function, is not well-defined.

Artificial General Intelligences (AGIs) are optimization problems; however, most optimizations do not preserve your existence. This does not seem to be a coincidence. We shouldn't have blundered into this much sentient leverage: by this I mean everyone who is alive, though not equally!

So, as a designer of a simulation, you want to be careful that whatever optimization emerges out of this world is about "you". Regardless of what "you" are, most optimizations probably don't preserve it. This talk is a speculation about looking for those computational processes, the ones that build AGIs, which have preferences you can satisfy, as a kind of trade partner in a computational universe. Is there a natural structure to our search?