A personal account of solving the Three Gods Puzzle

This is the first time I’ve heard of the Three Gods Puzzle, and I have decided to log my thought process as I attempt to solve it. I don’t know if I will solve it; I have never been that good with logic puzzles. My goal is to be able to look back at it and see whether what I attempted was useful/relevant/interesting. Recording my thoughts is hard for me; they are often too fleeting to notice. Of course, recording my thoughts will necessarily affect my thought process. We’ll see what happens.

The puzzle: Three gods A, B, and C are called, in no particular order, True, False, and Random. True always speaks truly, False always speaks falsely, but whether Random speaks truly or falsely is a completely random matter. Your task is to determine the identities of A, B, and C by asking three yes-no questions; each question must be put to exactly one god. The gods understand English, but will answer all questions in their own language, in which the words for yes and no are da and ja, in some order. You do not know which word means which.

It looks like there is a subtask: to identify the meaning of the words da and ja. Though it is possible that the puzzle is solvable even without doing that (how?). In any case, we start with 6 possibilities:

G = {AT BF CR, AT BR CF, AF BT CR, AF BR CT, AR BT CF, AR BF CT}.

This is log2(6) ≈ 2.58 bits of information, less than 3 bits, so if we gain at least one bit per question, the puzzle can be solved. I wonder if it is even possible to gain more than one bit of information from a Yes/No answer?

Wait, in addition, there are two more possibilities:

Language = {daYes jaNo, daNo jaYes}.

They appear to be independent of the 6 above. This brings the total to 12 possibilities from the product set

GL = G x L,

which is log2(12) ≈ 3.58 bits, more than 3. Assuming only one bit per question can be gained, we cannot hope to figure out both the god assignment and the language assignment. So probably at the end of the day we will not know the language assignment precisely. Hmm, something is fishy here. Once the problem is solved, and we know who is who, we should be able to retrospect and see which answers were false and which true, right? If so, at least some answers should yield more than 1 bit! Or our questions are posed in a way which yields information without ever determining the language assignment. I recall something that looks like an inverse of this, actually, with just two people, a truth-teller (T) and a liar (L). Asking either of them if the other person lies yields “Yes”. And asking them if they themselves lie always yields “No”. Kind of useless, but maybe it can be inverted. What does it mean? Something like: a question that tells you one person is a liar, without your knowing whether the answer means Yes or No?

OK, time to simplify. The original TL puzzle ought to be solvable with one question; what is it? The liar must answer such a question differently from the truth-teller. This is easy: just ask something you know the answer to, like “Is 1+1=2?”.

What’s next?

An aside to think through: is there a way to separate R from the rest? For example, is there a question which R always answers differently than T or L? Actually, R’s answers are not truly random:

Whether Random speaks truly or not should be thought of as depending on the flip of a coin hidden in his brain: if the coin comes down heads, he speaks truly; if tails, falsely. 

This means that R randomly mimics T or F, rather than providing a random answer! Thus R will necessarily answer “Yes” to a question which both T and F answer “Yes” to. So R’s answers are not random, only its mode of operation is.

If I can devise such a question, I can figure out the language assignment. But that still leaves me with 6 choices, just two questions, and no idea about the god assignment. Seems like a bad idea. Probably I should instead try to figure out a question which partitions the 12 cases into two parts with different god assignments.

Another toy problem to set up: is it ever possible to discriminate between 3 possibilities with one Yes/No question? Not if getting an answer is equivalent to partitioning the set of possibilities into two disjoint subsets. Can’t think of any other possibility.

So back to the original puzzle: what would be the initial or the final partition of the set GL? Let’s try the dynamic programming approach: working backwards. An example of the final subset could be

{AR BT CF, AR BF CT} x L.

Can this situation be resolved with one question, but without knowing the language assignment? Here we already know that A=R, so the hard part is presumably done. All we have left is the toy problem {AT BF, AF BT} in a slightly harder version, with unpartitioned L. … This is hard. If I have no clue whether the answer means Yes or No, how can I differentiate based on it? How do we get information from an answer that can mean either Yes or No?

Is there another final partition? If so, it is probably not of the form {something} x L. What other forms are there? Here is an example: {AR BT CF da, AR BF CT ja}. This is not a direct product! And getting an answer lets us perform the final partition without knowing the language assignment. So the next questions to ponder are:

  • which two elements G1 and G2 of G are the best candidates for the subset {G1 da, G2 ja}?
  • what type of questions lets us receive da or ja for such a subset? Heck, what type of question lets us discriminate without knowing what the answer means?
  • is there a different version of the Question 3 subset, other than {G1 da, G2 ja}?

But first, let’s spell out what G1 da means: receiving “da” as the answer to the (yet unknown) question means G1. Maybe we can short-circuit the set L by making questions self-referential? Like “would you answer “ja” to the <question>?”. Let’s see where this leads.

…Returning to this after a couple of days… Let’s first solve the puzzle with yes/no instead of da/ja. We can apparently force any of the 3 to answer truthfully with a self-referential “would you answer yes to the question …”. How many questions would it take to figure out who’s who? If 2 is enough, then the above partition logic is flawed, as we would have to gain more than 1 bit per question.

Another toy problem; forget lying, since we can deal with that. Let’s assume there are 3 people {A,B,C} hiding 3 numbers {1,2,3}. All are known to be truthful, but will only answer a yes/no question. How many questions are needed to figure out the assignment? Something simple like “A: are you hiding 1?” or “A: are you hiding 1 or 2?”. First one: partitions are Yes: {A1(B2 C3, B3 C2)}, No: {A2(B1 C3, B3 C1), A3(B1 C2, B2 C1)}, so 2 and 4 choices. Second one: partitions are Yes: {A1B2C3, A1B3C2, A2B1C3, A2B3C1}, No: {A3B1C2, A3B2C1}, so 4 and 2 choices.

Just to be sure, I should try to find a 3-and-3 partition question. Let’s start by figuring out the equivalent partitions. Let’s simplify the notation by making the ABC assignment implied by position. Then the choices are {123, 132, 213, 231, 312, 321}. This is a permutation group. Let’s separate the cyclic subgroup containing 123: {123, 312, 231}. The remainder, {321, 132, 213}, is its coset (the odd permutations). It is certainly possible to ask a question based on that, e.g. “Is your joint assignment an even permutation of 123?”. So we can narrow the choices down to 3 with one question.
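
As a sanity check of this 3-and-3 split, here is a throwaway Python sketch (my own; the helper name `parity` is mine) showing that the even/odd-permutation question separates the cyclic subgroup from its coset:

```python
from itertools import permutations

def parity(p):
    # parity of a permutation = number of inversions mod 2
    inversions = sum(p[i] > p[j]
                     for i in range(len(p))
                     for j in range(i + 1, len(p)))
    return inversions % 2

evens = [''.join(p) for p in permutations('123') if parity(p) == 0]
odds  = [''.join(p) for p in permutations('123') if parity(p) == 1]
print(evens)  # ['123', '231', '312'] -- the cyclic subgroup
print(odds)   # ['132', '213', '321'] -- its coset
```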

OK, it does not look like there is some magical way to squeeze more than one bit of information per question. So, based on that, I need to do:

  • solve the 123 problem (easy)
  • figure out how to avoid knowing the language (harder)

Let’s start with the easy problem: 3 truthful people {A,B,C} hiding 3 numbers {1,2,3}. The questions are, for example:

  1. A: Is your number 1? Partitions: Y: {123, 132}; N: {213, 231, 312, 321}.
  2. If Y, ask B: Is your number 2? Partitions: YY: 123; YN: 132. If N, ask A: Is your number 2? Partitions: NY: {213, 231}; NN: {312, 321}.
  3. B: Is your number 1? Partitions: NYY: 213; NYN: 231; NNY: 312; NNN: 321.
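
Here is that decision tree as a Python sketch (my own hypothetical encoding; a dict stands in for the three truthful people):

```python
from itertools import permutations

def solve_truthful(hidden):
    # hidden: e.g. {'A': 1, 'B': 2, 'C': 3}; everyone answers truthfully
    ask = lambda person, n: hidden[person] == n  # one yes/no question

    if ask('A', 1):                                     # Y
        return {'A': 1, 'B': 2, 'C': 3} if ask('B', 2) \
          else {'A': 1, 'B': 3, 'C': 2}                 # YY / YN
    if ask('A', 2):                                     # NY
        return {'A': 2, 'B': 1, 'C': 3} if ask('B', 1) \
          else {'A': 2, 'B': 3, 'C': 1}                 # NYY / NYN
    return {'A': 3, 'B': 1, 'C': 2} if ask('B', 1) \
      else {'A': 3, 'B': 2, 'C': 1}                     # NNY / NNN

# every one of the 6 assignments is recovered in at most 3 questions
for perm in permutations([1, 2, 3]):
    hidden = dict(zip('ABC', perm))
    assert solve_truthful(hidden) == hidden
```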

Next step: allow for consistent or random liars: rephrase each question as “If I asked you whether your number is <n>, what would you say?”

Second part: language-independence. Let’s try 2 truthful people {A,B} hiding 2 numbers {1,2}, able to answer ja (j) or da (d), where the bijection from {j,d} to {yes, no} is not specified. We will attempt to incorporate the potential answer into the question. For example, “would you answer j to ‘my number is 1’?”. Two cases. If j=yes: “Would you answer yes to ‘my number is 1’?” The answer is Yes if it’s 1, No if it’s not 1; that is, j if 1, d if not 1. If j=no: “Would you answer no to ‘my number is 1’?” The answer is No if it’s 1, Yes if it’s not 1; again, j if 1, d if not 1. This works!
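
The same four cases (both numbers, both languages) can be tabulated mechanically. A small Python sketch with my own encoding; `ja_means_yes` stands for the unknown language assignment:

```python
def reply(number, ja_means_yes):
    # Truthful reply to: 'Would you answer "ja" to "Is your number 1?"'
    inner_yes = (number == 1)                   # truthful answer to the inner question
    would_say_ja = (inner_yes == ja_means_yes)  # which word that answer comes out as
    # the outer answer is yes iff would_say_ja, and is then spoken as a word
    return 'ja' if would_say_ja == ja_means_yes else 'da'

for number in (1, 2):
    for ja_means_yes in (True, False):
        print(number, ja_means_yes, reply(number, ja_means_yes))
# 'ja' comes back exactly when the number is 1, whatever 'ja' means
```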

Now, let’s see if this can be transferred from AB->12 to AB->(honest,liar). The original question: “If I asked you whether you are honest, what would you say?” results in the answer Yes for Honest, No for Liar. Now incorporate shrouded answers: “If I asked you whether you are honest, would you answer with “ja”?” 4 cases: (Honest,Liar) x (ja=yes,ja=no)

  • Honest, ja=yes: the inner answer to “are you honest” is yes, so “would you answer ja?” gets a truthful yes = ja
  • Honest, ja=no: the inner answer is yes, so “would you answer ja (=no)?” gets a truthful no = ja
  • Liar, ja=yes: his inner answer to “are you honest” would be “yes”, so “would you answer ja (=yes)?” gets a lie: no = da
  • Liar, ja=no: his inner answer would be “yes”, so “would you answer ja (=no)?” is truthfully no, and he lies: yes = da

So this also works, we can tell the liar without knowing what “ja” means just from the reply being “da”. Now it’s time to put it all together as a solution to the original problem.
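
The same tabulation for the honest/liar pair, with the liar’s double lie cancelling out (again a hypothetical sketch; `shrouded_reply` is my own name for it):

```python
def shrouded_reply(is_liar, ja_means_yes):
    # Reply to: 'If I asked you whether you are honest, would you answer "ja"?'
    inner_yes = not is_liar        # truthful answer to 'are you honest?'
    if is_liar:
        inner_yes = not inner_yes  # the liar would claim to be honest
    would_say_ja = (inner_yes == ja_means_yes)
    reply_yes = (not would_say_ja) if is_liar else would_say_ja  # the liar lies again
    return 'ja' if reply_yes == ja_means_yes else 'da'

for is_liar in (False, True):
    for ja_means_yes in (True, False):
        print(is_liar, ja_means_yes, shrouded_reply(is_liar, ja_means_yes))
# the honest one always comes back 'ja', the liar always 'da'
```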

  1. Ask A: “If I asked you whether you are always honest, would you answer with “ja”?” Partitions: ja: (HFR, HRF); da: (FHR, FRH, RHF, RFH).
  2. If the answer was “ja”, ask B: “If I asked you whether you are a consistent liar, would you answer with “ja”?” Partitions: jaja: HFR; jada: HRF [and we don’t even need the 3rd question]. If the answer to 1 was “da”, then ask A: “If I asked you whether you are a consistent liar, would you answer with “ja”?” Partitions: daja: (FHR, FRH); dada: (RHF, RFH).
  3. (In the “da” branch) ask B: “If I asked you whether you are always honest, would you answer with “ja”?” Partitions: dajaja: FHR; dajada: FRH; dadaja: RHF; dadada: RFH.

This is basically identical to the 123 problem, after mapping 123->HFR.
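
To gain some confidence before checking the answer, here is an end-to-end check of the three-question strategy over every god assignment, both languages, and all of R’s coin flips (a Python sketch, all names mine; per the earlier observation about R’s mode of operation, R is modeled as mimicking T or F, with the coin fixed for the duration of each question):

```python
from itertools import permutations, product

def ask(mode, ja_means_yes, q_truth):
    # Reply to the self-referential question: 'If I asked you Q, would you
    # answer "ja"?', where q_truth is the true answer to Q.
    # mode is 'truth' or 'lie' (for R: its coin flip, fixed for this question).
    inner = q_truth if mode == 'truth' else not q_truth  # hypothetical answer to Q
    inner_is_ja = (inner == ja_means_yes)
    reply_yes = inner_is_ja if mode == 'truth' else not inner_is_ja
    # net effect: the reply is 'ja' iff the true answer to Q is yes, in both modes
    return 'ja' if reply_yes == ja_means_yes else 'da'

def identify(assignment, ja_means_yes, coin_flips):
    # assignment: e.g. {'A': 'H', 'B': 'F', 'C': 'R'}
    flips = iter(coin_flips)
    def q(god, prop):
        # prop 'H' asks 'are you always honest?', 'F' asks 'a consistent liar?'
        g = assignment[god]
        mode = 'truth' if g == 'H' else 'lie' if g == 'F' else next(flips)
        return ask(mode, ja_means_yes, g == prop)
    if q('A', 'H') == 'ja':                      # A is Honest
        b = 'F' if q('B', 'F') == 'ja' else 'R'
        return {'A': 'H', 'B': b, 'C': ({'F', 'R'} - {b}).pop()}
    a = 'F' if q('A', 'F') == 'ja' else 'R'      # A is False or Random
    b = 'H' if q('B', 'H') == 'ja' else ({'F', 'R'} - {a}).pop()
    return {'A': a, 'B': b, 'C': ({'H', 'F', 'R'} - {a, b}).pop()}

for perm in permutations('HFR'):
    truth = dict(zip('ABC', perm))
    for ja_means_yes in (True, False):
        for flips in product(('truth', 'lie'), repeat=3):
            assert identify(truth, ja_means_yes, flips) == truth
print("all 6 assignments x 2 languages x all coin flips: correctly identified")
```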

So, this solves the 3 Gods puzzle. Some notes:

  • How confident am I in this solution? Potential issues: maybe there is something wrong with using double negation to defeat the liar mode? Maybe I missed something in the mapping between 123 and HFR? Maybe there was a missed subtlety in the behavior of R, or in defeating the language barrier?
  • What are the odds that this solution is correct, in a sense that it would be accepted as correct by the person who posed it? I am feeling pretty confident, so probably 90%.
  • There is room for more choices, up to 8 total (2^3 outcomes of three yes/no answers), vs the current 6. How would one rephrase the problem to max out the number of options, requiring 3 questions every time, not sometimes 2?

OK, the moment of truth, time to check the answer in Wikipedia!

Hmm, looks like I got most of the features right, though there are multiple solutions. The following is the closest one to mine:

  • Q1: Ask god B, “If I asked you ‘Is A Random?’, would you say ja?”. If B answers ja, either B is Random (and is answering randomly), or B is not Random and the answer indicates that A is indeed Random. Either way, C is not Random. If B answers da, either B is Random (and is answering randomly), or B is not Random and the answer indicates that A is not Random. Either way, you know the identity of a god who is not Random.
  • Q2: Go to the god who was identified as not being Random by the previous question (either A or C), and ask him: “If I asked you ‘Are you False?’, would you say ja?”. Since he is not Random, an answer of da indicates that he is True and an answer of ja indicates that he is False.
  • Q3: Ask the same god the question: “If I asked you ‘Is B Random?’, would you say ja?”. If the answer is ja, B is Random; if the answer is da, the god you have not yet spoken to is Random. The remaining god can be identified by elimination.

There is also the qualification “in your current mental state” to elicit a useful response from R. Let’s see the partitions as presented in the above answers.

There is another feature I missed: “exploding god heads”, when the question posed cannot be answered either truthfully or falsely. Assuming that the gods have the option of not answering, this effectively provides us with three possible answers: Yes, No, and Silence. Thus each question lets us partition the set into 3, so two questions can distinguish up to 3^2 = 9 possibilities.

Conclusion: I don’t think I screwed up anywhere, and in some ways my analysis was more thorough than what’s quoted on Wikipedia (sometimes 2 questions are enough), but in other ways it was less thorough (I missed the non-reflective ways of solving it).

A harrowing attempt at editing and sharing Google Drive photos

So I have a gig or so of photos on my Google Drive (GD). Who doesn’t. I wanted to share some of them, after a wee bit of editing to make them presentable. What resulted was a journey through Google services much longer and more painful than I ever expected.

Initial stab at the problem: I’ll edit them in Google’s online photo editor, which I recall using at some point. Here is a click-by-click account of what happened.

  1. Selected a photo I want to share
  2. Looking for the Edit button… FAIL
  3. Huh. Right-click to check the context menu… FAIL: there is no context menu. Nothing. Madly right-click a few more times… still nothing.
  4. WTF? Everything has had a context menu since at least Windows 95. Good job, Google!
  5. Hmm, maybe there is a menu or a button in the grid view? Oh, there is… Looking for something like an EDIT… nope.
  6. OK, what other options are available? This is mildly promising: Open with… Google Drive Viewer or Google Docs. Which one should I try? Is there a difference? Deciding to try the Viewer thing.
  7. Opened photo in the Viewer. Yes! There is an Edit Menu!! Click… What a disappointment… Everything is greyed out except “Comment”. Bummer.
  8. Look around some more, notice a tiny half-cut pokemon-like icon of Google Desktop at the top right. Mouse over… OK, this is promising, “Discover apps that let you do more with this file”. Let’s discover! Click.
  9. A few apps are shown, the one called PicMonkey even has the word EDIT in the description. Maybe worth a try. Click!
  10. Install… Start… Wait, how come I’m not in the Google Drive but on the PicMonkey site? Oh, fine, how do I use it to edit my photos? Clicked on “Edit a photo”. Huh, it wants me to upload a photo. Look, if I wanted to upload, I’d edit it locally first with Irfanview, OK?
  11. Maybe installing this extension added some new buttons or menu items to GD for online editing? Let’s check.
  12. Nope, no change. Refresh? Nope, still nothing. OK, so much for the useless PicMonkey, get rid of this junk.
  13. Do I sound a bit frustrated? Yes, I imagine. No progress after 12 or so steps. Let’s try the other option, Google Docs. Click…
  14. Damn, this is your standard document editor, I should have known. So no dice here.
  15. Close everything, take a deep breath and think… How did I edit it before?

YES! I remember!! It was in Google Plus, not in Google Drive! Why wouldn’t mighty Google let me use the G+ image editor in GD? Is it that hard to make it available across two apps? Well, anyway, now at least the path is clear:

Second Try: Move my photos from GD to G+, then edit and share. This should be easy; surely Google is all about sharing and integrating G+ everywhere, right? Right? That’s gotta work. Let’s see.

  1. Look for a way to import/copy/share my GD photo folder with G+. Hmm… Where would I put it?
  2. Try context menu for the folder…  The only promising item is Share…
  3. That brings up the standard sharing dialog, nothing about G+. FAIL
  4. I’m kinda lost now, this should have been trivial, surely. Well, I’ll try to think like an IQ 160 Google programmer, not like a lowly user. Were I smart, how would I do it? Yeah, right. Probably should look in the least obvious place. Probably not even on GD. Oh! Right, maybe I have to start from the G+ site, not GD site. Surely (don’t call me… you know…) Google should make it easy to import my stuff from all kinds of places!

Third Try: Import my photos in G+ from GD, then… I don’t know, I’m not sure anymore.

  1. Go to Google Plus!
  2. Try Home->Photos->Albums
  3. What the heck? “Start Uploading Now“, which leads to the Open Files dialog on my local drive, not on GD. FAIL
  4. Maybe there is some “Import” option? Nothing suitable in any of the menus: What’s New Highlights Photos Albums.
  5. OK, Another FAIL. Time to google the Google. How ironic.

Fourth Try: Search for solutions online.

  1. Query: “Import photos from google drive to google plus”. Seems straightforward enough… First few hits look more like misses, but #4 seems promising: http://www.maketecheasier.com/share-photos-from-google-drive-to-google-plus/2013/04/22. It says, among other things: “At first glance, you will only see three options: add photos, create an album, and from instant upload. Move your mouse over the tiny arrow at the bottom of the list to bring up two more options: Take a Photo and from Google Drive.”
  2. OK, “tiny arrow”, really, Google? Fine, let me find it… Looking… Looking… Aaand another FAIL. My version of G+ does not have a “tiny arrow” I can mouse over.
  3. Back to sifting through the search results… Nothing much. There are some apps for importing from Picasa and Facebook… I am at a loss now.

And this concludes my failed attempt to edit and share stuff from my Google Drive online. I can still edit it locally and upload again, of course, or sync the local folder with GD and have it updated in the background, but apparently there is no way to do it online. Suggestions welcome!


Quotes and Notes on Scott Aaronson’s The Ghost in the Quantum Turing Machine

The paper is at http://arxiv.org/abs/1306.0159

p.6. On QM’s potentially limiting “an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems” : “In this essay I’ll argue strongly […] that we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question is yes, and other such worlds where the answer is no. And we don’t yet know which kind we live in.”

p. 7. “The […] idea—that of being “willed by you”—is the one I consider outside the scope of science, for the simple reason that no matter what the empirical facts were, a skeptic could always deny that a given decision was “really” yours, and hold the true decider to have been God, the universe, an impersonating demon, etc. I see no way to formulate, in terms of observable concepts, what it would even mean for such a skeptic to be right or wrong.”

“the situation seems different if we set aside the “will” part of free will, and consider only the “free” part.”

“I’ll use the term freedom, or Knightian freedom, to mean a certain strong kind of physical unpredictability: a lack of determination, even probabilistic determination, by knowable external factors. [..] we lack a reliable way even to quantify using probability distributions.”

p.8. “I tend to see Knightian unpredictability as a necessary condition for free will. In other words, if a system were completely predictable (even probabilistically) by an outside entity—not merely in principle but in practice—then I find it hard to understand why we’d still want to ascribe “free will” to the system. Why not admit that we now fully understand what makes this system tick?”

p.12. “from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.” — professional philosophers would do well to keep this in mind. Of course, once you break off such answerable part, it tends to leave the realm of philosophy and become a natural science of one kind or another. Maybe something useful professional philosophers could do is to look for “answerable parts”, break them off and pass along to the experts in the subject matter. And maybe look for the answers in the natural sciences and see how they help sculpt the “unanswerable riddles”.

p.14. Weak compatibilism: “My perspective embraces the mechanical nature of the universe’s time-evolution laws, and in that sense is proudly “compatibilist.” On the other hand, I care whether our choices can actually be mechanically predicted—not by hypothetical Laplace demons but by physical machines. I’m troubled if they are, and I take seriously the possibility that they aren’t (e.g., because of chaotic amplification of unknowable details of the initial conditions).”

p.19. Importance of copyability: “the problem with this response [that you are nothing but your code] is simply that it gives up on science as something agents can use to predict their future experiences. The agents wanted science to tell them, “given such-and-such physical conditions, here’s what you should expect to see, and why.” Instead they’re getting the worthless tautology, “if your internal code causes you to expect to see X, then you expect to see X, while if your internal code causes you to expect to see Y, then you expect to see Y.” But the same could be said about anything, with no scientific understanding needed! To paraphrase Democritus, it seems like the ultimate victory of the mechanistic worldview is also its defeat.” — If a mind cannot be copied perfectly, then there is no such thing as your “code”, i.e. an algorithm which can be run repeatedly.

p.20. Constrained determinism: “A form of “determinism” that applies not merely to our universe, but to any logically possible universe, is not a determinism that has “fangs,” or that could credibly threaten any notion of free will worth talking about.”

p.21: Bell’s theorem, quoting Conway and Kochen: “if there’s no faster-than-light communication, and Alice and Bob have the “free will” to choose how to measure their respective particles, then the particles must have their own “free will” to choose how to respond to the measurements.” — the particles’ “free will” is still constrained by the laws of Quantum Mechanics, however.

p.23. Multiple (micro-)past compatibilism: “multiple-pasts compatibilism agrees that the past microfacts about the world determine its future, and it also agrees that the past macrofacts are outside our ability to alter. […] our choices today might play a role in selecting one past from a giant ensemble of macroscopically-identical but microscopically-different pasts.”

p.26. Singulatarianism: “all the Singulatarians are doing is taking conventional thinking about physics and the brain to its logical conclusion. If the brain is a “meat computer,” then given the right technology, why shouldn’t we be able to copy its program from one physical substrate to another? […] given the stakes, it seems worth exploring the possibility that there are scientific reasons why human minds can’t be casually treated as copyable computer programs: not just practical difficulties, or the sorts of question-begging appeals to human specialness that are child’s-play for Singulatarians to demolish. If one likes, the origin of this essay was my own refusal to accept the lazy cop-out position, which answers the question of whether the Singulatarians’ ideas are true by repeating that their ideas are crazy and weird. If uploading our minds to digital computers is indeed a fantasy, then I demand to know what it is about the physical universe that makes it a fantasy.”

p.27. Predictability of human mind: “I believe neuroscience might someday advance to the point where it completely rewrites the terms of the free-will debate, by showing that the human brain is “physically predictable by outside observers” in the same sense as a digital computer.”

p.28. Em-ethics: “I’m against any irreversible destruction of knowledge, thoughts, perspectives, adaptations, or ideas, except possibly by their owner.” — E.g. it’s not immoral to stop a simulation which can be resumed or restored from a backup. (The cryonics implications are obvious.) “Deleting the last copy of an em in existence should be prosecuted as murder, not because doing so snuffs out some inner light of consciousness (who is anyone else to know?), but rather because it deprives the rest of society of a unique, irreplaceable store of knowledge and experiences, precisely as murdering a human would.” — Again, this is a pretty transhumanist view, see the anti-deathist position of Eliezer Yudkowsky as expressed in HPMoR.

p.29. Probabilistic uncertainty vs Knightian uncertainty: “if we see a conflict between free will and the deterministic predictability of human choices, then we should see the same conflict between free will and probabilistic predictability, assuming the probabilistic predictions are as accurate as those of quantum mechanics. […] If we know a system’s quantum state ψ, then quantum mechanics lets us calculate the probability of any outcome of any measurement that might later be made on the system. But if we don’t know the state, then ψ itself can be thought of as subject to Knightian uncertainty.”

On the source of this unquantifiable “Knightian uncertainty”: “in current physics, there appears to be only one source of Knightian uncertainty that could possibly be both fundamental and relevant to human choices. That source is uncertainty about the microscopic, quantum-mechanical details of the universe’s initial conditions (or the initial conditions of our local region of the universe)”

p.30. “In economics, the “second type” of uncertainty—the type that can’t be objectively quantified using probabilities—is called Knightian uncertainty, after Frank Knight, who wrote about it extensively in the 1920s [49]. Knightian uncertainty has been invoked to explain phenomena from risk-aversion in behavioral economics to the 2008 financial crisis (and was popularized by Taleb [87] under the name “black swans”).”

p.31. “I think that the free-will-is-incoherent camp would be right, if all uncertainty were probabilistic.” Bayesian fundamentalism: “Bayesian probability theory provides the only sensible way to represent uncertainty. On that view, ‘Knightian uncertainty’ is just a fancy name for someone’s failure to carry a probability analysis far enough.” Against the Dutch-booking argument for Bayesian fundamentalism: “A central assumption on which the Dutch book arguments rely—basically, that a rational agent shouldn’t mind taking at least one side of any bet—has struck many commentators as dubious.”

p.32. Objective prior: “one can’t use Bayesianism to justify a belief in the existence of objective probabilities underlying all events, unless one is also prepared to defend the existence of an “objective prior.””

Universal prior: “a distribution that assigns a probability proportional to 2^(−n) to every possible universe describable by an n-bit computer program.” Why it may not be a useful “true” prior: “a predictor using the universal prior can be thought of as a superintelligent entity that figures out the right probabilities almost as fast as is information-theoretically possible. But that’s conceptually very different from an entity that already knows the probabilities.”

p.34. Quantum no-cloning: “it’s possible to create a physical object that (a) interacts with the outside world in an interesting and nontrivial way, yet (b) effectively hides from the outside world the information needed to predict how the object will behave in future interactions.”

p.35. Quantum teleportation answers the problem of “what to do with the original after you fax a perfect copy of you to be reconstituted on Mars”: “in quantum teleportation, the destruction of the original copy is not an extra decision that one needs to make; rather, it happens as an inevitable byproduct of the protocol itself”

p.36. Freebit picture: “due to Knightian uncertainty about the universe’s initial quantum state, at least some of the qubits found in nature are regarded as freebits” making “predicting certain future events—possibly including some human decisions—physically impossible, even probabilistically”. Freebits are qubits because otherwise they could be measured without violating no-cloning. Observer-independence requirement: “it must not be possible (even in principle) to trace [the freebit’s] causal history back to any physical process that generated [the freebit] according to a known probabilistic ensemble.”

p.37. On existence of freebits: “In the actual universe, are there any quantum states that can’t be grounded in PMDs?” A PMD, a “past macroscopic determinant”, is a classical observable that would have let one non-invasively, probabilistically predict the prospective freebit to arbitrary accuracy. This is the main question of the paper: can freebits from the initial conditions of the universe survive till the present day and even affect human decisions?

p.38: CMB (cosmic microwave background radiation) is one potential example of freebits: detected CMB radiation did not interact with matter since the last scattering, roughly 380,000 years after the Big Bang. Objections: a) last scattering is not initial conditions by any means, b) one can easily shield from CMB.

p.39. Freebit effects on decision-making: “what sorts of changes to [the quantum state of the entire universe] would or wouldn’t suffice to … change a particular decision made by a particular human being? … For example, would it suffice to change the energy of a single photon impinging on the subject’s brain?” due to potential amplification of “microscopic fluctuations to macroscopic scale”. Sort of a quantum butterfly effect.

p.40. Freebit amplification issues: amplification time and locality. Locality: the freebit only affects the person’s actions, which then mediate all other influences on the rest of the world; i.e., no direct freebit effect on anything else. On why these questions are interesting: “I can easily imagine that in (say) fifty years, neuroscience, molecular biology, and physics will be able to say more about these questions than they can today. And crucially, the questions strike me as scientifically interesting regardless of one’s philosophical predilections.”

p.41. Role of freebits: “freebits are simply part of the explanation for how a brain can reach decisions that are not probabilistically predictable by outside observers, and that are therefore “free” in the sense that interests us.” A freebit could be just a noise source, one that “foils probabilistic forecasts made by outside observers, yet need not play any role in explaining the system’s organization or complexity.”

p.42. “Freedom from the inside out”:  “isn’t it anti-scientific insanity to imagine that our choices today could correlate nontrivially with the universe’s microstate at the Big Bang?” “Causality is based on entropy increase, so it can only make sense to draw causal arrows “backwards in time,” in those rare situations where entropy is not increasing with time. […] where physical systems are allowed to evolve reversibly, free from contact with their external environments.” E.g. the normal causal arrows break down for, say, CMB photons. — Not sure how Scott jumps from reversible evolution to backward causality.

p.44. Harmonization problem: backward causality leads to all kinds of problems and paradoxes. Not an issue for the freebit model, as backward causality can point only to “microfacts”, which do not affect any “macrofacts”. “the causality graph will be a directed acyclic graph (a dag), with all arrows pointing forward in time, except for some “dangling” arrows pointing backward in time that never lead anywhere else.” The latter is justified by “no-cloning”. In other words, “for all the events we actually observe, we must seek their causes only to their past, never to their future.” — This backward-causality moniker seems rather unfortunate and misleading, given that it seems to replace the usual idea of discovery of some (micro)fact about the past with “a microfact is directly caused by a macrofact F to its future”. “A simpler option is just to declare the entire concept of causality irrelevant to the microworld.”

p.45. Micro/Macro distinction: A potential solution: “a “macrofact” is simply any fact of which the news is already propagating outward at the speed of light”. I.e. an interaction turns microfact into a macrofact. This matches Zurek’s einselection ideas.

p.47 Objections to freebits: 5.1: Humans are very predictable. “Perhaps, as Kane speculates, we truly exercise freedom only for a relatively small number of “self-forming actions” (SFAs)—that is, actions that help to define who we are—and the rest of the time are essentially “running on autopilot.”” Also note “the conspicuous failure of investors, pundits, intelligence analysts, and so on actually to predict, with any reliability, what individuals or even entire populations will do”

p.48. 5.2: The weather objection: How are brains different from weather? “brains seem “balanced on a knife-edge” between order and chaos: were they as orderly as a pendulum, they couldn’t support interesting behavior; were they as chaotic as the weather, they couldn’t support rationality. […] a single freebit could plausibly influence the probability of some macroscopic outcome, even if we model all of the system’s constituents quantum-mechanically.”

p.49 5.3: The gerbil objection: if a brain or an AI is isolated from freebits except through a gerbil in a box connected to it, then “the gerbil, though presumably oblivious to its role, is like a magic amulet that gives the AI a “capacity for freedom” it wouldn’t have had otherwise,” in essence becoming the soul of the machine. “Of all the arguments directed specifically against the freebit picture, this one strikes me as the most serious.” Potential reply: the brain is not like the AI in that “In the AI/gerbil system, the “intelligence” and “Knightian noise” components were cleanly separable from one another. […] With the brain, by contrast, it’s not nearly so obvious that the “Knightian indeterminism source” can be physically swapped out for a different one, without destroying or radically altering the brain’s cognitive functions as well.” Now this comes to the issue of identity.

“Suppose the nanorobots do eventually complete their scan of all the “macroscopic, cognitively-relevant” information in your brain, and suppose they then transfer the information to a digital computer, which proceeds to run a macroscopic-scale simulation of your brain. Would that simulation be you? If your “original” brain were destroyed in this process, or simply anesthetized, would you expect to wake up as the digital version? (Arguably, this is not even a philosophical question, just a straightforward empirical question asking you to predict a future observation!) […] My conclusion is that either you can be uploaded, copied, simulated, backed up, and so forth, leading to all the puzzles of personal identity discussed in Section 2.5, or else you can’t bear the same sort of “uninteresting” relationship to the “non-functional” degrees of freedom in your brain that the AI bore to the gerbil box.”

p.51. The Initial-State Objection: “the notion of “freebits” from the early universe nontrivially influencing present-day events is not merely strange, but inconsistent with known physics” because “it follows from known physics that the initial state at the Big Bang was essentially random, and can’t have encoded any “interesting” information”. The reply is rather involved and discusses several new speculative ideas in physics. It boils down to “when discussing extreme situations like the Big Bang, it’s not okay to ignore quantum-gravitational degrees of freedom simply because we don’t yet know how to model them. And including those degrees of freedom seems to lead straight back to the unsurprising conclusion that no one knows what sorts of correlations might have been present in the universe’s initial microstate.”

p.52. The Wigner’s-Friend Objection: A macroscopic object “in a superposition of two mental states” requires freebits to make a separate “free decision” in each one, requiring 2^(number of states) freebits for independent decision making in each state.

Moreover “if the freebit picture is correct, and the Wigner’s-friend experiment can be carried out, then I think we’re forced to conclude that—at least for the duration of the experiment—the subject no longer has the “capacity for Knightian freedom,” and is now a “mechanistic,” externally-characterized physical system similar to a large quantum computer.”

p.55.  “what makes humans any different [from a computer]? According to the most literal reading of quantum mechanics’ unitary evolution rule—which some call the Many-Worlds Interpretation—don’t we all exist in superpositions of enormous numbers of branches, and isn’t our inability to measure the interference between those branches merely a “practical” problem, caused by rapid decoherence? Here I reiterate the speculation put forward in Section 4.2: that the decoherence of a state should be considered “fundamental” and “irreversible,” precisely when [it] becomes entangled with degrees of freedom that are receding toward our deSitter horizon at the speed of light, and that can no longer be collected together even in principle. That sort of decoherence could be avoided, at least in principle, by a fault-tolerant quantum computer, as in the Wigner’s-friend thought experiment above. But it plausibly can’t be avoided by any entity that we would currently recognize as “human.”

p.56. Difference from Penrose: “I make no attempt to “explain consciousness.” Indeed, that very goal seems misguided to me, at least if “consciousness” is meant in the phenomenal sense rather than the neuroscientists’ more restricted senses.”

p.57. “instead of talking about the consistency of Peano arithmetic, I believe Penrose might as well have fallen back on the standard arguments about how a robot could never “really” enjoy fresh strawberries, but at most claim to enjoy them.”

“the real issue is not whether the AI follows a program, but rather, whether it follows a program that’s knowable by other physical agents.”

“I’m profoundly skeptical that any of the existing objective reduction [by minds] models are close to the truth. The reasons for my skepticism are, first, that the models seem too ugly and ad hoc (GRW’s more so than Penrose’s); and second, that the AdS/CFT correspondence now provides evidence that quantum mechanics can emerge unscathed even from the combination with gravity.”

“I regard it as a serious drawback of Penrose’s proposals that they demand uncomputability in the dynamical laws”

p.61. Boltzmann brains: “By the time thermal equilibrium is reached, the universe will (by definition) have “forgotten” all details of its initial state, and any freebits will have long ago been “used up.” In other words, there’s no way to make a Boltzmann brain think one thought rather than another by toggling freebits. So, on this account, Boltzmann brains wouldn’t be “free,” even during their brief moments of existence.”

p.62. What Happens When We Run Out of Freebits? “the number of freebits accessible to any one observer must be finite—simply because the number of bits of any kind is then upper-bounded by the observable universe’s finite holographic entropy. […] this should not be too alarming. After all, even without the notion of freebits, the Second Law of Thermodynamics (combined with the holographic principle and the positive cosmological constant) already told us that the observable universe can witness at most ~10^122 “interesting events,” of any kind, before it settles into thermal equilibrium.”

p.63. Indexicality: “indexical puzzle: a puzzle involving the “first-person facts” of who, what, where, and when you are, which seems to persist even after all the “third-person facts” about the physical world have been specified.” This is similar to Knightian uncertainty: “For the indexical puzzles make it apparent that, even if we assume the laws of physics are completely mechanistic, there remain large aspects of our experience that those laws fail to determine, even probabilistically. Nothing in the laws picks out one particular chunk of suitably organized matter from the immensity of time and space, and says, “here, this chunk is you; its experiences are your experiences.””

Free will connection: take two heretofore identical Earths A and B in an infinite universe, about to diverge based on your decision; it is not possible for a superintelligence to predict this decision, not even probabilistically, because it is based on a freebit:

“Maybe “youA” is the “real” you, and taking the new job is a defining property of who you are, much as Shakespeare “wouldn’t be Shakespeare” had he not written his plays. So maybe youB isn’t even part of your reference class: it’s just a faraway doppelgänger you’ll never meet, who looks and acts like you (at least up to a certain point in your life) but isn’t you. So maybe p = 1. Then again, maybe youB is the “real” you and p = 0. Ultimately, not even a superintelligence could calculate p without knowing something about what it means to be “you,” a topic about which the laws of physics are understandably silent.” “For me, the appeal of this view is that it “cancels two philosophical mysteries against each other”: free will and indexical uncertainty”.

p.65. Falsifiability: “If human beings could be predicted as accurately as comets, then the freebit picture would be falsified.” But this prediction has “an unsatisfying, “god-of-the-gaps” character”. Another: chaotic amplification of quantum uncertainty must happen locally and on “reasonable” timescales. Another: “consider an omniscient demon, who wants to influence your decision-making process by changing the quantum state of a single photon impinging on your brain. […] imagine that the photons’ quantum states cannot be altered, maintaining a spacetime history consistent with the laws of physics, without also altering classical degrees of freedom in the photons’ causal past. In that case, the freebit picture would once again fail.”

p.68. Conclusions: “Could there exist a machine, consistent with the laws of physics, that “non-invasively cloned” all the information in a particular human brain that was relevant to behavior— so that the human could emerge from the machine unharmed, but would thereafter be fully probabilistically predictable given his or her future sense-inputs, in much the same sense that a radioactive atom is probabilistically predictable?”

“does the brain possess what one could call a clean digital abstraction layer: that is, a set of macroscopic degrees of freedom that (1) encode everything relevant to memory and cognition, (2) can be accurately modeled as performing a classical digital computation, and (3) “notice” the microscopic, quantum-mechanical degrees of freedom at most as pure random number sources, generating noise according to prescribed probability distributions? Or is such a clean separation between the macroscopic and microscopic levels unavailable—so that any attempt to clone a brain would either miss much of the cognitively-relevant information, or else violate the No-Cloning Theorem? In my opinion, neither answer to the question should make us wholly comfortable: if it does, then we haven’t sufficiently thought through the implications!”

In a world where a cloning device is possible the indexical questions “would no longer be metaphysical conundrums, but in some sense, just straightforward empirical questions about what you should expect to observe!”

p.69. Reason and mysticism. “but what do I really think?” “in laying out my understanding of the various alternatives—yes, brain states might be perfectly clonable, but if we want to avoid the philosophical weirdness that such cloning would entail,  […] I don’t have any sort of special intuition […]. The arguments exhaust my intuition.”

Facts which aren’t

I learned a long time ago that in an electromagnetic wave the electric and magnetic fields are perpendicular to each other (and to the direction of propagation):

http://upload.wikimedia.org/wikipedia/commons/3/35/Onde_electromagnetique.svg

But recently someone pointed out to me that this is not necessarily the case, and I did not believe them at first.

Here is their example:

E=E_0\cos(kz)\left(\cos(\omega t)\hat{x}-\sin(\omega t)\hat{y}\right)

B=\frac{E_0}{c}\sin(kz)\left(\cos(\omega t)\hat{x}-\sin(\omega t)\hat{y}\right)

(I may have screwed up the overall sign of the magnetic field. If so, pretend that this is a left-handed basis. 🙂) This is a circularly polarized electromagnetic wave “propagating” along the z direction. The trick which leads to the electric and magnetic fields being parallel is that they are shifted one quarter wavelength spatially.
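
Here is roughly how such a check can be done, as a minimal sympy sketch (my own; I take ω = ck and flip the overall sign of B relative to the formula above, per the caveat just made — with the sign as printed, the Faraday check comes out nonzero):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
E0, k, c = sp.symbols('E_0 k c', positive=True)
w = c * k  # dispersion relation: omega = c*k

# The fields above, with B's overall sign flipped (see the caveat):
E = sp.Matrix([ E0 * sp.cos(k*z) * sp.cos(w*t),
               -E0 * sp.cos(k*z) * sp.sin(w*t), 0])
B = sp.Matrix([-E0/c * sp.sin(k*z) * sp.cos(w*t),
                E0/c * sp.sin(k*z) * sp.sin(w*t), 0])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

print(sp.simplify(curl(E) + sp.diff(B, t)))         # Faraday's law: zero vector
print(sp.simplify(curl(B) - sp.diff(E, t) / c**2))  # Ampere's law (vacuum): zero vector
print(sp.simplify(E.cross(B)))                      # E x B = 0: the fields are collinear
```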

It took me some time to believe this, even after confirming that the expressions above satisfy the vacuum Maxwell equations. My mind was blown. Something I took as a true fact inescapably following from classical E&M turned out to be false. Even now I am not 100% sure; maybe I screwed up someplace. I even googled it, just to feel more comfortable, and found the following paper:

http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=147908:

Gray, J.E. Electromagnetic waves with E parallel to B. US Naval Surface Warfare Center, Dahlgren, VA, USA. 

I don’t have access to the full text of the paper (paywalls must die!), but it’s clear from the abstract that I am not the first person to be surprised by this.

I haven’t looked into this in any depth, but shifting the E and B fields relative to each other ought to require some rather special boundary conditions. I am not sure if it’s even possible to obtain this by reflecting a standard electromagnetic wave off some special boundary.

But the point of this post is not about some unusual boundary conditions; it is about how something I thought was true, and would probably have bet money on, ended up being false. And one such example is probably a good indication that there are others. Let me emphasize how unexpected this was. Now if someone tells me of an object accelerating away from another body due to their mutual gravitational attraction, I will no longer dismiss it as obvious nonsense.

There are still some “facts” which appear inviolate, however. Like, if someone claims to have constructed a perpetual motion engine out of cogs and chains, I will probably still not take them seriously.

But, to quote Tom De Marco’s The Deadline, “It’s not what you don’t know that kills you, it’s what you do know that isn’t so.” And so I remain worried and curious as to what other false facts I am absolutely sure of, to the degree where I don’t even see them as facts, but just the way things are. I ought to attempt to notice these transparent facts more often, and critically examine them. Maybe I’ll find something interesting.

Definition of Agency: SEPic Fail

So I was trying to figure out what agency is. Let’s try the usual starting point first:

Wikipedia:

agency is the capacity of an agent (a person or other entity, human or any living being in general, or soul-consciousness in religion) to act in a world

What does it mean “to act in a world”? Wikipedia, again:

Basic action theory typically describes action as behavior caused by an agent in a particular situation.

…And we are back to “what is agency?” (and what is behavior).

OK, let’s try SEP:

Donald Davidson [1980, essay 3] asserted that an action, in some basic sense, is something an agent does that was ‘intentional under some description,’ and many other philosophers have agreed with him that there is a conceptual tie between genuine action, on the one hand, and intention, on the other.

…So action is, again, something that an agent does. (SEP does not have a separate entry for agency.) Under Causation and Manipulability it admits to the potential circularity:

von Wright responds as follows:

The connection between an action and its result is intrinsic, logical and not causal (extrinsic). If the result does not materialize, the action simply has not been performed. The result is an essential “part” of the action. It is a bad mistake to think of the act(ion) itself as a cause of its result. (pp. 67–8)

Here we see a very explicit attempt to rebut the charge that an account of causation based on agency is circular by contending that the relation between an action (or a human manipulation) and its result is not an ordinary causal relation.

The next section is A More Recent Version of an Agency Theory (of causation). It muses about the difference between “free agency” and causation in the usual philosophical ways, with no clear definition ever given. Here is a typical sentence:

The idea is thus that the agent probability of B conditional on A is the probability that B would have conditional on the assumption that A has a special sort of status or history—in particular, on the assumption that A is realized by a free act.

Further about this “free act”:

(What “free act” might mean in this context will be explored below, but I take it that what is intended—as opposed to what Price and Menzies actually say—is that the manipulation of X should satisfy the conditions we would associate with an ideal experiment designed to determine whether X causes Y—thus, for example, the experimenter should manipulate the position of the barometer dial in a way that is independent of the atmospheric pressure Z, perhaps by setting its value after consulting the output of some randomizing device.)

So a free act is an act of an “observer”, who is by definition an agent. I don’t know how to read this section charitably. It all seems very circular to me.

OK, let’s try the next section: Causation and Free Action:

It seems clear, however, that whether (as soft determinists would have it) a free action is understood as an action that is uncoerced or unconstrained or due to voluntary choices of the agent, or whether, as libertarians would have it, a free action is an action that is uncaused or not deterministically caused, the persistence of a correlation between A and B when A is realized as a “free act” is not sufficient for A to cause B.

OK, so what’s a “free act”? Alas, there is no definition anywhere in there, not that I can find. There is a related notion of “intervention”, which is somehow different from “free action”:

The simplest sort of intervention in which some variable Xi is set to some particular value xi amounts, in Pearl’s words, to “lifting Xi from the influence of the old functional mechanism Xi = Fi(PAi, Ui) and placing it under the influence of a new mechanism that sets the value xi while keeping all other mechanisms undisturbed.” (Pearl, 2000, p. 70; I have altered the notation slightly). In other words, the intervention disrupts completely the relationship between Xi and its parents so that the value of Xi is determined entirely by the intervention. Furthermore, the intervention is surgical in the sense that no other causal relationships in the system are changed. Formally, this amounts to replacing the equation governing Xi with a new equation Xi = xi, substituting for this new value of Xi in all the equations in which Xi occurs but leaving the other equations themselves unaltered. Pearl’s assumption is that the other variables that change in value under this intervention will do so only if they are effects of Xi.

Bummer, so an intervention is also defined in terms of some external “lifter” who “disrupts completely the relationship” and “replaces equations”. And that external lifter is presumably an agent. So we are, again, back to square one: what is an agent? Appropriately, the next section is called Is Circularity a Problem?:

Suppose that we agree that any plausible version of a manipulability theory must make use of the notion of an intervention and that this must be characterized in causal terms. Does this sort of “circularity” make any such theory trivial and unilluminating?

No, of course not, says the article, but how it justifies this conclusion is not at all clear to me. It muses about non-reductionism and how (emphasis mine)

 Whether one regards the verdicts about these cases [something about a “failure of a gardener to water his plants”] reached by causal process accounts or by interventionist accounts as more defensible, the very fact that the accounts lead to inconsistent judgments shows that interventionist approaches are not trivial or vacuous, despite their “circular”, non-reductive character.

I don’t understand how they came to this conclusion, but fine, let’s see if there is anything at all that I can salvage from this entry. Another section talks about Interventions That Do Not Involve Human Action (emphasis mine):

a purely “natural” process involving no animate beings at all can qualify as an intervention as long as it has the right sort of causal history—indeed, this sort of possibility is often described by scientists as a natural experiment. Moreover, even when manipulations are carried out by human beings, it is the causal features of those manipulations and not the fact that they are carried out by human beings or are free or are attended by a special experience of agency that matters for recognizing and characterizing causal relationships. Thus, by giving up any attempt at reduction and characterizing the notion of an intervention in causal terms, an “interventionist” approach of the sort described under §§5 and 6 avoids the second classical problem besetting manipulability theories—that of anthropocentrism and commitment to a privileged status for human action.

So, it’s the “causal features” that matter, “not the fact that they are carried out by human beings”, yet they still have to be caused by human beings at some point? Yep, back to the missing definition of agency, yet again.

So, my attempt to understand the intuitively obvious concept of agency using the accessible philosophical resources was a complete flop.

Conditions for the Skynet Takeover, or Hostile Intelligence Explosion


Eliezer Yudkowsky has posted a report titled Intelligence Explosion Microeconomics, where he outlines his approach to the need for and directions of research into recursive self-improvement of machine intelligence. Below are my impressions of it.


Basically, the Skynet-type outcome, with humans marginalized and possibly extinct, is almost certain to happen if all of the following conditions come true:


  1. A general enough intelligence, likely some computer software, but not necessarily so, gains the ability to make a smarter version of itself, which can then make an even smarter version of itself, and so on, leading to the Technological Singularity.
  2. The ethics guiding this super-intelligence will not be anything like the human ethics (Nick Bostrom’s fancily named Orthogonality Thesis), so it won’t care for human values and welfare one bit (yes, it’s a pun).
  3. This superintelligence could be interested in acquiring the same resources people find useful (Nick Bostrom’s Instrumental Convergence Thesis). Another fancy name.


Now, what is this “general enough intelligence”? Yudkowsky’s definition is that it is an optimizer in various unrelated domains, like humans are, not just something narrow, like a pocket calculator, a Roomba vacuum or a chess program.


So what happens if someone very smart but not very ethical finds something of yours very useful? Yeah, you don’t have a chance in hell. Think of the animals on Earth: they exist at the rather questionable mercy of humans. So, there you go, Skynet.


Now, this is not a new scenario, by any stretch, though it is nice to have the Skynet takeover conditions listed explicitly. How to avoid this fate? Well, you can imagine a situation where one of the three conditions does not hold or can be evaded.


For example, Asimov dealt with condition 2 by introducing (and occasionally demolishing) his 3 laws of robotics, which are supposed to make machine intelligence safe for humans. Yudkowsky’s answer to this is the Complexity of Value: if you try to define morality with a few simple rules, the literal genie, trying to obey them most efficiently, will do something completely unexpected and likely awful. In other words, morality is way too complicated to formalize, even for a group of people who tend to agree on what’s moral and what’s not. The benevolent genie is easy to imagine, but hard to design.


Well, maybe we don’t need to worry, and the Orthogonality Thesis (condition 2) is simply wrong? That’s the position taken by David Pearce, who promotes the Hedonistic Imperative, the idea that “genetic engineering and nanotechnology will abolish suffering in all sentient life”. In essence, even if it appears that intelligence is independent of morality, a superintelligence is necessarily moral. This description might be a bit cartoonish, but probably not too far from his actual position. Yudkowsky does not have much patience for this, calling it wishful thinking and a belief in the just universe.


It might be a happy accident if condition 3 is false. For example, a super-intelligence might invent FTL travel and leave the Galaxy, or invent baby universes and leave the Universe, or decide to miniaturize to the Planck scale, or something. But there is no way to know, so no reason to count on it.


Condition 1 is probably the easiest of the three to analyze (though still very hard), and that’s the subject of the Intelligence Explosion Microeconomics report. The goal is to figure out whether “investment in intelligence” pays enough of a dividend for a chain reaction, or whether the return remains linear or maybe flattens out completely at some point. Different options lead to different dates for the intelligence explosion to “go critical”, and the range varies from 20 years from now, to centuries, to never. The latter might happen if, somehow, human-level intelligence is the limit of what a self-improving AI can do without human input.

The majority of the report on the “microeconomics of intelligence explosion” goes through the variety of reasons why it may (or may not) occur sooner or later, and eventually suggests that there are enough examples of exponential increase in power with linear increase in effort to be very worried. The examples are chess programs, Moore’s law, economic output, and scientific output (this last one measured in the number of papers produced, admittedly a lousy metric). The report goes on to discuss the effects of the preliminary research on intelligence explosion on the policies of MIRI (Machine Intelligence Research Institute), and so is not overly interesting from the pure research perspective.

This seems to be the clearest outline yet of MIRI’s priorities and research directions related to the Intelligence Explosion. They also apparently do a lot of research related to condition 2, making a potential AGI more humane, but this is not addressed in this report.


Litany of a Bright Dilettante

So, one more litany; hopefully someone else finds it useful.

It’s an understatement that humility is not a common virtue in online discussions, even, or especially, when it’s due.

I’ll start with my own recent example. I thought up a clear and obvious objection to one of the assertions an expert in the subject area was making and started writing a witty reply. …And then I stopped. In large part because I had just gone through the same situation, but on the other side, dealing with some of the comments to my post about General Relativity and time-turners by those who know next to nothing about General Relativity. It was irritating, yet here I was, falling into the same trap. And not for the first time, far from it. The following is the resulting thought process, distilled to one paragraph.

I have not spent 10,000+ hours thinking about this topic in a professional, all-out, do-the-impossible way. I probably have not spent even one hour seriously thinking about it. I probably do not have the prerequisites required to do so. I probably don’t even know what prerequisites are required to think about this topic productively. In short, there are almost guaranteed to exist unknown unknowns which are bound to trip up a novice like me. The odds that I would find a clever argument contradicting someone who works on this topic for a living, just by reading one or two popular explanations of it, are minuscule. So if I think up such an argument, the odds of it being both new and correct are heavily stacked against me. It is true that they are non-zero, and there are popular examples of non-experts finding flaws in an established theory where there is a consensus among the experts. Some of them might even be true stories. No, Einstein was not one of these non-experts, and even if he were, I am not Einstein. But just because (almost) every lottery has a winner does not mean that buying a ticket is a smart decision.

And so on. So I came up with the following, rather unpolished mantra:

If I think up what seems like an obvious objection, I will resist assuming that I have found a Weaksauce Weakness in the experts’ logic. Instead I may ask politely whether my argument is a valid one, and if not, where the flaw lies.