# The Qual

or, How I Learned to Stop Thinking and Love the Tilt.

A Play in Three Acts.

Starring Leo Harrington, Ted Slaman, John Steel, and Luca Trevisan as themselves.

Introducing Patrick Lutz in the role of me.

## Prologue

I arrived at the room at 2 pm but a few people were finishing a seminar or something. So I wandered around the hallway for a minute or two and then ran into Luca. I introduced myself to him and told him that there were still people in the room. He said he guessed it would start on Berkeley time anyway. I said “I don’t know, I’ve never done this before,” and we both laughed a little (more out of awkwardness than genuine humor I think). We both stood there silently for a few minutes. Then he walked away, maybe to go to the bathroom. A few minutes later, Ted and Leo and John arrived and we entered the room. Once Luca came back, John said that it is traditional for the student to leave the room for a few minutes, so I did. Pretty soon after that, Luca came to tell me to come back.

## Recursion theory

Leo (I think): State and prove Rice’s theorem.

me: (I state and prove Rice’s theorem)

John: What is the Rice-Shapiro theorem?

me: It characterizes when index sets are r.e. Basically, it’s when membership is determined by values on a finite number of points. The theorem says that an index set is r.e. if and only if it consists of all functions extending some element of an r.e. set of finite partial functions.

Should I prove it?

John: No.

Leo: You have priority arguments on your syllabus. Let’s do an infinite injury argument. You have two examples written here. Which one would you rather do?

me: I have no preference.

Leo: Really?

me: Yes.

Leo: I would have a preference if I were you.

me: I can have a preference if you want me to.

Leo: One of them is easier. Do you really not have a preference?

me: I don’t.

Leo: Let’s do Sacks’ jump inversion then.

me: (long process of explaining Sacks’ jump inversion. A couple times I said something wrong and Ted or Leo would hint that it wasn’t quite right and I would fix it.)

Ted: Let’s move on to Kolmogorov complexity. What is prefix-free Kolmogorov complexity?

me: It’s easiest to start with plain Kolmogorov complexity. (I define it and then say what the difference is for prefix-free and what the motivation is)

Ted: What is the definition of Martin-Löf randomness in terms of Kolmogorov complexity?

me: (I state it)

Ted: How about the measure theory definition?

me: (I state it)

Ted: Pick one of them and show it implies the other.

me: Let’s see. I think one direction is much harder than the other. Well, not much harder, but somewhat harder. Let me figure out which one.

Ted: Why don’t you show that a failure of the Kolmogorov complexity definition implies a failure of the measure theory definition.

me: Okay. (I proceed to prove that direction. At one point I thought I had made a mistake and said “something seems kind of weird” but I just kept going and there was no mistake.)

Ted: Can you give an example of a random real?

me: What do you mean?

Ted: There’s a measure one set of random reals, so we know they exist. But can you give a specific example?

me: (I write something like $$\sum_x 2^{-K(x)}$$ and hem and haw a little about whether it works)

Ted: (seems a little skeptical) What about Chaitin’s Omega?

me: Okay, that’s $$\sum_{x \text{ codes a halting program}} 2^{-|x|}$$ where the programming language is still prefix-free.
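
[A sketch of why $$\Omega$$ actually is random: the first $$n$$ bits of $$\Omega$$ decide halting for every program of length at most $$n$$. Enumerate halting programs until their total weight exceeds the dyadic rational given by $$\Omega \upharpoonright n$$; any program of length $$\ell \le n$$ that has not halted by then never will, since it would contribute $$2^{-\ell} \ge 2^{-n}$$ and push the sum past $$\Omega \upharpoonright n + 2^{-n} > \Omega$$. With this halting information one can compute a string not output by any program of length at most $$n$$, hence of complexity greater than $$n$$. Since the whole computation uses only the $$n$$ bits of $$\Omega \upharpoonright n$$, this gives

$$K(\Omega \upharpoonright n) \geq n - c$$

for some constant $$c$$, which is exactly the Kolmogorov complexity characterization of randomness.]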

Ted: Okay, I think that’s enough recursion theory for me. Leo, do you want to ask anything else?

John: I think we should move on.

Ted: (They are about to move on when Ted looks over at Luca, who has been sitting quietly the whole time) Would you like to ask anything?

Luca: (with almost no hesitation) Yes.

Luca: Prove that there is no infinite r.e. set where every string has Kolmogorov complexity at least its length.

me: (momentarily surprised he is asking a question, and at first worry it’s hard. But then I write the question on the board, realize it’s easy, and do it)

Luca: You mentioned earlier that the sum of $$2^{-C(x)}$$ over all finite strings $$x$$ diverges. Can you prove this?

me: (at first not sure the best way to explain it, but then I do it)
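
[The point, roughly: there are $$2^n$$ strings of length $$n$$, and plain complexity satisfies $$C(x) \leq |x| + c$$ for some constant $$c$$, so

$$\sum_x 2^{-C(x)} = \sum_n \sum_{|x| = n} 2^{-C(x)} \geq \sum_n 2^n \cdot 2^{-(n + c)} = \sum_n 2^{-c} = \infty.$$

This is also why prefix-free complexity matters for defining $$\Omega$$: by the Kraft inequality, the analogous sum over a prefix-free domain converges.]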

Ted (or maybe Leo or John): Do you want to take a break?

me: (making a mistake) No, let’s just move on to set theory.

## Set Theory

Comment: I said a lot of stupid things in this part of the qual. The first question was not hard, but for some reason I became confused and had trouble regaining my composure. At a few points during this section of the qual I didn’t really think about the questions I was asked and instead just said the first thing that came to mind. For that reason, I don’t clearly remember the order and content of all the questions or my responses.

John: Show that the axiom of Replacement is independent of the other axioms.

me: The axiom of replacement says that every definable function whose domain is a set has range that’s also a set. So I need some model of the other axioms where there is a definable map whose range is not in the model.

me: (for some reason, at this point I started to feel confused, and I said a few basically trivial things)

me: (eventually) I guess it should be $$V_\alpha$$ for some limit $$\alpha$$. When $$\alpha$$ is a limit we get power set for free. Also comprehension. Wait, that might be wrong. And if $$\alpha$$ is greater than $$\omega$$ then we have infinity. And lots of the other axioms come for free too.

John: Wait, what’s the definition of comprehension?

me: Every definable subset of a set is actually a set. Well, we need that the formula defining it does not mention the set that’s being defined.

John: (confused) What?

me: (I write down the axiom of comprehension and eventually make it clear what I mean)

John: That’s called separation.

me: Really? What does comprehension mean then?

John: In comprehension there’s no big set.

Ted and Leo: Really?

John: Yes.

John: Can you write down the axioms of ZFC?

me: There’s some basic ones like pairing, union, empty set.

[…]

Then there’s infinity.

[…]

Also separation and replacement. And choice.

John: You’re missing a big one.

me: Oh, also foundation.

John: Okay, you were missing a big one and a little one.

me: (hem and haw for a little bit) Oh, I also forgot power set. Let me add it next to where I wrote infinity.

John: And which ones are satisfied at $$V_\alpha$$ for $$\alpha$$ a limit?

me: Pairing, union, empty set are basically free.

me: We get infinity if $$\alpha > \omega$$. And power set is for free. And choice is also free.

John: Can foundation ever fail in $$V_\alpha$$?

me: No.

me: For separation, I said earlier we have it, but I think that was wrong.

John: Why?

me: Well, satisfaction is defined over the entire universe so we could be adding new witnesses to existential formulas at higher levels and thus adding new definable subsets.

Ted or Leo: But where do those subsets get in?

me: (realizing I’m being very stupid) Oh, yeah, separation is also basically for free.

me: (laughs nervously)

John: How about in $$L_\alpha$$? When does separation fail there?

me: Well, it can fail if you add a new real above $$L_\omega$$.

So maybe take something that thinks it has $$\omega_1$$. Then when it finds out it doesn’t, a new real coding that order type must have been added.

John: Why should that happen at a limit stage?

me: Oh, I guess it might not. Let me see. Maybe I can take some $$L_\alpha$$ that believes it has $$\omega_1$$ and $$\omega_2$$ and take the first stage that realizes it didn’t have $$\omega_1$$.

John: Why is that stage a limit?

me: Oh, I guess it might not be.

John: If you push it a little more you can probably get it.

me: (start to say something about how you can do that, probably involving $$\omega_\omega$$ or something)

Leo: What’s a limit $$\alpha$$ where $$L_\alpha$$ doesn’t add a new real?

me: I guess take any countable $$\alpha$$ that believes it has $$\omega_1$$.

John: Why is that a limit?

me: Okay, take a countable $$\alpha$$ that believes it has $$\omega_1$$ and $$\omega_2$$ and take some limit before that which thinks the same thing is $$\omega_1$$.

Leo: Okay, so what kind of $$\alpha$$ doesn’t satisfy separation?

me: (hems and haws a little bit)

Leo: What kind of $$\alpha$$ aren’t like the one from the example you just gave?

me: (eventually I realize what Leo is trying to say) Okay, so I guess for instance $$L_{\omega\times 2}$$ works. Because you add a real at every stage before that and then you can take the join of all of them and that enters at stage $$\omega\times 2 + 1$$.

Leo: And what are those reals that enter at each stage up to that?

John: Okay, enough recursion theory masquerading as set theory.

Ted: From a fine structure guy like you?

John: Okay, so if for some limit $$\alpha$$, $$V_\alpha$$ satisfies replacement, what does that mean?

me: Oh, I guess that means it’s a model of ZFC, which is very bad.

John: There’s a name for regular cardinals $$\alpha$$ so that $$V_\alpha$$ satisfies ZFC. What is it?

me: (for some reason not thinking about it at all) Weakly inaccessible.

John: (notably silent)

Ted: Look, there’s two possibilities. You said one of them and no one is smiling.

me: Strongly inaccessible.

John: Take the first $$\alpha$$ so that $$V_\alpha$$ satisfies ZFC. Is it regular?

me: Yes.

John: No.

me: Oh, I can do a Lowenheim-Skolem argument and if it’s regular then the result is below $$V_\alpha$$.

John: If you look at that carefully you can get the cofinality of the first such $$\alpha$$.

me: Oh, I guess it has to be $$\omega$$ because the Lowenheim-Skolem argument only takes $$\omega$$ many steps.

John: What do you mean by that?

me: (I explain it a little bit more and say something about reflection)
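
[The argument I was reaching for, roughly: let $$\alpha$$ be least with $$V_\alpha \models \text{ZFC}$$. For each formula $$\varphi$$, the reflection argument carried out inside $$V_\alpha$$ shows that

$$C_\varphi = \{\beta < \alpha : \varphi \text{ is absolute between } V_\beta \text{ and } V_\alpha\}$$

contains a club in $$\alpha$$. If $$\text{cf}(\alpha) > \omega$$, the intersection of the countably many sets $$C_\varphi$$ would still be unbounded in $$\alpha$$, giving some $$\beta < \alpha$$ with $$V_\beta \prec V_\alpha$$ and hence $$V_\beta \models \text{ZFC}$$, contradicting minimality. So $$\text{cf}(\alpha) = \omega$$.]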

John: Okay, so what’s a model of ZFC minus replacement?

me: Okay, I guess I can take $$V_{\omega\times 2}$$.

John: Okay, let’s move on.

John: What is a Suslin tree?

me: A tree of height $$\omega_1$$ where every chain and antichain is countable.

John: What’s a Suslin line?

me: A linear order that in the order topology is c.c.c. but not separable.

John: Is that enough?

me: Well, I think the original version is a dense linear order without endpoints that is c.c.c. and connected but not isomorphic to $$\mathbb{R}$$. But that is equivalent to not being separable. And then you can drop some of the other conditions like connected and dense without causing any problems.

John: Okay. Why is this equivalent to Suslin trees?

me: To get a Suslin line from a tree, take the maximal paths to be points. For the other direction, you can build some tree using disjoint intervals (here I waffled around a little bit but said something more or less plausible and they didn’t make me explain)

John: In what situations is there or is there not a Suslin tree?

me: In $$L$$ there is a Suslin tree. Also, any time you have diamond. And you can force to remove Suslin trees. In particular, under $$\text{MA} + \lnot\text{CH}$$ there are no Suslin trees.

John: Is there any other way to get Suslin trees?

me: Yes, you can force to get them.

John: How do you do that?

me: Let’s see. I think the partial order should be countable trees. Also, maybe we need to require that every node in the tree has a descendant at every level in the tree. I think this is called being pruned. And they are ordered by extension.

John: What do you mean by extension?

me: Oh, yeah. It means that one tree only adds stuff on top of the other. We shouldn’t allow them to add stuff lower down because then the levels will not end up countable.

Extension is kind of confusing because it can mean other things. But I don’t know of any good name for this concept.

[The phrase I was looking for is probably something like “top extension” or “end extension.”]

John: Okay, so can you show that every condition can be extended?

me: This is only hard if the height of the tree is a limit ordinal. So suppose we’re in that case. For every node in the tree, we’ll form a path through the tree above that node. We can do this by picking a sequence of length $$\omega$$ that is cofinal in the height of the tree. And then for each thing in this sequence we extend to a node at that level. And then at the end we add a node above this entire path.

John: What other properties does this forcing have?

me: It’s countably closed. Because I can just take the union of any countable chain and it’s still countable and still pruned.

John: Why does it give you a Suslin tree?

me: It’s basically for free that it has height $$\omega_1$$ because $$\omega_1$$ is preserved.

So I need to show it doesn’t have chains or antichains of size $$\omega_1$$.

(At this point I waffled around a little bit. The problem was that I wanted to take some name for a chain and kill it but the things in the chain might not be decided yet. At some point John or Leo gave me some gentle encouragement)

me: Okay, so say $$f$$ is some name and $$T$$ forces that $$f$$ is a size $$\omega_1$$ chain. We can extend $$T$$ to a condition that forces whether each thing in $$T$$ is in or out of $$f$$. We do this $$\omega$$ many times and get a tree that forces whether each of its nodes is in or out of $$f$$. And this tree’s height can be assumed to be a limit. So we extend it while avoiding the path that $$f$$ defines through the tree and this kills $$f$$.

Okay, for antichains I can do something similar. I take $$A$$ a name for a maximal antichain and $$T$$ forcing $$A$$ is a maximal antichain. I can extend $$T$$ to a tree that forces whether each of its nodes is in or out of $$A$$. And I can also make it maximal in this tree. So then I can extend to get a tree that kills the antichain. Oh wait, I don’t think I even need to extend it again.

John: Why not? Why couldn’t more nodes enter the antichain later?

me: Oh, yeah I actually do. I extend the tree while keeping the antichain maximal so now it’s a bounded maximal antichain. So nothing else can enter $$A$$.

John: How do you extend it to keep the antichain maximal?

me: When I’m forming paths above each node, I first make sure the path gets into the antichain.

John: Okay. I see you have Solovay’s model on your syllabus. What is that?

me: Let me assume there is an inaccessible $$\kappa$$. Then I take the Levy collapse of $$\kappa$$ to $$\omega_1$$ and inside that I compute $$L(\mathbb{R})$$.

John: Okay, let’s just do showing that AC fails in Solovay’s model.

[I never wrote down the next part of the qual and I’ve long since forgotten the details. According to my notes, I was asked questions about the properties of the Levy collapse, the fact that the Levy collapse preserves the cardinal $$\kappa$$, and about why AC fails in Solovay’s model. I definitely remember flubbing this part quite a bit; I think I was still a bit off-balance after saying some pretty dumb things during the first set theory question.]

[Later…]

John: I see you have $$\mathbf{\Pi^1_1}$$ determinacy on your syllabus. What’s a consequence of $$\mathbf{\Pi^1_1}$$ determinacy?

me: You mean a consequence that’s not already a theorem of ZFC?

John: Yes.

me: Hmm, a lot of the consequences of determinacy that I know are already true for $$\mathbf{\Pi^1_1}$$ sets. There is one major consequence of $$\mathbf{\Pi^1_1}$$ determinacy that I know, but I don’t want to have to prove it: every real has a sharp.

John: What about regularity properties like Lebesgue measurability?

me: But Lebesgue measurability for $$\mathbf{\Pi^1_1}$$ is already true in ZFC.

John: What about Lebesgue measurability for things above $$\mathbf{\Pi^1_1}$$?

me: Let me write down the game. So the game I know for getting measurability works like this: player I plays codes for open sets of small measure and player II tries to play a real that’s in the set but avoids the open sets played by I. But the payoff set for this game has the same complexity as the original set.

John: You can get $$\mathbf{\Sigma^1_2}$$ measurability. But okay, why don’t you try the perfect set property?

me: The game for that one works like this: player I offers two finite sequences and then player II picks one of them. Then player I offers two extensions of the sequence picked and so on. At the end player I wins if the sequence formed is in the set. But this game’s payoff set also has the same complexity as the original set.

John: Is the perfect set property for $$\mathbf{\Pi^1_1}$$ sets provable in ZFC?

me: Yes.

John: How?

me: Okay, so let me use the tree representation of $$\mathbf{\Pi^1_1}$$ sets. For every $$\Pi^1_1(a)$$ set $$A$$ there is a computable-in-$$a$$ tree on $$\omega \times \omega$$ such that $$x$$ is in $$A$$ iff $$T_x$$ is well-founded. So I will assume $$A$$ is not countable and I need to build a perfect set in $$A$$.

John: How are you going to ensure that the set you build is in $$A$$?

me: Okay, so I’ll build paths through $$T_x$$ at the same time I’m building the $$x$$’s in the perfect set.

John: You’re building reals in a $$\Sigma^1_1$$ set.

me: Oh, you’re right.

John: Actually, it’s not true for $$\mathbf{\Pi^1_1}$$.

me: What?

John: It is not a theorem of ZFC that $$\mathbf{\Pi^1_1}$$ sets have the perfect set property.

me: Oh, really?

John: Yes. Can you say why?

me:

Leo: Where do bad things happen in descriptive set theory?

me: What do you mean?

Leo: I mean where are you likely to find counterexamples?

me: Huh?

Leo: In L.

me: Oh.

[Somehow this led into a discussion of a few things in descriptive and effective descriptive set theory. I don’t remember it very well, but I do remember that it involved something about the complexity of defining Kleene’s $$\mathcal{O}$$ over $$L_{\omega_1^{CK}}$$. At this point I was so confused about everything that I couldn’t answer that question at all without a lot of help. At the most embarrassing moment of the entire qual, I think that Ted tried to use hand signs to tell me what the level of complexity was (signalling $$\Sigma$$ vs. $$\Pi$$) but I was too confused to even understand what he was doing (and I’m still not entirely sure that he was doing that).]

Leo: In $$L$$, there’s a $$\mathbf{\Pi^1_1}$$ set that does not have the perfect set property.

me: Okay.

John: Is this on his syllabus? I guess not. Okay, fine.

Ted: (Mentions some example of a set in $$L$$ that is $$\mathbf{\Pi^1_1}$$ but does not have the perfect set property)

## Interlude

Comment: At this point I was pretty flustered. But it looked like we were about to move on to the next section right away, perhaps because I hadn’t wanted a break after the first section.

Luca: (Just as we are about to move on to the last section) Do you want to take a break?

me: (very grateful to Luca) Yes.

Ted: Okay, let’s meet back here in 5 minutes.

me: (I wander around the hallway a little, fill my cup at the drinking fountain, and come back to the room, where only Luca remains. He is staring out the window. We both stand there for a few minutes, awkwardly, saying nothing. Eventually, the others come back and we resume.)

## Computational Complexity

Luca: Define EXP, PSPACE and P/poly.

me: (I define them, very unsure of how formal to be)

Luca: What is the circuit definition of P/poly?

me: (I say it)

Luca: Show that PSPACE is in EXP.

me: (I do it)

Luca: Show that if EXP is in P/poly then EXP = PSPACE.

me: I think if EXP is in P/poly then the polynomial time hierarchy collapses, but maybe I shouldn’t use that.

Luca: By the way, does EXP contain P/poly?

me: No, P/poly contains uncomputable stuff.

Luca: Okay, so say you have some circuit that computes a problem in EXP.

me: I want to search for that circuit using PSPACE. But I need to be able to tell when I have the right one (thinks for a little while).

Luca: Okay, so say you have this circuit.

me: (more or less interrupting him) Okay, so I have some circuit that outputs each step in the EXP computation. No actually, it outputs each bit in each step [I no longer know what distinction I was trying to make when I amended “outputs each step” to “outputs each bit in each step”]. And then I can check that using PSPACE since I just need to make sure that each transition between steps is correct.
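
[Spelled out a bit more: fix $$L \in \text{EXP}$$ decided by a machine $$M$$ in time $$2^{p(n)}$$. The function taking $$(x, i, j)$$ to the $$j$$-th bit of the $$i$$-th configuration of $$M$$ on $$x$$ is itself exponential-time computable, so under the assumption it is computed by circuits of some polynomial size $$q$$. A PSPACE machine can loop over all circuits $$C$$ of size $$q(|x|)$$ and check, reusing space, that $$C$$’s configuration $$0$$ is the start configuration on $$x$$ and that for all $$i, j$$, bit $$j$$ of configuration $$i + 1$$ follows from the relevant bits of configuration $$i$$ by $$M$$’s transition function. Since $$M$$ is deterministic, any circuit passing every check encodes the true computation (by induction on $$i$$), so accepting iff some passing circuit’s final configuration accepts decides $$L$$ in polynomial space. Hence EXP $$\subseteq$$ PSPACE, and combined with PSPACE $$\subseteq$$ EXP this gives EXP = PSPACE.]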

Luca: That’s right.

Define NL.

me: Can you repeat that?

Luca: NL

me: NN?

Luca: NL.

me: RL?

Ted: (speaking very slowly and enunciating very clearly) N. L.

me: Oh, NL. (I define it)

Luca: Consider the following problem. You are given a graph and two vertices, $$s$$ and $$t$$, in that graph, along with a number $$k$$. You want to check if the shortest path connecting $$s$$ and $$t$$ has length $$k$$. Is this problem in NL?

me: Yes. First, I can check if $$s$$ and $$t$$ are connected by a path of length $$k$$ in NL just by guessing the path. So I need to show that I can also check if they are not connected by a path of length $$k - 1$$. Let me assume that I’ve added self-loops to the graph so that if there’s a path of length less than $$k$$ then there’s a path of length exactly $$k - 1$$ (this was not actually necessary, but whatever)

me: So I need to show that the following problem is in NL: Given a graph and vertices $$s$$ and $$t$$ and a number $$k$$, check that $$s$$ and $$t$$ are not connected by a path of length $$k$$. This is the Immerman-Szele-something theorem (I briefly and unsuccessfully try to pronounce it and then I describe the strategy of the proof of the theorem).
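
[The inductive counting at the heart of the theorem: let $$c_i$$ be the number of vertices reachable from $$s$$ by paths of length at most $$i$$, so $$c_0 = 1$$ and

$$c_i = \bigl|\{v : \exists u \, (d(s, u) \leq i - 1 \wedge (u = v \vee (u, v) \in E))\}\bigr|.$$

Given the true value of $$c_{i-1}$$, an NL machine can verify a claimed value of $$c_i$$: for each vertex $$v$$ in turn it guesses $$c_{i-1}$$ distinct vertices in increasing order, each with a guessed path from $$s$$ of length at most $$i - 1$$, and records whether any of them equals $$v$$ or is a neighbor of $$v$$; having certified all $$c_{i-1}$$ of them, it knows $$v$$’s status with certainty. Once $$c_{k-1}$$ is computed this way, the machine certifies that $$t$$ is not within distance $$k - 1$$ of $$s$$ by exhibiting $$c_{k-1}$$ distinct vertices reachable within $$k - 1$$ steps, none equal to $$t$$.]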

Luca: What is the Nisan-Wigderson construction?

me: (I outline the construction, starting with the definition of a combinatorial design and along the way say a few things about how this is intended to give pseudorandom generators from hard functions)

Luca: And why do we want pseudorandom generators?

me: (briefly talk about derandomizing BPP)

Luca: You mentioned that the Nisan-Wigderson construction uses hard-to-compute functions to get pseudorandom generators. Can you say something about how the proof of that works?

me: (I sketch the proof without giving much detail. But apparently that satisfies Luca.)

Luca: What is the expander mixing lemma?

me: (I state it) This basically says that in a graph with good expansion, the number of edges between any two subsets of the vertex set is close to what you would expect if you chose the edges randomly with the same density. Let’s say the graph is $$d$$-regular, so you choose an edge from $$u$$ to $$v$$ with probability $$d/n$$. And I count an edge twice if both of its endpoints are in both subsets.

Luca: In your statement of the expander mixing lemma, what is $$\lambda$$?

me: That’s the second eigenvalue of the normalized adjacency matrix if you order all the eigenvalues by their absolute values.

Luca: And this $$\lambda$$, what can it be? Can it be anything? Can it be $$1/d$$?

me: No, it shouldn’t be so low.

(I hesitate, unsure of how to prove that)

Luca: What happens if you take $$S$$ to be one vertex and $$T$$ to be all its neighbors in the expander mixing lemma?

me: Oh, okay, so on this side you get (I plug in $$1$$ for $$|S|$$ and $$d$$ for $$|T|$$ and $$d$$ for $$E(S, T)$$)

Luca: How about you work out the bound on $$\lambda$$ that you get from this example.

me: (I do and get essentially $$1/\sqrt{d}$$ assuming $$n$$ is much larger than $$d$$)
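
[The calculation: in the normalization I was using, the mixing lemma says

$$\left| e(S, T) - \frac{d \, |S| \, |T|}{n} \right| \leq \lambda \, d \, \sqrt{|S| \, |T|}.$$

Taking $$S = \{v\}$$ and $$T = N(v)$$ gives $$|S| = 1$$, $$|T| = d$$, and $$e(S, T) = d$$, so

$$d - \frac{d^2}{n} \leq \lambda \, d \, \sqrt{d}, \quad \text{i.e.} \quad \lambda \geq \frac{1 - d/n}{\sqrt{d}},$$

which is essentially $$1/\sqrt{d}$$ once $$n$$ is much larger than $$d$$.]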

Luca: What is the Lovász local lemma?

me: I guess that doesn’t really belong in this category. (For some reason, on my syllabus I put the algorithmic Lovász local lemma in the section on derandomization even though it doesn’t really make sense. But it didn’t fit anywhere else and it’s super cool. Oh well.)

Luca: (nods)

me: (I state the regular Lovász local lemma)

Luca: And this has some application to sparse instances of SAT?

me: (I go through the calculation to determine which instances of SAT the Lovász local lemma guarantees to be satisfiable)
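
[The calculation, roughly: a uniformly random assignment falsifies a given $$k$$-clause with probability $$p = 2^{-k}$$. If every clause shares a variable with at most $$D$$ other clauses, the symmetric local lemma applies as long as $$e \cdot p \cdot (D + 1) \leq 1$$, i.e.

$$e \cdot 2^{-k} \cdot (D + 1) \leq 1 \iff D \leq \frac{2^k}{e} - 1,$$

so any $$k$$-CNF in which each clause intersects at most $$2^k/e - 1$$ others is satisfiable.]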

Luca: But these satisfying assignments may be very rare, right?

me: Yes.

Luca: But there is some way to find them?

me: Should I describe the algorithm?