Friday, September 14, 2018

Extra dimensions, black holes, and vacuum decay, oh my


Sometimes you get an idea. And it's a fun idea, and it brings together a lot of cool weird things, and you think, "maybe this could actually work out." This post is the story of one such idea. It's also the story of a paper, "Bounds on extra dimensions from micro black holes in the context of the metastable Higgs vacuum," by me and Professor Robert McNees, just posted on the arxiv.

Note added 14 February 2019: See note at bottom of post for important science correction!

A Bubble of Quantum Death

There are several ways the Universe could end, some dramatic, some pathetic, some just outright weird. My personal favorite is vacuum decay, in which the Universe succumbs to an expanding bubble of unimaginable destruction that arises from what could be described as a manufacturer's flaw in the fabric of the cosmos.

I first encountered the idea of vacuum decay as a grad student reading up on some exotic dark matter models, and in my readings I veered into the territory of some of the classic works examining whether or not our Universe is really as stable as we think. The idea is that it's possible that the fundamental nature of the Universe, what we call the "vacuum state," might not be unique. The Universe could, in principle, be in other vacuum states with different constants of nature and totally unrecognizable laws of physics.

As an abstract concept, this might not be a big deal. Maybe there are other ways the Universe could have been set up -- so what? But here’s the problem: The Universe could transition from one vacuum state to another. And that would kill us all.

The picture looks like this: Maybe there are two possible vacuum states, one at a somewhat higher energy than the other. The higher energy one is called the "false vacuum," and the lower one the "true vacuum." If you're in a true vacuum, you're fine, and the Universe is stable. It's like living on the bottom of a valley -- there's nowhere to fall into. But if you're in a false vacuum, it's like being stuck in a little divot on the side of a cliff with the valley far below. A little bump could send you into the abyss.

There's a connection here to the Higgs field, a sort of energy field that pervades the Universe and is responsible for particles having mass. When I talk about "the vacuum state of the Universe," I’m referring to the Higgs vacuum -- it has to do with properties of the Higgs field. A few years ago, scientists at the Large Hadron Collider completed a decades-long effort to detect the Higgs boson, a particle associated with the Higgs field that finally filled in the missing piece of the Standard Model of Particle Physics. Unfortunately, that discovery came with some ominous news about the state of the Higgs vacuum and the stability of the cosmos.

In the 1970s, physicists started to explore the possibility that we live in a false vacuum -- a "metastable" universe that is precariously teetering on the edge of disaster. If an extremely high energy event happened somewhere in the Universe, it could kick the Higgs field over the metaphorical cliff and send that little part of the Universe into the true vacuum. Because the true vacuum is more stable than the false one, the transition would spread, creating a bubble of true vacuum within our space that would expand at the speed of light in all directions. This is called vacuum decay, and it's suuuuper fatal.

A diagram from Coleman & de Luccia 1980 showing a false vacuum (right-hand-side valley) and a true vacuum (left-hand-side valley). You can imagine our Universe as a ball sitting in the bottom of the right-hand-side valley, which could either be knocked over the hill into the other valley or tunnel through the barrier between them.
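
If you want to play with this picture yourself, here's a little Python sketch of a toy potential with the same shape: two valleys separated by a barrier, with one valley sitting slightly higher than the other. To be clear, this is a made-up quartic function chosen purely for illustration, not the actual Higgs potential.

```python
# A toy double-well potential with the shape shown in the figure:
# V(phi) = phi^4 - phi^2 + eps*phi. The small tilt eps makes one minimum
# (the false vacuum) sit slightly higher than the other (the true vacuum).
import numpy as np

eps = 0.2
phi = np.linspace(-1.2, 1.2, 2001)
V = phi**4 - phi**2 + eps * phi

# Find the two local minima numerically: points lower than both neighbors.
minima = [i for i in range(1, len(phi) - 1) if V[i] < V[i - 1] and V[i] < V[i + 1]]
for i in minima:
    print(f"vacuum at phi = {phi[i]:+.2f}, V = {V[i]:+.3f}")
# The higher minimum is the metastable false vacuum; the barrier between the
# two is what the field has to tunnel through (or be kicked over).
```

The tilt parameter is the whole story here: set eps to zero and the two vacua become equally stable, and there's nothing to decay into.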

If a vacuum decay bubble is headed your way, you have no idea, and you definitely don't see it coming. When something travels at the speed of light, it can't send a signal ahead -- the first clue that it's happened is that it's on top of you. And it's fatal in two ways. First, the bubble wall hits, carrying with it extreme energies that incinerate everything in its path. Second, once you're inside the bubble, you're in a kind of space that has different laws of physics. Your atoms don't hold together anymore, and you disintegrate immediately. Of course, you don't notice, because the bubble expands at the speed of light, and your nerve impulses travel far more slowly. So, that's a mercy.

In any case, vacuum decay is bad. And unfortunately, measurements of the properties of the Higgs boson suggest that our Universe is, in fact, metastable, and thus vulnerable to vacuum decay events. The bright side is that it would take an unimaginably huge amount of energy to kick us over the edge. We couldn’t do it ourselves, and we don't know of anything in the Universe that could. And, in fact, the current favored hypothesis for the early universe suggests that if vacuum decay could be triggered, it would have happened in the very early universe, so perhaps it can't happen at all.

Figure from Degrassi et al. 2012, showing the possible stability states of the Higgs vacuum based on measurements of the Higgs boson mass and top quark mass. The little dot in the rectangle is where it appears our Universe sits, in the metastability region.

Still, at the moment, we don't really know. Even worse, vacuum decay can happen in other ways. One particularly unsettling possibility is quantum tunnelling. Physicists discovered decades ago that particles can sometimes "tunnel" through barriers -- a phenomenon fundamental to quantum theory and put to practical use in everyday electronics (like flash memory cards). In the same way, it's possible, if extraordinarily improbable, that the Higgs field could tunnel right through the barrier keeping it from the true vacuum -- and that this could happen literally at any moment. But the probability is so small that, across the whole observable universe, chances are it won't happen over a timespan much longer than the current age of the Universe, 13.8 billion years. So, we're probably fine. Probably.


Death by Black Hole?

But there's another possibility. A couple of years ago, a group of physicists, Philipp Burda, Ruth Gregory, and Ian Moss, calculated that black holes could also trigger vacuum decay (you can see the paper here). It turns out that if you leave a black hole alone long enough (according to theoretical work pioneered by Stephen Hawking), it'll "evaporate" -- slowly disappearing by letting out a little bit of radiation over time. As it gets smaller and smaller, it evaporates faster and faster, eventually disappearing completely, possibly in a dramatic final explosion.

The black holes we know about in the Universe, which range from a few to millions of times as massive as our Sun, would take many, many orders of magnitude longer than the age of the Universe to evaporate -- something north of 10^64 years. But a tiny black hole could evaporate much more quickly. What the recent work showed was that when black holes are small enough that they're just about to finish evaporating completely, they can trigger vacuum decay. This is all theoretical of course, but if it's true, it tells us that primordial black holes -- tiny black holes that may have formed in the early universe -- would have to have been above a certain mass, or they'd have already destroyed us.
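
You can get a feel for these timescales from the standard leading-order Hawking formula, t = 5120 pi G^2 M^3 / (hbar c^4). Here's a quick back-of-the-envelope version in Python. Fair warning: this ignores the speed-up from emitting lots of particle species, so treat it as a rough sketch, not the calculation in our paper.

```python
# Leading-order Hawking evaporation time for a 4D black hole:
# t = 5120 * pi * G^2 * M^3 / (hbar * c^4), ignoring particle-species corrections.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
M_SUN = 1.989e30   # solar mass, kg
YEAR = 3.156e7     # seconds per year

def evaporation_time_years(mass_kg):
    """Evaporation time of a black hole of the given mass, in years."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4) / YEAR

print(f"Solar-mass black hole: {evaporation_time_years(M_SUN):.0e} yr")  # ~2e67 yr
print(f"1e12 kg primordial black hole: {evaporation_time_years(1e12):.0e} yr")
# The second one comes out to a few trillion years by this formula; including
# all the Standard Model particle species makes it considerably shorter.
```

The M^3 scaling is the key feature: halve the mass and the lifetime drops by a factor of eight, which is why the very end of the evaporation is so violent.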

Which brings us to our new paper. In this work, we show that the fact that tiny black holes haven't killed us all via vacuum decay can tell us something about the possibility that the Universe has more than three dimensions of space. It's a bit of an involved argument, but it's a fun one.

You may have heard people express concerns that the Large Hadron Collider might accidentally create a black hole that could swallow the Earth. (If you're worried about this, I recommend bookmarking the website www.hasthelargehadroncolliderdestroyedtheworldyet.com.) This can't happen for a lot of reasons, which I'll get into, so please don't panic. But scientists at the LHC actually have been hoping to create little black holes, because if they manage to make one (and watch it evaporate immediately), they'll learn something about the structure of space itself. Those little black holes can only form in the LHC if there are more dimensions of space than we can perceive. And that would be a very exciting discovery. In fact, the lack of production of little black holes at the LHC tells us that any extra dimensions that might exist have to be really, really small.

Now, I just told you that little black holes can be incredibly, universe-destroying-ly dangerous, so why should we not be worried about this? The reason is that while the LHC is the most powerful particle collider ever created by humans, it's NOT the most powerful particle collider in the Universe. Cosmic rays -- high-energy particles that frequently get spit out by the matter swirling around supermassive black holes in other galaxies -- reach much higher energies than anything the LHC can do, and they occasionally collide with each other out in space. Anything we can do, space can do better. So if the LHC could make little black holes capable of destroying the Universe via vacuum decay, the cosmos would have already done this, billions and billions of times over, all on its own.
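
To put numbers on "space can do better": the highest-energy cosmic rays ever observed carry around 10^20 eV. Here's a rough Python comparison using the standard fixed-target formula for the center-of-mass energy, sqrt(s) = sqrt(2 E m c^2). These are generic illustrative numbers, not the specific collisions we analyze in the paper.

```python
# Comparing the LHC to an ultra-high-energy cosmic ray hitting a proton
# that is more or less at rest. For a fixed target (relativistic limit),
# the center-of-mass energy is sqrt(s) = sqrt(2 * E_beam * m_target).
import math

m_p = 0.938        # proton rest energy, GeV
E_lhc = 13_000.0   # LHC center-of-mass energy, GeV (Run 2)
E_cr = 1e11        # a record-setting cosmic ray: 1e20 eV = 1e11 GeV

sqrt_s = math.sqrt(2 * E_cr * m_p)  # GeV
print(f"LHC:        sqrt(s) ~ {E_lhc / 1e3:.0f} TeV")
print(f"Cosmic ray: sqrt(s) ~ {sqrt_s / 1e3:.0f} TeV")  # ~430 TeV, ~30x the LHC
```

And that's just one cosmic ray hitting something sitting still; two cosmic rays colliding head-on get you vastly more energy still.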

But this is also an opportunity. Because just as we can place limits on how big extra dimensions are by not seeing black holes in the LHC, we can place even better limits on them by knowing that little black holes haven't been created out in space somewhere, because those collisions are at much higher energies. Generally speaking, we wouldn't notice if a cosmic ray collision in a galaxy several million light years away made a tiny black hole. But if Burda, Gregory, and Moss are correct that black hole evaporation can cause vacuum decay, then the production of tiny black holes would have extremely obvious consequences. Or rather: the fact that we're still here tells us it hasn't happened.

So in 2016, while Professor McNees and I were chatting at a meeting at Caltech, we had the idea that we could figure out what kinds of particle collisions should make black holes and therefore cause vacuum decay, and we could then use that to place really strong limits on how big extra dimensions can be. While we were working out the details, another paper, this one by Gregory, Moss, and some of their colleagues, mentioned this possibility and put forward some additional calculations showing that it's a reasonable thing to do. So we pressed on, did some quantitative calculations, and found that not only could we get good limits, we could get much better limits on extra dimensions than anything previously published.


Little Curled Up Dimensions

When we talk about the "size" of extra dimensions, what we mean is that if there are other dimensions of space (just, other, hard-to-imagine directions besides up/down, left/right, front/back), they have to be limited in extent, like they're curled up somehow. It's weird to imagine a dimension having a limit, but you can think of it like a sheet of paper where two dimensions are pretty large, but the third (the thickness) is much much smaller. If we lived on the sheet of paper, we could travel a long way up or down or across the page, but not very far into it. In a lot of theories of extra dimensions, the dimensions curl around on themselves, so if you go a little ways into them, you come back to where you started. Extra dimensions are a useful idea in physics because gravity might leak into them, kind of like ink seeping into the thickness of the page -- which could help explain why gravity is so much weaker than the other forces.

Some of the previous constraints on how big the extra dimensions can be come from things like measuring the force of gravity on very, very small scales, around a millimeter, to see if any of the gravity might be leaking out into other dimensions. Then there are the LHC black-hole-production searches, and some more complicated calculations involving how particles carrying gravity in higher-dimensional theories could affect neutron stars and supernovae. The limits on the size of the extra dimensions depend on how many extra dimensions you think we might have, but for two extra dimensions (the most commonly explored possibility), the limits are generally around a millimeter, from measurements of the strength of gravity. Our method pushes that down to a few hundred femtometers -- not much larger than an atomic nucleus. For other methods, the limits are usually expressed in terms of an energy scale connected with the higher-dimensional theory, and in those terms, our limits are five or more orders of magnitude better than anything previously published.
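
The connection between the size of the extra dimensions and that energy scale comes from the large-extra-dimensions (ADD-style) relation M_Planck^2 ~ M_*^(n+2) L^n. Here's the textbook estimate sketched in Python, dropping order-one geometric factors. The 1 TeV fundamental scale is an assumed benchmark for illustration, not a limit from our paper.

```python
# Size of n compactified extra dimensions for an assumed fundamental scale M_*,
# from the ADD-style relation M_Planck^2 ~ M_*^(n+2) * L^n, i.e.
# L ~ (1/M_*) * (M_Planck / M_*)^(2/n). Order-one factors dropped.
HBAR_C = 1.973e-16   # hbar*c in GeV*m; converts 1/GeV to meters
M_PLANCK = 1.22e19   # (non-reduced) Planck mass, GeV
M_STAR = 1e3         # assumed fundamental scale: 1 TeV, in GeV

for n in range(1, 7):
    L = (HBAR_C / M_STAR) * (M_PLANCK / M_STAR) ** (2.0 / n)  # meters
    print(f"n = {n}: L ~ {L:.1e} m")
# n = 1 gives ~1e13 m -- solar-system sized, obviously ruled out -- while
# n = 2 gives a millimeter or so, which is why tabletop gravity experiments
# are the relevant test there.
```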

Our limits on the size (L) of extra dimensions for different numbers of extra dimensions (n). The blue region is the region ruled out by our calculations, and the red dotted line is how big the extra dimensions would be in TeV-scale gravity, one of the most commonly discussed extra-dimension theories. 



Our limits on the fundamental (energy) scale in the extra dimension theories, for different numbers of extra dimensions (n). The blue area is ruled out. All the previous limits are much lower -- below the bottom of this plot (around 12 or 13 on the vertical axis). See the paper for more details.


The Fine Print

Of course, there are always caveats to any result in physics. In our case, our analysis depends on a few big assumptions: (1) that the vacuum is metastable (which is supported by measurements in the Standard Model of Particle Physics, but could be disproven if we get evidence of new kinds of physics that change the picture entirely), and (2) that black holes can indeed trigger vacuum decay. There are also some caveats around the energy scale at which the vacuum becomes metastable, and smaller assumptions about things like the way the extra dimensions are compactified (curled up) or how the cosmic-ray collisions happen. But we think this result is an exciting one, and it might present cool new opportunities to explore the stability and structure of our Universe. On the less fun side, this kind of work does suggest that there's really no way the LHC is ever going to make tiny black holes, which is a bummer, because those would be neat. But, if we do see black holes at the LHC, it'll be fantastic evidence that they can't destroy the Universe, which may be some consolation.

Theoretical physics is a journey, and sometimes it's a fun one that can take you to really wild places. I hope you've enjoyed taking this little diversion with me -- check out the paper if you want to get into the details, and hang out with me over on Twitter if you want to hear more about science, life, and weird new ways to destroy the Universe.


UPDATE: We Were Wrong About a Thing (but we fixed it)

Sometimes when talking about the self-correcting nature of science, scientists will say things like "Scientists love to be wrong! That's how we make progress!" This is only half true. It's fair to say that being wrong is how we make progress -- at least in the sense that disproving something is almost always easier than proving something, and so every time someone is able to demonstrate that a theory doesn't work, it gives us a clue about where to go next. (For a more nuanced discussion of this, see this Cosmos article.) But I don't think any of us really love being wrong. At least, I sure don't, and I've never heard anyone proudly announce at morning coffee, "My equations didn't work at all! I have to start over now!" Mostly, when it happens, we get a bit sad, put on a brave face, and try to fix it.

So we do what we can to avoid being wrong. In our case, with this paper, we double-checked all our calculations, read a bunch of references, and sent the draft out to a few expert colleagues for comments. When the comments came back positive, we figured we were on pretty solid ground, so we went ahead and put the paper on the arxiv, with the hope that we would get MORE comments that might help us see any issues before we sent it to the journal for publication. The good thing about doing it this way is that instead of just the one or two referees you usually get in the peer review process, this opened our paper up to review by the entire physics community, which, we figured, gave us the best chance of catching any remaining problems.

And the process worked! Within a week or so, we got an e-mail from another physicist who had seen our paper on the arxiv and who happened to have thought about some of these issues before. He reminded us of something we had forgotten to take into account, and he pointed out that, as a result of the oversight, those amazing limits we got... did not quite hold up.

The issue is a slightly subtle one, but the short version is that our analysis has to assume an instability scale -- an energy scale at which the Higgs becomes unstable -- and we can only draw conclusions about extra dimensions for models with a fundamental scale above that energy. That means that instead of ruling out a huge range of extra dimension models, including the most commonly discussed ones, we're only ruling out a somewhat narrow range of models with scales that hover around the Higgs instability scale. Or, equivalently, a narrow range of extra dimension sizes. So our results plots had to be updated, and our main conclusions revised.

Our revised limits on the size (L) of extra dimensions for different numbers of extra dimensions (n). The blue region is the region ruled out by our calculations, the light blue a less conservative version of our limits, and the red dotted line is how big the extra dimensions would be in TeV-scale gravity, one of the most commonly discussed extra-dimension theories. In this new version of the plot, we can't rule that out.


Our revised limits on the fundamental (energy) scale in the extra dimension theories, for different numbers of extra dimensions (n). The blue area is ruled out; the light blue possibly ruled out. As in the previous version of the plot, all the previous limits are much lower -- below the bottom of this plot (around 12 or 13 on the vertical axis). So we're leaving a big gap, but we're also ruling out models that were totally inaccessible before. 

After we made the revisions, we posted the revised version on the arxiv (the site makes it very easy and transparent to do this -- you can always access the previous versions, so everyone knows what's changed). When, after a little over two weeks, no one had contacted us with any more comments, we decided it was time to submit the paper, and we sent it to the journal Physical Review D. We settled in for what we expected to be a long wait and a series of back-and-forth revisions, but amazingly, after only a week in review, the journal contacted us to say the paper was accepted! The reviewer had only a short comment: "The paper reports an interesting analysis of black hole production in cosmic ray collisions and ensuing constraints on the fundamental scale. I am happy to recommend publication."

Despite the satisfaction of the speedy review and publication, I'll admit it was still a bit of a disappointment to learn that our limits were more, well, limited than we had originally thought. At first glance, our results had really seemed like they could change the conversation around extra dimensions, with the only major caveat being that you had to take seriously the metastability of the vacuum (which, of course, many do). But as it is, the result is still a pretty cool one. This method can tell us something about energy scales that we really seemed to have no way of accessing. What it can tell us may not be what we need to rule out the currently favored theories, but that doesn't mean it won't prove to be a fruitful new direction.

I'm still proud of our work, and I still think that what we did is interesting. We're in a totally different regime right now, and we don't know where things will lead. In physics, you can't always get what you want, but you never know -- it might turn out, someday, to be exactly what you need.

Wednesday, September 24, 2014

Breaking Down a Big Bang Breakthrough

I like clean, simple arguments. I like inescapable logical conclusions. And when those things appear in a paper telling us something new about the very early universe – even better.

Today, David Parkinson, a research fellow at the University of Queensland, came to Melbourne to give a talk on the paper he posted to the arxiv today with his colleagues Marina Cortes and Andrew Liddle. It was incredibly (and not coincidentally) well timed. Just two days ago, I spent most of the day grappling with the implications of the latest cosmic microwave background results from the Planck satellite, and now David was going to come and give a new perspective on it.

Marina Cortes
Andrew Liddle
David Parkinson

But let me back up. In March, researchers with a telescope at the South Pole – the BICEP2 collaboration – announced an astonishing new result. They were studying the cosmic microwave background (CMB, the afterglow of the Big Bang), and they presented what they said was the first evidence of primordial gravitational waves – ripples in the fabric of spacetime, from the first tiny fraction of a second after the Big Bang. The announcement made a HUGE splash. There were articles in all the major newspapers and magazines hailing the discovery as the start of a new era in cosmology, and in some cases even as proof of cosmic inflation.

Inflation is a scenario in which the early universe, in the first billionth of a billionth of a billionth of a billionth of a second after the moment of creation, expanded extremely rapidly, growing by many orders of magnitude in size. The theory has been around since the 1980s, first proposed by Alan Guth and developed by Andrei Linde, Paul Steinhardt, and others. One of the key predictions of inflation is that it would produce gravitational waves, and in principle these could be seen as little swirls in the pattern of polarization of the CMB.

The CMB is one of the strongest pieces of evidence that the Big Bang happened at all, and by studying its light we can learn a huge amount about what the early universe looked like. Some of that light is polarized, meaning it is preferentially oriented one way or another when it reaches the detector. Patterns in the polarization can show us traces of those early spacetime ripples. Although many experiments had been looking for these patterns, BICEP2's signal was unexpectedly strong, and it was in some tension with previous tensor measurements by the Planck satellite, among others. A viral video went around showing a flabbergasted Andrei Linde receiving the news, and even he, a vocal supporter of inflation theory, looked shocked.

 
Characteristic swirls in cosmic microwave background polarization, found by BICEP2. Image credit: BICEP2 Collaboration.

It wasn’t long after BICEP2’s announcement, though, that problems appeared. Rumors went around saying that the BICEP2 team had made a mistake in their calculations. The problem was interstellar dust. It turns out that dust can create polarization too, and although the BICEP2 team considered a few different possibilities for the level of contribution of dust to their signal, several cosmologists argued that the estimates were way too low. Two papers came out showing that the BICEP2 signal – the one that was supposed to be a beautiful picture of gravitational waves – could have been entirely due to dust in our Galaxy mimicking the primordial signal.

More articles appeared, now announcing that the “Big Bang result” had “turned to dust” (among other clever puns). Paul Steinhardt, who has spent the last several years developing alternatives to inflation theory, wrote an article proclaiming that inflation was never a good idea to begin with, and that the dust problems went to show that the hype was all for nothing. Most of the cosmology community, however, took the attitude that we should probably just wait and see. There were several other experiments taking data to confirm or rule out BICEP2’s discovery, and the Planck satellite – the current flagship in the CMB detection game – would be producing maps of interstellar dust really soon. That should clear everything up.

Two days ago, Planck released their dust polarization results. They specifically addressed the BICEP2 study, and while they were very measured in their statements (pointing to an upcoming joint analysis), the upshot of the work was that the dust polarization signal was so high that it could easily account for everything BICEP2 saw. Maybe the gravitational waves are there, but if Planck is right about the amount of dust in the way, there’s really no way to say that BICEP2 actually discovered them. In physics, a discovery means you’ve shown something to be the case beyond any reasonable (statistical) doubt. Usually that comes in the form of a statement of how incredibly unlikely it is that chance or some spurious signal could have given you the same result. A signal that could just as easily be all dust is definitely not a discovery.
 
Comparison of original BICEP2 result (left) and Planck dust polarization result (right). The circled region in each shows where the primordial gravitational wave signal is expected to show up for the model supported by BICEP2's result. The colored lines in the BICEP2 figure are their dust models, all well below the signal. The blue boxes in the Planck figure are their estimates for the dust amplitude, and the solid line is where the gravitational wave signal should occur. You can see that the dust amplitude is comparable to the expected gravitational wave signal amplitude, suggesting the two could not be distinguished. Image credits: BICEP2 Collaboration, Planck Collaboration.

This all brings us to David’s paper. The details are technical, but David and his colleagues basically go back to the drawing board to determine how we can analyze data to get the best, most unbiased estimate of the gravitational wave signal. They re-analyze the BICEP2 polarization signal, under a couple of different assumptions, using Planck's previous limits on the gravitational wave contribution as a starting point. First, they assume there was no dust contamination at all. Then they look at an “optimistic” dust model, where dust contamination is there but not bad enough to drown out the signal, and a “pessimistic” dust model, where dust can account for everything. They look at not just the level of primordial gravitational waves – also known as tensor modes – but also the “tilt” of the tensor mode spectrum, an important parameter in inflationary models.

What they find is striking. In the “optimistic” and dust-free models, they find tensor modes, just as BICEP2 did, but they also find a tilt that is utterly incompatible with standard models of inflation. Basically, if BICEP2 and Planck’s previous measurements are correct, and the dust is at a manageable level, BICEP2 not only doesn’t prove inflation – it just about rules it out! The only other option is to use the “pessimistic” dust model, in which case BICEP2 discovered nothing. As it happens, Planck’s new measurements fit the pessimistic dust model best.

Results from Cortes, Liddle & Parkinson paper. The data points are the BICEP2 results, and the circled region can be compared with the regions in the figure above. The black lines are the expected level of the signal for dust plus tensor modes plus the contribution from gravitational lensing. The green line is the tensor contribution -- this can be directly compared to the dashed red line in the BICEP2 figure above. You can see the tilt in the spectrum in the way the green tensor line extends up and to the right in the figure. Image credit: Cortes, Liddle & Parkinson 2014.

In any case, the implication is clear, and somewhat unsettling. It presents us with three items – Planck’s previous tensor limits, BICEP2’s gravitational wave signal, and the inflationary model – and it says we can pick two. At least one has to be incorrect.

That’s a bold statement, and a big deal if it holds up. I love the irony in the suggestion that keeping the “inflation-proving” result requires disproving inflation. But it also illustrates the danger of jumping the gun in these kinds of complicated data analyses. It’s widely believed that BICEP2 made too strong a statement in their original paper and press release, both in their optimism about dust foregrounds and in their statement of confidence in the signal. Now it appears that their analysis may also have introduced a bias that hid the implications for the tensor tilt.

To know with any degree of certainty what the BICEP2 result really means, we’ll have to wait for a joint analysis being carried out by the BICEP2 and Planck teams in collaboration, and we’ll have to see what the other experiments find. But it’s certainly an exciting time, and, as always, it’s fascinating to see the scientific process in action.



Footnote: My PhD thesis was partially based on a study with a similar sort of gist – that you can have two of three theories, but not all of them together. In my case, the theories were axion dark matter, string theory, and inflation. If you’re really curious, you can find the paper here.

Saturday, June 1, 2013

The Lone Genius Hypothesis

When I was a little kid, I knew I wanted to be a cosmologist, like Stephen Hawking. I would tell people that my dream was to have a little office somewhere with a giant blackboard, and I would fill it with equations and solve the mysteries of the Universe. Sometimes I imagined that instead of working in my office, maybe for a change of scene I would sit under an apple tree staring off into the sky, contemplating the nature of reality.

Thinking about doing physics. Photo credit: Demelza Kooij

That's not exactly how things turned out. True, there are times when I sit alone in my office and scribble equations. There are times when I sit outside and stare and think. But, to be honest, those times are usually not especially productive. When I really make progress, when I really have breakthroughs -- those are always times when I'm talking to other physicists and astronomers, chewing through new ideas and checking that I'm on the right track. And even more often, the most important work we do is what grows organically from our conversations or e-mails or paper perusals. Sometimes it's hard even to know who should get the credit.

Actually doing physics. Photo credit: CAASTRO.

So I was wrong. But I think adolescent-Katie could be forgiven for imagining a future career of solitary contemplation. When we're presented with images of great theoretical physicists, the picture is almost always of a lone genius, hidden away with a blackboard, making leaps no one else could have seen, using nothing but pure, unadulterated mind-power. (That those lone geniuses are almost always depicted as male is the subject for another discussion entirely.) This image has been brought to the forefront once again in the last couple of weeks by the media attention heaped on the mathematical physicist-turned-hedge-fund-consultant Eric Weinstein, whose name happens to be only one letter off from Einstein, which just adds fuel to the media-hype fire. He has a homemade theory of everything, developed over many years by himself, in his spare time, which he is just now announcing to the public. I haven't attended Weinstein's lectures and I haven't seen his work (very few people have so far), so I'm not going to comment on its genius or lack thereof. I also won't comment on the media attention per se, as others have done plenty of that. What I will say is that the W/Einstein lone-genius model of theoretical physics is, nearly always, in stark contrast to how theoretical physics is actually done.

But... Einstein!


One of the reasons Einstein carries such a hefty cultural weight is that he, like Newton a few centuries before him, appears to have basically single-handedly invented a fundamentally new view of the Universe. Newton did it over the course of 18 months, starting in 1665 while isolated to avoid the Plague, revolutionizing optics and gravity, and inventing calculus along the way. Einstein's turn came in his "annus mirabilis" in 1905, when he published four groundbreaking papers and a PhD thesis. These touched on optics, the size and motions of atoms, and, as you might have heard, the theory of special relativity.

This approach doesn't usually work out. Photo: Associated Press, found here.

Einstein is frequently depicted as having been completely cut off from the academic establishment during this time, being "just a patent clerk." But although he was certainly not a working academic physicist, he still had connections with the community, people to bounce ideas off of, and a (stalled) PhD-in-progress at the University of Zurich. He had also published several papers, though they didn't receive much attention. And working as a patent clerk was actually a fairly technical job, involving evaluating new ideas and requiring a deep understanding of science and engineering. He is, however, I will grant, probably the best example in the modern era of a theoretical physicist revolutionizing science from outside "the establishment." In fact, he's the only one I can think of.

"Hey, who invented quantum mechanics?"


I asked this question of a colleague of mine while writing up this post, not because I thought he'd have a single answer, but because I was curious what the list might look like. There are a few people who should probably get some credit: Maxwell, who first formulated the basic equations of electromagnetism; Hertz, an experimentalist who helped demonstrate the photoelectric effect; Planck, who was so important to quantum theory that its most fundamental constant is named after him; Einstein, who first explained the photoelectric effect from a theoretical point of view; or Pauli, or Heisenberg, or Bohr, or Nernst, or Schroedinger... there was kind of a lot going on around that time. The point is that quantum mechanics is a great illustration of the fact that it doesn't take a lone iconoclast to revolutionize our understanding of the Universe. Even huge breakthroughs that fundamentally change how we see and do physics can come about through a series of incremental steps. Experimentalists see something odd in their experiments, theorists propose possible explanations, experimentalists go back and test the consequences of that theory and the cycle begins again.

This has happened a number of times since Einstein's era. In addition to quantum mechanics, we've seen the appearance of the Standard Model of Particle Physics, quantum field theory, the concordance model of cosmology (including dark matter and dark energy), and the as-yet purely theoretical frameworks of supersymmetry and string theory. None of these advances could be attributed to one person, nor did they generally involve people working in isolation on theories of their very own.

So how does it usually work?


Physics is, these days, an immensely collaborative field. There are a lot of conferences. There are institutes and workshops and collaboration visits and endless seminars and dissections of research papers. Newly built physics institutes tend to have hallways lined with blackboards or dry-erase-glass cubicles to get people out of their offices to collaborate. We talk to each other, not because we are inherently very social (though a lot of us are), but because it's a really productive way to proceed.

Personally, I find I think better when I'm explaining my ideas to someone. Some people, after staring at the same equations for days, just need to get the math written down and show it to other people to make sure it really makes sense. And, even more importantly, we're not all experts on all areas of physics. One person might have spent four years working on a particular quantum mechanical process in the early universe while another might be an expert on strong-field gravitation, and together they can create a much clearer picture of, say, how gravitational waves might be produced right after the big bang. Or two people might have slightly different perspectives on the same subfield of physics, because they were taught by different people or did projects on different things. For whatever reason, it turns out that talking to other physicists is one of the most productive things a physicist can do, if he or she wants to really make a breakthrough.

And here I'm just talking about pure theory -- if you want to actually test any of this stuff, to see if it's on the right track for describing the actual universe in which we live, you have to be in touch with experimentalists and observers and find out what kind of tools they have available too.

Progress through collaboration: CMS at the LHC. Photo credit: CERN.

The way theoretical physics is funded (though keep in mind the funding system has its own problems) is a good clue to what we've found to be successful over time. Unless perhaps you win a MacArthur "Genius Grant," neither grant decisions nor academic hiring are determined solely by how incredibly brilliant you are. They're determined by how much science you produce, how good it is, how much it adds to previous research, and how your advisors and collaborators see your work. "Quality of the Investigator" is only one section of a grant application -- you have to also explain how your work fits in with the work of others at the institute where you'll work, and why it's a good environment for you. I've actually had a fellowship application rejected based entirely on the institute not being "a good fit" -- the assumption being that without anyone to talk to, I just wouldn't be all that productive there.

So, the synergy factor is not to be dismissed. (You would be amazed how many papers out there include in the acknowledgments something like "We thank [conference/workshop] where part of this work was carried out.") Smart people are smarter when they work together.

What about the W/Einsteins of the world?


To clarify again: I have no intention of passing judgment on Weinstein's ideas. It's entirely possible he's onto something incredible, and it's entirely possible the work will lead to nothing at all. It might turn out to solve all the problems of cosmology, or it might already be ruled out by experiment. I haven't seen the paper or heard the talk, so there's really no way for me to hazard an educated guess.

But I will say that if you think you might want to solve the biggest outstanding problems in theoretical physics, I don't recommend the lone-genius approach. Maybe Weinstein had some really good reason not to talk to other physicists about his work before now. Perhaps he was worried it might be wrong and didn't want to embarrass himself, or perhaps he was worried it might be right and he'd be scooped or not get all the credit. Or maybe he just doesn't like to talk to physicists all that much. It's even possible that he thought his ideas were so revolutionary that no one else would understand. But I kind of doubt that. We physicists love finding new ways to think about things. We love stretching our minds and seeing things from another point of view. It's why we do this work at all. And it's why we spend so much time talking to each other about it.

Saturday, April 27, 2013

The Art of Darkness

The Universe is a very dark place.
The contents of the Universe, according to recent results from the Planck Satellite. Image copyright ESA.
This post focuses on the blue bit; for more on the pink segment, see my earlier post.

You've probably been hearing a lot about dark matter detection lately. In the past couple of months, there have been announcements of announcements, delays of announcements, press conferences, ambitious claims, cautious optimism, not-so-cautious optimism, and various "hints," "signs" and "clues." But what does it all mean? Have we actually detected dark matter?

Short answer: Um, maybe. Also: It's complicated. Really complicated.

The Truth is Out There


I'll start by saying the one thing we're really pretty sure of: dark matter is real. We've known for a very long time that the matter we can see in our telescopes -- stars, galaxies, gas, dust -- doesn't have enough gravitational pull to explain the motions of the cosmos. There are some very good explanations of dark matter and its evidence on the web out there already, but in brief, given how fast stars and galaxies are moving (stars moving in galaxies, galaxies moving in clusters), the matter we can see isn't enough to hold them all together. The first evidence of this came out in the 1930s, and since then, astronomers have hypothesized that some mysterious new component of matter that we can't see -- dubbed dark matter -- is pervading and surrounding galaxies and clusters and keeping everything from flying off into space.
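
The rotation-curve version of this argument fits in a few lines of Python. For a star on a roughly circular orbit, gravity has to supply the centripetal force, which gives an enclosed mass of M ~ v^2 r / G. The numbers below are round, illustrative values for a Milky-Way-like galaxy, not a real measurement.

```python
# Dynamical mass from a flat rotation curve: for a circular orbit,
# G * M(<r) / r^2 = v^2 / r, so M(<r) = v^2 * r / G.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # one kiloparsec, m

v = 220e3          # orbital speed in the outer disk, m/s (stays ~flat with radius)
r = 50 * KPC       # a radius where the rotation curve is still flat

M_dyn = v**2 * r / G
print(f"Mass inside {r / KPC:.0f} kpc: {M_dyn / M_SUN:.1e} solar masses")
# ~6e11 solar masses -- several times what we can account for in stars and gas,
# which is the basic rotation-curve case for dark matter.
```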

Sorry about the afterimage.
Artist's impression (well, mine) of a galaxy embedded within a spherical dark matter "halo." Image of Andromeda Galaxy credit GALEX, JPL-Caltech, NASA, from APOD.

If this was the only evidence, it might be reasonable to suggest that it's not a new form of matter, but rather an altered law of gravity that explains the inconsistency. But it turns out that evidence for dark matter being a fundamentally new kind of matter pops up virtually everywhere we look -- from the way light bends around massive objects, to the history of galaxy formation, to the chemical make-up of the early universe. Some of the strongest evidence for dark matter is found in the aftermath of collisions of galaxy clusters, since these cosmic train wrecks can effectively separate dark matter from stars and gas.

The Bullet Cluster, a.k.a., dark matter's smoking gun.
Composite image of the Bullet Cluster of galaxies, with optical Hubble Space Telescope image, Chandra X-ray image of ionized gas in pink, and dark matter abundance determined from gravitational lensing portrayed in blue. Credit: X-ray: NASA/CXC/CfA/M.Markevitch et al.; Optical: NASA/STScI; Magellan/U.Arizona/D.Clowe et al.; Lensing Map: NASA/STScI; ESO WFI; Magellan/U.Arizona/D.Clowe et al. Annotated image from animation found here.

So we know dark matter is out there, but we don't know what it is. We think it's probably some kind of new elementary particle, and the leading theories all suggest that it should have some interaction with light and/or ordinary matter (i.e., particles contained in the Standard Model of particle physics), so over the last few decades the physics community has put a lot of effort into finding a way to detect those interactions. There are basically three approaches:

  • Direct detection: If dark matter is a new elementary particle that interacts mainly via gravity and only very weakly via any other force, dark matter particles should be passing through the Earth all the time, and, very occasionally, you'd expect one to bump into something. Direct detection experiments look for that collision, called "nuclear recoil" (because you're looking for the movement of the atomic nucleus, not the electrons). Basically they put a box full of some target material (in the case of the CDMS experiment, that's silicon or germanium) in a heavily shielded lab deep underground where virtually no standard model particles can get in. Then, very sensitive detectors watch for one of the target nuclei to be bumped. If the scientists can rule out other explanations for the bump (like radioactive decay of the material around the target sending in neutrons, for instance), and if the recoil energy is what they expect dark matter to produce, then they have a dark matter event candidate.
  • Indirect detection: In many of the models of dark matter, the dark matter particle is its own antiparticle, which means that if two dark matter particles collide precisely enough, they annihilate. In theory, this produces standard model particles that we can see. If that's correct, then one way to find dark matter particles is to look at where dark matter is densely concentrated (like in the Galactic Center) and see if there are gamma rays or high-energy particles being produced in a way that ordinary astrophysics can't explain. There are other ways dark matter particle physics could be probed with indirect detection, like if the particle decays or has other (non-annihilating) interactions with itself or other matter, but annihilation is the most common thing to look for. One of the reasons we think annihilation happens is that it leads to a natural way to explain the production of dark matter in the early universe -- the idea being that dark matter was annihilating and being produced all the time in the beginning when the universe was very dense, and it was only when expansion allowed dark matter particles to interact less frequently that they were able to exist in a more or less stable way for long periods of time.
  • Collider production: If two dark matter particles can annihilate to make standard model particles, then you should be able to reverse the process and make dark matter particles by colliding standard model particles at high energies. This is the idea behind the search for dark matter at colliders such as the LHC. A dark matter particle produced in a collider would pass right through the surrounding detectors without leaving a mark, so the way we'd see it would be to look for "missing energy." You add up all the energy of all the particles you do detect in the collision aftermath, compare it to the total energy you put in, and see if the missing energy is consistent with what a dark matter particle would spirit away. (There's a toy illustration of this bookkeeping just after the figure below.)
Ways to make, destroy, or detect dark matter particles.
Different ways to detect dark matter (DM) particle interactions with standard model (SM) particles. Image found on the MPIK website, originally produced by Jonathan Feng. "Thermal freeze-out" is what happens when the dark matter is no longer dense enough to annihilate all the time due to the expansion of the early universe.
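
Here's that missing-energy bookkeeping as a toy Python sketch. The event record is made up for illustration; real analyses use full detector-level reconstruction, and usually only the transverse components, since the longitudinal momentum of the colliding partons isn't known.

```python
# Toy "missing transverse energy" calculation: sum the transverse momenta of
# everything you DO see; whatever is needed to balance the event is attributed
# to particles that escaped unseen.
import math

# (px, py) in GeV for the visible particles in one made-up collision event
visible = [(55.0, 10.0), (-20.0, 35.0), (-5.0, -12.0)]

sum_px = sum(px for px, _ in visible)
sum_py = sum(py for _, py in visible)

# The missing transverse momentum is whatever balances the visible vector sum
met = math.hypot(sum_px, sum_py)
print(f"Missing transverse energy: {met:.1f} GeV")
# A large MET, with no neutrino or instrumental explanation, is the collider
# signature a dark matter particle would leave.
```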

So, have we found it yet?


There's been a lot of hype. A few weeks ago, the team behind an experiment called AMS-02, a cosmic ray detector that hangs off the side of the International Space Station, announced that they had found a signal "consistent" with dark matter, but that it was "not yet sufficiently conclusive to rule out other explanations." They held a press conference and released a short publication summarizing the work. As I pointed out in a blog post for IoP's Physics Focus, the tone of the announcement, and especially the media hype that followed, went far beyond what was really justified by the results. What they actually saw was an excess of positrons over what would be expected from standard astrophysical processes. The excess might have arisen from dark matter annihilation. But it could also have come from something else. Like pulsars, which are known to accelerate particles and which could certainly produce a positron excess like the one seen by AMS. I go into more detail on this in my Physics Focus blog post, but the gist is that while the AMS signal is intriguing, it's really difficult to pin it on dark matter with any degree of certainty.

But that didn't keep the media from running away with the idea. Here's a sampling of the kinds of headlines I saw in response to the AMS result:

"Experiment believed to detect evidence of dark matter" - Boston Globe
"Strong hints of dark matter detected by space station, physicists say" - Guardian
"CERN Scientists Continue to Prove Their Value with First Evidence of Dark Matter" - Atlantic Wire
"Hints of Dark Matter Have NASA Scientists over the Moon" - Space News

Being excited about the prospect of a big discovery is fair, but overhyping it doesn't help anyone. Especially because only a couple of weeks later, another experiment, called CDMS, also claimed a possible detection of dark matter, and news articles said pretty similar things, sometimes without even referencing AMS:

"Researchers May Have Finally Detected a Dark Matter Particle" - Universe Today
"Homing in on Dark Matter" - Sky & Telescope
"Dark matter researchers think they've got a signal" - The Register
"Another dark-matter sign from a Minnesota mine" - Nature News Blog

Actually, the CDMS result got quite a bit less press, which was surprising to me. Pretty much any way you look at it, it's a much more direct result, if (as I'll explain) fantastically confusing.

Deep dark secrets


CDMS (or, specifically, CDMS-II) is an underground dark matter direct detection experiment. It's located in an old iron mine in Minnesota and it consists of super-cooled targets of silicon and germanium surrounded by sensitive detectors that can measure the positions and energies of any movements they see in their target nuclei. They expect to see, as a background, electron recoils from a variety of processes, and they can distinguish these from recoils of nuclei by looking at the way bumped electrons would ionize the target material. There are a number of ways they slice up the data to take out the electron background, but they also expect a tiny number of neutrons to get into the detector (either from space or from radioactive decay more locally) and bump into their target nuclei, and these would look exactly like dark matter collisions. The only way to deal with those is to estimate the number they expect from neutrons, and get excited if they see way more than that.

"Aww, look at the little WIMPy candidates." -@sc_k
The dark matter candidate events found by the CDMS-II experiment. Plot from presentation by Kevin McCarthy at the APS meeting. The full presentation can be found here and the paper is here. I was alerted to this plot by this tweet.

In the end, CDMS found three candidate events. In the plot above, they're labelled Candidate 1, 2, and 3. (I hope the CDMS folk actually named the candidate events, in the style of the IceCube collaboration, who found two extragalactic neutrinos and called them Bert and Ernie.) The collaboration claims that the chance of these events actually being dark matter -- as opposed to misidentified background or random chance -- is 99.81%. That corresponds to what we call a 3-sigma result, which, by particle physics convention, is officially "evidence" but not officially a "detection." For comparison, the Higgs boson discovery was deemed a true discovery when it reached 5-sigma.
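
If you want to translate between "percent confidence" and "sigma" yourself, here's the usual Gaussian two-sided convention in Python. Conventions vary between experiments (one-sided vs. two-sided), so treat this as a rough dictionary rather than the collaboration's exact statistics.

```python
# Converting a confidence level to a Gaussian significance ("sigma") and back,
# using the two-sided convention.
from scipy.stats import norm

def sigma_from_confidence(conf):
    """Two-sided Gaussian significance for a given confidence level."""
    return norm.ppf(1 - (1 - conf) / 2)

def confidence_from_sigma(sigma):
    """Two-sided confidence level for a given significance."""
    return 2 * norm.cdf(sigma) - 1

print(f"99.81%  -> {sigma_from_confidence(0.9981):.1f} sigma")  # ~3.1: "evidence"
print(f"5 sigma -> {confidence_from_sigma(5):.7f}")             # ~0.9999994: "discovery"
```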

Mixed signals


Obviously, a result at 99.81% confidence, while maybe not quite a detection, is intriguing. And, due to CDMS's ability to distinguish backgrounds, I would say it's far more intriguing than the AMS result as far as dark matter implications are concerned. But there are a number of very good reasons the physics community is staying cautious on this one. The biggest reason is that the simplest model of dark matter that could explain the CDMS result has already been ruled out by other experiments. There are lots of detectors in the direct detection game right now, and at the moment, many of them seem to be giving us very conflicting information. There have been detections -- tentative or otherwise -- claimed by four different experiments now, if CDMS is included. The others are DAMA/LIBRA, CoGeNT, and CRESST (and, actually, a previous signal was claimed by CDMS but has since been considered more likely to be background). All these results could be signs of a dark matter particle -- specifically, a weakly interacting massive particle (or WIMP) -- but it's difficult to find a way to make them agree with one another. They all seem to find particles with different masses and interaction rates. Even worse, combining the constraints from other experiments, such as XENON and EDELWEISS, and even previous results from CDMS, seems to rule out all the claimed detections.

"looks like Pollock's painting" -Resonaances Blog
Constraints and hints from direct detection experiments. The horizontal axis is the mass of the dark matter particle and the vertical axis measures its interaction with standard model nuclei. Filled regions indicate signals interpreted as dark matter; lines indicate upper limits. Everything to the upper right of a line is ruled out to 90% confidence by that experiment.  The lines are, roughly from left to right: XENON100 (dark dash-dotted green), XENON10 (light dash-dotted green), CDMS II Ge (dark and light dashed red), EDELWEISS (orange diamonds), and CDMS II Si (dark blue solid and black dotted). The asterisk is the best-fit point for CDMS's candidate events. This plot and more details can be found in arXiv:1304.4279 by the CDMS Collaboration.
AMS wasn't discussed in the CDMS paper, but I should point out that the best candidate dark matter model for the AMS result and the CDMS dark matter candidates do not agree either. It's a little difficult to compare them directly, because one is looking at dark matter annihilation and the other at dark matter interactions with nuclei, but the inferred particle masses are very different. To explain the AMS result, the dark matter particle would need a mass in the TeV (trillion electron-volt) range, whereas CDMS needs a particle with a mass a thousand times lighter. (Even though it's technically a unit of energy, the electron-volt is used as a measure of mass for fundamental particles, via E=mc^2. A GeV is a billion electron-volts and a TeV is a trillion. For comparison, a proton is 0.938 GeV.)
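
To make those electron-volt masses concrete, here's the conversion to kilograms via E = mc^2 -- standard constants, nothing model-specific, with the two rough mass scales from the paragraph above plugged in as illustrative values.

```python
# Electron-volts as mass: m = E / c^2.
E_CHARGE = 1.602e-19   # joules per electron-volt
C = 2.998e8            # speed of light, m/s

def gev_to_kg(energy_gev):
    """Mass in kg equivalent to a rest energy given in GeV."""
    return energy_gev * 1e9 * E_CHARGE / C**2

print(f"Proton (0.938 GeV):      {gev_to_kg(0.938):.2e} kg")  # ~1.67e-27 kg
print(f"CDMS-ish WIMP (~10 GeV): {gev_to_kg(10):.2e} kg")
print(f"AMS-ish WIMP (~1 TeV):   {gev_to_kg(1000):.2e} kg")
```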

AMS? CDMS? Total mess?


In the astro/physics community, the response to the result from CDMS has been mixed.


It's really not clear what we should make of all these conflicting results, and it's even less clear how to reconcile them. It could be that several of the experiments have just made mistakes or been misinterpreted, and with more data and more careful analysis we'll find out which signals were actually background events or random fluctuations. Or, it could be that dark matter is way more complicated than we realized. For instance, maybe it interacts differently with protons than with neutrons, or maybe there's more than one kind of dark matter particle, or maybe we've made an error in our assumptions about how dark matter is distributed in our galaxy, and fixing that will alleviate some of the tensions in the data. A few papers posted recently have also argued that the CDMS analysis of the XENON100 constraint made it out to be more constraining than it is, so the CDMS result is maybe not entirely ruled out by XENON100. But even that wouldn't explain all the other signals, and the results still don't easily agree.

As usual in science, the only thing to do now is to get more data. The business of dark matter detection is still in a fairly early stage -- as the detectors take more data and become more sophisticated, hopefully these signals and limits will start to make more sense. And of course we will keep looking in other places. The LHC is starting to place interesting limits on the dark matter parameter space, and even beyond AMS, efforts at indirect detection are also giving us some intriguing signals that may or may not have anything to do with dark matter. Some of us (e.g. me) are also looking toward the early universe to see if we can find hints of dark matter's effects on the first stars and galaxies.

Meanwhile, the preprint archive is happily aglow with new theory papers trying to piece this all together. It really is an exciting time; sometimes it's fun to have no idea what's going on.

Tuesday, November 13, 2012

Academic Nomad

As you might know, the pre-faculty academic life can involve a lot of moving around. For me, I went from an undergraduate institution in California, to grad school in New Jersey, to a postdoctoral fellowship in England. And just a month and a half ago I moved to a second postdoctoral fellowship in Melbourne, Australia.
I actually went around the other direction. Image source.
All the moving and settling in has kept me pretty busy lately, but I'll try to write some more posts soon. Meanwhile, here's where some of my writing has appeared elsewhere on the Internet in the last few months:

  • American Physical Society Physics, "Focus: Magnetic Fields Explain Lunar Surface Features" (20 August 2012)
    Why are there swirly white blotches all over the Moon? They come from miniature magnetic forcefields. And they might some day lead to Star Trek-style deflector shield technology, at least to protect spacefarers from the Solar Wind.
  • Foundational Questions Institute (FQXi) Blog, "Losing Neil Armstrong"
    (3 September 2012)
    The death of Neil Armstrong hit me kind of hard. Here I share some thoughts about what the human spaceflight program means to me, and I tell the story of how my grandfather played a part in saving the Apollo 11 mission astronauts from certain death.
  • The Economist "Babbage" Science and Technology Blog, "Becoming an astronaut: Frequent travel may be required"
    (6 September 2012)
    I recently applied to the NASA astronaut program. I made the first cut (and I'm still waiting to hear if I'll make the second). If you've ever wondered what the astronaut job application process is like, check out this piece. (Note: In case you're not familiar with the Economist style, the tech blog posts are all in the third person, with the correspondent referred to as "Babbage.")
I should be able to write more actual blog posts in the coming weeks. Stay tuned!

Friday, August 31, 2012

The Long Dark Tea-Time of the Cosmos


(This post is adapted from a longer, more rambling, and somewhat more technical post I wrote for a group blog, here.)

There have been a few truly transitional moments in the history of the Universe in which something fundamental about the cosmic environment changed. Some of these -- the beginning and end of cosmic inflation, reheating, big bang nucleosynthesis -- altered the very nature of spacetime or the kinds of particles that populated it, and all happened within the first few minutes. The first atoms formed a few hundred thousand years later, marking another milestone.  For the 13 billion or so years since then, though, you could argue that it's all been a bit samey.  Except for cosmic reionization.

Cosmic reionization can be explained in just a few words: the gas in the Universe went from being mostly neutral to mostly ionized. That might sound trivial, but it turns out that the implications are profound -- reionization is the reason we are able to see other galaxies billions of light-years away, and if we can understand how it happened, we will understand the formation of the very first stars and galaxies in the Universe.

But I should back up for a moment. In order to see why reionization matters, you need to know something about recombination and the cosmic dark ages.

Timeline of the Universe, showing recombination, the dark ages (not even labelled because that epoch just isn't interesting enough, apparently), reionization and the age of galaxies. Source. (Credit: Bryan Christie Design)

Great ball of (primordial) fire

Recombination is probably the most inaccurately named event in the history of the Universe, on account of the fact that there was no "combination" before it.*  In the beginning, there was the all-encompassing energy-matter-plasma-fireball, the product of the first cosmic explosion, which rapidly expanded in all directions.  We sometimes refer to this as the "hot big bang."  This fireball was formed mainly of protons and electrons, all of which were hot and unbound and bouncing off photons and generally being really energetic.  (Charged particles that aren't bound together are called ions and ionized gas is called plasma, so you could call it a plasmaball instead if you like, but I'll stick with "fireball" for the purpose of dramatic imagery.)  In the fireball, the particles and photons were tightly coupled, meaning that they were all mixed up and interacting in a big indistinguishable mess.  But as spacetime expanded and the fireball got cooler, the particles lost some of their frenetic energy.  Eventually, there came a time when the fireball was cool and diffuse enough that protons and electrons could chill out and become bound atoms.  The photons were still there, but now instead of just ricocheting off ions, they could get absorbed by atoms, or sail right by them in the newly abundant spaces between.  Some photons still occasionally broke atoms apart, but the Universe was becoming diffuse enough that atoms spent more time bound than not.

Illustration of the transition between the cosmic fireball and the post-recombination Universe. Red spheres are protons, green spheres are neutrons, blue dots are electrons and yellow smudges are photons. The color bar on the bottom represents the average temperature (or energy) of the Universe at that epoch. Source.
(*Terminology note: "Recombination" is also sort of a technical term in physics, which in general refers to the joining of an electron and a proton, without regard to whether that particular electron and proton had made up an atom before.  In the very early universe, inside the cosmic fireball, hydrogen atoms would sometimes form, but they'd be broken up immediately by energetic photons.  The name "recombination," when talking about the epoch, refers to the time when the hydrogen atoms that formed could stay bound for an appreciable amount of time.)

And so, at the epoch of recombination, around 300,000 years after the big bang, the gas went from being ionized to neutral.  Recombination set off the decoupling era -- the time when the matter and radiation that were previously tightly coupled (i.e., interacting a lot) became more free to do their own thing.  Decoupling is also known as last-scattering, because it was the last moment when photons would immediately be scattered off matter as they flew around.  After decoupling, the photons were free to sail around unimpeded and travel for long distances.  Which is where the cosmic microwave background (CMB) comes from -- the newly decoupled photons free-streaming through the Universe out of the great primordial fireball.

Map of the cosmic microwave background, the radiation leftover from the primordial cosmic fireball. Tiny fluctuations in the temperature of microwave radiation coming to us from  all directions give us clues about how matter was distributed at the earliest times in the Universe. In this rendering, we would be at a tiny dot in the center of the sphere. Source.

And then we wait

The next phase of the Universe was, in many ways, distinctly unexciting.  It's called the dark ages.  During the dark ages, the Universe was full of cooling neutral gas (mostly hydrogen), and that gas was very very slowly coming together into clumps via gravity.  At decoupling, the fluctuations seeding these clumps were more dense than their surroundings by only about one part in 100,000.  Those tiny blips, which we see in the CMB, were enough to tip the scales of gravity to draw more matter together into bigger and bigger clumps.  But it took a while for anything particularly interesting to happen.  Sometime between 100 and 500 million years after the big bang, one of these little clumps became dense enough to form the first star, and that defined the "first light" of the universe.  (Of course it wasn't strictly the first light -- the fireball made plenty of light, and we still see it as the CMB -- but it was the first starlight.)

So, if we had a big enough telescope, could we look far enough back into the Universe to see that first star?

Unfortunately, no.  It turns out the dark ages were dark for two reasons.  One was that there wasn't any (visible) light being produced at the time.  The other was that neutral hydrogen is actually pretty opaque to starlight.

Atoms and molecules can only absorb photons at particular frequencies -- those corresponding to transitions between the energy levels of the electrons.  During the dark ages, any photon whose energy was in the sweet spot for a hydrogen atom transition would very likely be absorbed.  Radio waves or other low-energy photons could get through because there weren't any transitions of the right energies to take them, but the ultraviolet light that hot young stars produce in abundance was another story.  It's easy for a hydrogen atom to absorb an ultraviolet photon and use it to knock its electron into a higher energy level (and as the Universe expands, starlight at even shorter wavelengths gets stretched into those transitions too).  Those atoms release the photons again eventually, but in different directions, so the vast majority of the light produced by the first stars isn't able to make it all the way to our telescopes.
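If you're curious where those "particular frequencies" come from, here's a little sketch using the textbook Bohr-model formula for hydrogen's energy levels (a simplification, but an excellent one for hydrogen):

    # Hydrogen energy levels: E_n = -13.6 eV / n^2.  A photon can be
    # absorbed if its energy matches the gap between two levels.
    E1 = 13.6  # hydrogen's ground-state binding energy, in eV

    def transition_energy(n_low, n_high):
        """Photon energy (eV) absorbed jumping from n_low up to n_high."""
        return E1 * (1 / n_low**2 - 1 / n_high**2)

    def wavelength_nm(energy_ev):
        """Photon wavelength in nanometers, from E = hc / lambda."""
        return 1239.84 / energy_ev  # hc = 1239.84 eV nm

    # Lyman-alpha: ground state (n=1) up to n=2 -- ultraviolet.
    e = transition_energy(1, 2)
    print(f"Lyman-alpha: {e:.2f} eV, {wavelength_nm(e):.1f} nm")  # ~121.6 nm

    # H-alpha: n=2 up to n=3 -- visible, but requires an already excited atom.
    e = transition_energy(2, 3)
    print(f"H-alpha: {e:.2f} eV, {wavelength_nm(e):.1f} nm")      # ~656 nm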

Opacity and transparency. The primordial fireball was opaque like fire is opaque: energetic particles couple with photons and keep them from free-streaming away. The edge of the wall of flame is like the last-scattering surface, where the light is finally free to escape. The dark ages were opaque like fog is opaque: the light was absorbed and scattered and attenuated. Once reionization cleared the "fog" of the dark ages, light was able to travel unimpeded. Photo sources: here, here and here (but really here).

Here comes the sun(s)...

Once stars were forming in earnest,  though, astrophysics really got going, and fun things started to happen.  The vast majority of the gas in the Universe (called the intergalactic medium, or IGM) was still neutral at this point -- mostly hydrogen, not doing much -- but each star or galaxy that formed would heat the gas around it and make a bubble of ionized gas.  As more and more of these bubbles formed, the intergalactic medium had a sort of swiss-cheese nature, with bubbles of ionized gas growing and coming together, burning away the fog of the dark ages.

Once there were enough stars and galaxies to ionize a significant fraction of the IGM, we finally had reionization: the (aptly named) epoch when the Universe went from being neutral to being fully ionized again.  And this time, the universe was much less dense and the starlight could easily pass through the ionized gas, so the IGM became transparent to starlight. And that's why we can see other galaxies -- because there's very little neutral gas left to absorb the light en route.

Artist's conception of bubbles of ionized gas percolating through the IGM during the epoch of reionization. The CMB is at the far left, and the right is the present-day Universe. Source: illustration from a Scientific American article by Avi Loeb, which can be found here.
When did reionization happen?  And why does it matter?  Second question first: it matters because understanding reionization means understanding how the first sources of light in the Universe formed and how the IGM turned into the galaxies and clusters and all the amazing stuff we see today.  Also, it's a major milestone in the Universe's history, and a phase transition of the entire IGM, so it seems important.

Back to the other question: we think reionization happened around a billion years after the big bang, though probably gradually and clumpily and at different times in different places, and we're still trying to pin down the exact epoch.  There are a few ways to go about figuring it out.  One is to look for the Universe not being transparent.  In astronomy, opacity usually manifests as something absorbing light from something behind it.  On a foggy day, you know the fog is there because it makes it hard to see things that are far away, not because you really see the water droplets in the air.  Reionization is similar -- you know you're getting close to it if some of the light from a distant source (a quasar, generally) is absorbed before it gets to you.

Unfortunately, looking at absorption only tells us roughly when reionization was pretty much over, since it doesn't take much neutral hydrogen (about one part in 100,000) to absorb all the light from a distant quasar.

Another way to pin down reionization is to look at some subtle effects it has on the CMB, but that would take another blog post to even begin to describe, so I'll just say the CMB gives us a pretty good idea of the earliest time reionization might have started, but it's hard to get much more than that.

So where does that leave us?  We can't use visible light, because that's absorbed as soon as the IGM is slightly neutral.  And the CMB tells us a lot about the early universe, and gives us a hint about the beginning of reionization, but doesn't tell us when the bulk of it happened.

Radio astronomy FTW

The big innovation, the thing that institutions all over the world are investing in, is looking for radio signals coming from the neutral hydrogen itself.  Neutral hydrogen has a low-energy transition that, when it occurs, emits or absorbs a photon with a wavelength of 21 cm: it's called the 21 cm line. (The frequency is about 1420 MHz.)  This wavelength puts it in the radio part of the electromagnetic spectrum, so we see it as radio waves.
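You can check that conversion yourself; it's just f = c/λ, with the line's wavelength more precisely about 21.106 cm:

    # Frequency of the 21 cm line from its wavelength: f = c / lambda.
    c = 2.998e8            # speed of light, in m/s
    wavelength = 0.21106   # meters -- the "21 cm" is approximate

    print(f"{c / wavelength / 1e6:.1f} MHz")   # ~1420.4 MHz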

The origin of 21 cm radiation. In the higher-energy state, the hydrogen atom's electron and proton spins are aligned. If one flips its spin, the atom is in a lower-energy state and a 21 cm photon is produced. Source.
The reason 21 cm radiation can let us peer into the dark ages is twofold.  One, it's so low-energy that it doesn't take a lot to excite it, so you can get 21 cm radiation being produced even if there's not a heck of a lot going on (just atoms colliding and a few stray photons).  The other advantage is that radio waves are really hard for neutral hydrogen to absorb.  An atom creates a 21 cm photon in the dark ages, and then the universe expands a little, making that photon just a little longer in wavelength, and then it's too low-energy to be absorbed by anything.  So all we have to do is set up a radio telescope and wait for it to arrive here!

Is it here yet? (Photo by Mike Dodds)
Okay, so it's not quite that simple.  Because the photons stretch out as the Universe expands, we're really talking about something like 100-200 MHz for "21 cm" photons from the epoch of reionization and the end of the dark ages.  There are some major downsides to working at those frequencies.  One is that you're now smack in the middle of all sorts of terrestrial radio communication: FM radio, cell phones, satellite transmissions... it's a big mess.  Also, at low frequencies, the Earth's ionosphere is highly refractive and can do all sorts of horrible things to your signals as they're coming down from space.  Somewhere in the tens of MHz, the ionosphere is completely opaque.  So if you want to pick up 21 cm radiation from the epoch of reionization, you have to find a place that's relatively radio-quiet (i.e., unpopulated) to do this sort of study, or you have to find a way to deal with the radio noise. (One example of a relatively radio-quiet place is the Australian outback. Another is the Moon.)
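To see where that 100-200 MHz range comes from, the bookkeeping is just dividing the rest-frame frequency by (1 + z); the redshifts below are a rough span for the epoch of reionization:

    # Observed frequency of a 21 cm photon emitted at redshift z:
    #     f_obs = 1420.4 MHz / (1 + z)
    F_REST = 1420.4   # MHz

    for z in [6, 8, 10, 13]:
        print(f"z = {z:2d}:  f_obs = {F_REST / (1 + z):5.1f} MHz")

    # z = 6 gives ~203 MHz and z = 13 gives ~101 MHz -- squarely in the
    # band shared with FM radio (~88-108 MHz) and other transmitters.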

A major challenge that you definitely can't get away from is our own Galaxy.  The Galaxy produces a lot of radiation which is extremely bright at the low frequencies we're dealing with here.  Galactic radio signals are typically about 10,000 times brighter than the signal from reionization.  And it doesn't help that the radiation is spatially varying in weird and complex ways.  Here's a map of the Galactic radiation at 408 MHz.  It's pretty bright, and it gets worse at lower frequencies.

Galactic synchrotron radiation at 408 MHz -- the emission gets stronger at lower frequencies. The color scale here gives the brightness temperature (a measure of the intensity of the signal) in Kelvins. For comparison, the 21 cm reionization signal would be around 10 mK. Source.
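To put rough numbers on the problem, here's a sketch that assumes the synchrotron brightness temperature falls off as a power law in frequency. The spectral index of -2.6 and the 20 K ballpark at 408 MHz are typical assumed values for regions away from the Galactic plane, not numbers taken from this particular map:

    # Scaling the 408 MHz synchrotron foreground down to a reionization
    # observing frequency, assuming T(nu) ~ nu^(-2.6).
    T_408 = 20.0      # K, ballpark high-latitude sky temperature at 408 MHz
    beta = -2.6       # assumed synchrotron spectral index

    nu = 150.0        # MHz, a typical epoch-of-reionization frequency
    T_fg = T_408 * (nu / 408.0) ** beta   # comes out around 270 K

    T_signal = 0.010  # K, i.e. the ~10 mK reionization signal

    print(f"Foreground at {nu:.0f} MHz: ~{T_fg:.0f} K")
    print(f"Foreground / signal: ~{T_fg / T_signal:,.0f}x")

Depending on where you point, that ratio comes out somewhere between ten thousand and a hundred thousand -- four or five orders of magnitude of foreground to dig through.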
In spite of the challenges, there's a lot of effort right now going into building the telescopes to see this signal, because it would allow us to actually probe the IGM in the epoch of reionization.  Ideally, we'd get pictures like this:
Simulation of reionization. Source.
Each square represents a bit of the Universe at a different moment in cosmic history, going forward in time as you move left to right and top to bottom.  In the upper left-hand panel (0.4 billion years after the big bang), the IGM is largely neutral.  In the lower-right hand panel (0.8 billion years), it's ionized.  The features in the other panels are ionized bubbles forming and growing.  Each of these simulation panels represents just a small patch of sky, but in theory you can imagine doing a full-sky map.  Taking into account the expansion of the Universe (and consequent stretching of photons) and tuning the telescope to different frequencies, you ideally get a map of all the neutral hydrogen at each epoch.
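As a sketch of that frequency-to-epoch dictionary: each observing frequency picks out one redshift slice via 1 + z = f_rest/f_obs, and a standard cosmology turns the redshift into a cosmic age (here using astropy's built-in Planck18 parameters; the exact ages shift a little with the assumed cosmology):

    # Mapping observing frequency -> redshift -> cosmic age for the 21 cm line.
    from astropy.cosmology import Planck18

    F_REST = 1420.4   # MHz, rest-frame 21 cm frequency

    for f_obs in [200, 150, 100]:   # MHz
        z = F_REST / f_obs - 1
        age_gyr = Planck18.age(z).to("Gyr").value
        print(f"{f_obs} MHz -> z = {z:4.1f}, "
              f"~{age_gyr:.2f} billion years after the big bang")

Those three frequencies land at roughly z = 6, 8.5, and 13 -- nicely bracketing the 0.4-to-0.8-billion-year panels in the simulation above.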

I should also point out: the dark ages and epoch of reionization cover a lot of the observable Universe. This sketch shows roughly how much volume is covered by different kinds of observations, where we're in the middle, looking out.  The z values are redshift -- a measure of how much the Universe has expanded since that time.  (So the edge, the farthest away in space and time, is at a redshift of infinity, since the Universe is infinitely bigger now than at the big bang; the redshift today is zero.  Reionization was between redshifts 6 and 10 or so.)  In the diagram, the colorful part in the middle contains most of the galaxies we've seen directly.  The thick dark circle near the edge is the CMB.  Everything inwards of z=50 can be probed with 21cm observations, and almost everything outwards of z=6 can't be seen any other way.
Schematic of how much of the Universe we're seeing with different kinds of observations. Red, yellow and green are optical. The black circle around the edge is the CMB.  Everything in blue can be observed with 21 cm radio signals. Source: Tegmark & Zaldarriaga 2009.

If you build it...

There's something of a global competition-slash-collaboration going on right now to try to get at this signal, because it would open a whole new window on the evolution of the Universe.  You may have heard of the Square Kilometer Array, which is going to be the world's largest array of radio telescopes when it's completed in a decade or so.  It'll be split between South Africa and Western Australia, and one of the key goals of the project is to look deeper into the epoch of reionization than we ever have before, using the 21 cm line.  In the meantime, there's the Low-Frequency Array (LOFAR), the Murchison Widefield Array (MWA), and lots of other projects that are just getting going.  It's a big industry.

But before we get too excited, I should reiterate that dealing with the foregrounds and instrumental calibration and stuff is hard.  There are actually a number of intermediate steps (including getting an all-sky average signal, or doing some kind of statistical detection) that would have to happen before any attempt at mapping (or "tomography").  But mapping remains the ultimate goal.  And if we can map out what the neutral hydrogen in the Universe was doing in the first couple of billions of years, we can basically watch the Universe as we know it come into being.  And that would be pretty darn cool.

Credit: SPDO/TDP/DRAO/Swinburne Astronomy Productions.