Wednesday, September 24, 2014

Breaking Down a Big Bang Breakthrough

I like clean, simple arguments. I like inescapable logical conclusions. And when those things appear in a paper telling us something new about the very early universe – even better.

Today, David Parkinson, a research fellow at the University of Queensland, came to Melbourne to give a talk on the paper he just posted to the arXiv with his colleagues Marina Cortes and Andrew Liddle. It was incredibly (and not coincidentally) well timed. Just two days ago, I spent most of the day grappling with the implications of the latest cosmic microwave background results from the Planck satellite, and now David was going to come and give a new perspective on it.

Marina Cortes
Andrew Liddle
David Parkinson

But let me back up. In March, researchers with a telescope at the South Pole – the BICEP2 collaboration – announced an astonishing new result. They were studying the cosmic microwave background (CMB, the afterglow of the Big Bang), and they presented what they said was the first evidence of primordial gravitational waves – ripples in the fabric of spacetime, from the first tiny fraction of a second after the Big Bang. The announcement made a HUGE splash. There were articles in all the major newspapers and magazines hailing the discovery as the start of a new era in cosmology, and in some cases even as proof of cosmic inflation. This is a scenario in which the early universe, in the first billionth of a billionth of a billionth of a billionth of a second after the moment of creation, expanded extremely rapidly, growing by many orders of magnitude in size. The theory of inflation has been around since the 1980s, first proposed by Alan Guth, and developed by Andrei Linde, Paul Steinhardt, and others. One of the key predictions of inflation is that it would produce gravitational waves, and in principle these could be seen as little swirls in the pattern of polarization of the CMB.

The CMB is one of the strongest pieces of evidence that the Big Bang happened at all, and by studying its light we can learn a huge amount about what the early universe looked like. Some of that light is polarized, meaning it is preferentially oriented one way or another when it reaches the detector. Patterns in the polarization can show us traces of those early spacetime ripples. Although many experiments had been looking for these patterns, BICEP2’s signal was unexpectedly strong, and it was in some tension with previous tensor measurements by the Planck satellite, among others. A viral video went around showing a flabbergasted Andrei Linde receiving the news, and even he, a vocal supporter of inflation theory, looked shocked.

 
Characteristic swirls in cosmic microwave background polarization, found by BICEP2. Image credit: BICEP2 Collaboration.

It wasn’t long after BICEP2’s announcement, though, that problems appeared. Rumors went around saying that the BICEP2 team had made a mistake in their calculations. The problem was interstellar dust. It turns out that dust can create polarization too, and although the BICEP2 team considered a few different possibilities for how much dust contributed to their signal, several cosmologists argued that their estimates were way too low. Two papers came out showing that the BICEP2 signal – the one that was supposed to be a beautiful picture of gravitational waves – could have been entirely due to dust in our Galaxy mimicking the primordial signal.

More articles appeared, now announcing that the “Big Bang result” had “turned to dust” (among other clever puns). Paul Steinhardt, who has spent the last several years developing alternatives to inflation theory, wrote an article proclaiming that inflation was never a good idea to begin with, and that the dust problems went to show that the hype was all for nothing. Most of the cosmology community, however, took the attitude that we should probably just wait and see. There were several other experiments taking data to confirm or rule out BICEP2’s discovery, and the Planck satellite – the current flagship in the CMB detection game – would be producing maps of interstellar dust really soon. That should clear everything up.

Two days ago, Planck released their dust polarization results. They specifically addressed the BICEP2 study, and while they were very measured in their statements (pointing to an upcoming joint analysis), the upshot of the work was that the dust polarization signal was so high that it could easily account for everything BICEP2 saw. Maybe the gravitational waves are there, but if Planck is right about the amount of dust in the way, there’s really no way to say that BICEP2 actually discovered them. In physics, a discovery means you’ve shown something to be the case beyond any reasonable (statistical) doubt. Usually that comes in the form of a statement of how incredibly unlikely it is that chance or some spurious signal could have given you the same result. A signal that could just as easily be all dust is definitely not a discovery.
 
Comparison of original BICEP2 result (left) and Planck dust polarization result (right). The circled region in each shows where the primordial gravitational wave signal is expected to show up for the model supported by BICEP2's result. The colored lines in the BICEP2 figure are their dust models, all well below the signal. The blue boxes in the Planck figure are their estimates for the dust amplitude, and the solid line is where the gravitational wave signal should occur. You can see that the dust amplitude is comparable to the expected gravitational wave signal amplitude, suggesting the two could not be distinguished. Image credits: BICEP2 Collaboration, Planck Collaboration.

This all brings us to David’s paper. The details are technical, but David and his colleagues basically go back to the drawing board to determine how we can analyze data to get the best, most unbiased estimate of the gravitational wave signal. They re-analyze the BICEP2 polarization signal under a few different assumptions, using Planck's previous limits on the gravitational wave contribution as a starting point. First, they assume there was no dust contamination at all. Then they look at an “optimistic” dust model, in which dust contamination is present but not bad enough to drown out the signal, and a “pessimistic” dust model, in which dust can account for everything. They look not just at the level of primordial gravitational waves – also known as tensor modes – but also at the “tilt” of the tensor mode spectrum, an important parameter in inflationary models.
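To give a flavor of how these pieces trade off against each other, here's a toy sketch of the kind of model that gets compared to B-mode measurements. I've invented every number and normalization in it for illustration; the actual analysis uses proper spectral templates and a full likelihood.

```python
import numpy as np

# Toy model of B-mode band powers: observed signal = tensors + dust + lensing.
# Every normalization here is invented for illustration; the real analysis
# fits full power-spectrum templates with a proper likelihood.

ells = np.array([50.0, 75.0, 100.0, 150.0, 200.0])  # multipole bin centers

def tensor_bb(ells, r, n_t, ell_pivot=80.0, norm=0.03):
    # Tensor (gravitational wave) band power with amplitude r and tilt n_t,
    # as a toy power law pivoting at ell = 80.
    return r * norm * (ells / ell_pivot) ** n_t

def dust_bb(ells, amp, alpha=-0.4):
    # Dust band power: a falling power law with amplitude `amp` at ell = 80.
    return amp * (ells / 80.0) ** alpha

lensing_bb = 0.002 * (ells / 80.0) ** 2  # crude stand-in for the lensing floor

# "No dust" story: all of the low-ell power is primordial tensors.
model_tensors = tensor_bb(ells, r=0.2, n_t=0.0) + lensing_bb

# "Pessimistic" story: dust does all the work and there are no tensors.
model_dust = dust_bb(ells, amp=0.006) + lensing_bb

print(model_tensors)
print(model_dust)
# With realistically noisy band powers, these two very different physical
# stories can fit the same data -- and letting the tilt n_t float gives the
# fit yet another way to trade spectral shape against the assumed dust level.
```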

What they find is striking. In the “optimistic” and dust-free models, they find tensor modes, just as BICEP2 did, but they also find a tilt that is utterly incompatible with standard models of inflation. Basically, if BICEP2 and Planck’s previous measurements are correct, and the dust is at a manageable level, BICEP2 not only doesn’t prove inflation – it just about rules it out! The only other option is to use the “pessimistic” dust model, in which case BICEP2 discovered nothing. As it happens, Planck’s new measurements fit the pessimistic dust model best.

Results from Cortes, Liddle & Parkinson paper. The data points are the BICEP2 results, and the circled region can be compared with the regions in the figure above. The black lines are the expected level of the signal for dust plus tensor modes plus the contribution from gravitational lensing. The green line is the tensor contribution -- this can be directly compared to the dashed red line in the BICEP2 figure above. You can see the tilt in the spectrum in the way the green tensor line extends up and to the right in the figure. Image credit: Cortes, Liddle & Parkinson 2014.

In any case, the implication is clear, and somewhat unsettling. It presents us with three items – Planck’s previous tensor limits, BICEP2’s gravitational wave signal, and the inflationary model – and it says we can pick two. At least one has to be incorrect.

That’s a bold statement, and a big deal if it holds up. I love the irony in the suggestion that keeping the “inflation-proving” result requires disproving inflation. But it also illustrates the danger of jumping the gun in these kinds of complicated data analyses. It’s widely believed that BICEP2 made too strong a statement in their original paper and press release, both in their optimism about dust foregrounds and in their statement of confidence in the signal. Now it appears that their analysis may also have introduced a bias that hid the implications for the tensor tilt.

To know with any degree of certainty what the BICEP2 result really means, we’ll have to wait for a joint analysis being carried out by the BICEP2 and Planck teams in collaboration, and we’ll have to see what the other experiments find. But it’s certainly an exciting time, and, as always, it’s fascinating to see the scientific process in action.



Footnote: My PhD thesis was partially based on a study with a similar sort of gist – that you can have two of three theories, but not all of them together. In my case, the theories were axion dark matter, string theory, and inflation. If you’re really curious, you can find the paper here.

Saturday, June 1, 2013

The Lone Genius Hypothesis

When I was a little kid, I knew I wanted to be a cosmologist, like Stephen Hawking. I would tell people that my dream was to have a little office somewhere with a giant blackboard, and I would fill it with equations and solve the mysteries of the Universe. Sometimes I imagined that instead of working in my office, maybe for a change of scene I would sit under an apple tree staring off into the sky, contemplating the nature of reality.

Thinking about doing physics. Photo credit: Demelza Kooij

That's not exactly how things turned out. True, there are times when I sit alone in my office and scribble equations. There are times when I sit outside and stare and think. But, to be honest, those times are usually not especially productive. When I really make progress, when I really have breakthroughs -- those are always times when I'm talking to other physicists and astronomers, chewing through new ideas and checking that I'm on the right track. And even more often, the most important work we do is what grows organically from our conversations or e-mails or paper perusals. Sometimes it's hard even to know who should get the credit.

Actually doing physics. Photo credit: CAASTRO.

So I was wrong. But I think adolescent-Katie could be forgiven for imagining a future career of solitary contemplation. When we're presented with images of great theoretical physicists, the picture is almost always of a lone genius, hidden away with a blackboard, making leaps no one else could have seen, using nothing but pure, unadulterated mind-power. (That those lone geniuses are almost always depicted as male is the subject for another discussion entirely.) This image has been brought to the forefront once again in the last couple of weeks by the media attention heaped on the mathematical physicist-turned-hedge-fund-consultant Eric Weinstein, whose name happens to be only one letter off from Einstein, which just adds fuel to the media-hype fire. He has a homemade theory of everything, developed over many years by himself, in his spare time, which he is just now announcing to the public. I haven't attended Weinstein's lectures and I haven't seen his work (very few people have so far), so I'm not going to comment on its genius or lack thereof. I also won't comment on the media attention per se, as others have done plenty of that. What I will say is that the W/Einstein lone-genius model of theoretical physics is, nearly always, in stark contrast to how theoretical physics is actually done.

But... Einstein!


One of the reasons Einstein carries such a hefty cultural weight is that he, like Newton a few centuries before him, appears to have basically single-handedly invented a fundamentally new view of the Universe. Newton did it over the course of 18 months, starting in 1665 while isolated to avoid the Plague, revolutionizing optics and gravity, and inventing calculus along the way. Einstein's turn came in his "annus mirabilis" in 1905 when he published four groundbreaking papers and a PhD thesis. These touched on optics, the size and motions of atoms, and, as you might have heard, the theory of special relativity.

This approach doesn't usually work out. Photo: Associated Press, found here.

Einstein is frequently depicted as having been completely cut off from the academic establishment during this time, being "just a patent clerk." But although he was certainly not a working academic physicist, he still had connections with the community, people to bounce ideas off of, and a (stalled) PhD-in-progress at the University of Zurich. He had also published several papers, though they didn't receive much attention. And working as a patent clerk was actually a fairly technical job, involving evaluating new ideas and requiring a deep understanding of science and engineering. He is, however, I will grant, probably the best example in the modern era of a theoretical physicist revolutionizing science from outside "the establishment." In fact, he's the only one I can think of.

"Hey, who invented quantum mechanics?"


I asked this question of a colleague of mine while writing up this post, not because I thought he'd have a single answer, but because I was curious what the list might look like. There are a few people who should probably get some credit: Maxwell, who first formulated the basic equations of electromagnetism; Hertz, an experimentalist who helped demonstrate the photoelectric effect; Planck, who was so important to quantum theory that its most fundamental constant is named after him; Einstein, who first explained the photoelectric effect from a theoretical point of view; or Pauli, or Heisenberg, or Bohr, or Nernst, or Schroedinger... there was kind of a lot going on around that time. The point is that quantum mechanics is a great illustration of the fact that it doesn't take a lone iconoclast to revolutionize our understanding of the Universe. Even huge breakthroughs that fundamentally change how we see and do physics can come about through a series of incremental steps. Experimentalists see something odd in their experiments, theorists propose possible explanations, experimentalists go back and test the consequences of that theory and the cycle begins again.

This has happened a number of times since Einstein's era. In addition to quantum mechanics, we've seen the appearance of the Standard Model of Particle Physics, quantum field theory, the concordance model of cosmology (including dark matter and dark energy), and the as-yet purely theoretical frameworks of supersymmetry and string theory. None of these advances could be attributed to one person, nor did they generally involve people working in isolation on theories of their very own.

So how does it usually work?


Physics is, these days, an immensely collaborative field. There are a lot of conferences. There are institutes and workshops and collaboration visits and endless seminars and dissections of research papers. Newly built physics institutes tend to have hallways lined with blackboards or dry-erase-glass cubicles to get people out of their offices to collaborate. We talk to each other, not because we are inherently very social (though a lot of us are), but because it's a really productive way to proceed. Personally, I find I think better when I'm explaining my ideas to someone. Some people, after staring at the same equations for days, just need to get the math written down and show it to other people to make sure it really makes sense. And, even more importantly, we're not all experts on all areas of physics. One person might have spent four years working on a particular quantum mechanical process in the early universe while another might be an expert on strong-field gravitation, and together they can create a much clearer picture of, say, how gravitational waves might be produced right after the big bang. Or two people might have slightly different perspectives on the same subfield of physics, because they were taught by different people or did projects on different things. For whatever reason, it turns out that talking to other physicists is one of the most productive things a physicist can do, if he or she wants to really make a breakthrough. And here I'm just talking about pure theory -- if you want to actually test any of this stuff, to see if it's on the right track for describing the actual universe in which we live, you have to be in touch with experimentalists and observers and find out what kind of tools they have available too.

Progress through collaboration: CMS at the LHC. Photo credit: CERN.

The way theoretical physics is funded (though keep in mind the funding system has its own problems) is a good clue to what we've found to be successful over time. Unless perhaps you win a MacArthur "Genius Grant," neither grant decisions nor academic hiring are determined solely by how incredibly brilliant you are. They're determined by how much science you produce, how good it is, how much it adds to previous research, and how your advisors and collaborators see your work. "Quality of the Investigator" is only one section of a grant application -- you have to also explain how your work fits in with the work of others at the institute where you'll work, and why it's a good environment for you. I've actually had a fellowship application rejected based entirely on the institute not being "a good fit" -- the assumption being that without anyone to talk to, I just wouldn't be all that productive there.

So, the synergy factor is not to be dismissed. (You would be amazed how many papers out there include in the acknowledgments something like "We thank [conference/workshop] where part of this work was carried out.") Smart people are smarter when they work together.

What about the W/Einsteins of the world?


To clarify again: I have no intention of passing judgment on Weinstein's ideas. It's entirely possible he's onto something incredible, and it's entirely possible the work will lead to nothing at all. It might turn out to solve all the problems of cosmology, or it might already be ruled out by experiment. I haven't seen the paper or heard the talk, so there's really no way for me to hazard an educated guess.

But I will say that if you think you might want to solve the biggest outstanding problems in theoretical physics, I don't recommend the lone-genius approach. Maybe Weinstein had some really good reason not to talk to other physicists about his work before now. Perhaps he was worried it might be wrong and didn't want to embarrass himself, or perhaps he was worried it might be right and he'd be scooped or not get all the credit. Or maybe he just doesn't like to talk to physicists all that much. It's even possible that he thought his ideas were so revolutionary that no one else would understand. But I kind of doubt that. We physicists love finding new ways to think about things. We love stretching our minds and seeing things from another point of view. It's why we do this work at all. And it's why we spend so much time talking to each other about it.

Saturday, April 27, 2013

The Art of Darkness

The Universe is a very dark place.
The contents of the Universe, according to recent results from the Planck Satellite. Image copyright ESA.
This post focuses on the blue bit; for more on the pink segment, see my earlier post.

You've probably been hearing a lot about dark matter detection lately. In the past couple of months, there have been announcements of announcements, delays of announcements, press conferences, ambitious claims, cautious optimism, not-so-cautious optimism, and various "hints," "signs" and "clues." But what does it all mean? Have we actually detected dark matter?

Short answer: Um, maybe. Also: It's complicated. Really complicated.

The Truth is Out There


I'll start by saying the one thing we're really pretty sure of: dark matter is real. We've known for a very long time that the matter we can see in our telescopes -- stars, galaxies, gas, dust -- doesn't have enough gravitational pull to explain the motions of the cosmos. There are some very good explanations of dark matter and its evidence on the web out there already, but in brief, given how fast stars and galaxies are moving (stars moving in galaxies, galaxies moving in clusters), the matter we can see isn't enough to hold them all together. The first evidence of this came out in the 1930s, and since then, astronomers have hypothesized that some mysterious new component of matter that we can't see -- dubbed dark matter -- is pervading and surrounding galaxies and clusters and keeping everything from flying off into space.

Sorry about the afterimage.
Artist's impression (well, mine) of a galaxy embedded within a spherical dark matter "halo." Image of Andromeda Galaxy credit GALEX, JPL-Caltech, NASA, from APOD.

If this was the only evidence, it might be reasonable to suggest that it's not a new form of matter, but rather an altered law of gravity that explains the inconsistency. But it turns out that evidence for dark matter being a fundamentally new kind of matter pops up virtually everywhere we look -- from the way light bends around massive objects, to the history of galaxy formation, to the chemical make-up of the early universe. Some of the strongest evidence for dark matter is found in the aftermath of collisions of galaxy clusters, since these cosmic train wrecks can effectively separate dark matter from stars and gas.

The Bullet Cluster, a.k.a., dark matter's smoking gun.
Composite image of the Bullet Cluster of galaxies, with optical Hubble Space Telescope image, Chandra X-ray image of ionized gas in pink, and dark matter abundance determined from gravitational lensing portrayed in blue. Credit: X-ray: NASA/CXC/CfA/M.Markevitch et al.; Optical: NASA/STScI; Magellan/U.Arizona/D.Clowe et al.; Lensing Map: NASA/STScI; ESO WFI; Magellan/U.Arizona/D.Clowe et al. Annotated image from animation found here.

So we know dark matter is out there, but we don't know what it is. We think it's probably some kind of new elementary particle, and the leading theories all suggest that it should have some interaction with light and/or ordinary matter (i.e., particles contained in the Standard Model of particle physics), so over the last few decades the physics community has put a lot of effort into finding a way to detect those interactions. There are basically three approaches:

  • Direct detection: If dark matter is a new elementary particle that interacts mainly via gravity and only very weakly via any other force, dark matter particles should be passing through the Earth all the time, and, very occasionally, you'd expect one to bump into something. Direct detection experiments look for that collision, called "nuclear recoil" (because you're looking for the movement of the atomic nucleus, not the electrons). Basically they put a box full of some target material (in the case of the CDMS experiment, that's silicon or germanium) in a heavily shielded lab deep underground where virtually no standard model particles can get in. Then, very sensitive detectors watch for one of the target nuclei to be bumped. If the scientists can rule out other explanations for the bump (like radioactive decay of the material around the target sending in neutrons, for instance), and if the recoil energy is what they expect dark matter to produce, then they have a dark matter event candidate. (For a sense of the tiny energies involved, see the sketch just after this list and figure.)
  • Indirect detection: In many of the models of dark matter, the dark matter particle is its own antiparticle, which means that if two dark matter particles collide precisely enough, they annihilate. In theory, this produces standard model particles that we can see. If that's correct, then one way to find dark matter particles is to look at where dark matter is densely concentrated (like in the Galactic Center) and see if there are gamma rays or high-energy particles being produced in a way that ordinary astrophysics can't explain. There are other ways dark matter particle physics could be probed with indirect detection, like if the particle decays or has other (non-annihilating) interactions with itself or other matter, but annihilation is the most common thing to look for. One of the reasons we think annihilation happens is that it leads to a natural way to explain the production of dark matter in the early universe -- the idea being that dark matter was annihilating and being produced all the time in the beginning when the universe was very dense, and it was only when expansion allowed dark matter particles to interact less frequently that they were able to exist in a more or less stable way for long periods of time.
  • Collider production: If two dark matter particles can annihilate to make standard model particles, then you should be able to reverse the process and make dark matter particles by colliding standard model particles at high energies. This is the idea behind the search for dark matter at colliders such as the LHC. A dark matter particle produced in a collider would pass right through the surrounding detectors without leaving a mark, so the way we'd see it would be to look for "missing energy." You add up all the energy of all the particles you do detect in the collision aftermath, compare it to the total energy you put in, and see if the missing energy is consistent with what a dark matter particle would spirit away.
Ways to make, destroy, or detect dark matter particles.
Different ways to detect dark matter (DM) particle interactions with standard model (SM) particles. Image found on the MPIK website, originally produced by Jonathan Feng. "Thermal freeze-out" is what happens when the dark matter is no longer dense enough to annihilate all the time due to the expansion of the early universe.
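As an aside, the kinematics of that "bump" are easy to estimate on the back of an envelope. Here's a minimal sketch of the maximum recoil energy an elastic WIMP-nucleus collision can deposit, using my own round numbers rather than any experiment's actual values:

```python
# Maximum nuclear recoil energy for an elastic WIMP-nucleus collision:
#   E_R(max) = 2 * mu^2 * v^2 / m_N,  with  mu = m_chi * m_N / (m_chi + m_N)
# Round-number assumptions (mine): a 10 GeV WIMP, a germanium nucleus,
# and a typical galactic-halo speed of ~230 km/s.

c = 3.0e5                # speed of light in km/s
m_chi = 10.0             # WIMP mass in GeV/c^2 (assumed)
m_N = 72 * 0.931         # germanium nucleus: ~72 nucleons at ~0.931 GeV each
v = 230.0 / c            # halo speed as a fraction of c

mu = m_chi * m_N / (m_chi + m_N)    # reduced mass, GeV/c^2
E_R_max = 2 * mu**2 * v**2 / m_N    # recoil energy, GeV

print(f"Maximum recoil energy: {E_R_max * 1e6:.1f} keV")
# -> of order a keV, which is why the detectors have to be super-cooled
#    and exquisitely sensitive to see anything at all.
```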

So, have we found it yet?


There's been a lot of hype. A few weeks ago, the team behind an experiment called AMS-02, a cosmic ray detector that hangs off the side of the International Space Station, made an announcement that they had found a signal "consistent" with dark matter, but that it was "not yet sufficiently conclusive to rule out other explanations." They held a press conference and released a short publication summarizing the work. As I pointed out in a blog post for IoP's Physics Focus, the tone of the announcement, and especially the media hype that followed, went far beyond what was really justified by the results. What they actually saw was an excess of positrons over what would be expected from standard astrophysical processes. The excess might have arisen from dark matter annihilation. But it could also have come from something else. Like pulsars, which are known to accelerate particles and which could certainly produce a positron excess like the one seen by AMS. I go into more detail on this in my Physics Focus blog post, but the gist is that while the AMS signal is intriguing, it's really difficult to pin it on dark matter with any degree of certainty.

But that didn't keep the media from running away with the idea. Here's a sampling of the kinds of headlines I saw in response to the AMS result:

"Experiment believed to detect evidence of dark matter" - Boston Globe
"Strong hints of dark matter detected by space station, physicists say" - Guardian
"CERN Scientists Continue to Prove Their Value with First Evidence of Dark Matter" - Atlantic Wire
"Hints of Dark Matter Have NASA Scientists over the Moon" - Space News

Being excited about the prospect of a big discovery is fair, but overhyping it doesn't help anyone. Especially because only a couple of weeks later, another experiment, called CDMS, also claimed a possible detection of dark matter, and news articles said pretty similar things, sometimes without even referencing AMS:

"Researchers May Have Finally Detected a Dark Matter Particle" - Universe Today
"Homing in on Dark Matter" - Sky & Telescope
"Dark matter researchers think they've got a signal" - The Register
"Another dark-matter sign from a Minnesota mine" - Nature News Blog

Actually, the CDMS result got quite a bit less press, which was surprising to me. Pretty much any way you look at it, it's a much more direct result, if (as I'll explain) fantastically confusing.

Deep dark secrets


CDMS (or, specifically, CDMS-II) is an underground dark matter direct detection experiment. It's located in an old iron mine in Minnesota and it consists of super-cooled targets of silicon and germanium surrounded by sensitive detectors that can measure the positions and energies of any movements they see in their target nuclei. They expect to see, as a background, electron recoils from a variety of processes, and they can distinguish these from recoils of nuclei by looking at the way bumped electrons would ionize the target material. There are a number of ways they slice up the data to take out the electron background, but they also expect a tiny number of neutrons to get into the detector (either from space or from radioactive decay more locally) and bump into their target nuclei, and these would look exactly like dark matter collisions. The only way to deal with those is to estimate the number they expect from neutrons, and get excited if they see way more than that.

"Aww, look at the little WIMPy candidates." -@sc_k
The dark matter candidate events found by the CDMS-II experiment. Plot from presentation by Kevin McCarthy at the APS meeting. The full presentation can be found here and the paper is here. I was alerted to this plot by this tweet.

In the end, CDMS found three candidate events. In the plot above, they're labelled Candidate 1, 2 and 3. (I hope the CDMS folk actually named the candidate events, in the style of the IceCube collaboration, who found two extragalactic neutrinos and called them Bert and Ernie.) The collaboration claims that the chance of these events actually being dark matter -- as opposed to misidentified background or random chance -- is 99.81%. That corresponds to what we call a 3-sigma result, which, by particle physics convention, is officially "evidence" but not officially a "detection." For comparison, the Higgs Boson discovery was deemed a true discovery when it reached 5-sigma.
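If you're curious how a percentage like 99.81% turns into "sigmas," here's a quick sketch using the usual Gaussian convention (with the caveat that one-sided versus two-sided definitions shift the numbers a little):

```python
from scipy.stats import norm

def sigma_two_sided(confidence):
    # Number of Gaussian sigmas such that the central `confidence` fraction
    # of the distribution lies within +/- sigma of the mean.
    return norm.ppf(0.5 + confidence / 2.0)

for label, conf in [("CDMS candidates", 0.9981),
                    ("'evidence' threshold", 0.9973),
                    ("'discovery' threshold", 0.9999994)]:
    print(f"{label}: {100 * conf:.7g}% -> {sigma_two_sided(conf):.2f} sigma")
# CDMS's 99.81% comes out around 3 sigma; the Higgs-style discovery
# threshold (99.99994%) comes out around 5 sigma.
```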

Mixed signals


Obviously, a result at 99.81% confidence, while maybe not quite a detection, is intriguing. And, due to CDMS's ability to distinguish backgrounds, I would say it's far more intriguing than the AMS result as far as dark matter implications are concerned. But there are a number of very good reasons the physics community is staying cautious on this one. The biggest reason is that the simplest model of dark matter that could explain the CDMS result has already been ruled out by other experiments. There are lots of detectors in the direct detection game right now, and at the moment, many of them seem to be giving us very conflicting information. There have been detections -- tentative or otherwise -- claimed by four different experiments now, if CDMS is included. The others are DAMA/LIBRA, CoGeNT, and CRESST (and, actually, a previous signal was claimed by CDMS but has since been considered more likely to be background). All these results could be signs of a dark matter particle -- specifically, a weakly interacting massive particle (or WIMP) -- but it's difficult to find a way to make them agree with one another. They all seem to find particles with different masses and interaction rates. Even worse, the constraints from other experiments, such as XENON and EDELWEISS, and even previous results from CDMS, seem to rule out all the claimed detections.

"looks like Pollock's painting" -Resonaances Blog
Constraints and hints from direct detection experiments. The horizontal axis is the mass of the dark matter particle and the vertical axis measures its interaction with standard model nuclei. Filled regions indicate signals interpreted as dark matter; lines indicate upper limits. Everything to the upper right of a line is ruled out to 90% confidence by that experiment.  The lines are, roughly from left to right: XENON100 (dark dash-dotted green), XENON10 (light dash-dotted green), CDMS II Ge (dark and light dashed red), EDELWEISS (orange diamonds), and CDMS II Si (dark blue solid and black dotted). The asterisk is the best-fit point for CDMS's candidate events. This plot and more details can be found in arXiv:1304.4279 by the CDMS Collaboration.
AMS wasn't discussed in the CDMS paper, but I should point out that the best candidate dark matter model for the AMS result and the CDMS dark matter candidates do not agree either. It's a little difficult to compare them directly, because one is looking at dark matter annihilation and the other at dark matter interactions with nuclei, but the inferred particle masses are very different. To explain the AMS result, the dark matter particle would need a mass in the TeV (trillion electron-volt) range, whereas CDMS needs a particle with a mass a thousand times lighter. (Even though it's technically energy, an electron-volt is used as a measure of mass for fundamental particles, via E=mc². GeV is a billion electron-volts and TeV is a trillion. For comparison, a proton is 0.938 GeV.)
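If you want to see that electron-volts-as-mass conversion spelled out, here's a quick sketch with standard (rounded) constants:

```python
# Converting a particle rest energy in electron-volts to a mass in kilograms
# via E = mc^2, i.e. m = E / c^2.
eV = 1.602e-19   # one electron-volt in joules
c = 2.998e8      # speed of light in m/s

def ev_to_kg(rest_energy_eV):
    return rest_energy_eV * eV / c**2

print(f"proton (0.938 GeV): {ev_to_kg(0.938e9):.2e} kg")   # ~1.67e-27 kg
print(f"1 GeV particle:     {ev_to_kg(1e9):.2e} kg")
print(f"1 TeV particle:     {ev_to_kg(1e12):.2e} kg")
```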

AMS? CDMS? Total mess?


In the astro/physics community, the response to the result from CDMS has been mixed.


It's really not clear what we should make of all these conflicting results, and it's even less clear how to reconcile them. It could be that several of the experiments have just made mistakes or been misinterpreted, and with more data and more careful analysis we'll find out which signals were actually background events or random fluctuations. Or, it could be that dark matter is way more complicated than we realized. For instance, maybe it interacts differently with protons than with neutrons, or maybe there's more than one kind of dark matter particle, or maybe we've made an error with our assumptions about how dark matter is distributed in our galaxy, and fixing that will alleviate some of the tensions in the data. A few papers posted recently have also argued that the CDMS analysis of the XENON 100 constraint made it out to be more constraining than it is, so the CDMS result is maybe not entirely ruled out by XENON 100. But even that wouldn't explain all the other signals, and the results still don't easily agree.

As usual in science, the only thing to do now is to get more data. The business of dark matter detection is still in a fairly early stage -- as the detectors take more data and become more sophisticated, hopefully these signals and limits will start to make more sense. And of course we will keep looking in other places. The LHC is starting to place interesting limits on the dark matter parameter space, and even beyond AMS, efforts at indirect detection are also giving us some intriguing signals that may or may not have anything to do with dark matter. Some of us (e.g. me) are also looking toward the early universe to see if we can find hints of dark matter's effects on the first stars and galaxies.

Meanwhile, the preprint archive is happily aglow with new theory papers trying to piece this all together. It really is an exciting time; sometimes it's fun to have no idea what's going on.

Wednesday, November 14, 2012

Academic Nomad

As you might know, the pre-faculty academic life can involve a lot of moving around. In my case, I went from an undergraduate institution in California, to grad school in New Jersey, to a postdoctoral fellowship in England. And just a month and a half ago I moved to a second postdoctoral fellowship in Melbourne, Australia.
I actually went around the other direction. Image source.
All the moving and settling in has kept me pretty busy lately, but I'll try to write some more posts soon. Meanwhile, here's where some of my writing has appeared elsewhere on the Internet in the last few months:

  • American Physical Society Physics, "Focus: Magnetic Fields Explain Lunar Surface Features" (20 August 2012)
    Why are there swirly white blotches all over the Moon? They come from miniature magnetic forcefields. And they might some day lead to Star Trek-style deflector shield technology, at least to protect spacefarers from the Solar Wind.
  • Foundational Questions Institute (FQXi) Blog, "Losing Neil Armstrong"
    (3 September 2012)
    The death of Neil Armstrong hit me kind of hard. Here I share some thoughts about what the human spaceflight program means to me, and I tell the story of how my grandfather played a part in saving the Apollo 11 mission astronauts from certain death.
  • The Economist "Babbage" Science and Technology Blog, "Becoming an astronaut: Frequent travel may be required"
    (6 September 2012)
    I recently applied to the NASA astronaut program. I made the first cut (and I'm still waiting to hear if I'll make the second). If you've ever wondered what the astronaut job application process is like, check out this piece. (Note: In case you're not familiar with the Economist style, the tech blog posts are all in the third person, with the correspondent referred to as "Babbage.")
I should be able to write more actual blog posts in the coming weeks. Stay tuned!

Friday, August 31, 2012

The Long Dark Tea-Time of the Cosmos


(This post is adapted from a longer, more rambling, and somewhat more technical post I wrote for a group blog, here.)

There have been a few truly transitional moments in the history of the Universe in which something fundamental about the cosmic environment changed. Some of these -- the beginning and end of cosmic inflation, reheating, big bang nucleosynthesis -- altered the very nature of spacetime or the kinds of particles that populated it, and all happened within the first few minutes. The first atoms formed a few hundred thousand years later, marking another milestone.  For the 13 billion or so years since then, though, you could argue that it's all been a bit samey.  Except for cosmic reionization.

Cosmic reionization can be explained in just a few words: the gas in the Universe went from being mostly neutral to mostly ionized. That might sound trivial, but it turns out that the implications are profound -- reionization is the reason we are able to see other galaxies billions of light-years away, and if we can understand how it happened, we will understand the formation of the very first stars and galaxies in the Universe.

But I should back up for a moment. In order to see why reionization matters, you need to know something about recombination and the cosmic dark ages.

Timeline of the Universe, showing recombination, the dark ages (not even labelled because that epoch just isn't interesting enough, apparently), reionization and the age of galaxies. Source. Credit: Bryan Christie Design.

Great ball of (primordial) fire

Recombination is probably the most inaccurately named event in the history of the Universe, on account of the fact that there was no "combination" before it.*  In the beginning, there was the all-encompassing energy-matter-plasma-fireball, the product of the first cosmic explosion, which rapidly expanded in all directions.  We sometimes refer to this as the "hot big bang."  This fireball was formed mainly of protons and electrons, all of which were hot and unbound and bouncing off photons and generally being really energetic.  (Charged particles that aren't bound together are called ions and ionized gas is called plasma, so you could call it a plasmaball instead if you like, but I'll stick with "fireball" for the purpose of dramatic imagery.)  In the fireball, the particles and photons were tightly coupled, meaning that they were all mixed up and interacting in a big indistinguishable mess.  But as spacetime expanded and the fireball got cooler, the particles lost some of their frenetic energy.  Eventually, there came a time when the fireball was cool and diffuse enough that protons and electrons could chill out and become bound atoms.  The photons were still there, but now instead of just ricocheting off ions, they could get absorbed by atoms, or sail right by them in the newly abundant spaces between.  Some photons still occasionally broke atoms apart, but the Universe was becoming diffuse enough that atoms spent more time bound than not.

Illustration of the transition between the cosmic fireball and the post-recombination Universe. Red spheres are protons, green spheres are neutrons, blue dots are electrons and yellow smudges are photons. The color bar on the bottom represents the average temperature (or energy) of the Universe at that epoch. Source.
(*Terminology note: "Recombination" is also sort of a technical term in physics, which in general refers to the joining of an electron and a proton, without regard to whether that particular electron and proton had made up an atom before. In the very early universe, inside the cosmic fireball, hydrogen atoms would sometimes form, but they'd be broken up immediately by energetic photons. The name "recombination," when talking about the epoch, refers to the time when the hydrogen atoms that formed could stay bound for an appreciable amount of time.)

And so, at the epoch of recombination, around 300,000 years after the big bang, the gas went from being ionized to neutral.  Recombination set off the decoupling era -- the time when the matter and radiation that were previously tightly coupled (i.e., interacting a lot) became more free to do their own thing.  Decoupling is also known as last-scattering, because it was the last moment when photons would immediately be scattered off matter as they flew around.  After decoupling, the photons were free to sail around unimpeded and travel for long distances.  Which is where the cosmic microwave background (CMB) comes from -- the newly decoupled photons free-streaming through the Universe out of the great primordial fireball.

Map of the cosmic microwave background, the radiation leftover from the primordial cosmic fireball. Tiny fluctuations in the temperature of microwave radiation coming to us from  all directions give us clues about how matter was distributed at the earliest times in the Universe. In this rendering, we would be at a tiny dot in the center of the sphere. Source.

And then we wait

The next phase of the Universe was, in many ways, distinctly unexciting.  It's called the dark ages.  During the dark ages, the Universe was full of cooling neutral gas (mostly hydrogen), and that gas was very very slowly coming together into clumps via gravity.  At decoupling, the fluctuations seeding these clumps were more dense than their surroundings by only about one part in 100,000.  Those tiny blips, which we see in the CMB, were enough to tip the scales of gravity to draw more matter together into bigger and bigger clumps.  But it took a while for anything particularly interesting to happen.  Sometime between 100 and 500 million years after the big bang, one of these little clumps became dense enough to form the first star, and that defined the "first light" of the universe.  (Of course it wasn't strictly the first light -- the fireball made plenty of light, and we still see it as the CMB -- but it was the first starlight.)

So, if we had a big enough telescope, could we look far enough back into the Universe to see that first star?

Unfortunately, no.  It turns out the dark ages were dark for two reasons.  One was that there wasn't any (visible) light being produced at the time.  The other is that neutral hydrogen is actually pretty opaque to starlight.

Atoms and molecules can only absorb photons at particular frequencies -- those corresponding to transitions between the energy levels of the electrons.  During the dark ages, any photon whose energy was in the sweet spot for a hydrogen atom transition would very likely be absorbed.  Radio waves or other low-energy photons could get through because there weren't any transitions of the right energies to take them, but visible light was another story.  It's easy for a hydrogen atom to absorb visible-light photons and use them to knock its electrons into higher energy levels (the same goes for ultra-violet light).  Those atoms release the photons again eventually, but in different directions, so the vast majority of the light produced by the first stars isn't able to make it all the way to our telescopes.
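To put rough numbers on this, hydrogen's energy levels follow the textbook formula E_n = -13.6 eV / n², and the resulting transitions sit squarely in the ultraviolet and visible bands. A minimal sketch (standard physics, nothing specific to this post):

```python
# Hydrogen transition energies from the standard Bohr-model levels:
#   E_n = -13.6 eV / n^2, photon energy = E_upper - E_lower
def level_eV(n):
    return -13.6 / n**2

def transition(n_lo, n_hi):
    e_photon = level_eV(n_hi) - level_eV(n_lo)   # eV
    wavelength_nm = 1239.8 / e_photon            # nm, from lambda = hc / E
    return e_photon, wavelength_nm

for n_lo, n_hi, name in [(1, 2, "Lyman-alpha (ultraviolet)"),
                         (2, 3, "H-alpha (visible red)")]:
    e, lam = transition(n_lo, n_hi)
    print(f"{name}: {e:.2f} eV, {lam:.0f} nm")
# Lyman-alpha: 10.20 eV at ~122 nm (UV); H-alpha: 1.89 eV at ~656 nm (visible).
# Photons near these energies get absorbed and re-emitted in random
# directions, which is exactly the "fog" described above.
```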

Opacity and transparency. The primordial fireball was opaque like fire is opaque: energetic particles couple with photons and keep them from free-streaming away. The edge of the wall of flame is like the last-scattering surface, where the light is finally free to escape. The dark ages were opaque like fog is opaque: the light was absorbed and scattered and attenuated. Once reionization cleared the "fog" of the dark ages, light was able to travel unimpeded. Photo sources: here, here and here (but really here).

Here comes the sun(s)...

Once stars were forming in earnest,  though, astrophysics really got going, and fun things started to happen.  The vast majority of the gas in the Universe (called the intergalactic medium, or IGM) was still neutral at this point -- mostly hydrogen, not doing much -- but each star or galaxy that formed would heat the gas around it and make a bubble of ionized gas.  As more and more of these bubbles formed, the intergalactic medium had a sort of swiss-cheese nature, with bubbles of ionized gas growing and coming together, burning away the fog of the dark ages.

Once there were enough stars and galaxies to ionize a significant fraction of the IGM, we finally had reionization: the (aptly named) epoch when the Universe went from being neutral to being fully ionized again.  And this time, the universe was much less dense and the starlight could easily pass through the ionized gas, so the IGM became transparent to starlight. And that's why we can see other galaxies -- because there's very little neutral gas left to absorb the light en route.

Artist's conception of bubbles of ionized gas percolating through the IGM during the dark ages. The CMB is at the far left, and the right is the present-day Universe. Source: illustration from a Scientific American article by Avi Loeb, which can be found here.
When did reionization happen?  And why does it matter?  Second question first: it matters because understanding reionization means understanding how the first sources of light in the Universe formed and how the IGM turned into the galaxies and clusters and all the amazing stuff we see today.  Also, it's a major milestone in the Universe's history, and a phase transition of the entire IGM, so it seems important.

Back to the other question: we think reionization happened around a billion years after the big bang, though probably gradually and clumpily and at different times in different places, and we're still trying to pin down the exact epoch.  There are a few ways to go about figuring it out.  One is to look for the Universe not being transparent.  In astronomy, opacity usually manifests as something absorbing light from something behind it.  On a foggy day, you know the fog is there because it makes it hard to see things that are far away, not because you really see the water droplets in the air.  Reionization is similar -- you know you're getting close to it if some of the light from a distant source (a quasar, generally) is absorbed before it gets to you.

Unfortunately, looking at absorption only tells us roughly when reionization was pretty much over, since it doesn't take much neutral hydrogen (about one part in 100,000) to absorb all the light from a distant quasar.

Another way to pin down reionization is to look at some subtle effects it has on the CMB, but that would take another blog post to even begin to describe, so I'll just say the CMB gives us a pretty good idea of the earliest that reionization might have started, but it's hard to get much more than that.

So where does that leave us?  We can't use visible light, because that's absorbed as soon as the IGM is slightly neutral.  And the CMB tells us a lot about the early universe, and gives us a hint about the beginning of reionization, but doesn't tell us when the bulk of it happened.

Radio astronomy FTW

The big innovation, the thing that institutions all over the world are investing in, is looking for radio signals coming from the neutral hydrogen itself.  Neutral hydrogen has a low-energy transition that, when it occurs, emits or absorbs a photon with a wavelength of 21 cm: it's called the 21 cm line. (The frequency is about 1420 MHz.)  This wavelength puts it in the radio part of the electromagnetic spectrum, so we see it as radio waves.

The origin of 21 cm radiation. In the higher-energy state, the spins of the hydrogen atom's electron and proton are aligned. If one flips its spin, the atom drops to a lower-energy state and a 21 cm photon is produced. Source.
The reason 21 cm radiation can let us peer into the dark ages is twofold.  One, it's so low-energy that it doesn't take a lot to excite it, so you can get 21 cm radiation being produced even if there's not a heck of a lot going on (just atoms colliding and a few stray photons).  The other advantage is that radio waves are really hard for neutral hydrogen to absorb.  An atom creates a 21 cm photon in the dark ages, and then the universe expands a little, making that photon just a little longer in wavelength, and then it's too low-energy to be absorbed by anything.  So all we have to do is set up a radio telescope and wait for it to arrive here!

Is it here yet? (Photo by Mike Dodds)
Okay, so it's not quite that simple.  Because the photons stretch out as the Universe expands, we're really talking about something like 100-200 MHz for "21 cm" photons from the epoch of reionization and the end of the dark ages.  There are some major downsides to working at those frequencies.  One is that you're now smack in the middle of all sorts of terrestrial radio communication: FM radio, cell phones, satellite transmissions... it's a big mess.  Also, at low frequencies, the Earth's ionosphere is highly refractive and can do all sorts of horrible things to your signals as they're coming down from space.  Somewhere in the tens of MHz, the ionosphere is completely opaque.  So if you want to pick up 21 cm radiation from the epoch of reionization, you have to find a place that's relatively radio-quiet (i.e., unpopulated) to do this sort of study, or you have to find a way to deal with the radio noise. (One example of a relatively radio-quiet place is the Australian outback. Another is the Moon.)
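The redshift arithmetic itself is simple, and it's worth seeing where those numbers come from. In the sketch below, the rest frequency is the real 1420.4 MHz; the redshift values are the rough reionization-era numbers discussed later in the post:

```python
# Observed frequency of redshifted 21 cm radiation: f_obs = f_rest / (1 + z)
f_rest = 1420.4  # MHz, rest-frame frequency of the hydrogen spin-flip line

for z in [6, 8, 10, 15, 30]:
    f_obs = f_rest / (1 + z)
    wavelength = 299.8 / f_obs  # metres, since lambda = c / f with f in MHz
    print(f"z = {z:2d}: {f_obs:6.1f} MHz  (wavelength ~ {wavelength:.1f} m)")

# z ~ 6-10 (the reionization era) lands at roughly 130-200 MHz -- right on
# top of FM radio and TV broadcasts -- and by z ~ 30 you're heading toward
# frequencies where the ionosphere starts to become a serious problem.
```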

A major challenge that you definitely can't get away from is our own Galaxy.  The Galaxy produces a lot of radiation which is extremely bright at the low frequencies we're dealing with here.  Galactic radio signals are typically about 10,000 times brighter than the signal from reionization.  And it doesn't help that the radiation is spatially varying in weird and complex ways.  Here's a map of the Galactic radiation at 408 MHz.  It's pretty bright, and it gets worse at lower frequencies.

Galactic synchrotron radiation at 408 MHz -- the emission gets stronger at lower frequencies. The color scale here gives the brightness temperature (a measure of the intensity of the signal) in Kelvins. For comparison, the 21 cm reionization signal would be around 10 mK. Source.
In spite of the challenges, there's a lot of effort right now going into building the telescopes to see this signal, because it would allow us to actually probe the IGM in the epoch of reionization.  Ideally, we'd get pictures like this:
Simulation of reionization. Source.
Each square represents a bit of the Universe at a different moment in cosmic history, going forward in time as you move left to right and top to bottom.  In the upper left-hand panel (0.4 billion years after the big bang), the IGM is largely neutral.  In the lower-right hand panel (0.8 billion years), it's ionized.  The features in the other panels are ionized bubbles forming and growing.  Each of these simulation panels represents just a small patch of sky, but in theory you can imagine doing a full-sky map.  Taking into account the expansion of the Universe (and consequent stretching of photons) and tuning the telescope to different frequencies, you ideally get a map of all the neutral hydrogen at each epoch.

I should also point out: the dark ages and epoch of reionization cover a lot of the observable Universe. This sketch shows roughly how much volume is covered by different kinds of observations, where we're in the middle, looking out.  The z values are redshift -- a measure of how much the Universe has expanded since that time.  (So the edge, the farthest away in space and time, is at a redshift of infinity, since the Universe is infinitely bigger now than at the big bang; the redshift today is zero.  Reionization was between redshifts 6 and 10 or so.)  In the diagram, the colorful part in the middle contains most of the galaxies we've seen directly.  The thick dark circle near the edge is the CMB.  Everything inwards of z=50 can be probed with 21cm observations, and almost everything outwards of z=6 can't be seen any other way.
Schematic of how much of the Universe we're seeing with different kinds of observations. Red, yellow and green are optical. The black circle around the edge is the CMB.  Everything in blue can be observed with 21 cm radio signals. Source: Tegmark & Zaldarriaga 2009.

If you build it...

There's something of a global competition-slash-collaboration going on right now to try to get at this signal, because it would open a whole new window on the evolution of the Universe.  You may have heard of the Square Kilometre Array, which is going to be the world's largest array of radio telescopes when it's completed in a decade or so.  It'll be split between South Africa and Western Australia, and one of the key goals of the project is to look deeper into the epoch of reionization than we ever have before, using the 21 cm line.  In the meantime, there's the Low-Frequency Array (LOFAR), the Murchison Widefield Array (MWA), and lots of other projects that are just getting going.  It's a big industry.

But before we get too excited, I should reiterate that dealing with the foregrounds and instrumental calibration and stuff is hard.  There are actually a number of intermediate steps (including getting an all-sky average signal, or doing some kind of statistical detection) that would have to happen before any attempt at mapping (or "tomography").  But mapping remains the ultimate goal.  And if we can map out what the neutral hydrogen in the Universe was doing in the first couple of billions of years, we can basically watch the Universe as we know it come into being.  And that would be pretty darn cool.

Credit: SPDO/TDP/DRAO/Swinburne Astronomy Productions.

Friday, July 27, 2012

You Don't Have to Blow Up the Universe to Be Cool


This was supposed to be a story about dark energy. It still is -- dark energy is one of the most intriguing mysteries in cosmology, after all -- but it's mostly a story about cosmic doom, why I love theoretical physics, and why you shouldn't believe everything you read on io9.

What goes up...

It's difficult to express to a non-physicist just how weird dark energy is, because most people are used to encountering things that they don't understand in physics, and they generally assume that someone else is on top of the situation. That's not the case with dark energy. Here's an analogy. Let's say you're throwing a ball in the air. There are logically two possibilities: (1) You throw it, it goes up for a while, slows down, and falls back to Earth. (2) If you happen to have superhuman strength, you throw it hard enough that it escapes Earth's atmosphere and then sort of coasts forever through the void. But imagine neither of those things happen. Imagine instead that you toss the ball up in the normal way, and it looks like it's starting to slow down, but just as you think it's about to reach its maximum height and come back, it suddenly speeds up and shoots off into space.

That's not supposed to happen.
[Source: Norwalk Citizen Online, Christian Abraham / Connecticut Post. ]
Dark energy is like that. It's actually the exact same physics. The big bang is like the throw, starting off the expansion of the Universe. That expansion means distant galaxies are all moving away from us, but since all those galaxies have mass and gravity is still always attractive, ultimately everything in the Universe should be pulling on everything else. This should slow the expansion down, through the same kind of attraction that pulls the ball toward the Earth, slowing it down and keeping it from floating away. But a bit over a decade ago astronomers discovered that the expansion isn't slowing down at all. There's something out there in the cosmos that's acting against the gravity of all those galaxies. It's not just keeping the Universe from recollapsing, it's actually pushing all the galaxies apart faster and faster, accelerating the expansion. And just as physicists would be at a loss to explain why your baseball suddenly went (non-)ballistic, everything we understand about physics tells us this should not be happening to the Universe. We call it dark energy because we have no idea what it is.

The cosmological constant

We have some theories, of course. In fact, there are probably hundreds of theories, many of them difficult to distinguish from one another with the data we currently have. The most familiar and longest-standing idea is that of the cosmological constant -- a sort of fudge factor that Einstein originally put into his equations of gravity. He wasn't trying to explain acceleration -- at the time, he thought the Universe was static, and he needed an anti-gravity term to balance out the pull of all the mass in the Universe. He discarded the extra term in embarrassment when the expansion of the Universe was discovered, but the newly discovered acceleration has many cosmologists thinking we need to put it back in.
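
Here's the equation itself, written out (in units where c = 1):

ä/a = -(4πG/3)(ρ + 3p) + Λ/3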
The equation governing the acceleration of the expansion of the Universe, with a cosmological constant term. The gravity term includes the density (ρ) and pressure (p) of all matter and energy -- the minus sign means this term slows the expansion. The cosmological constant term (with Λ) has a positive sign, and therefore contributes to acceleration. The parameter a is the scale factor measuring the size of the Universe, and the double dots indicate the second derivative (acceleration) with respect to time.
A defining property of the cosmological constant is, unsurprisingly, that it is constant. In fact -- and this is almost weirder than the acceleration -- the density of the "stuff" described by the cosmological constant stays the same even as the Universe expands. If you have a box filled with cosmological constant, and you suddenly make the box twice as large without opening it or putting anything in, you now have twice as much cosmological constant in your box. As I said: it's weird.

The cosmological non-constant?

Unfortunately, the cosmological constant isn't really that appealing a solution, since it still looks a lot like a fudge factor and it seems somewhat arbitrary. The main alternative is dynamical dark energy, which is any kind of dark energy that can change with time. Most theories of dynamical dark energy (often just called "dark energy" as opposed to a cosmological constant, which is sort of a special case) involve scalar fields. Until recently, we had no evidence whatsoever for scalar fields in nature, even though they were constantly popping up in theories. Now that we think we might have discovered the Higgs boson (yay!), we have evidence for the first scalar field: the Higgs field. The Higgs field itself doesn't have anything to do with dark energy, but it's comforting that at least one example of a scalar field might actually exist. The nice thing about a scalar field is that it can have the same value everywhere in space while varying with time, which is just what you need if you want some kind of time-dependent dark energy that fills the Universe.

So how do we distinguish between a cosmological constant and dynamical dark energy? The usual way is to look at the relationship between the dark energy's pressure (denoted p) and density (denoted, somewhat confusingly, by the Greek letter rho: ρ). One of the key features of any form of dark energy is the fact that it has negative pressure.

In general relativity, pressure is a form of energy, and energy has a gravitational effect -- your pressure adds to your gravitational field. (So, gravitationally, pressure pulls.) Negative pressure, therefore, subtracts from a gravitational field, and counteracts gravity -- it pushes. For a cosmological constant, the pressure is exactly -1 times the density: p=-ρ. (I'm using units where the speed of light is 1. You could also write this as p=-ρc².) For other forms of dark energy, there could be a different relationship.
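
In fact, you can read the push-vs-pull business straight off the acceleration equation above: the combination that gravitates is ρ + 3p = ρ(1 + 3w), so any component with w < -1/3 contributes to acceleration rather than deceleration. A quick sanity check in Python, if you like that sort of thing:

# Sign of the gravitating combination rho + 3p = rho*(1 + 3w):
# positive decelerates the expansion, negative accelerates it.
for name, w in [("matter", 0.0), ("radiation", 1 / 3), ("cosmological constant", -1.0)]:
    verdict = "accelerates" if 1 + 3 * w < 0 else "decelerates"
    print(f"{name} (w = {w:+.2f}): {verdict} the expansion")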

We use a parameter called the equation of state, w = p/ρ, to describe the ratio of pressure to density. All substances have one: pressureless matter has w=0; radiation has w=1/3. For a cosmological constant, w=-1.

As far as we can tell from astronomical measurements, w is pretty darn close to -1. Every measurement we've done is consistent with w=-1, and every time we improve on our measurements, we find a value of w even closer to -1. But it would be hard to say for sure that w is exactly -1, because all measurements have uncertainties associated with them. Our measurements may home in on values ever closer to -1, but, without some other reason to believe that we have a cosmological constant, we'll never be able to say that it's not just very slightly higher or lower.

The importance of asking "What if...?"

Until about 10 years ago, no one really talked about the idea that w could be less than -1. Anything with w<-1 was called phantom energy and was considered way too uncouth to be plausible. There are good theoretical reasons for this: constructing a theory with w<-1 is difficult, and if you manage to do it, you've probably had to do something tricky like introduce a negative kinetic energy, which is the sort of thing that would make a ball roll up a hill instead of down. You might even accidentally invent a theory with time travel and wormholes. So it was generally thought that we should leave w<-1 alone, and people made constraint plots like this:
Fraction of the Universe made of matter (Ωm) plotted against the dark energy equation of state parameter (w). Values in the orange region have a good fit to the data. [Source: Caldwell, Kamionkowski & Weinberg 2003 (PRL, 91, 071301)]
This is a plot of the fraction of the Universe made of matter (Ωm) versus w. The colored swaths are where the parameters are allowed by different kinds of observations. The orange is the most favored region. You can see from the plot that everything converges around w=-1: a cosmological constant.

But a group of theorists at Dartmouth and Caltech (Rob Caldwell, Mark Kamionkowski and Nevin Weinberg) looked at that and thought, "Maybe it's not converging at w=-1 -- maybe it just looks that way because it's really converging at some value of w less than -1. What would happen if that were the case?"
Same as above figure, but with the range extended to allow w<-1.
[Source: Caldwell, Kamionkowski & Weinberg 2003 (PRL, 91, 071301)]
And then they wrote my favorite paper ever [Caldwell, Kamionkowski & Weinberg 2003 (PRL, 91, 071301)].

Theory is awesome

It really is an amazing paper. Honestly, you should check it out. I wouldn't ordinarily recommend a theoretical physics paper to a general audience, but this paper is so well written, so accessible, and so beautiful that I can't resist. And it's only 4 pages long.

The authors start from a very simple idea: "What if some day we look at the data and we find out that w<-1?" It doesn't sound like a revolutionary idea, but no one had ever followed that idea to its logical conclusion. So they do it, and after jotting down just a few fairly simple equations, they discover that the universe would rip itself apart.

How often do you get to invent an ultimate cosmic doomsday in the course of your professional life? This is the kind of work I got into theoretical physics to do. It's awesome.

Here's how it works. I said before that w=-1 is a cosmological constant -- the energy density doesn't increase or decrease as the Universe expands. It turns out that if w>-1, the energy density goes down as the Universe expands (as it does for ordinary matter). Expand a box of matter and you have the same amount of matter, but more space, so your matter is now less dense. But if w<-1, the energy density increases as the Universe expands. Think about that for a minute. If you have a box of phantom energy, and you suddenly make the box twice as big, you now have more than twice as much phantom energy in your box.
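
If you want to play with this yourself, the general rule (standard expanding-universe scaling, nothing exotic) is that the energy density goes as a^(-3(1+w)), where a is the scale factor. Here's a minimal Python sketch, with the phantom value w = -1.2 picked arbitrarily for illustration:

# Density scales as rho ∝ a^(-3(1+w)); the total energy in a comoving box
# scales as rho * a^3. Here we double the box's linear size (a -> 2a).
for name, w in [("matter", 0.0), ("radiation", 1 / 3),
                ("cosmological constant", -1.0), ("phantom energy", -1.2)]:
    density_ratio = 2.0 ** (-3 * (1 + w))  # rho(2a) / rho(a)
    energy_ratio = 8.0 * density_ratio     # the volume grows by a factor of 8
    print(f"{name}: density x{density_ratio:.3f}, box energy x{energy_ratio:.2f}")

Matter just dilutes, radiation dilutes and redshifts away, the cosmological constant keeps exact pace with the volume, and phantom energy outruns it.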

Aside from being unsettling, this kind of behavior can actually have some pretty gruesome consequences for the Universe. If we stick with our familiar cosmological constant, then as the Universe expands, even though all the galaxies are moving away from each other, anything that's gravitationally bound stays bound, because there's just not enough dark energy in any bound system (like a solar system or a galaxy) to mess with it. But with phantom energy, the amount of dark energy in any bit of space is increasing all the time, so a planet orbiting a star will actually eventually be pushed away to drift on its own. Everything will become isolated.

And that's not even the worst of it. Caldwell and his colleagues realized that if the density of dark energy is increasing with time, it will eventually be accelerating the expansion of space so quickly that the cosmic scale factor -- the parameter that measures the characteristic size of a region of space -- will reach infinity in a finite time. If the scale factor is infinite, that means that the space in between any two points is infinite, no matter how close they were to begin with. It means that spacetime itself is literally torn apart. When Caldwell and his colleagues realized they'd discovered a new possible end state of the Universe, they dubbed it, appropriately, the big rip.

Animation of the big rip. From Caldwell, Kamionkowski & Weinberg's paper: "It will be necessary to modify the adopted slogan among cosmic futurologists -- 'Some say the world will end in fire, Some say in ice' -- for a new fate may await our world." [Source: NASA/STScI/G.Bacon]


DOOOOM!

Having just invented a new cosmic doomsday, the authors decided to go a step further. They worked out exactly when the big rip would occur for any given value of w, and then, for a specific example (w=-1.5, which would bring on the big rip about 21 billion years from now), they worked out exactly how long we'd have to wait before each of the cosmic structures we know and love would be destroyed. Galaxy clusters would be erased 1 billion years before the end. The Milky Way would be dismantled with 60 million years to go. At doom minus 3 months, the Earth would drift away from the Sun. With 30 minutes to go, our planet would explode, and atoms would be ripped apart in the last 10⁻¹⁹ seconds. Discussing this handy timetable of doom, the authors state with admirable detachment that, were humans to survive long enough to observe the big rip, we might even get to watch the other galaxies get torn apart as we await the end of days. I'm sure that would be lovely.
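
For what it's worth, the countdown itself is basically a one-liner. The paper's approximate formula for the time remaining, as I read it, is t_rip - t_0 ≈ (2/3) (1/|1+w|) (1/H0) (1-Ωm)^(-1/2). A quick Python check, where the Hubble time and matter fraction are typical assumed values rather than fits to anything:

# Approximate time until the big rip, following Caldwell, Kamionkowski &
# Weinberg (2003): t_rip - t0 ≈ (2/3) * |1+w|^-1 * H0^-1 * (1 - Omega_m)^-1/2
hubble_time_gyr = 14.0  # 1/H0 in Gyr, roughly, assuming H0 ≈ 70 km/s/Mpc
omega_m = 0.3           # assumed matter fraction
w = -1.5                # the paper's illustrative phantom value

t_rip_gyr = (2 / 3) / abs(1 + w) * hubble_time_gyr / (1 - omega_m) ** 0.5
print(f"Big rip in roughly {t_rip_gyr:.0f} billion years")  # ~22 Gyr, the right ballpark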

io9, you have forsaken me

Given my affection for the original phantom energy paper, you can imagine I was intrigued the other day to see an article on the io9 website proclaiming "The Universe Could Tear Itself Apart Sooner Than Anyone Believed." Could it be some new evidence for phantom energy, I thought? Sadly, no. It turned out to be an utterly overblown scare-piece that had hidden all the beauty of the physics behind false assertions and dramatic flaming-Earth graphics.

The io9 post discusses the work of Li and colleagues, researchers in China who have published an article called "Dark Energy and the Fate of the Universe." The paper isn't bad, or even really wrong (though I don't agree with all of it). But it's really nothing new or interesting. It starts from the assumption that dark energy is dynamical and that it is evolving to have w<-1 in the future. It then uses a new parameterization of the evolution of w to draw conclusions about the fate of the Universe.

I won't go into a lot of details, but the gist is as follows. If you want to determine whether w is changing with time, you have to start with some model for how it's changing -- basically, you have to assume a functional form. You choose some function for how w changes with time, look at data from the past to determine what w was then, and try to measure the function's parameters. In cosmology, we usually discuss time in terms of redshift (denoted by z), which is a measure of how much the Universe has expanded since whatever bit of the past we're observing. (In terms of the cosmic scale factor a, 1+z = a_today/a; as a grows without bound, z approaches -1.) The redshift z decreases with time and is zero today; future times have negative redshifts.

A typical parameterization of dark energy looks like this: w(z) = w0 + wa (z/(1+z)). The details of the form don't matter much; the key points are that w0 is the value of w today, a positive value of wa means w is decreasing with time, and a negative wa means it's increasing. This parameterization has the property that w diverges in the future, at z=-1. Li and colleagues don't like this, but it's hard for me to see why it matters. A redshift of -1 corresponds to an infinite scale factor, which is a big rip. If the only problem with the formula occurs when the big rip is actually in progress, it's hard to see why that should be a big deal for determining anything that happens up to that point.

In any case, they have an alternative, slightly more complicated parameterization, for which w doesn't go to infinity at z=-1: w(z) = w0 + wa [ln(2+z)/(1+z) - ln(2)]. In their formulation, a positive wa means w is increasing, and a negative wa means w is decreasing. They run some simulations and find out that the best-fit points for w0 and wa -- the values the data seem to be pointing to -- imply a big rip will occur.
Constraints on wa and w0 for the model by Li and colleagues. The red point indicates their best fit. The green point is a cosmological constant. The brown region is the best-fit region and the blue region is the 95.4% confidence region. [Source: Li X D et al. 2012 (Sci China-Phys Mech Astron 55,1330)] 
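
If you want to see the difference in behavior between the two parameterizations, here's a little Python comparison. The w0 and wa values are made up purely for illustration (and note that the sign conventions for wa differ between the two forms, so this is about the shapes, not a like-for-like fit):

import math

w0, wa = -1.1, 0.5  # illustrative values only, not fits to anything

def w_standard(z):
    # The usual parameterization: diverges as z -> -1.
    return w0 + wa * z / (1 + z)

def w_li(z):
    # Li and colleagues' parameterization: stays finite as z -> -1.
    return w0 + wa * (math.log(2 + z) / (1 + z) - math.log(2))

# March toward the far future (z -> -1):
for z in [0.0, -0.5, -0.9, -0.99, -0.999]:
    print(f"z = {z:+.3f}: standard w = {w_standard(z):9.3f}, Li w = {w_li(z):+.3f}")
# The Li et al. form tends smoothly to w0 + wa*(1 - ln 2) as z -> -1.
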
Shouldn't I be scared?

The fact that the best-fit point implies a big rip sounds important, but it isn't really. Many of the latest results have a best-fit value for w that's less than -1; the data just aren't yet good enough for us to draw any conclusions. A cosmological constant easily fits the data, and there's no compelling evidence that dark energy is anything more exotic. Also, as Li and colleagues readily admit, all their conclusions are based on the assumption that dark energy follows their own special functional form -- if it doesn't (and there's no reason to think it would), there's nothing they can say about what would happen.

Nonetheless, Li and colleagues go on to calculate when the big rip would occur with both their best-fit value and their worst-case value (the value still allowed by the data in which the big rip happens soonest) and they say that doomsday could be as soon as 16.7 billion years from now. They even include their own timetable of doom, with earlier times than the original one.

It's a reasonable calculation to make, but I wouldn't call it newsworthy. The comparison they make to say it's "earlier than we thought" is with Caldwell's doomsday value, which used an arbitrarily chosen w=-1.5 for illustrative purposes, not a value fit to any data. The io9 people apparently got hooked by an unreasonably enthusiastic press release and ran with it, playing up the paper's conclusions to make them sound as significant and alarming as possible.

Disappointingly, the io9 article also contains several blatantly wrong statements, such as "cosmologists are pretty sure dark energy has a value less than -1" (not true!) and "a likely value of -1.5" (completely ruled out!) and "the cosmologists are fairly convinced that w will continue to exhibit a value less than -1 well into the future" (also totally wrong!). Phantom energy is truly an awesome idea, but I don't think many cosmologists would say it's especially likely, and certainly none of us would bet the house. The theoretical problems are substantial and the data just aren't good enough yet for us to say anything either way. The big rip scenario is still fun to think about; it's not necessary to think it's actually imminent to appreciate that. Probably dark energy is a cosmological constant -- and plenty weird enough.

Did I mention theory is awesome?

As a theorist, I encounter a lot of really bizarre ideas. Sometimes I encounter an idea like phantom energy, which is incredibly cool and leads to some truly revolutionary possibilities ... but is probably ultimately wrong. Other times, I get to study something like dark energy, which is mind-bending in a totally different way: not because it breaks physics and makes the Universe blow up, but because it is, contrary to all our understanding, actually out there, just waiting for us to figure it out.