Wednesday, August 17, 2011

Rick Perry and Texas Job Numbers

by Matthias Shapiro, POLITICAL MATH

Full disclosure: I don't like Rick Perry for our next president. I have my reasons that aren't worth going into here. However, when I was watching the GOP debate and pro-Perry people started bringing up Rick Perry's job numbers as a cudgel against other candidates, I looked into the BLS data on Texas jobs. Having familiarized myself with the data, I started noticing claims about the Texas jobs data popping up that directly contradicted what I was seeing. So I wanted to clear up a couple of these common misconceptions.

Note: If you are going to comment and you want to introduce some new objection to the Texas job numbers, you MUST provide original data. I spent about 4 hours digging through raw data to write this post. I don't want you to point to some pundit or blog post and take it on their authority, because I've already researched several idiot pundits who are talking directly out of their asses when it comes to the data. I want you to point to the raw data that I can examine for myself. This means links. I refuse to waste any more of my time on speculative bullshit or "Well, I'll wager that the Texas jobs don't really count because..." If you're willing to wager, take that money and put it towards finding the actual data. In short, put up or shut up.

I'm not cranky, I swear.

Anyway, let's deal with the complaints in no particular order:

"Texas has an unemployment rate of 8.2%. That's hardly exceptional."

See... that's what I thought when I started looking at the data. I knew that Utah had a lower unemployment rate than Texas and I kept hearing that Texas was so great at jobs, blah, blah, blah, so I looked up the unemployment rate.

Nothing special.

So I was going to drive my point home that Texas was nothing special by looking at their raw employment numbers and reporting on those. That's when I saw this:

This may not look like anything special, but I've been looking closely at employment data for a couple years now and I've become very accustomed to seeing data that looks like this.

In a "normal" employment data set, we can easily look at it and say "Yep, that's where the recession happened. Sucks to be us." But not with Texas. With Texas, we say "Damn. Looks like they've recovered already."

(To get to this data, go to this link http://data.bls.gov/cgi-bin/dsrv?la then select the state or states you want, the select "Statewide", then select the states again, then select the metrics you want to see.)

But if Texas has so many jobs, why do they have such a high unemployment rate? Let's take a closer look at that data.

As a percentage of the number of pre-recession jobs, here is a chart of the growth of a selection of states. (For clarity, in this chart I selected a number of the largest states and tried to focus on states that have relatively good economic reputations. I did not chart all 50 states b/c it would have taken me too long.)
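If you want to replicate the chart yourself, here's a minimal sketch of the calculation, assuming you've exported the BLS series to a CSV (the file name and column layout below are illustrative assumptions, not the actual BLS export format):

import csv
from collections import defaultdict

# Assumed input: a CSV with columns "state", "month" (e.g. "2007-12") and
# "employment". These names are illustrative assumptions only.
series = defaultdict(dict)
with open("state_employment.csv", newline="") as f:
    for row in csv.DictReader(f):
        series[row["state"]][row["month"]] = float(row["employment"])

BASELINE = "2007-12"  # start of the recession
for state, months in sorted(series.items()):
    base = months[BASELINE]
    latest_month = max(months)          # lexicographic max works for YYYY-MM
    pct = 100.0 * (months[latest_month] - base) / base
    print(f"{state:15s} {latest_month}: {pct:+.1f}% vs. Dec 2007")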

We can see that Texas has grown the fastest, having increased jobs by 2.2% since the recession started. I want to take a moment and point out that second place is held by North Dakota. I added North Dakota to my list of states  to show something very important. North Dakota currently has the lowest unemployment rate of any state at 3.2%. And yet Texas is adding jobs at a faster rate than North Dakota. How can this be?

The reason is that people are flocking to Texas in massive numbers. Starting at the beginning of the recession (December 2007), let's look at how this set of states has grown in its labor force.

As you can see, Texas isn't just the fastest growing... it's growing over twice as fast as the second fastest state and three times as fast as the third. Given that Texas is (to borrow a technical term) f***ing huge, this growth is incredible.

People are flocking to Texas in massive numbers. This is speculative, but it *seems* that people are moving to Texas looking for jobs rather than moving to Texas for a job they already have lined up. This would explain why Texas is adding jobs faster than any other state but still has a relatively high unemployment rate.

"Sure, Texas has lots of jobs, but they're mostly low-paying/minimum wage jobs"

Let's look at the data. Here's a link: Occupational Employment and Wage Estimates

Texas's median hourly wage is $15.14... almost exactly in the middle of the pack (28th out of 51 regions). Given that Texas has seen exceptional job growth (and these other states have not), this does not seem exceptionally low.

But the implication here is that the new jobs in Texas, the jobs that Texas seems to stand alone in creating at such a remarkable pace, are low paying jobs and don't really count.

If this were true, all these new low-paying jobs should be dragging down the wage data, right? But let's look at the wage data since the beginning of the recession (click to enlarge; states are listed alphabetically).

It turns out that the opposite is true. Since the recession started, hourly wages in Texas have increased at the 6th fastest pace in the nation.

As a side note, the only blue state with faster-growing wages is Hawaii. Just thought I'd get that jab in since so many people have been making snarky "Yeah, I could get a job in Texas if I wanted to flip burgers!" comments at me on Twitter.

"Texas is oil country and the recent energy boom is responsible for the incredible jobs increase."

In identifying "energy jobs" I cast as wide a net as possible. If you want to replicate my findings, go to this link: http://www.bls.gov/sae/data.htm, click on "One-Screen Data Search", then select "Texas", then select "Statewide", then in Supersectors select "Mining and Logging", "Non-Durable Goods" and "Transportation and Utilities" and then in Industries select "Mining and Logging", "Natural Gas Distribution", "Electric Power Generation" and "Petroleum and Coal Products Manufacturing".

Tedious, I know, but transparency is important and this is how you get the data.

When we finally get the data, we discover that energy isn't really the biggest part of the Texas economy. Increases in jobs in the energy sector (or closely related to it) account for about 25% of the job increases in the last year. Since the energy sector only makes up 3% of all employment, there is some truth to this claim.

However, take the energy sector completely out of the equation and Texas is still growing faster than any other state. This indicates to us that the energy sector is not a single sector saving Texas from the same economic fate as the rest of the states. It's not hurting, but Texas would still be growing like a weed without it.

"Texas has 100,000 unsustainable public sector jobs that inflate the growth numbers."

I'm not sure where this one comes from, but the numbers are these (and can be found by selecting government employment from the data wizard at this link http://www.bls.gov/sae/data.htm):

Counting from the beginning of the recession (December 2007), the Texas public sector has grown 3.8%, or a little under 70,000 employees. That's faster than overall employment growth, but it's not off the charts.

Given that the Texas economy has grown so much and private sector jobs have grown so much, that doesn't strike me as an unsustainable growth in the public sector.

But, just in case you're really worried about it, you can lay your fears to rest because in the last year the Texas public sector has shrunk by 26,000 jobs. In the last 12 months, Texas lost 31,300 federal employees, trimmed 3,800 state jobs, and increased local government jobs by 8,400 jobs.

(To be fair, this was partially driven by the role Texas employees played in the census, which inflated federal job numbers this time last year. Since the census numbers stabilized, federal employment has been at about break-even.)

As you can see, we're nowhere near the "100,000 unsustainable jobs" number.

My Personal Favorite Chart

I'll leave you with my personal favorite chart. I mentioned at the beginning that Texas is seeing high unemployment in large part because it's growing so damn fast. The problem with this from a charts-and-graphs perspective is that it lets worse states off the hook, making them look better than they actually are. Looking at unemployment alone, we would conclude that Wisconsin has a better economy than Texas. But Wisconsin is still 120K jobs short of its pre-recession numbers. The only reason it looks better than Texas is that 32,000 people fled the state.

During that time, 739,000 people fled into Texas. Anyone who takes that data and pretends that this is somehow bad news for Texas is simply not being honest. At the worst, I'd call it a good problem to have.

So, to give something of a better feeling for the economic situation across states, this chart takes the population of the states I selected above and judges the current job situation against the population as it stood at the beginning of the recession.

Using that metric, Texas would have a very low unemployment rate of 2.3%. But the fact that unemployment in the United States is fluid means that the unemployed flock to a place where there are jobs, which inflates its unemployment rate (at least in the short term). It's not a bad thing for Texas... it just looks bad when dealing with the isolated "unemployment %" statistic.
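For the record, here's a rough sketch of one way to compute a metric in this spirit; it's a reconstruction rather than the exact calculation behind the chart, and the numbers in the example are made up purely for illustration:

def baseline_adjusted_unemployment(labor_force_dec07: float,
                                   employed_now: float) -> float:
    """Share of the pre-recession labor force not covered by a current job.

    A reconstruction of the chart's idea, not the exact method: it ignores
    people who entered or left the labor force since December 2007, which is
    precisely the caveat raised in the update below.
    """
    return 100.0 * (labor_force_dec07 - employed_now) / labor_force_dec07

# Illustrative (made-up) numbers, in thousands of people:
print(baseline_adjusted_unemployment(labor_force_dec07=11_500,
                                     employed_now=11_235))   # ~2.3%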

UPDATE: @francisgagnon on Twitter felt that this chart was dishonest because it charts Texas as having 2.3% unemployment and (in his words so I don't get him wrong): "It assumes immigrants create no jobs. But more people = more consumers = more jobs."

He is absolutely right about this. I tried to be clear above that this chart doesn't account for the fluid nature of an economy with immigration and departures of hundreds of thousands of people, but I don't want to leave anyone with the wrong impression. So here it is: This chart doesn't account for the fluid nature of an economy with immigration and departures of hundreds of thousands of people. The point of this chart is not to say "Texas should have 2.3% unemployment if only things were fair." Instead, it is an attempt to chart job growth in a way that controls for people leaving one job market to enter another. To say "Wisconsin has a better job market than Texas because its unemployment rate is 0.6% lower" is a wholly untrue statement even though it cites accurate numbers. What this chart is meant to do is not posit a counterfactual, but to give a visual representation of the employment reality that is obscured by the way we calculate unemployment numbers.

END UPDATE

And... that's it.

You may have noticed that I don't mention Rick Perry very much here. That is because Rick Perry is, in my opinion, ancillary to this entire discussion. He was governor while these numbers happened, so good for him. Maybe that means these jobs are his "fault". Maybe the job situation is the result of his policies. Or maybe Texas is simply the least bad option in a search for a favorable economic climate.

That is not an argument I'm having at this exact moment. My point is to show that most of the "excuses" you will hear about Texas' job statistics are based in nothing more than a hope that Rick Perry had nothing to do with them and not on a sound understanding of the data.

My advice to anti-Perry advocates is this: Give up talking about Texas jobs. Texas is an incredible outlier among the states when it comes to jobs. Not only are they creating them, they're creating ones with higher wages.

One can argue that Perry had very little to do with the job situation in Texas, but such a person should probably prepare themselves for the consequences of that line of reasoning. If Rick Perry had nothing to do with creating jobs in Texas, then why does Obama have anything to do with creating jobs anywhere? And why would someone advocate any sort of "job creating" policies if policies don't seem to matter when it comes to the decade-long governor of Texas? In short, it seems to me that this line of reasoning, in addition to sounding desperate and partisan, hogties its adherents into a position where they are simultaneously saying that government doesn't create jobs while arguing for a set of policies where government will create jobs.

Or, to an uncharitable eye, it seems they are saying "Policies create jobs when they are policies I like. They don't create jobs when they are policies I dislike."

People will continue to argue about the data. But hopefully this will be helpful in sorting out reality from wishful and desperate thinking. I mentioned on Twitter that the Texas jobs situation was nothing short of miraculous. This is why I said that and why I'm standing by that statement.

Friday, July 29, 2011

Study Finds "Huge Discrepancy" Between Hard Data and Warming Models

by Jason Mick, DailyTech  [July 29, 2011 9:53 AM]


Alarmism and climate profiteering are dealt yet another serious blow
Many are still operating under the perception that current global warming models are "good enough" to make drastic economic decisions.  That party line has been pushed, in part, by certain individuals like ex-U.S. Vice President and Nobel Prize winner Al Gore, who stand to gain tremendously in their personal finances by promoting alarmist and sensationalist rhetoric.  Indeed, Mr. Gore's "documentary" An Inconvenient Truth painted a grim picture of a pending apocalypse and made Mr. Gore hundreds of millions in sales and speaking fees -- but its accuracy is hotly debated.

I. New Study Blasts a Hole in Current Models


In a new study, Roy Spencer, Ph.D -- a prestigious former National Aeronautics and Space Administration (NASA) climatologist who currently works at the University of Alabama in Huntsville -- has examined data gathered between 2001 and 2011 by the Advanced Microwave Scanning Radiometer sensor housed aboard NASA's Aqua satellite.

The study was published [PDF] in the peer-reviewed journal Remote Sensing.

Hard data from NASA's Aqua satellite has shown that climate models have a "huge discrepancy" with reality, when it comes to the amount of heat escaping the atmosphere.  (Source: NASA)

The data provide the basis for yet another thorough analysis of atmospheric heat dissipation -- an important factor in heating or cooling.  And like past studies, this one found that the Earth's atmosphere sheds heat at a much faster rate than what's predicted in widely used global warming models.

The hard facts show that the models understated both the amount of heat shed during a full warming scenario and the amount of heat shed as warming begins.

Because the data show the Earth's atmosphere trapping less heat, the outcome of any human-caused warming driven by the emission of carbon greenhouse gases and other compounds is likely overstated.  Thus the dire predictions of the models used by the United Nations' Intergovernmental Panel on Climate Change (IPCC) and researchers are likely flawed.

States Professor Spencer in a press release from the University of Alabama in Huntsville, "The satellite observations suggest there is much more energy lost to space during and after warming than the climate models show. There is a huge discrepancy between the data and the forecasts that is especially big over the oceans."

This is a critical conclusion as it shows that the secondary "indirect" trapping from atmospheric water may be far less than previously predicted.

II.  Supporting Evidence Builds Stronger Case


The new study isn't necessarily cause to abandon climate models altogether.  After all, understanding our planet's climate is the key to growing better crops and protecting people from natural disasters.  That said, the models likely will need a major overhaul, one which some leading climate alarmists may regret.
Supporting evidence strengthens the case that such an overhaul is needed.

Researchers at NASA and the National Oceanic and Atmospheric Administration (NOAA) have been baffled by the fact that the widely used climate models were failing to properly predict atmospheric humidity and the rate of cirrus cloud formation -- phenomena driven by atmospheric heat.

Few publicly voiced such thoughts, likely for fear of persecution by their more sensationalist warming colleagues.  Still, despite the politics, the data crept silently into several studies.

Additionally, sensors aboard NASA's ERBS satellite detected more long-wave radiation (resulting from escaping atmospheric heat) between 1985 and 1999 than was predicted by computer models.
Taken together, the relatively comprehensive volume of satellite and atmospheric data paints a clear picture -- the climate models are badly flawed.

III. Indirect v. Direct Warming


So what's the difference between direct and indirect warming?  Well, direct warming is caused by substances like carbon dioxide, which trap a certain amount of heat when they're found in large quantities in the atmosphere.

While carbon dioxide has been vilified in the media, peer reviewed research states with relative certainty that it is actually a very weak greenhouse gas and a weak contributor to "direct" warming.

The fearful hypothesis, which alarmists have been pleased to promote, is that carbon's direct heating -- while small -- will somehow throw the environment out of whack, causing an increased abundance of atmospheric water vapor.  As water vapor is far better at trapping heat, this could lead to a domino effect -- or so they say.
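As a rough illustration (this is a standard textbook feedback relation, not an equation taken from the study itself), the dispute can be written as:

\[ \Delta T_{\text{total}} = \frac{\Delta T_{\text{direct}}}{1 - f} \]

where f is the combined feedback factor: the small direct CO2 warming gets amplified by 1/(1 - f), and runaway scenarios require f to be close to 1. Read in these terms, the satellite result suggests a smaller f than the models assume.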

But the new study shows that the predictions of runaway indirect heating are likely badly flawed.

IV. The New Climate Picture


The new study doesn't dismiss the idea that warming will occur if man keeps burning fossil fuels.  Rather, it indicates that warming will likely occur at a much gentler pace than previously predicted, and that the maximum temperature reached will likely be lower than predicted, as well.

This is significant as alarmists have tried to use the hypothesis of rapid runaway warming as a justification for sweeping economic changes.  Under a gentler warming scenario, slow rises in sea levels would not be that big a deal, as mankind would have plenty of time to adjust to them.  Plus the levels would not rise as fast as previously predicted.

Of course, this means some of the "good effects" of warming -- such as resource harvesting in an ice-free Arctic -- won't be realized either.  Thus the more temperate, data-based climate picture has both advantages and disadvantages versus the more fantastic past models.

V. A Brave Scientist


Professor Spencer deserves to be commended for his thorough analysis and outstanding work.  It takes a bold man to defy his colleagues when they're clearly perpetrating a factual inaccuracy.
It's not hard to imagine how difficult it must have been for Professor Spencer to get his work funded and published in a field dominated by NASA, whose higher ranks are heavily populated by pro-warming advocates like James Hansen.  The Nov. 2009 "climategate" email scandal at the University of East Anglia seemingly confirmed what many suspected -- it's hard for scientists to voice alternate opinions given the dogmatic state of climate research.

And yet it's tremendously important to do so.

For the most part, everyday environmentalists who have bought into the rhetoric of wealthy entrepreneurs like Mr. Gore, or powerful research chiefs like James Hansen did not personally profit off of the alarmism and approached the climatology debate with the best of intentions.

Sadly, in the process, pressing real environmental crises like the destruction of the Earth's rainforests have faded into the background.  Further, the climate emphasis led, in some cases, to lesser cuts in toxic gases such as the nitrogen and sulfur compounds produced in the burning of fossil fuels.  Regulators allowed greater levels of these gases so they could focus on forcing industry to adopt stricter carbon standards.

These toxic gases have contributed tremendously, according to thorough peer review, to problems like asthma.  Thus the climate alarmism may have indirectly cost not only the public's money and the health of the environment, but the public's health as well.

Saturday, July 23, 2011

Al Gore Lashes Out at Obama, Saying He "Failed" to Fight Warming

By Jason Mick, DailyTech [June 23, 2011 12:19 AM]

Al Gore, who made hundreds of millions of dollars off promoting his thoughts on "global warming", accused President Obama of having "failed" to act to stop warming.  (Source: Jewel Samad/AFP/Getty images)
 
---
 
Wealthy investor-cum-advocate continues to be one of the global warming movement's noisiest voices
United States President Barack Obama must be feeling a bit like his predecessor, George W. Bush, when it comes to the topic of climate change. President Bush was criticized by Democrats as being too weak on climate change. At the same time, more extreme elements of his party criticized his efforts, like the CAFE revisions, for supposedly being too heavy-handed. Likewise, President Obama has been criticized by Republicans for being too heavy-handed on climate change, but has been criticized by extreme members of his own party for being too weak.

Taking to the pulpit in a rambling 8-page online editorial in the magazine Rolling Stone, former Vice President and Nobel Prize winner Al Gore delivered perhaps the most stinging criticism yet against President Obama. In the editorial, entitled "Climate of Denial", Gore speaks on behalf of the latter contingent -- extreme elements of the Democratic party -- in lashing out at the President, saying he has "failed" to do his part to avert the climate crisis.

I. A Question of Credibility

It's a widely known fact that Al Gore makes over $100,000 for speaking appearances. In 2007 Fast Company estimated a speaking date with Mr. Gore would cost you a cool $175,000 USD.
In his global warming "documentary" An Inconvenient Truth, Mr. Gore claims to have given at least 1,000 speeches, meaning that he's likely earned in excess of $100M USD. And there are the profits from the documentary as well -- Mr. Gore likely earned a tidy cut of the film's almost $50M USD box office gross [source] and $31M USD in DVD sales [source].

That's not too shabby for a man who was once written off as too boring to become president.


Mr. Gore, who recently bought his fourth luxury mansion, uses carbon like there's no tomorrow. But he says he's actually "carbon neutral" thanks to carbon credits he buys from his own company.  (Source: coldwell banker previews via real estalker)
 
And then there's Mr. Gore's alternative energy climate firms such as Kleiner Perkins and Generation Investment Management LLP. According to reports, Mr. Gore is poised to become the "world's first carbon billionaire", thanks to these investments.

Mr. Gore defends these holdings, stating, "Do you think there is something wrong with being active in business in this country? I am proud of it. I am proud of it."

He's also been forced to defend his palatial living quarters, which are far from carbon-neutral [source]. In 2007 his 20-room, 8-bathroom mansion used as much electricity in a month as the average American household did in a year. The Gore manor also devoured a very sizable amount of natural gas each year. In 2010 he bought a fourth mansion -- an even more extravagant abode [source].

And that's not to mention his companies' private jets that he's used over the years to promote his "anti-warming" efforts [source]. (Mr. Gore contends that he's never owned a jet personally, so this doesn't count.)
Faced with ever-present criticism over his apparent green hypocrisy, Mr. Gore says he lives "carbon neutral" by purchasing a wealth of carbon credits to offset his lavish lifestyle. But reports indicate Mr. Gore is really just paying himself -- his credits allegedly come from Generation Investment Management, a London-based company with offices in Washington, D.C., for which he serves as chairman. [source]

In legal cases, justices are supposed to recuse themselves from matters in which they have a vested interest. But Al Gore is no judge, and he doesn't seem ready to recuse himself from this debate, in which he has a massive vested interest, anytime soon.

Mr. Gore does have the honor of a Nobel Peace Prize, shared with the United Nations Intergovernmental Panel on Climate Change (IPCC) and its embattled chairman Rajendra K. Pachauri, for what it's worth.


White House officials insist Mr. Gore's accusations are untrue and that the President hasn't "failed" to address climate change.  (Source: AP Photo)
 
II. Obama -- "Weak" on Climate?

Mr. Gore attacks Obama directly in the Rolling Stone piece, commenting:
President Obama has thus far failed to use the bully pulpit to make the case for bold action on climate change. After successfully passing his green stimulus package, he did nothing to defend it when Congress decimated its funding.

Without presidential leadership that focuses intensely on making the public aware of the reality we face, nothing will change.
Mr. Gore contends it wouldn't damage the President politically to get "tougher" on climate, writing:
Many political advisers assume that a president has to deal with the world of politics as he finds it, and that it is unwise to risk political capital on an effort to actually lead the country toward a new understanding of the real threats and real opportunities we face. Concentrate on the politics of re-election, they say. Don't take chances.
All that might be completely understandable and make perfect sense in a world where the climate crisis wasn't "real." Those of us who support and admire President Obama understand how difficult the politics of this issue are in the context of the massive opposition to doing anything at all — or even to recognizing that there is a crisis. And assuming that the Republicans come to their senses and avoid nominating a clown, his re-election is likely to involve a hard-fought battle with high stakes for the country.
...
But in this case, the President has reality on his side. The scientific consensus is far stronger today than at any time in the past. Here is the truth: The Earth is round; Saddam Hussein did not attack us on 9/11; Elvis is dead; Obama was born in the United States; and the climate crisis is real. It is time to act.
The attack sent the White House press department into a panic.  They rushed to point out the 960 metric tons of emissions saved yearly by the President's Recovery Act, which set "aggressive new joint fuel economy and emissions standards for cars and trucks."

States White House official Clark Stevens in a written response, "The President has been clear since day one that climate change poses a threat domestically and globally, and under his leadership we have taken the most aggressive steps in our country’s history to tackle this challenge."

Mr. Gore dismisses anyone who questions that global warming is real, man-made, and "destroying the climate balance that is essential to the survival of our civilization" as a "polluter" or "ideologue".  It's a strategy that promises huge profits for Mr. Gore -- and one that he claims to firmly believe in from an altruistic perspective as well.

One thing's for sure -- this won't be the last time Mr. Gore will be spotted beating the drum of the global warming movement and noisily opening his mouth as a self-proclaimed expert on climate change.

Sunday, June 5, 2011

FROM DARK STARS TO DARK MATTER

“Dark Stars”

In the late 1700s geologist John Michell and mathematician Pierre-Simon Laplace postulated the existence of objects referred to as “dark stars.” These were hypothetical objects whose mass would make their surface escape velocity exceed the velocity of light.
It wasn't until nearly 120 years later that the concept of a black hole began to gain some real mathematical and scientific support from the seminal work of Einstein's theory of General Relativity and Karl Schwarzschild’s solutions to Einstein's field equations. Even so, Einstein viewed black holes as being more of an interesting theoretical exercise than a realistically possible natural phenomenon. Still, the notion continued to gain traction as theoreticians further defined and refined the properties and structure of these still-theoretical entities.
It wasn't until 1971 that black holes began to gain observational support with the discovery of Cygnus X-1.

An Invisible Companion: First Evidence

In the constellation Cygnus, astronomers noticed an x-ray source that was quite different from other previously observed x-ray sources in that it would “flicker” as opposed to producing the more rhythmic repeating signal of pulsars. Dubbed Cygnus X-1, this x-ray source had a signature that looked more like a candle trying to stay lit in a breeze. The flickering was so fast that it could not be an object occulting the x-ray source, since that would require the object to be moving across the line of sight at faster-than-light speeds. On top of that, there were occasional bursts of radio emissions emanating from the same point in the sky. That point was later identified to be a star designated HDE 226868, a blue supergiant of spectral class B0 with an estimated surface temperature of 31,000 K. This star was not likely the source of the x-ray emissions, since B-class stars are not hot enough at the surface to emit significantly in the x-ray bands.
Subsequent spectrographic analysis of HDE 226868 showed Doppler wobbling with a shift characteristic of a spectroscopic binary. The period was measured to be 5.6 days but only one star was visible. HDE 226868, based on its spectral classification, is estimated to have a mass of 30 Mʘ, while its unseen partner is estimated to have a mass of 7 Mʘ. That is considerably above the mass threshold for a neutron star. The only compact object that fits the bill for HDE 226868’s missing companion is a black hole.
Astronomers are confident that this was the first observational evidence of a stellar mass black hole that fully corroborates theory. It is far from unequivocal proof, however, which is why it is referred to as a black hole candidate. [Lewin 06]

A Simple Portrait of Darkness

Black holes are interesting in that they are the most extreme objects in the universe and yet they are extremely simple to describe having only three basic characteristics. They can be described by mass, by charge and by spin and that's it!
Nonrotating black holes, the simplest of all, also called Schwarzschild black holes, are described only by their mass. If an intrepid astronaut were to encounter a 10 Mʘ Schwarzschild black hole in the course of his travels he would see a perfectly dark region… no color, no texture, nothing. But he would detect the curvature of spacetime and the increasing "gravitational pull" as he moved closer to the dark region. He would likely see the background stars lensed around the black hole’s boundary, so it would appear that this dark region was surrounded by a shimmering ring of light. [Fig. 1]
[Figure 1: Nonrotating black hole (artist's impression)]
It is only the black hole’s mass that describes its very simple structure. It can be defined as an object with enough mass to raise its surface escape velocity to be equal to, or exceed, the speed of light. As shown by…
\[ v_{\mathrm{esc}} = \sqrt{\frac{2GM}{R}} \ge c \]
where G is the gravitational constant, M is the object's mass and R is its radius.
So let's get back to our astronaut’s black hole. He steers his ship into a safe, but close, orbit around the black hole. Using Kepler’s 3rd Law he is able to accurately measure the object’s mass of precisely 10 Mʘ.
Next he wants to know how close a “safe” orbit is. For that he needs to know the radius of the point-of-no-return… the black hole’s event horizon. Using the equation derived by Schwarzschild…
\[ R_s = \frac{2GM}{c^{2}} \]
he plugs in the 10 Mʘ value for M (about 1.99 × 10^31 kg), which works out to a Schwarzschild radius of only about 29.5 km. Compare that to our 1 Mʘ Sun’s radius of 695,500 km!
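As a quick sanity check on that arithmetic, here is a minimal Python sketch (an illustration, not part of the original text) that evaluates the same formula with standard constant values:

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius of the event horizon for a nonrotating black hole: R_s = 2GM/c^2."""
    return 2.0 * G * mass_kg / c**2

M = 10 * M_SUN
print(f"R_s for 10 solar masses: {schwarzschild_radius(M) / 1000:.1f} km")
# Prints roughly 29.5 km, matching the figure quoted above.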
Since our astronaut is on a mission of scientific exploration he decides to send a probe to the black hole’s horizon to gather data. To make it easy to identify against the background of stars the probe is flashing a brilliant white light, once every second. He launches the probe directly toward the center of the region of darkness tracking it by its flashing white light and also by the telemetry that the probe is sending back to the ship. So far, other than the probe’s rapid acceleration towards its destination nothing unusual is happening.
As our probe gets closer to the black hole's event horizon the astronaut begins to notice something unusual. The white light that has been flashing consistently once per second seems to be slowing down a bit. It couldn't be that the batteries are draining, because the probe had a full charge. Something else… the white light doesn't seem to be quite as white as it was before. It is starting to show a redder tint. What the astronaut is seeing are relativistic effects experienced by the probe as it accelerates toward the strong gravitational field. The closer the probe gets to the event horizon, the more its time slows down as viewed from the astronaut's frame of reference. Flashes of light are not only longer but the intervals of darkness between flashes are lengthening as well. But if our probe could tell us what was happening from its perspective, everything would appear to be normal: the beacon would be flashing at 1 flash per second.
The strong gravitational field also causes a gravitational redshift. It's as if the wavelength of light is being stretched as it tries to escape the powerful gravity and its distortion of spacetime.
So as our astronaut continues to observe the probe's descent into darkness, the probe continues to flash redder until the beacon can only be seen in the infrared. Its time is slowing to the point that the intervals between flashes grow ever longer, each one many times the last. In the last view our astronaut has of the probe, it appears to be motionless in the deep infrared. It never actually appears to cross the event horizon.
What the probe is experiencing, however, is quite different. Even though the astronaut has a bizarre view of its journey, everything appears to be quite normal on the probe even as it crosses the event horizon. Its strain-gauge sensors do detect an uneven gravitational pull, since the nose of the craft is closer to the black hole’s center of mass than the aft portion. It is being tidally stretched. The force differential continues to increase until the probe is ripped apart, eventually into thin streams of particles accelerating toward the center known as the singularity. Then, even the particles are shredded.
Unfortunately, no telemetry reaches the astronaut's ship after the probe crosses the event horizon. Its signal is lost, and the image of the flashing light appears frozen in time at the “surface.”
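To put rough numbers on the slowing flashes, here is a small Python sketch (an illustration using the standard formula for a static emitter in the Schwarzschild geometry; the infalling probe's exact signal history is more involved, but the trend is the same):

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

M = 10 * M_SUN
R_s = 2 * G * M / c**2

def dilation_factor(r_m: float) -> float:
    """Ratio of the interval seen far away to the interval emitted at radius r."""
    return 1.0 / (1.0 - R_s / r_m) ** 0.5

for multiple in (10, 2, 1.1, 1.01, 1.001):
    r = multiple * R_s
    print(f"r = {multiple:>6} R_s -> 1 s flashes arrive every {dilation_factor(r):8.2f} s")
# The intervals (and wavelengths) stretch without bound as r approaches R_s.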

It’s All Relative

Clearly no definition of black holes is adequate if we do not take relativity into account. Einstein's General Theory of Relativity describes how matter shapes space-time and how matter, in turn, moves through it. As I mentioned before, one of the consequences of General Relativity was the prediction that a large mass compressed into a small volume could distort space-time to the extent that the slope of the gravitational well could become undefined. [Fig. 2]
[Figure 2: Rubber sheet analogy]
That would be problematic for anything or anyone who crossed that horizon since all possible futures point only toward the center of mass, the singularity.

Mass, Charge and Spin

As we will see when we examine how black holes form there needs to be enough mass compressed into a small enough volume for the gravitational force to overwhelm the electromagnetic and nuclear forces that resist further compression. When that limit is reached the force of gravity becomes irresistible and matter collapses into a theoretical point of infinite density known as the singularity. However, most astrophysicists believe that quantum effects will govern the maximum possible compression and zero volume is not possible.
The second property a black hole can exhibit is electrostatic charge. Charged black holes are thought to be rare, however, since the electromagnetic force is so much more powerful than the gravitational force that a black hole with charge would effectively “reach out” and be quickly neutralized by opposite charges attracted from the interstellar medium.
When we add the third property of spin that's when things get really interesting and really weird.
If you were to take a child's toy top and spin it so that it was happily balancing on its point, it would have the identical properties it would have if it were resting on a tabletop, except for its angular momentum. This is not the case with black holes. They are relativistic objects. A rotating black hole can actually drag space-time around with it. The property is known as frame-dragging. To visualize it, imagine space-time as a rotating fluid that is spiraling and accelerating down a bathtub drain.
The complex mathematics of a rotating black hole was best described by University of Texas astrophysicist Roy Kerr. Rotating black holes are likely to be common and some can have extreme rates of rotation. The black hole candidate GRS 1915+105 was discovered in 1992. It is an x-ray source and a spectroscopic binary at a distance of 11 kpc. Studies have indicated that it is a heavy stellar black hole at 10 to 18 Mʘ spinning at an astonishingly high 69,000 rpm (1,150 rps). [Castro 1996] This is near the theoretical limit.


Black Holes in Cross-Section

Let's take a closer look at the structure of a Kerr black hole [Fig. 3]. Since the rotation of the black hole twists space-time along with it due to frame dragging, the structure is a bit more complex than that of a nonrotating Schwarzschild black hole. There is more to the Kerr black hole than the singularity at the center of mass and an event horizon. In fact, even the singularity is different in that it is not a point; because of the sweep of space-time it assumes a ring-like configuration. This is still the region where space-time curvature becomes infinite, but because of frame dragging it’s smeared out, coplanar to the plane of rotation. Moving away from the singularity we encounter the event horizon, which has a roughly spherical shape. As in a nonrotating black hole it is the point of no return, but there is an interesting twist.
[Figure 3: A rotating black hole]
In a Reissner-Nordström black hole there are two event horizons where time and space flip. Roger Penrose described that region as something freakishly bizarre, since that's where all the matter and energy falling into a black hole piles up. In their 1989 paper, Canadian physicists Eric Poisson and Werner Israel described that region as a violent zone of "inflationary instability.” It's where everything… mass, energy, pressure, everything… keeps growing exponentially! And yet, all futures are still directed downward toward the singularity. It's a frightening concept, and the mathematics even more so. [Poisson 1989]
Surrounding the event horizons is an interesting region shaped like an oblate spheroid called the ergosphere. This is a region where you paradoxically have to expend energy to remain still. Frame dragging is sweeping space-time around the rotating black hole. This is analogous to trying to swim against the current of a powerful river: you either swim like crazy against the flow or you are carried downstream at relativistic velocities. In fact, you would need to be swimming faster than the speed of light in order to stay still, and this is impossible. Interestingly, an object is not doomed to oblivion in the ergosphere (assuming it could withstand the fragmentation, radiation and temperatures). Penrose theorized that rotational energy can actually be extracted from a rotating black hole: an object falling into the ergosphere could be split, with one part spiraling inward to the event horizon and the other part ejected outward like a slingshot, carrying with it some of the angular momentum of the rotating black hole and emerging with a greater mass-energy than it had when it went in. The mass-energy of the divided portion that fell in became negative, and the positive mass-energy was transferred to the ejected object along with extra angular momentum.
Just beyond the ergosphere and the static limit of a black hole is the photon sphere. Imagine a beam of light, say the image of a distant star, grazing a black hole. Photons traveling at particular angles can actually become trapped in orbit around the black hole, circling it until perturbed. But these orbits are inherently unstable, so they would be most likely to occur around nonrotating black holes. The radius of the photon sphere of a Schwarzschild black hole is 1.5 times the Schwarzschild radius.

Maelstrom

If a black hole was to exist in isolation, ignoring the contortions of space-time, it might be viewed as a tranquil place… dark, distorted, weird but uninteresting. But if a stellar mass black hole exists as part of a binary pair then the stage could be set for a spectacular maelstrom.
The x-ray flickering that we observed coming from black hole candidate Cygnus X-1 [Fig. 4] is likely the signature of a black hole feeding off its larger companion. Material from the larger star is filling its Roche lobe and extending beyond it, crossing the Lagrangian point and cascading toward the black hole. As it spirals inward it forms an accretion disk. As the material’s velocity increases it is subject to high frictional (thermal) heating and also to plasma current heating. The matter that came from the companion star now begins to glow in the x-ray bands at a Tmax of millions of Kelvin. [Poutanen 2007]
Not all the energy from accretion is radiated outward. Some of it crosses the event horizon and falls inward and is lost. Some is launched away. All that rapidly spiraling and charged material has a property of being able to set up powerful magnetic field lines. A black hole that is accreting matter can become a dipole.

[Figure 4: Cygnus X-1 in X-rays, from ESA / INTEGRAL]
While some of the accreting material will spiral into the event horizon, some of the superheated plasma gets caught up and ejected by the magnetic flux in twin relativistic jets aligned with the axis of rotation (perpendicular to the accretion disk). [Fig. 5] This phenomenon occurs at the stellar mass level as well as at the supermassive black hole level in the hearts of most galaxies.
[Figure 5: Twin jets of subatomic particles ejected by magnetic flux along the rotation axis]

Small, Medium and Supersized: BH Formation

We have very strong evidence for the existence of two classes of black holes: the stellar mass black hole, and the supermassive black holes that can have masses in the billions of solar masses. Two other types of black holes have been hypothesized: microscopic black holes that may have formed during the Big Bang, and intermediate mass black holes that may exist in the hearts of globular clusters or some dwarf galaxies. First let's consider the stellar mass black hole.

Stellar Mass Black Holes

When a massive star nears the end of its life it continues to fuse heavier elements from lighter ones, all the while maintaining its hydrostatic equilibrium. For most of its brief 10 million year main sequence lifetime the star burned hydrogen into helium. But with about one million years to go, the hydrogen supply in the core begins to run out, resulting in an inward crush due to gravity and soaring temperatures. When the core temperature rises past 170 million Kelvin, helium begins to fuse into carbon and oxygen.
With just one thousand years to go, new cycles of fuel exhaustion followed by rounds of fusion convert the carbon and oxygen into neon and magnesium, then into oxygen and magnesium, followed by silicon and sulfur. A cutaway of the star would look like a hot thermonuclear onion. The silicon and sulfur core has a temperature approaching 2 billion K and just days to live. The pressure of collapse drives the core temperature beyond 3 billion K, and the silicon and sulfur core begins to fuse into iron; but because iron fusion takes in more energy than it produces, hydrostatic equilibrium is destabilized. When the sphere of iron grows to a mass of 1.44 Mʘ, the Chandrasekhar limit, it collapses in a fraction of a second. The core collapses at velocities reaching nearly 1/4 the speed of light, showering the universe with neutrinos. The rebounding shockwave produces a Type II supernova, one of the most powerful explosions in nature, which obliterates the progenitor star. Depending upon the core mass of the progenitor, a Type II supernova can leave behind one of two types of relics.
On the lean side of a ≈ 3-4 Mʘ threshold (the Tolman-Oppenheimer-Volkoff limit) the resulting relic is a neutron star, a peculiar object that is essentially a dense aggregation of tightly packed neutrons. The second and even more extreme type of relic is the black hole. It results when the progenitor star’s core has a mass in excess of the ≈ 3-4 Mʘ limit. At those masses, even neutron degeneracy pressure (resulting from the Pauli Exclusion Principle) cannot resist the inward crush of gravity. Hydrostatic equilibrium collapses, the dead star’s mass reaches near infinite density, space-time curvature approaches infinity and a new black hole is born. (There is some uncertainty here, however. Beyond the Tolman-Oppenheimer-Volkoff limit of ≈ 3-4 Mʘ, quark degeneracy pressure may resist further collapse, although this is an area of ongoing research.) [O’Connor 2011]
There are ways for stellar mass black holes to form other than core-collapse supernovae. Black holes can also form from the merger of compact objects, such as two neutron stars or a neutron star and a white dwarf.
As these compact objects tightly orbit each other they are engaged in a fatal dance. Gravitational interaction produces gravitational waves that radiate away the angular momentum (orbital energy) of the two stars. This makes it inevitable that the two compact objects will at some point merge. When they do, their combined mass easily exceeds the TOV limit and a new black hole is born in a brief but powerful shower of gamma rays that heralds its arrival. This is one hypothesized explanation for the observation of short duration gamma ray bursts.

Supermassive Black Holes

At the other end of the size spectrum are the true giants. These are the black holes that reside in the hearts of most galaxies.
Since the invention of the optical telescope the center of the galaxy has been off limits to observation due to intervening gas and dust. Only with the advent of infrared and radio astronomy have we been able to explore what is going on at the heart of the Milky Way. The view has been fascinating!
Several teams have been exploring the curious orbital dynamics of stars surrounding a radio source known as Sagittarius A*, Sgr A* for short, at the galactic center. It was discovered in 1974 by astronomers Bruce Balick and Robert Brown using radio interferometry techniques. Just about 28 years later a team led by Rainer Schödel of the Max Planck Institute for Extraterrestrial Physics reported tracking a star designated S2 that appeared to be in orbit around an unseen object. More recently, another team led by UCLA’s Dr. Andrea Ghez has been using the advanced adaptive optics of the Keck telescope in Hawaii to study the Sgr A* region and S2 (along with other stars) with unprecedented resolution. Over the past 16 years astronomers have been tracking the positions of these stars and plotting their orbits. What becomes readily apparent is that they are orbiting something truly gargantuan. S2 has nearly completed one observed revolution around the massive unseen object and has a period of 15.56 years.
[Figure 6: Orbital dynamics of stars surrounding Sgr A*]
Analysis of S2’s Keplerian orbit [Fig. 6] shows that it is orbiting an object with a mass of about 4 million Mʘ. The object has a radius no larger than 45 AU. On closest approach to Sgr A*, S2 whips around at nearly 2% of the speed of light... speeds in excess of 5,000 km/s! Only one known object with such low luminosity could pack a mass of 4 million Mʘ into a volume with a radius of 45 AU: a supermassive black hole. This is pretty solid empirical evidence for the SMBH.
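As a rough illustration of how an orbit like S2's pins down the central mass, here is a minimal Python sketch of Kepler's third law. The period is the 15.56 years quoted above, while the semi-major axis (about 980 AU) is an assumed approximate value, not a figure from the text:

import math

G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
AU = 1.496e11            # m
YEAR = 3.156e7           # s

def central_mass(semi_major_axis_m: float, period_s: float) -> float:
    """Kepler's third law solved for the central mass: M = 4*pi^2*a^3 / (G*T^2)."""
    return 4 * math.pi**2 * semi_major_axis_m**3 / (G * period_s**2)

# Assumed approximate orbital elements for S2 (illustrative values only).
a = 980 * AU
T = 15.56 * YEAR

print(f"Implied central mass: {central_mass(a, T) / M_SUN:.2e} solar masses")
# Comes out around 4 x 10^6 solar masses, in line with the figure quoted above.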
Further research has indicated however, that Sgr A*, the radio emissions source, may not correspond exactly to the black hole's center of mass. The radio emissions seem to be arising from a bright spot near the hole’s event horizon in the accretion disk or in a relativistic jet. This is another area of ongoing investigation.
There doesn't seem to be an upper limit on the size of a supermassive black hole, but its mass is roughly proportional to that of its host galaxy. The largest SMBH we have identified resides in the giant elliptical galaxy M87. It has a mass estimated to be a staggering 6.4 billion Mʘ! M87 is also unusual in that it has a prominent relativistic jet shooting out from its nucleus, almost certainly powered by the black hole at its heart. [Fig. 7] This jet is estimated to be 5,000 ly long and is tightly collimated, but the Chandra observatory has detected lobes of matter extending nearly a quarter million light years away from the jet in both directions.
[Figure 7: M87's relativistic jet]

SMBH Formation

How supermassive black holes form is not entirely clear. What we do know, however, is that they appear to be at the centers of most galaxies, and the mass of the supermassive black hole seems to be proportional to the mass of the galactic bulge. This relationship correlates across many galaxies, not just the Milky Way. It is known as the M-sigma relation and implies a co-evolution of the galaxy with its supermassive black hole. Further, there is a relation between a galaxy’s bulge and the dark matter halo that surrounds most if not all galaxies. There appears to be a clear and inextricable connection between the dark matter halo, the galactic bulge and the supermassive black hole. [Umeda 2009]
In fact, this is an interaction that needs to be considered, and is often overlooked, when modeling a galaxy’s SMBH. Gebhardt & Thomas in 2009 showed that just accounting for a dark matter halo increased the estimated black hole mass of the giant elliptical galaxy M87 by a factor of two. [Schulze 2011]
However, what would that interaction look like? If the black hole was rotating, would there be a dark matter accretion disk (maybe superposed on any baryonic matter accretion disk) due to relativistic frame dragging? The dark matter accretion disk would be "cold" since there is no friction or blackbody radiation in the absence of electromagnetic interaction. File this under speculation.
Some theories propose that sometime between 300,000 and 1 million years after the Big Bang, over-densities of matter (within dark matter halos) collapsed to form supermassive black holes as active galactic nuclei, concurrently with the galaxies themselves. The big question is what happened in between. What is the connection between the collapsing gas cloud and the supermassive black hole? Astronomers are unsure whether the intermediate step was the aggregation of stellar mass black holes, the collapse of highly speculative supermassive "relativistic stars," or whether the proto-galaxy collapsed directly into a supermassive black hole at the very start.
Again, this is an area of ongoing investigation.

Intermediate Mass Black Holes

We have seen powerful and convincing evidence for the existence of stellar-mass black holes and the supermassive black holes that reside in the hearts of most galaxies but where are the intermediate mass black holes? There is a mass distribution gap. Is there some principle that says that a black hole can grow to only 40 or 50 Mʘ? Is there a mechanism that prevents the aggregation of matter in the hearts of galaxies in amounts less than the supermassive variety, a choke point?
At a conference at UC Berkeley last March I asked UCLA professor Andrea Ghez what her thoughts were about the evidence pointing to the existence of intermediate mass black holes, and she responded that they are “still a matter of some controversy.” Frankly, this surprised me since it seems counterintuitive. Why couldn't there be intermediate mass black holes forming in the hearts of globular clusters? Some have speculated that ultra-luminous x-ray sources (ULXs) could be the direct result of intermediate mass black holes (IMBHs) swallowing matter. And the M-sigma relation predicts the existence of IMBHs in lower luminosity or dwarf galaxies, with masses ranging from 10^4 to 10^6 Mʘ. The biggest problem doesn't seem to be a lack of direct observational evidence for their existence but rather the lack of an airtight formation mechanism.

Primordial Black Holes

In 1971, Stephen Hawking laid some seminal groundwork in theoretical astrophysics when he proposed the notion of primordial black holes. These black holes, perhaps numbering in the billions, may have formed in the enormous mass-energy densities that existed in the fraction of a second after the Big Bang. These primordial black holes would've occupied a volume equal to that of a subatomic particle but contained a mass equal to that of a mountain.
Hawking further went on to show that primordial black holes would have a limited lifetime (t), especially given their small mass: a lifetime of roughly 10 billion years. Lifetime???

Are Black Holes Forever?

In short, no. In quantum mechanics the uncertainty principle allows for (actually it requires) the creation of virtual particles out of nothing. These particles exist for the briefest of moments before annihilating each other. The creation of these particle-antiparticle pairs violates conservation of mass-energy laws for an instant, but since the pairs immediately annihilate themselves the sum of the pairs is zero and all is well. Strange as it sounds these virtual particles do exist. They have been experimentally verified and exert a measurable pressure.
But at the extreme edge of a black hole's event horizon something really weird happens. When two virtual particles are created, one may dash across the event horizon and be lost into the black hole. The other survives and escapes into surrounding space and, importantly, the process reduces the black hole's mass by a tiny amount. Over time the mass lost is cumulative.
From the point of view of an observer outside the black hole it would appear that the black hole created and emitted a particle. In fact, it would appear that the black hole was emitting a measurable flux of particles that would cause it to "glow." It's not a paradox, because the particle creation and emission is taking place on our side of the event horizon, so the black hole is, indeed, still black. Interestingly, the energy for the glow comes at the expense of the hole's mass. So over time the black hole shrinks, and the more it shrinks the faster it radiates (the greater the flux) until it explodes in a flash of radiation.
Hawking can claim credit for the theoretical discovery of black hole evaporation. The particles radiating away from an evaporating black hole are referred to as Hawking radiation. Curiously, this gives black holes a temperature. And the evaporation rate (and temperature) of a black hole is inversely proportional to its mass. The smaller the black hole, the higher the evaporation rate and the higher the temperature, as in…
\[ T = \frac{\hbar c^{3}}{8 \pi G M k_{B}} \]
Even so, black hole evaporation can be ponderously slow. You can estimate that a 1 Mʘ black hole would take over 10^67 years to evaporate, as expressed by…
\[ t_{\mathrm{ev}} \approx \frac{5120\,\pi\,G^{2}M^{3}}{\hbar c^{4}} \]
In other words, we’re in for a considerable wait. It is possible, however, that we may see the lingering gamma ray flashes associated with the evaporation of primordial black holes formed during the Big Bang. They are predicted to have a unique gamma ray signature, but so far none have been detected.
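Here is a minimal Python sketch (an illustration, not part of the original text) that evaluates both expressions for a 1 Mʘ black hole:

import math

HBAR = 1.055e-34     # reduced Planck constant, J s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
K_B = 1.381e-23      # Boltzmann constant, J/K
M_SUN = 1.989e30     # solar mass, kg
YEAR = 3.156e7       # seconds per year

def hawking_temperature(mass_kg: float) -> float:
    """T = hbar c^3 / (8 pi G M k_B): smaller holes are hotter."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

def evaporation_time_years(mass_kg: float) -> float:
    """t ~ 5120 pi G^2 M^3 / (hbar c^4): lifetime scales as the cube of the mass."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4) / YEAR

M = M_SUN
print(f"1 solar-mass hole: T ~ {hawking_temperature(M):.1e} K, "
      f"lifetime ~ {evaporation_time_years(M):.1e} years")
# Roughly 6 x 10^-8 K and ~2 x 10^67 years, matching the scale quoted above.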

Unanswered Questions Remain

There are still mysteries, however… where are all the stellar mass black holes? Theoretically, there should be vast numbers of them. [Özel 2010] In the 13.7 billion year history of the universe there must have been uncountable billions of stars massive enough to form Type II supernovae and collapse into black holes. Where are they? Sure, they can be difficult to detect given their dark nature, but to date the count of verifiable stellar mass black hole candidates numbers just in the dozens.
For example, consider the demise of supernova SN 1987A, the most studied supernova in history. The progenitor star was an unusual class B3 blue supergiant with a mass of about 20 Mʘ in the Large Magellanic Cloud, at an estimated distance of 51.4 kpc. [Gilmozzi 1987] Twenty-four years later, despite intense observational study, the compact remnant is still missing.
Figure 8 SN 1987A
Our studies have taken us to the theoretical boundaries of these most extreme of objects, whether by indirect observation through powerful telescopes or by supercomputer modeling. Even though we may never directly observe a black hole in situ, we can say with confidence that they are real and they are extraordinary!
 

References:

Castro-Tirado, A. et al (1996), ApJ 461, L99 Infrared Spectroscopy of the Superluminal Galactic Source GRS 1915+105 during the 1994 September Outburst
Gilmozzi, R., et al. (1987) Nature 328, 318, The progenitor of SN 1987A
Lewin, Walter; Van Der Klis, Michiel (2006). Compact Stellar X-ray Sources. Cambridge University Press. pp. 159. ISBN 0-521-82659-4.
O'Connor, E. & Ott, C. (2011) ApJ, 730:70, Black Hole Formation in Failing Core-Collapse Supernovae
Özel, F., et al. (2010) ApJ, 725:1918–1927, The Black Hole Mass Distribution in the Galaxy
Poisson, E. & Israel, W. (1989) APS Phys. Rev. Lett. 63, 1663–1666 Inner-horizon instability and mass inflation in black holes
Richstone, D., et al (1998) Supermassive Black Holes and the Evolution of Galaxies. Nat, 395, A14.
Schulze, A. and Gebhardt, K 2011 EFFECT OF A DARK MATTER HALO ON THE DETERMINATION OF BLACK HOLE MASSES ApJ 729
Umeda, Hideyuki, et al 2009 JCAP08 (2009)024  Early Black Hole formation by accretion of gas and dark matter

AN EXAMINATION OF INTERPLANETARY TRAVEL

 
On the 3rd of August, 2004, a Delta II rocket launched the MESSENGER spacecraft on its voyage of exploration toward the planet Mercury [Fig. 2]. More than 6 1/2 years later, on the 17th of March, 2011, MESSENGER inserted itself into orbit around the planet. The journey was a long one… nearly 8 billion kilometers. It included one Earth flyby, two Venus flybys and three Mercury flybys before the spacecraft parked itself in an elliptical orbit around Mercury. [Johns Hopkins 2008]
Why did it take so long? Mercury's orbit brings it to roughly the same distance from the Earth as Mars, yet a coasting trip to Mars takes only 6 to 11 months. There were two reasons… physics and cost.
The planet Mercury orbits the sun in an elliptical orbit with a semimajor axis of ≈ 0.39 AU, an orbital period of 88 days, and importantly, an orbital velocity of 47.9 km/s. By comparison, the Earth has an orbital velocity of just 29.8 km/s.
Figure 1 MESSENGER orbital trajectory [NASA]
So one of the biggest challenges the designers of the MESSENGER mission had to solve was getting the spacecraft to catch up to Mercury without blowing past it. The difference in orbital velocities is 18.1 km/s. Accelerating to Mercury's orbital velocity is not the issue, since the probe will "fall" into the Sun's gravitational well on its own. The issue is managing that velocity so the spacecraft slows enough for a tangential orbital insertion, and that requires either a lot of propellant or the very clever use of physics. Given MESSENGER's tight energy budget, meeting the target required careful maneuvering and mission planning. [McAdams 1998]
Building and launching a spacecraft that can accelerate rapidly is not an insurmountable problem; doing so under tight fiscal constraints, however, can make the challenge formidable. The mission planners for MESSENGER had to accomplish lofty objectives at minimum cost or risk program cancellation. So they had to be ingenious and take advantage of what nature provides in the form of gravity assists. The trade-off is mission time. These kinds of trade-offs are a consideration for virtually every mission of space exploration.
Figure 2 MESSENGER launch aboard a Delta II [Boeing]
In this paper I will examine the challenges of interplanetary propulsion systems and mission design, for today's vehicles as well as future ones.

Navigating in Space

Interplanetary travel requires careful energy management. Let's say that we're planning a mission to Mars. The first thing the mission planner needs to understand is that the launch platform, namely the Earth, is moving, and so is the faraway destination. Secondly, he needs to understand that massive objects like the Sun or the planets create their own "gravity wells" that the probe will need to either "fall into" or "climb out" of. These maneuvers require energy, and energy is usually in limited supply.
So, let's get back to our mission to Mars. If we were to take our rocket, simply point it at the planet and push the throttles to the firewall, our mission would likely be lost to deep space. Like a competitive trap shooter, the trajectory designer has to consider that Mars is moving: he will need to "lead" the planet so that Mars' orbit intersects the probe's trajectory. He will also need enough thrust to climb out of the Earth's gravity well, plus enough to decelerate when falling into Mars' gravity well to enter orbit.
The mission planner must also consider that the fuel carried is additional mass that needs to be accelerated. In other words, the shorter the mission time for a given distance, the more fuel must be carried; more fuel means more mass to push, which demands more thrust, which in turn requires still more fuel. This is an exercise in economics.
So our Mars mission planner looks for the most fuel-efficient way to get to the Red Planet and decides to use a technique known as the Hohmann transfer orbit, named after Walter Hohmann. [Wiki 2011]

Hohmann Transfer Orbit

The low-energy Hohmann transfer orbit is the most fuel-efficient way to move between two roughly circular, coplanar orbits.
Figure 3 Hohmann transfer orbit diagram [Wikipedia Commons]
Here's how it works. Look at the coplanar diagram above. The green circle is the orbit of the Earth; the red circle is Mars' orbit. The probe initiates a trans-planetary burn that is roughly tangential to the orbit of the Earth. Keep in mind that at this point the Earth and the spacecraft already have an orbital velocity that amounts to essentially "free" angular momentum. The burn puts the spacecraft into an elliptical orbit with the Sun at one focal point (the yellow ellipse). Mission planners time the burn so that the probe reaches its aphelion just as Mars arrives there, with an orbital velocity roughly equal to that of Mars. The change in velocity that each burn must supply is known as delta-v. After the initial burn the probe coasts to Mars in the friction-free environment of space. [JPL 2011]
On June 2nd of 2003 the European Space Agency launched Mars Express; it attained Mars orbit on Christmas Day of the same year. That was a relatively fast 6 1/2 month trip covering 400 million km (even though the mission was plagued with navigation problems and was hit by massive solar flares). One factor behind the speedy coast phase was the 2003 alignment of Mars and Earth, the closest the two planets had been in roughly 60 millennia.
To complete the transfer the spacecraft must make a second burn to circularize its orbit; otherwise it will simply continue along its elliptical transfer orbit and fall back toward the Sun.
The last step is known as orbital insertion and the spacecraft will make final adjustments to place it in an orbit around Mars.
This is a fuel-efficient trajectory; however, it does place limitations on mission planners. The biggest limitation is the launch window. Essentially these are "windows of opportunity" when the Earth and the destination planet are in the correct alignment for the Hohmann transfer to result in a rendezvous. Unless the alignment is right, the probe arrives at its aphelion and Mars isn't there; the planet will be either too far ahead of or too far behind the spacecraft.
Going to Mars on a Hohmann transfer orbit means capitalizing on planetary alignments that are periodic. Mars and the Earth return to the same position relative to each other once every 2.135 years; this is known as the synodic period. That means the launch window recurs only every 780 days and remains open for only a couple of weeks. This is a critical consideration, especially for manned spaceflight: if we travel to Mars we can't just turn around and come home if something goes wrong.
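To see where that 780-day cadence comes from, here is a one-line Python check; the orbital periods are textbook values, and the formula simply asks how often the faster planet laps the slower one.

```python
# Synodic period: how often Earth and Mars return to the same relative alignment.
T_EARTH = 365.25   # days
T_MARS = 686.98    # days

synodic = 1 / (1 / T_EARTH - 1 / T_MARS)
print(f"synodic period ≈ {synodic:.0f} days (≈ {synodic / 365.25:.2f} years)")  # ≈ 780 days
```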
Mission planners use what are known as porkchop plots [Fig. 4] to identify optimal launch windows. These charts show contours of equal characteristic energy, or C3, against combinations of launch and arrival dates for a given interplanetary flight. C3 is a measure of the energy a mission requires beyond what is needed merely to escape; numerically it is the square of the hyperbolic excess velocity left over for the interplanetary leg. [Sellers 2005]
Figure 4 Porkchop plot for a Mars 2005 mission [NASA]

Delta V

Knowing when to launch is only part of the problem; the other part is knowing how to manage your velocities, and that is governed by Kepler's 3rd Law, T²/a³ = k.
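As a quick illustration of how much work that one law does, a minimal Python sketch (the Sun's gravitational parameter and Mars' semimajor axis are textbook values) recovers Mars' 687-day year directly:

```python
import math

MU_SUN = 1.327e11      # Sun's gravitational parameter, km^3/s^2
A_MARS = 2.279e8       # semimajor axis of Mars' orbit, km

# Kepler's third law: T^2 is proportional to a^3; with mu known, T = 2*pi*sqrt(a^3/mu).
period_s = 2 * math.pi * math.sqrt(A_MARS ** 3 / MU_SUN)
print(f"Mars orbital period ≈ {period_s / 86400:.0f} days")  # ≈ 687 days
```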
When you light your initial burn you need to add enough delta-v to carry the spacecraft to Mars in time to intercept the planet and match orbital velocities.
Calculating the departure and arrival delta-v's, as well as the transfer time, is pretty straightforward. [μ is the Sun's standard gravitational parameter, and r₁ and r₂ are the radii of the Earth's and Mars' orbits respectively.]
Delta-v departure (instantaneous burns)
\Delta v_1 = \sqrt{\frac{\mu}{r_1}} \left( \sqrt{\frac{2 r_2}{r_1 + r_2}} - 1 \right)
The departure delta-v for the Mars transfer orbit works out to approximately 2.9 km/s.
Delta-v arrival (instantaneous burns)
\Delta v_2 = \sqrt{\frac{\mu}{r_2}} \left( 1 - \sqrt{\frac{2 r_1}{r_1 + r_2}} \right)
The arrival delta-v, where the elliptical Hohmann transfer orbit is circularized at Mars' distance, works out to approximately 2.7 km/s.
Total delta-v
\Delta v_{total} = \Delta v_1 + \Delta v_2
Combine the two for a total delta-v of roughly 5.6 km/s.
Time of transfer (Hohmann)
t_H = \pi \sqrt{\frac{(r_1 + r_2)^3}{8 \mu}}
The time of transfer works out to about 0.71 years, or roughly eight and a half months.
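Those figures are easy to check. Here is a minimal Python sketch of the three formulas above, assuming idealized circular, coplanar orbits for Earth and Mars; real mission delta-v budgets, which start from low Earth orbit and include capture burns, will differ.

```python
import math

MU_SUN = 1.327e11        # Sun's gravitational parameter, km^3/s^2
R_EARTH = 1.496e8        # radius of Earth's orbit, km (1 AU)
R_MARS = 2.279e8         # radius of Mars' orbit, km

def hohmann(mu, r1, r2):
    """Return (dv_depart, dv_arrive, transfer_time_seconds) for a Hohmann transfer."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    t = math.pi * math.sqrt((r1 + r2) ** 3 / (8 * mu))
    return dv1, dv2, t

dv1, dv2, t = hohmann(MU_SUN, R_EARTH, R_MARS)
print(f"departure dv  ≈ {dv1:.2f} km/s")           # ≈ 2.9 km/s
print(f"arrival dv    ≈ {dv2:.2f} km/s")           # ≈ 2.7 km/s
print(f"total dv      ≈ {dv1 + dv2:.2f} km/s")     # ≈ 5.6 km/s
print(f"transfer time ≈ {t / 86400 / 365.25:.2f} years")  # ≈ 0.71 yr
```

The printed values are heliocentric, impulsive-burn idealizations; they are meant only to reproduce the numbers quoted above.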
Again, the biggest disadvantage of this low-cost, high-fuel-efficiency approach is that the missions can be ponderously slow, taking years instead of months, and the launch windows can be few and far between.

Type I and Type II Trajectories

Trajectories that carry the spacecraft less than 180° around the Sun are called Type I trajectories; those that carry it more than 180° are called Type II trajectories.
A couple of notes are in order about shaping a spacecraft's orbit. If a burn is made somewhere along the vehicle's trajectory, the shape of the orbit will change, but the spacecraft will return to that same point on every subsequent revolution. So if a mission planner wants to raise the altitude of a circular orbit all the way around, and not just change the periapsis or apoapsis, two short burns will be required to shape the orbit.
If a vehicle is in a circular orbit and the rocket is fired against the direction of travel, as a retro rocket, the vehicle slows and the new orbit will be elliptical with a lower periapsis located 180° from the firing point. When that new periapsis dips into the planet's atmosphere, the burn becomes a reentry maneuver. [Fig. 5]
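A minimal sketch of that reentry idea using the vis-viva equation; the 300 km circular orbit and the 50 m/s retrograde burn are illustrative numbers I have assumed, not any particular mission.

```python
import math

MU_EARTH = 3.986e5   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6378.0     # Earth's radius, km

r = R_EARTH + 300.0                      # 300 km circular orbit
v_circ = math.sqrt(MU_EARTH / r)         # circular velocity, ~7.7 km/s

v_new = v_circ - 0.050                   # retrograde burn of 50 m/s
# vis-viva: v^2 = mu * (2/r - 1/a)  ->  solve for the new semimajor axis a
a_new = 1 / (2 / r - v_new ** 2 / MU_EARTH)
r_periapsis = 2 * a_new - r              # the burn point becomes the new apoapsis
print(f"new periapsis altitude ≈ {r_periapsis - R_EARTH:.0f} km")  # ≈ 130 km
```

A burn of only 50 m/s drops the opposite side of the orbit to roughly 130 km, well inside the upper atmosphere, which is exactly the reentry situation described above.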
Lastly, it's important to discuss when to burn. You want to fire your probe's rocket engine when the craft is traveling at higher speed, because the burn then yields much more useful energy than at low speed. This is known as the Oberth effect. It occurs because the propellant's chemical energy is added on top of a larger kinetic energy, producing more useful mechanical energy per kilogram burned. So in an elliptical orbit you would plan a burn for periapsis, where the craft's kinetic energy is highest and its gravitational potential energy (mgh) is lowest. Understanding how to exploit the Oberth effect was revolutionary: it showed that an enormous tank mass of fuel is not required for effective interplanetary travel.
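To put a number on the Oberth effect, here is a sketch that applies the same 1 km/s prograde burn at periapsis and at apoapsis of an arbitrary eccentric Earth orbit (all values assumed for illustration) and compares the specific orbital energy gained.

```python
import math

MU_EARTH = 3.986e5   # km^3/s^2

# An illustrative eccentric Earth orbit: periapsis radius 6678 km, apoapsis radius 46678 km
r_p, r_a = 6678.0, 46678.0
a = (r_p + r_a) / 2

def speed(r):
    """Vis-viva speed (km/s) at radius r on this orbit."""
    return math.sqrt(MU_EARTH * (2 / r - 1 / a))

def energy_gain(r, dv):
    """Specific orbital energy gained by a prograde burn of dv at radius r."""
    v = speed(r)
    return ((v + dv) ** 2 - v ** 2) / 2   # = v*dv + dv^2/2, larger where v is larger

dv = 1.0  # km/s
print(f"energy gain at periapsis ≈ {energy_gain(r_p, dv):.1f} km^2/s^2")
print(f"energy gain at apoapsis  ≈ {energy_gain(r_a, dv):.1f} km^2/s^2")
```

With these assumed numbers the periapsis burn buys roughly five times more orbital energy than the identical burn at apoapsis, which is why trajectory designers save their big burns for the bottom of the well.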
Figure 5 Apollo capsule reentry [NASA]

Newton’s 3rd Law

So now we know how much delta-v is required to initiate the transfer and how much is required to circularize the orbit (around the Sun, prior to any Mars orbital insertion maneuver). How do we determine what type of burn, and of what duration, will produce the required delta-v?
Newton's third law states that for every action there is an equal and opposite reaction. With few exceptions, nearly every space propulsion system in the worldwide inventory is a reaction-based chemical rocket. That means mass is accelerated to high velocity and expelled out one end of the spacecraft, inducing a reaction force in the opposite direction that produces thrust and acceleration.
The formula that relates delta-v to the rocket's exhaust velocity and the difference between its mass at the beginning and at the end of its burn is known as the ideal rocket equation, also called the Tsiolkovsky rocket equation. The equation is…
\Delta v = v_e \ln\!\left( \frac{m_0}{m_f} \right)
where
m_0 is the initial mass, including fuel,
m_f is the final mass, and
v_e is the effective exhaust velocity
So to produce the required delta-v we need to accelerate enough mass out the nozzle of our engine with sufficient velocity to give our spacecraft the thrust necessary to make the transfer.
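A minimal sketch of what that bookkeeping implies, assuming the ~5.6 km/s heliocentric Hohmann delta-v worked out earlier and a 4.5 km/s chemical exhaust velocity; both are idealized figures, not a real vehicle design.

```python
import math

def propellant_fraction(delta_v, v_exhaust):
    """Fraction of the initial mass that must be propellant, from the Tsiolkovsky equation."""
    # delta_v = v_e * ln(m0 / mf)  ->  mf/m0 = exp(-delta_v / v_e)
    return 1 - math.exp(-delta_v / v_exhaust)

# Assumed numbers for illustration: 5.6 km/s of delta-v, 4.5 km/s exhaust velocity
print(f"{propellant_fraction(5.6, 4.5):.0%} of the departing mass must be propellant")
```

Roughly 70 percent of the departing mass would have to be propellant even in this idealized single-burn case, which is why mass budgets dominate mission design.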
The thrust of a bipropellant pumped liquid rocket is expressed as…
F = \dot{m}\, v_e + (p_e - p_o)\, A_e
where
ṁ is the mass flow rate
v_e is the exit velocity
p_e is the exit pressure
p_o is the outside (ambient) pressure
A_e is the area of the nozzle exit
In the case of liquid oxygen and liquid hydrogen (LO2 + LH2) the exhaust velocities can get to as high as 4,500 m/s.
The efficiency of a rocket propellant is captured by its specific impulse: the impulse delivered per unit weight of propellant consumed, or equivalently the thrust divided by the propellant weight flow rate, usually quoted in seconds.
What it really comes down to is how quickly you can get a lot of mass out the rear of the spacecraft, and at what velocity. That determines the thrust, the thrust determines your delta-v, and that… determines whether or not you get to Mars.
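As a sketch of how those pieces fit together, here is the thrust equation with invented but plausible numbers; the nozzle is assumed ideally expanded so the pressure term vanishes, and no particular engine is being modeled.

```python
def thrust(mdot, v_e, p_e=0.0, p_o=0.0, a_e=0.0):
    """Rocket thrust: momentum term plus pressure term, F = mdot*v_e + (p_e - p_o)*A_e."""
    return mdot * v_e + (p_e - p_o) * a_e

# Illustrative numbers: 200 kg/s of LO2/LH2 exhaust at 4500 m/s, ideally expanded nozzle.
f = thrust(mdot=200.0, v_e=4500.0)
print(f"thrust ≈ {f / 1000:.0f} kN")          # ≈ 900 kN
print(f"Isp ≈ {4500.0 / 9.81:.0f} s")         # specific impulse = v_e / g0 ≈ 459 s
```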
Our little Mars-bound spacecraft uses a chemical rocket for its primary propulsion. Simple in principle, it reacts one or more propellants together to produce a high velocity directed exhaust that exits the spacecraft through a specially shaped nozzle, which enhances thrust through thermodynamic expansion. Some rockets, such as the Space Shuttle's main engines, react oxygen and hydrogen together to produce massive volumes of high velocity steam. Others use a mixture of kerosene and liquid oxygen. Smaller attitude control thrusters may use hypergolic propellants such as hydrazine, or simply compressed gases.
Typically, solid fuel rockets are not used for spacecraft propulsion except as launch boosters. Solid fuel rockets are stable and exceptionally reliable; however, they cannot be throttled in flight, and once lit they burn to completion.
Figure 6 Shuttle launch [NASA]
(Note the photograph of a shuttle launch in figure 6. The exhaust from the three main engines is nearly invisible, since it is high temperature steam. Now compare that to the fiery plume of the twin solid rocket boosters' exhaust.)
Figure 7 A liquid fueled rocket engine [NASA]
All reaction propulsion systems need to carry their fuel with them; it's that mass that is expelled from the rear to produce an opposite forward thrust. [Fig. 7] Unfortunately, that exacts a penalty on mission planners, since the fuel that makes the rocket go first needs to be carried up with the spacecraft, and it's heavy. This is the tank mass, and it must be added to the structural mass of the vehicle plus the payload. For a given amount of thrust, too much tank mass ruins your thrust-to-weight ratio and your acceleration will be poor. That may leave you unable to supply the delta-v needed to complete your mission. So keeping weight down and keeping fuel requirements to an absolute minimum are essential design considerations. So how do you carry enough fuel to get to Mercury? Answer: you get nature to help out.

Gravity Assists

If one were to look at space the way Einstein did, he would see that the fabric of space-time has a topology.
Figure 8 Interplanetary superhighway [NASA]
Borrowing the famous rubber sheet analogy, imagine the Sun and the orbiting planets as heavy masses resting on a rubber sheet stretched tight. The more mass there is, the more the space-time around it is deformed. The Sun would be like a bowling ball and the inner planets like golf balls, bending and stretching the space around them. Objects traversing this interplanetary space need to be "cognizant" of the topology: moons, comets, space probes and even light follow the same geometry. The "rubber sheet" that is the fabric of space-time has plains and valleys, dimples and wells.
Since keeping fuel consumption to a minimum is a primary objective, mission designers have found a way to use a planet’s gravitational tug to transfer angular momentum from the planet to a spacecraft in a maneuver called a gravity assist flyby.
Figure 9 Mariner 10 flybys 1973-1974
This very cool maneuver was developed in 1961 by a UCLA grad student named Michael Minovitch while he was working a summer job at the Jet Propulsion Laboratory.
While working on another problem, Minovitch realized that you could travel almost anywhere in the solar system by using the gravitational fields of other bodies to catapult your vehicle to distant targets without using any additional propulsion. The energy requirements were even lower than those of the Hohmann "minimum-energy" co-tangential trajectories discussed earlier.
It works something like this. If you directed your probe to skirt just past the planet Venus, the spacecraft would begin to accelerate as it entered Venus' gravity well. It would continue to pick up speed until it passed the planet, then decelerate as it climbed back out of the same gravitational well. Taken in isolation, the speed at the end of the encounter would be exactly the same as the speed at the beginning. There's no gain.
However, the planet Venus is not static. It too is orbiting the Sun, with a mean orbital velocity of 35 km/s. As your spacecraft swings past, some of Venus' angular momentum is transferred to the probe. In physics this works like an elastic collision: momentum is exchanged between the two bodies even though there is no direct contact between the probe and the planet. The angular momentum the probe gains (total angular momentum being conserved, the planet loses an imperceptibly small amount) shows up as an increase in the probe's heliocentric velocity, a delta-v it did not have to burn propellant for. This relationship is best visualized with the vector addition diagram below [Fig. 10]. Note the resultant vectors.
Figure 10 Gravity assist vector addition
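A cartoon, one-dimensional version of that momentum bookkeeping is easy to write down. Real flybys are three-dimensional and never reach this head-on geometry, so treat the numbers purely as an upper-bound illustration; the 35 km/s is Venus' orbital speed quoted above, while the probe's 30 km/s approach speed is an assumed round number.

```python
# Cartoon 1-D "slingshot," like a ball bouncing off a moving wall.
# Signed velocities along one line, in km/s.
V_PLANET = -35.0    # Venus moving one way along the line
v_probe = +30.0     # probe moving the other way, toward a close pass "behind" the planet

v_rel = v_probe - V_PLANET               # probe velocity in the planet's frame (+65 km/s)
v_rel_after = -v_rel                     # gravity only turns the velocity around; speed unchanged
v_probe_after = V_PLANET + v_rel_after   # back in the Sun's frame: -100 km/s

gain = abs(v_probe_after) - abs(v_probe)
print(f"heliocentric speed gain ≈ {gain:.0f} km/s (the idealized 2 x planet-speed limit)")
```

In practice a flyby captures only a fraction of that 2 × (planet speed) limit, but the mechanism is the same: the speed relative to the planet is unchanged, while the heliocentric speed is not.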
MESSENGER's mission planners had to find a way to supply the probe's required delta-v while conserving its scarce onboard propellant. And since the spacecraft navigates through three-dimensional space, the astrogators could use the six gravity assists not only to change the probe's velocity but also to reshape its orbit, making it more circular or more elliptical, and even to adjust the orbit's tilt and orientation.
At one point MESSENGER’s velocity increased to 62.6 km/s while sliding down the Sun’s deep gravitational well!
Using Minovitch's gravity assist maneuver and theory of space travel we have been able to send missions to all the planets. [Minovitch 1997] The missions have included the Mariner series, Pioneer, the Voyagers, Galileo, Cassini, MESSENGER and recently the New Horizons mission to Pluto.
Changing the magnitude of the spacecraft’s velocity through gravity assist is called "orbit pumping." Using gravity to change the spacecraft's trajectory relative to the planet’s orbital plane is called “orbit cranking.” Most interplanetary missions use both techniques.
Before the gravity assist technique was developed, the only path mission planners foresaw to deep space destinations was the development of exceedingly large and expensive nuclear rockets.

Mission Planning

So what are some of the principal considerations when designing a mission? It all starts with the payload. What type of payload? How big is it? Where is it going? How fast does it need to get there? What is the range of operating temperatures? Is it a robotic mission or a manned mission? What are the power requirements? Is it a flyby, an orbital insertion mission, or a landing? Does the mission return to Earth or is it a one-way trip? Is it returning with samples or with people? Is there a tight launch window? Most important, what's the budget?
Once these objectives have been identified, the designers can get to work engineering the attitude and orbit control subsystems, the communications and data handling subsystems, environmental control and life support subsystems (for manned missions), structures and mechanisms, propulsion subsystems, and electrical power subsystems. Adjusting these systems can take multiple iterations. [Sellers 2005]

New Propulsion Systems for Interplanetary Missions

So far the only propulsion systems we have discussed have been chemical reaction rockets. Chemical propellants pack a lot of energy for a given mass, but they are heavy, and given the limits of the fuel supply the engines can only be fired in short bursts. So alternative electric propulsion systems are being developed, including ion, Hall effect, and pulsed plasma thrusters.
An ion drive uses electrical energy, from solar panels or an on-board nuclear source, rather than the heat of an exothermic chemical reaction, to accelerate charged particles out the back of the spacecraft at high velocity. Compared to chemical propulsion an ion drive produces very little thrust, but that thrust can be sustained for very long periods of time, and in the friction-free environment of space the acceleration accumulates.
The way an ion engine works is to take a gas and strip it of its electrons, ionizing it. Ions are charged particles, so they can be accelerated by electric and magnetic fields to extremely high velocities; in response, per Newton's 3rd law, the spacecraft accelerates in the opposite direction.
Figure 11 Deep Space One [NASA]
The Deep Space One [Fig. 11] mission was a demonstrator of ion electric propulsion. It used the inert gas xenon as its ion source. A pair of grids in the engine, charged to almost 1300 V of potential, accelerated the ions to very high velocities, approaching 40 km/s. That's almost nine times the exhaust velocity of the H2 + O2 reaction in a bipropellant chemical rocket.
So although the thrust is low it can be sustained not for minutes but for months or even years. The Deep Space One test vehicle used 74 kg of propellant and sustained its thrust for 678 days. Importantly, it was able to achieve a delta-v of 4.3 km/s. That's a record.
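Plugging that 4.3 km/s back into the rocket equation shows why such a small propellant load suffices. The exhaust velocities here, about 40 km/s for the ion engine (the figure quoted above) and 4.5 km/s for a chemical engine, are assumed round numbers used only for comparison.

```python
import math

def propellant_fraction(delta_v, v_exhaust):
    """Tsiolkovsky again: fraction of initial mass that must be propellant."""
    return 1 - math.exp(-delta_v / v_exhaust)

DV = 4.3   # km/s, Deep Space One's reported total delta-v
print(f"ion engine (~40 km/s exhaust):  {propellant_fraction(DV, 40.0):.0%} propellant")
print(f"chemical  (~4.5 km/s exhaust):  {propellant_fraction(DV, 4.5):.0%} propellant")
```

About a tenth of the initial mass as propellant for the ion engine versus well over half for a chemical rocket, under these assumptions, is the whole argument for electric propulsion in one line.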

The Future of Propulsion Technology

Are there other types of propulsion systems that could be developed to provide an efficient and economical method to power interplanetary travel?
It wasn't that long ago that nuclear rocket technology seemed to be the future. Perhaps it still is.
A nuclear rocket is simply another type of reaction engine in which a propellant gas (stored as liquid hydrogen) is heated by a nuclear reactor and expelled through the nozzle, providing efficient forward thrust. Alternatively, a nuclear reactor could supply the electrical energy required for an ion thrust system. Either approach would be effective in deep space, where the dim sunlight makes electrical energy harvested from solar panels far less plentiful.
In the late 60s and early 70s NASA had a program known as NERVA. That stood for Nuclear Engine for Rocket Vehicle Application. It was a demonstration of a nuclear thermal rocket engine that proved that this type of propulsive technology was efficient and practical.
NERVA designs [Fig. 12] have very favorable thrust-to-weight ratios and theoretically outperform the most advanced chemical reaction rockets on the drawing boards. This kind of rocket can burn for hours and is limited only by the volume of propellant on board. Potentially, it is capable of dramatic delta-v and has many of the advantages of both ion and chemical rockets.
Unfortunately, the NERVA program fell victim to the budget ax during the Nixon administration. There are, however, safety and environmental concerns in launching nuclear material into orbit. If everything goes well this technology has a lot to offer, but if something goes wrong during the boost phase we could be looking at an environmental disaster. The benefits certainly have to outweigh the risks when considering a NERVA-type application. I think we will see it again. [Robbins 1991]
Figure 12 NERVA schematic [NASA]

Sailing

No review of interplanetary propulsion systems would be complete without mentioning solar sails. Solar sails are a completely passive system, relying on light pressure from the Sun (or from a fixed-point laser) to provide the thrust that carries the probe out into the solar system. But unlike a sailing ship in a fluid environment, a solar sailboat cannot readily tack or sail into the "wind"; it is, for practical purposes, an outbound system.
In 2010 a craft using solar sail propulsion, called IKAROS, was successfully launched.
The disadvantages of solar sails are that the thrust produced is minimal and falls off with the square of the distance from the Sun. The sails have to be enormous! To produce any useful thrust their areas need to be measured in square kilometers, and mission times are measured in decades, not years.
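To get a feel for just how feeble that thrust is, here is a back-of-the-envelope estimate for a perfectly reflective sail at 1 AU; the solar constant is a standard value, and the one-square-kilometer sail is purely illustrative.

```python
SOLAR_CONSTANT = 1361.0   # W/m^2 at 1 AU
C = 2.998e8               # speed of light, m/s

area_m2 = 1.0e6           # a 1 km^2 sail, purely illustrative
# Radiation pressure on a perfect reflector is 2*I/c.
thrust_n = 2 * SOLAR_CONSTANT / C * area_m2
print(f"thrust on a 1 km^2 sail at 1 AU ≈ {thrust_n:.1f} N")   # about 9 N
```

Nine newtons, roughly the weight of a liter of water, spread over a square kilometer of sail: useful only because it never runs out.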
One application where solar sails may be useful would be in helping to save the planet. Let's say we discovered a near-Earth object (NEO), an asteroid on a trajectory that could send it on a collision course with the Earth. Given enough time, we might be able to install solar sails on the asteroid so that sunlight alone would give it the tiny extra nudge needed to move it into a safer orbit.

Mass Drivers

The last system to be discussed in this essay may be most useful, given enough power, for propelling larger masses such as asteroids. Say once again that we have to divert an Earth-killing asteroid, or perhaps we want to put a metal-rich asteroid into a stable parking orbit so that it can be mined efficiently. If the asteroid had a high enough iron content it could be mined in useful chunks. Those chunks of iron could then be propelled into space by an electromagnetic rail gun installed on the asteroid's surface; each launch would create a reaction force that could be used to steer the asteroid into a safer or more useful orbit, and the free-falling chunks could later be collected.
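A back-of-the-envelope sketch of that momentum exchange; every number here is invented for illustration rather than taken from any study.

```python
# Momentum exchange from launching mined chunks off an asteroid with a rail gun.
M_ASTEROID = 1.0e12   # kg, roughly a kilometer-class rocky body (assumed)
m_chunk = 1000.0      # kg per launched chunk (assumed)
v_launch = 2000.0     # m/s rail-gun muzzle velocity (assumed)
n_chunks = 100_000    # chunks launched over the whole campaign (assumed)

delta_v = n_chunks * m_chunk * v_launch / (M_ASTEROID - n_chunks * m_chunk)
print(f"asteroid delta-v ≈ {delta_v * 100:.1f} cm/s")   # ≈ 20 cm/s
```

Tens of centimeters per second sounds tiny, but applied years in advance it shifts the body's position by tens of thousands of kilometers, easily the difference between a hit and a miss.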

Spacefaring

What used to be the realm of science fiction writers, and then of pioneering seat-of-the-pants explorers, is now a robust and practical technology that will define our species in the centuries to come. We are at the dawn of becoming the first known spacefaring species.
 

References:

FAA INTERPLANETARY TRAVEL 4.1.6 www.faa.gov/.../Section%20III.4.1.6%20Interplanetary%20Travel.pdf
Johns Hopkins University (December 4, 2008). "Deep-Space Maneuver Positions MESSENGER for Third Mercury Encounter". PR. Retrieved 2010-04-20.
JPL (2011). Jet Propulsion Laboratory, The Basics of Space Flight. http://www2.jpl.nasa.gov/basics/bsf4-1.php
McAdams, J. V.; J. L. Horsewood, C. L. Yen (August 10–12, 1998), "Discovery-class Mercury orbiter trajectory design for the 2005 launch opportunity", 1998 Astrodynamics Specialist Conference, Boston, MA: American Institute of Aeronautics and Astronautics/American Astronautical Society, pp. 109–115, AIAA-98-4283,
Minovitch, M. (1997) JPL. THE INVENTION OF GRAVITY PROPELLED INTERPLANETARY TRAVEL from http://www.gravityassist.com/IAF-4.htm
NASA/JPL Interplanetary Superhighway http://www.nasa.gov/mission_pages/genesis/media/jpl-release-071702.html
Robbins, W.H. and Finger, H.B., "An Historical Perspective of the NERVA Nuclear Rocket Engine Technology Program", NASA Contractor Report 187154/AIAA-91-3451, NASA Lewis Research Center, NASA, July 1991
Sellers, J. (2005) UNDERSTANDING SPACE 3rd Ed. ISBN 978-0-07-340775-3
Wikipedia 2011 Hohmann Transfer Orbit http://en.wikipedia.org/wiki/Hohmann_transfer_orbit