Sunday, June 5, 2011

AN EXAMINATION OF INTERPLANETARY TRAVEL

 
On the 3rd of August, 2004, a Delta II rocket launched the MESSENGER spacecraft on its voyage of exploration toward the planet Mercury [Fig. 2]. More than 6 1/2 years later, on the 17th of March, 2011, MESSENGER inserted itself into orbit around the planet. The journey was a long one… nearly 8 billion kilometers, including one Earth flyby, two Venus flybys and three Mercury flybys before the spacecraft finally parked itself in an elliptical orbit around Mercury. [Johns Hopkins 2008]
Why did it take so long? At closest approach Mercury is roughly as far from the Earth as Mars, yet a coasting trip to Mars takes only between 6 and 11 months. There were two reasons… physics and cost.
The planet Mercury orbits the Sun in an elliptical orbit with a semimajor axis of ≈ 0.39 AU, an orbital period of 88 days and, importantly, a mean orbital velocity of 47.9 km/s. By comparison, the Earth's orbital velocity is just 29.8 km/s.
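As a quick sanity check, the mean orbital speed of a planet on a roughly circular orbit follows from v = √(GM/a). A short Python sketch (rounded constants; the circular-orbit approximation glosses over Mercury's considerable eccentricity):

```python
import math

GM_SUN = 1.32712e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.49598e11       # meters

def mean_orbital_speed_kms(a_au):
    """Mean orbital speed (km/s) for a roughly circular orbit of semimajor axis a."""
    return math.sqrt(GM_SUN / (a_au * AU)) / 1000.0

print(f"Mercury: {mean_orbital_speed_kms(0.387):.1f} km/s")  # ~47.9
print(f"Earth:   {mean_orbital_speed_kms(1.000):.1f} km/s")  # ~29.8
```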
Figure 1 MESSENGER orbital trajectory [NASA]
So one of the biggest challenges the designers of the MESSENGER spacecraft mission had to solve was getting the spacecraft to catch up to Mercury without blowing past it. The difference in orbital velocities is 18.1 km/s. Accelerating the probe to Mercury's orbital velocity is not the issue, since it will "fall" into the Sun's gravitational well on its own. The issue is managing that velocity so that the probe slows down enough to make a tangential orbital insertion, and that requires either a lot of propellant or the very clever use of physics. Given MESSENGER's tight energy budget, meeting the target required careful maneuvering and mission planning. [McAdams 1998]
Building and launching a spacecraft that can accelerate rapidly is not an insurmountable problem; doing so under anemic fiscal constraints, however, can make the challenge formidable. Clearly the mission planners for MESSENGER had to accomplish lofty mission objectives at minimum cost or risk program cancellation. So they had to be ingenious and take advantage of what nature provides in the form of gravity assists. The trade-off is mission time. Trade-offs like these are considerations for virtually all missions of space exploration.
Figure 2 MESSENGER launch aboard a Delta II [Boeing]
In this paper I will examine the challenges of interplanetary spacecraft propulsion systems and mission designs of today's vehicles, as well as those of the future.

Navigating in Space

Interplanetary travel requires careful energy management. Let's say that we're planning for a mission to Mars. The first thing the mission planner needs to understand is that the launch platform, namely the Earth, is moving with respect to the faraway destination, which is also moving. Secondly, he needs to understand that massive objects like the Sun or the planets create their own "gravity wells" that the probe will need to either "fall into" or "climb out" of. These maneuvers require energy, and energy is usually in limited supply.
So, let's get back to our mission to Mars. If we were to take our rocket, simply point it at the planet, then push the throttles to the firewall, our mission would likely be lost to deep space. Like a competitive trap shooter, the trajectory designer has to consider that Mars is moving; he will need to "lead" the planet so that Mars' orbit intersects the probe's trajectory. He also will need to budget enough thrust to climb out of the Earth's gravity well, as well as enough to decelerate when falling into Mars' gravitational well to enter orbit.
The mission planner must consider that the fuel carried is also additional mass that needs to be accelerated. In other words, the shorter the mission time for a given distance, the more fuel needs to be carried; more fuel means a greater thrust requirement, and more thrust in turn requires more fuel. This is an exercise in economics.
So our Mars mission planner looks for the most fuel-efficient way to get to the Red Planet and decides to use a technique known as the Hohmann transfer orbit, named after Walter Hohmann. [Wiki 2011]

Hohmann Transfer Orbit

The low energy Hohmann transfer orbit is the most fuel-efficient.
Figure 3 Hohmann transfer orbit diagram [Wikipedia Commons]
Here's how it works. Look at the coplanar diagram above. The green circle is the orbit of the Earth; the red circle is Mars' orbit. The probe initiates a trans-planetary burn that is roughly tangential to the orbit of the Earth. Keep in mind that at this point the Earth and the spacecraft already share an orbital velocity that is essentially "free" angular momentum. The burn puts the spacecraft into an elliptical orbit with the Sun at one focal point (the yellow elliptical orbit). Mission planners time the burn so that the probe arrives at Mars coincident with the probe's aphelion, where its orbital velocity is close to that of Mars. The change in velocity that each burn must supply is known as delta-v. After the initial burn the probe simply coasts to Mars in the frictionless environment of space. [JPL 2011]
On June 2nd of 2003 the European Space Agency launched Mars Express; it attained Mars orbit on Christmas Day of the same year. That was a relatively fast 6 1/2 month trip covering 400 million km (even though the mission was plagued by navigation problems and hit by massive solar flares). One of the factors determining the speed of the mission's coast phase was the 2003 alignment of Mars and Earth, the closest the two planets had been in nearly 60 millennia.
In order to complete the transfer, the spacecraft must initiate a second corrective burn to place it into a roughly circular orbit; otherwise it will simply continue on its elliptical orbit and fall back toward the Sun.
The last step is known as orbital insertion, in which the spacecraft makes final adjustments to place itself in an orbit around Mars.
This is a fuel-efficient trajectory; however it does place limitations on mission planners. The biggest limitation is the launch window. Essentially these are “windows of opportunity” where the Earth and the destination planet are in the correct alignment for the Hohmann transfer to result in a rendezvous. Unless the orbital alignments match, the probe arrives at its aphelion and Mars isn't there. The planet will be either too far ahead or too far behind the spacecraft.
Going to Mars using the Hohmann transfer orbit requires capitalizing on planetary alignment opportunities that are periodic. Mars and the Earth arrive at the same position in their orbits relative to each other once every 2.135 years; this is known as the synodic period. That means the launch window recurs only every 780 days and remains open for only a couple of weeks. This is a critical consideration, especially when it comes to manned spaceflight. If we travel to Mars we can't just turn around and come home if something goes wrong.
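The synodic period falls straight out of Kepler's third law. A small Python sketch (circular, coplanar orbits assumed; semimajor axes in AU):

```python
def period_years(a_au):
    # Kepler's third law with a in AU gives T in years: T^2 = a^3
    return a_au ** 1.5

def synodic_period_days(a1_au, a2_au):
    """Days between successive identical alignments of two planets."""
    t1, t2 = period_years(a1_au), period_years(a2_au)
    return 365.25 / abs(1.0 / t1 - 1.0 / t2)

mars_window = synodic_period_days(1.0, 1.524)
print(f"Earth-Mars synodic period: {mars_window:.0f} days")  # ~780
```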
Mission planners use what are known as porkchop plots [Fig. 4] to assist in planning optimal launch windows. These charts show contours of equal characteristic energy, or C3, against specific combinations of launch and arrival dates for a given interplanetary flight. C3 is a measure of the energy a launch must supply beyond escape: numerically, it is the square of the hyperbolic excess velocity the spacecraft retains after climbing out of the Earth's gravity well. [Sellers 2005]
Figure 4 Porkchop plot for a Mars 2005 mission [NASA]

Delta V

Knowing when to launch is only part of the problem; the other part is knowing how to manage your velocities, which are governed by Kepler's 3rd Law (T²/a³ = k).
When you light up your initial burn you need to be sure to add enough delta-v to carry the spacecraft to Mars in time to intercept and match orbital velocities.
Calculating the departure and arrival delta-v's, as well as the transfer time, is pretty straightforward. [Here μ is the Sun's standard gravitational parameter, and r1 and r2 are the radii of the Earth's and Mars' orbits respectively.]
Delta-v departure (instantaneous burns)
Δv1 = √(μ/r1) × (√(2r2/(r1+r2)) − 1)
The departure delta-v for the Mars transfer orbit works out to approximately 2.9 km/s.
Delta-v arrival (instantaneous burns)
Δv2 = √(μ/r2) × (1 − √(2r1/(r1+r2)))
The delta-v for Mars arrival, where the elliptical Hohmann orbit is circularized at Mars' distance, is approximately 2.7 km/s.
Total delta-v
Δv_total = Δv1 + Δv2
Combine the two for a total delta-v of about 5.6 km/s.
Time of transfer (Hohmann)
t = π × √((r1 + r2)³ / (8μ))
The time of transfer works out to 0.709 years, or about 8 1/2 months.
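These formulas are easy to check numerically. Here is a short Python sketch (constants rounded; planetary orbits treated as circular and coplanar, and the Earth's and Mars' own gravity wells ignored, so these are heliocentric delta-v's only):

```python
import math

GM_SUN = 1.32712e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.49598e11      # meters

def hohmann(r1, r2, mu=GM_SUN):
    """Heliocentric delta-v's (m/s) and transfer time (s) for an r1 -> r2 Hohmann transfer."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)  # departure burn
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))  # arrival burn
    t = math.pi * math.sqrt((r1 + r2) ** 3 / (8 * mu))              # half the transfer ellipse
    return dv1, dv2, t

dv1, dv2, t = hohmann(1.0 * AU, 1.524 * AU)
print(f"departure dv: {dv1 / 1000:.2f} km/s")
print(f"arrival dv:   {dv2 / 1000:.2f} km/s")
print(f"transfer:     {t / 86400 / 365.25:.3f} years")
```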
Again, the biggest disadvantage of low-cost, high fuel efficiency is that these missions can be ponderously slow, taking years instead of months, and launch windows can be few and far between.

Type I and Type II Trajectories

More direct trajectories that carry the spacecraft less than 180° around the Sun are called Type I trajectories; those that carry it more than 180° are Type II trajectories.
A couple of notes are in order about shaping a spacecraft's orbit… A burn anywhere along the vehicle's trajectory changes the shape of the orbit, but the spacecraft will return to the point of the burn on every subsequent go-around. So if a mission planner wants to raise the altitude of a circular orbit above the planet, and not just change the periapse or apoapse, then two short burns will be required to shape the orbit.
If a vehicle is in a circular orbit and the rocket is fired as a retrorocket, slowing the vehicle down, the new orbit will be elliptical with a lower periapse 180° opposite the firing point. When that new periapse dips into the planet's atmosphere, the burn becomes a reentry maneuver. [Fig. 5]
Lastly, it's important to discuss when to burn. You want to fire your probe's rocket engine when the craft is traveling at its highest speed, because the burn generates much more useful energy than it would at low speed. This is known as the Oberth effect. It occurs because the propellant's kinetic energy adds to its chemical potential energy, yielding more useful mechanical energy per kilogram burned. So in an elliptical orbit you would plan a burn for periapsis, where the craft's kinetic energy is highest and its gravitational potential energy (mgh) is lowest. The Oberth effect was a revolutionary insight: it showed that an enormous tank mass of fuel was not required for effective interplanetary travel.
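The arithmetic behind the Oberth effect is simple: the kinetic energy gained from a burn of size Δv at speed v is v·Δv + Δv²/2 per kilogram, which grows with v. A toy comparison in Python (the two speeds are illustrative, not from any particular orbit):

```python
def ke_gain_per_kg(v, dv):
    """Kinetic-energy gain per kg of spacecraft for a burn of size dv at speed v (J/kg)."""
    return v * dv + 0.5 * dv ** 2

dv = 1000.0                           # a 1 km/s burn
slow = ke_gain_per_kg(5_000.0, dv)    # burned near apoapsis, moving slowly
fast = ke_gain_per_kg(30_000.0, dv)   # burned near periapsis, moving fast
print(f"energy gain ratio (periapsis/apoapsis): {fast / slow:.1f}")  # ~5.5
```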
Figure 5 Apollo capsule reentry [NASA]

Newton’s 3rd Law

So now we know how much delta-v is required to initiate the transfer and how much is required to circularize the probe's orbit (around the Sun, prior to any Mars orbital insertion maneuver). How do we determine what type of burn, and of what duration, will produce the required delta-v?
Newton's third law states that for every action there is an equal and opposite reaction. With few exceptions, nearly every space propulsion system in the worldwide inventory is a reaction-based chemical rocket. Mass is accelerated to high velocity and expelled out one end of the spacecraft, inducing a reaction force in the opposite direction that produces thrust and acceleration.
The formula that relates delta-v to the rocket's exhaust velocity and to its mass at the beginning and end of a burn is known as the ideal rocket equation, also called the Tsiolkovsky rocket equation. The equation is…
Δv = ve × ln(m0 / mf)
where
m0 is the initial mass, including fuel,
mf is the final mass, and
ve is the effective exhaust velocity.
So to produce the required delta-v we need to accelerate enough mass out the nozzle of our engine, with sufficient velocity, to give our spacecraft the thrust necessary to make the transfer.
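Rearranged, the rocket equation tells you what fraction of your initial mass must be propellant for a given delta-v. A quick Python sketch (pairing a LO2/LH2-class exhaust velocity with the heliocentric Hohmann delta-v is purely illustrative; real missions budget their delta-v quite differently):

```python
import math

def propellant_fraction(delta_v, ve):
    """Fraction of initial mass that must be propellant: 1 - exp(-dv/ve)."""
    return 1.0 - math.exp(-delta_v / ve)

# Assumed values: ve ~ 4500 m/s (LO2/LH2), delta-v ~ 5.6 km/s.
f = propellant_fraction(5600.0, 4500.0)
print(f"propellant fraction: {f:.0%}")  # ~71%
```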
The thrust of a bipropellant pumped liquid rocket is expressed as…
F = ṁ × ve + (pe − p0) × Ae
where
ṁ is the mass flow rate,
ve is the exit velocity,
pe is the exit pressure,
p0 is the outside (ambient) pressure, and
Ae is the area of the nozzle exit.
In the case of liquid oxygen and liquid hydrogen (LO2 + LH2) the exhaust velocities can get to as high as 4,500 m/s.
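Here is the thrust equation as a few lines of Python. All of the engine numbers below are made-up placeholders, chosen only to show the momentum and pressure terms at work:

```python
def thrust(mdot, ve, pe, p0, ae):
    """F = mdot*ve + (pe - p0)*Ae : momentum thrust plus pressure thrust."""
    return mdot * ve + (pe - p0) * ae

# In vacuum (p0 = 0) the pressure term adds thrust; at sea level it can subtract.
F_vac = thrust(mdot=250.0, ve=4400.0, pe=70_000.0, p0=0.0, ae=4.0)
F_sl = thrust(mdot=250.0, ve=4400.0, pe=70_000.0, p0=101_325.0, ae=4.0)
print(f"vacuum:    {F_vac / 1000:.0f} kN")
print(f"sea level: {F_sl / 1000:.0f} kN")
```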
How much total impulse a rocket delivers per unit weight of propellant consumed is known as its specific impulse. That is a measure of the efficiency of your rocket propellant.
What it really comes down to is this: how quickly can you get a lot of mass out the rear of the spacecraft, and at what velocity? That determines the thrust, the thrust determines your delta-v, and that… determines whether or not you get to Mars.
Our little Mars-bound spacecraft uses a chemical rocket for its primary propulsion. Simple in principle, it reacts one or more propellants together to produce a high velocity directed exhaust that exits the spacecraft through a specially shaped nozzle, which enhances thrust through thermodynamic expansion. Some rockets, such as the Space Shuttle's main engines, react oxygen and hydrogen together to produce massive volumes of high velocity steam. Others use a mixture of kerosene and liquid oxygen. Smaller attitude control thrusters may use hypergolic propellants such as hydrazine, or simply compressed gas.
Typically, solid fuel rockets are not used for spacecraft propulsion except in launch boosters. Solid fuel rockets are stable and exceptionally reliable; however, they cannot be throttled in flight, and once lit they burn to completion.
Figure 6 Shuttle launch [NASA]
(Note the photograph of a shuttle launch in figure 6. The exhaust coming from the three main engines is invisible since it is high temperature steam. Now compare that to the fiery plume of the twin solid rocket boosters' exhaust.)
Figure 7 A liquid fueled rocket engine [NASA]
All reaction propulsion systems need to carry their fuel with them; it's that mass, expelled from the rear, that produces the opposite forward thrust. [Fig. 7] Unfortunately, that exacts a penalty on mission planners, since the fuel that makes the rocket go first needs to be carried up with the spacecraft, and it's heavy. It's called the tank mass, and it has to be added to the structural mass of the vehicle plus the payload. For a given amount of thrust, too much tank mass ruins your thrust-to-weight ratio and your acceleration will be poor. That may leave you with insufficient delta-v to complete your mission. So keeping weight down and keeping fuel requirements to an absolute minimum are essential design considerations. So how do you carry enough fuel to get to Mercury? Answer: you get nature to help out.

Gravity Assists

If one were to look at space the way Einstein did, one would see that the fabric of space-time has a topology.
Figure 8 Interplanetary superhighway [NASA]
Borrowing the famous rubber sheet analogy, just imagine the Sun and the orbiting planets as heavy masses resting on a rubber sheet stretched tight. The more mass there is, the more the space-time around it is deformed. So the Sun would be like a bowling ball and the inner planets like golf balls, bending and stretching the space around them. Objects traversing this interplanetary space need to be "cognizant" of the topology. Moons, comets, space probes and even light follow the same geometry. The "rubber sheet" that is the fabric of space-time has plains and valleys, dimples and wells.
Since keeping fuel consumption to a minimum is a primary objective, mission designers have found a way to use a planet’s gravitational tug to transfer angular momentum from the planet to a spacecraft in a maneuver called a gravity assist flyby.
Figure 9 Mariner 10 flybys 1973-1974
This very cool maneuver was developed in 1961 by a UCLA grad student named Michael Minovitch while he was working a summer job at the Jet Propulsion Laboratory.
While working on another problem, Minovitch realized that you could travel almost anywhere in the solar system by using the gravitational fields of other bodies to catapult your vehicle to distant targets without using any additional propulsion. The energy requirements were even lower than those of the Hohmann "minimum-energy" co-tangential trajectories discussed earlier.
It works something like this. If you directed your probe to just skirt by the planet Venus, the spacecraft would begin to accelerate as it enters Venus' gravity well. It would continue to pick up speed until it passes the planet, then decelerate as it climbs back out of the same gravitational well. Taken in isolation, the speed at the beginning of the encounter would be exactly the same as the speed at the end. There's no gain.
However, the planet Venus is not static. It too is orbiting the Sun, with a mean orbital velocity of 35 km/s. As your spacecraft swings past the planet, some of Venus' angular momentum is transferred to the probe. In physics the encounter behaves like an elastic collision: momentum is exchanged between the two bodies even though there is no direct contact between the probe and the planet. The angular momentum the probe gains (by conservation, the planet loses an imperceptibly tiny amount) shows up as a velocity change, a free delta-v. This relationship is best visualized using the vector addition model shown in the figure below. Note the resultant vectors.
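The momentum exchange is easiest to see in one dimension, where a flyby behaves exactly like an elastic bounce off a massive moving wall. A toy Python model (the 180° turnaround is an idealization; real flybys deflect the probe by less and therefore gain less):

```python
def flyby_exit_speed(v_in, v_planet):
    """1-D idealized gravity assist: velocities in the Sun's frame, km/s."""
    u_in = v_in - v_planet   # approach velocity in the planet's frame
    u_out = -u_in            # the flyby reverses it (idealized 180-degree turn)
    return u_out + v_planet  # transform back to the Sun's frame

# Assumed numbers: probe at 10 km/s meets Venus (35 km/s) head-on,
# giving the classic v_out = 2*v_planet + v_in result.
print(flyby_exit_speed(-10.0, 35.0))  # 80.0 km/s in the idealized limit
```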
Figure 8
MESSENGER's mission planners had to find a way to reduce the probe's delta-v requirements while conserving its scarce supply of onboard propellant. Also, since the spacecraft navigates through three-dimensional space, the astrogators could use the six gravity assists not only to change the probe's velocity but also to reshape its orbit, making it more circular or more elliptical, and even to adjust the orbit's tilt and rotation.
At one point MESSENGER’s velocity increased to 62.6 km/s while sliding down the Sun’s deep gravitational well!
Using Minovitch's gravity assist maneuver and theory of space travel we have been able to send missions to all the planets. [Minovitch 1997] The missions have included the Mariner series, Pioneer, the Voyagers, Galileo, Cassini, MESSENGER and recently the New Horizons mission to Pluto.
Changing the magnitude of the spacecraft’s velocity through gravity assist is called "orbit pumping." Using gravity to change the spacecraft's trajectory relative to the planet’s orbital plane is called “orbit cranking.” Most interplanetary missions use both techniques.
Before the gravity assist technique was developed the only way that mission planners foresaw the ability to travel to deep space destinations was through the development of exceedingly large and expensive nuclear rockets.

Mission Planning

So what are some of the principal considerations when designing a mission? It all starts with the payload. What type of payload? How big is it? Where is it going? How fast does it need to get there? What is the range of operating temperatures? Is it an instrumented mission or a manned mission? What are the power requirements? Is it a flyby, an orbital insertion mission, or a landing? Does the mission plan to return to the Earth or is it a one-way trip? Is it returning with samples or people? Is there a tight launch window? Most important, what's the budget?
Once these objectives have been identified, the designers can get to work engineering the attitude and orbit control subsystems, the communications and data handling subsystems, environmental control and life support subsystems (for manned missions), structures and mechanisms, propulsion subsystems, and electrical power subsystems. Adjusting these systems can take multiple iterations. [Sellers 2005]

New Propulsion Systems for Interplanetary Missions

So far the only propulsion system that we have discussed has been the chemical reaction rocket. Chemical propellants pack a lot of energy for a given mass, but the engines are massive and, given the limits of the fuel supply, can be fired only in short bursts. So alternative electrodynamic propulsion systems are being developed; they include ion electric, Hall effect and pulsed-plasma thrusters.
An ion drive uses electrical energy from solar panels or onboard nuclear sources to accelerate charged particles out the nozzle of a spacecraft at high velocities, rather than the heat supplied by an exothermic chemical reaction. Compared to chemical propulsion, ion drives produce very low thrust, but that thrust can be sustained for very long periods of time. In the frictionless environment of space the acceleration is cumulative.
An ion engine works by taking a gas and stripping it of electrons, ionizing it. Ions are charged particles that can be accelerated by electric and magnetic fields to extremely high velocities. Then, in accordance with Newton's 3rd law, the spacecraft accelerates in the opposite direction. The energy required to ionize the gas and accelerate it can be supplied by solar panel arrays or by nuclear sources.
Figure 9 Deep Space One [NASA]
The Deep Space One [Fig. 9] mission was a demonstrator of ion electric propulsion. It used the inert gas xenon as its ion source. A pair of grids in the engine, charged to almost 1300 V of potential, accelerated the ions to very high velocities, approaching 40 km/s. That's almost 9 times the exhaust velocity of the H2 + O2 reaction of a bipropellant chemical rocket.
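That beam velocity can be estimated from the grid voltage alone: a singly charged xenon ion falling through a potential V reaches v = √(2qV/m). A quick Python check (ideal single-ionization case; real beams run somewhat slower):

```python
import math

Q = 1.602e-19               # elementary charge, C
M_XE = 131.29 * 1.661e-27   # xenon ion mass, kg

def ion_speed(volts):
    """Ideal exhaust speed (m/s) of a singly charged xenon ion accelerated through `volts`."""
    return math.sqrt(2 * Q * volts / M_XE)

v = ion_speed(1280.0)
print(f"{v / 1000:.0f} km/s")  # ~43
```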
So although the thrust is low, it can be sustained not for minutes but for months or even years. The Deep Space One test vehicle used 74 kg of propellant and sustained its thrust for 678 days. Importantly, it was able to achieve a delta-v of 4.3 km/s. That's a record.
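That delta-v is consistent with the rocket equation. A rough Python check (the launch mass and effective exhaust velocity below are assumed round figures, since the engine's performance varied with throttle setting, so expect only a few-km/s ballpark answer):

```python
import math

m0 = 486.0       # assumed launch mass, kg
m_prop = 74.0    # xenon consumed, kg
ve = 30_000.0    # assumed effective exhaust velocity, m/s

# Tsiolkovsky: dv = ve * ln(m0 / mf)
dv = ve * math.log(m0 / (m0 - m_prop))
print(f"delta-v: {dv / 1000:.1f} km/s")  # same ballpark as the flown 4.3 km/s
```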

The Future of Propulsion Technology

Are there other types of propulsion systems that could be developed to provide an efficient and economical method to power interplanetary travel?
It wasn't that long ago that nuclear rocket technology seemed to be the future. Perhaps it still is.
A nuclear thermal rocket is simply another type of reaction engine: propellant (stored as liquid hydrogen) is heated by a nuclear reactor and expelled out the rear through a nozzle, providing efficient forward thrust. Alternatively, a nuclear reactor can supply the electrical energy required by an ion thrust system. Both of these systems would be effective in deeper space, where the dim sunlight makes electrical energy harvested from solar panels far less plentiful.
In the late 60s and early 70s NASA had a program known as NERVA. That stood for Nuclear Engine for Rocket Vehicle Application. It was a demonstration of a nuclear thermal rocket engine that proved that this type of propulsive technology was efficient and practical.
NERVA designs [Fig. 10] have very favorable thrust-to-weight ratios. They theoretically outperform the most advanced chemical reaction rockets on the drawing boards. This kind of rocket can burn for hours and is limited only by the volume of propellant on board. Potentially, it is capable of pretty dramatic delta-v and has many of the advantages of both ion and chemical rockets.
Unfortunately, the NERVA program fell victim to the budget ax in the Nixon administration. There are safety and environmental concerns however when considering the launch of nuclear material into orbit. If everything goes well this technology has a lot to offer but if something goes wrong during the boost phase we could be looking at an environmental disaster. The benefits certainly have to outweigh the disadvantages when considering a NERVA type application. I think we will see it again. [Robbins 1991]
Figure 10 NERVA schematic [NASA]

Sailing

No review of interplanetary propulsion systems would be complete without mentioning solar sails. Solar sails are a completely passive system, relying on light pressure from the Sun (or from a fixed-point laser) to provide thrust to carry the probe out into the solar system. Unlike a sailing ship in a fluid environment, a solar sail cannot tack directly into the "wind," though by angling the sail to brake its orbital motion a craft can, in principle, spiral inward as well as outward.
In 2010, a craft using solar sail propulsion, the Japanese demonstrator IKAROS, was successfully launched.
The disadvantages of solar sails are that the thrust produced is minimal and falls off with the square of the distance from the Sun. The sails have to be enormous! To produce any useful thrust their areas need to be measured in square kilometers. Mission times are measured in decades, not years.
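The numbers behind that pessimism are easy to reproduce: a perfectly reflecting sail feels a force F = 2PA/c, where P is the local solar flux. A Python sketch (sail sizes assumed purely for illustration):

```python
C = 2.998e8     # speed of light, m/s
P_1AU = 1361.0  # solar flux at 1 AU, W/m^2

def sail_thrust(area_m2, r_au):
    """Thrust (N) on an ideal flat reflector of given area at r_au from the Sun."""
    flux = P_1AU / r_au ** 2  # inverse-square falloff
    return 2 * flux * area_m2 / C

# Even a 1 km x 1 km sail at Earth's distance gets only a few newtons.
print(f"{sail_thrust(1e6, 1.0):.1f} N")
print(f"{sail_thrust(1e6, 5.2):.2f} N")  # much weaker out at Jupiter's distance
```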
One application where solar sails may be useful would be in helping to save the planet. Let's say we discovered a near-Earth object (NEO), an asteroid with a trajectory that could possibly send it on a collision course with the Earth. Given enough time, we may be able to install solar sails on the asteroid so that sunlight alone would give it the tiny extra nudge needed to move it into a safer orbit.

Mass Drivers

The last system to be discussed in this essay may be most useful, given enough power, for the propulsion of larger masses such as asteroids. Let's say again that we have to divert an Earth-killing asteroid, or perhaps we want to put a metal-rich asteroid into a stable parking orbit so that it can be mined for its metals efficiently. If the asteroid had a high enough iron content it could be mined in useful chunks. An electromagnetic rail gun installed on the asteroid's surface could then propel those chunks of iron into space, creating a reaction force that could be used to steer the asteroid into a safer or more useful orbit. The free-falling chunks could later be harvested.
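Conservation of momentum gives a feel for the scale involved. A toy Python estimate (all masses and speeds are invented for illustration):

```python
def asteroid_delta_v(m_asteroid, m_chunk, v_chunk, n_chunks):
    """Delta-v (m/s) from momentum conservation, ignoring the asteroid's gradual mass loss."""
    return n_chunks * m_chunk * v_chunk / m_asteroid

# Assumed: a 1e9 kg asteroid, 100 kg chunks launched at 1 km/s.
dv = asteroid_delta_v(1e9, 100.0, 1000.0, n_chunks=10_000)
print(f"{dv:.1f} m/s")  # 1.0 m/s after ten thousand launches
```

A meter per second is tiny, but applied years in advance it can translate into thousands of kilometers of miss distance.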

Spacefaring

What used to be the realm of science fiction writers, and then of pioneering seat-of-the-pants explorers, is now a robust and practical technology that will define our species in the centuries to come. We stand at the dawn of becoming the first known spacefaring species.
 

References:

FAA. Interplanetary Travel, Section III.4.1.6. www.faa.gov/.../Section%20III.4.1.6%20Interplanetary%20Travel.pdf
Johns Hopkins University (December 4, 2008). "Deep-Space Maneuver Positions MESSENGER for Third Mercury Encounter". PR. Retrieved 2010-04-20.
JPL. Jet Propulsion Laboratory. The Basics of Space Flight. http://www2.jpl.nasa.gov/basics/bsf4-1.php
McAdams, J. V.; J. L. Horsewood, C. L. Yen (August 10–12, 1998), "Discovery-class Mercury orbiter trajectory design for the 2005 launch opportunity", 1998 Astrodynamics Specialist Conference, Boston, MA: American Institute of Aeronautics and Astronautics/American Astronautical Society, pp. 109–115, AIAA-98-4283,
Minovitch, M. (1997) JPL. THE INVENTION OF GRAVITY PROPELLED INTERPLANETARY TRAVEL from http://www.gravityassist.com/IAF-4.htm
NASA/JPL Interplanetary Superhighway http://www.nasa.gov/mission_pages/genesis/media/jpl-release-071702.html
Robbins, W.H. and Finger, H.B., "An Historical Perspective of the NERVA Nuclear Rocket Engine Technology Program", NASA Contractor Report 187154/AIAA-91-3451, NASA Lewis Research Center, NASA, July 1991
Sellers, J. (2005) UNDERSTANDING SPACE 3rd Ed. ISBN 978-0-07-340775-3
Wikipedia 2011 Hohmann Transfer Orbit http://en.wikipedia.org/wiki/Hohmann_transfer_orbit

Sunday, April 3, 2011

The Mystery of Gamma Ray Bursts

I’m old enough to remember the regular drills that we used to have in elementary school. At the sound of a siren we were all instructed to “duck and cover.” But, this wasn’t an earthquake drill. This was a civil defense precaution. Ducking and covering wasn’t enough. We had to get into the “shadow” of a potential nuclear fireball rising over New York City. Of course, even at that young age we had a fatalistic point of view. If that Soviet first strike ever came “ducking and covering” meant kissing your rear-end goodbye!

The 1960s were the height of the Cold War. It was an era of mutual and, some say, healthy mistrust between superpowers. The nuclear test ban treaty had been signed as a way to deescalate tensions. But as with any treaty its success depended on each party doing its part to honor the terms of the agreement. So each side closely monitored the other. The United States was alert to the possibility that the Soviet Union would try to skirt the terms of the treaty by testing nuclear weapons in space. Ground based testing was easy to monitor. Just look for the distinctive seismic signatures of a nuclear detonation. Space-based blasts were more of a challenge to spot. In anticipation of a treaty violation the U.S. Air Force launched the Vela series of military satellites in hopes of detecting the distinctive gamma radiation that is produced by the triggering of a nuke.

On July 2nd, 1967, an event occurred that raised eyebrows among the Vela monitoring team. It seemed that two of the Vela satellites had detected an unusual flash of gamma radiation. It was unusual because the flash did not resemble any known type of nuclear weapons signature. It turned out that this was the first detection of what later became known as gamma ray bursts. Their source was unknown, but one thing became obvious: they had an extraterrestrial, extrasolar origin.[i]
 
So what kind of event could produce such fantastic amounts of energetic radiation? The search began for culprits. Most were attributed to intergalactic sources. I remember one particularly interesting, but highly speculative, possibility in the popular science press. Could we have been witnessing an interstellar war taking place between space-faring civilizations? Keep the context in mind. These were the years of the original Star Wars trilogy. These were also the years of the United States' Strategic Defense Initiative, or SDI, which later became popularly known as "Star Wars." In retrospect, it didn't seem like such a kooky idea. What really did seem preposterous was still to be discovered.

In 1991 NASA launched its Compton Gamma Ray Observatory on a nine-year mission to uncover, among other things, the sources of these enigmatic GRB's.[ii] The data that the observatory streamed back only deepened the GRB mystery.
Figure 1. Isotropic distribution of GRB's
Astronomers were expecting to see a map of sources that roughly conformed to the galactic plane, since most had assumed that the GRB's had a "local" origin. Compton's map showed an isotropic distribution of GRB sources instead.[iii] What this implied was that GRB's were extragalactic in origin, and this was truly startling. The reason that astronomers did a collective double-take was that an extragalactic origin with an isotropic distribution, coupled with the observed redshift values, meant that distances to the GRB's had to be measured in mega- to gigaparsecs![iv] The first to have its distance reliably measured was GRB 970508, at just under 2 gigaparsecs.[v] That meant that GRB's had to be bright… very bright. In order to shine as brightly as they do in highly energetic gamma radiation, there had to be something radically different at their hearts powering these flashes. Calculations of the required energy flux for some sources revealed something quite extraordinary. In fact, it was so extraordinary that it was thought to be "un-physical." At those distances even the most massive star would not be able to emit the required energy (bolometric flux) for some observed GRB's, even at 100% mass-to-energy conversion. So either we had to get a better understanding of what was going on or admit that this was a new kind of physics. Clearly further study was required.
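The "un-physical" energies follow from simple geometry: if a burst at distance d radiates isotropically, its total energy is E = 4πd²S, where S is the measured fluence. A rough Python estimate (the fluence here is an assumed, typical value, and cosmological corrections are ignored):

```python
import math

PARSEC = 3.086e16  # meters

def isotropic_energy(d_gpc, fluence_j_m2):
    """Isotropic-equivalent energy (J) of a burst at d_gpc with the given fluence."""
    d = d_gpc * 1e9 * PARSEC
    return 4 * math.pi * d ** 2 * fluence_j_m2

E = isotropic_energy(2.0, 1e-8)  # ~2 Gpc; 1e-5 erg/cm^2 expressed in J/m^2
print(f"{E:.1e} J")  # ~5e44 J, more than the Sun will radiate in its entire lifetime
```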

GAMMA RAY OBSERVATION

Figure 2. Focusing X-rays. Ineffective for gamma rays.

There are two enormous challenges to observing gamma ray bursts. The first is that gamma rays are difficult (if not impossible) to "focus." Gamma rays are so energetic that they pass through lenses and mirrors or are absorbed by them. So instruments of a design similar to those used by optical astronomers won't work.
Even instruments that are designed to image energetic X-ray sources by focusing the (x-ray) light using grazing incidence mirrors are ineffective with gamma rays. [vi] [Fig. 2.]

The second problem is that GRBs are relatively short-duration events, lasting from a fraction of a second to only a few minutes.[vii] [Fig. 3]

clip_image006
Figure 3. Light curves of GRBs showing event durations

So in order to get an observation, you either needed to be extraordinarily lucky, with your telescope pointed at the right part of the sky at the right time, or you needed a very responsive collection of instruments. One of these instruments is NASA’s Swift.[viii]

Swift is a satellite (and collection of instruments) designed to rapidly detect, locate and classify GRBs and then alert ground-based teams so that the afterglow of the GRB can be studied. The mission has observed and classified well over 500 GRBs. Something else was needed though… an observatory that could quickly detect GRBs and image them at resolutions that were previously unattainable.

FERMI

Last February 9th, I was fortunate enough to attend a talk on GRBs by Dr. Lynn Cominsky here in Sonoma County (CA). Dr. Cominsky is one of the scientific co-investigators for the Fermi observatory as well as Swift (and the education and public outreach lead for Fermi, Swift, XMM-Newton and NuSTAR).[ix] Her principal area of interest is high-energy astronomy.

Fermi (TOFKAG)[x] brought exciting new capabilities to the imaging of gamma rays. Dr. Cominsky explained that Fermi uses two separate instruments to sense and image a GRB. The first is the GLAST Burst Monitor (GBM)… it is sensitive to (softer) gamma rays and X-rays with energies in the range of 8 keV to 30 MeV. GRB photons have energies in the neighborhood of 100 MeV,[xi] so the GBM triggers an “alert” that a high-energy event is underway, but it can’t determine that the event is specifically a GRB. Further, the GBM doesn’t discriminate by direction, as it is an omnidirectional detector… except that the Earth acts as an effective shield, so if the GBM is triggered, the event must lie in the direction away from the Earth. When the GBM detects a bright gamma-ray source, it “alerts” the 3-ton LAT instrument in addition to numerous ground-based manned and robotic telescopes.

LARGE AREA TELESCOPE

When Dr. Cominsky began describing the LAT she did so with equal measures of pride and giddy enthusiasm. The LAT solves the problem of how to image the highest-energy photons (roughly 20 MeV to 300 GeV) by converting them directly into matter and then watching how that matter scintillates in the detector arrays. It’s actually pretty cool!

A gamma ray barrels into the LAT and passes through an anti-coincidence detector that rejects charged-particle background noise. It then shoots through an array of tungsten foils. When our gamma ray strikes (interacts with) a tungsten atom, its energy is converted directly into mass as a particle-antiparticle pair (an electron and a positron, to conserve charge). [Fig 4.]
clip_image008
Figure 4. LAT instrument.

The newly minted electron-positron pair cascades down the detector, passing silicon strip detectors that track its position and progress so that the direction of the original photon can be reconstructed. At the base of the detector, the electron-positron pair slams into a cesium iodide calorimeter, which measures the energy of the pair and hence of the original gamma-ray photon. The LAT is an orbiting telescope that’s constructed like a detector in a sophisticated particle accelerator.[xii] And it’s surprisingly capable as an imaging device. [Fig. 5]
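The conversion step relies on a textbook threshold: a photon must carry at least twice the electron’s rest energy (about 1.022 MeV) to materialize as an electron-positron pair. A minimal sketch of that bookkeeping (illustrative only, not Fermi flight software):

```python
ELECTRON_REST_MEV = 0.511  # rest energy of an electron (and of a positron)

def pair_production_kinetic_mev(photon_mev):
    """Kinetic energy shared by the e-/e+ pair after a gamma ray converts
    in the tungsten foil. The photon must carry at least twice the
    electron rest energy (~1.022 MeV) to materialize as a pair."""
    threshold = 2.0 * ELECTRON_REST_MEV
    if photon_mev < threshold:
        raise ValueError("photon below pair-production threshold")
    return photon_mev - threshold

# A 100 MeV gamma ray leaves ~99 MeV of kinetic energy for the tracker
# and calorimeter to measure.
```

Since GRB photons arrive at energies far above that 1.022 MeV floor, nearly all of the photon’s energy survives as kinetic energy in the pair, which is what the calorimeter ultimately measures.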

clip_image010
Figure 5. Fermi’s whole sky view showing gamma ray sources in the galactic plane and beyond.

SO WHAT ARE THEY?

Gamma rays are produced thermally (at temperatures of up to billions of kelvins), by the relativistic acceleration of charged particles colliding with the interstellar medium, and by the annihilation of particle pairs. They are associated with high-energy bodies and events such as pulsars, rapidly spinning black hole accretion discs, compact body mergers, active galactic nuclei, blazars, collapsars (hypernovas), magnetars and, yes, even terrestrial thunderstorms.[xiii]
 
We have learned that gamma-ray bursts come in two main flavors… long duration and short duration. [Fig. 3] But it’s important to remember that, much like fingerprints, no two are alike.
So what are the clues? First, we can detect an optical afterglow from some GRBs. If an afterglow is visible, then we can analyze its spectrum. The long-duration GRB 030329 emitted an afterglow that, on analysis, correlated closely with the spectral signature of a Type Ic core-collapse supernova. Later, an expanding shell was detected, adding further evidence to its identity.[xiv]

A NEW PHYSICS?

Some of these long-duration bursts have been detected at EXTREME distances… as in 13 billion light-years. How is that possible? For a star to radiate that much energy from that kind of distance, even the most massive star known would have to convert every kilogram of its mass directly into energy (E = mc²) in just a couple of minutes! Wait a minute… even that’s not close to enough? Did we need to find a new physics to explain how a body could radiate that amount of energy, in such a brief amount of time, to be so brightly visible at high-energy wavelengths at such stupendous distances?
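The arithmetic behind that “not close to enough” can be sketched with round numbers. The 100-solar-mass figure below is illustrative, not a measured value for any particular star:

```python
C_CM_S = 2.998e10     # speed of light (cm/s)
M_SUN_G = 1.989e33    # solar mass (g)

def rest_energy_erg(mass_solar):
    """E = m c^2 for a star of the given mass: the absolute upper bound
    on its energy output at 100% mass-to-energy conversion."""
    return mass_solar * M_SUN_G * C_CM_S**2

# Even a (hypothetical) 100-solar-mass star converted entirely to energy
# yields about 1.8e56 erg, a hard ceiling that no real process approaches.
```

When the isotropic inverse-square calculation for the brightest bursts demanded energies pushing up against (or past) that ceiling, something clearly had to give.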

No. The old physics works fine.

Imagine, if you will, that you are standing 5 m from a sportsman with a rifle. You are wearing your hearing and eye protection, so when he fires a round downrange the most that happens is that you flinch at the sound of the gun’s report. Now let’s try something different (but don’t do this at home!!).

Stand the same 5 m from the rifle, but this time look right down the bore. Stop!!! You get it. Same rifle, same distance, same ammunition, dramatically different consequences! These core-collapse supernovas, also known as collapsars or hypernovas, produce radiation in twin narrow beams… jets, if you will. Their energetics differ from those of ordinary core-collapse supernovae: the doomed star has no hydrogen envelope, no helium envelope, and it is rotating, so there is a preferred axis of ejection.[xv] If the observer on Earth happens to be looking directly down the axis of one of the beams, it’s like looking down the gun barrel.
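Beaming also rescues the energy budget quantitatively. If the emission is confined to two opposite jets of half-angle θ rather than radiated isotropically, the true energy is the isotropic-equivalent figure multiplied by the fraction of the sky the jets cover, 1 − cos θ. A sketch with an illustrative jet angle (not a measured value):

```python
import math

def beaming_factor(half_angle_deg):
    """Fraction of the full sky covered by two opposite jets of the given
    half-angle. The true emitted energy is the isotropic-equivalent
    energy multiplied by this factor."""
    theta = math.radians(half_angle_deg)
    # two caps, each of solid angle 2*pi*(1 - cos theta), over 4*pi total
    return 1.0 - math.cos(theta)

# A 5-degree jet covers only ~0.4% of the sky, shrinking the required
# energy budget by a factor of a few hundred.
```

That few-hundred-fold reduction is what pulls the required energies back under the rest-mass ceiling of a massive star… no new physics needed.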

Short-duration GRBs are not explained by hypernovas. What are they? That is an area of ongoing research, because we still don’t know for sure. There are some interesting models, however. According to Dr. Cominsky, the leading candidates are neutron star mergers, neutron star-black hole mergers, and the dramatic “rearrangement” or unraveling of the tightly coiled, intense magnetic field of a magnetar. All would produce the particle accelerations necessary to radiate strongly (and briefly) in gamma rays. Only one thing is for certain… we still have much to learn about these high-energy phenomena.



References and endnotes:
---
[i] "Observations of Gamma-Ray Bursts of Cosmic Origin" Klebesadel R.W., Strong I.B., and Olson R.A. 1973, Ap.J.(Letters) 182, L85
[ii] http://heasarc.gsfc.nasa.gov/docs/cgro/
[iii] http://heasarc.gsfc.nasa.gov/docs/cgro/images/epo/gallery/grbs/2704_grbs_fluence.jpg
[iv] http://ucp.uchicago.edu/cgi-bin/resolve?id=doi:10.1086/133674
[v] Reichart, Daniel E. (1998). "The Redshift of GRB 970508". Astrophysical Journal Letters 495: L99. doi:10.1086/311222.
[vi] http://imagine.gsfc.nasa.gov/docs/science/how_l1/xray_telescopes.html
[vii] http://upload.wikimedia.org/wikipedia/commons/e/ef/GRB_BATSE_12lightcurves.png
[viii] http://www.nasa.gov/mission_pages/swift/main/index.html
[ix] http://www-glast.sonoma.edu/~lynnc/
[x] The Observatory Formerly Known As GLAST :-)
[xi] Lynn Cominsky 2/27/11
[xii] http://www-glast.stanford.edu/
[xiii] Lynn Cominsky 2/27/11
[xiv] Freedman, R. Universe 9th Ed. 2010 P. 589
[xv] Filippenko, A. 2007 Astro C10 lecture 30

Friday, March 18, 2011

The Little Rovers That Could

 
It’s Friday, March 18th, 2011, and I am saddened by a headline on the website Universe Today… “Hopes Dim for Contacting Spirit Rover.”[i] It seems that these two little robots have not only captured the imagination of a nation but its heart as well. So, what are Spirit and Opportunity? What was their mission, and what have they found in their 7½-year science ‘sojourn’ traversing the Martian landscape?

THE MISSION

The primary mission for both rovers was to search for water, an essential ingredient for the existence of life as we know it. That doesn’t mean that they were designed as robotic dowsers but rather they were crafted to look for scientific evidence that at some point in Mars’ history liquid water was present.
One look at the images of the Martian surface from orbit shows that water may once have played an important part in Mars’ geology, but that’s not the kind of hard evidence that would prove water is present now, or was once there in liquid form in sufficient abundance to make the genesis of life a real possibility. Could this cold and arid planet have had a warmer and wetter past?
image
The surface geology looks remarkably like that of so many deserts here on Earth. The wind-driven dunes remind us of the central Sahara, but it’s the Martian canyons, which look so remarkably like the water-cut gullies and ravines here on Earth, that have invited further investigation for possible hydrology.

PLANNING

In the last decades of the 20th century we launched numerous international robotic missions to orbit the red planet on voyages of remote sensing and mapping. We even landed a few to conduct science in situ. Notably, the Viking landers touched down in 1976. In addition to an array of imagers, the Vikings carried sophisticated chemical laboratories whose principal function was to look for evidence of life. Even though we didn’t see any macroscopic life, the results were inconclusive enough on the question of microbial life to further stimulate our scientific curiosity. And even though the Vikings performed flawlessly as meteorological stations, the fundamental limitation of their geologic mission was that their soil analysis was confined to what they could reach.
Later missions were designed to take some tentative steps (um, rolls) beyond the landing site, starting with Mars Pathfinder / Sojourner in 1996.[ii] This mission consisted of a lander and a remotely operated vehicle that together would survey the geology and meteorology of the Ares Vallis region in the northern hemisphere of Mars, a region believed from its appearance to be an ancient floodplain.
Sojourner (the rover) carried cameras and an X-ray spectrometer to analyze the soil and rocks it encountered. The mission collected an impressive amount of data before it went dark in late September of 1997. The data was interpreted as showing that indeed Mars had a warmer climate that included substantial quantities of liquid water and a much denser atmosphere. These discoveries set the stage for the coming of the twins less than six years later.

SPIRIT AND OPPORTUNITY

Two vehicles were launched during the summer of 2003, to provide mission redundancy. History has taught us that successfully landing a vehicle on Mars can be an iffy proposition. Should one of the vehicles miss its destination or, worse still, crash because someone botched a metric conversion, then at least the whole mission would not be a total failure.
Landing sites were chosen on opposite sides of the planet because of the differences in their terrain and geology. Both were likely to hold evidence for the past existence of water. The two rovers, named Spirit and Opportunity, were nearly identical twins. Interestingly, one of the most significant differences between the two was in their software: Spirit was programmed to be a bit more of a risk-taker when operating autonomously, while Opportunity was the more cautious sibling. Otherwise, these 180 kg rovers carried essentially identical instrument packages: a panoramic camera, an alpha particle X-ray spectrometer, a Mössbauer spectrometer, a thermal emission spectrometer, a microscope, magnets to detect the presence of iron and iron-containing minerals, and my favorite, the RAT… the rock abrasion tool, used to scrape away rock surfaces so that their physical and chemical composition can be analyzed.
These instruments working independently or together were designed to analyze the soil, rocks, and airborne dust to determine their composition, and importantly, how they were formed in the first place.
image
Figure 1: Spirit, the adventurous one

Landings

After a relatively brief journey of just over six months, both Spirit and Opportunity approached Mars and prepared to land using the method pioneered by the Pathfinder mission. The vehicles entered the Martian atmosphere using the aerodynamic braking provided by their ablative aeroshells, followed by the deployment of parachutes. Just before touchdown, a protective cocoon of airbags inflated around each lander, retro-rockets fired to slow the descent, and the tether holding the lander was cut. The last few meters of the landers’ flight were spent in freefall. Moments later, the landers hit the Martian surface, bouncing to a stop protected by their balloon armor. There were breathless moments in mission control waiting for the first signal indicating that the landers had arrived safely and the airbags had deflated. That signal, for both, came in January of 2004.[iii] It wasn’t long after that we glimpsed the alien Martian landscape, complete with its red skies and blue sunsets.[iv]

image
Figure 2: A blue Martian sunset

THE SEARCH FOR WATER

NASA clearly articulated four principal mission goals: determining whether life has ever existed, or exists, on Mars; characterizing the Martian geology; characterizing the climate; and, as an even longer-term goal, gathering the data necessary to prepare for human exploration.
I remember, shortly after the landings, some remarkable images being beamed back to Earth… images of little nodules scattered about, nicknamed “blueberries.” They weren’t blue. They were gray, but they were round beads of just about the right size.
image
Figure 3: Opportunity's blueberry patch.
It turns out that these “blueberries” have a terrestrial analog: they closely resemble hematite concretions, which are usually precipitated from aqueous solutions.[v] [vi] In particular, they look much like the round hematite rocks of southern Utah, whose diameters range from a quarter inch to 8 inches or more and which usually form in the presence of water. A subsequent chemical analysis of the “blueberries” by Opportunity’s spectrometer indicated the presence of hematite,[vii] strong evidence that these minerals precipitated out of water at the time of their formation.
Meanwhile, on the other side of the planet, Spirit explored Gusev crater, its landing site. It came across some low hills that exhibited fine sedimentary layering. But how could you be sure that this was truly sedimentary and not layering due to some other geological process? One indicator was that some of these deposits included rocks and particles of varying sizes cemented together. The conclusion was that these were deposited from a fluid, as opposed to ejecta from meteoric impacts or ash falls from a volcano.[viii]
image
Figure 4: Spirit's sedimentary layering
Not wanting to be outdone, Opportunity found layering of its own. In Eagle crater, Opportunity came across multiple examples of crossbed layering, a type of sedimentary deposition whose geometry indicated that it was deposited in fast-flowing water. If you look at the image in figure 5 you can see blueberries physically embedded in the layers. So very cool!
image
Figure 5: Crossbed layering near Eagle Crater with “blueberries” in situ.
The crossbedding, coupled with the hematitic blueberries, is nearly irrefutable evidence for the past presence of not just water, but substantial amounts of water.[ix] And that is exciting because it also implies an environment that may have been hospitable to early life![x]

ACCESSIBLE LIQUID WATER TODAY

image
Figure 6: Alluvial flow of water
Examine figure 6 (above): these are before and after images, taken from orbit by the Mars Global Surveyor spacecraft, of what certainly appears to be an alluvial flow.[xi] If this was flowing water, it would be the first direct evidence of liquid water on the planet’s bitterly cold surface. At this particular site temperatures do rise above 0 °C; still, at surface pressures as low as they are, liquid water cannot exist for long. So these two images cannot be considered conclusive evidence that liquid water is flowing on Mars, but they do indicate that something that certainly appears to be a liquid is flowing, although water is not the only possibility. I will speculate that there could be a subsurface geologic process producing enough heat to maintain water in a liquid state, which then rapidly evaporates as it flows out through the crater wall. Certainly there is reason to continue the search.
image
Figure 7: Frozen water uncovered by Phoenix under loose "topsoil"
Later, the Mars Phoenix lander also confirmed the presence of water, although this water was in a frozen state. See figure 7 above.[xii]
Regardless, this is an extremely exciting development. Recall that one of NASA’s principal objectives for the rover missions was to determine the viability of human exploration of the red planet. What these discoveries tell us is that there is water on Mars, and not just deep subsurface water… accessible water. Where there is water there is the capacity to sustain a manned mission, if not human colonization, at some point in the future. Water isn’t just necessary to drink or to grow food; it can also be electrolyzed into oxygen for breathing and hydrogen for fuel.
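The electrolysis bookkeeping is simple stoichiometry (2 H2O -> 2 H2 + O2). A sketch using standard molar masses, as a rough illustration of why accessible water is such a versatile resource:

```python
M_H2O, M_H2, M_O2 = 18.015, 2.016, 31.998  # molar masses (g/mol)

def electrolysis_yield_kg(water_kg):
    """Mass of H2 and O2 (kg) from fully electrolysing a mass of water:
    2 H2O -> 2 H2 + O2 (ideal stoichiometry, losses ignored)."""
    moles_water = water_kg * 1000.0 / M_H2O
    h2 = moles_water * M_H2 / 1000.0           # one H2 per H2O
    o2 = (moles_water / 2.0) * M_O2 / 1000.0   # one O2 per two H2O
    return h2, o2

# 1 kg of water yields about 0.112 kg of hydrogen and 0.888 kg of oxygen.
```

In other words, nearly ninety percent of every kilogram of Martian ice is, chemically speaking, breathable oxygen waiting to be liberated.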



THE MARTIAN CLIMATE

Spirit and Opportunity have characterized the Martian climate as cold and very dusty, and the atmosphere as thin. Opportunity recorded frost collecting on its instruments. It also photographed clouds in the Martian sky. See Figure 8 below.
image
Figure 8: Clouds in the Martian sky.
These clouds are composed of particles of water ice.[xiii] On the other side of the planet, Spirit found that dust devils are a daily occurrence in the Martian spring. These dust devils are also thought to be responsible for producing a hydrogen peroxide “snow” through the static discharges they generate. Since hydrogen peroxide is a powerful oxidant, this could have disturbing consequences for any existing life.[xiv]
Atmospheric pressure at the surface is roughly the equivalent of Earth’s at an altitude of 70,000+ feet, or about 21 km… twice the cruising altitude of commercial airliners. Surface temperatures range from −87 to −5 °C.
About 95% of the thin Martian atmosphere is carbon dioxide. The rest is mostly nitrogen and argon. Oxygen is a trace gas at just 0.13%.

PRE-ROVER VIEW AND NOW

NASA set out four scientific goals for the rover missions. Three of those goals have been met and the missions have been unqualified successes. Only the first is still unanswered: is there life on Mars? We just don’t know. What we do know is that at one time, Mars was a warmer and wetter world.


References and endnotes:
[i] http://www.universetoday.com/84204/hopes-dim-for-contacting-spirit-rover/#more-84204
[ii] http://www.nasa.gov/mission_pages/mars-pathfinder/
[iii]http://www.msnbc.msn.com/id/4054530/ns/technology_and_science-space/
[iv] Rayleigh scattering of CO2
[v] http://www.lpi.usra.edu/meetings/metsoc2005/pdf/5051.pdf
[vi] http://www.unews.utah.edu/releases/04/jun/marsmarbles.html
[vii] http://www.jpl.nasa.gov/releases/2004/88.cfm
[viii] http://www.geotimes.org/dec04/WebExtra121504.html
[ix] Squyres, S (and 18 others), 2004, In situ evidence for an ancient aqueous environment at Meridiani planum, Mars, Science, v. 306, p. 1709-1714 (December 2004)
[x] http://records.viu.ca/~earles/mars-sediments-dec04.htm
[xi] http://physicsworld.com/cws/article/news/26573
[xii] http://www.jpl.nasa.gov/news/news.cfm?release=2008-113a
[xiii] http://marsrover.nasa.gov/science/goal2-results.html
[xiv] http://news.nationalgeographic.com/news/2006/08/060807-mars-snow_2.html

Tuesday, December 21, 2010

Sustainable living?

Once upon a time there were people who consumed only locally grown foods and wore only locally made, natural fiber clothing. Even the construction materials that their homes were made of came from local sources. In this land there were no ten-lane freeways, no railways, nor were there any internal combustion engines to produce the noxious exhausts that polluted the atmosphere with greenhouse gases. There were no foreign goods and no foreign ideas. But paradise had its price.


In this land starvation was common, as was death by plague and countless other diseases. Infant mortality was rampant. Just giving birth was a very dangerous proposition for women. People lived in tiny one-room dirt-floor huts without indoor plumbing. During the winter months even these humble accommodations were often shared with the family’s livestock. A lack of proper nutrition and medicines, little sanitation and filthy drinking water resulted in an average life expectancy of around thirty years, for the fortunate.


Where is this land? Anywhere and everywhere in the pre-industrial world for most of human history. Subsistence farming might have been sustainable, but human dignity and human life certainly were not. [adapted from Boudreaux, Braudel, ed.]

Wednesday, October 28, 2009

A Climate of Skepticism

Last Friday our school’s head invited a speaker to address the student body on climate change and environmental activism. This well-intentioned young man presented a video that consisted of excerpts from Al Gore's film An Inconvenient Truth along with cute animations that highlighted the “excesses” of Western society (such as the aggregation of private property), followed by Utopian promises of a “green tomorrow.”

This young man was adamant… “The earth has a fever,” capitalism, free-markets, social injustice and meat are at the root of the problem but, working together, we can do something about it before it’s too late! I thought about his message and although I could challenge his positions on any number of different levels I am having the hardest time accepting his underlying premise… that human-caused global warming is leading to disastrous, but reversible, planetary consequences.

Why? Well, I am skeptical of global warming alarmism simply because, even though global CO2 levels have continued to rise slowly, the predicted global warming rates have not been observed. In fact, since 1998 we have experienced a global cooling trend. Take it back a little further, and other data indicate no net warming for the past sixty-eight years.

The Atmosphere. If you take a look at the temperature record as measured by satellite and radiosonde (weather balloons) as opposed to just surface temperatures you will see that we have gone through nearly two decades without any warming of the upper troposphere. There are no climate models that predict an absence of warming for nearly twenty years.

The Oceans. Then we look at the oceans since water is a very efficient heat sink. The Argo Project is a huge global array of more than 3,000 free-drifting profiling buoys that measure the temperature and salinity of the upper 2,000m of the world's oceans. First deployed in 2003, Argo now gives us continuous monitoring of the upper ocean with all data being relayed and made publicly available within hours after collection. Result? No net warming of the oceans has been observed in the first six years of the program measurements.

The Polar Ice Caps. Another prediction was that the polar ice caps would melt as CO2 levels rose to produce accelerated warming (you remember… that image of Al Gore’s hapless polar bear struggling to climb onto a melting ice floe). But again, satellite observations have shown no net change for more than thirty years.

In September of 2007 there was an observed anomaly: the Arctic ice cap lost nearly 25% of its normal pack. Global warming proponents said this was finally hard evidence of the climate catastrophe to come. But just one year later, 12% of that icepack had recovered, and by September of 2009 there was another 12% increase, so that the ice loss of 2007 in the northern polar regions has all but fully recovered.

An interesting thing about global climate change is that its effects should be global. So even though there was some ice loss in the Northern Hemisphere there was a corresponding increase in the Southern Hemisphere. Just three weeks after that Northern minimum of September 2007 there was an ice cover maximum recorded in the Southern Hemisphere. And last year, there was less of a summer ice-melt in the Antarctic than has been measured in thirty years of satellite observations.

Hurricane Seasons. We have been told by the IPCC, Al Gore, and others that we could expect to see an increase in hurricane activity… not just in the number of hurricanes but also in their intensity, as global temperatures increased. Again, this has not been observed. In 2005 the Gulf Coast was hammered by Hurricane Katrina, and New Orleans, a city built largely below sea level, was flooded when the levees failed. We were told that storms like this would become more common. Four years later, we have witnessed some of the quietest hurricane seasons in memory. OK, now, without resorting to Google, can you even name another hurricane since Katrina?

Settled Science. When my biology or physics students do a lab they propose a hypothesis… an educated guess. When their observations fail to agree with their hypothesis my students are taught to keep an open mind and question if the hypothesis was flawed or maybe there was something wrong with the experiment or perhaps with their data collection methods. Could there have been other factors that were affecting their observations that were unaccounted for? What they are taught not to do is declare “this is settled science” and call their hypothesis a fact.

So is dangerous anthropogenic global warming “settled science?” Hardly. Is there room for skepticism? Absolutely! And yet climate change alarmists attack skeptics with an almost religious, unthinking fervor. Skeptics are referred to as “deniers” and demonized; in the old days, the words “apostates” or “heretics” would have been used for those who expressed doubts and challenged the established orthodoxy. Alarmists point to the “consensus of a thousand scientists” the way zealots point to the harmonious chanting of a priestly class. They say climate system dynamics are too complicated and too difficult to explain to the average person, much the way we are told that religious texts carry hidden meanings and that dogma must not be questioned by the faithful lest they risk the damnation of their immortal souls.

The late Carl Sagan once said, “extraordinary claims require extraordinary evidence.” I don’t propose to raise the bar that high. I simply ask that the global warming proponents match their predictions to observations. In other words, if the local TV weatherman always predicts rain, every now and then it should rain! Until then, I will choose to remain a skeptic.

Friday, August 14, 2009

It Takes Green To Be Green

I just moved into a condominium with an active and very involved home owners association. The condominium complex is a little over four years old and since its inception it has been committed to the practice of “green and sustainable living,” a self-ennobling, if a bit vague, ethos.

Every year the HOA (home owners association) conducts a line-item budget review, and this year I happened to notice an $11,000 payment for a “solar loan.” Three years ago, the community decided to promote the use of alternative energy sources and reduce its dependence on electrical energy provided by the local utility, Pacific Gas and Electric. So the HOA board approved the acquisition of a bank of solar panels that would provide electricity for the community common house as well as lighting for all the outdoor common areas like parking lots, car ports and walkways. There was also the hope that, by producing a net surplus of electricity, the meter would begin to run backwards and yield a little bit of revenue.

Curious, I asked for the details of the solar investment. A member of the finance committee said that the cost of the solar panels plus installation was a bit over $100,000, and that was after any applicable subsidies, tax credits or rebates. The current solar loan carries a 6% (variable) interest rate. Next, I asked when the anticipated breakeven would be. The committee member said he wasn’t sure. That made me even more curious, so I poked around further. I asked what we were spending on electricity prior to the installation of the solar panels. Copies of the utility bills revealed an average monthly expenditure of $833.

OK… now for some back-of-a-napkin breakeven analysis. First, the assumptions: a 6% annual increase in energy costs and a (steady but unrealistic) 6% interest rate for the remaining 10 years of the loan. There is no adjustment for inflation, and the opportunity cost of what else could have been done with that money is not factored into this brief analysis.

So what’s the “net net?” Well, it will take forty-six years before this condominium community sees breakeven. It’s unlikely that anyone living in this community today will ever see a savings on their already high HOA dues. Forty-six years is far longer, even, than the anticipated life of the solar panels themselves.
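A back-of-a-napkin breakeven search like the one described can be sketched as a simple loop. The inputs below are placeholders, not the HOA’s actual figures; a full analysis would also fold in the loan’s interest payments and any net-metering revenue:

```python
def breakeven_year(capital_cost, first_year_savings, savings_growth,
                   horizon_years=100):
    """First year in which cumulative energy savings cover the capital
    cost. Ignores inflation and opportunity cost, as in the text above."""
    cumulative = 0.0
    savings = first_year_savings
    for year in range(1, horizon_years + 1):
        cumulative += savings
        if cumulative >= capital_cost:
            return year
        savings *= 1.0 + savings_growth
    return None  # never breaks even within the horizon
```

Plug in the true all-in cost (purchase price plus every interest payment) and the true avoided utility spend, and the loop spits out how long the community waits to get its money back.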

I mentioned this number to a couple of members of the association’s finance committee. One told me that no real cost-benefit analysis was done prior to the decision to move ahead with purchase and when he sent out an email to all indicating that the system would cost far more than we would ever save he was chided by some for being “negative” and not in keeping with the values of the community. Another finance committee member said it didn’t matter because “we wanted to invest in this emerging alternative energy technology.”

Solar has been around for a long time. It’s not a new technology. The reason it is still “emerging” is because unless it is heavily subsidized by government or unless the cost of conventional energy sources skyrocket (due to market forces or fiscal policy), there is no way that it makes any financial sense to install solar arrays, windmills or a host of other alternative energy sources.

What my community wanted to do was to make a statement, a statement of support for, and a commitment to “investment in sustainable green energy.” Unfortunately they unwittingly made another statement… that with “investments” like this only the wealthy can afford to be green.

Tuesday, May 19, 2009

Wouldn’t You Really Rather Have a Buick? (Made in China)

It was reported last week that the ailing auto giant General Motors is considering importing thousands of lower-cost cars manufactured at GM plants in China for sale in the North American market. This ‘rumored’ proposal has created a firestorm of controversy and opposition from the United Auto Workers [US labor union, ed.], and it has created a significant political problem for the Obama administration, which has provided more than $15.4 billion in loans to the company in an effort to prop up its struggling operations and preserve more than 90,000 US jobs at current levels of compensation and benefits. “GM should not be taking taxpayers' money simply to finance the outsourcing of jobs to other countries," wrote Alan Reuther, the union's Washington lobbyist, in a letter to U.S. lawmakers.

GM, on the other hand, has been under a lot of fire, and for years. They have been harshly criticized for not paying attention to the evolving auto market that has been demanding high quality, fuel efficient cars… a demand satisfied by the Japanese [especially Honda and Toyota] and now too, the Korean auto makers.

A common mantra for the criticism has been that GM is very good at producing exactly the wrong cars at precisely the right time… except perhaps in China where, surprisingly, its Buick brand is well regarded.

GM executives will not comment directly on these reports, saying only that they generally do not speculate on future product strategies beyond what has been publicly released at auto shows. But let’s think this through from an economist’s perspective. So what do we know?

GM is rapidly running out of cash. It faces a June 1st deadline to restructure itself or go into Chapter 11 bankruptcy [where the company is protected from its creditors while it reorganizes under a judge’s supervision. Ed.] And GM has asked for another $11.6 billion in bailout funds to stay afloat. That would bring the total to $27 billion in ‘taxpayer’ funding.

There is a lot of truth to the criticism that GM has not done a good job of offering the kinds of cars that people want in the North American market, and when fuel prices climb, sales of GM’s leading profit-generating products, like pickup trucks and SUVs, decline. That’s a no-win formula.

So after reading the tea leaves of the auto markets GM decides to beef up its fuel-efficient lower-cost entry level offerings in the US by sourcing cars offshore from its Chinese assembly plants. The howls of protest are still echoing through the halls of Congress.

Why is GM considering such a bold and maybe too-late initiative, at least by GM standards? Well, GM is at a crossroads… the ‘brink’ might be a more accurate term. They are days away from bankruptcy, and management needs to find ways to cut costs and offer competitive cars that people want to buy. If they don’t, then the company as we know it will dissolve into insolvency… a memory of another once-great American corporation.

The cost of US labor for the domestic automakers is among the highest in the world. The typical hourly cost for a UAW worker in a GM plant is more than $73 per hour including all benefits. The same kind of worker in a US Toyota plant costs $48. That’s 34% less. In China the average auto assembly line worker earns a little more than $2 for the same hour of work![1]
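As a back-of-the-envelope check of those figures, here is a quick sketch using only the numbers quoted above (nothing here comes from GM’s books):

```python
# Back-of-the-envelope check of the hourly labor costs quoted in the post:
# ~$73/hr all-in for a UAW worker at GM, ~$48/hr at a US Toyota plant,
# ~$2/hr on a Chinese assembly line.
costs = {"GM (UAW)": 73.0, "Toyota US": 48.0, "China": 2.0}

baseline = costs["GM (UAW)"]
for plant, cost in costs.items():
    pct_below = (baseline - cost) / baseline * 100
    print(f"{plant:10s} ${cost:6.2f}/hr -> {pct_below:4.1f}% below the GM rate")
```

The Toyota gap works out to about 34%, matching the figure in the text; the Chinese rate is roughly 97% below GM’s all-in hourly cost.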

It doesn’t take a vast storehouse of business sense to understand that sourcing from more competitive labor offerings overseas is an essential component in GM’s strategy for its return to profitability and indeed, its very survival.

Unintended Consequences

So here’s the scenario… Asian auto manufacturers see a huge market in North America. It’s pretty obvious to everyone that with the economy on the ropes, gasoline prices climbing steadily upward and the continuing political instability in the Middle East the market is demanding lower cost fuel efficient vehicles. But not everyone is welcoming these ‘econoboxes’ with open arms. The ‘big three’ car companies, the UAW and their congressional allies see these cars for what they are… a ‘threat to American jobs.’

If these Asian brands gain any more ‘traction’ they will continue to erode the market share of the domestic auto makers. So they and the union appeal to Congress. There’s debate back and forth as to what type of ‘protection’ they can offer Detroit. It’s clear to them that the Asian car makers have an ‘unfair’ advantage in that they pay their workers far less in the way of wages and benefits. So how is it considered ‘fair’ competition when the playing field is so clearly tilted towards the offshore brands? If US jobs are lost to Asian workers then US votes are lost too. Many in Congress are swayed by the demagoguery and they mull a response that includes the imposition of heavy tariffs [import taxes, Ed.] on these cheap but reliable imported cars.

Clearly alarmed, these Asian car companies decide that they can’t afford to fuel a protracted trade-war with the US. After all, their entire goal is to carve out market share, not to be crushed by the weight of the 800 lb. congressional gorilla. So they rethink their strategy. Rather than compete with GM, Ford and Chrysler by end-running them with sturdy and cheap alternatives at the low-profit end of the market they decide to improve their quality, broaden their product lines and move upscale into the mainstream of the US car business. In other words, the plan is to take on the domestic giants head-on and beat them at their own game.

This scenario was played out nearly forty years ago. The Asian car makers were Toyota, Honda, and Datsun [now Nissan]. The cars included the Civic and Accord, the Corolla and the Camry, the Sentra and the Maxima. It even included whole new luxury brands like Lexus, Infiniti and Acura. The strategy proved to be wildly successful.

So the threats and the reality of protectionism, the erecting of artificial barriers to competition and trade by legislative fiat, backfired big time. It not only deprived the American consumer of the freedom to choose from multiple product alternatives when shopping for a car; it pushed ‘Japan Inc.’ to create Lexuses from Corollas… a huge unintended consequence that has directly led to the near, and maybe eventual, demise of the domestic auto industry as we know it. And guess what… it’s happening all over again.


[1] Although Chinese labor is cheap by US, European and Japanese standards, the cost of assembling a car in China is not as low as you would suspect since the costs of parts and raw materials are very high.

Saturday, May 2, 2009

Say Cheese! A Lesson In Creative Destruction

Photography is one of my hobbies. At one time it even used to generate a modest secondary income. But it was never about money, it was always about the joy of picture taking that kept me motivated to carry a camera almost everywhere I went. I can remember taking long drives up the coast or spending the day at the zoo carrying my 35mm SLR ready for any picture taking opportunity. “Burning film” was an understatement.

Since I didn’t have a color darkroom part of the ritual after a day’s shooting was gathering the rolls of 35mm film and driving to the little yellow kiosk that stood in the middle of the strip-mall parking lot. Two or three days later I would return to collect the fruits of my labors… chunky envelopes filled with prints. Candy! The next step was to park the car and sort through the prints to see which ones “came out.” The convenience of the neighborhood FotoMat was unbeatable. The kiosks were everywhere. Turnaround time was measured in days, not weeks, and the print quality was acceptable for most standard sized prints. At the time, the only thing that was more convenient was the Polaroid camera that carried its own processing lab in the camera body. Polaroids were very cool alternatives to the point-and-shoots but limited in what they could do. They complemented the 35mm camera but were never really a substitute.

A few years later came another innovation… the one-hour automated photo lab. Now you could take your film to the lab, and miracle of miracles, your prints are delivered in one hour. Gradually, the little yellow kiosks began to disappear… not immediately, but gradually. It seems that photographers wanted more than photo processing and printing services. They wanted the satisfaction of seeing their prints while the memories of the events were still fresh… not in days but minutes. So one technological innovation began to displace, then replace, the photo lab kiosk. Still the underlying film-based technology was essentially unchanged. The real revolution was right around the corner.

Enter the 1990’s and the digital still camera. The expression, “did it come out,” was no longer heard since both the casual snap shooter and the pro had the ability to get instant feedback on their pictures. The chances of botching a shot were greatly reduced and the percentage of acceptable pictures skyrocketed. Best of all, the digital photographer now had the choice to print art pieces on inexpensive ink-jet printers with terrific results, to view and edit their shots on their computer’s display or even share them with the whole world by uploading the photo files to a social networking site. This represented a fundamental shift in photography. Then in 2001, when the quality of the digital image surpassed that of film, the last nail was driven into the coffin of film-based photography, now reduced to a declining niche.

Fast forward eight years. I haven’t seen a FotoMat in years. My Nikon N90s 35mm SLR is gathering dust in my closet. Film sales continue their steep decline. Polaroid is an empty shell of its former self, a licensed brand name only. Kodak, Fuji, Agfa and Ilford all had to change their core business models in order to survive. Yes, the one-hour photo labs are still around as photo departments in drug stores and Wal-Mart but mainly just to print digital images.

This is an example of the sometimes painful process of creative destruction when innovation, in response to, and fueled by, market demand, sweeps away old industries thereby creating new opportunities and growth.

Wednesday, April 29, 2009

Everyday Economics

Economic thinking is not limited to the classroom, the boardroom or the business pages. At least it shouldn’t be!

My wife and I just returned from a trip to Italy. When recounting our travels to a friend we happened to mention that two of the hotels we stayed in were actually quite old. The first was originally constructed as a monastery that dated back to the 11th century. The second was a wealthy merchant’s home located just outside the beautiful walled city of Siena. It was built in the mid-1300s. Imagine that… sleeping in a room that was nearly one thousand years old! Our friend was taken aback and remarked, “They really knew how to build houses back then. Not like today.” She continued with certainty and conviction, “Today they want the buildings to fall apart quickly so they can be replaced. It’s planned obsolescence.”

Later at dinner another friend happened to mention that she wasn’t feeling well due to severe seasonal allergies and other medical issues. She told us that her insurance company would pay for a prescription of the potent steroid, Prednisone, but not a more targeted and expensive medication that could cost upwards of $600 per month. The drug was not in their formulary. I asked her if she had considered other sources to fill her medication, such as Wal-Mart, since they are well known for their aggressive low pricing on prescription drugs, just like everything else they sell. She said firmly, “No. I have an issue with Wal-Mart. I won’t shop there.” Since she had not indicated being poorly treated at a Wal-Mart in the past, I imagined that it must have been some social or political bias. Regardless, I added, “If they carry your prescription drug you may be able to save hundreds of dollars by having it filled there.” Her eyes widened… she said, “Really? I guess I should give them a call.”

Both these statements resonated with me at a fundamental level. Let’s examine the first.

The underlying assumption was that builders today design houses and buildings to deteriorate quickly. Builders today must use a quality of craftsmanship and materials that doesn’t hold up, so that their buildings can be replaced by new ones… erected, it is assumed, by the same builder at some point in the future.

Now if I were a general contractor and knew this "shady" practice to be true I would have two choices. First, I could keep quiet to perpetuate the scheme and look for some other way to compete with other builders for new business. Or, I could tell the customer that my firm will build exactly what the customer wants on time, under budget, and using high-quality materials and excellent craftsmanship. I would also state that we, as professional builders, will stand behind our work because our reputation and future business depend on it.

So, if a customer wants to build a monastery that will last a thousand years, and if the budget allows for it, then there are many GC’s who will be able to deliver just that. But that’s a pretty uncommon request since houses made of solid stone, with marble floors and heavy timbers, that are covered by heavy Tuscan tile roofing and adorned with hand painted frescoes would be prohibitively expensive and rejected as being inefficient extravagances. Is it planned obsolescence? In a way, yes it is. But it is the contractor’s client doing the planning and making those choices not the builder.

Then there was the friend with the medical condition.

It seems that she objected to doing business with Wal-Mart as a general practice. She never offered a reason why and I never asked. But one thing became clear. She revealed that her objections were quantifiable. They had a dollar threshold. If it meant saving a couple of bucks when shopping for a toaster, then her principled stand not to shop at Wal-Mart remained steadfast. But when she saw an opportunity to save hundreds of dollars and get exactly the medicine she wanted, her objections to shopping at the big box from Arkansas melted like the spring snow.

Wednesday, February 11, 2009

What do we do with Rudy?

Sooner or later many families have a conversation that they have been avoiding for months, sometimes years. Rudy had always been described as strong and independent. A product of the depression, he was a member of that generation that had endured financial ruin, genocides and world wars with a mixture of stoicism and resolve – an inner strength that everyone looked up to. But lately things were different. Grandpa Rudy, as he was known to the youngest in the family, seemed more and more frail – not surprising for a man in his late eighties. But what really had the family concerned was his forgetfulness. Not just forgetting the occasional name, the date, or odd face, but forgetting to eat, to take his medicine, to dress himself or sometimes, even to bathe. Clearly, despite Rudy’s fierce sense of independence that defined him, he was no longer in a position to safely care for himself but what could the family do? What would the family be willing to do?

This is an all too common scenario in our aging population. Placing an elderly loved one in assisted living is never an easy decision, but the deepening recession and the collapse of the housing market have exposed seniors to unprecedented risks and uncertainties that compromise more than their lifestyle and their quality of life... they compromise their safety.

Assisted living costs for elders are not insignificant, ranging from $2,000 to $8,000 per month. Typically, they have been financed by the equity in the elder’s home, either by the outright sale of the property or by the vehicle of the reverse mortgage. But things are different now.

Let’s take our narrative a little further.

The family agrees that Rudy needs to be placed in assisted living. Some careful investigation reveals that the best-matched alternatives will cost around $4,000 per month, or at minimum $48,000 per year. Despite a lifetime of hard work, some financial reverses, including the recent tanking of the Dow, have left Rudy in a vulnerable financial position. His savings amount to only a little more than fourteen months of assisted living costs. Assuming, as we all hope, that Rudy survives longer than fourteen months, he (we?) will need to dip into the equity of his home in order to fund his living expenses.

It’s then that the realization hits everyone gathered around the kitchen table… everyone except Rudy that is. Rudy’s home was valued at a little over $560,000 two years ago. Today, that same home would likely sell for about $320,000. That’s a drop of more than $240,000. The family doesn’t appear to be prepared to accept the reality that $320,000 is the value of the home and not yesterday’s $560,000. “The market will bounce back,” someone says. “Yes, this is a temporary setback,” another agrees. But on one thing there is unspoken unanimity. This is not the time to consider selling Rudy’s home. “After all,” they say, “it’s in Rudy’s best interests to wait until the time is right.” Right? But in the back of some of the minds there is that ugly fleeting notion, like a darting shadow in the corner of the mind’s eye, that the equity in Rudy’s home represents a portion of an assumed inheritance.
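The arithmetic of Rudy’s situation is worth making explicit. A minimal sketch using only the figures from the story (the fourteen-months savings figure is taken at face value):

```python
# Rudy's funding runway, using only the numbers from the story above.
monthly_cost = 4_000            # assisted living, dollars per month
savings = 14 * monthly_cost     # "a little more than fourteen months" of costs
home_value_then = 560_000       # valuation two years ago
home_value_now = 320_000        # likely sale price today

paper_loss = home_value_then - home_value_now      # the family's perceived "loss"
months_from_savings = savings / monthly_cost       # runway from savings alone
months_from_home = home_value_now / monthly_cost   # runway if the home sells today

print(f"Paper loss on the home: ${paper_loss:,}")
print(f"Savings cover {months_from_savings:.0f} months")
print(f"Selling today funds another {months_from_home:.0f} months "
      f"(~{months_from_home / 12:.1f} years)")
```

Even at the depressed price, the home funds well over six years of care. Waiting for the market to “bounce back” is a bet made with Rudy’s runway, not the family’s.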

“No. This isn’t the time to sell. So, now what do we do with Rudy?”

The recession has many victims. Many are hidden and helpless. Many are as close to us as our aging parents.

Tuesday, December 16, 2008

A student, a letter and a $4 message

The following remarkable column was first published on Sunday, December 14th, 2008 in the Press Democrat. The student is my student. We’ll call him RV. [js]

By Paul Gullixson
THE PRESS DEMOCRAT

We recently received this letter here at The Press Democrat:

“Dear Sir or Madam,

I am writing to you because I want to turn a wrong into a right. Over the past few weeks I have been paying for one newspaper and taking two. To make things worse, I sold the extra copies to my fellow 8th graders at school. I realize that I benefited from your loss, and I am sorry.

I realize now that my actions were dishonest and unkind. Enclosed is a money order, and I hope it will sufficiently cover your loss.

Sincerely,

An 8th Grade Student.”

Enclosed was a money order for $4. There was no return address. I’ve shared this letter many times in recent days and decided that I would like to respond here.

Dear ‘Student’:

Today, I am celebrating my 10-year anniversary here at The Press Democrat. I mention that because in all those years of opening letters to the editor, I don’t recall ever coming across a note quite like yours.

I’m impressed, for reasons I hope will be evident.

First of all, anybody who can sell newspapers to eighth-graders in this day and age is OK with me. I’ve been led to believe that eighth-graders are no more interested in newspapers than they are asparagus.

My colleagues were equally impressed when I read your letter aloud at our department heads’ meeting and handed your $4 money order to our controller. (You should have seen his face. It’s not often he receives revenue from the Editorial Department.)

They had one question for you:

How are you at selling quarter-page ads? You may have a future in newspapers — as long as there are newspapers in the future.

Some say newspapers are dying. I hope not. At the least, we’re certainly experiencing some dramatic changes. Many businesses are being pushed to the brink.

But I fear there’s something else that’s dying out there, something more important than newspapers. You addressed it with your letter.

It’s called integrity.

As David Brooks, a New York Times columnist, recently wrote, “Recessions breed pessimism.” Crime goes up. So does lying and cheating.

So does forgetting to admit when we’re wrong.

You say that you know your actions were “dishonest and unkind.” If only we could hear such words from those responsible for our current financial crisis — and so many things that are happening in our world.

Adults have made a mess of things, and I’m afraid we’re going to be leaving your generation with some hefty bills. I don’t know how to begin to apologize to you for that. But bills and bailouts are not the only thing I worry that my generation is handing down.

A recent study found 64 percent of U.S. high school students say they’ve cheated on a test in the past year. Thirty percent have stolen from a store. Both of those numbers have risen steadily in recent years.

That, to me, is more discouraging than anything we’ve heard from Wall Street.

The study by the Josephson Institute in Los Angeles also found that using the Internet to plagiarize an assignment (36 percent), lying to parents about something significant (83 percent) and lying to save money (42 percent) also were up.

Michael Josephson, founder and president of the institute, put it this way: “In a society drenched with cynicism, young people can look at it and say, ‘Why shouldn’t we? Everyone does it.’ ”

Well, maybe not everyone.

Which is one reason I wanted to publish your letter today. Some here at the paper have suggested that there’s a parent, maybe even a teacher, who encouraged — ordered? — you to write this.

To that I say, what does it matter? Whether you wrote this alone or with someone lurking over your shoulder, both are evidence of someone in your life who cared enough to teach you the value of accountability and honesty.

Consider yourself fortunate. Not everyone has someone like that.

Maybe this letter was even embarrassing for you to write. If so, it’s probably equally embarrassing now to see me responding in print.

But I wanted to commend you. There’s no bravery in conformity. It takes real courage to ignore the cynicism of the world, the temptation to hide, the encouragement to avoid, and stand up and do the right thing.

Don’t let anyone tell you otherwise. I’ve waxed philosophic long enough. I fear I may be getting melancholy on my anniversary. I just wanted to thank you for your message and your reminder of something worth much more than $4.

I don’t know where our country is going. But what I do know is that the things we can’t afford to leave behind are fundamentals like integrity, accountability, hope, faith.

They still matter. They always will.

Take pride in this letter. Maybe someday, you will even encourage someone to do the same — admit a mistake and seek to make amends.

If so, you’ll do more than just turn a wrong into a right. You’ll be a leader.

And those don’t just fall out of newspaper racks, you know.

(Paul Gullixson is editorial director for The Press Democrat. E-mail him at paul.gullixson@pressdemocrat.com)

Tuesday, December 9, 2008

In Defense of "Sweatshops"

In one of my economics classes the subject of sweatshops, and what to do about them, was discussed. The following is one of the better articles and analyses on the subject. A very strong case can be made that the real problem is not that sweatshops exist but rather, as Charles Wheelan suggests, that there are not enough of them. Powell’s article was first posted June 2, 2008. [js]

by Benjamin Powell*


"Because sweatshops are better than the available alternatives, any reforms aimed at improving the lives of workers in sweatshops must not jeopardize the jobs that they already have."

I do not want to work in a third world "sweatshop." If you are reading this on a computer, chances are you don't either. Sweatshops have deplorable working conditions and extremely low pay—compared to the alternative employment available to me and probably you. That is why we choose not to work in sweatshops. All too often the fact that we have better alternatives leads first world activists to conclude that there must be better alternatives for third world workers too.

Economists across the political spectrum have pointed out that for many sweatshop workers the alternatives are much, much worse.1 In one famous 1993 case U.S. senator Tom Harkin proposed banning imports from countries that employed children in sweatshops. In response a factory in Bangladesh laid off 50,000 children. What was their next best alternative? According to the British charity Oxfam a large number of them became prostitutes.2

The national media spotlight focused on sweatshops in 1996 after Charles Kernaghan, of the National Labor Committee, accused Kathy Lee Gifford of exploiting children in Honduran sweatshops. He flew a 15-year-old worker, Wendy Diaz, to the United States to meet Kathy Lee. Kathy Lee exploded into tears and apologized on the air, promising to pay higher wages.

Should Kathy Lee have cried? Her Honduran workers earned 31 cents per hour. At 10 hours per day, which is not uncommon in a sweatshop, a worker would earn $3.10. Yet nearly a quarter of Hondurans earn less than $1 per day and nearly half earn less than $2 per day.
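Powell’s arithmetic here is easy to verify. A quick sketch using only the numbers he quotes:

```python
# Verifying the Honduran wage comparison in the paragraph above.
hourly_wage = 0.31     # dollars per hour, as reported
hours_per_day = 10     # a common sweatshop workday, per the text
daily_wage = hourly_wage * hours_per_day

print(f"Daily sweatshop wage: ${daily_wage:.2f}")
# Benchmarks from the text: ~25% of Hondurans earned under $1/day,
# ~50% earned under $2/day at the time.
for benchmark in (1.0, 2.0):
    print(f"  {daily_wage / benchmark:.2f}x the ${benchmark:.0f}/day benchmark")
```

The $3.10 daily wage is more than triple what a quarter of Hondurans earned, and better than half again what half the country earned.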

Wendy Diaz's message should have been, "Don't cry for me, Kathy Lee. Cry for the Hondurans not fortunate enough to work for you." Instead the U.S. media compared $3.10 per day to U.S. alternatives, not Honduran alternatives. But U.S. alternatives are irrelevant. No one is offering these workers green cards.

What are the Alternatives to Sweatshops?

Economists have often pointed to anecdotal evidence that alternatives to sweatshops are much worse. But until David Skarbek and I published a study in the 2006 Journal of Labor Research, nobody had systematically quantified the alternatives.3 We searched U.S. popular news sources for claims of sweatshop exploitation in the third world and found 43 specific accusations of exploitation in 11 countries in Latin America and Asia. We found that sweatshop workers typically earn much more than the average in these countries. Here are the facts:

We obtained apparel industry hourly wage data for 10 of the countries accused of using sweatshop labor. We compared the apparel industry wages to average living standards in the country where the factories were located. Figure 1 summarizes our findings.4

Figure 1. Apparel Industry Wages as a Percent of Average National Income


Working in the apparel industry in any one of these countries results in earning more than the average income in that country. In half of the countries it results in earning more than three times the national average.5

Next we investigated the specific sweatshop wages cited in U.S. news sources. We averaged the sweatshop wages reported in each of the 11 countries and again compared them to average living standards. Figure 2 summarizes our findings.

Figure 2. Average Protested Sweatshop Wages as a Percent of Average National Income


From "In Praise of Cheap Labor," by Paul Krugman. Slate Magazine, March 1997:

A country like Indonesia is still so poor that progress can be measured in terms of how much the average person gets to eat; since 1970, per capita intake has risen from less than 2,100 to more than 2,800 calories a day. A shocking one-third of young children are still malnourished—but in 1975, the fraction was more than half. Similar improvements can be seen throughout the Pacific Rim, and even in places like Bangladesh. These improvements have not taken place because well-meaning people in the West have done anything to help—foreign aid, never large, has lately shrunk to virtually nothing. Nor is it the result of the benign policies of national governments, which are as callous and corrupt as ever. It is the indirect and unintended result of the actions of soulless multinationals and rapacious local entrepreneurs, whose only concern was to take advantage of the profit opportunities offered by cheap labor.

Even in specific cases where a company was allegedly exploiting sweatshop labor we found the jobs were usually better than average. In 9 of the 11 countries we surveyed, the average reported sweatshop wage, based on a 70-hour work week, equaled or exceeded average incomes. In Cambodia, Haiti, Nicaragua, and Honduras, the average wage paid by a firm accused of being a sweatshop is more than double the average income in that country. The Kathy Lee Gifford factory in Honduras was not an outlier—it was the norm.

Because sweatshops are better than the available alternatives, any reforms aimed at improving the lives of workers in sweatshops must not jeopardize the jobs that they already have. To analyze a reform we must understand what determines worker compensation.

What Determines Wages and Compensation?

If a Nicaraguan sweatshop worker creates $2.50 per hour worth of revenue (net of non-labor costs) for a firm then $2.50 per hour is the absolute most a firm would be willing to pay the worker. If the firm paid him $2.51 per hour, the firm would lose one cent per hour he worked. A profit maximizing firm, therefore, would lay the worker off.

From Nicholas D. Kristof, The New York Times, 14 January 2004:

And so I think what Americans don't perhaps understand is that in a country like Cambodia, the exploitation of workers in sweatshops is a real problem, but the primary problem in places like this is not that there are too many workers being exploited in sweatshops, it's that there are not enough. And a country like Cambodia would be infinitely better off if it had more factories using the cheap labor here and giving people a lift out of the unbelievably harsh conditions in the villages and even in the urban slums.

Of course a firm would want to pay this worker less than $2.50 per hour in order to earn greater profits. Ideally the firm would like to pay the worker nothing and capture the entire $2.50 of value he creates per hour as profit. Why doesn't a firm do that? The reason is that a firm must persuade the worker to accept the job. To do that, the firm must offer him more than his next best available alternative.6

The amount a worker is paid is less than or equal to the amount he contributes to a firm's net revenue and more than or equal to the value of the worker's next best alternative. In any particular situation the actual compensation falls somewhere between those two bounds.

Wages are low in the third world because worker productivity is low (upper bound) and workers' alternatives are lousy (lower bound). To get sustained improvements in overall compensation, policies must raise worker productivity and/or increase alternatives available to workers. Policies that try to raise compensation but fail to move these two bounds risk raising compensation above a worker's upper bound resulting in his losing his job and moving to a less-desirable alternative.
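Powell’s two-bound model can be sketched in a few lines. The dollar figures below are illustrative stand-ins (only the $2.50 productivity number comes from his example); the logic is the point:

```python
# A minimal sketch of the two-bound wage model described above: compensation
# settles between the worker's next-best alternative (lower bound) and the
# net revenue the worker generates for the firm (upper bound).

def job_outcome(offered_wage: float, productivity: float, alternative: float) -> str:
    """Classify an offered hourly wage against the two bounds."""
    if offered_wage > productivity:
        return "firm loses money on the hire -> job eliminated"
    if offered_wage < alternative:
        return "worker declines -> takes next-best alternative"
    return "job accepted"

productivity = 2.50   # net revenue per hour (upper bound), from Powell's example
alternative = 0.80    # next-best option per hour (lower bound), illustrative

for wage in (0.50, 1.50, 3.00):
    print(f"${wage:.2f}/hr: {job_outcome(wage, productivity, alternative)}")
```

The third case is the one the anti-sweatshop mandates risk creating: pushing required compensation above the upper bound does not raise the wage, it eliminates the job.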

What about non-monetary compensation? Sweatshops often have long hours, few bathroom breaks, and poor health and safety conditions. How are these determined?

Compensation can be paid in wages or in benefits, which may include health, safety, comfort, longer breaks, and fewer working hours. In some cases, improved health or safety can increase worker productivity and firm profits. In these cases firms will provide these benefits out of their own self interest. However, often these benefits do not directly increase profits and so the firm regards such benefits to workers as costs to itself, in which case these costs are like wages.

A profit-maximizing firm is indifferent between compensating workers with wages or compensating them with health, safety, and leisure benefits of the same value when doing so does not affect overall productivity. What the firm really cares about is the overall cost of the total compensation package.

Workers, on the other hand, do care about the mix of compensation they receive. Few of us would be willing to work for no money wage and instead take our entire pay in benefits. We want some of each. Furthermore, when our overall compensation goes up, we tend to desire more non-monetary benefits.

For most people, comfort and safety are what economists call "normal goods," that is, goods that we demand more of as our income rises. Factory workers in third world countries are no different. Unfortunately, many of them have low productivity, and so their overall compensation level is low. Therefore, they want most of their compensation in wages and little in health or safety improvements.

Evaluating Anti-Sweatshop Proposals

The anti-sweatshop movement consists of unions, student groups, politicians, celebrities, and religious groups.7 Each group has its own favored "cures" for sweatshop conditions. These groups claim that their proposals would help third world workers.

Some of these proposals would prohibit people in the United States from importing any goods made in sweatshops. What determines whether the good is made in a sweatshop is whether it is made in any way that violates labor standards. Such standards typically include minimum ages for employment, minimum wages, standards of occupational safety and health, and hours of work.8

Such standards do nothing to make workers more productive, so the upper bound of their compensation is unchanged. Such mandates risk raising compensation above laborers' productivity and throwing them into worse alternatives by eliminating or reducing the U.S. demand for their products. Employers will meet health and safety mandates either by laying off workers or by improving health and safety while lowering wages against workers' wishes. In either case, the standards would make workers worse off.
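The trade-off described above can be reduced to simple arithmetic. The sketch below is purely illustrative (the function name and all dollar figures are invented, not drawn from the article's data): a firm will not pay total compensation above a worker's productivity, so a mandated benefit either comes out of wages or, if it alone exceeds productivity, costs the worker the job.

```python
# Illustrative sketch only: a firm caps total compensation (wages + benefits)
# at worker productivity. All figures are hypothetical dollars per day.

def employment_outcome(productivity, wage, mandated_benefits):
    """Return (still_employed, resulting_wage) after a benefits mandate,
    assuming the firm pays total compensation up to productivity and no more."""
    total = wage + mandated_benefits
    if total <= productivity:
        return True, wage                              # mandate absorbed; wage unchanged
    elif mandated_benefits <= productivity:
        return True, productivity - mandated_benefits  # wage falls to offset the mandate
    else:
        return False, 0                                # mandate exceeds productivity: job lost

# Worker producing $5/day, paid $4 in wages plus $1 in benefits:
print(employment_outcome(5, 4, 1))  # -> (True, 4)
# Mandate raises benefits to $3/day: the wage must fall to $2.
print(employment_outcome(5, 4, 3))  # -> (True, 2)
# Mandate worth $6/day exceeds productivity: the job disappears.
print(employment_outcome(5, 4, 6))  # -> (False, 0)
```

Note the asymmetry the article stresses: nothing in the mandate raises `productivity`, so the mandate can only rearrange or destroy compensation, never add to its ceiling.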

The aforementioned Charles Kernaghan testified before Congress on one of these pieces of legislation, claiming:

Once passed, this legislation will reward decent U.S. companies which are striving to adhere to the law. Worker rights standards in China, Bangladesh and other countries across the world will be raised, improving conditions for tens of millions of working people. Your legislation will for the first time also create a level playing field for American workers to compete fairly in the global economy.9

From David R. Henderson, "The Case for Sweatshops." Weekly Standard, 7 February 2000:

The next time you feel guilty for buying clothes made in a third-world sweatshop, remember this: you're helping the workers who made that clothing. The people who should feel guilty are those who argue against, or use legislation to prevent, our giving a boost up the economic ladder to members of the human race unlucky enough to have been born in a poor country. Someone who intentionally gets you fired is not your friend.

Contrary to his assertion, anti-sweatshop laws would make third world workers worse off by lowering the demand for their labor. As his testimony alludes to, though, such laws would make some American workers better off because they would no longer have to compete with third world labor: U.S. consumers would be, to some extent, a captive market. Although Kernaghan and some other opponents of sweatshops claim that they are attempting to help third world workers, their true motives are revealed by the language of one of these pieces of legislation: "Businesses have a right to be free from competition with companies that use sweatshop labor." A more honest statement would be, "U.S. workers have a right not to face competition from poor third world workers, and by outlawing competition from the third world we can enhance union wages at the expense of poorer people who work in sweatshops."

Kernaghan and other first world union members pretend to take up the cause of poor workers but the policies they advocate would actually make those very workers worse off. As economist David Henderson said, "[s]omeone who intentionally gets you fired is not your friend."10 Charles Kernaghan is no friend to third world workers.

Conclusion

Not only are sweatshops better than current worker alternatives, but they are also part of the process of development that ultimately raises living standards. That process took about 150 years in Britain and the United States but closer to 30 years in Japan, South Korea, Hong Kong, and Taiwan.

When companies open sweatshops they bring technology and physical capital with them. Better technology and more capital raise worker productivity, and over time this raises workers' wages. As more sweatshops open, more alternatives become available to workers, raising the amount a firm must bid to hire them.

The good news for sweatshop workers today is that the world has better technology and more capital than ever before. Development in these countries can happen even faster than it did in the East Asian tigers. If activists in the United States do not undermine the process of development by eliminating these countries' ability to attract sweatshops, then third world countries that adopt market friendly institutions will grow rapidly and sweatshop pay and working conditions will improve even faster than they did in the United States or East Asia. Meanwhile, what the third world so badly needs is more "sweatshop jobs," not fewer.


Footnotes

1. Walter Williams, "Sweatshop Exploitation." January 27, 2004. Paul Krugman, "In Praise of Cheap Labor, Bad Jobs at Bad Wages are Better Than No Jobs at All." Slate, March 20, 1997.

2. Paul Krugman, New York Times. April 22, 2001.

3. Benjamin Powell and David Skarbek, "Sweatshop Wages and Third World Living Standards: Are the Jobs Worth the Sweat?" Journal of Labor Research. Vol. 27, No. 2. Spring 2006.

4. All figures are reproduced from our Journal of Labor Research article. See the original article for notes on data sources and quantification methods.

5. Data on actual hours worked were not available. Therefore, we provided earnings estimates based on various numbers of hours worked. Since one characteristic of sweatshops is long working hours, we believe the estimates based on 70 hours per week are the most accurate.

6. I am excluding from my analysis any situation where a firm or government uses the threat of violence to coerce the worker into accepting the job. In those situations, the job is not better than the next best alternative, because otherwise the firm wouldn't need to use force to get the worker to take it.

7. It is a classic mix of "bootleggers and Baptists." The bootleggers in the case of sweatshops are the U.S. unions, who stand to gain when their lower-priced substitute, third world workers, is eliminated from the market. The "Baptists" are the true but misguided believers.

8. These minimums are determined by laws and regulations of the country of origin. For a discussion of why these laws should not be followed, see Benjamin Powell, "In Reply to Sweatshop Sophistries." Human Rights Quarterly. Vol. 28, No. 4. Nov. 2006.

9. Testimonies at the Senate Subcommittee on Interstate Commerce, Trade and Tourism Hearing. Statement of Charles Kernaghan. February 14, 2007.

10. David Henderson, "The Case for Sweatshops." Weekly Standard, 7 February 2000.


*Benjamin Powell is Assistant Professor of Economics at Suffolk University and Senior Economist with the Beacon Hill Institute. He is editor of Making Poor Nations Rich: Entrepreneurship and the Process of Development.

Copyright 2008 Liberty Fund Inc.