
"Let go - Let's go" is an engaging BLOG that discusses the past, the present, and the future.

"Let go - Let's go" will share with you what we believe is important to discuss, including, but not limited to, business, entrepreneurship, technology, finance, money, science, discoveries, health, and sports.


Turning Light Into Matter … And Raspberries

from KHouse
m = L/c². Albert Einstein said so in 1905. Replace L (light energy) with the letter E (energy in general), move c² to the other side of the equals sign, and, ta-da, we get the familiar equation: mc² = E.
Or, better yet: E = mc².
Thanks to Gary Larson, Bugs Bunny and elementary school, most people are aware of this bit of physics, and even that Einstein is to be blamed for it.
What does it mean? It means that mass is ultimately equivalent to energy, which was demonstrated in the detonation of the first nuclear bomb. It also means, in theory, that energy can be converted into mass, but until recently physicists hadn’t been able to figure out how to do it. We’re not only years away from a good old-fashioned Star Trek food synthesizer; we can’t even make protons from photons.
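To get a feel for the equivalence, here is a quick back-of-the-envelope check of E = mc². The TNT conversion factor is a standard reference value, not a figure from the article:

```python
# A back-of-the-envelope check of E = m*c^2 as discussed above.
c = 299_792_458  # speed of light, m/s

def mass_to_energy(mass_kg):
    """Energy in joules locked up in a given rest mass."""
    return mass_kg * c ** 2

energy_1g = mass_to_energy(0.001)        # one gram of matter
tnt_kilotons = energy_1g / 4.184e12      # 1 kiloton of TNT = 4.184e12 joules
print(f"1 g of matter ~ {energy_1g:.2e} J, about {tnt_kilotons:.0f} kt of TNT")
```

One gram of matter, fully converted, comes to roughly 9 × 10¹³ joules, on the order of twenty kilotons of TNT.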
In 1934, Gregory Breit and John Wheeler played with the idea that matter could indeed be created from light by smashing together two photons. They might not be able to organize molecules into apples or pears, but the photon car-wreck could create an electron and a positron. Eighty years later, physicists at Imperial College London have finally put together plans for a “photon-photon collider” — a machine that could actually force two photons to crash into one another. Sort of.
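The energy bar for the Breit–Wheeler process is set by the electron’s rest energy: in a head-on collision of two equal-energy photons, each must carry at least that much to make an electron–positron pair. A quick check using standard CODATA constants (not figures from the article):

```python
# Minimum photon energy for Breit-Wheeler pair production, head-on and
# equal-energy: each photon needs at least the electron rest energy.
m_e = 9.1093837e-31       # electron mass, kg (CODATA)
c = 299_792_458           # speed of light, m/s
eV = 1.602176634e-19      # joules per electronvolt

rest_energy_MeV = m_e * c ** 2 / eV / 1e6
print(f"electron rest energy: {rest_energy_MeV:.3f} MeV")  # ~0.511 MeV
```

Half a million electron-volts per photon is gamma-ray territory, which is part of why the experiment is so hard to arrange.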
It’s not as easy as you’d think. Photons belong to a class of subatomic particles known as bosons, and bosons are known for their peculiar ability to share the same quantum state. It’s like being able to occupy the same place at the same time. In reality, Star Wars light sabers wouldn’t smash into each other; they’d pass right through one another with a harmless swish, like good light beams should. Fermions, particles like protons and electrons, aren’t so fortunate. They take up their own space and they don’t like to share it. It’s easy to smash together two particles that take up their own space. It’s not so easy to crash together two particles that can pass right through each other… like a light wave.
Photons do interact with charged particles like electrons and protons, though. Occasionally, a photon can waffle into a particle-antiparticle pair like an electron and positron, allowing one of the mates in this subatomic pair to capture another photon. As soon as the electron and positron recombine (Pow!), they release two photons. This all happens in fractions of a second, and it appears as though two photons just bounced off each other. This “photon-photon scattering” has been observed in the Large Hadron Collider (LHC) at CERN in Switzerland, but nobody’s been able to systematically force the interaction.
A research team led by Oliver Pike at Imperial College London has figured out a way to increase the number of photon-derived electron-positron pairs that can interact with other particles. Their article in Nature Photonics proposes building a machine called the “photon-photon collider,” which works in several steps, starting with accelerating electrons with a laser and pounding them into a bar of gold. This first step creates a beam of light immensely more intense than sunlight. The machine would aim this beam at a hohlraum, a hollow gold cavity, to create a thermal radiation field that releases another beam of light to intersect with the first. The electrons and positrons formed during the interaction of the light beams would be detected as they left the hohlraum, and should be produced in far greater numbers than the occasional chance encounters seen in the CERN LHC.
The design is out; now the researchers have to build the machine and test it.
“Although the theory is conceptually simple, it has been very difficult to verify experimentally,” Pike said. “We were able to develop the idea for the collider very quickly, but the experimental design we propose can be carried out with relative ease and with existing technology… The race to carry out and complete the experiment is on!”
It’s a massive challenge for physicists, but once they conquer it, their accomplishment will still come several years too late. Organic machines have long been capable of taking light energy and converting it into matter. In fact, these organic machines are capable of organizing molecules into the shapes of apples and pears and figs. These organic machines are called “trees” and they are in many respects similar to the organic machines called “raspberry bushes” that provide delicious materials for jams and ice cream topping.
You’d think that we could throw light, carbon and water together into some sort of food-like substance too, but there will be no vending machines producing foods that were “photo-synthesized” any time soon. As great as our technological advances have proved, manipulating subatomic particles is still the expertise of the Creator alone.

Lobotomy is not a prerequisite to faith.

QUOTE from Russ Pierson, GreenFaith Fellow

Lobotomy is not a prerequisite to faith. 

You should probe, poke, investigate this incredible universe (or multi-verse!) in which you find yourself. 

Consider the wisdom of science even as you consider the wisdom of Scripture. You should challenge God and challenge yourself. 

Explore the mysteries, hold things in tension, embrace paradox.

God is bigger than you think -- or God is not God at all.

Alternatives To Elusive Dark Matter

from KHouse Newsletter January 2014

The Universe is made up primarily of a mysterious substance called dark matter, a mesh, a spider web of space. That’s what popular science says, at least. Astrophysicists insist that dark matter is there; the indirect evidence is substantial. Yet, after multiple millions of dollars have been spent on trying to track down the actual physical particles that make up dark matter, science continues to come up empty. Maybe the astrophysicists need to try another approach in order to finally detect the elusive substance, or maybe they just need to adjust their current models about the nature of light, time, and the Universe itself.
It all started with the spinning of distant galaxies. A Swiss astrophysicist named Fritz Zwicky postulated in the 1930s that invisible stuff he called “dunkle Materie” hid inside the galaxies he was studying, because they spun too fast to contain only the visible stars and gas he could account for. Scientists observe the same puzzling phenomenon today. Based on spectral line data, it appears that the outer rims of spiral galaxies are moving at the same rate as the insides of the galaxies – and that doesn’t make any sense. The galaxies should fly apart from spinning that fast.
This problem caused Zwicky to hypothesize the existence of dunkle Materie—large amounts of invisible material that provides the gravitational pull to hold the galaxies together. It’s what physicists think dark matter is – neutral, uncharged particles that interact with visible material by massive gravitational force.
There is also the matter of gravitational lensing. Starlight through space is often seen to bend and warp around unseen massive objects. The Hubble space telescope can often produce two or three images of the same galaxy in a single picture. The individual images may be different sizes but contain the same features, as though space were a hall of mirrors. As beams of light from the same galaxy bend around objects in space, they reach the earth from slightly different angles, giving the appearance of coming from different locations. Clumps of invisible dark matter between us and these galaxies are blamed for causing the distortions.
Cosmologists have a variety of reasons for embracing the idea of dark matter. The problem is that its existence is inferred from physicists’ current interpretations of data; nobody has been able to directly detect the stuff yet. The physicists are confident that dark matter comes in the form of a particle, a weakly interacting massive particle (WIMP) that creates gravitational effects but otherwise ignores normal visible particles. The trick is to get some WIMPs to show themselves by smashing visible matter into them and making them say, “Ow!”
Rick Gaitskell of Brown University has been hunting for dark matter for some 24 years and heads the team that turned on the Large Underground Xenon (LUX) experiment in South Dakota. A mile underground in the Homestake Gold Mine, the LUX detector holds a tank of liquid xenon watched by ultra-sensitive light sensors. If a WIMP smacks into one of the xenon atoms, the collision should give off a little flash of light that the sensors can catch and record.
So far, though, the LUX hasn’t found anything. Gaitskell told Popular Science this past autumn, “Every experiment has reported essentially negative results. No one even knows for sure if the d- stuff really exists.” If dark matter really does make up five-sixths of the matter in the Universe, it certainly does an excellent job of hiding itself.
A Dark Herring
Of course, dark matter may not exist after all. In his own PowerPoint slides on dark matter posted on the Brown University website, Gaitskell tells his students, “It has been a Problem in Cosmology that astrophysical assumptions often need to be made to interpret data/extra parameters.” It’s true. Scientists create models they use to interpret the information that space gives them. The models are based on certain assumptions, and if those assumptions are incorrect, the data gets interpreted wrongly.
Possible Alternatives
If dark matter is just an illusion, though, what is causing the observed phenomena? What does hold spinning spiral galaxies together and cause the bending of light through allegedly empty space?
First of all, it is odd that so many spiral galaxies appear to have the same issue – the matter across their diameters appears to be rotating at the same rate – all without flying apart. It may be that the light information coming from them is being interpreted incorrectly. The redshifts that are treated as a sort of Doppler effect – light appearing to lengthen as its source moves away from us – may have another explanation.
In the 1970s, William Tifft at the University of Arizona noted that his redshift measurements didn’t show gradual, smooth shifting to the red. Instead, they were quantized – the measurements made small jumps as though going up a flight of stairs. Two astronomers from Scotland, Guthrie and Napier, tried to disprove Tifft’s quantized redshift ideas in the 1990s, but they ended up confirming his results.
Professor José Senovilla, Marc Mars and Raül Vera of the University of the Basque Country, Bilbao, and the University of Salamanca, Spain, proposed in 2011 that the redshift isn’t caused by a Doppler shift but by the slowing of Time itself. Dark energy supposedly permeates the Universe, causing the outer edges of space to expand at an accelerating rate. That’s the wrong way to interpret the light wave data, these scientists suggest. They argue that the better explanation is the opposite: that Time has been slowing down, and we see its effect in the apparent stretching of light waves. The light reaching the Hubble telescope from distant galaxies might not tell us as much about the rate the galaxies are spinning as about the nature of Time itself.
The speed of light itself may be slowing. Physicists insist that light speed is a constant, but that determination may have been premature. Any drop would be very slow, but a variety of papers written in the past several decades suggest light speed is not a constant after all. Paul Davies, currently of Arizona State, argued in 2002 that the speed of light may be slowing down, and physicist Barry Setterfield has written extensively on the subject.
Yves-Henri Sanejouand from the University of Nantes in France in 2010 showed a possible slowing of the speed of light by about 0.02–0.03 m/s per year. That’s not much, but it demonstrates the real possibility of a much faster speed of light in the past. “The constancy of the speed of light is one of the fundamental pillars of contemporary physics,” explains Sanejouand, “so the possibility that it may instead vary (even at a slow rate) has far reaching consequences (although mostly on the theoretical side).”
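To put Sanejouand’s figure in perspective, a drop of 0.02–0.03 m/s per year can be compared against the accepted value of c (the midpoint of the quoted range is used here for illustration):

```python
# How large is a 0.02-0.03 m/s per year drop relative to c itself?
c = 299_792_458            # m/s, the defined speed of light
drop_per_year = 0.025      # m/s per year, midpoint of the quoted range

fractional_change = drop_per_year / c
print(f"fractional change: {fractional_change:.1e} per year")  # ~8.3e-11
```

A fractional change of roughly 8 parts in 10¹¹ per year explains why, as Sanejouand notes, the consequences are mostly theoretical.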
It may also be that spiral galaxies haven’t had time to fly apart. If the speed of light has been slowing, methods for dating the age of the Universe might be way off. The age of the Universe itself may have been overestimated.
While dark matter is credited with causing gravitational lensing, Anirudh Pradhan of Hindu P. G. College in India suggests that the observed bending of light might be caused by the refraction of light as it hits the gases around various astronomical bodies. We see the refraction of light all the time in everyday life. The fisherman who goes to stab a fish in the water cannot aim directly at the image of the fish, because the light changes direction as it leaves the denser water and enters the less dense air. The refraction of light makes the fish look like it’s in a spot where it isn’t. The same thing can happen in space. As light shoots through the vacuum of space, it hits clouds of gases that cause it to change direction, so that when it reaches us, multiple images of various sizes are produced – and we can’t be certain of where they actually originated.
The nature of the Universe is an involved mystery, a deep subject that requires a great deal more study. Yet, the hunt for dark matter highlights the importance of examining one’s assumptions in the pursuit of scientific truth. Assumptions are required to interpret data, but a great deal of time and money can be spent to prove incorrect interpretations when the underlying assumptions are faulty.
Further Reading
Is Light Slowing Down?
— Optics and Photonics Focus

The Mystery Rock on Mars

from KHouse January 2014
A mysterious rock said to resemble a “jelly donut” suddenly appeared, apparently out of nowhere, in front of the Opportunity Mars rover. It is “like nothing we’ve ever seen before,” according to NASA scientists.

Experts said they were “completely confused” by both the origins and makeup of the object. (The rock is currently being analyzed by the remote diagnostic equipment onboard Opportunity.)
Astronomers noticed the new rock after it “appeared” without any explanation on an outcropping that had been empty just days before. By sheer coincidence, the rover had been stationary for about a month due to bad weather, and NASA scientists had been looking at the area in detail since it had nowhere else to go. The Jet Propulsion Laboratory (JPL) in Pasadena, California is doing the analysis of the Martian surface.
NASA issued a Mars status report entitled “encountering a surprise,” and the head scientist for the Opportunity mission, Steve Squyres, was quoted as saying that Mars “keeps throwing new things at us.”
Squyres said that two pictures taken on the Martian surface just 12 Martian days apart first showed an empty landscape, then “We saw this rock just sitting here. It looks white around the edge in the middle and there’s a low spot in the center that’s dark red—it looks like a jelly doughnut.
“And it appeared, just plain appeared at that spot—and we haven’t ever driven over that spot.”
Squyres said his team had two theories on how the rock got there—that there’s “a smoking hole in the ground somewhere nearby” and it was caused by a meteor or that it was “somehow flicked out of the ground by a wheel” as the rover went by.
“We had driven a meter or two away from here, and I think the idea that somehow we mysteriously flicked it with a wheel is the best explanation,” Squyres said.
However, the riddle wrapped in a puzzle got even stranger once the rover took a deeper look at the rock. Squyres explained: “We are as we speak situated with the rover’s instruments deployed making measurements of this rock.
“We’ve taken pictures of both the doughnut and jelly parts, and then got the first data on the composition of the jelly yesterday.
“It’s like nothing we’ve ever seen before,” he said. “It’s very high in sulphur, it’s very high in magnesium and it’s got twice as much manganese as we’ve ever seen in anything on Mars.
“I don’t know what any of this means. We’re completely confused, and everyone in the team is arguing and fighting (over what it means).
“That’s the beauty of this mission… what I’ve realized is that we will never be finished. There will always be something tantalizing, something wonderful just beyond our reach that we didn’t quite get to—and that’s the nature of exploration.”
While its twin rover Spirit lost contact with Earth and was declared “dead” in 2010, Opportunity has now roamed the planet far in excess of its originally planned three-month mission. NASA said that, with a maximum speed of just 0.05 mph, as of “Sol 3547” (15 January 2014) Opportunity had covered just over 24 miles (38 km).
For Further Reading
Bertha, M. (2014, January 20). NASA scientists are arguing about a ‘jelly donut’ rock they found on Mars.
Retrieved from Philly.com: http://www.philly.com/philly/blogs/trending/NASA-scientists-are-arguing-about-a-weird-rock-they-found-on-Mars.html
CNN. (2014, January 21). Rock Mysteriously Appears in Front of the Mars Opportunity Rover.
Retrieved from YouTube: https://www.youtube.com/watch?v=tkDBIkYoVJw

Landau, E. (2014, January 21). Mystery rock spotted on Mars.
Retrieved from CNN.com: http://www.cnn.com/2014/01/20/tech/innovation/mars-mystery-rock/

The Crisis of the U.S. Middle Class

From George Friedman, Founder and Chairman of Stratfor, a leader in the field of global intelligence.
Unemployment in the United States is not a problem in the same sense that it is in Europe because it does not pose a geopolitical threat. The United States does not face political disintegration from unemployment, whatever the number is. Europe might.
At the same time, I would agree that the United States faces a potentially significant but longer-term geopolitical problem deriving from economic trends. The threat to the United States is the persistent decline in the middle class’ standard of living, a problem that is reshaping the social order that has been in place since World War II and that, if it continues, poses a threat to American power.
The Crisis of the American Middle Class
The median household income of Americans in 2011 was $49,103. Adjusted for inflation, the median income is just below what it was in 1989 and is $4,000 less than it was in 2000. Take-home income is a bit less than $40,000 after Social Security and state and federal taxes are deducted. That means a monthly income, per household, of about $3,300. It is important to bear in mind that half of all American households earn less than this. It is also vital to consider not the difference between 1990 and 2011, but the difference between the 1950s and 1960s and the 21st century. This is where the difference in the meaning of middle class becomes most apparent.
In the 1950s and 1960s, the median income allowed you to live with a single earner — normally the husband, with the wife typically working as homemaker — and roughly three children. It permitted the purchase of modest tract housing, one late model car and an older one. It allowed a driving vacation somewhere and, with care, some savings as well. I know this because my family was lower-middle class, and this is how we lived, and I know many others in my generation who had the same background. It was not an easy life and many luxuries were denied us, but it wasn’t a bad life at all.
Someone earning the median income today might just pull this off, but it wouldn’t be easy. Assuming that he did not have college loans to pay off but did have two car loans totaling $700 a month, and that he could buy food, clothing and cover his utilities for $1,200 a month, he would have $1,400 a month for mortgage, real estate taxes and insurance, plus some funds for fixing the air conditioner and dishwasher. At a 5 percent mortgage rate, that would allow him to buy a house in the $200,000 range. He would get a tax refund from his deductions, but that would go to pay credit card bills from Christmas presents and emergencies. It could be done, but not easily, and with great difficulty in major metropolitan areas. And if his employer didn’t cover health insurance, the $4,000–5,000 cost for three or four people would severely strain his budget. And of course, he would need $20,000–40,000 for a down payment and closing costs on his home. There would be little else left over for a week at the seashore with the kids.
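The budget arithmetic above can be sketched with a standard amortization formula. The $400-a-month carve-out for taxes and insurance is my illustrative assumption; the essay lumps them together in the $1,400 housing figure:

```python
# How large a loan can the essay's $1,400/month housing budget support?
def max_loan(monthly_payment, annual_rate, years):
    """Largest principal a fixed monthly payment can amortize."""
    r = annual_rate / 12
    n = years * 12
    return monthly_payment * (1 - (1 + r) ** -n) / r

housing_budget = 1400      # mortgage + taxes + insurance, per the essay
taxes_insurance = 400      # assumed split for taxes and insurance
loan = max_loan(housing_budget - taxes_insurance, 0.05, 30)
print(f"supportable 30-year loan at 5%: ${loan:,.0f}")  # roughly $186,000
```

Add a down payment on top of a loan in that neighborhood and you land in the essay’s $200,000 house range.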
And this is for the median. Those below him — half of all households — would be shut out of what is considered middle-class life, with the house, the car and the other associated amenities. Those amenities shift upward on the scale for people with at least $70,000 in income. The basics might be available at the median level, given favorable individual circumstance, but below that life becomes surprisingly meager, even in the range of the middle class and certainly what used to be called the lower-middle class.
The Expectation of Upward Mobility
I should pause and mention that this was one of the fundamental causes of the 2007–2008 subprime lending crisis. People below the median took out loans with deferred interest with the expectation that their incomes would continue the rise that had been traditional since World War II. The caricature of the borrower as irresponsible misses the point. The expectation of rising real incomes was built into American culture, and many assumed, based on that history, that the rise would resume within five years. When it didn’t, they were trapped, but given history, they were not making an irresponsible assumption.
American history was always filled with the assumption that upward mobility was possible. The Midwest and West opened land that could be exploited, and the massive industrialization in the late 19th and early 20th centuries opened opportunities. There was a systemic expectation of upward mobility built into American culture and reality.
The Great Depression was a shock to the system, and it wasn’t solved by the New Deal, nor even by World War II alone. The next drive for upward mobility came from post-war programs for veterans, of whom there were more than 10 million. These programs were instrumental in creating post-industrial America, by creating a class of suburban professionals. There were three programs that were critical:
1. The GI Bill, which allowed veterans to go to college after the war, becoming professionals frequently several notches above their parents.
2. The part of the GI Bill that provided federally guaranteed mortgages to veterans, allowing low and no down payment mortgages and low interest rates to graduates of publicly funded universities.
3. The federally funded Interstate Highway System, which made access to land close to but outside of cities easier, enabling both the dispersal of populations on inexpensive land (which made single-family houses possible) and, later, the dispersal of business to the suburbs.
There were undoubtedly many other things that contributed to this, but these three not only reshaped America but also created a new dimension to the upward mobility that was built into American life from the beginning. Moreover, these programs were all directed toward veterans, to whom it was acknowledged a debt was due, or were created for military reasons (the Interstate Highway System was funded to enable the rapid movement of troops from coast to coast, which during World War II was found to be impossible). As a result, there was consensus around the moral propriety of the programs.
The subprime fiasco was rooted in the failure to understand that the foundations of middle class life were not under temporary pressure but something more fundamental. Where a single earner could support a middle class family in the generation after World War II, it now took at least two earners. That meant that the rise of the double-income family corresponded with the decline of the middle class. The lower you go on the income scale, the more likely you are to be a single mother. That shift away from social pressure for two parent homes was certainly part of the problem.
Re-engineering the Corporation
But there was, I think, the crisis of the modern corporation. Corporations provided long-term employment to the middle class. It was not unusual to spend your entire life working for one. Working for a corporation, you received yearly pay increases, either as a union or non-union worker. The middle class had both job security and rising income, along with retirement and other benefits. Over the course of time, the culture of the corporation diverged from the realities, as corporate productivity lagged behind costs and the corporations became more and more dysfunctional and ultimately unsupportable. In addition, the corporations ceased focusing on doing one thing well and instead became conglomerates, with a management frequently unable to keep up with the complexity of multiple lines of business.
For these and many other reasons, the corporation became increasingly inefficient, and in the terms of the 1980s, they had to be re-engineered — which meant taken apart, pared down, refined and refocused. And the re-engineering of the corporation, designed to make them agile, meant that there was a permanent revolution in business. Everything was being reinvented. Huge amounts of money, managed by people whose specialty was re-engineering companies, were deployed. The choice was between total failure and radical change. From the point of view of the individual worker, this frequently meant the same thing: unemployment. From the view of the economy, it meant the creation of value whether through breaking up companies, closing some of them or sending jobs overseas. It was designed to increase the total efficiency, and it worked for the most part.
This is where the disjuncture occurred. From the point of view of the investor, they had saved the corporation from total meltdown by redesigning it. From the point of view of the workers, some retained the jobs that they would have lost, while others lost the jobs they would have lost anyway. But the important thing is not the subjective bitterness of those who lost their jobs, but something more complex.
As the permanent corporate jobs declined, more people were starting over. Some of them were starting over every few years as the agile corporation grew more efficient and needed fewer employees. That meant that if they got new jobs it would not be at the munificent corporate pay rate but at near entry-level rates in the small companies that were now the growth engine. As these companies failed, were bought or shifted direction, they would lose their jobs and start over again. Wages didn’t rise for them and for long periods they might be unemployed, never to get a job again in their now obsolete fields, and certainly not working at a company for the next 20 years.
The restructuring of inefficient companies did create substantial value, but that value did not flow to the now laid-off workers. Some might flow to the remaining workers, but much of it went to the engineers who restructured the companies and the investors they represented. Statistics reveal that, since 1947 (when the data was first compiled), corporate profits as a percentage of gross domestic product are now at their highest level, while wages as a percentage of GDP are now at their lowest level. It was not a question of making the economy more efficient — it did do that — it was a question of where the value accumulated. The upper segment of the wage curve and the investors continued to make money. The middle class divided into a segment that entered the upper-middle class, while another faction sank into the lower-middle class.
American society on the whole was never egalitarian. It always accepted that there would be substantial differences in wages and wealth. Indeed, progress was in some ways driven by a desire to emulate the wealthy. There was also the expectation that while others received far more, the entire wealth structure would rise in tandem. It was also understood that, because of skill or luck, others would lose.
What we are facing now is a structural shift, in which the middle class’ center, not because of laziness or stupidity, is shifting downward in terms of standard of living. It is a structural shift that is rooted in social change (the breakdown of the conventional family) and economic change (the decline of traditional corporations and the creation of corporate agility that places individual workers at a massive disadvantage).
The inherent crisis rests in an increasingly efficient economy and a population that can’t consume what is produced because it can’t afford the products. This has happened numerous times in history, but the United States, excepting the Great Depression, was the counterexample.
Obviously, this is a massive political debate, save that political debates identify problems without clarifying them. In political debates, someone must be blamed. In reality, these processes are beyond even the government’s ability to control. On one hand, the traditional corporation was beneficial to the workers until it collapsed under the burden of its costs. On the other hand, the efficiencies created threaten to undermine consumption by weakening the effective demand among half of society.
The Long-Term Threat
The greatest danger is one that will not be faced for decades but that is lurking out there. The United States was built on the assumption that a rising tide lifts all ships. That has not been the case for the past generation, and there is no indication that this socio-economic reality will change any time soon. That means that a core assumption is at risk. The problem is that social stability has been built around this assumption — not on the assumption that everyone is owed a living, but the assumption that on the whole, all benefit from growing productivity and efficiency.
If we move to a system where half of the country is either stagnant or losing ground while the other half is surging, the social fabric of the United States is at risk, and with it the massive global power the United States has accumulated. Other superpowers such as Britain or Rome did not have the idea of a perpetually improving condition of the middle class as a core value. The United States does. If it loses that, it loses one of the pillars of its geopolitical power.
The left would argue that the solution is for laws to transfer wealth from the rich to the middle class. That would increase consumption but, depending on the scope, would threaten the amount of capital available to investment by the transfer itself and by eliminating incentives to invest. You can’t invest what you don’t have, and you won’t accept the risk of investment if the payoff is transferred away from you.
The agility of the American corporation is critical. The right will argue that allowing the free market to function will fix the problem. The free market doesn’t guarantee social outcomes, merely economic ones. In other words, it may give more efficiency on the whole and grow the economy as a whole, but by itself it doesn’t guarantee how wealth is distributed. The left cannot be indifferent to the historical consequences of extreme redistribution of wealth. The right cannot be indifferent to the political consequences of a middle-class life undermined, nor can it be indifferent to half the population’s inability to buy the products and services that businesses sell.
The most significant actions made by governments tend to be unintentional. The GI Bill was designed to limit unemployment among returning servicemen; it inadvertently created a professional class of college graduates. The VA loan was designed to stimulate the construction industry; it created the basis for suburban home ownership. The Interstate Highway System was meant to move troops rapidly in the event of war; it created a new pattern of land use: suburbia.
It is unclear how the private sector can deal with the problem of pressure on the middle class. Government programs frequently fail to fulfill even minimal intentions while squandering scarce resources. The United States has been a fortunate country, with solutions frequently emerging in unexpected ways.
It would seem to me that unless the United States gets lucky again, its global dominance is in jeopardy. Considering its history, the United States can expect to get lucky again, but it usually gets lucky when it is frightened. And at this point it isn’t frightened but angry, believing that if only its own solutions were employed, this problem and all others would go away. I am arguing that the conventional solutions offered by all sides do not yet grasp the magnitude of the problem — that the foundation of American society is at risk — and therefore all sides are content to repeat what has been said before.

People who are smarter and luckier than I am will have to craft the solution. I am simply pointing out the potential consequences of the problem and the inadequacy of all the ideas I have seen so far.

Making Energy With Salt and Thorium

from KHouse
Mount Storm Power Station in Grant County, West Virginia, generates almost 1600 megawatts of electricity from burning West Virginia’s famous coal. The power plant is not the only source of energy on the mountain, though. From the shores of Mount Storm Lake—the reservoir used to cool the power station—at least thirty large wind turbines can be seen turning in the high mountaintop winds. The NedPower Mount Storm wind farm employs a total of 132 wind turbines running along 12 miles of mountaintop, each with a 2 megawatt capacity.
One mountain ridge offers two methods of producing energy, one renewable and one not-so-much, but at 1600 MW, the coal-fired power station offers six times the energy production of those 132 wind turbines. Replacing fossil fuels with renewable energy isn’t as simple as one would hope.
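The capacity gap on that one ridge is easy to check with back-of-the-envelope arithmetic. The figures below are the nameplate capacities cited above; actual wind output would be lower still, since turbines rarely run at full capacity:

```python
# Nameplate capacity comparison for Mount Storm's ridge,
# using the figures cited in the article.
coal_mw = 1600          # Mount Storm coal-fired power station
turbines = 132          # NedPower Mount Storm wind farm
mw_per_turbine = 2      # capacity of each turbine

wind_mw = turbines * mw_per_turbine
print(f"Wind farm nameplate capacity: {wind_mw} MW")          # 264 MW
print(f"Coal plant advantage: {coal_mw / wind_mw:.1f}x")      # ~6.1x
```

That ratio is the "six times" in the paragraph above, and it assumes the wind farm is producing at full rated power, which it almost never is.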
One of the greatest challenges of our times is to find sources of inexpensive energy that don’t leave us dependent on foreign nations for fuel, don’t create environmental hazards that poison our water or give our dogs extra eyes, and won’t run dry on us during the next few generations. The human race has plenty of ideas, from molten salt to liquid thorium reactors. The question is… will alternative energies be nipped before they can bloom, or will expenses, inherent drawbacks, and the ubiquitous red tape choke the life out of them?
Energy Consistency
The quest for renewable energy has caused wind farms with their multitude of turbines to poke up across America like porcupine quills. Solar plants abound in Germany and the rest of the world is watching. The problem with solar and wind, however, is that they offer little consistency. The sun goes down. The wind stops blowing. The technology is improving and the costs are dropping, but wind and solar still do not produce enough efficient, consistent energy to begin to replace the likes of Mount Storm’s coal-fired power plant.
Even Germany’s vast solar success may not be as sunny as it’s been sold to the world. In October, a scathing analysis of Germany’s use of solar power appeared on the Forbes website. Ryan Carlyle pointed to the high costs of solar and the country’s increasing dependence on coal power during the times the sun doesn’t shine. Germany generates more solar power than it needs in summer, putting non-solar power suppliers in financial straits, and then faces exceptionally expensive power in winter. Carlyle stated that Germans pay $0.34 per kWh, one of the highest rates in the world, and that 300,000 German households lose their electricity every year because they can’t pay their high energy bills. (Most Americans still pay less than $0.12/kWh, according to the U.S. Energy Information Administration.)
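To put those per-kWh rates in household terms, here is a rough annual-bill comparison. The rates come from the paragraph above; the 3,500 kWh/year consumption figure is an illustrative assumption, not a number from the article:

```python
# Rough annual electricity-bill comparison at the cited rates.
# annual_kwh is an assumed, illustrative household consumption figure.
german_rate = 0.34   # USD per kWh (cited for Germany)
us_rate = 0.12       # USD per kWh (typical U.S. rate, per EIA)
annual_kwh = 3500    # assumed household consumption per year

print(f"German household: ${german_rate * annual_kwh:,.0f}/year")
print(f"U.S. household:   ${us_rate * annual_kwh:,.0f}/year")
```

At those rates the same household pays nearly three times as much in Germany as in the United States.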
The Light Switch
It wouldn’t hurt the human race to turn off the lights more often, to ride bikes, to put on a sweater when the house feels cool. At the same time, people are filled to the brim with bright ideas. The production of clean, safe, readily available energy is a problem, but not because human ingenuity hasn’t developed a wide range of potential solutions:
Molten Salt
Generally when we think of solar power, we think of photovoltaic cells, which convert the photons from the sun into electrons and channel them into an electric current. Solar thermal plants use a different means of harnessing the sun. They employ big, curved, mirror-lined troughs to focus sunlight and create heat that turns turbines. Think of starting a fire with a magnifying glass—row after row over almost 2,000 acres.
The Solana solar farm recently opened 70 miles southwest of Phoenix, Arizona—1920 acres of parabolic trough mirrors that can generate 280 MW of electricity. Solana is the first solar farm to store its thermal energy in molten salt, offering a detour around the problem that solar farms consistently face: the sun hides at night.
Systems that depend on photovoltaics have to use expensive, somewhat inefficient batteries if they are to store unused energy created during the day. Solana’s use of molten salt—a mixture of sodium nitrate, potassium nitrate, and calcium nitrate—provides a way of prolonging the plant’s energy production long after sundown. After the sun heats the liquid salt to 566°C (1,051°F), it is stored in well-insulated heat tanks where it can remain for hours until needed to create the steam that turns the turbines that generate energy.
Still, the setup is expensive. The technology works, but at 1 MW per 6.9 acres, it will be a long time before Solana pays off its $2 billion price tag.
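The physics behind molten-salt storage is simple sensible heat: energy stored equals mass times specific heat times temperature difference. Only the 566°C hot-salt temperature comes from the article; the specific heat, salt mass, and cold-tank temperature below are illustrative assumptions for the sketch:

```python
# Sensible-heat storage sketch: Q = m * cp * dT.
# Only t_hot (566 C) is from the article; cp, mass_kg, and t_cold
# are assumed, illustrative values.
cp = 1.5            # kJ/(kg*K), typical nitrate-salt specific heat (assumed)
mass_kg = 1.0e6     # 1,000 tonnes of salt in the tank (hypothetical)
t_hot, t_cold = 566.0, 290.0   # deg C; cold-tank temperature assumed

q_kj = mass_kg * cp * (t_hot - t_cold)   # stored thermal energy in kJ
q_mwh = q_kj / 3.6e6                     # 1 MWh = 3.6e6 kJ
print(f"Stored thermal energy: {q_mwh:.0f} MWh (thermal)")
```

Under those assumptions, a thousand tonnes of hot salt holds on the order of a hundred megawatt-hours of heat, which is why a plant like Solana can keep its turbines spinning for hours after sundown.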
Energy Storage
Across the world, various means are used to store wind and solar energy so that a surplus of energy won’t be wasted, to squirrel it away until a later time when demand is high. Batteries. Molten salt. Pumping air into underground caverns and releasing it to turn turbines when it’s needed. Pumped Storage Hydroelectric plants generate electricity by directing water downhill through turbines. When demand is low, these plants pump water back uphill to store for use when demand increases again.
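The pumped-storage scheme described above comes down to gravitational potential energy, discounted by round-trip losses in the pumps and turbines. The reservoir size, height difference, and efficiency below are illustrative assumptions, not figures from the article:

```python
# Pumped-storage hydro sketch: E = m * g * h, times round-trip efficiency.
# Reservoir volume, head, and efficiency are assumed, illustrative values.
g = 9.81                 # m/s^2, gravitational acceleration
volume_m3 = 1.0e6        # hypothetical upper-reservoir volume
density = 1000.0         # kg/m^3, water
head_m = 300.0           # height difference between reservoirs (assumed)
efficiency = 0.75        # typical round-trip efficiency (assumed)

e_joules = volume_m3 * density * g * head_m * efficiency
print(f"Recoverable energy: {e_joules / 3.6e9:.0f} MWh")   # 1 MWh = 3.6e9 J
```

Even a modest reservoir at that head stores hundreds of megawatt-hours, which is why pumped storage remains the workhorse of grid-scale energy storage.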
The ability to efficiently store energy is just as large an issue as producing it in the first place. As environmentally unfriendly as fossil fuels may be, their use in power plants is easy to control. Coal can always wait to be fired until everybody gets home from the beach.
Thorium Reactors
And then came nuclear. Nuclear power has offered another alternative to coal and petroleum products, one that is still highly controversial. Light-water nuclear reactors operate cleanly and inexpensively, producing plenty of energy without also producing much-maligned CO2 emissions. Of course, people get put off by the occasional meltdown that spews radiation into the air and water. There is also the huge issue of waste. The uranium-dioxide fuel rods must be changed out after only 3–5% of the uranium is used, forcing the disposal of highly radioactive material that will take multiple thousands of years to “cool.” The plutonium generated by light-water reactors also runs the risk of being swiped by disreputable groups for use in bombs destined for places like New York and Tel Aviv.
Blame it on the Cold War. We did not have to focus on light-water reactors with their associated problems. Back in the 1950s, though, producing plutonium as a by-product sounded like a good idea. Admiral Hyman Rickover wanted the U.S.S. Nautilus, the world’s first nuclear submarine, to get into the water as soon as possible, and the LWR was the most convenient choice at the time. The Nautilus was launched in 1954, and the world followed down the uranium path.
It didn’t have to be thus; other fuel sources could have been used. A successful liquid-fluoride thorium reactor was developed at Oak Ridge National Laboratory in Tennessee between 1959 and 1973, until the Nixon Administration shut it down because the reactor didn’t produce plutonium. These days, plutonium production is a huge risk factor.
Thorium is common on the planet and contains vast amounts of energy. It’s as common as coal with exponentially greater energy potential—and without the pollution coal causes. It requires a kick-start because it won’t start reacting on its own, and stockpiles of existing waste can be used to do the kick-starting. Once it gets going, thorium absorbs neutrons and transmutes through several steps into uranium-233, an excellent fuel source, without requiring the removal of partially used fuel rods.
The use of thorium as a liquid fuel avoids a number of the major problems that make light-water nuclear reactors as dangerous as they are. Rather than the high-pressure water that cools LWRs, thorium reactors are cooled with liquid fluoride salt at normal atmospheric pressure. The reactor can’t melt down in the conventional sense because its core is already molten salt in normal operation; if that salt leaked out, it would simply solidify. A thorium reactor produces a minute fraction of the waste that LWRs produce, and the waste breaks down over hundreds of years rather than tens of thousands. There is some argument about whether the uranium-233 could be stolen for use in weapons, but proponents argue it would be highly difficult to do so.
Various forms of red tape promise to keep thorium reactors on the back burner for decades in the United States, where conventional nuclear and the oil industry can put up a deep-pocketed fight. Outside the U.S., a variety of countries are already pursuing the thorium dream. Thirty-two countries were represented at the Thorium Energy Conference in Geneva, Switzerland in November, with notable attendees including CERN Director-General Rolf-Dieter Heuer, former International Atomic Energy Agency director Hans Blix, and Nobel Prize Laureate Carlo Rubbia. Thor Energy in Norway is already working on using thorium in existing reactors, and British nuclear scientists are contributing to thorium-reactor research in both Norway and India.
We have plenty of options for producing energy that doesn’t depend on finite oil resources in hostile foreign lands. We have options that don’t demand we pollute our watersheds or litter our horizons with windmills that depend on the inconsistent wind. The question is whether we’ll pursue the courses that will produce the most benefit, or whether we’ll get hung up in the bad politics of pushing solar panels on locations where the sun is absent half the year.

Mount Storm Power Station
— Dominion
NedPower Mount Storm
— Shell
Average Revenue per kWh by State
— Energy Information Administration
ThEC13 in Geneva at CERN—Success!
— Thorium Energy Conference
The Nuke That Might Have Been
— The Economist