BODY - MIND - SPIRIT ***

EXPLORE - EXCEL - ENJOY


Let Go - Let's Go is a member of the LightBeWeb EcoSystem


Wednesday, January 29, 2014

Alternatives To Elusive Dark Matter

from KHouse Newsletter January 2014

The Universe is made up primarily of a mysterious substance called dark matter, a mesh, a spider web of space. That’s what popular science says, at least. Astrophysicists insist that dark matter is there; the indirect evidence is substantial. Yet, after multiple millions of dollars have been spent on trying to track down the actual physical particles that make up dark matter, science continues to come up empty. Maybe the astrophysicists need to try another approach in order to finally detect the elusive substance, or maybe they just need to adjust their current models about the nature of light, time, and the Universe itself.
It all started with the spinning of distant galaxies. A Swiss astrophysicist named Fritz Zwicky postulated in the 1930s that invisible stuff he called “dunkle Materie” hid inside the galaxies he was studying, because they spun too fast to contain only the visible stars and gas he could account for. Scientists observe the same puzzling phenomenon today. Based on spectral line data, it appears that the outer rims of spiral galaxies are orbiting at the same speed as their inner regions – and that doesn’t make any sense. The galaxies should fly apart from spinning that fast.
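The mismatch can be put in numbers. Under ordinary Newtonian gravity, stars orbiting outside most of a galaxy’s visible mass should slow down with distance, following v = √(GM/r). A short Python sketch (with a round, illustrative visible mass of about a hundred billion suns – an assumption for the sketch, not a measurement of any particular galaxy) shows the predicted falloff that real rotation curves refuse to obey:

```python
import math

# Illustrative values, not measurements of any particular galaxy.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_visible = 2e41       # kg, roughly 1e11 solar masses of visible matter

def keplerian_speed(r_m):
    """Orbital speed if only the central visible mass supplies gravity."""
    return math.sqrt(G * M_visible / r_m)

kpc = 3.086e19  # meters per kiloparsec
for r_kpc in (5, 10, 20, 40):
    v = keplerian_speed(r_kpc * kpc) / 1000  # convert m/s -> km/s
    print(f"r = {r_kpc:2d} kpc -> predicted v = {v:6.1f} km/s")
# The prediction falls off as 1/sqrt(r); measured rotation curves instead
# stay roughly flat out to the rim -- the gap dark matter was invented to fill.
```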
This problem caused Zwicky to hypothesize the existence of dunkle Materie – large amounts of invisible material that provide the gravitational pull to hold the galaxies together. It’s what physicists think dark matter is – neutral, uncharged particles that interact with visible matter almost solely through gravity.
There is also the matter of gravitational lensing. Starlight traveling through space is often seen to bend and warp around unseen massive objects. The Hubble Space Telescope can often produce two or three images of the same galaxy in a single picture. The individual images may be different sizes but contain the same features, as though space were a hall of mirrors. As beams of light from the same galaxy bend around objects in space, they reach the earth from slightly different angles, giving the appearance of coming from different locations. Clumps of invisible dark matter between us and these galaxies are blamed for causing the distortions.
Cosmologists have a variety of reasons for embracing the idea of dark matter. The problem is that its existence is inferred from physicists’ current interpretations of data; nobody has been able to directly detect the stuff yet. The physicists are confident that dark matter comes in the form of a particle, a weakly interacting massive particle (WIMP) that creates gravitational effects but otherwise ignores normal visible particles. The trick is to get some WIMPs to show themselves by colliding visible matter with them and making them say, “Ow!”
Rick Gaitskell of Brown University has been hunting for dark matter for some 24 years and heads the team that turned on the Large Underground Xenon (LUX) experiment in South Dakota. A mile underground in the Homestake Gold Mine, the LUX detector watches a tank of liquid xenon with ultra-sensitive light sensors. If a WIMP smacks into one of the xenon atoms, it should give off a little flash of light that the detectors can catch and record.
So far, though, the LUX hasn’t found anything. Gaitskell told Popular Science this past autumn, “Every experiment has reported essentially negative results. No one even knows for sure if the d- stuff really exists.” If dark matter really does make up five-sixths of the matter in the Universe, it certainly does an excellent job of hiding itself.
A Dark Herring
Of course, dark matter may not exist after all. In his own PowerPoint slides on dark matter posted on the Brown University website, Gaitskell tells his students, “It has been a Problem in Cosmology that astrophysical assumptions often need to be made to interpret data/extra parameters.” It’s true. Scientists create models they use to interpret the information that space gives them. The models are based on certain assumptions, and if those assumptions are incorrect, the data gets interpreted wrongly.
Possible Alternatives
If dark matter is just an illusion, though, what is causing the observed phenomena? What does hold spinning spiral galaxies together and cause the bending of light through allegedly empty space?
First of all, it is odd that so many spiral galaxies appear to have the same issue – the matter across their diameters all appears to be rotating at the same rate – all without flying apart. It may be that the light information coming from them is interpreted incorrectly. The redshifts that are treated as a sort of Doppler effect – light appearing to lengthen as its source moves away from us – may have another explanation.
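For reference, the standard reading works like this: a spectral line observed at a longer wavelength than its laboratory value gives a redshift z, and for small z the inferred recession speed is simply c times z. A minimal sketch (the observed wavelength below is made up purely to illustrate the arithmetic):

```python
# How a redshift gets read as a recession speed (non-relativistic sketch).
c = 299_792_458  # speed of light, m/s

def doppler_velocity(lambda_observed, lambda_rest):
    """Recession speed inferred from a stretched spectral line, v ~ c*z."""
    z = (lambda_observed - lambda_rest) / lambda_rest
    return c * z

# Hydrogen-alpha line, rest wavelength 656.28 nm; the observed value here
# is an invented number chosen only to show the calculation.
v = doppler_velocity(658.47, 656.28)
print(f"inferred recession speed: {v / 1000:.0f} km/s")
```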
In the 1970s, William Tifft at the University of Arizona noted that his redshift measurements didn’t show gradual, smooth shifting to the red. Instead, they were quantized – the measurements made small jumps as though going up a flight of stairs. Two astronomers from Scotland, Guthrie and Napier, tried to disprove Tifft’s quantized redshift ideas in the 1990s, but they finally confirmed his results.
Professor José Senovilla, Marc Mars, and Raül Vera of the University of the Basque Country, Bilbao, and the University of Salamanca, Spain, proposed in 2011 that the redshift isn’t caused by a Doppler shift but by the slowing of Time itself. Dark energy supposedly permeates the Universe, causing the outer edges of space to expand at an accelerating rate. That’s the wrong way to interpret the light wave data, suggest these scientists. Senovilla and Vera argue that the better explanation is the opposite, that Time has been slowing down and we see its effect in the apparent stretching of light waves. The light reaching the Hubble telescope from distant galaxies might not tell us as much about the rate the galaxies are spinning as about the nature of Time itself.
The speed of light itself may be slowing. Physicists insist that light speed is a constant, but they may have made that determination prematurely. Any decline would be slow, but a variety of papers written in the past several decades suggest light speed is not a constant after all. Paul Davies, currently of Arizona State, argued in 2002 that the speed of light may be slowing down, and physicist Barry Setterfield has written extensively on the subject.
Yves-Henri Sanejouand from the University of Nantes in France in 2010 showed a possible slowing of the speed of light by about 0.02–0.03 m/s per year. That’s not much, but it demonstrates the real possibility of a much faster speed of light in the past. “The constancy of the speed of light is one of the fundamental pillars of contemporary physics,” explains Sanejouand, “so the possibility that it may instead vary (even at a slow rate) has far reaching consequences (although mostly on the theoretical side).”
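It is worth seeing just how small that drift is. A back-of-envelope extrapolation (assuming, purely for illustration, that Sanejouand’s rate stays constant over time – the paper itself makes no such claim):

```python
# Back-of-envelope: how much would c have changed at Sanejouand's rate?
c_today = 299_792_458.0   # m/s
drift = 0.025             # m/s per year, midpoint of the 0.02-0.03 range

for years in (100, 10_000, 1_000_000):
    delta = drift * years
    print(f"{years:>9,} yr ago: c higher by {delta:,.0f} m/s "
          f"({delta / c_today:.4%} of today's value)")
# Even a million years of steady drift shifts c by only ~25 km/s,
# under a hundredth of a percent of its present value.
```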
It may also be that spiral galaxies haven’t had time to fly apart. If the speed of light has been slowing, methods for dating the age of the Universe might be way off. The age of the Universe itself may have been overestimated.
While dark matter is credited with causing gravitational lensing, Anirudh Pradhan of Hindu P. G. College in India suggests that the observed bending of light might be caused by the refraction of light as it hits the gases around various astronomical bodies. We see the refraction of light all the time in everyday life. The fisherman who goes to stab a fish in the water cannot aim directly at the image of the fish, because the light changes direction as it leaves the denser water and hits the less dense air. The refraction of light makes the fish look like it’s in a spot that it isn’t. The same thing can happen in space. As light shoots through the vacuum of space, it hits clouds of gases that cause it to change direction, so that when it reaches us, multiple images of various sizes are produced – and we can’t be certain of where they actually originated.
The nature of the Universe is an involved mystery, a deep subject that requires a great deal more study. Yet, the hunt for dark matter highlights the importance of examining one’s assumptions in the pursuit of scientific truth. Assumptions are required to interpret data, but a great deal of time and money can be spent to prove incorrect interpretations when the underlying assumptions are faulty.
Further Reading
Is Light Slowing Down?
— Optics and Photonics Focus

Monday, January 27, 2014

The Mystery Rock on Mars

from KHouse January 2014
A mysterious rock said to resemble a “jelly donut” suddenly appeared, apparently out of nowhere, in front of the Opportunity Mars rover. It is “like nothing we’ve ever seen before,” according to NASA scientists.


Experts said they were “completely confused” by both the origins and makeup of the object. (The rock is currently being analyzed by the remote diagnostic equipment onboard Opportunity.)
Astronomers noticed the new rock after it “appeared” without any explanation on an outcropping that had been empty just days before. By sheer coincidence, the rover had been stationary for about a month due to bad weather, and NASA scientists had been examining the area in detail since it had nowhere else to go. The Jet Propulsion Laboratory (JPL) in Pasadena, California is doing the analysis of the Martian surface.
NASA issued a Mars status report entitled “encountering a surprise,” and the head scientist for the Opportunity mission, Steve Squyres, was quoted as saying that Mars “keeps throwing new things at us.”
Squyres said that two pictures taken on the Martian surface just 12 Martian days apart first showed an empty landscape, then “We saw this rock just sitting here. It looks white around the edge, and in the middle there’s a low spot that’s dark red – it looks like a jelly doughnut.
“And it appeared, just plain appeared at that spot—and we haven’t ever driven over that spot.”
Squyres said his team had two theories on how the rock got there—that there’s “a smoking hole in the ground somewhere nearby” and it was caused by a meteor or that it was “somehow flicked out of the ground by a wheel” as the rover went by.
“We had driven a meter or two away from here, and I think the idea that somehow we mysteriously flicked it with a wheel is the best explanation,” Squyres said.
However, the riddle wrapped in a puzzle got even stranger once the rover took a deeper look at the rock. Squyres explained: “We are as we speak situated with the rover’s instruments deployed making measurements of this rock.
“We’ve taken pictures of both the doughnut and jelly parts, and then got the first data on the composition of the jelly yesterday.
“It’s like nothing we’ve ever seen before,” he said. “It’s very high in sulphur, it’s very high in magnesium and it’s got twice as much manganese as we’ve ever seen in anything on Mars.
“I don’t know what any of this means. We’re completely confused, and everyone in the team is arguing and fighting (over what it means).
“That’s the beauty of this mission… what I’ve realized is that we will never be finished. There will always be something tantalizing, something wonderful just beyond our reach that we didn’t quite get to—and that’s the nature of exploration.”
While Spirit lost contact with Earth and was declared “dead” in 2010, Opportunity has now roamed the planet far in excess of its originally planned three-month mission. NASA said that, with a maximum speed of just 0.05 mph, as of “Sol 3547” (15 January 2014) Opportunity had covered just over 24 miles (38 km).
For Further Reading
Bertha, M. (2014, January 20). NASA scientists are arguing about a ‘jelly donut’ rock they found on Mars.
Retrieved from Philly.com: http://www.philly.com/philly/blogs/trending/NASA-scientists-are-arguing-about-a-weird-rock-they-found-on-Mars.html
CNN. (2014, January 21). Rock Mysteriously Appears in Front of the Mars Opportunity Rover.
Retrieved from YouTube: https://www.youtube.com/watch?v=tkDBIkYoVJw

Landau, E. (2014, January 21). Mystery rock spotted on Mars.
Retrieved from CNN.com: http://www.cnn.com/2014/01/20/tech/innovation/mars-mystery-rock/

Tuesday, January 14, 2014

The Crisis of the USA Middle Class

From George Friedman, founder and chairman of Stratfor, a company that is now a leader in the field of global intelligence.
Unemployment in the United States is not a problem in the same sense that it is in Europe because it does not pose a geopolitical threat. The United States does not face political disintegration from unemployment, whatever the number is. Europe might.
At the same time, I would agree that the United States faces a potentially significant but longer-term geopolitical problem deriving from economic trends. The threat to the United States is the persistent decline in the middle class’ standard of living, a problem that is reshaping the social order that has been in place since World War II and that, if it continues, poses a threat to American power.
The Crisis of the American Middle Class
The median household income of Americans in 2011 was $49,103. Adjusted for inflation, the median income is just below what it was in 1989 and is $4,000 less than it was in 2000. Take-home income is a bit less than $40,000 after Social Security and state and federal taxes are deducted. That means a monthly income, per household, of about $3,300. It is important to bear in mind that half of all American households earn less than this. It is also vital to consider not the difference between 1990 and 2011, but the difference between the 1950s and 1960s and the 21st century. This is where the difference in the meaning of middle class becomes most apparent.
In the 1950s and 1960s, the median income allowed you to live with a single earner — normally the husband, with the wife typically working as homemaker — and roughly three children. It permitted the purchase of modest tract housing, one late model car and an older one. It allowed a driving vacation somewhere and, with care, some savings as well. I know this because my family was lower-middle class, and this is how we lived, and I know many others in my generation who had the same background. It was not an easy life and many luxuries were denied us, but it wasn’t a bad life at all.
Someone earning the median income today might just pull this off, but it wouldn’t be easy. Assuming that he did not have college loans to pay off but did have two car loans totaling $700 a month, and that he could buy food and clothing and cover his utilities for $1,200 a month, he would have $1,400 a month for mortgage, real estate taxes and insurance, plus some funds for fixing the air conditioner and dishwasher. At a 5 percent mortgage rate, that would allow him to buy a house in the $200,000 range. He would get a tax refund from his deductions, but that would go to pay credit card bills from Christmas presents and emergencies. It could be done, but not easily, and with great difficulty in major metropolitan areas. And if his employer didn’t cover health insurance, the $4,000–5,000 in premiums for three or four people would severely strain his budget. And of course, he would need $20,000–40,000 for a down payment and closing costs on his home. There would be little left over for a week at the seashore with the kids.
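That “$200,000 range” figure checks out against the standard loan amortization formula. In the sketch below, the $1,100 monthly payment assumes roughly $300 of the $1,400 goes to property taxes and insurance – an allocation assumed for illustration, not a figure from the text:

```python
# What principal does a fixed monthly payment support at 5% over 30 years?
# Standard amortization formula: P = M * (1 - (1 + r)^-n) / r
def affordable_principal(monthly_payment, annual_rate=0.05, years=30):
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return monthly_payment * (1 - (1 + r) ** -n) / r

# $1,100/month toward principal and interest supports roughly $205,000:
print(f"${affordable_principal(1100):,.0f}")
```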
And this is for the median. Those below him — half of all households — would be shut out of what is considered middle-class life, with the house, the car and the other associated amenities. Those amenities shift upward on the scale for people with at least $70,000 in income. The basics might be available at the median level, given favorable individual circumstance, but below that life becomes surprisingly meager, even in the range of the middle class and certainly what used to be called the lower-middle class.
The Expectation of Upward Mobility
I should pause and mention that this was one of the fundamental causes of the 2007–2008 subprime lending crisis. People below the median took out loans with deferred interest in the expectation that their incomes would continue the rise that had been traditional since World War II. The caricature of the borrower as irresponsible misses the point. The expectation of rising real incomes was built into American culture, and many assumed on that basis that the rise would resume within five years. When it didn’t, they were trapped, but given history, they were not making an irresponsible assumption.
American history was always filled with the assumption that upward mobility was possible. The Midwest and West opened land that could be exploited, and the massive industrialization in the late 19th and early 20th centuries opened opportunities. There was a systemic expectation of upward mobility built into American culture and reality.
The Great Depression was a shock to the system, and it wasn’t solved by the New Deal, nor even by World War II alone. The next drive for upward mobility came from post-war programs for veterans, of whom there were more than 10 million. These programs were instrumental in creating post-industrial America, by creating a class of suburban professionals. There were three programs that were critical:
1. The GI Bill, which allowed veterans to go to college after the war, becoming professionals frequently several notches above their parents.
2. The part of the GI Bill that provided federally guaranteed mortgages to veterans, allowing low and no down payment mortgages and low interest rates to graduates of publicly funded universities.
3. The federally funded Interstate Highway System, which made access to land close to but outside of cities easier, enabling both the dispersal of populations on inexpensive land (which made single-family houses possible) and, later, the dispersal of business to the suburbs.
There were undoubtedly many other things that contributed to this, but these three not only reshaped America but also created a new dimension to the upward mobility that was built into American life from the beginning. Moreover, these programs were all directed toward veterans, to whom it was acknowledged a debt was due, or were created for military reasons (the Interstate Highway System was funded to enable the rapid movement of troops from coast to coast, which during World War II was found to be impossible). As a result, there was consensus around the moral propriety of the programs.
The subprime fiasco was rooted in the failure to understand that the foundations of middle class life were not under temporary pressure but something more fundamental. Where a single earner could support a middle class family in the generation after World War II, it now took at least two earners. That meant that the rise of the double-income family corresponded with the decline of the middle class. The lower you go on the income scale, the more likely you are to be a single mother. That shift away from social pressure for two parent homes was certainly part of the problem.
Re-engineering the Corporation
But there was also, I think, the crisis of the modern corporation. Corporations provided long-term employment to the middle class. It was not unusual to spend your entire life working for one. Working for a corporation, you received yearly pay increases, either as a union or non-union worker. The middle class had both job security and rising income, along with retirement and other benefits. Over the course of time, the culture of the corporation diverged from the realities, as corporate productivity lagged behind costs and the corporations became more and more dysfunctional and ultimately unsupportable. In addition, the corporations ceased focusing on doing one thing well and instead became conglomerates, with a management frequently unable to keep up with the complexity of multiple lines of business.
For these and many other reasons, corporations became increasingly inefficient, and in the parlance of the 1980s, they had to be re-engineered – taken apart, pared down, refined and refocused. And the re-engineering of the corporation, designed to make it agile, meant that there was a permanent revolution in business. Everything was being reinvented. Huge amounts of money, managed by people whose specialty was re-engineering companies, were deployed. The choice was between total failure and radical change. From the point of view of the individual worker, this frequently meant the same thing: unemployment. From the view of the economy, it meant the creation of value, whether through breaking up companies, closing some of them or sending jobs overseas. It was designed to increase total efficiency, and it worked for the most part.
This is where the disjuncture occurred. From the point of view of the investor, they had saved the corporation from total meltdown by redesigning it. From the point of view of the workers, some retained the jobs that they would have lost, while others lost the jobs they would have lost anyway. But the important thing is not the subjective bitterness of those who lost their jobs, but something more complex.
As the permanent corporate jobs declined, more people were starting over. Some of them were starting over every few years as the agile corporation grew more efficient and needed fewer employees. That meant that if they got new jobs it would not be at the munificent corporate pay rate but at near entry-level rates in the small companies that were now the growth engine. As these companies failed, were bought or shifted direction, they would lose their jobs and start over again. Wages didn’t rise for them and for long periods they might be unemployed, never to get a job again in their now obsolete fields, and certainly not working at a company for the next 20 years.
The restructuring of inefficient companies did create substantial value, but that value did not flow to the now laid-off workers. Some might flow to the remaining workers, but much of it went to the engineers who restructured the companies and the investors they represented. Statistics reveal that corporate profits as a percentage of gross domestic product are now at their highest level since 1947 (when the data were first compiled), while wages as a percentage of GDP are at their lowest. It was not a question of making the economy more efficient – it did do that – it was a question of where the value accumulated. The upper segment of the wage curve and the investors continued to make money. The middle class divided into a segment that entered the upper-middle class, while another faction sank into the lower-middle class.
American society on the whole was never egalitarian. It always accepted that there would be substantial differences in wages and wealth. Indeed, progress was in some ways driven by a desire to emulate the wealthy. There was also the expectation that while others received far more, the entire wealth structure would rise in tandem. It was also understood that, because of skill or luck, others would lose.
What we are facing now is a structural shift, in which the middle class’ center, not because of laziness or stupidity, is shifting downward in terms of standard of living. It is a structural shift that is rooted in social change (the breakdown of the conventional family) and economic change (the decline of traditional corporations and the creation of corporate agility that places individual workers at a massive disadvantage).
The inherent crisis rests in an increasingly efficient economy and a population that can’t consume what is produced because it can’t afford the products. This has happened numerous times in history, but the United States, excepting the Great Depression, was the counterexample.
Obviously, this is a massive political debate, save that political debates identify problems without clarifying them. In political debates, someone must be blamed. In reality, these processes are beyond even the government’s ability to control. On one hand, the traditional corporation was beneficial to the workers until it collapsed under the burden of its costs. On the other hand, the efficiencies created threaten to undermine consumption by weakening the effective demand among half of society.
The Long-Term Threat
The greatest danger is one that will not be faced for decades but that is lurking out there. The United States was built on the assumption that a rising tide lifts all ships. That has not been the case for the past generation, and there is no indication that this socio-economic reality will change any time soon. That means that a core assumption is at risk. The problem is that social stability has been built around this assumption — not on the assumption that everyone is owed a living, but the assumption that on the whole, all benefit from growing productivity and efficiency.
If we move to a system where half of the country is either stagnant or losing ground while the other half is surging, the social fabric of the United States is at risk, and with it the massive global power the United States has accumulated. Other superpowers such as Britain or Rome did not have the idea of a perpetually improving condition of the middle class as a core value. The United States does. If it loses that, it loses one of the pillars of its geopolitical power.
The left would argue that the solution is for laws to transfer wealth from the rich to the middle class. That would increase consumption but, depending on the scope, would threaten the amount of capital available to investment by the transfer itself and by eliminating incentives to invest. You can’t invest what you don’t have, and you won’t accept the risk of investment if the payoff is transferred away from you.
The agility of the American corporation is critical. The right will argue that allowing the free market to function will fix the problem. The free market doesn’t guarantee social outcomes, merely economic ones. In other words, it may give more efficiency on the whole and grow the economy as a whole, but by itself it doesn’t guarantee how wealth is distributed. The left cannot be indifferent to the historical consequences of extreme redistribution of wealth. The right cannot be indifferent to the political consequences of a middle-class life undermined, nor can it be indifferent to half the population’s inability to buy the products and services that businesses sell.
The most significant actions made by governments tend to be unintentional. The GI Bill was designed to limit unemployment among returning servicemen; it inadvertently created a professional class of college graduates. The VA loan was designed to stimulate the construction industry; it created the basis for suburban home ownership. The Interstate Highway System was meant to move troops rapidly in the event of war; it created a new pattern of land use that was suburbia.
It is unclear how the private sector can deal with the problem of pressure on the middle class. Government programs frequently fail to fulfill even minimal intentions while squandering scarce resources. The United States has been a fortunate country, with solutions frequently emerging in unexpected ways.
It would seem to me that unless the United States gets lucky again, its global dominance is in jeopardy. Considering its history, the United States can expect to get lucky again, but it usually gets lucky when it is frightened. And at this point it isn’t frightened but angry, believing that if only its own solutions were employed, this problem and all others would go away. I am arguing that the conventional solutions offered by all sides do not yet grasp the magnitude of the problem — that the foundation of American society is at risk — and therefore all sides are content to repeat what has been said before.

People who are smarter and luckier than I am will have to craft the solution. I am simply pointing out the potential consequences of the problem and the inadequacy of all the ideas I have seen so far.

Thursday, January 9, 2014

Making Energy With Salt and Thorium

from KHouse
Mount Storm Power Station in Grant County, West Virginia, generates almost 1600 megawatts of electricity from burning West Virginia’s famous coal. The power plant is not the only source of energy on the mountain, though. From the shores of Mount Storm Lake—the reservoir used to cool the power station—at least thirty large wind turbines can be seen turning in the high mountaintop winds. The NedPower Mount Storm wind farm employs a total of 132 wind turbines running along 12 miles of mountaintop, each with a 2 megawatt capacity.
One mountain ridge offers two methods of producing energy, one renewable and one not-so-much, but at 1600 MW, the coal-fired power station offers six times the energy production of those 132 wind turbines. Replacing fossil fuels with renewable energy isn’t as simple as one would hope.
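The arithmetic behind that “six times” figure compares nameplate capacity alone; factoring in typical capacity factors (the 0.85 and 0.30 values below are common industry figures assumed for illustration, not numbers from this article) widens the gap further:

```python
# Nameplate-capacity comparison behind the "six times" figure.
coal_mw = 1600              # Mount Storm coal plant, nameplate MW
wind_mw = 132 * 2           # 132 turbines at 2 MW each = 264 MW

print(f"nameplate ratio: {coal_mw / wind_mw:.1f}x")
# Power plants rarely run at full nameplate output; typical capacity
# factors (assumed: coal ~0.85, wind ~0.30) widen the real-world gap:
print(f"with capacity factors: {coal_mw * 0.85 / (wind_mw * 0.30):.1f}x")
```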
One of the greatest challenges of our times is to find sources of inexpensive energy that don’t leave us dependent on foreign nations for fuel, don’t create environmental hazards that poison our water or give our dogs extra eyes, and won’t run dry on us during the next few generations. The human race has plenty of ideas, from molten salt to liquid thorium reactors. The question is… will alternative energies be nipped before they can bloom or will expenses, inherent cons, and the ubiquitous red tape choke the life out of them?
Energy Consistency
The quest for renewable energy has caused wind farms with their multitude of turbines to poke up across America like porcupine quills. Solar plants abound in Germany and the rest of the world is watching. The problem with solar and wind, however, is that they offer little consistency. The sun goes down. The wind stops blowing. The technology is improving and the costs are dropping, but wind and solar still do not produce enough efficient, consistent energy to begin to replace the likes of Mount Storm’s coal-fired power plant.
Even Germany’s vast solar success may not be as sunny as it’s been sold to the world. In October, a scathing analysis of Germany’s use of solar power appeared on the Forbes website. Ryan Carlyle pointed to the high costs of solar and the country’s increasing dependence on coal power during the times the sun doesn’t shine. Germany suffers from surplus power it doesn’t need in the summer, putting all non-solar power suppliers in financial straits, and from exceptionally expensive power in the winter. Carlyle stated that Germans pay $0.34 per kWh, one of the highest rates in the world, and that 300,000 German households lose their electricity every year because they can’t pay their high energy bills. (Most Americans still pay less than $0.12/kWh, according to the U.S. Energy Information Administration.)
The Light Switch
It wouldn’t hurt the members of the human race to turn off the lights more often, to ride their bikes, to put on a sweater when the house feels cool. At the same time, the members of the human race are filled to the brim with bright ideas. The production of clean, safe, readily available energy is a problem, but not because human ingenuity hasn’t developed a wide range of potential ideas:
Molten Salt
Generally when we think of solar power, we think of photovoltaic cells, which convert the photons from the sun into electrons and harness them into an electric current. Solar thermal plants use a different means to grasp the sun. They employ big, curved, mirror-lined troughs to focus the light from the sun to create heat that turns turbines. Think of starting a fire with a magnifying glass—row after row over almost 2000 acres.
The Solana solar farm recently opened 70 miles southwest of Phoenix, Arizona—1920 acres of parabolic trough mirrors that can generate 280 MW of electricity. Solana is the first solar farm to store its thermal energy in molten salt, offering a detour around the problem that solar farms consistently face: the sun hides at night.
Systems that depend on photovoltaics have to use expensive, somewhat inefficient batteries if they are to store unused energy created during the day. Solana’s use of molten salt—a mixture of sodium nitrate, potassium nitrate, and calcium nitrate—provides a way of prolonging the plant’s energy production long after sundown. After the sun heats the liquid salt to 566°C (1,051°F), it is stored in well-insulated heat tanks where it can remain for hours until needed to create the steam that turns the turbines that generate energy.
Still, the setup is expensive. The technology works, but at 1 MW per 6.9 acres, it will be a long time before Solana pays off its $2 billion price tag.
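The land-use and cost figures quoted above check out with simple arithmetic:

```python
# Back-of-envelope arithmetic on the Solana figures cited in the article.
CAPACITY_MW = 280
ACRES = 1920
COST_USD = 2_000_000_000

acres_per_mw = ACRES / CAPACITY_MW    # matches the ~6.9 acres/MW figure above
cost_per_mw = COST_USD / CAPACITY_MW  # capital cost per MW of capacity
print(f"{acres_per_mw:.1f} acres per MW")
print(f"${cost_per_mw / 1e6:.1f} million per MW of capacity")
```

Roughly $7 million of capital per megawatt of capacity explains why the payback period is so long.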
Energy Consistency
Across the world, various means are used to store wind and solar energy so that a surplus of energy won’t be wasted, to squirrel it away until a later time when demand is high. Batteries. Molten salt. Pumping air into underground caverns and releasing it to turn turbines when it’s needed. Pumped Storage Hydroelectric plants generate electricity by directing water downhill through turbines. When demand is low, these plants pump water back uphill to store for use when demand increases again.
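The pumped-storage idea above is simple physics: the recoverable energy is the water’s mass times gravity times the height of the drop, discounted by round-trip efficiency. The reservoir volume, head height, and efficiency below are illustrative assumptions, not figures from any particular plant:

```python
# Sketch of pumped-storage energy arithmetic with assumed, illustrative values.
G = 9.81               # m/s^2, gravitational acceleration
VOLUME_M3 = 1_000_000  # assumed upper-reservoir volume in cubic meters
DENSITY = 1000         # kg/m^3, water
HEAD_M = 300           # assumed height difference between reservoirs
EFFICIENCY = 0.75      # round-trip efficiency; real plants run roughly 70-80%

mass_kg = VOLUME_M3 * DENSITY
energy_j = mass_kg * G * HEAD_M * EFFICIENCY
energy_mwh = energy_j / 3.6e9  # 1 MWh = 3.6e9 joules
print(f"Recoverable energy: {energy_mwh:.0f} MWh")
```

Even this modest hypothetical reservoir stores about 600 MWh, enough to cover an evening demand peak for a small city.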
The ability to efficiently store energy is just as large an issue as producing it in the first place. As environmentally unfriendly as fossil fuels may be, their use in power plants is easy to control. Coal can always wait to be fired until everybody gets home from the beach.
Thorium Reactors
And then came nuclear. Nuclear power has offered another alternative to coal and petroleum products, one that is still highly controversial. Light-water nuclear reactors operate cleanly and inexpensively, producing plenty of energy without also producing much-maligned CO2 emissions. Of course, people get put off by the occasional meltdown that spews radiation into the air and water. There is also the huge issue of waste. The uranium-dioxide fuel rods must be changed out after only 3–5% of the uranium is used, forcing the disposal of highly radioactive material that will take multiple thousands of years to “cool.” The plutonium generated by light-water reactors also runs the risk of being swiped by disreputable groups for use in bombs destined for places like New York and Tel Aviv.
Blame it on the Cold War. We did not have to end up with light-water reactors and their associated problems, but back in the 1950s, producing plutonium as a by-product sounded like a good idea. Admiral Hyman Rickover wanted the U.S.S. Nautilus, the world’s first nuclear submarine, in the water as soon as possible, and the LWR was the most convenient choice at the time. The Nautilus was launched in 1954, and the world followed down the uranium path.
It didn’t have to be thus; other fuel sources could have been used. A successful liquid-fluoride thorium reactor was developed at Oak Ridge National Laboratory in Tennessee between 1959 and 1973, until the Nixon Administration shut the program down because the reactor didn’t produce plutonium. Today, that plutonium production is seen as a huge proliferation risk.
Thorium is common on the planet and contains vast amounts of energy. It’s as common as coal, with far greater energy potential—and without the pollution coal causes. Thorium requires a kick-start because it won’t start reacting on its own, and stockpiles of existing nuclear waste can provide that kick-start. Once it gets going, thorium transmutes through several steps into uranium-233, an excellent fuel source, without requiring the removal of partially used fuel rods.
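The “several steps” mentioned above can be laid out explicitly: thorium-232 absorbs a neutron, and two beta decays later the result is fissile uranium-233. The half-lives below are approximate values from standard nuclear data:

```python
# The thorium breeding chain, step by step, with approximate half-lives.
chain = [
    ("Th-232 + n", "Th-233", "neutron capture", None),
    ("Th-233",     "Pa-233", "beta decay", "~22 minutes"),
    ("Pa-233",     "U-233",  "beta decay", "~27 days"),
]
for parent, daughter, process, half_life in chain:
    step = f"{parent} -> {daughter} ({process}"
    step += f", half-life {half_life})" if half_life else ")"
    print(step)
```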
The use of thorium as a liquid fuel avoids a number of the major problems that make light-water nuclear reactors as dangerous as they are. Rather than the high-pressure toxic water that cools LWRs, thorium reactors are cooled with liquid fluoride salt under normal atmospheric pressure. The reactor won’t melt down because its normal state of being is molten salt at its core; if that salt leaked out, it would simply solidify. A thorium reactor produces a minute fraction of the waste that LWRs produce, and its waste breaks down over hundreds of years rather than tens of thousands. There is some argument about whether the uranium-233 could be stolen for use in weapons, but proponents argue it would be highly difficult to do so.
Red tape in various forms promises to keep thorium reactors on the back burner for decades in the United States, where conventional nuclear and the oil industry can put up a deep-pocketed fight. Outside the U.S., a variety of countries are already pursuing the thorium dream. Thirty-two countries were represented at the Thorium Energy Conference in Geneva, Switzerland in November, with notable attendees including CERN Director General Rolf-Dieter Heuer, former International Atomic Energy Agency director Hans Blix, and Nobel Laureate Carlo Rubbia. Thor Energy in Norway is already working on using thorium in existing reactors, and British nuclear scientists have contributed research toward thorium reactors in both Norway and India.
We have plenty of options for producing energy that doesn’t depend on finite oil resources in hostile foreign lands. We have options that don’t demand we pollute our watersheds or litter our horizons with windmills that depend on the inconsistent wind. The question is whether we’ll pursue the courses that will produce the most benefit, or whether we’ll get hung up in the bad politics of pushing solar panels on locations where the sun is absent half the year.

Notes
Mount Storm Power Station
— Dominion
NedPower Mount Storm
— Shell
Average Revenue per kWh by State
— Energy Information Administration
ThEC13 in Geneva at CERN—Success!
— Thorium Energy Conference
The Nuke That Might Have Been
— The Economist

Friday, September 6, 2013

NANOTECHNOLOGY might have been invented over 1600 years ago

from KHouse
Ecclesiastes 1:9 reminds us “Whatever has happened, will happen again; whatever has been done, will be done again. There is nothing new on earth” (ISV).
Most people would agree that we live in an age of technological innovation. However, some recent archeological discoveries suggest that some of this new technology may have been discovered centuries or even millennia ago, only to be lost until now.
Take, for example, a 1,600-year-old cup discovered in the 18th century. The Lycurgus Cup, named after the Thracian king depicted on the glass, who appears in the sixth book of Homer’s Iliad, has fascinated scientists for decades. The unique properties of the glass allow the cup to change color: in direct light the glass resembles jade, but when light shines through it, the cup turns a translucent ruby color. This unusual optical effect is called dichroism.
The cup resides at the British Museum, which acquired it in 1958 from Lord Rothschild, whose family had owned it for almost a century. Various studies have been done on the cup since the 1950s, but a recent study uncovered a technology that was thought to be a 20th-century discovery.
After decades of study, scientists discovered tiny particles of silver and gold infused in the glass, each measuring about 50 nanometers in diameter. Scientists concluded in a 2007 research paper that even with modern tools, producing this cup would take years. Other cups similar to the Lycurgus Cup have since been discovered by archeologists, suggesting that Roman glass workers around A.D. 400 were pioneers of nanotechnology.
The term nanotechnology describes building machines at a molecular level. K. Eric Drexler, an American engineer and M.I.T. graduate, popularized the term in the 1980s, calling it molecular nanotechnology. Nanotechnology is sometimes described as a general-purpose technology because it can impact almost every industry.
The U.S. National Science Foundation describes nanotechnology like this:
“Imagine a medical device that travels through the human body to seek out and destroy small clusters of cancerous cells before they can spread. Or a box no larger than a sugar cube that contains the entire contents of the Library of Congress. Or materials much lighter than steel that possess ten times as much strength.”
The word “nano” comes from a Greek word meaning “dwarf.” A nanometer is one billionth of a meter; in physical terms, it is the breadth of three to four atoms placed side by side. A human hair is about 50,000 nanometers in diameter.
Most experts define nanotechnology as dealing with anything measuring between 1 and 100 nanometers (nm). Anything above 100 nm is considered the microscale, and anything below 1 nm the atomic scale.
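A little arithmetic ties the scales quoted above together:

```python
# Relating the figures in the article: a human hair versus the nanoscale.
NANO_M = 1e-9           # 1 nanometer in meters
hair_nm = 50_000        # human hair diameter in nm, per the article
nanoscale_max_nm = 100  # upper bound of the nanoscale

ratio = hair_nm / nanoscale_max_nm
print(f"1 nm = {NANO_M} m")
print(f"A human hair is {ratio:.0f}x wider than the largest nanoscale object")
```

In other words, the entire nanoscale fits five hundred times over across a single hair.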
Building nanomachines from the ground up starts with microscopic molecular machines called assemblers. It would require trillions of assemblers working together to build one nanoscopic machine, so the assemblers would also replicate themselves, building second and third generations of assemblers. Once enough assemblers have been replicated, they can begin to produce objects. Assemblers and replicators working together could eventually construct products that would replace human labor, decreasing the cost of manufacturing consumer goods and making those goods plentiful, cheaper, and stronger. Famine could eventually be eradicated by food-fabricating machines that feed the hungry.
Nano-surgery may be the next breakthrough in medicine. Patients would drink a fluid containing nanorobots programmed to attack and reconstruct the molecular structure of a cancer cell or virus. Nanorobots could also perform internal surgery without leaving external scars, and the cosmetic surgery industry could use them to change a patient’s facial features or eye color. Nanorobots could likewise take over dangerous jobs such as coal mining, oil drilling, and tree-cutting, and remove contaminants from water sources and the air.
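The self-replication described above is exponential: if each generation of assemblers copies itself once, the population doubles every generation, so reaching trillions takes surprisingly few generations. A rough sketch:

```python
# How many doublings does it take for one assembler to exceed a trillion?
import math

target = 1_000_000_000_000  # one trillion assemblers
generations = math.ceil(math.log2(target))
print(f"Doublings needed to exceed a trillion: {generations}")
print(f"Population after {generations} doublings: {2 ** generations:,}")
```

Forty doublings suffice, which is why self-replication is central both to the technology’s promise and to the “grey goo” worry discussed below.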
Countries have invested billions of dollars in nanotechnology research for its potential military and industrial applications. A 2012 report indicated that the U.S. has invested $3.7 billion through its National Nanotechnology Initiative, followed by the European Union with $1.2 billion and Japan with $750 million. Countries such as Bangladesh see a huge benefit in using nanotechnology for agriculture. Scientists estimate that 70% of agricultural fertilizer is wasted through poor absorption, causing economic loss and environmental pollution; they believe nanotechnology can increase fertilizer absorption and reduce that pollution.
The challenges, risks, and ethics of nanotechnology have caused medical doctors to worry about whether nanoparticles can cross the blood-brain barrier, which protects the brain from harmful chemicals in the bloodstream. Nanoparticles in clothing and building materials may also allow toxins to enter the human body. The immediate challenge is to learn more about materials and their properties at the nanoscale. Some experts have warned scientists and politicians about the ethics of using nanotechnology as a weapon; the concern is that the question of ethics will only arise after the weapon is built.
Using nanotechnology in medicine also raises concerns about genetically changing the human body, opening a Pandora’s box of creating transhumans. Experts argue that this technology could create a race of wealthy modified humans alongside a poorer population of unaltered people.
Eric Drexler also coined the term for the “grey goo” scenario, theorizing the dangers that may arise from malfunctioning self-replicating nanorobots destroying the planet. The grey goo scenario is an apocalyptic vision of trillions of nanorobots duplicating themselves by consuming carbon from the environment: in other words, synthesized material replacing organic material.

Friday, June 7, 2013

Quantum Theory: Is the Universe Not Gravitational but Electrical?

from K-House

Ever since the birth of Quantum Theory there has been speculation—some privately held, some very public—about what the existence of quanta really implies. As measuring devices improved dramatically, they provided more clarity to the issue. Today it is an accepted “truth” that everything is quantized: energy (as well as length, time, and mass) exists in discrete quantities, divisions if you will.
There are also other side-effects to this discovery. Everything is “connected.” All particles on the quantum level know what the other particles are doing, regardless of distance, instantaneously. Let that sink in. Also, there is a size at which, when dividing the particle in half, it becomes “non-local.” It is nowhere and everywhere. Every measurement of the universe in which we reside has a “quantum limitation.”
Since what we know of reality is based solely on our perception (what we see, hear, feel, taste and smell), it appears there is a foundational structure upon which all of the details “hang.”
If we entertain the idea that the Universe is not gravitationally based but is electrical, it certainly would fit even our most common understanding of a computer-generated simulation. Because of the many difficulties with the gravitational model of the universe, many physicists are being convinced of an electrical model. Many of the issues concerning missing mass, celestial interactions, black holes and the abnormality of temperatures on our own Sun dissolve with the electrical theory.
There is a movie from a few years back (The Thirteenth Floor) in which the simulated characters of a simulated world are confronted with the fundamental limitations of their “reality.” Because of the limitations of structure, they were forced to conclude they were living within a “program” created by an outside source.
Ironically, this is where the physicists of our time find themselves. Disregarding what the Bible has said all along, of course, they are seeking to quantify and confirm the obvious, yet unwanted, conclusion to where the evidence is leading them. Like all good “scientists” they are trying to creatively foster any explanation other than a Creator that loves them and has been trying to inform them in every way possible.
The more they learn about the foundations of our reality, the more the idea of an underlying structure to space/time is confirmed. Millions of dollars are being spent in preparation for the design and construction of experiments to “test” the simulation theory.
Right now at the University of Bonn, nuclear physicist Silas Beane and some of his colleagues have come up with a test that exploits this feature of simulations; their need to be discretized, or quantized.
Beane and company believe we can test to see if the universe behaves the way we expect from theory, or the way we’d expect in a discretized model like a computer simulation. If the latter is true, it would provide evidence that we are all stuck in a simulation:
“In our universe, the laws of physics are the same in every direction. But in a grid, this changes since you no longer have a space-time continuum, and the laws of physics would depend on direction. Simulators would be able to hide this effect but they wouldn’t be able to get rid of it completely.”
Beane and company are testing by creating their own simulations. They are presently simulating quantum chromodynamics (QCD), the fundamental theory of the strong nuclear force that binds quarks into protons and neutrons and holds nuclei together. In place of the space-time continuum, they have designed tiny, tightly spaced cubic “lattices.” This approach is called “lattice gauge theory.” After observing their models, they then compare them to real-world observations.
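The fingerprint a lattice leaves can be illustrated with a toy example: on a lattice with spacing a, even a free wave’s dispersion relation deviates from the continuum result, and the deviation grows as the wavelength approaches the lattice spacing. The spacing and wavenumbers below are illustrative only, not values from the Bonn group’s actual QCD simulations:

```python
# Continuum vs. lattice dispersion relation for a free wave (units where c = 1).
# The standard lattice result replaces omega = k with omega = (2/a) sin(k a / 2).
import math

a = 1.0  # lattice spacing, arbitrary units
for k in (0.1, 0.5, 1.0, 2.0):
    continuum = k                            # continuum dispersion: omega = k
    lattice = (2 / a) * math.sin(k * a / 2)  # lattice dispersion
    print(f"k={k}: continuum {continuum:.3f}, lattice {lattice:.3f}, "
          f"error {continuum - lattice:.3f}")
```

At small k the two agree almost exactly; near the lattice cutoff the error becomes large, which is the kind of direction- and scale-dependent signature the quoted passage describes.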
Interestingly, the researchers consider their simulation to be just a beginning. As computational development takes place, they envision more powerful versions in which molecules, cells, and even humans themselves might someday be simulated. But for now, they’re interested in creating accurate models of cosmological processes — and finding out which ones might show evidence of an underlying lattice.
On the possibility that we do live in a simulation he says, “There is a famous argument that we probably do live in a simulation. The idea is that in the future, humans will be able to simulate entire universes quite easily (approximately 500 years). And given the vastness of time ahead, the number of these simulations is likely to be huge. So if you ask the question: ‘Do we live in the one true reality or in one of the many simulations?’ the answer, statistically speaking, is that we’re more likely to be living in a simulation.”
The interesting insight that drives them is that we have “noticed” that our reality has limitations. It is not a foregone conclusion that if some alien culture were behind this simulation, it would be constructed in a way we could understand. Since we can, at least so far, understand its limitations, this means one of two things: either the designer is like us, or the simulation is constructed for our discovery. Either scenario causes secular scientists headaches. How could “they” be like us when they are capable of such technologies? And why would they care that we can discover limitations that should be invisible to the participant?
Many are seeing this edge of reality as an obstacle to be overcome. They talk of entering into the “real world” by their own intellect and effort. This is vaguely reminiscent of the Tower of Babel. As man stumbles forward, the higher probability is of misstep. If God acted at Babel to halt their development capabilities, what more could He have in store for us now?
