Tuesday, April 19, 2011

Physicists' hide-and-seek game


All the physicists are playing hide and seek. Einstein is the ‘den’ and stands against the wall with his eyes closed and counts till 100 to enable all the physicists to run and hide. At the count of 100 Einstein turns around and finds Newton standing there.




Einstein is a bit confused and starts to scratch his head and beard. Meanwhile, Newton draws a square, one meter by one meter, on the floor, stands in the middle of it, and explains: what Einstein has found is not Newton but one newton spread over one square meter — which is Pascal!


Monday, April 4, 2011

Golden Jubilee of Gagarin days

April 12, 2011. This is the 50th anniversary of the first human space flight. The person who embarked on this challenging mission, which lasted for 108 minutes, was a young Russian pilot, Yuri Gagarin. The date was April 12, 1961. Organisations all over the world have been formed to celebrate the event. Some notable examples are:


1. Yuri Gagarin 50, which has been established in the UK, with an informative website: http://yurigagarin50.org/

2. In Russia, “50 Years of the First Human Space Mission”: http://www.federalspace.ru/main.php?lang=en




Let us look forward to great Gagarin days ahead.

Sunday, April 3, 2011

Astronomers Take a Look Inside Red Giant Stars


 NASA's Kepler Mission is giving astronomers such a clear view of changes in star brightness that they can now see clues about what's happening inside red giant stars.

The astronomers' preliminary findings are published in two papers, one in Science and the other in Nature. Both papers describe how Kepler tracks tiny, regular changes in star brightness. Their regularity resembles steady drumbeats at different, precise rhythms. Each rhythm can be thought of as an individual tooth of a comb. Astronomers have studied those oscillations from ground-based telescopes to determine star basics such as mass and radius. But they noticed departures from the steady patterns in the Kepler data -- "dandruff on the comb," said Steve Kawaler, an Iowa State University professor of physics and astronomy and a leader of the Kepler Asteroseismic Investigation. "No one anticipated seeing this before the mission launched," Kawaler said. "That we could see so clearly down below a red giant star's surface was unexpected."
These other patterns are caused by gravity mode oscillations. And those waves are allowing researchers to probe a star's core. The result, according to the Science paper, is information about the density and chemistry deep inside a star.
And, according to the Nature paper, the data also show researchers whether a red giant star burns hydrogen in a shell surrounding the core or whether it has evolved to the point where it burns helium in the core. That's something astronomers hadn't been able to determine before Kepler.
"The stars burning helium in the core survived a helium flash," Kawaler said. "That transformation from stars burning a hydrogen shell is mysterious. We think it happens quickly and perhaps explosively. Now we can tell which stars have done that and which stars will do that."
That information will help astronomers better understand the life cycle of red giant stars. Our sun will evolve into a red giant in about 5 billion years.
Kepler launched March 6, 2009, from Florida's Cape Canaveral Air Force Station. The spacecraft is orbiting the sun carrying a photometer, or light meter, to measure changes in star brightness. The photometer includes a telescope 37 inches in diameter connected to a 95-megapixel CCD camera. That instrument is continually pointed at the Cygnus-Lyra region of the Milky Way galaxy. Its primary job is to use tiny variations in the brightness of the stars within its view to find Earth-like planets that might be able to support life.
The Kepler Asteroseismic Investigation is also using data from that photometer to study stars. The investigation is led by a four-member steering committee: Kawaler, Chair Ron Gilliland of the Space Telescope Science Institute based in Baltimore, Jorgen Christensen-Dalsgaard and Hans Kjeldsen, both of Aarhus University in Aarhus, Denmark.
Kepler, Kawaler said, is a revolutionary tool for the study and understanding of stars. It's like having an instrument that simultaneously studies waves for clues about the ocean's surface and listens beneath the surface for clues about the ocean depths.
"But you have to listen very carefully," Kawaler said. "And you have to have an instrument sensitive enough to see and hear both."

Saturday, April 2, 2011

Large Hadron Collider

Some scientists are very concerned…
Some scientists are concerned that operation of the LHC (Large Hadron Collider) experiment in Europe might create dangerous particles and might pose a threat to life on Earth. LHC scientists do not know what their experiment will create, but they promise that any particles created will be safe.
Other scientists in the United States and Europe filed lawsuits to require proof of safety before LHC operations begin. High energy collisions are expected as soon as spring 2009.
Otto E. Rössler, Ph.D
Visiting Professor of Physics Dr. Otto E. Rössler, University of Tübingen, F.R.G. <http://www.scholarpedia.org/article/User:Rossler>, creator of chaos theory’s Rössler attractor <http://en.wikipedia.org/wiki/Rossler_attractor> and founder of Endophysics <http://en.wikipedia.org/wiki/Endophysics>
“miniblack holes as are expected to be produced soon at the Large Hadron Collider at CERN.” [1]
“To publicize the danger because time is running out is a big decision.” [2]
“…after 50 months the Earth would have shrunk to a centimeter. There would be nothing more there: not only no more life, but the Earth itself would be… a small black hole.” [3]
[1] Abraham-Solution to Schwarzschild Metric Implies That CERN Miniblack Holes Pose a Planetary Risk <http://www.wissensnavigator.com/documents/OTTOROESSLERMINIBLACKHOLE.pdf>, Prof. Dr. Otto E. Rössler, Division of Theoretical Chemistry, University of Tübingen (2008)
[2] A Rational, Moral and Spiritual Dilemma <http://www.wissensnavigator.com/documents/spiritualottoeroessler.pdf>, Prof. Dr. Otto E. Rössler (2008)
[3] “Biggest crimes of humanity” <http://translate.google.com/translate?u=http%3A%2F%2Fwww.20min.ch%2Fnews%2Fwissen%2Fstory%2F24668213&hl=en&ie=UTF8&sl=de&tl=en>, Prof. Dr. Otto E. Rössler, 20 Minuten News (25 June 2008)
Rainer Plaga, Dr. Habil.
Dr. Habil. Rainer Plaga <http://rplaga.tripod.com/>, Astrophysicist <http://eprintweb.org/S/authors/All/P/R_Plaga>
“At the present stage of knowledge there is a definite residual risk from mBHs production at colliders. This final conclusion differs completely from the one drawn by G & M.” [1] Part 6 Conclusion, page 9
“I put up for further discussion three feasible measures for risk mitigation, at least in the start-up phase of LHC.” [1] Part 6 Conclusion, page 9
[1] On the potential catastrophic risk from metastable quantum-black holes produced at particle colliders <http://arxiv.org/abs/0808.1415>, Dr. Habil. Rainer Plaga (26 Sep 2008)
Walter L. Wagner, Co-Plaintiff
Former Nuclear Safety Officer, Cosmic Ray Researcher and CBEST Math Champion, Walter L. Wagner <http://www.lhcdefense.org/>
“there is at present a significant risk that has not been proven to be an impossibility, and that operation of the LHC may have unintended consequences which could ultimately result in the destruction of our planet.” [1]
“Perhaps cosmic-ray collisions really are creating tiny black holes or strangelets, but those little bits of doomsday zip by too fast to cause any trouble. In the LHC, they say, the bad stuff could hang around long enough to be captured by Earth’s gravity and set off a catastrophe” [2]
Part 1 of 4  <http://www.youtube.com/watch?v=EL2ghqv5mCg>
Part 2 of 4  <http://www.youtube.com/watch?v=PTTbVmxbqZM&feature=related>
Part 3 of 4  <http://www.youtube.com/watch?v=oabjc8hr6n0&feature=related>
Part 4 of 4  <http://www.youtube.com/watch?v=wJQO-bkqBHw&feature=related>
[1] Coast to Coast AM Interview <http://www.youtube.com/watch?v=EL2ghqv5mCg>, George Noory (8 May 2008), “LHC may cause mini black hole and swallow Earth”
[2] LHC Defense <http://www.lhcdefense.org>, Walter L Wagner (26 Sep 2008)
Luis Sancho, Co-Plaintiff
Video by Luis Sancho, Cosmologist <http://www.unificationtheory.com/>, Co-Plaintiff US Federal Lawsuit <http://www.lhcdefense.org/pdf/Sancho%20v%20Doe%20-%20Affidavit%20of%20Luis%20Sancho.pdf>
“In layman terms, what CERN is doing is asking each of us, and all of mankind, to play a game of Russian Roulette with 2 bullets; one for the creation of black holes, and one for the creation of strange matter.” [1], page 24, part 47.
“…my beliefs that CERN is acting with criminal negligence and it is not warning the public and the authorities that sponsor them of the real risks involved in the LHC experiment” [1], page 25, part 50.
[1] Affidavit of Luis Sancho, US District Court Hawaii <http://www.lhcdefense.org/pdf/Sancho%20v%20Doe%20-%20Affidavit%20of%20Luis%20Sancho.pdf>, Luis Sancho (March 2008)
Other Scientists
Dr. Raj Baldev, Theoretical Cosmology <http://www.internationalreporter.com/News-3458/If-LHC-research-fails,-it-may-shrink-the-Earth-.html>
“ … the scientists are fully aware that it is not a project without a grave risk to the life of the Earth.”
Teresa E. Tutt, Ph.D., Nuclear Engineering, Texas A&M University <http://www.dsls.usra.edu/tutt.html>, quoted at LHC Defense <http://www.lhcdefense.org/lhc_expertssay.php>
“… put a stop to this insanity.”
Dr. Paul J. Werbos, National Science Foundation <http://www.nsf.gov/staff/staff_bio.jsp?lan=pwerbos&org=NSF>, quoted at LHC Defense <http://www.lhcdefense.org/lhc_expertssay.php>
“… stake the very survival of all life on earth on the truth of their ZPE stuff! … a gamble.”
Cambridge Professor Shahn Majid <http://www.cambridgeblog.org/tag/shahn-majid/>, author of On Space and Time <http://www.cambridge.org/us/catalogue/catalogue.asp?isbn=9780521889261>, with contributions by Roger Penrose
“I’m only not worried because I do not think that step 1 [micro black hole creation] will occur in the first place. But this is just my personal skepticism.”
James Blodgett, Biometry and Statistics, University at Albany, New York <http://www.lhcdefense.org/pdf/Sancho%20v%20Doe%20-%20Affidavit%20of%20James%20Blodgett%20in%20Support%20of%20TRO.pdf>
“I would say that there is definitely a risk, and that the risk is considerably higher than was thought until recently. Most of the authors who have written on the subject agree that there is a risk.” [1] Part 9, Page 5.
Other Concerns
Professor Dr. Otto E. Rössler <http://www.scholarpedia.org/article/User:Rossler>
The concern about low-velocity micro black hole creation on Earth may not be limited to collider experiments <http://www.lhcfacts.org/?cat=19>; there may be another creation mechanism that requires far less exotic machinery, as detailed in the following draft received from Dr. Otto E. Rössler:
Huang-Superatom in Pfau-Bosenova: A Falsifiable Prediction

A recent experiment by Tilman Pfau, Masahito Ueda and colleagues [1] achieved a first Bosenova implosion [2] using magnetic atoms. According to Kerson Huang and Ueda [3], who predicted indistinguishability-based implosions before they were actually observed [2], a magnetic superatom must have formed in the Pfau-Ueda experiment. Owing to its giant magnetism, this superatom ought to be discoverable in the residuals of the Pfau experiment. Hereby the superatom’s diameter could be found to range from 30 atom radii (liquid droplet) over 10^-14 m (large atomic nucleus) to 10^-18 m (potential black-hole radius). Thus for the first time, the controversial question of whether or not a compact relic is generated in a Bosenova implosion, and whether superatoms exist, could be answered empirically. This is important because despite their early prediction [3], relics are still not generally believed to exist, nor is the associated new form of matter, superatoms. A second reason which makes this empirical check important is monitoring: by down- and up-grading the initial atom number while recording the magnetism of the corresponding relic, nature’s unknown black-hole threshold could be determined empirically from the onset of a decrease in magnetism caused by gravitational redshift. The Pfau experiment is therefore a maximally important tool for the avoidance of inadvertent black hole formation. For J.O.R. Authors: O.E. Rössler, D. Hoffmann and J. Tankersley Jr., University of Tübingen, F.R.G.
References:
[1] T. Lahaye, J. Metz, B. Fröhlich, T. Koch, M. Meister, A. Griesmaier, T. Pfau, H. Saito, Y. Kawaguchi and M. Ueda, d-Wave collapse and explosion of a dipolar Bose-Einstein condensate. Phys. Rev. Lett. 101, 080401 (2008).
[2] E.A. Donley, N.R. Claussen, S.L. Cornish, J.L. Roberts, E.A. Cornell and C.E. Wieman, Dynamics of collapsing and exploding Bose-Einstein condensates. Nature 412, 295 (2001).
[3] M. Ueda and K. Huang, Fate of a Bose-Einstein condensate with attractive interactions. Phys. Rev. A 60, 3317 (1999).
POLLS
AOL News Poll <http://news.aol.com/story/_a/critics-fear-collider-could-doom-earth/20080628165609990001>: Is the giant particle smasher worth the risk? No: 61%, Yes: 39% with 259,307 total votes.
BBC on-line Poll <http://www.bbc.co.uk/sn/tvradio/programmes/horizon/broadband/tx/universe/vote/>: Is it Worth the Risk? Yes 54%, No: 46%
QUESTIONS
Q: What is the Large Hadron Collider?
The ‘LHC’ is the largest, most expensive scientific experiment ever created. It is located along the French and Swiss border and it will collide tiny particles at nearly the speed of light to create conditions that may not have existed since the first fraction of a second after the big bang almost 14 billion years ago.
Q: Why all the concern now?
When funding for the LHC was approved decades ago, scientists believed that there was no reasonable danger. A few years ago CERN scientists predicted the LHC might create tiny particles called micro black holes <http://arxiv.org/abs/0806.2111> at a rate of 1 per second <http://cerncourier.com/cws/article/cern/29199>.
Q: What is a micro black hole?
A micro black hole is a particle thousands of times smaller than an atom but it is so dense that any other particles it touches will be collapsed into it.
Q: Are micro black holes dangerous?
Unknown. CERN believes that micro black holes would either evaporate or grow too slowly to be dangerous. Other scientists believe micro black holes would not evaporate and might grow quickly. One scientist believes a micro black hole might create dangerous radiation as it grows.
Dr. Otto Rössler’s theory is that when an mBH (micro black hole) accretes charged particles, they do not fall straight into the mBH but circulate around it, creating magnetic fields that would strongly attract other charged particles, thus accelerating the growth rate.
Q: Do cosmic ray impacts with Earth prove the LHC is safe?
No. Some scientists have theorized that cosmic rays prove safety but this theory is flawed. Cosmic rays that strike Earth just pass through Earth and back into space. The LHC will collide particles head on and some particles produced by the LHC will be slowed down enough to be captured by Earth.
Nuclear Physicist Walter L Wagner discovered that cosmic ray impacts with Earth do not prove that LHC created micro black holes would be safe. CERN promised to produce a new safety report in response.
Q: Do physicists believe that micro black holes would evaporate?
One Delphi study found that physicists’ opinions differ significantly on whether micro black holes would evaporate.
Scientist James Blodgett conducted a survey of physicists in 2004 that found that physicists’ estimates that Hawking radiation would fail ranged from 0% to 50% (0%, 0%, 0.000000001%, 0.1%, 1%, 1%, 1%, 2%, 2%, 7%, 10%, 10%, 30%, 35%, and 50%).
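For a rough sense of those fifteen estimates, here is a quick summary computed in Python (my own tabulation of the figures quoted above, not part of the survey itself):

```python
import statistics

# The fifteen estimates (in percent) quoted above from Blodgett's 2004 survey
estimates = [0, 0, 0.000000001, 0.1, 1, 1, 1, 2, 2, 7, 10, 10, 30, 35, 50]

print(statistics.median(estimates))          # 2 (percent)
print(round(statistics.mean(estimates), 1))  # about 9.9 (percent)
```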
Q: Dr. Hawking believes micro black holes would evaporate; could he be wrong?
Yes. Several theoretical scientists have studied Dr. Hawking’s theories and concluded that he was mistaken: micro black holes would not evaporate.
“black holes do not radiate” [1]
“The possibility that non-radiating `mini’ black holes exist should be taken seriously; such holes could be part of the dark matter in the Universe” [2]
“the effect [Hawking Radiation] does not exist.” [3]
“2) infinitely delayed Hawking radiation; 3) infinitely weak chargedness of black holes” [4]
“…it is possible that… the behavior of the black hole is stable” [5]
[1] Trans-Planckian Modes, Back-Reaction, and the Hawking Process <http://arxiv.org/abs/gr-qc/0008016>, Prof. Dr. Adam D. Helfer (2000)
[2] Do black holes radiate? <http://arxiv.org/abs/gr-qc/0304042v1>, Prof. Dr. Adam D. Helfer (2003)
[3] On the existence of black hole evaporation yet again <http://arxiv.org/abs/gr-qc/0607137>, Prof. V.A. Belinski (2006)
[4] Abraham-Solution to Schwarzschild Metric Implies That CERN Miniblack Holes Pose a Planetary Risk <http://www.wissensnavigator.com/documents/OTTOROESSLERMINIBLACKHOLE.pdf>, Prof. Dr. Otto Rössler (2008)
[5] On the Stability of Black Holes at the LHC <http://arxiv.org/abs/0808.2631>, M. D. Maia, E. M. Monte (2008)
Q: Do cosmic ray impacts with Neutron stars prove the LHC is safe?
Unknown. Some scientists theorize that neutron stars or white dwarfs might stop cosmic rays, and because these types of stars do not disappear into black holes, neither would Earth. Other scientists theorize that neutron stars and white dwarfs would not stop cosmic rays, so these stars do not provide proof of safety.
Q: Would scientists purposefully risk danger to Earth?
Scientists have been willing to take calculated risks in the past. CERN scientists believe that the Large Hadron Collider is an extremely important experiment and they might be willing to accept some level of risk.
Q: Have any of CERN’s particle physicists expressed any concerns?
Some CERN particle physicists may have some concerns [1] but they have been asked to represent LHC safety as zero risk regardless of personal opinion. [2]
“We don’t want to know if it’s possible we will blow up the world–because, quite frankly, we already know the answer. And the answer is, quite frankly, despite all the testing we will ever do–yes. It’s possible. That doesn’t help us.
What we want to know is if we are going to destroy the world. And we can’t know this with certainty, but in reality, we don’t really care about certainty. We care about whether or not it’s probable–that is, likely, that we will destroy the world. So again, possibility–irrelevant. Likelihood–key.” [1]
“Chief Scientific Officer, Mr. Engelen passed an internal memorandum to workers at CERN, asking them, regardless of personal opinion, to affirm in all interviews that there were no risks involved in the experiments, changing the previous assertion of ‘minimal risk’ ” [2] Part 34, Page 18.
[1] RE: LHC Dangerous? by yy2bggggs <http://forums.xkcd.com/viewtopic.php?f=18&t=11690&st=0&sk=t&sd=a&start=160> on Sat Apr 12, 2008 5:27 pm UTC, XKCD forums (12 Apr 2008)
[2] Affidavit of Luis Sancho, US District Court Hawaii <http://www.lhcdefense.org/pdf/Sancho%20v%20Doe%20-%20Affidavit%20of%20Luis%20Sancho.pdf>, Luis Sancho (March 2008)
CLOSING
A few scientists warned NASA managers that launching the Space Shuttle Challenger in freezing weather might not be safe. NASA Managers did not delay the scheduled launch.
Some scientists warn that operation of the Large Hadron Collider might not be safe. CERN managers are not planning to delay the scheduled start-up.
JTankers
(Co-)Administrator LHCFacts.org

QUANTUM COMPUTATION

How the weird logic of the subatomic world could make it possible for machines to calculate millions of times faster than they do today


IN THE DARKEST HOUR OF THE Second World War, when France had fallen and German submarines prowled the Atlantic at will, the finest minds in Britain were clustered in a place known as Bletchley Park. There, in low wooden buildings thrown up on the grounds of a country estate fifty miles northwest of London, thousands of men and women worked furiously to decode the orders the Axis sent its ships and troops. As intercepted radio messages chattered out of the teleprinters, clerks would snatch them and copy them by hand onto standardized forms. Then the cryptanalysts would take over. Mostly, they worked by hand: comparing sheets of piled paper, running through permutations, crossing off false starts, staring, shuffling, guessing. It worked. For days or weeks the fog of war would clear as the Bletchley Park team found the keys to the Red and Light Blue ciphers of the Axis air force, the naval and army Enigma ciphers and then, triumphantly, the super-secret Fish cipher of the German high command.
It seems astonishing that the fate of Europe may once have depended on pencil stubs and intuition. Today, when calculations become complex and the stakes are high, ordinary brainpower almost always gets an electronic assist from a computer -- and the faster and more powerful the better. Many of the hardest calculations still have to do with codes -- though they are intended mostly to test security, not compromise it. But today's codes make Fish seem like the simple letter-substitution ciphers beloved of puzzlers in children's magazines (a becomes b, b becomes c and so forth). One is the so-called RSA protocol, which makes electronic banking possible by assuring banks and their customers that a bogus transfer of funds or a successful forgery would take the world's fastest computer millions of years to carry out. Another is the widespread Data Encryption Standard (DES), which remains secure for most ordinary business transactions.
Other calculations -- searching the Internet, modeling the national economy, forecasting the weather -- likewise strain the capacities of even the fastest and most powerful computers. The difficulty is not so much that microprocessors are too slow; it is that computers are inherently inefficient. Modern computers operate according to programs that divide a task into elementary operations, which are then carried out serially, one operation at a time. Computer designers have tried for some time to coax two or more computers (or at least two or more microprocessors) to work on different aspects of a problem at the same time, but progress in such parallel computing has been slow and fitful. The reason, in large part, is that the logic built into microprocessors is inherently serial. (Ordinary computers sometimes appear to be doing many tasks at once, such as running both a word-processor and a spreadsheet program, but in reality the central processor is simply cycling rapidly from one task to the next.)
A truly parallel computer, in contrast, would have simultaneity built into its very nature. It would be able to carry out many operations at once, to search instantly through a long list of possibilities and point out the one that solves a problem.
Such computers do exist. They are called quantum computers -- not so much because they are inherently small, but because they operate according to the bizarre rules of quantum mechanics, which do indeed govern the world of the very small: the waves and particles of subatomic physics. One quantum rule in particular creates an enormous incentive to apply quantum mechanics to computing: the startling discovery by twentieth-century physicists that elementary particles such as protons, neutrons and electrons can persist in two or more states at once. That makes it possible, at least in principle, for them to be harnessed as processing units in a machine more efficient than any conventionally designed "classical" computer could ever be.
In the past few years, simple quantum computers have been built in the laboratory. Yet on paper, at least, the prospects are stunning: an algorithm that could factor 140-digit-long numbers a billion (10^9) times faster than is currently possible with the best non-quantum methods; a search engine that could examine every nook and cranny of the Internet in half an hour; a "brute-force" decoder that could unscramble a DES transmission in five minutes.
PERHAPS THE MOST SURPRISING THING ABOUT quantum computing is that it was so slow to get started. Physicists have known since the 1920s that the world of subatomic particles is a realm apart, but it took computer scientists another half-century to begin wondering whether quantum effects might be harnessed for computation. The answer was far from obvious.
Boiled down to its essentials, any computer must meet two requirements: it must be able to store information as strings of 1's and 0's, or bits, and it must have a way of altering the bits in accordance with instructions. A computer transforms its bits by means of gates, or devices designed to carry out simple operations in logic. For example, a NOT gate converts any input bit into its opposite (0 becomes 1, and 1 becomes 0). An OR gate, by contrast, converts two input bits into a single bit whose value is the higher of the two (0 OR 0 yields 0; any other combination gives 1). And an AND gate yields a 1 only if both input bits are 1's; otherwise, its output is a 0. Everything a computer does -- whether synthesizing speech, calculating the billionth digit of pi or beating Garry Kasparov at chess -- ultimately comes about through the transformation of bits by gates.
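As a concrete rendering of those three gates, here is a minimal sketch in Python (my own illustration; the article itself describes the gates only in words):

```python
def NOT(a):
    """Convert an input bit into its opposite."""
    return 1 - a

def OR(a, b):
    """Return the higher of the two input bits."""
    return max(a, b)

def AND(a, b):
    """Return 1 only if both input bits are 1, else 0."""
    return min(a, b)

print(NOT(0), NOT(1))      # 1 0
print(OR(0, 0), OR(1, 0))  # 0 1
print(AND(1, 1), AND(1, 0))  # 1 0
```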
Could subatomic particles store bits? Could they form gates? Consider the electron. Every electron acts as if it were a little magnet, spinning about an axis, whose magnetic moment can point in only one of two directions, up or down. Thus the spin of the electron is quantized: it has just two possible states, which can readily be identified with the 0's and 1's of an ordinary computer processor. And you can flip the bit -- that is, change a down, or 0, to an up, or 1 -- by adding just a smidgen of energy.
Imagine a quantum computer made of two atomic nuclei acted on by an external magnetic field. Suppose the nuclei belong to the neighboring atoms of carbon and hydrogen in a single molecule of chloroform, CHCl3. Just as electrons do, the nuclei align their spins with the magnetic field in the direction up (1) or down (0). One can now begin to compute with this toy system by tickling the nuclei with radio waves. By tuning the frequency and duration of a radio pulse in just the right way, it is possible to make one or the other nucleus flip its spin. It is even possible to ensure that the hydrogen nucleus flips over only if the carbon nucleus is already pointing up. In that case the quantized behavior of the two nuclei functions as what computer scientists call a controlled-NOT gate, with the carbon nucleus as the control. In symbols, with carbon in the first place and hydrogen in the second, there are four possible inputs, (1,1), (1,0), (0,1) and (0,0). Controlled-NOT can then operate in one of four ways: (1,1) -> (1,0); (1,0) -> (1,1); (0,1) -> (0,1); (0,0) -> (0,0). Physicists and computer scientists have proved that, by stringing together single-qubit operations and two-qubit controlled-NOT gates, it is theoretically possible to build a quantum computer capable of doing anything a classical computer can do.
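To make the controlled-NOT concrete, here is a minimal sketch (Python with NumPy; my own illustration, not the NMR experiment itself) that writes the two-spin system as a four-component state vector, ordered (carbon, hydrogen), and applies the gate as a 4x4 matrix:

```python
import numpy as np

# Basis states ordered (carbon, hydrogen): (0,0), (0,1), (1,0), (1,1)
labels = ["(0,0)", "(0,1)", "(1,0)", "(1,1)"]

# Controlled-NOT with carbon as control: hydrogen flips iff carbon is 1
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

for i, label in enumerate(labels):
    state = np.zeros(4)
    state[i] = 1.0                 # prepare one basis state
    out = CNOT @ state             # apply the gate
    print(label, "->", labels[int(np.argmax(out))])
# (0,0) -> (0,0); (0,1) -> (0,1); (1,0) -> (1,1); (1,1) -> (1,0)
```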


BUT THE REAL MAGIC transpires when a two-qubit gate acts on a particle that is in a superposition of spin states. First, place the chloroform molecule in a strong external magnetic field that aligns both atomic nuclei into the down, or 0, position. Then, with a pulse of tuned radio waves, tweak the carbon nucleus so that it does a partial flip, into a superposed state for which the probabilities for both spin directions are each 50 percent (a single-qubit operation). Finally, carry out a controlled-NOT operation with the carbon nucleus as the control qubit. Because the second qubit (the hydrogen nucleus) started out in the zero state, only two of the operations are relevant: (1, 0) -> (1, 1) and (0, 0) -> (0, 0). In other words, if the carbon nucleus had initially gotten flipped to a 1, the controlled-NOT operation would flip the hydrogen nucleus into the 1 state, too. If the carbon had remained a 0, the controlled-NOT operation would have left the hydrogen in the 0 state, too. But the action of controlled-NOT on the superposed state of the carbon nucleus and the 0 state of the hydrogen nucleus leaves the two-qubit system as a whole in a more complicated superposition, with a 50 percent chance of being in the (1,1) state and a 50 percent chance of being in the (0,0) state. Such a superposition is called an EPR state, after the physicists Albert Einstein, Boris Podolsky and Nathan Rosen, who first studied it in 1935.
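Continuing the sketch above (Python/NumPy, again my own illustration), the "partial flip" can be modeled as a Hadamard gate on the carbon qubit; following it with the controlled-NOT produces exactly the 50/50 superposition of (0,0) and (1,1) described in the text:

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)    # the 50 percent "partial flip"
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])         # carbon (first qubit) is the control

state = np.array([1.0, 0.0, 0.0, 0.0])  # both nuclei aligned down: (0,0)
state = np.kron(H, I) @ state           # single-qubit flip on carbon only
state = CNOT @ state                    # entangling two-qubit operation

print(np.round(state ** 2, 2))          # [0.5 0.  0.  0.5]
# 50 percent (0,0) and 50 percent (1,1): the EPR state
```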
Intensely puzzling aspects of the EPR state arise when the two qubits are physically separated and independently operated on. Suppose you measure only the value of the first qubit, the spin state of the carbon nucleus. When you do that, you run up against one of the fundamental rules of quantum mechanics: If an interaction gives any information about the state of a quantum system, the rest of the system immediately readjusts itself to be consistent with that information. In general, any attempt to measure or observe a system in a superposition of two or more states immediately forces the system to make a decision. Instead of continuing in its intermediate, superposed state, the quantum computer jumps into just one of the possible quantum states open to it. In the language of quantum mechanics, it decoheres.
By observing the carbon qubit of the EPR state, then, you force decoherence and destroy the superposition; you had an even chance of observing a 0 or a 1, but you could observe only one or the other value. But that observation also implies that the system as a whole cannot continue to be in its superposition of the two states (0, 0) and (1, 1); instead, it too takes on a single, definite state, and so the hydrogen qubit assumes the same value as the carbon one. In quantum-mechanical terms, both qubits take on the same value at the same moment because the two nuclei have become entangled.
IF THAT WERE THE WHOLE STORY, QUANTUM computing might not seem particularly interesting. After all, the final outcome is just two identical qubits -- with random values, at that. The exciting thing about entanglement, however, is that you do not have to measure those values right away. Instead, you can leave the system in its superposed state and carry out any number of delicate and intriguing operations on the qubits before you finally decide to make an observation. Meanwhile, any quantum operation on the system acts on all of the states simultaneously. If the number of qubits is q, the total number of possible states is 2^q.
Thus from, say, 500 particles you could, in principle, create a quantum system that is a superposition of as many as 2^500 states. Each state would be a single list of 500 1's and 0's. Any quantum operation on that system -- a particular pulse of radio waves, for instance, whose action was, say, to execute a controlled-NOT operation on the 175th and 176th qubits -- would simultaneously operate on all 2^500 states. Hence with one machine cycle, one tick of the computer clock, a quantum operation could compute not just on one machine state, as serial computers do, but on 2^500 machine states at once! That number, which is approximately equal to a 1 followed by 150 zeros, is far larger than the number of atoms in the known universe. Eventually, of course, observing the system would cause it to collapse into a single quantum state corresponding to a single answer, a single list of 500 1's and 0's -- but that answer would have been derived from the massive parallelism of quantum computing.
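The count is easy to verify directly; a two-line check in Python (my own arithmetic aside):

```python
n = 2 ** 500
print(len(str(n)))  # 151 digits, i.e. roughly a 1 followed by 150 zeros
print(float(n))     # about 3.27e+150
```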
POINT TO PONDER:
The consequence is that for some purposes quantum computers would be so much faster than classical computers are that they could solve problems the classical computers cannot touch. If functioning quantum computers can be built, harnessing their potential will be just a matter of creating algorithms that carry out the right operations in the right order.
THAT REALIZATION HAS TOUCHED OFF AN explosion of research into the theory of quantum computing. The field poses some unique challenges. For one thing, the mathematics involved in analyzing the evolution of a system of particles is daunting. Just as you might fly from New York to Los Angeles either directly or via any of several hub airports, a subatomic particle changing from one state to another can take several possible paths. The difference is that, whereas your plane can take only one of the available routes, the particle acts like a wave that simultaneously takes them all. Furthermore, the probability of finding the particle along each path fluctuates from point to point and from moment to moment, as if it were a wave with crests and troughs. To evaluate the probability that any particular state will come to pass, one must sum the probabilities of all the paths leading to that state, being careful to keep the probability waves along each path in the proper phase.

Another obstacle is that quantum computers are extremely fragile. To remain in an intermediate, superposed state, a quantum-mechanical system needs to be almost totally isolated from its environment; the slightest interaction with anything outside itself will perturb the system, destroy the superposition and upset the computation. As a result, anyone who wants to build a quantum computer must be careful to shield it from heat, cosmic rays and other potential outside influences, including outside observers. Moreover, once your quantum computer has solved a problem for you, your own need to read out the answer forces you to destroy the system.
The quirkiness of quantum systems goes a long way toward explaining why the most impressive studies in quantum computing so far have taken place on paper. While experimenters struggle to build even rudimentary systems in the lab, theorists have raced ahead to anticipate the software such computers will need, should they ever become practical.
One essential piece of software for any computing technology is a way to correct errors. Machines make mistakes. Classical computers are designed to catch errors through redundancy. They perform each elementary computation several times and then accept the most frequent answer as correct. Such a "majority rules" approach would not work in quantum computers, though, because comparing answers would entail precisely what must be avoided: observing the system before the computation is done.
A SIMILARLY SUBTLE APPROACH HAS BEEN devised for factoring large numbers. Factoring is what computer scientists call a one-way problem: hard in one direction but easy in the other. Suppose I asked you the question, "Which two integers can be multiplied to obtain the number 40,301?" Systematically testing all the candidates might keep you busy for fifteen minutes or so. But if I asked you to multiply 191 by 211, it would take you only about twenty seconds with pencil and paper to determine that the answer is 40,301. The lopsided difficulty of factoring compared with multiplication forms the basis for practical data encryption schemes such as the RSA protocol. Large prime numbers -- say, a hundred digits each or so -- make good "passwords" for such systems because they are easy to verify: just multiply them together and see whether their product matches a number that is already stored or that might even be made publicly available. Extracting the passwords from a 200-digit composite product of two large primes, however, is equivalent to factoring the large composite number -- a problem that is very hard, indeed. The largest number that ordinary supercomputers have been able to factor with non-quantum algorithms is "only" 140 digits long.
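The asymmetry is easy to feel in code. Here is a small sketch in Python (my own illustration, not from the article): multiplication is a single operation, while recovering the factors by trial division takes on the order of two hundred tests for this example.

```python
import math

def factor(n):
    """Recover two factors of n by trial division up to sqrt(n)."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return None  # n is prime

print(191 * 211)      # the easy direction: 40301, essentially instant
print(factor(40301))  # the hard direction: (191, 211), found after ~190 trials
```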
Quantum algorithms, however, are another matter. In 1994 the mathematician Peter Shor discovered one that makes factoring almost as efficient as multiplication. In computer science, one often tries to solve hard problems by converting them into simpler problems that one already knows how to solve. In that spirit, Shor started by employing well-known results from number theory to convert the problem of factoring into one of estimating the periodicity of a long sequence. Periodicity is the number of elements in the repeating unit of a sequence. The sequence 0, 3, 8, 5, 0, 3, 8, 5, . . ., for instance, has a periodicity of four. To estimate periodicity, a classical algorithm must observe at least as many elements as there are in the period. Shor's algorithm does much better. It sets up a quantum system made up of a large number of superposed states. Each state is identified with an element of the repeating sequence. A single quantum mechanical operation then transforms each superposed state in a way that depends on the value of the sequence element to which the state corresponds. A series of such quantum mechanical operations, mathematically analogous to X-ray diffraction, is carried out on the superposed states.
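That periodicity-finding step has a purely classical cousin, the ordinary discrete Fourier transform. The sketch below (Python/NumPy; my own stand-in, emphatically not Shor's quantum algorithm, and with none of its speedup) recovers the period of the sequence 0, 3, 8, 5, ... from the positions of the peaks in its spectrum:

```python
import numpy as np

signal = np.array([0, 3, 8, 5] * 8, dtype=float)  # repeating unit of length 4
spectrum = np.abs(np.fft.fft(signal))

# A signal of length N with period p has spectral peaks only at multiples
# of N/p, so the smallest nonzero peak index k yields the period p = N/k.
N = len(signal)
k = next(i for i in range(1, N) if spectrum[i] > 1e-9)
print(N // k)  # -> 4, the periodicity of 0, 3, 8, 5, 0, 3, 8, 5, ...
```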
The method works for much the same reason X-ray diffraction enables mineralogists to deduce the periodicity of the crystal lattice of an unknown solid substance. The periodic structure of the lattice allows only waves of certain wavelengths to propagate in any specified direction. Similarly, in Shor's algorithm, the quantum system of superposed states allows only certain of the wave-like probabilities associated with the quantum states to "propagate"; the rest are canceled out or damped away. The algorithm then calculates those propagating wavelengths, estimates the periodicity of the superposed states, and finally deduces the factors of the number. The result is the fastest factoring algorithm known.

Factoring is a kind of search -- a search for factors. For other searches, however, a more general-purpose algorithm is needed. My own most important contribution to quantum computation is an efficient quantum mechanical algorithm for searching unsorted databases. That algorithm, which I discovered in 1996, is faster than any classical algorithm can ever be. More than that, it has been shown that no other quantum mechanical algorithm can ever beat it either.
A quantum computer could do much better than classical trial and error, thanks to its ability to carry out many operations at the same time. Assuming you had access to a quantum system, here is how you could do the search: First choose enough particles (some number q) so that there are enough quantum states in the system (2^q) to assign at least one state to each name in the phone book. (To get a million names, for instance, you would need twenty particles, since 2^20 is slightly more than a million.) Place the information from the phone book into quantum memory, and match each name to a different quantum state. Put the system into a superposition of the million or so states. Now the system can do a computation that checks whether each of the names is the right name. Thanks to quantum superposition, the computation takes place on all states simultaneously.
At this stage the answer is in the system; the trick is to get it out. Observing the system without further processing would instantly collapse the superposition into one of its million entangled states, but the chances that that state will be the one that gives you the name you need would be just one in a million. You might just as well have picked a name at random from the phone book.
The way around the problem, then, is further processing: a sequence of quantum mechanical operations on the superposed states that amplifies the probability that when the superposition is observed, it will collapse only into the state corresponding to the desired name. That is what my search algorithm does. Like Shor's factoring method, the algorithm takes advantage of the wave-like nature of probability in a quantum computer.
To take a relatively straightforward example of how the process works, suppose you want to find a name in a phone book that has only four entries. To set up your quantum computer, you decide to compute with a pair of particles -- for variety, choose protons this time -- and you arrange things so that each name in the phone book corresponds to a distinct combination of spins: (0,0), (0, 1), (1,0) or (1,1). Now suppose that, unbeknownst to you, the name you want to find corresponds to the third state, (1, 0). That state is the target.
YOU BEGIN BY INITIALIZING THE SPINS OF THE protons with a strong magnetic field, aligning both of them in the up direction. Then you give each particle a fainter dose of magnetism, just enough energy to change the spin state to a superposition that is 50 percent up and 50 percent down (a "50 percent flip"). The two-particle system has now become a superposition of the four possible combinations of spins, each with a probability of 1/4.
In quantum mechanics each probability is treated mathematically as the square of a theoretical (but not directly observable) construct called the probability amplitude. Strictly speaking, what my search algorithm manipulates are probability amplitudes. The advantage of working with probability amplitudes is that, unlike actual probabilities, they can be either positive or negative, and so they can cancel one another out, just as waves do in water. The algorithm makes use of that property by canceling computational paths that initially seem promising but that later turn out to be dead ends.
Since each of the four superposed states has a probability of 1/4, the probability amplitude of each state in my two-particle example can be either +1/2 or -1/2 (technically, in fact, it can even be a complex number). The algorithm ensures that all the probability amplitudes begin with the same value: (1/2, 1/2, 1/2, 1/2). Now comes the heart of the algorithm. The first operation changes the sign of the amplitude of the target state (in my example, the third state); thus the amplitudes change to (1/2, 1/2, -1/2, 1/2). That is possible because, in a sense, when the quantum computer is in the target state, it can verify that it is indeed in the right state and can then invert the phase in that state. Note that this operation reveals nothing to the outside world, because the probabilities -- that is, the squares of the probability amplitudes -- remain unchanged.
Next come three quantum operations: a 50 percent flip, an operation that inverts the phase of one of the states, and another 50 percent flip. The net effect is a maneuver called "inversion about the average." If you imagine the average value as a crossbar whose height is equal to the average value of the amplitudes, with the various individual amplitudes jutting up or dangling down from it, you invert each amplitude by flipping it over to the opposite side of the bar.
What is the net effect? The average of the amplitudes in the four states, after changing the sign of the amplitude of the target state, is (1/2 + 1/2 - 1/2 + 1/2)/4, or 1/4. The first state has an amplitude of 1/2, which is 1/4 above the average, and so after the inversion about the average its amplitude becomes 1/4 below the average, or 0. A similar calculation for each state shows that the amplitudes become (0, 0, 1, 0). The square of each of those numbers gives the probability of each state. In other words, the effect of the operations is to drive the quantum computer into the target state; the probability of the target state, (1,0), has reached certainty. If you now observe the spins of the protons, the quantum superposition will collapse into a display of the right answer.
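The arithmetic can be checked mechanically. A minimal sketch in Python/NumPy (my own illustration of the two operations just described):

```python
import numpy as np

amps = np.full(4, 0.5)         # equal amplitudes: (1/2, 1/2, 1/2, 1/2)
target = 2                     # the third state, (1,0), is the target

amps[target] *= -1             # phase flip: (1/2, 1/2, -1/2, 1/2)
amps = 2 * amps.mean() - amps  # inversion about the average

print(amps)      # [0. 0. 1. 0.]
print(amps**2)   # the target state now has probability 1
```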
MOST SEARCHES, OF COURSE, WOULD SCAN a list longer than four items. To do so, the algorithm might repeat the three quantum operations many times, nudging the system toward the desired state with every pass through the loop. What makes quantum searching so powerful is that, for a list of N items, the algorithm requires only about the square root of N steps to find its quarry -- not the N/2 steps of the classical trial-and-error search. Thus a quantum computer could search a million-name phone book in 1,000 tries instead of half a million. The longer the list, the more dramatically the quantum algorithm will outpace its classical rival.
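For longer lists the same pair of operations is simply repeated, about (pi/4)*sqrt(N) times. Here is a sketch of the scaling claim (Python/NumPy; my own classical simulation of the amplitudes, which of course enjoys none of the quantum speedup itself):

```python
import math
import numpy as np

def grover(n_items, target):
    """Simulate Grover iterations on the vector of n_items amplitudes."""
    amps = np.full(n_items, 1 / math.sqrt(n_items))  # uniform superposition
    steps = round(math.pi / 4 * math.sqrt(n_items))  # ~ sqrt(N) iterations
    for _ in range(steps):
        amps[target] *= -1             # flip the sign of the target state
        amps = 2 * amps.mean() - amps  # invert about the average
    return steps, amps[target] ** 2    # iterations used, success probability

steps, prob = grover(1_000_000, target=123_456)
print(steps, round(prob, 4))  # 785 steps, probability ~1.0, for a million items
```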
The algorithm is good for more than merely looking up phone numbers. It can search through items not explicitly spelled out on a list, and so it can take a brute-force approach to any problem whose potential solutions can be systematically enumerated, trying them all until it stumbles on a solution that works. Non-quantum versions of the brute-force approach are already a staple of computing: such algorithms are applied in programs that play games such as chess. Because of its speed and versatility, it seems likely that the search algorithm will be a key component in the software of the future.
IF THE TWENTY-FIRST CENTURY IS TO BE AN AGE of quantum computers, some major advances will be necessary. Certainly the working models built to date come nowhere near their theoretical potential. Even factoring two-digit numbers remains beyond them. The most promising approach so far is a spin-off from the medical technology of nuclear magnetic resonance (NMR) imaging. The computers are molecules in a liquid, and information is encoded in atomic nuclei in the molecules. Instead of trying to coax results out of a few fragile qubits, the technique is based on manipulating, or, in effect, programming, enormous numbers of nuclei with radio-frequency pulses and then harnessing statistics to filter the right answers (about one result in a million) out of the background of noise.
Will quantum computers ever grow into their software? How long will it take them to blossom into the powerful calculating engines that theory predicts they could be? I would not dare to guess, but I advise all would-be forecasters to remember these words, from a discussion of the Electronic Numerical Integrator and Calculator (ENIAC) in the March 1949 issue of Popular Mechanics: "Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and weigh only 1.5 tons."
All physical systems are subject to random fluctuations, including those degrees of freedom which may be described in terms of classical mechanics. However, classical degrees of freedom may be stabilized to a very high degree, either by making the ratio of system size to perturbation size very large (passive stabilization), or by continuously monitoring the system and providing greatly enhanced 'inertia' against random fluctuation by means of feedback control (active stabilization). Of these two possibilities, the former, that is, passive stabilization, can be applied in the quantum regime only by making the perturbations small rather than by making the system large, and stabilization beyond a certain degree is in practice ruled out. It is not hard to show that this makes the experimental realization of a quantum computer of useful computational power impossible by any currently attemptable method. The physics of quantum information processing remains interesting, one should add. It is simply that great computing power is not available.
The method of active stabilization is extremely powerful in classical systems, and is at the heart of mechanical devices from early steam engines to the latest microchip processors. However, it is not obvious whether anything like active stabilization is possible for a quantum system, since feedback control involves dissipation, and therefore is non-unitary. Hence one may frame the following question:
"Is active stabilization of a quantum bit possible?"


The idea of a quantum bit or qubit is introduced in order to emphasize that the aim is to stabilise a complete quantum state, not just a chosen observable. Also, we are concerned with the properties of the quantum state, not with the physical system expressing it. For example, a single qubit may be expressed by a system whose Hilbert space has many more than two dimensions. Among the possible changes such a system may undergo, some will affect the stored single qubit of quantum information, but others will not.
The surprising answer to our question is "yes," with some important provisos, which depend on the type of stabilization sought. The stabilization is based on the classical theory of error correction, which provides a very powerful technique by which classical information can be transmitted without errors through the medium of a noisy channel. Classical error correction operates by the judicious use of redundancy, that is, sending the same information many times. In this sense it is akin to making the system larger in order to make it more resistant to perturbations. However, the precise way in which the redundancy is introduced is very important. The type of redundancy, or encoding, employed must be carefully matched to the type of noise in the channel. Typically, one considers the case of random noise, which affects different bits independently, but this is not the only possible case. The encoding enables the most likely errors in the information to be identified and corrected. This corrective procedure is akin to active stabilization, and brings the associated benefits of powerful noise suppression.
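A toy version of classical redundancy makes the idea concrete (Python, my own sketch; practical codes use far more judicious encodings than three-fold repetition): each bit is sent three times through a channel that flips bits independently, and the receiver takes a majority vote.

```python
import random

def transmit(bits, flip_prob=0.1):
    """Send each bit three times through a noisy channel; decode by majority vote."""
    decoded = []
    for b in bits:
        copies = [b ^ (random.random() < flip_prob) for _ in range(3)]
        decoded.append(int(sum(copies) >= 2))
    return decoded

random.seed(1)
message = [random.randint(0, 1) for _ in range(10_000)]
received = transmit(message)
error_rate = sum(m != r for m, r in zip(message, received)) / len(message)
print(error_rate)  # ~0.028, well below the raw 10 percent flip rate
```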
To end on a note about the article itself, rather than about this heightened jaunt through quantum computation: many of the field's intrinsic and most promising concepts seem set to see the light of the future, with its astonishing facts and figures reaching the streets of ordinary people, who may yet come to treat quantum computation as a habitual practice and exercise in their own domain.