Sunday, February 24, 2008

1,000 trillion floating-point operations per second

Scientists have unveiled a new initiative, dubbed the Institute for Advanced Architecture, to lay the groundwork for a supercomputer that would be more than 1,000 times faster than any current offering.
Commercial supercomputer makers have recently begun to flirt with petaflop performance, meaning computers capable of completing 1,000 trillion floating-point operations per second (flops). The Sandia and Oak Ridge national lab scientists aim to leapfrog that benchmark by several orders of magnitude and are targeting one million trillion calculations per second, known as exascale computing.
(Exa is the metric prefix for quintillion, or 10^18.)
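For a sense of scale, here is a quick back-of-the-envelope sketch in Python, using only the figures quoted above (illustrative only, not anything from the labs):

```python
# Rough scale comparison using the figures quoted in the article.
TERAFLOP = 1e12   # one trillion floating-point operations per second
PETAFLOP = 1e15   # 1,000 trillion ops/s -- the level commercial machines are approaching
EXAFLOP = 1e18    # one million trillion ops/s -- the exascale target

print(f"Exascale vs. petascale: {EXAFLOP / PETAFLOP:,.0f}x")             # 1,000x
print(f"Exascale vs. a 1-teraflop machine: {EXAFLOP / TERAFLOP:,.0f}x")  # 1,000,000x
```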
"Both the [Department of Energy's] Office of Science and the National Nuclear Security Administration have identified exascale computing as a critical need in roughly the 2018 timeframe," said Sudip Dosanjh, the project's head. "We certainly think that there is a national competitiveness issue."
Ultrafast computers are integral to simulating complex systems, like the Earth's climate, nuclear warhead explosions or the protein interactions inside cells. They continue to progress, thanks to the well-known -- though often questioned -- Moore's Law, which has allowed chip makers to pack twice as much power into the same amount of space about every two years. More power has meant more so-called flops, a common measurement of computing speed. Ten years ago, Sandia's ASCI Red became the first teraflop computer, and in December 2000, Wired called 100-teraflop performance "unheard of."
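As a rough, hedged illustration of why chip scaling alone is not enough: taking ASCI Red's one teraflop in roughly 1997 as a starting point and assuming performance merely doubled every two years (the Moore's Law cadence cited above), a 2018 machine would land somewhere around a thousand-odd teraflops, still far short of exascale. A small sketch of that arithmetic:

```python
# Back-of-the-envelope: doubling every two years from ASCI Red's ~1 teraflop in 1997.
start_year, start_tflops = 1997, 1.0
target_year = 2018                      # the exascale timeframe quoted above

doublings = (target_year - start_year) / 2
projected_tflops = start_tflops * 2 ** doublings
print(f"Projected by {target_year}: ~{projected_tflops:,.0f} teraflops")  # roughly 1,400

# Exascale is 1,000,000 teraflops, so the remaining gap has to be closed with
# massive parallelism -- the million-node machines discussed below -- not chip speed alone.
print(f"Gap to exascale: ~{1_000_000 / projected_tflops:,.0f}x")          # several hundred x
```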
Now, though, new challenges have presented themselves. The researchers say that moving data from the supercomputer's thousands of processors into its memory will require them to design new architectures that reduce the need to move data around.
"Some people say that flops are almost free, that really what you are paying for is moving the data," Dosanjh said.
In addition, power and reliability require new solutions when you've got thousands or millions of processors.
"The power budget for all computers seems to be going up rapidly. We need a machine you can afford to run," Dosanjh said, and one that actually works. With a million computing nodes working together, the odds are high that one of them will break, over the course of even a small calculation.
With current technologies, "an exascale computer might only stay running for a few minutes," said Dosanjh.
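That "few minutes" figure is easy to reproduce with a simple reliability estimate. Assuming, purely for illustration, that each node fails independently about once every five years and that any single node failure interrupts the whole machine (both assumptions are mine, not Sandia's), the mean time between interrupts shrinks in proportion to the node count:

```python
# Illustrative reliability arithmetic -- the per-node failure rate is assumed.
HOURS_PER_YEAR = 24 * 365

node_mtbf_years = 5           # assumption: each node fails about once every 5 years
nodes = 1_000_000             # the million-node scale mentioned above

# If any single node failure stops the machine, system MTBF ~= node MTBF / node count.
system_mtbf_minutes = node_mtbf_years * HOURS_PER_YEAR * 60 / nodes
print(f"Mean time between interrupts: ~{system_mtbf_minutes:.1f} minutes")   # ~2.6 minutes
```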
The Sandia-Oak Ridge collaboration has $7.4 million in fiscal year 2008 funding from the National Nuclear Security Administration and the Department of Energy, but it's not just nuclear weapons research that is driving the push for faster supercomputers. Researchers of many stripes have come to depend on the inexorable upward scaling of computing power.
Gavin Schmidt, a climate modeler at NASA Goddard, said that he's built the regularity of computational upgrades into the way he designs his climate simulations, which are so computing-intensive they can take several months of processing to complete.
"Generally speaking we don't do experiments that last more than three months," Schmidt said. "If you want to do an experiment that would last for six months, it's best to just wait a few months, and then [with faster computers] it only takes two months to run."
According to a semiannual list of the world's top 500 supercomputers, compiled in November 2007, IBM's BlueGene/L System is the fastest computer in the world, with benchmark performance of about 480 teraflops, or almost half a petaflop. That rig is a joint development of IBM and the National Nuclear Security Administration, and is housed at California's Lawrence Livermore National Laboratory.
With the research team trying to vault several orders of magnitude over any current system, Dosanjh said the new institute would need $20 to $30 million a year to accomplish its goals.
Even as individual supercomputers have grown in speed, distributed-computing initiatives, like the Folding@Home program, have enabled researchers to tap into thousands of users' computers and PS3s to solve some types of scientific problems.

Monday, February 18, 2008

Time travel could be possible ... in the future

Roger Highfield, Science Editor

It may take more than a nuclear-powered De Lorean or a spinning police box, but time travel could actually be a possibility for future generations, according to an eminent professor of physics.

Prof Stephen Hawking rejects the possibility of time travel
The way such a machine would work rests on Einstein’s theory of general relativity, a theory of gravity that shows how time can be warped by the gravitational pull of objects.

Bend time enough and you can create a loop and the possibility of temporal travel.

Prof Amos Ori’s theory, set out in the prestigious science journal Physical Review, rests on a set of mathematical equations describing hypothetical conditions that, if established, could lead to the formation of a time machine, technically known as “closed time-like curves.”

In the blends of space and time, or spacetime, in his equations, time would be able to curve back on itself, so that a person travelling around the loop might be able to go further back in time with each lap.

In the past, one of the major challenges has been the alleged need for an exotic material with strange properties - what physicists call negative energy density - to create these time loops.
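For background (a standard statement from general relativity textbooks, not something taken from Prof Ori's paper): "exotic" matter is matter that violates the weak energy condition, which demands that the energy density measured by any observer be non-negative.

```latex
% Weak energy condition: for every timelike observer with four-velocity u^\mu,
% the stress-energy tensor T_{\mu\nu} must satisfy
T_{\mu\nu}\, u^{\mu} u^{\nu} \ge 0
% "Exotic" matter is matter for which this locally measured energy density
% can be negative -- the strange property referred to above.
```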

“This is no longer an issue,” he told The Daily Telegraph.

“You can construct a time machine without exotic matter,” he said.

It is now possible to use any material, even dust, so long as there is enough of it to bend spacetime into a loop.

Even though Prof Ori, of the Technion-Israel Institute of Technology, believes his new work strengthens the possibility of a real Tardis, he would not speculate on when a time machine would be built, or even if it would ever be possible.

“There are still some open questions.”

The main remaining issue is the stability of spacetime, the very fabric of the cosmos, in time travel scenarios.

But overcoming this obstacle may require the next generation of theory under development, called quantum gravity, which attempts to blend general relativity with quantum theory, the mathematics that rules the atomic world.

Time travel has long been a fascination: HG Wells grappled with the scientific issues in his 1895 science fiction classic, The Time Machine; Dr Who is still fighting the time war; and Hollywood insisted all that was needed for time travel was a De Lorean and a good flash of lightning.

But more serious work on general relativity first raised the astonishing possibility of time travel in the 1940s.

In the half century since, many eminent physicists have argued against time travel because it undermines ideas of cause and effect and creates paradoxes: a time traveller could go back and kill his grandfather, so that he is never born in the first place.

In 1990, the world’s best known scientist, Prof Stephen Hawking, proposed a “chronology protection conjecture”, which flatly says the laws of physics disallow time machines.

Three years later, Prof Ori concluded that the possibility of constructing a time machine from conventional materials could not be ruled out.

Prof Hawking then fought back with his Cambridge University colleague Michael Cassidy and they concluded that time loops are extremely unlikely.

Tongue in cheek, Prof Hawking added that there is experimental evidence that time travel doesn’t exist: “We have no reliable evidence of visitors from the future. (I’m discounting the conspiracy theory that UFOs are from the future and that the government knows and is covering it up. Its record of cover-ups is not that good.)”

But now, in Physical Review, Prof Ori has provided some more advanced solutions to the problems of time travel outlined by the likes of Prof Hawking, helping to realise an idea that dates back millennia and appears in literature from Dickens to Harry Potter, in sci-fi movies and much more besides.

The Chinese Government's Plans for Nanotechnology

Alexis Madrigal February 17, 2008

BOSTON, MA - China aims to leapfrog the United States in technological development with substantial investment in nanotechnology, but whether those efforts will actually pay off is still unclear. That was the message from University of California at Santa Barbara researchers presenting their findings on the state of Chinese nanotechnology here at the AAAS annual meeting.
Richard Applebaum and Rachel Parker from the Center for Nanotechnology in Society at UCSB conducted about sixty interviews with Chinese officials to piece together a picture of the current state of Chinese nanotechnology. Applebaum set the specific research effort within the context of China's stated overarching goal to "leapfrog" the West by using a combination of learning from the West (i.e. technology transfer) and increasing domestic research capacity ("indigenous innovation" or zizhu chuangxin).
Nanotechnology research is one of four Chinese "science Megaprojects" that have the central purpose of catching the country up to US research by 2020. Still, for all the big talk, the actual government investment is not overwhelming. The researchers estimated that the Chinese government only invested $400 million from 2002 to 2007, although that investment is expected to rise considerably.
They highlighted several international partnerships related to nanotechnology, including the Tsinghua-Foxconn Nanotechnology Research Center and the Zhejiang-California NanoSystems Institute, but didn't go into much detail about what types of projects are being developed in those centers.
Right now, most nanotech research is being pushed by the central and regional governments with little private capital contributing to the national output. There are a lot of questions about whether or not that is a sustainable model for developing a high-tech industry, Applebaum noted. (It should also be noted, though, that some would question whether the venture capital model is sustainable either.)
It also leads to strange applications of nanotechnology in high-profile venues. Parker said that the Olympic village parking lots being constructed in Beijing will have a nanopolymer coating that will absorb exhaust. It was just an off-hand mention, but I am officially intrigued by the idea of coating our parking lots with pollution-absorbing material. I can't vouch for the environmental safety of that solution, but I'd love to know how they're doing it. The coating could be something like the pollution-absorbing concrete that uses titanium dioxide to degrade pollutants.

Saturday, February 16, 2008

And the 14 Grand Engineering Challenges of the 21st Century Are...

By Chuck Squatriglia February 15, 2008
Before you can save the world, you'd better write a to-do list so nothing gets overlooked. Some of the world's brightest minds have done just that by laying out this century's greatest engineering challenges.
The panel of 18 engineers, technologists and futurists included Google co-founder Larry Page and genomics pioneer J. Craig Venter. They spent more than a year pondering how best to improve life on Earth and came up with 14 Grand Engineering Challenges, a list the National Academy of Engineering deemed so momentous it should be capitalized.
The list, announced this afternoon, addresses four themes the committee considered "essential for humanity to flourish" - environmental sustainability, health, reducing our vulnerability and adding to the joy of living.
"We chose engineering challenges that we feel can, through creativity and committment, be realistically met, most of them early in this century," said committee chair William J. Perry, the former Secretary of Defense who teaches engineering at Stanford University. "Some can be, and should be, achieved as soon as possible."
What are they?
Make solar energy affordable.
Provide energy from fusion.
Develop carbon sequestration methods.
Manage the nitrogen cycle.
Provide access to clean water.
Restore and improve urban infrastructure.
Advance health informatics.
Engineer better medicines.
Reverse-engineer the brain.
Prevent nuclear terror.
Secure cyberspace.
Enhance virtual reality.
Advance personalized learning.
Engineer the tools for scientific discovery.
The committee, which also included such luminaries as futurist Ray Kurzweil and robotics guru Dean Kamen, decided not to make any predictions or focus on gee-whiz gadgets. They felt it more important to outline broad objectives that might influence research funding and governmental policy.
The 14 challenges they laid out were culled from hundreds of suggestions from engineers, scientists, policymakers and ordinary people around the world.
"Meeting these challenges would be game changing," said Charles M. Vest, president of the NAE. "Success with any of them could dramatically improve life for everyone."
So... what should we check off first?

Tuesday, February 12, 2008

Russians Plan New Space Platform

By Bill Christensen

posted: 15 January 2008

The Russian space agency stated that it intends to develop a space platform from which missions to the moon and to Mars could be launched. According to agency director Anatoly Perminov, the space platform project should be up and working after 2020. Russia plans its first moon mission for 2025.

The International Space Station will be decommissioned sometime between 2015 and 2025; by that time, the new space platform should be available.

As far as I know, the phrase "space platform" was first used in a scary short story by E. B. White, "The Morning of the Day They Did It," published in The New Yorker magazine in 1950.
"We had arranged a radio hookup with the space platform, a gadget the Army had succeeded in establishing six hundred miles up, in the regions of the sky beyond the pull of gravity. The army, after many years of experimenting with rockets, had not only got the platform established but had sent two fellows there in a Spaceship, and also a liberal supply of the New Weapon."

The concept of a space platform is at least several years older. In an annual report delivered by Secretary of Defense James Forrestal in 1948, an "earth satellite vehicle program" was mentioned. Forrestal remarked, "The earth-satellite vehicle program, which is being carried out independently by each military service, was assigned to the committee on guided missiles for coordination." It was specifically described as a platform from which missiles could be launched; it could function as an unmanned station.

The "earth satellite" was presented to the public in a great retro painting done by Frank Tinsely. Note the thoughtful details, including an astronomical observatory, cosmic ray traps, a sun power plant, rocket air lock, search radar and television sender. (See this more detailed drawing of the satellite base.)

Several years earlier, in 1946, General Curtis E. LeMay mentioned something similar in a research program announcement. He called for "flight and survival equipment for use above the atmosphere, including space vehicles, space bases and devices for use therein."

Computing That’s Light Years Ahead

A new year brings new trends: in American sports, soccer looks poised to become the new basketball; in health and lifestyle features, fifty is touted as the new thirty; NYC hipsters have been alerted that Brooklyn is the new Manhattan; and this year's fashion runways suggest that green is the new black.

In the world of technology, however, similar analogies are less ephemeral, and can come to mark quantum leaps forward in the realm of human progress. Just think: photographs vs. still-life paintings; phones vs. telegraphs; cars vs. the horse and buggy; television vs. movie theaters; the computer vs. calculators. . .

What if plastic was about to become the new silicon, and computing was on the verge of becoming as fast and fluid as light?

The development of a viable electro-optic polymer has been in the sights of the fiber optic communications industry for decades, because it has been viewed as holding the key to unleashing waves of inexpensive bandwidth. Billions of dollars have been spent by thousands of researchers at large and small companies alike in this pursuit, all to no avail.

After fifty years of competitive research, a small nanotech company from Wilmington, Delaware, named Third-Order Nanotechnologies, has developed a materials breakthrough that could be suitable for making commercially viable photonic chips—chips that hold the promise to be the "silicon" of a new era in computing. In fact, Third-Order's inexpensive plastic photonic chips have shown the potential to be a thousand times more powerful than silicon chips.

In the same fashion that silicon was the material that shaped the twentieth century, Third-Order's third-generation materials just might mold the twenty-first. The company's patented electro-optic plastics would broadly replace more expensive, lower-performance materials that are currently used in fiber-optic ground, wireless, and satellite communication networks, bringing low-cost universal bandwidth along with it.

With this new all-optical platform, the potential exists for Promethean growth in a myriad of different markets. If the first iteration of the Internet created e-mail and Web pages, and the Megabit Internet gave birth to killer applications such as VoIP and streaming music and video, imagine what Third-Order's Gigabit Internet might be like. A billion instantly available television channels. . . ? Lifelike, super-high definition video conferencing with dozens of people at once. . . ? Photorealistic virtual reality role-playing games experienced with thousands of people from around the world. . . ?
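To put the megabit-to-gigabit jump in concrete terms, here is a purely illustrative comparison (the 5 GB file size is an assumed example, not a figure from the piece): moving the same file is roughly a thousand times faster at gigabit rates.

```python
# Illustrative only: transferring an assumed 5 GB file at megabit vs. gigabit rates.
file_bits = 5 * 8e9    # 5 gigabytes expressed in bits (decimal units)

for label, rate_bps in [("1 Mbit/s", 1e6), ("1 Gbit/s", 1e9)]:
    seconds = file_bits / rate_bps
    if seconds >= 3600:
        print(f"{label}: about {seconds / 3600:.1f} hours")    # ~11.1 hours at 1 Mbit/s
    else:
        print(f"{label}: about {seconds:.0f} seconds")          # ~40 seconds at 1 Gbit/s
```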

One dramatic application for optical computing that may be crucial for national security purposes is instantaneous, "Where's Waldo?"-style facial recognition. With optical computing, faces of suspected wrongdoers may be distinguished with a higher degree of accuracy and one thousand to one million times faster than silicon. With optical computers the size of sticks of butter able to be inserted into traffic lights and security cameras (replacing rooms filled with dozens of bulky desktops), this nimble security application would be both more rigorous and cost-effective than existing solutions.

Third-Order's CEO, Hal Bennett, is both an inventor and a visionary. He would welcome the opportunity to discuss with you Third-Order's technological breakthrough to bring the Gigabit Internet to the home. In the meantime, we would be happy to provide you with a company media kit as well, and encourage you to visit www.Third-Order.com for more information.

Monday, February 11, 2008

Robot future poses hard questions

Public debate is needed about the future use of robots in society
Scientists have expressed concern about the use of autonomous decision-making robots, particularly for military use.
As they become more common, these machines could also have negative impacts on areas such as surveillance and elderly care, the roboticists warn.
The researchers were speaking ahead of a public debate at the Dana Centre, part of London's Science Museum.
Discussions about the future use of robots in society had been largely ill-informed so far, they argued.
Autonomous robots are able to make decisions without human intervention. At a simple level, these can include robot vacuum cleaners that "decide" for themselves when to move from room to room or to head back to a base station to recharge.
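As a toy sketch of what "deciding without human intervention" means at this simple level (the rule and thresholds below are my own illustration, not anything from the researchers):

```python
# Toy decision rule for an autonomous vacuum robot -- illustrative only.
def next_action(battery_pct: float, room_dirty: bool) -> str:
    """Pick the robot's next action using only its own sensor readings."""
    if battery_pct < 20:            # assumed threshold: keep enough charge to reach the base
        return "return_to_base"     # "decide" to head back and recharge
    if room_dirty:
        return "clean_current_room"
    return "move_to_next_room"      # "decide" to move from room to room

# Low battery overrides everything else; otherwise the robot keeps working.
print(next_action(battery_pct=15, room_dirty=True))    # -> return_to_base
print(next_action(battery_pct=80, room_dirty=False))   # -> move_to_next_room
```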

Military forces
Increasingly, autonomous machines are being used in military applications, too.
Samsung, for example, has developed a robotic sentry to guard the border between North and South Korea.
It is equipped with two cameras and a machine gun.

The development and eventual deployment of autonomous robots raised difficult questions, said Professor Alan Winfield of the University of the West of England.
"If an autonomous robot kills someone, whose fault is it?" said Professor Winfield.
"Right now, that's not an issue because the responsibility lies with the designer or operator of that robot; but as robots become more autonomous that line or responsibility becomes blurred."
Professor Noel Sharkey, of the University of Sheffield, said there could be more problems when robots moved from military to civil duties.
"Imagine the miners strike with robots armed with water cannons," he said. "These things are coming, definitely."
The researchers criticised recent research commissioned by the UK Office of Science and Innovation's Horizon Scanning Centre and released in December 2006.

Robot rights

The discussion paper was titled Utopian Dream or Rise of the Machines? It addressed issues such as the "rights" of robots, and examined developments in artificial intelligence and how this might impact on law and politics.
In particular, it predicted that robots could one day demand the same citizen's rights as humans, including housing and even "robo-healthcare".

"I can imagine a future where it is much cheaper to dump old people in big hospitals where machines care for them" - Professor Noel Sharkey
"It's poorly informed, poorly supported by science and it is sensationalist," said Professor Owen Holland of the University of Essex.
"My concern is that we should have an informed debate and it should be an informed debate about the right issues."
The robo-rights scan was one of 246 papers commissioned by the UK government and compiled by a group of futures researchers: the Outsights-Ipsos Mori partnership and the US-based Institute for the Future (IFTF).
At the time, Sir David King, the government's chief scientific adviser, said: "The scans are aimed at stimulating debate and critical discussion to enhance government's short and long-term policy and strategy."
Other scans examined the future of space flight and developments in nanotechnology.

Raised questions

The Dana Centre event will pick up some of these issues.

"I think that concerns about robot rights are just a distraction," said Professor Winfield.
"The more pressing and serious problem is the extent to which society is prepared to trust autonomous robots and entrust others into the care of autonomous robots."
Caring for an ageing population also raised questions, he said.
Robots were already being used in countries like Japan to take simple measurements, such as heart rate, from elderly patients.
Professor Sharkey, who worked in geriatric nursing in his youth, said he could envisage a future when it was "much cheaper to dump a lot of old people" in a large hospital, where they could be cared for by machines.
Scenarios like these meant that proper debate about robotics was imperative, he added.

"In the same way as we have an informed nuclear debate, we need to tell the public about what is going on in robotics and ask them what they want."