Friday, October 29, 2010

BLADE Software Eliminates 'Drive-by Downloads' from Malicious Websites

Insecure Web browsers and the growing number of complex applets and browser plug-in applications are allowing malicious software to spread faster than ever on the Internet. Some websites are installing malicious code, such as spyware, on computers without the user's knowledge or consent.

These so-called "drive-by downloads" signal a shift away from using spam and malicious e-mail attachments to infect computers. Approximately 560,000 websites -- and 5.5 million Web pages on those sites -- were infected with malware during the fourth quarter of 2009.

A new tool that eliminates drive-by download threats has been developed by researchers at the Georgia Institute of Technology and California-based SRI International. BLADE -- short for Block All Drive-By Download Exploits -- is browser-independent and designed to eliminate all drive-by malware installation threats. Details about BLADE will be presented at the Association for Computing Machinery's Conference on Computer and Communications Security.

"By simply visiting a website, malware can be silently installed on a computer to steal a user's identity and other personal information, launch denial-of-service attacks, or participate in botnet activity," said Wenke Lee, a professor in the School of Computer Science in Georgia Tech's College of Computing. "BLADE is an effective countermeasure against all forms of drive-by download malware installs because it is vulnerability and exploit agnostic."

The BLADE development team includes Lee, Georgia Tech graduate student Long Lu, and Vinod Yegneswaran and Phillip Porras from SRI International. Funding for the BLADE tool was provided by the National Science Foundation, U.S. Army Research Office and U.S. Office of Naval Research.

The researchers evaluated the tool on multiple versions and configurations of Internet Explorer and Firefox. BLADE successfully blocked all drive-by malware installation attempts from the more than 1,900 malicious websites tested. The software produced no false positives and required minimal resources from the computer. Major antivirus software programs caught less than 30 percent of the more than 7,000 drive-by download attempts from the same websites.

"BLADE monitors and analyzes everything that is downloaded to a user's hard drive to cross-check whether the user authorized the computer to open, run or store the file on the hard drive. If the answer is no to these questions, BLADE stops the program from installing or running and removes it from the hard drive," explained Lu.

Because drive-by downloads bypass the prompts users typically receive when a browser is downloading an unsupported file type, BLADE tracks how users interact with their browsers to distinguish downloads that received user authorization from those that did not. To do this, the tool captures on-screen consent-to-download dialog boxes and tracks the user's physical interactions with these windows. In addition, all downloads are saved to a secure zone on a user's hard drive so that BLADE can assess the content and prevent any malicious software from executing.
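BLADE's source code is not given in this article, but the consent-correlation idea it describes can be sketched in a few lines. Everything below is hypothetical: the quarantine hook, the folder paths, and the 30-second consent window are our inventions for illustration, not details from BLADE.

    import os
    import shutil
    import time

    DOWNLOADS = "/home/user/Downloads"    # hypothetical user download folder

    class ConsentTracker:
        """Records user interactions with consent-to-download dialogs."""
        def __init__(self):
            self.authorized = {}          # filename -> time of user consent

        def record_consent(self, filename):
            self.authorized[filename] = time.time()

        def was_authorized(self, filename, window=30.0):
            ts = self.authorized.get(filename)
            return ts is not None and time.time() - ts < window

    def on_file_written(quarantine_path, tracker):
        """Hypothetical hook called when the browser writes to the secure zone."""
        name = os.path.basename(quarantine_path)
        if tracker.was_authorized(name):
            # User clicked a real consent dialog: release from quarantine.
            shutil.move(quarantine_path, os.path.join(DOWNLOADS, name))
        else:
            # No matching consent: treat as a drive-by install and discard.
            os.remove(quarantine_path)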

"Other research groups have tried to stop drive-by downloads, but they typically build a system that defends against a subset of the threats," explained Lee. "We identified the one point that all drive-by downloads have to pass through -- downloading and executing a file on the computer -- and we decided to use that as our chokepoint to prevent the installs."

The BLADE testing showed that the applications most frequently targeted by drive-by download exploits included Adobe Reader, Sun Java and Adobe Flash -- with Adobe Reader attracting almost three times as many attempts as the other programs. Computers using Microsoft's Internet Explorer 6 became infected by more drive-by downloads than those using versions 7 or 8, while Firefox 3 had a lower browser infection rate than all versions of Internet Explorer. Among the more than 1,900 active malicious websites tested, Ukraine, the United Kingdom and the United States were the top three countries serving active drive-by download exploits.

Legitimate Web addresses that should be allowed to download content to a user's computer without explicit permission, such as browser or plug-in auto-updates, can easily be white-listed by the user so that BLADE does not affect their functionality.

The researchers have also developed countermeasures so that malware publishers cannot circumvent BLADE by installing the malware outside the secure zone or executing it while it is being quarantined.

While BLADE is highly successful in thwarting drive-by download attempts, the development team admits that BLADE will not prevent social engineering attacks. Internet users are still the weakest link in the security chain, they note.

"BLADE requires a user's browser to be configured to require explicit consent before executable files are downloaded, so if this option is disabled by the user, then BLADE will not be able to protect that user's Web surfing activities," added Lee.



Super ALICE Ushers in a New Wonderland of Green Computing

ALICE, the University of Leicester's new 'green' supercomputer, has been put into operation.

The University is aiming to make the £2.2 million facility the most energy efficient in the sector.

ALICE is ten times more powerful than the system it replaces, and is expected to help attract high quality researchers and millions of pounds in research grants to Leicester. Researchers will use the high-performance computer to help find the answers to questions ranging from the effects of different government policies on the financial markets to the future of our galaxy.

The new service, supplied by HP, offers computational power equivalent to thousands of desktop PCs by clustering large numbers of central processing units. It will make it possible to analyse much bigger data sets than before, get responses more quickly, and therefore help find the answers to more and different kinds of questions.

High performance computing produces an enormous amount of heat, so keeping the equipment cool is a major challenge. If a traditional cooling solution had been used this would have been both expensive to run and bad for the environment. Instead, the new Leicester computer room will, for the first time, use an advanced water-cooling system -- a bit like a glorified car radiator.

An existing computer room was completely redesigned and re-equipped to accommodate the new groundbreaking Ecofris cooling technology, supplied by Keysource Ltd. It is the first installation of this technology in academia or in any small to medium sized data centre.

Every year the system will save an estimated £130,000 and reduce CO2 emissions by 800 tons compared with the technology it has replaced. The supplier now plans to enter the facility into an international competition to identify the most efficient small data centre in Europe.

Researchers in Leicester's Physics and Astronomy, Engineering and Economics departments have been piloting the computer, but after its launch this month it will become freely available to any researcher in the University.

Mary Visser, Director of IT services at the university, said: "It's fascinating to see how researchers work these days -- looking for patterns in huge datasets and simulating complex phenomena.

"Usually, you need to be a real techie to engage with this kind of work. But we have social scientists and economists with big problems to solve who didn't sign up to be computer programmers. Our team aims to help make the facility accessible for them, too.

She added: "The amount of data produced is going up by around 50 per cent a year, so we need to get much cleverer about how we manage it, make it searchable, and decide what to keep for the next generation. That is a massive challenge for the whole sector -- one that calls for new kinds of support and training for researchers at every stage of their careers."



Better Surgery With New Surgical Robot With Force Feedback

Robotic surgery makes it possible to perform highly complicated and precise operations. Surgical robots have limitations, too. For one, the surgeon does not 'feel' the force of his incision or of his pull on the suture, and robots are also big and clumsy to use. Therefore TU/e researcher Linda van den Bedem developed a much more compact surgical robot, which uses 'force feedback' to allow the surgeon to feel what he or she is doing.

Van den Bedem intends to market Sofie, the 'Surgeon's Operating Force-feedback Interface Eindhoven'.

One of the distinctive properties of Sofie is the 'force feedback', i.e. 'tactile feedback', in the joysticks with which the surgeon operates. This counter pressure enables a surgeon to feel exactly what force he applies when making a suture or pushing aside a bit of tissue. The finishing touch, the fine control of this force feedback, is still being developed.

Moreover, Sofie is quite compact and hence less of an obstacle in the operating theater and above the patient. Its small dimensions come with an added bonus: Sofie's slave is not on the floor, but is mounted on the operating table. This averts the need to reset everything when the operating table and the patient are moved or tilted. Further, Sofie makes it possible to approach an organ from different sides and can even operate 'around the corner'. Van den Bedem built the robot with assistance from TU/e's technical department. The university has patented this know-how.

The researcher expects that it will take some five years before Sofie can be put on the market.

Van den Bedem obtained her PhD degree at TU/e last week for a new type of surgical robot, Sofie. More specifically, she was awarded the title for the robot's 'slave', the robotic section that performs the operation at the table, for which she built a prototype. Sofie's other component is the master, the surgeon's 'control panel' with driven joysticks.



Trapping Charged Particles With Laser Light

In the past decades, setups for trapping single particles have played a key role in high precision quantum measurements because they allow for an ultimate control of all important experimental parameters. However, until now scientists had to choose between two alternative strategies: either to capture charged atoms (i.e. ions) in strong radiofrequency fields, or to keep neutral atoms in place by light fields.

Dr. Tobias Schätz, leader of the Emmy-Noether-Research Group Quantum Simulations at the Max Planck Institute of Quantum Optics, and his team have now demonstrated the possibility of combining both methods as well as both kinds of particles: they succeeded in storing an ion in an optical trap for the first time. Their experiment opens new perspectives, for example for using controllable quantum systems in the simulation of condensed matter properties. At the same time completely new experimental possibilities may arise in the field of ultracold chemistry.

Experimental quantum simulations are based on the principle of modeling a complex many-body system (e.g. a metallic solid state), whose quantum properties are neither understood nor controllable, by another system which allows the study of analogous properties under precisely defined conditions. These model systems can be realized in different ways. Most promising are systems based on ions sitting in radiofrequency traps, and systems made of neutral atoms stored in light fields. A special case of the latter is an optical lattice, which is created by overlapping laser waves in such a way that a periodic pattern of bright and dark areas emerges. For about three decades these "crystals of light" have proven to be a very useful tool for manipulating and controlling ultracold neutral atoms.

Which kind of particle to use strongly depends on the question under investigation. One of the topics Dr. Schätz's group is particularly interested in is the quantum properties of magnetic matter. Magnetism of a solid can occur when the individual atoms carry an angular momentum, a so-called spin. Depending on external conditions, the interaction between pairs of spins makes them align either parallel or antiparallel, eventually leading to ferromagnetic or antiferromagnetic states (the latter, for odd numbers of spins, can even be "frustrated"). The investigation of the quantum dynamics of these states may contribute to a better understanding of high-temperature superconductivity. For the analogue simulation of spin-spin interactions and their consequences, ions are the preferred candidates because the Coulomb force between neighbouring ions is much stronger than the interaction between neighbouring atoms in an optical lattice. Experimental quantum simulations with ions could therefore run much faster than with atoms, greatly reducing the influence of external perturbations.
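In generic form (our notation, not an equation from the article), the spin-spin interaction such ion simulators aim to emulate is a transverse-field Ising Hamiltonian:

    H = \sum_{i<j} J_{ij}\, \sigma_z^{(i)} \sigma_z^{(j)} + B \sum_i \sigma_x^{(i)}

Here J_{ij} is the coupling between spins i and j (its sign selects parallel or antiparallel alignment) and B is an effective transverse field. In an ion crystal the couplings are mediated by the Coulomb-coupled motion of the ions, which is why they can be made comparatively strong.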

Because of their electric charge, ions are also easy to manipulate with external electromagnetic fields. Hence physicists have been trapping ions in alternating radiofrequency fields for more than sixty years, and storage times of up to several months are now achieved. However, these systems suffer from a severe drawback: it is very difficult to scale them to larger architectures, which limits the possibilities of performing quantum simulations with sufficiently many ions. So why have optical lattices not been used as an alternative for storing ions until now?

"Optical fields are disfavoured because they don't allow for potential wells nearly as deep as they are guaranteed by radiofrequency fields," Dr. Schätz explains. "At the same time ions react in a very sensitive way to external stray fields. This has caused the widely believed prejudice that optical potentials are too shallow and therefore unable to trap ions. But, as a matter of fact, we were able to experimentally demonstrate that ions can indeed be trapped by the interaction with light.

The scientists start their experiment by cooling a single magnesium ion down to a few thousandths of a degree above absolute zero. In the next step external stray fields are compensated for by appropriate "counter fields." Then a strongly collimated laser beam is turned on while the radiofrequency field is switched off. According to a series of measurements the ion was kept in place for several milliseconds. This corresponds to a couple of hundred oscillations of the ion in the potential well, despite its relatively shallow shape.
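As a rough consistency check (our arithmetic, not a figure from the researchers): a couple of hundred oscillations during a storage time of a few milliseconds implies a trap frequency of roughly

    f \approx \frac{N_{\mathrm{osc}}}{\Delta t} \approx \frac{200}{2\ \mathrm{ms}} = 10^{5}\ \mathrm{Hz},

i.e. on the order of 100 kHz, consistent with a much shallower and slower trap than a typical radiofrequency trap provides.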

Tobias Schätz does not seem to be awfully surprised by this change of paradigm. "In principle, both traps, the radio frequency as well as the optical, work the same way: they capture the particle by a fast changing electromagnetic field." At present the lifetime of the ion in the optical trap is limited only by heating, which is caused by scattering of light from the optical field. It could be greatly improved with state-of-the-art techniques.

If the principle of optical trapping demonstrated in this experiment can be extended to a large number of ions in an optical lattice, a completely new class of experiments could be carried out. Besides simulating complex spin-systems, hybrid quantum systems could be developed which combine ions and atoms in a common optical lattice, with the quantum particles "sharing" the excess charges.

There are also intriguing possibilities for investigating chemical reactions at extremely low temperatures. If, for example, a single ion was embedded in a cold atomic quantum gas (a so-called Bose-Einstein condensate) in a common optical trap, the particles would -- due to their very low kinetic energy -- spend so much time together, that novel chemical reactions caused by quantum mechanical tunneling might evolve. Hence this experiment is both the beginning of a new generation of quantum simulations and of a new era of ultracold chemistry.



Controlling Individual Cortical Nerve Cells by Human Thought

Five years ago, neuroscientist Christof Koch of the California Institute of Technology (Caltech), neurosurgeon Itzhak Fried of UCLA, and their colleagues discovered that a single neuron in the human brain can function much like a sophisticated computer and recognize people, landmarks, and objects, suggesting that a consistent and explicit code may help transform complex visual representations into long-term and more abstract memories.

Now Koch and Fried, along with former Caltech graduate student and current postdoctoral fellow Moran Cerf, have found that individuals can exert conscious control over the firing of these single neurons -- despite the neurons' location in an area of the brain previously thought inaccessible to conscious control -- and, in doing so, manipulate the behavior of an image on a computer screen.

The work, which appears in a paper in the October 28 issue of the journal Nature, shows that "individuals can rapidly, consciously, and voluntarily control neurons deep inside their head," says Koch, the Lois and Victor Troendle Professor of Cognitive and Behavioral Biology and professor of computation and neural systems at Caltech.

The study was conducted on 12 epilepsy patients at the David Geffen School of Medicine at UCLA, where Fried directs the Epilepsy Surgery Program. All of the patients suffered from seizures that could not be controlled by medication. To help localize where their seizures were originating in preparation for possible later surgery, the patients were surgically implanted with electrodes deep within the centers of their brains. Cerf used these electrodes to record the activity, as indicated by spikes on a computer screen, of individual neurons in parts of the medial temporal lobe -- a brain region that plays a major role in human memory and emotion.

Prior to recording the activity of the neurons, Cerf interviewed each of the patients to learn about their interests. "I wanted to see what they like -- say, the band Guns N' Roses, the TV show House, and the Red Sox," he says. Using that information, he created for each patient a data set of around 100 images reflecting the things he or she cares about. The patients then viewed those images, one after another, as Cerf monitored their brain activity to look for the targeted firing of single neurons. "Of 100 pictures, maybe 10 will have a strong correlation to a neuron," he says. "Those images might represent cached memories -- things the patient has recently seen."

The four most strongly responding neurons, representing four different images, were selected for further investigation. "The goal was to get patients to control things with their minds," Cerf says. By thinking about the individual images -- a picture of Marilyn Monroe, for example -- the patients triggered the activity of their corresponding neurons, which was translated first into the movement of a cursor on a computer screen. In this way, patients trained themselves to move that cursor up and down, or even play a computer game.

But, says Cerf, "we wanted to take it one step further than just brain-machine interfaces and tap into the competition for attention between thoughts that race through our mind."

To do that, the team arranged for a situation in which two concepts competed for dominance in the mind of the patient. "We had patients sit in front of a blank screen and asked them to think of one of the target images," Cerf explains. As they thought of the image, and the related neuron fired, "we made the image appear on the screen," he says. That image is the "target." Then one of the other three images is introduced, to serve as the "distractor."

"The patient starts with a 50/50 image, a hybrid, representing the 'marriage' of the two images," Cerf says, and then has to make the target image fade in -- just using his or her mind -- and the distractor fade out. During the tests, the patients came up with their own personal strategies for making the right images appear; some simply thought of the picture, while others repeated the name of the image out loud or focused their gaze on a particular aspect of the image. Regardless of their tactics, the subjects quickly got the hang of the task, and they were successful in around 70 percent of trials.

"The patients clearly found this task to be incredibly fun as they started to feel that they control things in the environment purely with their thought," says Cerf. "They were highly enthusiastic to try new things and see the boundaries of 'thoughts' that still allow them to activate things in the environment."

Notably, even in cases where the patients were on the verge of failure -- with, say, the distractor image representing 90 percent of the composite picture, so that it was essentially all the patients saw -- "they were able to pull it back," Cerf says. Imagine, for example, that the target image is Bill Clinton and the distractor George Bush. When the patient is "failing" the task, the George Bush image will dominate. "The patient will see George Bush, but they're supposed to be thinking about Bill Clinton. So they shut off Bush -- somehow figuring out how to control the flow of that information in their brain -- and make other information appear. The imagery in their brain," he says, "is stronger than the hybrid image on the screen."

According to Koch, what is most exciting "is the discovery that the part of the brain that stores the instruction 'think of Clinton' reaches into the medial temporal lobe and excites the set of neurons responding to Clinton, simultaneously suppressing the population of neurons representing Bush, while leaving the vast majority of cells representing other concepts or familiar persons untouched."

This work was funded by the National Institute of Neurological Disorders and Stroke, the National Institute of Mental Health, the G. Harold & Leila Y. Mathers Charitable Foundation, and Korea's World Class University program.



Three-Dimensional Maps of Brain Wiring

A team of researchers at the Eindhoven University of Technology has developed a software tool that physicians can use to easily study the wiring of the brains of their patients. The tool converts MRI scans using special techniques to three-dimensional images. This now makes it possible to view a total picture of the winding roads and their contacts without having to operate. Researcher Vesna Prčkovska defended her PhD thesis on this subject last week.

To know accurately where the main nerve bundles in the brain are located is of immense importance for neurosurgeons, explains Bart ter Haar Romenij (professor of Biomedical Image Analysis at the Department of Biomedical Engineering). As an example he cites 'deep brain stimulation', with which vibration seizures in patients with Parkinson's disease can be suppressed. "With this new tool, you can determine exactly where to place the stimulation electrode in the brain. The guiding map has been improved: because we now see the roads on the map, we know better where to stick the needle." The technique may also yield many new insights into neurological and psychiatric disorders. And it is important for brain surgeons to know in advance where the critical nerve bundles are, to avoid damaging them.

The accuracy of the tool is a great step forward; intersections of nerve bundles in particular were difficult to identify until now. Ter Haar Romenij: "You can now see for the first time the spaghetti-like structures and their connections." We are still far from seeing all brain connections, however: many smaller connections in the brain are invisible to the new tool and can only be observed under a microscope. "But you cannot, of course, dissect a live patient into slices for under a microscope," the professor smiles.

The tool was developed by TU/e researcher Anna Vilanova, with her PhD students Vesna Prčkovska, Tim Peeters and Paulo Rodrigues. A demonstration of the package can be found on YouTube (see link below). The tool is based on a recently developed technology called HARDI (High Angular Resolution Diffusion Imaging). The MRI measuring technique for HARDI already existed; the research team took care of the processing, interpretation and interactive visualization of these very complex data, so that doctors can get to work.
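The article gives no algorithmic detail, but the core of most fiber-tracking pipelines is deterministic streamline integration: repeatedly stepping along the locally dominant diffusion direction. A minimal sketch (the vector-field interface and step sizes are our assumptions, not the TU/e tool's API):

    import numpy as np

    def track_fiber(seed, direction_field, step=0.5, n_steps=200):
        """Follow the locally dominant fiber direction through the volume.

        direction_field(p) returns a unit vector for the strongest diffusion
        direction at point p; in HARDI a voxel may offer several candidates,
        which is how crossing fibers are resolved.
        """
        points = [np.asarray(seed, dtype=float)]
        d_prev = direction_field(points[0])
        for _ in range(n_steps):
            d = direction_field(points[-1])
            if np.dot(d, d_prev) < 0:     # keep a consistent orientation
                d = -d
            points.append(points[-1] + step * d)
            d_prev = d
        return np.array(points)

    # Toy field: fibers everywhere parallel to the x-axis
    streamline = track_fiber([0, 0, 0], lambda p: np.array([1.0, 0.0, 0.0]))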

Bart ter Haar Romenij expects that the tool can be ready for use in hospitals within a few years. "We need to validate the package. We now need to prove that the images match reality." Also, there is still work to do on the speed of the corresponding MRI scan: for a detailed view, a patient currently needs to spend an hour in the scanner, which is too long. Meanwhile, the tool is already widely used by other scientists, says the professor.

The research was supported by NWO (Dutch Organization for Scientific Research). The thesis of Vesna Prčkovska is titled: High Angular Resolution Diffusion Imaging, Processing & Visualization. She graduated on October 20, 2010.



New Strategy to Kill Bugs -- Even Those in Hiding

New strategies to apply antibiotics more effectively to hibernating bugs have been developed by researchers at the University of Hertfordshire.

In a paper, which appeared this month in the Institute of Electrical and Electronics Engineers (IEEE) Transactions on Evolutionary Computation, Dr Ole Steuernagel and Dr Daniel Polani from the University's Science and Technology Research Institute describe how to apply antibiotics to wipe out bacteria that form active as well as inactive subpopulations.

"One of the difficulties of applying antibiotic strategies against bugs is that some of the microbes tend to go into hibernation," said Dr Steuernagel. "Although the medication can wipe out the active populations, it often misses the hibernating ones because they are metabolically inactive. It may not be enough just to kill off the active bacteria, the hibernating rest will 'wake up' and reestablish themselves."

Using an optimization approach called 'multiobjective optimization', which is tailored to such multifaceted scenarios, the researchers found that the best solution is to kill the microbes early and late during the therapy period, but not during the intermediate period.
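The paper's model is not reproduced here, but the idea of scoring a dosing schedule against competing objectives can be sketched with a toy active/dormant population model. All rates and the schedules below are our invented illustrations, not the authors' parameters:

    import numpy as np

    def simulate(schedule, dt=0.1, days=10):
        """Toy two-compartment bacteria model under an antibiotic schedule.

        schedule(t) -> dose in [0, 1]. The drug kills only active cells;
        dormant cells slowly wake up and replenish the active pool.
        Returns two objectives to minimize: surviving bugs and drug used.
        """
        active, dormant, total_dose = 1.0, 0.2, 0.0
        for t in np.arange(0.0, days, dt):
            dose = schedule(t)
            growth = 0.5 * active          # active cells divide
            kill = 2.0 * dose * active     # drug hits active cells only
            wake = 0.1 * dormant           # dormant cells reactivate
            sleep = 0.05 * active          # some active cells go dormant
            active += dt * (growth - kill + wake - sleep)
            dormant += dt * (sleep - wake)
            total_dose += dt * dose
        return active + dormant, total_dose

    # Compare constant dosing with an early-and-late schedule
    always_on = lambda t: 1.0
    early_late = lambda t: 1.0 if (t < 2.0 or t > 8.0) else 0.0
    print(simulate(always_on), simulate(early_late))

A multiobjective optimizer searches over many such schedules and returns the Pareto front of trade-offs between the objectives rather than a single "best" schedule.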

"This is the first time that this approach has been used in a bug eradication scenario and our solutions should be more efficient than existing approaches to kill hibernating bugs," said Dr Steuernagel. "Current practice does not take account of persistence due to hibernation although this may well be a problem. After all, microbes which are known to hibernate include Escherichia coli, multiply resistant Staphyloccus aureus (MRSA -- "superbug"), Mycobacterium tuberculosis, Pseudomonas aeruginosa."



Culturally Inspired Mobile Phone Games Help Chinese Children Learn Language Characters

Mobile phone-based games could provide a new way to teach basic knowledge of Chinese language characters that might be particularly helpful in underdeveloped rural areas of China, say researchers in Carnegie Mellon University's Mobile & Immersive Learning for Literacy in Emerging Economies (MILLEE) Project.

Earlier this year, researchers reported that two mobile learning games, inspired by traditional Chinese games, showed promise during preliminary tests with children in Xin'an, an underdeveloped region in Henan Province, China. The researchers from Carnegie Mellon, the University of California, Berkeley and the Chinese Academy of Sciences reported their findings at CHI 2010, the Association for Computing Machinery's Conference on Human Factors in Computing Systems in Atlanta. Subsequent studies this summer at a privately run school in Beijing likewise showed that students playing the educational videogames increased their knowledge of Chinese characters.

"We believe that the cooperative learning encouraged by the games contributed to character learning," said CMU's Matthew Kam, assistant professor in the School of Computer Science's Human-Computer Interaction Institute and MILLEE project director. "The results of our studies suggest that further development of these games could make inexpensive mobile phones important learning tools, particularly for children in underdeveloped rural areas."

The Chinese language is the most widely spoken language in the world, with more than 1 billion Mandarin Chinese speakers, but it presents unique challenges to language education. Unlike languages with alphabetic writing systems, the Chinese language uses characters that each correspond to a syllable or sometimes a word. About 6,000 characters are commonly used, but the shape of each character provides few clues to its pronunciation and different dialects have different pronunciations for the same character.

MILLEE researchers analyzed 25 traditional games played by children in China to identify elements, such as cooperation between players, songs and handmade game objects, that could be used to design two educational mobile phone games. In one game, Multimedia Word, children are required to recognize and write a correct Chinese character based on hints provided for pronunciation, a sketch, a photo or other multimedia context. In a second game, Drumming Stroke, children practice writing Chinese characters: participants pass the mobile phone from one player to the next to the rhythm of a drum sound played by the phone, and each player must write one stroke of a given Chinese character, following the exact stroke order.
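The games' code is not published in this article, but the stroke-order rule in Drumming Stroke reduces to checking each player's input against a fixed sequence. A minimal sketch (the one-entry stroke dictionary and function name are hypothetical):

    # Each character is a fixed sequence of named strokes; a player's input
    # counts only if it is the next stroke in that sequence.
    STROKES = {"十": ["horizontal", "vertical"]}   # toy one-entry dictionary

    def play_stroke(character, strokes_done, player_stroke):
        expected = STROKES[character][len(strokes_done)]
        if player_stroke != expected:
            return False               # wrong stroke or wrong order
        strokes_done.append(player_stroke)
        return True

    done = []
    print(play_stroke("十", done, "horizontal"))   # True: correct first stroke
    print(play_stroke("十", done, "horizontal"))   # False: "vertical" expected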

Kam and other MILLEE researchers are collaborating with Tian Feng, an associate professor in the Institute of Software, Chinese Academy of Sciences in Beijing, to further explore the potential of mobile phones as a learning resource for Chinese children. Field research on behalf of MILLEE was performed this summer by Ben Rachbach, a student at Swarthmore College, to determine the educational needs of low-income students in three schools in Beijing. The team is receiving curriculum guidance from Sue-mei Wu, associate teaching professor of Chinese at CMU and chair of Chinese learning in the Pittsburgh Science of Learning Center, a joint effort of Carnegie Mellon and the University of Pittsburgh that is supported by the National Science Foundation.

With the support of Nokia, MILLEE has developed mobile phone-based games for teaching English literacy to rural children in India and is commencing a controlled study involving 800 children in 40 villages of Andhra Pradesh, a state in southern India. MILLEE is also working with the University of Nairobi to explore how the games could be adapted to English literacy learning for rural children in Kenya.

Kam, a native of Singapore, said despite their small screens and low computing power by today's standards, mobile phones could become a major educational resource as wireless carriers and mobile phone manufacturers move aggressively to extend mobile phone penetration across the globe. And if the educational benefits of mobile phones can be demonstrated convincingly, he added, consumers will have an additional motivation for getting mobile phone service, which could further spur mobile phone adoption in developing countries.



'Virtual Satellite Dish' Thanks to Lots of Simple Processors Working Together

Astro's "mini-dish".Image via WikipediaSatellite TV without having to set up a receiver dish. Digital radio on your mobile phone without your batteries quickly running flat. The advanced calculations needed for these future applications are made possible by a microchip with relatively simple processors that can interact and communicate flexibly. These are among the findings of research at the Centre for Telematics and Information Technology of the University of Twente carried out by Marcel van de Burgwal, who obtained his PhD on 15 October.

Soon it will be possible to receive satellite signals not only with a satellite dish, but also using stationary antennae arrays made up of grids of simple, fixed, almost flat antennae that can fit on the roof of a car, for example. The antennae then no longer need to be carefully aimed: the grid of antennae forms a 'virtual dish'. That is a great advantage, especially for mobile applications such as satellite TV on the move. The aiming of the virtual dish is actually carried out by the entire grid. It is comparable with the LOFAR project, in which countless simple antennae laid out on the heathland of Drenthe in the north east Netherlands together form a huge dish for radiotelescopy. This too calls for large numbers of calculations and fast communications.

Computing power replaces analogue components

Conventional microprocessors are less suitable for these calculations, because they are highly overdimensioned and use large amounts of energy. The remedy is a combination of smaller, simple processors on a single microchip that can carry out tasks flexibly and be switched off when they are not needed. In this way a complete computer network can be constructed that takes up just a few square millimetres. To achieve this, Van de Burgwal makes use of an efficient infrastructure based on a miniature network, where a TV or radio receiver is defined by software instead of the classic coils and crystals. "Software-defined radio may seem much more complex, but we can pack so much computing power into the space taken up by, for example, a coil that it more than repays the effort," says Van de Burgwal.
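"Defined by software" means that the mixing and filtering once done by coils and crystals become arithmetic on digitized antenna samples. A minimal illustration of the principle (generic DSP with made-up frequencies, not the Montium's actual code):

    import numpy as np

    fs = 1_000_000                # sample rate of the digitized signal (Hz)
    f_carrier = 200_000           # station we want to tune to (Hz)
    t = np.arange(0, 0.01, 1 / fs)

    # Simulated antenna input: our station plus an unrelated interferer
    audio = np.sin(2 * np.pi * 1_000 * t)              # 1 kHz "program"
    rf = audio * np.cos(2 * np.pi * f_carrier * t)     # AM on 200 kHz
    rf += 0.5 * np.cos(2 * np.pi * 350_000 * t)        # other station

    # Mix down to baseband: multiplication replaces the analogue mixer
    baseband = rf * np.cos(2 * np.pi * f_carrier * t)

    # Crude low-pass filter: a moving average replaces coils and capacitors
    kernel = np.ones(200) / 200
    recovered = np.convolve(baseband, kernel, mode="same")

On a multiprocessor chip like the one described, each of these stages could run on its own small processor, and stages that are idle can be switched off to save energy.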

Chameleon

The same type of microchip also turns out to be suitable for a completely different application: digital radio reception on a smartphone, where the main criterion is minimizing energy use. In his doctoral thesis Van de Burgwal shows that major gains can also be made here by using new methods of communication between the different processors. The multi-processor chip that he uses is based on the Montium processor -- appropriately named after a chameleon -- that was developed at the University of Twente. The processor is being further developed and marketed by the spinoff business Recore Systems.

Marcel van de Burgwal carried out his research in the Computer Architecture for Embedded Systems group, which forms a part of the Centre for Telematics and Information Technology at the University of Twente.



Researchers Break Speed Barrier in Solving Important Class of Linear Systems

Computer scientists at Carnegie Mellon University have devised an innovative and elegantly concise algorithm that can efficiently solve systems of linear equations that are critical to such important computer applications as image processing, logistics and scheduling problems, and recommendation systems.

The theoretical breakthrough by Professor Gary Miller, Systems Scientist Ioannis Koutis and Ph.D. student Richard Peng, all of Carnegie Mellon's Computer Science Department, has enormous practical potential. Linear systems are widely used to model real-world systems, such as transportation, energy, telecommunications and manufacturing that often may include millions, if not billions, of equations and variables.

Solving these linear systems can be time consuming on even the fastest computers and is an enduring computational problem that mathematicians have sweated over for 2,000 years. The Carnegie Mellon team's new algorithm employs powerful new tools from graph theory, randomized algorithms and linear algebra that make stunning increases in speed possible.

The algorithm, which applies to an important class of problems known as symmetric diagonally dominant (SDD) systems, is so efficient that it may soon be possible for a desktop workstation to solve systems with a billion variables in just a few seconds.

The work will be presented at the annual IEEE Symposium on Foundations of Computer Science (FOCS 2010), Oct. 23-26 in Las Vegas.

A myriad of new applications have emerged in recent years for SDD systems. Recommendation systems, such as the one used by Netflix to suggest movies to customers, use SDD systems to compare the preferences of an individual to those of millions of other customers. In image processing, SDD systems are used to segment images into component pieces, such as earth, sky and objects like buildings, trees and people. "Denoising" images to bring out lettering and other details that otherwise might appear as a blur also makes use of SDD systems.

A large class of logistics, scheduling and optimization problems can be formulated as maximum-flow problems, or "max flow," which calculate the maximum amount of materials, data packets or vehicles that can move through a network, be it a supply chain, a telecommunications network or a highway system. The current theoretically best max flow algorithm uses, at its core, an SDD solver.

SDD systems also are widely used in engineering, such as for computing heat flow in materials or the vibrational modes of objects with complex shapes, in machine learning, and in computer graphics and simulations.

"In our work at Microsoft on digital imaging, we use a variety of fast techniques for solving problems such as denoising, image blending and segmentation," said Richard Szeliski, leader of the Interactive Visual Media Group at Microsoft Research. "The fast SDD solvers developed by Koutis, Miller and Peng represent a real breakthrough in this domain, and I expect them to have a major impact on the work that we do."

Finding methods to quickly and accurately solve simultaneous equations is an age-old mathematical problem. A classic algorithm for solving linear systems, dubbed Gaussian elimination in modern times, was first published by Chinese mathematicians 2,000 years ago.

"The fact that you can couch the world in linear algebra is super powerful," Miller said. "But once you do that, you have to solve these linear systems and that's often not easy."

A number of SDD solvers have been developed, but they tend not to work across the broad class of SDD problems and are prone to failures. The randomized algorithm developed by Miller, Koutis and Peng, however, applies across the spectrum of SDD systems.

The team's approach to solving SDD systems is to first solve a simplified system that can be done rapidly and serve as a "preconditioner" to guide iterative steps to an ultimate solution. To construct the preconditioner, the team uses new ideas from spectral graph theory, such as spanning trees and random sampling.
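The preconditioner-plus-iteration pattern is standard in numerical linear algebra, even though the CMU preconditioners themselves are new. The sketch below shows the generic pattern on a small SDD system, with a plain diagonal preconditioner standing in for the team's spanning-tree constructions:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg, LinearOperator

    # A simple SDD system: a 1-D graph Laplacian plus a small diagonal term
    n = 100_000
    A = diags([-1, 2.01, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.random.default_rng(0).standard_normal(n)

    # Simplest possible preconditioner: invert the diagonal only.
    # (The CMU solver instead builds spanning-tree-based preconditioners.)
    inv_diag = 1.0 / A.diagonal()
    M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)

    x, info = cg(A, b, M=M)                  # preconditioned conjugate gradient
    print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged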

The result is a significant decrease in computer run times. The Gaussian elimination algorithm runs in time proportional to s^3, where s is the size of the SDD system as measured by the number of terms in the system, even when s is not much bigger than the number of variables. The new algorithm, by comparison, has a run time of s[log(s)]^2. That means, if s = 1 million, the new algorithm's run time would be about a billion times faster than that of Gaussian elimination.
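The quoted factor checks out (the arithmetic here is ours): with s = 10^6 and logarithms base 2,

    \frac{s^{3}}{s\,[\log s]^{2}} = \frac{s^{2}}{[\log s]^{2}} \approx \frac{(10^{6})^{2}}{20^{2}} = \frac{10^{12}}{400} = 2.5 \times 10^{9},

which is indeed on the order of a billion.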

Other algorithms are better than Gaussian elimination, such as one developed in 2006 by Daniel Spielman of Yale University and Miller's former student, Shang-Hua Teng of the University of Southern California, which runs in s[log(s)]^25. But none promise the same speed as the one developed by the Carnegie Mellon team.

"The new linear system solver of Koutis, Miller and Peng is wonderful both for its speed and its simplicity," said Spielman, a professor of applied mathematics and computer science at Yale. "There is no other algorithm that runs at even close to this speed. In fact, it's impossible to design an algorithm that will be too much faster."



Light on Silicon Better Than Copper?

Step aside copper and make way for a better carrier of information -- light.

As good as the metal has been in zipping information from one circuit to another on silicon inside computers and other electronic devices, optical signals can carry much more, according to Duke University electrical engineers. So the engineers have designed and demonstrated microscopically small lasers integrated with thin-film light guides on silicon that could replace the copper in a host of electronic products.

The structures on silicon not only contain tiny light-emitting lasers, but connect these lasers to channels that accurately guide the light to its target, typically another nearby chip or component. This new approach could help engineers who, in their drive to create tinier and faster computers and devices, are studying light as the basis for the next generation information carrier.

The engineers believe they have solved some of the unanswered riddles facing scientists trying to create and control light at such a miniscule scale.

"Getting light onto silicon and controlling it is the first step toward chip scale optical systems," said Sabarni Palit, who this summer received her Ph.D. while working in the laboratory of Nan Marie Jokerst, J.A. Jones Distinguished Professor of Electrical and Computer Engineering at Duke's Pratt School of Engineering.

The results of the team's experiments, which were supported by the Army Research Office, were published online in the journal Optics Letters.

"The challenge has been creating light on such a small scale on silicon, and ensuring that it is received by the next component without losing most of the light," Palit said.

"We came up with a way of creating a thin film integrated structure on silicon that not only contains a light source that can be kept cool, but can also accurately guide the wave onto its next connection," she said. "This integration of components is essential for any such chip-scale, light-based system."

The Duke team developed a method of taking the thick substrate off of a laser, and bonding this thin film laser to silicon. The lasers are about one one-hundredth of the thickness of a human hair. These lasers are connected to other structures by laying down a microscopic layer of polymer that covers one end of the laser and goes off in a channel to other components. Each layer of the laser and light channel is given its specific characteristics, or functions, through nano- and micro-fabrication processes and by selectively removing portions of the substrate with chemicals.

"In the process of producing light, lasers produce heat, which can cause the laser to degrade," Sabarni said. "We found that including a very thin band of metals between the laser and the silicon substrate dissipated the heat, keeping the laser functional."

For Jokerst, reliably enabling individual chips or components to "talk" to each other using light is the next big challenge in the continuing process of packing more processing power into smaller and smaller chip-scale packages.

"To use light in chip-scale systems is exciting," she said. "But the amount of power needed to run these systems has to be very small to make them portable, and they should be inexpensive to produce. There are applications for this in consumer electronics, medical diagnostics and environmental sensing."

The work on this project was conducted in Duke's Shared Materials Instrumentation Facility, which, like similar facilities in the semiconductor industry, allows the fabrication of intricate materials in a totally "clean" setting. Jokerst is the facility's executive director.



Robotic Gripper Runs on Coffee ... and Balloons

The human hand is an amazing machine that can pick up, move and place objects easily, but for a robot, this "gripping" mechanism is a vexing challenge. Opting for simple elegance, researchers from Cornell University, University of Chicago and iRobot have bypassed traditional designs based around the human hand and fingers, and created a versatile gripper using everyday ground coffee and a latex party balloon.

They call it a universal gripper, as it conforms to the object it's grabbing rather than being designed for particular objects, said Hod Lipson, Cornell associate professor of mechanical engineering and computer science. The research is a collaboration between the groups of Lipson, Heinrich Jaeger at the University of Chicago, and Chris Jones at iRobot Corp. It is published Oct. 25 online in Proceedings of the National Academy of Sciences.

"This is one of the closest things we've ever done that could be on the market tomorrow," Lipson said. He noted that the universality of the gripper makes future applications seemingly limitless, from the military using it to dismantle explosive devises or to move potentially dangerous objects, robotic arms in factories, on the feet of a robot that could walk on walls, or on prosthetic limbs.

Here's how it works: An everyday party balloon filled with ground coffee -- any variety will do -- is attached to a robotic arm. The coffee-filled balloon presses down and deforms around the desired object, and then a vacuum sucks the air out of the balloon, solidifying its grip. When the vacuum is released, the balloon becomes soft again, and the gripper lets go.

Jaeger said coffee is an example of a particulate material, which is characterized by large aggregates of individually solid particles. Particulate materials have a so-called jamming transition, which turns their behavior from fluid-like to solid-like when the particles can no longer slide past each other.

This phenomenon is familiar to anyone who has handled vacuum-packed coffee, which is hard as a brick until the package is unsealed.

"The ground coffee grains are like lots of small gears," Lipson said. "When they are not pressed together they can roll over each other and flow. When they are pressed together just a little bit, the teeth interlock, and they become solid."

Jaeger explains that the concept of a "jamming transition" provides a unified framework for understanding and predicting behavior in a wide range of disordered, amorphous materials. All of these materials can be driven into a 'glassy' state where they respond like a solid yet structurally resemble a liquid, and this includes many liquids, colloids, emulsions or foams, as well as particulate matter consisting of macroscopic grains.

"What is particularly neat with the gripper is that here we have a case where a new concept in basic science provided a fresh perspective in a very different area -- robotics -- and then opened the door to applications none of us had originally thought about," Jaeger said.

Eric Brown, a postdoctoral researcher, and Nick Rodenberg, a physics undergraduate, worked with Jaeger on characterizing the basic mechanisms that enable the gripping action. Prototypes of the gripper were built and tested by Lipson and Cornell graduate student John Amend as well as at iRobot.

As for the right particulate material, anything that can jam will do in principle, and early prototypes involved rice, couscous and even ground-up tires. They settled on coffee because it's light but also jams well, Amend said. Sand did better on jamming but was prohibitively heavy. What sets the jamming-based gripper apart is its good performance with almost any object, including a raw egg or a coin -- both notoriously difficult for traditional robotic grippers.



Patterns of Nonverbal Emotional Communication Between Infants and Mothers to Help Scientists Develop a Baby Robot That Learns

To help unravel the mysteries of human cognitive development and reach new frontiers in robotics, University of Miami (UM) developmental psychologists and computer scientists from the University of California in San Diego (UC San Diego) are studying infant-mother interactions and working to implement their findings in a baby robot capable of learning social skills.

The first phase of the project was studying face-to-face interactions between mother and child, to learn how predictable early communication is, and to understand what babies need to act intentionally. The findings are published in the current issue of the journal Neural Networks in a study titled "Applying machine learning to infant interaction: The development is in the details."

The scientists examined 13 mothers and their babies, between 1 and 6 months of age, as they played during weekly five-minute sessions; there were approximately 14 sessions per dyad. The laboratory sessions were videotaped and the researchers applied an interdisciplinary approach to understanding the behavior.

The researchers found that in the first six months of life, babies develop turn-taking skills, the first step to more complex human interactions. According to the study, babies and mothers find a pattern in their play, and that pattern becomes more stable and predictable with age, explains Daniel Messinger, associate professor of Psychology in the UM College of Arts and Sciences and principal investigator of the study.

"As babies get older, they develop a pattern with their moms," says Messinger. "When the baby smiles, the mom smiles; then the baby stops smiling and the mom stops smiling, and the babies learn to expect that someone will respond to them in a particular manner," he says. "Eventually the baby also learns to respond to the mom."

The next phase of the project is to use the findings to program a baby robot, with basic social skills and with the ability to learn more complicated interactions. The robot's name is Diego-San. He is 1.3 meters tall and modeled after a 1-year-old child. The construction of the robot was a joint venture between Kokoro Dreams and the Machine Perception Laboratory at UC San Diego.

The robot will need to shift its gaze from people to objects based on the same principles babies seem to use as they play and develop. "One important finding here is that infants are most likely to shift their gaze if they are the last ones to do so during the interaction," says Messinger. "What matters most is how long a baby looks at something, not what they are looking at."

The process comes full circle. The babies teach the researchers how to program the robot, and in training the robot the researchers get insight into the process of human behavior development, explains Paul Ruvolo, a sixth-year graduate student in the Computer Science Department at UC San Diego and co-author of the study.

"A unique aspect of this project is that we have state-of-the-art tools to study development on both the robotics and developmental psychology side," says Ruvolo. "On the robotics side we have a robot that mechanically closely approximates the complexity of the human motor system and on the developmental psychology side we have a fine-grained motion capture and video recording that shows the mother infant action in great detail," he says. "It is the interplay of these two methods for studying the process of development that has us so excited."

Ultimately, the baby robot will give scientists an understanding of what motivates a baby to communicate and will help answer questions about the development of human learning. This study is funded by the National Science Foundation.

About the University of Miami

The University of Miami's mission is to educate and nurture students, to create knowledge, and to provide service to our community and beyond. Committed to excellence and proud of the diversity of our University family, we strive to develop future leaders of our nation and the world. www.miami.edu



Tiny Brained Bees Solve a Complex Mathematical Problem

Bumblebees can find the solution to a complex mathematical problem which keeps computers busy for days.

Scientists at Royal Holloway, University of London and Queen Mary, University of London have discovered that bees learn to fly the shortest possible route between flowers even if they discover the flowers in a different order. Bees are effectively solving the 'Travelling Salesman Problem', and these are the first animals found to do this.

The travelling salesman must find the shortest route that visits every location exactly once. Computers solve it by comparing the length of all possible routes and choosing the shortest. However, bees solve it without computer assistance using a brain the size of a grass seed.
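The comparison method the article describes is easy to make concrete, and it also shows why computers can take days: n flowers mean n! candidate routes. A small example with made-up coordinates:

    from itertools import permutations
    from math import dist

    nest = (0, 0)
    flowers = [(1, 5), (4, 2), (6, 6), (3, 1)]   # made-up positions

    def route_length(order):
        stops = [nest] + list(order) + [nest]
        return sum(dist(a, b) for a, b in zip(stops, stops[1:]))

    # Brute force: 4 flowers -> 24 routes; 20 flowers -> about 2.4e18 routes
    best = min(permutations(flowers), key=route_length)
    print(best, route_length(best))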

Dr Nigel Raine, from the School of Biological Sciences at Royal Holloway explains: "Foraging bees solve travelling salesman problems every day. They visit flowers at multiple locations and, because bees use lots of energy to fly, they find a route which keeps flying to a minimum."

The team used computer controlled artificial flowers to test whether bees would follow a route defined by the order in which they discovered the flowers or if they would find the shortest route. After exploring the location of the flowers, bees quickly learned to fly the shortest route.

As well as enhancing our understanding of how bees move around the landscape pollinating crops and wild flowers, this research, which is due to be published in The American Naturalist, has other applications. Our lifestyle relies on networks such as traffic on the roads, information flow on the web and business supply chains. By understanding how bees can solve their problem with such a tiny brain we can improve our management of these everyday networks without needing lots of computer time.

Dr Raine adds: "Despite their tiny brains bees are capable of extraordinary feats of behaviour. We need to understand how they can solve the Travelling Salesman Problem without a computer. What short-cuts do they use?"



Brain's Journey from Early Internet to Modern-Day Fiber Optics: Computer Program Shows How Brain's Complex Fiber Tracks Mature

The brain's inner network becomes increasingly more efficient as humans mature. Now, for the first time without invasive measures, a joint study from the Ecole Polytechnique Fédérale de Lausanne (EPFL) and the University of Lausanne (UNIL), in collaboration with Harvard Medical School, has verified these gains with a powerful new computer program.

Reported in the Proceedings of the National Academy of Sciences early online edition, the soon-to-be-released software allows for individualized maps of vital brain connectivity that could aid in epilepsy and schizophrenia research.

"The computer program brings together a series of processes in a 'pipeline' beginning with individual MRIs and ending with a personalized map of the fiber optics-like network in the brain. It takes a whole team of engineers, mathematicians, physicists, and medical doctors to come up with this type of neurobiological understanding," explains Jean-Philippe Thiran, an EPFL professor and head of the Signal Processing Laboratory 5.

A young child's brain is similar to the early Internet with isolated, poorly linked hubs and inefficient connections, say the researchers from EPFL and UNIL. An adult brain, on the other hand, is more like a modern day, fully integrated fiber optic network. The scientists hypothesized that while the brain does not undergo significant topographical changes in childhood, its white matter -- the bundles of nerve cells connecting different parts of the brain -- transitions from weak and inefficient connections to powerful neuronal highways. To test their idea, the team worked with colleagues at Harvard Medical School and Indiana University to map the brains of 30 children between the ages of two and 18.

With MRI, they tracked the diffusion of water in the brain and, in turn, the fibers that carry this water. Thiran and UNIL professor Patric Hagmann, in the Department of Radiology, then created a database of the various fiber cross-sections and graphed the results. In the end, they had a 3D model of each brain showing the thousands of strands that connect different regions.
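The "early Internet versus fiber optics" contrast can be quantified with standard graph measures; global efficiency, the average inverse shortest-path length, is one common choice, though the study's exact metric is not named here. A sketch with toy graphs standing in for young and adult connectomes:

    import networkx as nx

    # Toy connectomes: same number of regions, different integration
    young = nx.path_graph(30)    # chain-like, poorly integrated hubs
    adult = nx.connected_watts_strogatz_graph(30, k=6, p=0.3, seed=1)

    # Average inverse shortest-path length between all node pairs
    print(nx.global_efficiency(young))   # low
    print(nx.global_efficiency(adult))   # markedly higher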

These individual models provide insight not only into how a child's brain develops but also into the structural differences in the brain between left-handed and right-handed people, for example, or between a control and someone with schizophrenia or epilepsy. The models may also help inform brain surgeons of where, or where not, to cut to relieve epilepsy symptoms. Thiran and Hagmann plan to make the tool available early next year free of charge to hospitals around the world.



Thursday, October 21, 2010

Making the Internet Faster

Weaknesses in the architecture behind the Internet mean that surfing can sometimes lead to slow speeds and a tiresome wait for a video to load. Redeveloping the whole architecture of the Internet is an option recently discussed even by Internet pioneers. However, a group of European engineers decided to go the opposite way: monitor traffic and tailor services to meet demand.

There is no single entity behind the Internet. It is made up of different networks that are managed by service providers. These service providers -- or operators -- manage what data is being sent and monitor how much traffic is generated by simple web browsing, multimedia streaming or peer-to-peer file sharing. When the data traffic on a network is too dense, what experts call "bottlenecks" can occur, slowing the delivery of information to your computer and resulting in a slower Internet experience.

A EUREKA-backed project entitled TRAMMS (http://www.celtic-initiative.org/projects/tramms/), for Traffic Measurements and Models in Multi-Service networks, incorporating teams from Sweden, Hungary and Spain, aimed to solve this issue by gaining access to Internet networks run by operators in Sweden and Spain and monitoring traffic over a period of three years. This gave the team an excellent insight into user behaviour, enabling them to measure network traffic accurately so that, in the future, service providers will know how much capacity is needed and can avoid bottlenecks.

Taming the Internet beast

What sets this research project apart is that the team of experts taking part was given access to very sensitive data on Internet traffic measurements. Operators normally guard this information jealously, as it concerns their core business. "Internet traffic measurements are very difficult to find if you are not an operator," says Mr. Andreas Aurelius, coordinator of the project and senior scientist at Acreo AB, one of the project partners. Previous research in this field has normally been limited to campus networks and to a single geographical area. "That is one of the unique things about this project," he says. "We were using data in access networks, not campus networks as most researchers do."

The types of information the project monitored were designed to give an overall view of traffic passing through the networks. This included IP traffic (the flow of data on the Internet), routing decisions (the selection of which path to send network traffic along), quality of service (giving priority to certain applications, such as multimedia) and available bandwidth. This was innovative: the partners developed new measurement tools that give a complete picture of a network. These tools, already targeted for use by many operators, should make web browsing considerably faster.

"For everyday users, this means better quality for multimedia services over the Internet, like streaming for example" says Mr. Aurelius.

Setting new standards to measure Internet traffic

Another question that springs to mind is how the team was able to acquire all of this information without flouting any privacy laws. The answer is that through agreements with the operators, the partners had access to certain information, but not all of it. "The information was post-processed, so it only contained data. It wasn't linked to any customers or IP addresses. We could see what type of application was being used, for example if it was peer to peer, but we couldn't see what file was downloaded," explains Mr. Aurelius. Getting access to such delicate information was a great coup for the project, and as a result the privacy concerns were taken very seriously.

The team managed to collect an astonishing 3,000 terabytes of data over the three years of the project. This mattered because it allowed them to study trends and changes over an extended period amid a continuous influx of information.

The project was also notable for the fact that a number of the processes that were carried out are under consideration by the International Telecommunication Union to become standardised forms of measurement. An example of these standards is BART (Bandwidth Available in Real Time), which monitors available bandwidth between a sender and a receiver in a network such as the Internet.
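
A deliberately naive sketch of the probe-packet intuition behind available-bandwidth estimation (not the actual BART algorithm, which is reported to apply Kalman filtering to trains of probe packets; all numbers below are invented):

PACKET_BITS = 1500 * 8  # bits per probe packet (typical Ethernet frame)

def bottleneck_rate(arrival_gaps):
    # Probes sent back-to-back get spread apart by the slowest link on the
    # path, so the spacing on arrival encodes that link's transmission rate.
    rates = [PACKET_BITS / gap for gap in arrival_gaps if gap > 0]
    return sum(rates) / len(rates)

gaps = [0.0012, 0.0011, 0.0013, 0.0012]  # seconds between probe arrivals
print(f"{bottleneck_rate(gaps) / 1e6:.1f} Mbit/s")

A production tool must additionally filter out queueing noise and cross-traffic, which is where the statistical machinery of methods like BART comes in.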

Getting big companies' attention

Transnational partnerships and multicultural issues didn't seem to trouble the TRAMMS project: some of the partners had already worked together on previous projects, so they gelled well.

"There was a tighter bond between the national partners," says Mr. Aurelius, but there were regular international meetings with the partners which helped foster cooperation. In the end, 11 partners successfully completed the project and the three countries involved complemented each others areas of specialisation making the project a perfect example of international cooperation.

Mr. Aurelius has even come back for more, with a follow-up project entitled IPNQSIS already under way. This project deals with quality of experience in network services such as voice over IP (VoIP), video on demand (VoD), IPTV and so on. These are sectors where network service providers expect huge revenue opportunities and need to improve the quality of their services as perceived by users, with the goal of minimising customer churn while maintaining a competitive edge. This makes the topic an ideal follow-up for the team that worked on TRAMMS.

The TRAMMS project was started under the Celtic Initiative, a EUREKA cluster specialising in communication technology. Mr. Aurelius praised the way Celtic was involved in the set-up phase of the project, finding funding and partners.

The project came to a conclusion at the end of 2009, and the first results are impressive. No fewer than five companies have taken up the methodology used for traffic measurements: Ericsson, Procera, Telnet-RI, NAUDIT and GCM Communications Technology.



Physicists Break Color Barrier for Sending, Receiving Photons

University of Oregon scientists have invented a method to change the color of single photons in a fiber optic cable. The laser-tweaked feat could be a quantum step forward for transferring and receiving high volumes of secured data for future generations of the Internet.

The proof-of-concept experiment is reported in a paper about work led by UO physicist Michael G. Raymer that appeared in the Aug. 27 issue of Physical Review Letters.

In a separate paper also published by the same journal on Sep. 15, Raymer and collaborators at the University of Bath in the United Kingdom tell how they added hydrogen and a short laser burst to a hollow "photonic crystal" fiber cable to create multiple colors, or wavelengths, of light. This paper, Raymer said, provides groundwork for future research in creating ultra-short light pulses.

The single-photon project, in which a dual-color burst of laser light was used to change the color of a separate single photon of light, is directly applicable to future Internet communications technology, said Raymer, the UO's Knight Professor of Liberal Arts and Sciences and author of a newly published textbook "The Silicon Web: The Physics Behind the Internet."

In the computing world, digital data now is contained as individual bits represented by many electrons and is transmitted using pulses of infrared light containing many photons. In quantum computing -- a futuristic technology -- data might be stored in individual electrons and photons. Such quantum techniques could make data 100-percent secure from hackers and expand the ability to search large databases, Raymer said.

"There is a need for more bandwidth, or data rate, in fiber optic networks," he said. "In today's fiber optic lines one frequency of light may carry a phone conversation, while others may carry TV channels or emails, all traveling in separate channels across the Internet. At the level of single photons, we would like to send data in different channels -- colors or wavelengths -- at the same time. Quantum memories based on electrons emit and absorb visible light -- for example, red," he said. "But the optical fibers we want to use -- such as those in the ground now -- are optimized to transmit infrared, not visible light."

In experiments led by Raymer's doctoral student Hayden J. McGuinness, researchers used two lasers to create an intense burst of dual-color light, which when focused into the same optical fiber carrying a single photon of a distinct color, causes that photon to change to a new color. This occurs through a process known as Bragg scattering, whereby a small amount of energy is exchanged between the laser light and the single photon, causing its color to change.
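
The bookkeeping behind that energy exchange is simple to state: the photon's frequency shifts by the difference between the two pump frequencies. A back-of-the-envelope sketch with invented wavelengths (not the values used in the UO experiment):

C = 299_792_458.0  # speed of light, m/s

def freq_hz(wavelength_nm):
    return C / (wavelength_nm * 1e-9)

# Bragg scattering: f_out = f_in - (f_pump1 - f_pump2); here a visible red
# photon is pushed toward the infrared band that deployed fibers carry best.
f_in = freq_hz(650)                    # single red photon (hypothetical)
shift = freq_hz(1064) - freq_hz(1550)  # two pump colors (hypothetical)
f_out = f_in - shift
print(round(C / f_out * 1e9), "nm")    # translated wavelength, ~800 nm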

This process, demonstrated in the UO's Oregon Center for Optics, is called quantum frequency translation. It allows devices that talk to one another using a given color of light to communicate with devices that use a different color.

The research was stimulated by work done earlier by Raymer's collaborators: Colin McKinstrie at Alcatel-Lucent Bell Labs and Stojan Radic at the University of California, San Diego.

"Other researchers have done this frequency translation using certain types of crystals," Raymer said. "Using optical fibers instead creates the translated photons already having the proper shape that allows them to be transmitted in a communication fiber. Another big advantage of our technique is that it allows us to change the frequency of a single photon by any chosen amount. The objective is to convert a single photon from the color that a common quantum memory will deal with into an infrared photon that communication fibers can transmit. At the other end, it has to be converted back into the original color to go into the receiving memory to be read properly."

The second paper published by Raymer's group focused on theoretical and experimental work at UO and at the University of Bath. It showed how to create an optical frequency comb in a hydrogen-filled optical fiber.

The optical frequency comb contains many precisely known colors or wavelengths of light, and can be used to measure the wavelength of light, much as a ruler with many tick marks can be used to measure distance.
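
In symbols, the comb's "tick marks" sit at evenly spaced frequencies, f_n = f_offset + n * f_rep. The sketch below prints a few neighboring ticks using invented comb parameters:

F_REP = 100e6     # line spacing (repetition rate), hypothetical 100 MHz
F_OFFSET = 20e6   # carrier-envelope offset frequency, hypothetical

# Five neighboring comb lines near 300 THz (about 1000 nm, near-infrared);
# an unknown laser is measured by beating it against the nearest line.
for n in range(3_000_000, 3_000_005):
    print(f"n={n}: {(F_OFFSET + n * F_REP) / 1e12:.6f} THz")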

The comb method was co-developed by John Hall of the National Institute of Standards and Technology, who won the Nobel Prize in Physics in 2005 for his work that led to the standard for measuring light frequencies.

By filling the empty air holes of a hollow optical fiber with hydrogen gas, researchers were able to change the color, or frequency, of light passing through. As a short burst of red laser light passed through the gas, the hydrogen molecules were set vibrating, emitting strong light of many colors.

"In the first study, we worked with one photon at a time with two laser bursts to change the energy and color without using hydrogen molecules," he said. "In the second study, we took advantage of vibrating molecules inside the fiber interacting with different light beams. This is a way of using one strong laser of a particular color and producing many colors, from blue to green to yellow to red to infrared."

The laser pulse used was 200 picoseconds long. A picosecond is one-trillionth of a second. Combining the produced light colors in such a fiber could create pulses 200,000 times shorter -- a femtosecond (one quadrillionth of a second).
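
That factor of 200,000 follows from the time-bandwidth tradeoff: pulse duration scales roughly as the inverse of spectral bandwidth, so a spectrum spanning blue to infrared supports femtosecond pulses. A rough check with round numbers (an order-of-magnitude estimate, not the paper's calculation):

C = 299_792_458.0  # speed of light, m/s

# Spectrum from blue (~400 nm) to infrared (~1000 nm), as in the article
bandwidth_hz = C / 400e-9 - C / 1000e-9  # ~4.5e14 Hz
print(f"{1 / bandwidth_hz:.1e} s")       # ~2e-15 s: femtosecond territory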

Such time scales could open the way to study biological processes at the level of atoms or possibly capture so-far-unseen activity in photosynthesis, Raymer said.

Co-authors with McGuinness and Raymer on the single-photon paper were McKinstrie and Radic. The National Science Foundation funded the project.



New Search Method Tracks Down Influential Ideas: Computer Scientists Have Developed a New Way of Tracing the Origins and Spread of Ideas

Princeton computer scientists have developed a new way of tracing the origins and spread of ideas, a technique that could make it easier to gauge the influence of notable scholarly papers, buzz-generating news stories and other information sources.

The method relies on computer algorithms to analyze how language morphs over time within a group of documents -- whether they are research papers on quantum physics or blog posts about politics -- and to determine which documents were the most influential.

"The point is being able to manage the explosion of information made possible by computers and the Internet," said David Blei, an assistant professor of computer science at Princeton and the lead researcher on the project. "We're trying to make sense of how concepts move around. Maybe you want to know who coined a certain term like 'quark,' or search old news stories to find out where the first 1960s antiwar protest took place."

Blei said the new search technique might one day be used by historians, political scientists and other scholars to study how ideas arise and spread.

While search engines such as Google and Bing help people sort through the haystack of information on the Web, their results are based on a complex mix of criteria, some of which -- such as number of links and visitor traffic -- may not fully reflect the influence of a document.

Scholarly journals traditionally quantify the impact of a paper by measuring how often it is cited by other papers, but other collections of documents, such as newspapers, patent claims and blog posts, provide no such means of measuring their influence.

Instead of focusing on citations, Blei and Sean Gerrish, a Princeton doctoral student in computer science, developed a statistical model that allows computers to analyze the actual text of documents to see how the language changes over time. Influential documents in a field will establish new concepts and terms that change the patterns of words and phrases used in later works.

"There might be a paper that introduces the laser, for instance, which is then mentioned in subsequent articles," Gerrish said. "The premise is that one article introduces the language that will be adopted and used in the future."

Previous methods developed by the researchers for tracking how language changes accounted for how a group of documents influenced a subsequent group of documents, but were unable to isolate the influence of individual documents. For instance, those models can analyze all the papers in a certain science journal one year and follow the influence they had on the papers in the journal the following year, but they could not say if a certain paper introduced groundbreaking ideas.

To address this, Blei and Gerrish developed their algorithm to recognize the contribution of individual papers and used it to analyze several decades of reports published in three science journals: Nature, the Proceedings of the National Academy of Sciences and the Association for Computational Linguistics Anthology. Because they were working with scientific journals, they could compare their results with the citation counts of the papers, the traditional measure of scholarly impact.

They found that their results agreed with citation-based impact about 40 percent of the time. In some cases, they discovered papers that had a strong influence on the language of science, but were not often cited. In other cases, they found that papers that were cited frequently did not have much impact on the language used in a field.

They found no citations, for instance, for an influential column published in Nature in 1972 that correctly predicted an expanded role of the National Science Foundation in funding graduate science education.

On the other hand, their model gave a low influence score to a highly cited article on a new linguistics research database that was published in 1993 in the Association for Computational Linguistics Anthology. "That paper introduced a very important resource, but did not present paradigm-changing ideas," Blei said. "Consequently, our language-based approach could not correctly identify its impact."

Blei said their model was not meant as a replacement for citation counts but as an alternative method for measuring influence that might be extended to finding influential news stories, websites, and legal and historical documents.

"We are also exploring the idea that you can find patterns in how language changes over time," he said. "Once you've identified the shapes of those patterns, you might be able to recognize something important as it develops, to predict the next big idea before it's gotten big."



iPhone Images: Good Enough for Medical Use?

Like the rest of society, medicine increasingly relies on digital systems and mobile devices to manage work flow and enhance communications. Eye M.D.s (ophthalmologists) routinely evaluate Internet-transmitted images of patients' eyes as part of diagnosis and treatment. Usually images are viewed at computer workstations with standard display screens. University of Pittsburgh School of Medicine researchers wondered whether handheld devices like the iPhone would work equally well.

In the study, Eye M.D.s from the University of Pittsburgh Eye Center evaluated three aspects of diabetic retinopathy, a potentially blinding disease that affects many people with diabetes, by reviewing images for 55 patients (110 eyes) on both a standard computer monitor and an iPhone. The doctors then made recommendations for follow-up treatment.

"We found high consistency-more than 85 percent agreement-between evaluations based on the standard computer monitor and on the iPhone for all image sections tested," said Dr. Michael J. Pokabla. "There were no significant differences between evaluations and recommendations using the two systems, and the doctors rated the iPhone images as excellent. We conclude that mobile devices like the iPhone can be used to evaluate ophthalmic images," he added.

No Eye M.D. in the House? Videoconferencing Brings the Expert to the Outback

When no ophthalmologist is available on site, some emergency rooms (ERs) in remote medical centers in rural Australia now use videoconferencing to receive diagnosis and treatment advice for their eye injury and ophthalmic illness patients.

A telecommunication link at a major metropolitan teaching eye hospital, the Royal Victorian Eye and Ear Hospital (RVEEH), is connected with four ERs that serve large regions of rural Australia. Dr. Christolyn Raj and her team studied the effectiveness of this approach by reviewing the initial six months of RVEEH videoconference interactions with the regional ERs.

Diagnoses were altered in approximately 60 percent of cases and management plans were changed in about 70 percent of cases following videoconference consultations, study results show. The average consultation time was 10 minutes.

"Videoconferencing is a sustainable, effective way of providing prompt eye management advice to rural emergency doctors," Dr. Raj said. "Although it can never replace face to face clinical care, it is a useful tool to have at one's fingertips and its use will undoubtedly increase in coming years," she added.



Eat Safer: Novel Approach Detects Unknown Food Pathogens

Technologies for rapid detection of bacterial pathogens are crucial to maintaining a secure food supply.

Researchers from the School of Science at Indiana University-Purdue University Indianapolis (IUPUI) and the Bindley Bioscience Center at Purdue University have developed a novel approach to automated detection and classification of harmful bacteria in food. The investigators have designed and implemented a sophisticated statistical approach that allows computers to improve their ability to detect bacterial contamination in tested samples. These formulas drive machine learning, enabling the identification of both known and unknown classes of food pathogens.

The study appears in the October issue of the journal Statistical Analysis and Data Mining.

"The sheer number of existing bacterial pathogens and their high mutation rate makes it extremely difficult to automate their detection," said M. Murat Dundar, Ph.D., assistant professor of computer science in the School of Science at IUPUI and the university's principal investigator of the study. "There are thousands of different bacteria subtypes and you can't collect enough subsets to add to a computer's memory so it can identify them when it sees them in the future. Unless we enable our equipment to modify detection and identification based on what it has already seen, we may miss discovering isolated or even major outbreaks."

To detect and identify pathogens such as listeria, staphylococcus, salmonella, vibrio and E. coli from the optical properties of their colonies, the researchers used a prototype laser scanner developed at Purdue University. Without the new machine-learning enhancement, the light-scattering sensor used for classification of bacteria is unable to detect classes of pathogens not explicitly programmed into the system's identification procedure.
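
One way to picture the open-set problem (a hedged sketch on synthetic two-dimensional features, not the scanner's actual scatter signatures or the team's statistical model) is to fit a model per known class and flag anything that no model explains well:

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
training = {                          # synthetic 2-D feature clouds
    "listeria":   rng.normal((0.0, 0.0), 0.5, size=(100, 2)),
    "salmonella": rng.normal((4.0, 4.0), 0.5, size=(100, 2)),
}
models = {name: multivariate_normal(x.mean(axis=0), np.cov(x.T))
          for name, x in training.items()}

def classify(sample, threshold=1e-4):
    # Assign the best-scoring known class, unless even that score is so
    # low that the sample is better treated as a new, unknown class.
    scores = {name: m.pdf(sample) for name, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else "unknown class"

print(classify([0.2, -0.1]))    # falls inside the listeria cloud
print(classify([10.0, -8.0]))   # matches nothing known -> unknown class

A real system would go a step further and learn new clusters from the flagged samples over time, which is roughly what the nonexhaustive learning described in the study enables.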

"We are very excited because this new machine-learning approach is a major step towards a fully automated identification of known and emerging pathogens in real time, hopefully circumventing full-blown, food-borne illness outbreaks in the near future. Ultimately we would like to see this deployed to tens of centers as part of a national bio-warning system," said Dundar.

"Our work is not based on any particular property of light scattering detection and therefore it can potentially be applied to other label-free techniques for classification of pathogenic bacteria, such as various forms of vibrational spectroscopy," added Bartek Rajwa, Ph.D., the Purdue principal investigator of the study.

Dundar and his colleagues believe this methodology can be expanded to the analysis of blood and other biological samples as well.

This study was supported by a grant from the National Institute of Allergy and Infectious Diseases.


