The Fourier transform, which splits a complicated signal into individual pure frequencies, was devised over 200 years ago but only became widely used after the development of an algorithm called the fast Fourier transform in the 1960s. Now, computer scientist Dina Katabi and her colleagues at the Massachusetts Institute of Technology have developed a Fourier transform algorithm that is potentially hundreds of times faster still.
Splitting a signal using the Fourier transform reveals how much each frequency component contributes to an overall sound or image. In some cases, a wide range of frequencies contribute equally, but often a small number dominate. The MIT algorithm improves performance on signals of the latter type, known as sparse signals.
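Sparsity is easy to see numerically. In this minimal Python sketch (the signal, frequencies, and 1 per cent threshold are illustrative choices, not from the researchers' work), a signal built from just three pure tones yields a spectrum in which only a handful of the 1024 frequency bins carry any real energy:

```python
import numpy as np

# A signal made of three pure tones: its Fourier transform is "sparse",
# with only a few significant frequency components out of n possible ones.
n = 1024
t = np.arange(n)
signal = (np.sin(2 * np.pi * 50 * t / n)
          + 0.5 * np.sin(2 * np.pi * 120 * t / n)
          + 0.25 * np.sin(2 * np.pi * 300 * t / n))

spectrum = np.abs(np.fft.fft(signal))
# Count frequency bins carrying more than 1% of the peak magnitude.
dominant = int(np.sum(spectrum > 0.01 * spectrum.max()))
print(dominant, "of", n, "frequency bins are significant")  # prints: 6 of 1024 ...
```

Each real tone occupies two bins (a positive and a negative frequency), so three tones light up six bins; the other 1018 are essentially zero. A Wi-Fi-style signal, by contrast, would spread energy across most of the bins.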
"Many natural signals have a sparse nature," says Katabi, including images and sounds. By contrast, human transmissions such as Wi-Fi signals tend to be non-sparse because they are designed to carry as much information as possible by making full use of available frequencies.
The team discovered they could quickly identify the important frequencies in a sparse signal by combining two existing signal filters to create a new, more efficient one. This filter works by dividing the range of frequencies into sets, then identifying which sets contain important frequencies.
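The set-dividing idea can be illustrated with a toy Python sketch. Note the hedging here: this version computes a full spectrum first, purely for clarity, whereas the point of the MIT filter is to estimate bucket energies directly without ever doing the full transform; the bucket count and threshold below are arbitrary illustrative choices:

```python
import numpy as np

n, n_buckets = 1024, 16
t = np.arange(n)
# Sparse signal: two dominant tones at frequencies 50 and 300.
signal = np.sin(2 * np.pi * 50 * t / n) + np.sin(2 * np.pi * 300 * t / n)

# For illustration only: compute the full spectrum, then group the
# positive-frequency bins into 16 contiguous sets and sum each set's energy.
spectrum = np.abs(np.fft.fft(signal))[: n // 2]
buckets = spectrum.reshape(n_buckets, -1).sum(axis=1)

# Sets whose energy stands out are the ones containing an important frequency.
important = np.where(buckets > 0.1 * buckets.max())[0]
print("sets with significant energy:", important)
```

With 512 positive-frequency bins split into 16 sets of 32, the tone at frequency 50 lands in set 1 and the tone at 300 in set 9, so only those two sets need further attention.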
Having found the key sets, the team had to identify the exact important frequency within each set. This is usually done by subdividing the set repeatedly until only the important signal remains.
Rapid sampling
Instead, the team applied another signal-processing technique normally used to improve wireless communications. This is based on the idea that the most important frequency will modulate all the other signals in the set. Sampling the set rapidly at different times reveals the frequency of this dominating signal. The new technique can process sparse signals up to 10,000 times faster than the old algorithm.
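The idea of recovering a dominant frequency from time-shifted samples can be sketched in Python. This is a simplification under strong assumptions (the set has already been filtered down to a single complex tone, with no noise); it shows only the core observation that the phase difference between two adjacent samples of a pure tone encodes its frequency:

```python
import numpy as np

n = 1024
f_true = 300
t = np.arange(n)
# Assume filtering has isolated a single dominant tone (a complex exponential).
x = np.exp(2j * np.pi * f_true * t / n)

# Sampling at two adjacent times: the phase advance from x[0] to x[1]
# is 2*pi*f/n, so the dominant frequency can be read off directly,
# with no need to subdivide the set repeatedly.
phase_step = np.angle(x[1] * np.conj(x[0]))
f_est = round(phase_step * n / (2 * np.pi)) % n
print("estimated frequency:", f_est)
```

Two samples suffice here because the tone is clean; in practice more samples at different offsets are needed to cope with noise, but the cost still grows with the number of important frequencies rather than with the total signal length.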
Katabi and colleagues are now investigating how their algorithm could improve existing compression techniques, such as those used in smartphones to store video and audio. "You make an algorithm faster by doing fewer operations, and if the algorithm does fewer operations then it consumes less power," she says. So more efficient compression techniques could make your smartphone's battery last longer.
The team presented the work at the Symposium on Discrete Algorithms in Kyoto, Japan, this week.