

Researchers Just Discovered a Way to Make Supercomputing Way More Powerful
Source: Sage Lazzaro


Supercomputers at Facebook’s Luleå data center. (Facebook)

You probably thought there was no such thing as being too accurate, but when it comes to supercomputing, that’s exactly what’s holding us back.

Scientists have discovered that sacrificing a tiny bit of unneeded accuracy when running sophisticated computer models would save significant amounts of energy and, consequently, allow us to overcome the current major limitations of supercomputing.

According to a recent study out of Rice University and the University of Illinois at Urbana-Champaign, it’s typical to round to the seventh or eighth decimal place, but that much precision rarely outweighs its cost. While rounding to only the third or fourth decimal place might seem like a sacrifice in quality, the exact opposite is true. Using a common numerical analysis tool, the Newton-Raphson method developed by Isaac Newton and Joseph Raphson in the 1600s, the researchers demonstrated that opting for an inexact approach could improve the solution’s quality by more than three orders of magnitude (or 1,000 times) for a fixed energy cost.
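To make the idea concrete, here is a minimal Python sketch (not the researchers’ actual code) of a mixed-precision Newton-Raphson iteration: the cheap early steps run in single precision, and only the final steps are promoted to double precision once the iterate is close to the root. The function name and step counts are illustrative assumptions.

```python
import numpy as np

def newton_sqrt(a, steps_single=3, steps_double=2):
    """Approximate sqrt(a) with Newton-Raphson, spending most steps in float32."""
    x = np.float32(a) / np.float32(2)             # crude starting guess
    for _ in range(steps_single):                 # cheap low-precision iterations
        x = np.float32(0.5) * (x + np.float32(a) / x)
    x = np.float64(x)                             # promote only once near the root
    for _ in range(steps_double):                 # a few high-precision iterations
        x = 0.5 * (x + a / x)
    return x

approx = newton_sqrt(2.0)
print(approx, abs(approx - np.sqrt(2.0)))         # error near machine precision
```

In this toy example the single-precision steps do most of the work of getting close to the answer; the argument in the study is that those cheaper steps free up energy that can be spent either as savings or, for the same budget, on more computation and a better result.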

“In simple terms, it is analogous to rebalancing an investment portfolio,” Marc Snir, a computer science professor at the University of Illinois at Urbana-Champaign, told Futurity. “If you have one investment that’s done well but has maxed out its potential, you might want to reinvest some or all of those funds to a new source with more potential for a much better return on investment.”




Weather prediction can heavily benefit from inexactness.


In the paper’s conclusion, the authors admit the method sounds paradoxical, but write that it will reduce error in the long run and allow supercomputer performance to keep improving overall.

“Our results essentially show that one can reduce energy consumption by a factor of 2.x, without affecting the quality of the result, by smarter use of single precision. For decades, increased supercomputer performance has meant more double precision floating point operations per second. This brute force approach is going to hit a brick wall pretty soon.”
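One well-known illustration of “smarter use of single precision” (a common HPC technique, not necessarily the exact approach used in the paper) is mixed-precision iterative refinement: solve a linear system cheaply in float32, then recover double-precision accuracy by correcting with residuals computed in float64. The sketch below is an assumption-laden example using NumPy and an arbitrary well-conditioned test matrix.

```python
import numpy as np

def mixed_precision_solve(A, b, refinements=3):
    """Solve A x = b mostly in float32, refining toward float64 accuracy."""
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)      # cheap low-precision solve
    for _ in range(refinements):
        r = b - A @ x                                      # residual in full precision
        dx = np.linalg.solve(A32, r.astype(np.float32))    # cheap correction step
        x += dx.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)   # well-conditioned test matrix
b = rng.standard_normal(200)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))                           # residual near float64 level
```

A production code would factor the float32 matrix once and reuse the factors for every correction; that is what makes the low-precision arithmetic pay off in time and energy.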

One important area where this could have a huge impact is weather prediction and climate modeling, which the authors describe as perhaps the most important scientific domains relying on supercomputing. Their earlier work showed that inexactness (phase I of the approach described in the recent study) benefits weather prediction models, cutting energy consumption while preserving the quality of the prediction. This has spurred interest among climate scientists, who know that serious advances in model quality will require weather and climate models to be run at much higher resolutions than today’s computational energy budgets allow.


