Monday, March 9, 2009

San Diego Supercomputer Center builds high-performance computer with solid-state drives

The San Diego Supercomputer Center has built a high-performance computer with solid-state drives, which the center says could help solve science problems faster than systems with traditional hard drives.

The flash drives will provide faster data throughput, which should help the supercomputer analyze data an "order-of-magnitude faster" than hard-drive-based supercomputers, said Allan Snavely, associate director at SDSC, in a statement. SDSC is part of the University of California, San Diego.

"This means it can solve data-mining problems that are looking for the proverbial 'needle in the haystack' more than 10 times faster than could be done on even much larger supercomputers that still rely on older 'spinning disk' technology," Snavely said.

Solid-state drives, or SSDs, store data on flash memory chips. Unlike hard drives, which store data on magnetic platters, SSDs have no moving parts, making them more rugged and less vulnerable to failure. SSDs are also considered less power-hungry.

Flash memory provides faster data transfer times and lower latency than hard drives, said Michael Norman, interim director of SDSC, in the statement. New instruments such as sensor networks and simulators are feeding large volumes of data to the supercomputer, and flash memory lets that data be stored and analyzed more quickly.
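The latency difference Norman describes is easy to see with a small random-read test. A minimal sketch, assuming the two file paths below point at large files on an SSD and a hard drive (the paths are placeholders, not details from the article):

    import os
    import random
    import time

    def random_read_ms(path: str, reads: int = 1000, block: int = 4096) -> float:
        """Average latency of random 4 KB reads from a file, in milliseconds."""
        size = os.path.getsize(path)
        fd = os.open(path, os.O_RDONLY)
        try:
            start = time.perf_counter()
            for _ in range(reads):
                os.lseek(fd, random.randrange(max(size - block, 1)), os.SEEK_SET)
                os.read(fd, block)
            return (time.perf_counter() - start) / reads * 1000
        finally:
            os.close(fd)

    # Use files much larger than RAM, or the OS page cache will hide
    # the device latency and both numbers will look the same.
    print("ssd:", random_read_ms("/mnt/ssd/testfile"), "ms")
    print("hdd:", random_read_ms("/mnt/hdd/testfile"), "ms")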

The system uses Intel's SATA solid-state drives, with four special I/O nodes serving up 1TB of flash memory to any other node. The university did not immediately respond to a query about the total available storage in the supercomputer.

SSDs could be a better storage technology than hard drives for time-sensitive scientific research, said Jim Handy, director at Objective Analysis, a semiconductor research firm. The quicker read and write times of SSDs compared to hard drives help deliver results faster, he said.

SSDs are also slowly making their way into larger server installations that handle online transaction processing, such as stock-market trades and credit-card transactions, he said.

Many data centers also employ a mix of SSDs and hard drives to store data, Handy said. Data that is frequently accessed is stored on SSDs for faster processing, while hard drives hold data that is needed less often.
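A minimal sketch of the hot/cold tiering Handy describes: new data lands on the slow (hard-drive) tier, and keys read often enough are promoted to the fast (SSD) tier. The class, tier names, and threshold here are invented for illustration; real systems migrate data asynchronously and in bulk:

    from collections import Counter

    class TieredStore:
        """Toy key-value store with a fast (SSD) and slow (HDD) tier."""

        def __init__(self, hot_threshold: int = 3):
            self.ssd = {}             # fast tier: frequently accessed data
            self.hdd = {}             # slow tier: everything else
            self.hits = Counter()     # access count per key
            self.hot_threshold = hot_threshold

        def put(self, key, value):
            self.hdd[key] = value     # new data starts on the cold tier

        def get(self, key):
            self.hits[key] += 1
            if key in self.ssd:
                return self.ssd[key]
            value = self.hdd[key]
            if self.hits[key] >= self.hot_threshold:
                self.ssd[key] = self.hdd.pop(key)   # promote hot data
            return value

    store = TieredStore()
    store.put("trade:42", {"qty": 100})
    for _ in range(3):
        store.get("trade:42")         # third read promotes the key
    assert "trade:42" in store.ssd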

"Hard drives are still the most cost-effective way of hanging on to data," Handy said. But for scientific research and financial services, the results are driven by speed, which makes SSDs makes worth the investment.

(This news sourced from http://www.goodgearguide.com.au)
