Wednesday, December 30, 2009

NumaConnect SMP Adapter Card

Numascale's SMP Adapter is an HTX card designed for commodity servers with AMD processors that provide an HTX connector to their HyperTransport interconnect.


Highlights
  • Scalable, directory-based cache coherence protocol
  • Write-back cache for remote data: 2/4/8/(16) GB options, standard SDIMMs
  • ECC protected with background scrubbing of soft errors
  • 16 coherent + 16 non-coherent outstanding memory transactions
  • Support for single-image or multi-image OS partitions
  • 3-way on-chip distributed switching for 1D, 2D or 3D torus topologies (see the sketch below)
  • 30 GB/s switching capacity per node
  • HTX connected – 6.4 GB/s
  • <20 W power dissipation
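
To make the torus option concrete, here is a minimal Python sketch (an illustration, not Numascale's implementation) of how wrap-around neighbors fall out of torus coordinates; the three switch dimensions give each node two links per dimension:

    # Illustrative sketch only, not Numascale's implementation: a node's
    # wrap-around neighbors in an n-dimensional torus.
    def torus_neighbors(coord, dims):
        """Return the 2 * len(dims) neighbors of coord in a torus of shape dims."""
        neighbors = []
        for axis, size in enumerate(dims):
            for step in (-1, 1):
                n = list(coord)
                n[axis] = (n[axis] + step) % size  # modulo wrap: a torus, not a mesh
                neighbors.append(tuple(n))
        return neighbors

    # In a 4x4x4 3D torus every node has six neighbors, two per dimension:
    print(torus_neighbors((0, 0, 0), (4, 4, 4)))
    # [(3, 0, 0), (1, 0, 0), (0, 3, 0), (0, 1, 0), (0, 0, 3), (0, 0, 1)]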


For a detailed review, see their web pages.
The PDF manual is here.

Thursday, December 10, 2009

PRACE is Ready for the Next Phase



PRACE is eligible to apply for a grant under the European Union’s 7th Framework Programme to start the implementation phase.

In October 2009, PRACE demonstrated to a panel of external experts and the European Commission that the project made “satisfactory progress in all areas” and “that PRACE has the potential to have real impact on the future of European HPC, and the quality and outcome of European research that depends on HPC services”. Two months before the end of the project, it became eligible to apply for a grant of 20 million euros for the implementation phase of the permanent PRACE Research Infrastructure.

The future PRACE Research Infrastructure (RI) will consist of several world-class top-tier centers, managed as a single European entity. The infrastructure to be created by PRACE will form the top level of the European HPC ecosystem. It will offer competent support and a spectrum of system architectures to meet the requirements of different scientific domains and applications. It is expected that the PRACE RI will provide European scientists and technologists with world-class leadership supercomputers with capabilities equal to or better than those available in the USA, Japan, China, India and elsewhere in the world, in order to stay at the forefront of research.

About PRACE:  The Partnership for Advanced Computing in Europe (PRACE) prepares the creation of a persistent pan-European HPC service, consisting of several tier-0 centres providing European researchers with access to capability computers and forming the top level of the European HPC ecosystem. PRACE is a project funded in part by the EU’s 7th Framework Programme (FP7/2007-2013) under grant agreement n° RI-211528.

Wednesday, November 18, 2009

The winner is Jaguar!

The 34th TOP500 List was released on November 17th in Portland, Oregon, at the SC09 Conference.

A PDF version of the TOP500 Report distributed during SC09 can be found here.

In its third run to knock the IBM supercomputer nicknamed “Roadrunner” off the top perch on the TOP500 list of supercomputers, the Cray XT5 supercomputer known as Jaguar finally claimed the top spot on the 34th edition of the closely watched list.

Jaguar, which is located at the Department of Energy’s Oak Ridge Leadership Computing Facility and was upgraded earlier this year, posted a performance of 1.75 petaflop/s running the Linpack benchmark. Jaguar roared ahead with new processors that bring its theoretical peak capability to 2.3 petaflop/s across nearly a quarter of a million cores. One petaflop/s refers to one quadrillion calculations per second.
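
As a back-of-the-envelope check, theoretical peak is simply cores × clock rate × flops per cycle. A minimal Python sketch, assuming roughly 224,256 Opteron cores at 2.6 GHz with 4 double-precision flops per cycle (these inputs are assumptions, not figures from the announcement):

    # Back-of-the-envelope peak estimate; inputs are assumed, not quoted.
    cores = 224_256          # assumption: "nearly a quarter of a million cores"
    clock_hz = 2.6e9         # assumption: 2.6 GHz AMD Opteron
    flops_per_cycle = 4      # assumption: 2 adds + 2 multiplies per cycle (SSE2, double precision)

    peak_flops = cores * clock_hz * flops_per_cycle
    print(f"theoretical peak ~ {peak_flops / 1e15:.2f} petaflop/s")  # ~2.33, close to the quoted 2.3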


Kraken, another upgraded Cray XT5 system at the National Institute for Computational Sciences/University of Tennessee, claimed the No. 3 position with a performance of 832 teraflop/s (trillions of calculations per second).

At No. 4 is the most powerful system outside the U.S. -- an IBM BlueGene/P supercomputer located at the Forschungszentrum Juelich (FZJ) in Germany. It achieved 825.5 teraflop/s on the Linpack benchmark and was No. 3 in June 2009.

Rounding out the top 5 positions is the new Tianhe-1 (meaning River in Sky) system installed at the National Super Computer Center in Tianjin, China, which will be used to address research problems in petroleum exploration and the simulation of large aircraft designs. The highest-ranked Chinese system ever, Tianhe-1 is a hybrid design with Intel Xeon processors and AMD GPUs used as accelerators. Each node consists of two AMD GPUs attached to two Intel Xeon processors.

Tuesday, November 10, 2009

What to do with an old nuclear silo?

Question: What do you do with a 36-foot-wide, 65-foot-high nuclear-grade silo with two-foot-thick concrete walls?
Answer: Turn it into an HPC center!


A supercomputing center in Quebec has transformed a huge concrete silo into the CLUMEQ Colossus, a data center filled with HPC clusters.

The silo, which is 65 feet high with two-foot-thick concrete walls, previously housed a Van de Graaff accelerator dating to the 1960s. It was redesigned to house three floors of server cabinets, arranged so cold air can flow from the outside of the facility through the racks and return via an interior 'hot core'. The construction and operation of this unique facility are detailed in a presentation from CLUMEQ.

Link: http://www.datacenterknowledge.com/archives/2009/12/10/wild-new-design-data-center-in-a-silo/

(This news was sourced from slashdot.com)

Sunday, July 19, 2009

ScaleMP announces vSMP Foundation for Cluster


The vSMP Foundation for Cluster™ solution provides a simplified compute architecture for high-performance clusters - it hides the InfiniBand fabric, offers built-in high-performance storage as a cluster-filesystem replacement and reduces the number of operating systems to one, making it much easier to administer. This solution is ideally suited for smaller compute implementations in which management tools and skills may not be readily available.

The target customers for the Cluster product are those with initial high performance cluster implementations who are concerned with the complexity of creation and management of the cluster environment.

Key Advantages:
  • Simplified installation and management of high-performance clusters;
  • Consolidates multiple nodes and operating systems into one;
  • Eliminates the need for a separate cluster filesystem;
  • Stronger entry-level value proposition – scale-up growth opportunities with no additional overhead.
Detailed product information is available on their web pages.
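
The practical effect of the single system image is that ordinary operating-system interfaces report the aggregated resources of the whole cluster. A minimal Linux sketch (generic, not ScaleMP's tooling) of what one OS image sees:

    # Generic Linux sketch, not ScaleMP tooling: under a single system image,
    # the cores and memory of every aggregated node appear in one OS view.
    import os

    cores = os.cpu_count()  # all aggregated cores are visible to the one OS
    with open("/proc/meminfo") as f:
        mem_kb = int(next(line for line in f if line.startswith("MemTotal")).split()[1])

    print(f"this OS image sees {cores} cores and {mem_kb / 2**20:.1f} GiB of RAM")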

Monday, March 9, 2009

San Diego Supercomputer Center has built a high-performance computer with solid-state drives

The San Diego Supercomputer Center has built a high-performance computer with solid-state drives, which the center says could help solve science problems faster than systems with traditional hard drives.

The flash drives will provide faster data throughput, which should help the supercomputer analyze data an "order-of-magnitude faster" than hard-drive-based supercomputers, said Allan Snavely, associate director at SDSC, in a statement. SDSC is part of the University of California, San Diego.

"This means it can solve data-mining problems that are looking for the proverbial 'needle in the haystack' more than 10 times faster than could be done on even much larger supercomputers that still rely on older 'spinning disk' technology," Snavely said.

Solid-state drives, or SSDs, store data on flash memory chips. Unlike hard drives, which store data on magnetic platters, SSDs have no moving parts, making them rugged and less vulnerable to failure. SSDs are also considered to be less power-hungry.

Flash memory provides faster data transfer and better latency than hard drives, said Michael Norman, interim director of SDSC, in the statement. New hardware such as sensor networks and simulators is feeding large volumes of data to the supercomputer, and flash memory lets it store and analyze that data more quickly.
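
The latency gap is easy to demonstrate with a small random-read test. A minimal Python sketch (the file path is hypothetical; run it against a large file on an SSD and again on a hard drive to compare):

    # Random-read latency sketch; /data/testfile is a hypothetical path to a
    # large file on the device under test. The OS page cache will flatter
    # repeat runs, so use a file much larger than RAM for honest numbers.
    import os, random, time

    PATH = "/data/testfile"   # hypothetical: a multi-GB file on the device under test
    BLOCK = 4096              # 4 KiB reads, the classic random-access test size
    N = 1000

    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    start = time.perf_counter()
    for _ in range(N):
        offset = random.randrange(0, size - BLOCK)
        os.pread(fd, BLOCK, offset - offset % BLOCK)  # block-aligned random read
    elapsed = time.perf_counter() - start
    os.close(fd)

    print(f"average random-read latency: {elapsed / N * 1000:.3f} ms")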

The system uses Intel's SATA solid-state drives, with four special I/O nodes serving up 1TB of flash memory to any other node. The university did not immediately respond to a query about the total available storage in the supercomputer.

SSDs can be a better storage technology than hard drives because scientific research is time-sensitive, said Jim Handy, director at Objective Analysis, a semiconductor research firm. The quicker read and write times of SSDs help deliver results faster, he said.

SSDs are also slowly making their way into larger server installations that do online transaction processing, like stock market trades and credit-card transactions, he said.

Many data centers also employ a mix of SSDs and hard drives to store data, Handy said. Frequently accessed data is stored on SSDs for faster processing, while hard drives hold data that is needed less often.

"Hard drives are still the most cost-effective way of hanging on to data," Handy said. But for scientific research and financial services, the results are driven by speed, which makes SSDs makes worth the investment.

(This news was sourced from http://www.goodgearguide.com.au)
