The National Center for Supercomputing Applications (NCSA) will soon deploy a new highly parallel, shared-memory supercomputer called Ember. With a peak performance of 16 teraflops, Ember doubles the performance of its predecessor, the five-year-old Cobalt system.
Ember will be available to researchers through the National Science Foundation's TeraGrid until that program concludes in March 2011 and then will be allocated through its successor, the eXtreme Digital program.
(The full story is available at NCSA's site.)
Friday, March 5, 2010
Tuesday, March 2, 2010
Fixstars Launches Linux for CUDA

The problem is that the majority of future accelerated HPC deployments are destined to be GPU-based rather than Cell-based. While Cell had a brief fling with HPC stardom as the processor that powered the first petaflop system -- the Roadrunner supercomputer at Los Alamos National Lab -- IBM has signaled it will not continue development of the Cell architecture for HPC applications. With NVIDIA's steady evolution of its HPC portfolio, propelled by the popularity of its CUDA development environment, general-purpose GPU computing is now positioned to be the most widely used accelerator technology for high performance computing. The upcoming "Fermi" GPU-based boards (Q3 2010) substantially increase the GPU's double precision capability, add error-corrected memory, and include hardware support for C++ features.
Which brings us back to Fixstars. The company's new YDEL for CUDA offering is aimed squarely at filling what it sees as a growing market for turnkey GPU-accelerated HPC on x86 clusters. Up until now, customers either built their own Linux-CUDA environments or relied upon system OEMs to provide the OS integration as part of the system. That might be fine for experimenters and big national labs who love to tweak Linux and don't mind shuffling hardware drivers and OS kernels, but commercial acceptance will necessitate a more traditional model.
One of the challenges is that Red Hat and other commercial Linux distributions are generally tuned for mass-market enterprise applications: large database and Web servers, in particular. In that type of setup, HPC workloads won't run as efficiently as they could. With YDEL, Fixstars modified the Red Hat kernel to support a more supercomputing-like workload. The result, according to Owen Stampflee, Fixstars' Linux Product Manager (and Terra Soft alum), is a 5 to 10 percent performance improvement on HPC apps compared to other commercial Linux distributions.
Fixstars is selling YDEL for CUDA as a typical enterprise distribution, which in this case means the CUDA SDK, hardware drivers, and Linux kernel pieces are bundled together and preconfigured for HPC. A product license includes Fixstars support for both Linux and CUDA. The product contains multiple versions of CUDA, which can be selected at runtime via a setting in a configuration file or an environment variable. In addition, YDEL comes with an Eclipse-based graphical IDE for CUDA programming. To complete the picture, Fixstars also offers end-user training and seminars on CUDA application development.
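To make the runtime-selection idea concrete, here is a minimal sketch of how an environment-variable-driven toolkit switch typically works. The variable name YDEL_CUDA_VERSION and the install paths below are assumptions for illustration, not documented YDEL settings:

```shell
# Hypothetical sketch: pick a bundled CUDA toolkit version from an
# environment variable, falling back to a default if it is unset.
CUDA_VERSION="${YDEL_CUDA_VERSION:-2.3}"

# Point the usual toolchain variables at the chosen toolkit tree
# (paths are assumed for illustration).
CUDA_HOME="/usr/local/cuda-${CUDA_VERSION}"
PATH="${CUDA_HOME}/bin:${PATH}"
LD_LIBRARY_PATH="${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}"
export CUDA_HOME PATH LD_LIBRARY_PATH

echo "Using CUDA toolkit at ${CUDA_HOME}"
```

A configuration-file mechanism would amount to the same thing, with the distribution reading the version setting at boot or login and exporting these variables on the user's behalf.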
(This news is summarized from HPCwire; the full text can be read at their site.)