Thursday, February 25, 2010

Helsinki to recycle excess heat from data center

Helsinki public energy company Helsingin Energia will recycle heat from a new data center to help generate energy and deliver hot water for the Finnish capital city.

The recycled heat from the data center, being built by IT and telecom services company Academica, could add about 1 percent to the total energy generated by Helsingin Energia's system in the summer.

The data center is located in an old bomb shelter and is connected to Helsingin Energia's district heating system, which works by pumping hot water through a network of pipes to households in Helsinki.

The plan calls for the data center to first get cold water from Helsingin Energia's system. The water then goes through the data center to cool down the equipment. Next, the now warmer water flows to a pump that heats the water and sends it into the district heating system. The pump also cools the water and sends it back to the data center.

What makes the heat pump special is its ability to both heat and cool water; it is also very efficient.
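To get a rough sense of the scale involved, the heat the cooling water carries off follows Q = ṁ·c·ΔT. The sketch below uses invented flow-rate and temperature figures, not numbers from the article:

```python
# Rough estimate of the heat a water-cooled data center can donate to a
# district heating loop. All input figures are illustrative assumptions.

WATER_HEAT_CAPACITY = 4186.0  # J/(kg*K), specific heat of water

def recoverable_heat_kw(flow_kg_per_s: float, t_in_c: float, t_out_c: float) -> float:
    """Heat carried away by the cooling water, Q = m_dot * c * dT, in kW."""
    return flow_kg_per_s * WATER_HEAT_CAPACITY * (t_out_c - t_in_c) / 1000.0

# Assume 10 kg/s of water warmed from 10 C to 25 C by the servers.
print(recoverable_heat_kw(10.0, 10.0, 25.0))  # ~628 kW of low-grade heat
```

A heat pump would then lift this low-grade heat to district-heating temperatures before it enters the network.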

The data center will go live at the end of January, and will at first measure 500 square meters.

Academica had always planned to use water to cool the data center and lower electricity bills for customers. The idea to recycle excess energy came later. However, recycling could end up playing an important role.

(This story is summarized from ITworld; the full story can be found on their web pages.)

Saturday, February 20, 2010

JRT Offering the Tesla Workstation

JRT's new Tesla Workstation delivers accelerated multi-core processing power. Designed for groundbreaking performance and power efficiency in compute- and graphics-intensive environments, the new JRT Tesla Workstation lets you create, design, render, and analyze without compromise.

The new JRT Tesla Workstation offers outstanding performance, incredible graphics, and up to 64 GB of memory for technical and graphics-intensive computing. The Tesla Workstation supports up to two 64-bit Dual/Quad-Core Intel Xeon 5200/5400 series processors and the full NVIDIA Quadro graphics and Tesla accelerator product lines. It is designed with an all-new performance architecture for research-critical, compute-intensive, and graphically demanding workstation environments.

The JRT Tesla Workstation offers the latest high-end graphics cards, which deliver high-level graphics performance for the most demanding visual applications in industries such as oil and gas, CAD, animation, and 3D modeling.

Key Features

  • Dual / Quad-Core Intel Xeon Processors
  • Up to 64 GB of Memory
  • Dual PCI Express x16 Slot
  • High Performance NVIDIA Quadro Graphics Card
  • Up to 8 TB of Hot-Swap Storage
  • Whisper Quiet Workstation (28 dB)
  • NVIDIA Tesla C1060 Computing Processor

(For more information visit the product pages)

Thursday, February 18, 2010

A Strategic Application Collaboration for Molecular Dynamics

Over the last two decades, an increasing number of chemists have turned to the computer to predict the results of experiments beforehand or to help interpret the results of experiments. Skepticism on the part of laboratory chemists has gradually evaporated as the computational results have made contact with, and even anticipated, experimental findings. When the 1998 Nobel Prize in Chemistry was awarded to two scientists, Walter Kohn and John Pople, who originated some of the first successful methods in computational chemistry, the award was seen as an affirmation of the value of computational chemistry to the field of chemistry.

"We've come a long way," said Peter Kollman of the Department of Pharmaceutical Chemistry at UC San Francisco (UCSF). "But while we've come a long way, we can see that we've still got a long way to go."

Now, as part of an NPACI Strategic Application Collaboration, AMBER's performance is being improved by 50 percent to 65 percent.

AMBER stands for Assisted Model Building with Energy Refinement. The code's successes include its use to study protein folding, to study the relative free energies of binding of two ligands to a given host (or two hosts to a given ligand), to investigate the sequence-dependent stability of proteins and nucleic acids, and to find the relative solvation free energies of different molecules in various liquids. Hundreds of contributions to the scientific literature reflect the use of AMBER.
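One of those applications, relative binding free energies, comes down to simple thermodynamic-cycle bookkeeping once the simulations are done. The sketch below shows that bookkeeping with invented energies, not actual AMBER output:

```python
# Sketch of the thermodynamic-cycle arithmetic behind relative binding
# free energies (the quantity such free-energy simulations produce).
# The input numbers are invented for illustration.

def relative_binding_free_energy(dG_mut_complex: float, dG_mut_solvent: float) -> float:
    """ddG_bind = dG(L1->L2 in complex) - dG(L1->L2 in solvent), in kcal/mol.
    Negative means ligand 2 binds the host more tightly than ligand 1."""
    return dG_mut_complex - dG_mut_solvent

# Hypothetical alchemical results: mutating ligand 1 into ligand 2
# costs 3.2 kcal/mol in the bound state and 4.7 kcal/mol free in water.
print(round(relative_binding_free_energy(3.2, 4.7), 2))  # -1.5: ligand 2 binds better
```

The point of the cycle is that the two hard "mutation" legs are simulated, while the physically meaningful binding difference falls out of their subtraction.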

(This news is summarized from the San Diego Supercomputer Center; the original full text can be found on their web site.)

Tuesday, February 16, 2010

Appro HyperPower™ Cluster - Featuring Intel Xeon CPU and NVIDIA® Tesla™ GPU computing technologies

The amount of raw data that must be processed in drug discovery, oil and gas exploration, and computational finance creates a huge demand for computing power. In addition, 3D visualization data has grown rapidly in recent years, moving visualization workloads from the desktop to GPU clusters. With these performance and memory demands, Appro clusters and supercomputers combined with the latest CPUs and GPUs based on NVIDIA® Tesla™ computing technologies are ideal architectures, delivering better performance at lower cost and with fewer systems than standard CPU-only clusters. With 240 processing cores per GPU, a C-language development environment for the GPU, a suite of developer tools, and the world's largest GPU computing ISV development community, the Appro HyperPower GPU clusters give scientific and technical professionals the opportunity to develop applications faster and to deploy them across multiple generations of processors.

The Appro HyperPower cluster features high-density 1U servers based on Intel® Xeon® processors with NVIDIA® Tesla™ GPU cards onboard. It also includes interconnect switches for node-to-node communication, a master node, and clustering software, all integrated in a standard 42U rack configuration. It supports up to 304 CPU cores and 18,240 GPU cores, with up to 78 TF single-precision / 6.56 TF double-precision GPU performance. By using fewer systems than standard CPU-only clusters, the HyperPower delivers more computing power in an ultra-dense architecture at a lower cost.
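The rack-level figures above can be cross-checked with a little arithmetic, using only the numbers quoted in this post:

```python
# Back-of-envelope check of the rack-level numbers quoted above,
# using only figures from the article: 240 cores per GPU, 18,240
# GPU cores, and 6.56 TF aggregate double-precision performance.

GPU_CORES_TOTAL = 18_240
CORES_PER_GPU = 240
DP_TFLOPS_TOTAL = 6.56

gpus_per_rack = GPU_CORES_TOTAL // CORES_PER_GPU
dp_tflops_per_gpu = DP_TFLOPS_TOTAL / gpus_per_rack

print(gpus_per_rack)                # 76 Tesla GPUs in the 42U rack
print(round(dp_tflops_per_gpu, 3))  # ~0.086 TF double precision per GPU
```

The ~86 GFLOPS double-precision per GPU implied by these numbers is consistent with the Tesla 10-series parts of that era.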

In addition, the Appro HyperPower cluster gives customers a choice of configurations with open-source, commercially supported cluster management solutions that can be easily tested and pre-integrated as part of a complete package, including HPC professional services and support.

Ideal Environment:
An ideal solution for small and medium-size HPC deployments. The target markets are government, research labs, universities, and vertical industries such as oil and gas, finance, and bioinformatics, where the most computationally intensive applications are run.

Installed Software
The Appro HyperPower is preconfigured with the following software:
- Red Hat Enterprise Linux 5.x, 64-bit
- CUDA 2.2 Toolkit and SDK
- Clustering software (Rocks Roll)

CUDA Applications
The CUDA-based Tesla GPUs deliver speed-ups of up to 250x on applications ranging from MATLAB to computational fluid dynamics, molecular dynamics, quantum chemistry, imaging, signal processing, and bioinformatics. Click here to learn more about these speed-ups, with links to application downloads.

(This news is sourced from Appro Ltd.; the original can be found on their web site.)

Monday, February 15, 2010

Solving The Protein Folding Problem with HPC

The University of Florida uses high-performance computing to simulate protein folding and help in the fight against disease.

Challenge: The Protein Folding Problem
Just like a road map, there are many ways to fold a protein molecule, but only one is right. Misfold a map and the only penalty is inconvenience; misfold a protein and the penalty can be a serious disease. How does a protein know the shape into which it is supposed to fold? High-performance computing can help answer this question.

Low free energy is good: laboratory experiments can probe only the unfolded and folded regions of the energy curve, while computer experiments can probe the whole thing. Professor Adrian Roitberg and Seonah Kim are doing just that on the UF High Performance Computing (HPC) Cluster at the University of Florida. The cluster depends on the high performance and reliability of the Cisco® InfiniBand fabric that connects the AMD Opteron-based Rackable servers and the storage subsystem. Kim's simulation has run for more than 45 days on 100 processors and isn't done yet.

The simulation uses the highly parallelized Assisted Model Building with Energy Refinement (AMBER) package of molecular simulation programs. Why so long to study just two proteins? For one thing, biology involves a lot of water: in this simulation, a cloud of 7,000 water molecules (21,000 atoms) surrounds a 14-residue peptide. AMBER works by calculating the motions of all these molecules as they bend, rotate, and move through space, avoiding or bouncing off one another. The simulation divides time into little steps and uses Newton's laws of physics to calculate the motion of the thousands of atoms at each step.
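That time-stepping idea can be sketched in miniature with a velocity Verlet integrator, the family of scheme MD codes typically use. The single particle on a spring below is an illustrative toy standing in for the real force field, not AMBER itself:

```python
# Minimal illustration of MD time-stepping: velocity Verlet integration
# of Newton's equations for one particle on a spring (a toy stand-in
# for a real molecular force field).

def velocity_verlet(x, v, force, mass, dt, steps):
    """Advance position and velocity with the velocity Verlet scheme."""
    a = force(x) / mass
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt     # position update
        a_new = force(x) / mass             # force at the new position
        v += 0.5 * (a + a_new) * dt         # velocity update (averaged a)
        a = a_new
    return x, v

# Harmonic "bond": F = -k x with k = m = 1, so the period is 2*pi.
k = 1.0
x, v = velocity_verlet(1.0, 0.0, lambda x: -k * x, 1.0, 0.001, 6283)
print(round(x, 2), round(v, 2))  # back near (1.0, 0.0) after one full period
```

A production run does exactly this loop, but for tens of thousands of coupled atoms and billions of steps, which is why 45 days on 100 processors is plausible.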

The High-Performance Computing Initiative at UF is an innovative approach to such needs. The design is a computing grid, linking specialized research computing clusters to a central parallel cluster over a dedicated high-speed network. Funding from the National Science Foundation and a cooperative agreement with Cisco provided the routers and switches for that grid.

(This news is summarized from Cisco; the original text can be found on their website.)

Chelsio 10G Ethernet Adapters

Chelsio is pioneering the future with its Unified Wire technology, enabling tremendous cost savings and performance increases for Enterprise data centers. Chelsio's unique patented hardware architecture increases bandwidth, reduces latency, and dramatically lowers host-system CPU utilization.
Chelsio offers the broadest range of 10Gb Ethernet adapters in the industry, based on its third-generation Terminator 3 ASIC.

For detailed information, visit the corporate web site.

SC10 Conference

The SC Conference is the premier international conference for high performance computing (HPC), networking, storage and analysis. This year's conference will be held in New Orleans, LA, USA, on November 15-18, 2010.

For more info visit

Saturday, February 13, 2010

NBCR Releases APBS Roll for Rocks 5.3

The NBCR (National Biomedical Computation Resource) at the University of California, San Diego is pleased to announce the availability of the APBS (Adaptive Poisson-Boltzmann Solver) Roll package for Rocks clusters version 5.3, for the i386 and x86_64 architectures.

APBS is a scalable Poisson-Boltzmann equation solver used to study electrostatic properties of small to nanoscale biomolecular systems. The APBS Roll simplifies APBS deployment and integration on Rocks clusters. More information about APBS can be found at SourceForge or at the NBCR web site.
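To give a flavor of what such a solver does, here is a 1-D finite-difference treatment of the plain Poisson problem, the simpler cousin of the Poisson-Boltzmann equation. This is a pedagogical toy, not the APBS API:

```python
# Toy illustration of the kind of problem APBS solves: the 1-D Poisson
# equation -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, discretized by
# central finite differences and solved with the Thomas (tridiagonal)
# algorithm. Pedagogical sketch only -- not the APBS interface.

def solve_poisson_1d(f_values, h):
    """Solve -u'' = f at n interior grid points with spacing h."""
    n = len(f_values)
    a = [-1.0] * n                        # sub-diagonal
    b = [2.0] * n                         # main diagonal
    c = [-1.0] * n                        # super-diagonal
    d = [h * h * fi for fi in f_values]   # right-hand side
    for i in range(1, n):                 # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * n                         # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# f = 2 everywhere gives the exact solution u(x) = x(1 - x).
n = 99
h = 1.0 / (n + 1)
u = solve_poisson_1d([2.0] * n, h)
mid = u[n // 2]       # grid point at x = 0.5; exact value is 0.25
print(round(mid, 4))  # 0.25
```

APBS tackles the nonlinear 3-D version of this with molecular charge distributions and adaptive grids, which is why a packaged Roll that handles deployment is convenient.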

This APBS Roll contains the latest APBS version 1.2.1b and PDB2PQR package version 1.5. The APBS Roll can be downloaded from the APBS download site and the Roll documentation including installation and usage information is available here.

Rocks is an open-source Linux cluster distribution that enables end users to easily build computational clusters, grid endpoints and visualization tiled-display walls. Hundreds of researchers from around the world have used Rocks to deploy their own cluster (see the Rocks Cluster Site).

Thursday, February 11, 2010

IBM brings supercomputing storage into the cloud

IBM has announced a new network storage array that brings technology from its supercomputing platforms down to the enterprise level.

The Scale Out Network Attached Storage (SONAS) system uses between one and 30 storage 'pods,' each containing a storage node, a storage controller, and attached 15,000 rpm or 7,200 rpm drives. These can be scaled up to a claimed 14.4 petabytes of storage.

“Companies not only need to cost-effectively store that data, but they need to rapidly locate it and provide ubiquitous access to it instantly. SONAS addresses these needs and provides clients with the right scalable solution,” said Doug Balog, vice president of disk systems for IBM.

The technology behind SONAS was originally developed as part of the General Parallel File System (GPFS), which the company has used on its supercomputing platforms for around 10 years.

SONAS also comes with an integrated Tivoli Storage Manager backup/archive client, up to 256 snapshots per file system, and support for modern RAID systems and network protocols, including CIFS, NFS, the Secure Copy Protocol (SCP), HTTP and FTP.

(This news sourced from

Tuesday, February 9, 2010

Power of Desktop: Cray CX1

Affordably priced, the award-winning Cray CX1 is the right size in performance, functionality and cost for a wide range of users, from a single user with a personal supercomputer to a department of users accessing shared cluster resources.

The brilliant brochure is here.

Software review: EventLog Analyzer

System log (syslog) management is an important need in almost all enterprises. System administrators treat syslogs as a critical source for troubleshooting performance problems on syslog-supported systems and devices across the network. The need for a complete syslog monitoring solution is often underestimated, leading to long hours spent sifting through tons of syslogs to troubleshoot a single problem. Efficient syslog analysis reduces system downtime, increases network performance, and helps tighten security policies in the enterprise.

EventLog Analyzer acts as a syslog daemon or syslog server: it collects syslog events by listening on the syslog UDP port, and can analyze, report on, and archive the events (including syslog-ng) received from all syslog-supported systems and devices. It manages events from systems supporting Unix, Linux, Solaris, HP-UX, and IBM AIX syslogs, and from devices that support syslog, such as Cisco routers and switches.
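The first thing such a daemon does with each datagram read from the syslog UDP port is decode the RFC 3164 <PRI> header into a facility and a severity. The sketch below is a generic illustration of that framing, not EventLog Analyzer's actual code:

```python
# Decode the RFC 3164 <PRI> header of a syslog datagram. A real
# collector does far more (parsing, archiving, reporting); this only
# shows how the priority value encodes facility and severity.

SEVERITIES = ["emerg", "alert", "crit", "err",
              "warning", "notice", "info", "debug"]

def parse_pri(message: str):
    """Return (facility, severity_name, rest) for a '<PRI>...' syslog line."""
    if not message.startswith("<"):
        raise ValueError("missing <PRI> header")
    end = message.index(">")
    pri = int(message[1:end])
    # PRI = facility * 8 + severity, per RFC 3164.
    return pri // 8, SEVERITIES[pri % 8], message[end + 1:]

# Priority 34 = facility 4 (auth) * 8 + severity 2 (crit).
print(parse_pri("<34>Feb  9 12:00:00 host su: auth failure"))
```

A daemon would wrap this in a loop over `socket.recvfrom()` on UDP port 514, then hand the parsed fields to its reporting and archiving pipeline.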

Using EventLog Analyzer you can generate syslog reports in real time and archive or store the syslogs. You get instant access to a wide variety of reports on syslog events generated across hosts, users, processes, and host groups.

EventLog Analyzer also supports event logs received from Windows machines. Detailed information and a demo version of the software are available on their webpage.

Three-day PostgreSQL training on advanced tips and techniques.

Training announcement:
Cybertec will offer a comprehensive three-day training course on PostgreSQL tuning and advanced performance optimization. The goal of this workshop is to provide attendees with optimization techniques and insights into PostgreSQL. Click here to see the program details.

Date: February 23rd — 25th 2010
Amsterdam, Netherlands

Monday, February 1, 2010

Extreme Scale Computing

Parallel computing is not a new concept. It's been around for decades. Now the reality is here. Is serial computing dead? Well, that's what was stated in an article in IEEE Computer Magazine.

New technologies simply provide more tools to make solutions possible and/or more efficient. But as far as hardware goes, serial computing really is finished: will any CPU manufacturer keep making single-core CPUs? That line is dead. Hardware development marches on, with no looking back.

Now we're talking about millions of cores and peta-scale (10^15) to exa-scale (10^18) operations per second. Massive parallelism has a name: Extreme Scale Computing (ESC). Just as multicore had to solve problems of power consumption and data transfer, which drove improvements in data bus technology, Extreme Scale Computing has many challenges to overcome in the next decade: energy and power consumption, and enabling concurrency and locality.

(To read full article visit the web pages)

Intel stretches HPC dev tools across chubby clusters

SC11 Supercomputing hardware and software vendors are getting impatient for the SC11 supercomputing conference in Seattle, which kick...