
Wednesday, March 24, 2010

XtreemOS 2.1 Release Announced

The XtreemOS consortium is pleased to announce the release of XtreemOS 2.1.
This update release will include:
  • Improved installer with a new xosautoconfig tool to greatly simplify and automate installation of XtreemOS instances.
     
  • A number of high-impact bug fixes, along with work on stability and correctness.
     
  • XtreemFS 1.2, which has a number of new features along with enhanced performance and stability.
     
  • XtreemOS MD (Mobile Device) -- This new version integrates XtreemOS on Internet Tablets, beginning with the Nokia N8xx models.
This makes it possible to launch jobs and interact with XtreemOS resources via a special client with simple single sign-on.
  • Virtual Nodes -- a framework to provide fault tolerance for grid applications by replicating them over multiple nodes.
     
  • XOSSAGA -- a set of technologies that allows you to run SAGA-compliant applications unmodified on top of XtreemOS.
Downloading
An updated list of Mandriva Mirrors can be found at http://api.mandriva.com/mirrors/list.php and http://twiki.mdklinuxfaq.org/en/Mandriva_mirrors.
The ISO files for XtreemOS releases are in the folder /devel/xtreemos/iso/2.1
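For example, once a mirror has been chosen, an ISO can be fetched with something like the following (the mirror hostname and ISO file name below are placeholders, not actual values from the announcement):

wget http://your-mirror.example.org/devel/xtreemos/iso/2.1/<iso-file-name>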
Changes
This release has concentrated strictly on bug-fixes and polishing. You can find the change log at http://sourceforge.net/apps/mantisbt/xtreemos/roadmap_page.php.
All users are encouraged to test the new ISO and report any issues to our bug tracker at http://sourceforge.net/apps/mantisbt/xtreemos/main_page.php.

About XtreemOS
XtreemOS 2.1 is the result of an ongoing project with 18 academic and industrial partners on the design and implementation of an open source grid operating system with native support for virtual organizations (VOs) and ease of use. XtreemOS runs on a wide range of hardware, from smartphones and PCs to Linux clusters.

A set of system services, extending those found in traditional Linux, provides users with all the grid capabilities associated with current grid middleware, but fully integrated into the OS. Based on Linux, XtreemOS provides distributed support for VOs spanning across many machines and sites along with appropriate interfaces for grid OS services.

When installed on a participating machine, the XtreemOS system provides for the grid what an operating system offers for a single computer: abstraction from the hardware and secure resource sharing between different users. XtreemOS gives users the vision of a single large, powerful workstation while removing the complex resource-management issues of a grid environment.

Tuesday, February 16, 2010

Appro HyperPower™ Cluster - Featuring Intel Xeon CPU and NVIDIA® Tesla™ GPU computing technologies

The amount of raw data that must be processed in research areas such as drug discovery, oil and gas exploration, and computational finance creates a huge demand for computing power. In addition, 3D visualization datasets have grown considerably in recent years, moving visualization centers from the desktop to GPU clusters. Given these performance and memory demands, Appro clusters and supercomputers combined with the latest CPUs and GPUs based on NVIDIA® Tesla™ computing technologies are an ideal architecture, delivering better performance at lower cost and with fewer systems than standard CPU-only clusters. With 240 processing cores per GPU, a C-language development environment for the GPU, a suite of developer tools, and the world's largest GPU computing ISV development community, the Appro HyperPower GPU clusters give scientific and technical professionals the opportunity to develop applications faster and to deploy them across multiple generations of processors.


The Appro HyperPower cluster features high-density 1U servers based on Intel® Xeon® processors with NVIDIA® Tesla™ GPU cards onboard. It also includes interconnect switches for node-to-node communication, a master node, and clustering software, all integrated in a standard 42U rack configuration. It supports up to 304 CPU cores and 18,240 GPU cores, with up to 78 TF single-precision / 6.56 TF double-precision GPU performance. By using fewer systems than standard CPU-only clusters, the HyperPower delivers more computing power in an ultra-dense architecture at a lower cost.

In addition, the Appro HyperPower cluster gives customers a choice of configurations with open-source, commercially supported cluster management solutions that can easily be tested and pre-integrated as part of a complete package that includes HPC professional services and support.

Ideal Environment:
An ideal solution for small and medium-size HPC deployments. The target markets are government, research labs, universities, and vertical industries such as oil and gas, finance, and bioinformatics, where the most computationally intensive applications are needed.

Installed Software
The Appro HyperPower is preconfigured with the following software:
- Red Hat Enterprise Linux 5.x, 64-bit
- CUDA 2.2 Toolkit and SDK
- Clustering software (Rocks Roll)
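As a quick sanity check on such a node, the GPUs can be verified from the command line; a minimal sketch (nvidia-smi ships with the NVIDIA driver, and the exact output depends on driver version):

lspci | grep -i nvidia
nvidia-smi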

CUDA Applications
The CUDA-based Tesla GPUs give speed-ups of up to 250x on applications ranging from MATLAB to computational fluid dynamics, molecular dynamics, quantum chemistry, imaging, signal processing, bioinformatics, and so on. Click here to learn more about these speedups, with links to application downloads.



(This news was sourced from Appro Ltd. and can be found on their web site.)

Tuesday, February 9, 2010

Software review: EventLog Analyzer

System log (syslog) management is an important need in almost all enterprises. System administrators look to syslogs as a critical source for troubleshooting performance problems on syslog-supported systems and devices across the network. The need for a complete syslog monitoring solution is often underestimated, leading to long hours spent sifting through tons of syslogs to troubleshoot a single problem. Efficient syslog analysis reduces system downtime, increases network performance, and helps tighten security policies in the enterprise.

EventLog Analyzer acts as a syslog daemon or syslog server and collects syslog events by listening on the syslog port (UDP). The application can analyze, report on, and archive the syslog events (including syslog-ng) received from all syslog-supporting systems and devices. It manages events from systems supporting Unix, Linux, Solaris, HP-UX, and IBM AIX syslogs, and from devices that support syslog, such as routers and switches (Cisco) or any other device.
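To feed a Linux host's logs into such a collector, the usual approach is a forwarding rule in the host's syslog configuration. A minimal sketch (the collector IP is only an example, and the standard UDP syslog port 514 is assumed):

# /etc/syslog.conf -- forward everything to the log collector host (example IP)
*.*     @10.0.0.50

# then restart the syslog daemon:
/etc/init.d/syslog restart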

Using the EventLog Analyzer application you can generate syslog reports in real time and archive or store these syslogs. You get instant access to a wide variety of reports for syslog events generated across hosts, users, processes, and host groups.

The EventLog Analyzer application also supports event logs received from Windows machines. Detailed information and a demo version of the software are available on their web page.

Three-day PostgreSQL training on advanced tips and techniques

Training announcement:
Cybertec will offer a comprehensive 3 day training course dealing with PostgreSQL tuning and advanced performance optimization. The goal of this workshop is to provide people with optimization techniques and insights into PostgreSQL. Click here to see program details.

Date: February 23rd — 25th 2010
Location:
Amsterdam, Netherlands

Saturday, January 2, 2010

Penguin Computing Announces Release of New Scyld ClusterWare “Hybrid”

Penguin Computing, experts in high performance computing solutions, announced today that Scyld ClusterWare™ "Hybrid", the newest version of its industry-leading cluster management software, will be released in January of 2010. Scyld ClusterWare Hybrid was developed as a solution for Penguin Computing's Scyld customers who want to provision, monitor and manage heterogeneous operating systems from a single point of control.

Scyld ClusterWare Hybrid is a fully integrated cluster management environment that combines Scyld ClusterWare's industry leading diskless single-system-image architecture with a traditional provisioning architecture that deploys an operating environment to local disk. Combining the "best of both worlds," this hybrid approach provides unmatched flexibility and transparency. Compute nodes can still be booted with Scyld ClusterWare and provisioned extremely rapidly, with a minimal memory footprint and guaranteed consistency, or can be provisioned with a complete operating environment to the local hard drive.

With Scyld ClusterWare Hybrid, target operating environments can be dynamically assigned to cluster nodes at start-up time, allowing for the quick re-purposing of systems according to workload and user demand. Once provisioned, systems can be managed from a single node with a single set of commands, accelerating the learning curve for new users and reducing the time spent on system management for system administrators and researchers tasked with cluster management.

Click here to access more information.

Sunday, July 19, 2009

ScaleMP announces vSMP Foundation for Cluster Structures


The vSMP Foundation for Cluster™ solution provides a simplified compute architecture for high-performance clusters - it hides the InfiniBand fabric, offers built-in high-performance storage as a cluster-filesystem replacement and reduces the number of operating systems to one, making it much easier to administer. This solution is ideally suited for smaller compute implementations in which management tools and skills may not be readily available.

The target customers for the Cluster product are those with initial high performance cluster implementations who are concerned with the complexity of creation and management of the cluster environment.

Key Advantages:
  • Simplified installation and management of high-performance clusters:
    • Reduces multiple nodes and operating systems to a single system;
    • Eliminates the need for a separate cluster filesystem;
  • Stronger entry-level value proposition – scale-up growth opportunities with no additional overhead.
You can find detailed product information on their web pages.

Tuesday, September 23, 2008

Making portable GridStack 4.1 (Voltaire OFED) drivers.

If there are previously installed IB RPMs, remove them. To do this:
rpm -e kernel-ib-1.0-1 \
dapl-1.2.0-1.x86_64 \
libmthca-1.0.2-1.x86_64 \
libsdp-0.9.0-1.x86_64 \
libibverbs-1.0.3-1.x86_64 \
librdmacm-0.9.0-1.x86_64

lsmod
Then remove all of the "ib_" modules by hand with "rmmod <modulename>"; a one-liner for this is sketched below.
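A minimal sketch of that cleanup (module dependencies may force you to run it more than once, or to remove stragglers by hand):

lsmod | awk '/^ib_/ {print $1}' | xargs -r rmmod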


*** If you previously installed OFED IB from the same package, you can run the ./uninstall.sh
script included in the GridStack-4.1.5_9.tgz package instead of the steps above.
This script does the same things (and more) automatically, so you may prefer it.



1. First obtain the GridStack source code from Voltaire.
Then:
mkdir /home/setup
cp GridStack-4.1.5_9.tgz /home/setup
cd /home/setup
tar -zxvf GridStack-4.1.5_9.tgz
All of the files will be in "/home/setup/GridStack-4.1.5_9".

cd GridStack-4.1.5_9

2. Install the GridStack drivers

./install.sh --make-bin-package

This process takes about 30 minutes.
Time for coffee or tea, but not a cigarette...
....
.......
..........
INFO: wrote ib0 configuration to /etc/sysconfig/network-scripts/ifcfg-ib0
DEVICE=ib0 ONBOOT=yes BOOTPROTO=static IPADDR=192.168.129.9 NETWORK=192.168.0.0 NETMASK=255.255.0.0 BROADCAST=192.168.255.255 MTU=2044

Installation finished
Please logout from the shell and login again in order to update your PATH environment variable

3. Finishing the driver settings
First, edit the IP settings for IB.
Just edit "/etc/sysconfig/network-scripts/ifcfg-ib0" as below:

DEVICE=ib0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.129.50.9
NETMASK=255.255.0.0
MTU=2044

save and reboot the system.

4. The GridStack installation adds an init.d service to the system startup.
After the boot-up process you should see the ib0 device in the ifconfig output, and the
LEDs of the HCA cards should be on or blinking. Check this...

After the reboot, check the state of the connection with ifconfig:
eth0      Link encap:Ethernet  HWaddr 00:19:BB:XX:XX:XX  
          inet addr:10.128.129.9  Bcast:10.128.255.255  Mask:255.255.0.0
          inet6 addr: fe80::219:bbff:fe21:b3a8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:177 errors:0 dropped:0 overruns:0 frame:0
          TX packets:148 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:16829 (16.4 KiB)  TX bytes:21049 (20.5 KiB)
          Interrupt:169 Memory:f8000000-f8011100 

ib0       Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet addr:10.129.50.9  Bcast:10.129.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:2044  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:128 
          RX bytes:892 (892.0 b)  TX bytes:384 (384.0 b)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:336 (336.0 b)  TX bytes:336 (336.0 b)

If you see something similar to the above, you have won. Ping a neighbor's IP address if one is available there:
ping 10.129.50.1
PING 10.129.50.1 (10.129.50.1) 56(84) bytes of data.
64 bytes from 10.129.50.1: icmp_seq=0 ttl=64 time=0.094 ms
64 bytes from 10.129.50.1: icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from 10.129.50.1: icmp_seq=2 ttl=64 time=0.064 ms
64 bytes from 10.129.50.1: icmp_seq=3 ttl=64 time=0.056 ms

If you do not see ib0 or cannot ping, the gridstack service may not have started.
Start it manually: /etc/init.d/gridstack start

If everything is OK, you can make an image of this system for
a central deployment mechanism such as TFTP.

6. Installing the newly compiled GridStack driver on identical machines.
It is very easy. After the GridStack compilation process, a new bz2 file and
its md5 checksum are created automatically. You can find these two files one
level above the source folder. In our example, two files await your attention there:

ls -al /home/setup
-rw-r--r--   1 root root       88 Nov 23 19:11 GridStack-4.1.5_9-rhas-k2.6.9-42.ELsmp-x86_64.md5sum
-rw-r--r--   1 root root 43570798 Nov 23 19:11 GridStack-4.1.5_9-rhas-k2.6.9-42.ELsmp-x86_64.tar.bz2

Copy these two files to all of the IB hosts on which you plan to install GridStack.
Unlike the previous steps, this installation does not take many minutes.
Just copy the files to the new machine with scp:

cd /home/setup
scp GridStack-4.1.5_9-rhas-k2.6.9-42.ELsmp-x86_64.* root@10.128.129.10:/home

Switch to the target machine's console and type these commands:

cd /home
First, verify the integrity of the bz2 file:
md5sum -c GridStack-4.1.5_9-rhas-k2.6.9-42.ELsmp-x86_64.md5sum
GridStack-4.1.5_9-rhas-k2.6.9-42.ELsmp-x86_64.tar.bz2: OK

If you see the OK sign, type this:
tar -jxvf GridStack-4.1.5_9-rhas-k2.6.9-42.ELsmp-x86_64.tar.bz2

A folder called "GridStack-4.1.5_9-rhas-k2.6.9-42.ELsmp-x86_64" will be created.
cd GridStack-4.1.5_9-rhas-k2.6.9-42.ELsmp-x86_64/
./install.sh

The GridStack binary RPMs will be installed automatically.
Make the ifcfg-ib0 settings as above, reboot, and check IP connectivity.


7. As a bonus piece of advice:
After the GridStack installation there are lots of IB diagnostic tools available under the
/usr/local/ofed/bin directory. For example, issuing ./ibv_devinfo gives brief
and useful information about HCA connectivity, board model, FW level, etc.

Here is sample output for my machine:
hca_id: mthca0
        fw_ver:                         4.7.400
        node_guid:                      0017:08ff:ffd0:XXXX
        sys_image_guid:                 0017:08ff:ffd0:XXXX
        vendor_id:                      0x1708
        vendor_part_id:                 25208
        hw_ver:                         0xA0
        board_id:                       HP_0060000001
        phys_port_cnt:                  2
                port:   1
                        state:                  PORT_ACTIVE (4)
                        max_mtu:                2048 (4)
                        active_mtu:             2048 (4)
                        sm_lid:                 29
                        port_lid:               75
                        port_lmc:               0x00

                port:   2
                        state:                  PORT_ACTIVE (4)
                        max_mtu:                2048 (4)
                        active_mtu:             2048 (4)
                        sm_lid:                 29
                        port_lid:               261
                        port_lmc:               0x00





---=== HCA DDR EXP-D FW upgrade after GridStack 4.1 install ===---

ib-burn -y -i VLT-EXPD -a /usr/voltaire/fw/HCA400Ex-D-25208-4_7_6.img 

INFO: Using alternative image file /usr/voltaire/fw/HCA400Ex-D-25208-4_7_6.img
Burning : using fw image file: /usr/voltaire/fw/HCA400Ex-D-25208-4_7_6.img VSD extention : -vsd1 VLT-EXPD -vsd2 VLT0040010001
    Current FW version on flash:  N/A
    New FW version:               N/A

    Burn image with the following GUIDs:
        Node:      0019bbffff00XXXX
        Port1:     0019bbffff00XXXX
        Port2:     0019bbffff00XXXX
        Sys.Image: 0019bbffff00XXXX

    You are about to replace current PSID in the image file - "VLT0040010001" with a different PSID - "VLT0040010001".
    Note: It is highly recommended not to change the image PSID.

 Do you want to continue ? (y/n) [n] : y

Read and verify Invariant Sector               - OK
Read and verify PPS/SPS on flash               - OK
Burning second    FW image without signatures  - OK  
Restoring second    signature                  - OK  

Note that /usr/local/bin/ib-burn is really a Bash script.
Here is another, lower-level way to burn the HCA card FW:

lspci -n | grep -i "15b3:6278" | awk '{print $1}'
If you see "13:00.0" as the output, type this:

mstflint -d 13:00.0 -i /usr/voltaire/fw/HCA400Ex-D-25208-4_7_6.img -vsd1 "" -psid HP_0060000001 -y burn > /root/hca-fw-ugr.log
This command does not prompt for Yes.

To check the FW on the flash, type this:
mstflint -d 13:00.0 q

Wednesday, September 3, 2008

SFS 2.2-1 Client Upgrade with GridStack 4.x

SFS is Hewlett-Packard's parallel file system, which is based on the open source Lustre file system.
The acronym SFS stands for Scalable File Share. HP also has SFS20 disk enclosures, which should not be confused with this software.


Here we are upgrading the client packages (RPMs) to the new level.
Change to /home/sfs-iso-loop/client_enabler and run:


./build_SFS_client.sh --no_infiniband --config --allow_root \
/home/sfs-iso-loop/client_enabler/src/x86_64/RHEL4_U3/SFS_client_x86_64_RHEL4_U3.config



cd /home/sfs-iso-loop/client_enabler/output/RPMS/x86_64
rpm -ivh kernel-smp-2.6.9-34.0.2.EL_SFS2.2_1.x86_64.rpm
rpm -ivh kernel-smp-devel-2.6.9-34.0.2.EL_SFS2.2_1.x86_64.rpm

Change /boot/grub/menu.lst to boot from this new kernel; an example entry is sketched below.
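A sketch of what the corresponding menu.lst entry might look like (the exact kernel/initrd file names and the root device are assumptions; take them from what the new kernel RPM actually installed under /boot):

default=0
title Red Hat Enterprise Linux (2.6.9-34.0.2.EL_SFS2.2_1smp)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-34.0.2.EL_SFS2.2_1smp ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.9-34.0.2.EL_SFS2.2_1smp.img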



Reboot the machine and showtime ...

Wednesday, May 7, 2008

Creating a new LDAP Server with OpenLDAP

* 1. Install RHEL 5.1 x86_64 Server

* 2. Install the openldap server and client RPMs.
    rpm -qa | grep -i openldap should show:
    openldap-2.3.27-8
    openldap-servers-2.3.27-8

* 3. Copy /etc/openldap/DB_CONFIG.example to /var/lib/ldap/
and rename it to just DB_CONFIG.
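For example (the chown is an assumption, matching the ldap:ldap ownership used later for slapd.conf):

cp /etc/openldap/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
chown ldap:ldap /var/lib/ldap/DB_CONFIG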


* 4. Create or edit /etc/openldap/slapd.conf.
The following lines must be added:

include         /etc/openldap/schema/core.schema
include         /etc/openldap/schema/cosine.schema
include         /etc/openldap/schema/inetorgperson.schema
include         /etc/openldap/schema/nis.schema

allow bind_v2

pidfile         /var/run/openldap/slapd.pid
argsfile        /var/run/openldap/slapd.args

access to attrs=userPassword
    by self write
    by anonymous auth
    by * none
access to *
    by * read

database        bdb
suffix          "dc=uybhm,dc=itu,dc=edu,dc=tr"
rootdn          "cn=Manager,dc=uybhm,dc=itu,dc=edu,dc=tr"

#rootpw          "This value must be set later"

directory       /var/lib/ldap

index objectClass                       eq,pres
index ou,cn,mail,surname,givenname      eq,pres,sub
index uidNumber,gidNumber,loginShell    eq,pres
index uid,memberUid                     eq,pres,sub
index nisMapName,nisMapEntry            eq,pres,sub


* 5. Create a new Manager password, which will be used later for
top-level LDAP administration tasks:
    slappasswd -h {SSHA}   (type a password twice when asked)
Grab the output and paste it into the rootpw line, so that it looks like this:
rootpw          {SSHA}F/a/QvcnCrWHj7/eyJtWd/HdGtCpqsHt
Change the owner of slapd.conf to ldap:ldap and remove the
"group" and "other" permissions; a sketch of these commands follows below.

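A minimal sketch of those two steps:

chown ldap:ldap /etc/openldap/slapd.conf
chmod 600 /etc/openldap/slapd.conf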

* 6. Start the LDAP service and check that it works initially:
/etc/init.d/ldap restart

If you see OK, then jump to the next step.



* 7. Run a query:
ldapsearch -x -b "dc=uybhm,dc=itu,dc=edu,dc=tr" -h 127.0.0.1
# extended LDIF
#
# LDAPv3
# base with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# search result
search: 2
result: 32 No such object

# numResponses: 1


* 8. Prepare a base domain record and insert it into the LDAP server
# This is the base record of uybhm.itu.edu.tr
# This record must be added before all of the other LDIFs

dn: dc=uybhm,dc=itu,dc=edu,dc=tr
objectClass: dcObject
objectClass: organization
o: UYBHM Administrators
dc: uybhm

dn: cn=Manager,dc=uybhm,dc=itu,dc=edu,dc=tr
objectclass: organizationalRole
cn: Manager

# users, uybhm.itu.edu.tr
dn: ou=users,dc=uybhm,dc=itu,dc=edu,dc=tr
objectClass: top
objectClass: organizationalUnit
ou: users

# groups, uybhm.itu.edu.tr
dn: ou=groups,dc=uybhm,dc=itu,dc=edu,dc=tr
objectClass: top
objectClass: organizationalUnit
ou: groups



ldapadd -W -x -D "cn=Manager,dc=uybhm,dc=itu,dc=edu,dc=tr" -h 127.0.0.1 -f 1.uybhm-domain.record.ldif
Enter LDAP Password:
adding new entry "dc=uybhm,dc=itu,dc=edu,dc=tr"
adding new entry "cn=Manager,dc=uybhm,dc=itu,dc=edu,dc=tr"
adding new entry "ou=users,dc=uybhm,dc=itu,dc=edu,dc=tr"
adding new entry "ou=groups,dc=uybhm,dc=itu,dc=edu,dc=tr"



* 9. Check the result
ldapsearch -x -b "dc=uybhm,dc=itu,dc=edu,dc=tr" -h 127.0.0.1

# extended LDIF
#
# LDAPv3
# base with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# uybhm.itu.edu.tr
dn: dc=uybhm,dc=itu,dc=edu,dc=tr
objectClass: dcObject
objectClass: organization
o: UYBHM Administrators
dc: uybhm

# Manager, uybhm.itu.edu.tr
dn: cn=Manager,dc=uybhm,dc=itu,dc=edu,dc=tr
objectClass: organizationalRole
cn: Manager

# users, uybhm.itu.edu.tr
dn: ou=users,dc=uybhm,dc=itu,dc=edu,dc=tr
objectClass: top
objectClass: organizationalUnit
ou: users

# groups, uybhm.itu.edu.tr
dn: ou=groups,dc=uybhm,dc=itu,dc=edu,dc=tr
objectClass: top
objectClass: organizationalUnit
ou: groups

# search result
search: 2
result: 0 Success

# numResponses: 5
# numEntries: 4


* 10. Prepare and insert the user records

ldapadd -W -x -D "cn=Manager,dc=uybhm,dc=itu,dc=edu,dc=tr" -h 127.0.0.1 -f 2.users.ldif
Enter LDAP Password:
adding new entry "uid=lsfadmin,ou=users,dc=uybhm,dc=itu,dc=edu,dc=tr"
adding new entry "uid=efadmin,ou=users,dc=uybhm,dc=itu,dc=edu,dc=tr"
adding new entry "uid=efnobody,ou=users,dc=uybhm,dc=itu,dc=edu,dc=tr"
adding new entry "uid=bench,ou=users,dc=uybhm,dc=itu,dc=edu,dc=tr"

A sample user record file is shown here:

# mahmut.un, users, uybhm.itu.edu.tr
dn: uid=mahmut.un,ou=users,dc=uybhm,dc=itu,dc=edu,dc=tr
uid: mahmut.un
cn: Mahmut UN
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
shadowLastChange: 13735
shadowMax: 999999
shadowWarning: 7
uidNumber: 620
gidNumber: 620
homeDirectory: /rs/users/mahmut.un
gecos: Mahmut UN
userPassword: {SSHA}QjoA6jcZmiX92h5uchz7U3uY80eoJulS
loginShell: /bin/bash
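The gidNumber above should have a matching group entry under ou=groups; a minimal sketch of such a record (the group name and memberUid here are only illustrative, not from the original setup) is:

# mahmut.un, groups, uybhm.itu.edu.tr
dn: cn=mahmut.un,ou=groups,dc=uybhm,dc=itu,dc=edu,dc=tr
objectClass: posixGroup
objectClass: top
cn: mahmut.un
gidNumber: 620
memberUid: mahmut.un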

* 11. Query and see all of the added records
ldapsearch -x -b "dc=uybhm,dc=itu,dc=edu,dc=tr" -h 127.0.0.1
If you also want to see the password hashes, you must run the query as Manager, like this:
ldapsearch -W -x -D "cn=Manager,dc=uybhm,dc=itu,dc=edu,dc=tr" -h 127.0.0.1

Sunday, February 10, 2008

Testing the Network Speed: The netcat way

Testing your core network speed is essential for identifying possible bottlenecks.

Here is a practical way to do this:

nc is a golden Linux tool; its name stands for netcat.
On the receiver side:
# nc -l 10.129.50.45 -p6666 > /dev/null

10.129.50.45 is the IP address being listened on.


On the transmitter side:
# dd if=/dev/zero bs=1024k count=1024 | nc 10.129.50.45 6666

At the end of pumping the zeros, "dd" shows the total time used and the bandwidth per second.
If it does not, you can put the "time" command before the dd to get this, as in the example below.
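For example, on the transmitter side:

time dd if=/dev/zero bs=1024k count=1024 | nc 10.129.50.45 6666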
