The cluster now has a queuing system. Set-up was less than trivial.
The cluster is basically done
I started setting up the hardware at around 1600h yesterday.
I finished setting up the underlying software at around 1900h today.
Not too shabby, eh? According to wwtop:
Cluster totals: 20 nodes, 40 cpus, 96200 MHz, 39.18 GB mem
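(Those totals check out for dual-CPU nodes: 96200 MHz ÷ 40 CPUs = 2405 MHz per CPU, and 39.18 GB ÷ 20 nodes is just under 2 GB of memory per node.)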
Here’s a snapshot of the front of the status page:
What an ordeal. Now we have to start compiling PETSc on it.
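Once PETSc is built, the first thing I'd compile is a tiny sanity check that it initializes cleanly on top of MPI. A minimal sketch in C; the file name, build line, and install paths here are illustrative assumptions, not our actual setup:

```c
/* petsc_check.c -- sanity check that a fresh PETSc build initializes
 * over MPI.  Build line is illustrative:
 *   mpicc petsc_check.c -I$PETSC_DIR/include -L$PETSC_DIR/lib -lpetsc -o petsc_check
 */
#include <petsc.h>

int main(int argc, char **argv)
{
    int size;

    /* PetscInitialize brings up MPI (if needed) and PETSc's internals. */
    PetscInitialize(&argc, &argv, NULL, NULL);

    MPI_Comm_size(PETSC_COMM_WORLD, &size);

    /* PetscPrintf prints from rank 0 only, so this appears once. */
    PetscPrintf(PETSC_COMM_WORLD, "PETSc is up on %d processes\n", size);

    PetscFinalize();
    return 0;
}
```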
New Cluster Set-Up
When moving to JHU, we brought along our 20 cluster nodes, which are Evolocity II units from Linux Networx. They were part of a larger cluster at Tulane’s Center for Computational Science, so we didn’t need to worry about them; the CCS sysadmin took care of them.
Now that they are here at JHU, they need to be set up on their own. “Who will set them up?” you ask.
*looks around*
Oh, right. So, I’m looking at using Warewulf. It’s targeted at Fedora Core head nodes, while our fileserver, Lagniappe, is already running Ubuntu 6.06 LTS Server, so I may have a few difficulties there. Also, Warewulf prefers PXE boot, and our nodes use Etherboot on LinuxBIOS. Supposedly Etherboot is “backward” compatible with PXE boot, but I’m not sure whether our nodes have a current version of LinuxBIOS, so I don’t know whether they support that capability. Luckily, assuming I get Warewulf set up, there’s no administrative difference between running 5 nodes and 500 nodes.
Once that is complete, I will need to install MPI, the Intel compilers, and so on.
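For the MPI piece, the usual first test after the install is a “hello world” run across the nodes. A minimal sketch in C; the compiler wrapper and launch line are the generic ones, whatever our MPI ends up providing:

```c
/* mpi_hello.c -- verify the MPI install across the cluster.
 * Build and run (illustrative):
 *   mpicc mpi_hello.c -o mpi_hello
 *   mpirun -np 40 ./mpi_hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
    MPI_Get_processor_name(name, &len);     /* which node we landed on */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

If every node's hostname shows up in the output, the interconnect, the launcher, and the MPI library are all wired together correctly.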