Author Archives: Brock Tice

Cluster Update

So, the cluster is here.

The good news: I think we’ll be able to use it (albeit without Infiniband for now) by the end of the day.

The bad news: FedEx clearly dropped one entire rack. Yes. Unbelievable. About $110,000 worth of equipment. When it got here, there were plastic parts coming out from under the door of the crate. From the looks of it, it faceplanted off of a loading dock or a forklift. The tilt sensor was definitely tripped. Some of it may still be good, but it all has to be thoroughly inspected. Penguin is just building us a whole new rack. We lucked out in a way — it was the only rack with no ‘special’ components. All of the other racks have something special. One has the head node (not a big deal). One has the Infiniband switch (very expensive with long lead time to replace), and one has a fileserver with a Xyratex storage unit (also very expensive with long lead time). This rack just had compute nodes and a regular gigabit switch.

I’ll have more news and pictures once we get further along.

New cluster to arrive tomorrow

If you’re subscribed to my RSS feed (hi, mom!) you might already have seen these pictures. Nonetheless, I’ll add them here with a little explanation. First, the front of the cluster:

New Cluster

I suggest that you click on it and go to the Flickr page, as I’ve added some notes to the various parts of the picture, pointing out the major pieces. Here’s the back:

Cluster Cabling

As the title of the post states, it’s supposed to arrive here in Baltimore, from San Francisco, sometime tomorrow. I’m guessing in the afternoon. A couple of guys from Penguin will be here at 13:00. That’s the good news. Now to the bad news — the news that kept me from falling asleep for hours on Friday night.

It won’t be ready to go when it arrives.

No, contrary to all of the lovely visions I was given about the cluster rolling out of the crates, and being ready to go, it will not be ready to go when it gets here. This is in large part due to all of the delays on the part of AMD, and their apparent inability to ship anything when they say they will. Now, I’m not sure just how un-ready it will be. If I can get all of the wires hooked up, and the Infiniband MPI libraries are ready to go, I should be able to have simulations running on it by Tuesday. Penguin’s software people are not excited about this, and want me to wait. We’ll see how things look tomorrow.

November and December are our deadline season, with Heart Rhythm abstracts due on January 4th. Now more than ever, we have a huge backlog of simulations to run on large models, requiring the high-performance power that this cluster packs. However, the complete set-up is not due to start until November 28th. That is no good. To make matters worse, our favorite TeraGrid cluster in Texas, Lonestar, is on the lam (no pun intended) until further notice.

I’m not superstitious, but if I were, I’d be crossing my fingers right now.

Amazon EC2’s New Images

Amazon.com is really the leader right now in so-called “cloud computing”, where a remote pool of servers — whose details you never see — provides reliable services on demand.

They started with “S3”, meaning “Simple Storage Service”. S3 is a back-end web service that allows you to place, retrieve, and share files via a web-services API. As I’m not really a web developer, that interface is beyond me for the moment. However, various programs have cropped up that act as a front-end to S3, my favorite being JungleDisk. Once JungleDisk is fired up, you have what appears to be a network disk with unlimited storage. It is, in fact, effectively unlimited, though it of course comes at a cost. There’s a bandwidth cost whenever you move data in or out, and an ongoing storage cost. The ongoing storage cost is something like a charge based on your “average daily balance” of storage used, and is currently charged at a mere $0.15/GB/month. So, if you want to store 100 GB, that’s $15/mo. Not bad. Remember, too, that you’re never paying for storage that you don’t need, unlike some other services.
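That arithmetic is simple enough to sketch in a few lines of Python. The $0.15/GB/month rate is the one quoted above; actual Amazon pricing may differ by the time you read this:

```python
# Estimate monthly S3 storage cost at the rate quoted in the post.
RATE_PER_GB_MONTH = 0.15  # USD per GB per month (rate from the post)

def monthly_storage_cost(avg_gb):
    """Cost based on your average daily balance of storage used."""
    return avg_gb * RATE_PER_GB_MONTH

print(monthly_storage_cost(100))  # 100 GB stored -> $15/month
```

Note that because the charge tracks your average balance, storing 100 GB for only half the month costs roughly half as much.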

After they got S3 stabilized, Amazon.com came out with another beta “cloud” service, “EC2”, for “Elastic Compute Cloud”. It lets you run virtual machine images, charged at an hourly rate. This was kind of interesting, and had some usefulness for a lot of people, but the machines were somewhat small and weak.

No more.

Within the last few days, they launched larger machine images with an x86_64 architecture, up to 4 virtual cores, and nearly 16 GB of available RAM. Now we’re talking something interesting for people like me, and our lab. As a test, I created my own custom image, containing our simulation software, and fired it up. For $0.80/hour, I can get the equivalent of 2 of our 2-core Opteron cluster nodes. It’s a little slower than that, but only just. It also probably doesn’t scale as well as our current system, and I know it won’t scale as well as the system that will be arriving on Monday, but something tells me that this will change. I’m not the only one out there who wants to run cluster applications on EC2, as a quick Google search will reveal.
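To get a feel for what that rate means in practice, here is a back-of-the-envelope cost calculator. The $0.80/hour figure is from the post; the run lengths and the whole-instance-hour billing assumption are illustrative:

```python
import math

# Rough cost of an EC2 large-instance run at the post's quoted rate.
HOURLY_RATE = 0.80  # USD per instance-hour (rate from the post)

def run_cost(hours):
    # Assuming billing by the whole instance-hour, partial hours round up.
    return round(math.ceil(hours) * HOURLY_RATE, 2)

print(run_cost(48))   # a two-day simulation: $38.40
print(run_cost(2.5))  # 2.5 hours billed as 3: $2.40
```

At these prices, even a multi-day run costs less than a nice dinner — the appeal for bursty simulation workloads is obvious.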

Furthermore, this is a relatively cheap and ubiquitous platform that would allow someone to run high-performance applications without the overhead of purchasing a complete cluster. It would be good for, say, starting a business that required a large amount of computing power without having to purchase, store, feed, cool, and house all of that hardware up front. Once things got rolling it would be possible to use revenue to purchase and maintain such dedicated hardware.

The one major downside of EC2, as I see it, is that it doesn’t save any of the data on the machine once the machine is shut down. One has to ship the data off to S3 (no charge) or to another machine (bandwidth charges apply) before shutting it down. Nonetheless, as the tools for interacting with S3 improve, I expect that this limitation will disappear as well.

I should note that I’m not affiliated with either Amazon.com or JungleDisk in any way, except as a happy user.