Monthly Archives: November 2007

My very long day

I find myself this evening at the end of a very long day — one that started yesterday. I had a red-eye flight from Seattle to Baltimore via Detroit that left at 22:00 PST and arrived at 08:15 EST. Perhaps I should back up a bit.

My day started around 08:00 PST yesterday, as I woke up, saw Amanda off to the first day of her sub-internship, and set to work coding and analyzing data. Between rain and the season, it got dark at around 16:30. In the evening, everyone came home, I packed, dinner came and went, and Amanda drove me to the Seattle airport. You might think that my day would have ended sometime shortly thereafter.

What actually happened was that I caught about two hours of sleep on the three-hour flight, and was booted out of my cozy seat into the sleepy mid-field terminal in Detroit. I had only ever been in that terminal when it was a bustling hub of travelers. Normally, giant (50-foot?) screens show CNN that you can watch from ten gates away, speakers are dispersed everywhere, hundreds of people walk by in the space of a few minutes, and restaurants and stores beckon you in. Lately, I pass my time there by watching this fountain.

At 04:15, hardly anyone is there. The screens and speakers are mercifully off, and all of the restaurants and shops except McDonald’s are closed. It is surreal. Slowly, as I wandered through this foreign place, things came to life. Most of the shops and restaurants opened at 05:00. I acquired breakfast. By 05:45, I was sitting at the gate, watching “Jay and Silent Bob Strike Back” on my laptop. A sudden assault on my senses began — a quick look at a clock confirmed my suspicion that it all turned on at precisely 06:00 EST. The drive-in-theater-sized screens sprang to life. The CNN crawl snaked (snook?) across the bottoms of the screens in foot-high letters. The endless stream of inane talking-head babble was forced into my ears, past my in-ear headphones.

Shortly thereafter, I boarded the plane and had a cup of coffee. I had given up on creating any lasting separation between the days. They were separated by a mere two hours of intermittent sleep, which were quickly forgotten. Only now, 32 hours after I woke up, do I feel like the true end of yesterday is arriving. Hopefully, tomorrow things will feel a bit more, uh… correct.

Five years in the lab: looking back, then forward

About this time five years ago, I was a nervous junior undergraduate studying Biomedical Engineering at Tulane University. I had just been accepted as an undergraduate member of Dr. Natalia Trayanova’s computational cardiac electrophysiology lab. The goal at that time was to complete a research project for my undergraduate thesis.

So very many things have happened since then. Here are the highlights:

  • 2002: Started learning the ropes of the lab
  • 2003: Continued to familiarize myself with the computers and code in use in the lab. The most powerful machine in our possession was an SGI with 8 processors (the Origin 300 listed here). There was almost always a wait to use those processors. Spent my summer vacation working in the lab. This was the first time I was paid to do research. Some time during this year (I think), I created the lab wiki using MoinMoin. By this time I was administering the lab computers and was sick of answering the same questions over and over. In desperation, I created a wiki and started putting answers on it, referring people to the wiki whenever I was asked a question. The wiki is now (as of November 2007) huge, and contains basically all of the documentation of everything used in the lab, as well as gigabytes upon gigabytes of attached models, data, and images.
  • 2004: Graduated from Tulane with my Bachelor of Science in Engineering (BSE) degree. Joined the lab as a graduate student. Sometime in 2004 (I think), Tulane acquired a Linux Networx cluster, and we owned 20 nodes in that cluster.
  • 2005: Shortly after returning from my trip to Niger, Katrina struck New Orleans. The lab was scattered. Few people in the lab had access to their data. A few lab members actually snuck past armed guards to retrieve our file servers and some workstations from our lab at Tulane. We took up residence in St. Louis, MO for two and a half months, aided by our colleagues in the labs of Drs. Yoram Rudy and Igor Efimov at Washington University. By the end of the year, we had returned to a slowly-recovering New Orleans.
  • 2006: Dr. Trayanova accepted a position as a professor at Johns Hopkins University. Almost the entire lab transferred to JHU and moved to Baltimore, MD.
  • 2007: In April, I began discussing a cluster purchase with High Performance Computing (HPC) companies. Around that time, the weather warmed up, the server room could no longer be adequately cooled, and we started limping by on 4 compute nodes. By the end of July, we had placed an order for a new cluster. We moved from Clark Hall into the newly-completed though poorly-named Computational Science and Engineering Building. In mid-November, most of our new cluster arrived, though FedEx dropped and destroyed one rack, and the cluster was not completely set up.

That brings us to the present day. Now, looking forward a little:

In the next two weeks, the cluster set-up will be completed. We will have free rein on 140 compute nodes (20 old, 120 new), all managed from one head node. The new nodes will be connected by the fastest Infiniband interconnects available on the market, and each new node will have 8 GB of RAM installed, with room for up to 64 GB. There are four 3.0 GHz Opteron cores per node, yielding a total of 480 processors and 960 GB of RAM on the new nodes alone.
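For the curious, here is the back-of-the-envelope arithmetic behind those totals, written out as a tiny, purely illustrative script. The node counts and per-node specs are just the numbers quoted above, not anything pulled from the actual cluster configuration.

```python
# Rough tally of the combined cluster, using the numbers quoted above.
OLD_NODES = 20            # existing Linux Networx nodes
NEW_NODES = 120           # newly delivered nodes
CORES_PER_NEW_NODE = 4    # quad-core 3.0 GHz Opterons
RAM_PER_NEW_NODE_GB = 8   # installed now; each node can hold up to 64 GB

total_nodes = OLD_NODES + NEW_NODES              # 140 compute nodes
new_cores = NEW_NODES * CORES_PER_NEW_NODE       # 480 processors
new_ram_gb = NEW_NODES * RAM_PER_NEW_NODE_GB     # 960 GB of RAM

print(f"Total nodes: {total_nodes}")
print(f"Cores on new nodes: {new_cores}")
print(f"RAM on new nodes: {new_ram_gb} GB")
```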

To give you some perspective on what that means, let me give you some details about the kinds of models we run. When I joined the lab, our two largest models were a 4 mm-thick slice of the canine heart and a very smooth, idealized model of the rabbit heart, composed of 1.6 million and 0.82 million tetrahedral elements, respectively. It took something like an hour of wall-clock time per millisecond of simulation time to run these models. (In other words, to get one millisecond’s worth of simulation data, it was necessary to wait about an hour.) We could run one or two simulations at a time, at that speed.

My newest model, and currently the largest model in use in the lab, is composed of 28 million tetrahedral elements. On a cluster similar to our new one (Lonestar on TeraGrid), using 32 processors, it takes about 22 minutes of wall-clock time to simulate one millisecond in the model. Using a crude unit of speed (minutes of real time / millisecond of simulation time / million tetrahedral elements), and focusing only on the number of simulations we can run at once, not the number of CPUs required:

  • Old way: 60 minutes / 1 ms / 0.82 million tets ≈ 73 minutes / ms sim time / million tets
  • New way: 22 minutes / 1 ms / 28 million tets ≈ 0.79 minutes / ms sim time / million tets
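If you want to check the arithmetic, here it is as a short, purely illustrative script; the inputs are just the timings and model sizes quoted above.

```python
# Normalized speed: minutes of wall-clock time per millisecond of simulated
# time, per million tetrahedral elements (lower is better).
def minutes_per_ms_per_million_tets(wall_minutes, sim_ms, million_tets):
    return wall_minutes / sim_ms / million_tets

old = minutes_per_ms_per_million_tets(60, 1, 0.82)  # rabbit model, old hardware
new = minutes_per_ms_per_million_tets(22, 1, 28)    # 28M-element model, 32 CPUs

print(f"Old: {old:.1f} min / ms / million tets")    # ~73.2
print(f"New: {new:.2f} min / ms / million tets")    # ~0.79
print(f"Speedup: ~{old / new:.0f}x")                # ~93, i.e. almost 100-fold
```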

We have increased our simulation speed by almost 100-fold. We can run two to four simulations of that size at a time, versus one or two the old way. But that’s not all. We can now run bigger models. Much bigger models. We are now capable of running something the size of a dog heart (we have verified this). More importantly, we now have the technical capacity to run a model the size of the human heart, with a resolution approaching the size of a cardiac cell, and to model contraction in addition to electrical activity. It remains only to develop such models. We are prepared to store the results: the new cluster has a storage capacity of 28 TB online, with the ability to add something like 40 or 50 TB more simply by expanding the existing storage device.

In my time in the lab, I have watched our abilities expand from serial jobs with relatively small models to massively parallel jobs with the capacity to model electrical and mechanical activity in the human heart. We are just beginning a very exciting time in the lab and in the field, and what’s really killing me is the fact that there’s so much more to tell you.

But I can’t just yet.

(This post was partly inspired by a conversation with Maria and Amanda)

Cluster Update

So, the cluster is here.

The good news: I think we’ll be able to use it (albeit without Infiniband for now) by the end of the day.

The bad news: FedEx clearly dropped one entire rack. Yes. Unbelievable. About $110,000 worth of equipment. When it got here, there were plastic parts coming out from under the door of the crate. From the looks of it, it faceplanted off a loading dock or a forklift. The tilt sensor was definitely tripped. Some of it may still be good, but it all has to be thoroughly inspected. Penguin is just building us a whole new rack. We lucked out in a way — it was the only rack with no ‘special’ components. All of the other racks have something special. One has the head node (not a big deal). One has the Infiniband switch (very expensive, with a long lead time to replace), and one has a fileserver with a Xyratex storage unit (also very expensive, with a long lead time). This rack just had compute nodes and a regular gigabit switch.

I’ll have more news and pictures once we get further along.

New cluster to arrive tomorrow

If you’re subscribed to my RSS feed (hi, mom!) you might already have seen these pictures. Nonetheless, I’ll add them here with a little explanation. First, the front of the cluster:

New Cluster

I suggest that you click on it and go to the Flickr page, as I’ve added some notes to the various parts of the picture, pointing out the major pieces. Here’s the back:

Cluster Cabling

As the title of the post states, it’s supposed to arrive here in Baltimore, from San Francisco, sometime tomorrow. I’m guessing in the afternoon. A couple of guys from Penguin will be here at 13:00. That’s the good news. Now to the bad news — the news that kept me from falling asleep for hours on Friday night.

It won’t be ready to go when it arrives.

No, contrary to all of the lovely visions I was given about the cluster rolling out of the crates, and being ready to go, it will not be ready to go when it gets here. This is in large part due to all of the delays on the part of AMD, and their apparent inability to ship anything when they say they will. Now, I’m not sure just how un-ready it will be. If I can get all of the wires hooked up, and the Infiniband MPI libraries are ready to go, I should be able to have simulations running on it by Tuesday. Penguin’s software people are not excited about this, and want me to wait. We’ll see how things look tomorrow.

November and December are our deadline season, with Heart Rhythm abstracts due on January 4th. Now more than ever, we have a huge backlog of simulations to run on large models, requiring the high-performance power that this cluster packs. However, the complete set-up is not due to start until November 28th. That is no good. To make matters worse, our favorite TeraGrid cluster in Texas, Lonestar, is on the lam (no pun intended) until further notice.

I’m not superstitious, but if I were, I’d be crossing my fingers right now.