Adding InfiniBand to My Bitcoin Mining Cluster for HPC Tasks (Part 1 – Overview)

I now have a cluster with InfiniBand network hardware in my garage. This is my Bitcoin mining cluster that I’ve had running for a few years. Last summer I upgraded the CPUs from the cheapest available (Sempron 140s) to something faster but compatible with the same motherboards (Phenom II X4 975 BEs) so that I could run simulations for work, but I ran into scaling issues over 100 Mb/s Ethernet. At that point I picked up a cheap TRENDnet gigabit switch and moved the cluster over to it, which let me scale across 2-3 machines (8-12 cores), but beyond that things really started to slow down.

I’m happy to report that with the InfiniBand hooked up as of yesterday, I get approximately linear scaling across all 6 IB-equipped compute nodes. Unfortunately, the older single-data-rate (SDR) hardware I got for ‘cheap’ from eBay didn’t deliver the dramatic reduction in latency over gigabit Ethernet that I had expected. For that I expect I’ll need to upgrade to QDR IB at some point, but by then I’ll probably want faster machines as well. The total cost of the IB upgrade was $741.30: $350 for a 24-port Cisco Topspin switch, $279.30 for 7 Mellanox HCAs at $39.90 each, and $112.00 for 7 CX4 cables with latch connectors at $16.00 each. I intend to add some posts later with more details of the setup, since I know there is some interest out there in setting up cheap IB hardware at home.
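
For anyone wondering what I mean by “approximately linear scaling”: the usual yardstick is parallel efficiency, the speedup over a single-node run divided by the node count, which stays near 1 when scaling is linear. Here’s a minimal Python sketch of that bookkeeping; the timing numbers in it are placeholders for illustration, not my benchmark results.

```python
# Sketch of the scaling bookkeeping: speedup S(n) = T(1) / T(n) and
# parallel efficiency E(n) = S(n) / n, where T(n) is the wall-clock
# time of the same job on n nodes.
# The timings below are placeholders, not actual measurements.

def speedup(t1: float, tn: float) -> float:
    """Speedup of an n-node run relative to a single-node run."""
    return t1 / tn

def efficiency(t1: float, tn: float, n: int) -> float:
    """Parallel efficiency: 1.0 means perfectly linear scaling."""
    return speedup(t1, tn) / n

if __name__ == "__main__":
    t1 = 600.0  # placeholder: single-node wall-clock seconds
    timings = {2: 310.0, 4: 160.0, 6: 110.0}  # placeholder multi-node timings
    for n, tn in timings.items():
        print(f"{n} nodes: speedup {speedup(t1, tn):.2f}, "
              f"efficiency {efficiency(t1, tn, n):.2f}")
```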