When I start up a simulation on our cluster, I’m used to seeing this after some information scrolls by:
Solver = Conjugate gradient
preconditioner=block Jacobi with ILU(5) on each block
I knew before that this was some way of solving a big matrix representing the problem at hand, but never knew how it was done. (Un)luckily, my midterm project in one of my classes this semester is to implement and play around with the conjugate gradient method. We were given a little introduction to the method of steepest descent, then sent on our merry ways to the Mardi Gras and subsequent break.
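For anyone curious what that midterm project actually involves, here's a minimal sketch of the conjugate gradient method for solving Ax = b when A is symmetric positive-definite. This is my own illustrative version (the function name and test matrix are just examples, not from the course), closely following the standard algorithm as presented in Shewchuk's paper:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A via CG."""
    n = len(b)
    if max_iter is None:
        max_iter = n  # in exact arithmetic, CG converges in at most n steps
    x = np.zeros(n)
    r = b - A @ x            # residual
    d = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ad = A @ d
        alpha = rs_old / (d @ Ad)      # step length along d
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # converged
            break
        d = r + (rs_new / rs_old) * d  # next A-conjugate direction
        rs_old = rs_new
    return x

# Tiny example: a 2x2 symmetric positive-definite system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

The only difference from steepest descent is that update to d: instead of always moving along the residual, each new direction is made A-conjugate to the previous ones, which is what lets CG finish in at most n steps rather than zig-zagging.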
I was terrified.
I started reading the course notes that we’re using for the class, but they used a bunch of terminology I’d never heard of before, and they were extremely concise. Attempts at understanding the information on MathWorld and other sites ended in confusion. And then, I came upon this title on Google:
An Introduction to the Conjugate Gradient Method Without the Agonizing Pain
With the agonizing pain still acutely in my mind, I clicked on the link and gave it a try. It’s excellent! The author, Jonathan Richard Shewchuk, writes with clarity, an awareness that his reader is probably not a numerical analysis professor, and a little dry humor here and there. After searching for him on Google, I discovered why the name looked so familiar — I used his Triangle software to generate my 2D cross-section of our model of the rabbit ventricles! The CG paper has pretty much saved me, and perhaps more importantly has shown me just how cool and clever numerical analysis can be.
If you want to learn about the CG method, you really must read his paper.