December 5, 2011
I finally finished my graduate studies at Virginia Tech. The good news: I got my degree! The bad news: my 1.5 years of research is probably sitting on the VT servers gathering virtual dust. In hopes that it might help someone, I’ve published my thesis on the Internet for anyone interested in programming Hidden Markov Model (HMM) algorithms with CUDA:
Hidden Markov Models have been used for some time in pattern recognition, such as handwriting and speech analysis, but they have seen limited use in cognitive radio for things like spectrum sensing and signal analysis.
CPUs, especially the small ones found in mobile devices like cell phones, cannot keep up with the computational demands of most HMM algorithms. Graphics processing units (GPUs), on the other hand, can perform Single Instruction, Multiple Data (SIMD) operations on large-scale data. GPGPU seemed like a natural fit for HMMs.
I created C and CUDA implementations of three of the main HMM algorithms: the Forward Algorithm, the Viterbi Algorithm, and the Baum-Welch Algorithm. In my experiments, the graphics card outperformed the CPU only when many states or many models were processed simultaneously. The Forward and Baum-Welch Algorithms (with a large number of states) saw a huge increase in execution speed when moved to the GPU. The Viterbi Algorithm, however, saw only a marginal speedup.
If you’re interested in my HMM CUDA implementation, check out my project on Google Code: