It's 7am. The cat and I are the only ones awake. The sun is not yet up. A good time to implement regularized logistic regression for my ML Class homework.
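For anyone who hasn't seen it, here's a rough sketch of what that exercise boils down to: a cross-entropy cost with an L2 penalty on the weights, plus its gradient. This is in Python rather than the course's Octave, and the function names and the plain gradient-descent loop are my own illustration, not the assignment's code.

```python
import numpy as np

def sigmoid(z):
    """Logistic function, mapping real values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def cost_and_gradient(theta, X, y, lam):
    """Regularized logistic regression cost and gradient.

    X is an (m, n) design matrix whose first column is all ones;
    by convention the bias term theta[0] is not regularized.
    """
    m = X.shape[0]
    h = sigmoid(X @ theta)
    # Cross-entropy loss plus an L2 penalty on the non-bias weights.
    reg = (lam / (2 * m)) * np.sum(theta[1:] ** 2)
    cost = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m + reg
    grad = X.T @ (h - y) / m
    grad[1:] += (lam / m) * theta[1:]
    return cost, grad

def fit(X, y, lam=1.0, lr=0.1, iters=5000):
    """Plain batch gradient descent; the course hands the cost and
    gradient to an off-the-shelf optimizer (Octave's fminunc) instead."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        _, grad = cost_and_gradient(theta, X, y, lam)
        theta -= lr * grad
    return theta
```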
I already have two degrees in machine learning, so why am I spending time on a cut-down version of Stanford's introductory machine learning course? Were the institutions I attended that bad? Not at all, but it turns out (to use Andrew Ng's favourite phrase) that the material in the ML Class is a useful complement to my training.
My research projects (PhD and MSc) involved reinforcement learning and unsupervised learning. My current work on Myna is similar. I don't actually have much experience with supervised learning, the bread-and-butter of machine learning. Furthermore, my teachers follow the Bayesian heresy (as, largely, do I), and I studied at a time when support vector machines were the new hotness. I learned about neural networks, decision trees, and generative models. Stanford's course, no doubt influenced by ESL, emphasises linear learners and kernel methods.
I also benefit from the structure of the course. The exercises force me to spend more time with the material than if I were just reading papers. I can also fit it into my life. I watch the videos while I'm doing housework; the material is familiar enough that I can keep up without giving it my full attention. Likewise, the programming exercises aren't too onerous. It's taken me longer to write this post than it took to complete the homework today. Again, this is due to familiarity with the material and tools.
And now everyone else is waking up, and I must go.