Back to Machine Learning

In 2012 I spent a lot of my rare free time working through both Sebastian Thrun and Peter Norvig’s Intro to Artificial Intelligence (which I can’t find anymore) and Andrew Ng’s Machine Learning course. Andrew’s was the better of the two.

One thing that blew me away was how he used k-means clustering on our homework submissions to discover clusters of students who shared a common misconception, and then recorded a clarification video for each one.
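The details of how the course staff did this weren’t published, but the core idea can be sketched: represent each submission as a feature vector, then cluster. Here is a minimal pure-Python k-means, with the (hard) feature-extraction step assumed away and toy 2-D vectors standing in for submissions:

```python
def kmeans(points, k, iters=20):
    """Plain k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its members."""
    # Naive init: use the first k points as centroids (fine for a sketch).
    centroids = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties out
                centroids[i] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids, clusters

# Two obvious groups of "submissions" embedded as 2-D feature vectors
points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(points, k=2)
```

On data like this, k-means recovers the two groups; in the course setting, inspecting the submissions inside one cluster is what reveals the shared misconception.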

At the time I was working in R&D at Rosetta Stone, and we wanted to bring data science and machine learning to bear on improving our language learning offerings. Our product was a linear course, and we dreamed of building a model of what each learner knew, then challenging them on what they didn’t. Duolingo had a much better vision and execution for this.

With the breakthroughs of the last few years, like Stable Diffusion, LLaMA, and LLaVA (yes, I’m ignoring things I can’t run on my own computer), I wanted to dive back in and learn more. To that end I’m taking Jeremy Howard’s Practical Deep Learning for Coders.

I have a lot of data on comic books. I’m hoping to build some practical applications for this. One idea is to feed each panel into something like LLaVA and have it describe what’s going on in it, then have it summarize the story.
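The shape of that pipeline is simple enough to sketch, even before picking how to host the models. Everything here is hypothetical scaffolding: `describe_panel` and the summarization step are stubs standing in for real LLaVA and LLM calls:

```python
def describe_panel(panel_image_path):
    # Hypothetical stand-in for a real LLaVA call; in practice this
    # would send the panel image to a locally hosted vision-language
    # model and return its description of what's happening.
    return f"description of {panel_image_path}"

def summarize_story(panel_paths):
    # Describe each panel in reading order, then hand the ordered
    # descriptions to a text model for summarization (also stubbed
    # here as a simple join).
    descriptions = [describe_panel(p) for p in panel_paths]
    return " ".join(descriptions)

panels = ["page1_panel1.png", "page1_panel2.png"]
summary = summarize_story(panels)
```

The interesting questions all live inside the stubs: panel segmentation, reading order across layouts, and how much story context each description call should carry.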

#machine-learning