I was sharing my latest Algorithmic Rotoscope image on Facebook and a friend asked me what I meant by training a machine learning model. I still suck at quantifying this stuff in any normal way. When you get too close to the fire you lose your words sometimes. It is why I try to step away and write stories about it--helps me find my words, and learn to use them in new and interesting ways.
Thankfully I have a partner in crime who understands this stuff and knows how to use her words. Audrey came up with the following explanation of what machine learning is in the context of my Algorithmic Rotoscope work:
"Machine learning" is a subset of AI in which a computer works at a problem programmatically without being explicitly programmed to do something specific. In this case, the Algorithmia folks have written a program that can identify certain characteristics in a piece of art -- color, texture, shadow, etc. This program can be used to construct a filter and that can be used in turn to alter another image. Kin is "training" new algorithms based on Algorithmia's machine learning work -- in order to build a new filter like this one based on Russian propaganda, the program analyzes that original piece of art -- the striking red, obviously. The computer does this thru machine learning rather than Kin specifying what it should "see."
I use my blog as a reference for my ideas and thoughts, and I didn't want to lose this one. I'm playing with machine learning so that I can better understand what it does, and what it doesn't do. It helps me to have good explanations of what I'm doing, so I can turn other people on to the concept and make more sense myself (some of the time). We are going to have to develop an ability to have a conversation about the artificial intelligence and machine learning assault that has already begun. It will be important that we help others get up to speed and see through the smoke and mirrors.
When it comes to training algorithmic models using art, there isn't any machine learning going on. My model isn't learning art. When I execute the model against an image it isn't making art either. I am just training an algorithm to evaluate and remember an image, creating a model that can then be applied to other images--transferring the characteristics from one image to another algorithmically. In my work it is important for me to understand the moving parts, and how the algorithmic gears turn, so I can tell more truthful stories about what all of this is, and generate visuals that complement these stories I'm publishing.
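For readers who want to peek at the gears themselves: the "evaluate and remember an image" step in neural style transfer (the technique behind Algorithmia's filters, from Gatys et al.) usually boils down to computing a Gram matrix -- the correlations between feature channels of an image. This toy sketch is an assumption about the general technique, not Algorithmia's actual code, and it uses random numbers in place of a real neural network's activations:

```python
import numpy as np

def gram_matrix(features):
    """Style 'memory' of an image: correlations between feature channels.

    features: array of shape (height, width, channels), standing in for
    the activations of one convolutional layer for an image.
    """
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # one row per pixel position
    return flat.T @ flat / (h * w)      # (c, c) channel correlations

# Toy stand-ins for a real network's activations on two images.
rng = np.random.default_rng(0)
style_features = rng.random((8, 8, 3))    # e.g. the propaganda poster
content_features = rng.random((8, 8, 3))  # e.g. the photo being filtered

# "Training" the filter means finding an image whose Gram matrix matches
# the style image's; here we just measure that style distance.
style_target = gram_matrix(style_features)
style_loss = np.sum((gram_matrix(content_features) - style_target) ** 2)
```

The point of the sketch is the shape of the idea: the model never stores the painting itself, only a small matrix of statistics about its color and texture, which is why applying it to a new photo transfers characteristics rather than copying art.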