Kin Lane

I Do Not Distort My Images With Machine Learning Because They Look Better

I have been playing with machine learning since the election. I started a side project I call algorotoscope, where I began applying texture transfer machine learning algorithms to videos. I don’t have the budget I did around the holidays, so it has been reduced to just photos, but it is something I dedicate regular time to each week. Many of the original photos actually look better than the filtered versions, and yet I keep doing it. Not because the results look better or worse, but because I want to show how our world is increasingly being distorted by algorithms.

Historically, I often used Noun Project images in my stories, because they reflected the minimalist look of my website. After the 2016 presidential election things changed for me radically. It has been building over the last several years, but during this election it became clear that we were going to be living in a permanent state of algorithmic distortion from here on out. Now, I am a poor to mediocre photographer, but I love taking photos, and playing with my drone and other video cameras. I enjoy using these photos in my stories, and I feel that the algorithmic filters I can apply to them add another dimension to my storytelling.

Most of the time the meaning behind an image is something only I will understand, but other times I will tell the story behind it. Regardless of my approach, I feel like algorithmically distorted images go well with my API storytelling. Not only are APIs being used to apply artificial intelligence and machine learning, but they are being used to algorithmically distort almost everything else in our lives, from our Twitter and Facebook walls, to the videos and images we see on Instagram, Snapchat, and YouTube. Even if my algorithmic distortion doesn’t convey the exact meaning I intended with each story I tell and image I include, I think the regular reminder that algorithmic distortion exists is important for us all, and something that should be recognizable throughout our online experiences.

One thing that makes my image texture transfers different from what you see on most platforms is that I am using Algorithmia’s Style Thief, which allows me to choose both the source of the filter and the image I’m applying it to. This gives me a wider range of textures to transfer, and in my opinion, more control over what meaning gets extracted, transferred, and applied to each image. Also, 98% of the images I’m filtering are my own, taken on my iPhone, Canon, drone, or Osmo equipment. I’m slowly working to get my image act together so I can more efficiently recall images. I’m also working to build a library of art, propaganda, and other work that I can borrow (steal) the textures from and apply to my work. I’m also working to maintain some level of attribution in this work, allowing me to cite where I derive filters, and recreate distortion that works for me.
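For anyone curious what choosing both a style source and a target image looks like in code, here is a minimal sketch using Algorithmia’s Python client. The `client.algo(...).pipe(...)` call pattern is Algorithmia’s standard request convention, but the algorithm path and the input field names below are assumptions for illustration, not the exact Style Thief contract — check the algorithm’s listing for its real schema.

```python
# Minimal sketch: texture/style transfer through an Algorithmia-hosted
# algorithm. The algorithm path and payload field names are hypothetical;
# consult the Style Thief page on Algorithmia for the actual schema.
import os


def build_style_request(style_url, content_url):
    """Assemble the request payload: a style source image (where the
    texture comes from) plus the photo the texture is applied to."""
    return {
        "style": style_url,      # the art/propaganda source of the filter
        "content": content_url,  # my own photo being distorted
    }


def transfer_style(style_url, content_url):
    """Send the two images to a style-transfer algorithm and return
    the result. Requires the `algorithmia` package and an API key."""
    import Algorithmia  # pip install algorithmia

    client = Algorithmia.client(os.environ["ALGORITHMIA_API_KEY"])
    algo = client.algo("deeplearning/style_thief/0.1.0")  # hypothetical path
    return algo.pipe(build_style_request(style_url, content_url)).result


if __name__ == "__main__":
    # Inspect the payload without making a network call.
    print(build_style_request(
        "https://example.com/propaganda-poster.jpg",
        "https://example.com/drone-shot.jpg",
    ))
```

The key design point is the separation the paragraph above describes: unlike fixed-filter apps, both inputs are under my control, so the texture library and the photo library can evolve independently.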

Not sure where this is all going, but it is something I’ll keep playing with alongside my regular storytelling. For me, it is a vehicle for pushing my storytelling forward, while also providing a constant reminder for myself and my readers about how APIs and algorithms are distorting everything we know today. It is something we have to remember, otherwise I’m afraid we won’t be able to even tread water in this new environment we’ve created for ourselves.