Machine Learning Will Be A Vehicle For Many Heists In The Future

I am spending some cycles on my algorithmic rotoscope work, which is basically a stationary exercise bicycle for learning what is, and what is not, machine learning. I am using it to help me understand and tell stories about machine learning by creating images using machine learning that I can use in my machine learning storytelling. Picture a bunch of machine learning gears all working together to help make sense of what I'm doing, and WTF I am talking about.

As I'm writing a story on how image style transfer machine learning could be put to use by libraries, museums, and collection curators, I'm reminded of what a con machine learning will be in the future, and how it will be a vehicle for the extraction of value and outright theft. My image style transfer work is just one tiny slice of this pie. I am browsing through the art collections of museums, finding images that have meaning and value, and then I'm firing up an AWS instance that costs me $1 per hour to run, pointing it at an image, and extracting the style, texture, color, and other characteristics. I take what I extracted from a machine learning training session, and package it up into a machine learning model that I can use in a variety of the algorithmic objectives I have.

I didn't learn anything about the work of art. I basically took a copy of its likeness and features. Kind of like what the old Indian chief would say to the photographer in the 19th century when they'd take his photo. I'm not just taking a digital copy of this image. I'm taking a digital copy of the essence of this image. Now I can take this essence and apply it in an Instagram-like application, transferring the essence of the image to any new photo the end-user desires. Is this theft? Do I owe the owner of the image anything? I'm guessing it depends on the licensing of the image I used in the image style transfer model--which is why I tend to use openly licensed photos. I'll have to learn more about copyright and see if there are any algorithmic machine learning precedents to be had.

My theft example in this story is just low-level algorithmic art essence theft. However, this same approach will play out across all sectors. A company will approach another company telling them: we have this amazing machine learning voodoo, and if we run it against your data, content, and media, it will tell you exactly what you need to know--give you the awareness of a deity. Oh, and thank you for giving us access to all your data, content, and media; it has significantly increased the value of our machine learning models--something that might not be expressed in our business agreement. This type of business model is above your pay grade, and operating on a different plane of existence.

Machine learning has a number of valuable uses, with some very interesting advancements having been made in recent years, notably around TensorFlow. Machine learning itself doesn't have me concerned. It is the investment behind machine learning, the less-than-ethical approaches of some machine learning companies I am watching, and their tendency toward making wild claims about what machine learning can do. Machine learning will be the trojan horse for this latest wave of artificial intelligence snake oil salesmen. All I am saying is that you should be thoughtful about which machine learning solutions you connect to your backend, and when possible make sure you are just connecting them to a sandboxed, special version of your world that won't actually do any damage when things go south.

Why Would People Want Fine Art Trained Machine Learning Models

I'm spending time on my algorithmic rotoscope work, and thinking about how the machine learning style textures I've been making can be put to use. I'm trying to see things from different vantage points and develop a better understanding of how texture styles can be put to use in the regular world.

I am enjoying using image style filters in my writing. It gives me kind of a gamified layer to my photography and drone hobby that allows me to create actual images I can use in my work as the API Evangelist. Having unique filtered images available for use in my writing is valuable to me--enough to justify the couple hundred dollars I spend each month on AWS servers.

I know why I like applying image styles to my photos, but why do others? Most of the image filters out there we've seen from apps like Prisma are focused on fine art, training image style transfer machine learning models on popular art that people are already familiar with. I guess this allows people to apply the characteristics of art they like to the photographic layer of our increasingly digital lives.

To me, it feels like some sort of art placebo. A way of superficially and algorithmically injecting what our brain tells us is artsy into our fairly empty, digital photo reality. Taking photos in real time isn't satisfying enough anymore. We need to distract ourselves from the world by applying art to our digitally documented physical world--almost the opposite of augmented reality, if there is such a thing. Getting lost in the ability to look at the real world through the algorithmic lens of our online life.

We are stealing the essence of the meaningful, tangible art from our real world, and digitizing it. We take this essence and algorithmically apply it to our everyday life, trying to add some color, some texture, but not too much. We need the photos to still be meaningful, and have context in our life, but we need to be able to spray an algorithmic lacquer of meaning on our intangible lives.

The more filters we have, the more lenses we have to look at the exact same moments we live each day. We go to work. We go to school. We see the same scenery, the same people, and the same pictures each day. Now we are able to algorithmically shift, distort, and paint the picture of our lives we want to see.

Now we can add color to our life. We are being trained to think we can change the palette, and that we are in control of our lives. We can colorize the old World War 2-era photos of our day, and choose whether we want to color within, or outside, the lines. Our lives don't have to be just binary 1s and 0s, and black or white.

Slowly, picture by picture, algorithmic transfer by algorithmic transfer, the way we see the world changes. We no longer settle for the way things are, the way our mobile phone camera catches it. The digital version is the image we share with our friends, family, and the world. It should always be the most brilliant, the most colorful--the painting that catches their eye and makes them stand captivated in front of it on the wall of our Facebook feed.

We no longer will remember what reality looks like, or what art looks like. Our collective social media memory will dictate what the world looks like. The number of likes will determine what is artistic, and what is beautiful or ugly. The algorithm will only show us what images match the world it wants us to see. Algorithmically, artistically painting the inside walls of our digital bubble.

Eventually, the senses that photos stimulate will be well worn. They will be well programmed, with known inputs and predictable outputs. The algorithm will be able to deliver exactly what we need, and correctly predict what we will need next--scheduling up and queuing the next fifty possible scenarios, with exactly the right colors, textures, and meaning.

How we see art will be forever changed by the algorithm. Our machines will never see art. Our machines will never know art. Our machines will only be able to transfer the characteristics we see and deliver them into newer, more relevant, timely, and meaningful images. Distilling down the essence of art into binary, and programming us to think this synthetic art is meaningful, and still applies to our physical world.

Like I said, I think people like applying artistic image filters to their mobile photos because it is the opposite of augmented reality. They are trying to augment their digital presence (hopes of reality) with the essence of what we (the algorithm) think matters to us in the world. This process isn't about training a model to see art, like some folks may tell you. It is about distilling down some of the most simple aspects of what our eyes see as art, and giving this algorithm to our mobile phones and social networks to apply to the photographic digital logging of our physical reality.

It feels like this is about reprogramming people. It is about reprogramming what stimulates you. Automating an algorithmic view of what matters when it comes to art, and applying it to a digital view of what matters in our daily worlds, via our social networks. Just one more area of our life where we are allowing algorithms to reprogram us, and bend our reality to be more digital.

I Borrowed This Image From University of Maine Museum of Art

The Observability Of Uber

I had another observation come out of the Uber news this last week, where Uber was actively targeting regulators and police in cities around the globe, and delivering an alternate experience for these users because they had been targeted as enemies of the company. To most startups, regulation is seen as the enemy, so these users belong in a special bucket--one where they can be excluded from the service, or even actively given a special Uber experience.

It makes me think about the observability of the platforms we depend on, like Uber. How observable is Uber to the average user, to regulators, law enforcement, and the government? How observable should the platforms we depend on be? Can everyone sign up for an account, use the website, mobile applications, or APIs, and expect the same results? How well can we understand the internal states of Uber, the platform and company, from knowledge obtained through its existing external outputs--its mobile application and API?

When it comes to the observability of the platforms we depend on via our mobile phones each day, there are no laws stating they have to treat us all the same. The applications on our mobile phones are personalized, making notions of net neutrality seem naive. There is nothing that says Uber can't treat each user differently, based upon their profile score, or whether they are law enforcement. We are not entitled to any sort of visibility into the algorithms that decide whether we get a ride with Uber, or how they see us--this is the mystery, magic, and allure of the algorithm. This is why startups are able to wrap anything in an algorithm and sell it as the next big thing.

The question of how observable Uber should be will be defined in the coming months and years. What surprises me is that we are just now getting around to having these conversations, when these companies possess an unprecedented amount of observability into our personal and professional lives. The Uber app knows a lot about us, and in turn, Uber knows a lot about us. I'm thinking the more important question is, why are we allowing so much observability by these tech companies into our lives, with so little in return when it comes to understanding the business practices and ethics behind the company firewall?

Machine Learning Style Transfer For Museums, Libraries, and Collections

I'm putting some thought into next steps for my algorithmic rotoscope work, which is about training and applying image style transfer machine learning models. I'm talking with Jason Toy (@jtoy) over at Somatic about the variety of use cases, and I want to spend some time thinking about image style transfers from the perspective of a collector or curator of images--brainstorming how they can organize and make available their work(s) for use in image style transfers.

Ok, let's start with the basics--what am I talking about when I say image style transfer? I recommend starting with a basic definition of machine learning in this context, provided by my girlfriend and partner in crime Audrey Watters. Beyond that, I am just referring to training a machine learning model by directing it to scan an image. This model can then be applied to other images, essentially transferring the style of one image to any other image. There are a handful of mobile applications out there right now that let you apply a handful of filters to images taken with your mobile phone--Somatic is looking to be the wholesale provider of these features.

Training one of these models isn't cheap. It costs me about $20 per model in GPUs to create--this doesn't consider my time, just my hard compute costs (AWS bill). Not every model does anything interesting. Not all images, photos, and pieces of art translate into cool features when applied to images. I've spent about $700 training 35 filters. Some of them are cool, and some of them are meh. I've had the most luck focusing on dystopian landscapes, which I can use in my storytelling around topics like immigration, technology, and the election.
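For anyone curious what "extracting the style" of an image actually means under the hood, the family of approaches behind this kind of work (neural style transfer) typically represents style as correlations between a network's feature channels--the Gram matrix. Below is a rough sketch of that idea in NumPy, with random arrays standing in for the convolutional feature maps a real network would produce; this is illustrative only, not the actual code running on my AWS instances.

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.

    features: array of shape (height, width, channels), standing in for
    the activations of one convolutional layer for an image.
    """
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # one row per pixel position
    return flat.T @ flat / (h * w)      # (channels, channels) correlation matrix

def style_loss(gram_art, gram_photo):
    """How far a photo's feature correlations are from the artwork's."""
    return float(np.mean((gram_art - gram_photo) ** 2))

# Random stand-ins for the feature maps of an artwork and a photo.
rng = np.random.default_rng(0)
art_features = rng.normal(size=(32, 32, 8))
photo_features = rng.normal(size=(32, 32, 8))

g_art = gram_matrix(art_features)
print(g_art.shape)  # (8, 8)
print(style_loss(g_art, gram_matrix(photo_features)))
```

Training then amounts to minimizing this style loss (alongside a content loss) over many thousands of iterations--which is where the GPU hours, and the $20 per model, actually go.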

This work ended up with Jason and me talking about museum and library collections, and the opportunities for their stewards to think about these collections in terms of machine learning, specifically algorithmic style transfer. Do you have images in your collection that would translate well for use in graphic design, print, and digital photo applications? I spend hours looking through art books for the right textures, colors, and outlines. I also spend hours looking through graphic design archives for the movie and gaming industries, as well as government collections--looking for just the right set of images that will either produce an interesting look, or possibly transfer something meaningful to the new images that I am applying styles to.

Sometimes style transfers just make a photo look cool, bringing some general colors, textures, and other features to a new photo--there really isn't any value in knowing what image was behind the style transfer, it just looks cool. Other times, the image can be enhanced by knowing about the image behind the machine learning model, not just transferring styles between images, but also potentially transferring some meaning as well. You can see this in action when I took a Nazi propaganda poster and applied it to a photo of Ellis Island, or when I took an old Russian propaganda poster and applied it to images of the White House. In a sense, I was able to transfer some of the 1,000 words behind the propaganda posters to the new photos I had taken.

It's easy to think you will make a new image into a piece of art by training a model on a piece of art and transferring its characteristics to a new image using machine learning. Where I find the real value is in actually understanding collections of images, while also being aware of the style transfer process, and thinking about how images can be trained and applied. However, this only gets you so far--there still has to be some value or meaning in how it's being applied, accomplishing a specific objective and delivering some sort of meaning. If you are doing this as part of some graphic design work, it will be different than if you are doing it for fun on a mobile phone app with your friends.

To further stimulate my imagination and awareness I'm looking through a variety of open image collections, from a variety of institutions:

I am also using some of the usual suspects when it comes to searching for images on the web:

I am working on developing specific categories that have relevance to the storytelling I'm doing across my blogs, and sometimes to help power my partners work as well. I'm currently mining the following areas, looking for interesting images to train style transfer machine learning models:

  • Art - The obvious usage for all of this, finding interesting pieces of art that make your photos look cool.
  • Video Game - I find video game imagery to provide a wealth of ideas for training and applying image style transfers.
  • Science Fiction - Another rich source of imagery for the training of image style transfer models that do cool things.
  • Electrical - I'm finding circuit boards, lighting, and other electrical imagery to be useful in training models.
  • Industrial - I'm finding industrial images to work for both sides of the equation in training and applying models.
  • Propaganda - These are great for training models, and then transferring the texture and the meaning behind them.
  • Labor - Similar to propaganda posters, potentially some emotional work here that would transfer significant meaning.
  • Space - A new one I'm adding for finding interesting imagery that can train models, and experiencing what the effect is.

As I look through more collections, and gain experience training and applying style transfer models, I have begun to develop an eye for what looks good. I also develop more ideas along the way for imagery that can help reinforce the storytelling I'm doing across my work. It is a journey I am hoping more librarians, museum curators, and collection stewards will embark on. I don't think you need to learn the inner workings of machine learning, but you should at least develop enough of an understanding that you can think more critically about the collections you are knowledgeable about.

I know Jason would like to help you, and I'm more than happy to help you along in the process. Honestly, the biggest hurdle is the money to afford the GPUs for training the models. After that, it is about spending the time finding images to train models, as well as applying the models to a variety of imagery, as part of some sort of meaningful process. I can spend days looking through art collections, then spend a significant amount of my AWS budget training machine learning models, but if I don't have a meaningful way to apply them, it doesn't bring any value to the table, and it's unlikely I will be able to justify the budget in the future.

My algorithmic rotoscope work is used throughout my writing and helps influence the stories I tell on API Evangelist, Kin Lane, Drone Recovery, and now Contrafabulists. I invest about $150.00 / month in training image style transfer models, keeping a fresh number of models coming off the assembly line. I have a variety of tools that allow me to apply the models using Algorithmia and now Somatic. I'm now looking for folks who have knowledge of and access to interesting image collections, who would want to learn more about image style transfer, as well as graphic design and print shops, mobile application development shops, and other interested folks who are just curious about WTF image style transfers are all about.

In The Future We Will All Have Multiple Digital Personas

I am captivated by the news about Uber actively targeting regulators and police in cities around the globe. I specifically love thinking about the work that regulators and investigators are having to do to be able to build a case against Uber, and inversely the amount of work that Uber is doing to thwart these investigations, and break into new and often times hostile markets.

Regulators and police are using burner devices, and fake personas to do their work. Uber is delivering fake services and creating fake signals to create a foggy landscape where they can do business. I'm not rooting for law enforcement, regulators, or Uber, I'm rooting for everyone possessing more than one persona, throwaway versions of themselves that are used to distract, obfuscate, hide, and confuse the machine. It's a very beautiful dumpster fire of a digital world we've created for ourselves--good job everyone.

I'm preparing for some international travel in a couple of weeks, so I'm firing up my alter egos. They aren't fake personas, but they are alternative versions of myself that will be present on all my devices when I cross any international border--devices I can stand to lose, or just throw away. This is the world now. We won't have just a single digital version of ourselves. We will have alternative versions of our personal lives and our work lives, and we'll create fake accounts as they are needed--our children are already well trained in this practice.

This will be the only way we can carve out any sense of privacy in a surveillance economy. Platforms and regulators will have to work overtime to connect the dots. Our digital self will become a schizophrenic reflection of our physical world, where devices have invaded every space and moment, and are trying to identify who we are, what we are doing, and connect the dots between each version of our self, as well as those around us. I can't help but feel like the Internet as we know it is somehow fracturing society, and any sense we have of the individual--something that will be difficult to recover from, and I fear we will always be different from here forward.

The Residue Of The Internet's C4I DNA Visible In Uber's Behavior

The military's fingerprints are visible throughout the Internet's history, with much of the history of compute born out of war, so it's no surprise that the next wave of warfare is all about the cyber (it's HUGE). With so much of Internet technology being inseparable from military ideology, and much of its funding coming from the military-industrial complex, it is going to be pretty hard for Internet tech to shake the core DNA programmed into it from its command, control, communications, computers, and intelligence (C4I) seeds.

This DNA is present in the unconscious behavior we see from startups, most recently with the news of Uber deceiving authorities using a tool they developed called Greyball, allowing them to target regulators and law enforcement, and prevent or obscure their access to and usage of the ridesharing platform. User profiling and targeting are staples of Silicon Valley startups. Startups profile and target their definition of ideal behavior(s), and then focus on getting users to operate within these buckets, or segment them into less desirable buckets, and deal (or don't) with them however they deem appropriate.

If you are a marketer or salesperson, you think targeting is a good thing. You want as much information on a single user, and a group of users, as you possibly can, so that you can command and control (C2) your desired outcome. If you are a software engineer, this is all a game to you. You gather all the data points you possibly can to build your profiles--command, control, communications, and intelligence (C3I). The Internet's DNA whispers in your ear--you are the smart one here; everyone else is just a pawn in your game. Executives and investors just sit back and pull the puppet strings on all the actors within their control.

It's no surprise that Uber is targeting regulators and law enforcement. They are just another user demographic bucket. I guarantee there are buckets for competitors, and for their employees who have signed up for accounts. When any user signs up for your service, you process what you know about them, put them in a bucket, and see where they exist (or don't) within your sales funnel (rinse, repeat). Competitors, regulators, and law enforcement all have a role to play; the bucket they get put into, and the service they receive, will be (slightly) different than everyone else's.

Us engineers love to believe that we are the puppet masters, when in reality we are the puppets, with our strings pulled by those who invest in us, and our one true master--Internet technology. We numb ourselves and conveniently forget the history of the Internet, and lie to ourselves that venture capital has our best interests in mind and that they need us. They do not. We are a commodity. We are the frontline of this new type of warfare that has evolved with the Internet over the last 50 years--and we are beginning to see the casualties of this war: democracy, privacy, and security.

This is cyber warfare. It's lower-level warfare in the sense that the physical destruction and blood aren't front and center, but the devastation and suffering still exist. Corporations are forming their own militias, drawing lines, defining their buckets, and targeting groups to deliver propaganda to, while positioning for a variety of attacks against competitors, regulators, law enforcement, and other competing militias. You attack anyone the algorithm defines as the enemy. You aggressively sell to those who fit your ideal profile. You try to indoctrinate anyone you can trust to be part of your militia, and keep fighting--it is what the Internet wants us to do.

What Do You Mean When You Say You Are Training A Machine Learning Model?

I was sharing my latest Algorithmic Rotoscope image on Facebook and a friend asked me what I meant by training a machine learning model. I still suck at quantifying this stuff in any normal way. When you get too close to the fire you lose your words sometimes. It is why I try to step away and write stories about it--helps me find my words, and learn to use them in new and interesting ways.

Thankfully I have a partner in crime who understands this stuff and knows how to use her words. Audrey came up with the following explanation of what machine learning is in the context of my Algorithmic Rotoscope work:

"Machine learning" is a subset of AI in which a computer works at a problem programmatically without being explicitly programmed to do something specific. In this case, the Algorithmia folks have written a program that can identify certain characteristics in a piece of art -- color, texture, shadow, etc. This program can be used to construct a filter and that can be used in turn to alter another image. Kin is "training" new algorithms based on Algorithmia's machine learning work -- in order to build a new filter like this one based on Russian propaganda, the program analyzes that original piece of art -- the striking red, obviously. The computer does this thru machine learning rather than Kin specifying what it should "see."

I use my blog as a reference for my ideas and thoughts, and I didn't want to lose this one. I'm playing with machine learning so that I can better understand what it does, and what it doesn't do. It helps me to have good explanations of what I'm doing, so I can turn other people on to the concept and make more sense (some of the time). We are going to have to develop the ability to have a conversation about the artificial intelligence and machine learning assault that has already begun. It will be important that we help others get up to speed and see through the smoke and mirrors.

When it comes to training algorithmic models using art, there isn't any machine learning going on. My model isn't learning art. When I execute the model against an image it isn't making art either. I am just training an algorithm to evaluate and remember an image, creating a model that can then be applied to other images--transferring the characteristics from one image to another algorithmically. In my work it is important for me to understand the moving parts, and how the algorithmic gears turn, so I can tell more truthful stories about what all of this is, and generate visuals that complement these stories I'm publishing.

Adopta.Agency In Trump Administration

Adopta.Agency is an ongoing project for me. I'm still using the template as a basis for some custom open data work, but I wanted to pause for a moment and think about what Adopta.Agency means to me in a Trump administration. The need for Adopta.Agency is greater than ever. We need an army of civic-minded individuals to step in and help be stewards of public data. The current administration does not see value in making government more transparent, something that will trickle down to all levels of government, making what we do much more difficult.

To be honest, after the election I hit a pretty big low regarding what I should be doing with open data at the federal level. Now in February I feel a little more optimistic, and I wanted to set a handful of Adopta.Agency goals for myself, and think more about the project in the Trump administration. In the next couple of months I want to:

  • Target Two Datasets - I want to target two datasets in the coming months, liberate them from their current position on government servers, download and convert them to YAML format, and publish them as an Adopta.Agency project on Github.
  • API Adoption - In addition to rescuing open data sets from disappearing, I want to enable the reuse of APIs. You can't always save or replace the entire API, but indexing and mapping what is there will help any future projects in the same area.
  • Storytelling - There has been a lot going on when it comes to rescuing government data in the last 60 days. Much of it has been centered around climate data -- I want to tell more stories of work going on beyond just Adopta.Agency.
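The liberation step in that first goal is mostly mechanical: pull a dataset down, convert it, and publish. Here is a rough sketch of the CSV-to-YAML conversion, using a hand-rolled emitter for flat string data--an actual Adopta.Agency project would likely reach for a library like PyYAML, and the sample dataset below is made up for illustration.

```python
import csv
import io

def csv_to_yaml(csv_text):
    """Convert a flat CSV dataset into a simple YAML list of records.

    Hand-rolled for flat string values only; quoting and nesting edge
    cases are better handled by a real YAML library.
    """
    rows = csv.DictReader(io.StringIO(csv_text))
    lines = []
    for row in rows:
        for i, (key, value) in enumerate(row.items()):
            prefix = "- " if i == 0 else "  "   # new record vs continuation
            lines.append(f'{prefix}{key}: "{value}"')
    return "\n".join(lines) + "\n"

# A made-up slice of an open dataset rescued from a government server.
sample = "agency,dataset,year\nEPA,air-quality,2016\nNOAA,sea-level,2015\n"
print(csv_to_yaml(sample))
```

The resulting YAML file can then be committed to a Github repository alongside the Adopta.Agency blueprint, where it is versioned, forkable, and safe from disappearing servers.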

The Trump administration doesn't change the Adopta.Agency mission and purpose at all, it just raises the stakes. I still view the federal government as a partner in this; we can't do the hard work of making government more observable without its involvement. However, it is a much more hostile and unfriendly environment right now, making it even more urgent that we adopt existing data sets, and give them new life in a safer place until the right partners in the public and private sector can be found.

It is easy to get overwhelmed by this work--I do often. I'm going to start by identifying two data sets, downloading the Adopta.Agency blueprint, and getting to work liberating the data and publishing it to Github. I find the process therapeutic, and it helps me process what is going on right now--I hope you will join in. I look forward to hearing your story.

The Random Calls Home That An Application Makes From My Home

I have been running Charles Proxy locally for quite some time now. I began using it to reverse engineer the APIs behind some mobile applications and continued to use it to map out the APIs I'm depending on each day. I regularly turn on Charles Proxy and export the listing of any HTTP calls made while I'm working, every five minutes. These files get moved up into the cloud using Dropbox, where I have a regular CRON job processing each call made--profiling the domain, and details of the request and response for later review.
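The profiling step of that CRON job can be as simple as tallying which domains show up in each five-minute window. Here is a rough sketch of the idea, assuming the session export has already been reduced to a list of request URLs--the sample traffic below is made up, and a real Charles Proxy export carries far more detail (headers, timing, response codes) than this shows.

```python
from collections import Counter
from urllib.parse import urlparse

def profile_calls(urls):
    """Tally which domains an application is phoning home to.

    urls: request URLs pulled from a proxy session export.
    Returns a Counter mapping hostname -> number of calls.
    """
    return Counter(urlparse(u).hostname for u in urls)

# Stand-in for five minutes of captured traffic.
captured = [
    "https://api.dropbox.com/2/files/list_folder",
    "https://api.dropbox.com/2/files/upload",
    "https://updates.example-app.com/check",
]

# Print domains in descending order of call volume.
for domain, count in profile_calls(captured).most_common():
    print(domain, count)
```

Running a tally like this over every exported window is what surfaces the patterns described below--applications quietly pinging home even when they appear idle.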

This process has shed some light on the application architecture of many of the tools and services I depend on. It's fascinating to see the number of pings home the average application will make when on, or running in the background. In addition to running Charles Proxy and understanding how these applications are communicating with their mothership, from within my home, I downloaded Little Flocker--providing me a peek at another layer of application architecture, and how they interact with my laptop.

Little Flocker tells me each time an application is writing or accessing a file, or turning on my audio, video, and other items. After a day of running it, I have been given another glimpse at the architecture of the apps I'm depending on. One example of suspicious application architecture comes from Citrix. I haven't been on a call using the app in at least 4 days, and usually I just uninstall the app after use, but it was interesting to see it trying to write files on a regular basis, even though I don't have the application open. Why do they need to do this? It looks like it is checking for updates, but I'm not sure why it needs to when I'm not running the application.

I wish applications would provide a list of the remote calls they make to home base. I've talked with several platform providers about how they view this layer of their apps, and their thoughts about pulling back the curtain and being more transparent about the APIs behind their apps--they usually aren't very interested in having these conversations with end-users, and often see this activity as their proprietary secret sauce. The part that interests me is the fact that these client interactions, API calls, and data transmissions are happening here in my home, on my laptop. I know that tech companies see this as us users operating on their platforms, but in reality, they are entering our homes and making calls home to the platform using our Internet.

Sure, we all agree to terms of service that make all of this legally irrelevant--they have their asses covered. It still doesn't change the fact that many desktop, web, and mobile application developers are exploiting the access they have to our lives. With the bad behavior we've seen from technology companies, government entities, and hackers in recent years, I feel like this level of access isn't sustainable or healthy--especially when apps are poorly architected, or architected with a lack of respect for the end-user environment. This is my laptop, in my home, engaging in a personal or business relationship with your company; please be respectful of me, my space, and my privacy.

My Style Of Writing Gives Me A Single URL For My Thoughts

I was just having a conversation with a friend on a social network about test-driven development (TDD) and behavior-driven development (BDD). As we progressed through the conversation I used both of my blogs as references for thoughts I've had on the subject, providing them with a single URL to get more information from me.

I shared URLs for 3-5 ideas/thoughts I've had on the subject, giving me a much better way to recall what I know and thoughts I've had--from within my own domains. I have 10 years of posts on one blog, and am approaching 7 years on the other. This gives me a rich and efficient way to recall thoughts I've had, build on them, and quickly share and convey these thoughts to others using URLs--this is why I tell stories.

This process also drives people to my websites, hopefully building trust with them when it comes to my domains. When you want information on APIs, you go to one. When you want slightly deranged rants about technology that may or may not make sense, you go to the other. I'm working on improved tagging across my content, something that ultimately is a manual thing, as nobody knows the content of my work like I do--not even the AIs and machine learningz.

I do not obsess over SEO for my websites. The natural progression of my research, and my focus on helping people understand the world of APIs, lends itself nicely to having a wealth of links and interconnected stories about a wide range of topics I am passionate about--which translates to some healthy, organically generated SEO. Talking through this stuff helps me execute on all of this in a more consistent way--the more I write, the more ideas I have, and the more URLs I have to share. Which is what makes all of this go round for me, and hopefully for you along the way.

My House Is Infested With IoTs

We were just having a conversation about the information our Sonos is sending back and forth. It is one of a handful of devices we've willfully purchased and plugged into our home network. In today's environment, we are becoming hyper-aware of what our applications and devices know about us and what they are communicating outside of our network and local storage.

With two people in a small home/office environment, we have 4 iPhones, 2 iPads, 3 laptops, 1 desktop, 1 printer, 2 Sonos speakers, 1 Time Capsule, and 1 smart TV connected all the time. We also have 3 video cameras and 3 drones that can connect to the network and/or broadcast their own network, but aren't necessarily always on. We aren't huge home IoT people, but that seems like a significant number of devices for a single network, and quite a lot to think about when it comes to managing our digital bits.

Our house is infested with IoTs. Ok, it's mostly because of my drone and camera obsession, but the printer, Sonos, and other devices are definitely a little more on the normal side of things. When you stop to think about all this IoT stuff for a bit, it's pretty crazy what we have let into our world. These little devices run on our home network, do things for us, and regularly talk back to their masters in the cloud. What do they say about us? What information do they keep track of?

I fully understand that my obsession with our data at this level is considerably greater than the average person's, but I am astounded at people's inability to stop corporations (and government) from infiltrating our homes in this way. I'm not immune. I have the usual suspects when it comes to home devices, as well as some more specialized IoT devices on my network. I am tuning into which devices I have, and what data they are sending to the cloud, partly because I make a living capturing the data exhaust from my world, but secondarily because I am increasingly concerned about privacy, security, and other more worrying activity from these devices I've invited into my home, and the companies who operate them.

My smart TV tracks my viewing habits, my Sonos tracks my listening habits, and my laptop, tablet, and mobile device track the rest. Some of these devices are fixed in my home, while other more portable devices travel with me, and then come back home to get plugged in, recharged, and synced with the cloud. I'm using my drones and video cameras to gather data, images, and audio from the world around me, and bringing them back to my home for filtering and organization locally and in the cloud. My house isn't just infested with IoT devices, it's infested with the data and other bits generated by these IoT devices. These are valuable little bits, and they are something companies are scrambling to get their hands on.

I'm on a quest to make sure I get a piece of the action when it comes to selling my bits--the bigger piece of the pie, the better. I'm also looking to help drive the conversation around what the technology companies are doing with our bits. I do not expect to win this war, I'm just looking to push back wherever and whenever I can, and establish a greater understanding around what data is being generated and tracked, both inside and outside of my home. The more I'm in tune with this activity, the more I can develop and evolve the tactics I will need to keep resisting and stay ahead of the curve.

Can You See The Algorithm?

Can you see an algorithm? Algorithms are behind many common analog and digital actions we execute daily. Can you see what is going on behind each task? Can you observe what is happening? To use an antiquated analogy, can you take the back off your watch? An example of this in our world right now would be the #immigration debate--whether you are viewing it on Twitter, Facebook, or any other source of news and discussion around immigration. Can you see the algorithm that powers Twitter's or Facebook's #immigration feed?

Algorithms that drive the web are often purposefully opaque and unobservable, yet they are still right behind the curtain of your browser, UI, and social media content card. They are supposed to be magic. You aren't supposed to be able to see the magic behind them. The closest we can get to seeing an algorithm is via its APIs, which (might) give us access to an algorithm's inputs and outputs, hopefully making it more observable. APIs do not guarantee that you can fully understand what an API, or the algorithm behind it, does, but they do give us an awareness and working examples of the inputs and outputs--falling just short of being able to actually see anything.
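Even when the algorithm itself stays a black box, an API boundary gives you somewhere to stand: you can wrap it and record every input and output that crosses it. A toy sketch in Python, where the hypothetical `rank_feed` stands in for whatever a platform actually runs behind its feed:

```python
import functools

def observable(log):
    """Record every input and output crossing a function's boundary."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            log.append({"in": (args, kwargs), "out": result})
            return result
        return inner
    return wrap

calls = []

@observable(calls)
def rank_feed(posts):
    # Stand-in for an opaque platform algorithm: from the outside we
    # only see what goes in and what comes out, never why.
    return sorted(posts, key=len, reverse=True)

rank_feed(["#immigration is broken", "welcome", "build bridges not walls"])
print(len(calls))  # → 1
```

The log never explains the ranking; it just makes the inputs and outputs observable, which is exactly the partial view an API affords.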

You can develop visualizations, workflow diagrams, images, and other visuals to help us see reflections of what an algorithm does using its API (if one is available), but if we don't have a complete picture of the surface area of an algorithm, or of all its parameters and other inputs, we will only paint a partial picture of it. I'm not just fascinated with finding different ways of seeing an algorithm; I also want some dead simple ways to offer up a shared meaning of what your eyes are seeing, and make an immediate impact.

How do I distill the algorithm behind the #immigration debate hashtag on Twitter and Facebook down into a single image? I don't think you can. There are many different ways to interpret the meaning of the data I can pull from the Twitter and Facebook APIs. Which users are part of the conversation? Which users are bots? What is being said, and what is the sentiment? There are many different ways I can extract meaning from this data, but ultimately it is still up to me, the human, to process it and distill it down into a single meaningful image that will speak to other humans. Even though the image could be worth 1000 words, which thousand words would those be?
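To make the interpretation problem concrete, here is a rough sketch of the kind of aggregation I might run over records pulled from a hashtag search. The record shape, the bot flag, and the tiny keyword sentiment lists are all my own naive assumptions, which is exactly the point--every one of them is a human judgment call:

```python
from collections import Counter

# Naive keyword lists standing in for a real sentiment model.
POSITIVE = {"welcome", "hope", "together"}
NEGATIVE = {"ban", "wall", "illegal"}

def summarize(posts):
    """Distill hashtag records into user, bot, and tone counts."""
    users = {p["user"] for p in posts}
    bots = {p["user"] for p in posts if p.get("bot")}
    tone = Counter()
    for p in posts:
        words = set(p["text"].lower().split())
        if words & POSITIVE:
            tone["positive"] += 1
        if words & NEGATIVE:
            tone["negative"] += 1
    return {"users": len(users), "bots": len(bots), "tone": dict(tone)}

# Hand-made sample records shaped loosely like a hashtag search result.
posts = [
    {"user": "a", "text": "We welcome everyone", "bot": False},
    {"user": "b", "text": "Build the wall", "bot": True},
    {"user": "b", "text": "Ban them all", "bot": True},
]
print(summarize(posts))
# → {'users': 2, 'bots': 1, 'tone': {'positive': 1, 'negative': 2}}
```

Swap the keyword lists, change how a bot is flagged, and the "single meaningful image" you would paint from the same data changes with it.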

I blog as the API Evangelist to polish my API stories. I write code to polish how I can use APIs to tell better stories. I take photos in the real world so that I can tell better stories online and in print. I'm trying to leverage all of this to help me better tell stories about how algorithms are pulling the strings in our world, and help everyone see algorithms. Sadly, I do not think we will ever precisely see an algorithm, but we can develop ways of refracting light through them, helping us see the moving parts, or sometimes, more importantly, see what parts are missing.

One of the things I'm working on with my algorithmic storytelling is developing machine learning filters that help me shine a light on the different layers and gears of an algorithm. I do not think we can use the master's tools to dismantle the house, but I don't want to dismantle the house; I just want to install a gorgeous floor-to-ceiling window that spans one side of the house, and maybe a couple of extra windows. I want reliable and complete access to the inputs and outputs of an algorithm so that I can experiment with a variety of ways to see what is going on, painting a picture that might help us have a conversation about what an algorithm does, or does not do.

I recently took a World War II Nazi propaganda poster and trained a machine learning model using it, and then applied the resulting filter to a picture of the waiting room at Ellis Island. When looking at the picture you are seeing the waiting room where millions of immigrants have waited for access to the United States, but the textures and colors you are seeing are filtered through a machine learning interpretation of the World War II Nazi poster. When you look at the image you may never know the filter is being applied--it is just the immigration debate. However, what you are being fed algorithmically is being painted with a color and texture palette fueled by very loud, bot-driven, hateful, and false content.
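Mechanically, applying one of these trained filters is just an API call to the platform I use for style transfer (Algorithmia). A rough sketch of what that call looks like from my notes; the algorithm path, payload field names, data paths, and filter name here are my assumptions, not a documented contract:

```python
import os

def build_filter_job(image_url, save_path, filter_name):
    """Assemble an assumed DeepFilter-style transfer payload."""
    return {
        "images": [image_url],
        "savePaths": [save_path],
        "filterName": filter_name,
    }

job = build_filter_job(
    "data://.my/ellis-island-waiting-room.jpg",   # hypothetical source path
    "data://.my/ellis-island-filtered.jpg",       # hypothetical output path
    "ww2_propaganda_poster",  # hypothetical name for my trained model
)

# Only attempt the remote call when credentials are actually configured.
if os.environ.get("ALGORITHMIA_API_KEY"):
    import Algorithmia  # third-party client: pip install algorithmia
    client = Algorithmia.client(os.environ["ALGORITHMIA_API_KEY"])
    print(client.algo("deeplearning/DeepFilter").pipe(job).result)
```

The $1-an-hour AWS instance does the training; a call like this is all it takes to apply the trained essence to any new photo.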

Granted, I chose the subject matter that went into the machine learning algorithm, but this was intentional. Much like the handful of techies who developed and operate bots, memes, and alternative news and fact engines, I was biased in how I influenced the algorithm being applied. However, if you don't know the story behind it, and don't understand the inputs and outputs of what is happening, you think you are looking at just a photo of Ellis Island. By giving you awareness and more of an understanding of the inputs--a regular photo of Ellis Island, a filter trained using a World War II Nazi poster, plus some machine learning voodoo and wizardry--poof, we have helped shine a light on one layer of the algorithm, exposing just a handful of the potentially thousands or millions of gears driving the algorithms coloring the immigration debate.

I am sure folks will point out what they see as the negativity in this story. If you are denying the influence of white nationalists on this election, and specifically how this was done algorithmically--you are experiencing a dangerous form of denial. This story is not meant to paint a complete picture, but to shine a light on a single layer in a way that helps people understand a little more about what is happening behind the immigration debate online. It's just one of many exercises I'm conducting to assist me in telling more stories and creating compelling images that help folks better understand and see algorithms.

A Little Bit of Optimism About Technology In Federal Government

Last week I met with a handful of friends who are (still) working in the federal government to hear how they are doing. The tone within the federal government seems to be on par with what I'm hearing from the outside--people are seriously concerned about a wide range of things, and fearful of what this administration might do. I only talked to a few friends, but they are in positions that expose them to a wide variety of agencies and projects, and the concern is pretty widely shared across agencies.

A number of folks have already left, and the rest seem to be taking things on a day by day, week by week, and project by project basis--before they decide to speak up or possibly leave their jobs altogether. From what I've gathered people are being very vocal, and the incoming administration is very aware of the concerns from technical teams. This makes me optimistic, hearing that some of the folks I know are staying, as they are some of the smartest and most opinionated individuals I know. They aren't about to take any shit and are ready for a fight when it comes to what is truly important.

What also gives me hope is that from what I'm hearing, the current White House administration is not very tech savvy, and they know it. Sure, they had (have?) the assistance of companies like Cambridge Analytica and advisors like Peter Thiel, but the core administration isn't that savvy. They also seem to be somewhat aware of the scope of the federal government, and its technical complexity. Sure, there are many ways that government bureaucracy can be made more efficient, but in the end, people need to get their checks and benefits, and many US companies depend on these gears turning. They need some of the smart folks high up at these agencies to help make sure things don't just fall apart--you need the gears turning until you identify an alternate solution (hopefully).

The 26% of Americans who voted for Donald Trump, and continue to support his approach, do not seem to truly understand the scope and scale of the federal government--it is just that big. Similar to the confusion between the Affordable Care Act (ACA) and Obamacare, there are many other programs and services that people depend on, and if these stop operating, congresspeople and senators are going to start hearing from their constituents, who are going to hold them accountable in the next election (education, cough cough). I'm optimistic that within the cracks of the technical incompetence of the current administration, and the existing bureaucracy that is the federal government, good things might occur.

Many of the people I know at the White House, GSA, and other agencies are there to serve the American people. They are there to push back on the inefficiencies that exist within government. The people I know are some very smart folks who don't see the federal government as just a Republican or Democrat thing; they see it as a system that serves the people--one that needs constant refinement. Yes, it is a very complex and sometimes inefficient system, but they are ALREADY doing the hard work it takes to understand this complexity, and working to be honest and realistic about how to streamline these services so that they better serve everyone.

It really hit me hard that some of my friends and family were chanting "drain the swamp" during and after the election. While this phrase has other historical baggage (you should understand it), Donald Trump and Newt Gingrich explained that this was about targeting the liberals in government. They were talking about many of the smart people I know in the federal government--making this a very personal attack for me. Those are my friends, people I know are decent folks trying to serve the entire country. It gives me some renewed strength to know that it will be these same people, who are still in government, actively resisting, and working hard to identify the positive things they can accomplish in the cracks.

I am thankful that these folks have the fortitude to do what I can't. I am thankful that they are continuing to do what matters. I want all of them to know that I am here to support them however I can--just let me know. And please...make sure to take care of yourself, and be safe.

Fearing Technology Over Those Who Control It

Stories of whether or not we should fear losing our jobs to robots have been going on for a long time. Another layer of this I'm seeing expand lately is whether we should be fearing machine learning (ML) and artificial intelligence (AI). I am always fascinated to hear the reasons people fear technology as well as the reasons people give regarding why you should not fear technology.

One aspect of this back and forth that fascinates me is why nobody talks about fearing the people behind the technology, while continuing to worship the ground that people like Bill Gates, Mark Zuckerberg, and Elon Musk walk on. There is no reason to fear technology itself. There are many, many reasons for fearing the people who develop and control technology. It all just seems like such a con game being played by those in power--kind of like the stock market, where you can play both the ups and the downs.

Robots are scary, and they are coming for your jobs--do you mean corporate leaders are greedy and want to pay you as little as possible, or just replace you if they can? Artificial intelligence can do evil things and will become smarter than you--do you mean the white engineers behind these things either straight up do evil, or are greedy fuckers who will allow really evil things to be done as long as they keep getting rich? When I see each story come across my monitoring dashboard I can't help but think about the reasons behind it--for and against.

Technology isn't good, nor bad, nor neutral--it is just a tool. The people who have control of the technology are good, bad, or neutral. Neutral oftentimes winds up being pretty fucking bad when you are a privileged white dude with a lot of money, unwilling to understand the shady shit people are doing with your genius invention. These stories about fearing or not fearing technology seem like some shady Trump administration tactic to distract us while they pull all the real strings that end up screwing all of us over, and making those in power richer.

Algorithmic Reflections On The Immigration Debate

We are increasingly looking through an algorithmic lens when it comes to politics in our everyday lives. I spend a significant portion of my days trying to understand how algorithms are being used to shift how we view and discuss politics. One of the ongoing themes in my research is machine learning, an aspect of technology currently being applied to everything from news curation and identifying fake news, all the way to how we monitor and see the world online with images and video.

Algorithms are painting a real-time picture that colors how we see the physical world around us--something that increasingly occurs online for many of us. Because many of the creators of algorithms are white men, they are often blind, and even willfully ignorant, of how their algorithms and technological tools are used for evil purposes. With a focus on revenue and the interests of their investors, Twitter, Facebook, Reddit, and other platforms often do not see (or are willing to turn a blind eye to) how hateful groups are using their platforms to spread misinformation and hate. When you combine this with a lack of awareness when it comes to history, we end up in the current situation we find ourselves in with the Trump administration.

As part of my work to understand how algorithms are shaping our world views I am playing with different ways of applying machine learning to my images and videos for use across my storytelling--something I am calling @algorotoscope. It's helping me understand how machine learning works (or doesn't), while also giving me an artistic distraction from the increasingly inhuman world of technology. Taking photos and videos, as well as the process of training and applying the filters, gives me relief, allowing me to find some balance in the very toxic digital environment I find myself in today.

I feel that we are allowing algorithms to amplify some very hateful views of the world right now, something that is being leveraged to produce some very damaging outcomes in the immigration debate. To help paint a picture of what I'm seeing from my vantage point, I took an old World War II Nazi propaganda poster and used it to train a machine learning model, which I could then apply to any image or video using a platform called Algorithmia. Here is the resulting image....

The image is a photo I took of the waiting area at Ellis Island, with sunlight reflecting through the windows, lighting up the tiles in the room where millions of immigrants waited to be admitted into this country. I feel like we are allowing our willful ignorance of history as Americans to paint the immigration debate today, something that is being accelerated and fueled by a small, hateful portion of our society, with the assistance of algorithms. Facebook, Twitter, Reddit, and other platforms are allowing their algorithms to be gamed by this very vocal minority in a way that is shaping the views of the larger population--making for a very destructive and divisive debate about something very core to our country's origin--immigration.

If we are going to get to the bottom of this recent shift in how we operate as a society, we are going to have to work to shine a light on how these algorithms operate, and how advertising incentivizes platforms to be blind to their damaging effects. We are allowing algorithms and digital technology to reflect and amplify the worst within us, pushing us to be more polarized. Through my storytelling, programming, and imagery, I'm hoping to continue stimulating a more constructive conversation about how technology is being deployed--one that is NOT fueled by greed or hate.

The Digital Things That Happen In The Privacy Of Our Homes

I'm always trying to unpack and understand the digital world unfolding around us. Understanding where the physical world is colliding with the digital world is increasingly difficult, let alone articulating it to average people (this is why I tell stories, so I get better at it). One aspect of this collision that regularly leaves me baffled is the view on security and privacy when it comes to things that happen "online": in the physical world, these "digital" things are actually happening in our private spaces.

Where is the line between "the cloud" and "my living room"? Startups would argue that what I do on their "cloud platform" exists because of them, and that per their terms of service we are giving them an unlimited license to everything we do in "their domain". They aren't even willing to consider the fact that this is physically happening in my living room, regardless of where the data centers and servers supporting an application are located. Where is the line between our physical and digital worlds? When I close my eyes and try to visualize this, the "cloud" seems pretty fucking invasive, existing in my living room and the bedrooms of my house--with very little respect from the startups and government we've allowed into our homes.

What really kills me is that the average citizen is completely unaware that they have invited companies, government agencies, hackers, and many other rando developers into their homes. Not just through the websites and apps we use on our laptops and mobile devices--we are now doing this with our appliances, thermostats, security cameras, automobiles, etc. Each device we connect to our home wifi opens up a doorway for companies, governments, and individuals to come into our home, gather information, and build awareness of our everyday life.

Few people seem overly concerned with this evolution, and startups are happy to keep them in the dark, allowing them to vacuum up data and sell it to brokers and on Wall Street. I just cannot reconcile the appetite for access to our bits, as they exist in our homes, with the lack of awareness amongst the average consumer. I remember when the Internet first started taking hold in the world, and how concerned folks were with their address being online, and with cookies--now they seem perfectly happy to give up their longitude and latitude every second of the day, as well as share their most intimate thoughts and activities, in exchange for a little bit of convenience and "artificial intelligence".

The attitude of startups about where the line between our digital and physical worlds sits disturbs me, but what really worries me is the amount of work we have ahead of us when it comes to the ownership of these private bits. This will be one of our biggest challenges in the coming years.

Startups New Revenue Stream Selling Your Data To Hedge Funds

I am always fascinated by how people see data, or don't see it. Startups are definitely seeing it right now, but the average citizen seems unable to see it or care about it, let alone understand that it is actually their private data. I was reading this post about startups selling their data to hedge funds, and was once again left amazed at how startups just see end users as livestock--a commodity for buying and selling.

I'm stunned that at the same time we are having conversations about a segment of our population being "left behind", we are willfully blind to startups buying and selling our own private details like we are cattle. This is one of the reasons the tech community is so willing to ignore the bad behavior going on within what has come to be known as the surveillance economy: there is so much money to be made by a few, harvesting and selling the data of the rest of us.

The actual number of people looking to do harm through the use of technology is fairly small, but the number of people willing to look the other way, and be complicit in the surveillance economy, is actually pretty large. If there is a buck to be made surveilling people, gathering every bit of data about them, there are endless entrepreneurs willing to line up and do the work. These entrepreneurs rarely question the motives of what their buyers will do with the data, or are unwilling to acknowledge how their technology can be used for evil, if there is money to be made at any point in the game.

The dehumanizing effects of technology, combined with the greed and blindness of capitalism, leave me worried about the future. The social bubbles we experienced in the election are just the tip of the iceberg. We will see more of these bubbles emerge, with people happy to participate, as well as companies who are willing to exploit and manipulate as long as they get a piece of the action when it comes to selling the data. All of this makes me sad when you consider what is contained within this data--our most intimate thoughts, locations, images, videos, and other personal and private items that are just being sold to the highest bidder on the open market.

Why I Am Not Good At Business And Politics

I have had enough businesses, and business dealings to understand the realities of the game, and with almost 30 years of experience under my belt, I have come to realize I am not good at business or politics. This is why I run a single person business, under an LLC umbrella with my partner in crime Audrey Watters (@audreywatters). She does Hack Education, and I do API Evangelist, and the overlap of the two is Tech Gypsies--no investment, or other partners necessary. We just do what we do best, and nothing more--no scaling necessary.

What It Takes To Be Successful At Business
Business people love to shine a light on the classic American dream version of starting a business. You work hard, build a good product, offer a good service, and you can be successful. What they neglect to tell you is how cutthroat you have to be, how many lawyers you will need, and how willing you must be to screw over your partners, investors, customers, and anyone who gets in your path along the way. Now I am not saying all successful businesses are like this, but I am saying that increasingly the really successful ones have to operate in this manner because, "if you don't, someone else will" (or so they tell me). I just do not have this in my blood, and I would rather have a small business that never scales, pays my bills, and keeps my soul intact.

What It Takes To Be Successful in Politics
Similar to business, politics is a cutthroat and shady world. It is something I think some Democrats can do well, but this election has shown how Republicans have a much larger appetite for the shady shit, and are willing to obtain power at all costs. They are willing to gerrymander, ally with the enemy, and screw over the average citizen, and even people with disabilities--whatever it takes to grab the reins of power. Most of this I see on TV and in the papers, but having had the chance to see it up close a couple of times working in DC, I know I just do not have the stomach for it. It's not that I'm not tough, or can't handle a challenge; it is that I actually have an ethical core, and like feeling good about myself when I go to sleep at night.

Stick With Just Being a Monkey Wrench
This is why I will just stick with what I do best, being a monkey wrench in the business and political goings-on around the country. Many of my friends have decided to take a different path. They want to make money and use it to make the change, or maybe join government or work within companies, and make the change that way--that is fine. This is my path. I will continue to be a Tech Gypsy, live in the cracks, and throw myself against the machine, being a monkey wrench in the operations of the businesses and government agencies who are working against the American people. I am not cut out for business and politics--I wish to keep my soul intact.

The Archiving Of Social Media For The Obama Administration

I'm thinking a lot about my bits lately, and the legacy of my work in a digital environment. As I was working and writing on this topic, an email came through my inbox from the White House on the archive work they've done with the President's social media. I thought their approach was worth sharing, as what I'd consider an archival and reclaim lesson when it comes to our digital bits, and a positive approach to preserving the legacy of our digital work.
The Obama White House website--which includes press articles, blog posts, videos, and photos--will be available at a site maintained by the National Archives and Records Administration (NARA), beginning on January 20, 2017. If you are looking for a post or page on the Obama administration's website from 2009 through 2017, you can find it by changing the URL to the archived domain. For example, after the transition, this blog post will be available at the corresponding archived URL.

Archived content posted to the personal social media accounts of President Obama, Vice President Biden, First Lady Michelle Obama, and Dr. Biden during the Obama administration will be maintained by NARA under new archival handles, as will archived content posted to the institutional White House social media accounts.

This is a static archive index of our 44th President, and because each of the channels also has an API (except Medium), this index can act as an engine for research and storytelling on the 44th presidency, and possibly a backdrop for current and future presidencies. Using these platform APIs you can easily pull photos, quotes, video, and other valuable snippets from this period in time. This approach to archiving will play a significant role in how the history books are written (or rewritten).

I'm considering how I can create a new type of APIs.json index that can be used in this approach, providing a machine-readable index of not just the Twitter, Facebook, Instagram, Flickr, and YouTube APIs, but also a reference to all of the accounts present in an archive. I'm looking to further quantify the dimensions of this approach to archiving by having a machine-readable definition of the APIs, the accounts, as well as the data and content contained within each archive. I want to be able to feed a single APIs.json file into a tool, and have it spit out a complete Github archive of everything represented by an archival index.
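A rough sketch of what building such an index might look like. The top-level field names follow my reading of the APIs.json format, the `x-accounts` property is my own proposed extension (not part of the specification), and the handles and URL are illustrative examples:

```python
import json

def build_archive_index(name, accounts, apis):
    """Sketch an APIs.json-style index for a social media archive.

    accounts: list of archived account handles (proposed x-accounts extension)
    apis: mapping of platform name -> human-readable API docs URL
    """
    return {
        "name": name,
        "description": "Machine readable index of an archived presidency",
        "specificationVersion": "0.14",  # assumed current APIs.json version
        "apis": [
            {"name": platform, "humanURL": url}
            for platform, url in apis.items()
        ],
        "x-accounts": accounts,  # proposed extension, not in the spec
    }

index = build_archive_index(
    "44th President Social Media Archive",
    accounts=["POTUS44", "ObamaWhiteHouse"],
    apis={"Twitter": ""},
)
print(json.dumps(index, indent=2))
```

A tool consuming a file like this could walk `apis` to pull content and `x-accounts` to know which handles belong in the archive, then write everything out to a Github repository.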

It Is Always Sad When I Click On Your Dormant Or Dead Domain

I spend a lot of time looking through the tweets I've favorited, curated, and retweeted, digging deeper into the digital presence of the people and companies behind them. This is how I expand my LinkedIn, Github, Twitter, and other networks, as well as establish a better understanding of the people who are moving things forward in the API space.

When I click on the profile of someone doing interesting things with APIs, and I see their personal domain as part of their profile, I always click on it. Nothing makes me sadder about the future of tech than when these domains are gone, dormant, or horribly out of date. It means this person is doing interesting things for a company but is not capturing any of the exhaust from this work for themselves--not even the occasional story on their blog.

Nothing sucks up your time and value like having a job. The emptiest portions of my CV are the periods when I've had jobs. I'm not bashing having a job, I'm just bashing where the bar currently is for employers when it comes to stimulating the creativity, and ownership of ideas, among their workers. Even a small restriction on blogging and tweeting, or a fear of saying something you are not supposed to, restricts and limits someone's creativity, and will always come back to hurt an employer in the long run.

I'm sure there are many reasons why someone stops blogging and creating within their own domain while working for a company, organization, institution, or government agency, but the majority of those reasons, I am sure, would leave me pretty sad at the state of creativity in the tech space. I really enjoy the thoughts from folks in the space that flow around their own ideas, and their views on technology, without the influence of who they work for and the products and services they build daily. These are the ideas I think enrich the world of APIs more than any single corporate blog or PR channel--I wish they were encouraged more, and not killed off.

Domain Literacy: When A Trustworthy Domain Goes Bad

I have a side project going on right now where I'm working to define what I'm calling Domain Literacy. I am looking to take my knowledge of the web and APIs and help folks better understand some of the digital currents they are swept up in online each day. Whether it's Silicon Valley hype, fake news, politics, or cybersecurity, APIs are being used to track, analyze, and influence people around the world.

My objective is to just help folks understand a little bit more about the dangers of consuming information online, that there are many different domains you can operate within, and that you can own your own domain. I want folks to understand the motivations behind some popular domains like, and I also want to help them discover which domains they can go to and find domain experts when it comes to security, privacy, and other areas of technology where they might need some assistance and education.

I want to help people understand that the web is always shifting and evolving, and sometimes this happens very quickly within startup culture, but it isn't limited to just the business arena. There are many factors that contribute to a domain being trustworthy or not, something that can change quickly. A recent example of this is, which has removed reliable information on climate change, LGBTQ rights, Spanish-language resources, information for people with disabilities, and other important areas.

I am not saying everything published to will be inaccurate from here forward, or that everything was removed maliciously. I am just saying their track record on telling the truth isn't very good so far, and when using we should be skeptical about the content and data provided. We should be employing a healthy amount of skepticism anywhere on the web; knowing which domain you are operating within, and having some awareness of the trustworthiness and safety of that domain, is important. You should know where you are entering your social security and credit card numbers, but you should also be aware of the quality of information you are consuming online, and who is pulling the strings.

I do not expect you to understand the technical underpinnings of the web, just to possess a little awareness of the domain you are operating in within your browser, and of the apps you install on your mobile phone. A little domain awareness, and an understanding of who is behind a domain, can go a long way towards helping improve our privacy, online security, and the quality of news and information we digest on a daily basis--something that can really impact how we see the world around us.

Honesty In API Startup Land

I take flack from folks when I write posts like I did last week about Oracle acquiring Apiary. I can't help but be blunt about these fabricated realities that many folks claim to be "inevitable". Why would I not congratulate Apiary? I explained--I'm not dealing in Silicon Valley currency, I'm just a vocal spectator, and acquisitions don't make me happy. People also like to tell me that not all startups are bad, not all big companies are bad, and not all founders are greedy. True, but do you ever stop and ask yourself why you feel compelled to speak out when these things are true about a significant portion of the space?

Sure, it is the natural course of everything, right? Startups get created, then they get acquired--it's just business. Ok. So if these things are inevitable, and just the way things are done, why aren't they included in the marketing and the origin story you tell your customers from day one? You know: "Hey, we have this great new service that you should use, but we want you to know that at some point we are selling this thing and making a bunch of money (or not), and your whole world will be disrupted when our service goes away (or not)." It is because we are not being honest with people, forcing me to be the asshole who talks about it after the fact.

So what is actually inevitable? That all startups will eventually go away, and companies will be bought and sold? Or is it that business people are inevitably dishonest? I'm not asking for much. I just want us all to make sure there are good APIs, with a robust set of data portability and integration tooling, so that small business owners like me can reliably depend on services without their world being disrupted every time one of you hits your big payday. Also, maybe we could have just a little more honesty and less hype along the way. I just don't understand why I'm the delusional person who is living in another world when y'all are the ones playing these games.

All I'm asking for is: 1) data portability, 2) a complete API stack, and 3) integration, syncing, and migration tooling, plus a little more honesty about change and what the future holds--then you can do your startups, sell them, and play this game in a way that won't create fatigue across the sector. I think entrepreneurs underestimate the damage this will do to the average business consumer's appetite for adopting new services--something that will hurt everyone.

Oh, while I'm ranting, you should consider being more honest about change in your API operations with hypermedia. ;-)
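The hypermedia aside deserves a concrete illustration. A client that looks up links at runtime, instead of hardcoding URLs, can absorb change when the provider moves things around. This is a minimal sketch with a made-up HAL-style payload; the resource, link relations, and URLs are all hypothetical:

```python
import json

# A hypothetical HAL-style API response; the resource and URLs are invented.
response_body = json.loads("""
{
  "id": "order-123",
  "status": "processing",
  "_links": {
    "self":   {"href": "/orders/123"},
    "cancel": {"href": "/orders/123/cancel"},
    "next":   {"href": "/orders/123/status"}
  }
}
""")

def follow(resource, rel):
    """Look up a link relation at runtime instead of hardcoding the URL.
    If the provider moves an endpoint, the client keeps working as long
    as the relation name survives--which is one way an API can be honest
    about change."""
    link = resource.get("_links", {}).get(rel)
    return link["href"] if link else None

print(follow(response_body, "cancel"))  # /orders/123/cancel
```

The design point is that the contract becomes the relation names ("cancel", "next") rather than the URL paths, which are free to change underneath the client.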

What Tracks Do You Leave When It Comes To Virtualization In The Cloud?

I spend a lot of time thinking about how technology can be used for good, and for bad. I feel pretty strongly that many technologists do not think deeply about the alternative ways in which their technology can be used, for both good and evil. This is one of the big challenges in the world of APIs: how do you encourage companies to open up their resources, knowing they may not fully understand what they are doing? It may be something that stimulates innovation, but it may also be something that gets abused.

One of the ways I push my understanding of technology is through my process of design fiction, where I write stories that push realities on my alternate Kin Lane blog, and similar stories from the world of APIs on the alternate API Evangelist. A topic I'm pushing forward in this area centers around how law enforcement, or the "good guys," conduct forensic analysis on mobile devices, as well as on laptops, desktops, and servers they obtain custody of--inversely, I'm trying to understand how hackers, or the "bad guys," cover their tracks. I prefer talking about this stuff out in the open so that others can learn from it, whether it is for good or for bad--I believe the good from being transparent outweighs the bad in many scenarios (not all).

When it comes to recovering data from laptop, desktop, and server hard drives, practices for recovering data, as well as covering your tracks, are pretty proven. When it comes to doing this on mobile phones, there is still much being figured out about reliably getting into mobile devices, as well as a whole lot of discussion around what is currently possible and being used by law enforcement, banks, and other corporate and government entities. We are seeing regular trickles of information emerging about what technology and services are available out there to help get at people's information stored on mobile phones--a discussion that needs to be further brought out into the open.

In my style, I am thinking about the future of how information is protected, and how the surveillance machine is getting at information. I'm focusing on virtualization and, like the unallocated space on a hard drive, how long information sticks around in virtualized environments. Increasingly we are storing information on virtualized storage instances, and running applications and desktops in virtualized environments. What does data storage and recovery look like in these environments? I regularly fire up 20-30 virtual servers, along with virtual storage drives, to process jobs, harvest and crunch data, then delete them when I am done. What happens to all this data? What is retrievable? I am not just talking about my own recovery needs, I'm talking about law enforcement scenarios.
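The hard-drive-era countermeasure here is overwriting a file's bytes before deleting it, and the sketch below (the function name and passes count are my own invention, not a vetted forensic tool) also illustrates why the question above is open: on spinning disks the overwrite lands on the same sectors, but on SSDs and virtualized block storage, wear leveling, snapshots, and the hypervisor's copy-on-write layers can keep older versions of the blocks around regardless.

```python
import os

def best_effort_shred(path, passes=2):
    """Overwrite a file's bytes in place before deleting it.

    On a traditional spinning disk this reduces what is recoverable from
    unallocated space. On SSDs and virtualized storage it is only a best
    effort: the underlying layers may retain older copies of the blocks,
    which is exactly the data remanence question raised above.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())       # push the overwrite to the device
    os.remove(path)

# The file is gone from the filesystem namespace, but whether the bytes
# are gone from the (possibly virtual) media underneath is the question.
with open("scratch.bin", "wb") as f:
    f.write(b"sensitive harvest job output")
best_effort_shred("scratch.bin")
print(os.path.exists("scratch.bin"))  # False
```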

I'm not looking to do anything illegal--not my style. I'm looking to understand this so I can be ahead of the curve in poking, prodding, and stimulating the conversation about how we keep our bits private, as well as understand what the police and surveillance apparatus are up to. I am not team terrorist or criminal, but I am team anti-surveillance. I am a strong believer that if we can have an honest conversation about this out in the open, and better understand what is possible, we can sensibly mitigate criminal activity while also protecting the privacy of citizens and businesses. I'm just getting going thinking about this, talking with experts in my circle, and learning more. I will keep exploring, both in reality here on my blog, as well as on my alternative blogs, and see where this all goes.

If you have any expertise or opinions in this area, I'd love to hear your thoughts.

That Feeling You Get When You Say Something Critical

Have you ever spoken up critically on the Internet about a company? I'm not talking about the random gripe with a brand when it comes to their products or services, I am talking about taking a direct shot at the ethics of a company. You know that feeling you get when you do this, where your mind starts going through the possible repercussions: Will this impact my job? Will this impact me getting future contracts? Maybe people won't return my calls now? Will I be sued? On and on....

While this is definitely a REAL problem, I can't help but feel that in 80% of these situations it is more about self-policing than it is about real repercussions. This is the beauty of how this works in business--something I've seen play out in government, higher education institutions, the enterprise, and in the startup space when it comes to VC investment. If you ever want VC money, you better not bad mouth venture capital, or any individual VCs! I get a number of backchannel DMs and emails from folks telling me I'm brave for saying something, or asking if I have concerns about repercussions.

I deeply consider the impacts of anything I say and write online, whether it is public or not. However, I do not self-police out of a fear of those in power (at any level). While I do worry about where my next paycheck will come from, I'm not afraid of calling out companies I work with, or would potentially depend on for a paycheck. If a company can't take constructive criticism from me and truly get to know who I am and what I represent, then I do not want to take money from them. It is just how I operate, something that keeps me confident in myself, my business, and what I do for a living and to make my impact on the world.

This type of self-policing and intimidation is how power works. I'm putting this out there because I feel like the precedent has been set by Trump and his followers that they will come after you legally if you speak up--something I think some businesses will emulate. While I do consider these repercussions throughout my work, I'm not going to let them disrupt what I do just because this is how power works. If we are going to manifest the world we want, we have to speak truth to power; otherwise, we've created our own jail cells, and put ourselves into them, giving up on ever making a meaningful impact on the world around us.

When The Companies Who Have All Your Digital Bits Promise Not To Recreate You

I'm thinking about my digital bits a lot lately. Thinking about the digital bits that I create, the bits I generate automatically, the bits I own, the bits I do not own, and how I can make a living with just a handful of my bits. I have an inbox full of people who want me to put my bits on their websites, and people who want to put their bits on my platform so that they are associated with my brand, increasing the value of their bits. I know people think I'm crazy (I am) when I talk so much about my bits in this way, but it is a just response to my front-row seat watching companies get pretty wealthy off all of our bits. #BlueManGroup

Obviously, this is not a new phenomenon, and we've heard stories about Prince, John Fogerty, and George Clinton fighting for the funk and ownership of their musical bits, something artists of all types have had to battle on all fronts throughout their careers. Lately, I have found myself sucked into listening to stories from Carrie Fisher in her documentaries, better understanding her struggles to maintain a voice in the merchandising, representation, and control over her likeness, and her most famous role--Princess Leia. <3

Carrie Fisher made Princess Leia the icon she is today. However, she did it on the LucasFilm platform. How much does LucasFilm own, and how much does Carrie Fisher own? How dependent are they on her, and how dependent is she on them? Something that has been intensely worked out between lawyers since the 1970s. Now that she has passed, I'm sure her estate will continue to take on LucasFilm on this front, but the company has so much of her video, audio, and imagery (her bits) that they could possibly recreate her for future movies if they desired.

As I'm thinking about my own bits, and the control, or lack of control, I have over them this week, I'm also reading that Lucasfilm released a statement saying:

We want to assure our fans that Lucasfilm has no plans to digitally recreate Carrie Fisher’s performance as Princess or General Leia Organa.

Remember the Tupac and Michael Jackson holograms? The precedent for digitally recreating all of, or parts and pieces (bits) of, a human is out there. Let me stop here. I'm not talking about anything remotely in the realm of the singularity, I'm simply talking about what is possible with existing technology using video, audio, images, and text content generated from or containing the fingerprint of a certain human being (me). I know that some geeks love to masturbate to this shit, but I'm just talking about some of you delusional mother-fuckers realizing there is a lot of money to be made off of someone else's hard work, or even just their human existence. #Exploitatification

The platformification of everything is all about getting people to come do shit on your platform, and making money off of it--I just happen to study this stuff for a living and possess a borderline unhealthy obsession with the subject (#help). Carrie Fisher had to learn the hard way how to fight for what is hers, back when she was a young adult, something that continued throughout her life. With advances in technology this battle has evolved, morphed, and changed, with the greatest amount of control and power always existing in the hands of the platform operator (Lucasfilm), who has the most lawyers.

If you publish anything on the web regularly, you know that there are folks who immediately copy your shit and post it elsewhere, trying to generate ad revenue--this is the lowest level of things out there. At the higher levels, we have Youtube, Facebook, Instagram, and others who want all your bits in their walled garden, where they can measure, track, and run your bits through their "machine learning" and "artificial intelligence" algorithms (go Evernote). Where they obtain a license and control over your videos, images, audio, and other objects. Where their machine learning can learn to write like you, understand your behavior, where you go, what you like to buy, where you eat, and what you like to read, watch, think, and write in your journal.

At what point can Facebook or Google launch an API channel that behaves just like what I perform as the API Evangelist? At what point can Amazon understand which algorithms are getting the most use, the most traction and awareness, and access to the most data and content, and recreate all of this for themselves, within their own domain, presented as the latest AWS offering? You know why platform operators are afraid of folks stealing their AI through APIs? Because it is the reverse of their business model: training their algorithms using your bits, and providing a plantation for developers to tend to, cultivate, and grow the best crops.

Thankfully I am a human being, and no amount of AI, machine learning, and algorithms will ever replace me and what I do, but that doesn't mean there won't be endless corporations willing to step up and exploit, and profit from, my existence as said human being. As I struggle to understand my digital self, and make ends meet for my physical self, I just had to take pause and point out that a corporation just promised a bunch of human beings that it would not be digitally recreating another beloved human being, simply so it could profit from the hard work she did as a living human being. It leaves me wondering if Lucasfilm will always have this attitude, and whether or not other companies like Facebook and Oculus Rift will have similar ethical stances, and not use all of us in their social and VR bubble productions.