I Have To Comply With DJI Update Or My Drone Will Be Crippled

I received an email from DJI about my drone this weekend, telling me about an update rolling out this week that I will be forced to comply with, one designed to limit where I can operate my drone. It is a pretty interesting look at the future of this Internet of Things beast we’ve unleashed.

Dear Customers,

DJI will soon introduce a new application activation process for international customers. This new step, to take effect at the end of this week, ensures you will use the correct set of geospatial information and flight functions for your aircraft, as determined by your geographical location and user profile. All existing flight safety limitations, such as geofencing boundaries and altitude limits, remain the same.

Even if you have registered when activating your aircraft upon purchase, you will have to log in once when you update the new version of DJI GO or GO 4 App. If you have forgotten your password since your initial login, you can reset it using a function within the DJI GO and DJI GO 4 apps.

You will need a data connection to the Internet for your smartphone or tablet when you log in, in order to verify the account information and activate the updated software or firmware. If this activation process is not performed, the aircraft will not have access to the correct geospatial information and flight functions for that region, and its operations will be restricted if you update the upcoming firmware: Live camera streaming will be disabled, and flight will be limited to a 50-meter (164-foot) radius up to 30 meters (98 feet) high.

The feature applies to all aircraft (except standalone A3 and N3) that have been upgraded to the latest firmware or when using future versions of the DJI GO and GO 4 apps.

DJI encourages pilots to always follow applicable laws and regulations in the countries where they operate, and provides information about these regulations on its FlySafe website at flysafe.dji.com.


Your DJI Team

I find it really fascinating that if you do not comply with the update, your device will be limited in where it can operate, and some of its features will be taken away. As the email puts it, “this new step, to take effect at the end of this week, ensures you will use the correct set of geospatial information and flight functions for your aircraft, as determined by your geographical location and user profile.”

This email provides us with a look at the future, where all our devices are connected to the Internet, and if we don’t comply with all updates, and forward motion, the objects in our lives can be turned off, or limited in what they can do.

The Oil Industry Waking Up To Data Being The New Oil

When you hang out in startup circles you hear the phrase “data is the new oil” a lot. Securing the rights to mine and extract data, and to generate revenue from it, is big business, and VCs, hedge funds, and even governments are getting in on the game. Whether it is gathered from the private or public sector, or in your living room and pocket, everyone wants access to data.

One sign that this discussion is reaching new levels is that the oil industry itself is talking about data being the new oil. That is right. I’m increasingly coming across stories about big data and the revenue opportunities derived from data when it comes to IoT, social, and many other trending sectors. The big oil supply chain has access to a lot of data to support its efforts, as well as the data generated as exhaust across daily oil production and consumption--the opportunity is real, man!

To entrepreneurs this shift is exciting, I’m sure. To me, it’s troubling. Wall Street turning its sights to the data opportunity, and hedge funds getting in on the game, worried me, but big oil being interested is an even greater sign that things are reaching some extreme levels. It is one thing to use “data is the new oil” as a metaphor to attract investment in your startup, or to acquire new customers. It is another thing for the folks behind big oil to be paying attention--these are the same people who like to start wars to get at what they want.

Anyways, it is just one of many troubling signs emerging across the landscape. Many of my readers will dismiss it as meaningless, but these discussions are just symptoms of an overall illness in how we see data, privacy, and security. Remember when we’d topple dictators to get at the world’s oil resources? Well, welcome to the new world, where you topple democracies if you have access to the right data resources.

Liquid To Filter Out The Future On My Blogs

I created a little hack for my Jekyll-driven websites that allows me to publish a week’s worth of posts (or more) ahead of time. I had been scheduling posts using my homebrew CMS, but I recently ditched it for Siteleaf, and one thing that is not possible with Siteleaf is scheduling--so I needed a hack.

I wanted to be able to publish at least a week’s worth of blog posts at once, but then trickle them out somehow using Jekyll, avoiding the CMS layer. I got to work publishing a couple of “future” posts and tightening up any holes where the future might leak out into the present--specifically the blog and RSS/Atom listings.

First I set a variable to tell me what the date and time were for any given moment:
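In Jekyll-flavored Liquid, that can be done with `site.time`, which reflects the moment the site is generated (the variable name `now` is just my choice here):

```liquid
{% comment %} Capture the build time as epoch seconds (an integer). {% endcomment %}
{% assign now = site.time | date: '%s' | plus: 0 %}
```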

Then I translated the publish date for each post into the same format as my definition for now (seconds):
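Inside a post loop, the same `date` filter converts each post’s publish date into epoch seconds--something along these lines, assuming the loop variable is named `post`:

```liquid
{% comment %} Convert the post's publish date into epoch seconds as well. {% endcomment %}
{% assign post_in_seconds = post.date | date: '%s' | plus: 0 %}
```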

Then you just check to make sure each blog post that is being displayed using Liquid is truly from the past:
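Putting it all together, a blog listing loop might look something like this sketch (the markup and variable names are mine):

```liquid
{% assign now = site.time | date: '%s' | plus: 0 %}
<ul>
  {% for post in site.posts %}
    {% assign post_in_seconds = post.date | date: '%s' | plus: 0 %}
    {% comment %} Only render posts whose publish date is in the past. {% endcomment %}
    {% if post_in_seconds <= now %}
      <li><a href="{{ post.url }}">{{ post.title }}</a></li>
    {% endif %}
  {% endfor %}
</ul>
```

One caveat: depending on your Jekyll version, you may need `future: true` in `_config.yml` so that future-dated posts get built in the first place.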

Voila, a filter for the future on my blog listing page, and the RSS or Atom feeds. After this, I published a schedule.xml feed which showed all my blog posts, even for the future. I use this to schedule Tweets, and other social media posts for my blogs throughout the week–allowing my social media management tooling to see into the future when it comes to my blogs.
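The schedule feed is essentially the same loop with the filter removed. A rough sketch of what a schedule.xml template could look like (the exact elements here are mine):

```liquid
---
layout: null
---
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>{{ site.title }} Schedule</title>
  {% comment %} No date filter here, so future posts show up too. {% endcomment %}
  {% for post in site.posts %}
  <entry>
    <title>{{ post.title }}</title>
    <link href="{{ site.url }}{{ post.url }}"/>
    <published>{{ post.date | date_to_xmlschema }}</published>
  </entry>
  {% endfor %}
</feed>
```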

It is a hack for achieving a blog schedule, but it works. It allows me to schedule my world days or weeks ahead, and stay focused on project work. One of the reasons I abandoned my homegrown CMS is that I wanted to be forced to find solutions within the cracks of a variety of SaaS tooling, using feeds and APIs. I feel like these approaches are going to be more valuable to my readers, as I can’t expect everyone to deploy a custom solution like I was doing.

Observability Is Needed to Quantify A DDoS Attack

The FCC released a statement from the CIO's office about a denial-of-service attack on the FCC comment system, after John Oliver directed his viewers to go there and "express themselves". Oliver even published a domain (gofccyourself.com) that redirects you to the exact location of the comment system form, saving users a number of clicks before they can actually submit something. I am not making any linkage between what John Oliver did and the DDoS attack claims from the FCC, but I would like to highlight the complexity of what a DDoS attack is, and how it is becoming an essential tool in our Cybersecurity Theater toolbox.

According to Wikipedia, "a denial-of-service attack (DoS attack) is a cyber-attack where the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to the Internet. Denial of service is typically accomplished by flooding the targeted machine or resource with superfluous requests in an attempt to overload systems and prevent some or all legitimate requests from being fulfilled." It is a pretty straightforward way of taking down a website, an application, and, increasingly, devices, but it is one that is often more theater than reality.

There are two sides to the DDoS coin: 1) how many requests an attacker can make, and 2) how many requests the target can handle. If a website, form, or other service can only handle 100 requests per second, it doesn't take much traffic to constitute a DDoS attack. I worked at a company once where the IT director claimed to be under sustained DDoS attack for weeks, crippling the business, but after a review, it turned out he was running some really inefficient services in an under-resourced server environment. My point is that there is always a human making the decision about how many requests to handle before things are actually crippled, either by limiting the resources available before an attack occurs, or by declining to scale up existing infrastructure because it would cost too much.

There are variations of the DDoS attack, sometimes called a "cash overflow" attack, where a website operates in a scalable cloud and can handle a large volume of requests, but will eventually cost its operator too much, who will cut it off because they can't afford to pay the bill. A DDoS attack can be successful for a variety of reasons. Sometimes providers don't have the infrastructure to scale to the number of requests, sometimes they can't afford to scale, and other times a provider just decides that a website, form, or device isn't worth scaling to support any level of demand beyond what is politically sensible.

I'm sure that many DDoS attacks are legitimate, but I know personally that in some cases they are also a theater skit performed by providers looking to cry foul, or to stimulate a specific type of conversation or response from a specific audience. I just think it is important to remember the definition of a DDoS attack, and to always think a little more deeply about the motivations of both the DDoS attacker and those under attack, the political motivations of everyone involved, and the resources each has to contribute to the two-way street that is a distributed denial-of-service (DDoS) attack.

The Value Of Our Digital Bits

I think way too much about the digital bits being transmitted online each day. I study the APIs that are increasingly being used to share these bits via websites, mobile, and other Internet-connected devices. These bits can be as simple as your messages and images or can be as complex as the inputs and outputs of algorithms used in self-driving cars. I think about bits at the level up from just the 1s and 0s, at the point where they start to become something more meaningful, and tangible--as they are sent and received via the Internet, using web technology.

The average person takes these digital bits for granted, and is not burdened with the technical, business, and political concerns surrounding each of them. For many other folks across a variety of sectors, these bits are valuable, and they are looking to get access to as many of them as they can. These folks might work at technology startups or hedge funds, maybe in law enforcement, or just be tech-savvy hackers or activists on the Internet. If you hang out in these circles, data is often the new oil, and you are looking to get your hands on as much of it as you can, eager to mine it everywhere you possibly can.

In 2010, I started mapping out this layer of the web that was emerging, where bits were beginning to be sent and received via mobile devices, expanding the opportunity to make money from these increasingly valuable bits on the web. This move to mobile added a new dimension to each bit, making it even more valuable than it was before--it now possessed a latitude and longitude, telling us where it originated. Soon, this approach to sending and receiving digital bits spread to other Internet-connected devices beyond just our mobile phones, like our automobiles, home thermostats, and even wearables--to name just a few of the emerging areas.

The value of these bits will vary from situation to situation, with much of the value lying in the eye of whoever is looking to acquire them. A Facebook wall post is worth a different amount to an advertiser looking for a potential audience than it is to law enforcement looking for clues in an investigation, and let's not forget the value of this bit to the person who is posting it, or maybe their friends who are viewing it. When it comes to business in 2017, it is clear that our digital bits are valuable, even if much of this value is based purely on perception, with very little tangible value in the real world, and with many wild claims being made about the value and benefit of gathering, storing, and selling bits.

Markets are continually working to define the value of bits at a macro level, with many technology companies dominating the list, while APIs are defining the value of bits at the micro level--this is where I pay attention to things, at the individual API transaction level. I enjoy studying the value of individual bits, not because I want to make money off of them, but because I want to understand how those in positions of power perceive the value of our bits, and are buying and selling them at scale. Whether it is compute and storage in the cloud, the television programs we stream, or the pictures and videos we share in our homes, these bits are increasing in value, and I want to understand the process by which everything we do is being reduced to a transaction.

Many Perspectives On Internet Domains

I am always fascinated by how people see Internet domains. I do not expect everyone to grasp all of the technical details of DNS or the nuance of the meaning behind the word domain, but I'm perpetually amazed by what people associate or do not associate with the concept. I like to write about these things under my domain literacy work, saving the research I do for future use, but also using the process to polish my storytelling on the subject, and hopefully being more influential when it comes to domain literacy discussions.

After watching the conversation around Audrey's decision to block annotation from her domain(s), I just wanted to take a moment and capture a few of the strange misconceptions around domains I've seen come up, as well as rework some of the existing myths and misunderstandings I deal with regularly when it comes to my API research, and wider domain literacy work. Let's explore some of the storytelling going on when it comes to what is an Internet domain.

What Is A Domain?
Many folks have no idea what a domain is--even though they type them into their browsers and click on them regularly--let alone that you can buy and own your own domain. This illiteracy actually plays into the hands of tech entrepreneurs, and each wave of capitalists who invest in them--they do not want you knowing the details of each domain, or who is behind it, and they want to make sure you are always operating on someone else's domain. It is how they will own, aggregate, and monetize your bits, always being the first to extract any value from what you do online and via your mobile phone.

You Don't Own Your Domain!
A regular thing I hear back from people about domains is that you don't ever truly own your domain. Well, I'd first say that you never really truly own ANYTHING, but that is probably another conversation. Do you really own your house? What happens if you don't pay your taxes, or don't respect the title company and other powers involved? What about eminent domain laws? Sure, you don't really own your domain, but you are able to purchase it, control the addressing of it, and decide what gets hosted there (or not). It's pretty damn close to a common definition of ownership for this discussion.

Your Domain Is On the Internet So It Is Public!
Just walk yourself through the top domains you can think of. Does this argument hold any water? Is every part of every domain on the Internet public because it uses public DNS and Internet infrastructure? No. There are many grades of access and availability across domains that use public infrastructure. Domain owners and operators get to determine which portions of a domain are accessible by the public, by private partners, and even by internal actors. Even in the public areas not protected by a password, there can be different levels of content delivery based upon region, individual IP address, or just randomly, leaving it to an algorithm to personalize what you will see. There are no guarantees of something being public just because it uses a public domain.

The Domain Name System (DNS) Is Voodoo
Yes. DNS is voodoo. I've been managing DNS professionally for domains since 1998, and I still think it's voodoo. Even with DNS being a dark art, it is still something the average person can comprehend, and even manage at a basic level for simple domains, especially with the help of DNS service providers. DNS is the address, doorway and even the fence for the perimeter of your domain. DNS also helps you define and quantify the size of your domain, with the number of domains exponentially expanding your digital territory. A basic level proficiency with DNS is required to manage your own domain(s) successfully.

We Own What You Do In Our Domain!
Ok. Sure. Any new data or content that is generated by systems running within your domain can be seen as YOUR intellectual property. However, when you invite people to bring their bits (photos, videos, thoughts) to your domain and don't really educate them about intellectual property, and what you are up to, it can be easily argued that maybe what people generate in your domain isn't always yours. Even with that said, ensuring things happen within a specific domain, so that you can place some sort of ownership claim over those bits is a pretty standard operating procedure for the web today. This is why most of my work is conducted via my own domain(s) each day, and syndicated out to other domains as I see fit.

There Is No Real Difference Between Domains 
As people surf the web, they rarely see the difference between each domain. Unless it's a big brand like Twitter, Facebook, or Google, I don't think people ever really consider the domain they are on, or who might be behind it. Those of us in the business do a lot of thinking about domains and see the cracks in the web, but the average person doesn't see the boundaries, differences, or motivations behind them. This all contributes to the many different paths people take when it comes to domain literacy--depending on where they boarded with these concepts, they'll see domains very differently. While some of us enjoy helping others understand domains, there are many who think it should be kept in the realm of the dark arts, something the normals shouldn't worry their pretty little heads about.

Everybody Gets The Same Experience At A Public Domain
A common perception I get from folks is that each domain on the public web looks the same for everyone who visits it. We are good at projecting our reality at common online domains onto other people. The news I see on my favorite news site is what everyone else sees. My view of Facebook, Instagram, and Twitter is similar to what other people experience--or rather, I don't think people spend much time thinking about it; things are the way they are through a lack of curiosity. My Facebook is definitely not your Facebook. Our web experience is increasingly personalized and bubbleized, changing how and what each domain will mean to different folks. Net Neutrality is under attack on many fronts, and is rapidly being eroded away in our browsers and on our mobile phones by the major providers.

I am captivated by this version of our online world that is unfolding around us. What worries me is the lack of understanding of how it works, and how little awareness people have of where they are operating when online. People don't seem concerned with knowing what is safe and what is not. What worries me the most is the number of people who don't even have the concept of a domain, of domain ownership, or any sense of separation between sites online. After that, it is the misuse, misinformation, and obfuscation of the digital world by people operating in the shadows and benefitting from ad revenue. I know many folks who would argue that we need to create safe spaces (domains) like Facebook where people can operate, but I feel pretty strongly that this is an Internet discussion, and not merely a platform one.

We have a lot of work ahead of us when it comes to web literacy. With the amount of time we are spending online, and the ways we are letting it infiltrate our physical worlds, we have to do better at educating people about the basic building blocks of the web. If we let "them" ruin the web, and platforms become the only safe place to be--corporations win, and this grand experiment called the web is over. Maybe it already is, or maybe it never was, or maybe we can just help folks see the web for what it is.

FREE Always Seems To Suck The Oxygen Out Of The Room

I closely watch the value of the digital bits being exchanged via the Interwebz--it is what I do. @audreywatters always says that APIs are "reducing everything to a transaction", and I am interested in understanding the value of these bits, what people are buying and selling them for, and how it keeps the Internet machine chugging along--for better or worse. As I watch Audrey battle with folks about the availability of content within her domain, and experience my own shift in what should be made freely available by API providers, I'm left thinking about the damaging effects free has had on our world.

I feel like the seeds of this were set into motion by John Perry Barlow's followers imparting their ideology on the web, but it was capitalized on during the Web 2.0 movement by tech giants like Google, Twitter, and Facebook when it came to leveling the playing field, giving them the competitive advantage they needed. It is very difficult to compete with FREE. Only certain companies can operate in this environment. It's a brilliant and cutthroat way of doing business, setting a tone for how business should be done in a way that keeps competitors out of the game. When the free and open Internet armies become wielded by this type of passive-aggressive capitalism, the resulting zombie army becomes a pretty effective force for attacking any providers who are left operating in this oxygen-deprived environment.

These free zombie armies think the web should be free and openly accessible for them to do what they want with, most notably to build a startup, get funding, and sell that startup to another company. Your detailed website of business listings, your research into an industry, and other valuable exhaust that MUST remain free is ripe for the picking, and for inclusion in their businesses. The zombies rarely go picketing after the tech giants, telling them that everything must remain free and available; they go after the small service provider who is trying to make a living and build an actual small business. If the tech giants sucking the oxygen out of the space with FREE don't get you, the free and open zombies will pick you clean through a sustained, systematic assault from Reddit and Hacker News.

I'm always amazed at the bipolar conversations I have with folks about how I manage to make a living doing my API research, and how rich and detailed my work is, while also being asked to jump on a phone call to talk through my research so it can be used in their startup, marketing, or infographic. I am never asked if they could pay me, and when I mention getting paid--they often just scatter. This continuous assault on the web has pushed me to shift my views on what should be FREE, and what we publish and openly license on the web, as well as what we make available at the lowest tiers of our APIs. These are my valuable bits. I've managed to stay alive and make a living in a field where most analysts either burn out or are acquired and co-opted. My bits are how I make a living; please stop demanding that they always be free. Not everyone can operate like Google, Facebook, or Twitter--sometimes things cost money to do well, and might not need to be done at scale.

Expressing My Concern About Startup Dependability When I Talk To VCs

I talk to venture capital (VC) folks on a regular basis, answering questions about specific API-centric companies, all the way to general trends regarding where technology is headed. This week I was talking with a firm about the viability of one of the API companies I work with regularly, and the topic of startup dependability came up, as we were talking about the challenges this particular startup is facing.

While I use this particular startup in my business operations, I expressed concern about its viability and stability in the long run. This concern has less to do with the startup itself, as I fully trust the team and the technology they develop; it is more about the nature of how investment works, as well as the looming threats from the 1,000-pound gorillas in the space. I just do not trust that ANY startup will be around in the coming months, and I craft my API integrations accordingly--always with a plan B, and hopefully a plan C, waiting in the shadows.

This isn't just me. I've had similar conversations with companies of all shapes and sizes, university technology groups, as well as government agencies. After each wave of startups failing or achieving their exits, those of us end-users who are often in charge of purchasing decisions are suffering from whiplash, and our necks hurt. Every time there is a new tool on the table, we are asking ourselves whether it is worth it this time. Should we be investing in yet another software-as-a-service that will likely go away in 12 to 24 months? The burden on us has been too high, and we are left feeling like the startups and their investors really do not give a shit about us--they have their own business model that they are moving forward with, and we are just a number.

There are no guarantees in business, but startups and VCs aren't doing enough to address the dependability of their portfolio companies. At some point, it will catch up with them, if it already isn't. As the API Evangelist, I am already toning down my excitement over new startups because I really do not want to be responsible for helping convince people to adopt a new tool, and then be held accountable when the startup goes away. Each week I have an inbox full of startups asking me to write about them, and most of them are unaware of how much my neck hurts, they are narrowly focused on their vision, with little concern for the rest of us, as long as they get their payout.

Why Do You Have Trouble Talking About The Negative Impacts Of Technology?

When you are a critic of technology you get a lot of pushback from technologists, who almost always seem to respond impulsively that not all technology is bad, echoing conversations around race and gender. There are two default responses you get when you ask some hard questions about how technology is used: 1) not all technology is bad, and 2) why do you hate all technology? These are pretty standard responses from a culture of people (mostly men) who feel that they have to defend technology in all scenarios, rather than actually being capable of participating in constructive conversations about the sensible use of technology.

I like technology and have made a good living as an Internet technology specialist. I'm definitely not someone who is anti-technology. However, the byproduct of folks believing so blindly in the power of technology is a trail of negative consequences left in its wake, and I find myself regularly stepping up to ask some of the harder questions--resulting in folks increasingly pushing back on me in this impulsive way. I would say that the pushback from technologists has exceeded the pushback I get from business folks when it comes to the power of markets, but when you have technologists who have drunk the market's Kool-Aid while also believing so blindly in technology, a very disturbing conversation emerges.

As Internet technology continues its penetration into every aspect of our personal and business worlds, we have to get better at asking the hard questions about the negative and often unforeseen consequences of using it. I'm always amazed by the annoying tone technologists take with me when I bring up the hard questions, and how they resort to immediately defending technology and technologists. Rarely is it a conversation about the possibilities (good or bad); it immediately becomes an attack on what someone does for a living and their belief system, making it a personal attack--one that regularly becomes unproductive in a digital environment.

For the folks who do this, I ask: do you ever step back and evaluate why you feel so compelled to defend technology? If you pause and think about some of the negative things we've seen in the world this century, is it so hard to conceive that, as technologists, we might be missing some negative situations, that we might be too close to the technology, or that there are views we are not considering because of how close we are? Why is your immediate response in defense of technology, over the defense of people? Why are you so unwilling to have a discussion with other people about Internet technology, and understand their perspective of it?

When people step up and criticize APIs I often have the same emotional response in my head. Not all APIs are bad! I'm doing good with APIs! Then I stop myself, and I remind myself that technology doesn't need defending. People do. Maybe I should listen to what someone is saying. Even if I don't end up agreeing with them, I almost always benefit from understanding their perspective. If I don't agree with them, I also remind myself that the situation doesn't always warrant a response. Technology doesn't need me defending it. If there is a human involved in my rebuttal to an argument I may actually respond with a defense, but if it is just technology--it can defend itself. 

My Procedures For Crossing Borders With Digital Devices

I just got back from two weeks in the United Kingdom, which was my first international travel in a Trump and Brexit dystopia. My travel leaving the country and coming back through LAX was uneventful, but it gave me the opportunity to begin pulling together my procedures for crossing borders with my digital devices.

Pausing and thinking about which devices I will travel with, what I am storing on them, and the applications I'm operating provided a significant opportunity to get my security and privacy house in order. It allows me to go through my digital self, think about the impact of traveling with too much data, and prepare and protect myself from potentially compromising situations at the border.

When I started my planning process I had invested in a burner Google Chromebook, but because I'm currently working on some projects that require Adobe products, I resorted to taking my older MacBook Air, which, if confiscated, I would be alright leaving behind (who wants a violated machine?). When it comes to my iPhone and iPad, I cannot afford to leave them behind, as I need the most recent iPhone to work with my new Mavic drone and my Osmo+ video camera--both from DJI.

Any law enforcement looking to get access to my MacBook, iPhone, or iPad is going to go after all my essential bits: contacts, messages, images, audio, and video. So I made sure that all of these areas were cleared before I crossed any border. I keep only the applications I need to navigate and stay in touch with key people--no social media except Twitter. Since I use OnePass for my password management, I don't actually know any of my passwords, and once I remove the OnePass application, I can't get into anything I'm not already logged into.

The process of developing my border crossing procedure also helped me think through my account hierarchy. My iCloud and Google accounts are definitely my primary accounts, with everything else remembered by OnePass. I even set up an alternate Kin Lane for iCloud, Twitter, and Google, which I log into on all of my devices when crossing a border. I made sure all social and messaging applications were removed except for the essentials, and double-checked that I had two-factor authentication turned on for EVERYTHING.

I store nothing on the iPhone or the iPad. Everything on my MacBook is stored in a synced Dropbox folder, which is removed before any border is crossed. I clear all SD cards and camera storage on each device. Everything is stored in the cloud when I travel, leaving nothing on the device. When you are really in tune with the bits you create and need to operate each day, it isn't much work to minimize your on-device footprint like this. The more you exercise this routine, the easier it is to keep the data you store on-device as minimal as you possibly can. One footnote on storage though--if you can't get all your data uploaded to the cloud in time because of network constraints, store it on mini-SD cards, which can be hidden pretty easily.
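The wipe step can even be scripted. Here is a minimal Python sketch of the idea--the paths are hypothetical stand-ins for a synced Dropbox folder and an SD card mount, and it assumes the cloud copy is already up to date before anything local is deleted:

```python
import shutil
from pathlib import Path

# Hypothetical locations of locally synced data to purge before travel --
# adjust these for your own machine.
SYNCED_LOCATIONS = [
    Path.home() / "Dropbox",
    Path("/Volumes/SDCARD/DCIM"),
]

def purge_before_border(locations):
    """Remove each synced folder that exists, returning what was purged.

    This only deletes the local replica -- the cloud copy is assumed
    to already be current.
    """
    purged = []
    for location in locations:
        if location.exists():
            shutil.rmtree(location)
            purged.append(str(location))
    return purged
```

Nothing fancy--the point is that once the procedure is written down as code, it runs the same way every trip, instead of relying on memory at the last minute.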

My procedure focused on device storage, application connections to the cloud, and what is baked into the device, like the address book--running everything in the lightest, most bare-bones mode possible. It really sucks that we have to do this at all, but I actually find the process rewarding--think of it like fasting, but for your digital self. I'm looking forward to further refining my approach and keeping it as something that I do EVERY time I cross a border. Eventually, it will just become standard operating procedure, something I do without thinking, and it will definitely begin to impact my more permanent digital footprint--keeping everything I do online as thoughtful, meaningful, and secure as possible.

The Valuable Bits On Your Cell Phone That Everybody Wants

I keep track of which digital resources are valuable: products, contacts, messages, compute, images, video, and the other valuable bits that are being moved around, and bought and sold, via the Internet. I'm always trying to understand what is valuable to developers, platform operators, investors, and even the police and government agencies.

I was reading a post on how Denver police are using Cellebrite, a solution for accessing cell phones, and an OCR image from the story had a list of bullet points regarding Cellebrite's functionality, which I think provides a nice snapshot of what data is valuable on your cell phone currently. They are looking for the following bits:

  • Device ID - The unique identification of your device.
  • Address Book - The names and information for your contacts.
  • Phone Calls - The details of every call you have made.
  • Emails - All of the emails you have sent and received.
  • Messages - Your text and image messaging, including SMS.
  • Videos - All of the videos you have watched and created.
  • Photos - All of the photos you have viewed and created.
  • Audio - Any podcasts you've listened to and audio files you've created.
  • Location - The history of where you have been, via GPS.
  • Social - Your social messages and connections for the networks you use.
  • Password - The code you use to get into your device.
  • Wifi - The networks you have connected to with your device.

It provides a pretty nice snapshot of what is valued in today's digital world. These are the essential bits of all of our lives, and everyone is working overtime to get their hands on them. It's not just the government, every single company doing business online wants to get at these bits, connect the dots, and make money from them. Law enforcement is interested in the same bits, just for very different reasons--they have a very different business model than the startups, but they both have a shared desire.

How free we are in the future is going to come down to how much control we have over our bits. Everyone wants them: the government, hedge funds, venture capitalists, hackers. I think the last one, wifi--our network--is the canary in the coal mine. The current tone being set by the FCC and the Trump administration is a sign that things will begin to get much more toxic, even beyond the Silicon Valley operated, cyber(in)secure world we find ourselves operating in currently--hang on to your bits, it is going to be a wild ride!

The Role Of The University In Our World

I came across Your College Degree is Worthless as part of my regular monitoring of the API space. It is a story I see regularly from the startup community, partly due to my proximity to my partner in crime Audrey Watters (@audreywatters) and her Hack Education work. It is a story startups like to tell when they are selling technology-fueled solutions they see as a replacement for the college degree--in this case, the author is developing a startup based on selling apprenticeships with other startups. I'm linking to the story and startup not because I support them, but because they provide a great example of how corrosive startup culture has become.

Shortly after reading this story I went to Oxford in the UK to speak with the Oxford Dictionaries API team, and while in Oxford I walked around several of the colleges there. While experiencing Christ Church and Magdalen colleges this story came to mind, and I spent time thinking deeply about the hubris and delusion of tech culture. Imagine believing that an internship at a startup is more valuable than a college degree, and that higher educational institutions should be dismantled and replaced with startup culture--we have created quite a magical echo chamber.

I get it, you think the startup experience is amazing, and everyone should do it. You see academia as an exclusive group--a party maybe you were never fully invited to. You also smell opportunity, selling folks what you see as an alternative. But you are missing so much. How can an apprenticeship at a startup ever replace studying literature at a university, and immersing yourself in, well, learning? What a hollow, empty world to live in where running a business would ever replace literature, philosophy, art, and the other meaningful aspects of being human.

While in the UK I had the pleasure of having my 16-year-old daughter along, and I took her with me to Oxford that day. It isn't a school she'd be applying to, but we also visited Edinburgh University on the trip, which might actually make it onto her list of schools she'll be applying to in a year or so. I think about the experience my daughter would have at startups vs the experience she would have in a university environment. I want my daughter to be successful, but this doesn't just mean making money; it also means being happy, healthy, and well-adjusted in her life--something a university environment would contribute to, and something I shudder to think about in the volatile, male-dominated, "meritocracy" of startup culture.

I do not have a university degree. Hell, I do not even have a high school diploma. I have no allegiance to any academic institution, but I completely respect what they do, and refuse to take for granted what they have done for our world. Sure, higher educational institutions have their problems, but so does startup culture. It troubles me that so many would be willing to support the concept of a university degree being worthless, willfully dismissing what a university degree has done for so many on the planet. It leaves me seeing startup culture as some sort of virus being unleashed on almost every sector of our society today.

I know. I know. Not all startups. Yes. Just like not all men. Just like not all white men. But have you ever taken the time to actually step back from your startup aspirations, let the effects of the kool-aid fade, and thought about life beyond technology and making money? There are so many other aspects of life that make it worth living, something that universities have played a significant role in. Maybe we could spend more time thinking about the positive role startups should play, and not the dismantling of good things simply so you can profit from selling their replacement.

Working To Understand The Digital World Around Us

My partner in crime Audrey Watters and I recently rebranded our umbrella company as Contrafabulists, and along the way, we worked with our friend Bryan Mathers to help us develop some graphics that would help define our work. Bryan quickly developed a logo for Contrafabulists that I think represents what we do--embedding ourselves within the gears of the machine, pushing back on the daily stories from the technology sector.

Bryan has a unique approach to conducting his work. He spent time with us on a video call discussing our vision, listening to both of us speak, while also applying some of what he already knew of Audrey's Hack Education work, as well as my API Evangelist and Drone Recovery work. From this discussion, he created a banner image that we use on the Contrafabulists website--providing another great visualization of our work.

I love staring into the eyes of the owl, which stares back at you with its mechanical gaze, forcing you to ask the hard questions about how you are using technology. Maybe you are complicit in the stories coming out of the technology sector, or maybe you are just a listener or narrator of the stories being told--either way, the owl's eyes quickly get to work understanding you, and what defines you from a technical view.

After we launched the Contrafabulists website, Bryan was listening to our podcast, where Audrey and I rant about the week, and he produced an image that was unexpected and resonates with me in some powerful ways. Bryan's work illustrates where we are at when it comes to defining who we are in the digital world unfolding around us, while the machines are all learning about us as well.

I do not know which conversation inspired Bryan's work, but I'm assuming it was our discussion around what machine learning technology can do, and what it can't do. Machine learning is a very (intentionally so) abstract term being used across the latest wave of rhetoric coming out of the technology sector, one that often invokes magical visions in your head about what the machines are learning. Understanding more about what machine learning is, and what it isn't, is a significant portion of my work as the API Evangelist, overlapping with Audrey's work on Hack Education--Bryan's work is extremely relevant and continues to help augment our storytelling in an important way.

There are three significant things going on in his image for me. At first glance, it feels like a representation of what the machine sees of us, when trying to interpret a photo of us using facial or object recognition, defining our face, the space and context around us, while also linking that to other aspects of our social and digital footprint. Then I'm overwhelmed with feelings of my own efforts to define who I am, with each blog post, social media post, or image uploaded--in which the machine is working so hard to understand in the same moment. Then there is the intersection of these two worlds, and the struggle to understand, connect, find meaning, and deliver value--the struggle to define our digital self, something we either do ourselves, or it will be done for us by the technological platforms we operate on.

As I process these thoughts, I would add a fourth dimension to this struggle, something that is very API driven--the role 3rd parties play in defining us, and the world around us, in an increasingly digital world. Our world is increasingly being shaped by platforms, and the 3rd parties who have learned how to p0wn these platforms, whether for ideological or financial gain. Our understanding of the immigration debate is perpetually being shaped by platforms like Twitter or Facebook, and a small group of 3rd party influencers who have learned to shape and game the algorithm.

As we are learning, the machines are also learning about us, something that is being used against us in real time by those who understand how to manipulate the algorithms to achieve their objectives. Helping people understand what we mean when we say machine learning is difficult--partly because machine learning is technically complicated, but also because it is designed to provide a smoke screen for any exploitation and manipulation occurring behind the scenes. Machine learning is designed to be understood by a handful of wizards, leaving everyone else to bask in the glow of the personalization and convenience it delivers, with no questions asked regarding the magical capabilities of the machine.

Machine learning is increasingly defining us in the online world, watching everything we do on Facebook, Instagram, Twitter, and via search engines like Google, but it is also beginning to define how we see the physical world around us, helping shape how we see other cities, countries, and places we may never actually visit and experience in person--algorithmically painting a picture of how we see the world.

Audrey and I are dedicated to understanding the stories coming out of the tech sector, cutting through the marketing, hype, and storytelling accompanying each wave of technology. Machine learning is just one of many areas we work to understand, in an increasingly complex landscape of magic and wizardry being sold via the Internet and applications that are infiltrating our mobile phones, televisions, automobiles, and every other corner of our personal and professional lives. 

I'm thankful to have folks like Bryan Mathers along for the ride, assisting us in crafting images for the stories we tell. I feel like our words are critical, but it is equally important to have meaningful images to go along with the words we write each day. Amidst the constant assault of information, sometimes all we have time for is just a couple seconds to absorb a single image, making the photo and image gallery an important section in our Contrafabulists toolbox. I imagine using Bryan's machine learning images in dozens of stories over the next couple of years, and I'm hoping they will continue to come into focus, helping us better connect the dots, and see our digital reflection in this pool we have waded into.

Machine Learning Will Be A Vehicle For Many Heists In The Future

I am spending some cycles on my algorithmic rotoscope work, which is basically a stationary exercise bicycle for learning about what is, and what is not, machine learning. I am using it to help me understand and tell stories about machine learning by creating images with machine learning that I can use in my machine learning storytelling. Picture a bunch of machine learning gears all working together to help make sense of what I'm doing, and WTF I am talking about.

As I'm writing a story on how image style transfer machine learning could be put to use by libraries, museums, and collection curators, I'm reminded of what a con machine learning will be in the future, and what a vehicle it will be for the extraction of value and outright theft. My image style transfer work is just one tiny slice of this pie. I am browsing through the art collections of museums, finding images that have meaning and value, then firing up an AWS instance that costs me $1 per hour to run, pointing it at an image, and extracting the style, texture, color, and other characteristics. I take what I extracted from a machine learning training session and package it up into a machine learning model that I can use in a variety of algorithmic objectives.

I didn't learn anything about the work of art. I basically took a copy of its likeness and features--kind of like what the old Indian chief would say to the photographer in the 19th century when they'd take his photo. I'm not just taking a digital copy of this image; I'm taking a digital copy of the essence of this image. Now I can take this essence and apply it in an Instagram-like application, transferring the essence of the image to any new photo the end user desires. Is this theft? Do I owe the owner of the image anything? I'm guessing it depends on the licensing of the image I used in the image style transfer model--which is why I tend to use openly licensed photos. I'll have to learn more about copyright and see if there are any algorithmic machine learning precedents to be had.
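For the curious, the "essence" being extracted here has a concrete mathematical form in Gatys-style transfer: the Gram matrix of a neural network's feature activations, which captures an image's texture and color correlations while throwing away its spatial layout. A minimal NumPy sketch of the idea--the feature map below is random, standing in for the activations a real network would produce from an artwork:

```python
import numpy as np

def gram_matrix(features):
    """Channel-to-channel correlations of a feature map.

    In Gatys-style transfer these correlations -- not the pixels
    themselves -- are what encode an image's 'style'.
    """
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # each row: one pixel, all channels
    return flat.T @ flat / (h * w)      # (c, c) correlation matrix

# Toy stand-in for real network activations from a painting.
rng = np.random.default_rng(0)
features = rng.random((4, 4, 3))
G = gram_matrix(features)
print(G.shape)  # the pixel layout is gone; only the style statistics remain
```

This is exactly why the result feels like copying an essence rather than the image: the Gram matrix can't reconstruct the painting, but it is enough to impose the painting's look on any new photo.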

My theft example in this story is just low-level algorithmic art essence theft. However, this same approach will play out across all sectors. A company will approach another company telling them they have this amazing machine learning voodoo, and if we run it against your data, content, and media, it will tell you exactly what you need to know--give you the awareness of a deity. Oh, and thank you for giving me access to all your data, content, and media; it has significantly increased the value of my machine learning models--something that might not be expressed in our business agreement. This type of business model is above your pay grade, and operating on a different plane of existence.

Machine learning has a number of valuable uses, with some very interesting advancements having been made in recent years, notably around Tensorflow. Machine learning itself doesn't have me concerned. What concerns me is the investment behind machine learning, the less than ethical approaches of some machine learning companies I am watching, and their tendency toward making wild claims about what machine learning can do. Machine learning will be the trojan horse for this latest wave of artificial intelligence snake oil salesmen. All I am saying is that you should be thoughtful about which machine learning solutions you connect to your backend, and when possible make sure you are just connecting them to a sandboxed, special version of your world that won't actually do any damage when things go south.

Why Would People Want Fine Art Trained Machine Learning Models

I'm spending time on my algorithmic rotoscope work, and thinking about how the machine learning style textures I've been making can be put to use. I'm trying to see things from different vantage points and develop a better understanding of how texture styles can be put to use in the regular world.

I am enjoying using image style filters in my writing. They give me kind of a gamified layer to my photography and drone hobby, allowing me to create images I can actually use in my work as the API Evangelist. Having unique filtered images available for use in my writing is valuable to me--enough to justify the couple hundred dollars I spend each month on AWS servers.

I know why I like applying image styles to my photos, but why do others? Most of the image filters we've seen from apps like Prisma are focused on fine art, training image style transfer machine learning models on popular art that people are already familiar with. I guess this allows people to apply the characteristics of art they like to the photographic layer of our increasingly digital lives.

To me, it feels like some sort of art placebo--a way of superficially and algorithmically injecting what our brains tell us is artsy into our fairly empty, digital photo reality. Taking photos in real time isn't satisfying enough anymore. We need to distract ourselves from the world by applying art to our digitally documented physical world--almost the opposite of augmented reality, if there is such a thing. Getting lost in the ability to look at the real world through the algorithmic lens of our online life.

We are stealing the essence of the meaningful, tangible art from our real world, and digitizing it. We take this essence and algorithmically apply it to our everyday life, trying to add some color, some texture, but not too much. We need the photos to still be meaningful, and have context in our life, but we need to be able to spray an algorithmic lacquer of meaning on our intangible lives.

The more filters we have, the more lenses we have to look at the exact same moments we live each day. We go to work. We go to school. We see the same scenery, the same people, and the same pictures each day. Now we are able to algorithmically shift, distort, and paint the picture of our lives we want to see.

Now we can add color to our lives. We are being trained to think we can change the palette, and that we are in control of our lives. We can colorize the old World War 2 era photos of our day, and choose whether we want to color within, or outside, the lines. Our lives don't have to be just binary 1s and 0s, black or white.

Slowly, picture by picture, algorithmic transfer by algorithmic transfer, the way we see the world changes. We no longer settle for the way things are, the way our mobile phone camera catches them. The digital version is the image we share with our friends, family, and the world. It should always be the most brilliant, the most colorful--the painting that catches their eye and makes them stand captivated in front of the wall of your Facebook feed.

We no longer will remember what reality looks like, or what art looks like. Our collective social media memory will dictate what the world looks like. The number of likes will determine what is artistic, and what is beautiful or ugly. The algorithm will only show us what images match the world it wants us to see. Algorithmically, artistically painting the inside walls of our digital bubble.

Eventually, the senses that are stimulated when we see photos will be well worn. They will be well programmed, with known inputs and predictable outputs. The algorithm will be able to deliver exactly what we need, and correctly predict what we will need next, scheduling and queuing up the next fifty possible scenarios--with exactly the right colors, textures, and meaning.

How we see art will be forever changed by the algorithm. Our machines will never see art. Our machines will never know art. Our machines will only be able to transfer the characteristics we see and deliver them into newer, more relevant, timely, and meaningful images. Distilling down the essence of art into binary, and programming us to think this synthetic art is meaningful, and still applies to our physical world.

Like I said, I think people like applying artistic image filters to their mobile photos because it is the opposite of augmented reality. They are trying to augment their digital presence (their hopes of reality) with the essence of what we (the algorithm) think matters to us in the world. This process isn't about training a model to see art, like some folks may tell you. It is about distilling down some of the most simple aspects of what our eyes see as art, and giving this algorithm to our mobile phones and social networks to apply to the photographic digital logging of our physical reality.

It feels like this is about reprogramming people--reprogramming what stimulates you. Automating an algorithmic view of what matters when it comes to art, and applying it to a digital view of what matters in our daily worlds, via our social networks. Just one more area of our life where we are allowing algorithms to reprogram us, and bend our reality to be more digital.

I Borrowed This Image From University of Maine Museum of Art

The Observability Of Uber

I had another observation about the Uber news from this last week, where Uber was actively targeting regulators and police in cities around the globe, delivering an alternate experience to these users because it had them targeted as enemies of the company. To most startups, regulation is seen as the enemy, so these users belong in a special bucket--where they can be excluded from the service, or even actively given a special Uber experience.

It makes me think about the observability of the platforms we depend on, like Uber. How observable is Uber to the average user, to regulators, to law enforcement, to the government? How observable should the platforms we depend on be? Can everyone sign up for an account, use the website, mobile applications, or APIs, and expect the same results? How well can we understand the internal states of Uber, the platform and company, from knowledge obtained through its existing external outputs--the mobile application and API?

When it comes to the observability of the platforms we depend on via our mobile phones each day there are no laws stating they have to treat us the same. The applications on our mobile phones are personalized, making notions of net neutrality seem naive. There is nothing that says Uber can't treat each user differently, based upon their profile score, or if they are law enforcement. We are not entitled to any sort of visibility into the algorithms that decide whether we get a ride with Uber, or how they see us--this is the mystery, magic, and allure of the algorithm. This is why startups are able to wrap anything in an algorithm and sell it as the next big thing.

The question of how observable Uber should be will be defined in the coming months and years. What surprises me is that we are just now getting around to having these conversations, when these companies possess an unprecedented amount of observability into our personal and professional lives. The Uber app knows a lot about us, and in turn, Uber knows a lot about us. I'm thinking the more important question is: why are we allowing so much observability by these tech companies into our lives, with so little in return when it comes to understanding the business practices and ethics behind the company firewall?

Machine Learning Style Transfer For Museums, Libraries, and Collections

I'm putting some thought into next steps for my algorithmic rotoscope work, which is about training and applying image style transfer machine learning models. I'm talking with Jason Toy (@jtoy) over at Somatic about the variety of use cases, and I want to spend some time thinking about image style transfers from the perspective of a collector or curator of images--brainstorming how they can organize and make their works available for use in image style transfers.

Ok, let's start with the basics--what am I talking about when I say image style transfer? I recommend starting with a basic definition of machine learning in this context, provided by my girlfriend and partner in crime Audrey Watters. Beyond that, I am just referring to training a machine learning model by directing it to scan an image. This model can then be applied to other images, essentially transferring the style of one image to any other image. There are a handful of mobile applications out there right now that let you apply a handful of filters to images taken with your mobile phone--Somatic is looking to be the wholesale provider of these features.

Training one of these models isn't cheap. It costs me about $20 per model in GPUs to create--that doesn't consider my time, just my hard compute costs (AWS bill). Not every model does anything interesting. Not all images, photos, and pieces of art translate into cool features when applied to images. I've spent about $700 training 35 filters. Some of them are cool, and some of them are meh. I've had the most luck focusing on dystopian landscapes, which I can use in my storytelling around topics like immigration, technology, and the election.
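The economics sketch out simply. A quick back-of-the-envelope helper, using the $1/hour GPU instance and roughly $20/model figures above (the ~20 hours per model is my inference from those two numbers, not a measured figure):

```python
# Back-of-the-envelope GPU training costs for style transfer models.
GPU_RATE_PER_HOUR = 1.00   # USD -- the AWS GPU instance rate mentioned above
HOURS_PER_MODEL = 20       # assumption: implied by ~$20/model at $1/hour

def training_cost(num_models, rate=GPU_RATE_PER_HOUR, hours=HOURS_PER_MODEL):
    """Estimated GPU spend for training a batch of style transfer models."""
    return num_models * rate * hours

print(training_cost(35))  # 35 filters works out to roughly $700 in compute
```

The useful part of writing it down this way is seeing how the budget scales: every filter that turns out "meh" still costs the same $20 as one that works.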

This work ended up with Jason and I talking about museum and library collections, thinking about opportunities for them to consider their collections in terms of machine learning, and specifically algorithmic style transfer. Do you have images in your collection that would translate well for use in graphic design, print, and digital photo applications? I spend hours looking through art books for the right textures, colors, and outlines. I also spend hours looking through graphic design archives for the movie and gaming industries, as well as government collections, looking for just the right set of images that will either produce an interesting look, or possibly transfer something meaningful to the new images I am applying styles to.

Sometimes style transfers just make a photo look cool, bringing some general colors, textures, and other features to a new photo--there really isn't any value in knowing what image was behind the style transfer, it just looks cool. Other times, a photo can be enhanced by knowing about the image behind the machine learning model, not just transferring styles between images, but also potentially transferring some meaning as well. You can see this in action when I took a Nazi propaganda poster and applied it to a photo of Ellis Island, or when I took an old Russian propaganda poster and applied it to images of the White House. In a sense, I was able to carry some of the 1,000 words behind the propaganda posters over to the new photos I had taken.

It's easy to think you will make a new image into a piece of art by training a model on a piece of art and transferring its characteristics to a new image using machine learning. Where I find the real value is in actually understanding collections of images, while also being aware of the style transfer process, and thinking about how images can be trained on and applied. However, this only gets you so far; there still has to be some value or meaning in how it's being applied, accomplishing a specific objective and delivering some sort of meaning. If you are doing this as part of some graphic design work, it will be different than if you are doing it for fun in a mobile phone app with your friends.

To further stimulate my imagination and awareness I'm looking through a variety of open image collections, from a variety of institutions:

I am also using some of the usual suspects when it comes to searching for images on the web:

I am working on developing specific categories that have relevance to the storytelling I'm doing across my blogs, and sometimes to help power my partners work as well. I'm currently mining the following areas, looking for interesting images to train style transfer machine learning models:

  • Art - The obvious usage for all of this, finding interesting pieces of art that make your photos look cool.
  • Video Games - I find video game imagery to provide a wealth of ideas for training and applying image style transfers.
  • Science Fiction - Another rich source of imagery for the training of image style transfer models that do cool things.
  • Electrical - I'm finding circuit boards, lighting, and other electrical imagery to be useful in training models.
  • Industrial - I'm finding industrial images to work for both sides of the equation in training and applying models.
  • Propaganda - These are great for training models, and then transferring the texture and the meaning behind them.
  • Labor - Similar to propaganda posters, potentially some emotional work here that would transfer significant meaning.
  • Space - A new one I'm adding, for finding interesting imagery that can train models and experimenting with the effect.

As I look through more collections, and gain experience training style transfer models and applying them, I have begun to develop an eye for what looks good. I also develop more ideas along the way for imagery that can help reinforce the storytelling I'm doing across my work. It is a journey I am hoping more librarians, museum curators, and collection stewards will embark on. I don't think you need to learn the inner workings of machine learning, but at least develop enough of an understanding that you can think more critically about the collection you are knowledgeable about.

I know Jason would like to help you, and I'm more than happy to help you along in the process. Honestly, the biggest hurdle is money to afford the GPUs for training the models. After that, it is about spending the time finding images to train models on, as well as applying the models to a variety of imagery as part of some sort of meaningful process. I can spend days looking through art collections, then spend a significant amount of my AWS budget training machine learning models, but if I don't have a meaningful way to apply them, it doesn't bring any value to the table, and it's unlikely I will be able to justify the budget in the future.

My algorithmic rotoscope work is used throughout my writing and helps influence the stories I tell on API Evangelist, Kin Lane, Drone Recovery, and now Contrafabulists. I invest about $150.00 / month in training image style transfer models, keeping a fresh number of models coming off the assembly line. I have a variety of tools that allow me to apply the models using Algorithmia, and now Somatic. I'm now looking for folks who have knowledge of and access to interesting image collections, who would want to learn more about image style transfer, as well as graphic design and print shops, mobile application development shops, and other interested folks who are just curious about WTF image style transfers are all about.

In The Future We Will All Have Multiple Digital Personas

I am captivated by the news about Uber actively targeting regulators and police in cities around the globe. I specifically love thinking about the work that regulators and investigators are having to do to be able to build a case against Uber, and inversely the amount of work that Uber is doing to thwart these investigations and break into new, and oftentimes hostile, markets.

Regulators and police are using burner devices and fake personas to do their work. Uber is delivering fake services and creating fake signals to create a foggy landscape where it can do business. I'm not rooting for law enforcement, regulators, or Uber--I'm rooting for everyone possessing more than one persona, throwaway versions of themselves that are used to distract, obfuscate, hide, and confuse the machine. It's a very beautiful dumpster fire of a digital world we've created for ourselves--good job everyone.

I'm preparing for some international travel in a couple weeks, so I'm firing up my alter egos. They aren't fake personas, but they are alternative versions of myself that will be present when I cross any international border, on devices I can stand to lose, or just throw away. This is the world now. We won't have just a single digital version of ourselves. We will have alternative versions of our personal lives and our work lives, and we'll create fake accounts as they are needed--our children are already well trained in this practice.

This will be the only way we can carve out any sense of privacy in a surveillance economy. Platforms and regulators will have to work overtime to connect the dots. Our digital self will become a schizophrenic reflection of our physical world, where devices have invaded every space and moment, and are trying to identify who we are, what we are doing, and connect the dots between each version of ourselves, as well as those around us. I can't help but feel like the Internet as we know it is somehow fracturing society, and any sense we have of the individual--something that will be difficult to recover from, and I fear we will always be different from here forward.

The Residue Of The Internet's C4I DNA Visible In Uber's Behavior

The military's fingerprints are visible throughout the Internet's history, with much of the history of compute born out of war, so it's no surprise that the next wave of warfare is all about the cyber (it's HUGE). With so much of Internet technology being inseparable from military ideology, and much of its funding coming from the military-industrial complex, it is going to be pretty hard for Internet tech to shake the core DNA programmed into it from its command, control, communications, computers, and intelligence (C4I) seeds.

This DNA is present in the unconscious behavior we see from startups, most recently with the news of Uber deceiving authorities using a tool they developed called Greyball, allowing them to target regulators and law enforcement, and prevent or obscure their access to and usage of the ridesharing platform. User profiling and targeting is a staple of Silicon Valley startups. Startups profile and target their definition of ideal behavior(s), and then focus on getting users to operate within these buckets, or segment them into less desirable buckets and deal (or don't) with them however they deem appropriate.

If you are a marketer or salesperson, you think targeting is a good thing. You want as much information on a single user, and a group of users, as you possibly can, so that you can command and control (C2) your desired outcome. If you are a software engineer, this is all a game to you. You gather all the data points you possibly can to build your profiles--command, control, communications, and intelligence (C3I). The Internet's DNA whispers in your ear--you are the smart one here, everyone else is just a pawn in your game. Executives and investors just sit back and pull the puppet strings on all the actors within their control.

It's no surprise that Uber is targeting regulators and law enforcement. They are just another user demographic bucket. I guarantee there are buckets for competitors, and for their employees who have signed up for accounts. When any user signs up for your service, you process what you know about them, put them in a bucket, and see where they exist (or don't) within your sales funnel (rinse, repeat). Competitors, regulators, and law enforcement all have a role to play; the bucket they get put into, and the service they receive, will be (slightly) different than everyone else's.

We engineers love to believe that we are the puppet masters, when in reality we are the puppets, with our strings pulled by those who invest in us, and by our one true master--Internet technology. We numb ourselves, conveniently forget the history of the Internet, and lie to ourselves that venture capital has our best interests in mind and that they need us. They do not. We are a commodity. We are the frontline of this new type of warfare that has evolved with the Internet over the last 50 years--and we are beginning to see the casualties of this war: democracy, privacy, and security.

This is cyber warfare. It's lower-level warfare in the sense that the physical destruction and blood aren't front and center, but the devastation and suffering still exist. Corporations are forming their own militias, drawing lines, defining their buckets, and targeting groups to deliver propaganda to, while positioning for a variety of attacks against competitors, regulators, law enforcement, and other competing militias. You attack anyone the algorithm defines as the enemy. You aggressively sell to those who fit your ideal profile. You try to indoctrinate anyone you can trust to be part of your militia, and keep fighting--it is what the Internet wants us to do.

What Do You Mean When You Say You Are Training A Machine Learning Model?

I was sharing my latest Algorithmic Rotoscope image on Facebook and a friend asked me what I meant by training a machine learning model. I still suck at quantifying this stuff in any normal way. When you get too close to the fire you lose your words sometimes. It is why I try to step away and write stories about it--it helps me find my words, and learn to use them in new and interesting ways.

Thankfully I have a partner in crime who understands this stuff and knows how to use her words. Audrey came up with the following explanation of what machine learning is in the context of my Algorithmic Rotoscope work:

"Machine learning" is a subset of AI in which a computer works at a problem programmatically without being explicitly programmed to do something specific. In this case, the Algorithmia folks have written a program that can identify certain characteristics in a piece of art -- color, texture, shadow, etc. This program can be used to construct a filter and that can be used in turn to alter another image. Kin is "training" new algorithms based on Algorithmia's machine learning work -- in order to build a new filter like this one based on Russian propaganda, the program analyzes that original piece of art -- the striking red, obviously. The computer does this thru machine learning rather than Kin specifying what it should "see."

I use my blog as a reference for my ideas and thoughts, and I didn't want to lose this one. I'm playing with machine learning so that I can better understand what it does, and what it doesn't do. It helps me to have good explanations of what I'm doing, so I can turn other people on to the concept, and make more sense myself (some of the time). We are going to have to develop an ability to have a conversation about the artificial intelligence and machine learning assault that has already begun. It will be important that we help others get up to speed and see through the smoke and mirrors.

When it comes to training algorithmic models using art, there isn't any machine learning going on. My model isn't learning art. When I execute the model against an image it isn't making art either. I am just training an algorithm to evaluate and remember an image, creating a model that can then be applied to other images--transferring the characteristics from one image to another algorithmically. In my work it is important for me to understand the moving parts, and how the algorithmic gears turn, so I can tell more truthful stories about what all of this is, and generate visuals that complement these stories I'm publishing.

Adopta.Agency In Trump Administration

Adopta.Agency is an ongoing project for me. I'm still using the template as a basis for some custom open data work, but I wanted to pause for a moment and think about what Adopta.Agency means to me in a Trump administration. The need for Adopta.Agency is greater than ever. We need an army of civic-minded individuals to step in and help be stewards of public data. The current administration does not see value in making government more transparent, something that will trickle down to all levels of government, making what we do much more difficult.

To be honest, after the election I hit a pretty big low regarding what I should be doing with open data at the federal level. Now in February I feel a little more optimistic, and I wanted to set a handful of Adopta.Agency goals for myself, and think more about the project in the Trump administration. In the next couple of months I want to:

  • Target Two Datasets - I want to target two datasets in the coming months, liberate them from their current position on government servers, download and convert them to YAML format, and publish them as Adopta.Agency projects on GitHub.
  • API Adoption - In addition to rescuing open data sets from disappearing, I want to enable the reuse of APIs. You can't always save or replace the entire API, but indexing and mapping what is there will help any future projects in the same area.
  • Storytelling - There has been a lot going on when it comes to rescuing government data in the last 60 days. Much of it has been centered around climate data -- I want to tell more stories of work going on beyond just Adopta.Agency.
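The liberation step in the first goal above can be sketched in just a few lines of Python. This is only a rough illustration--the filenames and field names are hypothetical, and I'm hand-rolling the YAML output to keep the sketch dependency-free (a real Adopta.Agency project would use a proper YAML library like PyYAML, which handles quoting and escaping):

```python
import csv


def csv_to_yaml(csv_path, yaml_path):
    """Convert a liberated CSV dataset into a YAML list of records.

    Note: this naive emitter does not quote or escape values; use a
    real YAML library for anything beyond simple illustrative data.
    """
    with open(csv_path, newline="") as src:
        rows = list(csv.DictReader(src))

    lines = []
    for row in rows:
        first = True
        for key, value in row.items():
            # The first field of each record gets the "- " list marker,
            # subsequent fields are indented beneath it
            lines.append(("- " if first else "  ") + f"{key}: {value}")
            first = False

    with open(yaml_path, "w") as out:
        out.write("\n".join(lines) + "\n")
    return len(rows)
```

From there, the YAML file gets committed to a GitHub repository alongside the Adopta.Agency template pages, where it can live safely regardless of what happens to the original government server.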

The Trump administration doesn't change the Adopta.Agency mission and purpose at all, it just raises the stakes. I still view the federal government as a partner in this; we can't do the hard work of making government more observable without its involvement. However, it is a much more hostile and unfriendly environment right now, making it even more urgent that we adopt existing data sets and give them new life in a safer place until the right partners in the public and private sector can be found.

It is easy to get overwhelmed by this work--I do often. I'm going to start by identifying two data sets, downloading the Adopta.Agency blueprint, and getting to work liberating the data and publishing it to GitHub. I find the process therapeutic, and it helps me process what is going on right now--I hope you will join in. I look forward to hearing your story.

The Random Calls Home That An Application Makes From My Home

I have been running Charles Proxy locally for quite some time now. I began using it to reverse engineer the APIs behind some mobile applications, and continued using it to map out the APIs I'm depending on each day. I regularly turn on Charles Proxy and export a listing of the HTTP calls made while I'm working, every five minutes. These files get moved up into the cloud using Dropbox, where I have a regular CRON job processing each call made--profiling the domain and the details of the request and response for later review.
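The processing side of that CRON job can be sketched roughly like this--assuming the Charles exports have already been flattened into CSV files with `host` and `path` columns (the export format and these column names are assumptions on my part, and will vary with your Charles version and export settings):

```python
import csv
from collections import Counter


def profile_calls(csv_path):
    """Tally HTTP calls per domain from a CSV export of captured traffic.

    Returns a Counter mapping each domain to the number of calls
    observed, which makes it easy to see which motherships an
    application pings the most.
    """
    domains = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            domains[row["host"]] += 1
    return domains
```

Running something like this over a day of five-minute exports quickly surfaces who is phoning home the most--for example, `profile_calls('2017-03-06.csv').most_common(5)` gives you the five chattiest domains.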

This process has shed some light on the application architecture of many of the tools and services I depend on. It's fascinating to see the number of pings home the average application will make when open, or running in the background. In addition to running Charles Proxy and understanding how these applications are communicating with their mothership from within my home, I downloaded Little Flocker--providing me a peek at another layer of application architecture, and how these apps interact with my laptop.

Little Flocker tells me each time an application is writing or accessing a file, or turning on my audio, video, and other items. After a day of running it, I have been given another glimpse of the architecture of the apps I'm depending on. One example of suspicious application architecture comes from Citrix. I haven't been on a call using the app in at least 4 days, and usually I just uninstall the app after use, but it was interesting to see it trying to write files on a regular basis, even though I don't have the application open. Why do they need to do this? It looks like it is checking for updates, but I'm not sure why it needs to when I'm not running it.

I wish applications would provide a list of the remote calls they make to home base. I've talked with several platform providers about how they view this layer of their apps, and their thoughts about pulling back the curtain and being more transparent about the APIs behind their apps--they usually aren't very interested in having these conversations with end-users, and often see this activity as their proprietary secret sauce. The part that interests me is the fact that these client interactions, API calls, and data transmissions are happening here in my home, on my laptop. I know that tech companies see this as us users operating on their platforms, but in reality, they are entering our homes and making calls home to the platform using our Internet.

Sure, we all agree to terms of service that make all of this legally irrelevant--they have their asses covered. It still doesn't change the fact that many desktop, web, and mobile application developers are exploiting the access they have to our lives. With the bad behavior we've seen from technology companies, government entities, and hackers in recent years, I feel like this level of access isn't sustainable or healthy--especially when apps are poorly architected, or architected with a lack of respect for the end-user environment. This is my laptop, in my home, engaging in a personal or business relationship with your company--please be respectful of me, my space, and my privacy.

My Style Of Writing Gives Me A Single URL For My Thoughts

I was just having a conversation with a friend on a social network about test-driven development (TDD) and behavior-driven development (BDD). As we progressed through the conversation, I used both of my blogs, kinlane.com and apievangelist.com, as references for thoughts I've had on the subject, providing them with a single URL to get more information from me.

I shared URLs for 3-5 ideas/thoughts I've had on the subject, giving me a much better way to recall what I know and thoughts I've had -- from within my domain. I have 10 years of posts on kinlane.com, and I am approaching 7 years on apievangelist.com. This gives me a rich and efficient way to recall thoughts I've had, build on them, and quickly share and convey these thoughts to others using URLs--this is why I tell stories.

This process also drives people to my websites, hopefully building trust with them when it comes to my domains. When you want information on APIs, you go to apievangelist.com. When you want slightly deranged rants about technology that may or may not make sense, you go to kinlane.com. I'm working on improved tagging across my content, something that ultimately is a manual thing, as nobody knows the content of my work like I do--not even the AIs and machine learningz.

I do not obsess over SEO for my websites. The natural progression of my research, and my focus on helping people understand the world of APIs, lends itself nicely to having a wealth of links and interconnected stories about a wide range of topics I am passionate about--which translates to some healthy, organically generated SEO. Talking through this stuff helps me execute on all of it in a more consistent way--the more I write, the more ideas I have, and the more URLs I have to share. Which makes all of this go round for me, and hopefully for you along the way.

My House Is Infested With IoTs

We were just having a conversation about the information our Sonos is sending back and forth. It is one of a handful of devices we've willfully purchased and plugged into our home network. In today's environment, we are becoming hyper-aware of what our applications and devices know about us and are communicating outside of our network and local storage.

With two people in a small home/office environment, we have 4 iPhones, 2 iPads, 3 laptops, 1 desktop, 1 printer, 2 Sonos speakers, 1 Time Capsule, and 1 smart TV connected all the time. We also have 3 video cameras and 3 drones that can connect to the network and/or broadcast a network, but aren't necessarily always on. We aren't huge home IoT people, but that seems like a significant number of devices for a single network, and quite a lot to think about when it comes to managing our digital bits.

Our house is infested with IoTs. Ok, it's mostly because of my drone and camera obsession, but the printer, Sonos, and other devices are definitely a little more on the normal side of things. When you stop to think about all this IoT stuff for a bit, it's pretty crazy what we have let into our world. These little devices run on our home network, do things for us, and regularly talk back to their masters in the cloud. What do they say about us? What information do they keep track of?

I fully understand that my obsession with our data at this level is considerably greater than the average person's, but I am astounded at people's inability to stop corporations (and government) from infiltrating our homes in this way. I'm not immune. I have the usual suspects when it comes to home devices, as well as some more specialized IoT devices on my network. I am tuning into which devices I have, and what data they are sending to the cloud, partly because I'm concerned with capturing the data exhaust from my world and making a living from it, but secondarily because I am increasingly concerned about privacy, security, and other worrying activity from these devices I've invited into my home, and the companies who operate them.

My smart TV tracks my viewing habits, my Sonos tracks my listening habits, and my laptop, tablet, and mobile device track the rest. Some of these devices are fixed in my home, while other more portable devices travel with me, and then come back home to get plugged in, recharged, and synced with the cloud. I'm using my drones and video cameras to gather data, images, and audio from the world around me, and bringing them back to my home for filtering and organization locally and in the cloud. My house isn't just infested with IoT devices, it's infested with the data and other bits generated by these IoT devices. These are valuable little bits, and they are something companies are scrambling to get their hands on.

I'm on a quest to make sure I get a piece of the action when it comes to selling my bits--the bigger piece of the pie, the better. I'm also looking to help drive the conversation around what the technology companies are doing with our bits. I do not expect to win this war, I'm just looking to push back wherever and whenever I can, and establish a greater understanding around what data is being generated and tracked, both inside and outside of my home. The more I'm in tune with this activity, the more I can develop and evolve the tactics I will need to keep resisting and stay ahead of the curve.

Can You See The Algorithm?

Can you see an algorithm? Algorithms are behind many of the common analog and digital actions we execute daily. Can you see what is going on behind each task? Can you observe what is going on? To use an antiquated analogy, can you take the back off your watch? An example in our world right now would be the #immigration debate--whether you are viewing it on Twitter, Facebook, or any other source of news and discussion. Can you see the algorithm that powers Twitter's or Facebook's #immigration feed?

The algorithms that drive the web are often purposefully opaque and unobservable, yet they are right behind the curtain of your browser, UI, and social media content card. They are supposed to be magic. You aren't supposed to be able to see the magic behind them. The closest we can get to seeing an algorithm is via its APIs, which (might) give us access to an algorithm's inputs and outputs, hopefully making it more observable. APIs do not guarantee that you can fully understand what an API, or the algorithm behind it, does, but they do give us an awareness and working examples of the inputs and outputs--falling just short of being able to actually see anything.

You can develop visualizations, workflow diagrams, images, and other visuals to help us see reflections of what an algorithm does using its API (if one is available), but if we don't have a complete picture of the surface area of an algorithm, or of all its parameters and other inputs, we will only paint a partial picture of it. I'm not just fascinated with trying to find different ways of seeing an algorithm--I also want some dead simple ways to offer up a shared meaning of what your eyes are seeing, and make an immediate impact.

How do I distil the algorithm behind the #immigration debate hashtag on Twitter and Facebook down into a single image? I don't think you can. There are many different ways to interpret the meaning of the data I can pull from the Twitter and Facebook APIs. Which users are part of the conversation? Which users are bots? What is being said, and what is the sentiment? There are many different ways I can extract meaning from this data, but ultimately it is still up to me, the human, to process it and distil it down into a single meaningful image that will speak to other humans. Even though the image could be worth 1000 words, which thousand words would that be?

I blog as the API Evangelist to polish my API stories. I write code to polish how I can use APIs to tell better stories. I take photos in the real world so that I can tell better stories online and in print. I'm trying to leverage all of this to help me better tell stories about how algorithms are pulling the strings in our world, and help everyone see algorithms. Sadly, I do not think we will ever precisely see an algorithm, but we can develop ways of refracting light through them, helping us see the moving parts, or sometimes, more importantly, see what parts are missing.

One of the things I'm working on with my algorithmic storytelling is developing machine learning filters that help me shine a light on the different layers and gears of an algorithm. I do not think we can use the master's tools to dismantle the house, but I don't want to dismantle the house--I just want to install a gorgeous floor-to-ceiling window that spans one side of the house, and maybe a couple of extra windows. I want reliable and complete access to the inputs and outputs of an algorithm so that I can experiment with a variety of ways to see what is going on, painting a picture that might help us have a conversation about what an algorithm does, or does not do.

I recently took a World War 2 Nazi propaganda poster, trained a machine learning model using it, and then applied the resulting filter to a picture of the waiting room at Ellis Island. When looking at the picture you are seeing the waiting room where millions of immigrants have waited for access to the United States, but the textures and colors you are seeing are filtered through a machine learning interpretation of the World War 2 Nazi poster. When you look at the image you may never know the filter is being applied--it is just the immigration debate. However, what you are being fed algorithmically is being painted by a very loud, bot-driven, hateful, and false-content-fueled color and texture palette.

Granted, I chose the subject matter that went into the machine learning algorithm, but this was intentional. Much like the handful of techies who developed and operate bots, memes, and alternative news and fact engines, I was biased in how I was influencing the algorithm being applied. However, if you don't know the story behind it, and don't understand the inputs and outputs of what is happening, you think you are looking at just a photo of Ellis Island. By giving you awareness and more of an understanding of the inputs--a regular photo of Ellis Island, a filter trained using a World War 2 Nazi poster, plus some machine learning voodoo and wizardry--poof, we have shined a light on one layer of the algorithm, exposing just a handful of the potentially thousands or millions of gears driving the algorithms coloring the immigration debate.

I am sure folks will point out what they see as the negativity in this story. If you are denying the influence of white nationalists on this election, and specifically how this was done algorithmically, you are experiencing a dangerous form of denial. This story is not meant to paint a complete picture, but to shine light on a single layer in a way that helps people understand a little more about what is happening behind the immigration debate online. It's just one of many exercises I'm conducting to help me tell more stories and create compelling images that help folks better understand and see algorithms.