Kin Lane

The Behavioral Surplus From Me Reading My RSS Feeds

I’m spending a lot of time thinking about behavioral surplus thanks to Shoshana Zuboff and her book “The Age of Surveillance Capitalism”. I’m about halfway through, and I find myself getting lost in other thoughts as I write notes in the margins (yes, I write in my books), resulting in a number of connections being made between my work and the disturbing shift in the capitalist landscape she is shining a light on. I’ve long been aware of, and concerned about, the API-driven marketplace that is my digital self, but Zuboff’s work has taken my view of the landscape to a whole new level, changing how I view many of the technological tools I depend on daily, and forcing me to think a little more critically about the relationships I’m fostering with companies online.

As I move from reading The New Yorker in my bathroom, to The Age of Surveillance Capitalism out on the deck, and then back to my couch to read my RSS feeds online using Feedly, I can’t help but think about my daily reading in terms of behavioral surplus: identifying the tangible behavioral data exhaust that is generated, which then becomes available for Software as a Service (SaaS) providers to extract, process, and use to enrich their own data sets and machine learning models. The New Yorker, and Hachette Book Group, the publisher of The Age of Surveillance Capitalism, do not have access to this behavioral surplus, but Feedly, the software I pay a subscription to use online, does. This turns a regular daily ritual of mine into an opportunity to harvest some behavioral surplus in the form of data, and then begin to make sense of human behavior at scale. Companies like Feedly aren’t particularly interested in my individual behavior, but they are interested in understanding behavior at scale, and in using this behavioral surplus to amass buckets of behavioral data that dovetail nicely with other assumptions a company will make.

I wanted to take a moment to catalog some of the surplus data that is generated from me just reading my RSS feeds for about an hour in my Feedly web application:

  • Subscribe To - Every time I subscribe to an RSS feed, this information is added to my profile for use later.
  • How Long - How much time I put into cultivating feeds is a default part of surplus data being generated.
  • Click and Read - Everything I click on and read adds a layer of behavioral surplus to be extracted.
  • Tag and Organize - Everything I tag and organize shares my approach to taxonomy and understanding.
  • Share With Others - The tags I turn into feeds and share continue painting a picture of what matters.
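
To make the surplus concrete, the five data points above can be sketched as a stream of event records. This is a minimal sketch in Python, assuming a hypothetical event schema of my own invention; Feedly’s actual internal data model is not public, and these field names are illustrative only.

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical event record -- the field names here are my guesses at
# what a feed reader might log, not Feedly's actual schema.
@dataclass
class BehavioralEvent:
    user_id: str
    action: str    # "subscribe", "click", "read", "tag", "organize", "share"
    target: str    # feed URL or entry URL
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    metadata: dict = field(default_factory=dict)

# An hour of reading produces a stream of events like these
# (placeholder URLs, not my actual subscriptions).
session = [
    BehavioralEvent("kin", "subscribe", "https://example.com/feed.xml"),
    BehavioralEvent("kin", "click", "https://example.com/post-1"),
    BehavioralEvent("kin", "read", "https://example.com/post-1"),
    BehavioralEvent("kin", "tag", "https://example.com/post-1",
                    metadata={"tag": "apis"}),
]

# Aggregated across thousands of users, even simple counts by action
# begin to sketch a behavioral profile.
profile = Counter(event.action for event in session)
```

Nothing in this sketch is sophisticated, and that is the point: the raw material of behavioral surplus is just a log of timestamped actions, and the value emerges from aggregating it at scale.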

When you take these behavioral data points and multiply them by a couple thousand feeds, and hundreds of thousands of individual blog posts, GitHub updates, and Tweets that I subscribe to via my Feedly, they can paint a pretty relevant, real-time portrait of what Kin Lane is thinking about. As an API professional, as the API Evangelist, and as the API Architect at F5, I’m giving away access to a pretty relevant snapshot of what is going on in my head. Granted, at scale RSS isn’t seeing the type of adoption people are looking for in the raw behavioral surplus data markets. However, when you add in the social layers like Twitter, Facebook, LinkedIn, and GitHub, and loop in my email, documents, photos, and other dimensions, you begin to get a pretty valuable slice of what Kin Lane, a 47-year-old white male living in Seattle, WA, is up to.

While this type of surveillance capitalism grinds my gears, what really gets me worked up is that people feel this data isn’t mine, or that it doesn’t represent me. People often tell me they aren’t worried about privacy or surveillance, as long as they get convenience. However, I can’t imagine people would like feeling like digital livestock when it comes to these platform operators, and the people they sell data, predictions, and other products derived from the behavioral surplus extracted from our daily existence. I regularly export my OPML file, and I run daily cron jobs to keep stories I tag and share synced with my storytelling projects, so I have access to some of the data I generate within Feedly. I don’t have access to every bit of data I generate within the web application, but I’m also fairly confident that the people behind Feedly aren’t up to anything I should be concerned about: this is why I like smaller technology providers, who aren’t dedicated surveillance capitalists.
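
The OPML export is the one piece of this surplus I can reliably pull back out of Feedly on my own terms. Here is a minimal sketch of reading feed subscriptions out of an exported OPML file, using only the Python standard library; the sample document and feed URL below are placeholders I made up, not my actual export.

```python
import xml.etree.ElementTree as ET

# A minimal sample of the OPML format that feed readers export;
# placeholder feed URLs, not my actual subscriptions.
opml = """<opml version="1.0">
  <body>
    <outline text="APIs" title="APIs">
      <outline type="rss" text="Example Blog"
               xmlUrl="https://example.com/feed.xml"
               htmlUrl="https://example.com/"/>
    </outline>
  </body>
</opml>"""

root = ET.fromstring(opml)

# Every <outline> element carrying an xmlUrl attribute is a feed
# subscription; walking the whole tree recovers the subscription list,
# including feeds nested inside category outlines.
feeds = [node.attrib["xmlUrl"]
         for node in root.iter("outline")
         if "xmlUrl" in node.attrib]
```

A daily cron job running a script like this against a fresh export is enough to keep an independent copy of the subscription list, which is exactly the kind of small reclamation of data this paragraph is about.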

Another aspect of this that is of huge concern to me is how people interpret the content I am subscribing to, clicking on, reading, tagging, organizing, and sharing on Feedly. I read a lot of articles on breaches, hacking, and other security or cybersecurity related issues. Without context, one could make the wrong assumptions about why I’m subscribing to some of these news and other content sources. Extracting and processing this behavioral surplus as data is one thing, but attaching meaning to why someone is subscribing to, clicking on, reading, tagging, organizing, and sharing each piece of content can get very problematic, very quickly. I am an API security professional, which requires me to keep an eye on some grey areas of the tech sector, and if you are lacking the context of what I do for a living, you might make the wrong assumptions about which bucket to put me in.

Giving “the machine” a look inside my daily RSS feeds isn’t one of my top concerns; I have bigger digital-self illnesses to tackle. However, it does provide me with an exercise in learning to identify the ways in which my everyday activities can be harvested for behavioral surplus. I’m captivated by understanding when these often API-driven mechanisms that identify, extract, process, and deliver upon each data point I discuss above (subscribe, click, read, tag, organize, share) go bad. I’m concerned about when a company goes from helping or enabling you with these APIs to surveilling you. I’m concerned that oftentimes they feel this behavioral surplus is just there for the picking, and not something that belongs to me and is an active representation of who I am as a human being. I’m concerned with how tech companies and their machine learning models will interpret the content I’m curating as part of my feeds each day, and that maybe they don’t have the right context, or the right perspective, to truly understand what it is that I’m doing, and will automate decision making based upon incorrect assumptions. I have a lot of concerns.