💻 When computers make content for each other
Media, metadata and machine learning: Context Collapse! #101
Today in Context Collapse!: Why algorithms are increasingly writing content for other algorithms to read. You puny humans aren’t the audience—no, this is computers making content for other computers.
SEO, which you may well have heard of, is short for “search engine optimization.” As Search Engine Land puts it, SEO is “the process of improving your site to increase its visibility when people search for products or services related to your business in Google, Bing, and other search engines. The better visibility your pages have in search results, the more likely you are to garner attention and attract prospective and existing customers to your business.”
Improving your site, in most cases, means adding metadata to your site, inserting keywords into copy, and redesigning your web pages and social media presence so your site will (hopefully!) land closer to the top of search results.
In other words, SEO is all about making content that’s read by computers instead of humans.
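To make “content that’s read by computers” concrete, here’s a minimal sketch of the kind of machine-readable metadata an SEO pass adds to a page. The page title, description, and schema.org fields are invented for illustration; real SEO tooling does far more than this:

```python
import json

def seo_head(title, description, keywords, canonical_url):
    """Build the machine-readable <head> metadata that search crawlers
    parse long before any human sees the page."""
    meta = [
        f"<title>{title}</title>",
        f'<meta name="description" content="{description}">',
        f'<meta name="keywords" content="{", ".join(keywords)}">',
        f'<link rel="canonical" href="{canonical_url}">',
    ]
    # JSON-LD structured data: a block written purely for machines,
    # invisible to anyone reading the rendered page.
    structured = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": title,
        "description": description,
    }
    meta.append(
        f'<script type="application/ld+json">{json.dumps(structured)}</script>'
    )
    return "\n".join(meta)
```

None of this output is meant for human eyes; it exists so crawlers can classify and rank the page.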
But what happens when we jump to the next stage and computers begin automating SEO and start making content for other computers to read?
Algorithms Writing for Algorithms
I started thinking about this when I saw a tweet by Ars Technica co-founder Jon Stokes.
“Algos writing for an audience of algos, to honeypot segmented humans into a conversion funnel.” That sounds very, well, 2021 marketing! (For those interested, Stokes has an essay on the general area.)
Stokes is writing about a very particular slice of the computers-making-content-for-other-computers-to-read market. In this case, marketers who are interested in manipulating the internet’s many, many search-and-discovery engines in order to reach niche audiences.
But let’s zoom out a bit.
You’re Already Consuming Content Made By Computers
Computers generate content all the time. At the highest level, algorithms serve as curators. Your Facebook news feed and your Twitter feed? Curated by algorithms that prioritize content they think will keep you on the service and looking at ads longer. When your photo storage service sends you customized photo albums of trips and surfaces pictures from years ago in your phone notifications? That’s all algorithms.
Computers also *make* content all the time, with minimal human oversight, for consumption either by human beings or other machines.
Automated journalism is the practice of software automatically generating news stories in plain, boring formats—the sports match results and quarterly earnings reports of the world—based on structured data that was entered by a human at some point. Companies in this space include Automated Insights and Narrative Science.
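A toy sketch of how that template-based generation works—the team names, wording, and box-score format here are invented, and real systems like those from Automated Insights are far more elaborate:

```python
def game_recap(box_score):
    """Fill a canned template from structured data -- the core move of
    automated sports and earnings coverage."""
    home, away = box_score["home"], box_score["away"]
    winner, loser = (home, away) if home["score"] > away["score"] else (away, home)
    margin = winner["score"] - loser["score"]
    # A little rule-based variety so the copy feels less canned.
    verb = "edged" if margin <= 3 else "beat"
    return (f"{winner['name']} {verb} {loser['name']} "
            f"{winner['score']}-{loser['score']} on {box_score['date']}.")

print(game_recap({
    "date": "Sunday",
    "home": {"name": "Springfield", "score": 24},
    "away": {"name": "Shelbyville", "score": 21},
}))
# → Springfield edged Shelbyville 24-21 on Sunday.
```

Swap in a new box score and the system writes a “new” story with zero human involvement—which is exactly why this format of coverage was automated first.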
Publishers have been using automation for years, especially in the creation of online video. Companies like Wibbitz have made a lot of money automating video creation for publishers and marketers—remember that “pivot to video” a few years ago when every media outlet suddenly started creating video content for Facebook and that whole Tronc thing? So, so much of that was automated video.
Automated online video is easy! You open the software’s dashboard, upload some images, add some captions, choose some music or audio to play in the background and… presto! The algorithm makes a video in a couple of clicks. No technical skill required. You can even have the algorithm make the video with no human input if you’re feeling confident.
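Under the hood, that point-and-click pipeline boils down to assembling a data structure—images, captions, timings, a soundtrack—that a renderer then turns into frames. A hypothetical sketch (the class names and file names are invented; this models the spec, not the rendering):

```python
from dataclasses import dataclass, field

@dataclass
class Slide:
    image: str            # path to an uploaded image
    caption: str          # overlay text
    seconds: float = 3.0  # how long the slide stays on screen

@dataclass
class Storyboard:
    slides: list = field(default_factory=list)
    audio: str = ""       # background track

    def add(self, image, caption, seconds=3.0):
        self.slides.append(Slide(image, caption, seconds))
        return self       # allow chaining, like clicking through a dashboard

    def duration(self):
        return sum(s.seconds for s in self.slides)

board = (Storyboard(audio="upbeat.mp3")
         .add("chart.png", "Revenue is up")
         .add("office.jpg", "Meet the team", seconds=4))
```

Once the “video” is just data like this, generating it with no human input is a matter of pointing the pipeline at a feed of images and headlines.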
Human Audience vs. Machine Audience
Most of the time, creating content for human audiences centers on grabbing attention in order to generate a response [1]. Creating content for machine audiences, however, centers on manipulating the parameters of an algorithm to generate a response [2] in order to affect humans.
Both of them have one thing in common: Leveraging information to generate a response.
Creating content for machine audiences, in the context of journalism, writing, marketing and advertising, largely takes the form of creating weird websites, strange social media posts or confusing content farms with copy and images that don’t make sense to humans. But they make perfect sense to the search engines and services that mediate the online experiences human beings have.
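A toy illustration of why copy that reads as gibberish to humans can still “make perfect sense” to a machine: a naive term-frequency relevance score. Real search engines are vastly more sophisticated than this, but the incentive it creates—repeat the query terms, sense be damned—is the same one content farms chase:

```python
def naive_relevance(query, page_text):
    """Score a page by how densely the query's terms appear -- the kind
    of crude signal keyword-stuffed copy is built to maximize."""
    words = page_text.lower().split()
    if not words:
        return 0.0
    terms = query.lower().split()
    hits = sum(words.count(t) for t in terms)
    return hits / len(words)

human_copy = "Our handmade candles burn cleanly and smell wonderful"
stuffed = ("best candles cheap candles buy candles candles sale "
           "candles candles discount candles")

print(naive_relevance("buy candles", human_copy) <
      naive_relevance("buy candles", stuffed))  # → True
```

To a human, the stuffed copy is nonsense; to this scoring function, it’s the better page. That mismatch is the whole business model of the content farm.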
Welcome to the future!
Endless AI-generated spam risks clogging up Google’s search results (The Verge / James Vincent)
Why AI-Generated Content Won’t Break the Web (Marketing Artificial Intelligence Institute / Stephen Jeske)
As Online Video Surges, Publishers Turn to Automation (New York Times / John Herrman)
Steve Morgan at Cybersecurity Ventures just named me as one of their top cybersecurity copywriters in 2021.
Elaine McArdle at the Harvard Law Bulletin just wrote about my team’s work on the Disinfodex project:
One of its current projects is Disinfodex, a publicly available and searchable database of disinformation campaigns by social media platforms created by a group of cross-disciplinary experts who serve as Assembly fellows. Within a cohort of student fellows, a group is examining the for-profit model of social media platforms, where companies push content — and make more money — by leveraging a user’s personal data to develop sophisticated algorithms that keep them online and scrolling. How? Often, by presenting them with increasingly extreme content, some of which is not only false but potentially dangerous.
[1] This response can be anything from altruistic (a news outlet reporting on civic corruption that would otherwise not be exposed) to self-serving (a marketer getting a customer to buy a product). Regardless of the response, the methods used are largely the same.
[2] In the SEO context, writing for algorithms means making sure a listing shows up higher in the Etsy results or a YouTube video reaches 500,000 potential viewers instead of 50,000. In other contexts, it could mean many things, up to and including teaching computers to program other computers—something there’s quite a bit of interest in these days.