Federico: Historically, technology has usually advanced hand in hand with new creative opportunities for people. From word processors allowing writers to craft their next novel to digital cameras letting photographers express themselves in new ways or capture more moments, technological progress over the past few decades has sustained creators and, perhaps more importantly, spawned industries that couldn’t exist before.
Technology has enabled millions of people like myself to realize their life’s dreams and make a living out of “creating content” in a digital age.
This is all changing with the advent of Artificial Intelligence products based on large language models. If left unregulated, we believe the change may be for the worse.
Over the past two years, we’ve witnessed the arrival of AI tools and services that often use human input without consent in pursuit of faster and cheaper results. The fixation on maximizing profits above all else isn’t a surprise in a capitalist industry, but it’s highly concerning nonetheless – especially since, this time around, the majority of these AI tools have been built on a foundation of non-consensual appropriation, also known as – quite simply – digital theft.
As we’ve documented on MacStories and as other (and larger) publications have also investigated, it’s become clear that the foundation models underlying various LLMs have been trained on content sourced from the open web without requesting publishers’ permission upfront. These models can then power AI interfaces that regurgitate similar content or provide answers with hidden citations that seldom prioritize driving traffic to publishers. As far as MacStories is concerned, this is limited to text scraped from our website, but we’re seeing this play out in other industries too, from design assets to photos, music, and more. And to top it all off, publishers and creators whose content was appropriated for training or crawled for generative responses (or both) can’t even ask AI companies to be transparent about which parts of their content were used. It’s a black box where original content goes in and derivative slop comes out.
We think this is all wrong.
The practices followed by the majority of AI companies are ethically unfair to publishers and brazenly skirt the line of copyright infringement; they must be regulated. Most worryingly, if ignored, we fear that these tools may lead to a gradual erosion of the open web as we know it, diminishing individuals’ creativity and consolidating “knowledge” in the hands of a few tech companies that built their AI services on the backs of web publishers and creators without their explicit consent.
In other words, we’re concerned that, this time, technology won’t open up new opportunities for creative people on the web. We fear that it’ll destroy them.
We want to do something about this. And we’re starting with an open letter, embedded below, that we’re sending on behalf of MacStories, Inc. to U.S. Senators who have sponsored AI legislation as well as Italian members of the E.U. Special Committee on Artificial Intelligence in a Digital Age.
In the letter, which we encourage other publishers to copy if they so choose, we outline our stance on AI companies taking advantage of the open web for training purposes, not compensating publishers for the content they appropriated and used, and not being transparent regarding the composition of their models’ data sets. We’re sending this letter in English today, with an Italian translation to follow in the near future.
I know that MacStories is merely a drop in the bucket of the open web. We can’t afford to sue anybody. But I’d rather hold my opinion strongly and defend my intellectual property than sit silently and accept something that I believe is fundamentally unfair for creators and dangerous for the open web. And I’m grateful to have a business partner who shares these ideals and principles with me.
With that being said, here’s a copy of the letter we’re sending to U.S. and E.U. representatives.