The Prizewinner's Speech

In her speech, Meredith Whittaker warns about the concentrated power of the tech industry and explains why there is nonetheless reason for optimism right now.

Dear friends, audience members, venue staff, and all of the people in my life without whose wisdom and support I would not be here today – thank you so very much. It’s a deep honor to receive the 2024 Helmut Schmidt Future Prize and I am very grateful to the Jury for this recognition, and to you, Francesca, for your incredibly humbling remarks in your laudatory speech. I’m a big admirer of your work, so this is even more special to me.

I am accepting this award at a stark moment in history. We are watching bulwarks of the global rules-based order groan, sway, and buckle as illiberal pressures mount. And it is with this in mind that I ask your forgiveness as I proceed to deliver an urgent call in a context generally reserved for acknowledgements, thanks, and optimistic visions.

Make no mistake – I am optimistic – but my optimism is an invitation to analysis and action, not a ticket to complacency. 

With that in mind, I want to start with some definitions to make sure we’re all reading from the same score. Because so often, in this hype-based discourse, we are not. And too rarely do we make time for the fundamental questions – whose answers, we shall see, fundamentally shift our perspective. Questions like: what is AI? Where did it come from? And why is it everywhere, carried by promises of omniscience, automated consciousness, and what can only be described as magic?

Well, first answer first: AI is a marketing term, not a technical term of art. The term “artificial intelligence” was coined in 1956 by cognitive and computer scientist John McCarthy – about a decade after the first proto-neural network architectures were created. In subsequent interviews McCarthy was very clear about why he invented the term. First, he didn’t want to include the mathematician and philosopher Norbert Wiener in a workshop he was hosting that summer. You see, Wiener had already coined the term “cybernetics,” under whose umbrella the field was then organized. McCarthy wanted to create his own field, not to contribute to Norbert’s – which is how you become the “father” instead of a dutiful disciple. This dynamic will be familiar to those of us acquainted with “name and claim” academic politics. Second, McCarthy wanted grant money. And he thought the phrase “artificial intelligence” was catchy enough to attract such funding from the US government, which at the time was pouring significant resources into technical research in service of post-WWII Cold War dominance.

Now, over the term’s nearly 70-year history, “artificial intelligence” has been applied to a vast and heterogeneous array of technologies that bear little resemblance to each other. Today, as throughout that history, it connotes more aspiration and marketing than a coherent technical approach. And its use has gone in and out of fashion, in time with funding prerogatives and the hype-to-disappointment cycle.

So why, then, is AI everywhere now? Or, why did it crop up in the last decade as the big new thing? 

To answer that question, we need to face the toxic surveillance business model – and the Big Tech monopolies that built their empires on top of this model.

The roots of this business model can be traced to the 1990s, as scholar Matthew Crain’s work illuminates. In a rush of enthusiasm to commercialize networked computation, the Clinton administration laid down the rules of the road for the profit-driven internet in 1996. In doing so, they committed two original sins–sins that we’re still paying for today. 

First, even though they were warned by advocates and agencies within their own government about the privacy and civil liberties concerns that rampant data collection across insecure networks would produce, they put NO restrictions on commercial surveillance. None. Private companies were unleashed to collect and create as much intimate information about us and our lives as they wanted–far more than was permissible for governments. (Governments, of course, found ways to access this goldmine of corporate surveillance, as the Snowden documents exposed.) And in the US, we still lack a federal privacy law in 2024. Second, they explicitly endorsed advertising as the business model of the commercial internet–fulfilling the wishes of advertisers who already dominated print and TV media. 

This combination was–and is–poison. Because, of course, the imperative of advertising is “know your customer,” in service of identifying the people most likely to be convinced to buy or do the things you want them to. And to know your customer you need to collect data on them. This incentivized mass surveillance, which now feeds governments and private industry well beyond advertising, with strong encryption serving as one of our few meaningful checks on this dynamic.

On this toxic foundation, over the course of the 2000s, the Big Tech platforms established themselves through search, social media, marketplaces, ad exchanges, and much more. They invested in research and development to enable faster and larger-scale data collection and processing, and to build out the computational infrastructures and techniques that could facilitate such collection and ‘use’ of data. Economies of scale, network effects, and the self-reinforcing dynamics of communications infrastructures enabled the firms that were early to this toxic model to establish monopoly dominance. This was aided by the US government’s use of soft power, trade agreements, and imperial dominance to ensure that the EU and other jurisdictions adopted the US paradigm.

This history helps explain why the majority of the world’s Big Tech corporations are based in the US, with the rest emerging from China. The US got a head start, via military infrastructure and neoliberal policies and investment, while China built a self-contained market capable of supporting its own platforms, with its own norms for content that further limited external competition. Which also means that the story of the EU “stifling innovation via regulation” is both wrong and suspiciously self-serving when it comes from the mouths of tech giants and their hype men.

Here you might pause, reflect, and ask… but what does this sordid history have to do with AI? Well, it has everything to do with AI.

In 2012, right as the surveillance platforms were cementing their dominance, researchers published a very important paper on AI image classification, which kicked off the current AI gold rush. The paper showed that a combination of powerful computers and huge amounts of data could significantly improve the performance of AI techniques – techniques that themselves were created in the late 1980s. In other words, what was new in 2012 were not the approaches to AI – the methods and procedures. What “changed everything” over the last decade was the staggering computational and data resources newly available, and thus newly able to animate old approaches.

Put another way, the current AI craze is a result of this toxic surveillance business model. It is not due to novel scientific approaches that–like the printing press–fundamentally shifted a paradigm. And while new frameworks and architectures have emerged in the intervening decade, this paradigm still holds: it’s the data and the compute that determine who “wins” and who loses. 

What we call AI today grew out of this toxic model – and must be understood primarily as a way of marketing the derivatives of mass surveillance and concentrated platform and computational power. Currently, only a small handful of firms – based in the US and China – have the resources to create and deploy large-scale AI from start to finish. These are the cloud and platform monopolies – those that established themselves early on the backs of the surveillance business model. Everyone else is licensing infrastructure, scrambling for data, and struggling to find market fit, lacking the massive cloud infrastructures through which AI can be licensed to customers or the massive platforms into which AI can be integrated as a feature or service touching billions of people.

This is why even the most successful AI ‘startups’ – OpenAI, Mistral, Inflection – are ultimately ending up as barnacles on the hull of the Big Tech ship – the Microsoft ship, in their case. It’s why Anthropic needs to be understood as a kind of subsidiary of Google and Amazon.

In addition to the current technologies being called “AI,” we also need to look at the AI narrative itself – the story that’s animating marketing and hype today. How is this marketing term being deployed? By wielding quasi-religious tales about conscious computers, artificial general intelligence, and small elves that sit in our pockets and, servant-like, cater to our every desire, massive companies have paved the way for unprecedented dominance.

By narrating their products and services as the apex of “human progress” and “scientific advancement,” these companies and their boosters are extending their reach and control into nearly all sectors of life, across nearly every region on earth – providing the infrastructure for governments, corporations, media, and militaries. They are selling the derivatives of the toxic surveillance business model as the product of scientific innovation.

And they are working to convince us that probabilistic systems that recognize statistical patterns in massive amounts of data are objective, intelligent, and sophisticated tools capable of nearly any function imaginable. Certainly more capable than we mere mortals. And thus we should step aside and trust our business to them.

This is incredibly dangerous. The metastatic, shareholder-capitalism-driven pursuit of endless growth and revenue that ultimately propels these massive corporations frequently diverges from the path toward a liveable future.

There are many examples that illustrate just how dangerous it is to turn our core infrastructures and sensitive governance decisions over to these few centralized actors. Scholars like Abeba Birhane, Seda Gürses, Alex Hanna, Khadijah Abdurahman, Jathan Sadowski, Dan McQuillan, Amba Kak, and Sarah Myers West, among many others, have explicated such examples at length and with care.

Here, I will pause and reflect on one particularly distressing trend, which in my view exposes the most dangerous tendency of all. These massive surveillance AI companies are moving to become defense contractors, providing weapons and surveillance infrastructures to militaries and governments they choose to arm and cooperate with.  

We are all familiar with being shown ads in our feeds for yoga pants (even though we don’t do yoga) or a scooter (even if we just bought one), or whatever else. We see these because the surveillance company running the ad market or social platform has determined that these are things “people like us” are assumed to want or be attracted to, based on a model of behavior built using surveillance data. Since other people with data patterns that look like yours bought a scooter, the logic goes, you will likely buy a scooter (or at least click on an ad for one). And so you’re shown an ad. We know how inaccurate and whimsical such targeting is. And when it’s a mistargeted ad, that’s not a crisis.
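(To make the mechanism concrete: what follows is a minimal, purely illustrative sketch of this kind of similarity-based targeting. Every name, number, and threshold in it is hypothetical – real systems are vastly larger and entirely opaque – but the core logic of matching your data pattern against a profile built from other people’s behavior is the same.)

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two behavior vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical surveillance-derived feature vectors: each dimension stands
# for some observed behavior (pages visited, purchases, locations, ...).
scooter_buyers = np.array([
    [0.9, 0.1, 0.8, 0.3],
    [0.8, 0.2, 0.7, 0.4],
])
profile = scooter_buyers.mean(axis=0)  # the "people who bought a scooter" pattern

your_pattern = np.array([0.85, 0.15, 0.75, 0.35])  # your data trail

THRESHOLD = 0.9  # arbitrary cutoff; real systems tune this behind closed doors
if cosine_similarity(your_pattern, profile) > THRESHOLD:
    print("show scooter ad")  # targeted, whether or not you want a scooter
```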

But when it’s more serious, it’s a different story. 

We can trace this story to the post-9/11 US drone war and the concept of the Signature Strike. A signature strike uses the logic of ad targeting, determining targets for death based not on knowledge of the target or certainty about their culpability, but on data patterns and surveillance of behavior that the US, in this case, assumes to be associated with terrorist activity. Signature strikes kill people based on their data profiles. And AI, and the large-scale surveillance platforms that feed AI systems, are supercharging this capability in incredibly perilous ways.

We know of one shocking example thanks to investigative work from the Israeli publication +972 Magazine, which reported that the Israeli Army, following the Oct 7th attacks, has been using an AI system named Lavender in Gaza, alongside a number of others. Lavender applies the logic of the pattern-recognition-driven signature strikes popularized by the United States, combined with mass surveillance infrastructures and the techniques of AI targeting. Instead of serving ads, Lavender automatically puts people on a kill list based on the likeness of their surveillance data patterns to the data patterns of purported militants – a process that we, as experts, know is hugely inaccurate. Here we have the AI-driven logic of ad targeting, but for killing. According to +972’s reporting, once a person is on the Lavender kill list, it’s not just them who’s targeted: the building where they (and their family, neighbors, pets, whoever else) live is subsequently marked for bombing, generally at night, when they (and those who live there) are sure to be home. This is something that should alarm us all.

While a system like Lavender could be deployed in other places, by other militaries, there are conditions that limit the number of others who could practically follow suit. To implement such a system you first need fine-grained, population-level surveillance data, of the kind that the Israeli government collects and creates about Palestinian people. This mass surveillance is a precondition for creating ‘data profiles’ and comparing millions of individuals’ data patterns against such profiles in service of automatically determining whether or not these people are added to a kill list. Implementing such a system ultimately requires powerful infrastructures and technical prowess – of the kind that technically capable governments like the US and Israel have access to, as do the massive surveillance companies. Few others have such access. This is why, based on what we know about the scope and application of the Lavender AI system, we can conclude that it is almost certainly reliant on infrastructure provided by large US cloud companies for surveillance, data processing, and possibly AI model tuning and creation. Collecting, creating, storing, and processing this kind and quantity of data all but requires Big Tech cloud infrastructures – they’re “how it’s done” these days. This subtle but important detail also points to a dynamic in which the whims of Big Tech companies, alongside those of a given US regime, determine who can and cannot access such weaponry.

This is also why it’s imperative that we recognize mass surveillance – and ultimately the surveillance business model – as the root of the large-scale tech we’re currently calling “AI”.

Which brings us to political economy – of course. Because this turn to defense contracting that the large tech companies have been making is a key part of their revenue model. Work by Tech Inquiry recently revealed that the five largest US military contracts to major tech firms between 2019 and 2022 have contract ceilings of $53 billion. And those are the ones we know of. Due to the combination of military classification and corporate secrecy, transparency – let alone accountability – is very hard to come by.

The use of probabilistic techniques to determine who is worthy of death – wherever they’re used – is, to me, the most chilling example of the serious dangers of the current centralized AI industry ecosystem, and of the very material risks of believing the bombastic claims of intelligence and accuracy that are used to market these inaccurate systems – and to justify carnage under the banner of computational sophistication. As UN Secretary-General António Guterres put it, “machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law.”

It’s because of this that I join the German Forum of Computer Scientists for Peace and Social Responsibility in demanding that “practices of targeted killing with supporting systems be outlawed as war crimes.”  

This demand is particularly urgent given the very real possibility of a more authoritarian government in the US, where these companies are homed – a place where the right wing has already broadcast plans to bring the two major tech regulators, the Federal Trade Commission and the Federal Communications Commission, under direct presidential control. Where four of the top five social media platforms are housed, alongside cloud giants that currently control 70% of the global cloud market. And where a federal abortion ban is on the right-wing agenda, accompanied by ongoing campaigns of book banning and censorship of LGBTQ resources and expression already shaping legislation at the state level.

But! None of this is a reason to give up hope! I’m working every day, giving talks like this in place of soft-focus thanks and praise, because I am hopeful. I am hopeful – but I refuse to plant my hope in the shallow soil of illusion and false comfort. 

There is still time to create a lovely and liveable future. To question these narratives and the control they give these monopolistic actors. We can dismantle this toxic surveillance business model, and the AI derivatives being pushed into the nervous system of our lives and institutions. There’s time, for example, to audit and understand the EU’s dependencies on US surveillance AI giants, and to forge a better way forward. 

Because an ending is the beginning of another thing. And recognizing that something isn’t working – whether it’s a relationship, or a job, or a whole tech paradigm – is the point where a new world can begin being born. With the veil torn and the benevolent rhetoric of the Big Tech companies revealed to be more marketing than reality, we can start revising and rebuilding our technological future in earnest.

It does not have to be this way! Computational technology is cool, it can be wonderful, and it can take many, many, many forms. 

We don’t have to compete with the giants on their terms – why let them set the terms to begin with? We can make our own terms! From socially supported infrastructure that breeds more locally governed applications and communications platforms, to a redefinition – or reclamation – of the concept of AI, away from the surveillance-dependent “bigger is better” approach and toward smaller models built with intentionally created data that serve real social needs, to open, accountable, and shared core infrastructures that, instead of being abandoned and extracted from, are supported and nurtured.

Here, Signal serves as a model. And I am so honored and proud to be able to dedicate my time and energy to nurturing and caring for Signal: the world’s most widely used truly private messaging app, and the only cool tech company <3

Signal is a mighty example proving that tech can be done otherwise. Signal’s massive success demonstrates that tech that prioritizes privacy, rejects the surveillance AI business model, and is accountable to the people who use it is not only possible – but can flourish and thrive as a nonprofit, supported by the people who rely on it, not ad dollars and military contracts. 

So, as I said, I am optimistic. My optimism drives me. And I am particularly inspired by the young people here today, and those I talked with earlier this afternoon, whose energy and vision provide the leadership we need to sit with and address the serious challenges we face, while simultaneously (re)building and reimagining technology that reflects the world we want, not the world surveillance advertising monopolies tell us we’re stuck with.

Thank you! 
