NEW BOOKS: ATLAS OF AI

[Image: Artificial Intelligence, AI]

The Thammasat University Library has acquired a new book that should be useful for students interested in artificial intelligence, computer science, business, communications, technology, new media, economics, sociology, and related subjects.

Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence is by Professor Kate Crawford of the USC Annenberg School for Communication and Journalism.

The TU Library collection includes other books about different aspects of artificial intelligence.

Professor Crawford argues that AI is neither artificial nor intelligent. Instead, it is a massive industrial formation that includes politics, labor, culture, and capital.

In terms of the environment, AI consumes large amounts of rare minerals, water, and fossil fuels. AI also requires a great deal of human labor, often under difficult conditions. Metal mines and iPhone factories have been found to be abusive to workers, and Professor Crawford describes an Amazon e-commerce fulfillment facility to reveal the unpleasant working conditions found there. As laborers are dehumanized, unlimited data collection about consumers puts a wider population at risk.

She suggests that readers rethink their approach to AI regulation instead of assuming that whatever happens in this field is simply technologically inevitable.

[Image: Artificial Intelligence]

Professor Crawford writes:

Let’s ask the deceptively simple question, What is artificial intelligence? If you ask someone in the street, they might mention Apple’s Siri, Amazon’s cloud service, Tesla’s cars, or Google’s search algorithm. If you ask experts in deep learning, they might give you a technical response about how neural nets are organized into dozens of layers that receive labelled data, are assigned weights and thresholds, and can classify data in ways that cannot yet be fully explained. […] In one of the most popular textbooks on the subject, Stuart Russell and Peter Norvig state that AI is the attempt to understand and build intelligent entities. “Intelligence is concerned mainly with rational action,” they claim. “Ideally, an intelligent agent takes the best possible action in a situation.”

Each way of defining artificial intelligence is doing work, setting a frame for how it will be understood, measured, valued, and governed. If AI is defined by consumer brands for corporate infrastructure, then marketing and advertising have predetermined the horizon. If AI systems are seen as more reliable or rational than any human expert, able to take the “best possible action,” then it suggests that they should be trusted to make high-stakes decisions in health, education, and criminal justice. When specific algorithmic techniques are the sole focus, it suggests that only continual technical progress matters, with no consideration of the computational cost of those approaches and their far-reaching impacts on a planet under strain.

In contrast, I argue that AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes for, AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power. […]

Once we connect AI within these broader structures and social systems, we can escape the notion that artificial intelligence is a purely technical domain. At a fundamental level, AI is technical and social practices, institutions and infrastructures, politics and culture. Computational reason and embodied work are deeply interlinked: AI systems both reflect and produce social relations and understandings of the world. […] To understand how AI is fundamentally political, we need to go beyond neural nets and statistical pattern recognition to instead ask what is being optimized, and for whom, and who gets to decide. Then we can trace the implications of those choices.

Seeing AI Like an Atlas

How can an atlas help us to understand how artificial intelligence is made? […] Perhaps my favorite account of how a cartographic approach can be helpful comes from the physicist and technology critic Ursula Franklin: “Maps represent purposeful endeavors: they are meant to be useful, to assist the traveler and bridge the gap between the known and the as yet unknown; they are testaments of collective knowledge and insight.” Maps, at their best, offer us a compendium of open pathways—shared ways of knowing—that can be mixed and combined to make new interconnections. But there are also maps of domination, those national maps where territory is carved along the fault lines of power: from the direct interventions of drawing borders across contested spaces to revealing the colonial paths of empires. By invoking an atlas, I’m suggesting that we need new ways to understand the empires of artificial intelligence.

We need a theory of AI that accounts for the states and corporations that drive and dominate it, the extractive mining that leaves an imprint on the planet, the mass capture of data, and the profoundly unequal and increasingly exploitative labor practices that sustain it. These are the shifting tectonics of power in AI. A topographical approach offers different perspectives and scales, beyond the abstract promises of artificial intelligence or the latest machine learning models. The aim is to understand AI in a wider context by walking through the many different landscapes of computation and seeing how they connect.

There’s another way in which atlases are relevant here. The field of AI is explicitly attempting to capture the planet in a computationally legible form. This is not a metaphor so much as the industry’s direct ambition. The AI industry is making and normalizing its own proprietary maps, as a centralized God’s-eye view of human movement, communication, and labor. Some AI scientists have stated their desire to capture the world and to supersede other forms of knowing. […] One of the founders of artificial intelligence and early experimenter in facial recognition, Woody Bledsoe, put it most bluntly: “in the long run, AI is the only science.” This is a desire not to create an atlas of the world but to be the atlas—the dominant way of seeing.

This colonizing impulse centralizes power in the AI field: it determines how the world is measured and defined while simultaneously denying that this is an inherently political activity. […] Just as there are many ways to make an atlas, so there are many possible futures for how AI will be used in the world. The expanding reach of AI systems may seem inevitable, but this is contestable and incomplete. The underlying visions of the AI field do not come into being autonomously but instead have been constructed from a particular set of beliefs and perspectives. The chief designers of the contemporary atlas of AI are a small and homogenous group of people, based in a handful of cities, working in an industry that is currently the wealthiest in the world.

[Image: Artificial Neural Network with Chip]

(All images courtesy of Wikimedia Commons)