Human Machine Autonomies

Are humans and machines separate?

Amrita Khoshoo
Sep 30, 2019

Reflections for Interaction Design Seminar, Professor Molly Wright Steenson, Fall 2019


Autonomy has its roots in the Enlightenment as a philosophical ideal describing self-determination and free will. Over the course of the twentieth century, autonomy gradually took on a more technoscientific meaning, and the lines between humans and machines started to blur.

Cybernetics rose in the twentieth century, classifying machines and organisms as “behaviorally and in information terms the same” (6). It claimed that systems are driven by “purposeful and goal-oriented behavior” (8), made possible through a tight relationship between system and environment. Technological behavior started to model human behavior through information networks and feedback loops.

Understanding cybernetics is important when it comes to autonomous weapons because it shifted information science from “intrinsic properties of entities towards their behavior” (9).

Autonomy

In terms of autonomous weapons, Suchman and Weber continually circle back to the argument that human and machine cannot be separated. Human and machine enter into a subject-object relationship in which each becomes dependent upon and shaped by the other. This relationship is always unfolding, fluid, and unpredictable.

  1. The human builds the machine
  2. The machine (e.g., the presence of AI) might affect how a human chooses to act.

*This made me think about the relationship between humans and designed artifacts that Arturo Escobar discusses in Designs for the Pluriverse.

Autonomous Weapons

The authors also critique the way the military thinks about autonomous weapons. Military language about autonomous weapons assumes that “relevant circumstances can be rendered algorithmically” (16). This results in “pattern of life” analysis, or profiling, which has been heavily critiqued. Further, locating the “decision-making process within software elides the difference between algorithmic and judgemental ‘decision’” (16).

Further, the military assumes that machine intelligence should be modeled after human intelligence. Machines, however, cannot be thought of as humans: their perceptions are narrowly constructed around a specific goal, they rely on prescribed behaviors or operations, they can’t comprehend actions, and they cannot learn to inform future behavior (18).

The military’s approach to autonomy is limited because it’s based on modeling and planning and decomposes “human action into multiple, separate domains” (20).

Suchman and Weber call for a reframing of thinking around autonomous weapons, since the current frame leaves certain things out. Reframing our understanding will mean agency becomes the “effect of particular human-machine configurations” (22), which “opens the possibility of explicating the systematic erasures of connection and contingency through which discourses of autonomous agency operate” (22). How can we rethink our relationship with autonomous weapons?

Further, they open up another important question around humans and responsibility. How can sociotechnical systems shift in a way that allows humans to “interact responsibly in and through” (22) automation?

Project Maven

Project Maven is a data surveillance and analysis technology development effort in Pittsburgh. The military is “seeking to develop an algorithm to analyze drone, overhead, and ground data to identify targets and objects of interest” (source).

The military hopes to make data reconnaissance easier. This made me think of the idea of AI as modeling human intelligence; more storage = more brainpower.

Further, the automation of information gathering will affect humans, because human actors remain engaged in a relationship with the machine: someone will be sifting through patterns in the information, someone will be designing the AI, and so on.

One critique presented in the article is that “militaries with networks of surveillance from above, and constant identification of threats by AI, could lead to an over-eagerness to act” and a feeling that “they’re going to lose if they hesitate” (source). With AI now part of the relationship, it ends up affecting and shaping human decision-making.



Interaction Designer | Carnegie Mellon University, School of Design | MDes ’21