The Art of Failure in Robotics: Queering the (Un)making of Success and Failure in the Companion-Robot Laboratory


Pat Treusch
Center for Interdisciplinary Women’s and Gender Studies (ZIFG), Technical University Berlin 

1. Introduction: Robot Companionship—A New Class of Human-Robot Relations

Emerging from the European network Robot Companions for Citizens (n.d.), a recent strand of contemporary humanoid robotics envisions the realization of a special machine, namely the “robot companion.” This machine is special because it is charged with social meaning in a double sense: first, it is classified as a companion “for citizens”; second, it is described as a technological innovation—or, one could even say, as a technological intervention into what is perceived as “a critical challenge [that] human society faces: how to increase and maintain our quality of life in the future” (Robot Companions for Citizens, n.d.). Thus, the robot companion will supposedly leave the factory halls of industrial production and enter the sphere of everyday lives, including public spaces and private homes, in order to operate close to “us humans.” It is precisely such flexible possibilities of bringing humans and humanoid robots together as companions that are supposed to solve the problem of how to maintain and improve a certain quality of life.

It is through this perceived need that robot companionship will also supposedly become a market success. The vision of such machines operates through promises of “technoscientific salvation” (Haraway, 1997, p. 8) from, primarily, the work-burdened contexts of the household and the workplace. The quality of this emerging human/machine relationship also highlights the burdens of social, particularly urban, life in late capitalist societies in the Global North, with its structural separation and loneliness, especially with increasing age. In addition, the image of sociality and close relations between humans and robots carries a distinct connotation of human/human relations as inherent to companionship.

This article asks: How is the production of this new class of machines transforming hegemonic ideas of both human/machine relations and companionship? To explore this question, I draw on the example of the humanlike robot companion Armar III. Armar is being developed at laboratories specializing in a current strand of humanoid robotics in Germany, anthropomatics, at the Karlsruhe Institute of Technology (KIT). More precisely, I am interested in how an interaction at this specific level of the robot/human interface is understood and experienced as a success with regard to social scripts, with their concomitant expectations and behaviors, and when it is perceived as failing. Reconstructing the circumstances of success and failure in the making of robot companionship, I open up the perspective of a queer-feminist critique of this dichotomous division, as articulated by Judith Halberstam (2011). In this way, I will contemplate “new ways of understanding… knowledge projects” (Subramaniam & Willey, 2016, p. 2; emphasis in original) in contemporary humanoid robotics. My argument is that querying interactions with robots, and the concomitant concept of the ability to interact autonomously in a socially meaningful manner, from the suggested perspective of queer failure might open up new avenues for considering dimensions of realizing companionship with robots that have so far been neglected and thus have not been part of the understanding of the knowledge project “humanoid robotics.” How can “we” rethink practices of knowledge and artifact production in the context of this specific robotics laboratory through a queering of the success/failure dichotomy?

2. Analyzing Human/Machine Relations—A Theoretical and Methodological Framework

Feminist interest in researching the design of human/machine interfaces and asking how they either transmit or rework the gendering of science and technology is longstanding. Alison Adam (1998), for instance, has researched the inscription of “concepts of masculinity and femininity” (p. 1) in computers. Drawing on select aspects of Adam’s foundational study, I emphasize her finding that key researchers in early artificial intelligence (AI) “regard themselves as the gold standard of universal subjects” (p. 5), thus establishing “gendered patterns of rationality” (p. 6). Those researchers were mostly mathematicians and, when selecting their objectives, “naturally looked to themselves” (p. 35). This led to the objective of building a machine that could play chess (Franchi & Güzeldere, 2005, p. 46); consequently, chess was selected as the model for some aspects of intelligence and became “the test bed for ideas about creating intelligence” (Turkle, 2005, p. 220). In this regard, computer sciences are grounded in a scientific framework that perpetuates a division that can be described—following Sandra Harding (2008)—as the thinking tradition of “Western modernities” (p. 2):

Scientific rationality and technical expertise are presented…as enabl[ing] elite Westerners and men around the globe to escape the bonds of tradition, leaving behind for others the responsibility for the flourishing of women, children and other kin, households, and communities….These others must do the…reproductive and “craft” labor….These others are mostly women and non-Western men. (p. 2)

The epistemological framework of Western modernities creates a division between the realm of scientific rationality and expertise and that of craft labor. This dichotomy is fundamental to the sexual, transnational division of labor and its powerful operations of social ordering along the intersecting (identity) categories of gender, race, ableism, and class. Notably, it is traditionally the male, elite Westerner who finds “himself” in the epistemological position of neutrality, rationality, and expertise—which, as Harding points out, is tied to enabling some over others to take part in scientific and technological processes of hegemonic knowledge production and innovation. This, in turn, propels the establishment of patterns of rationality that are interwoven with gender, race, ableism, and class in their hierarchical relations.

Along these lines, I am, first, interested in the ways in which the realization of robot companionship relates to the epistemological framework of Western modernities. Second, I explore how a queer-feminist critique of this framework involves developing an analytical feeling for the contingent nature of exclusionary sorting operations between human and machine that also incorporate the divisions between success/failure, rationality/affectivity, and technical expertise/craft labor in knowing and realizing companionship in contemporary robotics. Furthermore, I suggest an approach that leads to a queering of the success/failure binary with and in robotics. My argument is that a posthumanist feminist account of the making of robot companionship allows the linking of a queer theory of failure with the aim of creating robot companions in a productive manner for both fields of expertise.

2.1. Demonstrations: The Laboratory as a Space for Realizing and Querying Robot Companionship

This article discusses the realization of robot companions using one humanoid robot, namely Armar III. I draw on this example from the perspective of the queering witness, that is, a witness who queers the “modest witness” (Haraway, 1997, p. 23; emphasis in original) of Western modern knowledge production. In exploring the role of the laboratory and the witness to laboratory action in processes of knowledge production in a positivist epistemological framework, Donna Haraway (1997) identified the modest witness as an emblematic figure of objectivist science that “mirror[s] reality” by becoming “invisible, that is, an inhabitant of the potent ‘unmarked category’” (p. 23). Such a form of witnessing performs the “god-trick of seeing everything from nowhere” (Haraway, 1991, p. 189). In short, the god-trick enables “the knower” to claim that he/she is speaking the truth, while neglecting his/her subjective and embodied worldliness.

Against this backdrop, Haraway’s queering witness is “a more corporeal, inflected, and optically dense, if less elegant, kind of modest witness to matters of fact to emerge in the worlds of technoscience” (1997, p. 24). Thus, I approached the robotics laboratory in which Armar is being built, tested, and demonstrated as a site that allows me to become a queering witness to robot companionship. I participated in everyday work in the “kitchen laboratory” in which Armar is situated. I will delve more deeply into this context as the heart of the institute in the next subsection. For now, it is important to understand the demonstration as the central kind of interactive setting between humans and machines, especially humanoid robots. Lucy Suchman (2011) provides a very productive approach to demonstrations as both the “theatre of proof” (Latour, 1988, in Suchman, 2011, p. 123) and the “theatre of use” (Smith, 2009, in Suchman, 2011, p. 123). More precisely, she draws on the Latourian term “theatre of proof,” an understanding of the objects of research that asks how things stabilize through networks of actions involving both human and more-than-human actors, and complements it with Wally Smith’s notion of the “theatre of use,” through which the demonstration of technology is analyzed as “a particular assemblage of hardware and software…presented in action as evidence for its worth” (Suchman, 2011, p. 123). While the first term focuses on the actions and actors in the laboratory, the latter also considers the framing of these actions and actors. Given the example of Armar III, the frame is the kitchen, a specific setting within the modern household. Here, visitors can experience in the present a future in which the humanoid robot is part of the household.

Notably, a central goal of the demonstration is to make the audience familiar with a future technology. This implies not only a need to guarantee the success of their interaction with the robot in the kitchen but, importantly, also to make this perceptible as successful. How does the demonstration function as a successful realization of robot companionship and establish a frame for robots and humans to become companions? Geared toward becoming a witness to laboratory practices of realizing companionship, I proposed in my study (Treusch, 2015) to trace how differences are enacted and how properties of the actors in the lab are the results rather than the preconditions of this enactment. Thus, I understand my queer-feminist posthumanist performative account of witnessing laboratory practices as becoming attuned to the entangled agencies between humans and more-than-humans in the construction of human-robot companionship. Through this analytical prism, I tweak the causal relations of the making of robot companions: How does interaction at the robot/human interface during demonstrations exceed the idea of the two entities in causal relations, namely the “human creator” and the “creation,” the robot? In line with Suchman (2011), I query “the figuration of subject object intra-actions in contemporary humanoid robotics, and how we might rethink questions of sameness and difference at the interface of humans and machines” (p. 121). From this perspective, sameness and difference are not predetermined qualities, but rather emerge from the intra-actions in the kitchen laboratory. As Barad (2007) points out, intra-action “signifies the mutual constitution of entangled agencies” (p. 33; emphasis in original), while “distinct agencies do not precede, but rather emerge through, their intra-action.” I draw on the term intra-action in order to develop a performative account of distinct yet relational agencies in the robotics laboratory. 
In this way, the analytical unit shifts from actors with predetermined properties to entangled agencies. At the same time, participating as the queering witness includes recognizing the enactments of patterns of rationality in their contingency.

In this article, I am especially interested in the dichotomy between success and failure in the lab. The queering witness, as I will continue to argue, does not suggest a reading of success as guaranteeing that robot companions will soon move in with us, nor a reading of failure as proof of a techno-skepticism insisting that robotics will never realize its goal. Rather, my interest is in overcoming such binary oppositions, for instance in practices of associating the robot with “the human,” through engaging with the entangled agencies of a co-production between human and machine during demonstrations of Armar. In what follows, I will further develop a queering account of the politics of the modern success/failure dichotomy and bring it into relation with the larger scientific framework of AI, a core field of knowledge and artifact production for contemporary humanoid robotics.

2.2. Approaching Demonstrations of the Companion Robot through Queer Failure

This section introduces the queer concept of low theory and its concomitant account of failure as a mode of knowledge counterproduction in order to develop the possibility of an intervention into hegemonic forms of knowing and practices of applying knowledge in humanoid robotics. I will bring a queering approach to failure into conversation with selected insights from a historical account of knowing and feeling in early AI.

In The Queer Art of Failure, Judith Halberstam (2011) develops an understanding of low theory as a resource for counterimaginations, or rather as “a grammar of possibility” (p. 2). This grammar is a political project that “expresses a basic desire to live life otherwise.” It ties in with longstanding critical feminist engagements with technoscience as worlding practices. However, the challenge in contributing to science out of feminist (and queer) theory is to generate a grammar of possibility that not only critically intervenes into hegemonic forms of worlding but also “moves beyond the critique/engagement binary to open space for re-thinking how we know” (Subramaniam & Willey, 2016, p. 1). Along these lines, I generate new impulses for practicing embodied objectivity as a move beyond the critique/engagement binary from the suggested conversation between feminist critical engagements with technoscience and Halberstam’s take on low theory. Low theory works against the “dominant logics of power and discipline” (2011, p. 88). The goal becomes to open up possibilities for a “narrative without progress” and “a more chaotic realm of knowing and unknowing” (p. 2). I regard both as querying approved ways of knowing and as opportunities to enable different actors and their counterknowledge, which have formerly been restricted to existing—if at all—outside the laboratories of (contemporary) AI and robotics.

My account of queer failure is guided by taking aspects of Halberstam’s conceptualizations into a new context, the robotics laboratory. In so doing, I primarily identify two central aspects of Halberstam’s work: first, the capitalist framework of success and its disciplining power and, second, the stakes of failure as not only an individual experience of defeat, with its consequential feelings of social refusal and shaming, but also as a mode of intervention into the promises of mastery and success. Imagining counterstrategies against such a disciplining of thought, Halberstam advocates a concept of failure “as a refusal of mastery, a critique of the intuitive connections within capitalism between success and profit” (2011, pp. 11–12). This connection between success, mastery, and capitalism has special meaning for the robotics laboratory: if the demonstration is a success, the chances of also making this machine a market success are thought to increase accordingly. Thus, the robot companion in action is presented as the ideal 24/7 service agent that speaks to a “service economy” (Suchman, 2007, p. 217) as it emerged with the idea of software agents that supposedly organize “our” everyday lives in an optimal, hence most effective, manner and offer assistance with all kinds of administrative work. How can “we” work with failure against the capitalist connection between success and mastery?

Central to the concept of low theory, as Halberstam (2011) points out, is not “arguing for a reevaluation of these [static] standards of passing and failing” but rather “dismantl[ing] the logics of success and failure with which we currently live” (p. 2). Hence, if producing “high theory” (p. 16) requires mastery and a concomitant disciplining of thought, then low theory invites me to become less serious and more playful in theorizing alternatives to the capitalist bonds between success and profit. In this sense, failure becomes “a practice” (p. 24) of “recogniz[ing] that alternatives are embedded already in the dominant and that power is never total or consistent; indeed failure can exploit the unpredictability of ideology and its indeterminate qualities” (p. 88). In addition, as Halberstam underlines, “low theory might constitute…an undisciplined zone of knowledge production” (p. 18). In this regard, dominant modes of thought already include the possibility of alternatives, and failure becomes the opportunity to experience and foster counterhegemonic possibilities of knowing and being that remain indeterminate.

Affects are central to this account of failure, as Halberstam (2011) emphasizes, when “addressing the dark heart of the negativity that failure conjures” (p. 23). The darkness and negativity of failure carries an “unbeing and unbecoming” (Halberstam, 2011, p. 23)—not only of the individual who fails to conform to the static standards of failure versus success, but—importantly—of exactly the hegemonic, hierarchical relations to knowledge that condition those standards in the first place. In this way, failure becomes a practice of undoing the logics of the capitalist connection between success and mastery.

Precisely such an undoing through failure occurred when Garry Kasparov experienced his high-profile defeat against the chess computer Deep Blue in 1997. As Elizabeth Wilson (2010) points out, this chess match and its result encouraged a public discussion perpetuating “a fundamental hostility between affect and computation”; however, that discussion “doesn’t survive closer scrutiny” (p. 11). Kasparov, in line with Wilson, relies on both rational and affective labor, as “his intellectual labor demands some kind of affective connection…that normally regulates chess playing at this level” (p. 15). Wilson furthermore speaks of an “emotional relationality” (p. 16) as the central condition for playing chess. Her reading of Kasparov’s failure thus dismantles the logic behind a fundamental dichotomy between computational rationality and affective labor. This account of AI moves beyond deploying chess as an icon of gendered patterns of rationality. As she points out, “the conventional presumption that chess-playing expertise is a purely cognitive endeavor, isolatable from other talents and capacities” can be “challenge[d] as a way of opening up the conceptual terrain for thinking about the relationship between AI and affects” (pp. 8–9). In contrast to the authors cited in the section on affects in human/machine relations, she traces “some kind of intrinsic affinity, some kind of intuitive alliance between the machinic and the affective, between calculating and feeling” (p. 31) in the work of early AI and argues for the need to acknowledge “the coassembly of machine and emotion…as…one of the foundations of the artificial sciences” (p. x). Kasparov’s failure can thus be read together with nonantagonistic forms of affiliation between human and machine, as it expresses (not only historically) a desire for the intelligent machine other.

By bringing aspects of Halberstam’s work on failure as a queering practice into conversation with Wilson’s insights into the entanglements of affect/intellect and machine/emotion, I develop a mode of intervention—or rather a mode of attention—that I will use in the remainder of this article to explore the realization of robot companionship during demonstrations.

2.3. A Robot Companion in Action: Performing the Kitchen

Armar III is equipped with a head with two “camera eyes”; two arms, each with a hand with five fingers; and a torso, and stands on a wheeled platform. Notably, Armar is not equipped with features that would allow viewers to sort it into modern identity categories at first sight. However, as I have shown in detail in my study, the robot is nevertheless endowed with cultural genitals that sort it into the male sex/gender (Treusch, 2015, pp. 208–209). Additionally, the design strategy of modeling Armar after the “average human” describes a statistical dimension that—despite its seeming neutrality—draws on the white male European as a clearly determinable parameter and hence id/entity (see pp. 165ff. and 172ff.). With Haraway (1991), the human-likeness of Armar is entrenched in the prominent figure of “the fictive rational self of universal, and so unmarked, species man, a coherent subject” (p. 210). The focus of this article, however, is not to trace the ways in which the humanoid robot becomes a coherent subject, but rather to delve more deeply into selected scenarios in which sameness and difference between human and robot materialize intra-actively.

Figure 1: Armar in its kitchen

Armar III “lives” in the main robotics laboratory, which is designed like a kitchen. It is a spacious room with an L-shaped kitchenette in white, equipped with all the appliances that one would expect in a “real” kitchen: refrigerator, sink, oven, plenty of storage, and several workstations. The room can be accessed through a sliding door that is wider than the other doors in this hallway; next to the sliding door is a window mounted into the wall, with sliding glass panes. The shutters are always closed and the bright lights are switched on to optimize the lighting conditions.

In the middle of the room, we find a desk with a computer workstation that demarcates where the kitchen area ends. There is a huge screen mounted on top of the oven, a split screen on which one can see the kitchen through Armar’s camera eyes as well as a 3D model of Armar moving around the kitchen in real time. When one of the engineers opens the oven, we can see a keyboard stored on the baking tray that the engineers use to operate the screen. Thus, the laboratory kitchen differs from the kitchens with which most people are familiar. Here I am thinking of the standardized Western kitchenette, which became a huge market success. Notably, the kitchen as “we” have become familiar with it evolved from socio-historically specific conditions that deeply interconnect efforts at rationalization with capitalism, and that also regulated the pairing of individuals and the gendered division of responsibilities within the household.

Even though certain aspects might vary from the household kitchen as we are familiar with it, this kitchen laboratory nevertheless appears to be an ideal setting for demonstrating the state of the art in humanoid robotics. Thus, a central part of the daily work in this kitchen consists of demonstrations of the robot Armar to a variety of audiences, ranging from students of the social science department at KIT to guest researchers from Japan and Russia. Demonstrations encompass both what the robot can do in the kitchen, like opening and closing doors, grasping a food container, and locating items, and what one can do with the robot; that is, possibilities for interacting in the kitchen. Demonstrations interweave the familiar with the not-yet-familiar, as well as the present with the future, and therefore constantly negotiate possible forms of robot companionship.

In my study, I developed the notion of “performing the kitchen” (Treusch, 2015, p. 107), through which I explored the processual, iterative nature of enacting human/machine relations through the concept of posthumanist performativity. In what follows, I will contemplate the successful interaction between humans and the robot as a result of an intra-active co-production in the specific setting of the selected kitchen laboratory.

3. Demonstrations: The (Un)making of Success and Failure

In this section, I consider demonstrations as consisting of a set of specific tasks primarily aimed at securing the success of the human/robot interface and thus making the demonstration a success. I ask: In what ways do these tasks involve failure? How can “we” grapple with success during demonstrations as intra-active co-productions?

Demonstrations generally follow a scheme: first there is an introduction, then different interactional scenarios are played out in which the robot demonstrates what it is capable of doing. At least two engineers are involved in demonstrations: one wears a headset through which the robot is operated and also runs the computer workstation in the kitchen; the other usually stands next to Armar, guiding the audience through the various steps of the demonstration.

During an interview, one of the engineers stressed that work, such as the integration of software into the robot,

is not the scientific part, this is the pure engineering part at maximum, this is only software, problems, there exist some problems and one solves them somehow; you don’t make science from this, but a demo is called in three weeks and by then everything has to run [on the robot]…one takes the path that somehow works and then this will also work out somehow (Interview Engineer CD, July 2011; translation: PT; emphasis: English in original).

Making the human/robot interface a success in front of an audience requires what I call the “smooth engineering” of the robot, which seems to consist mainly of problem-solving strategies and the hope that everything will work out somehow. The engineer describes the situation of not having full control over the machine as “nerve-wracking” (Interview Engineer CD, July 2011; translation: PT). This offers a first insight into the entanglement of agencies and affects in demonstrations. Furthermore, when asked for a definition of “engineering” (in comparison to the “scientific part”), the engineer answered as follows: “[With] engineering, you have a problem, you solve it, the problem is solved and you’re happy” (Interview Engineer CD, July 2011; translation: PT; emphasis in original). Engineering involves developing strategies to cope with the nerve-wracking project of realizing the robot companion. Even though it is not addressed directly, the potential of wracking one’s nerves is tied to the possibility that things might not work out as planned; the procedure might fail, especially when working with a robot that is still under construction and does not yet run robustly. What happens when success is in danger of becoming failure?

3.1 Communication—One Potential Source of Failure

One source of complications during the demonstration is communication. In order to talk to the robot, one has to wear a headset with a microphone connected to Armar’s sound recognition system. However, the procedure for giving a command is not as easy as talking into the microphone (see Treusch, 2015, pp. 140ff.). During one demonstration, Armar received a command that it did not recognize, which underlines that communication involves more requirements than “just” operating the mute switch in the right manner. The speech recognition system might recognize speech, but this does not necessarily result in understanding: one also has to say the words in the right manner. However, even though the command was repeated, Armar still did not get it right and answered with “Goodbye” (see Treusch, 2015, pp. 140ff.). This reaction evoked laughter in the audience. Notably, this laughter did not express skepticism toward the project of the robot companion in general. Rather, I experienced it as a kind of sympathetic reaction that ties in with the widely shared experience of miscommunication with technical devices. The engineer who was wearing the headset during this demonstration joined in with the laughter and, in this relaxed atmosphere, explained the following:

I always have to switch off the transmitter for the microphone, so that—while I’m now explaining something to you—it doesn’t hear me and might eventually understand something, then it will talk one’s ears off about what it wants to get rid of at the moment. (D1, July 2011; translation: PT)

Part of the headset is a “mute switch” (D1, July 2011) that has to be operated in a specific way. In explaining this, the engineer takes up the relaxed atmosphere and makes a joke about Armar that draws upon his experiences with failures in communication with the robot: if he accidentally talks into the microphone, the robot may recognize words and reply to them.
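The engineer’s explanation points to a technical distinction between speech recognition and understanding: the recognizer may transcribe an utterance perfectly, yet the dialogue system acts only on utterances that match its limited command repertoire, falling back to a canned reply otherwise. The following minimal sketch is purely illustrative; the function names and command list are invented here, not taken from Armar’s actual software, though the “Goodbye” fallback echoes the demonstration described above:

```python
# Illustrative sketch (hypothetical, not Armar's software): recognized speech
# is only "understood" if it matches a small command grammar.

# Hypothetical command grammar: keyword tuples mapped to robot actions.
KNOWN_COMMANDS = {
    ("bring", "cup"): "fetching the cup",
    ("open", "fridge"): "opening the fridge",
}

KEYWORDS = {"bring", "cup", "open", "fridge"}

def interpret(transcript: str, mic_muted: bool) -> str:
    """Map a recognized transcript to a robot reply, or fall back."""
    if mic_muted:
        return ""  # mute switch engaged: the robot hears nothing at all
    # Keep only grammar keywords from the (perfectly recognized) transcript.
    words = tuple(w for w in transcript.lower().split() if w in KEYWORDS)
    action = KNOWN_COMMANDS.get(words)
    if action is None:
        # Recognized but not understood: canned fallback reply.
        return "Goodbye"
    return action
```

Run on sample utterances, `interpret("please bring me the cup", mic_muted=False)` matches the grammar, while an off-script utterance such as `interpret("what is the weather", mic_muted=False)` triggers the fallback even though every word was recognized.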

Figure 2: Communication with Armar (during a demonstration)

Remarkably, what could be considered a failure in communication is taken up here as something that reveals insights into Armar’s apparent state of mind, and Armar is assigned character traits. This depicts a peculiar form of anthropomorphizing the machine, which works to bridge the gap between expectations of the humanlike robot as a potential companion, on the one hand, and the reality of current technology, on the other.

During a conversation with the same engineer (here called KL), I asked him how he personally sees human-likeness in Armar. He answered as follows:

KL: Hm, so I find this dialogue system, when he talks about random stuff, this sometimes has human traits, when he says that he runs out of steam [laughing]. Of course, this is all justifiable, because he misunderstood something and then reacts like this, but these situations are so comical, in which you see some humanity.... But then, precisely with such things that you haven’t seen very often, I believe they have this surprise effect, even though in my case I’ve been here for a long time. And I guess that external people see something they haven’t seen so far, or not often, and that it contains something human. (Interview engineer KL, July 2011; translation: PT)

What could be regarded as a failure in communication seems to become an indicator of success in the sense of successfully realizing a robot that is legible as human, not least in showing comical behavior (for a detailed analysis of this excerpt, see Treusch, 2015, pp. 202–203). Behaviors manifested by the machine that are regarded as unexpected and have a surprising effect—even for practitioners familiar with the machine and its inner workings—are perceived as comical. Thus, when Armar fails to execute an order, this in turn is interpreted as a form of dissident behavior by the machine. The refusal makes the machine appear more human than the simple execution of orders would.

3.2. Getting Familiar with Armar—Making the Interface a Success

The design of the kitchen laboratory already reveals significant conditions for realizing the robot companion: this room is built in a very accessible manner, the color white dominates, and it is always very bright in the room. Creating these “open labs” (Treusch, 2015, p. 69) involves generating a certain atmosphere that has as much of a structuring effect on actions in this space as the spatial arrangements of the kitchenette and computer workstation. I experienced the atmosphere as inviting, and therefore also as indicating a certain kind of transparency in realizing the project of the humanlike kitchen robot. This impression reverberates with the popularity that humanoid robots in general, and this machine in particular, have already gained and that is supposedly still increasing.

Even though humanoid robots have become so popular, they are still a new technology. Thus, introducing this new technology in a way that generates familiarity is of special importance in contemporary robotics. In my study, I explored the beginning of demonstrations in detail, because this part plays an important role in making the audience familiar with Armar. At the start of demonstrations, Armar is initialized. This process is used to introduce the robot’s body to the audience as a humanlike body. The engineer standing next to the robot uses his own body along with verbal explanations to illustrate the human-likeness. For instance, he cross-references his own arm and the robot’s arm while explaining that “it has a neck, and a head, and arms with elbows, which we are seeing...and shoulder joints such like in the human arm” (Demonstration 1, July 2011).

Morana Alač (2009) has explored work on humanoid robots in a different setting, through what she frames as “indexing the two bodies,” which stipulates “physical proximity” and “allows for exploration through touch” (p. 496). I identify these practices of endowing the robot with body parts and indexing its body as a way of inviting the audience to see and to feel the robot’s body. The engineer does not merely use his own body to show the humanoid body parts—his body parts that are its body parts. Rather, he touches his/its arm and enacts proximity corporeally and, importantly, also intra-actively. Touch is an important signifier of capacities to affect and to be affected. As Maria Puig de la Bellacasa (2009) highlights, “touching is…the experience par excellence in which boundaries between self and other are blurred” (p. 298). This thought resonates with Barad’s (2012) take on “the nature of touching” (p. 207). Notably, touching in a cross-referencing mode can be analyzed not only as the intra-active enactment of a bodily scheme of human-likeness between Armar, the engineer, and the audience, but also as a form of affective labor in making the audience feel what a robot is like and what it is capable of doing.

With Alač (2009), I furthermore contemplate these intra-active practices of association in terms of “‘getting into’ the body of the machine” (p. 496). This notion nicely displays the corporeal, processual, and supra-individual practices of learning to see, know, and feel robot companionship. Touching in particular invokes “awareness of the embodied character of perception, affect and thinking” (Puig de la Bellacasa, 2009, p. 297). Both visitors and engineers are trained to develop a sense of Armar as humanlike, including its shape as much as its behavior. The affective practices of association assembled here seem to guarantee the success of the human/robot interface.

Along these lines, I suggest delving more deeply into the interrelations between entangled agencies and affects, as they seem to build the core of practices of association.

3.3. Entangled Agencies and Affects in Making the Robot Companion a Success

Demonstrations in the kitchen laboratory are of special value—their success is of utmost importance to everyone involved. Accordingly, the expectations of both the engineers and the visitors create specific conditions for interaction at the human/robot interface. Practices of association between visitors, engineers, and the robot entangle knowing and feeling, as they involve all the senses and, as I will argue, build the preconditions for successful interaction in the kitchen laboratory.

The project of realizing robot companionship can be considered as opening up new realms of computation and engineering. However, the new depends upon being integrated into the familiar in order to become a market success. As I have shown, the humans (including engineers and visitors) become familiar with Armar and vice versa through getting to know each other—involving all the senses. Thus, I argue that the practices of association rely heavily on what are regarded as noncognitive styles of knowing, such as affects and emotions. Against the backdrop of my posthumanist performative account of entangled agencies in practices of association, I suggest an approach that treats affects as the basic social glue, as that which enables relating and interacting (corporeally) and directs the analytical attention toward “pre-individual bodily forces augmenting or diminishing a body’s capacity to act” (Clough, 2008, p. 1). From this perspective, I regard affect as “an impingement or extrusion of a momentary or sometimes more sustained state of relation” (Seigworth & Gregg, 2010, p. 1), which also demands that “we” take into account the pre-individual, processual, and corporeal nature of affects and understand those affects as forces as well as seemingly individual emotions. Sara Ahmed points out that a strict division between emotions and affects as objects of knowledge is a problematic form of sorting operation (Ahmed, 2014, p. 1), especially against the backdrop of scholarship following the so-called “affective turn” (Clough, 2008, p. 1). Moreover, affects, understood as emotions as well as corporeal forces, have conventionally been relegated to the realm of craft and reproductive labor, as opposed to scientific rationality and expertise. The latter is associated with the operations of the mind, while the former is marked by the burdens of embodied bonds and bodily labor.
As Ahmed (2014) has illustrated, the urge for a new scholarship on affect is pitted against an “implied impasse” that is assumed to be the result of a history of queer and feminist concerns with the body and emotions, “in which body and mind, and reason and passion, were treated as separate” (p. 4). For Ahmed, such a distinction between affect and emotion also carries a tendency to gender knowledge (production), as she points out: “if affects are unmediated and escape signification; emotions are mediated and contained by signification” (Ahmed, 2014, p. 4). The “gendered distinction” (Ahmed, 2014, p. 4) between affects and emotions privileges the one over the other as the more proper object of study and thus creates the danger of perpetuating a gendered, epistemological framework of Western modernities. Precisely through this differentiation of affects from emotions, the former are assigned the status of a proper object of knowledge, while the latter are in danger of being banished from academia together with “certain styles of thought” such as queer and feminist modes of knowing, the “‘touchy feely’ styles of thought” (Ahmed, 2014, p. 4).

From this perspective, I suggest mapping the affective capacities surrounding the centralized human/robot interface along with the emotions that emerge at this interface as also processual, codependent, and corporeal in nature. Furthermore, I consider this analytical prism to be an intervention into the hegemony of gendered patterns of rationality in AI and robotics and as an opportunity for an active queer and feminist contribution to this field of (applied) knowledge. The realization of robot companions is a highly affective process that involves knowledge as well as all the senses, which makes this process notably supra-individual and corporeal. The everyday tasks of the engineers who are working on and with the robot mostly rely on rational forms of knowing and applying their knowledge about how to realize such a machine. However, the approach that I have suggested emphasizes the intra-active practices of association, and through this it becomes an entry point that allows “us” to work toward overcoming the modern dichotomies between passion and reason, rational and irrational, and failure and success from several sides; that is to say, from a queer-feminist theorization of failure and its insights into shifting “our” understandings of how “we” know and from the actual practices performed in one laboratory of recent humanoid robotics.

4. Toward a Low Theory of Intra-Action in the Kitchen Laboratory: A Mode of Relating Differently?

Demonstrations as examples of the theatre of use and the theatre of proof are key to making humans familiar with the humanlike robot; that is, to the process of making humanoid robot companions real. Situations in which a demonstration does not proceed smoothly reveal the conditions and practices of realization. They display practices of coping with failure as I have witnessed them. In these situations, the connections between knowing, perception, emotions, and affects are displayed in unique ways. In such cases, we are not observing predetermined identities with agential properties meeting and interacting; rather, distinct agencies that are mutually entangled, intra-active, become palpable as they emerge from practices of association. The latter involve largely noncognitive—that is, touchy-feely—styles of thought. Thus, I highlight that touchy-feely styles of knowing are already part of the hegemonic knowledge project of companion robots.

Even though robot companionship might be conceptually understood as a functional service relation between human and robot, the insights gained from demonstrating Armar reveal that this concept does not do justice to the actual relations between engineers, audience, the robot, and other devices involved in the enactment of robot companionship. From this perspective, the insights into interaction in the kitchen laboratory presented here permit two conclusions: Firstly, emotions and affects do not belong to a realm outside of the laboratory, but are fundamental to the knowledge project of humanoid robotics. Secondly, interaction in this setting displays the highly affective, that is corporeal, processual, and supra-individual nature of working and knowing in this branch of robotics. These insights not only allow me to link a queer theory of failure with companion robotics, but also to stipulate a different kind of understanding of the knowledge project of robot companionship: one that pivots around reconsidering the nature of human/machine relations through the inseparability of emotionality and rationality as well as the intra-active quality of human/machine relations.

My account of a low theory of robotics implies a need to acknowledge the conditions for success in the laboratory, and through this also to tweak the narrative of progress and to install an understanding of progress that encompasses its enactment, including the acts and actors. For instance, I identify the capacity to laugh with the machine as the affective labor of bridging gaps and coping with the (still existing) constraints that limit successful interaction at the human/robot interface. Practices of becoming trained and learning to see, feel, and know what the humanoid robot can do—with all its limitations—emerge as the central condition for realizing robot companionship. Acknowledging this might allow the robotics laboratory to become a less disciplined zone of knowledge production, in which goals are much less predetermined. The format of the demonstration, with its opportunities to become a queering witness, would lend itself perfectly to this.

In addition, my analysis requires making the issue of how to realize human-robot relations of companionship into an open question that can only be answered in conversation with the specific needs emerging from situated relations of companionship. I envision such conversations as being part of an interdisciplinary low theory of robotics. Feminist and queer counterknowledge on human/machine relations could make an active contribution to robotics, primarily on two levels. First, as Wilson’s and my studies have shown, there is a deep connection between feeling and knowing that calls for rethinking not only the dichotomies between passion and reason, scientific and unscientific, and body and mind, but also hegemonic narrations of human/machine relations. This claim resonates with the process I have illustrated: demonstrations require a robot companion that shows humanlike capacities just as much as capacities to affect and to be affected. These pivot around emotions such as curiosity about connecting with the machine other as well as laughter.

This is also where failure comes into play as, second, I have depicted how anthropomorphism is used to explain machine failure in terms of characteristic human traits. However, my argument here is that it is precisely the situation in which failure is interpreted as a humanlike refusal of the machine to obey that could become a point of departure for innovative collaborations beyond the figure of the universal human and “his narrative” of success and progress through rationality. Instead of practicing forms of association that rely on finding “the human” in the robot, robotics could create new forms of intimacy with machines that become a starting point for critical, self-reflecting encounters with social norms in their restrictions upon what it means to be a human(-like) companion. This, however, would imply at least withstanding, if not expediting, the negativity of failure as an undoing—an undoing of the ideas of individual success and concomitant autonomous agency as well as bodily integrity, but also of the implemented logics of success of a service-oriented robot companionship that defines human/machine relations of mastery over the machine as a form of mastery over what are defined as the critical challenges of “our” societies.

Finally, I raise with Susan Leigh Star (1995, p. 89) the question of “cui bono?”—who benefits? As part of developing a low theory on the art of failure in robotics, I imagine creating a “contact zone” (Haraway, 2008, p. 36) between robotics and a queer theory of failure that unfolds through “mundane” practices of “accountability, caring for, being affected, and entering into responsibility” (Haraway, 2008, p. 36). This contact zone would be an interdisciplinary, less disciplined zone of knowledge production that acknowledges the interconnectedness of feeling and knowing as well as the indeterminacy of human/machine relations. It would also continuously challenge the idea of bodily integrity as well as capitalist standards—including those related to standards of organizing reproductive labor, especially elderly care—and social as well as technoscientific norms of embodying, knowing, and feeling companionship with robots. Not least, I consider such a new branch of “low robotics” an opportunity to lead the way out of what is perceived as one impasse that haunts the field of contemporary robotics: AI’s longstanding failure to build more robust, accountable, intelligent agents.


Acord, L., Bisbee, J. K., Bisbee, S., Niederhoffer, G. (Producers), & Schreier, J. (Director). (2012). Robot & Frank [Motion picture]. United States: Dog Run Pictures.

Adam, A. (1998). Artificial knowing: Gender and the thinking machine. New York, NY: Routledge.

Ahmed, S. (2014, October 15). Out of sorts. Feminist Killjoys. Retrieved from

Alač, M. (2009). Moving android: On social robots and body-in-interaction. Social Studies of Science, 39(4), 491–528.

Alaimo, S. (2010). Bodily natures: Science, environment, and the material self. Bloomington, IN: Indiana University Press.

Barad, K. (2003). Posthumanist performativity: Toward an understanding of how matter comes to matter. Signs, 28(3), 801–831.

______. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Durham, NC: Duke University Press.

______. (2012). On touching—The inhuman that therefore I am. differences: A Journal of Feminist Cultural Studies, 23(3), 206–223.

Blomkamp, N., & Kinberg, S. (Producers), & Blomkamp, N. (Director). (2015). Chappie [Motion picture]. United States & South Africa: Columbia Pictures.

Clough, P.T. (2008). The affective turn: Political economy, biomedia and bodies. Theory, Culture and Society, 25(1), 1–22.

Franchi, S., & Güzeldere, G. (2005). Machinations of the mind: Cybernetics and artificial intelligence from automata to cyborgs. In S. Franchi & G. Güzeldere (Eds.), Mechanical bodies, computational minds: Artificial intelligence from automata to cyborgs (pp. 15–149). Cambridge, MA: MIT Press.

Halberstam, J. (2011). The queer art of failure. Durham, NC: Duke University Press.

Haraway, D.J. (1991). Simians, cyborgs, and women: The reinvention of nature. New York, NY: Routledge.

______. (1997). Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™: Feminism and technoscience. New York, NY: Routledge.

_______. (2003). The companion species manifesto: Dogs, people, and significant otherness. Chicago, IL: Prickly Paradigm Press.

_______. (2008). When species meet. Minneapolis, MN: University of Minnesota Press.

Harding, S. (2008). Sciences from below: Feminisms, postcolonialities, and modernities. Durham, NC: Duke University Press.

Puig de la Bellacasa, M. (2009). Touching technologies, touching visions: The reclaiming of sensorial experience and the politics of speculative thinking.
Subjectivity, 28, 297–315.

Robot Companions for Citizens. (n.d.). Retrieved from

Suchman, L. (2007). Human-machine reconfigurations: Plans and situated actions (2nd ed.). New York, NY: Cambridge University Press.

_______. (2011). Subject objects. Feminist Theory, 12(2), 119–145.

Suchman, L., Trigg, R., & Blomberg, J. (2002). Working artefacts: Ethnomethods of the prototype. British Journal of Sociology, 53(2), 163–179.

Seigworth, G. J., & Gregg, M. (2010). An inventory of shimmers. In M. Gregg & G. J. Seigworth (Eds.), The affect theory reader (pp. 1–25). Durham, NC: Duke University Press.

Subramaniam, B., & Willey, A. (2016). Call for papers—Science out of feminist theory for the journal Catalyst: Feminism, Theory, Technoscience.

Star, S. L. (1995). The politics of formal representations: Wizards, gurus, and organizational complexity. In S. L. Star (Ed.), Ecologies of knowledge: Work and politics in science and technology (pp. 88–118). Albany, NY: State University of New York Press.

Treusch, P. (2015). Robotic companionship: The making of anthropomatic kitchen robots in queer feminist technoscience perspective. Linköping, Sweden: LiU Press. Retrieved from

Turkle, S. (2005). The second self: Computers and the human spirit (20th anniversary ed.). Cambridge, MA: MIT Press. (Original work published 1984)

Wilson, E.A. (2010). Affect and artificial intelligence. Seattle, WA: University of Washington Press.


Pat Treusch
is a lecturer at the Center for Interdisciplinary Women’s and Gender Studies, where she teaches feminist STS across disciplinary boundaries. She is currently also a postdoc in the BMBF Research Group MTI-engAge (Interdisciplinary Human-Robot Interaction Research), both at Technical University Berlin. Her research interests include interdisciplinary feminist science and technology studies with a focus on human-machine interaction, queer theory, feminist materialisms, affect studies, and research-based teaching methods.


Copyright (c) 2017 Pat Treusch


ISSN 2380-3312