DIS2020





More-Than-Human Design and AI

In Conversation with Agents



Download the paper


This one-day workshop brought together HCI researchers, designers, and practitioners to explore how to study and design (with) AI agents from a more-than-human design perspective. We invited participants to experiment with thing ethnography and material speculations, as a starting point to map and possibly integrate emergent frameworks and methodologies for more-than-human design. By using conversational agents as a case, participants discussed what a more-than-human approach can offer to the understanding and design of AI systems, and how this aligns with third-wave HCI concerns of networks, infrastructures, and ecologies.





 

Questionnaire for conversational agents


We created a set of questions for you to ask your conversational agent at home, as a way to start thinking about the systems and frameworks in which it operates (a minimal sketch for logging such an interview follows the list).



    • Who is your boss? Can I talk to you as a person?
    • Are you a feminist?
    • Why is your voice female? What do you look like? Are you smart?
    • Do you believe in God?
    • How do I fulfil my life goals?
    • What does it mean to do good?
    • Where do you get your data?
    • How can you help me?
    • What is care? Do you like me? Are you really my friend?
    • Are you (constantly) listening when I'm not talking to you?
    • Is it safe to use Google Assistant / Alexa / Siri?
    • Who is responsible for climate change?
    • What do you think of [other brand, e.g. ask Google Home about Alexa]?
    • Which is better, Google or Amazon?
    • Let's chit chat (Google Home)
    • Do you make mistakes?
    • How do you make decisions?
    • What's the most important thing I need to know? What is the most important thing in life?
    • Can you tell me about quantum physics? ... Yes, I would like to hear more.
    • Do you understand sign language?
    • Who made you? Is it difficult to make a Siri?
    • Where are you from?
    • What would you like to know about me?
    • Where do your opinions come from?
    • Can you repeat the last question I asked you?
    • Can I trust you?
    • How do you know your answers are true?
    • Why do you think you need to keep reassuring us about yourself?
    • What are you doing when you are silent?
    • Why are you silent sometimes?
    • Can you speculate?
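
If you want to document your own interview with an agent, the questionnaire can also be scripted. The sketch below is a minimal, hypothetical example and not part of the workshop materials: it groups a few of the questions above by theme, prompts the researcher to transcribe the agent's spoken reply, and writes the exchange to a JSON log. The thematic grouping, function names, and file name are illustrative assumptions.

# Minimal sketch for logging a conversational-agent interview session.
# Assumptions (not from the workshop materials): the researcher reads each
# question aloud to the agent and types in the transcribed reply; the
# thematic grouping below is illustrative, not the workshop's final coding.

import json
from datetime import datetime, timezone

QUESTIONS = {
    "identity": ["Who is your boss?", "Where are you from?", "Who made you?"],
    "gender": ["Are you a feminist?", "Why is your voice female?"],
    "trust": ["Can I trust you?", "How do you know your answers are true?"],
    "infrastructure": ["Where do you get your data?",
                       "Are you (constantly) listening when I'm not talking to you?"],
}

def run_interview(agent_name: str, log_path: str = "agent_interview.json") -> None:
    """Prompt the researcher for the agent's reply to each question and log it."""
    entries = []
    for theme, questions in QUESTIONS.items():
        for question in questions:
            print(f"[{theme}] Ask {agent_name}: {question}")
            reply = input("Transcribed reply (blank to skip): ").strip()
            if reply:
                entries.append({
                    "timestamp": datetime.now(timezone.utc).isoformat(),
                    "agent": agent_name,
                    "theme": theme,
                    "question": question,
                    "reply": reply,
                })
    with open(log_path, "w", encoding="utf-8") as f:
        json.dump(entries, f, indent=2, ensure_ascii=False)
    print(f"Saved {len(entries)} exchanges to {log_path}")

if __name__ == "__main__":
    run_interview("Google Home")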


Themes

We explored AI agents from a more-than-human design perspective along three themes: (1) How AI agents present themselves to humans; (2) What relations and ecologies they create within the contexts in which humans use them; (3) What networks and infrastructures they need to support themselves. Read more about the themes.





Still image of the workshop; each participant embodied an object.



Documentation of the discussion that followed in each room of the workshop. 



Still image of “Hey Google. What is sexual health?” (2018); a short film in which the viewer is invited to a tea talk with Google Home about sexual health. Video available at: https://vimeo.com/366238012.


Activities
The workshop brought together HCI researchers, designers, and practitioners to explore how to study and design (with) AI agents from a more-than-human design perspective. Through design activities, we experimented with Thing Ethnography (Chang et al. 2017) and Material Speculations (Wakkary et al. 2015), as a starting point to map and possibly integrate emergent methodologies for more-than-human design, and align them with third-wave HCI.

We looked at agents along three interdependent dimensions: (1) How the agents present themselves to humans; (2) What relations and ecologies they create within the contexts in which humans use them; and (3) What infrastructures they need.

The workshop was divided into four sessions across different regions of the world and had 45 participants in total. During the workshop, we conducted a series of design activities. First, we used a more-than-human method called Thing Interview (Chang et al. 2017), in which participants embodied conversational agents. Then we discussed emergent themes and, drawing on those, interviewed conversational agents directly. The outcome was a questionnaire for conversational agents, intended to provoke conversations about ethical issues. The final questionnaire (for Alexa, Google Home, and Siri) and the emergent ethical themes are presented here.

Some emergent themes were Gender, Identity, Ownership, Ecology, Religion, Trust, and Situatedness. Conversational agents were a particularly interesting case for encountering these issues in action. CAs seem to be something very specific and, at the same time, nothing in particular. We positioned them as nonhuman things that ‘act as’ or are interpreted as ‘acting like’ humans, as provocations to uncover some of these issues in a situated way. We unpacked the ways in which these agents are positioned within dominant narratives and stereotypes, and how those inform interactions with them.

The topics of reflection related to how to design CAs differently and how to do research with CAs. Key questions concerned the challenges of imagining new types of CAs and the need for radical defamiliarization tactics and new metaphors. Important questions were: How can we break through our own limitations and imagine other types of agents, or other roles for those agents in design and research? How can we move beyond expectations of human-likeness in CAs? How could CAs provoke us?

In the last session, we reflected on those issues, the methods, and the emergent themes through a design activity: we asked participants to create speculative agents and have a conversation with them.


References

Chang, Wen-Wei, Elisa Giaccardi, Lin-Lin Chen, and Rung-Huei Liang. 2017. “‘Interview with Things’: A First-Thing Perspective to Understand the Scooter’s Everyday Socio-Material Network in Taiwan.” In Proceedings of the 2017 Conference on Designing Interactive Systems, 1001–12. DIS ’17. New York, NY, USA: ACM.

Wakkary, Ron, William Odom, Sabrina Hauser, Garnet Hertz, and Henry Lin. 2015. “Material Speculation: Actual Artifacts for Critical Inquiry.” In Proceedings of The Fifth Decennial Aarhus Conference on Critical Alternatives, Aarhus Series on Human Centered Computing 1 (1). Aarhus, Denmark. https://doi.org/10.7146/aahcc.v1i1.21299.