
By Rhied Al-Othmani

This month, I attended the CONVERSATIONS 2023 conference hosted by the University of Oslo, Norway, together with my colleague Sarah Mbawa. The conference delved into a range of topics around Conversational Agents across sectors and technological developments, such as Large Language Models (LLMs), User Experience Design, and ethical dimensions and bias.

The conference discussions mostly centered on Large Language Models, with particular emphasis on their diverse applications. One example explored was the innovative use of “Simulated Users”: testing with fictional AI users provides a controlled environment in which AI models and a chatbot’s conversational design can interact, ensuring that conversations align with the intended flows. This is especially valuable when real participants are hard to recruit. Further, the integration of robotics and LLMs in sales, exemplified by “Saleshat”, was a focal point. This fusion, guided by Microsoft design principles, raised privacy concerns, specifically about where the data is transmitted. Additionally, addressing bias in product recommendations was discussed, with the suggestion to shift decision-making away from the model and instead use the LLM to create a natural conversation flow.
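As a rough illustration of the simulated-user idea, the sketch below lets one persona-driven LLM play the user against the chatbot under test. This is a minimal sketch of the general technique under my own assumptions, not the setup presented at the conference: the persona text and function names are illustrative, and any chat-completion backend can be plugged in as the two callables.

```python
from typing import Callable, Dict, List

# A "message" is a {"role": ..., "content": ...} dict, as in common chat APIs.
Message = Dict[str, str]
ChatBackend = Callable[[List[Message]], str]  # any chat-completion backend

# Illustrative persona; real test suites would vary goals, tone, and knowledge.
USER_PERSONA = (
    "You are a simulated user testing a municipal chatbot. "
    "You want to renew a parking permit but are vague about the details. "
    "Reply with one short user message at a time."
)

def run_simulated_dialogue(bot: ChatBackend, sim_user: ChatBackend,
                           opening: str, turns: int = 4) -> List[Message]:
    """Let a persona-driven LLM play the user against the chatbot under test."""
    transcript: List[Message] = [{"role": "user", "content": opening}]
    for _ in range(turns):
        transcript.append({"role": "assistant", "content": bot(transcript)})
        # The simulated user sees the conversation from the opposite side,
        # so roles are flipped before generating the next user turn.
        flipped = [{"role": "system", "content": USER_PERSONA}] + [
            {"role": "user" if m["role"] == "assistant" else "assistant",
             "content": m["content"]}
            for m in transcript
        ]
        transcript.append({"role": "user", "content": sim_user(flipped)})
    return transcript
```

The resulting transcripts can then be checked against the intended conversation design, which is what makes this useful when recruiting real participants is difficult.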

The discourse extended to the prevalence of dark patterns in chatbot design, drawing parallels with the deceptive design tactics employed on websites and in other media; an overview of such tactics can be found at www.deceptive.design/types. The emergence of deceptive practices in chatbots underscores the need for a closer examination of user perceptions, often gleaned from social media platforms such as Reddit.

During the conference, two keynote speakers also delved into LLMs, our new way of talking to computers, and raised some important points. We need to control what goes in and what comes out: predicting outcomes is tough, and keeping results consistent is even tougher. Checking and aligning outputs with our values through QA monitoring is a must, and “Human in the Loop” emphasizes letting people do what genuinely requires people while keeping automated work automated.

One speaker highlighted that using technology is a political decision; tech is never neutral and has real-world impacts. Learning from past mistakes, such as Cambridge Analytica and the misuse of social media, is crucial. The speaker also pointed out the concentration of power in big companies such as Microsoft and OpenAI, which shape the foundation for today’s chatbots. As these companies move from giving away free services to making money, it is time for action: we need to think about the bigger ecosystem, raise awareness, clarify the rules, and make sure big companies are not merely fined but also ‘punished’ by taking back their models and data. For the public sector, the suggestion is to create our own safe spaces for data, such as our own clouds, instead of relying on big tech. This is especially important for data from students, children, and citizens. In short, dealing with LLMs means more than just talking to computers: it is about control, doing what is right, and making sure everyone plays fair.
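To make the QA-monitoring and human-in-the-loop point concrete, below is a minimal sketch, entirely my own illustration rather than anything presented at the conference, of a gate that checks a chatbot reply before it reaches the user and escalates to a human reviewer when a check fails. The regex patterns and function names are placeholders; a production system would rely on trained classifiers, policy engines, and proper review tooling.

```python
import re
from dataclasses import dataclass
from typing import Callable

# Placeholder checks; real deployments would use trained classifiers,
# policy engines, and PII detectors instead of regexes.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{9}\b"),                   # e.g. a leaked citizen ID number
    re.compile(r"guarantee(d)? refund", re.I),  # promises the bot may not make
]

@dataclass
class GateResult:
    approved: bool
    reason: str

def qa_gate(reply: str) -> GateResult:
    """Approve a bot reply or flag it for human review ('human in the loop')."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(reply):
            return GateResult(False, f"matched blocked pattern: {pattern.pattern}")
    return GateResult(True, "passed automated checks")

def deliver(reply: str, escalate: Callable[[str, str], str]) -> str:
    """Send approved replies directly; route the rest to a human reviewer."""
    result = qa_gate(reply)
    return reply if result.approved else escalate(reply, result.reason)
```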

Diving into the realm of chatbot attributes, the conference touched upon attributes implemented in chatbots such as personality and gender, shedding light on the distinction between human-likeness and anthropomorphism: while human-likeness concerns the features assigned to chatbots, anthropomorphism concerns how we perceive them, as human-like or as distinct entities. The discussion extended to the attribution of personalities to chatbots, prompting reflection on the how and why. The incorporation of the Big Five personality traits into chatbot design was explored, emphasizing that machines lack consciousness but are assigned specific behaviors to emulate a personality.

Gender, a complex and multidimensional construct, emerged as another significant aspect. Despite its intricacies, chatbots are often designed with gendered identities, a practice influenced by stereotypes that raises ethical concerns. The need for exploration and understanding of chatbot gendering was underscored, with workshops and ethical methodologies highlighted as ways to counter stereotypes. The discourse further touched on the concerning rise of chatbot abuse, with the notable observation that female-presenting chatbots tend to receive more abuse. One suggested strategy to address this is to make chatbots respond to abuse, fostering user awareness.
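As a toy illustration of how personality is assigned rather than possessed, the sketch below maps Big Five trait scores onto style instructions for a chatbot’s system prompt. The trait-to-style mapping is invented for illustration and does not come from the conference presentations.

```python
# Invented trait-to-style mapping: (instruction for a low score, for a high score).
BIG_FIVE_STYLE = {
    "openness":          ("use plain, familiar wording", "offer creative alternatives"),
    "conscientiousness": ("keep answers loose and informal", "be precise and structured"),
    "extraversion":      ("be brief and reserved", "be warm and talkative"),
    "agreeableness":     ("be direct and matter-of-fact", "be empathetic and accommodating"),
    "neuroticism":       ("stay calm and reassuring", "acknowledge uncertainty openly"),
}

def personality_prompt(traits: dict[str, float]) -> str:
    """Turn 0..1 trait scores into a system-prompt fragment (illustrative only)."""
    lines = [high if traits.get(trait, 0.5) >= 0.5 else low
             for trait, (low, high) in BIG_FIVE_STYLE.items()]
    return "In your replies, " + "; ".join(lines) + "."

# e.g. an extraverted, agreeable persona:
print(personality_prompt({"extraversion": 0.9, "agreeableness": 0.8}))
```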

The full programme and the list of papers published for CONVERSATIONS 2023 can be found on the conference website.

Visiting this conference prompted reflection on my current study, Bots of Trust: An interdisciplinary discourse. This PhD project focuses on automated Conversational Agents (e.g., chatbots and voice assistants) and their role in user services in the public domain. Automated forms of communication have become increasingly common in interactions between organisations and their target groups, raising questions of relationship-building, trust, forms of service use, and data ethics. The interdisciplinary study critically examines how citizens’ interactions with Conversational Agents shape trust in public organisations. The project is part of the broader research programme “Public Services in Digital Transition” conducted by the research group Marketing & Customer Experience (MCX) at the HU. The goal is to provide a perspective on digital maturity, along with tools for strategy development and interventions for public organisations (tax authorities, public transport, healthcare, and Dutch municipalities), with the objective of preventing and recovering damaged trust in organisations. By connecting insights from different disciplines (media and communication studies, human-computer interaction research, and marketing) with experimental research, this study contributes to the scientific understanding of automated communication technology in daily life.

Photo by Arvid Malde
