HC8298
Data Ownership and Privacy in Personalized AI Models in Assistive Healthcare
Boris Debic, Luka Medvidovic
10 min. talk | August 8th at 11:30 | Session: Human Centred AI 1/2
This paper examines the ethical and legal challenges raised by the use of personalized artificial intelligence (AI) models in assistive healthcare. We look at a number of situations in which AI models interact closely with persons under care, and we are particularly interested in questions of model and data ownership, privacy, and their ethical implications. The paper also surveys the existing regulatory environment, including the US Congress’s legislative initiatives and the US Federal Trade Commission’s examination of AI. Additionally, it covers AI strategies such as modular AI and discusses possible solutions to the issues described. We present an overview of the current outstanding problems and, with this work, offer a researched and organized contribution to the public discussion of the responsible application of AI in this field of healthcare. Our selection of topics is guided by the key stakeholders: technology providers, healthcare or care providers, care beneficiaries, and their families.
HC8479
Reassessing Evaluation Functions in Algorithmic Recourse: An Empirical Study from a Human-Centered Perspective
Tomu Tominaga, Naomi Yamashita, Takeshi Kurashima
10 min. talk | August 8th at 15:00 | Session: Human Centred AI 2/2
In this study, we critically examine the foundational premise of algorithmic recourse, a process of generating counterfactual action plans (i.e., recourses) that help individuals reverse adverse decisions made by AI systems. The assumption underlying algorithmic recourse is that individuals accept and act on recourses that minimize the gap between their current and desired states. This assumption, however, remains empirically unverified. To address this issue, we conducted a user study with 362 participants and assessed whether minimizing the distance function, a metric of the gap between the current and desired states, indeed prompts them to accept and act upon suggested recourses. Our findings reveal a nuanced landscape: participants’ acceptance of recourses did not correlate with the recourse distance. Moreover, participants’ willingness to act upon recourses peaked at the minimal recourse distance but was otherwise constant. These findings cast doubt on the prevailing assumption of algorithmic recourse research and signal the need to rethink evaluation functions to pave the way for human-centered recourse generation.
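The distance function at the center of this study is easy to make concrete. Below is a minimal sketch, assuming a simple weighted L1 metric over tabular features; the feature names, weights, and values are illustrative and not taken from the paper.

```python
# A minimal sketch (not the authors' code) of the kind of weighted distance
# function that recourse-generation methods typically minimize, where a
# smaller value is assumed to mean a more acceptable recourse -- exactly the
# assumption the study tests.
import numpy as np

def recourse_distance(current, desired, weights=None):
    """Weighted L1 gap between the current state and a suggested state."""
    diff = np.abs(np.asarray(desired, float) - np.asarray(current, float))
    if weights is None:
        weights = np.ones_like(diff)
    return float(np.sum(weights * diff))

# Illustrative features: [income, debt, years_employed]
current_state = np.array([30_000, 12_000, 1.0])
recourse_a = np.array([33_000, 10_000, 1.0])   # small change
recourse_b = np.array([45_000,  2_000, 3.0])   # large change
costs = np.array([1.0, 1.0, 1_000.0])          # hypothetical per-feature costs

for name, rec in [("A", recourse_a), ("B", recourse_b)]:
    print(name, recourse_distance(current_state, rec, costs))
# A distance-minimizing recourse generator would recommend A; the study asks
# whether users actually prefer and act on such minimal-distance recourses.
```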
HC8596
Emergence of Social Norms in Generative Agent Societies: Principles and Architecture
Siyue Ren, Zhiyao Cui, Ruiqi Song, Zhen Wang, Shuyue Hu
10 min. talk | August 8th at 15:00 | Session: Human Centred AI 2/2
Social norms play a crucial role in guiding agents towards understanding and adhering to standards of behavior, thus reducing social conflicts within multi-agent systems (MASs). However, current LLM-based (or generative) MASs lack the capability to be normative. In this paper, we propose a novel architecture, named CRSEC, to empower the emergence of social norms within generative MASs. Our architecture consists of four modules: Creation & Representation, Spreading, Evaluation, and Compliance. Together, these modules address several important aspects of the norm emergence process in one architecture: (i) where social norms come from, (ii) how they are formally represented, (iii) how they spread through agents’ communications and observations, (iv) how they are examined with a sanity check and synthesized in the long term, and (v) how they are incorporated into agents’ planning and actions. Our experiments deployed in the Smallville sandbox game environment demonstrate the capability of our architecture to establish social norms and reduce social conflicts within generative MASs. The positive outcomes of our human evaluation, conducted with 30 evaluators, further affirm the effectiveness of our approach. Our project can be accessed via the following link: https://github.com/sxswz213/CRSEC.
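For readers who want a feel for how the four modules fit together, here is a deliberately simplified, hypothetical skeleton of one agent step; the class and method names are our own illustration, and the actual implementation is in the linked CRSEC repository.

```python
# Hypothetical skeleton of a CRSEC-style agent step (for intuition only;
# see the linked repository for the actual implementation).
from dataclasses import dataclass, field

@dataclass
class Norm:
    text: str           # natural-language statement of the norm
    support: int = 0    # how often it has been observed or confirmed

@dataclass
class Agent:
    name: str
    personal_norms: list = field(default_factory=list)

    def create_and_represent(self, observation: str) -> Norm:
        # Creation & Representation: propose a candidate norm from experience.
        return Norm(text=f"One should not {observation}.")

    def spread(self, norm: Norm, neighbors: list) -> None:
        # Spreading: share the candidate norm through communication.
        for other in neighbors:
            other.personal_norms.append(norm)

    def evaluate(self, norm: Norm) -> bool:
        # Evaluation: sanity-check norms and keep those seen repeatedly.
        norm.support += 1
        return norm.support >= 2

    def comply(self, plan: str, norms: list) -> str:
        # Compliance: filter a planned action against accepted norms.
        for n in norms:
            if plan.lower() in n.text.lower():
                return "do nothing"
        return plan

alice, bob = Agent("Alice"), Agent("Bob")
candidate = alice.create_and_represent("play loud music at night")
alice.spread(candidate, [bob])
bob.evaluate(candidate)        # first observation
if bob.evaluate(candidate):    # a repeated observation crosses the threshold
    print(bob.comply("play loud music at night", bob.personal_norms))
# -> "do nothing": the planned action conflicts with the accepted norm
```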
HC8614
XAI-Lyricist: Improving the Singability of AI-Generated Lyrics with Prosody Explanations
Qihao Liang, Xichu Ma, Finale Doshi-Velez, Brian Lim, Ye Wang
10 min. talk | August 8th at 11:30 | Session: Human Centred AI 1/2
Explaining the singability of lyrics is an important but missing ability of language models (LMs) in song lyrics generation. This ability allows songwriters to quickly assess whether LM-generated lyrics can be sung harmoniously with melodies and helps singers align lyrics with melodies during practice. This paper presents XAI-Lyricist, which leverages musical prosody to guide LMs in generating singable lyrics and to provide human-understandable singability explanations. We employ a Transformer model to generate lyrics under musical prosody constraints and provide demonstrations of the lyrics’ prosody patterns as singability explanations. XAI-Lyricist is evaluated with computational metrics (perplexity, prosody-BLEU) and a human-grounded study (human ratings, average time and number of attempts for singing). Experimental results show that musical prosody can significantly improve the singability of LM-generated lyrics. A controlled study with 14 singers also confirms that the provided explanations help them interpret lyrical singability faster than reading plain-text lyrics.
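To make the idea of a prosody-based explanation concrete, the sketch below aligns the syllables of a lyric line with the number of melody notes and reports the mismatch; the syllable lexicon and the check itself are our own simplification, not the paper's prosody representation.

```python
# Illustrative only: one simple way to surface a prosody-style singability
# explanation, aligning syllables of a lyric line with melody note counts.
# The paper's actual prosody representation and constraints may differ.

def syllable_pattern(line: str, syllables: dict) -> list:
    """Look up per-word syllable counts (a real system would use a lexicon)."""
    return [syllables.get(w.strip(",.!?").lower(), 1) for w in line.split()]

def explain_fit(line: str, note_durations: list, syllables: dict) -> str:
    n_syllables = sum(syllable_pattern(line, syllables))
    n_notes = len(note_durations)
    if n_syllables == n_notes:
        return f"'{line}': {n_syllables} syllables match {n_notes} notes - singable."
    return (f"'{line}': {n_syllables} syllables vs. {n_notes} notes - "
            f"{abs(n_syllables - n_notes)} syllable(s) off, adjust the line.")

lexicon = {"twinkle": 2, "little": 2, "star": 1}
melody = [0.5, 0.5, 0.5, 0.5, 1.0, 1.0, 2.0]   # note durations in beats
print(explain_fit("Twinkle twinkle little star", melody, lexicon))
```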
HC8643
Are They the Same Picture? Adapting Concept Bottleneck Models for Human-AI Collaboration in Image Retrieval
Vaibhav Balloli, Sara Beery, Elizabeth Bondi-Kelly
10 min. talk | August 8th at 15:00 | Session: Human Centred AI 2/2
Image retrieval plays a pivotal role in applications from wildlife conservation to healthcare, for finding individual animals or relevant images to aid diagnosis. Although deep learning techniques for image retrieval have advanced significantly, their imperfect real-world performance often necessitates including human expertise. Human-in-the-loop approaches typically rely on humans completing the task independently and then combining their opinions with an AI model in various ways, as these models offer very little interpretability or correctability. To allow humans to intervene in the AI model instead, thereby saving human time and effort, we adapt the Concept Bottleneck Model (CBM) and propose CHAIR. CHAIR (a) enables humans to correct intermediate concepts, which helps improve the generated embeddings, and (b) allows for flexible levels of intervention that accommodate varying levels of human expertise for better retrieval. To show the efficacy of CHAIR, we demonstrate that our method performs better than similar models on image retrieval metrics without any external intervention. Furthermore, we also showcase how human intervention helps further improve retrieval performance, thereby achieving human-AI complementarity.
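The sketch below illustrates the intervention idea in miniature, assuming a toy linear concept bottleneck: concepts are predicted from image features, a human corrects one concept, and retrieval is re-run on the corrected embedding. All weights and names are synthetic; this is not the authors' implementation.

```python
# Sketch of concept-bottleneck-style intervention for retrieval (not CHAIR
# itself): predict concepts, let a human correct one, re-embed, retrieve.
import numpy as np

rng = np.random.default_rng(0)
W_concepts = rng.normal(size=(8, 5))    # toy "image features -> concepts" head
W_embed = rng.normal(size=(5, 4))       # toy "concepts -> embedding" head

def predict_concepts(features):
    return 1 / (1 + np.exp(-features @ W_concepts))   # concept probabilities

def embed(concepts):
    return concepts @ W_embed

def retrieve(query_emb, gallery_embs, k=3):
    dists = np.linalg.norm(gallery_embs - query_emb, axis=1)
    return np.argsort(dists)[:k]

query_features = rng.normal(size=8)
concepts = predict_concepts(query_features)

# Human intervention: an expert corrects concept 2 (e.g., "has stripes").
corrected = concepts.copy()
corrected[2] = 1.0

gallery = rng.normal(size=(20, 4))      # precomputed gallery embeddings
print("without intervention:", retrieve(embed(concepts), gallery))
print("with intervention:   ", retrieve(embed(corrected), gallery))
```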
HC8657
Human-Agent Cooperation in Games under Incomplete Information through Natural Language Communication
Shenghui Chen, Daniel Fried, Ufuk Topcu
10 min. talk | August 8th at 15:00 | Session: Human Centred AI 2/2
Developing autonomous agents that can strategize and cooperate with humans under information asymmetry is challenging without effective communication in natural language. We introduce a shared-control game, where two players collectively control a token in alternating turns to achieve a common objective under incomplete information. We formulate a policy synthesis problem for an autonomous agent in this game with a human as the other player. To solve this problem, we propose a communication-based approach comprising a language module and a planning module. The language module translates natural language messages into and from a finite set of flags, a compact representation defined to capture player intents. The planning module leverages these flags to compute a policy using an asymmetric-information-set Monte Carlo tree search algorithm with flag exchange that we present. We evaluate the effectiveness of this approach in a testbed based on Gnomes at Night, a search-and-find maze board game. Results of human subject experiments show that communication narrows the information gap between players and enhances human-agent cooperation efficiency with fewer turns.
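A rough way to picture the flag abstraction: free-form messages are collapsed into a small, plannable vocabulary that the search algorithm can condition on. The flag names, keyword rules, and the stand-in planning step below are our own illustration, not the paper's actual flag set or its Monte Carlo tree search.

```python
# For intuition only: map natural-language messages to a finite flag set that
# a planner can consume. These flags and rules are hypothetical stand-ins.
from enum import Enum, auto

class Flag(Enum):
    TARGET_NEARBY = auto()
    PATH_BLOCKED = auto()
    SUGGEST_MOVE = auto()
    UNKNOWN = auto()

def message_to_flag(msg: str) -> Flag:
    msg = msg.lower()
    if "blocked" in msg or "wall" in msg:
        return Flag.PATH_BLOCKED
    if "near" in msg or "close" in msg:
        return Flag.TARGET_NEARBY
    if "try" in msg or "move" in msg:
        return Flag.SUGGEST_MOVE
    return Flag.UNKNOWN

def plan_with_flag(flag: Flag) -> str:
    # Stand-in for the planning step that conditions on the exchanged flag.
    return {
        Flag.PATH_BLOCKED: "reroute around the reported obstacle",
        Flag.TARGET_NEARBY: "search adjacent cells first",
        Flag.SUGGEST_MOVE: "follow the partner's suggested direction",
        Flag.UNKNOWN: "fall back to the default search policy",
    }[flag]

print(plan_with_flag(message_to_flag("I think the target is close to you")))
```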
HC8690
Towards Proactive Interactions for In-Vehicle Conversational Assistants Utilizing Large Language Models
Huifang Du, Xuejing Feng, Jun Ma, Meng Wang, Shiyu Tao, Yijie Zhong, Yuan-Fang Li, Haofen Wang
10 min. talk | August 8th at 11:30 | Session: Human Centred AI 1/2
Research demonstrates that the proactivity of in-vehicle conversational assistants (IVCAs) can help to reduce distractions and enhance driving safety, better meeting users’ cognitive needs. However, existing IVCAs struggle with user intent recognition and context awareness, which leads to suboptimal proactive interactions. Large language models (LLMs) have shown potential for generalizing to various tasks with prompts, but their application to IVCAs, and to proactive interaction in particular, remains under-explored. These gaps raise questions about how LLMs can improve proactive interactions for IVCAs and how they influence user perception. To investigate these questions systematically, we establish a framework with five proactivity levels across two dimensions (assumption and autonomy) for IVCAs. According to the framework, we propose a “Rewrite + ReAct + Reflect” strategy, aiming to empower LLMs to fulfill the specific demands of each proactivity level when interacting with users. We conduct both feasibility and subjective experiments. The LLM outperforms the state-of-the-art model in success rate and achieves satisfactory results for each proactivity level. Subjective experiments with 40 participants validate the effectiveness of our framework and show that the proactivity level with strong assumptions and user confirmation is perceived as most appropriate.
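As a sketch of how a “Rewrite + ReAct + Reflect” style pipeline could be staged around an LLM, the snippet below chains three prompt stages; `call_llm` is a placeholder for any chat-completion client, and the prompt wording and level semantics are assumptions rather than the paper's prompts.

```python
# Illustrative staging of a "Rewrite + ReAct + Reflect" style pipeline.
# `call_llm` is a placeholder for an LLM client; prompts are our own.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def rewrite(user_utterance: str, driving_context: dict) -> str:
    # Stage 1: make the request explicit and self-contained given the context.
    return call_llm(
        f"Rewrite the driver's request so it is explicit and self-contained.\n"
        f"Context: {driving_context}\nRequest: {user_utterance}"
    )

def react(explicit_request: str, proactivity_level: int) -> str:
    # Stage 2: reason about the appropriate action and draft a reply.
    return call_llm(
        f"Proactivity level {proactivity_level} (1=none ... 5=autonomous).\n"
        f"Think step by step, choose an action, and draft the assistant's "
        f"reply.\nRequest: {explicit_request}"
    )

def reflect(draft_reply: str, proactivity_level: int) -> str:
    # Stage 3: check the draft against the rules of the target level.
    return call_llm(
        f"Check the draft against level-{proactivity_level} rules (e.g., ask "
        f"for confirmation before acting). Revise if needed.\nDraft: {draft_reply}"
    )

def respond(user_utterance: str, driving_context: dict, level: int = 4) -> str:
    return reflect(react(rewrite(user_utterance, driving_context), level), level)
```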
HC8714
The Role of Perception, Acceptance, and Cognition in the Usefulness of Robot Explanations
Hana Kopecka, Jose Such, Michael Luck
10 min. talk | August 8th at 15:00 | Session: Human Centred AI 2/2
It is known that, when users interact with explainable autonomous systems, their characteristics are important in determining the most appropriate explanation, but understanding which user characteristics are most relevant to consider is not simple. This paper explores such characteristics and analyses how they affect the perceived usefulness of four types of explanations based on the robot’s mental states: belief, goal, hybrid (goal and belief), and baseline explanations. In this study, the explanations were evaluated in the context of a domestic service robot. The user characteristics considered are the perception of the robot’s rationality and autonomy, the acceptance of the robot, and the user’s cognitive tendencies. We found differences in perceived usefulness between explanation types based on user characteristics, with hybrid explanations being the most useful.
HC8721
From Skepticism to Acceptance: Simulating the Attitude Dynamics Toward Fake News
Yuhan Liu, Xiuying Chen, Xiaoqing Zhang, Xing Gao, Ji Zhang, Rui Yan
10 min. talk | August 8th at 11:30 | Session: Human Centred AI 1/2
In the digital era, the rapid propagation of fake news and rumors via social networks brings notable societal challenges and impacts public opinion regulation. Traditional fake news modeling typically forecasts the general popularity trends of different groups or numerically represents opinion shifts. However, these methods often oversimplify real-world complexities and overlook the rich semantic information of news text. The advent of large language models (LLMs) makes it possible to model the subtle dynamics of opinion. Consequently, in this work, we introduce a Fake news Propagation Simulation framework (FPS) based on LLMs, which studies the trends and control of fake news propagation in detail. Specifically, each agent in the simulation represents an individual with a distinct personality. They are equipped with both short-term and long-term memory, as well as a reflective mechanism to mimic human-like thinking. Every day, they engage in random opinion exchanges, reflect on their thinking, and update their opinions. Our simulation results uncover patterns in fake news propagation related to topic relevance and individual traits, aligning with real-world observations. Additionally, we evaluate various intervention strategies and demonstrate that early and appropriately frequent interventions strike a balance between governance cost and effectiveness, offering valuable insights for practical applications. Our study underscores the significant utility and potential of LLMs in combating fake news.
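The simulation loop can be pictured roughly as follows; the prompts, memory handling, and the `llm` placeholder are our own sketch and not the FPS implementation.

```python
# A rough sketch of an LLM-agent opinion-dynamics loop like the one described
# above; prompts and memory handling are illustrative, not the FPS code.
import random

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client")

class NewsAgent:
    def __init__(self, persona: str, opinion: str = "skeptical"):
        self.persona = persona
        self.opinion = opinion
        self.short_term = []      # today's conversations
        self.long_term = []       # distilled reflections

    def exchange(self, other: "NewsAgent", news: str) -> None:
        msg = llm(f"As {self.persona}, who is {self.opinion} about '{news}', "
                  f"say one sentence to {other.persona}.")
        self.short_term.append(msg)
        other.short_term.append(msg)

    def reflect_and_update(self, news: str) -> None:
        summary = llm(f"Reflect on today's conversations {self.short_term} "
                      f"given your past reflections {self.long_term}.")
        self.long_term.append(summary)
        self.opinion = llm(f"Given the reflection '{summary}', are you now "
                           f"skeptical, unsure, or accepting of '{news}'?")
        self.short_term.clear()

def simulate(agents, news: str, days: int = 7) -> None:
    for _ in range(days):
        a, b = random.sample(agents, 2)   # random daily opinion exchange
        a.exchange(b, news)
        for agent in agents:
            agent.reflect_and_update(news)
```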
HC8738
ADESSE: Advice Explanations in Complex Repeated Decision-Making Environments
Sören Schleibaum, Lu Feng, Sarit Kraus, Jörg P. Müller
10 min. talk | August 8th at 15:00 | Session: Human Centred AI 2/2
In the evolving landscape of human-centered AI, fostering a synergistic relationship between humans and AI agents in decision-making processes stands as a paramount challenge. This work considers a problem setup in which an intelligent agent, comprising a neural network-based prediction component and a deep reinforcement learning component, provides advice to a human decision-maker in complex repeated decision-making environments. Whether the human decision-maker follows the agent’s advice depends on their beliefs and trust in the agent and on their understanding of the advice itself. To this end, we developed an approach named ADESSE to generate explanations about the adviser agent to improve human trust and decision-making. Computational experiments on a range of environments with varying model sizes demonstrate the applicability and scalability of ADESSE. Furthermore, an interactive game-based user study shows that participants were significantly more satisfied, achieved a higher reward in the game, and took less time to select an action when presented with explanations generated by ADESSE. These findings illuminate the critical role of tailored, human-centered explanations in AI-assisted decision-making.
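Although the abstract does not spell out the form of the explanations, a minimal illustration of advice drawn from an RL adviser's action values, together with a short justification, might look like this; it is a stand-in for intuition, not the ADESSE method.

```python
# Not the ADESSE method itself: a minimal illustration of turning an RL
# adviser's action values into readable advice plus a short justification.
def advise_with_explanation(q_values: dict) -> str:
    ranked = sorted(q_values.items(), key=lambda kv: kv[1], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    margin = best[1] - runner_up[1]
    return (f"Advice: take '{best[0]}' (expected return {best[1]:.1f}). "
            f"It beats the next-best option '{runner_up[0]}' by {margin:.1f}, "
            f"so the adviser is fairly confident in this recommendation.")

print(advise_with_explanation({"route A": 12.4, "route B": 9.1, "wait": 3.0}))
```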
HC8742
A Goal-Directed Dialogue System for Assistance in Safety-Critical Application
Prakash Jamakatel, Rebecca De Venezia, Christian Muise, Jane Jean Kiam
10 min. talk | August 8th at 15:00 | Session: Human Centred AI 2/2
In safety-critical applications where a human is in the loop, providing timely contextual assistance can reduce the severity of emergencies. While the context can typically be inferred passively, engaging the human in an active conversation with the assistance system makes this context richer and more sound. For this, we explore a FOND-planning-powered goal-directed dialogue system with Natural Language Understanding (NLU) capabilities. We use an Ultralight (UL) aviation domain as an example application for testing and validation, using the goal-directed dialogue system to infer the current context in situations requiring emergency landings. The inferred context is then used for real-time modelling of the problem instance, which is necessary for generating strategic plans to guide the human out of emergency situations. To overcome data scarcity, we augment the data collected from human pilots with generative text models to train the NLU capabilities of the dialogue agent. We benchmark against generative chatbots and demonstrate that our goal-directed dialogue system significantly outperforms them in context inference.
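A toy example of the glue the abstract describes: an NLU step infers the emergency context from a pilot utterance, and that context instantiates a planning problem the FOND planner could consume. The predicate names and keyword-based NLU are our own illustrative assumptions, not the authors' system.

```python
# Illustrative glue only: infer an emergency context from an utterance and
# turn it into a toy planning-problem instance. All names are hypothetical.
def infer_context(utterance: str) -> dict:
    text = utterance.lower()
    return {
        "engine_failure": "engine" in text and ("fail" in text or "stopped" in text),
        "low_fuel": "fuel" in text,
        "weather_deteriorating": "storm" in text or "wind" in text,
    }

def build_problem_instance(context: dict, position: str) -> dict:
    goals = ["(landed aircraft)"]
    init = [f"(at aircraft {position})"]
    if context["engine_failure"]:
        init.append("(engine-out aircraft)")
        goals.insert(0, "(gliding-profile-established aircraft)")
    if context["low_fuel"]:
        init.append("(fuel-critical aircraft)")
    return {"init": init, "goal": goals}    # would be handed to the planner

context = infer_context("The engine just stopped and there is a strong wind.")
print(build_problem_instance(context, "waypoint-7"))
```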