A team of Apple researchers has investigated what real users expect from artificial intelligence agents and how they interact with these systems.
In a study titled Mapping the Design Space of User Experience for Computer Use Agents, they found that despite heavy investment in the development and evaluation of AI agents, many aspects of user experience, including user interaction and interface design, have received little attention. The research was carried out in two stages: first, the researchers identified the main UX patterns and considerations in existing agents, and then they tested and refined these patterns in real interactions with users using the Wizard of Oz method.
User interaction with artificial intelligence agents
According to the report, in the first stage nine desktop, mobile, and web agents were reviewed, including Claude Computer Use Tool, Adept, OpenAI Operator, AIlice, Magentic-UI, UI-TARS, Project Mariner, TaxyAI, and AutoGLM. Then, with the help of eight experts in UX and AI, a comprehensive taxonomy was created, comprising four main categories, 21 subcategories, and 55 example features. The four main categories were user input, transparency of agent actions, user control, and mental model and expectations, covering everything from how plans and agent capabilities are presented to error handling and opportunities for user intervention.
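For developers who want to use this classification as a design checklist, the four top-level categories could be captured in a simple data structure. The Python sketch below is only illustrative: the category names come from the article, but the example entries are loose paraphrases, not the paper's actual 21 subcategories or 55 features.

```python
from dataclasses import dataclass, field

@dataclass
class UXCategory:
    """One top-level category from the study's taxonomy."""
    name: str
    examples: list[str] = field(default_factory=list)

# The four main categories named in the study; the example entries are
# illustrative stand-ins, not the paper's full list of subcategories.
TAXONOMY = [
    UXCategory("User input", ["text commands", "task delegation"]),
    UXCategory("Transparency of agent actions", ["showing the plan", "step-by-step explanations"]),
    UXCategory("User control", ["stop button", "confirmation before consequential actions"]),
    UXCategory("Mental model and expectations", ["communicating capabilities", "error handling behavior"]),
]

for category in TAXONOMY:
    print(f"{category.name}: {', '.join(category.examples)}")
```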

In the second stage, 20 users with prior experience of interacting with AI agents took part in a Wizard of Oz experiment. Through a conversational interface, participants delegated vacation-booking or online-shopping tasks to the agent, while a researcher in another room played the role of the agent by controlling the screen and keyboard. Users could enter text commands and halt the agent with a stop button. Some tasks were deliberately performed with errors or interruptions so the researchers could observe how users reacted and analyze their behavior and expectations.

The results showed that users want to see what agents are doing, but do not want to control every step. The agent's behavior should adapt to the type of task and the user's familiarity with the interface: novice users need more clarity, step-by-step explanations, and intermediate confirmation prompts, especially when actions have real-world consequences, such as making a purchase or changing account information. Users quickly lose trust when faced with errors or hidden assumptions, and they prefer the agent to pause and ask for confirmation in ambiguous situations or when it deviates from the plan.
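As a rough illustration of the "pause and confirm" behavior users preferred, an agent could gate consequential or ambiguous actions behind an explicit user confirmation. The sketch below is a minimal, hypothetical Python example; the Action fields, the list of consequential action types, and the confirm callback are assumptions for illustration, not an API or implementation from the study.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical action description; field names are illustrative only.
@dataclass
class Action:
    kind: str          # e.g. "click", "fill_form", "submit_payment"
    description: str   # human-readable summary shown to the user
    ambiguous: bool    # agent is unsure this matches the user's intent

# Actions with real-world consequences (purchases, account changes)
# are always confirmed, per the study's findings.
CONSEQUENTIAL = {"submit_payment", "change_account_info", "place_order"}

def execute_with_confirmation(
    action: Action,
    confirm: Callable[[str], bool],
    novice_user: bool,
) -> bool:
    """Run an action, pausing for user confirmation whenever the study's
    findings suggest the agent should not proceed on its own."""
    needs_confirmation = (
        action.kind in CONSEQUENTIAL   # real consequences
        or action.ambiguous            # unclear intent or plan deviation
        or novice_user                 # novices prefer more checkpoints
    )
    if needs_confirmation and not confirm(f"About to: {action.description}. Proceed?"):
        return False  # user declined; the agent stops instead of guessing
    print(f"Executing: {action.description}")
    return True

# Example: prompt on the command line before a purchase.
if __name__ == "__main__":
    pay = Action("submit_payment", "pay $120 for the hotel booking", ambiguous=False)
    execute_with_confirmation(
        pay,
        confirm=lambda msg: input(msg + " [y/N] ").lower() == "y",
        novice_user=True,
    )
```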
The study gives application developers a practical framework for designing AI agents that are transparent, trustworthy, and adapted to the type of task and the user's level of experience, so that interacting with them feels natural and effective.