Would you trust a car that cannot explain its own decisions?

Automated driving technologies are advancing rapidly. From adaptive cruise control to lane keeping assistance and the prospect of fully self-driving vehicles, cars are becoming increasingly capable of supporting or even replacing human driving actions. Yet the real challenge for automated mobility is not only technological. Even the most advanced system will struggle to succeed if people do not trust it, understand how it behaves, or feel comfortable interacting with it. In practice, this does not mean that cars will verbally justify every action they take. Instead, automated systems communicate their status, intentions, and limitations through Human–Machine Interfaces (HMIs), such as dashboard alerts, system notifications, or takeover requests that inform drivers when the system reaches its limits. This is why keeping humans in the loop is not simply a design preference, but a fundamental requirement for trustworthy automated mobility. Understanding these human factors is therefore becoming just as important as improving sensors, algorithms, or vehicle performance.

Within the Cynergy4MIE project, researchers explored how people perceive automated driving technologies and what influences their willingness to use them. The goal was simple: to better understand what users expect from automated systems and how those expectations can inform the design of future mobility technologies. To investigate this, the project conducted a study on user and societal acceptance of Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS). The research combined insights from existing studies with a survey exploring how people perceive trust, transparency, safety, and ethical responsibility in automated driving.

The results reveal a complex but insightful picture.

On the one hand, people are becoming increasingly familiar with driver assistance technologies. Features such as automatic emergency braking or lane keeping assistance are already part of many modern vehicles. In fact, 81.66% of respondents reported moderate to high trust in current ADAS features. Confidence drops, however, as automation increases: only 58.34% of respondents expressed comparable trust in fully automated driving systems, and 73.33% reported concerns about relying entirely on automated driving in real-world conditions. In other words, many users are comfortable with assistance but remain hesitant about handing over full control.

Transparency emerged as one of the strongest drivers of trust. The survey showed that 90% of respondents feel more confident using automated systems when they understand how and why the system makes decisions. Similarly, more than 70% supported the idea that automated vehicles should provide real-time explanations of their actions, for example through alerts explaining sudden braking or system limitations.

These findings highlight an important insight: users do not simply want automation to work, they want to understand it.

Human oversight also remains essential, and this is where the human-in-the-loop approach proves critical. Around 80% of respondents reported that they actively monitor or override driver assistance systems, indicating that drivers still expect to remain engaged when using automated technologies. Moreover, 75% said they would likely reduce their use of a system after experiencing an unexpected or unexplained automated action. This demonstrates how quickly trust can be undermined when system behaviour appears unclear or unpredictable.

Ethical considerations also play a significant role. Many respondents emphasised the importance of clear rules governing automated decision-making. In addition, 90% highlighted the need for clear legal accountability frameworks for automated driving systems, reinforcing the idea that trust in automation is closely connected to regulation, governance, and institutional oversight.

The project didn’t stop at research findings. The insights were translated into practical guidance and requirements for developers and researchers, helping to embed human-centred thinking directly into the design process, rather than treating it as an afterthought.

Taken together, these insights suggest that the future of automated mobility will depend not only on technological progress but also on how well systems communicate with the people who use them. Users want systems that are transparent about their actions, allow meaningful human oversight, and operate within clear ethical and regulatory frameworks. By combining technical innovation with user-centred research, Cynergy4MIE demonstrates how interdisciplinary collaboration can support the development of automated mobility solutions that are not only technologically advanced but also trustworthy, understandable, and aligned with societal expectations.

Because in the end, a car that people don’t trust isn’t really going anywhere.

Blog signed by: CONV team