A Research Lens into AI Behind the Wheel: Balancing Innovation, Safety, and Human Trust

Written by: Lilian Lim

November 8, 2025

It is projected that by 2030, electric vehicles will account for 40% of global car sales (McConnon, 2025), and about 65% of Americans believe that fully driverless cars will exist within the next 50 years (Smith & Anderson, 2017). Before self-driving cars can be widely and safely adopted, it is essential to understand how people perceive these systems. Such insights can help shape the design of future vehicles and ensure they meet user expectations.

As technology advances and systems become increasingly automated due to artificial intelligence (AI), it is important to understand different perceptions of automation and consider how to use it safely. Automation can be defined as technology capable of making its own decisions without direct human input (Lee & See, 2004). AI has driven innovation in the automobile industry by enabling vehicles to self-diagnose and monitor their own systems (S&P Global, 2025). However, with rapid technological development comes the need to slow down and ensure that safety features are thoroughly tested.

AI is only as reliable as the data it is trained on, and cybersecurity issues pose additional risks, including the potential for car systems to be hacked. This raises important questions: Do we truly need all the additional AI features if a car’s primary purpose is to transport people from one place to another? Do consumers actually want these features? Studies have found that while some people are interested in having AI features in their cars, few are willing to pay extra for them, and this willingness varies across countries (The Manufacturer, 2024). For people to adopt AI-enhanced vehicles, the value these features provide must justify their cost.

Trust plays a crucial role in a person’s willingness to use automated or self-driving cars. Variations in trust can influence behavior and potentially lead to harm (Parasuraman & Riley, 1997). Overtrust, or misuse, occurs when people rely on a system to perform tasks it is not yet capable of. For instance, a driver who assumes their car can safely operate autonomously while they take a nap, despite the system not being fully driverless, is misusing the technology. This kind of misuse has already led to several fatal Tesla crashes in which drivers failed to respond to hazards in time (Greene, 2019; Greenemeier, 2016; BBC News, 2019). Conversely, undertrust, or disuse, occurs when people fail to rely on a system that is capable of safely completing a task (Parasuraman & Riley, 1997). For example, a driver who refuses to let their car maintain control while they briefly take a sip of coffee is underusing the system’s potential.

Research indicates that individual differences such as demographics, personality traits, and general propensity to trust significantly influence trust in automation (Schaefer, Chen, Szalma, & Hancock, 2016). Affective factors (attitudes, comfort, confidence, and satisfaction with automation), psychological states (stress, fatigue, and attentional capacity), and cognitive factors (understanding of the system and expectations about its performance) all contribute to how a person trusts and interacts with automated systems (Schaefer et al., 2016; Singh, Molloy, & Parasuraman, 1993). Cultural background, upbringing, and personal experiences also shape one’s perception of automation (Schaefer & Scribner, 2015; Chien, Lewis, Hergeth, Semnani-Azad, & Sycara, 2015; Hancock et al., 2011). Moreover, the perfect automation schema, the belief that automated systems will always work flawlessly, can lead to unrealistic expectations (Lyons & Guznov, 2019). People with higher expectations of automation tend to exhibit greater trust in it, while those who perceive it as unreliable are less likely to trust or use it (Sato, Yamani, Liechty, & Chancey, 2019; Feng et al., 2019; Jian, Bisantz, & Drury, 2000).

While numerous individual factors influence trust in automation, understanding people’s expectations of these systems remains equally critical. As automation and AI continue to evolve, aligning technological capabilities with human perceptions and trust will be key to ensuring safety, usability, and wider acceptance.


References

S&P Global. (2025, July 25). AI in the automotive industry: Trends, benefits & use cases. Retrieved from https://www.spglobal.com/automotive-insights/en/blogs/2025/07/ai-in-automotive-industry

Chien, S., Lewis, M., Hergeth, S., Semnani-Azad, Z., & Sycara, K. (2015). Cross-country validation of a cultural scale in measuring trust in automation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 59(1), 686-690. doi: 10.1177/1541931215591149

Greene, T. (2019, September 4). Another Tesla crashes due to misuse of Autopilot (updated). The Next Web. Retrieved from https://www.thenextweb.com/

Greenemeier, L. (2016, July 8). Deadly Tesla crash exposes confusion over automated driving. Scientific American. Retrieved from https://www.scientificamerican.com/

Hancock, P.A., Billings, D.R., Schaefer, K.E., Chen, J.Y.C., de Visser, E.J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527. doi: 10.1177/0018720811417254

Jian, J., Bisantz, A.M., & Drury, C.G. (2000). Foundation for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53-71. doi: 10.1207/S15327566IJCE0401_04

Lee, J.D., & See, K.A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. doi: 10.1518/hfes.46.1.50_30392

Lyons, J.B., & Guznov, S.Y. (2019). Individual differences in human-machine trust: A multi-study look at the perfect automation schema. Theoretical Issues in Ergonomics Science, 20(4), 440-458. doi: 10.1080/1463922X.2018.1491071

McConnon, A. (2025, October 23). How AI is making electric vehicles safer and more efficient. IBM. Retrieved from https://www.ibm.com/think/topics/ai-ev-batteries

Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of Human Factors and Ergonomics Society, 39(2), 230-253. doi: 10.1518/001872097778543886

Sato, T., Yamani, Y., Liechty, M., & Chancey, E.T. (2019). Automation trust increases under high-workload multitasking scenarios involving risk. Cognition, Technology & Work, 1(1), 1-9. doi: 10.1007/s10111-019-00580-5

Schaefer, K.E., Chen, J.Y.C., Szalma, J.L., & Hancock, P.A. (2016). A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors, 58(3), 377-400. doi: 10.1177/0018720816634228

Schaefer, K.E., & Scribner, D.R. (2015). Individual differences, trust, and vehicle autonomy: A pilot study. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 59(1), 786-790. doi: 10.1177/1541931215591242

Singh, I.L., Molloy, R., & Parasuraman, R. (1993). Automation-induced “complacency”: Development of the complacency-potential rating scale. The International Journal of Aviation Psychology, 3(2), 111-122. doi: 10.1207/s15327108ijap0302_2

Smith, A., & Anderson, M. (2017, October 4). Automation in everyday life. Pew Research Center. Retrieved from https://pewresearch.org/

Tesla Model 3: Autopilot engaged during fatal crash. (2019, May 17). BBC News. Retrieved from https://www.bbc.com/news/technology-48308852

The Manufacturer. (2024, November 6). Majority of drivers open to AI in cars but few willing to pay extra for it, new study finds. Retrieved from https://www.themanufacturer.com/articles/majority-of-drivers-open-to-ai-in-cars-but-few-willing-to-pay-extra-for-it-new-study-finds/
