
Does training improve users' mental models about adaptive cruise control?


University of Massachusetts Amherst, the United States of America

Handling editor: Lai Zheng, Harbin Institute of Technology, China

Reviewers: Matúš Šucha, Palacký University Olomouc, the Czech Republic
Maria Klingegård, Folksam Insurance Group, Sweden

DOI: https://doi.org/10.55329/aqze5695

Received: 1 August 2023; Accepted: 24 December 2023; Published: 30 January 2024

Abstract

While Advanced Driver Assistance Systems (ADAS) promise safety benefits to drivers, there is evidence to suggest that drivers are unaware or uninformed about their vehicles' systems and thus have poor mental models of those systems. Previous studies suggest that training improves drivers' mental models, although some studies report limited impacts. This study investigated the relationship between training and drivers' mental models about Adaptive Cruise Control (ACC), compared the impact of two different training approaches on drivers' mental models, and examined the relationship between driver knowledge and trust regarding ADAS technologies. The study was conducted online, and participants were randomly and equally assigned to one of three training groups: owner's manual (text-based), state diagram visualization, or sham (control). Surveys measured their trust and mental models about ACC before and after training. The results showed that the text-based group outperformed the visualization group and the control group in terms of post-training overall mental model scores, but these differences were not statistically significant. No correlation between post-training mental model scores and overall trust scores was found. This study provides evidence that training improves users' mental models about technology and finds that different training platforms or paradigms may affect learning differently.

Keywords

adaptive cruise control (ACC), advanced driver assistance systems (ADAS), driver training, mental models

Introduction

Background

Vehicle Automation technologies have made rapid advances in development and deployment in the past decades (NHTSA, 2017). The Society of Automotive Engineers (SAE) has classified Vehicle Automation into six levels (SAE, 2021), ranging from no automation (Level 0) to full automation (Level 5). Although fully automated driving (Levels 4 and 5), which requires no driver or operator involvement, has not yet been achieved, and there are significant technical and human factors challenges to deploying Level 3, there have been positive strides in the development of partially automated driving features (Levels 1 and 2), otherwise known as Advanced Driver Assistance Systems (ADAS). These systems carry out various automated functions such as collision avoidance, vehicle speed regulation and gap distance maintenance (Adaptive Cruise Control), and lane position/centering assistance (Lane Centering Assist). ADAS promise safety and convenience benefits to drivers. However, because these systems are designed to take over some of the traditional driving tasks, their introduction has changed how drivers interact with their vehicles. In vehicles with these systems, drivers assume a new role: supervising the functions of these systems and monitoring their driving environment for situations that may require intervention.

This new role requires drivers to have appropriate knowledge and awareness of their systems' capabilities and, potentially more importantly, their limitations. However, the literature suggests that drivers are unaware or uninformed about their vehicles' systems (Jenness et al., 2008; McDonald et al., 2018). There may be various reasons for this, a potentially important one being the resources available for drivers to understand these systems. While studies have shown that many drivers choose to obtain their knowledge about these systems from owner's manuals (McDonald et al., 2016), they read these manuals only partially or incompletely (Mehlenbacher et al., 2002). This is of particular importance since new ADAS users have difficulty understanding their systems (Larsson, 2012). Moreover, the quality of the material presented in the owner's manual can affect drivers' knowledge and shape their perceptions of these systems (Singer & Jenness, 2020). There are also variances and inconsistencies in the reporting of system limitations across manufacturers (Pradhan et al., 2021), which could result in misconceptions and overestimation of the systems' capabilities.

There is evidence that a lack of knowledge or any misconceptions about such systems may affect drivers' mental models, which could potentially manifest as action or response-related driver errors (Dickie & Boyle, 2009; McDonald et al., 2018; Pradhan et al., 2021). Mental models have been defined as ‘a representation of the typical causal interconnections involving actions and environmental factors that influence a system's functioning’ (Durso & Gronlund, 1999). Mental models continuously update knowledge stored in memory and are derived from encountering situations similar to those in past experience. However, because the deployment of ADAS in vehicles has been recent, drivers may lack accurate mental models about ADAS given their minimal experience, as well as lack knowledge about system functions and limitations as noted earlier. Incomplete knowledge or mismatched expectations about a system's functions and limitations may result in a lack of mode awareness, hindering the user's ability to allocate appropriate attentional resources and to detect errors, failures, and miscommunications between the user and the system (Sarter & Woods, 1995). In the driving domain, such user-related errors have been known to play a critical role in motor vehicle crashes. Singh (2015) found that driver-related errors were assigned as the critical reason in about 94% of crashes, where the term ‘critical reason’ was defined as the last event in the crash causal chain. It was also found that about 41% of the crashes were caused by recognition-related errors, while decision-based errors and performance-related errors were the cause of 33% and 11% of the crashes, respectively. This inaccuracy or incompleteness of mental models could lead to mode confusion (Wilson et al., 2020) and miscalibrated trust with regard to system capabilities (Beggiato & Krems, 2013; Kidd et al., 2017), and may result in operator errors during ADAS usage (Pradhan et al., 2020; Pradhan et al., 2021; Stanton & Salmon, 2009). Hence, improving drivers' mental models is a critical requirement for appropriate and safe use of vehicle technologies.

Drivers' mental models could be improved through driver training. Driver education has helped improve skills related to hazard perception, visual search, and situational awareness (Horswill et al., 2015; Vlakveld, 2014; Walker et al., 2009). Driver training has also improved drivers' cognitive driving abilities (Yamani et al., 2016), which suggests that training and education of drivers can be used to improve novice drivers' knowledge and awareness of advanced driver assistance systems. Training can be defined as ‘a planned and systematic effort to modify or develop knowledge, skills and attitudes through learning experiences, to achieve effective performance in an activity or a range of activities’. Learning can be defined as ‘a relatively permanent change in behavior or in the behavioral potential that results from experience’ (Garavan, 1997). Learning can therefore be understood as one of the outcomes of training. Learning also involves many aspects, such as the learning environment, abilities, and learning style (Koć-Januchta et al., 2017; Stern, 2017). Learning style in this case can be thought of as an individual's preferred way of learning (Plass et al., 1998). Learning styles and preferences are usually based on four major modalities: visual, auditory, kinesthetic, and tactile (Klašnja-Milićević et al., 2016). According to previous studies, the majority of the population are visual learners (Zopf et al., 2004) who learn using pictures, videos, etc. Studies have also found that when learners were instructed to form images while reading texts, or received pictorial cues, their recall accuracy and retention were high (Paivio, 2014; Sadoski & Willson, 2006; Tabbers et al., 2004).

While training leads to learning and skill development, one could also seek knowledge and experience of one's own volition, which could have similar learning outcomes. The focus of this study is on improving driver knowledge solely through training methods. The impacts of driver training have been widely examined in the literature, and there is strong evidence of post-training improvements in driver behaviors such as attention maintenance (Pradhan et al., 2011), hazard perception (Pradhan et al., 2005), and hazard mitigation (Muttart, 2013). However, these training approaches have been studied and evaluated in the traditional driving domain. Since they were neither designed for nor targeted at vehicle technologies, little can be extrapolated from them in terms of impact on mental models of advanced technologies.

The literature on driver training in the domain of vehicle automation is somewhat sparse. Some prior research has shown training to be effective in improving drivers' knowledge about system limitations, but the results have been somewhat mixed. In a study where the training material provided was either weak or strong (given via PowerPoint presentations), it was found that drivers with strong mental models were better at operating ADAS during edge-case situations than those with weak mental models (Gaspar et al., 2020). Another study showed that user education through an owner's manual and an interactive tutorial led to increased understanding of driving automation systems (Forster et al., 2019). In that study, participants were given either baseline information (generic information about L3 and L2 systems), an owner's manual (information delivered in a four-page document of short sentences and bullet points, taken from a BMW manual), or an interactive tutorial (participants completed the tutorial and answered questions).

In contrast, another study suggested that training (provided as descriptions of the ACC interface and explanations of displayed icons) improved drivers' abilities to detect system notifications and changes in system status, but only moderately improved drivers' comprehension of system limitations (Mueller et al., 2020). Victor et al. (2018) also reported that of drivers who received specific instruction about vehicle limitations, as well as supervision reminders to keep their eyes on the road and hands on the wheel, 28% still collided with a conflict object in their experimental field study. While a large majority of the participants (72%) did benefit from the supervision and instruction (training), it is still critical to examine the failures, and the authors acknowledge the need for more research on how to communicate system limitations to drivers. Similarly, Noble et al. (2019) reported that training strategies (a baseline condition and an interactive module in which participants watched videos and interacted with the system as instructed) led to only limited differences in driver knowledge and no difference in driver behaviors or attitudes.

Drivers' knowledge can contribute to their trust in a system, and thus to the appropriate use of a safety system. However, while much work has been conducted in this domain, there is still mixed evidence about the impact of training on driver trust. In a study conducted by Payre et al. (2017), simple or elaborate (text and video) training optimized drivers' trust when driving with automated features and decreased time to respond to emergency situations. Similarly, Koustanaï et al. (2012) familiarized drivers with a Forward Collision Warning system (an ADAS feature) through simulator-based training and found improvements in trust towards the system and in driver-system interactions. Three other studies (Beggiato & Krems, 2013; Kazi et al., 2017; Lee & See, 2004) also found that the correct use and knowledge of ACC depend on the level of trust in the system. However, when comparing limitation-focused and responsibility-focused training approaches, DeGuzman and Donmez (2022) found no differences between the two approaches in drivers' ADAS-related knowledge, but found that both approaches negatively affected trust in scenarios where ADAS may not work. Similarly, Zahabi et al. (2021) found no significant differences in driver trust and knowledge about automation when comparing demonstration-based and video-based training programs.

Overall, we need more evidence about the impact of training on improving drivers' mental models. Traditionally, educating drivers about vehicle capabilities and features has been done using owner's manuals, which have been described as tedious and too complicated (Mehlenbacher et al., 2002). In this study, we compare multiple training methods: a Text-based training method based on the owner's manual, and a Visualization method that describes the different states of Adaptive Cruise Control using state diagrams. We examine the effect of training method and content to understand the impact of different approaches on drivers' mental models.

Study objective and hypotheses

The objective of this study was to understand the impact of different types of training approaches and content on improving drivers' mental models of Adaptive Cruise Control (ACC), and to understand if that is related to driver trust.

This objective has been motivated by mounting evidence in the field that training and education of drivers may help in improving their understanding of Advanced Driver Assistance Systems (Forster et al., 2019; Gaspar et al., 2020). In addition, there is a rich field of research that underlines the utility of visual learning in improving recall and retention of information (Paivio, 2014; Sadoski & Willson, 2006; Tabbers et al., 2004). Finally, given the importance of user trust in the appropriate use of technology, it is important to understand whether trust may be related to the depth of one's understanding of a system. There is evidence that trust in advanced driver assistance systems may be related to drivers' mental models (Beggiato & Krems, 2013; Kazi et al., 2017; Lee & See, 2004). However, we need more evidence on the benefits of training, including evaluation of multiple types of training and the relationship to trust.

Given these objectives and motivations, this online experimental study was conducted to test the following hypotheses:

  • Training improves drivers' mental models of ACC (as measured by knowledge).

  • Training that includes visualization will be more effective than training without it.

  • Drivers' mental model improvement will correspond to drivers' trust in ACC.

Methods

Participants

Thirty-six participants (age: M = 31.58 years; SD = 10.23; min = 21; max = 53; 22 female) were recruited for this study. Drivers with valid US driver's licenses who were naïve to ACC were eligible for participation. On average, participants had been licensed for 12 years (min = 1; max = 39) and drove 50–100 miles weekly. Participants' familiarity with ACC was established through a series of screening questions about their prior experience with the system; only those who self-reported as being novice users or having no knowledge about ACC were included in the study. The study sessions were conducted online through the Zoom video-conferencing platform. Institutional Review Board approval was granted for conducting the study.

Experimental design

The study was conducted as a between-subjects experiment with Training Method as the independent variable and participants' system knowledge (mental model scores) and trust as the dependent variables. This section details the experimental design of the study.

Independent variable

Training Method was the independent variable. Three training conditions were designed for the experiment: two experimental and one control. Participants were randomly and equally assigned to one of the three groups. After the pre-training survey measures were collected, participants were sent a link to a document containing the training material corresponding to their assigned group. The training conditions were as follows.

Text-Based Training (Group M)

The content and material presented in an owner's manual can affect drivers' knowledge and perceptions about the system (Singer & Jenness, 2020). The Text-based training method was developed to provide text descriptions, system display and control images, and warnings about ACC. This information was compiled from actual owner's manuals of vehicles that offer ACC, such as those provided by Subaru and Toyota (Subaru, 2020; Toyota, 2021). There is evidence that users only partially or incompletely read through the owner's manual (Mehlenbacher et al., 2002); therefore, the information presented in this training method was streamlined to minimize time spent searching for relevant information and to maximize information retrieval about ACC limitations, functionalities, and operational capabilities. The training material informed the user about ACC control mechanisms with pictorial representations of in-vehicle components such as the steering wheel, buttons, and levers. The material also presented the user with actions they could perform to change ACC parameters such as following distance and speed, as well as to cancel or resume ACC operations. The material further presented the commonly documented edge cases where ACC would fail to respond or would malfunction, leading to potentially hazardous outcomes.

System Visualization Training (Group V)

The System Visualization Training method further simplified the text-based training material by providing a visual representation of ACC states in the form of state diagrams (Pradhan et al., 2020; Pradhan et al., 2021). The state diagram presented the various states of ACC, five in total, as circles, each labeled with the specific function of ACC in that state. The diagram also featured arrows leading in and out of each circle from/to other circles, representing state transitions. Each transition represented a user action or condition that would result in a change in the ACC parameters. For example, if State 1 represented a state in which ACC was switched on, and State 2 represented a state in which ACC was activated without a lead vehicle in front, a possible user action to transition from State 1 to State 2 would be ‘pressing the Set+ button’. The state-diagram visualization was supplemented by text information about limitations derived from the owner's manual, as in the text-based training material. This training method was included to examine the secondary hypothesis that a simplified visualization of a complex system may improve understanding more than a method that presents only text-based information.
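
To make the structure of such a diagram concrete, the sketch below encodes a small five-state ACC state machine in R (the language used for the study's analyses). The state names and transitions here are illustrative assumptions for this example, not the exact diagram from the training material.

```r
# Hypothetical five-state ACC state machine; states and transitions are
# illustrative, not the study's actual diagram. Each named vector maps a
# user action (or condition) to the resulting state.
transitions <- list(
  "1: ACC off"                 = c("press ACC on/off button"  = "2: ACC on (standby)"),
  "2: ACC on (standby)"        = c("press Set+"               = "3: active, no lead vehicle"),
  "3: active, no lead vehicle" = c("lead vehicle detected"    = "4: active, following lead",
                                   "press Cancel"             = "2: ACC on (standby)"),
  "4: active, following lead"  = c("lead vehicle leaves lane" = "3: active, no lead vehicle",
                                   "driver presses brake"     = "5: suspended"),
  "5: suspended"               = c("press Resume"             = "3: active, no lead vehicle")
)

# Apply one user action: return the next state, or stay in the current
# state if the action has no outgoing transition from it
step <- function(state, action) {
  nxt <- transitions[[state]][action]
  if (is.na(nxt)) state else unname(nxt)
}

step("2: ACC on (standby)", "press Set+")  # "3: active, no lead vehicle"
```

In the training material itself, each circle corresponded to one such state and each labeled arrow to one such transition.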

Sham Training (Group S)

The control group received sham training material consisting of text descriptions of unrelated ADAS features, Forward Collision Warning (FCW) and Lane Departure Warning (LDW) systems, adapted from online resources (NSC, 2024). This sham training method was included to remove potential confounds that would have been introduced by a control group that received no training material, and therefore did not spend similar time and effort on a training intervention as participants in the other two groups. Similar methods have been used as control conditions in past training studies (Divekar et al., 2013; Horswill et al., 2015; Pradhan et al., 2011; Yahoodik & Yamani, 2021).

Dependent variables

The dependent variables were the participants' pre- and post-training mental model and trust scores. Participants' mental models were measured before and after training using the Completeness and Accuracy of Mental Models Survey (CAMMS), and their trust in ACC was similarly measured before and after training using a trust survey (Jian et al., 2000).

Completeness and Accuracy of Mental Models Survey (CAMMS)

The Completeness and Accuracy of Mental Models Survey (Appendix A) was developed by the research team to obtain a measure of the completeness and accuracy of a user's mental models about ACC. For this measure, completeness is defined by the users' general knowledge about ACC, whereas accuracy is defined by the users' specific knowledge about system features. Thus, a user with a ‘complete’ mental model would be knowledgeable about the technology's features and functions, and a user with an ‘accurate’ model would be knowledgeable about the nuances of the system functions such as the conditions and parameters required for the functions.

Items in the survey for ‘Completeness’ included true-or-false statements regarding ACC functions, limitations, and operational capabilities, with which users could agree or disagree on a 6-point scale. The 6-point scale (from strongly agree to strongly disagree) was based on a confidence-based assessment approach, so a response indicated both the accuracy of one's answer and the confidence one had in it. If the correct response was selected for a ‘Completeness’ item, the participant was presented with ‘Accuracy’ items that asked about specifics of the related item. For example, for a completeness item such as ‘ACC can regulate the vehicle's distance from the vehicle in front’, the corresponding accuracy items could be ‘ACC can regulate the vehicle's distance from any type of lead vehicle’ or ‘When ACC is regulating a vehicle's distance, there is no limit to how near or how far it can follow another vehicle’. The survey included 75 items (α = 0.84) in total, with 24 completeness items (α = 0.87) and 51 accuracy items (α = 0.78). The participants' agreement responses were translated to a scale of 0 to 100 for correctness of the answer, taking into account the reversal of the scale for false statements. This was done for both the completeness and accuracy items. These scores represent the level of knowledge that a participant has about a feature of ACC, with higher scores indicating more knowledge. An Overall Mental Model Survey score was derived as the mean of the Completeness and Accuracy scores.
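
As an illustration of how such confidence-based responses might be converted to 0–100 correctness scores, the R sketch below shows one plausible linear mapping; the exact scoring rule is an assumption for this example, including the reverse-scoring of items whose correct answer is ‘false’.

```r
# One plausible CAMMS-style scoring rule (assumed, for illustration):
# responses on the 6-point scale (1 = strongly disagree ... 6 = strongly
# agree) map linearly to 0-100, with false-keyed items reverse-scored.
score_item <- function(response, correct_is_true) {
  stopifnot(response %in% 1:6)
  if (!correct_is_true) response <- 7 - response  # reverse the scale for 'false' items
  (response - 1) / 5 * 100
}

score_item(6, correct_is_true = TRUE)   # 100: confident and correct
score_item(6, correct_is_true = FALSE)  #   0: confident but incorrect
score_item(4, correct_is_true = TRUE)   #  60: tentatively correct
```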

Experimental procedure

The experimental session was conducted online on the Zoom video-conferencing platform and lasted approximately an hour. Participants were randomly allocated to one of the three groups immediately after their study session was scheduled. This randomization was done by the experimenter, prior to any assessment, using a predefined randomization table. Blinding was not used because the training interventions were presented by the experimenter, and group assignment determined which training intervention a participant received. Each session began with participants being informed about the study and providing consent for participation. The experimenters then gave them a brief overview of the study session. Under the supervision of the experimenters, participants accessed the different surveys through individual links provided by the experimenters. Participants first completed the pre-training surveys covering general demographic information and a trust survey (Jian et al., 2000). Following these, the participants completed the Mental Model Survey. Participants then received a link to the training material corresponding to their group allocation, which led to a document that they read through. Following the training, participants completed the post-training Mental Model Survey and trust survey. After completing the post-training surveys, participants were paid and the session was concluded. The entire session took around 60–70 minutes.

Analyses and results

To measure the differences between the post-training Overall Mental Model Survey scores across the three training groups, an ANCOVA (analysis of covariance) was performed with the pre-training scores treated as a covariate. All analyses were performed using R Statistical Software (v4.0.2). We compared differences in the pre- and post-training Completeness and Accuracy scores individually, using R packages such as ‘mosaic’ (Pruim et al., 2017), ‘rstatix’ (Kassambara, 2024), and ‘ggplot2’ (Wickham, 2016) to arrange and visualize individual survey items on the Mental Models Survey across the three training groups. A Pearson's product-moment correlation test was conducted to examine correlations between the Completeness and Accuracy scores before and after training. Another Pearson's product-moment correlation test examined correlations between the overall Mental Model Survey scores and the overall trust survey scores before and after training. The mean scores derived from the Mental Model Survey are included in Table 1, which provides the descriptives for the Completeness, Accuracy, and Overall scores derived from the survey.
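
For concreteness, the sketch below reconstructs the core of these analyses in R; the data frame and column names (‘scores’, ‘pre’, ‘post’, ‘group’) are assumptions for this example, not the study's actual code or variable names.

```r
# Illustrative reconstruction of the reported analyses (assumed column
# names; 'group' is a factor with the three training conditions).

# ANCOVA: post-training overall score by training group, with the
# pre-training overall score entered first as a covariate
model <- aov(post ~ pre + group, data = scores)
summary(model)

# Pearson's product-moment correlation, e.g. between pre-training
# completeness and accuracy scores
cor.test(scores$completeness_pre, scores$accuracy_pre, method = "pearson")
```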

Table 1: Descriptives of the scores derived from the Mental Model Survey, Mean (SD)

Group                 | Pre-Training                                 | Post-Training
                      | Completeness | Accuracy     | Overall       | Completeness | Accuracy     | Overall
Text-based            | 69.78 (25.4) | 48.05 (35.8) | 55.04 (34.3)  | 82.76 (23.0) | 63.58 (38.5) | 69.70 (35.5)
System Visualization  | 70.24 (24.4) | 48.52 (35.0) | 55.46 (33.6)  | 75.05 (25.5) | 58.23 (37.0) | 63.61 (34.7)
Sham (Control)        | 74.56 (25.8) | 50.24 (37.8) | 58.06 (36.2)  | 79.46 (24.3) | 57.47 (38.3) | 64.19 (25.1)

The Completeness and Accuracy scores were also correlated to examine the relationship between participants' complete and accurate mental models. The correlation test revealed a significant positive correlation between the pre-training Completeness and pre-training Accuracy scores, r (34) = 0.832; p = 3.275 × 10⁻¹⁰. There was also a significant positive correlation between the post-training Completeness and post-training Accuracy scores, r (34) = 0.868; p = 1.324 × 10⁻¹¹ (Figure 1).

Figure 1: Correlation between completeness and accuracy scores on the Mental Model Survey for the pre-training (Left) and post-training conditions (Right)

Figure 2: Pre- and post- scores on completeness items (* = ACC; LV = Lead Vehicle)

Generally, both plots indicate that the average post-training scores for all groups were in the ‘correct’ range (i.e. towards the right half of the x-axis). The plots also show that, for most items, the pre-training score (red circles) lagged behind (i.e. was less correct than) the corresponding post-training score (blue dots). While some items show a worsening of knowledge for some groups after training, a majority of items show improvement in correctness after training, for all groups, for both completeness and accuracy.

These plots were generated to visualize the data and to get a sense of any trends or emergent patterns in the raw survey outcomes. They provide an important insight regarding the completeness items: for all groups, participants' mental models were generally correct even before training, i.e., they had a reasonably correct understanding of the overall features of the technology. However, for the accuracy items, for all groups, the pre-training scores tended to be incorrect, with training helping to move the scores to the right of the x-axis.

Figure 3: Pre- and post- scores on accuracy items (* = ACC; LV = Lead Vehicle)

Mental Models Survey scores

The post-training Overall Mental Model Survey scores were analyzed across the three training groups using an ANCOVA (analysis of covariance) in which the pre-training overall scores were treated as the covariate. The covariate, pre-training Overall Mental Model Survey score, was significantly related to the post-training Overall Mental Model Survey scores (F (1, 32) = 23.412; p = 0.00003; η² = 0.422). The ANCOVA found no main effect of the training method on the post-training scores after adjustment for the pre-training scores (F (2, 32) = 2.208; p = 0.126; η² = 0.121). Figure 4 shows the pre-training and post-training mean Overall Mental Model Survey scores across all three training groups.

Figure 4: Overall pre- and post- Mental Model Survey scores

Additional analyses were conducted to examine the differences between the groups in terms of the completeness and accuracy scores. An ANOVA found main effects of both survey time (F (1, 32) = 45.408; p < 0.00001) and training method (F (2, 32) = 4.276; p = 0.0226) for the completeness-only scores, and a main effect of survey time (F (1, 32) = 13.973; p = 0.0007) for the accuracy-only scores (Figure 5; Figure 6).
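
A minimal sketch of this mixed-design ANOVA using the ‘rstatix’ package cited above is shown below; ‘long’, ‘id’, ‘group’, ‘time’, and ‘score’ are assumed names for a long-format version of the data, not the study's actual variables.

```r
library(rstatix)

# Mixed-design ANOVA sketch: survey time (pre/post) as the
# within-subjects factor, training method as the between-subjects factor
anova_test(data = long, dv = score, wid = id,
           between = group, within = time)
```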

Figure 5: Mean pre- and post- scores for Completeness items on the Mental Model Survey
Figure 6: Mean pre- and post- scores for Accuracy items on the Mental Model Survey

Correlation between Overall Mental Model Survey and Trust scores

Previous work has reported that training did not have any impact on users' post-training overall trust scores (Pai et al., 2021). However, it is unknown whether there is a relationship between mental model scores and overall trust scores. A Pearson's product-moment correlation test was conducted to examine the correlation between the overall mental model scores and the overall trust scores. The test revealed no significant correlation between the pre-training Overall Mental Model Survey scores and the pre-training overall trust scores, r (34) = 0.022; p = 0.898. It also revealed no significant correlation between the post-training Overall Mental Model Survey scores and the post-training overall trust scores, r (34) = 0.254; p = 0.135 (Figure 7).

Figure 7: Correlation between overall scores and Trust for the pre-training (Left) and post-training conditions (Right)

Discussion

The study examines the impact of training on drivers' mental models about the functions, operations, and limitations of Adaptive Cruise Control (ACC). While training has been used to improve driver knowledge and performance, it has not been used as often in the vehicle automation domain, and training in this context is still an under-researched subject. Moreover, findings regarding the impact of training on drivers' knowledge about automation systems have been mixed, with some studies showing significant impact on driver knowledge (Forster et al., 2019; Gaspar et al., 2020) and others showing very limited impact (Mueller et al., 2020; Noble et al., 2019). In this study, participants were randomly assigned to one of two training groups (Text-based training or System Visualization training) or to a control group (who received sham training). A Mental Models Survey was used to assess participants' mental models about ACC.

The results from the study, addressing the question ‘Does training improve users' mental models about Adaptive Cruise Control?’, indicate that there was an improvement in the Overall Mental Model Survey scores: all groups experienced an improvement in their mental models about ACC after receiving their assigned training material. However, there was no significant main effect of the training method. The Text-based group had the highest increase in scores, followed by the Visualization group and then the control group, but these differences were not statistically significant. Therefore, Hypothesis I, which assumed that training would help improve mental models, can be accepted, but Hypothesis II, which assumed that the Visualization group would have better scores than the other two, is rejected.

Despite the non-significance of the differences (potentially explained by the small sample size), the direction of the increase in mental model scores is encouraging. The finding provides reasonable motivation for future research on the development and deployment of text-based or visualization-based training programs to improve drivers' mental models about ADAS, potentially leading to improved driver interactions and a reduced likelihood of operator errors while using ACC.

Trust deficiencies after experiencing system failures or malfunction events could be remedied if the driver had prior knowledge about them (Beggiato & Krems, 2013). However, in this study, overall mental model scores and overall trust scores were not significantly correlated, and we found no relationship between drivers' mental models and trust in ACC, thus rejecting Hypothesis III. This raises a question about the distinction between trust grounded in knowledge of a system and trust grounded in experiencing a system's limitations or edge cases. While this study was not equipped to measure trust after experiencing a system, this may be an important gap to address in order to understand user perception and acceptance of technology based on knowledge versus experience.

While the differences were not statistically significant, the results show that the Text-based training had the highest post-training improvements. The literature shows that the majority of learners are visual learners, so the improvement in the mental models of drivers receiving the Visualization method is supported by previous studies (Sadoski & Willson, 2006; Tabbers et al., 2004). There was also an improvement in the Text-based method; although not completely in line with previous literature (Mehlenbacher et al., 2002; Wickens et al., 2015), one explanation could be that presenting material from the owner's manual in a simplified and accessible manner reduced time spent seeking relevant information about ACC. Users from the Visualization training group also had a higher increase in scores compared to the control group, potentially driven by the simplification of the text and the replacement of text-based information with a visualization. Future work could explore the impact of visualization versus text-based approaches on users with varying learning styles (say, visual, auditory, or other learning styles).

This study has a few important limitations. One major limitation is the small sample size: we collected pre- and post-training survey measures from 36 participants (age: M = 31.58 (10.23) years; 22 female). The study and the training session were conducted online using video conferencing due to restrictions imposed by the COVID-19 pandemic. For higher validity, data collection would ideally have been conducted on a driving simulator or on the road to test the transfer of knowledge. Drivers operating and interacting with ACC may provide insights about changes to their mental models post-training and about actual driver behaviors. This would also help evaluate whether training and an improved mental model help calibrate trust pre- and post-training. Similarly, we were limited by the nature of data collection and could therefore only collect self-reported survey measures, which may suffer from questionable reliability and participant bias (Schacter, 1999).

Conclusions

This study investigated the impact of various training approaches (and content) on drivers' mental models of Adaptive Cruise Control, an important and relatively common and widespread ADAS feature. There is fast-growing evidence that the clarity and depth of drivers' understanding of advanced vehicle technologies directly affect how well and how appropriately drivers use these technologies in their vehicles. It is also clear that the promised safety benefits of these technologies will not materialize unless drivers use them appropriately. Therefore, it is paramount that drivers' understanding of these complex and sophisticated technologies is accurate and complete. This means that drivers' mental models of these complex systems must be rich and accurate, and must contain information about the system's capabilities and, perhaps more importantly, its limitations. Accordingly, the objectives of this research were (a) to examine whether training can indeed improve drivers' mental models of such systems (in this case, ACC), (b) to examine whether differences in how training is presented to drivers matter, and (c) to understand accuracy and completeness as separate factors when describing one's mental models.

The results of this online study show promise for training as a method for improving users' mental models and, more importantly, offer some evidence that the type of training may matter, especially if the training can simplify, or make more accessible, the information about these complex systems. The findings may have implications for designing and deploying ADAS in vehicles so as to minimize misuse or disuse of such systems due to incomplete mental models or mismatched expectations. These findings also underline the importance of training approaches and platforms in promoting safe usage of systems through the improved knowledge and understanding that training provides. The results could indicate that succinct and focused training content, such as that used in the text-based and visualization methods, is helpful in improving mental models. However, further research may be needed to understand the effectiveness of different delivery platforms and informational content. Future work could examine the effects of more immersive or interactive approaches, such as video- or virtual-reality-based platforms, that can help drivers quickly understand complex concepts and features in a visual and immersive manner. Finally, this study presents findings that shed light on the potential benefits of training and adds to the somewhat scant literature about training in the ADAS and automated driving domain.

Declaration of competing interests

The authors report no competing interests.

CRediT contribution statement

Apoorva Pramod Hungund: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Visualization, Writing—original draft, Writing—review & editing.

Ganesh Pai: Conceptualization, Investigation, Methodology, Visualization, Writing—original draft, Writing—review & editing.

Anuj K. Pradhan: Conceptualization, Funding acquisition, Investigation, Methodology, Resources, Supervision, Writing—original draft, Writing—review & editing.