Augmented Reality Head-Up Displays (AR-HUDs) promise to enhance drivers’ situational awareness, but they can also increase visual and cognitive load, potentially distracting the driver if not carefully designed and evaluated. This Multivocal Literature Review, covering 2020 through June 2025, combines white and grey literature to jointly address which interaction problems occur most frequently, how these systems are designed and evaluated, and what challenges remain.
The findings converge on two recurring issues: first, comprehension of graphics, which is highly sensitive to choices of color, size, position, hierarchy, and presentation timing; and second, legibility of graphics, closely tied to AR–real-world integration and requiring precise spatial and temporal alignment. The current evidence skews toward simulator studies, with limited on-road validation and little explicit reference to standards. Even so, mixed evaluations that combine objective and subjective metrics are becoming more common, and adaptive attention management is gaining ground: selecting, prioritizing, and limiting what is shown in real time according to urgency, context, and driver state. Overall, the field is shifting from static overlays to contextual and adaptive interfaces, where the value lies not only in what to display, but also in when, where, and how much to display, maximizing comprehensibility and trust while minimizing driver distraction.
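As a purely illustrative reading of the adaptive attention-management pattern described above, the following minimal Python sketch ranks candidate overlays by urgency and contextual relevance, weights urgency more heavily as estimated driver workload rises, and caps how many elements are rendered at once. All names, weights, and thresholds here (`HudElement`, `select_hud_elements`, the workload-based cutoff) are hypothetical assumptions for illustration, not drawn from any system covered by the review.

```python
from dataclasses import dataclass


@dataclass
class HudElement:
    name: str
    urgency: float            # 0.0 (informational) .. 1.0 (safety-critical)
    context_relevance: float  # relevance to the current driving context


def select_hud_elements(candidates, driver_workload, max_elements=3):
    """Rank candidate AR overlays and limit how many are rendered.

    A hypothetical instance of the "select, prioritize, limit" pattern:
    under high estimated workload, only the most urgent elements survive.
    """
    # Weight urgency more heavily as estimated workload rises.
    ranked = sorted(
        candidates,
        key=lambda e: e.urgency * (1 + driver_workload) + e.context_relevance,
        reverse=True,
    )
    # Suppress low-urgency items entirely when the driver is loaded
    # (the 0.5 factor is an arbitrary illustrative threshold).
    visible = [e for e in ranked if e.urgency >= driver_workload * 0.5]
    return visible[:max_elements]


if __name__ == "__main__":
    candidates = [
        HudElement("collision_warning", urgency=0.95, context_relevance=0.9),
        HudElement("navigation_arrow", urgency=0.40, context_relevance=0.8),
        HudElement("speed_limit", urgency=0.30, context_relevance=0.6),
        HudElement("media_info", urgency=0.05, context_relevance=0.2),
    ]
    # With high workload (0.7), only the two most urgent overlays remain.
    for e in select_hud_elements(candidates, driver_workload=0.7):
        print(e.name)
```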