Background. Connected and Autonomous Vehicles (CAVs) are evolving into software-defined cyber-physical systems with dense Vehicle-to-Everything (V2X) communications. The network perimeter is blurring, and legacy in-vehicle buses such as the Controller Area Network (CAN) now coexist with external interfaces; as a result, internal and external attack vectors have become interdependent.
Methods. We map threats across the in-vehicle network and V2X domains, apply an analytic conceptual methodology, and formalize the observe-interpret-decide-act loop. Metrics are defined in two categories: security effectiveness (detection rate, false positive rate, F1-score, mean time to detect, containment ratio) and operational viability (policy enforcement latency, CPU/memory load on Electronic Control Units (ECUs), network overhead, and impact on real-time deadlines).
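The security-effectiveness metrics named above can be computed from labeled detection outcomes. The following sketch is illustrative only: the function name, argument names, and input format are assumptions, not part of the evaluation protocol described here.

```python
def security_metrics(tp, fp, fn, tn, detect_delays_ms, contained, total_incidents):
    """Illustrative computation of the security-effectiveness metrics.

    tp/fp/fn/tn are counts of true/false positives/negatives from a labeled
    trial; detect_delays_ms holds per-incident detection delays; contained is
    the number of incidents contained before impact.
    """
    detection_rate = tp / (tp + fn)                  # share of real attacks flagged
    false_positive_rate = fp / (fp + tn)             # share of benign events misflagged
    precision = tp / (tp + fp)
    f1 = 2 * precision * detection_rate / (precision + detection_rate)
    mttd_ms = sum(detect_delays_ms) / len(detect_delays_ms)  # mean time to detect
    containment_ratio = contained / total_incidents
    return {
        "detection_rate": detection_rate,
        "false_positive_rate": false_positive_rate,
        "f1": f1,
        "mttd_ms": mttd_ms,
        "containment_ratio": containment_ratio,
    }
```

In practice these would be reported per attack class (e.g. CAN injection vs. V2X spoofing), since aggregate figures can mask weak detection of rare scenarios.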
Results. We propose a multi-layer ZTNA architecture with UEBA tailored to the automotive sector: (1) a telemetry layer that collects data from CAN/Automotive Ethernet, V2X messages, sensors, and ECU diagnostics; (2) an AI-based UEBA core that builds and continuously updates behavioral profiles of entities and detects anomalies; (3) an engine that converts anomalies and context into continuous real-time trust scores for each entity and request; (4) a policy mechanism and enforcement points that enable granular actions, from step-up authentication and privilege reduction to microsegmented isolation and notifications to the Vehicle Security Operations Center (VSOC). This closed loop detects zero-day and insider scenarios early, restricts lateral movement, and enforces target policy with millisecond-scale latency.
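To make the interplay of layers (2)-(4) concrete, here is a minimal sketch of a behavioral-baseline trust engine: an exponentially weighted moving average (EWMA) profile per entity, deviation-driven trust decay, and threshold-based enforcement actions. All class names, thresholds, and parameters are illustrative assumptions, not the architecture's specification.

```python
class TrustEngine:
    """Toy trust-scoring loop: profile -> anomaly -> trust score -> action.

    Assumed parameters: alpha is the EWMA smoothing factor for the behavioral
    baseline; decay controls how quickly trust recovers on clean observations.
    """

    def __init__(self, alpha=0.2, decay=0.9):
        self.alpha = alpha
        self.decay = decay
        self.baseline = {}   # entity -> expected feature value (e.g. msg rate)
        self.trust = {}      # entity -> trust score in [0, 1]

    def observe(self, entity, value):
        base = self.baseline.get(entity, value)
        deviation = abs(value - base) / (abs(base) + 1e-9)
        # Update the behavioral profile (layer 2).
        self.baseline[entity] = (1 - self.alpha) * base + self.alpha * value
        t = self.trust.get(entity, 1.0)
        if deviation > 0.5:
            # Anomaly: cut trust in proportion to the deviation (layer 3).
            t *= max(0.0, 1.0 - min(deviation, 1.0))
        else:
            # Normal behavior: trust recovers gradually toward 1.0.
            t = 1.0 - (1.0 - t) * self.decay
        self.trust[entity] = t
        return t

    def action(self, entity):
        # Map the trust score to a graduated enforcement action (layer 4).
        t = self.trust.get(entity, 1.0)
        if t >= 0.8:
            return "allow"
        if t >= 0.5:
            return "step_up_auth"
        if t >= 0.2:
            return "reduce_privilege"
        return "isolate_and_notify_vsoc"
```

A production engine would use multivariate models rather than a single scalar feature, but the structure is the same: every request is re-scored against the current profile, so enforcement tightens continuously instead of at a one-time perimeter check.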
Conclusions. AI-powered UEBA integrates naturally into the Zero-Trust Architecture model, shifting in-vehicle security from static rules to continuous verification. Interaction with the VSOC, together with explicit key indicators, yields a measurable implementation path that respects ECU computational resource constraints. Future research directions include robustness against adversarial manipulation, explainable AI for audit and certification, privacy-preserving learning, and scalable over-the-air (OTA) model management.