Human Factors in the Use of Detect-and-Avoid Decision Support Tools by Remote Pilots of Unmanned Aircraft Systems
Abstract
Unmanned aircraft systems are being deployed in increasingly dense and heterogeneous airspace, with remote pilots operating beyond visual line of sight and relying on constrained, mediated access to the external environment. Detect-and-avoid decision support tools have emerged to assist these operators in maintaining safe separation, resolving conflicts, and coordinating with conventional air traffic services. However, the effective use of such tools depends on how human cognitive, perceptual, and strategic processes adapt to complex automation that filters, transforms, and prioritizes information about surrounding traffic and environmental constraints. This paper examines human factors in the use of detect-and-avoid decision support tools by remote pilots through an integrated, model-based lens that links operator workload, trust calibration, attention allocation, and decision dynamics to tool design characteristics and operational demands. A conceptual task analysis is combined with formal models of alert processing, evidence accumulation, and compliance with recommended maneuvers, and with a simulation-based framework representing variable traffic geometries, uncertainty in sensor and surveillance inputs, and different display configurations. Results from these models are used to articulate conditions under which detect-and-avoid support may mitigate, preserve, or shift error modes for remote pilots supervising single or multiple aircraft. The discussion emphasizes parameterized trade-offs, highlighting how seemingly incremental changes in alerting thresholds or visualization methods can alter cognitive demands and decision latencies. The paper concludes with implications for design, training, and regulation that aim to support reliable, transparent, and predictable human use of detect-and-avoid tools, without assuming automation infallibility.
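To make the abstract's reference to evidence-accumulation modeling of alert compliance concrete, the following is a minimal illustrative sketch, not the paper's actual model: a hypothetical two-boundary accumulator (drift-diffusion style) for a single remote pilot deciding whether to comply with or override a recommended maneuver after a detect-and-avoid alert. The function name and all parameters (drift, noise, threshold, non_decision_time) are assumed for illustration only.

```python
import numpy as np

def simulate_alert_response(drift=0.8, noise=1.0, threshold=1.5,
                            non_decision_time=0.4, dt=0.01, max_time=10.0,
                            rng=None):
    """Illustrative two-boundary evidence accumulator for one alert response.

    Evidence drifts toward the upper boundary (+threshold, comply with the
    recommended maneuver) or the lower boundary (-threshold, override).
    Returns (decision, latency_seconds); decision is None if neither boundary
    is reached within max_time.
    """
    rng = np.random.default_rng() if rng is None else rng
    evidence, t = 0.0, 0.0
    while t < max_time:
        # Noisy evidence increment: drift stands in for alert salience and
        # calibrated trust; noise stands in for display clutter and
        # uncertainty in the surveillance-derived traffic picture.
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if evidence >= threshold:
            return "comply", non_decision_time + t
        if evidence <= -threshold:
            return "override", non_decision_time + t
    return None, non_decision_time + max_time

# Example: estimate compliance rate and mean decision latency for one
# assumed parameter setting.
rng = np.random.default_rng(0)
trials = [simulate_alert_response(rng=rng) for _ in range(2000)]
comply_latencies = [lat for dec, lat in trials if dec == "comply"]
print("compliance rate:", sum(dec == "comply" for dec, _ in trials) / len(trials))
print("mean comply latency (s):", np.mean(comply_latencies))
```

In a sketch of this kind, shifting the threshold or drift parameters stands in for the alerting-threshold and display-salience trade-offs discussed in the paper: higher thresholds yield slower but more conservative responses, while stronger drift shortens latencies at the cost of sensitivity to weak or ambiguous evidence.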