The DOPE Framework

Factors for successful adoption of agentic AI and autonomous capabilities

Published on April 3, 2025

What is the DOPE Framework?

As we begin to integrate agentic AI and autonomous systems into complex domains (like cyber defence, intelligence analysis, healthcare, and finance), ensuring effective collaboration between humans and machines is critical. Human-Machine Teaming (HMT) seeks to optimise this partnership, leveraging the strengths of both human expertise and machine capabilities. However, building trust and ensuring seamless coordination requires robust guiding frameworks that help AI developers, software engineers, and users understand which HMT factors need to be considered throughout the software development lifecycle. This article introduces the DOPE framework, grounded in research from human factors and behavioural science, which we designed to capture the essential elements for successful HMT in demanding environments.

The four factors to consider when engineering human-machine teaming workflows.

The DOPE framework identifies four critical, interdependent capabilities required for effective HMT:

Directability. This refers to the ability of the human to direct or influence the agent's actions and the agent's capacity to accept this direction. It ensures humans can adjust the agent's goals or operating mode as needed. Conversely, this also includes the agent's ability to appropriately direct the human's attention, for example, to critical notifications or events.

Observability. The agent needs to make its status, processes, and knowledge visible to the human. This means transparent operation and real-time insight into the agent's actions, which is crucial for situational awareness. The agent should also be able to observe the human's activities and decisions.

Predictability. The agent's behaviour should be consistent and understandable, allowing the human to anticipate the agent's actions in a given context. Predictable behaviour builds trust, preventing the human from being surprised by erratic actions. Likewise, the human should behave reliably to aid the agent's performance.

Explainability. Going beyond simple observability, the agent must be able to explain decisions and recommendations in human-understandable terms. Providing clear rationale is vital for user trust, especially in complex and high-stakes scenarios like cyber defence. The agent should also receive information about the human's justifications for actions or feedback.

The DOPE framework adapts foundational HMT principles (Observability, Predictability, Directability) seen in other models like the CoActive Design Framework and MITRE's HMT System Engineering Guide, but crucially adds Explainability to address the complexity and transparency needs of modern agentic AI and autonomous systems. We have sought to keep this framework simple to make it memorable and consistently applicable across any AI-related engineering work.
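To make the four capabilities concrete, here is a minimal sketch of what a DOPE-aligned agent interface might look like in code. All class, method, and field names here are illustrative assumptions for this article, not part of the framework or of any particular system; the point is simply that each capability maps to an explicit point of interaction between human and agent.

```python
from dataclasses import dataclass, field


@dataclass
class Recommendation:
    """A proposed action the agent surfaces to the human (hypothetical shape)."""
    action: str
    rationale: str        # Explainability: human-readable reasoning
    confidence: float     # Explainability: how sure the agent is
    status: str = "pending"


@dataclass
class DopeAgent:
    """Illustrative agent exposing the four DOPE capabilities."""
    mode: str = "advisory"
    event_log: list = field(default_factory=list)

    # Directability: the human can change the agent's operating mode.
    def set_mode(self, mode: str) -> None:
        self.event_log.append(f"mode set to {mode}")
        self.mode = mode

    # Observability: the agent exposes its status and recent activity.
    def status(self) -> dict:
        return {"mode": self.mode, "recent_events": self.event_log[-5:]}

    # Predictability: the same threat type always yields the same action class.
    def recommend(self, threat_type: str) -> Recommendation:
        playbook = {"exfiltration": "isolate affected servers"}
        action = playbook.get(threat_type, "escalate to analyst")
        rec = Recommendation(
            action=action,
            rationale=f"matched playbook entry for '{threat_type}'",
            confidence=0.9 if threat_type in playbook else 0.5,
        )
        self.event_log.append(f"recommended: {action}")
        return rec

    # Directability: the agent accepts the human's decision on a recommendation.
    def receive_decision(self, rec: Recommendation, decision: str) -> None:
        rec.status = decision  # e.g. "approved", "modified", or "rejected"
        self.event_log.append(f"human decision: {decision}")
```

In a sketch like this, Directability appears twice (human-to-agent mode changes and decision handling), Observability is the `status` view, Predictability comes from the deterministic playbook mapping, and Explainability rides along as the `rationale` and `confidence` fields on every recommendation.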

Applying DOPE to autonomous cyber defence workflows

We trialled the DOPE framework on a real project with a customer to analyse how military cyber analysts were interacting with a fictitious cyber AI agent. Using the DOPE framework, we were able to identify what human-machine teaming capabilities would be required to develop a real-world version of such a system. Consider this simplified illustration from the project:

Directability. Cyber analysts could adjust the agent's mode of operation and intervene to authorise or prevent the agent’s proposed actions. Following some cyber event (such as a data exfiltration alert), the agent might suggest isolating the involved servers. The cyber analysts could review this recommendation, assess the potential impact using information provided by the agent, and then either approve the action, modify it (e.g., excluding a specific critical asset from isolation), or reject it based on their contextual understanding and mission priorities. The agent was designed to accept and act on these human directives.
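The approve/modify/reject flow described above can be sketched as a single review step. The function name, decision labels, and asset identifiers below are hypothetical, chosen only to illustrate how a human directive might be applied to an agent's proposed isolation action.

```python
def review_recommendation(proposed_assets, decision, excluded_assets=None):
    """Apply a human analyst's decision to a proposed server-isolation action.

    decision: "approve" executes the action as proposed, "modify" drops
    excluded critical assets from the isolation set, and "reject" cancels
    the action entirely. Returns the final list of assets to isolate.
    """
    if decision == "approve":
        return sorted(proposed_assets)
    if decision == "modify":
        excluded = set(excluded_assets or [])
        return sorted(a for a in proposed_assets if a not in excluded)
    if decision == "reject":
        return []
    raise ValueError(f"unknown decision: {decision}")
```

For example, an analyst modifying the recommendation to keep a critical asset online might call `review_recommendation(["db-srv-1", "web-srv-2"], "modify", excluded_assets=["db-srv-1"])`, leaving only the non-critical server in the isolation set.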

Observability. The cyber agent provided the cyber analyst with real-time status updates via its interface. When the agent detected anomalous outbound traffic indicative of an attack, it presented key information clearly: affected assets, traffic characteristics, and an initial assessment correlating the event to known threat patterns. This demonstrably allowed the cyber analyst to maintain situational awareness regarding the agent's detection process and the cyber threat.

Predictability. Cyber analysts familiarised themselves with the agent's operation during a pre-mission orientation exercise. They learned to anticipate the agent's likely responses to specific threat types based on the agent's configured protocols and past behaviour. When a cyber event occurred (e.g., an exfiltration attempt was flagged), experienced participants expected the agent to recommend containment actions, which it did, thus building confidence in (and an increasing reliance on) the agent's continued good performance.

Explainability. A critical factor observed was the need for the agent to justify alerts and recommendations. The agent offered explanations for detections, outlining the reasoning (e.g., matched threat signatures, anomaly scores) and confidence levels. Cyber analysts used these explanations to validate the system's findings before committing to a response action. The use of a causal factors chart also supported the cyber analyst’s decision-making.
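An explanation of the kind described, combining matched signatures, an anomaly score, and a confidence level, might be carried as a small structured payload that the interface renders for the analyst. The field names and the threshold value below are assumptions for illustration only.

```python
def build_explanation(matched_signatures, anomaly_score, threshold=0.7):
    """Assemble a human-readable rationale for a detection (illustrative).

    Combines the matched threat signatures and the anomaly score into a
    list of reasons, plus a coarse confidence level the analyst can use
    to decide whether to validate the finding before acting.
    """
    confidence = "high" if matched_signatures and anomaly_score >= threshold else "low"
    reasons = [f"matched threat signature: {sig}" for sig in matched_signatures]
    reasons.append(f"anomaly score {anomaly_score:.2f} vs threshold {threshold:.2f}")
    return {"confidence": confidence, "reasons": reasons}
```

The key design point is that the explanation is structured data rather than free text: the interface can render the reasons as a list (or feed them into a causal factors chart), and the confidence level gives the analyst an immediate cue about how much scrutiny the recommendation deserves.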

Applying the DOPE framework in this context provided practical insights into designing human-machine teams where trust and control are paramount. Our motivation in developing and sharing DOPE is to offer a usable tool that helps advance the development of safer, more transparent, and collaborative AI systems across the field.

Get in touch if you'd like us to help you use the DOPE framework to analyse and engineer your human-AI workflows!