Artificial Intelligence

Assurance of AI-Enabled Systems

This workshop introduces state-of-the-art methods for developing test and evaluation (T&E) plans for AI-driven systems and addresses the novel challenges these systems present.

Upcoming Offerings


Course Description


Complex systems driven in whole or in part by artificial intelligence (AI) are fast becoming ubiquitous across a broad range of applications. These systems must undergo test and evaluation (T&E) to ensure they operate as specified and do not behave in undesirable ways. AI-based systems are inherently difficult to test because the underlying algorithms operate in extremely complex environments with overwhelmingly large combinations of inputs and variables. Nonetheless, engineers must perform T&E so that these systems achieve a desired level of assurance, safety, and trustworthiness.

In this course, a team of experts in the field of AI and T&E will provide participants with an understanding of how cutting-edge T&E plans are developed for AI-driven autonomous systems. Participants also will engage in a variety of in-class exercises to practice the concepts of AI-based T&E for autonomous systems.

Key Takeaways

Participants in this artificial intelligence course will engage in a variety of interactive in-class exercises to get hands-on experience applying the latest techniques in test and evaluation of AI-based autonomous systems.

  • Describe the novel challenges introduced by AI, including machine learning (ML), in autonomous systems.

  • Describe the 6-D framework for creating an AI-enabled system, navigate AI technology solutions with an end-to-end engineering perspective, and explain how convolutional neural networks (CNNs) relate to the AI renaissance.

  • Summarize the basic math behind CNNs, compare CNNs to how the human brain works, and distinguish fact and fiction in CNN applications.

  • Identify the dimensions of complexity for autonomous systems, explain the role and requirements of simulations in autonomy development and testing, and identify the major architectural components of a modular autonomous system.

  • Describe a cyber-physical system; explain safety concerns related to AI-enabled cyber-physical systems; identify mitigations for addressing key classes of challenges and vulnerabilities.

  • Describe the challenges of confidence estimation for deep learning models and explain current approaches for dealing with problems of uncertainty and domain shift.

  • Identify metrics for performance, quality, reliability, and safety of AI-enabled systems.

  • Identify test and evaluation/verification and validation (TEVV) tools and methodologies for AI-enabled systems.

  • Identify the challenges of verification and validation for autonomous systems that perform control and planning tasks; describe approaches to formal verification of AI controllers; describe approaches to safe design and fallback control architectures.

  • Describe the UL 4600 standard and how it relates to MIL-STD-882; summarize lessons learned from R&D work in industry.

  • Define the concept of explainable AI; explain how AI systems can be made more user-friendly; identify ways that failure modes can be made easier to understand.

  • List the essential elements of teamwork and describe how they might apply to AI-enabled and autonomous systems; explain how humans can interact safely and effectively with autonomous systems.

  • Describe strategies for human supervision of AI-enabled and autonomous systems; explain the overarching frameworks that apply to the governance of AI-enabled systems; identify the influence of human-systems interaction on assurance and trust.

  • Identify legal, policy, and ethical issues related to AI-enabled systems; list challenges of interpreting policy and ethical concepts for technology development.

  • Review case studies of testing and evaluating AI-enabled systems to identify lessons learned and potential pitfalls. Establish mock reviews of nominal AI-enabled systems to demonstrate application of learned tools to relevant challenges.

  • Identify key considerations for deploying autonomous systems in the real world.
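One takeaway above mentions summarizing the basic math behind CNNs. As a rough illustration only (not course material), the core operation is a 2-D convolution, which can be sketched in a few lines; the image and edge-detection kernel here are made up for demonstration:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: slide the kernel over the image
    and take the sum of elementwise products at each position.
    (Deep learning frameworks typically compute this cross-correlation
    form without flipping the kernel.)"""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# Hypothetical example: a 3x3 vertical-edge kernel on a tiny image
# whose left half is dark (0) and right half is bright (1).
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)
print(conv2d(image, kernel))  # every window straddles the edge, so all outputs are 3.0
```

A CNN layer stacks many such kernels (learned from data) and applies a nonlinearity to the results; the "feature maps" discussed in the course are exactly these convolution outputs.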


Prerequisites

A working knowledge of systems engineering practices and some experience with test and evaluation of complex systems.

Who Should Take this Course

Engineers focused on test and evaluation of complex AI-based systems.

Design engineers working on autonomous systems who want to build insights into making AI-driven systems more readily testable.

Engineering managers who want to gain an understanding of the challenges and best practices related to T&E of AI-based autonomous systems.

Instructors


Anton T. Dahbura
Anton (Tony) Dahbura is the co-director of Johns Hopkins University’s Institute for Assured Autonomy and executive director of the Johns Hopkins University Information Security Institute.
Frank Fratrik
Frank Fratrik is the senior director of safety solutions at Edge Case Research, where he manages a group of system safety engineers who provide system safety management and engineering expertise across a diverse customer base of developers, users, and assessors.
John Gersh
John Gersh is a principal cognitive engineer in JHU/APL’s Intelligent Systems Branch, where he focuses on human-machine teaming.
Erin Hahn
Erin Hahn is a senior national security analyst and principal professional staff member in JHU/APL’s National Security Analysis Mission Area, where she supervises a group of analysts working on broad issues related to technology development and implementation.
Chad Hawthorne
Chad Hawthorne is a principal investigator and autonomy researcher at JHU/APL and has 20 years’ experience developing autonomy software for unmanned maritime systems.
Lynn Reggia
Lynn Reggia is the supervisor of the Human Machine Engineering Group within JHU/APL’s Air and Missile Defense Sector.
Pedro Rodriguez
Pedro A. Rodriguez is the principal technical leader of multiple deep learning projects at JHU/APL, where he currently focuses on developing and deploying deep learning algorithms at the tactical edge for the U.S. Army and the Joint AI Center (JAIC).
Aurora Schmidt
Aurora C. Schmidt is a project manager in JHU/APL’s Research and Exploratory Development Mission Area, and her research interests include sensor networks, estimation and coordination problems, signal processing, compressed sensing, optimization, multi-target tracking, control theory, and information and decision-making.
Christina Selby
Christina Selby is a senior professional staff member and section supervisor at JHU/APL, with expertise in developing and analyzing mathematical methodologies to solve critical problems that are not well understood.
Tamim Sookoor
Tamim Sookoor is a researcher at JHU/APL, where his research interests include cyber physical systems (CPS), cyber security, the Internet of Things (IoT), and machine learning.
Adam Watkins
Adam Watkins is a principal staff member of JHU/APL with over 15 years’ experience in autonomy and robotics.
Reed Young
Reed Young is a member of the senior professional staff in the Research and Exploratory Development Mission Area at JHU/APL, where he serves as the program manager for Robotics and Autonomy.
Design Thinking Team

Sarah Rigsbee
Sarah Rigsbee is a senior human-centered design and innovation strategist and senior professional staff member at JHU/APL and is the lead human-centered design strategist for JHU’s Institute for Assured Autonomy (IAA).