Artificial Intelligence

Assurance of AI Enabled Systems

This workshop introduces state-of-the-art methods for developing test and evaluation plans for AI-driven systems and addresses the novel challenges these systems present.

Upcoming Offerings

Request Info

Course Description


Complex systems driven in whole or in part by artificial intelligence (AI) are fast becoming ubiquitous across a broad range of applications. These systems must undergo test and evaluation (T&E) to ensure they operate as specified and do not behave in undesirable ways. AI-based systems are inherently difficult to test because the underlying algorithms that drive them operate in extremely complex environments with overwhelmingly large combinations of inputs and variables. Nonetheless, engineers must perform T&E procedures for these systems to achieve a desired level of assurance, safety, and trustworthiness.

In this course, a team of experts in the field of AI and T&E will provide participants with an understanding of how cutting-edge T&E plans are developed for AI-driven autonomous systems. Participants also will engage in a variety of in-class exercises to practice the concepts of AI-based T&E for autonomous systems.

Key Takeaways

Participants in this artificial intelligence course will engage in a variety of interactive in-class exercises to gain hands-on experience applying the latest methods in test and evaluation of AI-based autonomous systems.

  • Describe the novel challenges introduced by AI, including machine learning (ML), in autonomous systems.

  • Describe the 6-D framework for creating an AI-enabled system, navigate AI technology solutions with an end-to-end engineering perspective, and explain how convolutional neural networks (CNNs) relate to the AI renaissance.

  • Summarize the basic math behind CNNs, compare CNNs to how the human brain works, and distinguish fact and fiction in CNN applications.

  • Identify the dimensions of complexity for autonomous systems, explain the role and requirements of simulations in autonomy development and testing, and identify the major architectural components of a modular autonomous system.

  • Describe a cyber-physical system; explain safety concerns related to AI-enabled cyber-physical systems; identify mitigations for addressing key classes of challenges and vulnerabilities.

  • Describe the challenges of confidence estimation for deep learning models and explain current approaches for dealing with problems of uncertainty and domain shift.

  • Identify metrics for the performance, quality, reliability, and safety of AI-enabled systems.

  • Identify Test and Evaluation / Verification and Validation (TEVV) tools and methodologies for AI-enabled systems.

  • Identify challenges of verification and validation for autonomous systems that perform control/planning tasks; describe approaches to formal verification of AI controllers; describe approaches to safe design and fallback control architectures.

  • Describe the UL 4600 standard and how it relates to MIL-STD-882; summarize lessons learned from industry R&D work.

  • Define the concept of explainable AI; explain how AI systems can be made more user-friendly; identify ways that failure modes can be made easier to understand.

  • List the essential elements of teamwork and describe how they might apply to AI-enabled and autonomous systems; explain how humans can interact safely and effectively with autonomous systems.

  • Describe strategies for human supervision of AI-enabled and autonomous systems; explain the overarching frameworks that apply to the governance of AI-enabled systems; identify the influence of human-systems interaction on assurance and trust.

  • Identify legal, policy, and ethical issues related to AI-enabled systems; list challenges of interpreting policy and ethical concepts for technology development.

  • Review case studies of testing and evaluating AI-enabled systems to identify lessons learned and potential pitfalls. Conduct mock reviews of notional AI-enabled systems to demonstrate application of the tools learned to relevant challenges.

  • Identify key considerations for deploying autonomous systems in the real world.


Prerequisites

A working knowledge of systems engineering practices and some experience with test and evaluation of complex systems.

Who Should Take this Course

Engineers focused on test and evaluation of complex AI-based systems.

Design engineers working on autonomous systems who want to build insights into making AI-driven systems more readily testable.

Engineering managers who want to gain an understanding of the challenges and best practices related to T&E of AI-based autonomous systems.


Pedro Rodriguez
Pedro A. Rodriguez is the principal technical leader of multiple deep learning projects at JHU/APL, where currently he focuses on developing and deploying deep learning algorithms at the tactical edge for the U.S. Army and the Joint AI Center (JAIC).
Frank Fratrik
Frank Fratrik is the senior director of safety solutions at Edge Case Research, where he manages a group of system safety engineers who provide system safety management and engineering expertise across a diverse customer base of developers, users, and assessors.
Bart Paulhamus
Course Director
Bart Paulhamus is the chief of the Intelligent Systems Center at Johns Hopkins University’s Applied Physics Laboratory.
Jane Pinelis
Chief of the Test, Evaluation, and Assessment Branch
Dr. Jane Pinelis is the Chief of the Test, Evaluation, and Assessment branch at the Department of Defense Joint Artificial Intelligence Center (JAIC). She leads a diverse team of testers and analysts in rigorous test and evaluation (T&E) for JAIC capabilities, as well as development of T&E-specific products and standards that will support testing of…
Chris Ratto
Dr. Christopher Ratto is a member of the Senior Professional Staff at The Johns Hopkins University Applied Physics Laboratory.
Dan Yaroslaski
Dan Yaroslaski is a senior professional staff member in the Tactical Intelligence Systems group within the Asymmetrical Operations Sector at Johns Hopkins Applied Physics Laboratory.
Adam Watkins
Adam Watkins is a principal staff member of JHU/APL with over 15 years’ experience in autonomy and robotics.
Aurora Schmidt
Aurora C. Schmidt is a project manager in JHU/APL’s Research and Exploratory Development Mission Area, and her research interests include sensor networks, estimation and coordination problems, signal processing, compressed sensing, optimization, multi-target tracking, control theory, and information and decision-making.
Chad Hawthorne
Chad Hawthorne is a principal investigator and autonomy researcher at JHU/APL and has 20 years of experience developing autonomy software for unmanned maritime systems. At APL, he oversees a research team that focuses on delivering autonomy and sensing solutions for our nation’s submarine and unmanned platforms.
John Gersh
John Gersh is a principal cognitive engineer in JHU/APL’s Intelligent Systems Branch, where he focuses on human-machine teaming.
Lynn Reggia
Lynn Reggia is the supervisor of the Human Machine Engineering Group within JHU/APL’s Air and Missile Defense Sector.
Reed Young
Reed Young is a member of the senior professional staff in the Research and Exploratory Development Mission Area at JHU/APL, where he serves as the program manager for Robotics and Autonomy.
Anton T. Dahbura
Anton Dahbura is the co-director of Johns Hopkins University’s Institute for Assured Autonomy and executive director of the Johns Hopkins University Information Security Institute.
David Handelman
David Handelman is a Senior Roboticist at the Johns Hopkins University Applied Physics Laboratory. He is a member of the Robotics Group in the Research and Exploratory Development Department. His current research focus is adaptive human-robot teaming based on the emulation of human skill acquisition by robots using neuro-symbolic AI/ML.
Dave Barsic
Dave Barsic is an Assistant Program Manager in the Force Projection Sector at JHU/APL. He is a member of the JHU/APL Principal Professional Staff and has 19 years of experience focusing on machine learning and signal processing applications for various U.S. Navy efforts.
Kevin Ligozio
Kevin Ligozio serves as the technical director and assistant group supervisor of the Tactical Intelligence Systems Group within the Asymmetric Operations Sector at the Johns Hopkins University Applied Physics Laboratory.