
Artificial Intelligence Course: Assured AI & Autonomy

This artificial intelligence course introduces state-of-the-art methods for developing test and evaluation plans for AI-driven systems and addresses the novel challenges these systems present.

Upcoming Offerings


Course Description


In this Artificial Intelligence course, test and evaluation experts from Johns Hopkins will share how cutting-edge test and evaluation plans are developed for AI-driven autonomous systems.

Complex systems driven in whole or in part by artificial intelligence (AI) are fast becoming ubiquitous across a broad range of applications. These systems operate in extremely complex environments with overwhelmingly large combinations of inputs and variables. Engineers must test and evaluate these systems to ensure they achieve the desired levels of assurance, safety, and trustworthiness.

Key Takeaways

Participants in this artificial intelligence course will engage in a variety of interactive in-class exercises to gain hands-on experience applying the latest techniques in test and evaluation of AI-based autonomous systems.

  • Describe the novel challenges introduced by AI, including machine learning (ML), in autonomous systems.

  • Describe the 6-D framework for creating an AI-enabled system, navigate AI technology solutions with an end-to-end engineering perspective, and explain how convolutional neural networks (CNNs) relate to the AI renaissance.

  • Summarize the basic math behind CNNs, compare CNNs to how the human brain works, and distinguish fact and fiction in CNN applications.

  • Identify the dimensions of complexity for autonomous systems, explain the role and requirements of simulations in autonomy development and testing, and identify the major architectural components of a modular autonomous system.

  • Explain what an IoT/cyber-physical system is, explain safety concerns related to AI-enabled cyber-physical systems, and identify mitigations for addressing key classes of challenges and vulnerabilities.

  • Describe the challenges of confidence estimation for deep learning models and explain current approaches for dealing with problems of uncertainty and domain shift.

  • Identify challenges of verification and validation for autonomous systems that perform control/planning tasks, describe approaches to formal verification of AI controllers, and describe approaches to safe design and fallback control architectures.

  • Explain the UL 4600 standard and how it relates to MIL-STD-882, and summarize lessons learned from industry R&D work.

  • Define the concept of explainable AI, explain how AI systems can be made more user-friendly, and identify ways that failure modes can be made easier to understand.

  • List the essential elements of teamwork, describe how they might apply to AI-enabled and autonomous system teammates, and explain how humans can interact safely and effectively with autonomous systems.

  • Describe strategies for human supervision of AI-enabled and autonomous systems, explain the overarching frameworks that apply to the governance of AI-enabled systems, identify legal, policy, and ethical issues related to AI-enabled systems, and list challenges of interpreting policy and ethical concepts for technology development.

  • Identify important considerations for deploying autonomous systems in the real world.
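One takeaway above is summarizing the basic math behind CNNs; the core of that math is the discrete convolution, in which a small kernel slides over an image and produces a weighted sum at each position. As an illustrative sketch only (not course material), assuming NumPy is available, the `conv2d` function and edge-detector kernel below are hypothetical examples of that operation:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (what CNN layers actually compute)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the kernel-sized window at (i, j)
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny 4x4 "image" with a vertical edge between columns 1 and 2
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])

# A hypothetical vertical-edge-detector kernel
kernel = np.array([[1., -1.],
                   [1., -1.]])

print(conv2d(image, kernel))  # nonzero only where the edge sits
```

Only the middle column of the output responds (with value -2), because the kernel's positive and negative weights cancel over flat regions and fire at the edge; a trained CNN learns many such kernels rather than hand-specifying them.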


Prerequisites

A working knowledge of systems engineering practices and some experience with testing and evaluation of complex systems.

Who Should Take This Course

Engineers focused on test and evaluation of complex AI-based systems.

Design engineers working on autonomous systems who want to build insights into making AI-driven systems more readily testable.

Engineering managers who want to gain an understanding of the challenges and best practices related to T&E of AI-based autonomous systems.




Instructors

Alexis Basantis
Alexis Basantis is a human-centered design strategist at JHU/APL. In her role, she utilizes human factors engineering and user-centered design principles to better integrate the human perspective into the design and development of complex technical landscapes.
Rick Blank
Rick Blank is the program manager and has been an instructor in Johns Hopkins Engineering’s Engineering for Professionals (EP) MS program in Engineering Management since 2009.
Anton T. Dahbura
Anton (Tony) Dahbura is the co-director of Johns Hopkins University’s Institute for Assured Autonomy and executive director of the Johns Hopkins University Information Security Institute.
Frank Fratrik
Frank Fratrik is the senior director of safety solutions at Edge Case Research, where he manages a group of system safety engineers who provide system safety management and engineering expertise across a diverse customer base of developers, users, and assessors.
John Gersh
John Gersh is a principal cognitive engineer in JHU/APL’s Intelligent Systems Branch, where he focuses on human-machine teaming.
Ariel Greenberg
Ariel M. Greenberg is a senior staff scientist and project manager at JHU/APL, whose research interests include psychophysiology, behavioral modeling and simulation, and machine ethics.
Erin Hahn
Erin Hahn is a senior national security analyst and principal professional staff member in JHU/APL’s National Security Analysis Mission Area, where she supervises a group of analysts working on broad issues related to technology development and implementation.
Chad Hawthorne
Chad Hawthorne is a principal investigator and autonomy researcher at JHU/APL and has 20 years’ experience developing autonomy software for unmanned maritime systems.
Joshua Mueller
Josh Mueller is deputy lead engineer for hypersonic kill chains at JHU/APL, where previously he led the Operational Concepts and Integration Section, a team of analysts focused on meso-scale complex socio-technical systems analysis.
Lynn Reggia
Lynn Reggia is the supervisor of the Human Machine Engineering Group within JHU/APL’s Air and Missile Defense Sector.
Sarah Rigsbee
Sarah Rigsbee is a senior human-centered design and innovation strategist and senior professional staff member at JHU/APL and is the lead human-centered design strategist for JHU’s Institute for Assured Autonomy (IAA).
Pedro Rodriguez
Pedro A. Rodriguez is the principal technical leader of multiple deep learning projects at JHU/APL, where currently he focuses on developing and deploying deep learning algorithms at the tactical edge for the U.S. Army and the Joint AI Center (JAIC).
Aurora Schmidt
Aurora C. Schmidt is a project manager in JHU/APL’s Research and Exploratory Development Mission Area, and her research interests include sensor networks, estimation and coordination problems, signal processing, compressed sensing, optimization, multi-target tracking, control theory, and information and decision-making.
Christina Selby
Christina Selby is a senior professional staff member and section supervisor at JHU/APL, with expertise in developing and analyzing mathematical methodologies to solve critical problems that are not well understood.
Tamim Sookoor
Tamim Sookoor is a researcher at JHU/APL, where his research interests include cyber physical systems (CPS), cyber security, the Internet of Things (IoT), and machine learning.
Adam Watkins
Adam Watkins is a principal staff member of JHU/APL with over 15 years’ experience in autonomy and robotics.
Reed Young
Reed Young is a member of the senior professional staff in the Research and Exploratory Development Mission Area at JHU/APL, where he serves as the program manager for Robotics and Autonomy.