
Developing and Integrating Explainable AI Principles for Federal Applications – GTC 21 Presentation

Learn about developing explainable AI (XAI) systems within major deep learning initiatives for federal government applications. With organizations such as the Defense Advanced Research Projects Agency (DARPA), the National Institute of Standards and Technology (NIST), and the National Security Commission on Artificial Intelligence (NSCAI) naming XAI a frontier in the next decade of AI research, we examine mission-oriented use cases and goals, showing how XAI can enable end users to effectively test, evaluate, understand, and deploy AI-enabled recommendations. We'll cover best practices across use cases such as situational awareness systems, threat detection, NLP-based monitoring, and anomaly detection based on graphical network analysis, along with the building blocks of a successful AI and XAI implementation, demonstrating how teams can leverage AI, GPUs, and system best practices to protect ethics, fairness, and interpretability in regulated environments while increasing adoption.

