
David Shriver

I work to make machine learning systems safer and more reliable, especially in applications where failures can have serious consequences, such as autonomous vehicles and other safety-critical systems. My research develops practical tools and techniques for analyzing, verifying, and repairing neural networks, ensuring that failures are caught and remediated before they can have a significant impact.

Education

Ph.D., Computer Science

Dec 2022
University of Virginia

Advised by: Matt Dwyer and Sebastian Elbaum

Thesis: Increasing the Applicability of Verification Tools for Neural Networks

M.S., Computer Science

May 2018
University of Nebraska-Lincoln

Advised by: Sebastian Elbaum

Thesis: Assessing the Quality and Stability of Recommender Systems

B.S., Computer Engineering

May 2016
University of Nebraska-Lincoln

Positions Held

Machine Learning Research Scientist

Software Engineering Institute (SEI)

January 2023 — February 2026

Led teams of researchers, engineers, and software developers on Secure AI Lab projects focused on identifying, understanding, and defending against AI model vulnerabilities.

Research Assistant

Department of Computer Science, University of Virginia

August 2018 — December 2022

Performed research on techniques for increasing the applicability of state-of-the-art formal verification tools for neural networks, improving the safety and correctness of systems with high costs of failure, such as autonomous vehicles.

Research Intern

Langley Research Center, NASA

June 2021 — August 2021

Proposed and led a short-term project to develop a method for transforming temporal behavioral properties of neural networks into local robustness properties, enabling the application of a wide range of existing analysis tools, and investigated the feasibility of this transformation in the context of neural networks trained for aircraft collision avoidance.

Research Assistant

Department of Computer Science and Engineering, University of Nebraska-Lincoln

March 2014 — July 2018

Developed approaches for assessing the quality and stability of recommender systems.

Publications

Compositional Neural Network Verification via Assume-Guarantee Reasoning
Neural Information Processing Systems (NeurIPS), 2025.
[Paper] [Code]

Concept-ROT: Poisoning Concepts in Large Language Models with Model Editing
International Conference on Learning Representations (ICLR), 2025.
[Paper] [Code]

Deeper Notions of Correctness in Image-Based DNNs: Lifting Properties from Pixel to Entities
Foundations of Software Engineering (FSE), 2023.
[Paper]

DeepManeuver: Adversarial Test Generation for Trajectory Manipulation of Autonomous Vehicles
IEEE Transactions on Software Engineering (TSE), 2023.

Increasing the Applicability of Verification Tools for Neural Networks
Ph.D. Thesis, University of Virginia, 2022.
[Paper]

Distribution Models for Falsification and Verification of DNNs
International Conference on Automated Software Engineering (ASE), 2021.

DNNV: A Framework for Deep Neural Network Verification
Computer Aided Verification (CAV), 2021.
[Paper] [Tool] [Video]

Reducing DNN Properties to Enable Falsification with Adversarial Attacks
International Conference on Software Engineering (ICSE), 2021.

Systematic Generation of Diverse Benchmarks for DNN Verification
Computer Aided Verification (CAV), 2020.
[Paper]

Poster: Differencing Neural Networks
University of Virginia CS Research Symposium, 2019.
[Poster]

Refactoring Neural Networks for Verification
arXiv preprint arXiv:1908.08026, 2019.
[Paper]

Evaluating Recommender System Stability with Influence-Guided Fuzzing
AAAI Conference on Artificial Intelligence (AAAI), 2019.

Toward the Development of Richer Properties for Recommender Systems
International Conference on Software Engineering (ICSE), Companion Proceedings, 2018.

Assessing the Quality and Stability of Recommender Systems
M.S. Thesis, University of Nebraska-Lincoln, 2018.
[Paper]

At the End of Synthesis: Narrowing Program Candidates
International Conference on Software Engineering, New Ideas and Emerging Results Track (ICSE NIER), 2017.
[Paper]

Tools and Artifacts

dnnf: Implements a reduction to enable the application of falsification tools, such as adversarial attacks, to a more general set of behavioral properties of neural networks.

https://github.com/dlshriver/dnnf
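
To give a flavor of the falsification idea behind dnnf (a conceptual sketch, not the tool's actual implementation), the snippet below searches for a counterexample to a local robustness property of a hypothetical toy NumPy network, hill-climbing with sign-gradient steps inside an L-infinity ball; the model and all names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy network standing in for a real ONNX model.
    W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(8, 2)), rng.normal(size=2)

    def network(x):
        return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

    def margin(x, label):
        # Margin of `label` over the best other class; <= 0 means the
        # property "argmax N(x') == argmax N(x0)" is violated at x.
        logits = network(x)
        return logits[label] - np.max(np.delete(logits, label))

    def falsify(x0, epsilon=0.5, steps=200, lr=0.05):
        # Minimize the margin within the L-infinity ball around x0,
        # estimating gradients by central finite differences.
        label = int(np.argmax(network(x0)))
        x = x0.copy()
        for _ in range(steps):
            grad = np.zeros_like(x)
            for i in range(x.size):
                e = np.zeros_like(x)
                e[i] = 1e-4
                grad[i] = (margin(x + e, label) - margin(x - e, label)) / 2e-4
            x = np.clip(x - lr * np.sign(grad), x0 - epsilon, x0 + epsilon)
            if margin(x, label) <= 0:
                return x  # counterexample: the property does not hold
        return None  # no violation found (not a proof of correctness)

    print(falsify(np.array([0.3, -0.7])))

A falsifier like this can only demonstrate violations; when it fails to find one, a verifier is still needed to establish that the property actually holds.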

dnnv-benchmarks: A large collection of DNN verification benchmarks, specified in DNNP and ONNX for use with DNNV, DNNF, and their supported verifiers and falsifiers.

https://github.com/dlshriver/dnnv-benchmarks

dnnv: Introduces standard network and property specification formats and implements network simplifications and property reductions, facilitating verifier execution, comparison, and artifact reuse.

https://github.com/dlshriver/dnnv
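
As an illustration of the specification side, here is roughly what a local robustness property looks like in the DNNP style used by DNNV; the exact API may differ between versions, so treat this as a sketch adapted from the style of the project's documentation rather than a definitive example.

    # robustness.dnnp -- illustrative local robustness property in DNNP.
    from dnnv.properties import *

    N = Network("N")
    x = Image(Parameter("image", type=str))
    epsilon = Parameter("epsilon", type=float, default=0.01)

    # Every input within an L-infinity ball of radius epsilon around x
    # must receive the same classification as x.
    Forall(
        x_,
        Implies(
            ((x - epsilon) < x_ < (x + epsilon)) & (0 <= x_ <= 1),
            argmax(N(x_)) == argmax(N(x)),
        ),
    )

Checking such a property then amounts to running DNNV with the property file, a network in ONNX format, and a chosen verifier backend.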

Awards and Honors

Software Engineering Institute AJ Award for Leading and Advancing (Team award), June 2025
ACM SIGSOFT Outstanding Doctoral Dissertation, March 2023
John A. Stankovic Outstanding Graduate Research, May 2022
University of Nebraska-Lincoln, Highest Distinction, May 2016
University of Nebraska-Lincoln, Computer Engineering Outstanding Undergraduate Senior, May 2016

Service

• ICSE 2026: Program Committee Member
• ICSE 2025: Program Committee Member
• TOSEM 2024: Reviewer
• TOSEM 2023: Reviewer
• ASE 2022: Artifact Evaluation Committee Member
• ISSTA 2021: Artifact Evaluation Committee Member
• ISSTA 2021: Co-reviewer
• ICSE 2020: Co-reviewer