Research Areas

Control for Learning

The Control for Learning in Networks area investigates how principles of control theory can inform and enhance learning in interconnected systems. By unifying ideas from network control, dynamical systems, and machine learning, this research seeks to understand and optimize how information, influence, and decision signals propagate through complex networks. It focuses on developing control-theoretic foundations for graph learning, enabling algorithms that are not only data-driven but also guided by principles such as stability, robustness, and energy efficiency. Applications span multi-agent coordination, infrastructure resilience, neuroscience, and social networks, domains where learning and control coexist within dynamic, interdependent environments. The ultimate aim is to build systems that learn through control and are controlled through learning.

Fundamentals of Resilience in Networks

The presence of faulty or adversarial nodes compromises the security of distributed systems. This is particularly evident in consensus-based protocols, which underpin nearly all distributed optimization tasks: malicious actors can bias the agreement process, forcing convergence to an arbitrary state or, in some cases, preventing convergence altogether.

In this simulation, one adversarial agent (bottom right) exhibits an agent failure. The remaining normal agents, which run the centerpoint-based resilient consensus algorithm, still manage to converge.
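
A minimal Python sketch of the idea is shown below. It is an illustration only, not the lab's centerpoint-based algorithm: it assumes a complete communication graph, one-dimensional agent states, and uses the median as the robust "safe point" (in one dimension the centerpoint reduces to the median). All variable names and parameter values are illustrative.

import numpy as np

# Toy resilient consensus: one adversarial agent broadcasts a fixed
# malicious value while the normal agents update toward the median of
# the values they receive, instead of the plain average that a single
# outlier could drag arbitrarily far.
rng = np.random.default_rng(0)
n_agents = 6
adversary = n_agents - 1          # index of the faulty/adversarial agent
states = rng.uniform(0.0, 10.0, n_agents)
malicious_value = 100.0           # value the adversary keeps broadcasting
step_size = 0.5

for _ in range(50):
    broadcast = states.copy()
    broadcast[adversary] = malicious_value   # adversary ignores the protocol
    new_states = states.copy()
    for i in range(n_agents):
        if i == adversary:
            continue
        # Robust "safe point": the median of all received values.
        safe_point = np.median(broadcast)
        new_states[i] = states[i] + step_size * (safe_point - states[i])
    states = new_states

print("Normal agents' final states:", np.round(states[:adversary], 3))

Even though the adversary keeps broadcasting a value far outside the normal agents' range, the median-based target always lies within the normal agents' convex hull, so the normal agents still converge; a plain averaging update would instead be pulled toward the malicious value.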

Distributed Multi-agent Control

Robustness Verification of Neural Networks

Ensuring the reliability of deep neural networks is paramount in safety-critical domains such as autonomous driving and medical diagnostics, where even minor perturbations in inputs or model weights—arising from hardware vulnerabilities or environmental fluctuations—can have severe consequences. While significant research has focused on verifying robustness against input perturbations, formal analysis of robustness with respect to weight perturbations remains relatively underexplored.

To address this gap, our lab presents ModelStar, a novel framework for the formal verification of deep neural networks under weight perturbations. ModelStar leverages reachability analysis and linear set propagation to efficiently characterize the effect of an infinite family of weight variations on network outputs. Our empirical results demonstrate that ModelStar outperforms existing approaches, verifying robustness on up to 60% more samples in image classification benchmarks. Furthermore, ModelStar provides tighter robustness bounds and supports formal verification of any linear layer against weight perturbations, representing a significant advancement towards the dependable deployment of DNNs in safety-critical applications.

Model perturbations in neural networks may lead to fatal accidents in safety-critical systems. For example, an autonomous vehicle may accelerate instead of decelerating when it encounters a construction sign [1].
[1] Zubair, Muhammad Usama, Taylor T. Johnson, Kanad Basu, and Waseem Abbas. “Verification of Neural Network Robustness Against Weight Perturbations Using Star Sets.” In 2025 IEEE Conference on Artificial Intelligence (CAI), pp. 637-642. IEEE, 2025.
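
The Python sketch below conveys the flavor of the verification question using simple interval arithmetic, not ModelStar's star-set reachability: given a per-weight perturbation budget for a single linear layer and a fixed input, it computes sound lower and upper bounds on every output coordinate over the entire infinite family of perturbed weight matrices. All values and names are illustrative.

import numpy as np

# Bound the output of one linear layer y = (W + dW) x + b when every
# weight entry may be perturbed by at most eps, i.e. |dW_ij| <= eps.
# For a fixed input x, the worst-case deviation of each output coordinate
# is eps * ||x||_1, which yields sound (if loose) output bounds.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))      # nominal weights (illustrative values)
b = rng.normal(size=3)
x = rng.normal(size=4)           # the input being verified
eps = 0.01                       # per-weight perturbation budget

nominal = W @ x + b
slack = eps * np.abs(x).sum()    # max |sum_j dW_ij * x_j| over |dW_ij| <= eps
lower, upper = nominal - slack, nominal + slack

print("nominal output:        ", np.round(nominal, 4))
print("guaranteed lower bound:", np.round(lower, 4))
print("guaranteed upper bound:", np.round(upper, 4))

A robustness certificate would then require, for example, that the predicted class's lower bound exceed every other class's upper bound for all admissible weight perturbations; tighter set representations such as star sets yield far less conservative bounds than this interval sketch.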

Resource Allocation in Multi-Agent Systems