Multi-Sensor Fusion in Automated Driving: A Survey

Work by Zhangjing Wang, Yu Wu, and Qingqing Niu, IEEE Access Volume: 8, 2020

Keywords: Multi-Sensor Fusion, Autonomous Driving, Tracking, Data Association

Summary

The authors present a survey of multi-source and heterogeneous information fusion for autonomous vehicles. They discuss three main topics:

  1. Sensors and communications: identifying the most popular sensors and communication schemes for autonomous driving and their advantages and disadvantages
  2. Fusion: dividing fusion approaches into four main strategies: discernible units, feature complementarity, target attributes of different sensors, and decision making of different sensors.
  3. Data association: describing methods for data association for single or multiple sensors detecting sparse or multiple targets.

Read the paper on IEEE here.

Optimal Inequalities in Probability Theory: A Convex Optimization Approach

Work by Dimitris Bertsimas & Ioana Popescu, SIAM Journal on Optimization, Volume: 15, Number: 3, Pages: 780-804, 2005

Keywords: Bounds in Probability Theory, Higher-Order Moment-Based SDP

Summary

The authors investigate the problem of obtaining the best possible bounds on the probability that a random variable belongs to a given set, given information on some of its moments (up to kth order), known as the (n, k, Ω)-bound problem. They formulate it as an optimization problem and apply modern optimization theory, in particular convex and semidefinite programming. They provide concrete answers to three key questions:

  1. Are such bounds “best possible”; that is, do there exist distributions that match them?
  2. Can such bounds be generalized in multivariate settings, and in what circumstances can they be explicitly and/or algorithmically computed?
  3. Is there a general theory based on optimization methods to address in a unified manner moment-inequality problems in probability theory?

Key Observations

  • In the univariate case, the optimal bound on P(X ∈ S), when the first k moments of X are given, is obtained by solving an SDP in k + 1 dimensions, where S denotes the set of interest (a small example is sketched after this list).
  • In the multivariate case, if the sets S and Ω are given by polynomial inequalities, an improving sequence of bounds is obtained by solving SDPs of polynomial size in n, for fixed k.
  • It is NP-hard to find tight bounds for k ≥ 4 with Ω = R^n and for k ≥ 2 with Ω = R^n_+, given rational problem data.
  • For k = 1 and Ω = R^n_+, tight upper bounds can be found by solving n convex optimization problems when the set S is convex, and a polynomial-time algorithm is given when S and Ω are unions of convex sets over which linear functions can be optimized efficiently.
  • For k = 2 and Ω = R^n, the authors present an algorithm for finding tight bounds when S is a union of convex sets over which convex quadratic functions can be optimized efficiently.
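
To make the univariate case concrete, here is a minimal sketch (ours, not the paper's) of the k = 2 instance: bounding P(X ≥ a) given the first two moments by solving the dual SDP over quadratic polynomials, with nonnegativity on [a, ∞) certified via a Markov-Lukács decomposition. The moment values, the cvxpy formulation, and the choice of S = [a, ∞) are illustrative assumptions; the result should recover the classical one-sided Chebyshev (Cantelli) bound.

```python
import cvxpy as cp

# Illustrative data: zero mean, unit variance; we bound P(X >= a).
m1, m2 = 0.0, 1.0
a = 2.0

# Dual polynomial p(x) = y0 + y1 x + y2 x^2 must satisfy
#   p(x) >= 0 on Omega = R   and   p(x) >= 1 on S = [a, inf).
# Nonnegativity on R: p(x) = [1, x] P [1, x]^T with P positive semidefinite.
P = cp.Variable((2, 2), symmetric=True)
# Nonnegativity of p(x) - 1 on [a, inf) via the certificate
#   p(x) - 1 = s(x) + (x - a) c,  with s(x) = [1, x] M [1, x]^T SOS and c >= 0.
M = cp.Variable((2, 2), symmetric=True)
c = cp.Variable(nonneg=True)

constraints = [
    P >> 0,
    M >> 0,
    P[0, 0] - 1 == M[0, 0] - a * c,  # constant coefficients
    2 * P[0, 1] == 2 * M[0, 1] + c,  # coefficients of x
    P[1, 1] == M[1, 1],              # coefficients of x^2
]

# The tightest bound is min E[p(X)] = y0 + y1 m1 + y2 m2.
bound = P[0, 0] + 2 * P[0, 1] * m1 + P[1, 1] * m2
prob = cp.Problem(cp.Minimize(bound), constraints)
prob.solve()

print(f"SDP bound on P(X >= {a}): {prob.value:.4f}")            # ~0.2000
print(f"Cantelli bound m2/(m2 + a^2): {m2 / (m2 + a**2):.4f}")  # 0.2000
```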

Possible Applications of Ideas Presented in this Paper

  1. Propagating uncertainties in risk-bounded motion planning, with the risk of a trajectory quantified using the probability bounds presented in this paper
  2. Anomaly detection in cyber-physical systems, where higher-order moments of residual data from a state estimator can be used to design anomaly detector thresholds that satisfy a user-prescribed false alarm rate.

Read the paper on SIAM here.

Metrics for Signal Temporal Logic Formulae

Work by Curtis Madsen, Prashant Vaidyanathan, Sadra Sadraddini, Cristian-Ioan Vasile, Nicholas A. DeLateur, Ron Weiss, Douglas Densmore, and Calin Belta, IEEE CDC 2018

Keywords: Signal Temporal Logic, Metric Spaces

Summary

The authors discuss how STL formulae can admit a metric space under mild assumptions. They present two metrics: the Pompeiu-Hausdorff (PH) and the Symmetric Difference (SD) metrics. The PH distance measures how much the language of one formula needs to be enlarged to include the other. The SD distance measures the size of the symmetric difference between the languages of the two formulae, i.e., the region satisfied by exactly one of them. The PH distance can be formulated as a Mixed-Integer Linear Program (MILP), which can be computed fairly quickly (although complexity grows exponentially with the number of predicates and the formula horizon). The SD distance is computed using a recursive algorithm based on the area of satisfaction; its complexity depends on the complexity of the norm operation.

Read the paper on arXiv here.

A Survey of Distributed Optimization

Work by Tao Yang, et al., Annual Reviews in Control 2020

Discussion by Yi Guo, February 18, 2020

Keywords: Control review, distributed optimization, algorithm design

Summary

In distributed optimization of multi-agent systems, agents cooperate to minimize a global function which is a sum of local objective functions. Motivated by applications including power systems, sensor networks, smart buildings, and smart manufacturing, various distributed optimization algorithms have been developed. In these algorithms, each agent performs local computation based on its own information and information received from its neighboring agents through the underlying communication network, so that the optimization problem can be solved in a distributed manner.
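
To fix ideas, below is a minimal sketch (ours, not from the survey) of one canonical algorithm in this family, decentralized gradient descent, where each agent mixes its neighbors' estimates through a doubly stochastic matrix and then takes a local gradient step. The network, weights, and quadratic local objectives are illustrative assumptions.

```python
import numpy as np

# Decentralized gradient descent (DGD) for min_x sum_i f_i(x),
# with quadratic local objectives f_i(x) = (x - b_i)^2 / 2 as a stand-in.
b = np.array([1.0, 2.0, 3.0, 6.0])  # local data; global optimum is mean(b) = 3
grad_local = lambda z: z - b        # stacked gradients of the f_i

# Doubly stochastic mixing matrix for a 4-agent ring network.
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

x = np.zeros(4)  # each agent's local estimate of the global minimizer
alpha = 0.05     # constant stepsize (exact convergence needs a diminishing one)
for _ in range(500):
    # Each agent averages neighbors' values, then takes a local gradient step.
    x = W @ x - alpha * grad_local(x)

print("local estimates:", np.round(x, 3))  # all close to the optimum 3
```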

This survey paper aims to offer a detailed overview of existing distributed optimization algorithms and their applications in power systems. More specifically, the authors first review discrete-time and continuous-time distributed optimization algorithms for undirected graphs. The authors then discuss how to extend these algorithms in various directions to handle more realistic scenarios. Finally, the authors focus on the application of distributed optimization in the optimal coordination of distributed energy resources.

Read the paper on Elsevier here.

Shrinking Horizon Model Predictive Control With Signal Temporal Logic Constraints Under Stochastic Disturbances

Work by Samira S. Farahani, Rupak Majumdar, Vinayak S. Prabhu, and Sadegh Soudjani, IEEE Transactions on Automatic Control, August 2019

Keywords: Signal temporal logic, model predictive control, stochastic disturbances

Summary

The authors discuss a shrinking horizon model predictive control (SH-MPC) problem to generate control inputs for a discrete-time linear system under additive stochastic disturbance (either Gaussian or with bounded support). The system specifications are expressed as a signal temporal logic (STL) formula and encoded as a chance constraint in the SH-MPC problem, which is optimized for minimum input effort and maximum robustness.

The authors approximate the system robustness in the objective function using min-max and max-min canonical forms of min-max-plus-scaling (MMPS) functions. They under-approximate the chance constraint by showing that any chance constraint on a formula can be transformed into chance constraints on atomic propositions, which they then transform into linear constraints.
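
To illustrate the last step with the standard Gaussian building block (our notation, not necessarily the authors' exact formulation): for a linear atomic predicate a^T x − b ≥ 0 perturbed by Gaussian noise w ~ N(0, σ²), the chance constraint

$$\Pr\left( a^\top x - b + w \ge 0 \right) \ge 1 - \epsilon$$

is equivalent to the deterministic linear constraint

$$a^\top x - b \ge \sigma \, \Phi^{-1}(1 - \epsilon),$$

where Φ is the standard normal cumulative distribution function.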

Read the paper on arXiv here or on IEEE Xplore here.

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Lojasiewicz Condition

Work by Hamed Karimi, Julie Nutini, and Mark Schmidt, ECML PKDD 2016

Keywords: Nonconvex optimization, Polyak-Lojasiewicz inequality, gradient domination, global convergence

Summary

The authors re-explore the Polyak-Lojasiewicz (PL) inequality, first analyzed by Polyak in 1963, deriving convergence results under various descent methods for modern machine learning tasks and establishing equivalence and sufficiency/necessity relationships with several other nonconvex function classes. A highlight of the paper is a beautifully simple proof of linear convergence of gradient descent on PL functions, which we walk through in our discussion.

Read the paper on arXiv here.

Convergence of gradient descent under the Polyak-Lojasiewicz inequality

This proof is extremely short and simple, requiring only a few assumptions and a basic mathematical background. It is strange that it is not as universally known as the corresponding theory for convex optimization.

Relating to the theory of Lyapunov stability for nonlinear systems, the PL inequality essentially means that the function value itself is a valid Lyapunov function for exponential stability of the global minimum under the nonlinear gradient descent dynamics.
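
To spell this out: along the gradient flow $\dot{x} = -\nabla f(x)$, take $V(x) = f(x) - f^*$ as a candidate Lyapunov function. Then

$$\dot{V} = \nabla f(x)^\top \dot{x} = -\|\nabla f(x)\|^2 \le -2 \mu \left( f(x) - f^* \right) = -2 \mu V,$$

where the inequality is exactly the PL inequality defined below, so $V(x(t)) \le e^{-2 \mu t} V(x(0))$ and the function value converges exponentially to the global minimum.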

Lieven Vandenberghe’s lecture notes for the case of strongly convex functions are an excellent supplement.

The setting we consider is that of minimizing a function f(x).

Lipschitz continuity

A function is Lipschitz continuous with constant L if

$$|f(x) - f(y)| \le L \|x - y\| \quad \text{for all } x, y$$

Likewise the gradient (first derivative) of a function is Lipschitz continuous with constant L if

$$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\| \quad \text{for all } x, y$$

For the next steps we follow slide 1.13 of Lieven Vandenberghe’s lecture notes.

Recall the Cauchy-Schwarz inequality

$$u^\top v \le \|u\| \, \|v\|$$

Applying the Cauchy-Schwarz inequality to the Lipschitz gradient condition gives

$$(\nabla f(x) - \nabla f(y))^\top (x - y) \le \|\nabla f(x) - \nabla f(y)\| \, \|x - y\| \le L \|x - y\|^2$$

Define the function g(t) as

$$g(t) = f(x + t(y - x))$$

If the domain of f is convex, then g(t) is well-defined for all t in [0, 1].

Using the definition of g(t), the chain rule of multivariate calculus, and the previous inequality we have

$$g'(t) = \nabla f(x + t(y - x))^\top (y - x) \le \nabla f(x)^\top (y - x) + L t \|y - x\|^2$$

Using the definition of g(t) we can rewrite f(y) in terms of an integral by using the fundamental theorem of calculus

$$f(y) = g(1) = g(0) + \int_0^1 g'(t) \, dt = f(x) + \int_0^1 g'(t) \, dt$$

Integrating the bound on g'(t) over t from 0 to 1 we obtain

$$\int_0^1 g'(t) \, dt \le \nabla f(x)^\top (y - x) + \frac{L}{2} \|y - x\|^2$$

Substituting back into the expression for f(y) we obtain the quadratic upper bound

$$f(y) \le f(x) + \nabla f(x)^\top (y - x) + \frac{L}{2} \|y - x\|^2$$

The Polyak-Lojasiewicz inequality

A function is said to satisfy the Polyak-Lojasiewicz inequality if the following condition holds:

$$\frac{1}{2} \|\nabla f(x)\|^2 \ge \mu \left( f(x) - f^* \right) \quad \text{for all } x$$

where f* is the minimum function value and μ > 0 is the PL constant.

This means that the norm of the gradient grows at least as fast as a quadratic as the function value moves away from the optimal function value.

Additionally, this implies that every stationary point of f(x) is a global minimum.
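
A quick numerical illustration: Karimi et al. use $f(x) = x^2 + 3 \sin^2(x)$ as an example of a nonconvex function satisfying the PL inequality, with PL constant $\mu = 1/32$. The grid check below (ours, purely illustrative) evaluates the PL ratio and confirms it stays above that value.

```python
import numpy as np

# Example from Karimi et al.: nonconvex but PL with mu = 1/32.
f = lambda x: x**2 + 3 * np.sin(x)**2
grad = lambda x: 2 * x + 3 * np.sin(2 * x)  # d/dx [3 sin^2 x] = 3 sin(2x)
f_star = 0.0                                # global minimum, attained at x = 0

x = np.linspace(-10, 10, 100001)
x = x[f(x) - f_star > 1e-12]                # avoid dividing by zero at the minimum
ratio = 0.5 * grad(x)**2 / (f(x) - f_star)  # PL requires this ratio >= mu
print(f"smallest PL ratio on the grid: {ratio.min():.4f}")  # well above 1/32
```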

Gradient descent

The gradient descent update simply takes a step in the direction of the negative gradient:

$$x_{k+1} = x_k - \alpha \nabla f(x_k)$$

where α > 0 is the stepsize.

We are now ready to prove convergence of gradient descent under the PL inequality, i.e., Theorem 1 of Karimi et al.

Rearranging the gradient descent update gives the difference

$$x_{k+1} - x_k = -\alpha \nabla f(x_k)$$

Using the gradient descent update rule in the quadratic upper bound condition (from Lipschitz continuity of the gradient) we obtain

$$f(x_{k+1}) \le f(x_k) + \nabla f(x_k)^\top (x_{k+1} - x_k) + \frac{L}{2} \|x_{k+1} - x_k\|^2 = f(x_k) - \alpha \left( 1 - \frac{L \alpha}{2} \right) \|\nabla f(x_k)\|^2$$

If the stepsize is chosen so that the coefficient on the right-hand side is negative, then applying the Polyak-Lojasiewicz inequality gives

$$f(x_{k+1}) \le f(x_k) - 2 \mu \alpha \left( 1 - \frac{L \alpha}{2} \right) \left( f(x_k) - f^* \right)$$

The range of permissible stepsizes is (0, 2/L), with the best rate achieved by the stepsize α = 1/L. Under this choice, we obtain

$$f(x_{k+1}) - f(x_k) \le -\frac{\mu}{L} \left( f(x_k) - f^* \right)$$

Adding f(x_k) – f* to both sides gives

$$f(x_{k+1}) - f^* \le \left( 1 - \frac{\mu}{L} \right) \left( f(x_k) - f^* \right)$$

Dividing by f(x_k) – f* gives the linear (geometric) convergence rate

$$\frac{f(x_{k+1}) - f^*}{f(x_k) - f^*} \le 1 - \frac{\mu}{L}$$

This shows that the difference between the current function value and the minimum decreases at least as fast as a geometric series with a rate determined by the ratio of the PL and Lipschitz constants.
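
As a sanity check (ours, not from the paper), the geometric rate can be observed numerically by running gradient descent on the same example function, taking L = 8 since f''(x) = 2 + 6 cos(2x) ≤ 8:

```python
import numpy as np

# Gradient descent with stepsize 1/L on f(x) = x^2 + 3 sin^2(x).
f = lambda x: x**2 + 3 * np.sin(x)**2
grad = lambda x: 2 * x + 3 * np.sin(2 * x)
L, mu, f_star = 8.0, 1 / 32, 0.0

x = 3.0  # arbitrary starting point
gap_prev = f(x) - f_star
for k in range(20):
    x = x - (1 / L) * grad(x)
    gap = f(x) - f_star
    # The theorem guarantees gap <= (1 - mu/L) * gap_prev at every step.
    print(f"k={k:2d}  gap={gap:.3e}  ratio={gap / gap_prev:.4f}  bound={1 - mu / L:.4f}")
    gap_prev = gap
```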

Why would control systems researchers care about this?

The Polyak-Lojasiewicz inequality is key to the analysis of convergence of policy gradient methods for LQR and LQR with multiplicative noise.

Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem

Work by Hesameddin Mohammadi, Armin Zare, Mahdi Soltanolkotabi, and Mihailo R. Jovanovic, IEEE TAC 2020 (under review) / CDC 2019

Keywords: Data-driven control, gradient descent, gradient-flow dynamics, linear quadratic regulator, model-free control, nonconvex optimization, Polyak-Lojasiewicz inequality, random search method, reinforcement learning, sample complexity

Summary

This work extends the results of Fazel et al. on convergence of policy gradient methods for discrete-time systems to the case of continuous-time linear dynamics, while also significantly reducing the required number of cost function evaluations and simulation time. These improvements were made possible by novel proof techniques, which included 1) relating the gradient-flow dynamics associated with the nonconvex formulation to that of a convex reparameterization, and 2) relaxing strict bounds on the gradient estimation error to probabilistic guarantees of high correlation between the gradient and its estimate. This echoes the notion that indeed “policy gradient is nothing more than random search”, albeit a random search with compelling convergence properties.

Dr. Zare recently joined UT Dallas as a faculty member and we look forward to working with him!

Read the paper on arXiv here.

Sparse LQR Synthesis via Information Regularization

Work by Jeb Stefan and Takashi Tanaka, CDC 2019

Discussion by Ben Gravell, January 31, 2020

Keywords: Linear quadratic regulator, information theory, regularization, matrix inequality, iterative

Summary

Researchers from UT Austin formulate a problem of jointly optimizing the quadratic cost of a linear system and an information-theoretic communication cost which accommodates partial channel capacity. It is demonstrated empirically that this optimization can be solved with an iterative semidefinite program (SDP), and that the communication cost acts as a regularizer on the control gains, which in some cases promotes sparsity. This is similar to our own work on learning sparse control; we would love to see how data-driven approaches could augment info-theoretic notions in the case of unknown dynamics.

Paper link is forthcoming once the CDC proceedings have been published. Author website is here.

MakeSense: Automated Sensor Design for Proprioceptive Soft Robots

Work by Javier Tapia, Espen Knoop, Mojmir Mutny, Miguel A. Otaduy, and Moritz Bacher, 2020

Keywords: Convex optimization, feasible design

Summary

Researchers at Disney have created a method for optimized sensor selection for soft robotics. A large number of candidate fabricable sensors are virtually generated and then culled down to a small set that gives good pose estimation in simulation. They then verify the method experimentally by fabricating physical soft robots with highly elastic strain sensors embedded in a flexible polymer. Although our own research is geared toward theoretical analysis of sensor and actuator selection in the case of well-defined networked linear systems, we found this study a fascinating tangent.

Watch the video and read the paper here.