Publications
Papers listed by type and year of publication, or year of submission if unpublished.
Preprints
@article{mip_fom_verify,
author = {Ranjan, V. and Gualandi, S. and Lodi, A. and Stellato, B.},
journal = {arXiv e-prints},
year = {2024},
month = {12},
title = {Exact {Verification} of {First}-{Order} {Methods} via {Mixed}-{Integer} {Linear} {Programming}},
howpublished = {https://arxiv.org/abs/2412.11330},
}
We introduce a machine-learning framework to learn the hyperparameter sequence of first-order methods (e.g., the step sizes in gradient descent) to quickly solve parametric convex optimization problems. Our computational architecture amounts to running fixed-point iterations where the hyperparameters are the same across all parametric instances and consists of two phases. In the first step-varying phase the hyperparameters vary across iterations, while in the second steady-state phase the hyperparameters are constant across iterations. Our learned optimizer is flexible in that it can be evaluated on any number of iterations and is guaranteed to converge to an optimal solution. To train, we minimize the mean square error to a ground truth solution. In the case of gradient descent, the one-step optimal step size is the solution to a least squares problem, and in the case of unconstrained quadratic minimization, we can compute the two- and three-step optimal solutions in closed form. In other cases, we backpropagate through the algorithm steps to minimize the training objective after a given number of steps. We show how to learn hyperparameters for several popular algorithms: gradient descent, proximal gradient descent, and two ADMM-based solvers: OSQP and SCS. We use a sample convergence bound to obtain generalization guarantees for the performance of our learned algorithm for unseen data, providing both lower and upper bounds. We showcase the effectiveness of our method with many examples, including ones from control, signal processing, and machine learning. Remarkably, our approach is highly data-efficient in that we only use 10 problem instances to train the hyperparameters in all of our examples.
@article{learn_algo_steps,
author = {Sambharya, R. and Stellato, B.},
journal = {arXiv e-prints},
year = {2024},
month = {11},
title = {Learning {Algorithm} {Hyperparameters} for {Fast} {Parametric} {Convex} {Optimization}},
howpublished = {https://arxiv.org/abs/2411.15717},
}
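As a concrete illustration of the one-step case mentioned in the abstract above, the sketch below computes the single step size that minimizes the mean squared error to the ground-truth solutions after one gradient step on a family of parametric quadratics. The data, dimensions, and variable names are synthetic placeholders, not the paper's code.

```python
import numpy as np

# Family of quadratics f_i(x) = 0.5 x'P x + q_i'x with minimizers x*_i = -P^{-1} q_i.
rng = np.random.default_rng(0)
n, num_train = 5, 10
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)                      # shared positive definite Hessian
Q = rng.standard_normal((num_train, n))      # rows are the varying parameters q_i
X_star = -np.linalg.solve(P, Q.T).T          # ground-truth solutions
x0 = np.zeros(n)                             # common initial iterate

# One gradient step x1_i = x0 - alpha * g_i with g_i = P x0 + q_i. Minimizing
# sum_i ||x1_i - x*_i||^2 over alpha is a one-dimensional least-squares problem.
G = (P @ x0) + Q                             # gradients at x0, one row per instance
R = x0 - X_star                              # residuals x0 - x*_i
alpha = np.sum(G * R) / np.sum(G * G)        # closed-form least-squares step size

X1 = x0 - alpha * G
print("one-step optimal step size:", alpha)
print("mean squared error after one step:", np.mean(np.sum((X1 - X_star) ** 2, axis=1)))
```

Longer horizons no longer admit such a simple closed form, which is where backpropagating through the algorithm steps, as described above, comes in.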
This paper introduces a novel data-driven hierarchical control scheme for managing a fleet of nonlinear, capacity-constrained autonomous agents in an iterative environment. We propose a control framework consisting of a high-level dynamic task assignment and routing layer and a low-level motion planning and tracking layer. Each layer of the control hierarchy uses a data-driven Model Predictive Control (MPC) policy, maintaining bounded computational complexity at each calculation of a new task assignment or actuation input. We utilize collected data to iteratively refine estimates of agent capacity usage, and update MPC policy parameters accordingly. Our approach leverages tools from iterative learning control to integrate learning at both levels of the hierarchy, and coordinates learning between levels in order to maintain closed-loop feasibility and performance improvement of the connected architecture.
@article{learn_hier_control,
author = {Vallon, C. and Pinto, A. and Stellato, B. and Borrelli, F.},
journal = {arXiv e-prints},
year = {2024},
month = {4},
title = {Learning {Hierarchical} {Control} {For} {Multi}-{Agent} {Capacity}-{Constrained} {Systems}},
howpublished = {https://arxiv.org/abs/2403.14545},
}
We introduce a data-driven approach to analyze the performance of continuous optimization algorithms using generalization guarantees from statistical learning theory. We study classical and learned optimizers to solve families of parametric optimization problems. We build generalization guarantees for classical optimizers, using a sample convergence bound, and for learned optimizers, using the Probably Approximately Correct (PAC)-Bayes framework. To train learned optimizers, we use a gradient-based algorithm to directly minimize the PAC-Bayes upper bound. Numerical experiments in signal processing, control, and meta-learning showcase the ability of our framework to provide strong generalization guarantees for both classical and learned optimizers given a fixed budget of iterations. For classical optimizers, our bounds are much tighter than those that worst-case guarantees provide. For learned optimizers, our bounds outperform the empirical outcomes observed in their non-learned counterparts.
@article{Sambharya2024Data,
author = {Sambharya, R. and Stellato, B.},
journal = {arXiv e-prints},
year = {2024},
month = {4},
title = {Data-{Driven} {Performance} {Guarantees} for {Classical} and {Learned} {Optimizers}},
howpublished = {https://arxiv.org/abs/2404.13831},
}
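For context, the sample convergence bounds used for classical optimizers in this line of work are of the standard Hoeffding type; a generic statement (not the paper's exact bound) reads: if the performance metric $r(\theta) \in [0, 1]$ is evaluated on $N$ i.i.d. problem instances $\theta_1, \dots, \theta_N$, then with probability at least $1 - \delta$,
$$ \mathbb{E}_{\theta}\left[ r(\theta) \right] \;\le\; \frac{1}{N} \sum_{i=1}^{N} r(\theta_i) \;+\; \sqrt{\frac{\log(1/\delta)}{2N}}. $$
PAC-Bayes bounds of the kind used for learned optimizers additionally include a divergence term between the learned distribution over optimizer parameters and a prior.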
We propose a data-driven technique to automatically learn the uncertainty sets in robust optimization. Our method reshapes the uncertainty sets by minimizing the expected performance across a family of problems subject to guaranteeing constraint satisfaction. Our approach is very flexible and can learn a wide variety of uncertainty sets while preserving tractability. We solve the constrained learning problem using a stochastic augmented Lagrangian method that relies on differentiating the solutions of the robust optimization problems with respect to the parameters of the uncertainty set. Due to the nonsmooth and nonconvex nature of the augmented Lagrangian function, we apply the nonsmooth conservative implicit function theorem to establish convergence to a critical point, which is a feasible solution of the constrained problem under mild assumptions. Using empirical process theory, we show finite-sample probabilistic guarantees of constraint satisfaction for the resulting solutions. Numerical experiments show that our method outperforms traditional approaches in robust and distributionally robust optimization in terms of out-of-sample performance and constraint satisfaction guarantees.
@article{lropt,
author = {Wang, I. and Becker, C. and Van Parys, B. and Stellato, B.},
journal = {arXiv e-prints},
year = {2024},
month = {4},
title = {Learning {Decision}-{Focused} {Uncertainty} {Sets} in {Robust} {Optimization}},
howpublished = {https://arxiv.org/abs/2305.19225},
}
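As background on why reshaped sets remain tractable (a standard robust-optimization reformulation, not the paper's general construction): if the uncertainty set is an ellipsoid parametrized by a shape matrix $A$ and center $b$, $\mathcal{U} = \{\, b + A z : \|z\|_2 \le 1 \,\}$, then a robust linear constraint reformulates exactly as
$$ \sup_{u \in \mathcal{U}} \; u^\top x \le c \quad \Longleftrightarrow \quad b^\top x + \| A^\top x \|_2 \le c, $$
so learning the reshaping parameters $(A, b)$ amounts to learning the coefficients of a constraint that remains convex in $x$.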
We introduce a numerical framework to verify the finite-step convergence of first-order methods for parametric convex quadratic optimization. We formulate the verification problem as a mathematical optimization problem where we maximize a performance metric (e.g., fixed-point residual at the last iteration) subject to constraints representing proximal algorithm steps (e.g., linear system solutions, projections, or gradient steps). Our framework is highly modular because we encode a wide range of proximal algorithms as variations of two primitive steps: affine steps and element-wise maximum steps. Compared to standard convergence analysis and performance estimation techniques, we can explicitly quantify the effects of warm-starting by directly representing the sets where the initial iterates and parameters live. We show that the verification problem is NP-hard, and we construct strong semidefinite programming relaxations using various constraint tightening techniques. Numerical examples in nonnegative least squares, network utility maximization, Lasso, and optimal control show a significant reduction in pessimism of our framework compared to standard worst-case convergence analysis techniques.
@article{perfverifyqp,
author = {Ranjan, V. and Stellato, B.},
journal = {arXiv e-prints},
year = {2024},
month = {3},
title = {Verification of {First}-{Order} {Methods} for {Parametric} {Quadratic} {Optimization}},
howpublished = {https://arxiv.org/abs/2403.03331},
}
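To make the two primitives concrete (an illustrative instance consistent with the abstract, not text from the paper): projected gradient descent for nonnegative least squares, $\min_{x \ge 0} \tfrac{1}{2}\|Ax - b\|_2^2$, is exactly an affine step followed by an element-wise maximum,
$$ x^{k+1} = \max\!\left( 0, \; (I - t A^\top A)\, x^k + t A^\top b \right), $$
with step size $t > 0$; composing such steps for a fixed number of iterations, together with the sets containing the initial iterate and the parameters, yields the constraints of the verification problem.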
We consider the problem of solving a family of parametric mixed-integer linear optimization problems where some entries in the input data change. We introduce the concept of cutting-plane layer (CPL), i.e., a differentiable cutting-plane generator mapping the problem data and previous iterates to cutting planes. We propose a CPL implementation to generate split cuts, and by combining several CPLs, we devise a differentiable cutting-plane algorithm that exploits the repeated nature of parametric instances. In an offline phase, we train our algorithm by updating the internal parameters controlling the CPLs, thus altering cut generation. Once trained, our algorithm computes, with predictable execution times and a fixed number of cuts, solutions with low integrality gaps. Preliminary computational tests show that our algorithm generalizes on unseen instances and captures underlying parametric structures.
@article{cplayers,
author = {Dragotto, G. and Clarke, S. and Fernandez Fisac, J. and Stellato, B.},
journal = {arXiv e-prints},
year = {2023},
month = {11},
title = {Differentiable {Cutting}-plane {Layers} for {Mixed}-integer {Linear} {Optimization}},
howpublished = {https://arxiv.org/abs/2311.03350},
}
We introduce the GEneralized Newton Inexact Operator Splitting solver (GeNIOS) for large-scale convex optimization. GeNIOS speeds up ADMM by approximately solving approximate subproblems: it uses a second-order approximation to the most challenging ADMM subproblem and solves it inexactly with a fast randomized solver. Despite these approximations, GeNIOS retains the convergence rate of classic ADMM and can detect primal and dual infeasibility from the algorithm iterates. At each iteration, the algorithm solves a positive-definite linear system that arises from a second-order approximation of the first subproblem and computes an approximate proximal operator. GeNIOS solves the linear system using an indirect solver with a randomized preconditioner, making it particularly useful for large-scale problems with dense data. Our high-performance open-source implementation in Julia allows users to specify convex optimization problems directly (with or without conic reformulation) and allows extensive customization. We illustrate GeNIOS's performance on a variety of problem types. Notably, GeNIOS is more than ten times faster than existing solvers on large-scale, dense problems.
@article{geniosjl,
author = {Diamandis, T. and Frangella, Z. and Zhao, S. and Stellato, B. and Udell, M.},
journal = {arXiv e-prints},
year = {2023},
month = {10},
title = {GeNIOS: an (almost) second-order operator-splitting solver for large-scale convex optimization},
howpublished = {https://arxiv.org/abs/2310.08333},
}
This paper presents GeNI-ADMM, a framework for large-scale composite convex optimization that facilitates theoretical analysis of both existing and new approximate ADMM schemes. GeNI-ADMM encompasses any ADMM algorithm that solves a first- or second-order approximation to the ADMM subproblem inexactly. GeNI-ADMM exhibits the usual O(1/t)-convergence rate under standard hypotheses and converges linearly under additional hypotheses such as strong convexity. Further, the GeNI-ADMM framework provides explicit convergence rates for ADMM variants accelerated with randomized linear algebra, such as NysADMM and sketch-and-solve ADMM, resolving an important open question on the convergence of these methods. This analysis quantifies the benefit of improved approximations and can inspire the design of new ADMM variants with faster convergence.
@article{geniadmm,
author = {Frangella, Z. and Zhao, S. and Diamandis, T. and Stellato, B. and Udell, M.},
journal = {arXiv e-prints},
year = {2023},
month = {2},
title = {On the (linear) convergence of {Generalized} {Newton} {Inexact} {ADMM}},
howpublished = {https://arxiv.org/abs/2302.03863},
}
Journal articles
Despite their modeling power for problems under uncertainty, robust optimization (RO) and adaptive robust optimization (ARO) can produce overly conservative solutions, degrading the objective value compared to the nominal case. One of the main reasons behind this conservatism is that, in many practical applications, uncertain constraints are designed constraint-wise, without taking into account couplings over multiple constraints. In this paper, we define a coupled uncertainty set as the intersection between a constraint-wise uncertainty set and a coupling set. We study the benefit of coupling in alleviating conservatism in RO and ARO. We provide theoretically tight and computable upper and lower bounds on the objective value improvement of RO and ARO problems under coupled uncertainty over constraint-wise uncertainty. In addition, we relate the power of adaptability over static solutions to the coupling of the uncertainty set. Computational results demonstrate the benefit of coupling in applications.
@article{robustcoupled,
author = {Bertsimas, D. and Na, L. and Stellato, B.},
journal = {INFORMS Journal on Optimization (to appear)},
year = {2024},
month = {12},
title = {The {Benefit} of {Uncertainty} {Coupling} in {Robust} and {Adaptive} {Robust} {Optimization}},
howpublished = {https://doi.org/10.1287/ijoo.2023.0007},
}
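In symbols, with constraint-wise sets $\mathcal{U}_1, \dots, \mathcal{U}_m$ (one per uncertain constraint) and a coupling set $\mathcal{C}$, the coupled uncertainty set studied in the paper above is
$$ \mathcal{U} = \left( \mathcal{U}_1 \times \cdots \times \mathcal{U}_m \right) \cap \mathcal{C}, $$
and the bounds quantify how much the optimal objective can improve when the uncertainty ranges over $\mathcal{U}$ rather than over the full product $\mathcal{U}_1 \times \cdots \times \mathcal{U}_m$.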
Robust optimization is a tractable and expressive technique for decision-making under uncertainty, but it can lead to overly conservative decisions when pessimistic assumptions are made on the uncertain parameters. Wasserstein distributionally robust optimization can reduce conservatism by being data-driven, but it often leads to very large problems with prohibitive solution times. We introduce mean robust optimization, a general framework that combines the best of both worlds by providing a trade-off between computational effort and conservatism. We propose uncertainty sets constructed based on clustered data rather than on observed data points directly, thereby significantly reducing problem size. By varying the number of clusters, our method bridges between robust and Wasserstein distributionally robust optimization. We show finite-sample performance guarantees and explicitly control the potential additional pessimism introduced by any clustering procedure. In addition, we prove conditions under which, when the uncertainty enters linearly in the constraints, clustering does not affect the optimal solution. We illustrate the efficiency and performance preservation of our method on several numerical examples, obtaining multiple orders of magnitude speedups in solution time with little-to-no effect on the solution quality.
@article{wang2022,
author = {Wang, I. and Becker, C. and Van Parys, B. and Stellato, B.},
journal = {Mathematical Programming (to appear)},
year = {2024},
month = {11},
title = {Mean {Robust} {Optimization}},
howpublished = {https://doi.org/10.1007/s10107-024-02170-4},
}
In this paper we present the nonconvex exterior-point optimization solver (NExOS), a first-order algorithm tailored to constrained nonconvex optimization problems. We consider the problem of minimizing a convex function over nonconvex constraints, where the projection onto the constraint set is single-valued around local minima. A wide range of nonconvex optimization problems have this structure, including (but not limited to) sparse and low-rank optimization problems. By exploiting the underlying geometry of the constraint set, NExOS finds a locally optimal point by solving a sequence of penalized problems with strictly decreasing penalty parameters. NExOS solves each penalized problem by applying a first-order algorithm, which converges linearly to a local minimum of the corresponding penalized formulation under regularity conditions. Furthermore, the local minima of the penalized problems converge to a local minimum of the original problem as the penalty parameter goes to zero. We then implement and test NExOS on many instances from a wide variety of sparse and low-rank optimization problems. We demonstrate that our algorithm outperforms specialized methods on several examples of well-known nonconvex optimization problems involving sparse and low-rank optimization.
@article{dasgupta2021,
author = {Das Gupta, S. and Stellato, B. and Van Parys, B.},
journal = {Journal of Optimization Theory and Applications},
year = {2024},
month = {8},
pages = {795 -- 833},
title = {Exterior-point {Optimization} for {Sparse} and {Low}-rank {Optimization}},
howpublished = {https://doi.org/10.1007/s10957-024-02448-9},
volume = {202},
}
We consider the problem of designing a machine learning-based model of an unknown dynamical system from a finite number of (state-input)-successor state data points, such that the model obtained is also suitable for optimal control design. We propose a specific neural network (NN) architecture that yields a hybrid system with piecewise-affine dynamics that is differentiable with respect to the network's parameters, thereby enabling the use of derivative-based training procedures. We show that a careful choice of our NN's weights produces a hybrid system model with structural properties that are highly favourable when used as part of a finite horizon optimal control problem (OCP). Specifically, we show that optimal solutions with strong local optimality guarantees can be computed via nonlinear programming, in contrast to classical OCPs for general hybrid systems which typically require mixed-integer optimization. In addition to being well-suited for optimal control design, numerical simulations illustrate that our NN-based technique enjoys very similar performance to state-of-the-art system identification methodologies for hybrid systems and it is competitive on nonlinear benchmarks.
@article{nnhybridfabiani,
author = {Fabiani, F. and Stellato, B. and Masti, D. and Goulart, P.},
journal = {Automatica (to appear)},
year = {2024},
title = {A neural network-based approach to hybrid systems identification for control},
howpublished = {https://arxiv.org/abs/2404.01814},
}
We introduce a machine-learning framework to warm-start fixed-point optimization algorithms. Our architecture consists of a neural network mapping problem parameters to warm starts, followed by a predefined number of fixed-point iterations. We propose two loss functions designed to either minimize the fixed-point residual or the distance to a ground truth solution. In this way, the neural network predicts warm starts with the end-to-end goal of minimizing the downstream loss. An important feature of our architecture is its flexibility, in that it can predict a warm start for fixed-point algorithms run for any number of steps, without being limited to the number of steps it has been trained on. We provide PAC-Bayes generalization bounds on unseen data for common classes of fixed-point operators: contractive, linearly convergent, and averaged. Applying this framework to well-known applications in control, statistics, and signal processing, we observe a significant reduction in the number of iterations and solution time required to solve these problems, through learned warm starts.
@article{l2ws,
author = {Sambharya, R. and Hall, G. and Amos, B. and Stellato, B.},
journal = {Journal of Machine Learning Research},
number = {166},
year = {2024},
pages = {1--46},
title = {Learning to {Warm}-{Start} {Fixed}-{Point} {Optimization} {Algorithms}},
howpublished = {http://jmlr.org/papers/v25/23-1174.html},
volume = {25},
}
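A minimal forward-pass sketch of the architecture described above, using a contractive affine fixed-point operator; the network weights, operator, and dimensions below are hypothetical placeholders rather than trained models from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_theta, d_z, hidden = 3, 4, 8

# Hypothetical two-layer network mapping problem parameters theta to a warm start z0.
W1, b1 = rng.standard_normal((hidden, d_theta)), np.zeros(hidden)
W2, b2 = rng.standard_normal((d_z, hidden)), np.zeros(d_z)
def warm_start(theta):
    return W2 @ np.maximum(0.0, W1 @ theta + b1) + b2

# Contractive affine fixed-point operator T(z) = M z + m (spectral norm of M < 1).
M = 0.5 * np.eye(d_z)
m = rng.standard_normal(d_z)
def T(z):
    return M @ z + m

theta = rng.standard_normal(d_theta)
z = warm_start(theta)
for _ in range(20):          # any number of iterations can be run at evaluation time
    z = T(z)
print("fixed-point residual:", np.linalg.norm(z - T(z)))
```

Training would backpropagate either this fixed-point residual or the distance to a ground-truth solution through the iterations and into the network weights, which is the end-to-end objective described in the abstract.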
The opioid epidemic is a crisis that has plagued the United States (US) for decades. One central issue of the epidemic is inequitable access to treatment for opioid use disorder (OUD), which puts certain populations at a higher risk of opioid overdose. We integrate a predictive dynamical model and a prescriptive optimization problem to compute the optimal locations of opioid treatment facilities and the optimal treatment budget distribution in each US state. Our predictive model is a differential equation-based epidemiological model that captures the dynamics of the opioid epidemic. We use neural ordinary differential equations to fit this model to opioid epidemic data for each state and obtain estimates for unknown parameters in the model. We then incorporate this epidemiological model for each state into a corresponding mixed-integer optimization problem (MIP) for treatment facility location and resource allocation. Our MIPs aim to minimize the number of opioid overdose deaths and the number of people with OUD, and to target socioeconomic equitability by considering social vulnerability (from the US Centers for Disease Control’s Social Vulnerability Index) and opioid prescribing rates in each county. Overall, our MIPs’ proposed solutions on average decrease the number of people with OUD by 5.70 $\pm$ 0.738\%, increase the number of people in treatment by 21.17 $\pm$ 3.162\%, and decrease the number of opioid-related deaths by 0.51 $\pm$ 0.086\% after 2 years compared to the baseline epidemiological model’s predictions. Rather than only evaluating the effectiveness of potential policies as in past literature, our approach is decision-focused and directly yields actionable insights for policy-makers. It provides concrete opioid treatment facility and budget allocations and quantifies the impact of these allocations on pertinent population health measures. Future iterations of this approach could be implemented as a decision-making tool to tackle the issue of opioid treatment inaccessibility.
@article{joyce23,
author = {Luo, J. and Stellato, B.},
journal = {Manufacturing & Service Operations Management},
number = {4},
year = {2024},
title = {Frontiers in {Operations}: Equitable {Data}-{Driven} {Facility} {Location} and {Resource} {Allocation} to {Fight} the {Opioid} {Epidemic}},
howpublished = {https://doi.org/10.1287/msom.2023.0042},
volume = {26},
}
We introduce CVXPYgen, a tool for generating custom C code, suitable for embedded applications, that solves a parametrized class of convex optimization problems. CVXPYgen is based on CVXPY, a Python-embedded domain-specific language that supports a natural syntax (that follows the mathematical description) for specifying convex optimization problems. Along with the C implementation of a custom solver, CVXPYgen creates a Python wrapper for prototyping and desktop (non-embedded) applications. We give two examples, position control of a quadcopter and back-testing a portfolio optimization model. CVXPYgen outperforms a state-of-the-art code generation tool in terms of problem size it can handle, binary code size, and solve times. CVXPYgen and the generated solvers are open-source.
@article{schaller2022,
author = {Schaller, M. and Banjac, G. and Diamond, S. and Agrawal, A. and Stellato, B. and Boyd, S.},
journal = {IEEE Control Systems Letters},
year = {2022},
pages = {2653--2658},
title = {Embedded {Code} {Generation} with {CVXPY}},
howpublished = {https://doi.org/10.1109/LCSYS.2022.3173209},
volume = {6},
}
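A minimal sketch of the workflow for the tool above; the `generate_code` call and the solve-method registration follow the CVXPYgen documentation, but the problem, directory name, and exact keyword arguments are illustrative assumptions and may differ across versions.

```python
import cvxpy as cp
import numpy as np
from cvxpygen import cpg

# Small parametric QP: the quadratic cost is fixed, the linear cost q changes online.
n = 3
F = np.array([[1.0, 0.2, 0.0], [0.0, 1.0, 0.3], [0.1, 0.0, 1.0]])
q = cp.Parameter(n, name='q')
x = cp.Variable(n, name='x')
problem = cp.Problem(cp.Minimize(cp.sum_squares(F @ x) + q @ x),
                     [cp.sum(x) == 1, x >= 0])

# Emit custom C sources (plus a Python wrapper) tailored to this problem family.
cpg.generate_code(problem, code_dir='qp_code', solver='OSQP')

# Afterwards the generated solver can be registered and called from Python, e.g.
#   from qp_code.cpg_solver import cpg_solve
#   problem.register_solve('cpg', cpg_solve)
#   q.value = np.ones(n); problem.solve(method='cpg')
```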
We propose a method to approximate the solution of online mixed-integer optimization (MIO) problems at very high speed using machine learning. By exploiting the repetitive nature of online optimization, we can greatly speed up the solution time. Our approach encodes the optimal solution into a small amount of information denoted as strategy using the voice of optimization framework. In this way, the core part of the optimization routine becomes a multiclass classification problem that can be solved very quickly. In this work, we extend that framework to real-time and high-speed applications focusing on parametric mixed-integer quadratic optimization. We propose an extremely fast online optimization method consisting of a feedforward neural network evaluation and a linear system solution where the matrix has already been factorized. Therefore, this online approach does not require any solver or iterative algorithm. We show the speed of the proposed method both in terms of total computations required and measured execution time. We estimate the number of floating point operations required to completely recover the optimal solution as a function of the problem dimensions. Compared with state-of-the-art MIO routines, the online running time of our method is very predictable and can be lower than a single matrix factorization time. We benchmark our method against the state-of-the-art solver Gurobi obtaining up to two to three orders of magnitude speedups on examples from fuel cell energy management, sparse portfolio optimization, and motion planning with obstacle avoidance.
@article{stellato2022,
author = {Bertsimas, D. and Stellato, B.},
journal = {INFORMS Journal on Computing},
number = {4},
year = {2022},
pages = {2229--2248},
title = {Online {Mixed}-{Integer} {Optimization} in {Milliseconds}},
howpublished = {https://doi.org/10.1287/ijoc.2022.1181},
volume = {34},
}
Many robotics problems, from robot motion planning to object manipulation, can be modeled as mixed-integer convex programs (MICPs). However, state-of-the-art algorithms are still unable to solve MICPs for control problems quickly enough for online use and existing heuristics can typically only find suboptimal solutions that might degrade robot performance. In this work, we turn to data-driven methods and present the Combinatorial Offline, Convex Online (CoCo) algorithm for quickly finding high-quality solutions for MICPs. CoCo consists of a two-stage approach. In the offline phase, we train a neural network classifier that maps the problem parameters to a logical strategy, which we define as the discrete arguments and relaxed big-M constraints associated with the optimal solution for that problem. Online, the classifier is applied to select a candidate logical strategy given new problem parameters; applying this logical strategy allows us to solve the original MICP as a convex optimization problem. We show through numerical experiments how CoCo finds near-optimal solutions to MICPs arising in robot planning and control with 1 to 2 orders of magnitude solution speedup compared to other data-driven approaches and solvers.
@article{cauligi2021coco,
author = {Cauligi, A. and Culbertson, P. and Schmerling, E. and Schwager, M. and Stellato, B. and Pavone, M.},
journal = {IEEE Robotics and Automation Letters},
number = {2},
year = {2022},
pages = {1447--1454},
title = {CoCo: Online {Mixed}-{Integer} {Control} via {Supervised} {Learning}},
howpublished = {https://doi.org/10.1109/LRA.2021.3135931},
volume = {7},
}
The COVID-19 pandemic has created unprecedented challenges worldwide. Strained healthcare providers make difficult decisions on patient triage, treatment and care management on a daily basis. Policy makers have imposed social distancing measures to slow the disease, at a steep economic price. We design analytical tools to support these decisions and combat the pandemic. Specifically, we propose a comprehensive data-driven approach to understand the clinical characteristics of COVID-19, predict its mortality, forecast its evolution, and ultimately alleviate its impact. By leveraging cohort-level clinical data, patient-level hospital data, and census-level epidemiological data, we develop an integrated four-step approach, combining descriptive, predictive and prescriptive analytics. First, we aggregate hundreds of clinical studies into the most comprehensive database on COVID-19 to paint a new macroscopic picture of the disease. Second, we build personalized calculators to predict the risk of infection and mortality as a function of demographics, symptoms, comorbidities, and lab values. Third, we develop a novel epidemiological model to project the pandemic's spread and inform social distancing policies. Fourth, we propose an optimization model to reallocate ventilators and alleviate shortages. Our results have been used at the clinical level by several hospitals to triage patients, guide care management, plan ICU capacity, and re-distribute ventilators. At the policy level, they are currently supporting safe back-to-work policies at a major institution and equitable vaccine distribution planning at a major pharmaceutical company, and have been integrated into the US Center for Disease Control's pandemic forecast.
@article{Bertsimas2021From,
author = {Bertsimas, D. and Boussioux, L. and Cory Wright, R. and Delarue, A. and Digalakis, V. and Jacquillat, A. and Lahlou Kitane, D. and Lukin, G. and Li, M. L. and Mingardi, L. and Nohadani, O. and Orfanoudaki, A. and Papalexopoulos, T. and Paskov, I. and Pauphilet, J. and Skali Lami, O. and Stellato, B. and Tazi Bouardi, H. and Villalobos Carballo, K. and Wiberg, H. and Zeng, C.},
journal = {Health Care Management Science},
year = {2021},
month = {6},
pages = {253 -- 272},
publisher = {Springer},
title = {From predictions to prescriptions: A data-driven response to {COVID}-19},
volume = {24},
}
We introduce the idea that using optimal classification trees (OCTs) and optimal classification trees with hyperplanes (OCT-Hs), interpretable machine learning algorithms developed in [BD17, BD18], we are able to obtain insight into the strategy behind the optimal solution in any continuous and mixed-integer convex optimization problem as a function of key parameters that affect the problem. In this way, optimization is not a black box anymore. Instead, we redefine optimization as a multiclass classification problem where the predictor gives insights on the logic behind the optimal solution. In other words, OCTs and OCT-Hs give optimization a voice. We show on several realistic examples that the accuracy behind our method is in the 90\%-100\% range, while even when the predictions are not correct, the degree of suboptimality or infeasibility is very low. We compare optimal strategy predictions of OCTs and OCT-Hs and feed-forward neural networks (NNs) and conclude that the performance of OCT-Hs and NNs is comparable. OCTs are somewhat weaker but often competitive. Therefore, our approach provides a novel, reliable and insightful understanding of optimal strategies to solve a broad class of continuous and mixed-integer optimization problems.
@article{bertsimas2021,
author = {Bertsimas, D. and Stellato, B.},
journal = {Machine Learning},
number = {2},
year = {2021},
month = {2},
pages = {249--277},
title = {The {Voice} of {Optimization}},
howpublished = {https://doi.org/10.1007/s10994-020-05893-5},
volume = {110},
}
Heart-related anomalies are among the most common causes of death worldwide. Patients are often asymptomatic until a fatal event happens, and even when they are under observation, trained personnel are needed to identify a heart anomaly. In the last decades, there has been increasing evidence of how Machine Learning can be leveraged to detect such anomalies, thanks to the availability of Electrocardiograms (ECG) in digital format. New developments in technology have made it possible to exploit such data to build models able to analyze the patterns in the occurrence of heart beats, and spot anomalies from them. In this work, we propose a novel methodology to extract ECG-related features and predict the type of ECG recorded in real time (less than 30 milliseconds). Our models leverage a collection of almost 40 thousand ECGs labeled by expert cardiologists across different hospitals and countries, and are able to detect 7 types of signals: Normal, AF, Tachycardia, Bradycardia, Arrhythmia, Other or Noisy. We exploit the XGBoost algorithm, a leading machine learning method, to train models achieving out-of-sample F1 scores in the range 0.93-0.99. To our knowledge, this is the first work reporting high performance across hospitals, countries and recording standards.
@article{mingardi2020,
author = {Bertsimas, D. and Mingardi, L. and Stellato, B.},
journal = {IEEE Journal of Biomedical and Health Informatics},
number = {9},
year = {2021},
pages = {3627--3637},
title = {Machine {Learning} for {Real}-{Time} {Heart} {Disease} {Prediction}},
howpublished = {https://doi.org/10.1109/JBHI.2021.3066347},
volume = {25},
}
Background: Timely identification of COVID-19 patients at high risk of mortality can significantly improve patient management and resource allocation within hospitals. This study seeks to develop and validate a data-driven personalized mortality risk calculator for hospitalized COVID-19 patients. Methods: De-identified data was obtained for 3,927 COVID-19 positive patients from six independent centers, comprising 33 different hospitals. Demographic, clinical, and laboratory variables were collected at hospital admission. The COVID-19 Mortality Risk (CMR) tool was developed using the XGBoost algorithm to predict mortality. Its discrimination performance was subsequently evaluated on three validation cohorts. Findings: The derivation cohort of 3,062 patients has an observed mortality rate of 26.84\%. Increased age, decreased oxygen saturation (<= 93\%), elevated levels of C-reactive protein (>= 130 mg/L), blood urea nitrogen (>= 18 mg/dL), and blood creatinine (>= 1.2 mg/dL) were identified as primary risk factors, validating clinical findings. The model obtains out-of-sample AUCs of 0.90 (95\% CI, 0.87-0.94) on the derivation cohort. In the validation cohorts, the model obtains AUCs of 0.92 (95\% CI, 0.88-0.95) on Seville patients, 0.87 (95\% CI, 0.84-0.91) on Hellenic COVID-19 Study Group patients, and 0.81 (95\% CI, 0.76-0.85) on Hartford Hospital patients. The CMR tool is available as an online application at covidanalytics.io/mortality_calculator and is currently in clinical use. Interpretation: The CMR model leverages machine learning to generate accurate mortality predictions using commonly available clinical features. This is the first risk score trained and validated on a cohort of COVID-19 patients from Europe and the United States.
@article{Bertsimas2020COVID,
author = {Bertsimas, D. and Lukin, G. and Mingardi, L. and Nohadani, O. and Orfanoudaki, A. and Stellato, B. and Wiberg, H. and Gonzalez-Garcia, S. and Parra-Calderon, C. L. and Robinson, K. and Schneider, M. and Stein, B. and Estirado, A. and a Beccara, L. and Canino, R. and Dal Bello, M. and Pezzetti, F. and Pan, A.},
journal = {PLOS One},
year = {2020},
month = {12},
title = {COVID-19 {Mortality} {Risk} {Assessment}: An {International} {Multi}-{Center} {Study}},
howpublished = {https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0243262},
}
We present a general purpose solver for convex quadratic programs based on the alternating direction method of multipliers, employing a novel operator splitting technique that requires the solution of a quasi-definite linear system with the same coefficient matrix at almost every iteration. Our algorithm is very robust, placing no requirements on the problem data such as positive definiteness of the objective function or linear independence of the constraint functions. It can be configured to be division-free once an initial matrix factorization is carried out, making it suitable for real-time applications in embedded systems. In addition, our technique is the first operator splitting method for quadratic programs able to reliably detect primal and dual infeasible problems from the algorithm iterates. The method also supports factorization caching and warm starting, making it particularly efficient when solving parametrized problems arising in finance, control, and machine learning. Our open-source C implementation OSQP has a small footprint, is library-free, and has been extensively tested on many problem instances from a wide variety of application areas. It is typically ten times faster than competing interior point methods, and sometimes much more when factorization caching or warm start is used. OSQP has already shown a large impact with tens of thousands of users both in academia and in large corporations.
@article{stellato2020,
author = {Stellato, B. and Banjac, G. and Goulart, P. and Bemporad, A. and Boyd, S.},
journal = {Mathematical Programming Computation},
number = {4},
year = {2020},
month = {10},
pages = {637--672},
title = {OSQP: An {Operator} {Splitting} {Solver} for {Quadratic} {Programs}},
howpublished = {https://doi.org/10.1007/s12532-020-00179-2},
volume = {12},
}
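A minimal example of the open-source Python interface, showing the problem updates and factorization caching mentioned above; the problem data is illustrative.

```python
import numpy as np
from scipy import sparse
import osqp

# minimize 0.5 x'Px + q'x  subject to  l <= Ax <= u
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u)     # warm starting is enabled by default
res = prob.solve()
print(res.info.status, res.x)

# Re-solving a perturbed instance reuses the cached factorization and warm starts
# from the previous solution, which is where the parametric speedups come from.
prob.update(q=np.array([2.0, 3.0]), l=np.array([1.0, 0.1, 0.1]))
res = prob.solve()
```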
The alternating direction method of multipliers is a powerful operator splitting technique for solving structured optimization problems. For convex optimization problems, it is well known that the algorithm generates iterates that converge to a solution, provided that it exists. If a solution does not exist, then the iterates diverge. Nevertheless, we show that they yield conclusive information regarding problem infeasibility for optimization problems with linear or quadratic objective functions and conic constraints, which includes quadratic, second-order cone, and semidefinite programs. In particular, we show that in the limit the iterates either satisfy a set of first-order optimality conditions or produce a certificate of either primal or dual infeasibility. Based on these results, we propose termination criteria for detecting primal and dual infeasibility.
@article{banjac2017,
author = {Banjac, G. and Goulart, P. and Stellato, B. and Boyd, S.},
journal = {Journal of Optimization Theory and Applications},
number = {2},
year = {2019},
pages = {490--519},
title = {Infeasibility detection in the alternating direction method of multipliers for convex optimization},
howpublished = {https://doi.org/10.1007/s10957-019-01575-y},
volume = {183},
}
Switching time optimization arises in finite-horizon optimal control for switched systems where, given a sequence of continuous dynamics, one minimizes a cost function with respect to the switching times. We propose an efficient method for computing the optimal switching times for switched linear and nonlinear systems. A novel second-order optimization algorithm is introduced where, at each iteration, the dynamics are linearized over an underlying time grid to compute the cost function, the gradient and the Hessian efficiently. With the proposed method, the most expensive operations at each iteration are shared between the cost function and its derivatives, thereby greatly reducing the computational burden. We implemented the algorithm in the Julia package SwitchTimeOpt allowing the user to easily solve switching time optimization problems. In the case of linear dynamics, many operations can be further simplified and benchmarks show that our approach is able to provide optimal solutions in just a few ms. In the case of nonlinear dynamics, two examples show that our method provides optimal solutions with up to two orders of magnitude time reductions over state-of-the-art approaches.
@article{stellato2017a,
author = {Stellato, B. and Ober-Bl{\" o}baum, S. and Goulart, P.},
journal = {IEEE Transactions on Automatic Control},
number = {10},
year = {2017},
month = {10},
pages = {5407--5414},
title = {Second-{Order} {Switching} {Time} {Optimization} for {Switched} {Dynamical} {Systems}},
howpublished = {https://doi.org/10.1109/TAC.2017.2697681},
volume = {62},
}
Common approaches for direct model predictive control (MPC) for current reference tracking in power electronics suffer from the high computational complexity encountered when solving integer optimal control problems over long prediction horizons. We propose an efficient alternative method based on approximate dynamic programming, greatly reducing the computational burden and enabling sampling times under \(25\: \mu s\). Our approach is based on the offline minimization of an infinite horizon value function estimate which is then applied to the tail cost of the MPC problem. This allows us to reduce the controller horizon to a very small number of stages while improving overall controller performance. Our proposed algorithm is implemented on a small size FPGA and validated on a variable speed drive system with a three-level voltage source converter. Time measurements showed that only \(5.76 \: \mu s\) are required to run our algorithm for horizon $N=1$ and \(17.27\: \mu s\) for $N=2$ while outperforming state-of-the-art approaches with much longer horizons in terms of current distortion and switching frequency. To the authors' knowledge, this is the first time direct MPC for current control has been implemented on an FPGA solving the integer optimization problem in real-time and achieving comparable performance to formulations with long prediction horizons.
@article{stellato2017,
author = {Stellato, B. and Geyer, T. and Goulart, P.},
journal = {IEEE Transactions on Power Electronics},
number = {5},
year = {2017},
month = {5},
pages = {4007--4020},
title = {High-{Speed} {Finite} {Control} {Set} {Model} {Predictive} {Control} for {Power} {Electronics}},
howpublished = {http://dx.doi.org/10.1109/TPEL.2016.2584678},
volume = {32},
}
A variant of the well-known Chebyshev inequality for scalar random variables can be formulated in the case where the mean and variance are estimated from samples. In this paper we present a generalization of this result to multiple dimensions where the only requirement is that the samples are independent and identically distributed. Furthermore, we show that as the number of samples tends to infinity our inequality converges to the theoretical multi-dimensional Chebyshev bound.
@article{stellato2017b,
author = {Stellato, B. and Van Parys, B. P.G. and Goulart, P.},
journal = {The American Statistician},
number = {2},
year = {2017},
pages = {123--127},
title = {Multivariate {Chebyshev} {Inequality} with {Estimated} {Mean} and {Variance}},
howpublished = {http://dx.doi.org/10.1080/00031305.2016.1186559},
volume = {71},
}
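The limiting bound referenced above is the classical multivariate Chebyshev inequality: for a random vector $\xi \in \mathbb{R}^d$ with mean $\mu$ and nonsingular covariance $\Sigma$,
$$ \mathbb{P}\left( (\xi - \mu)^\top \Sigma^{-1} (\xi - \mu) \ge t \right) \le \frac{d}{t} \quad \text{for all } t > 0, $$
which follows from Markov's inequality since the quadratic form has expectation $d$; the paper establishes the analogous statement when $\mu$ and $\Sigma$ are replaced by estimates computed from i.i.d. samples.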
Conference proceedings
Convex quadratic optimization solvers are extensively utilized in various domains; however, achieving optimal performance in diverse situations remains a significant challenge due to the sparse nature of objective and constraint matrices. General-purpose architectures struggle with hardware utilization when performing critical sparse matrix operations, such as factorization and multiplication. To address this issue, we introduce a pipelined spatial architecture, Multi-Issue Butterfly (MIB), which supports all primitive scalar, vector, and matrix operations required by the Alternating Direction Method of Multipliers (ADMM)-based solver algorithm. The proposed architecture features a butterfly computational network with innovative working modes for each node, controlled by runtime instructions. We developed a companion scheduling method for matrix operations based on their sparsity patterns. For factorization, an elimination tree guides the reordering of network instructions to avoid data hazards caused by computation dependencies. For matrix-vector multiplication, data prefetching resolves structural hazards caused by read and write conflicts to register files. Instructions without hazards are issued simultaneously to increase pipeline throughput and function unit utilization. We evaluate the proposed architecture using FPGA prototypes, representing the first fully FPGA-based generic QP solver. Our assessment includes extensive performance and efficiency benchmarks across 100 QP problems from five application domains. Compared to the same algorithm variation running on CPU backends, our prototype achieves a geometric mean of $30.5\times$ end-to-end speedup, $127.0 \times$ greater energy efficiency, and $16.5\times$ less runtime jitter. In comparison to GPU backends, the prototype attains a geometric mean of $4.3\times$ faster end-to-end speedup, $21.7\times$ higher energy efficiency, and $33.4\times$ less runtime jitter.
@inproceedings{rsqp_multi,
address = {Los Alamitos, CA, USA},
author = {Wang, M. and McInerney, I. and Stellato, B. and Tu, F. and Boyd, S. and So, H. K.-H. and Cheng, K.-T.},
booktitle = {2024 57th {IEEE}/{ACM} {International} {Symposium} on {Microarchitecture} ({MICRO})},
year = {2024},
month = {11},
pages = {1574--1587},
organization = {IEEE Computer Society},
title = {Multi-{Issue} {Butterfly} {Architecture} for {Sparse} {Convex} {Quadratic} {Programming}},
}
We consider the multi-agent spatial navigation problem of computing the socially optimal order of play, i.e., the sequence in which the agents commit to their decisions, and its associated equilibrium in an N-player Stackelberg trajectory game. We model this problem as a mixed-integer optimization problem over the space of all possible Stackelberg games associated with the order of play's permutations. To solve the problem, we introduce Branch and Play (B&P), an efficient and exact algorithm that provably converges to a socially optimal order of play and its Stackelberg equilibrium. As a subroutine for B&P, we employ and extend sequential trajectory planning, i.e., a popular multi-agent control approach, to scalably compute valid local Stackelberg equilibria for any given order of play. We demonstrate the practical utility of B&P to coordinate air traffic control, swarm formation, and delivery vehicle fleets. We find that B&P consistently outperforms various baselines, and computes the socially optimal equilibrium.
@inproceedings{whoplaysfirstrobot,
address = {Delft, Netherlands},
author = {Hu, H. and Dragotto, G. and Zhang, Z. and Liang, K. and Stellato, B. and Fern{\' a}ndez Fisac, J.},
booktitle = {Proceedings of {Robotics}: Science and {Systems}},
year = {2024},
month = {7},
title = {Who {Plays} {First}? {Optimizing} the {Order} of {Play} in {Stackelberg} {Games} with {Many} {Robots}},
howpublished = {https://roboticsproceedings.org/rss20/p116.html},
}
We propose a stochastic first-order algorithm to learn the rationality parameters of simultaneous and non-cooperative potential games, i.e., the parameters of the agents’ optimization problems. Our technique combines an active-set step that enforces that the agents play at a Nash equilibrium and an implicit-differentiation step to update the estimates of the rationality parameters. We detail the convergence properties of our algorithm and perform numerical experiments on Cournot and congestion games, showing that our algorithm effectively finds high-quality solutions (in terms of out-of-sample loss) and scales to large datasets.
@inproceedings{clarke23,
author = {Clarke, S. and Dragotto, G. and Fernandez Fisac, J. and Stellato, B.},
booktitle = {IEEE {Conference} on {Decision} and {Control} ({CDC})},
year = {2023},
month = {12},
title = {Learning {Rationality} in {Potential} {Games}},
howpublished = {https://arxiv.org/abs/2303.11188},
}
First-order methods are widely used to solve convex quadratic programs (QPs) in real-time applications because of their low per-iteration cost. However, they can suffer from slow convergence to accurate solutions. In this paper, we present a framework which learns an effective warm-start for a popular first-order method in real-time applications, Douglas-Rachford (DR) splitting, across a family of parametric QPs. This framework consists of two modules: a feedforward neural network block, which takes as input the parameters of the QP and outputs a warm-start, and a block which performs a fixed number of iterations of DR splitting from this warm-start and outputs a candidate solution. A key feature of our framework is its ability to do end-to-end learning as we differentiate through the DR iterations. To illustrate the effectiveness of our method, we provide generalization bounds (based on Rademacher complexity) that improve with the number of training problems and number of iterations simultaneously. We further apply our method to three real-time applications and observe that, by learning good warm-starts, we are able to significantly reduce the number of iterations required to obtain high-quality solutions.
@inproceedings{l2wsqp,
author = {Sambharya, R. and Hall, G. and Amos, B. and Stellato, B.},
series = {Proceedings of {Machine} {Learning} {Research}},
booktitle = {Proceedings of the 5th {Annual} {Learning} for {Dynamics} and {Control} {Conference}},
editor = {Matni, N. and Morari, M. and Pappas, G. J.},
year = {2023},
month = {6},
pages = {220--234},
organization = {PMLR},
title = {End-to-{End} {Learning} to {Warm}-{Start} for {Real}-{Time} {Quadratic} {Optimization}},
volume = {211},
}
Convex optimization is at the heart of many performance-critical applications across a wide range of domains. Although many high-performance hardware accelerators have been developed for specific optimization problems in the past, designing such accelerators is a challenging task, and the resulting computing architectures are often so specific to the targeted application that they can hardly be reused even in a related application within the same domain. To accelerate general-purpose optimization solvers that must operate on diverse user input during run time, an ideal hardware solver should be able to adapt to the provided optimization problem dynamically while achieving high performance and power-efficiency. In this work, a hardware-accelerated general-purpose quadratic program solver, called RSQP, with reconfigurable functional units and data path that facilitate problem-specific customization is presented. RSQP uses a string-based encoding to describe the problem structure with fine granularity. Based on this encoding, functional units and datapath customized to the sparsity pattern of the problem are created by solving a dictionary-based lossless string compression problem and a mixed integer linear program respectively. RSQP has been integrated to accelerate the general-purpose quadratic programming solver OSQP and has been tested using an extensive benchmark with 120 optimization problems from 6 application domains. Through architectural customization, RSQP achieves up to 7x performance improvement over its baseline generic design. Furthermore, when compared with a CPU and a GPU-accelerated implementation, RSQP achieves up to 31.2x and 6.9x end-to-end speedup on these benchmark programs, respectively. Finally, the FPGA accelerator operates at up to 6.6x lower dynamic power consumption and up to 22.7x higher power efficiency over the GPU implementation, making it an attractive solution for power-conscious datacenter applications.
@inproceedings{rsqp,
address = {Orlando, FL, USA},
author = {Wang, M. and McInerney, I. and Stellato, B. and Boyd, S. and So, H. K.-H.},
booktitle = {Proceedings of the 50th {Annual} {International} {Symposium} on {Computer} {Architecture} ({ISCA})},
year = {2023},
organization = {Association for Computing Machinery},
title = {RSQP: Problem-{Specific} {Architectural} {Customization} for {Accelerated} {Convex} {Quadratic} {Optimization}},
}
First-order methods for quadratic optimization such as OSQP are widely used for large-scale machine learning and embedded optimal control, where many related problems must be rapidly solved. These methods face two persistent challenges: manual hyperparameter tuning and convergence time to high-accuracy solutions. To address these, we explore how Reinforcement Learning (RL) can learn a policy to tune parameters to accelerate convergence. In experiments with well-known QP benchmarks we find that our RL policy, RLQP, significantly outperforms state-ofthe-art QP solvers by up to 3x. RLQP generalizes surprisingly well to previously unseen problems with varying dimension and structure from different applications, including the QPLIB, Netlib LP and Maros-Mészáros problems. Code, models, and videos are available at https://berkeleyautomation.github.io/rlqp/
@inproceedings{rlqp,
author = {Ichnowski, J. and Jain, P. and Stellato, B. and Banjac, G. and Luo, M. and Borrelli, F. and Gonzales, J. E. and Stoica, I. and Goldberg, K.},
booktitle = {Advances in {Neural} {Information} {Processing} {Systems} 34},
year = {2021},
month = {12},
title = {Accelerating {Quadratic} {Optimization} with {Reinforcement} {Learning}},
howpublished = {https://proceedings.neurips.cc/paper_files/paper/2021/hash/afdec7005cc9f14302cd0474fd0f3c96-Abstract.html},
}
Reinforcement learning (RL) for continuous control typically employs distributions whose support covers the entire action space. In this work, we investigate the colloquially known phenomenon that trained agents often prefer actions at the boundaries of that space. We draw theoretical connections to the emergence of bang-bang behavior in optimal control, and provide extensive empirical evaluation across a variety of recent RL algorithms. We replace the normal Gaussian by a Bernoulli distribution that solely considers the extremes along each action dimension - a bang-bang controller. Surprisingly, this achieves state-of-the-art performance on several continuous control benchmarks - in contrast to robotic hardware, where energy and maintenance cost affect controller choices. Since exploration, learning, and the final solution are entangled in RL, we provide additional imitation learning experiments to reduce the impact of exploration on our analysis. Finally, we show that our observations generalize to environments that aim to model real-world challenges and evaluate factors to mitigate the emergence of bang-bang solutions. Our findings emphasise challenges for benchmarking continuous control algorithms, particularly in light of potential real-world applications.
@inproceedings{Seyde2021Is,
author = {Seyde, T. and Gilitschenski, I. and Schwarting, W. and Stellato, B. and Riedmiller, M. and Wulfmeier, M. and Rus, D.},
booktitle = {Advances in {Neural} {Information} {Processing} {Systems} 34},
year = {2021},
month = {12},
title = {Is {Bang}-{Bang} {Control} {All} {You} {Need}? {Solving} {Continuous} {Control} with {Bernoulli} {Policies}},
howpublished = {https://openreview.net/pdf?id=9BvDIW6_qxZ},
}
Mixed-integer convex programming (MICP) is a popular modeling framework for solving discrete and combinatorial optimization problems arising in various settings. Despite advances in efficient solvers for numerical optimization problems, MICPs remain challenging to solve for real-time control in applications requiring solution rates of 10-100Hz. Seeking to bridge this gap, we present an approach that leverages results from supervised learning and parametric programming to quickly solve for feasible solutions to MICPs. This approach consists of (1) an offline phase where a classifier is trained on a codebook consisting of solved MICPs and (2) an online step where the network yields candidate strategies given a new set of problem parameters. Unlike other data-driven approaches for solving MICPs, we show that our approach can tackle a broad category of problems that can be modeled as MICPs and how our framework can handle problems with a different number of discrete decision variables than the problems in the training set. Finally, we demonstrate the efficacy of our proposed approach through numerical experiments showing improved solution times and optimality compared to existing approaches for solving MICPs.
@inproceedings{anonymous2020coco,
author = {Cauligi, A. and Culbertson, P. and Stellato, B. and Schwager, M. and Pavone, M.},
booktitle = {Learning {Meets} {Combinatorial} {Algorithms} at {NeurIPS2020}},
year = {2020},
month = {12},
title = {CoCo: Learning {Strategies} for {Online} {Mixed}-{Integer} {Control}},
howpublished = {https://openreview.net/forum?id=51goIqq88gF},
}
Mixed-integer convex programming (MICP) has seen significant algorithmic and hardware improvements with several orders of magnitude solve time speedups compared to 25 years ago. Despite these advances, MICP has been rarely applied to real-world robotic control because the solution times are still too slow for online applications. In this work, we extend the machine learning optimizer (MLOPT) framework to solve MICPs arising in robotics at very high speed. MLOPT encodes the combinatorial part of the optimal solution into a strategy. Using data collected from offline problem solutions, we train a multiclass classifier to predict the optimal strategy given problem-specific parameters such as states or obstacles. Compared to previous approaches, we use task-specific strategies and prune redundant ones to significantly reduce the number of classes the predictor has to select from, thereby greatly improving scalability. Given the predicted strategy, the control task becomes a small convex optimization problem that we can solve in milliseconds. Numerical experiments on a cart-pole system with walls, a free-flying space robot and task-oriented grasps show that our method provides not only 1 to 2 orders of magnitude speedups compared to state-of-the-art solvers but also performance close to the globally optimal MICP solution.
@inproceedings{cauligi2020learning,
author = {Cauligi, A. and Culbertson, P. and Stellato, B. and Bertsimas, D. and Schwager, M. and Pavone, M.},
booktitle = {IEEE {Conference} on {Decision} and {Control} ({CDC})},
year = {2020},
month = {12},
title = {Learning {Mixed}-{Integer} {Convex} {Optimization} {Strategies} for {Robot} {Planning} and {Control}},
howpublished = {https://arxiv.org/abs/2004.03736},
}
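The "predicted strategy, then small convex problem" step described above can be illustrated with a toy mixed-integer QP (invented here, not the paper's formulation): once the binary variables are fixed to the values given by a predicted strategy, what remains is a plain QP that a solver such as OSQP handles in milliseconds.

import cvxpy as cp
import numpy as np

n, m = 10, 4                            # continuous and binary dimensions
rng = np.random.default_rng(1)
P = np.diag(rng.uniform(1.0, 2.0, n))   # positive definite quadratic cost
q = rng.normal(size=n)
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

# A "strategy" here is simply a fixed assignment of the binary variables,
# e.g. the one returned by the offline-trained classifier.
z_fixed = np.array([1.0, 0.0, 1.0, 0.0])
big_M = 5.0

x = cp.Variable(n)
constraints = [A @ x <= b + big_M * z_fixed]     # big-M coupling with fixed binaries
prob = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x), constraints)
prob.solve(solver=cp.OSQP)                       # small convex QP, fast to solve
print(prob.status, prob.value)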
Many control policies used in applications compute the input or action by solving a convex optimization problem that depends on the current state and some parameters. Common examples of such convex optimization control policies (COCPs) include the linear quadratic regulator (LQR), convex model predictive control (MPC), and convex approximate dynamic programming (ADP) policies. These types of control policies are tuned by varying the parameters in the optimization problem, such as the LQR weights, to obtain good performance, judged by application-specific metrics. Tuning is often done by hand, or by simple methods such as a grid search. In this paper we propose a method to automate this process, by adjusting the parameters using an approximate gradient of the performance metric with respect to the parameters. Our method relies on recently developed methods that can efficiently evaluate the derivative of the solution of a convex program with respect to its parameters. A longer version of this paper, which illustrates our method on many examples, is available at https://web.stanford.edu/~boyd/papers/learning_cocps.html.
@inproceedings{Agrawal2020Learning,
author = {Agrawal, A. and Barratt, S. and Boyd, S. and Stellato, B.},
series = {Proceedings of {Machine} {Learning} {Research}},
booktitle = {Proceedings of the 2nd {Conference} on {Learning} for {Dynamics} and {Control}},
year = {2020},
month = {6},
pages = {361--373},
organization = {PMLR},
title = {Learning {Convex} {Optimization} {Control} {Policies}},
volume = {120},
}
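One tool that provides the derivative of a convex program's solution with respect to its parameters is the cvxpylayers package; the toy policy, cost, and dimensions below are invented for illustration and are not the example from the paper.

import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n = 3
u = cp.Variable(n)
x = cp.Parameter(n)        # current state, supplied when the policy is evaluated
w_sqrt = cp.Parameter(n)   # tunable weights (square roots), adjusted by gradient

# Toy convex optimization control policy:
#   u(x; w) = argmin ||x + u||^2 + ||diag(w_sqrt) u||^2   subject to  |u| <= 1
objective = cp.Minimize(cp.sum_squares(x + u) + cp.sum_squares(cp.multiply(w_sqrt, u)))
policy = cp.Problem(objective, [cp.abs(u) <= 1])
layer = CvxpyLayer(policy, parameters=[x, w_sqrt], variables=[u])

# Tune the weights by differentiating a performance metric through the policy.
w_t = torch.ones(n, requires_grad=True)
opt = torch.optim.SGD([w_t], lr=0.1)
for _ in range(20):
    x_t = torch.randn(n)
    u_t, = layer(x_t, w_t)                  # differentiable solve of the policy problem
    cost = ((x_t + u_t) ** 2).sum()         # surrogate performance metric
    opt.zero_grad()
    cost.backward()
    opt.step()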
We present a novel branch-and-bound solver for mixed-integer quadratic programs (MIQPs) that efficiently exploits the first-order OSQP solver for the quadratic program (QP) sub-problems. Our algorithm is very robust, requires no dynamic memory allocation, and is division-free once an initial factorization is computed. Thus, it is suitable for embedded applications with low computing power. Moreover, it does not require any assumption on the problem data such as strict convexity of the objective function. We exploit factorization caching and warm-starting to reduce the computational cost of QP relaxations during branch-and-bound and over repeated solutions of parametric MIQPs such as those arising in embedded control, portfolio optimization, and machine learning. Numerical examples show that our method, using a simple high-level Python implementation interfaced with the OSQP solver, is competitive with established commercial solvers.
@inproceedings{stellato2018,
author = {Stellato, B. and Naik, V. V. and Bemporad, A. and Goulart, P. and Boyd, S.},
booktitle = {European {Control} {Conference} ({ECC})},
year = {2018},
month = {7},
title = {Embedded {Mixed}-{Integer} {Quadratic} {Optimization} {Using} the {OSQP} {Solver}},
howpublished = {https://ieeexplore.ieee.org/document/8550136},
}
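A stripped-down sketch of the branch-and-bound idea on a toy MIQP, using the OSQP Python interface (problem data invented here). Bound constraints are written as identity rows so that branching only requires vector updates of l and u, which lets OSQP reuse its factorization and warm-start between nodes, as in the paper; everything else (node selection, tolerances) is simplified.

import numpy as np
import osqp
from scipy import sparse

# Toy MIQP: minimize 0.5 x'Px + q'x  s.t.  lb <= x <= ub,  x[i] integer for i in idx_int.
P = sparse.diags([2.0, 2.0, 2.0]).tocsc()
q = np.array([-4.3, -1.7, -2.9])
idx_int = [0, 2]
lb, ub = np.zeros(3), 3.0 * np.ones(3)
A = sparse.eye(3, format='csc')      # identity rows encode the (branchable) bounds

prob = osqp.OSQP()
prob.setup(P, q, A, lb, ub, verbose=False, eps_abs=1e-6, eps_rel=1e-6)

best_obj, best_x = np.inf, None
stack = [(lb.copy(), ub.copy())]
while stack:
    l, u = stack.pop()
    prob.update(l=l, u=u)            # only vectors change: no refactorization needed
    res = prob.solve()
    if res.info.status != 'solved' or res.info.obj_val >= best_obj:
        continue                     # prune infeasible or dominated nodes
    x = res.x
    frac = [i for i in idx_int if abs(x[i] - round(x[i])) > 1e-4]
    if not frac:                     # integer feasible: update the incumbent
        best_obj, best_x = res.info.obj_val, x.copy()
        continue
    i = frac[0]                      # branch on the first fractional integer variable
    u_lo = u.copy(); u_lo[i] = np.floor(x[i])
    l_hi = l.copy(); l_hi[i] = np.ceil(x[i])
    stack += [(l.copy(), u_lo), (l_hi, u.copy())]

print(best_x, best_obj)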
We introduce a code generation software package that accepts a parametric description of a quadratic program (QP) as input and generates tailored C code that compiles into a fast and reliable optimization solver for the QP that can run on embedded platforms. The generated code is based on OSQP, a novel open-source operator splitting solver for quadratic programming. Our software supports matrix factorization caching and warm starting, and allows updates of the problem parameters during runtime. The generated C code is library-free and has a very small compiled footprint. Examples arising in real-world applications show that the generated code outperforms state-of-the-art embedded and desktop QP solvers.
@inproceedings{banjac2017a,
author = {Banjac, G. and Stellato, B. and Moehle, N. and Goulart, P. and Bemporad, A. and Boyd, S.},
booktitle = {IEEE {Conference} on {Decision} and {Control} ({CDC})},
year = {2017},
month = {12},
title = {Embedded {Code} {Generation} {Using} the {OSQP} {Solver}},
howpublished = {https://ieeexplore.ieee.org/document/8263928},
}
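For illustration, generating embedded code from the Python interface looks roughly like the snippet below; the QP data are placeholders, and the codegen argument names follow the 0.6-series OSQP interface and may differ in other releases.

import numpy as np
import osqp
from scipy import sparse

# A small parametric QP: minimize 0.5 x'Px + q'x  subject to  l <= Ax <= u.
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u)

# Emit library-free C code for this problem family; with parameters='vectors'
# the embedded solver supports runtime updates of q, l and u (use
# parameters='matrices' to also allow updates of the values of P and A).
prob.codegen('generated_osqp', parameters='vectors')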
Common approaches for direct model predictive control (MPC) for current reference tracking in power electronics suffer from the high computational complexity encountered when solving integer optimal control problems over long prediction horizons. Recently, an alternative method based on approximate dynamic programming showed that it is possible to reduce the computational burden, enabling sampling times under \(25\:\mu s\), by shortening the MPC horizon to a very small number of stages while improving the overall controller performance. In this paper we implemented this new approach on a small-size FPGA and validated it on a variable speed drive system with a three-level voltage source converter. Time measurements showed that only \(5.76\:\mu s\) are required to run our algorithm for horizon \(N=1\) and \(17.27\:\mu s\) for \(N=2\), while outperforming state-of-the-art approaches with much longer horizons in terms of current distortion and switching frequency. To the authors' knowledge, this is the first time direct MPC for current control has been implemented on an embedded platform achieving performance comparable to formulations with long prediction horizons.
@inproceedings{stellato2016a,
author = {Stellato, B. and Goulart, P.},
booktitle = {IEEE {Conference} on {Decision} and {Control} ({CDC})},
year = {2016},
month = {12},
pages = {1471--1476},
title = {Real-time {FPGA} {Implementation} of {Direct} {MPC} for {Power} {Electronics}},
howpublished = {https://doi.org/10.1109/CDC.2016.7798474},
}
Switching time optimization arises in finite-horizon optimal control for switched systems where, given a sequence of continuous dynamics, we minimize a cost function with respect to the switching times. In this paper we propose an efficient method for computing optimal switching times in switched linear systems. We derive simple expressions for the cost function, the gradient, and the Hessian which can be computed efficiently online without performing any integration. With the proposed method, the most expensive computations are decomposed into independent scalar exponentials which can be efficiently computed and parallelized. Simulation results show that our method provides fast convergence and efficiently handles a large number of switching times.
@inproceedings{stellato2016b,
author = {Stellato, B. and Ober-Bl{\" o}baum, S. and Goulart, P.},
booktitle = {IEEE {Conference} on {Decision} and {Control} ({CDC})},
year = {2016},
month = {12},
pages = {7228--7233},
title = {Optimal {Control} of {Switching} {Times} in {Switched} {Linear} {Systems}},
howpublished = {https://doi.org/10.1109/CDC.2016.7799384},
}
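In symbols, and with notation chosen here for illustration (a quadratic state cost and a fixed mode sequence), the switching time problem described above reads:

\[
\begin{aligned}
\min_{0 \le \tau_1 \le \dots \le \tau_m \le T} \quad & \int_0^T x(t)^\top Q\, x(t)\, \mathrm{d}t \\
\text{subject to} \quad & \dot{x}(t) = A_i\, x(t), \quad t \in [\tau_{i-1}, \tau_i), \quad i = 1, \dots, m+1, \\
& x(0) = x_0, \qquad \tau_0 = 0, \quad \tau_{m+1} = T.
\end{aligned}
\]

Because the state propagates through matrix exponentials, \(x(\tau_i) = e^{A_i (\tau_i - \tau_{i-1})} x(\tau_{i-1})\), the cost and its first and second derivatives with respect to the switching times can be written in terms of such exponentials and evaluated without numerical integration, which is what the paper exploits.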
Common approaches for direct model predictive control (MPC) for current reference tracking in power electronics suffer from the high computational complexity encountered when solving integer optimal control problems over long prediction horizons. We propose an efficient alternative method based on approximate dynamic programming, greatly reducing the computational burden and enabling sampling times under \(25\:\mu s\). Our approach is based on the offline minimization of an infinite horizon cost function estimate which is then applied to the tail cost of the MPC problem. This allows us to reduce the controller horizon to a very small number of stages, improving overall controller performance. Our proposed algorithm is validated on a variable speed drive system with a three-level voltage source converter.
@inproceedings{stellato2016,
author = {Stellato, B. and Goulart, P.},
booktitle = {European {Control} {Conference} ({ECC})},
year = {2016},
month = {7},
pages = {129--134},
title = {High-{Speed} {Direct} {Model} {Predictive} {Control} for {Power} {Electronics}},
howpublished = {http://ieeexplore.ieee.org/document/7810275/},
}
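In rough terms, and with generic notation chosen here (the specific converter model and costs are in the paper), the approach above replaces a long-horizon problem with a short-horizon one whose tail is captured by an approximate value function \(\hat{V}\) fitted offline:

\[
\begin{aligned}
\min_{u_0, \dots, u_{N-1}} \quad & \sum_{k=0}^{N-1} \ell(x_k, u_k) \;+\; \hat{V}(x_N) \\
\text{subject to} \quad & x_{k+1} = f(x_k, u_k), \quad u_k \in \mathcal{U}, \quad k = 0, \dots, N-1, \qquad x_0 = x(t).
\end{aligned}
\]

With \(N\) as small as 1 or 2 and a finite (switched) input set \(\mathcal{U}\), only a handful of input combinations must be evaluated at each sampling instant, which is what makes the microsecond-level sampling times reported above attainable.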
Theses
Many applications in engineering, computer science and economics involve mixed-integer optimal control problems. Solving these problems in real-time is a challenging task because of the explosion of integer combinations to evaluate. This thesis focuses on the development of new algorithms for mixed-integer programming with an emphasis on optimal control problems of fast dynamical systems with discrete controls. The first part proposes two reformulations to reduce the computational complexity. The first reformulation avoids integer variables altogether. By considering a sequence of switched dynamics, we analyze the switching time optimization problem. Even though it is a continuous smooth problem, it is non-convex and the cost function and derivatives are hard to compute. We develop a new efficient method to compute the cost function and its derivatives. Our technique brings up to two orders of magnitude speedups with respect to state-of-the-art tools. The second approach reduces the number of integer decisions. In hybrid model predictive control (MPC) the computational complexity grows exponentially with the horizon length. Using approximate dynamic programming (ADP) we reduce the horizon length while maintaining good control performance by approximating the tail cost offline. This approach allows, for the first time, the application of such control techniques to fast dynamical systems with sampling times of only a few microseconds. The second part investigates embedded branch-and-bound algorithms for mixed-integer quadratic programs (MIQPs). A core component of these methods is the solution of continuous quadratic programs (QPs). We develop OSQP, a new robust and efficient general-purpose QP solver based on the alternating direction method of multipliers (ADMM) and able, for the first time, to detect infeasible problems. We include OSQP into a custom branch-and-bound algorithm suitable for embedded systems. Our extension requires only a single matrix factorization and exploits warm-starting, thereby greatly reducing the number of ADMM iterations required. Numerical examples show that our algorithm solves small- to medium-scale MIQPs more quickly than commercial solvers.
@misc{stellato2017d,
author = {Stellato, B.},
year = {2017},
school = {University of Oxford},
title = {Mixed-{Integer} {Optimal} {Control} of {Fast} {Dynamical} {Systems}},
type = {phdthesis},
}
Traditional optimization methods for decision-making under uncertainty assume perfect model information. In practice, however, such precise knowledge is rarely available. Thus, optimal decisions coming from these approaches can be very sensitive to perturbations and unreliable. Stochastic optimization programs take uncertainty into account but are intractable in general and need to be approximated. Recently, distributionally robust optimization methods have been shown to be powerful tools to reformulate stochastic programs in a tractable way. Moreover, the recent advent of cheap sensing devices has caused an explosion of available historical data, usually referred to as ``Big Data''. Thus, modern optimization techniques are shifting from traditional methods to data-driven approaches. In this thesis, we derive data-driven tractable reformulations for stochastic optimization programs based on distributionally robust optimization. In the first part of this work we provide our theoretical contributions. New distributionally robust probability bounds are derived and used to reformulate uncertain optimization programs assuming limited information about the uncertainty. Then, we show how this information can be derived from historical data. In the second part of this work, we compare the developed methods to support vector machines in a machine learning setting and to randomized optimization in a control context.
@misc{stellato2014,
author = {Stellato, B.},
year = {2014},
school = {ETH Z{\" u}rich},
title = {Data-driven chance constrained optimization},
type = {mathesis},
}
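As a generic illustration of the type of constraint the thesis above reformulates (the exact ambiguity sets and bounds are developed in the thesis itself), a distributionally robust chance constraint asks for feasibility with high probability under every distribution in a data-driven ambiguity set \(\mathcal{P}\):

\[
\sup_{\mathbb{P} \in \mathcal{P}} \; \mathbb{P}\bigl( g(x, \xi) > 0 \bigr) \;\le\; \varepsilon,
\]

where \(x\) is the decision, \(\xi\) the uncertain parameter, and \(\varepsilon\) the admissible violation probability; the probability bounds mentioned above are what turn this semi-infinite condition into tractable convex constraints.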