Lagrangian duality in convex analysis: A (not so) summarized account
I've been dabbling in some convex optimization and decided to write this post to summarize the ideas in a (hopefully) clear and linear fashion. The material is mostly straight out of Boyd and Vandenberghe (freely available), inspired by this very helpful MSE post. Let's get started.

We'll work with $\mathbb{R}^n$ for concreteness, though the theory extends to much more general settings (such as Banach spaces). I presume you know that a convex set $C \subseteq \mathbb{R}^n$ is one for which $x, y \in C$ and $0 \leq t \leq 1$ imply $tx + (1-t)y \in C$, and that a real-valued function $f$ is convex if its domain is convex and \[ f(tx + (1-t)y) \leq tf(x) + (1-t)f(y) \] for $x, y \in \text{dom}(f)$ and $0 \leq t \leq 1$ as before.

I said $f$ is real-valued, but we also allow it to take the values $\pm \infty$, adopting the standard conventions for arithmetic and inequalities with infinities; this will be useful later. Because of that, we define a proper convex function to be one which never takes the value $-\infty$ and is finite at at least one point.
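As a quick sanity check on the convexity inequality, here's a small Python sketch (the function names are mine, not standard) that randomly samples points and looks for a violation of $f(tx + (1-t)y) \leq tf(x) + (1-t)f(y)$. Finding a violation certifies non-convexity; finding none is of course not a proof of convexity, just evidence:

```python
import math
import random

def violates_convexity(f, trials=10_000, lo=-5.0, hi=5.0, tol=1e-9):
    """Randomly sample x, y in [lo, hi] and t in [0, 1], and search for a
    violation of the convexity inequality
        f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y).
    Returns True if a violation (a certificate of non-convexity) is found."""
    for _ in range(trials):
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        t = random.random()
        if f(t * x + (1 - t) * y) > t * f(x) + (1 - t) * f(y) + tol:
            return True  # chord lies below the graph somewhere: not convex
    return False  # no violation found (consistent with convexity)

# f(x) = x^2 is convex: no violation should ever appear.
print(violates_convexity(lambda x: x * x))
# sin is not convex on [-5, 5]: a violation is found almost surely.
print(violates_convexity(math.sin))
```

The tolerance `tol` guards against false positives from floating-point rounding when the inequality holds with equality (e.g. for affine functions).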