# James Gregson's Website

## Assignment Problems

Assignment problems occur in many fields that deal with resource allocation. There are many variants, but the simplest involves $$N$$ agents indexed by $$i$$ which must perform $$N$$ tasks indexed by $$j$$. The cost for agent $$i$$ to perform task $$j$$ is encoded in a cost matrix $$C$$ that is weighted component-wise by assignments from an assignment matrix $$W$$ to form the objective.

It's common to have the requirements that each task must be completed and that no agent can perform more than one task. Often it is required that each agent perform a single task to completion but I'm not considering that case here.

\begin{align*} \newcommand{\argmin}{{\bf\mbox{argmin}}} \newcommand{\st}{{\bf\mbox{subject to:}}} \newcommand{\gap}{\hspace{0.25cm}} W = \argmin\gap& &\sum_{ij} C_{ij} W_{ij} \\ \st\gap& & \sum_i W_{ij} = 1\gap\forall j\\ & & \sum_j W_{ij} = 1\gap\forall i\\ & & W_{ij} \geq 0 \gap\forall i,j \end{align*}

The problem can be solved using the Hungarian algorithm but I thought I'd give it a try with ADMM to see how well it works.

The probability simplex constraints are a pain, but it's pretty easy to handle them by introducing splitting variables $$A, B$$, each equal to $$W$$. This gives the following modified problem:

\begin{align*} W = \argmin\gap& &\sum_{ij} C_{ij} W_{ij} + \sum_i \mathcal{I}_\Delta(A_{i*}) + \sum_j \mathcal{I}_\Delta(B_{*j}) \\ \st\gap& & W - A = 0 \\ & & W - B = 0 \end{align*}

The function $$\mathcal{I}_\Delta(v)$$ is the indicator function of the probability simplex, applied here to the rows of $$A$$ and the columns of $$B$$:

\begin{equation*} \mathcal{I}_\Delta(v) = \begin{cases} 0 & \gap\mbox{if } v_i \geq 0 \gap\forall i \mbox{ and } \sum_i v_i = 1 \\ \infty &\gap\mbox{otherwise} \end{cases} \end{equation*}

The proximal operator for this function requires Euclidean projection onto the combined non-negativity and sum constraints. It's possible to alternately project onto the non-negativity and sum constraints individually, but this converges slowly. However, an $$O(N \log N)$$ algorithm exists and can be implemented in Python as below:

def prox_probability_simplex( v ):
    """Algorithm of https://eng.ucmerced.edu/people/wwang5/papers/SimplexProj.pdf"""
    j = numpy.arange(1,len(v)+1)
    u = numpy.sort(v)[::-1]
    y = numpy.cumsum( u )
    rho = numpy.argmax((u+(1.0-y)/j>0)*j)+1
    off = (1-y[rho-1])/rho
    ret = numpy.maximum( v+off, 0.0 )
    return ret
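A quick sanity check of the projection: its output should be non-negative and sum to one, and points already on the simplex should be fixed points. A minimal sketch, restating the routine above:

```python
import numpy

def prox_probability_simplex(v):
    # O(N log N) Euclidean projection onto the probability simplex
    j = numpy.arange(1, len(v) + 1)
    u = numpy.sort(v)[::-1]
    y = numpy.cumsum(u)
    rho = numpy.argmax((u + (1.0 - y) / j > 0) * j) + 1
    off = (1.0 - y[rho - 1]) / rho
    return numpy.maximum(v + off, 0.0)

v = numpy.array([0.4, -0.2, 1.3])
p = prox_probability_simplex(v)
# the projection lands on the simplex: non-negative entries summing to one
assert numpy.all(p >= 0.0) and abs(p.sum() - 1.0) < 1e-12
# points already on the simplex are fixed points of the projection
q = numpy.array([0.25, 0.25, 0.5])
assert numpy.allclose(prox_probability_simplex(q), q)
```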


Neglecting constants w.r.t $$W, A, B$$, the augmented Lagrangian of this problem is:

\begin{equation*} \mathcal{L}(W,A,U_A,B,U_B) = \sum_{ij} C_{ij} W_{ij} + \sum_i \mathcal{I}_\Delta(A_{i*}) + \sum_j \mathcal{I}_\Delta(B_{*j}) + \frac{\rho}{2}\| W - A + U_A \|_2^2 + \frac{\rho}{2}\| W - B + U_B \|_2^2 \end{equation*}

ADMM consists of alternating minimizations for the solution $$W$$ and the splitting variables $$A$$ and $$B$$. After each set of updates, the (scaled) Lagrange multipliers $$U_A, U_B$$ are updated. The objective terms parallelize cleanly: the first term and the two quadratic penalty terms parallelize per component of $$W$$, while the $$\mathcal{I}_\Delta(\cdot)$$ terms parallelize independently over rows and columns of the splitting variables.

The resulting algorithm is as follows. I have not parallelized the solves although the $$W$$ solve is vectorized at least:

def assignment_problem( C, W=None, rho=0.1, thresh=0.55, early_stop=True, num_iterations=10000 ):
    """Applies an ADMM algorithm to the assignment problem

    This algorithm will only solve the problem at the limit of iterations. Early
    stopping or running too few iterations could give a non-optimal or invalid
    matching (i.e. not a permutation of agents to tasks)

    That said, it often does okay!

    Inputs:
        C (NxN matrix): Input cost matrix mapping agents (row indices) to tasks
            (column indices). Invalid assignments should be indicated with very
            large positive entries

        W (NxN matrix): Initial assignments to warm-start the process or None
            to start from all zeros

        rho (scalar): Lagrangian augmentation weight

        thresh (scalar): Threshold above which assignments are treated as hard
            for the purposes of returned assignments

        early_stop (boolean): enable early stopping in the case that thresholding
            the assignment matrix by thresh produces a valid assignment. This
            does not necessarily produce an optimal solution!

        num_iterations (int): maximum number of iterations to perform, may be
            fewer if early_stop == True

    Returns:

        assignment (N-element integer array): assignments from agents to tasks
            where each element assignment[i]=v maps agent i to task v

        assignment matrix (NxN matrix): solution variables to warm-start
            subsequent solves

        loss (scalar): the achieved loss/cost of the solution

        iter (integer): number of iterations performed
    """
    if W is None:
        W = numpy.zeros_like(C)

    A = W.copy()
    B = W.copy()
    Ua = W - A
    Ub = W - B

    for iter in range( num_iterations ):
        # update primary variables, trivially parallelizes per component
        W = numpy.maximum( (rho*(A-Ua)+rho*(B-Ub)-C)/(2.0*rho), 0.0 )

        # update row constraints, trivially parallelizes by row
        for i in range(A.shape[0]):
            A[i,:] = prox_probability_simplex( W[i,:] + Ua[i,:] )
        Ua += (W - A)

        # update column constraints, trivially parallelizes by column
        for i in range(A.shape[1]):
            B[:,i] = prox_probability_simplex( W[:,i] + Ub[:,i] )
        Ub += (W - B)

        tloss = numpy.sum(C*W)
        rowsum = numpy.sum(W > thresh, axis=1)
        colsum = numpy.sum(W > thresh, axis=0)
        if early_stop and numpy.max(rowsum) == 1 and numpy.min(rowsum) == 1 and numpy.max(colsum) == 1 and numpy.min(colsum) == 1:
            return numpy.argmax( W > thresh, axis=1 ), W, tloss, iter

    return numpy.argmax( W > thresh, axis=1 ), W, tloss, iter

def check_validity( x ):
    """Checks that a solution from the above is a permutation"""
    return numpy.linalg.norm( numpy.sort(x) - numpy.arange(len(x)) ) < 1e-8
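For small problems, an exhaustive search over permutations provides a ground-truth reference for judging whether early stopping cost any optimality. A sketch, where `brute_force_assignment` is a hypothetical helper that is only feasible for small $$N$$:

```python
import itertools
import numpy

def brute_force_assignment(C):
    """Exhaustive reference solver: try every permutation of tasks.
    Only feasible for small N, but useful to verify ADMM solutions."""
    N = C.shape[0]
    best_perm, best_cost = None, numpy.inf
    for perm in itertools.permutations(range(N)):
        cost = sum(C[i, perm[i]] for i in range(N))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return numpy.array(best_perm), best_cost

C = numpy.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
perm, cost = brute_force_assignment(C)
# optimal matching: agent 0 -> task 1, agent 1 -> task 0, agent 2 -> task 2
assert list(perm) == [1, 0, 2]
assert cost == 5.0
```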


It's worth noting a few things:

• With the splitting and ADMM, this algorithm will only exactly solve the problem in the limit of iteration count.
• Early stopping can reduce the iterations needed to achieve a valid, low-cost matching by 10-50X depending on the problem, but this matching may not be optimal. For some test problems with 10x10 cost matrices having standard normal entries, a suboptimal matching occurred about 1% of the time. The achieved loss values are typically within 1-2% of optimal.
• Early stopping does not always generate output identical to running the algorithm for many more iterations. Subsets of agents/tasks where multiple matchings produce close results tend to be hard to resolve.

## Dictionary Learning

Dictionary learning consists of jointly solving for a dictionary $$D$$, whose columns store features, and weights $$X$$, which store the coefficients applied to $$D$$ required to best approximate a collection of input datapoints stored as columns of $$Y$$. The weights $$X$$ are required to be sparse, i.e. to have relatively few non-zero entries:

\begin{equation*} \newcommand{\gap}{\hspace{0.25cm}} \newcommand{\argmin}{{\bf\mbox{argmin}}} \newcommand{\st}{{\bf\mbox{subject to:}}} D, X = \argmin \frac{1}{2} \| Y - DX \|_F^2 + \lambda \| X \|_1 \end{equation*}

An additional constraint must be added to $$D$$ because, without one, the 1-norm in the objective can be made arbitrarily small by allowing $$D$$ to grow arbitrarily large. Typically this constraint is that the columns of $$D$$ have unit 2-norm.

This is a non-convex problem, but it can be solved relatively well by iterating weight and dictionary updates, leaving the other fixed. Ignoring the norm constraint, the dictionary update is a linear least-squares problem and can be solved as:

\begin{equation*} D = \left( \left( X X^T + \epsilon I \right)^{-1} X Y^T \right)^T \end{equation*}

After this, the columns can be normalized.
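In code, the dictionary update and column normalization take only a few lines. A minimal sketch (the helper name and the random problem shapes are just for illustration; $$\epsilon$$ guards against a singular $$X X^T$$):

```python
import numpy

def dictionary_update(Y, X, epsilon=1e-8):
    # least-squares dictionary update: D = ((X X^T + eps I)^-1 X Y^T)^T
    num_atoms = X.shape[0]
    D = numpy.linalg.solve(
        numpy.dot(X, X.T) + epsilon * numpy.eye(num_atoms),
        numpy.dot(X, Y.T)
    ).T
    # renormalize columns of D to unit 2-norm
    D /= numpy.linalg.norm(D, axis=0, keepdims=True)
    return D

numpy.random.seed(0)
Y = numpy.random.standard_normal((8, 100))   # 100 datapoints of dimension 8
X = numpy.random.standard_normal((16, 100))  # 16 atoms' worth of coefficients
D = dictionary_update(Y, X)
assert D.shape == (8, 16)
assert numpy.allclose(numpy.linalg.norm(D, axis=0), 1.0)
```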

The weight update is a sparse-coding problem and can be solved using ADMM. The first step is to introduce a splitting variable that allows the Frobenius-norm and 1-norm to be handled separately:

\begin{align*} X = \argmin & & \frac{1}{2} \| Y - D X \|_F^2 + \lambda \| Z \|_1 \\ \st & & X - Z = 0 \end{align*}

Once in this form, the update algorithm for the weights can be written by inspection based on the link above, using $$F(X) = \frac{1}{2}\| Y - D X \|_F^2$$ and $$G(Z) = \lambda \| Z \|_1$$.

\begin{align*} \newcommand{\prox}[2]{{\bf\mbox{prox}_{#1}}\left(#2\right)} X^{k+1} &=& \prox{F}{Z^k - U^k} \\ Z^{k+1} &=& \prox{\lambda G}{X^{k+1} + U^k} \\ U^{k+1} &=& U^k + X^{k+1} - Z^{k+1} \end{align*}

All that remains is to determine the proximal operators and initialization. The proximal operator for $$X$$ is for a pretty conventional least-squares problem:

\begin{align*} \prox{F}{V} &=& \argmin\gap F(X) + \frac{1}{2}\| X - V \|_F^2 \\ &=& \argmin\gap \frac{1}{2} \| Y - D X \|_F^2 + \frac{1}{2} \| X - V \|_F^2 \\ &=& \left( D^T D + I \right)^{-1} \left( D^T Y + V \right) \end{align*}

The corresponding proximal operator for $$Z$$ is for a 'pure' 1-norm, which is just the soft-thresholding operator:

\begin{align*} \prox{\lambda G}{V} &=& \argmin\gap G(Z) + \frac{1}{2\lambda} \| Z - V \|_2^2 \\ &=& \left( |V| - \lambda \right)_+ {\bf\mbox{sgn}}( V ) \end{align*}

If you're in doubt where either of these come from, consult the ADMM link above.

This leads to some interesting choices. Solving the split problem has two sets of variables, $$X$$ and $$Z$$. $$X$$ minimizes the data term but $$Z$$ enforces the constraints. Nominally, both of these should be equal but this only holds at the limit of iteration count, assuming the overall algorithm works at all (the problem is non-convex after all). Which should be used when updating $$D$$?

For a test problem involving around 5.5M points of 48 dof data, I found that using $$Z$$ is vastly preferable. Using a dictionary with only 100 atoms (~2X overcomplete) yields reconstruction errors around 1% or less but the percentage of non-zeros in the coding weights is only around 25%. It settled relatively quickly. Using $$X$$ on the other hand oscillated quite a bit and did not reach these levels of accuracy/sparsity.

I conclude that it's better to use a looser fit than slacker constraint satisfaction, for this problem at least. Presumably the dictionary/codebooks adapt to each other better when they are consistent. More 'compressible' problems may yield sparser solutions.

import numpy

def sparse_coding( Y, D, X, rho, num_iterations, Z=None, U=None ):
    if Z is None:
        Z = X.copy()
    if U is None:
        U = X - Z

    # precompute solve and part of RHS
    iDtD = numpy.linalg.inv( numpy.dot(D.transpose(),D) + numpy.eye(D.shape[1]) )
    DtY  = numpy.dot( D.transpose(), Y )

    for iter in range(num_iterations):
        print('    Sparse coding iteration [{}/{}]'.format(iter+1,num_iterations) )
        # primary update
        X = numpy.dot( iDtD, DtY + Z - U )

        # splitting variable update
        T = X + U
        Z = numpy.maximum( numpy.abs(T) - rho, 0.0)*numpy.sign(T)

        # lagrange multiplier update
        U = T - Z
    return X, Z, U

def dictionary_learning( Y, num_atoms, rho=0.001, num_outer_iterations=10, num_inner_iterations=10, epsilon=1e-8 ):
    # initialize the dictionary and weights
    X = numpy.random.standard_normal( (num_atoms, Y.shape[1]) )
    D = numpy.random.standard_normal( (Y.shape[0], num_atoms) )

    # outer loop to fit dictionary and weights
    Z = X.copy()
    U = X - Z
    for outer in range(num_outer_iterations):
        print( '  Outer iteration [{}/{}]'.format(outer+1,num_outer_iterations) )

        # dictionary update
        D = numpy.linalg.solve(
            numpy.dot(Z,Z.transpose())+epsilon*numpy.eye(X.shape[0]),
            numpy.dot(Z,Y.transpose())
        ).transpose()
        for i in range(D.shape[1]):
            D[:,i] /= numpy.linalg.norm(D[:,i])

        # sparse coding weight update
        X, Z, U = sparse_coding( Y, D, X, rho, num_inner_iterations, Z, U )

        # print some stats
        print( '    ||Y-DX|| RMS error: {}'.format( numpy.sqrt(numpy.mean(numpy.square(Y - numpy.dot(D,Z)))) ) )
        print( '          mean(nnz(X)): {}'.format( numpy.mean( numpy.sum(numpy.abs(Z)>1e-4, axis=0) ) ) )

    # return dictionary and solution variables
    return D, Z
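As a smoke test of the inner loop, coding a synthetic $$Y = DX$$ with a known sparse $$X$$ should drive the residual well below the trivial $$Z=0$$ baseline. A sketch restating `sparse_coding` without the progress printout (the problem sizes and the loose tolerance are arbitrary choices, since $$\lambda$$ biases the fit):

```python
import numpy

def sparse_coding(Y, D, X, rho, num_iterations, Z=None, U=None):
    # ADMM sparse coding; rho here plays the role of the soft-threshold lambda
    if Z is None: Z = X.copy()
    if U is None: U = X - Z
    iDtD = numpy.linalg.inv(numpy.dot(D.T, D) + numpy.eye(D.shape[1]))
    DtY = numpy.dot(D.T, Y)
    for _ in range(num_iterations):
        X = numpy.dot(iDtD, DtY + Z - U)   # least-squares prox
        T = X + U
        Z = numpy.maximum(numpy.abs(T) - rho, 0.0) * numpy.sign(T)  # soft threshold
        U = T - Z
    return X, Z, U

numpy.random.seed(1)
D = numpy.random.standard_normal((8, 16))
D /= numpy.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
X_true = numpy.zeros((16, 20))
X_true[:3, :] = numpy.random.standard_normal((3, 20))  # only 3 active atoms
Y = numpy.dot(D, X_true)

X, Z, U = sparse_coding(Y, D, numpy.zeros_like(X_true), rho=0.01, num_iterations=200)
rel = numpy.linalg.norm(Y - numpy.dot(D, Z)) / numpy.linalg.norm(Y)
assert rel < 0.3  # far better than the Z = 0 baseline, where rel = 1
```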


## PyQt5 + PyOpenGL Example

It took me a little while to find working example code for PyOpenGL+PyQt5. The following exposes a GLUT-like interface where you can provide callback functions for basic window, mouse and keyboard events.

# simple_viewer.py
import sys

from PyQt5 import QtWidgets
from PyQt5 import QtCore
from PyQt5 import QtGui
from PyQt5 import QtOpenGL

class SimpleViewer(QtOpenGL.QGLWidget):

    initialize_cb    = QtCore.pyqtSignal()
    resize_cb        = QtCore.pyqtSignal(int,int)
    idle_cb          = QtCore.pyqtSignal()
    render_cb        = QtCore.pyqtSignal()

    mouse_move_cb    = QtCore.pyqtSignal( QtGui.QMouseEvent )
    mouse_press_cb   = QtCore.pyqtSignal( QtGui.QMouseEvent )
    mouse_release_cb = QtCore.pyqtSignal( QtGui.QMouseEvent )
    mouse_wheel_cb   = QtCore.pyqtSignal( QtGui.QWheelEvent )

    key_press_cb     = QtCore.pyqtSignal( QtGui.QKeyEvent )
    key_release_cb   = QtCore.pyqtSignal( QtGui.QKeyEvent )

    @staticmethod
    def application():
        """Create the QApplication that must exist before any widgets"""
        return QtWidgets.QApplication(sys.argv)

    def __init__(self, parent=None):
        QtOpenGL.QGLWidget.__init__(self, parent)
        self.setMouseTracking(True)

    def mouseMoveEvent( self, evt ):
        self.mouse_move_cb.emit( evt )

    def mousePressEvent( self, evt ):
        self.mouse_press_cb.emit( evt )

    def mouseReleaseEvent( self, evt ):
        self.mouse_release_cb.emit( evt )

    def keyPressEvent( self, evt ):
        self.key_press_cb.emit(evt)

    def keyReleaseEvent( self, evt ):
        self.key_release_cb.emit(evt)

    def initializeGL(self):
        self.initialize_cb.emit()

    def resizeGL(self, width, height):
        if height == 0: height = 1
        self.resize_cb.emit(width,height)

    def paintGL(self):
        self.render_cb.emit()


And here is a basic example using it:

"""About the simplest PyQt OpenGL example with decent interaction"""
import sys

from OpenGL.GL import *
from OpenGL.GLU import *

from simple_viewer import SimpleViewer

width = 800
height = 600
aspect = width/height

def resize( w, h ):
width = w
height = h
aspect = w/h
glViewport( 0, 0, width, height )

def initialize():
glEnable(GL_DEPTH_TEST)
glClearColor( 0.7, 0.7, 1.0, 0.0 )

def render():
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT )

glMatrixMode( GL_PROJECTION )
gluPerspective( 45.0, aspect, 0.1, 10.0 )

glMatrixMode( GL_MODELVIEW )
gluLookAt( 0.0, 2.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0 )

glPointSize(5.0)
glLineWidth(5.0)
glBegin(GL_LINES)
glColor3f(  1.0, 0.0, 0.0 )
glVertex3f( 1.0, 0.0, 0.0 )
glVertex3f( 0.0, 0.0, 0.0 )
glColor3f(  0.0, 1.0, 0.0 )
glVertex3f( 0.0, 1.0, 0.0 )
glVertex3f( 0.0, 0.0, 0.0 )
glColor3f(  0.0, 0.0, 1.0 )
glVertex3f( 0.0, 0.0, 1.0 )
glVertex3f( 0.0, 0.0, 0.0 )
glEnd()

def mouse_move( evt ):
print('Mouse move {}: [{},{}]'.format(evt.button(),evt.x(),evt.y()) )

def mouse_press( evt ):
print('Mouse press {}: [{},{}]'.format(evt.button(),evt.x(),evt.y()) )

def mouse_release( evt ):
print('Mouse release {}: [{},{}]'.format(evt.button(),evt.x(),evt.y()) )

def key_press( evt ):
print('Key press {}'.format(evt.key()) )

def key_release( evt ):
print('Key release {}'.format(evt.key()) )

# create the QApplication
app = SimpleViewer.application()

# set up the display
viewer = SimpleViewer()
viewer.resize_cb.connect( resize )
viewer.initialize_cb.connect( initialize )
viewer.render_cb.connect( render )

# keyboard & mouse interactions
viewer.key_press_cb.connect( key_press )
viewer.key_release_cb.connect( key_release )
viewer.mouse_press_cb.connect( mouse_press )
viewer.mouse_release_cb.connect( mouse_release )
viewer.mouse_move_cb.connect( mouse_move )

# resize the window
viewer.resize( width, height )
viewer.show()

# main loop
try:
sys.exit(app.exec_())
except SystemExit:
pass


## Reordering Matrix Products

Here are a few operations for dealing with objective functions of matrix-valued variables:

## Starting Definitions

Let $$A$$ be a $$N\times M$$ matrix with entry $$A_{ij}$$ being the entry at row $$i$$ and column $$j$$. The $$i$$'th row of $$A$$ is then $$A_{i*}$$ and the $$j$$'th column is $$A_{*j}$$.

## Row and Column Vectorization

Define the column vectorization operator which returns a single column vector containing the columns of $$A$$ stacked

\begin{equation*} \newcommand{\cvec}{{\bf\mbox{cvec}}} \cvec(A) = \begin{bmatrix} A_{*1} \\ A_{*2} \\ \vdots \\ A_{*M} \end{bmatrix} \end{equation*}

likewise the row vectorization operator returns a single column vector containing the (transposes of) rows of $$A$$ stacked:

\begin{equation*} \newcommand{\rvec}{{\bf\mbox{rvec}}} \rvec(A) = \cvec(A^T) = \begin{bmatrix} A_{1*}^T \\ A_{2*}^T \\ \vdots \\ A_{N*}^T \end{bmatrix} \end{equation*}

In numpy, matrices are stored in row-major order, so the Python definitions of these operations are most naturally expressed in terms of rvec, as below:

def cvec( A ):
    """Stack columns of A into single column vector"""
    return rvec(A.T)

def rvec( A ):
    """Stack rows of A into a single column vector"""
    return A.ravel()
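A tiny example makes the two orderings concrete (a sketch restating the helpers above):

```python
import numpy

def rvec(A):
    # stack rows of A into a single vector (row-major ravel)
    return A.ravel()

def cvec(A):
    # stack columns of A into a single vector
    return rvec(A.T)

A = numpy.array([[1, 2, 3],
                 [4, 5, 6]])
# rvec walks rows first, cvec walks columns first
assert numpy.array_equal(rvec(A), [1, 2, 3, 4, 5, 6])
assert numpy.array_equal(cvec(A), [1, 4, 2, 5, 3, 6])
```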


Similarly, the inverses of $$\cvec(A)$$ and $$\rvec(A)$$ unpack the vectorized values back into a matrix of the original shape:

\begin{align*} \newcommand{\cmat}{{\bf\mbox{cmat}}} \newcommand{\rmat}{{\bf\mbox{rmat}}} \cmat\left( \cvec(A), M \right) = A \\ \rmat\left( \rvec(A), N \right) = A \\ \end{align*}

with corresponding python definitions:

def rmat( v, nr ):
    """Reshape vector v into matrix with nr rows"""
    return v.reshape((nr,-1))

def cmat( v, nc ):
    """Reshape vector v into matrix with nc columns"""
    return v.reshape((nc,-1)).T
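A round-trip check makes the packing order concrete; a sketch with self-contained helper definitions (cmat reshapes the nc stacked columns row-major and then transposes):

```python
import numpy

def rvec(A): return A.ravel()
def cvec(A): return rvec(A.T)

def rmat(v, nr):
    # inverse of rvec: reshape into nr rows
    return v.reshape((nr, -1))

def cmat(v, nc):
    # inverse of cvec: v holds nc stacked columns, so build (nc, N) row-major, then transpose
    return v.reshape((nc, -1)).T

A = numpy.arange(12.0).reshape(3, 4)
# both vectorizations round-trip back to the original matrix
assert numpy.array_equal(rmat(rvec(A), 3), A)
assert numpy.array_equal(cmat(cvec(A), 4), A)
```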


Finally, define two additional operators. The first of these is the spread operator, which takes the Kronecker product between an identity matrix and the input matrix, resulting in copies of the input matrix along the diagonal:

\begin{equation*} \newcommand{\spread}[2]{{\bf\mbox{spread}}_{#2}\left(#1\right)} \spread{A}{r} = I_{r\times r} \otimes A = \begin{bmatrix} A & 0 & \dots & 0 \\ 0 & A & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & A \end{bmatrix} \end{equation*}

The second of these is the split operator, which reverses the order of the arguments to the Kronecker product, essentially replacing each entry $$A_{ij}$$ with $$A_{ij} I_{r\times r}$$:

\begin{equation*} \newcommand{\split}[2]{{\bf\mbox{split}}_{#2}\left(#1\right)} \split{A}{r} = A \otimes I_{r\times r} = \begin{bmatrix} A_{11} I_{r\times r} & A_{12} I_{r\times r} & \dots & A_{1M} I_{r\times r} \\ A_{21} I_{r\times r} & A_{22} I_{r\times r} & \dots & A_{2M} I_{r\times r} \\ \vdots & \vdots & \ddots & \vdots \\ A_{N1} I_{r\times r} & A_{N2} I_{r\times r} & \dots & A_{NM} I_{r\times r} \end{bmatrix} \end{equation*}

Python implementation of these are one-liners that just call the existing numpy Kronecker product:

def spread( M, n ):
    """Make n copies of M along the matrix diagonal"""
    return np.kron( np.eye(n), M )

def split( M, n ):
    """Replace each entry of M with an n-by-n identity matrix scaled by the entry value"""
    return np.kron( M, np.eye(n) )


With these, it is possible to rewrite matrix products as matrix-vector products acting on vectorized matrices. There are also some useful properties that follow from the Kronecker product (the inverse identities require $$M$$ to be invertible):

\begin{align*} \spread{M}{n}^T &=& \spread{M^T}{n} \\ \spread{M}{n}^{-1} &=& \spread{M^{-1}}{n} \\ \split{M}{n}^T &=& \split{M^T}{n} \\ \split{M}{n}^{-1} &=& \split{M^{-1}}{n} \end{align*}
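These properties are easy to confirm numerically; a sketch with self-contained definitions (the inverse checks assume $$M$$ is square and invertible):

```python
import numpy

def spread(M, n):
    # copies of M along the diagonal: I_n (x) M
    return numpy.kron(numpy.eye(n), M)

def split(M, n):
    # each entry of M scaled onto an identity block: M (x) I_n
    return numpy.kron(M, numpy.eye(n))

M = numpy.array([[2.0, 1.0],
                 [0.0, 3.0]])  # invertible
n = 3
# transposes and inverses pass through the Kronecker structure
assert numpy.allclose(spread(M, n).T, spread(M.T, n))
assert numpy.allclose(numpy.linalg.inv(spread(M, n)), spread(numpy.linalg.inv(M), n))
assert numpy.allclose(split(M, n).T, split(M.T, n))
assert numpy.allclose(numpy.linalg.inv(split(M, n)), split(numpy.linalg.inv(M), n))
```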

## Matrix Products

Let $$X$$ be a $$N\times K$$ matrix and $$Y$$ be a $$K\times M$$ matrix, then their product $$W = XY$$ will be a $$N\times M$$ matrix and the following are true:

\begin{align*} \rvec(W) &=& \split{X}{M} \rvec(Y) \\ \cvec(W) &=& \spread{X}{M} \cvec(Y) \\ \rvec(W) &=& \spread{Y^T}{N} \rvec(X) \\ \cvec(W) &=& \split{Y^T}{N} \cvec(X) \end{align*}
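All four identities can be verified numerically in a few lines (a sketch restating the helpers above, with arbitrary dimensions):

```python
import numpy

def rvec(A): return A.ravel()
def cvec(A): return rvec(A.T)
def spread(M, n): return numpy.kron(numpy.eye(n), M)
def split(M, n): return numpy.kron(M, numpy.eye(n))

N, K, M = 3, 4, 5
X = numpy.random.standard_normal((N, K))
Y = numpy.random.standard_normal((K, M))
W = numpy.dot(X, Y)

# the four vectorized-product identities
assert numpy.allclose(rvec(W), numpy.dot(split(X, M), rvec(Y)))
assert numpy.allclose(cvec(W), numpy.dot(spread(X, M), cvec(Y)))
assert numpy.allclose(rvec(W), numpy.dot(spread(Y.T, N), rvec(X)))
assert numpy.allclose(cvec(W), numpy.dot(split(Y.T, N), cvec(X)))
```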

What these let you do is unbury any (single) term that's in the middle of some horrendous expression. For example, say there's an expression $$W = AXB$$ with $$A \in \mathcal{R}^{N\times R}$$, $$X \in \mathcal{R}^{R\times S}$$ and $$B \in \mathcal{R}^{S \times M}$$ and you want to isolate $$X$$ as a column vector. Then you have:

\begin{align*} \cvec(W) &=& \spread{A}{M} \split{B^T}{R} \cvec(X) \\ &=& \left( I_{M\times M} \otimes A \right) \left( B^T \otimes I_{R\times R} \right) \cvec(X) \\ &=& \left( I_{M\times M} B^T \right) \otimes \left( A I_{R\times R} \right) \cvec(X) \\ &=& \left( B^T \otimes A \right) \cvec(X) \end{align*}

The step in the third line is the Kronecker product identity:

\begin{equation*} \left( A \otimes B \right) \left( C \otimes D \right ) = \left( A C \right) \otimes \left( B D \right) \end{equation*}

The same steps can also be done if you want row-major storage of $$X$$:

\begin{align*} \rvec(W) &=& \split{A}{M} \spread{B^T}{R} \rvec(X) \\ &=& \left( A \otimes I_{M\times M} \right) \left( I_{R\times R} \otimes B^T \right) \rvec(X) \\ &=& \left( A \otimes B^T \right) \rvec(X) \end{align*}

In these cases the dimensions work out correctly, but you don't need to worry too much about them since the dimensions of $$X$$ are constrained by the column count of $$A$$ and the row count of $$B$$. This only applies when operating with them symbolically, of course.

## Frobenius Norm Problems

With this information, it becomes pretty easy to deal with Frobenius norm problems like:

\begin{equation*} X = \mbox{argmin} \frac{1}{2} \| A X B - Y \|_F^2 \end{equation*}

I have found these frustrating in the past since I invariably end up in indexing hell. But it becomes much easier with the identities above. Applying them isolates $$X$$ on the right-hand side of the expression and also converts the Frobenius norm to an $$L_2$$ norm. Making the substitutions $$\tilde{x} = \cvec(X)$$ and $$\tilde{y} = \cvec(Y)$$,

\begin{align*} \tilde{x}_* &=& \mbox{argmin} \frac{1}{2} \| \left( B^T \otimes A \right) \tilde{x} - \tilde{y} \|_2^2 \\ &=& \mbox{argmin} \frac{1}{2} \tilde{x}^T \left( B \otimes A^T \right) \left( B^T \otimes A \right) \tilde{x} - \tilde{x}^T \left( B \otimes A^T \right) \tilde{y} + \mbox{constants} \\ &=& \mbox{argmin} \frac{1}{2} \tilde{x}^T \left( B B^T \otimes A^T A \right) \tilde{x} - \tilde{x}^T \left( B \otimes A^T \right) \tilde{y} \end{align*}

This now looks like a standard least-squares problem, albeit one with this Kronecker product mashed into it. Setting the gradient to zero gives the solution as:

\begin{align*} \tilde{x}_* &=& \left( B B^T \otimes A^T A \right)^{-1} \left( B \otimes A^T \right ) \tilde{y} \\ &=& \left( (B B^T)^{-1} \otimes (A^T A)^{-1} \right) \left( B \otimes A^T \right) \tilde{y} \\ &=& \left( (B B^T)^{-1} B \otimes (A^T A)^{-1} A^T \right) \tilde{y} \end{align*}

This result has a similar form to $$\cvec(AXB) = \left( B^T \otimes A\right)\cvec(X)$$, which can be used to get back to the matrix form of the expression:

\begin{align*} X &=& (A^T A)^{-1} A^T Y \left( (B B^T)^{-1} B\right)^T \\ &=& (A^T A)^{-1} A^T Y B^T (B B^T)^{-1} \end{align*}

It's also possible to come at this from a different direction

\begin{align*} X &=& \mbox{argmin} \frac{1}{2} \| A X B - Y \|_F^2 \\ &=& \mbox{argmin} \frac{1}{2} \mbox{Tr}\left( (AXB - Y)(AXB - Y)^T \right) \end{align*}

The gradient of this is given in the Matrix Cookbook.

\begin{equation*} \frac{\partial}{\partial X} \frac{1}{2} \mbox{Tr}\left( (AXB - Y)(AXB - Y)^T \right) = A^T \left( A X B - Y \right) B^T \end{equation*}

Setting the gradient of this to zero and isolating $$X$$ gives the following:

\begin{align*} A^T A X B B^T - A^T Y B^T = 0 \\ X = (A^T A)^{-1} A^T Y B^T (B B^T)^{-1} \end{align*}

which matches the previous result.
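The closed-form solution is also easy to check numerically: with consistently-generated data the residual vanishes and the true $$X$$ is recovered. A sketch, with shapes chosen so that $$A^T A$$ and $$B B^T$$ are generically invertible:

```python
import numpy

numpy.random.seed(0)
N, R, S, M = 6, 3, 4, 7
A = numpy.random.standard_normal((N, R))
X_true = numpy.random.standard_normal((R, S))
B = numpy.random.standard_normal((S, M))
Y = numpy.dot(numpy.dot(A, X_true), B)  # consistent data: A X B = Y exactly

# closed-form minimizer X = (A^T A)^-1 A^T Y B^T (B B^T)^-1
AtA = numpy.dot(A.T, A)
BBt = numpy.dot(B, B.T)
X = numpy.linalg.solve(AtA, numpy.dot(numpy.dot(A.T, Y), B.T))
X = numpy.dot(X, numpy.linalg.inv(BBt))

assert numpy.allclose(numpy.dot(numpy.dot(A, X), B), Y)
assert numpy.allclose(X, X_true)
```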

## Rotation Matrix Euler Angles

This page describes a basic approach to extracting Euler angles from rotation matrices. It follows what OpenCV does (which is where I got the process), but I also wanted to understand why this approach is used.

Here I'm considering a rotation performed first around the x-axis ($$\alpha$$), then the y-axis ($$\beta$$), and finally the z-axis ($$\gamma$$). For this order, the rotation matrix is shown below. Other rotation orders are analogous.

\begin{equation*} R(\alpha,\beta,\gamma) = \left[\begin{matrix}\cos{\left (\beta \right )} \cos{\left (\gamma \right )} & \sin{\left (\alpha \right )} \sin{\left (\beta \right )} \cos{\left (\gamma \right )} - \sin{\left (\gamma \right )} \cos{\left (\alpha \right )} & \sin{\left (\alpha \right )} \sin{\left (\gamma \right )} + \sin{\left (\beta \right )} \cos{\left (\alpha \right )} \cos{\left (\gamma \right )} & 0\\\sin{\left (\gamma \right )} \cos{\left (\beta \right )} & \sin{\left (\alpha\right )} \sin{\left (\beta \right )} \sin{\left (\gamma \right )} + \cos{\left (\alpha \right )} \cos{\left (\gamma \right )} & - \sin{\left (\alpha \right )} \cos{\left (\gamma \right )} + \sin{\left (\beta \right )} \sin{\left (\gamma \right )} \cos{\left (\alpha \right )} & 0\\- \sin{\left (\beta \right )} & \sin{\left (\alpha \right )} \cos{\left (\beta \right )} & \cos{\left (\alpha \right )} \cos{\left (\beta \right )} & 0\\0 & 0 & 0 & 1\end{matrix}\right] \end{equation*}

To solve for the Euler angles, we can use a little bit of trigonometry. The best way, in my opinion, is to find pairs of entries that are each products of two factors. The identity $$\sin^2 + \cos^2 = 1$$ can then be used to isolate specific angles; e.g. $$R_{0,0}$$ and $$R_{1,0}$$ can be used to find $$\cos(\beta)$$, which unlocks the rest of the angles:

\begin{align*} (-1)^k \cos(\beta + k \pi) = \sqrt{ \cos^2(\beta) ( \sin^2(\gamma) + \cos^2(\gamma)) } = \sqrt{ R_{0,0}^2 + R_{1,0}^2 } \\ \cos(\beta) \approx \sqrt{ R_{0,0}^2 + R_{1,0}^2 } \end{align*}

Pay attention to the first line: for any true value of $$\beta$$, this formula returns the same value at $$\beta + k\pi$$ for any integer $$k$$. The recovered $$\alpha$$ and $$\gamma$$ must then be chosen to be compatible with whichever $$\beta$$ is returned. Taking the positive square root in the approximate equality restricts $$\beta$$ to $$[-\pi/2, \pi/2]$$.

Once $$\cos(\beta)$$ is found we can find the angles directly using atan2:

\begin{align*} \beta = \mbox{atan2}( -R_{2,0}, \cos(\beta) ) \\ \alpha = \mbox{atan2}\left( \frac{R_{2,1}}{\cos(\beta)}, \frac{R_{2,2}}{\cos(\beta)} \right) \\ \gamma = \mbox{atan2}\left( \frac{R_{1,0}}{\cos(\beta)}, \frac{R_{0,0}}{\cos(\beta)} \right) \end{align*}

The other issue occurs when $$|\cos(\beta)| = 0$$, which causes division by zero in the equations for $$\alpha,\gamma$$ ($$\beta$$ is still well defined). In this case there are a number of options. What OpenCV does is to arbitrarily decide that $$\gamma = 0$$, which means that $$\sin(\gamma) = 0$$ and $$\cos(\gamma) = 1$$. The formulas in this case are:

\begin{align*} \beta = \mbox{atan2}( -R_{2,0}, \cos(\beta) ) \\ \alpha = \mbox{atan2}( -R_{1,2}, R_{1,1} ) \\ \gamma = 0 \end{align*}

There are, of course, other options. Python code implementing these operations is below, along with unit tests for the degenerate and non-degenerate cases.

import unittest
import numpy

def x_rotation( theta ):
    """Generate a 4x4 homogeneous rotation matrix about x-axis"""
    c = numpy.cos(theta)
    s = numpy.sin(theta)
    return numpy.array([
        [ 1, 0, 0, 0],
        [ 0, c,-s, 0],
        [ 0, s, c, 0],
        [ 0, 0, 0, 1]
    ])

def y_rotation( theta ):
    """Generate a 4x4 homogeneous rotation matrix about y-axis"""
    c = numpy.cos(theta)
    s = numpy.sin(theta)
    return numpy.array([
        [ c, 0, s, 0],
        [ 0, 1, 0, 0],
        [-s, 0, c, 0],
        [ 0, 0, 0, 1]
    ])

def z_rotation( theta ):
    """Generate a 4x4 homogeneous rotation matrix about z-axis"""
    c = numpy.cos(theta)
    s = numpy.sin(theta)
    return numpy.array([
        [ c,-s, 0, 0],
        [ s, c, 0, 0],
        [ 0, 0, 1, 0],
        [ 0, 0, 0, 1]
    ])

def xyz_rotation( angles ):
    """Generate 4x4 homogeneous rotation in x then y then z order"""
    return numpy.dot( z_rotation(angles[2]), numpy.dot( y_rotation(angles[1]), x_rotation(angles[0]) ) )

def xyz_rotation_angles( R, eps=1e-8, check=False ):
    """Back out the Euler angles that would lead to the given matrix"""

    if check and numpy.linalg.norm( numpy.dot( R.transpose(), R ) - numpy.eye(4) ) > eps:
        raise ValueError('Input matrix is not a pure rotation, R\'R != I')

    cos_beta = numpy.sqrt( R[0,0]**2 + R[1,0]**2 )
    if numpy.abs(cos_beta) > eps:
        alpha = numpy.arctan2( R[2,1]/cos_beta, R[2,2]/cos_beta )
        beta  = numpy.arctan2(-R[2,0], cos_beta )
        gamma = numpy.arctan2( R[1,0]/cos_beta, R[0,0]/cos_beta )
    else:
        alpha = numpy.arctan2(-R[1,2], R[1,1] )
        beta  = numpy.arctan2(-R[2,0], cos_beta )
        gamma = 0
    return numpy.array((alpha,beta,gamma))

class test_euler_angles(unittest.TestCase):
    def test_angles(self):
        """Do fuzz testing on arbitrary rotations, limiting beta to valid range"""
        for i in range(1000):
            alpha = (numpy.random.rand()-0.5)*numpy.pi*2
            beta  = (numpy.random.rand()-0.5)*numpy.pi*0.9999
            gamma = (numpy.random.rand()-0.5)*numpy.pi*2

            ang = xyz_rotation_angles( xyz_rotation([alpha,beta,gamma]) )
            self.assertAlmostEqual( alpha, ang[0] )
            self.assertAlmostEqual( beta,  ang[1] )
            self.assertAlmostEqual( gamma, ang[2] )

    def test_degeneracies(self):
        """Do fuzz testing on the degenerate condition of beta = +- pi/2"""
        for i in range(1000):
            alpha = (numpy.random.rand()-0.5)*numpy.pi*2
            beta  = numpy.sign(numpy.random.randn())*numpy.pi/2
            gamma = (numpy.random.rand()-0.5)*numpy.pi*2
            R     = xyz_rotation((alpha,beta,gamma))
            ang   = xyz_rotation_angles( R )
            R2    = xyz_rotation( ang )
            # check that beta is recovered, gamma is set to zero and
            # the rotation matrices match to high precision
            self.assertAlmostEqual( beta, ang[1] )
            self.assertAlmostEqual( ang[2], 0 )
            for j in range(16):
                self.assertAlmostEqual( R.ravel()[j], R2.ravel()[j] )

if __name__ == '__main__':
    unittest.main()


## Transformation Matrix Jacobians

Starting with the $$4\times4$$ homogeneous transform matrix with parameters $$\beta = [ \theta_x, \theta_y, \theta_z, t_x, t_y, t_z ]^T$$ where rotations are performed in XYZ order and using the following substitutions:

\begin{align*} c_x = \cos(\theta_x) \\ s_x = \sin(\theta_x) \\ c_y = \cos(\theta_y) \\ s_y = \sin(\theta_y) \\ c_z = \cos(\theta_z) \\ s_z = \sin(\theta_z) \\ \end{align*}

the vector function mapping a point $$p = [p_x, p_y, p_z, 1]^T$$ in the body coordinate system to a point in the world coordinate system $$w = [w_x, w_y, w_z, 1]^T$$ is:

\begin{equation*} \begin{bmatrix} w_x \\ w_y \\ w_z \\ 1 \end{bmatrix} = F( p, \beta ) = \left[\begin{matrix}c_{y} c_{z} & - c_{x} s_{z} + c_{z} s_{x} s_{y} & c_{x} c_{z} s_{y} + s_{x} s_{z} & t_{x}\\c_{y} s_{z} & c_{x} c_{z} + s_{x} s_{y} s_{z} & c_{x} s_{y} s_{z} - c_{z} s_{x} & t_{y}\\- s_{y} & c_{y} s_{x} & c_{x} c_{y} & t_{z}\\0 & 0 & 0 & 1\end{matrix}\right]\begin{bmatrix} p_x \\ p_y \\ p_z \\ 1 \end{bmatrix} \end{equation*}

and the Jacobian with respect to the parameters $$\beta$$ is:

\begin{equation*} J_F = \left[\begin{matrix}p_{y} \left(c_{x} c_{z} s_{y} + s_{x} s_{z}\right) + p_{z} \left(c_{x} s_{z} - c_{z} s_{x} s_{y}\right) & c_{x} c_{y} c_{z} p_{z} + c_{y} c_{z} p_{y} s_{x} - c_{z}p_{x} s_{y} & - c_{y} p_{x} s_{z} + p_{y} \left(- c_{x} c_{z} - s_{x} s_{y} s_{z}\right) + p_{z} \left(- c_{x} s_{y} s_{z} + c_{z} s_{x}\right) & 1 & 0 & 0\\p_{y} \left(c_{x} s_{y} s_{z} - c_{z} s_{x}\right) + p_{z} \left(- c_{x} c_{z} - s_{x} s_{y} s_{z}\right) & c_{x} c_{y} p_{z} s_{z} + c_{y} p_{y} s_{x} s_{z} - p_{x} s_{y} s_{z} & c_{y} c_{z} p_{x} + p_{y} \left(- c_{x} s_{z} + c_{z} s_{x} s_{y}\right) + p_{z} \left(c_{x} c_{z} s_{y} + s_{x} s_{z}\right) & 0 & 1 & 0\\c_{x} c_{y} p_{y} - c_{y} p_{z} s_{x} & - c_{x} p_{z} s_{y} - c_{y} p_{x} -p_{y} s_{x} s_{y} & 0 & 0 & 0 & 1\\0 & 0 & 0 & 0 & 0 & 0\end{matrix}\right] \end{equation*}

Python code for these respective operations is below:

def make_transform( beta ):
    c_x = numpy.cos(beta[0])
    s_x = numpy.sin(beta[0])
    c_y = numpy.cos(beta[1])
    s_y = numpy.sin(beta[1])
    c_z = numpy.cos(beta[2])
    s_z = numpy.sin(beta[2])
    t_x = beta[3]
    t_y = beta[4]
    t_z = beta[5]
    return numpy.array([
        [c_y*c_z, -c_x*s_z + c_z*s_x*s_y, c_x*c_z*s_y + s_x*s_z, t_x],
        [c_y*s_z, c_x*c_z + s_x*s_y*s_z, c_x*s_y*s_z - c_z*s_x, t_y],
        [-s_y, c_y*s_x, c_x*c_y, t_z],
        [0, 0, 0, 1]
    ])

def make_transform_jacobian( beta, p ):
    c_x = numpy.cos(beta[0])
    s_x = numpy.sin(beta[0])
    c_y = numpy.cos(beta[1])
    s_y = numpy.sin(beta[1])
    c_z = numpy.cos(beta[2])
    s_z = numpy.sin(beta[2])
    p_x = p[0]
    p_y = p[1]
    p_z = p[2]
    return numpy.array([
        [p_y*(c_x*c_z*s_y + s_x*s_z) + p_z*(c_x*s_z - c_z*s_x*s_y), c_x*c_y*c_z*p_z + c_y*c_z*p_y*s_x - c_z*p_x*s_y, -c_y*p_x*s_z + p_y*(-c_x*c_z - s_x*s_y*s_z) + p_z*(-c_x*s_y*s_z + c_z*s_x), 1, 0, 0],
        [p_y*(c_x*s_y*s_z - c_z*s_x) + p_z*(-c_x*c_z - s_x*s_y*s_z), c_x*c_y*p_z*s_z + c_y*p_y*s_x*s_z - p_x*s_y*s_z, c_y*c_z*p_x + p_y*(-c_x*s_z + c_z*s_x*s_y) + p_z*(c_x*c_z*s_y + s_x*s_z), 0, 1, 0],
        [c_x*c_y*p_y - c_y*p_z*s_x, -c_x*p_z*s_y - c_y*p_x - p_y*s_x*s_y, 0, 0, 0, 1],
        [0, 0, 0, 0, 0, 0]
    ])


I generated these using sympy to build the transformations and used a finite-difference Jacobian function to check the output.
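A finite-difference check along these lines is easy to sketch. The helper below is a hypothetical stand-in for the one actually used (the name and signature are my own): it perturbs each parameter in turn and compares the resulting slopes against an analytic Jacobian.

```python
import numpy

def finite_difference_jacobian( f, beta, h=1e-6 ):
    """Approximate the Jacobian of a vector-valued f at beta by central differences."""
    beta = numpy.asarray( beta, dtype=float )
    f0 = numpy.asarray( f( beta ) ).ravel()
    J = numpy.zeros( (f0.size, beta.size) )
    for k in range( beta.size ):
        bp = beta.copy(); bp[k] += h
        bm = beta.copy(); bm[k] -= h
        J[:,k] = (numpy.asarray( f( bp ) ).ravel() - numpy.asarray( f( bm ) ).ravel())/(2.0*h)
    return J

# e.g. compare make_transform_jacobian( beta, p ) against
# finite_difference_jacobian( lambda b: make_transform( b ) @ [p[0], p[1], p[2], 1.0], beta )
```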

## Create Docker Image

This process follows this link. First update apt and install docker.io:

$ sudo apt-get update
$ sudo apt install docker.io


Next start the service and enable it at boot time:

$ sudo systemctl start docker
$ sudo systemctl enable docker


Finally add your current user to the docker group:

$ sudo usermod -a -G docker $USER


and log out of your account and back in.

## Create Dockerfile

Create a blank Dockerfile

$ touch Dockerfile

and then edit it to look like the following:

# start with ubuntu
FROM ubuntu:16.04

# update the software repository
RUN apt-get update
RUN apt-get install -y python3-dev

# add user appuser and switch to it
RUN useradd -ms /bin/bash appuser
USER appuser
WORKDIR /home/appuser

# copy over a test message and python
# script to print it to the screen
COPY data/ ./data/
COPY print_message.py .

# run the script at container startup
CMD ["python3", "print_message.py"]

Next make a directory to share with the image and put a simple message file into it.

$ mkdir data
$ echo "Hello from docker" > data/message.txt

Then create a python script to print the message:

if __name__ == '__main__':
    with open('data/message.txt') as f:
        data = f.read()
    print( data )

Build the image and run it:

$ docker build -t test_image .
$ docker run -it test_image
Hello from docker


And that's all there is to it.

## Deconvolution via ADMM in Python - Part 2: Code

In the previous part of this post I introduced deconvolution and built up the core of a total-variation deconvolution algorithm based on ADMM. The full solver is below:

def deconvolve_ADMM( K, B, num_iters=40, rho=1.0, gamma=0.1 ):
    # store the image size
    dim = B.shape

    # the kernel must already be padded to the image size
    if K.shape != dim:
        raise ValueError('B and K must have same shape')

    # define the two derivative operators
    Dx = Kernel2D( [ 0, 0], [-1, 0], [-1.0, 1.0] )
    Dy = Kernel2D( [-1, 0], [ 0, 0], [-1.0, 1.0] )

    # define an initial solution estimate
    I = numpy.zeros( dim )

    # define the two splitting variables and Lagrange multipliers
    Zx = Dx.mul( I )
    Zy = Dy.mul( I )
    Ux = numpy.zeros( Zx.shape )
    Uy = numpy.zeros( Zy.shape )

    # cache the necessary terms for the I update, need to circularly
    # shift the kernel so its DC spot lies at the corner
    fK  = numpy.fft.fft2( numpy.roll( K/numpy.sum(K), (dim[0]//2,dim[1]//2), axis=(0,1) ) )
    fB  = numpy.fft.fft2( B )
    fDx = Dx.spectrum( dim )
    fDy = Dy.spectrum( dim )

    # build the numerator and denominator
    num_init = numpy.conj( fK )*fB
    den      = numpy.conj( fK )*fK + rho*( numpy.conj(fDx)*fDx + numpy.conj(fDy)*fDy )

    # define the L1 soft-thresholding operator
    soft_threshold = lambda q: numpy.sign(q)*numpy.maximum( numpy.abs( q ) - gamma/rho, 0.0 )

    # main solver loop
    for it in range( num_iters ):

        # I-update
        V = rho*( Dx.mul( Zx - Ux, trans=True) + Dy.mul( Zy - Uy, trans=True ) )
        I = numpy.real( numpy.fft.ifft2( (num_init + numpy.fft.fft2(V))/den ) )

        # Z-updates, cache the gradient filter results
        tmp_x = Dx.mul( I )
        tmp_y = Dy.mul( I )
        Zx = soft_threshold( tmp_x + Ux )
        Zy = soft_threshold( tmp_y + Uy )

        # multiplier update
        Ux += tmp_x - Zx
        Uy += tmp_y - Zy

    # return reconstructed result
    return I


That is it.

## Deconvolution via ADMM in Python - Part 1: Math

Deconvolution is the process of removing blur from a signal. For images, there are two main types:

• Blind deconvolution removes blur without specific knowledge of what the exact blur is. It generally requires estimating the blur kernel.
• Non-blind deconvolution removes blur but is provided the blur kernel directly. It is an easier problem than blind deconvolution. Non-blind deconvolution is often used as a component of blind deconvolution algorithms.

Here I am focusing on non-blind deconvolution.

## Image Formation Model

When the blur kernel is the same over the entire image, blurry images can be modeled mathematically as a convolution of the blur kernel with an unknown sharp image, i.e.:

\begin{equation*} {\bf B} = {\bf K} \otimes {\bf I} + \epsilon \end{equation*}

The blur kernel $${\bf K}$$ is the image that is produced in the blurry image by a single point (pixel) of the unknown sharp image $${\bf I}$$. It's often called the point spread function and, with the blurry image $${\bf B}$$, is an input to the deconvolution algorithm. The last term $$\epsilon$$ models noise in the captured signal and is often assumed to be Gaussian, although in practice it often is not.

The convolution operation is linear, so in an ideal world we would just apply its inverse to recover the sharp image. What makes deconvolution challenging is that the blur kernel usually destroys at least some information, which means the inverse does not exist. Consequently, solving a deconvolution problem actually means finding one of the infinitely many possible sharp images that, convolved with the blur kernel, would reproduce the measured blurry image.

## Optimizing for $${\bf I}$$

Making the typical assumption of $$\epsilon$$ being Gaussian noise, a solution for the sharp image $${\bf I}$$ can be found by looking for an underlying sharp image that best explains, in a least-squares sense, the measured data $${\bf B}$$.

\begin{equation*} {\bf I} = \mbox{argmin}_{\bf I} \frac{1}{2} \| {\bf K} \otimes {\bf I} - {\bf B} \|_2^2 \end{equation*}

Denoting $$\mathcal{F}(\bf K)$$ and $$\mathcal{F}^{-1}(\bf K)$$ as the forward and inverse Fourier transform of $${\bf K}$$ and $$*$$ as complex conjugation, this equation can be solved very quickly in the Fourier domain:

\begin{equation*} {\bf I} = \mathcal{F}^{-1}\left[ \frac{\mathcal{F}({\bf K})^*\mathcal{F}(\bf B)}{|\mathcal{F}({\bf K})|^2} \right] \end{equation*}

Unfortunately the results are garbage. Zero or very low magnitude values of the Fourier transform of $${\bf K}$$ cause divide-by-zero issues and amplify noise. These tiny values are due to convolution by $${\bf K}$$ attenuating certain frequency bands to nothing. It makes sense intuitively since blurry images lose the sharp edges and fine details of focused images so the Fourier modes representing these are lost forever. Trying to undo the process is poorly defined and so the solve above just finds anything that minimizes the objective, whether it looks like an image or not.
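A tiny numpy sketch makes the failure concrete. The toy 63x63 image and 9-pixel box blur below are hypothetical choices for illustration: even a minuscule amount of noise explodes when divided by the near-zero entries of $$\mathcal{F}({\bf K})$$.

```python
import numpy

# toy forward model: blur a single point with a 9-pixel box kernel, add faint noise
N = 63
rng = numpy.random.default_rng(0)
I = numpy.zeros((N, N)); I[N//2, N//2] = 1.0     # "sharp" image: a single point
K = numpy.zeros((N, N)); K[0, :9] = 1.0/9.0      # box blur, DC value at the corner
fK = numpy.fft.fft2(K)
B = numpy.real(numpy.fft.ifft2(fK*numpy.fft.fft2(I)))
B += 1e-6*rng.standard_normal((N, N))            # tiny sensor noise

# the box kernel annihilates entire frequency bands...
print(numpy.abs(fK).min())                       # numerically zero

# ...so the naive inverse divides noise by (nearly) zero and returns garbage
naive = numpy.real(numpy.fft.ifft2(numpy.fft.fft2(B)/fK))
print(numpy.abs(naive).max())                    # enormous compared to max(I) = 1
```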

It's possible to stabilize the above a bit by adding fudge factors here and there but the results are not very compelling. A better approach is to add regularizers to the optimization that favor specific types of underlying sharp images. One very effective set is the total variation which is incorporated below:

\begin{equation*} {\bf I} = \mbox{argmin}_{\bf I} \frac{1}{2} \| {\bf K} \otimes {\bf I} - {\bf B} \|_2^2 + \gamma \| \frac{\partial}{\partial x} {\bf I} \|_1 + \gamma \| \frac{\partial}{\partial y} {\bf I} \|_1 \end{equation*}

The two extra terms penalize the gradients of the image and reach a zero value only for images that are constant. Using a 1-norm makes them less sensitive to large jumps in value than a 2-norm would be which makes these priors favor piecewise constant reconstructions. This is qualitatively similar to natural images that feature piecewise smooth features.
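A quick numeric illustration of this, using a hypothetical unit-height edge: spreading the edge across $$k$$ pixels leaves the gradient's 1-norm unchanged, while its squared 2-norm falls off as $$1/k$$, so a quadratic prior rewards smearing edges and the total variation does not.

```python
import numpy

# gradient of a unit-height edge smeared over k pixels: k samples of 1/k
for k in [1, 2, 4, 8]:
    g = numpy.full(k, 1.0/k)
    print(k, numpy.abs(g).sum(), numpy.sum(g**2))  # 1-norm stays 1.0, squared 2-norm is 1/k
```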

The only problem is that these 1-norm terms make the optimization much more difficult and costly to solve. However, the effort is worth it.

## Formulating the Optimization with ADMM

The problem with solving the optimization above is that the 1-norm terms are non-smooth. This is challenging for optimization methods. Fortunately, there are some very effective tools for solving 1-norm problems in isolation, and a framework called the Alternating Direction Method of Multipliers (ADMM) can be used to transform intractable monolithic objective functions (such as the one above) into several coupled simpler problems.

By being careful about which terms go where, this can result in very efficient solves for problems involving complex priors. What follows is only one option for the above problem.

The overall goal is to split the $$L_1$$ solves into separate pixel-wise solves. This is done by introducing new variables and constraints that make the problem look more complex but actually help. The data term will stay as a coupled $$L_2$$ problem and will absorb the coupling terms as well (which are also $$L_2$$) and serve to stabilize the solve.

The first step is to isolate the $$L_1$$ terms. The differential operators in the priors have the effect of coupling all pixels to each other, but the fast solvers for $$L_1$$ problems expect terms that look like $$\|{\bf A}\|_1$$ for some image $${\bf A}$$. This can be accomplished by introducing splitting variables $${\bf Z}_x,{\bf Z}_y$$ with appropriate constraints.

\begin{align*} {\bf I} = \mbox{argmin} & & \frac{1}{2} \| {\bf K} \otimes {\bf I} - {\bf B} \|_2^2 + \gamma \| {\bf Z}_x \|_1 + \gamma \| {\bf Z}_y \|_1 \\ \mbox{subject to} & & \frac{\partial}{\partial x} {\bf I} - {\bf Z}_x = 0 \\ & & \frac{\partial}{\partial y} {\bf I} - {\bf Z}_y = 0 \end{align*}

It is also important to make sure that the $${\bf Z}_x, {\bf Z}_y$$ terms appear without complex linear operators in the constraints. This ensures that the terms will diagonalize to a large set of pixelwise 1D optimizations rather than couple together.

With the splitting variables introduced, the constraints can be incorporated into the objective via penalty terms with (scaled) Lagrange multipliers $${\bf U}_x,{\bf U}_y$$. The multipliers keep the constraints stiff. Without the multipliers, the high weights needed on the penalty terms to enforce the constraints can dominate the solves.

Applying this step transforms the constrained problem back into a (much more complicated) unconstrained one:

\begin{equation*} {\bf I} = \mbox{argmin} \frac{1}{2} \| {\bf K} \otimes {\bf I} - {\bf B} \|_2^2 + \frac{\rho}{2} \| \frac{\partial}{\partial x} {\bf I} - {\bf Z}_x + {\bf U}_x \|_2^2 + \frac{\rho}{2} \| \frac{\partial}{\partial y} {\bf I} - {\bf Z}_y + {\bf U}_y \|_2^2 + \gamma \| {\bf Z}_x \|_1 + \gamma \| {\bf Z}_y \|_1 \end{equation*}

This looks like a mess, but is just the original data term, the two added $$L_1$$ terms and two additional $$L_2$$ coupling terms. Calling the whole thing $$F({\bf I},{\bf Z}_x, {\bf Z}_y, {\bf U}_x, {\bf U}_y)$$ the ADMM algorithm solves this by the following steps:

• Updates $${\bf I}$$ by solving $${\bf I} = \mbox{argmin}_{\bf I} F({\bf I},{\bf Z}_x, {\bf Z}_y, {\bf U}_x, {\bf U}_y)$$
• Updates $${\bf Z}_x$$ by solving $${\bf Z}_x = \mbox{argmin}_{{\bf Z}_x} F({\bf I},{\bf Z}_x, {\bf Z}_y, {\bf U}_x, {\bf U}_y)$$
• Updates $${\bf U}_x$$ by setting $${\bf U}_x = {\bf U}_x + \frac{\partial}{\partial x} {\bf I} - {\bf Z}_x$$
• Repeats the last two steps, appropriately modified, for $${\bf Z}_y$$ and $${\bf U}_y$$ respectively.

### The $${\bf I}$$ Update

The first step of ADMM is to update $${\bf I}$$. This is a straightforward $$L_2$$ problem since $${\bf I}$$ appears only in terms with a 2-norm. Denoting $$\frac{\partial}{\partial x}{\bf I}$$ as $${\bf D}_x \otimes {\bf I}$$ and likewise for $$\frac{\partial}{\partial y}$$, the solve ends up being:

\begin{equation*} {\bf I} = \mbox{argmin} \frac{1}{2} \| {\bf K} \otimes {\bf I} - {\bf B} \|_2^2 + \frac{\rho}{2} \| {\bf D}_x \otimes {\bf I} - {\bf Z}_x + {\bf U}_x \|_2^2 + \frac{\rho}{2} \| {\bf D}_y \otimes {\bf I} - {\bf Z}_y + {\bf U}_y \|_2^2 \end{equation*}

All the terms are effectively the same: a minimization over the residual of a kernel convolved with the target image, compared against a measured value. The minimum is found by expanding out each term and finding the target image that sets the gradient of the expanded expression to zero.

The Fourier-domain solution for the first term alone was given earlier in the post:

\begin{equation*} {\bf I} = \mathcal{F}^{-1}\left[ \frac{\mathcal{F}({\bf K})^*\mathcal{F}(\bf B)}{|\mathcal{F}({\bf K})|^2} \right] \end{equation*}

The remaining terms follow the same pattern and sum in the numerator and denominator of the solve:

\begin{equation*} {\bf I} = \mathcal{F}^{-1}\left[ \frac{ \mathcal{F}({\bf K})^*\mathcal{F}(\bf B) +\rho \mathcal{F}({\bf D}_x)^*\mathcal{F}({\bf Z}_x - {\bf U}_x) +\rho \mathcal{F}({\bf D}_y)^*\mathcal{F}({\bf Z}_y - {\bf U}_y) }{|\mathcal{F}({\bf K})|^2 + \rho |\mathcal{F}({\bf D}_x)|^2 + \rho |\mathcal{F}({\bf D}_y)|^2} \right] \end{equation*}

This involves 9 Fourier transforms, but only the terms containing $${\bf Z}_x,{\bf U}_x$$ actually change during the solve (similarly for the y subscripts). By caching the first term of the numerator, the entire denominator, as well as $$\mathcal{F}({\bf D}_x)\mbox{ & }\mathcal{F}({\bf D}_y)$$ this can be reduced to three transforms. It can be reduced further by evaluating the derivative filters $${\bf D}_x\mbox{ & }{\bf D}_y$$ in the spatial domain, since they are very inexpensive finite-difference filters. The following gets the $${\bf I}$$ update down to one forward and one inverse transform once constants are cached:

\begin{align*} {\bf V} = \rho \left( {\bf D}^T_x \otimes ({\bf Z}_x - {\bf U}_x) + {\bf D}^T_y \otimes ({\bf Z}_y - {\bf U}_y)\right) \\ {\bf I} = \mathcal{F}^{-1}\left[ \frac{ \mathcal{F}({\bf K})^*\mathcal{F}(\bf B) + \mathcal{F}({\bf V}) }{|\mathcal{F}({\bf K})|^2 + \rho |\mathcal{F}({\bf D}_x)|^2 + \rho |\mathcal{F}({\bf D}_y)|^2} \right] \end{align*}

Important note: the $${\bf D}^T_x \otimes$$ terms in $${\bf V}$$ use the transposed filters (mixing notation terribly). This is the complex conjugate of the filter in the Fourier domain version, or matrix transpose if the convolution operation were represented in matrix form. Expressed as spatial domain convolution, the transpose simply requires mirroring the kernels horizontally and vertically about the central pixel.
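As a sanity check of that claim, a small numpy sketch (with a hypothetical forward-difference kernel, DC entry at the corner) confirms that mirroring a kernel about its origin conjugates its spectrum:

```python
import numpy

# forward-difference kernel with its DC entry at the corner
K = numpy.zeros((8, 8))
K[0, 0] = -1.0
K[0, 1] =  1.0

# the transposed filter is the kernel mirrored about its origin: K_t(x) = K(-x)
Kt = numpy.roll(numpy.flip(K), (1, 1), axis=(0, 1))

# its spectrum is the complex conjugate of the original kernel's spectrum
print(numpy.allclose(numpy.fft.fft2(Kt), numpy.conj(numpy.fft.fft2(K))))  # True
```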

### The $${\bf Z}$$ Updates

The next step of ADMM is to update the splitting variables $${\bf Z}_x\mbox{ & }{\bf Z}_y$$. These terms each appear in only one coupling term and one $$L_1$$ term. The updates for the two $${\bf Z}$$ and two $${\bf U}$$ variables are the same except for the specific filter that is used, so I'll only cover one. Here's the update needed for $${\bf Z}_x$$:

\begin{equation*} {\bf Z}_x = \mbox{argmin}_{{\bf Z}_x} \frac{\rho}{2} \| {\bf D}_x \otimes {\bf I} - {\bf Z}_x + {\bf U}_x \|_2^2 + \gamma \| {\bf Z}_x \|_1 \end{equation*}

It's important to note that there are no operators applied to $${\bf Z}_x$$, meaning that each pixel can be processed independently. The terms can be rearranged and the weight factors collected onto the $$L_1$$ term to give the equivalent minimization:

\begin{equation*} {\bf Z}_x = \mbox{argmin}_{{\bf Z}_x} \frac{\gamma}{\rho} \| {\bf Z}_x \|_1 + \frac{1}{2} \| {\bf Z}_x - ({\bf D}_x \otimes {\bf I} + {\bf U}_x ) \|_2^2 \end{equation*}

However, a complication is (still) the $$L_1$$ term. This is non-smooth and tricky to deal with but the form above allows a non-smooth optimization method known as proximal operators to be used. The proximal operator of a scaled function $$\lambda f(q)$$ is defined as:

\begin{equation*} \mbox{prox}_{\lambda f}({\bf q}) = \mbox{argmin}_{\bf p} f({\bf p}) + \frac{1}{2\lambda} \| {\bf p} - {\bf q} \|_2^2 \end{equation*}

The similarities with the $${\bf Z}_x$$ minimization are strong. By setting $$f({\bf Z}_x)=\|{\bf Z}_x\|_1\mbox{ & }\lambda = \frac{\gamma}{\rho}\mbox{ & }{\bf q} = {\bf D}_x \otimes {\bf I} + {\bf U}_x$$ the proximal operator is recovered. This is great because the closed form solution for $$\mbox{prox}_{\lambda\|{\bf p}\|_1}(\bf{q})$$ is known and trivial to implement:

\begin{equation*} \mbox{prox}_{\lambda\|{\bf p}\|_1}({\bf q}) = \mbox{sign}({\bf q}) \mbox{max}( 0, |{\bf q}| - \lambda) \end{equation*}

This is known as the soft-thresholding operator, or $$L_1$$ shrinkage operator and is used everywhere. It can be implemented in python as:

def soft_threshold( q, lam ):
    return numpy.sign(q)*numpy.maximum( numpy.abs(q) - lam, 0.0 )


In my view, it's the most potent line of code that exists in signal processing.
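As a quick sanity check, shrinking a few sample values with $$\lambda = 0.5$$ behaves as expected: small entries collapse to zero while large ones move toward zero by exactly $$\lambda$$.

```python
import numpy

def soft_threshold( q, lam ):
    # L1 proximal operator: shrink toward zero by lam, clamping at zero
    return numpy.sign(q)*numpy.maximum( numpy.abs(q) - lam, 0.0 )

q = numpy.array([-2.0, -0.3, 0.0, 0.3, 2.0])
print(soft_threshold(q, 0.5))   # elementwise values: [-1.5, 0.0, 0.0, 0.0, 1.5]
```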

### The $${\bf U}$$ Updates

The $${\bf U}$$ updates simply add the current constraint violation to the Lagrange multipliers $${\bf U}$$. It's kind of like the integral term in a PID controller.

The update was actually given already in the overview above but is repeated here for completeness:

\begin{align*} {\bf U}_x = {\bf U}_x + {\bf D}_x \otimes {\bf I} - {\bf Z}_x \\ {\bf U}_y = {\bf U}_y + {\bf D}_y \otimes {\bf I} - {\bf Z}_y \end{align*}

This is identical to the formula earlier, I've just used the $${\bf D}_x \otimes {\bf I}$$ notation rather than $$\frac{\partial}{\partial x} {\bf I}$$.

## Summary

That's all the math that's needed. It looks like a lot but ends up being surprisingly little code. In the next part I work through the implementation.

## Consistent Spatial and Frequency Domain Image Filtering with Python

In image reconstruction tasks, it's often convenient to perform filtering operations in either the spatial or frequency domain. Both have their advantages and disadvantages. For the spatial domain, small filters such as finite differences can be evaluated extremely quickly and include local coefficients but objective functions often result in large systems of equations. In contrast, many objective functions diagonalize to yield pixel-wise solutions in the frequency domain but spatial variation cannot be incorporated directly.

As a result, it's common to express reconstruction tasks with individual terms in each domain. The domains are then coupled through penalties or constraints that can be expressed naturally in either domain and lead to efficient solvers.

By way of example: in the total-variation regularized inpainting example below, the data term expresses a least-squares penalty on a (fixed) random selection of pixels in the reconstruction $${\bf I}$$ and measurements $${\bf Y}$$ via the selection mask $${\bf S}$$ ($$\odot$$ is pixelwise multiplication). This term cannot be expressed efficiently in the Fourier domain. However the regularizers (e.g. $$\lambda \| \frac{\partial}{\partial x} {\bf I} \|_1$$) can and should be since otherwise they couple every pixel in the image together. This is bad because it results in very costly solves.

\begin{equation*} {\bf I} = \mbox{argmin}_{\bf I} \frac{1}{2} \| {\bf S} \odot \left( {\bf I} - {\bf Y} \right) \|_2^2 + \lambda \| \frac{\partial}{\partial x} {\bf I} \|_1 + \lambda \| \frac{\partial}{\partial y} {\bf I} \|_1 \end{equation*}

Solving this efficiently requires splitting the problem into separate solves that couple through constraints and Lagrange multipliers (or more simply through penalties). How to do that is an entire topic in itself but the key thing is that filtering is performed in both domains and must be consistent to produce sensible results. If this is not the case, all sorts of weird things happen: images shifting each iteration, convergence stalling, etc.

This sounds (and is) easy but requires a bit of attention to detail.

## Brass Tacks

The starting point here is a definition of a sparse kernel. By sparse, I mean a kernel with very few non-zero weights, generally less than 10-20 or so. This includes common filters like Laplacians, edge filters and finite difference stencils.

In the following code, a kernel is defined by three equal length lists containing the row-offsets, column-offsets and filter values for the kernel. In the code, the user specifies the correlation kernel; this produces results that match typical finite-difference kernels when the correlate_spatial or correlate_fourier methods are called. Here is the code for the kernel class:

import numpy

class SparseKernel2D:
    """Represents a kernel for a 2D convolution operation

    Example:

        # make a first-order finite difference along the column axis
        K = SparseKernel2D( [0,0], [-1,0], [-1.0, 1.0] )

    """

    def __init__( self, offy, offx, vals ):
        """Represents a kernel for a 2D convolution operation

        Args:
            offy (integer array/list) vertical offsets of non-zero kernel values
            offx (integer array/list) horizontal offsets of non-zero kernel values
            vals (float array/list)   non-zero kernel values

        Raises:
            ValueError whenever offy, offx, vals do not have the same length
        """
        if len(offy) != len(offx) or len(vals) != len(offx):
            raise ValueError('offx, offy and vals must be 1D array-like with the same size')

        self.offx = offx
        self.offy = offy
        self.vals = vals

    def image( self, dim ):
        """Returns an image representation of the kernel at the specified dimensions with the kernel centered.

        Args:
            dim (integer tuple) output image dimensions [height,width]

        Raises:
            ValueError when dim is not 2-element

        Returns:
            height x width image of float32 elements
        """
        if len(dim) != 2:
            raise ValueError('Expected a 2-element integer tuple/list for image dimensions')

        cen = ( int(dim[0]/2), int(dim[1]/2) )
        out = numpy.zeros( dim, dtype=numpy.float32 )
        for dx,dy,val in zip( self.offx, self.offy, self.vals ):
            out[cen[0]+dy,cen[1]+dx] = val
        return out

    def spectrum( self, dim ):
        """Returns a spectrum representation of the kernel at the specified dimensions. The returned
        kernel is *not* centered, i.e. has its DC component at the corner.

        Args:
            dim (integer tuple) output spectrum dimensions [height,width]

        Raises:
            ValueError when dim is not 2-element

        Returns:
            height x width complex image
        """
        return numpy.fft.fft2( numpy.roll( self.image(dim), [int(dim[0]/2+0.5),int(dim[1]/2+0.5)], axis=[0,1] ) )

    def convolve_spatial( self, I ):
        """Convolves the input image by the kernel in the spatial domain

        Args:
            I (2d float32 array) input image to convolve with kernel

        Raises:
            ValueError when input is not a 2D ndarray or I.dtype is not numpy.floating subclass

        Returns:
            Convolution results as 2D ndarray of I.dtype elements
        """
        if not isinstance(I,numpy.ndarray) or I.ndim != 2 or not numpy.issubdtype(I.dtype,numpy.floating):
            raise ValueError('Expected a 2D ndarray of floating-point values')

        result = numpy.zeros( I.shape, dtype=I.dtype )

        for dx,dy,val in zip(self.offx, self.offy, self.vals):
            result += val*numpy.roll( I, [dy,dx], axis=[0,1] )

        return result

    def correlate_spatial( self, I ):
        """Correlates the input image with the kernel in the spatial domain

        Args:
            I (2d float32 array) input image to correlate with kernel

        Raises:
            ValueError when input is not a 2D ndarray or I.dtype is not numpy.floating subclass

        Returns:
            Correlation results as 2D ndarray of I.dtype elements
        """
        if not isinstance(I,numpy.ndarray) or I.ndim != 2 or not numpy.issubdtype(I.dtype,numpy.floating):
            raise ValueError('Expected a 2D ndarray of floating-point values')

        result = numpy.zeros( I.shape, dtype=I.dtype )

        for dx,dy,val in zip(self.offx, self.offy, self.vals):
            result += val*numpy.roll( I, [-dy,-dx], axis=[0,1] )

        return result

    def convolve_fourier( self, I ):
        """Convolves the input image by the kernel in the Fourier domain

        Args:
            I (2d float32 array) input image to convolve with kernel

        Raises:
            ValueError when input is not a 2D ndarray or I.dtype is not numpy.floating subclass

        Returns:
            Convolution results as 2D ndarray of I.dtype elements
        """
        if not isinstance(I,numpy.ndarray) or I.ndim != 2 or not numpy.issubdtype(I.dtype,numpy.floating):
            raise ValueError('Expected a 2D ndarray of floating-point values')

        fK = self.spectrum( I.shape )
        fI = numpy.fft.fft2( I )

        return numpy.real( numpy.fft.ifft2( fK*fI ) )

    def correlate_fourier( self, I ):
        """Correlates the input image by the kernel in the Fourier domain

        Args:
            I (2d float32 array) input image to correlate with kernel

        Raises:
            ValueError when input is not a 2D ndarray or I.dtype is not numpy.floating subclass

        Returns:
            Correlation results as 2D ndarray of I.dtype elements
        """
        if not isinstance(I,numpy.ndarray) or I.ndim != 2 or not numpy.issubdtype(I.dtype,numpy.floating):
            raise ValueError('Expected a 2D ndarray of floating-point values')

        fK = self.spectrum( I.shape )
        fI = numpy.fft.fft2( I )

        return numpy.real( numpy.fft.ifft2( numpy.conj(fK)*fI ) )


## Testing & Validation

To ensure that indexing is done correctly, it's important to check that the kernel can represent an identity operation correctly for odd and even image sizes in both Fourier and spatial domains. After that, it's important that the convolution and correlation functions return the correct results.

This last point depends on whether the convolution or correlation filter is being defined. The difference between the two is mirroring horizontally and vertically. Personally, I find it most natural to define the correlation filter which coincides with the typical finite difference stencils. Generating some analytic data as well as some known finite difference kernels allows correct operation to be confirmed:

def TestSparseKernel2D():
    """Runs a series of test functions on the SparseKernel2D class"""

    # local helper function that will raise an error if the
    # results are insufficiently accurate
    def MSE( x, ref ):
        val = numpy.mean( numpy.power( x-ref, 2 ) )
        if val > 1e-6:
            raise ValueError('Values do not match!')
        return val

    # generate an identity kernel
    ident = SparseKernel2D( [0],[0],[1.0] )

    # generate some test images
    x,y = numpy.meshgrid( numpy.arange(0.,200.), numpy.arange(0.,100.) )
    z = x+y

    # check that convolving with the identity kernel
    # reproduces the input for even mesh sizes
    conv_z_s = ident.convolve_spatial( z )
    conv_z_f = ident.convolve_fourier( z )

    print('Identity test, even sizes: ',z.shape)
    print('\tMSE spatial: %f'%MSE(conv_z_s,z) )
    print('\tMSE fourier: %f'%MSE(conv_z_f,z) )

    # check that convolving with the identity kernel
    # reproduces the input for odd/even mesh sizes
    tz = z[1:,:]
    conv_z_s = ident.convolve_spatial( tz )
    conv_z_f = ident.convolve_fourier( tz )

    print('Identity test, odd/even sizes: ',tz.shape)
    print('\tMSE spatial: %f'%MSE(conv_z_s,tz) )
    print('\tMSE fourier: %f'%MSE(conv_z_f,tz) )

    # check that convolving with the identity kernel
    # reproduces the input for even/odd mesh sizes
    tz = z[:,1:]
    conv_z_s = ident.convolve_spatial( tz )
    conv_z_f = ident.convolve_fourier( tz )

    print('Identity test, even/odd sizes: ',tz.shape)
    print('\tMSE spatial: %f'%MSE(conv_z_s,tz) )
    print('\tMSE fourier: %f'%MSE(conv_z_f,tz) )

    # check that convolving with the identity kernel
    # reproduces the input for odd/odd mesh sizes
    tz = z[:-1,1:]
    conv_z_s = ident.convolve_spatial( tz )
    conv_z_f = ident.convolve_fourier( tz )

    print('Identity test, odd/odd sizes: ',tz.shape)
    print('\tMSE spatial: %f'%MSE(conv_z_s,tz) )
    print('\tMSE fourier: %f'%MSE(conv_z_f,tz) )

    # generate first order finite difference formula
    # try correlating in both the spatial and frequency
    # domains
    ddx = SparseKernel2D( [0,0],[-1,0],[-1.0,1.0] )
    ddx_z_s = ddx.correlate_spatial( z )[1:-2,1:-2]
    ddx_z_f = ddx.correlate_fourier( z )[1:-2,1:-2]
    ones = numpy.ones( ddx_z_s.shape, dtype=numpy.float32 )

    print('First order finite-difference d(x+y)/dx')
    print('\tMSE spatial: %f' % MSE(ddx_z_s,ones) )
    print('\tMSE fourier: %f' % MSE(ddx_z_f,ones) )

    # generate first order finite difference formula
    # try correlating in both the spatial and frequency
    # domains
    ddy = SparseKernel2D( [-1,0],[0,0],[-1.0,1.0] )
    ddy_z_s = ddy.correlate_spatial( z )[1:-2,1:-2]
    ddy_z_f = ddy.correlate_fourier( z )[1:-2,1:-2]
    ones = numpy.ones( ddy_z_s.shape, dtype=numpy.float32 )

    print('First order finite-difference d(x+y)/dy')
    print('\tMSE spatial: %f' % MSE(ddy_z_s,ones) )
    print('\tMSE fourier: %f' % MSE(ddy_z_f,ones) )

    # now try the convolution functions, this simply
    # flips the kernel about the central pixel
    # horizontally and vertically. for the filters
    # defined above, this negates the results
    ddx_z_s = ddx.convolve_spatial( z )[1:-2,1:-2]
    ddx_z_f = ddx.convolve_fourier( z )[1:-2,1:-2]
    ones = numpy.ones( ddx_z_s.shape, dtype=numpy.float32 )

    print('Convolved first order finite-difference d(x+y)/dx')
    print('\tMSE spatial: %f' % MSE(ddx_z_s,-ones) )
    print('\tMSE fourier: %f' % MSE(ddx_z_f,-ones) )

    # repeat the convolution check for the derivative
    # along the other axis
    ddy_z_s = ddy.convolve_spatial( z )[1:-2,1:-2]
    ddy_z_f = ddy.convolve_fourier( z )[1:-2,1:-2]
    ones = numpy.ones( ddy_z_s.shape, dtype=numpy.float32 )

    print('Convolved first order finite-difference d(x+y)/dy')
    print('\tMSE spatial: %f' % MSE(ddy_z_s,-ones) )
    print('\tMSE fourier: %f' % MSE(ddy_z_f,-ones) )


This prints:

Identity test, even sizes:  (100, 200)
MSE spatial: 0.000000
MSE fourier: 0.000000
Identity test, odd/even sizes:  (99, 200)
MSE spatial: 0.000000
MSE fourier: 0.000000
Identity test, even/odd sizes:  (100, 199)
MSE spatial: 0.000000
MSE fourier: 0.000000
Identity test, odd/odd sizes:  (99, 199)
MSE spatial: 0.000000
MSE fourier: 0.000000
First order finite-difference d(x+y)/dx
MSE spatial: 0.000000
MSE fourier: 0.000000
First order finite-difference d(x+y)/dy
MSE spatial: 0.000000
MSE fourier: 0.000000
Convolved first order finite-difference d(x+y)/dx
MSE spatial: 0.000000
MSE fourier: 0.000000
Convolved first order finite-difference d(x+y)/dy
MSE spatial: 0.000000
MSE fourier: 0.000000
