### Another crack at the RX algorithm in Python

At the request of a commenter on an earlier post, I decided to write an updated Python implementation of the RX algorithm with NumPy (which actually outperforms my naive C implementation). You can download and install the module from GitHub.

At the heart of it all is the Mahalanobis distance metric,

\begin{align}d(\,\tilde{X},X_{i}\,)&=\sqrt{(\,X_{i}-\tilde{X}\,)^{\text{T}}\,\Omega\,(\,X_{i}-\tilde{X}\,)},\end{align}where \(X_{i}\) is some vector in the image's space (many times RGB), \(\tilde{X}\) is the average (or approximately average) vector along each channel, and \(\Omega=\Sigma^{-1}\) is the precision matrix.
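For a single vector, the distance above translates almost directly into NumPy. This is a minimal sketch (the function name and signature are my own, not necessarily those of the module):

```python
import numpy as np

def mahalanobis(x, mu, omega):
    """Mahalanobis distance d(mu, x) = sqrt((x - mu)^T Omega (x - mu)),
    where omega is the precision matrix (inverse covariance)."""
    d = x - mu
    return np.sqrt(d @ omega @ d)
```

With the identity as the precision matrix, this reduces to the ordinary Euclidean distance, which is a handy sanity check.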

In practice, I've found the exact precision matrix isn't really necessary, so we can reduce the overhead of computing the average vector and covariance when the image is largely one color. The computation itself can be expressed neatly with numpy.einsum.
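To illustrate the einsum formulation, here is a sketch of a global RX detector over a whole image. The batched quadratic form \(d_i^{\text{T}}\,\Omega\,d_i\) for every pixel at once is the part that numpy.einsum expresses neatly; the function name and shapes here are my own assumptions, not necessarily the module's API:

```python
import numpy as np

def rx_scores(image):
    """RX anomaly scores for an (H, W, C) image.

    The mean and covariance are estimated from the image itself,
    as in the standard global RX detector.
    """
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float64)
    mu = pixels.mean(axis=0)           # average vector along each channel
    d = pixels - mu                    # deviations X_i - X~
    cov = np.cov(d, rowvar=False)      # channel covariance Sigma
    omega = np.linalg.inv(cov)         # precision matrix Omega = Sigma^{-1}
    # Batched quadratic form d_i^T Omega d_i for all pixels at once.
    q = np.einsum('ij,jk,ik->i', d, omega, d)
    return np.sqrt(q).reshape(h, w)
```

The single einsum call replaces an explicit loop over pixels, which is where the speedup over a naive C loop-per-pixel implementation plausibly comes from.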
