I want to share some tricks for making Matlab functions more efficient and robust, so I decided to write a series of blog posts. This is the first one of the series, in which I want to show a simple function for computing pairwise Euclidean distances between points in a high-dimensional vector space.

The most important thing for efficient Matlab code is **VECTORIZATION**! Here I will take a distance function as an example to demonstrate how to accelerate your code by getting rid of for-loops.

Say we have two sets of points, X and Y, in d-dimensional space, and we want to compute the distance between every point in X and every point in Y. This function is very useful. For example, if you want to find the k nearest neighbors of a point x in a database Y, a naive method is to first compute the distances between x and all points in Y, and then sort the distances to find the k smallest ones.
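To make the k-nearest-neighbor use case concrete, here is a minimal sketch of that naive method (the function name `knnNaive` is illustrative, not from any library; x is a single d-by-1 query point and Y stores the database points as columns):

```matlab
function idx = knnNaive(x, Y, k)
% Naive k-NN: squared distances from x to every column of Y,
% then sort and keep the indices of the k smallest.
d2 = sum(bsxfun(@minus, Y, x).^2, 1);  % 1-by-n vector of squared distances
[~, order] = sort(d2, 'ascend');
idx = order(1:k);                      % indices of the k nearest points
```

Sorting all n distances is wasteful for small k, but it matches the simple description above.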

We can naively implement the function with nested for-loops:

```matlab
function D = dumDistance(X, Y)
% Each column of X (d-by-n1) and Y (d-by-n2) is one point.
n1 = size(X, 2);
n2 = size(Y, 2);
D = zeros(n1, n2);
for i = 1:n1
    for j = 1:n2
        D(i, j) = sum((X(:, i) - Y(:, j)).^2);  % squared distance
    end
end
```

Here, each column of the matrices X and Y is the vector representation of a point in d-dimensional space. The nested for-loops hurt performance badly. In fact, this is basically how the pdist function in Matlab is implemented (that is how dumb Matlab can sometimes be). We want to get rid of the for-loops and vectorize the code as much as possible. Before writing any Matlab code, a good practice is to first write your algorithm down on paper using matrix notation.

We denote x as a (column) vector in X and y as a vector in Y. The squared distance between x and y is

    ||x - y||^2 = ||x||^2 - 2<x, y> + ||y||^2,

where <x, y> = x'y is the dot product. Each ||x_i - y_j||^2 can be considered an entry of the matrix D. Therefore, we can write the formula in matrix form as

    D = sx*1' + 1*sy' - 2*X'*Y,

where sx is the column vector of squared norms of all vectors in X, sy is the analogous vector for Y, and 1 denotes a column vector of ones of suitable length. We can directly implement this as a one-liner:

```matlab
function D = sqDistance(X, Y)
% Vectorized pairwise squared Euclidean distances between columns of X and Y.
D = bsxfun(@plus, dot(X, X, 1)', dot(Y, Y, 1)) - 2*(X'*Y);
```

In my rough tests, the new code is more than 60 times faster than the old one on random dense matrices. For sparse matrices, the gap is even larger. That is just the power of vectorization.
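The speedup can be checked with a quick tic/toc comparison (a sketch; it assumes both dumDistance and sqDistance from this post are on the path, and exact numbers will vary with Matlab version and hardware):

```matlab
% Rough timing comparison of the two implementations.
d = 100; n = 1000;
X = rand(d, n); Y = rand(d, n);
tic; D1 = dumDistance(X, Y); t1 = toc;
tic; D2 = sqDistance(X, Y);  t2 = toc;
fprintf('loops: %.3fs, vectorized: %.3fs, max abs diff: %g\n', ...
        t1, t2, max(abs(D1(:) - D2(:))));
```

The max-abs-diff line is worth keeping: the two results agree only up to floating-point rounding, not bit for bit.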

If you want to compute the pairwise distances between all point pairs in a point set, you can simply replace the Y with X in the function. Matlab will do extra optimization for the matrix product like X’*X, where only half of the computation is done. A by-product is that the result matrix D is ensured to be symmetric, which usually cannot be guaranteed due to numerical representation of float number.
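In code, the symmetric case is just a call with the same matrix twice (a sketch assuming the sqDistance function above):

```matlab
% Pairwise squared distances within one point set.
X = rand(3, 5);            % 5 points in 3-D space, one per column
D = sqDistance(X, X);      % 5-by-5 matrix of squared distances
isequal(D, D')             % symmetric, per the X'*X optimization noted above
all(abs(diag(D)) < 1e-12)  % diagonal is zero up to rounding
```

Note the diagonal check uses a tolerance: D(i,i) combines dot(x,x) computed two different ways, so it may be a tiny nonzero value rather than exactly 0.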

The takeaways of this post might be:

(1) Vectorization!

(2) A matrix product of the form M = X'*X is efficient (it costs about half the time of an ordinary matrix product) and M is guaranteed to be symmetric.

(3) Write your algorithm in matrix form before writing the code.


Thanks for the useful post! Your code is quite concise. I actually collected a few different ways to do it from around the web, and yours turned out to be faster due to the particular order of the operations. For reference, here it is, along with 2 other slower implementations commented out (due to my data’s format I changed the code so the matrices are transposed).

```matlab
% x2 = sum(X.^2, 2);
% y2 = sum(Y.^2, 2);
% D = x2 * ones(size(Y,1), 1)' + ones(size(X,1), 1) * y2' - 2 * X * Y';
% D = bsxfun(@plus, sum(X .* X, 2), -2 * (X * Y'));
% D = bsxfun(@plus, sum(Y .* Y, 2)', D);
D = bsxfun(@plus, dot(X,X,2), dot(Y,Y,2)') - 2 * (X * Y');
```

Actually, the code you posted seems to be the code I wrote and posted on the Matlab File Exchange a long time ago.

Thank you so much for your code. It saves me so much time. It is indeed very fast!

Could you also post fast code for finding pairwise distances for points stored in a p x n matrix (p: number of points; n: dimension of the space)? Any help appreciated.

Thanks

I would also appreciate that a lot!

Reblogged this on Simplex Signum Veri and commented:

It seems that I had a serious bias towards Matlab and I didn't get the real spirit of it. Now I need to learn the right way to use Matlab.

Reblogged this on jakirkpatrick and commented:

Changed some code from impossibly slow to merely slow!

Why is "D is guaranteed to be symmetric"? I thought D is n1 x n2.

If you compute the distance between X and X, D is symmetric.

Thanks for posting this nice fast code. However, I had some questions. Firstly, there seems to be something strange in the code supplied. I used it to calculate the smallest distance between two particles with central coordinates of (3,3) and (5,5). Surprisingly, the (1,1) and (2,2) entries were not 0. Moreover, the (1,2) and (2,1) entries are not identical; the matrix looks like

```
1.99e-06   3.99
4.00       4.82e-07
```

While the code seems fast, is there some difference in precision compared with the pdist function?

In addition, it is not clear to me from the example how you can, in the end, identify which pair is closest together.

It is just what I was looking for!!! Thank you very much. Your code is completely concise and the explanation goes right to the point to understand the code. Thanks again.

A rough estimate of the complexity: suppose each data point has D dimensions and each scalar uses K bits; a multiplication of two scalars is then roughly approximated as K additions.

The dum one has complexity (2+K)*D*n1*n2.

The smarter one has complexity (K+1)*D*n1*n2 + (K+1)*D*(n1+n2) + 2*n1*n2.

It is quicker, but only by a constant factor. So the major difference comes from Matlab's highly optimized inner-product (matrix multiplication) implementation.

As for "all point pairs in a point set", usually we only need to compute the upper (or lower) triangle of the matrix, copy it to the other half if necessary, and leave the diagonal all zeros. This ensures the matrix's symmetry. The dum one can easily be changed to:

```matlab
for i = 1:(n1-1)
    for j = (i+1):n1          % only the upper triangle (X vs X, so n1 points)
        D(i,j) = sum((X(:,i) - X(:,j)).^2);
        D(j,i) = D(i,j);      % mirror to the lower triangle
    end
end
```

The smarter one, well, can the (X'*Y) part be made more efficient?

Anyway, thanks a lot on this.

Thanks a lot! You explained it so clearly…

Hello,

This is a very simple and super fast piece of code; I stumbled upon this site trying to vectorize my code in order to obtain all the pairwise distances in a set of vectors. However, in my problem the distance is a weighted Euclidean distance, where a weight is applied to every component, i.e.:

Given a set of weights w = [w1,w2,w3], and vectors x = [x1,x2,x3], and y = [y1,y2,y3]

how do we vectorize the calculation of all pairwise distances in a set of vectors?

My distance is calculated as: w1(x1-y1)^2 + w2(x2-y2)^2 + w3(x3-y3)^2

Any suggestions?

Thanks
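One way to vectorize the weighted distance described in the comment above, in the spirit of the post's one-liner (a sketch, not a tested answer; the name `wsqDistance` is made up here, points are stored as columns as in the post, and w is a d-by-1 weight vector):

```matlab
function D = wsqDistance(X, Y, w)
% Weighted squared Euclidean distances:
%   D(i,j) = sum_k w(k) * (X(k,i) - Y(k,j))^2.
% Expanding the square: sum_k w_k*x_k^2 + sum_k w_k*y_k^2 - 2*sum_k w_k*x_k*y_k.
WX = bsxfun(@times, X, w);  % scale each row of X by its weight
D = bsxfun(@plus, sum(WX .* X, 1)', sum(bsxfun(@times, Y, w) .* Y, 1)) ...
    - 2*(WX'*Y);
```

With w = ones(d,1) this reduces to the unweighted sqDistance above, which is an easy sanity check.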


Many thanks for your code. I slightly modified it for an even faster calculation:

```matlab
D = bsxfun(@plus, sum(X.^2)', sum(Y.^2)) - 2*(X'*Y);
```

I do not understand why, but it is faster.

hi,

I want to ask how to calculate the distances in a multidimensional matrix. Say A is 20x8 and the columns are [X1 Y1 X2 Y2 X3 Y3 X4 Y4] (four points per row: point1 (X1,Y1), point2 (X2,Y2), point3 (X3,Y3), point4 (X4,Y4)). How can I calculate all the distances (for instance: distance1 between point1 and point2; distance2 between point1 and point3; distance3 between point1 and point4; etc.)? Could you please help me?

thanks in advance..