Speed-up of MATLAB: Just-in-time compiler JIT

MATLAB sometimes seems slow and inefficient. It is great for prototyping and interactive analysis, but sometimes execution speed is crucial. The MathWorks recommends vectorizing your code. Recently, I found that this is not always the best solution for higher speed. A better solution comes from the sparsely documented JIT compiler, which was already introduced in 2002.

How to accelerate MATLAB

Turning JIT on / off

Actually, there is no need to turn JIT on or off except for benchmarking. It is always turned on if it can handle the code. This is usually the case if your code uses

  • mainly scalar operations,
  • no function calls,
  • preallocation of arrays,
  • 1d or 2d arrays and
  • data of type double or char.

The JIT compiler is still under development and these restrictions might not hold for all MATLAB versions. A description for the MATLAB 6.5 JIT is here.
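To make these conditions concrete, here is a minimal sketch of JIT-friendly code; the example and its variable names are mine, purely for illustration. It uses only scalar operations inside the loops, a preallocated 2-D double array and no function calls in the loop body:

n = 1000;
A = zeros(n,n);          % preallocated 2-D double array
for i = 1:n
  for j = 1:n
    A(i,j) = i + 2*j;    % scalar arithmetic only, no function calls
  end
end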

You can manually turn JIT acceleration off and back on:

feature accel off
feature accel on
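If you want to see the effect directly, one way is to time the same loop once with acceleration disabled and once with it enabled. This is only a sketch: feature accel is undocumented and its behavior may differ between MATLAB versions.

n = 1e5;
x = rand(n,1);

feature accel off        % disable acceleration
tic; s = 0; for i=1:n, s = s + x(i); end; toc

feature accel on         % re-enable acceleration
tic; s = 0; for i=1:n, s = s + x(i); end; toc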

Simple Example of MATLAB JIT compiler

Now, we want to see JIT in action. Look at the following example, where we subtract the mean of a vector from the vector itself:

Vectorized code

x = rand(100,1);
tic; y = x-mean(x); toc

Running this code for the first time on my machine results in “Elapsed time is 0.116763 seconds.” The second run gives “Elapsed time is 0.000076 seconds.” Now, we try the often recommended bsxfun() and see what happens:

bsxfun()

x = rand(100,1);
tic; y = bsxfun(@minus,x,mean(x)); toc

For the first run I get “Elapsed time is 0.000352 seconds.”, and for the second run “Elapsed time is 0.000231 seconds.” Since we do not want to compare start-up times, we will compare only the second runs. In the second run, the bsxfun() implementation takes about 3x as long as the vectorized version. What about JIT? For JIT, we have to un-vectorize our code, introduce loops again and put the code into an m-file:

 

JIT

n = 100;
x = rand(n,1);

tic;
% compute the mean with a scalar loop
m = 0;
for i=1:n
 m = m + x(i);
end
m = m/n;

% subtract the mean element by element
y = zeros(n,1);
for i=1:n
 y(i) = x(i)-m;
end

toc;

Running this code leads to “Elapsed time is 0.000015 seconds.” for the first run and “Elapsed time is 0.000013 seconds.” for the second run. That means the JIT version is about 6x faster than the vectorized version. Now, we want to find out when this speed-up occurs.
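A side note on measurement: tic/toc on such short snippets is noisy. If your MATLAB release provides timeit(), it runs a function handle several times and returns a representative time, which is a more robust way to compare the variants; the handles below are just a sketch:

x = rand(100,1);
t_vec = timeit(@() x - mean(x));                  % vectorized version
t_bsx = timeit(@() bsxfun(@minus, x, mean(x)));   % bsxfun version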

Benchmarking Just-in-time compiler vs. bsxfun vs. vectorization

We wrap the three variants in a function so that we can easily run them for different problem sizes.

function [t_vec, t_bsx, t_jit] = Jit_test(n)
% Test function for benchmarking

x = rand(n,1);

% vectorized form
tic;
y = x-mean(x);
t_vec = toc;

% using bsxfun
tic;
y = bsxfun(@minus,x,mean(x));
t_bsx = toc;

% JIT form
tic;
m = 0;
for i=1:n
 m = m + x(i);
end
m = m/n;

y = zeros(n,1);
for i=1:n
 y(i) = x(i)-m;
end
t_jit = toc;

And we create a script for calling Jit_test():

res = [];
for n = floor(10.^[0:0.5:6])
  [t_vec, t_bsx, t_jit] = Jit_test(n);
  res = [res; t_vec t_bsx t_jit];
end
plot(res)
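The call plot(res) plots the raw columns against their row index. Since the problem sizes span six orders of magnitude, a log-log plot against the actual values of n with a legend is easier to read; this is just one way to do it (the legend labels follow the column order of res):

n_values = floor(10.^[0:0.5:6]);
loglog(n_values, res)                        % log-log axes for n and time
legend('vectorized', 'bsxfun', 'JIT loop')   % column order of res
xlabel('problem size n')
ylabel('elapsed time [s]')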

Now, we have the following result plot:

Figure: Benchmarking MATLAB JIT against the vectorized version and bsxfun(). For small problem sizes (n<100), JIT (red) is about 10x faster than the other code variants. For large problem sizes (n>10,000), the vectorized version (blue) is about 10x faster than JIT.

In this plot we can see that for small problem sizes (n<100), JIT (red) is about 10x faster than the other code variants. For large problem sizes (n>10,000), the vectorized version (blue) is about 10x faster than JIT. That means the speed-up is highly dependent on your specific problem. In my experience, JIT is great for small loops which are called often.

JIT can bring you significant speed-ups

Using the Just-in-time compiler (JIT) of MATLAB is easy: just forget about vectorization. The speed-ups can be significant; in a real-life implementation, I saw a speed-up of over 600x. It is worth trying if speed is key for you. Otherwise, you might want to stay with readable, vectorized code.

1 Comment

  1. Christoph

    Very nice article. I can confirm the numbers with problem size n=100, though my machine is a bit slower than yours and takes longer to execute in all cases. So vectorizing code pays off only when the problem size is sufficiently large. I’ve determined the break-even point on my system to be n=9000 (JIT vs. bsxfun). That’s much more than I thought. Thanks, a great new insight!
