Speed of Matlab vs Python vs Julia vs IDL

The Benchmarks Game uses deep expert optimizations to exploit every advantage of each language. The benchmarks I’ve adapted from the Julia micro-benchmarks are instead written the way a scientist or engineer competent in the language, but not an advanced expert in it, would write them. The emphasis is on simplicity and brevity, where programmer time is far more important than CPU time. Jules Kouatchou runs benchmarks on massive clusters comparing Julia, Python, Fortran, and more. A prime purpose of these benchmarks is to determine, given comparable ease of programming for a canonical task (say, Mandelbrot), which languages have distinct runtime performance benefits.
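To illustrate the intended style, here is a minimal, non-expert Mandelbrot escape-time loop in Python/NumPy; the grid bounds, resolution, and iteration count are illustrative choices, not the actual benchmark parameters.

```python
import numpy as np

def mandelbrot(nx=512, ny=512, maxiter=100):
    """Plain NumPy escape-time Mandelbrot, written the straightforward way."""
    x = np.linspace(-2.0, 0.5, nx)
    y = np.linspace(-1.25, 1.25, ny)
    c = x[None, :] + 1j * y[:, None]       # complex grid
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)  # escape-time counts
    for _ in range(maxiter):
        mask = np.abs(z) <= 2.0            # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]   # iterate z <- z**2 + c only where needed
        counts[mask] += 1
    return counts
```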

Julia’s growing advantage is the performance of compiled languages with the relative ease of a scripted language. The majority of analysts scripting in engineering and science work in Python, with Matlab in second place. The stable Julia 1.0 release finally delivers the API stability that was an adoption blocker in earlier Julia releases. Julia allows abstract expression of formulas, ideas, and arrays in ways not feasible in other major analysis applications, giving advanced analysts unique, performant capabilities. Since Julia is readily called from Python, Julia work can be exploited from the more popular Python ecosystem.
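For example, calling Julia from Python can be as simple as the sketch below, which assumes the juliacall package (the Python side of PythonCall.jl) and a working Julia installation; pyjulia is an alternative bridge.

```python
# Minimal sketch: call Julia from Python via juliacall (pip install juliacall).
from juliacall import Main as jl
import numpy as np

x = np.random.rand(1_000_000)

# Julia's built-in sum, applied to the NumPy array (shared, not copied)
print(jl.sum(x))

# Define a small Julia function on the fly and call it from Python
sumsq = jl.seval("x -> sum(abs2, x)")
print(sumsq(x))
```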

Python is often “close enough” in performance to compiled languages like Fortran and C, by virtue of numeric libraries such as NumPy and Numba. For particular tasks, TensorFlow, OpenCV, and directly loading Fortran libraries with f2py or ctypes minimize Python’s performance penalty. This was not the case when Julia was conceived in 2009 and first released in 2012. Thanks to Anaconda, Intel MKL and PyCUDA, momentum and performance are solidly behind Python for scientific and engineering computing for the next several years at least.
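As a sketch of the Numba approach (the function and array size here are only illustrative), decorating a plain-Python loop with @njit compiles it to machine code on first call:

```python
import numpy as np
from numba import njit

@njit
def pairwise_sum(x):
    """Naive O(N^2) pairwise distance sum: slow in pure Python, fast under Numba."""
    n = x.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += abs(x[i] - x[j])
    return total

x = np.random.rand(2000)
print(pairwise_sum(x))  # first call compiles; later calls run at compiled speed
```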

Cython has Python-like syntax that is compiled to C code, which is much larger than the original Python source and not very readable; however, substantial speed increases can result. Don’t convert the entire program to Cython! Just the slow functions.
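Here is a minimal sketch using Cython’s “pure Python” mode, where ordinary type annotations let Cython generate a tight C loop while the file still runs under plain CPython (the file and function names are illustrative):

```python
# harmonic_cy.py -- valid Python, and compilable with: cythonize -i harmonic_cy.py
import cython

def harmonic(n: cython.long) -> cython.double:
    """Sum of 1/k for k = 1..n; the typed loop compiles to plain C."""
    total: cython.double = 0.0
    k: cython.long
    for k in range(1, n + 1):
        total += 1.0 / k
    return total
```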

PyPy is a just-in-time compiling Python interpreter that can also offer massive speedups, without changes to existing code.


We have created a multi-language benchmark suite. Fortran performance is comparable to Python with MKL, Matlab, and Julia. With single-precision floats, Python with CUDA can be 1000+ times faster than Python, Matlab, Julia, and Fortran. However, the usual “price” of GPUs is the slow host-device I/O: if large arrays need to be moved constantly on and off the GPU, special strategies may be necessary to get a speed advantage, as sketched below. For iterative algorithms, it’s worthwhile to use Numba or Cython with Python to get Fortran-like speeds from Python, comparable with Matlab on a given test.
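For instance, a minimal PyCUDA sketch of keeping an array resident on the GPU across iterations (the array size and update formula are arbitrary; a CUDA-capable GPU and PyCUDA are assumed):

```python
import numpy as np
import pycuda.autoinit              # creates a CUDA context on import
import pycuda.gpuarray as gpuarray

x = np.random.rand(10_000_000).astype(np.float32)

x_gpu = gpuarray.to_gpu(x)          # one host -> device copy
for _ in range(1000):               # all iterations stay on the device
    x_gpu = x_gpu * 0.999 + 0.001   # elementwise update, no per-step transfer

result = x_gpu.get()                # one device -> host copy at the end
print(result[:5])
```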

L3Harris Geospatial IDL is used mostly by astronomers. IDL can be replaced by GDL, the free, open-source IDL-compatible program. In many cases, a better choice would be to move from IDL/GDL to Python or Julia.

Pi Machin benchmark
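A minimal, illustrative Python version of this benchmark computes pi from Machin’s formula, pi = 16*arctan(1/5) - 4*arctan(1/239), with arctan evaluated by its Taylor series; the tight scalar loop is exactly the kind of workload where compiled approaches (Numba, Cython, Julia, Fortran) pull well ahead of a plain interpreted loop.

```python
def atan_series(x: float, terms: int = 30) -> float:
    """arctan(x) = x - x^3/3 + x^5/5 - ...  (converges quickly for small |x|)."""
    total = 0.0
    power = x
    for k in range(terms):
        total += (-1) ** k * power / (2 * k + 1)
        power *= x * x
    return total

def machin_pi(terms: int = 30) -> float:
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    return 16.0 * atan_series(1.0 / 5.0, terms) - 4.0 * atan_series(1.0 / 239.0, terms)

print(machin_pi())  # ~3.141592653589793
```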


Related: Anaconda Accelerate: GPU from Python/Numba