Build and Install#

Ruby Installation#

MLX is available on RubyGems. All you have to do to use MLX with your own Apple silicon computer is

gem install mlx

To install from RubyGems your system must meet the following requirements:

  • Using an M series chip (Apple silicon)

  • Using a native Ruby >= 3.10

  • macOS >= 14.0

Note

MLX is only available on devices running macOS 14.0 or later.

CUDA#

CUDA is not yet exposed through the Ruby gem, which currently ships with native Apple silicon Metal builds only.

CPU-only (Linux)#

Linux builds are currently not supported by this Ruby release.

Troubleshooting#

My OS and Ruby versions are in the required range but RubyGems still does not find a matching distribution.

Probably you are using a non-native Ruby. The output of

ruby -e 'puts RbConfig::CONFIG["host_cpu"]'

should be arm64. If it is x86_64 (and you have an M series machine) then you are using a non-native Ruby running under Rosetta. Switch your Ruby to a native build. A good way to do this is with Conda.
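The check above can be wrapped into a small diagnostic script that uses only the Ruby standard library:

```ruby
require "rbconfig"

cpu = RbConfig::CONFIG["host_cpu"]
os  = RbConfig::CONFIG["host_os"]

puts "Ruby #{RUBY_VERSION} on #{cpu} (#{os})"

# On a native Apple silicon Ruby, host_cpu reports arm64;
# under Rosetta it reports x86_64.
if cpu.start_with?("arm") || cpu == "aarch64"
  puts "Native arm Ruby - OK"
else
  puts "Non-native Ruby (#{cpu}) - switch to an arm64 build"
end
```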

Build from source#

Build Requirements#

  • A C++ compiler with C++20 support (e.g. Clang >= 15.0)

  • cmake – version 3.25 or later, and make

  • Xcode >= 15.0 and macOS SDK >= 14.0

Note

Ensure your shell environment is native arm, not x86 via Rosetta. If the output of uname -p is x86, see the troubleshooting section below.

Ruby Gem Source Build#

To build and install the MLX Ruby gem from source, first clone this repo:

git clone https://github.com/skryl/mlx-ruby.git && cd mlx-ruby

Then build the gem and install locally:

bundle install
gem build mlx.gemspec
gem install ./mlx-<version>.gem

For testing the extension locally during development:

bundle exec rake test

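For reference, a minimal Rake test task of the kind that bundle exec rake test invokes could look like the following (an illustrative sketch; the repository's actual Rakefile may differ):

```ruby
# Rakefile (illustrative sketch)
require "rake"
require "rake/testtask"

# Defines a :test task that runs every *_test.rb file under test/
Rake::TestTask.new(:test) do |t|
  t.libs << "test" << "lib"
  t.test_files = FileList["test/**/*_test.rb"]
end
```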

C++ API#

Currently, MLX must be built and installed from source.

Similarly to the Ruby library, to build and install the MLX C++ library, start by cloning MLX from its GitHub repo:

git clone git@github.com:ml-explore/mlx.git mlx && cd mlx

Create a build directory and run CMake and make:

mkdir -p build && cd build
cmake .. && make -j

Run tests with:

make test

Install with:

make install

Note that the built mlx.metallib file should either be in the same directory as the executable that statically links libmlx.a, or the preprocessor constant METAL_PATH should be defined at build time to point to the path of the built Metal library.
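For example, a consumer project could pass METAL_PATH as a compile definition when configuring with CMake (the path below is illustrative; use the location where your build actually placed mlx.metallib):

```shell
cmake .. -DCMAKE_CXX_FLAGS='-DMETAL_PATH="/usr/local/lib/mlx.metallib"'
```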

Build Options#

Option                       Default
MLX_BUILD_TESTS              ON
MLX_BUILD_EXAMPLES           OFF
MLX_BUILD_BENCHMARKS         OFF
MLX_BUILD_METAL              ON
MLX_BUILD_CPU                ON
MLX_BUILD_PYTHON_BINDINGS    OFF
MLX_METAL_DEBUG              OFF
MLX_BUILD_SAFETENSORS        ON
MLX_BUILD_GGUF               ON
MLX_METAL_JIT                OFF

Note

If you have multiple Xcode installations and wish to use a specific one while building, you can do so by setting the following environment variable before building:

export DEVELOPER_DIR="/path/to/Xcode.app/Contents/Developer/"

Further, you can use the following command to find out which macOS SDK will be used

xcrun -sdk macosx --show-sdk-version

Binary Size Minimization#

To produce a smaller binary use the CMake flags CMAKE_BUILD_TYPE=MinSizeRel and BUILD_SHARED_LIBS=ON.

The MLX CMake build has several additional options to make smaller binaries. For example, if you don’t need the CPU backend or support for safetensors and GGUF, you can do:

cmake .. \
  -DCMAKE_BUILD_TYPE=MinSizeRel \
  -DBUILD_SHARED_LIBS=ON \
  -DMLX_BUILD_CPU=OFF \
  -DMLX_BUILD_SAFETENSORS=OFF \
  -DMLX_BUILD_GGUF=OFF \
  -DMLX_METAL_JIT=ON

The MLX_METAL_JIT flag minimizes the size of the MLX Metal library, which contains pre-built GPU kernels, by instead compiling kernels at run time the first time they are used on a given machine. Note that run-time compilation incurs a cold-start cost which can be anywhere from a few hundred milliseconds to a few seconds depending on the application. Once a kernel is compiled, it is cached by the system, and the Metal kernel cache persists across reboots.

Linux#

To build from source on Linux (CPU only), install the BLAS and LAPACK headers. For example on Ubuntu, run the following:

apt-get update -y
apt-get install libblas-dev liblapack-dev liblapacke-dev -y

From here follow the instructions to install either the Ruby or C++ APIs.

CUDA#

To build from source on Linux with CUDA, install the BLAS and LAPACK headers and the CUDA toolkit. For example on Ubuntu, run the following:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
dpkg -i cuda-keyring_1.1-1_all.deb
apt-get update -y
apt-get -y install cuda-toolkit-12-9
apt-get install libblas-dev liblapack-dev liblapacke-dev libcudnn9-dev-cuda-12 -y

When building either the Ruby or C++ APIs make sure to pass the cmake flag MLX_BUILD_CUDA=ON. For example, to build the Ruby API run:

CMAKE_ARGS="-DMLX_BUILD_CUDA=ON" bundle install

To build the C++ package run:

mkdir -p build && cd build
cmake .. -DMLX_BUILD_CUDA=ON && make -j

Troubleshooting#

Metal not found#

You see the following error when you try to build:

error: unable to find utility "metal", not a developer tool or in PATH

To fix this, first make sure you have Xcode installed:

xcode-select --install

Then set the active developer directory:

sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer

x86 Shell#

If the output of uname -p is x86 then your shell is running as x86 via Rosetta instead of natively.

To fix this, find the application in Finder (/Applications for iTerm, /Applications/Utilities for Terminal), right-click, and click “Get Info”. Uncheck “Open using Rosetta”, close the “Get Info” window, and restart your terminal.

Verify the terminal is now running natively with the following command:

$ uname -p
arm

Also check that cmake is using the correct architecture:

$ cmake --system-information | grep CMAKE_HOST_SYSTEM_PROCESSOR
CMAKE_HOST_SYSTEM_PROCESSOR "arm64"

If you see "x86_64", try re-installing cmake. If you see "arm64" but the build errors out with "Building for x86_64 on macOS is not supported.", wipe your build cache with rm -rf build/ and try again.