# mumax³

GPU-accelerated micromagnetic simulator.
Paper on the design and verification of MuMax3: http://scitation.aip.org/content/aip/journal/adva/4/10/10.1063/1.4899186
👉 Pre-compiled binaries, examples, and documentation are available on the mumax³ homepage.
Documentation of several tools, such as `mumax3-convert`, is available here.
Contributions are gratefully accepted. To contribute code, fork our GitHub repo and send a pull request.
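If you are new to the GitHub workflow, the usual sequence looks roughly like this (a sketch; `<your-username>` and the branch name `my-feature` are placeholders):

```bash
# Clone your fork of mumax/3 (created via the "Fork" button on GitHub).
git clone https://github.com/<your-username>/3.git
cd 3
# Work on a feature branch, commit, and push it to your fork.
git checkout -b my-feature
git commit -am "Describe your change"
git push origin my-feature
# Then open a pull request against mumax/3 on GitHub.
```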
**Building from source**

Consider downloading a pre-compiled mumax³ binary.
If you nevertheless want to compile it yourself, four essential components are required to build mumax³: an NVIDIA driver, Go, CUDA, and a C compiler.
The installation instructions for each component are given below.
These instructions were written for Windows 10 and Ubuntu 22.04 (but should be applicable to all Debian-based systems). Your mileage may vary.

**NVIDIA driver**

Run `nvidia-smi` to check whether a working driver is already present. If it fails with an error like the following, the driver needs to be (re)installed:

```
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
```

To install the driver on Ubuntu:

1. Run `dpkg -l | grep nvidia` to see if any NVIDIA drivers are installed. If it shows some drivers, you might want to uninstall them before proceeding with the clean installation: `sudo apt-get --purge remove '*nvidia*'`
2. Run `sudo apt update` and `sudo apt upgrade`.
3. Run `sudo add-apt-repository ppa:graphics-drivers/ppa` and `sudo apt update`.
4. Run `ubuntu-drivers devices`. This will list the available drivers for your GPU and mark the recommended one.
5. Run `sudo apt install nvidia-driver-<version>` (replace `<version>` with the number of the recommended driver, e.g. `nvidia-driver-535`).
6. Run `sudo reboot` to apply the changes.
7. Run `nvidia-smi`. This returns something like the output below, which shows you the driver version in the top center:

```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 552.22 Driver Version: 552.22 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3080 ... WDDM | 00000000:01:00.0 Off | N/A |
| N/A 53C P8 9W / 115W | 257MiB / 8192MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 28420 C+G ...Programs\Microsoft VS Code\Code.exe N/A |
| 0 N/A N/A 31888 C+G ...les\Microsoft OneDrive\OneDrive.exe N/A |
+-----------------------------------------------------------------------------------------+
```
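If you only need the driver version, it can also be queried directly (a sketch; the `--query-gpu` interface is available on reasonably recent drivers):

```bash
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```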
Possible problems:

- `ubuntu-drivers devices` throws the error `Command 'ubuntu-drivers' not found`: run the command `sudo apt install ubuntu-drivers-common`.
- `ERROR:root:aplay command not found`: run the command `sudo apt install alsa-utils`.
- `sudo apt install nvidia-driver-<version>` throws the error `E: Unable to locate package nvidia-driver-<version>`: run the commands

  ```bash
  sudo apt install software-properties-gtk
  sudo add-apt-repository universe
  sudo add-apt-repository multiverse
  sudo apt update
  sudo apt install nvidia-driver-<version>
  ```

- `nvidia-smi` throws the error `nvidia: command not found`: the controller is probably not using the correct interface (`sudo lshw -c display` should show NVIDIA). To solve this, follow these steps. If a `docker: permission denied` error occurs: close and re-open WSL.

👉 Check NVIDIA driver installation with: `nvidia-smi`
**CUDA**

On Windows: download and install the CUDA toolkit, preferably in a path without spaces such as `C:\cuda`. Spaces should not cause issues when running `deploy_windows.ps1`, but this is not guaranteed.

On Linux:

- Run `sudo apt-get install nvidia-cuda-toolkit`, or download an installer.
- Pick the default installation path. If this is not `/usr/local/cuda/`, create a symlink to that path (see the sketch at the end of this section).
- Make sure the chosen CUDA version is supported by your driver (see the CUDA Version shown in the `nvidia-smi` output).
- When prompted what to install: do not install the driver again, only the CUDA toolkit.
- Add the CUDA `bin` and `lib64` paths to your `PATH` and `LD_LIBRARY_PATH` by adding the following lines at the end of your shell profile file (usually `.bashrc` for Bash):

  ```bash
  export PATH=/usr/local/cuda/bin:$PATH
  export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
  ```

  Then apply the changes with `source ~/.bashrc`.

👉 Check CUDA installation with: `nvcc --version`
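A minimal sketch of the symlink step above, assuming the toolkit was installed into a versioned directory such as `/usr/local/cuda-12.4` (adjust to your actual installation path):

```bash
# Find the versioned CUDA installation directory.
ls -d /usr/local/cuda-*
# Point /usr/local/cuda at it so the PATH and LD_LIBRARY_PATH exports above resolve.
sudo ln -s /usr/local/cuda-12.4 /usr/local/cuda
```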
**Go**

Download and install Go. The `GOPATH` environment variable should have been set automatically (note: the folder it points to probably doesn't exist yet). Check it with `go env GOPATH`, and set `GOPATH` manually if it does not exist:

- On Windows: `%USERPROFILE%/go` is often used, e.g. `C:/Users/<name>/go`. See this guide if you are unfamiliar with environment variables.
- On Linux: `~/go` is often used. Open or create the `~/.bashrc` file and add the following lines:

  ```bash
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOPATH/bin
  ```

  Then apply the changes with `source ~/.bashrc`.

👉 Check Go installation with: `go version`
**C compiler**

On Linux: install `gcc` with `sudo apt-get install gcc`.

⚠️ Each CUDA version has a maximum supported `gcc` version. This StackOverflow answer lists the maximum supported `gcc` version for each CUDA version. If necessary, use `sudo apt-get install gcc-<min_version>` instead, with the appropriate `<min_version>`.
On Windows: CUDA does not support the `gcc` compiler on Windows, so download and install Visual Studio with the C/C++ extension pack. After installing, check if the path to `cl.exe` was added to your `PATH` environment variable. If not, add it manually, e.g. `C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64`. In addition, a `gcc` is needed. Usually this is included in the Go installation, but if not it can be downloaded and installed from w64devkit.

👉 Check C compiler installation with: `gcc --version` on Linux and `where.exe cl.exe` on Windows.

**Git**

On Linux: `sudo apt install git`

👉 Check Git installation with: `git --version`

**Gnuplot**

On Linux: `sudo apt-get install gnuplot`

👉 Check Gnuplot installation with: `gnuplot -V`
With these tools installed, you can build mumax³ yourself.
1. In your `GOPATH` folder, create the subfolders `src/github.com/mumax`.
2. Run `git clone https://github.com/mumax/3.git` in that newly created `mumax` folder. The source now resides in `$GOPATH/src/github.com/mumax/3`.
3. Initialise the Go module by entering the repository with `cd 3/` and running `go mod init github.com/mumax/3`, followed by `go mod tidy`.
4. Find your GPU's compute capability with `nvidia-smi --query-gpu=compute_cap --format=csv`. Based on this, set the environment variable `CUDA_CC`: if your compute capability is e.g. 8.9, then set the value `CUDA_CC=89`.
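Taken together, the preparation steps above look roughly like this (a sketch assuming `GOPATH` is set to `$HOME/go` and a compute capability of 8.9; adjust both to your system):

```bash
mkdir -p "$GOPATH/src/github.com/mumax"
cd "$GOPATH/src/github.com/mumax"
git clone https://github.com/mumax/3.git
cd 3/
go mod init github.com/mumax/3
go mod tidy
# Query the compute capability (e.g. "8.9") and drop the dot to obtain CUDA_CC.
nvidia-smi --query-gpu=compute_cap --format=csv,noheader
export CUDA_CC=89
```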
You can now compile mumax³ …
… on Linux:
```bash
make realclean
make
```
Your binary is now at `$GOPATH/bin/mumax3`.
Note: each CUDA version has a maximum supported GCC version. If your default GCC compiler is too recent, you can use a different GCC by running `make NVCC_CCBIN=<path_to_gcc>` instead, where `<path_to_gcc>` is a less recent GCC. Check the version compatibility here. Alternatively, setting the `NVCC_CCBIN` environment variable achieves the same thing, allowing you to run `make` as usual.
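For example (a sketch assuming `gcc-12` is installed and supported by your CUDA version; check the compatibility table first):

```bash
# Point nvcc at an older host compiler for a single build...
make NVCC_CCBIN=/usr/bin/gcc-12
# ...or set it in the environment so a plain `make` works afterwards.
export NVCC_CCBIN=/usr/bin/gcc-12
make
```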
… on Windows:
The `Makefile`s may experience issues with whitespace. Instead, we recommend using the `deploy/deploy_windows.ps1` script: it generates the Windows executables for the mumax³ download page, but it can also be used to build a single mumax³ executable for yourself by making the following adjustments:
1) Change the `$VS2022` variable to point to your Visual Studio executable. If you wish to compile for CUDA versions below v11.6, also set `$VS2017`. Example: if `where.exe cl.exe` returns `foo\bar\cl.exe`, then set `$VS2022 = "foo\bar"`.
2) (Not strictly necessary, but check this anyway.) Throughout the file there are several `switch ( $CUDA_VERSION )` blocks. If these do not address your installed CUDA version, add your version. Consult the nearby comments when in doubt.
Now you can compile mumax³ by opening PowerShell in the `/deploy` directory and running

```powershell
./deploy_windows.ps1 -CUDA_VERSIONS <your_cuda_version> -CUDA_CC <your_compute_capability>
```

where e.g. `<your_cuda_version>` is `12.6` and `<your_compute_capability>` is `86`, if you have installed CUDA v12.6 and your GPU's compute capability is 8.6.

Your executable will be created in the `deploy/build` directory.
Check installation with: `which mumax3` on Linux or `where.exe mumax3.exe` on Windows, followed by `mumax3 -test`.
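If the self-test passes, you can try running an input script (a sketch; `myscript.mx3` is a placeholder for any mumax³ input file, e.g. one of the examples from the homepage):

```bash
mumax3 -test          # built-in self-test on your GPU
mumax3 myscript.mx3   # run a simulation script (placeholder filename)
```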
Possible problems when building:

- `cuda.h` or `curand.h` not found: the `CGO_CFLAGS` and `CGO_LDFLAGS` environment variables are not found or point to the wrong path. To fix this, either define them in the script you are using to build mumax³, or define them in the terminal before running the script.
  On Windows: say your CUDA is installed in `%CUDA_PATH%` (e.g. `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1`), then run these two lines in PowerShell before running `deploy_windows.ps1`:

  ```powershell
  $env:CGO_CFLAGS = '-I "%CUDA_PATH%\include"'
  $env:CGO_LDFLAGS = '-L "%CUDA_PATH%\lib\x64"'
  ```
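  On Linux, the equivalent would look roughly like this (a sketch assuming CUDA is installed under `/usr/local/cuda`; adjust the path if yours differs):

  ```bash
  export CGO_CFLAGS="-I/usr/local/cuda/include"
  export CGO_LDFLAGS="-L/usr/local/cuda/lib64"
  ```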
- `mumax3.exe` is not generated: if the `mumax3.exe` executable cannot be found after building, try setting the `CGO_ENABLED` environment variable to `1` in your build script.
- `vcvars64.bat` not found or could not initialise the VC environment: open a new command prompt, run the `vcvars64.bat` file there (the error message should contain the path to this batch file), and then compile mumax³ using that shell.