## Compiling on macOS

To compile the llama.cpp backend on macOS, you need to install `clang` and `cmake` via Homebrew:

```shell
brew install llvm cmake
```
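Before configuring the build, it can help to confirm the Homebrew LLVM toolchain is actually installed and discoverable. A quick sanity check (the exact prefix varies by machine, so we ask `brew` rather than hardcoding it):

```shell
# Print the version of the clang that Homebrew installed.
# `brew --prefix llvm` resolves to the keg path (e.g. /opt/homebrew/opt/llvm
# on Apple silicon, /usr/local/opt/llvm on Intel Macs).
"$(brew --prefix llvm)/bin/clang" --version
```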

You then need to configure CMakeLists.txt to use the newly installed clang compiler. You can do this either through your IDE or by adding the following lines to the top of the file:

```cmake
set(CMAKE_C_COMPILER /opt/homebrew/opt/llvm/bin/clang)
set(CMAKE_CXX_COMPILER /opt/homebrew/opt/llvm/bin/clang++)
```
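Alternatively, instead of editing the file, you can pass the compilers as cache variables on the CMake command line; this is a standard CMake mechanism and keeps CMakeLists.txt unmodified. A sketch, assuming the default Homebrew prefix on Apple silicon (`/opt/homebrew`; substitute `brew --prefix` output if yours differs):

```shell
# Configure an out-of-tree build, overriding the C and C++ compilers
# via CMake cache variables rather than set() calls in CMakeLists.txt.
cmake -B build \
  -DCMAKE_C_COMPILER=/opt/homebrew/opt/llvm/bin/clang \
  -DCMAKE_CXX_COMPILER=/opt/homebrew/opt/llvm/bin/clang++

# Compile using the generated build tree.
cmake --build build
```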

CMakeLists.txt assumes that Homebrew installs `libc++` in `$HOMEBREW_PREFIX/opt/llvm/lib/c++`.
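If linking fails, it is worth verifying that `libc++` actually lives where the build expects it, since the Homebrew prefix differs between Apple silicon (`/opt/homebrew`) and Intel Macs (`/usr/local`). A quick check, using `brew --prefix` as the source of truth:

```shell
# Show the prefix Homebrew is using on this machine, then list the
# libc++ directory that CMakeLists.txt expects under that prefix.
brew --prefix
ls "$(brew --prefix)/opt/llvm/lib/c++"
```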