Build

Download the model:

```bash
curl -L "https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF/resolve/main/Llama-3.2-1B-Instruct-IQ3_M.gguf?download=true" -o Llam-3.2-1B.gguf
```
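As a quick sanity check on the download: every GGUF file begins with the 4-byte ASCII magic `GGUF`, so a truncated file or an HTML error page saved by curl will fail this test (a minimal sketch, assuming the output filename `Llam-3.2-1B.gguf` used above):

```bash
# A valid GGUF file starts with the ASCII magic "GGUF"; anything else
# (e.g. an HTML error page saved by curl) means the download went wrong.
magic=$(head -c 4 Llam-3.2-1B.gguf 2>/dev/null)
if [ "$magic" = "GGUF" ]; then
    echo "looks like a GGUF file"
else
    echo "bad or missing download"
fi
```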
AMD CPU

```bash
git clone --depth 1 https://github.com/ggerganov/llama.cpp.git
cd llama.cpp/
cmake -Bbuild
cd build/
make -j8
```
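`make -j8` hardcodes eight parallel jobs; a small sketch (assuming GNU coreutils `nproc` is available) that scales the job count to the machine instead:

```bash
# nproc reports the number of available CPU cores.
jobs=$(nproc)
echo "building with $jobs parallel jobs"
# make -j"$jobs"   # run inside llama.cpp/build/ in place of make -j8
```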
```bash
# -m: model file, -p: prompt, -n: number of tokens to generate
bin/llama-cli -m Llam-3.2-1B.gguf -p "Hi" -n 100
```
Vulkan AMD

Installing the Vulkan packages alone is not enough: the binary built above is CPU-only, so llama.cpp must be reconfigured with its Vulkan backend (`-DGGML_VULKAN=ON`) and rebuilt before `-ngl` has any effect.

```bash
sudo apt install vulkan-tools libvulkan-dev glslc
cmake .. -DGGML_VULKAN=ON && make -j8   # re-configure the build/ tree with the Vulkan backend, then rebuild
# -ngl 1000 offloads up to 1000 layers to the GPU, i.e. the whole model
bin/llama-cli -m Llam-3.2-1B.gguf -p "Hi" -n 100 -ngl 1000
```