
Commit 1b064e1

docs: Fix llama.cpp GPU Installation in llamacpp.ipynb (Deprecated Env Variable) (langchain-ai#29659)
- **Description:** The llamacpp.ipynb notebook used a deprecated environment variable, LLAMA_CUBLAS, for llama.cpp installation with GPU support. This commit updates the notebook to use the correct GGML_CUDA variable, fixing the installation error.
- **Issue:** none
- **Dependencies:** none
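For quick reference, the two updated install cells (extracted from the diff below; the leading `!` is the notebook's shell escape) now read:

```
# Build llama-cpp-python with CUDA support; GGML_CUDA replaces the deprecated LLAMA_CUBLAS flag
!CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 pip install llama-cpp-python

# Rebuild an existing installation with GPU support enabled
!CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
```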
1 parent 3645181 commit 1b064e1

File tree

1 file changed, +4 -4 lines changed

docs/docs/integrations/llms/llamacpp.ipynb

+4 -4
@@ -65,7 +65,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
+"!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
 ]
 },
 {
@@ -81,7 +81,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
+"!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
 ]
 },
 {
@@ -149,9 +149,9 @@
 "\n",
 "```\n",
 "set FORCE_CMAKE=1\n",
-"set CMAKE_ARGS=-DLLAMA_CUBLAS=OFF\n",
+"set CMAKE_ARGS=-DGGML_CUDA=OFF\n",
 "```\n",
-"If you have an NVIDIA GPU make sure `DLLAMA_CUBLAS` is set to `ON`\n",
+"If you have an NVIDIA GPU make sure `DGGML_CUDA` is set to `ON`\n",
 "\n",
 "#### Compiling and installing\n",
 "\n",

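Once llama-cpp-python is built with GGML_CUDA, the notebook's GPU section exercises the build by offloading layers with `n_gpu_layers`. A minimal sketch of that usage, assuming a local GGUF model at a placeholder path (not part of this commit):

```python
from langchain_community.llms import LlamaCpp

# Minimal sketch: load a local GGUF model with full GPU offload.
# The model path is a placeholder, not part of this commit.
llm = LlamaCpp(
    model_path="/path/to/model.gguf",  # hypothetical local model file
    n_gpu_layers=-1,  # offload all layers; requires the GGML_CUDA build above
    n_batch=512,
    verbose=True,
)

print(llm.invoke("Question: What is the capital of France? Answer:"))
```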