14 | 14 | },
15 | 15 | {
16 | 16 | "cell_type": "markdown",
17 |    | - "id": "ea35b2b7",
   | 17 | + "id": "5b8d9390",
18 | 18 | "metadata": {},
19 | 19 | "source": [
20 |    | - "## Setting up\n",
   | 20 | + "## Overview\n",
21 | 21 | "\n",
22 |    | - "Install the package `langchain-ibm`."
23 |    | - ]
24 |    | - },
25 |    | - {
26 |    | - "cell_type": "code",
27 |    | - "execution_count": 1,
28 |    | - "id": "2f1fff4e",
29 |    | - "metadata": {},
30 |    | - "outputs": [],
31 |    | - "source": [
32 |    | - "!pip install -qU langchain-ibm"
   | 22 | + "### Integration details\n",
   | 23 | + "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/llms/ibm/) | Package downloads | Package latest |\n",
   | 24 | + "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
   | 25 | + "| [WatsonxLLM](https://python.langchain.com/api_reference/ibm/llms/langchain_ibm.llms.WatsonxLLM.html) | [langchain-ibm](https://python.langchain.com/api_reference/ibm/index.html) | ❌ | ❌ | ✅ |  |  |"
33 | 26 | ]
34 | 27 | },
35 | 28 | {
36 | 29 | "cell_type": "markdown",
37 |    | - "id": "f406e092",
   | 30 | + "id": "ea35b2b7",
38 | 31 | "metadata": {},
39 | 32 | "source": [
40 |    | - "This cell defines the WML credentials required to work with watsonx Foundation Model inferencing.\n",
   | 33 | + "## Setup\n",
   | 34 | + "\n",
   | 35 | + "To access IBM watsonx.ai models you'll need to create an IBM watsonx.ai account, get an API key, and install the `langchain-ibm` integration package.\n",
   | 36 | + "\n",
   | 37 | + "### Credentials\n",
   | 38 | + "\n",
   | 39 | + "The cell below defines the credentials required to work with watsonx Foundation Model inferencing.\n",
41 | 40 | "\n",
42 | 41 | "**Action:** Provide the IBM Cloud user API key. For details, see\n",
43 |    | - "[documentation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui)."
   | 42 | + "[Managing user API keys](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui)."
44 | 43 | ]
45 | 44 | },
46 | 45 | {

81 | 80 | "os.environ[\"WATSONX_INSTANCE_ID\"] = \"your instance_id for accessing the CPD cluster\""
82 | 81 | ]
83 | 82 | },
   | 83 | + {
   | 84 | + "cell_type": "markdown",
   | 85 | + "id": "f918d229",
   | 86 | + "metadata": {},
   | 87 | + "source": [
   | 88 | + "### Installation\n",
   | 89 | + "\n",
   | 90 | + "The LangChain IBM integration lives in the `langchain-ibm` package:"
   | 91 | + ]
   | 92 | + },
   | 93 | + {
   | 94 | + "cell_type": "code",
   | 95 | + "execution_count": null,
   | 96 | + "id": "f925c9aa",
   | 97 | + "metadata": {},
   | 98 | + "outputs": [],
   | 99 | + "source": [
   | 100 | + "!pip install -qU langchain-ibm"
   | 101 | + ]
   | 102 | + },
84 | 103 | {
85 | 104 | "cell_type": "markdown",
86 | 105 | "id": "e36acbef",
87 | 106 | "metadata": {},
88 | 107 | "source": [
89 |    | - "## Load the model\n",
   | 108 | + "## Instantiation\n",
90 | 109 | "\n",
91 | 110 | "You might need to adjust model `parameters` for different models or tasks. For details, refer to [documentation](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#metanames.GenTextParamsMetaNames)."
92 | 111 | ]
93 | 112 | },
94 | 113 | {
95 | 114 | "cell_type": "code",
96 |    | - "execution_count": 3,
   | 115 | + "execution_count": 1,
97 | 116 | "id": "407cd500",
98 | 117 | "metadata": {},
99 | 118 | "outputs": [],

124 | 143 | "In this example, we’ll use the `project_id` and Dallas url.\n",
125 | 144 | "\n",
126 | 145 | "\n",
127 |    | - "You need to specify `model_id` that will be used for inferencing. All available models you can find in [documentation](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#ibm_watsonx_ai.foundation_models.utils.enums.ModelTypes)."
   | 146 | + "You need to specify `model_id` that will be used for inferencing. All available models you can find in [documentation](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#TextModels)."
128 | 147 | ]
129 | 148 | },
130 | 149 | {

241 | 260 | "watsonx_llm = WatsonxLLM(watsonx_model=model)"
242 | 261 | ]
243 | 262 | },
244 |    | - {
245 |    | - "cell_type": "markdown",
246 |    | - "id": "c25ecbd1",
247 |    | - "metadata": {},
248 |    | - "source": [
249 |    | - "## Create Chain\n",
250 |    | - "Create `PromptTemplate` objects which will be responsible for creating a random question."
251 |    | - ]
252 |    | - },
253 |    | - {
254 |    | - "cell_type": "code",
255 |    | - "execution_count": 5,
256 |    | - "id": "c7d80c05",
257 |    | - "metadata": {},
258 |    | - "outputs": [],
259 |    | - "source": [
260 |    | - "from langchain_core.prompts import PromptTemplate\n",
261 |    | - "\n",
262 |    | - "template = \"Generate a random question about {topic}: Question: \"\n",
263 |    | - "\n",
264 |    | - "prompt = PromptTemplate.from_template(template)"
265 |    | - ]
266 |    | - },
267 |    | - {
268 |    | - "cell_type": "markdown",
269 |    | - "id": "79056d8e",
270 |    | - "metadata": {},
271 |    | - "source": [
272 |    | - "Provide a topic and run the chain."
273 |    | - ]
274 |    | - },
275 |    | - {
276 |    | - "cell_type": "code",
277 |    | - "execution_count": 9,
278 |    | - "id": "dc076c56",
279 |    | - "metadata": {},
280 |    | - "outputs": [
281 |    | - {
282 |    | - "data": {
283 |    | - "text/plain": [
284 |    | - "'What is the difference between a dog and a wolf?'"
285 |    | - ]
286 |    | - },
287 |    | - "execution_count": 9,
288 |    | - "metadata": {},
289 |    | - "output_type": "execute_result"
290 |    | - }
291 |    | - ],
292 |    | - "source": [
293 |    | - "llm_chain = prompt | watsonx_llm\n",
294 |    | - "\n",
295 |    | - "topic = \"dog\"\n",
296 |    | - "\n",
297 |    | - "llm_chain.invoke(topic)"
298 |    | - ]
299 |    | - },
300 | 263 | {
301 | 264 | "cell_type": "markdown",
302 | 265 | "id": "f571001d",
303 | 266 | "metadata": {},
304 | 267 | "source": [
305 |    | - "## Calling the Model Directly\n",
   | 268 | + "## Invocation\n",
306 | 269 | "To obtain completions, you can call the model directly using a string prompt."
307 | 270 | ]
308 | 271 | },
309 | 272 | {
310 | 273 | "cell_type": "code",
311 |    | - "execution_count": 7,
   | 274 | + "execution_count": 3,
312 | 275 | "id": "beea2b5b",
313 | 276 | "metadata": {},
314 | 277 | "outputs": [
315 | 278 | {
316 | 279 | "data": {
317 | 280 | "text/plain": [
318 |    | - "\"Man's best friend is his dog. \""
   | 281 | + "\"Man's best friend is his dog. Dogs are man's best friend because they are always there for you, they never judge you, and they love you unconditionally. Dogs are also great companions and can help reduce stress levels. \""
319 | 282 | ]
320 | 283 | },
321 |    | - "execution_count": 7,
   | 284 | + "execution_count": 3,
322 | 285 | "metadata": {},
323 | 286 | "output_type": "execute_result"
324 | 287 | }

331 | 294 | },
332 | 295 | {
333 | 296 | "cell_type": "code",
334 |    | - "execution_count": 11,
   | 297 | + "execution_count": 4,
335 | 298 | "id": "8ab1a25a",
336 | 299 | "metadata": {},
337 | 300 | "outputs": [
338 | 301 | {
339 | 302 | "data": {
340 | 303 | "text/plain": [
341 |    | - "LLMResult(generations=[[Generation(text='The fastest dog in the world is the greyhound, which can run up to 45 miles per hour. This is about the same speed as a human running down a track. Greyhounds are very fast because they have long legs, a streamlined body, and a strong tail. They can run this fast for short distances, but they can also run for long distances, like a marathon. ', generation_info={'finish_reason': 'eos_token'})], [Generation(text='The Beagle is a scent hound, meaning it is bred to hunt by following a trail of scents.', generation_info={'finish_reason': 'eos_token'})]], llm_output={'token_usage': {'generated_token_count': 106, 'input_token_count': 13}, 'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': ''}, run=[RunInfo(run_id=UUID('52cb421d-b63f-4c5f-9b04-d4770c664725')), RunInfo(run_id=UUID('df2ea606-1622-4ed7-8d5d-8f6e068b71c4'))])"
   | 304 | + "LLMResult(generations=[[Generation(text='The fastest dog in the world is the greyhound. Greyhounds can run up to 45 mph, which is about the same speed as a Usain Bolt.', generation_info={'finish_reason': 'eos_token'})], [Generation(text='The Labrador Retriever is a breed of retriever that was bred for hunting. They are a very smart breed and are very easy to train. They are also very loyal and will make great companions. ', generation_info={'finish_reason': 'eos_token'})]], llm_output={'token_usage': {'generated_token_count': 82, 'input_token_count': 13}, 'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': None}, run=[RunInfo(run_id=UUID('750b8a0f-8846-456d-93d0-e039e95b1276')), RunInfo(run_id=UUID('aa4c2a1c-5b08-4fcf-87aa-50228de46db5'))], type='LLMResult')"
342 | 305 | ]
343 | 306 | },
344 |    | - "execution_count": 11,
   | 307 | + "execution_count": 4,
345 | 308 | "metadata": {},
346 | 309 | "output_type": "execute_result"
347 | 310 | }

369 | 332 | },
370 | 333 | {
371 | 334 | "cell_type": "code",
372 |    | - "execution_count": 12,
   | 335 | + "execution_count": 5,
373 | 336 | "id": "3f63166a",
374 | 337 | "metadata": {},
375 | 338 | "outputs": [
376 | 339 | {
377 | 340 | "name": "stdout",
378 | 341 | "output_type": "stream",
379 | 342 | "text": [
380 |    | - "My favorite breed of dog is a Labrador Retriever. Labradors are my favorite because they are extremely smart, very friendly, and love to be with people. They are also very playful and love to run around and have a lot of energy. "
   | 343 | + "My favorite breed of dog is a Labrador Retriever. They are my favorite breed because they are my favorite color, yellow. They are also very smart and easy to train. "
381 | 344 | ]
382 | 345 | }
383 | 346 | ],

387 | 350 | "):\n",
388 | 351 | " print(chunk, end=\"\")"
389 | 352 | ]
   | 353 | + },
   | 354 | + {
   | 355 | + "cell_type": "markdown",
   | 356 | + "id": "9fc88fdd",
   | 357 | + "metadata": {},
   | 358 | + "source": [
   | 359 | + "## Chaining\n",
   | 360 | + "Create `PromptTemplate` objects which will be responsible for creating a random question."
   | 361 | + ]
   | 362 | + },
   | 363 | + {
   | 364 | + "cell_type": "code",
   | 365 | + "execution_count": 6,
   | 366 | + "id": "ad63fa27",
   | 367 | + "metadata": {},
   | 368 | + "outputs": [],
   | 369 | + "source": [
   | 370 | + "from langchain_core.prompts import PromptTemplate\n",
   | 371 | + "\n",
   | 372 | + "template = \"Generate a random question about {topic}: Question: \"\n",
   | 373 | + "\n",
   | 374 | + "prompt = PromptTemplate.from_template(template)"
   | 375 | + ]
   | 376 | + },
   | 377 | + {
   | 378 | + "cell_type": "markdown",
   | 379 | + "id": "677699db",
   | 380 | + "metadata": {},
   | 381 | + "source": [
   | 382 | + "Provide a topic and run the chain."
   | 383 | + ]
   | 384 | + },
   | 385 | + {
   | 386 | + "cell_type": "code",
   | 387 | + "execution_count": 7,
   | 388 | + "id": "868af75c",
   | 389 | + "metadata": {},
   | 390 | + "outputs": [
   | 391 | + {
   | 392 | + "data": {
   | 393 | + "text/plain": [
   | 394 | + "'What is the origin of the name \"Pomeranian\"?'"
   | 395 | + ]
   | 396 | + },
   | 397 | + "execution_count": 7,
   | 398 | + "metadata": {},
   | 399 | + "output_type": "execute_result"
   | 400 | + }
   | 401 | + ],
   | 402 | + "source": [
   | 403 | + "llm_chain = prompt | watsonx_llm\n",
   | 404 | + "\n",
   | 405 | + "topic = \"dog\"\n",
   | 406 | + "\n",
   | 407 | + "llm_chain.invoke(topic)"
   | 408 | + ]
390 | 419 | }
391 | 420 | ],
392 | 421 | "metadata": {

405 | 434 | "name": "python",
406 | 435 | "nbconvert_exporter": "python",
407 | 436 | "pygments_lexer": "ipython3",
408 |    | - "version": "3.10.13"
   | 437 | + "version": "3.10.14"
409 | 438 | }
410 | 439 | },
411 | 440 | "nbformat": 4,
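The restructured notebook ends up wiring a `PromptTemplate` into `WatsonxLLM` via the LCEL pipe (`llm_chain = prompt | watsonx_llm`). The shape of those pieces can be sketched without IBM credentials or the `langchain-ibm` package: the dictionary keys below mirror the watsonx `GenTextParamsMetaNames` generation parameters used earlier in the notebook, the values are illustrative, and plain `str.format` stands in for `PromptTemplate` — a minimal sketch, not the integration itself.

```python
# Generation parameters in the shape the notebook passes to WatsonxLLM.
# Keys mirror watsonx GenTextParamsMetaNames; values are illustrative only.
parameters = {
    "decoding_method": "sample",  # "sample" or "greedy"
    "max_new_tokens": 100,        # upper bound on generated tokens
    "min_new_tokens": 1,          # lower bound on generated tokens
    "temperature": 0.5,           # sampling temperature
    "top_k": 50,                  # sample from the 50 most likely tokens
    "top_p": 1,                   # nucleus-sampling cutoff
}

# The chain's PromptTemplate reduces to named-field string formatting:
# invoking the chain with a topic fills the template, and the rendered
# string is what the LLM receives as its prompt.
template = "Generate a random question about {topic}: Question: "
prompt = template.format(topic="dog")
print(prompt)  # → Generate a random question about dog: Question:
```

In the notebook itself, `parameters` is passed as `params=parameters` when constructing `WatsonxLLM`, and the formatting step is performed by the chain on `llm_chain.invoke(topic)`.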