diff --git a/site/en/gemini-api/docs/get-started/python.ipynb b/site/en/gemini-api/docs/get-started/python.ipynb
index 957d83ec8..c50f732f2 100644
--- a/site/en/gemini-api/docs/get-started/python.ipynb
+++ b/site/en/gemini-api/docs/get-started/python.ipynb
@@ -957,7 +957,7 @@
"id": "AwCqtZ6D4kvk"
},
"source": [
- "`protos.Content` objects contain a list of `protos.Part` objects that each contain either a text (string) or inline_data (`protos.Blob`), where a blob contains binary data and a `mime_type`. The chat history is available as a list of `protos.Content` objects in `ChatSession.history`:"
+ "`genai.protos.Content` objects contain a list of `genai.protos.Part` objects that each contain either a text (string) or inline_data (`genai.protos.Blob`), where a blob contains binary data and a `mime_type`. The chat history is available as a list of `genai.protos.Content` objects in `ChatSession.history`:"
]
},
{
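The history structure this hunk describes can be sketched with the dictionary equivalents the SDK accepts in place of the protos classes. This is an illustrative stand-in (the field names mirror `protos.Content` and `protos.Part`; the messages themselves are made up, not real API output):

```python
# Illustrative stand-in for ChatSession.history: a list of Content-style
# dictionaries, each with a role and a list of parts.
history = [
    {"role": "user", "parts": [{"text": "What is in this photo?"}]},
    {"role": "model", "parts": [{"text": "A sunset over the ocean."}]},
]

for content in history:
    for part in content["parts"]:
        # Each part carries either text or inline_data (a blob holding
        # binary data plus a mime_type), never both.
        assert ("text" in part) != ("inline_data" in part)
```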
@@ -1033,7 +1033,7 @@
"source": [
"## Count tokens\n",
"\n",
- "Large language models have a context window, and the context length is often measured in terms of the **number of tokens**. With the Gemini API, you can determine the number of tokens per any `protos.Content` object. In the simplest case, you can pass a query string to the `GenerativeModel.count_tokens` method as follows:"
+ "Large language models have a context window, and the context length is often measured in terms of the **number of tokens**. With the Gemini API, you can determine the number of tokens for any `genai.protos.Content` object. In the simplest case, you can pass a query string to the `GenerativeModel.count_tokens` method as follows:"
]
},
{
@@ -1188,9 +1188,9 @@
"id": "zBg0eNeml3d4"
},
"source": [
- "While the `genai.embed_content` function accepts simple strings or lists of strings, it is actually built around the `protos.Content` type (like `GenerativeModel.generate_content`). `protos.Content` objects are the primary units of conversation in the API.\n",
+ "While the `genai.embed_content` function accepts simple strings or lists of strings, it is actually built around the `genai.protos.Content` type (like `GenerativeModel.generate_content`). `genai.protos.Content` objects are the primary units of conversation in the API.\n",
"\n",
- "While the `protos.Content` object is multimodal, the `embed_content` method only supports text embeddings. This design gives the API the *possibility* to expand to multimodal embeddings."
+ "While the `genai.protos.Content` object is multimodal, the `embed_content` method only supports text embeddings. This design leaves open the *possibility* of expanding the API to multimodal embeddings."
]
},
{
@@ -1248,7 +1248,7 @@
"id": "jU8juHCxoUKG"
},
"source": [
- "Similarly, the chat history contains a list of `protos.Content` objects, which you can pass directly to the `embed_content` function:"
+ "Similarly, the chat history contains a list of `genai.protos.Content` objects, which you can pass directly to the `embed_content` function:"
]
},
{
@@ -1491,27 +1491,16 @@
"The [`google.generativeai.protos`](https://ai.google.dev/api/python/google/generativeai/protos) submodule provides access to the low level classes used by the API behind the scenes:"
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "l6aafWECnpX6"
- },
- "outputs": [],
- "source": [
- "from google.generativeai import protos"
- ]
- },
{
"cell_type": "markdown",
"metadata": {
"id": "gm1RWcB3n_n0"
},
"source": [
- "The SDK attempts to convert your message to a `protos.Content` object, which contains a list of `protos.Part` objects that each contain either:\n",
+ "The SDK attempts to convert your message to a `genai.protos.Content` object, which contains a list of `genai.protos.Part` objects that each contain either:\n",
"\n",
"1. a text (string)\n",
- "2. `inline_data` (`protos.Blob`), where a blob contains binary `data` and a `mime_type`.\n",
+ "2. `inline_data` (`genai.protos.Blob`), where a blob contains binary `data` and a `mime_type`.\n",
"\n",
"You can also pass any of these classes as an equivalent dictionary.\n",
"\n",
@@ -1530,11 +1519,11 @@
"source": [
"model = genai.GenerativeModel('gemini-1.5-flash')\n",
"response = model.generate_content(\n",
- " protos.Content(\n",
+ " genai.protos.Content(\n",
" parts = [\n",
- " protos.Part(text=\"Write a short, engaging blog post based on this picture.\"),\n",
- " protos.Part(\n",
- " inline_data=protos.Blob(\n",
+ " genai.protos.Part(text=\"Write a short, engaging blog post based on this picture.\"),\n",
+ " genai.protos.Part(\n",
+ " inline_data=genai.protos.Blob(\n",
" mime_type='image/jpeg',\n",
" data=pathlib.Path('image.jpg').read_bytes()\n",
" )\n",
@@ -1581,9 +1570,9 @@
"\n",
"While the `genai.ChatSession` class shown earlier can handle many use cases, it does make some assumptions. If your use case doesn't fit into this chat implementation, it's good to remember that `genai.ChatSession` is just a wrapper around `GenerativeModel.generate_content`. In addition to single requests, it can handle multi-turn conversations.\n",
"\n",
- "The individual messages are `protos.Content` objects or compatible dictionaries, as seen in previous sections. As a dictionary, the message requires `role` and `parts` keys. The `role` in a conversation can either be the `user`, which provides the prompts, or `model`, which provides the responses.\n",
+ "The individual messages are `genai.protos.Content` objects or compatible dictionaries, as seen in previous sections. As a dictionary, the message requires `role` and `parts` keys. The `role` in a conversation can either be the `user`, which provides the prompts, or `model`, which provides the responses.\n",
"\n",
- "Pass a list of `protos.Content` objects and it will be treated as multi-turn chat:"
+ "Pass a list of `genai.protos.Content` objects and it will be treated as a multi-turn chat:"
]
},
{
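A multi-turn input along these lines can be sketched as a list of Content-style dictionaries with `role` and `parts` keys. The prompts here are invented for illustration, and the actual call is commented out because it needs a configured API key:

```python
# A list of Content-style dictionaries is treated as a multi-turn chat;
# roles alternate between 'user' (prompts) and 'model' (responses).
messages = [
    {"role": "user", "parts": ["Briefly explain how a computer works to a young child."]},
    {"role": "model", "parts": ["A computer is like a helpful robot brain..."]},
    {"role": "user", "parts": ["Okay, how about a more detailed explanation for a high schooler?"]},
]

# model = genai.GenerativeModel('gemini-1.5-flash')
# response = model.generate_content(messages)  # requires a configured API key
```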