diff --git a/site/en/gemini-api/docs/vision.ipynb b/site/en/gemini-api/docs/vision.ipynb index 360302b2d..8a45b86ff 100644 --- a/site/en/gemini-api/docs/vision.ipynb +++ b/site/en/gemini-api/docs/vision.ipynb @@ -1,848 +1,848 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "Tce3stUlHN0L" - }, - "source": [ - "##### Copyright 2024 Google LLC." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "tuOe1ymfHZPu" - }, - "outputs": [], - "source": [ - "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", - "# you may not use this file except in compliance with the License.\n", - "# You may obtain a copy of the License at\n", - "#\n", - "# https://www.apache.org/licenses/LICENSE-2.0\n", - "#\n", - "# Unless required by applicable law or agreed to in writing, software\n", - "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", - "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", - "# See the License for the specific language governing permissions and\n", - "# limitations under the License." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "084u8u0DpBlo" - }, - "source": [ - "# Explore vision capabilities with the Gemini API" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ZFWzQEqNosrS" - }, - "source": [ - "\n", - " \n", - " \n", - "
\n", - " View on ai.google.dev\n", - " \n", - " Run in Google Colab\n", - " \n", - " View source on GitHub\n", - "
" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "3c5e92a74e64" - }, - "source": [ - "The Gemini API is able to process images and videos, enabling a multitude of\n", - " exciting developer use cases. Some of Gemini's vision capabilities include\n", - " the ability to:\n", - "\n", - "* Caption and answer questions about images\n", - "* Transcribe and reason over PDFs, including long documents up to 2 million token context window\n", - "* Describe, segment, and extract information from videos,\n", - "including both visual frames and audio, up to 90 minutes long\n", - "* Detect objects in an image and return bounding box coordinates for them\n", - "\n", - "This tutorial demonstrates some possible ways to prompt the Gemini API with\n", - "images and video input, provides code examples,\n", - "and outlines prompting best practices with multimodal vision capabilities.\n", - "All output is text-only." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "VxCstRHvpX0r" - }, - "source": [ - "## Setup\n", - "\n", - "Before you use the File API, you need to install the Gemini API SDK package and configure an API key. This section describes how to complete these setup steps." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "G6J_rV2ipmj_" - }, - "source": [ - "### Install the Python SDK and import packages\n", - "\n", - "The Python SDK for the Gemini API is contained in the [google-generativeai](https://pypi.org/project/google-generativeai/) package. Install the dependency using pip." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "mN8x8DPgu9Kv" - }, - "outputs": [], - "source": [ - "!pip install -q -U google-generativeai" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "NInUZ4hwDq6d" - }, - "source": [ - "Import the necessary packages." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "0x3xmmWrDtEH" - }, - "outputs": [], - "source": [ - "import google.generativeai as genai\n", - "from IPython.display import Markdown" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "l8g4hTRotheH" - }, - "source": [ - "### Set up your API key\n", - "\n", - "The File API uses API keys for authentication and access. Uploaded files are associated with the project linked to the API key. Unlike other Gemini APIs that use API keys, your API key also grants access to data you've uploaded to the File API, so take extra care in keeping your API key secure. For more on keeping your keys\n", - "secure, see [Best practices for using API\n", - "keys](https://support.google.com/googleapi/answer/6310037).\n", - "\n", - "Store your API key in a Colab Secret named `GOOGLE_API_KEY`. If you don't already have an API key, or are unfamiliar with Colab Secrets, refer to the [Authentication quickstart](https://github.com/google-gemini/gemini-api-cookbook/blob/main/quickstarts/Authentication.ipynb)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "d6lYXRcjthKV" - }, - "outputs": [], - "source": [ - "from google.colab import userdata\n", - "GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n", - "\n", - "genai.configure(api_key=GOOGLE_API_KEY)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "c-z4zsCUlaru" - }, - "source": [ - "## Prompting with images\n", - "\n", - "In this tutorial, you will upload images using the File API or as inline data and generate content based on those images." 
- ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "AKNehP2tr3Cr" - }, - "source": [ - "### Technical details (images)\n", - "Gemini 1.5 Pro and Flash support a maximum of 3,600 image files.\n", - "\n", - "Images must be in one of the following image data [MIME types](https://developers.google.com/drive/api/guides/ref-export-formats):\n", - "\n", - "- PNG - `image/png`\n", - "- JPEG - `image/jpeg`\n", - "- WEBP - `image/webp`\n", - "- HEIC - `image/heic`\n", - "- HEIF - `image/heif`\n", - "\n", - "Each image is equivalent to 258 tokens.\n", - "\n", - "While there are no specific limits to the number of pixels in an image besides the model’s context window, larger images are scaled down to a maximum resolution of 3072x3072 while preserving their original aspect ratio, while smaller images are scaled up to 768x768 pixels. There is no cost reduction for images at lower sizes, other than bandwidth, or performance improvement for images at higher resolution.\n", - "\n", - "For best results:\n", - "\n", - "* Rotate images to the correct orientation before uploading.\n", - "* Avoid blurry images.\n", - "* If using a single image, place the text prompt after the image." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "2fa34d5c0db8" - }, - "source": [ - "## Image input\n", - "\n", - "For total image payload size less than 20MB, it's recommended to either upload\n", - "base64 encoded images or directly upload locally stored image files." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "8336e412da3e" - }, - "source": [ - "### Base64 encoded images\n", - "\n", - "You can upload public image URLs by encoding them as Base64 payloads.\n", - "You can use the httpx library to fetch the image URLs.\n", - "The following code example shows how to do this:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "aa9a0e452544" - }, - "outputs": [], - "source": [ - "import httpx\n", - "import base64\n", - "\n", - "# Retrieve an image\n", - "image_path = \"https://upload.wikimedia.org/wikipedia/commons/thumb/8/87/Palace_of_Westminster_from_the_dome_on_Methodist_Central_Hall.jpg/2560px-Palace_of_Westminster_from_the_dome_on_Methodist_Central_Hall.jpg\"\n", - "image = httpx.get(image_path)\n", - "\n", - "# Choose a Gemini model\n", - "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro\")\n", - "\n", - "# Create a prompt\n", - "prompt = \"Caption this image.\"\n", - "response = model.generate_content(\n", - " [\n", - " {\n", - " \"mime_type\": \"image/jpeg\",\n", - " \"data\": base64.b64encode(image.content).decode(\"utf-8\"),\n", - " },\n", - " prompt,\n", - " ]\n", - ")\n", - "\n", - "Markdown(\">\" + response.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "f47333dabe62" - }, - "source": [ - "### Multiple images\n", - "\n", - "To prompt with multiple images in Base64 encoded format, you can do the\n", - "following:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "2816ea3d2d91" - }, - "outputs": [], - "source": [ - "import httpx\n", - "import base64\n", - "\n", - "# Retrieve two images\n", - "image_path_1 = \"https://upload.wikimedia.org/wikipedia/commons/thumb/8/87/Palace_of_Westminster_from_the_dome_on_Methodist_Central_Hall.jpg/2560px-Palace_of_Westminster_from_the_dome_on_Methodist_Central_Hall.jpg\"\n", - "image_path_2 = \"https://storage.googleapis.com/generativeai-downloads/images/jetpack.jpg\"\n", - "\n", - "image_1 = httpx.get(image_path_1)\n", - "image_2 = 
httpx.get(image_path_2)\n", - "\n", - "# Create a prompt\n", - "prompt = \"Generate a list of all the objects contained in both images.\"\n", - "\n", - "response = model.generate_content([\n", - "{'mime_type':'image/jpeg', 'data': base64.b64encode(image_1.content).decode('utf-8')},\n", - "{'mime_type':'image/jpeg', 'data': base64.b64encode(image_2.content).decode('utf-8')}, prompt])\n", - "\n", - "Markdown(response.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Lm862F3zthiI" - }, - "source": [ - "### Upload one or more locally stored image files\n", - "\n", - "Alternatively, you can upload one or more locally stored image files.. \n", - "\n", - "You can download and use our drawings of [piranha-infested waters](https://storage.googleapis.com/generativeai-downloads/images/piranha.jpg) and a [firefighter with a cat](https://storage.googleapis.com/generativeai-downloads/images/firefighter.jpg). First, save these files to your local directory.\n", - "\n", - "Then click **Files** on the left sidebar. For each file, click the **Upload** button, then navigate to that file's location and upload it:\n", - "\n", - "\n", - "\n", - "When the combination of files and system instructions that you intend to send is larger than 20 MB in size, use the File API to upload those files. Smaller files can instead be called locally from the Gemini API:\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "XzMhQ8MWub5_" - }, - "outputs": [], - "source": [ - "import PIL.Image\n", - "\n", - "sample_file_2 = PIL.Image.open('piranha.jpg')\n", - "sample_file_3 = PIL.Image.open('firefighter.jpg')" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "da11223550a9" - }, - "outputs": [], - "source": [ - "import google.generativeai as genai\n", - "\n", - "# Choose a Gemini model.\n", - "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", - "\n", - "# Create a prompt.\n", - "prompt = \"Write an advertising jingle based on the items in both images.\"\n", - "\n", - "response = model.generate_content([sample_file_2, sample_file_3, prompt])\n", - "\n", - "Markdown(response.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "736c83de95a1" - }, - "source": [ - "Note that these inline data calls don't include many of the features available\n", - "through the File API, such as getting file metadata,\n", - "[listing](https://ai.google.dev/gemini-api/docs/vision?lang=python#list-files),\n", - "or [deleting files](https://ai.google.dev/gemini-api/docs/vision?lang=python#delete-files)." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "0d6f7af7c2ff" - }, - "source": [ - "### Large image payloads" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "rsdNkDszLBmQ" - }, - "source": [ - "#### Upload an image file using the File API\n", - "\n", - "When the combination of files and system instructions that you intend to send is larger than 20 MB in size, use the File API to upload those files.\n", - "\n", - "**NOTE**: The File API lets you store up to 20 GB of files per project, with a per-file maximum size of 2 GB. Files are stored for 48 hours. They can be accessed in that period with your API key, but cannot be downloaded from the API. It is available at no cost in all regions where the Gemini API is available." 
- ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "qfa2VSqEsulq" - }, - "source": [ - "Upload the image using [`media.upload`](https://ai.google.dev/api/rest/v1beta/media/upload) and print the URI, which is used as a reference in Gemini API calls." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "2e9f9469b337" - }, - "outputs": [], - "source": [ - "!curl -o jetpack.jpg https://storage.googleapis.com/generativeai-downloads/images/jetpack.jpg" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "N9NxXGZKKusG" - }, - "outputs": [], - "source": [ - "# Upload the file and print a confirmation.\n", - "sample_file = genai.upload_file(path=\"jetpack.jpg\",\n", - " display_name=\"Jetpack drawing\")\n", - "\n", - "print(f\"Uploaded file '{sample_file.display_name}' as: {sample_file.uri}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "cto22vhKOvGQ" - }, - "source": [ - "The `response` shows that the File API stored the specified `display_name` for the uploaded file and a `uri` to reference the file in Gemini API calls. Use `response` to track how uploaded files are mapped to URIs.\n", - "\n", - "Depending on your use case, you can also store the URIs in structures such as a `dict` or a database." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ds5iJlaembWe" - }, - "source": [ - "#### Verify image file upload and get metadata\n", - "\n", - "You can verify the API successfully stored the uploaded file and get its metadata by calling [`files.get`](https://ai.google.dev/api/rest/v1beta/files/get) through the SDK. Only the `name` (and by extension, the `uri`) are unique. Use `display_name` to identify files only if you manage uniqueness yourself." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "kLFsVLFHOWSV" - }, - "outputs": [], - "source": [ - "file = genai.get_file(name=sample_file.name)\n", - "print(f\"Retrieved file '{file.display_name}' as: {sample_file.uri}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "BqzIGKBmnFoJ" - }, - "source": [ - "Depending on your use case, you can store the URIs in structures, such as a `dict` or a database." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "EPPOECHzsIGJ" - }, - "source": [ - "#### Prompt with the uploaded image and text\n", - "\n", - "After uploading the file, you can make GenerateContent requests that reference the File API URI. Select the generative model and provide it with a text prompt and the uploaded image." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "ZYVFqmLkl5nE" - }, - "outputs": [], - "source": [ - "# Choose a Gemini model.\n", - "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", - "\n", - "# Prompt the model with text and the previously uploaded image.\n", - "response = model.generate_content([sample_file, \"Describe how this product might be manufactured.\"])\n", - "\n", - "Markdown(response.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "c63c1a7a8e32" - }, - "source": [ - "## Capabilties\n", - "\n", - "This section outlines specific vision capabilities of the Gemini model, including object detection and bounding box coordinates." 
- ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "7e16d742407a" - }, - "source": [ - "### Get bounding boxes\n", - "\n", - "Gemini models are trained to return bounding box coordinates as relative widths or heights in the range of [0, 1]. These values are then scaled by 1000 and converted to integers. Effectively, the coordinates represent the bounding box on a 1000x1000 pixel version of the image. Therefore, you'll need to convert these coordinates back to the dimensions of your original image to accurately map the bounding boxes." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "778dd36334f4" - }, - "outputs": [], - "source": [ - "# Choose a Gemini model.\n", - "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", - "\n", - "# Create a prompt to detect bounding boxes.\n", - "prompt = \"Return a bounding box for each of the objects in this image in [ymin, xmin, ymax, xmax] format.\"\n", - "response = model.generate_content([sample_file_2, prompt])\n", - "\n", - "Markdown(response.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "b8e422c55df2" - }, - "source": [ - "The model returns bounding box coordinates in the format\n", - "`[ymin, xmin, ymax, xmax]`. To convert these normalized coordinates\n", - "to the pixel coordinates of your original image, follow these steps:\n", - "\n", - "1. Divide each output coordinate by 1000.\n", - "1. Multiply the x-coordinates by the original image width.\n", - "1. Multiply the y-coordinates by the original image height.\n", - "\n", - "To explore more detailed examples of generating bounding box coordinates and\n", - "visualizing them on images, review our [Object Detection cookbook example](https://github.com/google-gemini/cookbook/blob/main/examples/Object_detection.ipynb)." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "TaUZc1mvLkHY" - }, - "source": [ - "## Prompting with video\n", - "\n", - "In this tutorial, you will upload a video using the File API and generate content based on those images." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "nDN32NDPxXGX" - }, - "source": [ - "### Technical details (video)\n", - "\n", - "Gemini 1.5 Pro and Flash support up to approximately an hour of video data.\n", - "\n", - "Video must be in one of the following video format [MIME types](https://developers.google.com/drive/api/guides/ref-export-formats):\n", - " - `video/mp4`\n", - " - `video/mpeg`\n", - " - `video/mov`\n", - " - `video/avi`\n", - " - `video/x-flv`\n", - " - `video/mpg`\n", - " - `video/webm`\n", - " - `video/wmv`\n", - " - `video/3gpp`\n", - "\n", - "The File API service currently extracts image frames from videos at 1 frame per second (FPS) and audio at 1Kbps, single channel, adding timestamps every second. These rates are subject to change in the future for improvements in inference.\n", - "\n", - "**NOTE:** The finer details of fast action sequences may be lost at the 1 FPS frame sampling rate. Consider slowing down high-speed clips for improved inference quality.\n", - "\n", - "Individual frames are 258 tokens, and audio is 32 tokens per second. 
With metadata, each second of video becomes ~300 tokens, which means a 1M context window can fit slightly less than an hour of video.\n", - "\n", - "To ask questions about time-stamped locations, use the format `MM:SS`, where the first two digits represent minutes and the last two digits represent seconds.\n", - "\n", - "For best results:\n", - "\n", - "* Use one video per prompt.\n", - "* If using a single video, place the text prompt after the video." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "MNvhBdoDFnTC" - }, - "source": [ - "### Upload a video file to the File API\n", - "\n", - "**NOTE**: The File API lets you store up to 20 GB of files per project, with a per-file maximum size of 2 GB. Files are stored for 48 hours. They can be accessed in that period with your API key, but they cannot be downloaded using any API. It is available at no cost in all regions where the Gemini API is available.\n", - "\n", - "The File API accepts video file formats directly. This example uses the short NASA film [\"Jupiter's Great Red Spot Shrinks and Grows\"](https://www.youtube.com/watch?v=JDi4IdtvDVE0). Credit: Goddard Space Flight Center (GSFC)/David Ladd (2018).\n", - "\n", - "> \"Jupiter's Great Red Spot Shrinks and Grows\" is in the public domain and does not show identifiable people. ([NASA image and media usage guidelines.](https://www.nasa.gov/nasa-brand-center/images-and-media/))\n", - "\n", - "Start by retrieving the short video:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "V4XeFdX1rxaE" - }, - "outputs": [], - "source": [ - "!wget https://storage.googleapis.com/generativeai-downloads/images/GreatRedSpot.mp4" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ZusSiIg2T6ls" - }, - "source": [ - "Upload the video to the File API and print the URI." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "_HzrDdp2Q1Cu" - }, - "outputs": [], - "source": [ - "video_file_name = \"GreatRedSpot.mp4\"\n", - "\n", - "print(f\"Uploading file...\")\n", - "video_file = genai.upload_file(path=video_file_name)\n", - "print(f\"Completed upload: {video_file.uri}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "oOZmTUb4FWOa" - }, - "source": [ - "### Verify file upload and check state\n", - "\n", - "Verify the API has successfully received the files by calling the [`files.get`](https://ai.google.dev/api/rest/v1beta/files/get) method.\n", - "\n", - "**NOTE**: Video files have a `State` field in the File API. When a video is uploaded, it will be in the `PROCESSING` state until it is ready for inference. Only `ACTIVE` files can be used for model inference." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "SHMVCWHkFhJW" - }, - "outputs": [], - "source": [ - "import time\n", - "\n", - "# Check whether the file is ready to be used.\n", - "while video_file.state.name == \"PROCESSING\":\n", - " print('.', end='')\n", - " time.sleep(10)\n", - " video_file = genai.get_file(video_file.name)\n", - "\n", - "if video_file.state.name == \"FAILED\":\n", - " raise ValueError(video_file.state.name)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "IYIIHsvQt0_W" - }, - "source": [ - "### Prompt with a video and text\n", - "\n", - "Once the uploaded video is in the `ACTIVE` state, you can make `GenerateContent` requests that specify the File API URI for that video. 
Select the generative model and provide it with the uploaded video and a text prompt." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "sHH0ZR6Yt42S" - }, - "outputs": [], - "source": [ - "# Create the prompt.\n", - "prompt = \"Summarize this video. Then create a quiz with answer key based on the information in the video.\"\n", - "\n", - "# Choose a Gemini model.\n", - "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", - "\n", - "# Make the LLM request.\n", - "print(\"Making LLM inference request...\")\n", - "response = model.generate_content([video_file, prompt],\n", - " request_options={\"timeout\": 600})\n", - "\n", - "# Print the response, rendering any Markdown\n", - "Markdown(response.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "zS5NmQeXLqeS" - }, - "source": [ - "### Refer to timestamps in the content\n", - "\n", - "You can use timestamps of the form `MM:SS` to refer to specific moments in the video." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "ypZuGQ-2LqeS" - }, - "outputs": [], - "source": [ - "# Create the prompt.\n", - "prompt = \"What are the examples given at 01:05 and 01:19 supposed to show us?\"\n", - "\n", - "# Choose a Gemini model.\n", - "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", - "\n", - "# Make the LLM request.\n", - "print(\"Making LLM inference request...\")\n", - "response = model.generate_content([prompt, video_file],\n", - " request_options={\"timeout\": 600})\n", - "Markdown(response.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "JQE0XjgMZSJo" - }, - "source": [ - "### Transcribe video and provide visual descriptions\n", - "\n", - "The Gemini models can transcribe and provide visual descriptions of video content\n", - "by processing both the audio track and visual frames.\n", - "For visual descriptions, the model samples the video at a rate of **1 frame\n", - "per second**. This sampling rate may affect the level of detail in the\n", - "descriptions, particularly for videos with rapidly changing visuals." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "_JrcMsYnYXpJ" - }, - "outputs": [], - "source": [ - "# Create the prompt.\n", - "prompt = \"Transcribe the audio from this video, giving timestamps for salient events in the video. Also provide visual descriptions.\"\n", - "\n", - "# Choose a Gemini model.\n", - "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", - "\n", - "# Make the LLM request.\n", - "print(\"Making LLM inference request...\")\n", - "response = model.generate_content([video_file, prompt],\n", - " request_options={\"timeout\": 600})\n", - "Markdown(esponse.text)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "VosrkvAyrx-v" - }, - "source": [ - "## List files\n", - "\n", - "You can list all uploaded files and their URIs using `files.list_files()`." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "O82e6E2Irzlj" - }, - "outputs": [], - "source": [ - "# List all files\n", - "for file in genai.list_files():\n", - " print(f\"{file.display_name}, URI: {file.uri}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "diCy9BgjLqeS" - }, - "source": [ - "## Delete files\n", - "\n", - "Files are automatically deleted after 2 days. You can also manually delete them using `files.delete()`." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "YYyi5PrKLqeb" - }, - "outputs": [], - "source": [ - "genai.delete_file(video_file.name)\n", - "print(f'Deleted file {video_file.uri}')" - ] - } - ], - "metadata": { - "colab": { - "name": "vision.ipynb", - "toc_visible": true - }, - "kernelspec": { - "display_name": "Python 3", - "name": "python3" - } - }, - "nbformat": 4, - "nbformat_minor": 0 + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "Tce3stUlHN0L" + }, + "source": [ + "##### Copyright 2024 Google LLC." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "tuOe1ymfHZPu" + }, + "outputs": [], + "source": [ + "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "# you may not use this file except in compliance with the License.\n", + "# You may obtain a copy of the License at\n", + "#\n", + "# https://www.apache.org/licenses/LICENSE-2.0\n", + "#\n", + "# Unless required by applicable law or agreed to in writing, software\n", + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "# See the License for the specific language governing permissions and\n", + "# limitations under the License." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "084u8u0DpBlo" + }, + "source": [ + "# Explore vision capabilities with the Gemini API" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ZFWzQEqNosrS" + }, + "source": [ + "\n", + " \n", + " \n", + "
\n", + " View on ai.google.dev\n", + " \n", + " Run in Google Colab\n", + " \n", + " View source on GitHub\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3c5e92a74e64" + }, + "source": [ + "The Gemini API is able to process images and videos, enabling a multitude of\n", + " exciting developer use cases. Some of Gemini's vision capabilities include\n", + " the ability to:\n", + "\n", + "* Caption and answer questions about images\n", + "* Transcribe and reason over PDFs, including long documents up to 2 million token context window\n", + "* Describe, segment, and extract information from videos,\n", + "including both visual frames and audio, up to 90 minutes long\n", + "* Detect objects in an image and return bounding box coordinates for them\n", + "\n", + "This tutorial demonstrates some possible ways to prompt the Gemini API with\n", + "images and video input, provides code examples,\n", + "and outlines prompting best practices with multimodal vision capabilities.\n", + "All output is text-only." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "VxCstRHvpX0r" + }, + "source": [ + "## Setup\n", + "\n", + "Before you use the File API, you need to install the Gemini API SDK package and configure an API key. This section describes how to complete these setup steps." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "G6J_rV2ipmj_" + }, + "source": [ + "### Install the Python SDK and import packages\n", + "\n", + "The Python SDK for the Gemini API is contained in the [google-generativeai](https://pypi.org/project/google-generativeai/) package. Install the dependency using pip." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "mN8x8DPgu9Kv" + }, + "outputs": [], + "source": [ + "!pip install -q -U google-generativeai" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "NInUZ4hwDq6d" + }, + "source": [ + "Import the necessary packages." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "0x3xmmWrDtEH" + }, + "outputs": [], + "source": [ + "import google.generativeai as genai\n", + "from IPython.display import Markdown" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "l8g4hTRotheH" + }, + "source": [ + "### Set up your API key\n", + "\n", + "The File API uses API keys for authentication and access. Uploaded files are associated with the project linked to the API key. Unlike other Gemini APIs that use API keys, your API key also grants access to data you've uploaded to the File API, so take extra care in keeping your API key secure. For more on keeping your keys\n", + "secure, see [Best practices for using API\n", + "keys](https://support.google.com/googleapi/answer/6310037).\n", + "\n", + "Store your API key in a Colab Secret named `GOOGLE_API_KEY`. If you don't already have an API key, or are unfamiliar with Colab Secrets, refer to the [Authentication quickstart](https://github.com/google-gemini/gemini-api-cookbook/blob/main/quickstarts/Authentication.ipynb)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "d6lYXRcjthKV" + }, + "outputs": [], + "source": [ + "from google.colab import userdata\n", + "GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n", + "\n", + "genai.configure(api_key=GOOGLE_API_KEY)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "c-z4zsCUlaru" + }, + "source": [ + "## Prompting with images\n", + "\n", + "In this tutorial, you will upload images using the File API or as inline data and generate content based on those images." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "AKNehP2tr3Cr" + }, + "source": [ + "### Technical details (images)\n", + "Gemini 1.5 Pro and Flash support a maximum of 3,600 image files.\n", + "\n", + "Images must be in one of the following image data [MIME types](https://developers.google.com/drive/api/guides/ref-export-formats):\n", + "\n", + "- PNG - `image/png`\n", + "- JPEG - `image/jpeg`\n", + "- WEBP - `image/webp`\n", + "- HEIC - `image/heic`\n", + "- HEIF - `image/heif`\n", + "\n", + "Each image is equivalent to 258 tokens.\n", + "\n", + "While there are no specific limits to the number of pixels in an image besides the model’s context window, larger images are scaled down to a maximum resolution of 3072x3072 while preserving their original aspect ratio, while smaller images are scaled up to 768x768 pixels. There is no cost reduction for images at lower sizes, other than bandwidth, or performance improvement for images at higher resolution.\n", + "\n", + "For best results:\n", + "\n", + "* Rotate images to the correct orientation before uploading.\n", + "* Avoid blurry images.\n", + "* If using a single image, place the text prompt after the image." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "2fa34d5c0db8" + }, + "source": [ + "## Image input\n", + "\n", + "For total image payload size less than 20MB, it's recommended to either upload\n", + "base64 encoded images or directly upload locally stored image files." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "8336e412da3e" + }, + "source": [ + "### Base64 encoded images\n", + "\n", + "You can upload public image URLs by encoding them as Base64 payloads.\n", + "You can use the httpx library to fetch the image URLs.\n", + "The following code example shows how to do this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "aa9a0e452544" + }, + "outputs": [], + "source": [ + "import httpx\n", + "import base64\n", + "\n", + "# Retrieve an image\n", + "image_path = \"https://upload.wikimedia.org/wikipedia/commons/thumb/8/87/Palace_of_Westminster_from_the_dome_on_Methodist_Central_Hall.jpg/2560px-Palace_of_Westminster_from_the_dome_on_Methodist_Central_Hall.jpg\"\n", + "image = httpx.get(image_path)\n", + "\n", + "# Choose a Gemini model\n", + "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro\")\n", + "\n", + "# Create a prompt\n", + "prompt = \"Caption this image.\"\n", + "response = model.generate_content(\n", + " [\n", + " {\n", + " \"mime_type\": \"image/jpeg\",\n", + " \"data\": base64.b64encode(image.content).decode(\"utf-8\"),\n", + " },\n", + " prompt,\n", + " ]\n", + ")\n", + "\n", + "Markdown(\">\" + response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "f47333dabe62" + }, + "source": [ + "### Multiple images\n", + "\n", + "To prompt with multiple images in Base64 encoded format, you can do the\n", + "following:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "2816ea3d2d91" + }, + "outputs": [], + "source": [ + "import httpx\n", + "import base64\n", + "\n", + "# Retrieve two images\n", + "image_path_1 = \"https://upload.wikimedia.org/wikipedia/commons/thumb/8/87/Palace_of_Westminster_from_the_dome_on_Methodist_Central_Hall.jpg/2560px-Palace_of_Westminster_from_the_dome_on_Methodist_Central_Hall.jpg\"\n", + "image_path_2 = \"https://storage.googleapis.com/generativeai-downloads/images/jetpack.jpg\"\n", + "\n", + "image_1 = httpx.get(image_path_1)\n", + "image_2 = 
httpx.get(image_path_2)\n", + "\n", + "# Create a prompt\n", + "prompt = \"Generate a list of all the objects contained in both images.\"\n", + "\n", + "response = model.generate_content([\n", + "{'mime_type':'image/jpeg', 'data': base64.b64encode(image_1.content).decode('utf-8')},\n", + "{'mime_type':'image/jpeg', 'data': base64.b64encode(image_2.content).decode('utf-8')}, prompt])\n", + "\n", + "Markdown(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Lm862F3zthiI" + }, + "source": [ + "### Upload one or more locally stored image files\n", + "\n", + "Alternatively, you can upload one or more locally stored image files.\n", + "\n", + "You can download and use our drawings of [piranha-infested waters](https://storage.googleapis.com/generativeai-downloads/images/piranha.jpg) and a [firefighter with a cat](https://storage.googleapis.com/generativeai-downloads/images/firefighter.jpg). First, save these files to your local directory.\n", + "\n", + "Then click **Files** on the left sidebar. For each file, click the **Upload** button, then navigate to that file's location and upload it:\n", + "\n", + "\n", + "\n", + "When the combination of files and system instructions that you intend to send is larger than 20 MB in size, use the File API to upload those files. Smaller files can instead be passed to the Gemini API as inline data:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "XzMhQ8MWub5_" + }, + "outputs": [], + "source": [ + "import PIL.Image\n", + "\n", + "sample_file_2 = PIL.Image.open('piranha.jpg')\n", + "sample_file_3 = PIL.Image.open('firefighter.jpg')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "da11223550a9" + }, + "outputs": [], + "source": [ + "import google.generativeai as genai\n", + "\n", + "# Choose a Gemini model.\n", + "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", + "\n", + "# Create a prompt.\n", + "prompt = \"Write an advertising jingle based on the items in both images.\"\n", + "\n", + "response = model.generate_content([sample_file_2, sample_file_3, prompt])\n", + "\n", + "Markdown(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "736c83de95a1" + }, + "source": [ + "Note that these inline data calls don't include many of the features available\n", + "through the File API, such as getting file metadata,\n", + "[listing](https://ai.google.dev/gemini-api/docs/vision?lang=python#list-files),\n", + "or [deleting files](https://ai.google.dev/gemini-api/docs/vision?lang=python#delete-files)." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "0d6f7af7c2ff" + }, + "source": [ + "### Large image payloads" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "rsdNkDszLBmQ" + }, + "source": [ + "#### Upload an image file using the File API\n", + "\n", + "When the combination of files and system instructions that you intend to send is larger than 20 MB in size, use the File API to upload those files.\n", + "\n", + "**NOTE**: The File API lets you store up to 20 GB of files per project, with a per-file maximum size of 2 GB. Files are stored for 48 hours. They can be accessed in that period with your API key, but cannot be downloaded from the API. It is available at no cost in all regions where the Gemini API is available."
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "qfa2VSqEsulq" + }, + "source": [ + "Upload the image using [`media.upload`](https://ai.google.dev/api/rest/v1beta/media/upload) and print the URI, which is used as a reference in Gemini API calls." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "2e9f9469b337" + }, + "outputs": [], + "source": [ + "!curl -o jetpack.jpg https://storage.googleapis.com/generativeai-downloads/images/jetpack.jpg" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "N9NxXGZKKusG" + }, + "outputs": [], + "source": [ + "# Upload the file and print a confirmation.\n", + "sample_file = genai.upload_file(path=\"jetpack.jpg\",\n", + "                            display_name=\"Jetpack drawing\")\n", + "\n", + "print(f\"Uploaded file '{sample_file.display_name}' as: {sample_file.uri}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "cto22vhKOvGQ" + }, + "source": [ + "The response from `genai.upload_file`, saved here as `sample_file`, shows that the File API stored the specified `display_name` for the uploaded file along with a `uri` that references the file in Gemini API calls. Use `sample_file` to track how uploaded files are mapped to URIs.\n", + "\n", + "Depending on your use case, you can also store the URIs in structures such as a `dict` or a database." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ds5iJlaembWe" + }, + "source": [ + "#### Verify image file upload and get metadata\n", + "\n", + "You can verify the API successfully stored the uploaded file and get its metadata by calling [`files.get`](https://ai.google.dev/api/rest/v1beta/files/get) through the SDK. Only the `name` (and by extension, the `uri`) is unique. Use `display_name` to identify files only if you manage uniqueness yourself." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "kLFsVLFHOWSV" + }, + "outputs": [], + "source": [ + "file = genai.get_file(name=sample_file.name)\n", + "print(f\"Retrieved file '{file.display_name}' as: {file.uri}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "BqzIGKBmnFoJ" + }, + "source": [ + "Depending on your use case, you can store the URIs in structures, such as a `dict` or a database." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "EPPOECHzsIGJ" + }, + "source": [ + "#### Prompt with the uploaded image and text\n", + "\n", + "After uploading the file, you can make GenerateContent requests that reference the File API URI. Select the generative model and provide it with a text prompt and the uploaded image." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ZYVFqmLkl5nE" + }, + "outputs": [], + "source": [ + "# Choose a Gemini model.\n", + "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", + "\n", + "# Prompt the model with text and the previously uploaded image.\n", + "response = model.generate_content([sample_file, \"Describe how this product might be manufactured.\"])\n", + "\n", + "Markdown(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "c63c1a7a8e32" + }, + "source": [ + "## Capabilities\n", + "\n", + "This section outlines specific vision capabilities of the Gemini model, including object detection and bounding box coordinates."
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "7e16d742407a" + }, + "source": [ + "### Get bounding boxes\n", + "\n", + "Gemini models are trained to return bounding box coordinates as relative widths or heights in the range of [0, 1]. These values are then scaled by 1000 and converted to integers. Effectively, the coordinates represent the bounding box on a 1000x1000 pixel version of the image. Therefore, you'll need to convert these coordinates back to the dimensions of your original image to accurately map the bounding boxes." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "778dd36334f4" + }, + "outputs": [], + "source": [ + "# Choose a Gemini model.\n", + "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", + "\n", + "# Create a prompt to detect bounding boxes.\n", + "prompt = \"Return a bounding box for each of the objects in this image in [ymin, xmin, ymax, xmax] format.\"\n", + "response = model.generate_content([sample_file_2, prompt])\n", + "\n", + "Markdown(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "b8e422c55df2" + }, + "source": [ + "The model returns bounding box coordinates in the format\n", + "`[ymin, xmin, ymax, xmax]`. To convert these normalized coordinates\n", + "to the pixel coordinates of your original image, follow these steps:\n", + "\n", + "1. Divide each output coordinate by 1000.\n", + "1. Multiply the x-coordinates by the original image width.\n", + "1. Multiply the y-coordinates by the original image height.\n", + "\n", + "To explore more detailed examples of generating bounding box coordinates and\n", + "visualizing them on images, review our [Object Detection cookbook example](https://github.com/google-gemini/cookbook/blob/main/examples/Object_detection.ipynb)." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "TaUZc1mvLkHY" + }, + "source": [ + "## Prompting with video\n", + "\n", + "In this tutorial, you will upload a video using the File API and generate content based on that video." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "nDN32NDPxXGX" + }, + "source": [ + "### Technical details (video)\n", + "\n", + "Gemini 1.5 Pro and Flash support up to approximately an hour of video data.\n", + "\n", + "Video must be in one of the following video format [MIME types](https://developers.google.com/drive/api/guides/ref-export-formats):\n", + " - `video/mp4`\n", + " - `video/mpeg`\n", + " - `video/mov`\n", + " - `video/avi`\n", + " - `video/x-flv`\n", + " - `video/mpg`\n", + " - `video/webm`\n", + " - `video/wmv`\n", + " - `video/3gpp`\n", + "\n", + "The File API service currently extracts image frames from videos at 1 frame per second (FPS) and audio at 1Kbps, single channel, adding timestamps every second. These rates are subject to change in the future for improvements in inference.\n", + "\n", + "**NOTE:** The finer details of fast action sequences may be lost at the 1 FPS frame sampling rate. Consider slowing down high-speed clips for improved inference quality.\n", + "\n", + "Individual frames are 258 tokens, and audio is 32 tokens per second. 
With metadata, each second of video becomes ~300 tokens, which means a 1M context window can fit slightly less than an hour of video.\n", + "\n", + "To ask questions about time-stamped locations, use the format `MM:SS`, where the first two digits represent minutes and the last two digits represent seconds.\n", + "\n", + "For best results:\n", + "\n", + "* Use one video per prompt.\n", + "* If using a single video, place the text prompt after the video." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "MNvhBdoDFnTC" + }, + "source": [ + "### Upload a video file to the File API\n", + "\n", + "**NOTE**: The File API lets you store up to 20 GB of files per project, with a per-file maximum size of 2 GB. Files are stored for 48 hours. They can be accessed in that period with your API key, but they cannot be downloaded using any API. It is available at no cost in all regions where the Gemini API is available.\n", + "\n", + "The File API accepts video file formats directly. This example uses the short NASA film [\"Jupiter's Great Red Spot Shrinks and Grows\"](https://www.youtube.com/watch?v=JDi4IdtvDVE0). Credit: Goddard Space Flight Center (GSFC)/David Ladd (2018).\n", + "\n", + "> \"Jupiter's Great Red Spot Shrinks and Grows\" is in the public domain and does not show identifiable people. ([NASA image and media usage guidelines.](https://www.nasa.gov/nasa-brand-center/images-and-media/))\n", + "\n", + "Start by retrieving the short video:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "V4XeFdX1rxaE" + }, + "outputs": [], + "source": [ + "!wget https://storage.googleapis.com/generativeai-downloads/images/GreatRedSpot.mp4" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ZusSiIg2T6ls" + }, + "source": [ + "Upload the video to the File API and print the URI." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "_HzrDdp2Q1Cu" + }, + "outputs": [], + "source": [ + "video_file_name = \"GreatRedSpot.mp4\"\n", + "\n", + "print(f\"Uploading file...\")\n", + "video_file = genai.upload_file(path=video_file_name)\n", + "print(f\"Completed upload: {video_file.uri}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "oOZmTUb4FWOa" + }, + "source": [ + "### Verify file upload and check state\n", + "\n", + "Verify the API has successfully received the files by calling the [`files.get`](https://ai.google.dev/api/rest/v1beta/files/get) method.\n", + "\n", + "**NOTE**: Video files have a `State` field in the File API. When a video is uploaded, it will be in the `PROCESSING` state until it is ready for inference. Only `ACTIVE` files can be used for model inference." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "SHMVCWHkFhJW" + }, + "outputs": [], + "source": [ + "import time\n", + "\n", + "# Check whether the file is ready to be used.\n", + "while video_file.state.name == \"PROCESSING\":\n", + " print('.', end='')\n", + " time.sleep(10)\n", + " video_file = genai.get_file(video_file.name)\n", + "\n", + "if video_file.state.name == \"FAILED\":\n", + " raise ValueError(video_file.state.name)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "IYIIHsvQt0_W" + }, + "source": [ + "### Prompt with a video and text\n", + "\n", + "Once the uploaded video is in the `ACTIVE` state, you can make `GenerateContent` requests that specify the File API URI for that video. 
Select the generative model and provide it with the uploaded video and a text prompt." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "sHH0ZR6Yt42S" + }, + "outputs": [], + "source": [ + "# Create the prompt.\n", + "prompt = \"Summarize this video. Then create a quiz with answer key based on the information in the video.\"\n", + "\n", + "# Choose a Gemini model.\n", + "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", + "\n", + "# Make the LLM request.\n", + "print(\"Making LLM inference request...\")\n", + "response = model.generate_content([video_file, prompt],\n", + "                                  request_options={\"timeout\": 600})\n", + "\n", + "# Print the response, rendering any Markdown\n", + "Markdown(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "zS5NmQeXLqeS" + }, + "source": [ + "### Refer to timestamps in the content\n", + "\n", + "You can use timestamps of the form `MM:SS` to refer to specific moments in the video." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ypZuGQ-2LqeS" + }, + "outputs": [], + "source": [ + "# Create the prompt.\n", + "prompt = \"What are the examples given at 01:05 and 01:19 supposed to show us?\"\n", + "\n", + "# Choose a Gemini model.\n", + "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", + "\n", + "# Make the LLM request.\n", + "print(\"Making LLM inference request...\")\n", + "response = model.generate_content([prompt, video_file],\n", + "                                  request_options={\"timeout\": 600})\n", + "Markdown(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JQE0XjgMZSJo" + }, + "source": [ + "### Transcribe video and provide visual descriptions\n", + "\n", + "The Gemini models can transcribe and provide visual descriptions of video content\n", + "by processing both the audio track and visual frames.\n", + "For visual descriptions, the model samples the video at a rate of **1 frame\n", + "per second**. This sampling rate may affect the level of detail in the\n", + "descriptions, particularly for videos with rapidly changing visuals." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "_JrcMsYnYXpJ" + }, + "outputs": [], + "source": [ + "# Create the prompt.\n", + "prompt = \"Transcribe the audio from this video, giving timestamps for salient events in the video. Also provide visual descriptions.\"\n", + "\n", + "# Choose a Gemini model.\n", + "model = genai.GenerativeModel(model_name=\"gemini-1.5-pro-latest\")\n", + "\n", + "# Make the LLM request.\n", + "print(\"Making LLM inference request...\")\n", + "response = model.generate_content([video_file, prompt],\n", + "                                  request_options={\"timeout\": 600})\n", + "Markdown(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "VosrkvAyrx-v" + }, + "source": [ + "## List files\n", + "\n", + "You can list all uploaded files and their URIs using `genai.list_files()`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "O82e6E2Irzlj" + }, + "outputs": [], + "source": [ + "# List all files\n", + "for file in genai.list_files():\n", + "    print(f\"{file.display_name}, URI: {file.uri}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "diCy9BgjLqeS" + }, + "source": [ + "## Delete files\n", + "\n", + "Files are automatically deleted after 2 days. You can also manually delete them using `genai.delete_file()`."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "YYyi5PrKLqeb" + }, + "outputs": [], + "source": [ + "genai.delete_file(video_file.name)\n", + "print(f'Deleted file {video_file.uri}')" + ] + } + ], + "metadata": { + "colab": { + "name": "vision.ipynb", + "toc_visible": true + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + } + }, + "nbformat": 4, + "nbformat_minor": 0 }