LLM (LANGCHAIN GEN AI) folder #1143

Closed · wants to merge 2 commits
Large diffs are not rendered by default.
Binary file not shown.
@@ -0,0 +1,51 @@
## LLM5 - Gemini Pro LLM Application

import os

import streamlit as st
import google.generativeai as genai
from dotenv import load_dotenv

# Load environment variables (including GOOGLE_API_KEY) from the .env file
load_dotenv()

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

## Load the Gemini Pro model and open a chat session that keeps history
model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])

def get_gemini_response(question):
    # Send the question to the ongoing chat session and stream back the response chunks
    response = chat.send_message(question, stream=True)
    return response

## Initialize our Streamlit app
st.set_page_config(page_title="Q&A Demo")
st.header("Gemini LLM Application")

## Initialize session state for the chat history if it doesn't exist
if "chat_history" not in st.session_state:
    st.session_state["chat_history"] = []

user_question = st.text_input("Input:", key="input")
submit = st.button("Ask the question")

if submit and user_question:
    response = get_gemini_response(user_question)
    ## Add the user query and the response to the session chat history
    st.session_state["chat_history"].append(("You", user_question))
    st.subheader("The Response is")
    for chunk in response:
        st.write(chunk.text)
        st.session_state["chat_history"].append(("Bot", chunk.text))

st.subheader("The chat history is")
for role, text in st.session_state["chat_history"]:
    st.write(f"{role}: {text}")
@@ -0,0 +1,3 @@
streamlit
google-generativeai
python-dotenv
@@ -0,0 +1,61 @@
## LLM6 - Invoice Extractor using Gemini Pro Vision

import os

import streamlit as st
from PIL import Image
import google.generativeai as genai
from dotenv import load_dotenv

# Load all the environment variables (including GOOGLE_API_KEY) from the .env file
load_dotenv()

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

### Load the Gemini Pro Vision model
model = genai.GenerativeModel("gemini-pro-vision")

def get_gemini_response(system_prompt, image, user_question):
    # The model receives the system prompt, the invoice image, and the user's question
    response = model.generate_content([system_prompt, image[0], user_question])
    return response.text

def input_image_details(uploaded_file):
    if uploaded_file is not None:
        # Read the file into bytes
        bytes_data = uploaded_file.getvalue()
        # Package the bytes in the parts format the Gemini API expects
        image_parts = [
            {
                "mime_type": uploaded_file.type,  # Get the MIME type of the uploaded file
                "data": bytes_data,
            }
        ]
        return image_parts
    else:
        raise FileNotFoundError("No file uploaded")

### Initialize our Streamlit app
st.set_page_config(page_title="MultiLanguage Invoice Extractor")
st.header("MultiLanguage Invoice Extractor")

user_question = st.text_input("Input Prompt: ", key="input")
uploaded_file = st.file_uploader("Choose an image of the invoice...", type=["jpg", "jpeg", "png"])

if uploaded_file is not None:
    image = Image.open(uploaded_file)
    st.image(image, caption="Uploaded Image", use_column_width=True)

submit = st.button("Tell me about the invoice")

input_prompt = """
You are an expert in understanding invoices. We will upload an image of an invoice,
and you will have to answer any question based on the uploaded invoice image.
"""

## If the submit button is clicked
if submit:
    image_data = input_image_details(uploaded_file)
    response = get_gemini_response(input_prompt, image_data, user_question)
    st.subheader("The Response is")
    st.write(response)
@@ -0,0 +1,6 @@
streamlit
google-generativeai
python-dotenv
langchain
PyPDF2
chromadb
Artificial Intelligence/Advanced/LLM (LANGCHAIN GEN AI)/README.md (75 additions, 0 deletions)
@@ -0,0 +1,75 @@
LLM (LANGCHAIN GEN AI)

This pack contains a series of projects developed as part of the LangChain initiative, demonstrating various applications and use cases leveraging language models and generative AI. Each project is designed to showcase the capabilities of different technologies and platforms.

Requirements:
Python 3.8 or higher (Python 3.10 for the LLM5 and LLM6 files)
Streamlit
langchain library
OpenAI API key
Hugging Face API token
LLaMA 2 model file
Google's GenerativeAI API Key
Pillow (Python Imaging Library)


Projects Overview:

1. LLM1-Basics of LLM

1.1. LLM1-Basics of LLM
This project introduces the LangChain library and demonstrates how to use it to interact with OpenAI and Hugging Face models. The examples cover querying information, chaining models for sequential tasks, and using prompt templates.

Pre-requisites:
Set your OpenAI API key and Hugging Face API token as environment variables:
OPEN_API_KEY='your_openai_api_key'
HUGGINGFACEHUB_API_TOKEN='your_huggingface_api_token'
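
As a rough illustration (not code from this PR), the following is a minimal sketch of the querying, chaining, and prompt-template pattern described above, assuming the classic langchain 0.0.x API this project series targets; the prompt text and the Hugging Face model name are placeholders:

import os
from langchain.llms import OpenAI, HuggingFaceHub
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Keys are read from the environment variables named in the prerequisites above
llm = OpenAI(openai_api_key=os.getenv("OPEN_API_KEY"), temperature=0.7)
# A Hugging Face hosted model could be swapped in instead, e.g.:
# llm = HuggingFaceHub(repo_id="google/flan-t5-large", model_kwargs={"temperature": 0.7})

# Prompt template: {country} is filled in when the chain runs
prompt = PromptTemplate(
    input_variables=["country"],
    template="What is the capital of {country}?",
)

# Chain the prompt and the model; chains like this can also be composed sequentially
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("India"))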

1.2. LangChain Chat Models with ChatOpenAI
Project LangChain 02 showcases the integration of ChatOpenAI for interactive conversations. It involves setting up ChatOpenAI, providing system and human messages, and obtaining AI-generated responses.

Pre-requisites:
Set up your OpenAI API key as an environment variable:
OPEN_API_KEY='your-api-key-here'
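
For illustration, a minimal sketch of this conversation flow, again assuming the classic langchain 0.0.x imports; the system and human messages are examples only:

import os
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage

chat = ChatOpenAI(openai_api_key=os.getenv("OPEN_API_KEY"), temperature=0.7)

# A system message sets the assistant's behavior; a human message carries the query
messages = [
    SystemMessage(content="You are a helpful AI assistant."),
    HumanMessage(content="Explain LangChain in one sentence."),
]

# Calling the chat model returns an AIMessage; .content holds the reply text
response = chat(messages)
print(response.content)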

2. LLM2-Querying PDF with AstraDB
This project demonstrates a question-answering demo using Astra DB and LangChain, powered by Vector Search. It involves setting up a Serverless Cassandra with Vector Search on Astra DB and querying PDF documents.

Pre-requisites:
You need a Serverless Cassandra with Vector Search database on Astra DB to run this demo. You should get a DB Token with the role Database Administrator
and copy your Database ID. You also need an OpenAI API key.
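
As a hedged sketch of the overall flow (the PR's own code for this demo is not rendered above), assuming the cassio + classic-langchain stack such demos typically use; the environment variable names, table name, chunking scheme, and PDF file name are placeholders:

import os
import cassio
from PyPDF2 import PdfReader
from langchain.vectorstores.cassandra import Cassandra
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.indexes.vectorstore import VectorStoreIndexWrapper

# Connect to the Serverless Cassandra (Vector Search) database on Astra DB
cassio.init(token=os.getenv("ASTRA_DB_APPLICATION_TOKEN"), database_id=os.getenv("ASTRA_DB_ID"))

# Read the PDF and split its text into chunks for embedding
raw_text = "".join(page.extract_text() for page in PdfReader("sample.pdf").pages)
chunks = [raw_text[i:i + 800] for i in range(0, len(raw_text), 800)]

# Store the chunk embeddings in an Astra DB vector table
vector_store = Cassandra(
    embedding=OpenAIEmbeddings(),
    table_name="qa_mini_demo",
    session=None,   # cassio.init() supplies the session and keyspace
    keyspace=None,
)
vector_store.add_texts(chunks)

# Ask a question against the indexed PDF
index = VectorStoreIndexWrapper(vectorstore=vector_store)
print(index.query("What does the document say about payment terms?", llm=OpenAI()))

Note that cassio is not in the requirements list above and would need to be installed separately if this route is taken.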

3. LLM5-Gemini Pro LLM Application
This Streamlit application serves as a Q&A interface, integrating with Google's GenerativeAI Gemini Pro model. It configures the environment, establishes a Streamlit interface, and facilitates communication with the Gemini Pro model through user inputs. Users can input questions, and the app displays responses from the AI model. It maintains a session-based chat history, showing an ongoing conversation between the user and the AI.

Pre-requisites:
Set up your environment variables:
Create a .env file in the project root.
Add your Google GenerativeAI API key:
GOOGLE_API_KEY=your_google_generativeai_api_key


4. LLM6-Invoice Extractor using Gemini Pro Vision
A Streamlit web application titled "MultiLanguage Invoice Extractor" that leverages Google's Generative AI Gemini Pro Vision model to analyze and extract data from uploaded invoice images. Users can upload invoice images, enter prompts, and get AI-generated insights.

Pre-requisites:
Set up your environment variables:
Create a .env file in the project root.
Add your Google GenerativeAI API key:
GOOGLE_API_KEY=your_google_generativeai_api_key


Note:
To run the LLM5 and LLM6 files (i.e. the '---.py' Python files), follow these steps:
1. Create a conda environment with python version 3.10.
conda create --name myvenv python=3.10
2. Activate the conda environment.
conda activate myvenv
3. Install all the dependencies.
pip install -r requirements.txt
4. Run the file.
streamlit run 'file_name.py'
5. Copy and paste the displayed URL into another tab outside the hub.


