No bullshit prompt engineering: Jinja2 for dynamic prompt templating and LiteLLM for seamless access to a wide range of LLM providers.

Getting Started

🔥 Installation

pip install toucans

🔥 Usage

Initialize a PromptFunction:

from toucans import PromptFunction

sentiment = PromptFunction(
    model="gpt-4",
    temperature=0.7,
    messages=[
        {"role": "system", "content": "You are a helpful mood-sensitive agent."},
        {"role": "user", "content": "Determine the sentiment of the sentence: {{ sentence }}"},
    ],
)
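The {{ sentence }} placeholder is plain Jinja2 syntax, so conditionals, loops, and filters all work inside a message. Roughly, this is the rendering step applied to the user message at call time (a sketch using jinja2 directly):

from jinja2 import Template

# Render the user message template with the runtime argument
template = Template("Determine the sentiment of the sentence: {{ sentence }}")
template.render(sentence="I'm so happy!")
# -> "Determine the sentiment of the sentence: I'm so happy!"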

🔥 Generate Completion

Generate a completion by calling the PromptFunction with a sentence:

completion = sentiment(sentence="I'm so happy!")
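The return type isn't documented here; assuming it mirrors LiteLLM's completion response (an assumption, not a documented guarantee), the generated text would be read like this:

# Assuming a LiteLLM-style response object
print(completion.choices[0].message.content)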

🔥 Batch Generate Completions in Parallel

batch_args = [
    {"sentence": "Toucans is nice Python package!"}, 
    {"sentence": "I hate bloated prompt engineering frameworks!"}
]

completion_batch = sentiment.batch_call(batch_args=batch_args)
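Assuming the results come back as a list in the same order as batch_args (an assumption, not stated here), inputs and outputs can be paired back up:

for args, completion in zip(batch_args, completion_batch):
    print(args["sentence"], "->", completion)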

🔥 Local PromptFunction Serialization

Save/load the PromptFunction to/from a directory:

# Push to dir (not implemented yet)
sentiment.push_to_dir("./sentiment/")

# Load from dir (not implemented yet)
sentiment = PromptFunction.from_dir("./sentiment/")
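Until those land, the round-trip is easy to sketch by hand, assuming a PromptFunction is fully described by its constructor arguments (the config.json layout below is hypothetical, not the package's eventual on-disk format):

import json
import os

config = {
    "model": "gpt-4",
    "temperature": 0.7,
    "messages": [
        {"role": "system", "content": "You are a helpful mood-sensitive agent."},
        {"role": "user", "content": "Determine the sentiment of the sentence: {{ sentence }}"},
    ],
}

# Save the prompt configuration to disk
os.makedirs("./sentiment", exist_ok=True)
with open("./sentiment/config.json", "w") as f:
    json.dump(config, f, indent=2)

# Rebuild the PromptFunction from disk
with open("./sentiment/config.json") as f:
    sentiment = PromptFunction(**json.load(f))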

🔥 The Toucans Hub

Push/pull the PromptFunction to/from the Toucans Hub:

# Push to hub
sentiment.push_to_hub("juunge/sentiment")

# Load from hub
sentiment = PromptFunction.from_hub("juunge/sentiment")

For now, loading from the Toucans Hub requires hosting an instance of it yourself and setting the HUB_API_URL environment variable.
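For example (the URL below is a placeholder for wherever you host the hub):

import os

os.environ["HUB_API_URL"] = "http://localhost:8000"  # placeholder; point at your own hub instance

sentiment = PromptFunction.from_hub("juunge/sentiment")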