update python sdk to have optimize and ape schedule #23

Open · wants to merge 14 commits into base: dev

README.md — 37 changes: 35 additions & 2 deletions

````diff
@@ -23,7 +23,7 @@ You can find our full documentation [here](https://weavel.ai/docs/python-sdk).
 
 ## How to use
 
-### Basic Usage
+### Option 1: Using OpenAI wrapper
 
 ```python
 from weavel import WeavelOpenAI as OpenAI
@@ -42,7 +42,40 @@ response = openai.chat.completions.create(
 
 ```
 
-### Advanced Usage
+### Option 2: Logging inputs/outputs of LLM calls
+
+```python
+from weavel import Weavel
+from openai import OpenAI
+from pydantic import BaseModel
+
+openai = OpenAI()
+# initialize Weavel
+weavel = Weavel()
+
+class Answer(BaseModel):
+    reasoning: str
+    answer: str
+
+question = "What is x if x + 2 = 4?"
+response = openai.beta.chat.completions.parse(
+    model="gpt-4o-2024-08-06",
+    messages=[
+        {"role": "system", "content": "You are a math teacher."},
+        {"role": "user", "content": question}
+    ],
+    response_format=Answer
+).choices[0].message.parsed
+
+# log the generation
+weavel.generation(
+    name="solve-math", # optional
+    inputs={"question": question},
+    outputs=response.model_dump()
+)
+```
+
+### Option 3 (Advanced Usage): OTEL-compatible trace logging
 
 ```python
 from weavel import Weavel
````
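
The Option 3 code block is cut off by the diff here, and the weavel trace API itself is not shown. For readers unfamiliar with the term, "OTEL-compatible" refers to OpenTelemetry's model of a trace as a tree of nested, timed spans. The sketch below illustrates that model using the standard `opentelemetry` SDK only — it is not weavel's API:

```python
# Illustration of the OpenTelemetry (OTEL) span model that "OTEL-compatible
# trace logging" refers to. Uses the standard opentelemetry SDK
# (pip install opentelemetry-sdk), NOT the weavel API, which this diff truncates.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example")

# a trace is a tree of timed spans; child spans nest under the current span
with tracer.start_as_current_span("chat-pipeline"):
    with tracer.start_as_current_span("retrieve-context"):
        pass  # work for this step would go here
    with tracer.start_as_current_span("generate-answer"):
        pass
```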

setup.py — 6 changes: 2 additions & 4 deletions

```diff
@@ -10,7 +10,7 @@
 
 setup(
     name="weavel",
-    version="1.7.1",
+    version="1.9.4",
     packages=find_namespace_packages(),
     entry_points={},
     description="Weavel, Prompt Optimization and Evaluation for LLM Applications",
@@ -32,8 +32,6 @@
         "termcolor",
         "watchdog",
         "readerwriterlock",
-        "pendulum",
-        "httpx[http2]",
         "nest_asyncio",
     ],
     python_requires=">=3.8.10",
@@ -47,6 +45,6 @@
         "dataset curation",
         "prompt engineering",
         "prompt optimization",
-        "AI Prompt Engineer"
+        "AI Prompt Engineer",
     ],
 )
```
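
Since this release drops `pendulum` and `httpx[http2]` from `install_requires`, one way to confirm the trimmed dependency set after installing the branch is to read the distribution metadata. A minimal sketch, assuming the package is installed under the name `weavel`:

```python
# minimal sketch: confirm the installed weavel distribution's version and
# declared dependencies (assumes this branch, 1.9.4, is installed)
from importlib.metadata import requires, version

print(version("weavel"))  # expected: 1.9.4
for req in requires("weavel") or []:
    print(req)  # pendulum and httpx[http2] should no longer appear
```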

weavel/__init__.py — 2 changes: 1 addition & 1 deletion

```diff
@@ -4,4 +4,4 @@
 
 from .utils import *
 
-__version___ = "1.7.1"
+__version___ = "1.9.4"
```
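
One detail worth noting when checking the version at runtime: as the diff shows, `weavel/__init__.py` assigns `__version___` with an extra trailing underscore, so the conventional `__version__` name is not what this file sets. A minimal check:

```python
import weavel

# per the diff above, __init__.py defines __version___ (note the extra
# trailing underscore), so that is the attribute this file assigns
print(weavel.__version___)  # "1.9.4" on this branch
```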

weavel/_body.py — 21 changes: 20 additions & 1 deletion

```diff
@@ -132,7 +132,10 @@ class CaptureRecordBody(BaseRecordBody):
 
 
 class CaptureObservationBody(BaseObservationBody):
-    name: str
+    name: Optional[str] = Field(
+        default=None,
+        description="The name of the observation. Optional.",
+    )
 
 
 class CaptureMessageBody(CaptureRecordBody):
@@ -262,6 +265,22 @@ class CaptureGenerationBody(CaptureObservationBody):
         default=None,
         description="The outputs of the generation. Optional.",
     )
+    messages: Optional[List[Dict[str, str]]] = Field(
+        default=None,
+        description="The messages of the generation. Optional.",
+    )
+    model: Optional[str] = Field(
+        default=None,
+        description="The model of the generation. Optional.",
+    )
+    latency: Optional[float] = Field(
+        default=None,
+        description="The latency of the generation. Optional.",
+    )
+    cost: Optional[float] = Field(
+        default=None,
+        description="The cost of the generation. Optional.",
+    )
     prompt_name: Optional[str] = Field(
         default=None,
         description="The name of the prompt. Optional.",
```
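
The four new optional fields on `CaptureGenerationBody` (`messages`, `model`, `latency`, `cost`) suggest generation logs can now carry chat history and usage metadata. Below is a minimal sketch of how they might be supplied through the client call shown in the README; passing them as keyword arguments to `weavel.generation` is an assumption inferred from these field names, not a confirmed signature:

```python
import time

from openai import OpenAI
from weavel import Weavel

openai = OpenAI()
weavel = Weavel()

messages = [{"role": "user", "content": "What is x if x + 2 = 4?"}]

start = time.perf_counter()
response = openai.chat.completions.create(model="gpt-4o-2024-08-06", messages=messages)
latency = time.perf_counter() - start

# assumption: weavel.generation mirrors the new CaptureGenerationBody fields
# as keyword arguments; name/inputs/outputs are confirmed by the README example
weavel.generation(
    name="solve-math",
    messages=messages,
    outputs={"content": response.choices[0].message.content},
    model="gpt-4o-2024-08-06",
    latency=latency,  # seconds; the expected unit is an assumption
    cost=0.0,         # fill in with your own cost accounting
)
```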