
feat(openai): add tags & metadata at the call level #47

Merged
2 commits merged into main on Jul 25, 2024

Conversation

Dam-Buty (Contributor)

This adds the ability to attach tags/metadata to the OpenAI instrumentation at the call level. The original feature (at the instrumentation level) is retained.

This is meant to be used like so:

const instrumentedOpenAi = client.instrumentation.openai();

await instrumentedOpenAi.chat.completions.create(
  {
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What is the capital of Canada?' }
    ]
  },
  {
    literalaiTags: ['tag3', 'tag4'],
    literalaiMetadata: { otherKey: 'otherValue' }
  }
);


Collaborator

We should have an afterAll that deletes the tag. The counterpart of client.instrumentation.openai({ tags: [testId] });.

Contributor Author

I'm not sure I understand. Do you mean an option to remove the tags at some point, so that subsequent calls no longer have them?

In that case I'd rather recommend either creating a new client (there is no overhead, the client is just an instance of a class) or tagging at the call level. WDYT?
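The "just create a new client" suggestion can be sketched with a minimal stand-in class (this is not the real Literal AI SDK, only an illustration of the idea): each instance carries its own instrumentation-level tags, so dropping tags is just a matter of instantiating a fresh client rather than needing an "untag" operation.

```typescript
// Hypothetical stand-in for the client, used only to illustrate the point
// that per-instance state makes an explicit tag-removal API unnecessary.
class FakeLiteralClient {
  private tags: string[] = [];

  // Mirrors the shape of client.instrumentation.openai({ tags }) from the PR.
  instrument(options?: { tags?: string[] }): void {
    this.tags = options?.tags ?? [];
  }

  currentTags(): string[] {
    return [...this.tags];
  }
}

// A tagged client for the test run...
const tagged = new FakeLiteralClient();
tagged.instrument({ tags: ['testId'] });

// ...and a fresh, untagged instance; no cleanup of the first one is needed.
const untagged = new FakeLiteralClient();
untagged.instrument();

console.log(tagged.currentTags());   // ['testId']
console.log(untagged.currentTags()); // []
```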

it('handles tags and metadata on the instrumentation call', async () => {
  const client = new LiteralClient(apiKey, url);
  client.instrumentation.openai({
    tags: ['tag1', 'tag2'],
Collaborator

It looks like super high-level tagging. I'd only use it for A/B testing my code, using it with commit hashes for instance.

Collaborator

I understand now that it's the old style...

Contributor Author

Yes, there are two levels of tagging:

  • at the instrumentation level, where you can use the usual OpenAI client
  • at the call level, where you have to use the instrumentedOpenAi client

I kept the high-level one because removing it would break backwards compatibility.
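The two levels can be sketched as a merge of tag lists (a minimal stand-in, assuming call-level tags are added on top of instrumentation-level ones rather than replacing them; names here are illustrative, not the SDK internals):

```typescript
// Shape of the call-level options added by this PR.
type CallOptions = {
  literalaiTags?: string[];
  literalaiMetadata?: Record<string, unknown>;
};

// Hypothetical helper: instrumentation-level tags apply to every call,
// call-level tags are merged in per request.
function effectiveTags(
  instrumentationTags: string[],
  callOptions?: CallOptions
): string[] {
  return [...instrumentationTags, ...(callOptions?.literalaiTags ?? [])];
}

// Instrumentation level only:
console.log(effectiveTags(['tag1', 'tag2']));
// ['tag1', 'tag2']

// Both levels combined:
console.log(effectiveTags(['tag1', 'tag2'], { literalaiTags: ['tag3', 'tag4'] }));
// ['tag1', 'tag2', 'tag3', 'tag4']
```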

it('handles tags and metadata on the LLM call', async () => {
  const client = new LiteralClient(apiKey, url);

  const instrumentedOpenAi = client.instrumentation.openai({
Collaborator

I would rather have one line that does literalaiClient.instrumentOpenAI() and then use the usual OpenAI client, so I don't have to change my application code.

Contributor Author

Me too! The issue is that this method adds parameters to OpenAI calls (literalaiTags, literalaiMetadata), and the only way for TypeScript to understand them is to return this specifically typed client.

As I commented above, for the moment this leaves two options: you can use the original openai client if you don't tag, or if you tag at the instrumentation level; but you must use this instrumentedOpenAi client if you tag at the call level.
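Why the specially typed client is needed can be illustrated with a small sketch (all names here are stand-ins, not the real OpenAI or Literal AI signatures): the instrumented wrapper widens the options parameter with the extra fields, which the plain client's types would reject.

```typescript
// Extra fields introduced by this PR at the call level.
type LiteralaiExtras = {
  literalaiTags?: string[];
  literalaiMetadata?: Record<string, unknown>;
};

// Stand-in for the plain client: options knows nothing of the extra fields,
// so passing literalaiTags here would be a type error.
class PlainCompletions {
  create(body: { model: string }, options?: { timeout?: number }): string {
    return body.model;
  }
}

// Stand-in for the instrumented client: same method, but the options type is
// widened with the extras, so the call-level fields type-check.
class InstrumentedCompletions {
  create(
    body: { model: string },
    options?: { timeout?: number } & LiteralaiExtras
  ): string {
    const tags = options?.literalaiTags ?? [];
    return `${body.model} [${tags.join(', ')}]`;
  }
}

const plain = new PlainCompletions();
console.log(plain.create({ model: 'gpt-3.5-turbo' }));
// 'gpt-3.5-turbo'

const instrumented = new InstrumentedCompletions();
console.log(
  instrumented.create({ model: 'gpt-3.5-turbo' }, { literalaiTags: ['tag3'] })
);
// 'gpt-3.5-turbo [tag3]'
```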

Collaborator

That's reasonable.

let parentId: Maybe<string>;

await client.thread({ name: 'openai' }).wrap(async () => {
  return client.run({ name: 'openai' }).wrap(async () => {
Collaborator

Can we already add run and thread tags?

Contributor Author

Yes, you can just add them to the constructor!

Collaborator

Okay, that's what most people need.

@desaxce desaxce self-requested a review July 25, 2024 15:43
@Dam-Buty Dam-Buty merged commit 2ac8082 into main Jul 25, 2024
2 checks passed
@Dam-Buty Dam-Buty deleted the damien/eng-1685-add-metadata-support-to-generations branch August 1, 2024 15:04