From 72bbb1e9142cbca57fae044a1e05a9d4a1c2f5c8 Mon Sep 17 00:00:00 2001 From: Damien BUTY Date: Mon, 26 Aug 2024 10:37:30 +0200 Subject: [PATCH 01/12] wip --- document.txt | 40 ++++++++++++++++++ langchain-rag.ts | 78 +++++++++++++++++++++++++++++++++++ llamaindex.ts | 43 +++++++++++++++++++ openai.ts | 41 ++++++++++++++++++ package-lock.json | 52 +++++++++++------------ package.json | 1 + src/instrumentation/index.ts | 3 ++ src/instrumentation/openai.ts | 4 ++ 8 files changed, 235 insertions(+), 27 deletions(-) create mode 100644 document.txt create mode 100644 langchain-rag.ts create mode 100644 llamaindex.ts create mode 100644 openai.ts diff --git a/document.txt b/document.txt new file mode 100644 index 0000000..e00612a --- /dev/null +++ b/document.txt @@ -0,0 +1,40 @@ +Pokémon is a Japanese media franchise consisting of video games, animated series and films, a trading card game, and other related media. The franchise takes place in a shared universe in which humans co-exist with creatures known as Pokémon, a large variety of species endowed with special powers. The franchise's target audience is children aged 5 to 12, but it is known to attract people of all ages. + +The franchise originated as a pair of role-playing games developed by Game Freak, from an original concept by its founder, Satoshi Tajiri. Released on the Game Boy on February 27, 1996, the games became sleeper hits and were followed by manga series, a trading card game, and anime series and films. From 1998 to 2000, Pokémon was exported to the rest of the world, creating an unprecedented global phenomenon dubbed "Pokémania". By 2002, the craze had ended, after which Pokémon became a fixture in popular culture, with new products being released to this day. In the summer of 2016, the franchise spawned a second craze with the release of Pokémon Go, an augmented reality game developed by Niantic. Pokémon has since been estimated to be the world's highest-grossing media franchise and one of the best-selling video game franchises. + +Pokémon has an uncommon ownership structure. Unlike most IPs, which are owned by one company, Pokémon is jointly owned by three: Nintendo, Game Freak, and Creatures. Game Freak develops the core series role-playing games, which are published by Nintendo exclusively for their consoles, while Creatures manages the trading card game and related merchandise, occasionally developing spin-off titles. The three companies established The Pokémon Company (TPC) in 1998 to manage the Pokémon property within Asia. The Pokémon anime series and films are co-owned by Shogakukan. Since 2009, The Pokémon Company International (TPCi) subsidiary of TPC has managed the franchise in all regions outside of Asia. + +## Name +The original full name of the franchise is Pocket Monsters (ポケットモンスター, Poketto Monsutā), which has been commonly abbreviated to Pokemon (ポケモン) since its launch. When the franchise was released internationally, the short form of the title was used, with an acute accent (´) over the e to aid in pronunciation. + +Pokémon refers to both the franchise itself and the creatures within its fictional universe. As a noun, it is identical in both the singular and plural, as is every individual species name; it is grammatically correct to say "one Pokémon" and "many Pokémon", as well as "one Pikachu" and "many Pikachu". In English, Pokémon may be pronounced either /'powkɛmon/ (poe-keh-mon) or /'powkɪmon/ (poe-key-mon). 
+ +## General concept +The Pokémon franchise is set in a world in which humans coexist with creatures known as Pokémon. Pokémon Red and Blue contain 151 Pokémon species, with new ones being added in subsequent games; as of January 2024, 1,025 Pokémon species have been introduced.[b] Most Pokémon are inspired by real-world animals; for example, Pikachu are a yellow mouse-like species with lightning bolt-shaped tails that possess electrical abilities. + +The player character takes the role of a Pokémon Trainer. The Trainer has three primary goals: travel and explore the Pokémon world; discover and catch each Pokémon species in order to complete their Pokédex; and train a team of up to six Pokémon at a time and have them engage in battles. Most Pokémon can be caught with spherical devices known as Poké Balls. Once the opposing Pokémon is sufficiently weakened, the Trainer throws the Poké Ball against the Pokémon, which is then transformed into a form of energy and transported into the device. Once the catch is successful, the Pokémon is tamed and is under the Trainer's command from then on. If the Poké Ball is thrown again, the Pokémon re-materializes into its original state. The Trainer's Pokémon can engage in battles against opposing Pokémon, including those in the wild or owned by other Trainers. Because the franchise is aimed at children, these battles are never presented as overtly violent and contain no blood or gore.[I] Pokémon never die in battle, instead fainting upon being defeated. + +After a Pokémon wins a battle, it gains experience points. After gaining a certain amount of it, the Pokémon levels up, and its statistics rise. As its level increases, the Pokémon learns new offensive and defensive moves to use in battle. Furthermore, many species can undergo a form of spontaneous metamorphosis called Pokémon evolution, and transform into stronger forms. Most Pokémon will evolve at a certain level, while others evolve through different means, such as exposure to a certain item. + +## Media +### Video games +Pokémon video games have been released in a wide variety of genres. The role-playing games (RPGs) developed by Game Freak are considered the core series of the franchise.[449][450][451] Various spin-off games also exist, such as Pokémon Mystery Dungeon, a roguelike RPG series, Pokémon Ranger, an action RPG series, and Detective Pikachu (2018), an adventure game. Pokémon games, in particular the core RPGs, are commonly classified in generations. For example, Junichi Masuda referred to Diamond and Pearl (2006) as Gen 4,[452] and X and Y (2013) as the 6th generation.[453] + +Until 2011, Pokémon games were released exclusively on Nintendo's consoles. With the rise of the smartphone during the 2010s, The Pokémon Company also began developing, publishing, and licensing Pokémon titles for the mobile phone market, most notably Pokémon Go (2016), an augmented reality game developed by Niantic that spawned a worldwide craze in the summer of 2016.[414][415] + +According to Pokémon's official website, as of March 2024, over 480 million Pokémon game units have been sold worldwide.[454] + +### Trading card game +The Pokémon Trading Card Game (PTCG) was one of the first collectable card games (CCGs) in Japan. 
It was inspired by Magic: The Gathering.[142][143][144] In the card game, the players use a 60-card deck featuring Basic and evolved Pokémon, Energy cards, and Trainer cards to help them knock out the opponent's Pokémon, drawing prize cards and winning the game.[455] Cards are classified into various levels of rarity, ranging from Common to Rare Holofoil with a holographic illustration. Rare cards, including limited edition, exclusive cards, and older cards, are highly valued among collectors due to their scarcity.[456][457] + +According to the official website of The Pokémon Company, 64.8 billion cards have been produced as of March 2024.[454] + +### Anime +As of 2024, the anime consists of over 1,200 episodes across 26 seasons. Its current season, Pokémon Horizons: The Series, started on 14 April 2023. The anime originally focused on Ash Ketchum and his travels across the Pokémon world with his partner, Pikachu. They were retired as protagonists after the 25th season,[458] and Pokémon Horizons introduced two new protagonists, Liko and Roy.[459] A total of 23 anime films have been released, the most recent being Pokémon the Movie: Secrets of the Jungle (2020).[460] + +Spin-off series from the anime have also been produced, including a variety show titled Weekly Pokémon Broadcasting Station (週刊ポケモン放送局, Shūkan Pokemon Hōsōkyoku), which aired on TV Tokyo from 2002 to 2004 and aired in English as part of Pokémon Chronicles,[461][462] as well as three television specials.[463] Many short films focusing on Pikachu and other Pokémon were released, primarily preceding the films.[464] Various animated mini-series also exist.[IX] + +### Live-action +Detective Pikachu, a live-action/animated film based on the video game of the same name, was released in 2019.[475] A sequel is currently under development.[476] + +A live-action television drama produced by The Pokémon Company and TV Tokyo titled Pocket ni Boken o Tsumekonde premiered on TV Tokyo on October 20, 2023.[477] \ No newline at end of file diff --git a/langchain-rag.ts b/langchain-rag.ts new file mode 100644 index 0000000..1541f4d --- /dev/null +++ b/langchain-rag.ts @@ -0,0 +1,78 @@ +import { HNSWLib } from '@langchain/community/vectorstores/hnswlib'; +import { StringOutputParser } from '@langchain/core/output_parsers'; +import { PromptTemplate } from '@langchain/core/prompts'; +import { + RunnablePassthrough, + RunnableSequence +} from '@langchain/core/runnables'; +import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai'; +import 'dotenv/config'; +import { formatDocumentsAsString } from 'langchain/util/document'; + +import { LiteralClient } from './src'; + +const literalClient = new LiteralClient(); +const cb = literalClient.instrumentation.langchain.literalCallback(); + +const model = new ChatOpenAI({}); + +async function main() { + const vectorStore = await HNSWLib.fromTexts( + ['mitochondria is the powerhouse of the cell'], + [{ id: 1 }], + new OpenAIEmbeddings() + ); + const retriever = vectorStore.asRetriever(); + + const prompt = + PromptTemplate.fromTemplate(`Answer the question based only on the following context: +{context} + +Question: {question}`); + + const chain = RunnableSequence.from([ + { + context: retriever.pipe(formatDocumentsAsString) as any, + question: new RunnablePassthrough() + }, + prompt, + model, + new StringOutputParser() + ]); + + const result = await chain.invoke('What is the powerhouse of the cell?', { + callbacks: [cb], + runName: 'Standalone RAG Run', + metadata: { test: 'yes', helicopter: 'you mean 
helicoptell' }, + tags: ['bim', 'bam', 'boom'] + }); + + console.log(result); + + // await literalClient.thread({ name: 'Test RAG Thread' }).wrap(async () => { + // const result = await chain.invoke('What is the powerhouse of the cell?', { + // callbacks: [cb] + // }); + + // console.log(result); + // }); + + // await literalClient.run({ name: 'Test RAG Run' }).wrap(async () => { + // const result = await chain.invoke('What is the powerhouse of the cell?', { + // callbacks: [cb] + // }); + + // console.log(result); + + // const result2 = await chain.invoke( + // 'What is the air-speed velocity of an unladen swallow?', + // { + // callbacks: [cb] + // } + // ); + + // console.log(result2); + // }); +} + +main(); diff --git a/llamaindex.ts b/llamaindex.ts new file mode 100644 index 0000000..b93a2a3 --- /dev/null +++ b/llamaindex.ts @@ -0,0 +1,43 @@ +import 'dotenv/config'; +import { + ContextChatEngine, + Document, + Settings, + VectorStoreIndex +} from 'llamaindex'; +import fs from 'node:fs/promises'; +import { stdin as input, stdout as output } from 'node:process'; +import readline from 'node:readline/promises'; + +import { LiteralClient } from './src'; + +const client = new LiteralClient(); + +client.instrumentation.llamaIndex.instrument(); + +// Update chunk size +Settings.chunkSize = 512; + +async function main() { + const documentContent = await fs.readFile('document.txt', 'utf-8'); + const document = new Document({ text: documentContent }); + const index = await VectorStoreIndex.fromDocuments([document]); + const retriever = index.asRetriever({ topK: { TEXT: 5, IMAGE: 5 } }); + const chatEngine = new ContextChatEngine({ retriever }); + const rl = readline.createInterface({ input, output }); + + const thread = await client.thread({ name: 'Llama Index Example' }).upsert(); + await client.instrumentation.llamaIndex.withThread(thread, async () => { + // eslint-disable-next-line + while (true) { + const query = await rl.question('Query: '); + const stream = await chatEngine.chat({ message: query, stream: true }); + for await (const chunk of stream) { + process.stdout.write(chunk.response); + } + process.stdout.write('\n'); + } + }); +} + +main().catch(console.error); diff --git a/openai.ts b/openai.ts new file mode 100644 index 0000000..0cea7e7 --- /dev/null +++ b/openai.ts @@ -0,0 +1,41 @@ +import 'dotenv/config'; +import OpenAI from 'openai'; + +import { LiteralClient } from './src'; + +const literalClient = new LiteralClient(); + +const _openai = new OpenAI(); + +// Instrument the OpenAI client +const openai = literalClient.instrumentation.openai({ + client: _openai +}); + +console.log(openai); + +async function main() { + const response = await openai.chat.completions.create( + { + model: 'gpt-4', + messages: [{ role: 'user', content: 'Say this is a test' }] + }, + { + headers: { + 'x-literalai-tags': 'openai,chat' + }, + literalaiTags: ['openai', 'chat'], + literalaiMetadata: { tags: ['openai', 'chat'] } + } + ); + + const embedding = await openai.embeddings?.create({ + model: 'text-embedding-3-large', + input: 'This is a test' + }); + + console.log(JSON.stringify(response, null, 2)); + console.log(JSON.stringify(embedding, null, 2)); +} + +main(); diff --git a/package-lock.json b/package-lock.json index 14d7e9a..86339e2 100644 --- a/package-lock.json +++ b/package-lock.json @@ -9,6 +9,7 @@ "version": "0.0.515", "license": "Apache-2.0", "dependencies": { + "@langchain/openai": "^0.2.7", "axios": "^1.6.2", "form-data": "^4.0.0", "mustache": "^4.2.0", @@ -4204,17 +4205,15 @@ } }, 
"node_modules/@langchain/core": { - "version": "0.2.23", - "resolved": "https://registry.npmjs.org/@langchain/core/-/core-0.2.23.tgz", - "integrity": "sha512-elPg6WpAkxWEIGC9u38F2anbzqfYYEy32lJdsd9dtChcHSFmFLlXqa+SnpO3R772gUuJmcu+Pd+fCvmRFy029w==", - "optional": true, - "peer": true, + "version": "0.2.28", + "resolved": "https://registry.npmjs.org/@langchain/core/-/core-0.2.28.tgz", + "integrity": "sha512-xN3+UdfxFaBcm29auMHFHGEYRh+3HwBc/dICHtwfk2wTSmw4HzWmBtZMx3BG+TOgh5Et7+mT6eF6E3omDLfk+A==", "dependencies": { "ansi-styles": "^5.0.0", "camelcase": "6", "decamelize": "1.2.0", "js-tiktoken": "^1.0.12", - "langsmith": "~0.1.39", + "langsmith": "^0.1.43", "mustache": "^4.2.0", "p-queue": "^6.6.2", "p-retry": "4", @@ -4230,8 +4229,6 @@ "version": "5.2.0", "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-5.2.0.tgz", "integrity": "sha512-Cxwpt2SfTzTtXcfOlzGEee8O+c+MmUgGrNiBcXnuWxuFJHe6a5Hz7qwhwe5OgaSYI0IJvkLqWX1ASG+cJOkEiA==", - "optional": true, - "peer": true, "engines": { "node": ">=10" }, @@ -4243,8 +4240,6 @@ "version": "6.3.0", "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-6.3.0.tgz", "integrity": "sha512-Gmy6FhYlCY7uOElZUSbxo2UCDH8owEk996gkbrpsgGtrJLM3J7jGxl9Ic7Qwwj4ivOE5AWZWRMecDdF7hqGjFA==", - "optional": true, - "peer": true, "engines": { "node": ">=10" }, @@ -4260,12 +4255,25 @@ "https://github.com/sponsors/broofa", "https://github.com/sponsors/ctavan" ], - "optional": true, - "peer": true, "bin": { "uuid": "dist/bin/uuid" } }, + "node_modules/@langchain/openai": { + "version": "0.2.7", + "resolved": "https://registry.npmjs.org/@langchain/openai/-/openai-0.2.7.tgz", + "integrity": "sha512-f2XDXbExJf4SYsy17QSiq0YY/UWJXhJwoiS8uRi/gBa20zBQ8+bBFRnb9vPdLkOkGiaTy+yXZVFro3a9iW2r3w==", + "dependencies": { + "@langchain/core": ">=0.2.26 <0.3.0", + "js-tiktoken": "^1.0.12", + "openai": "^4.55.0", + "zod": "^3.22.4", + "zod-to-json-schema": "^3.22.3" + }, + "engines": { + "node": ">=18" + } + }, "node_modules/@llamaindex/env": { "version": "0.1.3", "resolved": "https://registry.npmjs.org/@llamaindex/env/-/env-0.1.3.tgz", @@ -5865,8 +5873,7 @@ "node_modules/@types/retry": { "version": "0.12.0", "resolved": "https://registry.npmjs.org/@types/retry/-/retry-0.12.0.tgz", - "integrity": "sha512-wWKOClTTiizcZhXnPY4wikVAwmdYHp8q6DmC+EJUzAMsycb7HB32Kh9RN4+0gExjmPmZSAQjgURXIGATPegAvA==", - "peer": true + "integrity": "sha512-wWKOClTTiizcZhXnPY4wikVAwmdYHp8q6DmC+EJUzAMsycb7HB32Kh9RN4+0gExjmPmZSAQjgURXIGATPegAvA==" }, "node_modules/@types/semver": { "version": "7.5.8", @@ -7458,7 +7465,6 @@ "version": "10.0.1", "resolved": "https://registry.npmjs.org/commander/-/commander-10.0.1.tgz", "integrity": "sha512-y4Mg2tXshplEbSGzx7amzPwKKOCGuoSRP/CjEdwwk0FOGlUbq6lKuoyDZTNZkmxHdJtp54hdfY/JUrdL7Xfdug==", - "peer": true, "engines": { "node": ">=14" } @@ -7642,7 +7648,6 @@ "version": "1.2.0", "resolved": "https://registry.npmjs.org/decamelize/-/decamelize-1.2.0.tgz", "integrity": "sha512-z2S+W9X73hAUUki+N+9Za2lBlun89zigOyGrsax+KUQ6wKW4ZoWpEYBkGhQjwAjjDCkWxhY0VKEhk8wzY7F5cA==", - "peer": true, "engines": { "node": ">=0.10.0" } @@ -10611,10 +10616,9 @@ "peer": true }, "node_modules/langsmith": { - "version": "0.1.41", - "resolved": "https://registry.npmjs.org/langsmith/-/langsmith-0.1.41.tgz", - "integrity": "sha512-8R7s/225Pxmv0ipMfd6sqmWVsfHLQivYlQZ0vx5K+ReoknummTenQlVK8gapk3kqRMnzkrouuRHMhWjMR6RgUA==", - "peer": true, + "version": "0.1.43", + "resolved": "https://registry.npmjs.org/langsmith/-/langsmith-0.1.43.tgz", + "integrity": 
"sha512-+IL59ye/je9HmMttJU50epJneEbEwlMJ8i5tEFjJC6l2+SWPtedT0UPuAnPEybMhfjU3ziNfqAxck7WTEncL8w==", "dependencies": { "@types/uuid": "^9.0.1", "commander": "^10.0.1", @@ -12028,7 +12032,6 @@ "version": "1.0.0", "resolved": "https://registry.npmjs.org/p-finally/-/p-finally-1.0.0.tgz", "integrity": "sha512-LICb2p9CB7FS+0eR1oqWnHhp0FljGLZCWBE9aix0Uye9W8LTQPwMTYVGWQWIw9RdQiDg4+epXQODwIYJtSJaow==", - "peer": true, "engines": { "node": ">=4" } @@ -12067,7 +12070,6 @@ "version": "6.6.2", "resolved": "https://registry.npmjs.org/p-queue/-/p-queue-6.6.2.tgz", "integrity": "sha512-RwFpb72c/BhQLEXIZ5K2e+AhgNVmIejGlTgiB9MzZ0e93GRvqZ7uSi0dvRF7/XIXDeNkra2fNHBxTyPDGySpjQ==", - "peer": true, "dependencies": { "eventemitter3": "^4.0.4", "p-timeout": "^3.2.0" @@ -12082,14 +12084,12 @@ "node_modules/p-queue/node_modules/eventemitter3": { "version": "4.0.7", "resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-4.0.7.tgz", - "integrity": "sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==", - "peer": true + "integrity": "sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==" }, "node_modules/p-retry": { "version": "4.6.2", "resolved": "https://registry.npmjs.org/p-retry/-/p-retry-4.6.2.tgz", "integrity": "sha512-312Id396EbJdvRONlngUx0NydfrIQ5lsYu0znKVUzVvArzEIt08V1qhtyESbGVd1FGX7UKtiFp5uwKZdM8wIuQ==", - "peer": true, "dependencies": { "@types/retry": "0.12.0", "retry": "^0.13.1" @@ -12102,7 +12102,6 @@ "version": "3.2.0", "resolved": "https://registry.npmjs.org/p-timeout/-/p-timeout-3.2.0.tgz", "integrity": "sha512-rhIwUycgwwKcP9yTOOFK/AKsAopjjCakVqLHePO3CC6Mir1Z99xT+R63jZxAT5lFZLa2inS5h+ZS2GvR99/FBg==", - "peer": true, "dependencies": { "p-finally": "^1.0.0" }, @@ -13186,7 +13185,6 @@ "version": "0.13.1", "resolved": "https://registry.npmjs.org/retry/-/retry-0.13.1.tgz", "integrity": "sha512-XQBQ3I8W1Cge0Seh+6gjj03LbmRFWuoszgK9ooCpwYIrhhoO80pfq4cUkU5DkknwfOfFteRwlZ56PYOGYyFWdg==", - "peer": true, "engines": { "node": ">= 4" } diff --git a/package.json b/package.json index 13c2dc9..cdabc38 100644 --- a/package.json +++ b/package.json @@ -52,6 +52,7 @@ "zod-to-json-schema": "^3.23.0" }, "dependencies": { + "@langchain/openai": "^0.2.7", "axios": "^1.6.2", "form-data": "^4.0.0", "mustache": "^4.2.0", diff --git a/src/instrumentation/index.ts b/src/instrumentation/index.ts index 583b040..1a66344 100644 --- a/src/instrumentation/index.ts +++ b/src/instrumentation/index.ts @@ -1,3 +1,5 @@ +import OpenAI from 'openai'; + import { LiteralClient, Maybe } from '..'; import { LiteralCallbackHandler } from './langchain'; import { instrumentLlamaIndex, withThread } from './llamaindex'; @@ -7,6 +9,7 @@ import { makeInstrumentVercelSDK } from './vercel-sdk'; export type OpenAIGlobalOptions = { tags?: Maybe; metadata?: Maybe>; + client?: OpenAI; }; export default (client: LiteralClient) => ({ diff --git a/src/instrumentation/openai.ts b/src/instrumentation/openai.ts index 0541fae..9c0394d 100644 --- a/src/instrumentation/openai.ts +++ b/src/instrumentation/openai.ts @@ -118,15 +118,19 @@ function instrumentOpenAI( OpenAI.Images.prototype.generate = wrappedImagesGenerate as any; return { + ...options.client, chat: { completions: { + _client: options.client, create: wrappedChatCompletionsCreate } }, completions: { + _client: options.client, create: wrappedCompletionsCreate }, images: { + _client: options.client, generate: wrappedImagesGenerate } }; From c59bf5e788ab8fb88fa71b81054972e16c751bf8 Mon Sep 17 00:00:00 2001 From: 
Damien BUTY Date: Fri, 30 Aug 2024 09:23:30 +0200 Subject: [PATCH 02/12] feat: wip --- langchain-rag.ts | 80 ++++++-- openai.ts | 52 +++-- src/api.ts | 3 +- src/instrumentation/langchain.ts | 275 +++++++++++++++++---------- src/instrumentation/openai.ts | 21 +- src/instrumentation/vercel-sdk.ts | 17 +- src/observability/generation.ts | 1 + tests/integration/langchain.test.ts | 64 +++++++ tests/integration/openai.test.ts | 30 +++ tests/integration/vercel-sdk.test.ts | 62 ++++-- 10 files changed, 447 insertions(+), 158 deletions(-) create mode 100644 tests/integration/langchain.test.ts diff --git a/langchain-rag.ts b/langchain-rag.ts index 1541f4d..a597ad4 100644 --- a/langchain-rag.ts +++ b/langchain-rag.ts @@ -17,6 +17,29 @@ const cb = literalClient.instrumentation.langchain.literalCallback(); const model = new ChatOpenAI({}); async function main() { + const literalaiStepId = '4defe177-334e-457f-8365-f34ad5ba84b3'; + const firstResponse = await model.invoke('Hello, how are you?', { + callbacks: [cb], + metadata: { + test: 'yes', + helicopter: 'you mean helicoptell', + literalaiStepId + }, + tags: ['bim', 'bam', 'boom'] + }); + + await literalClient.api.createScore({ + stepId: literalaiStepId, + name: 'Toxicity', + type: 'HUMAN', + comment: 'The answer is pretty nice', + value: 0 + }); + + console.log(firstResponse); + + await literalClient; + const vectorStore = await HNSWLib.fromTexts( ['mitochondria is the powerhouse of the cell'], [{ id: 1 }], @@ -40,39 +63,54 @@ Question: {question}`); new StringOutputParser() ]); + const newLiteralaiStepId = '94059657-3a31-4682-8f9a-33ce019ea027'; + + await literalClient.api.createScore({ + stepId: newLiteralaiStepId, + name: 'Toxicity', + type: 'HUMAN', + comment: 'wow what a douche', + value: 1 + }); + const result = await chain.invoke('What is the powerhouse of the cell?', { callbacks: [cb], runName: 'Standalone RAG Run', - metadata: { test: 'yes', helicopter: 'you mean helicoptell' }, - tags: ['bim', 'bam', 'boom'] + metadata: { + test: 'yes', + helicopter: 'you mean helicoptell', + literalaiStepId: newLiteralaiStepId + }, + tags: ['bim', 'bam', 'boom'], + configurable: { thread_id: 'test_thread_id' } }); console.log(result); - // await literalClient.thread({ name: 'Test RAG Thread' }).wrap(async () => { - // const result = await chain.invoke('What is the powerhouse of the cell?', { - // callbacks: [cb] - // }); + await literalClient.thread({ name: 'Test RAG Thread' }).wrap(async () => { + const result = await chain.invoke('What is the powerhouse of the cell?', { + callbacks: [cb] + }); - // console.log(result); - // }); + console.log(result); + }); - // await literalClient.run({ name: 'Test RAG Run' }).wrap(async () => { - // const result = await chain.invoke('What is the powerhouse of the cell?', { - // callbacks: [cb] - // }); + await literalClient.run({ name: 'Test RAG Run' }).wrap(async () => { + const result = await chain.invoke('What is the powerhouse of the cell?', { + callbacks: [cb] + }); - // console.log(result); + console.log(result); - // const result2 = await chain.invoke( - // 'What is the air-speed velocity of an unladen swallow?', - // { - // callbacks: [cb] - // } - // ); + const result2 = await chain.invoke( + 'What is the air-speed velocity of an unladen swallow?', + { + callbacks: [cb] + } + ); - // console.log(result2); - // }); + console.log(result2); + }); } main(); diff --git a/openai.ts b/openai.ts index 0cea7e7..3061e33 100644 --- a/openai.ts +++ b/openai.ts @@ -1,41 +1,53 @@ import 'dotenv/config'; import OpenAI from 
'openai'; +import { v4 as uuidv4 } from 'uuid'; import { LiteralClient } from './src'; -const literalClient = new LiteralClient(); - -const _openai = new OpenAI(); +const openai = new OpenAI(); +const literalClient = new LiteralClient(); // Instrument the OpenAI client -const openai = literalClient.instrumentation.openai({ - client: _openai -}); - -console.log(openai); +const openai_ = literalClient.instrumentation.openai({ client: openai }); async function main() { - const response = await openai.chat.completions.create( + const response = await openai_.chat.completions.create( { model: 'gpt-4', messages: [{ role: 'user', content: 'Say this is a test' }] }, { - headers: { - 'x-literalai-tags': 'openai,chat' - }, - literalaiTags: ['openai', 'chat'], - literalaiMetadata: { tags: ['openai', 'chat'] } + literalaiStepId: uuidv4() } ); - const embedding = await openai.embeddings?.create({ - model: 'text-embedding-3-large', - input: 'This is a test' - }); + await openai_.chat.completions.create( + { + model: 'gpt-4', + messages: [{ role: 'user', content: 'Say this is a test' }] + }, + { + literalaiStepId: uuidv4() + } + ); + + await openai_.chat.completions.create( + { + model: 'gpt-4', + messages: [{ role: 'user', content: 'Say this is a test' }] + }, + { + literalaiStepId: uuidv4() + } + ); - console.log(JSON.stringify(response, null, 2)); - console.log(JSON.stringify(embedding, null, 2)); + // const embedding = await openai.embeddings?.create({ + // model: 'text-embedding-3-large', + // input: 'This is a test' + // }); + console.log(response); + // console.log(JSON.stringify(response, null, 2)); + // console.log(JSON.stringify(embedding, null, 2)); } main(); diff --git a/src/api.ts b/src/api.ts index 470a0e5..695560e 100644 --- a/src/api.ts +++ b/src/api.ts @@ -850,7 +850,7 @@ export class API { * @param generation - The `Generation` object to be created and sent to the platform. * @returns A Promise resolving to the newly created `Generation` object. */ - async createGeneration(generation: Generation) { + async createGeneration(generation: Generation, stepId: string | null = null) { const mutation = ` mutation CreateGeneration($generation: GenerationPayloadInput!) 
{ createGeneration(generation: $generation) { @@ -861,6 +861,7 @@ `; const variables = { + stepId, generation }; diff --git a/src/instrumentation/langchain.ts b/src/instrumentation/langchain.ts index 9f73ac2..f4324ce 100644 --- a/src/instrumentation/langchain.ts +++ b/src/instrumentation/langchain.ts @@ -23,6 +23,7 @@ import { ChainValues, InputValues } from '@langchain/core/utils/types'; import mustache from 'mustache'; import { v5 as uuidv5 } from 'uuid'; +import { validate as uuidValidate } from 'uuid'; import { ChatGeneration, @@ -121,6 +122,8 @@ interface ChatGenerationStart { start: number; outputTokenCount: number; ttFirstToken?: number; + metadata?: Record<string, unknown>; + tags?: string[]; } interface CompletionGenerationStart { @@ -131,6 +134,8 @@ start: number; outputTokenCount: number; ttFirstToken?: number; + metadata?: Record<string, unknown>; + tags?: string[]; } function convertMessageRole(role: string) { @@ -289,6 +294,32 @@ export class LiteralCallbackHandler extends BaseCallbackHandler { } } + getGenerationStepId( + runId: string, + metadata?: Record<string, unknown> | undefined + ) { + const generationStepIdFromMetadata = metadata?.literalaiStepId; + + if (typeof generationStepIdFromMetadata !== 'string') { + return runId; + } + + if (!uuidValidate(generationStepIdFromMetadata)) { + return runId; + } + + // The stepId from metadata can only be used on one generation + if ( + Object.values(this.steps).find( + (step) => step.id === generationStepIdFromMetadata + ) + ) { + return runId; + } + + return generationStepIdFromMetadata; + } + /** * LLM Callbacks */ @@ -311,13 +342,27 @@ delete settings.apiKey; delete settings.api_key; + // console.log( + // 'handleLLMStart', + // llm, + // prompts, + // runId, + // parentRunId, + // extraParams, + // tags, + // metadata, + // name + // ); + this.completionGenerations[runId] = { provider, model, settings, prompt: prompts[0], start: Date.now(), - outputTokenCount: 0 + outputTokenCount: 0, + metadata, + tags }; const parentId = this.getParentId(parentRunId); @@ -329,7 +374,7 @@ type: 'llm', tags: tags, threadId: this.threadId, - id: runId, + id: this.getGenerationStepId(runId, metadata), startTime: new Date().toISOString(), parentId: this.getParentId(parentRunId), metadata: metadata, @@ -375,97 +420,118 @@ const completionGeneration = this.completionGenerations[runId]; const chatGeneration = this.chatGenerations[runId]; - if (completionGeneration) { - const { - start, - outputTokenCount, - ttFirstToken, - prompt, - model, - provider, - settings - } = this.completionGenerations[runId]; - const duration = (Date.now() - start) / 1000; - const tokenThroughputInSeconds = - duration && outputTokenCount - ?
outputTokenCount / (duration / 1000) - : undefined; - - const generation = new CompletionGeneration({ - provider, - model, - settings, - completion: output.generations[0][0].text, - prompt: prompt, - duration, - ttFirstToken, - outputTokenCount, - tokenThroughputInSeconds: tokenThroughputInSeconds - }); - - if (this.steps[runId]) { - this.steps[runId].generation = generation; - this.steps[runId].output = output.generations[0][0]; - this.steps[runId].endTime = new Date().toISOString(); - - await this.steps[runId].send(); - } else { - await this.client.api.createGeneration(generation); - } - } else if (chatGeneration) { - const { - promptId, - variables, - start, - outputTokenCount, - ttFirstToken, - inputMessages, - model, - provider, - settings, - tools - } = this.chatGenerations[runId]; - - const duration = (Date.now() - start) / 1000; - const tokenThroughputInSeconds = - duration && outputTokenCount - ? outputTokenCount / (duration / 1000) - : undefined; - const messageCompletion = convertMessage( - (output.generations[0][0] as any).message - ); - - const generation = new ChatGeneration({ - promptId, - variables, - provider, - model, - settings, - tools, - messageCompletion, - messages: addToolCallIdToMessages(inputMessages), - duration, - ttFirstToken, - outputTokenCount, - tokenThroughputInSeconds: tokenThroughputInSeconds - }); - - if (this.steps[runId]) { - this.steps[runId].generation = generation; - this.steps[runId].generation!.inputTokenCount = - output.llmOutput?.estimatedTokenUsage?.promptTokens; - this.steps[runId].generation!.outputTokenCount = - output.llmOutput?.estimatedTokenUsage?.completionTokens; - this.steps[runId].generation!.tokenCount = - output.llmOutput?.estimatedTokenUsage?.totalTokens; - - this.steps[runId].output = messageCompletion; - this.steps[runId].endTime = new Date().toISOString(); - - await this.steps[runId].send(); - } else { - await this.client.api.createGeneration(generation); + // console.log('handleLLMEnd', output, runId); + + try { + if (completionGeneration) { + const { + start, + outputTokenCount, + ttFirstToken, + prompt, + model, + provider, + settings, + metadata, + tags + } = this.completionGenerations[runId]; + const duration = (Date.now() - start) / 1000; + const tokenThroughputInSeconds = + duration && outputTokenCount + ? outputTokenCount / (duration / 1000) + : undefined; + + const generation = new CompletionGeneration({ + metadata, + tags, + provider, + model, + settings, + completion: output.generations[0][0].text, + prompt: prompt, + duration, + ttFirstToken, + outputTokenCount, + tokenThroughputInSeconds: tokenThroughputInSeconds + }); + + if (this.steps[runId]) { + this.steps[runId].generation = generation; + this.steps[runId].output = output.generations[0][0]; + this.steps[runId].endTime = new Date().toISOString(); + + await this.steps[runId].send(); + } else { + await this.client.api.createGeneration({ + ...generation, + id: this.getGenerationStepId(runId, metadata) + }); + } + } else if (chatGeneration) { + const { + promptId, + variables, + start, + outputTokenCount, + ttFirstToken, + inputMessages, + model, + provider, + settings, + tools, + metadata, + tags + } = this.chatGenerations[runId]; + + const duration = (Date.now() - start) / 1000; + const tokenThroughputInSeconds = + duration && outputTokenCount + ? 
outputTokenCount / (duration / 1000) + : undefined; + const messageCompletion = convertMessage( + (output.generations[0][0] as any).message + ); + + const generation = new ChatGeneration({ + metadata, + tags, + promptId, + variables, + provider, + model, + settings, + tools, + messageCompletion, + messages: addToolCallIdToMessages(inputMessages), + duration, + ttFirstToken, + outputTokenCount, + tokenThroughputInSeconds: tokenThroughputInSeconds + }); + + if (this.steps[runId]) { + this.steps[runId].generation = generation; + this.steps[runId].generation!.inputTokenCount = + output.llmOutput?.estimatedTokenUsage?.promptTokens; + this.steps[runId].generation!.outputTokenCount = + output.llmOutput?.estimatedTokenUsage?.completionTokens; + this.steps[runId].generation!.tokenCount = + output.llmOutput?.estimatedTokenUsage?.totalTokens; + + this.steps[runId].output = messageCompletion; + this.steps[runId].endTime = new Date().toISOString(); + + await this.steps[runId].send(); + } else { + await this.client.api.createGeneration({ + ...generation, + id: this.getGenerationStepId(runId, metadata) + }); + } } + } catch (e) { + console.error(e); + console.log('Error in handleLLMEnd', e); } } @@ -490,6 +556,18 @@ delete settings.apiKey; delete settings.api_key; + // console.log( + // 'handleChatModelStart', + // llm, + // messages, + // runId, + // parentRunId, + // extraParams, + // tags, + // metadata, + // name + // ); + const messageList = messages[0]; const { promptId, variables } = checkForLiteralPrompt(messageList); @@ -505,7 +583,9 @@ tools, inputMessages: messageList.map(convertMessage), start: Date.now(), - outputTokenCount: 0 + outputTokenCount: 0, + metadata, + tags }; const parentId = this.getParentId(parentRunId); @@ -523,10 +603,10 @@ type: 'llm', tags: tags, threadId: this.threadId, - id: runId, + id: this.getGenerationStepId(runId, metadata), startTime: new Date().toISOString(), parentId: parentId, - metadata: metadata, + metadata, input: { content: messages[0] } }) .send(); @@ -598,12 +678,12 @@ const step = await this.client .run({ name: name || chainType, - tags: tags, threadId: this.threadId, id: runId, input: stepInput, startTime: new Date().toISOString(), - metadata: metadata + metadata, + tags }) .send(); @@ -623,6 +703,7 @@ type: 'tool', parentId, tags: tags, + metadata, threadId: this.threadId, id: runId, input: stepInput, diff --git a/src/instrumentation/openai.ts b/src/instrumentation/openai.ts index 9c0394d..82e874f 100644 --- a/src/instrumentation/openai.ts +++ b/src/instrumentation/openai.ts @@ -26,6 +26,7 @@ type OriginalFunction = ( type OpenAICallOptions = { literalaiTags?: Maybe<string[]>; literalaiMetadata?: Maybe<Record<string, any>>; + literalaiStepId?: Maybe<string>; }; function cleanOpenAIArgs( @@ -87,8 +88,11 @@ function instrumentOpenAI( client: LiteralClient, options: OpenAIGlobalOptions = {} ) { + const originalMethods = (OpenAI.prototype as any).__literalai_originalMethods; + // Patching the chat.completions.create function const originalChatCompletionsCreate = + originalMethods?.originalChatCompletionsCreate ??
OpenAI.Chat.Completions.prototype.create; const wrappedChatCompletionsCreate = wrapFunction( originalChatCompletionsCreate, client, options ); // Patching the completions.create function - const originalCompletionsCreate = OpenAI.Completions.prototype.create; + const originalCompletionsCreate = + originalMethods?.originalCompletionsCreate ?? + OpenAI.Completions.prototype.create; const wrappedCompletionsCreate = wrapFunction( originalCompletionsCreate, client, options ); // Patching the images.generate function - const originalImagesGenerate = OpenAI.Images.prototype.generate; + const originalImagesGenerate = + originalMethods?.originalImagesGenerate ?? OpenAI.Images.prototype.generate; const wrappedImagesGenerate = wrapFunction( originalImagesGenerate, client, options ); @@ -117,6 +124,12 @@ OpenAI.Completions.prototype.create = wrappedCompletionsCreate as any; OpenAI.Images.prototype.generate = wrappedImagesGenerate as any; + (OpenAI.prototype as any).__literalai_originalMethods = { + originalChatCompletionsCreate, + originalCompletionsCreate, + originalImagesGenerate + }; + return { ...options.client, chat: { completions: { @@ -361,10 +374,12 @@ }; const baseGeneration = { + ...(callOptions?.literalaiStepId && { id: callOptions.literalaiStepId }), provider: 'openai', model: inputs.model, settings: getSettings(inputs), - tags + tags, + metadata }; const threadFromStore = client._currentThread(); diff --git a/src/instrumentation/vercel-sdk.ts b/src/instrumentation/vercel-sdk.ts index ec68074..3df5d7b 100644 --- a/src/instrumentation/vercel-sdk.ts +++ b/src/instrumentation/vercel-sdk.ts @@ -15,10 +15,9 @@ import { IGenerationMessage, ILLMSettings, ITool, - LiteralClient + LiteralClient, + Maybe } from '..'; -import { Step } from '../observability/step'; -import { Thread } from '../observability/thread'; export type VercelLanguageModel = LanguageModel; @@ -260,7 +259,9 @@ }; type VercelExtraOptions = { - literalAiParent?: Step | Thread; + literalaiTags?: Maybe<string[]>; + literalaiMetadata?: Maybe<Record<string, any>>; + literalaiStepId?: Maybe<string>; }; export type InstrumentationVercelMethod = { @@ -288,7 +289,7 @@ type TOptions = Options<TFunction>; type TResult = Result<TFunction>; - return async (options: TOptions): Promise<TResult> => { + return async (options: TOptions & VercelExtraOptions): Promise<TResult> => { const startTime = Date.now(); const result: TResult = await (fn as any)(options); @@ -318,6 +319,9 @@ ); const generation = new ChatGeneration({ + ...(options.literalaiStepId && { id: options.literalaiStepId }), + metadata: options.literalaiMetadata, + tags: options.literalaiTags, provider: options.model.provider, model: options.model.modelId, settings: extractSettings(options), @@ -346,6 +350,9 @@ const metrics = computeMetricsSync(options, result, startTime); const generation = new ChatGeneration({ + ...(options.literalaiStepId && { id: options.literalaiStepId }), + metadata: options.literalaiMetadata, + tags: options.literalaiTags, provider: options.model.provider, model: options.model.modelId, settings: extractSettings(options), diff --git a/src/observability/generation.ts b/src/observability/generation.ts index 07f6a09..4447668 100644 --- a/src/observability/generation.ts +++ b/src/observability/generation.ts @@ -52,6 +52,7 @@ export class BaseGeneration extends Utils { model?:
Maybe<string>; id?: Maybe<string>; tags?: Maybe<string[]>; + metadata?: Maybe<Record<string, any>>; error?: Maybe<string>; variables?: Maybe<Record<string, any>>; settings?: Maybe<ILLMSettings>; diff --git a/tests/integration/langchain.test.ts b/tests/integration/langchain.test.ts new file mode 100644 index 0000000..a14ce34 --- /dev/null +++ b/tests/integration/langchain.test.ts @@ -0,0 +1,64 @@ +import { ChatOpenAI } from '@langchain/openai'; +import 'dotenv/config'; +import { v4 as uuidv4 } from 'uuid'; + +import { LiteralClient } from '../../src'; + +const url = process.env.LITERAL_API_URL; +const apiKey = process.env.LITERAL_API_KEY; + +if (!url || !apiKey) { + throw new Error('Missing environment variables'); +} + +const client = new LiteralClient({ apiKey, apiUrl: url }); +const cb = client.instrumentation.langchain.literalCallback(); + +describe('Langchain integration', function () { + it('should create a generation with the provided id', async function () { + const literalaiStepId = uuidv4(); + const model = new ChatOpenAI({}); + + await model.invoke('Hello, how are you?', { + callbacks: [cb], + metadata: { literalaiStepId } + }); + + const { data } = await client.api.getGenerations({ + filters: [ + { + field: 'id', + operator: 'eq', + value: literalaiStepId + } + ] + }); + + expect(data.length).toBe(1); + }, 30000); + + it('should copy tags and metadata to the generation', async function () { + const literalaiStepId = uuidv4(); + const model = new ChatOpenAI({}); + + const metadata = { + framework: 'Langchain', + awesome: 'yes', + literalaiStepId + }; + + const tags = ['bim', 'bam', 'boom']; + + await model.invoke('Hello, how are you?', { + callbacks: [cb], + metadata, + tags + }); + + const step = await client.api.getStep(literalaiStepId); + + expect(step!.type).toBe('llm'); + expect(step!.metadata).toEqual(expect.objectContaining(metadata)); + expect(step!.tags).toEqual(expect.arrayContaining(tags)); + }, 30000); +}); diff --git a/tests/integration/openai.test.ts b/tests/integration/openai.test.ts index 59fdad7..77a47b5 100644 --- a/tests/integration/openai.test.ts +++ b/tests/integration/openai.test.ts @@ -345,6 +345,35 @@ }); describe('Handling tags and metadata', () => { + it('should assign a specific ID to the generation if provided', async () => { + const openai_ = new OpenAI({ apiKey: 'an-ocean-of-noise' }); + + const client = new LiteralClient({ apiKey, apiUrl }); + const literalaiStepId = uuidv4(); + + const instrumentedOpenAi = client.instrumentation.openai({ + client: openai_ + }); + + await instrumentedOpenAi.chat.completions.create( + { + model: 'gpt-3.5-turbo', + messages: [ + { role: 'system', content: 'You are a helpful assistant.' }, + { role: 'user', content: 'What is the capital of Canada?'
} + ] + }, + { literalaiStepId } + ); + + await new Promise((resolve) => setTimeout(resolve, 3000)); + + const step = await client.api.getStep(literalaiStepId); + + expect(step!.id).toEqual(literalaiStepId); + expect(step!.type).toEqual('llm'); + }); + it('handles tags and metadata on the instrumentation call', async () => { const client = new LiteralClient({ apiKey, apiUrl }); client.instrumentation.openai({ @@ -385,6 +414,7 @@ describe('OpenAI Instrumentation', () => { const client = new LiteralClient({ apiKey, apiUrl }); const instrumentedOpenAi = client.instrumentation.openai({ + client: openai, tags: ['tag1', 'tag2'], metadata: { key: 'value' } }); diff --git a/tests/integration/vercel-sdk.test.ts b/tests/integration/vercel-sdk.test.ts index e208dd0..0117ad0 100644 --- a/tests/integration/vercel-sdk.test.ts +++ b/tests/integration/vercel-sdk.test.ts @@ -1,24 +1,21 @@ import { openai } from '@ai-sdk/openai'; import { generateObject, generateText, streamObject, streamText } from 'ai'; import 'dotenv/config'; +import { v4 as uuidv4 } from 'uuid'; import { z } from 'zod'; import { LiteralClient } from '../../src'; -describe('Vercel SDK Instrumentation', () => { - let client: LiteralClient; - - beforeAll(function () { - const apiUrl = process.env.LITERAL_API_URL; - const apiKey = process.env.LITERAL_API_KEY; +const apiUrl = process.env.LITERAL_API_URL; +const apiKey = process.env.LITERAL_API_KEY; - if (!apiUrl || !apiKey) { - throw new Error('Missing environment variables'); - } +if (!apiUrl || !apiKey) { + throw new Error('Missing environment variables'); +} - client = new LiteralClient({ apiKey, apiUrl }); - }); +const client = new LiteralClient({ apiKey, apiUrl }); +describe('Vercel SDK Instrumentation', () => { // Skip for the CI describe.skip('With OpenAI', () => { afterEach(() => jest.restoreAllMocks()); @@ -431,4 +428,47 @@ describe('Vercel SDK Instrumentation', () => { ); }); }); + + describe.skip('Literal AI metadata', () => { + const generateTextWithLiteralAI = + client.instrumentation.vercel.instrument(generateText); + + it('should create a generation with the provided ID', async () => { + const literalaiStepId = uuidv4(); + + await generateTextWithLiteralAI({ + model: openai('gpt-3.5-turbo'), + prompt: 'Write a vegetarian lasagna recipe for 4 people.', + literalaiStepId + }); + + await new Promise((resolve) => setTimeout(resolve, 3000)); + + const step = await client.api.getStep(literalaiStepId); + + expect(step!.id).toEqual(literalaiStepId); + expect(step!.type).toEqual('llm'); + }, 30_000); + + it('should create a generation with the provided tags and metadata', async () => { + const literalaiStepId = uuidv4(); + + await generateTextWithLiteralAI({ + model: openai('gpt-3.5-turbo'), + prompt: 'Write a vegetarian lasagna recipe for 4 people.', + literalaiStepId, + literalaiTags: ['tag1', 'tag2'], + literalaiMetadata: { otherKey: 'otherValue' } + }); + + await new Promise((resolve) => setTimeout(resolve, 3000)); + + const step = await client.api.getStep(literalaiStepId); + + expect(step!.metadata).toEqual( + expect.objectContaining({ otherKey: 'otherValue' }) + ); + expect(step!.tags).toEqual(expect.arrayContaining(['tag1', 'tag2'])); + }, 30_000); + }); }); From 9e32ef429d8b6ae369f6cae1ef0081d8f38cce48 Mon Sep 17 00:00:00 2001 From: Damien BUTY Date: Fri, 30 Aug 2024 12:37:50 +0200 Subject: [PATCH 03/12] feat(wrappers): add decoration wrapper --- src/api.ts | 33 ++++- src/evaluation/experiment-item-run.ts | 5 +- src/index.ts | 29 +++++ src/observability/generation.ts | 1 + 
src/observability/step.ts | 25 +++- src/observability/thread.ts | 19 ++- tests/decorate.test.ts | 177 ++++++++++++++++++++++++++ tests/integration/llamaindex.test.ts | 2 +- tests/integration/vercel-sdk.test.ts | 2 +- 9 files changed, 281 insertions(+), 12 deletions(-) create mode 100644 tests/decorate.test.ts diff --git a/src/api.ts b/src/api.ts index 695560e..a34cca1 100644 --- a/src/api.ts +++ b/src/api.ts @@ -410,7 +410,6 @@ variables: variables } }); - if (response.data.errors) { throw new Error(JSON.stringify(response.data.errors)); } @@ -860,6 +859,26 @@ } `; + const currentStore = this.client.store.getStore(); + + if (currentStore) { + if (currentStore.metadata) { + generation.metadata = { + ...generation.metadata, + ...currentStore.metadata + }; + } + + if (currentStore.tags) { + generation.tags = [...(generation.tags ?? []), ...currentStore.tags]; + } + + if (currentStore.stepId) { + generation.id = currentStore.stepId; + currentStore.stepId = null; + } + } + const variables = { stepId, generation }; @@ -930,13 +949,13 @@ $metadata: Json, $participantId: String, $tags: [String!], - ) { + ) { upsertThread( - id: $threadId - name: $name - metadata: $metadata - participantId: $participantId - tags: $tags + id: $threadId + name: $name + metadata: $metadata + participantId: $participantId + tags: $tags ) { ${threadFields} } diff --git a/src/evaluation/experiment-item-run.ts b/src/evaluation/experiment-item-run.ts index f71ab0f..565d7a5 100644 --- a/src/evaluation/experiment-item-run.ts +++ b/src/evaluation/experiment-item-run.ts @@ -44,7 +44,10 @@ ? currentStore?.rootRun : this.type === 'run' ? this - : null + : null, + metadata: currentStore?.metadata ?? null, + tags: currentStore?.tags ?? null, + stepId: currentStore?.stepId ?? null }, async () => { try { diff --git a/src/index.ts b/src/index.ts index 7bde18b..c52fdb8 100644 --- a/src/index.ts +++ b/src/index.ts @@ -18,6 +18,9 @@ type StoredContext = { currentStep: Step | null; currentExperimentItemRunId?: string | null; rootRun: Step | null; + metadata: Record<string, any> | null; + tags: string[] | null; + stepId: string | null; }; /** @@ -217,4 +220,30 @@ return store.rootRun; } + + decorate(options: { + metadata?: Record<string, any>; + tags?: string[]; + stepId?: string; + }) { + return { + wrap: async <T>(cb: () => T) => { + const currentStore = this.store.getStore(); + + return this.store.run( + { + currentThread: currentStore?.currentThread ?? null, + currentExperimentItemRunId: + currentStore?.currentExperimentItemRunId ?? null, + currentStep: null, + rootRun: null, + metadata: options?.metadata ?? null, + tags: options?.tags ?? null, + stepId: options?.stepId ??
null + }, + () => cb() + ); + } + }; + } } diff --git a/src/observability/generation.ts b/src/observability/generation.ts index 4447668..297b233 100644 --- a/src/observability/generation.ts +++ b/src/observability/generation.ts @@ -73,6 +73,7 @@ export class CompletionGeneration extends BaseGeneration { constructor(data: OmitUtils<CompletionGeneration>) { super(); this.type = 'COMPLETION'; + Object.assign(this, data); } } diff --git a/src/observability/step.ts b/src/observability/step.ts index ff5e4f8..228cc6d 100644 --- a/src/observability/step.ts +++ b/src/observability/step.ts @@ -63,6 +63,26 @@ this.api = client.api; this.client = client; + const currentStore = this.client.store.getStore(); + + if (currentStore) { + if (currentStore.metadata) { + data.metadata = { + ...data.metadata, + ...currentStore.metadata + }; + } + + if (currentStore.tags) { + data.tags = [...(data.tags ?? []), ...currentStore.tags]; + } + + if (currentStore.stepId) { + data.id = currentStore.stepId; + currentStore.stepId = null; + } + } + Object.assign(this, data); // Automatically generate an ID if not provided. @@ -174,7 +194,10 @@ ? currentStore?.rootRun : this.type === 'run' ? this - : null + : null, + metadata: currentStore?.metadata ?? null, + tags: currentStore?.tags ?? null, + stepId: currentStore?.stepId ?? null }, () => cb(this) ); diff --git a/src/observability/thread.ts b/src/observability/thread.ts index 4defc13..afa914f 100644 --- a/src/observability/thread.ts +++ b/src/observability/thread.ts @@ -54,6 +54,20 @@ data.id = uuidv4(); } + const currentStore = this.client.store.getStore(); + + if (currentStore) { + if (currentStore.metadata) { + data.metadata = { + ...data.metadata, + ...currentStore.metadata + }; + } + if (currentStore.tags) { + data.tags = [...(data.tags ?? []), ...currentStore.tags]; + } + } + Object.assign(this, data); } @@ -117,7 +131,10 @@ currentExperimentItemRunId: currentStore?.currentExperimentItemRunId ?? null, currentStep: null, - rootRun: null + rootRun: null, + metadata: currentStore?.metadata ?? null, + tags: currentStore?.tags ?? null, + stepId: currentStore?.stepId ??
null }, () => cb(this) ); } diff --git a/tests/decorate.test.ts b/tests/decorate.test.ts new file mode 100644 index 0000000..3e18372 --- /dev/null +++ b/tests/decorate.test.ts @@ -0,0 +1,177 @@ +import { openai } from '@ai-sdk/openai'; +import { ChatOpenAI } from '@langchain/openai'; +import { generateText } from 'ai'; +import 'dotenv/config'; +import { SimpleChatEngine } from 'llamaindex'; +import OpenAI from 'openai'; +import { v4 as uuidv4 } from 'uuid'; + +import { LiteralClient, Maybe } from '../src'; + +const url = process.env.LITERAL_API_URL; +const apiKey = process.env.LITERAL_API_KEY; + +if (!url || !apiKey) { + throw new Error('Missing environment variables'); +} + +const client = new LiteralClient({ apiKey, apiUrl: url }); + +function sleep(ms: number): Promise<void> { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +describe('Decorator', () => { + describe('Manual logging', () => { + it('adds metadata and tags to everything logged inside the wrapper', async () => { + let threadId: Maybe<string>; + let stepId: Maybe<string>; + const metadata = { key: 'value' }; + const tags = ['tag1', 'tag2']; + + await client.decorate({ metadata, tags }).wrap(async () => { + const createdThread = await client + .thread({ name: 'Test thread' }) + .upsert(); + + const createdStep = await createdThread + .step({ name: 'Test step', type: 'assistant_message' }) + .send(); + + threadId = createdThread.id; + stepId = createdStep.id; + }); + + await sleep(1000); + + const thread = await client.api.getThread(threadId!); + const step = await client.api.getStep(stepId!); + + expect(thread?.metadata).toEqual(expect.objectContaining(metadata)); + expect(thread?.tags).toEqual(expect.arrayContaining(tags)); + expect(step?.metadata).toEqual(expect.objectContaining(metadata)); + expect(step?.tags).toEqual(expect.arrayContaining(tags)); + }); + + it('creates the first step with the provided ID', async () => { + const stepId = uuidv4(); + let generatedFirstStepId: Maybe<string>; + let generatedSecondStepId: Maybe<string>; + + await client.decorate({ stepId }).wrap(async () => { + const firstStep = await client.run({ name: 'First step' }).send(); + generatedFirstStepId = firstStep.id; + + const secondStep = await client.run({ name: 'Second step' }).send(); + generatedSecondStepId = secondStep.id; + }); + + expect(generatedFirstStepId).toBe(stepId); + expect(generatedSecondStepId).not.toBe(stepId); + }); + }); + + // Skip for the CI + describe('Integrations', () => { + it('logs Langchain generations with the given ID, metadata and tags', async () => { + const cb = client.instrumentation.langchain.literalCallback(); + const model = new ChatOpenAI({}); + + const stepId = uuidv4(); + const metadata = { key: 'value' }; + const tags = ['tag1', 'tag2']; + + await client.decorate({ stepId, metadata, tags }).wrap(async () => { + await model.invoke('Hello, how are you?', { + callbacks: [cb] + }); + }); + + await sleep(1000); + + const step = await client.api.getStep(stepId); + + expect(step?.type).toBe('llm'); + expect(step?.id).toBe(stepId); + expect(step?.metadata).toEqual(expect.objectContaining(metadata)); + expect(step?.tags).toEqual(expect.arrayContaining(tags)); + }); + + it('logs LlamaIndex generations with the given ID, metadata and tags', async () => { + client.instrumentation.llamaIndex.instrument(); + const engine = new SimpleChatEngine(); + + const stepId = uuidv4(); + const metadata = { key: 'value' }; + const tags = ['tag1', 'tag2']; + + await client.decorate({ stepId, metadata, tags }).wrap(async () => { + await engine.chat({
message: 'Write a vegetarian lasagna recipe for 4 people.' + }); + }); + + await sleep(1000); + + const step = await client.api.getStep(stepId); + + expect(step?.type).toBe('llm'); + expect(step?.id).toBe(stepId); + expect(step?.metadata).toEqual(expect.objectContaining(metadata)); + expect(step?.tags).toEqual(expect.arrayContaining(tags)); + }, 30_000); + + it('logs OpenAI generations with the given ID, metadata and tags', async () => { + const openai = new OpenAI(); + client.instrumentation.openai(); + + const stepId = uuidv4(); + const metadata = { key: 'value' }; + const tags = ['tag1', 'tag2']; + + await client.decorate({ stepId, metadata, tags }).wrap(async () => { + await openai.chat.completions.create({ + model: 'gpt-3.5-turbo', + messages: [ + { role: 'system', content: 'You are a helpful assistant.' }, + { role: 'user', content: 'What is the capital of Canada?' } + ] + }); + }); + + await sleep(1000); + + const step = await client.api.getStep(stepId); + + expect(step?.type).toBe('llm'); + expect(step?.id).toBe(stepId); + expect(step?.metadata).toEqual(expect.objectContaining(metadata)); + expect(step?.tags).toEqual(expect.arrayContaining(tags)); + }); + + it('logs Vercel AI SDK generations with the given ID, metadata and tags', async () => { + const generateTextWithLiteralAI = + client.instrumentation.vercel.instrument(generateText); + + const stepId = uuidv4(); + const metadata = { key: 'value' }; + const tags = ['tag1', 'tag2']; + + await client.decorate({ stepId, metadata, tags }).wrap(async () => { + await generateTextWithLiteralAI({ + model: openai('gpt-3.5-turbo'), + prompt: 'Write a vegetarian lasagna recipe for 4 people.' + }); + }); + + await sleep(1000); + + const step = await client.api.getStep(stepId); + + expect(step?.type).toBe('llm'); + expect(step?.id).toBe(stepId); + expect(step?.metadata).toEqual(expect.objectContaining(metadata)); + expect(step?.tags).toEqual(expect.arrayContaining(tags)); + }, 30_000); + }); +}); diff --git a/tests/integration/llamaindex.test.ts b/tests/integration/llamaindex.test.ts index 2c7112e..7ca541d 100644 --- a/tests/integration/llamaindex.test.ts +++ b/tests/integration/llamaindex.test.ts @@ -30,7 +30,7 @@ describe('Llama Index Instrumentation', () => { }); // Skip for the CI - describe.skip('with OpenAI', () => { + describe('with OpenAI', () => { it('should create generation when using SimpleChatEngine', async () => { const spy = jest.spyOn(client.api, 'createGeneration'); diff --git a/tests/integration/vercel-sdk.test.ts b/tests/integration/vercel-sdk.test.ts index 0117ad0..32b0c85 100644 --- a/tests/integration/vercel-sdk.test.ts +++ b/tests/integration/vercel-sdk.test.ts @@ -17,7 +17,7 @@ const client = new LiteralClient({ apiKey, apiUrl }); describe('Vercel SDK Instrumentation', () => { // Skip for the CI - describe.skip('With OpenAI', () => { + describe('With OpenAI', () => { afterEach(() => jest.restoreAllMocks()); it('should work a simple text generation', async () => { From 33225ce2baaef5fdd136174fd2480579800ad2ad Mon Sep 17 00:00:00 2001 From: Damien BUTY Date: Fri, 30 Aug 2024 12:38:37 +0200 Subject: [PATCH 04/12] remove useless files --- document.txt | 40 ---------------- langchain-rag.ts | 116 ----------------------------------------------- llamaindex.ts | 43 ------------------ openai.ts | 53 ---------------------- 4 files changed, 252 deletions(-) delete mode 100644 document.txt delete mode 100644 langchain-rag.ts delete mode 100644 llamaindex.ts delete mode 100644 openai.ts diff --git a/document.txt b/document.txt 
deleted file mode 100644 index e00612a..0000000 --- a/document.txt +++ /dev/null @@ -1,40 +0,0 @@ -Pokémon is a Japanese media franchise consisting of video games, animated series and films, a trading card game, and other related media. The franchise takes place in a shared universe in which humans co-exist with creatures known as Pokémon, a large variety of species endowed with special powers. The franchise's target audience is children aged 5 to 12, but it is known to attract people of all ages. - -The franchise originated as a pair of role-playing games developed by Game Freak, from an original concept by its founder, Satoshi Tajiri. Released on the Game Boy on February 27, 1996, the games became sleeper hits and were followed by manga series, a trading card game, and anime series and films. From 1998 to 2000, Pokémon was exported to the rest of the world, creating an unprecedented global phenomenon dubbed "Pokémania". By 2002, the craze had ended, after which Pokémon became a fixture in popular culture, with new products being released to this day. In the summer of 2016, the franchise spawned a second craze with the release of Pokémon Go, an augmented reality game developed by Niantic. Pokémon has since been estimated to be the world's highest-grossing media franchise and one of the best-selling video game franchises. - -Pokémon has an uncommon ownership structure. Unlike most IPs, which are owned by one company, Pokémon is jointly owned by three: Nintendo, Game Freak, and Creatures. Game Freak develops the core series role-playing games, which are published by Nintendo exclusively for their consoles, while Creatures manages the trading card game and related merchandise, occasionally developing spin-off titles. The three companies established The Pokémon Company (TPC) in 1998 to manage the Pokémon property within Asia. The Pokémon anime series and films are co-owned by Shogakukan. Since 2009, The Pokémon Company International (TPCi) subsidiary of TPC has managed the franchise in all regions outside of Asia. - -## Name -The original full name of the franchise is Pocket Monsters (ポケットモンスター, Poketto Monsutā), which has been commonly abbreviated to Pokemon (ポケモン) since its launch. When the franchise was released internationally, the short form of the title was used, with an acute accent (´) over the e to aid in pronunciation. - -Pokémon refers to both the franchise itself and the creatures within its fictional universe. As a noun, it is identical in both the singular and plural, as is every individual species name; it is grammatically correct to say "one Pokémon" and "many Pokémon", as well as "one Pikachu" and "many Pikachu". In English, Pokémon may be pronounced either /'powkɛmon/ (poe-keh-mon) or /'powkɪmon/ (poe-key-mon). - -## General concept -The Pokémon franchise is set in a world in which humans coexist with creatures known as Pokémon. Pokémon Red and Blue contain 151 Pokémon species, with new ones being added in subsequent games; as of January 2024, 1,025 Pokémon species have been introduced.[b] Most Pokémon are inspired by real-world animals; for example, Pikachu are a yellow mouse-like species with lightning bolt-shaped tails that possess electrical abilities. - -The player character takes the role of a Pokémon Trainer. The Trainer has three primary goals: travel and explore the Pokémon world; discover and catch each Pokémon species in order to complete their Pokédex; and train a team of up to six Pokémon at a time and have them engage in battles. 
Most Pokémon can be caught with spherical devices known as Poké Balls. Once the opposing Pokémon is sufficiently weakened, the Trainer throws the Poké Ball against the Pokémon, which is then transformed into a form of energy and transported into the device. Once the catch is successful, the Pokémon is tamed and is under the Trainer's command from then on. If the Poké Ball is thrown again, the Pokémon re-materializes into its original state. The Trainer's Pokémon can engage in battles against opposing Pokémon, including those in the wild or owned by other Trainers. Because the franchise is aimed at children, these battles are never presented as overtly violent and contain no blood or gore.[I] Pokémon never die in battle, instead fainting upon being defeated. - -After a Pokémon wins a battle, it gains experience points. After gaining a certain amount of it, the Pokémon levels up, and its statistics rise. As its level increases, the Pokémon learns new offensive and defensive moves to use in battle. Furthermore, many species can undergo a form of spontaneous metamorphosis called Pokémon evolution, and transform into stronger forms. Most Pokémon will evolve at a certain level, while others evolve through different means, such as exposure to a certain item. - -## Media -### Video games -Pokémon video games have been released in a wide variety of genres. The role-playing games (RPGs) developed by Game Freak are considered the core series of the franchise.[449][450][451] Various spin-off games also exist, such as Pokémon Mystery Dungeon, a roguelike RPG series, Pokémon Ranger, an action RPG series, and Detective Pikachu (2018), an adventure game. Pokémon games, in particular the core RPGs, are commonly classified in generations. For example, Junichi Masuda referred to Diamond and Pearl (2006) as Gen 4,[452] and X and Y (2013) as the 6th generation.[453] - -Until 2011, Pokémon games were released exclusively on Nintendo's consoles. With the rise of the smartphone during the 2010s, The Pokémon Company also began developing, publishing, and licensing Pokémon titles for the mobile phone market, most notably Pokémon Go (2016), an augmented reality game developed by Niantic that spawned a worldwide craze in the summer of 2016.[414][415] - -According to Pokémon's official website, as of March 2024, over 480 million Pokémon game units have been sold worldwide.[454] - -### Trading card game -The Pokémon Trading Card Game (PTCG) was one of the first collectable card games (CCGs) in Japan. It was inspired by Magic: The Gathering.[142][143][144] In the card game, the players use a 60-card deck featuring Basic and evolved Pokémon, Energy cards, and Trainer cards to help them knock out the opponent's Pokémon, drawing prize cards and winning the game.[455] Cards are classified into various levels of rarity, ranging from Common to Rare Holofoil with a holographic illustration. Rare cards, including limited edition, exclusive cards, and older cards, are highly valued among collectors due to their scarcity.[456][457] - -According to the official website of The Pokémon Company, 64.8 billion cards have been produced as of March 2024.[454] - -### Anime -As of 2024, the anime consists of over 1,200 episodes across 26 seasons. Its current season, Pokémon Horizons: The Series, started on 14 April 2023. The anime originally focused on Ash Ketchum and his travels across the Pokémon world with his partner, Pikachu. 
They were retired as protagonists after the 25th season,[458] and Pokémon Horizons introduced two new protagonists, Liko and Roy.[459] A total of 23 anime films have been released, the most recent being Pokémon the Movie: Secrets of the Jungle (2020).[460] - -Spin-off series from the anime have also been produced, including a variety show titled Weekly Pokémon Broadcasting Station (週刊ポケモン放送局, Shūkan Pokemon Hōsōkyoku), which aired on TV Tokyo from 2002 to 2004 and aired in English as part of Pokémon Chronicles,[461][462] as well as three television specials.[463] Many short films focusing on Pikachu and other Pokémon were released, primarily preceding the films.[464] Various animated mini-series also exist.[IX] - -### Live-action -Detective Pikachu, a live-action/animated film based on the video game of the same name, was released in 2019.[475] A sequel is currently under development.[476] - -A live-action television drama produced by The Pokémon Company and TV Tokyo titled Pocket ni Boken o Tsumekonde premiered on TV Tokyo on October 20, 2023.[477] \ No newline at end of file diff --git a/langchain-rag.ts b/langchain-rag.ts deleted file mode 100644 index a597ad4..0000000 --- a/langchain-rag.ts +++ /dev/null @@ -1,116 +0,0 @@ -import { HNSWLib } from '@langchain/community/vectorstores/hnswlib'; -import { StringOutputParser } from '@langchain/core/output_parsers'; -import { PromptTemplate } from '@langchain/core/prompts'; -import { - RunnablePassthrough, - RunnableSequence -} from '@langchain/core/runnables'; -import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai'; -import 'dotenv/config'; -import { formatDocumentsAsString } from 'langchain/util/document'; - -import { LiteralClient } from './src'; - -const literalClient = new LiteralClient(); -const cb = literalClient.instrumentation.langchain.literalCallback(); - -const model = new ChatOpenAI({}); - -async function main() { - const literalaiStepId = '4defe177-334e-457f-8365-f34ad5ba84b3'; - const firstResponse = await model.invoke('Hello, how are you?', { - callbacks: [cb], - metadata: { - test: 'yes', - helicopter: 'you mean helicoptell', - literalaiStepId - }, - tags: ['bim', 'bam', 'boom'] - }); - - await literalClient.api.createScore({ - stepId: literalaiStepId, - name: 'Toxicity', - type: 'HUMAN', - comment: 'The answer is pretty nice', - value: 0 - }); - - console.log(firstResponse); - - await literalClient; - - const vectorStore = await HNSWLib.fromTexts( - ['mitochondria is the powerhouse of the cell'], - [{ id: 1 }], - new OpenAIEmbeddings() - ); - const retriever = vectorStore.asRetriever(); - - const prompt = - PromptTemplate.fromTemplate(`Answer the question based only on the following context: -{context} - -Question: {question}`); - - const chain = RunnableSequence.from([ - { - context: retriever.pipe(formatDocumentsAsString) as any, - question: new RunnablePassthrough() - }, - prompt, - model, - new StringOutputParser() - ]); - - const newLiteralaiStepId = '94059657-3a31-4682-8f9a-33ce019ea027'; - - await literalClient.api.createScore({ - stepId: newLiteralaiStepId, - name: 'Toxicity', - type: 'HUMAN', - comment: 'wow what a douche', - value: 1 - }); - - const result = await chain.invoke('What is the powerhouse of the cell?', { - callbacks: [cb], - runName: 'Standalone RAG Run', - metadata: { - test: 'yes', - helicopter: 'you mean helicoptell', - literalaiStepId: newLiteralaiStepId - }, - tags: ['bim', 'bam', 'boom'], - configurable: { thread_id: 'test_thread_id' } - }); - - console.log(result); - - await 
literalClient.thread({ name: 'Test RAG Thread' }).wrap(async () => { - const result = await chain.invoke('What is the powerhouse of the cell?', { - callbacks: [cb] - }); - - console.log(result); - }); - - await literalClient.run({ name: 'Test RAG Run' }).wrap(async () => { - const result = await chain.invoke('What is the powerhouse of the cell?', { - callbacks: [cb] - }); - - console.log(result); - - const result2 = await chain.invoke( - 'What is the air-speed velocity of an unladen swallow?', - { - callbacks: [cb] - } - ); - - console.log(result2); - }); -} - -main(); diff --git a/llamaindex.ts b/llamaindex.ts deleted file mode 100644 index b93a2a3..0000000 --- a/llamaindex.ts +++ /dev/null @@ -1,43 +0,0 @@ -import 'dotenv/config'; -import { - ContextChatEngine, - Document, - Settings, - VectorStoreIndex -} from 'llamaindex'; -import fs from 'node:fs/promises'; -import { stdin as input, stdout as output } from 'node:process'; -import readline from 'node:readline/promises'; - -import { LiteralClient } from './src'; - -const client = new LiteralClient(); - -client.instrumentation.llamaIndex.instrument(); - -// Update chunk size -Settings.chunkSize = 512; - -async function main() { - const documentContent = await fs.readFile('document.txt', 'utf-8'); - const document = new Document({ text: documentContent }); - const index = await VectorStoreIndex.fromDocuments([document]); - const retriever = index.asRetriever({ topK: { TEXT: 5, IMAGE: 5 } }); - const chatEngine = new ContextChatEngine({ retriever }); - const rl = readline.createInterface({ input, output }); - - const thread = await client.thread({ name: 'Llama Index Example' }).upsert(); - await client.instrumentation.llamaIndex.withThread(thread, async () => { - // eslint-disable-next-line - while (true) { - const query = await rl.question('Query: '); - const stream = await chatEngine.chat({ message: query, stream: true }); - for await (const chunk of stream) { - process.stdout.write(chunk.response); - } - process.stdout.write('\n'); - } - }); -} - -main().catch(console.error); diff --git a/openai.ts b/openai.ts deleted file mode 100644 index 3061e33..0000000 --- a/openai.ts +++ /dev/null @@ -1,53 +0,0 @@ -import 'dotenv/config'; -import OpenAI from 'openai'; -import { v4 as uuidv4 } from 'uuid'; - -import { LiteralClient } from './src'; - -const openai = new OpenAI(); - -const literalClient = new LiteralClient(); -// Instrument the OpenAI client -const openai_ = literalClient.instrumentation.openai({ client: openai }); - -async function main() { - const response = await openai_.chat.completions.create( - { - model: 'gpt-4', - messages: [{ role: 'user', content: 'Say this is a test' }] - }, - { - literalaiStepId: uuidv4() - } - ); - - await openai_.chat.completions.create( - { - model: 'gpt-4', - messages: [{ role: 'user', content: 'Say this is a test' }] - }, - { - literalaiStepId: uuidv4() - } - ); - - await openai_.chat.completions.create( - { - model: 'gpt-4', - messages: [{ role: 'user', content: 'Say this is a test' }] - }, - { - literalaiStepId: uuidv4() - } - ); - - // const embedding = await openai.embeddings?.create({ - // model: 'text-embedding-3-large', - // input: 'This is a test' - // }); - console.log(response); - // console.log(JSON.stringify(response, null, 2)); - // console.log(JSON.stringify(embedding, null, 2)); -} - -main(); From 22a21d34d2ab4f869181f0486a869a652592d652 Mon Sep 17 00:00:00 2001 From: Damien BUTY Date: Fri, 30 Aug 2024 16:24:41 +0200 Subject: [PATCH 05/12] fix: self review --- src/index.ts | 4 ++-- 
src/instrumentation/langchain.ts | 26 -------------------------- tests/decorate.test.ts | 2 +- tests/integration/llamaindex.test.ts | 2 +- tests/integration/openai.test.ts | 1 - tests/integration/vercel-sdk.test.ts | 2 +- 6 files changed, 5 insertions(+), 32 deletions(-) diff --git a/src/index.ts b/src/index.ts index c52fdb8..8426fe9 100644 --- a/src/index.ts +++ b/src/index.ts @@ -235,8 +235,8 @@ export class LiteralClient { currentThread: currentStore?.currentThread ?? null, currentExperimentItemRunId: currentStore?.currentExperimentItemRunId ?? null, - currentStep: null, - rootRun: null, + currentStep: currentStore?.currentStep ?? null, + rootRun: currentStore?.rootRun ?? null, metadata: options?.metadata ?? null, tags: options?.tags ?? null, stepId: options?.stepId ?? null diff --git a/src/instrumentation/langchain.ts b/src/instrumentation/langchain.ts index f4324ce..bfed578 100644 --- a/src/instrumentation/langchain.ts +++ b/src/instrumentation/langchain.ts @@ -342,18 +342,6 @@ export class LiteralCallbackHandler extends BaseCallbackHandler { delete settings.apiKey; delete settings.api_key; - // console.log( - // 'handleLLMStart', - // llm, - // prompts, - // runId, - // parentRunId, - // extraParams, - // tags, - // metadata, - // name - // ); - this.completionGenerations[runId] = { provider, model, @@ -420,8 +408,6 @@ export class LiteralCallbackHandler extends BaseCallbackHandler { const completionGeneration = this.completionGenerations[runId]; const chatGeneration = this.chatGenerations[runId]; - // console.log('handleLLMEnd', output, runId); - try { if (completionGeneration) { const { @@ -556,18 +542,6 @@ export class LiteralCallbackHandler extends BaseCallbackHandler { delete settings.apiKey; delete settings.api_key; - // console.log( - // 'handleChatModelStart', - // llm, - // messages, - // runId, - // parentRunId, - // extraParams, - // tags, - // metadata, - // name - // ); - const messageList = messages[0]; const { promptId, variables } = checkForLiteralPrompt(messageList); diff --git a/tests/decorate.test.ts b/tests/decorate.test.ts index 3e18372..ce1c4c9 100644 --- a/tests/decorate.test.ts +++ b/tests/decorate.test.ts @@ -72,7 +72,7 @@ describe('Decorator', () => { }); // Skip for the CI - describe('Integrations', () => { + describe.skip('Integrations', () => { it('logs Langchain generations with the given ID, metadata and tags', async () => { const cb = client.instrumentation.langchain.literalCallback(); const model = new ChatOpenAI({}); diff --git a/tests/integration/llamaindex.test.ts b/tests/integration/llamaindex.test.ts index 7ca541d..2c7112e 100644 --- a/tests/integration/llamaindex.test.ts +++ b/tests/integration/llamaindex.test.ts @@ -30,7 +30,7 @@ describe('Llama Index Instrumentation', () => { }); // Skip for the CI - describe('with OpenAI', () => { + describe.skip('with OpenAI', () => { it('should create generation when using SimpleChatEngine', async () => { const spy = jest.spyOn(client.api, 'createGeneration'); diff --git a/tests/integration/openai.test.ts b/tests/integration/openai.test.ts index 77a47b5..63ecb3b 100644 --- a/tests/integration/openai.test.ts +++ b/tests/integration/openai.test.ts @@ -15,7 +15,6 @@ if (!apiUrl || !apiKey) { const openai = new OpenAI({ apiKey: 'an-ocean-of-noise' }); -// Skip for the CI describe('OpenAI Instrumentation', () => { // Mock OpenAI Calls beforeAll(() => { diff --git a/tests/integration/vercel-sdk.test.ts b/tests/integration/vercel-sdk.test.ts index 32b0c85..0117ad0 100644 --- a/tests/integration/vercel-sdk.test.ts 
+++ b/tests/integration/vercel-sdk.test.ts
@@ -17,7 +17,7 @@ const client = new LiteralClient({ apiKey, apiUrl });
 
 describe('Vercel SDK Instrumentation', () => {
   // Skip for the CI
-  describe('With OpenAI', () => {
+  describe.skip('With OpenAI', () => {
     afterEach(() => jest.restoreAllMocks());
 
     it('should work a simple text generation', async () => {

From c74d40e548d95dd10c48a78bab75152dbcac6caf Mon Sep 17 00:00:00 2001
From: Damien BUTY
Date: Thu, 5 Sep 2024 12:50:18 +0200
Subject: [PATCH 06/12] refactor: log generations as LLM steps and move
 context enrichment into Step and Thread

---
 jest.config.ts | 3 +-
 src/api.ts | 52 +++++++--------------
 src/instrumentation/langchain.ts | 19 ++++----
 src/instrumentation/vercel-sdk.ts | 11 +++++
 src/observability/step.ts | 70 ++++++++++++++++------------
 src/observability/thread.ts | 24 +++++++---
 src/utils.ts | 5 ++
 tests/api.test.ts | 43 ++++++++---------
 tests/attachments.test.ts | 7 +--
 tests/decorate.test.ts | 13 ++----
 tests/integration/langchain.test.ts | 22 ++++-----
 tests/integration/llamaindex.test.ts | 7 +--
 tests/integration/openai.test.ts | 21 +++++----
 tests/integration/vercel-sdk.test.ts | 17 +++----
 tests/utils.ts | 3 ++
 tests/wrappers.test.ts | 5 +-
 16 files changed, 167 insertions(+), 155 deletions(-)
 create mode 100644 tests/utils.ts

diff --git a/jest.config.ts b/jest.config.ts
index ff38151..8f42aa3 100644
--- a/jest.config.ts
+++ b/jest.config.ts
@@ -4,6 +4,7 @@ const config: Config.InitialOptions = {
   verbose: true,
   transform: {
     '^.+\\.ts?$': 'ts-jest'
-  }
+  },
+  testTimeout: 30_000
 };
 export default config;
diff --git a/src/api.ts b/src/api.ts
index a34cca1..8e3ac73 100644
--- a/src/api.ts
+++ b/src/api.ts
@@ -849,43 +849,23 @@ export class API {
    * @param generation - The `Generation` object to be created and sent to the platform.
    * @returns A Promise resolving to the newly created `Generation` object.
    */
-  async createGeneration(generation: Generation, stepId: string | null = null) {
-    const mutation = `
-    mutation CreateGeneration($generation: GenerationPayloadInput!) {
-      createGeneration(generation: $generation) {
-        id,
-        type
-      }
-    }
-  `;
-
-    const currentStore = this.client.store.getStore();
-
-    if (currentStore) {
-      if (currentStore.metadata) {
-        generation.metadata = {
-          ...generation.metadata,
-          ...currentStore.metadata
-        };
-      }
-
-      if (currentStore.tags) {
-        generation.tags = [...(generation.tags ?? []), ...currentStore.tags];
-      }
-
-      if (currentStore.stepId) {
-        generation.id = currentStore.stepId;
-        currentStore.stepId = null;
-      }
-    }
-
-    const variables = {
-      stepId,
-      generation
-    };
+  async createGeneration(generation: Generation) {
+    const stepId = generation.id;
+    const stepMetadata = generation.metadata;
+    const stepTags = generation.tags;
+
+    delete generation.id;
+
+    const generationAsStep = this.client.step({
+      id: stepId,
+      metadata: stepMetadata,
+      tags: stepTags,
+      generation,
+      name: generation.type ?? '',
+      type: 'llm'
+    });
 
-    const response = await this.makeGqlCall(mutation, variables);
-    return response.data.createGeneration as PersistedGeneration;
+    return generationAsStep.send();
   }
 
   /**
diff --git a/src/instrumentation/langchain.ts b/src/instrumentation/langchain.ts
index bfed578..dc221a5 100644
--- a/src/instrumentation/langchain.ts
+++ b/src/instrumentation/langchain.ts
@@ -294,18 +294,15 @@ export class LiteralCallbackHandler extends BaseCallbackHandler {
     }
   }
 
-  getGenerationStepId(
-    runId: string,
-    metadata?: Record<string, unknown> | undefined
-  ) {
+  getGenerationStepId(metadata?: Record<string, unknown> | undefined) {
     const generationStepIdFromMetadata = metadata?.literalaiStepId;
 
     if (typeof generationStepIdFromMetadata !== 'string') {
-      return runId;
+      return null;
    }
 
     if (!uuidValidate(generationStepIdFromMetadata)) {
-      return runId;
+      return null;
    }
 
     // The stepId from metadata can only be used on one generation
@@ -314,7 +311,7 @@
         (step) => step.id === generationStepIdFromMetadata
       )
     ) {
-      return runId;
+      return null;
    }
 
     return generationStepIdFromMetadata;
@@ -362,7 +359,7 @@
       type: 'llm',
       tags: tags,
       threadId: this.threadId,
-      id: this.getGenerationStepId(runId, metadata),
+      id: this.getGenerationStepId(metadata),
       startTime: new Date().toISOString(),
       parentId: this.getParentId(parentRunId),
       metadata: metadata,
@@ -450,7 +447,7 @@
       } else {
         await this.client.api.createGeneration({
           ...generation,
-          id: this.getGenerationStepId(runId, metadata)
+          id: this.getGenerationStepId(metadata)
         });
       }
     } else if (chatGeneration) {
@@ -511,7 +508,7 @@
       } else {
         await this.client.api.createGeneration({
           ...generation,
-          id: this.getGenerationStepId(runId, metadata)
+          id: this.getGenerationStepId(metadata)
         });
       }
     }
@@ -577,7 +574,7 @@
       type: 'llm',
       tags: tags,
       threadId: this.threadId,
-      id: this.getGenerationStepId(runId, metadata),
+      id: this.getGenerationStepId(metadata),
       startTime: new Date().toISOString(),
       parentId: parentId,
       metadata,
diff --git a/src/instrumentation/vercel-sdk.ts b/src/instrumentation/vercel-sdk.ts
index 3df5d7b..f9d78bf 100644
--- a/src/instrumentation/vercel-sdk.ts
+++ b/src/instrumentation/vercel-sdk.ts
@@ -221,6 +221,15 @@ const computeMetricsStream = async (
         });
         break;
       }
+      case 'object': {
+        console.log({ chunk });
+        const { object } = chunk as ObjectStreamPart<any>;
+        messages[messages.length - 1] = {
+          role: 'assistant',
+          content: JSON.stringify({})
+        };
+        break;
+      }
     }
   }
 
@@ -236,6 +245,8 @@
      ?
outputTokenCount / (duration / 1000) : undefined; + console.log(messages, textMessage); + if (textMessage.content) messages.push(textMessage); const messageCompletion = messages.pop(); diff --git a/src/observability/step.ts b/src/observability/step.ts index 228cc6d..e164753 100644 --- a/src/observability/step.ts +++ b/src/observability/step.ts @@ -3,7 +3,14 @@ import { v4 as uuidv4 } from 'uuid'; import { LiteralClient } from '..'; import { API } from '../api'; import { Score } from '../evaluation/score'; -import { Environment, Maybe, OmitUtils, Utils, isPlainObject } from '../utils'; +import { + Environment, + Maybe, + OmitUtils, + Utils, + isPlainObject, + omitLiteralAiMetadata +} from '../utils'; import { Attachment } from './attachment'; import { Generation } from './generation'; @@ -63,42 +70,14 @@ export class Step extends StepFields { this.api = client.api; this.client = client; - const currentStore = this.client.store.getStore(); - - if (currentStore) { - if (currentStore.metadata) { - data.metadata = { - ...data.metadata, - ...currentStore.metadata - }; - } - - if (currentStore.tags) { - data.tags = [...(data.tags ?? []), ...currentStore.tags]; - } - - if (currentStore.stepId) { - data.id = currentStore.stepId; - currentStore.stepId = null; - } - } - Object.assign(this, data); + this.enrichFromStore(ignoreContext); // Automatically generate an ID if not provided. if (!this.id) { this.id = uuidv4(); } - if (ignoreContext) { - return; - } - - // Automatically assign parent thread & step & rootRun if there are any in the store. - this.threadId = this.threadId ?? this.client._currentThread()?.id; - this.parentId = this.parentId ?? this.client._currentStep()?.id; - this.rootRunId = this.rootRunId ?? this.client._rootRun()?.id; - // Set the creation and start time to the current time if not provided. if (!this.createdAt) { this.createdAt = new Date().toISOString(); @@ -113,6 +92,37 @@ export class Step extends StepFields { } } + private enrichFromStore(ignoreContext?: true) { + if (ignoreContext) { + return; + } + + const currentStore = this.client.store.getStore(); + + if (currentStore) { + if (currentStore.metadata) { + this.metadata = omitLiteralAiMetadata({ + ...this.metadata, + ...currentStore.metadata + }); + } + + if (currentStore.tags) { + this.tags = [...(this.tags ?? []), ...currentStore.tags]; + } + + if (currentStore.stepId && !this.id) { + this.id = currentStore.stepId; + currentStore.stepId = null; + } + } + + // Automatically assign parent thread & step & rootRun if there are any in the store. + this.threadId = this.threadId ?? this.client._currentThread()?.id; + this.parentId = this.parentId ?? this.client._currentStep()?.id; + this.rootRunId = this.rootRunId ?? this.client._rootRun()?.id; + } + /** * Serializes the step instance, converting complex objects to strings as necessary. * @returns A serialized representation of the step. 
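The step.ts diff above moves all store-based enrichment into the `Step` constructor: a step now merges the decorator's metadata and tags, consumes a pending `stepId` only once (and only when the step has no explicit id), and picks up its parent thread, step and root run from the async-local store. A minimal usage sketch of the behaviour this enables, assuming the published `@literalai/client` package name and the `decorate`/`wrap` API exercised in `tests/decorate.test.ts` (an illustration, not part of the patch):

```ts
import { LiteralClient } from '@literalai/client';

const client = new LiteralClient({
  apiKey: process.env.LITERAL_API_KEY!,
  apiUrl: process.env.LITERAL_API_URL!
});

async function demo() {
  await client
    .decorate({ metadata: { env: 'test' }, tags: ['demo'] })
    .wrap(async () => {
      // enrichFromStore() runs in the Step constructor, so this run picks up
      // the decorator's metadata and tags; a decorator-level stepId, if one
      // had been provided, would be consumed by the first step only.
      const run = await client.run({ name: 'Demo run' }).send();
      // run.metadata -> { env: 'test' }, run.tags -> ['demo']
    });
}

demo();
```

Note that `omitLiteralAiMetadata` (added in `src/utils.ts` below) strips the `literalaiMetadata`, `literalaiTags` and `literalaiStepId` keys during the merge, so instrumentation-only hints never leak into persisted metadata.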
diff --git a/src/observability/thread.ts b/src/observability/thread.ts
index afa914f..05f7bc4 100644
--- a/src/observability/thread.ts
+++ b/src/observability/thread.ts
@@ -2,7 +2,13 @@ import { v4 as uuidv4 } from 'uuid';
 
 import { LiteralClient } from '..';
 import { API } from '../api';
-import { Environment, Maybe, OmitUtils, Utils } from '../utils';
+import {
+  Environment,
+  Maybe,
+  OmitUtils,
+  Utils,
+  omitLiteralAiMetadata
+} from '../utils';
 import { Step, StepConstructor } from './step';
 
 /**
@@ -54,21 +60,25 @@
       data.id = uuidv4();
     }
 
+    Object.assign(this, data);
+
+    this.enrichFromStore();
+  }
+
+  private enrichFromStore() {
     const currentStore = this.client.store.getStore();
 
     if (currentStore) {
       if (currentStore.metadata) {
-        data.metadata = {
-          ...data.metadata,
+        this.metadata = omitLiteralAiMetadata({
+          ...this.metadata,
           ...currentStore.metadata
-        };
+        });
       }
 
       if (currentStore.tags) {
-        data.tags = [...(data.tags ?? []), ...currentStore.tags];
+        this.tags = [...(this.tags ?? []), ...currentStore.tags];
       }
     }
-
-    Object.assign(this, data);
   }
 
   /**
diff --git a/src/utils.ts b/src/utils.ts
index d04e5ff..b6df629 100644
--- a/src/utils.ts
+++ b/src/utils.ts
@@ -79,3 +79,8 @@ export class User extends Utils {
     Object.assign(this, data);
   }
 }
+
+export function omitLiteralAiMetadata(obj: Record<string, any>) {
+  const { literalaiTags, literalaiMetadata, literalaiStepId, ...rest } = obj;
+  return rest;
+}
diff --git a/tests/api.test.ts b/tests/api.test.ts
index ede3b50..247f46a 100644
--- a/tests/api.test.ts
+++ b/tests/api.test.ts
@@ -4,11 +4,12 @@ import { v4 as uuidv4 } from 'uuid';
 import { ChatGeneration, LiteralClient } from '../src';
 import { Dataset } from '../src/evaluation/dataset';
 import { Score } from '../src/evaluation/score';
+import { sleep } from './utils';
 
-describe('End to end tests for the SDK', function () {
+describe('End to end tests for the SDK', function() {
   let client: LiteralClient;
 
-  beforeAll(function () {
+  beforeAll(function() {
     const url = process.env.LITERAL_API_URL;
     const apiKey = process.env.LITERAL_API_KEY;
 
@@ -19,7 +20,7 @@
     client = new LiteralClient({ apiKey, apiUrl: url });
   });
 
-  it('should test user', async function () {
+  it('should test user', async function() {
     const identifier = `test_user_${uuidv4()}`;
     const user = await client.api.createUser(identifier, { foo: 'bar' });
 
@@ -44,9 +45,9 @@
     const deletedUser = await client.api.getUser(identifier);
     expect(deletedUser).toBeUndefined();
-  }, 30000);
+  });
 
-  it('should test generation', async function () {
+  it('should test generation', async function() {
     const generation = await client.api.createGeneration({
       provider: 'test',
       model: 'test',
@@ -65,7 +66,7 @@
     expect(generations.data[0].id).toBe(generation.id);
   });
 
-  it('should test thread with a single argument', async function () {
+  it('should test thread with a single argument', async function() {
     const thread = await client.api.upsertThread({
       threadId: uuidv4(),
       name: 'name',
@@ -91,9 +92,9 @@
     const deletedThread = await client.api.getThread(thread.id);
     expect(deletedThread).toBeNull();
-  }, 30000);
+  });
 
-  it('should test thread (deprecated)', async function () {
+  it('should test thread (deprecated)', async function() {
     const thread = await client.api.upsertThread(
       uuidv4(),
       'name',
@@ -123,7 +124,7 @@
describe('End to end tests for the SDK', function () { expect(deletedThread).toBeNull(); }); - it('should test export thread', async function () { + it('should test export thread', async function() { const thread = await client.api.upsertThread({ threadId: uuidv4(), name: 'test', @@ -159,7 +160,7 @@ describe('End to end tests for the SDK', function () { expect(deletedThread).toBeNull(); }); - it('should test run', async function () { + it('should test run', async function() { const step = await client .run({ name: 'test', @@ -175,7 +176,7 @@ describe('End to end tests for the SDK', function () { }) .send(); - await new Promise((resolve) => setTimeout(resolve, 2000)); + await sleep(2000); const fetchedStep = await client.api.getStep(step.id!); expect(fetchedStep?.id).toBe(step.id); @@ -190,7 +191,7 @@ describe('End to end tests for the SDK', function () { expect(deletedStep).toBeNull(); }); - it('should test step', async function () { + it('should test step', async function() { const thread = await client.thread({ id: uuidv4() }); const step = await thread .step({ @@ -209,7 +210,7 @@ describe('End to end tests for the SDK', function () { }) .send(); - await new Promise((resolve) => setTimeout(resolve, 2000)); + await sleep(2000); const fetchedStep = await client.api.getStep(step.id!); expect(fetchedStep?.id).toBe(step.id); @@ -222,9 +223,9 @@ describe('End to end tests for the SDK', function () { const deletedStep = await client.api.getStep(step.id!); expect(deletedStep).toBeNull(); - }, 30000); + }); - it('should test steps', async function () { + it('should test steps', async function() { const thread = await client.thread({ id: uuidv4() }); const step = await thread @@ -237,7 +238,7 @@ describe('End to end tests for the SDK', function () { expect(step.id).not.toBeNull(); - await new Promise((resolve) => setTimeout(resolve, 2000)); + await sleep(2000); const steps = await client.api.getSteps({ filters: [ @@ -259,7 +260,7 @@ describe('End to end tests for the SDK', function () { await client.api.deleteThread(thread.id); }); - it('should test score', async function () { + it('should test score', async function() { const thread = await client.thread({ id: uuidv4() }); const step = await thread .step({ @@ -269,7 +270,7 @@ describe('End to end tests for the SDK', function () { }) .send(); - await new Promise((resolve) => setTimeout(resolve, 2000)); + await sleep(2000); const score = await client.api.createScore({ stepId: step.id!, @@ -299,7 +300,7 @@ describe('End to end tests for the SDK', function () { await client.api.deleteThread(thread.id); }); - it('should test scores', async function () { + it('should test scores', async function() { const thread = await client.thread({ id: uuidv4() }); const step = await thread .step({ @@ -309,7 +310,7 @@ describe('End to end tests for the SDK', function () { }) .send(); - await new Promise((resolve) => setTimeout(resolve, 1000)); + await sleep(1000); const firstScoreValue = 0.9234; const scores = await client.api.createScores([ @@ -527,7 +528,7 @@ describe('End to end tests for the SDK', function () { }) .send(); - await new Promise((resolve) => setTimeout(resolve, 1000)); + await sleep(1000); const datasetItem = await dataset.addStep(step.id!); diff --git a/tests/attachments.test.ts b/tests/attachments.test.ts index 3db3135..f40ab48 100644 --- a/tests/attachments.test.ts +++ b/tests/attachments.test.ts @@ -3,6 +3,7 @@ import { createReadStream, readFileSync } from 'fs'; import { LiteralClient, Maybe } from '../src'; import { Attachment } from 
'../src/observability/attachment';
+import { sleep } from './utils';
 
 const apiUrl = process.env.LITERAL_API_URL;
 const apiKey = process.env.LITERAL_API_KEY;
@@ -33,7 +34,7 @@ describe('Attachments', () => {
     { type: 'ArrayBuffer', content: arrayBuffer! },
     { type: 'Blob', content: blob! },
     { type: 'File', content: file! }
-  ])('handles $type objects', async function ({ type, content }) {
+  ])('handles $type objects', async function({ type, content }) {
     const attachment = await client.api.createAttachment({
       content,
       mime,
@@ -48,7 +49,7 @@
       })
       .send();
 
-    await new Promise((resolve) => setTimeout(resolve, 2000));
+    await sleep(2000);
 
     const fetchedStep = await client.api.getStep(step.id!);
 
@@ -86,7 +87,7 @@
       });
     });
 
-    await new Promise((resolve) => setTimeout(resolve, 2000));
+    await sleep(2000);
 
     const fetchedStep = await client.api.getStep(stepId!);
 
diff --git a/tests/decorate.test.ts b/tests/decorate.test.ts
index ce1c4c9..943789e 100644
--- a/tests/decorate.test.ts
+++ b/tests/decorate.test.ts
@@ -7,6 +7,7 @@ import OpenAI from 'openai';
 import { v4 as uuidv4 } from 'uuid';
 
 import { LiteralClient, Maybe } from '../src';
+import { sleep } from './utils';
 
 const url = process.env.LITERAL_API_URL;
 const apiKey = process.env.LITERAL_API_KEY;
@@ -17,10 +18,6 @@ if (!url || !apiKey) {
 
 const client = new LiteralClient({ apiKey, apiUrl: url });
 
-function sleep(ms: number): Promise<void> {
-  return new Promise((resolve) => setTimeout(resolve, ms));
-}
-
 describe('Decorator', () => {
   describe('Manual logging', () => {
     it('adds metadata and tags to everything logged inside the wrapper', async () => {
@@ -72,7 +69,7 @@ describe('Decorator', () => {
   });
 
   // Skip for the CI
-  describe.skip('Integrations', () => {
+  describe('Integrations', () => {
     it('logs Langchain generations with the given ID, metadata and tags', async () => {
       const cb = client.instrumentation.langchain.literalCallback();
       const model = new ChatOpenAI({});
@@ -87,7 +84,7 @@ describe('Decorator', () => {
         });
       });
 
-      await sleep(1000);
+      await sleep(2000);
 
       const step = await client.api.getStep(stepId);
 
@@ -119,7 +116,7 @@
       expect(step?.id).toBe(stepId);
       expect(step?.metadata).toEqual(expect.objectContaining(metadata));
       expect(step?.tags).toEqual(expect.arrayContaining(tags));
-    }, 30_000);
+    });
 
     it('logs OpenAI generations with the given ID, metadata and tags', async () => {
       const openai = new OpenAI();
@@ -172,6 +169,6 @@
       expect(step?.id).toBe(stepId);
       expect(step?.metadata).toEqual(expect.objectContaining(metadata));
       expect(step?.tags).toEqual(expect.arrayContaining(tags));
-    }, 30_000);
+    });
   });
 });
diff --git a/tests/integration/langchain.test.ts b/tests/integration/langchain.test.ts
index a14ce34..a264cc3 100644
--- a/tests/integration/langchain.test.ts
+++ b/tests/integration/langchain.test.ts
@@ -3,6 +3,7 @@ import 'dotenv/config';
 import { v4 as uuidv4 } from 'uuid';
 
 import { LiteralClient } from '../../src';
+import { sleep } from '../utils';
 
 const url = process.env.LITERAL_API_URL;
 const apiKey = process.env.LITERAL_API_KEY;
@@ -24,18 +25,12 @@ describe('Langchain integration', function () {
       metadata: { literalaiStepId }
     });
 
-    const { data } = await client.api.getGenerations({
-      filters: [
-        {
-          field: 'id',
-          operator: 'eq',
-          value: literalaiStepId
-        }
-      ]
-    });
+    await sleep(1000);
+
+    const step = await client.api.getStep(literalaiStepId);
 
-    expect(data.length).toBe(1);
-  }, 30000);
+
expect(step!.type).toBe('llm'); + }); it('should copy tags and metadata to the generation', async function () { const literalaiStepId = uuidv4(); @@ -55,10 +50,11 @@ describe('Langchain integration', function () { tags }); + await sleep(1000); + const step = await client.api.getStep(literalaiStepId); - expect(step!.type).toBe('llm'); expect(step!.metadata).toEqual(expect.objectContaining(metadata)); expect(step!.tags).toEqual(expect.arrayContaining(tags)); - }, 30000); + }); }); diff --git a/tests/integration/llamaindex.test.ts b/tests/integration/llamaindex.test.ts index 2c7112e..35d9863 100644 --- a/tests/integration/llamaindex.test.ts +++ b/tests/integration/llamaindex.test.ts @@ -11,6 +11,7 @@ import { import { resolve } from 'path'; import { LiteralClient } from '../../src'; +import { sleep } from '../utils'; describe('Llama Index Instrumentation', () => { let client: LiteralClient; @@ -30,7 +31,7 @@ describe('Llama Index Instrumentation', () => { }); // Skip for the CI - describe.skip('with OpenAI', () => { + describe('with OpenAI', () => { it('should create generation when using SimpleChatEngine', async () => { const spy = jest.spyOn(client.api, 'createGeneration'); @@ -119,7 +120,7 @@ describe('Llama Index Instrumentation', () => { expect(response).toBeTruthy(); // Sending message is done asynchronously - await new Promise((resolve) => setTimeout(resolve, 10)); + await sleep(1000); expect(spy).toHaveBeenCalledWith([ expect.objectContaining({ @@ -168,7 +169,7 @@ describe('Llama Index Instrumentation', () => { expect(response).toBeTruthy(); // Sending message is done asynchronously - await new Promise((resolve) => setTimeout(resolve, 10)); + await sleep(1000); expect(spy).toHaveBeenCalledWith([ expect.objectContaining({ diff --git a/tests/integration/openai.test.ts b/tests/integration/openai.test.ts index 63ecb3b..b17d498 100644 --- a/tests/integration/openai.test.ts +++ b/tests/integration/openai.test.ts @@ -5,6 +5,7 @@ import { v4 as uuidv4 } from 'uuid'; import { ChatGeneration, LiteralClient, Maybe, OmitUtils } from '../../src'; import { Step } from '../../src/observability/step'; +import { sleep } from '../utils'; const apiUrl = process.env.LITERAL_API_URL; const apiKey = process.env.LITERAL_API_KEY; @@ -213,7 +214,7 @@ describe('OpenAI Instrumentation', () => { n: 1 }); - await new Promise((resolve) => setTimeout(resolve, 2000)); + await sleep(1000); const { data: [step] } = await client.api.getSteps({ @@ -226,7 +227,7 @@ describe('OpenAI Instrumentation', () => { expect(step?.type).toBe('run'); expect(step?.output?.data[0].url).toEqual(response.data[0].url); - }, 30000); + }); }); }); @@ -255,7 +256,7 @@ describe('OpenAI Instrumentation', () => { }); }); - await new Promise((resolve) => setTimeout(resolve, 2000)); + await sleep(2000); const { data: [step] @@ -266,7 +267,7 @@ describe('OpenAI Instrumentation', () => { expect(step?.threadId).toBe(threadId); expect(step?.parentId).toBe(parentId); - }, 30_000); + }); it("doesn't mix up threads and steps", async () => { const testId = uuidv4(); @@ -340,7 +341,7 @@ describe('OpenAI Instrumentation', () => { expect(firstGeneration?.parentId).toEqual(firstStep?.id); expect(secondGeneration?.threadId).toEqual(secondThreadId); expect(secondGeneration?.parentId).toEqual(secondStep?.id); - }, 30_000); + }); }); describe('Handling tags and metadata', () => { @@ -365,7 +366,7 @@ describe('OpenAI Instrumentation', () => { { literalaiStepId } ); - await new Promise((resolve) => setTimeout(resolve, 3000)); + await sleep(1000); const step = 
await client.api.getStep(literalaiStepId); @@ -396,7 +397,7 @@ describe('OpenAI Instrumentation', () => { }); }); - await new Promise((resolve) => setTimeout(resolve, 4000)); + await sleep(4000); const { data: [step] @@ -407,7 +408,7 @@ describe('OpenAI Instrumentation', () => { expect(step!.tags).toEqual(expect.arrayContaining(['tag1', 'tag2'])); expect(step!.metadata).toEqual({ key: 'value' }); - }, 30_000); + }); it('handles tags and metadata on the LLM call', async () => { const client = new LiteralClient({ apiKey, apiUrl }); @@ -440,7 +441,7 @@ describe('OpenAI Instrumentation', () => { }); }); - await new Promise((resolve) => setTimeout(resolve, 5000)); + await sleep(5000); const { data: [step] @@ -454,6 +455,6 @@ describe('OpenAI Instrumentation', () => { ); expect(step!.metadata!.key).toEqual('value'); expect(step!.metadata!.otherKey).toEqual('otherValue'); - }, 30_000); + }); }); }); diff --git a/tests/integration/vercel-sdk.test.ts b/tests/integration/vercel-sdk.test.ts index 0117ad0..6b06629 100644 --- a/tests/integration/vercel-sdk.test.ts +++ b/tests/integration/vercel-sdk.test.ts @@ -5,6 +5,7 @@ import { v4 as uuidv4 } from 'uuid'; import { z } from 'zod'; import { LiteralClient } from '../../src'; +import { sleep } from '../utils'; const apiUrl = process.env.LITERAL_API_URL; const apiKey = process.env.LITERAL_API_KEY; @@ -17,7 +18,7 @@ const client = new LiteralClient({ apiKey, apiUrl }); describe('Vercel SDK Instrumentation', () => { // Skip for the CI - describe.skip('With OpenAI', () => { + describe('With OpenAI', () => { afterEach(() => jest.restoreAllMocks()); it('should work a simple text generation', async () => { @@ -143,7 +144,7 @@ describe('Vercel SDK Instrumentation', () => { ); }); - it('should work for streamed structured generation', async () => { + it.only('should work for streamed structured generation', async () => { const spy = jest.spyOn(client.api, 'createGeneration'); const streamObjectWithLiteralAI = @@ -213,7 +214,7 @@ describe('Vercel SDK Instrumentation', () => { expect(result.text).toBeTruthy(); // Sending message is done asynchronously - await new Promise((resolve) => setTimeout(resolve, 10)); + await sleep(1000); expect(spy).toHaveBeenCalledWith([ expect.objectContaining({ @@ -429,7 +430,7 @@ describe('Vercel SDK Instrumentation', () => { }); }); - describe.skip('Literal AI metadata', () => { + describe('Literal AI metadata', () => { const generateTextWithLiteralAI = client.instrumentation.vercel.instrument(generateText); @@ -442,13 +443,13 @@ describe('Vercel SDK Instrumentation', () => { literalaiStepId }); - await new Promise((resolve) => setTimeout(resolve, 3000)); + await sleep(1000); const step = await client.api.getStep(literalaiStepId); expect(step!.id).toEqual(literalaiStepId); expect(step!.type).toEqual('llm'); - }, 30_000); + }); it('should create a generation with the provided tags and metadata', async () => { const literalaiStepId = uuidv4(); @@ -461,7 +462,7 @@ describe('Vercel SDK Instrumentation', () => { literalaiMetadata: { otherKey: 'otherValue' } }); - await new Promise((resolve) => setTimeout(resolve, 3000)); + await sleep(1000); const step = await client.api.getStep(literalaiStepId); @@ -469,6 +470,6 @@ describe('Vercel SDK Instrumentation', () => { expect.objectContaining({ otherKey: 'otherValue' }) ); expect(step!.tags).toEqual(expect.arrayContaining(['tag1', 'tag2'])); - }, 30_000); + }); }); }); diff --git a/tests/utils.ts b/tests/utils.ts new file mode 100644 index 0000000..421bda0 --- /dev/null +++ b/tests/utils.ts @@ 
-0,0 +1,3 @@
+export function sleep(ms: number): Promise<void> {
+  return new Promise((resolve) => setTimeout(resolve, ms));
+}
diff --git a/tests/wrappers.test.ts b/tests/wrappers.test.ts
index d4c28ee..16f1bb8 100644
--- a/tests/wrappers.test.ts
+++ b/tests/wrappers.test.ts
@@ -3,6 +3,7 @@ import 'dotenv/config';
 
 import { LiteralClient, Maybe } from '../src';
 import { DatasetExperimentItem } from '../src/evaluation/dataset';
 import { Step } from '../src/observability/step';
+import { sleep } from './utils';
 
 const url = process.env.LITERAL_API_URL;
 const apiKey = process.env.LITERAL_API_KEY;
@@ -13,10 +14,6 @@ if (!url || !apiKey) {
 
 const client = new LiteralClient({ apiKey, apiUrl: url });
 
-function sleep(ms: number): Promise<void> {
-  return new Promise((resolve) => setTimeout(resolve, ms));
-}
-
 describe('Wrapper', () => {
   it('handles failing step', async () => {
     let threadId: Maybe<string>;

From 98cce554692318abefa2681cc22517fe6d3d26ac Mon Sep 17 00:00:00 2001
From: Damien BUTY
Date: Thu, 5 Sep 2024 14:19:31 +0200
Subject: [PATCH 07/12] fix: accumulate streamed object output in the Vercel
 SDK instrumentation

---
 src/instrumentation/vercel-sdk.ts | 17 +++++++++++------
 tests/integration/vercel-sdk.test.ts | 2 +-
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/src/instrumentation/vercel-sdk.ts b/src/instrumentation/vercel-sdk.ts
index f9d78bf..01409ca 100644
--- a/src/instrumentation/vercel-sdk.ts
+++ b/src/instrumentation/vercel-sdk.ts
@@ -187,6 +187,10 @@ const computeMetricsStream = async (
   let outputTokenCount = 0;
   let ttFirstToken: number | undefined = undefined;
 
+  let accumulatedStreamObjectResponse: IGenerationMessage | undefined =
+    undefined;
+
   for await (const chunk of stream as unknown as AsyncIterable<any>) {
     if (typeof chunk === 'string') {
       textMessage.content += chunk;
@@ -222,11 +226,10 @@ const computeMetricsStream = async (
         });
         break;
       }
       case 'object': {
-        console.log({ chunk });
-        const { object } = chunk as ObjectStreamPart<any>;
-        messages[messages.length - 1] = {
+        const { object } = chunk as any;
+        accumulatedStreamObjectResponse = {
           role: 'assistant',
-          content: JSON.stringify({})
+          content: JSON.stringify(object)
         };
         break;
       }
     }
   }
 
+  if (accumulatedStreamObjectResponse) {
+    messages.push(accumulatedStreamObjectResponse);
+  }
+
   const duration = (Date.now() - startTime) / 1000;
   const tokenThroughputInSeconds =
     duration && outputTokenCount
      ?
outputTokenCount / (duration / 1000) : undefined; - console.log(messages, textMessage); - if (textMessage.content) messages.push(textMessage); const messageCompletion = messages.pop(); diff --git a/tests/integration/vercel-sdk.test.ts b/tests/integration/vercel-sdk.test.ts index 6b06629..953c5da 100644 --- a/tests/integration/vercel-sdk.test.ts +++ b/tests/integration/vercel-sdk.test.ts @@ -144,7 +144,7 @@ describe('Vercel SDK Instrumentation', () => { ); }); - it.only('should work for streamed structured generation', async () => { + it('should work for streamed structured generation', async () => { const spy = jest.spyOn(client.api, 'createGeneration'); const streamObjectWithLiteralAI = From c3d94019c308f1ec0fb57880cd37866047368b0d Mon Sep 17 00:00:00 2001 From: Damien BUTY Date: Thu, 5 Sep 2024 14:23:06 +0200 Subject: [PATCH 08/12] fix: skip tests --- tests/decorate.test.ts | 2 +- tests/integration/llamaindex.test.ts | 2 +- tests/integration/vercel-sdk.test.ts | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/tests/decorate.test.ts b/tests/decorate.test.ts index 943789e..46b2820 100644 --- a/tests/decorate.test.ts +++ b/tests/decorate.test.ts @@ -69,7 +69,7 @@ describe('Decorator', () => { }); // Skip for the CI - describe('Integrations', () => { + describe.skip('Integrations', () => { it('logs Langchain generations with the given ID, metadata and tags', async () => { const cb = client.instrumentation.langchain.literalCallback(); const model = new ChatOpenAI({}); diff --git a/tests/integration/llamaindex.test.ts b/tests/integration/llamaindex.test.ts index 35d9863..58fb55b 100644 --- a/tests/integration/llamaindex.test.ts +++ b/tests/integration/llamaindex.test.ts @@ -31,7 +31,7 @@ describe('Llama Index Instrumentation', () => { }); // Skip for the CI - describe('with OpenAI', () => { + describe.skip('with OpenAI', () => { it('should create generation when using SimpleChatEngine', async () => { const spy = jest.spyOn(client.api, 'createGeneration'); diff --git a/tests/integration/vercel-sdk.test.ts b/tests/integration/vercel-sdk.test.ts index 953c5da..59de78e 100644 --- a/tests/integration/vercel-sdk.test.ts +++ b/tests/integration/vercel-sdk.test.ts @@ -18,7 +18,7 @@ const client = new LiteralClient({ apiKey, apiUrl }); describe('Vercel SDK Instrumentation', () => { // Skip for the CI - describe('With OpenAI', () => { + describe.skip('With OpenAI', () => { afterEach(() => jest.restoreAllMocks()); it('should work a simple text generation', async () => { From f61a2db42c63156fcf98a2e32f1280f94c32992d Mon Sep 17 00:00:00 2001 From: Damien BUTY Date: Fri, 6 Sep 2024 15:17:16 +0200 Subject: [PATCH 09/12] fix: tests --- tests/integration/langchain.test.ts | 2 +- tests/integration/openai.test.ts | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/tests/integration/langchain.test.ts b/tests/integration/langchain.test.ts index a264cc3..95be837 100644 --- a/tests/integration/langchain.test.ts +++ b/tests/integration/langchain.test.ts @@ -15,7 +15,7 @@ if (!url || !apiKey) { const client = new LiteralClient({ apiKey, apiUrl: url }); const cb = client.instrumentation.langchain.literalCallback(); -describe('Langchain integration', function () { +describe.skip('Langchain integration', function () { it('should create a generation with the provided id', async function () { const literalaiStepId = uuidv4(); const model = new ChatOpenAI({}); diff --git a/tests/integration/openai.test.ts b/tests/integration/openai.test.ts index b17d498..9dfaa42 100644 --- 
a/tests/integration/openai.test.ts +++ b/tests/integration/openai.test.ts @@ -16,7 +16,8 @@ if (!apiUrl || !apiKey) { const openai = new OpenAI({ apiKey: 'an-ocean-of-noise' }); -describe('OpenAI Instrumentation', () => { +// Skip for the CI +describe.skip('OpenAI Instrumentation', () => { // Mock OpenAI Calls beforeAll(() => { /* @ts-expect-error the mock is incomplete but that's OK */ From 061b36d214a7b8ca603c4167c200a8137e288f91 Mon Sep 17 00:00:00 2001 From: Damien BUTY Date: Fri, 6 Sep 2024 16:28:30 +0200 Subject: [PATCH 10/12] fix: tests --- openai.ts | 141 +++++++++++++++++++++++++++ prompt.ts | 13 +++ src/prompt-engineering/prompt.ts | 1 + tests/integration/vercel-sdk.test.ts | 6 +- 4 files changed, 158 insertions(+), 3 deletions(-) create mode 100644 openai.ts create mode 100644 prompt.ts diff --git a/openai.ts b/openai.ts new file mode 100644 index 0000000..4712a1f --- /dev/null +++ b/openai.ts @@ -0,0 +1,141 @@ +import 'dotenv/config'; +import OpenAI from 'openai'; + +import { LiteralClient } from './src'; + +const literalClient = new LiteralClient(); + +const openai = new OpenAI(); + +// Instrument the OpenAI client +literalClient.instrumentation.openai(); + +async function main() { + // This will be automagically logged by Literal AI + await openai.chat.completions.create({ + model: 'gpt-4', + messages: [{ role: 'user', content: 'Say this is a test' }] + }); + + await openai.chat.completions.create({ + model: 'gpt-4', + messages: [ + { + role: 'user', + content: + 'Write a README document in markdown explaining the basic usage of the `with-csv` npm library.' + } + ] + }); + + await literalClient + .run({ + name: 'Test run', + input: { + content: + '{"question": ["how to code chat with pdf and use pdfplumber and ollama local",[]],"images_content": {}}' + }, + output: { + content: `To create a chat application that interacts with PDF files using \`pdfplumber\` and the Ollama local model in Chainlit, you can follow the example below. This example demonstrates how to upload a PDF, extract text using \`pdfplumber\`, and allow users to ask questions about its content using the Ollama model. 
+ + ### Step 1: Install Required Packages + Make sure you have the necessary packages installed: + \`\`\`bash + pip install chainlit pdfplumber langchain + \`\`\` + + ### Step 2: Create the Application Script + Create a Python file named \`app.py\` and add the following code: + + \`\`\`python + import os + import pdfplumber + import chainlit as cl + from langchain.llms.ollama import Ollama + from langchain.prompts import ChatPromptTemplate + from langchain.schema import StrOutputParser + from langchain.schema.runnable import Runnable + from langchain.schema.runnable.config import RunnableConfig + from langchain.text_splitter import RecursiveCharacterTextSplitter + from langchain.memory import ChatMessageHistory, ConversationBufferMemory + from langchain.schema import Document + + # Initialize the Ollama model + model = Ollama(model="llama2") + + @cl.on_chat_start + async def on_chat_start(): + files = await cl.AskFileMessage( + content="Please upload a PDF file to start asking questions.", + accept=["application/pdf"], + max_size_mb=20, + timeout=180, + ).send() + + file = files[0] + docs = process_pdf(file) + + message_history = ChatMessageHistory() + memory = ConversationBufferMemory(memory_key="chat_history", output_key="answer", chat_memory=message_history, return_messages=True) + + prompt = ChatPromptTemplate.from_messages( + [ + ("system", "You're a knowledgeable assistant who provides accurate answers based on the PDF content."), + ("human", "{question}"), + ] + ) + + runnable = prompt | model | StrOutputParser() + cl.user_session.set("runnable", runnable) + + await cl.Message(content="You can now ask questions about the PDF!").send() + + def process_pdf(file): + text = "" + with pdfplumber.open(file.path) as pdf: + for page in pdf.pages: + text += page.extract_text() + "\ + " + + # Split the text into chunks + text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100) + docs = text_splitter.split_text(text) + documents = [Document(page_content=chunk) for chunk in docs] + return documents + + @cl.on_message + async def on_message(message: cl.Message): + runnable = cl.user_session.get("runnable") # type: Runnable + msg = cl.Message(content="") + + for chunk in await cl.make_async(runnable.stream)( + {"question": message.content}, + config=RunnableConfig(callbacks=[cl.LangchainCallbackHandler()]), + ): + await msg.stream_token(chunk) + + await msg.send() + \`\`\` + + ### Step 3: Run the Application + To start the Chainlit application, run the following command in your terminal: + \`\`\`bash + chainlit run app.py + \`\`\` + + ### Step 4: Interact with the Application + Open your browser and navigate to \`http://localhost:8000\`. You can upload a PDF file and start asking questions about its content. + + ### Explanation + - **PDF Processing**: The \`process_pdf\` function uses \`pdfplumber\` to extract text from the uploaded PDF file. + - **Text Splitting**: The extracted text is split into manageable chunks using \`RecursiveCharacterTextSplitter\`. + - **Ollama Model**: The Ollama model is used to generate responses based on the extracted text. + - **Conversational Interface**: Users can ask questions, and the model will respond based on the content of the PDF. 
+ + This setup allows you to create a conversational interface that can answer questions based on the content of a PDF file using \`pdfplumber\` and the Ollama local model.` + } + }) + .send(); +} + +main(); diff --git a/prompt.ts b/prompt.ts new file mode 100644 index 0000000..2f35796 --- /dev/null +++ b/prompt.ts @@ -0,0 +1,13 @@ +import 'dotenv/config'; + +import { LiteralClient } from './src'; + +const literalClient = new LiteralClient(); + +async function main() { + const prompt = await literalClient.api.getPrompt(''); + + console.log(prompt); +} + +main(); diff --git a/src/prompt-engineering/prompt.ts b/src/prompt-engineering/prompt.ts index bee2abe..cf747d6 100644 --- a/src/prompt-engineering/prompt.ts +++ b/src/prompt-engineering/prompt.ts @@ -34,6 +34,7 @@ class PromptFields extends Utils { type!: GenerationType; createdAt!: string; name!: string; + url!: string; version!: number; url?: Maybe; versionDesc?: Maybe; diff --git a/tests/integration/vercel-sdk.test.ts b/tests/integration/vercel-sdk.test.ts index 59de78e..76e88b6 100644 --- a/tests/integration/vercel-sdk.test.ts +++ b/tests/integration/vercel-sdk.test.ts @@ -16,9 +16,9 @@ if (!apiUrl || !apiKey) { const client = new LiteralClient({ apiKey, apiUrl }); -describe('Vercel SDK Instrumentation', () => { - // Skip for the CI - describe.skip('With OpenAI', () => { +// Skip for the CI +describe.skip('Vercel SDK Instrumentation', () => { + describe('With OpenAI', () => { afterEach(() => jest.restoreAllMocks()); it('should work a simple text generation', async () => { From 773dba529ffff0ea713a860a7fa15f717fa49ff7 Mon Sep 17 00:00:00 2001 From: Damien BUTY Date: Fri, 6 Sep 2024 16:43:10 +0200 Subject: [PATCH 11/12] fix: tests --- tests/api.test.ts | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/tests/api.test.ts b/tests/api.test.ts index 247f46a..aa8395e 100644 --- a/tests/api.test.ts +++ b/tests/api.test.ts @@ -90,6 +90,9 @@ describe('End to end tests for the SDK', function() { await client.api.deleteThread(thread.id); + // We have to await 5 seconds for the thread to disappear from the cache + await sleep(5000); + const deletedThread = await client.api.getThread(thread.id); expect(deletedThread).toBeNull(); }); @@ -120,6 +123,9 @@ describe('End to end tests for the SDK', function() { await client.api.deleteThread(thread.id); + // We have to await 5 seconds for the thread to disappear from the cache + await sleep(5000); + const deletedThread = await client.api.getThread(thread.id); expect(deletedThread).toBeNull(); }); From 05bb5e895daaf0b0782156bd59558eee95e726bc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Cl=C3=A9ment=20Sirieix?= Date: Thu, 12 Sep 2024 15:20:00 +0200 Subject: [PATCH 12/12] fix: apply review --- .gitignore | 1 - openai.ts => examples/openai.ts | 2 +- prompt.ts => examples/prompt.ts | 2 +- package-lock.json | 21 ++++++++++++++++++--- package.json | 2 +- src/instrumentation/langchain.ts | 1 - src/prompt-engineering/prompt.ts | 1 - tests/api.test.ts | 27 ++++++++++++--------------- tests/attachments.test.ts | 2 +- 9 files changed, 34 insertions(+), 25 deletions(-) rename openai.ts => examples/openai.ts (99%) rename prompt.ts => examples/prompt.ts (82%) diff --git a/.gitignore b/.gitignore index c177bb3..95bf6fe 100644 --- a/.gitignore +++ b/.gitignore @@ -15,7 +15,6 @@ dist-ssr *.local .env -examples # Editor directories and files .vscode/* diff --git a/openai.ts b/examples/openai.ts similarity index 99% rename from openai.ts rename to examples/openai.ts index 4712a1f..7987aea 100644 --- a/openai.ts +++ 
b/examples/openai.ts
@@ -1,7 +1,7 @@
 import 'dotenv/config';
 import OpenAI from 'openai';
 
-import { LiteralClient } from './src';
+import { LiteralClient } from '../src';
 
 const literalClient = new LiteralClient();
diff --git a/prompt.ts b/examples/prompt.ts
similarity index 82%
rename from prompt.ts
rename to examples/prompt.ts
index 2f35796..60b5f24 100644
--- a/prompt.ts
+++ b/examples/prompt.ts
@@ -1,6 +1,6 @@
 import 'dotenv/config';
 
-import { LiteralClient } from './src';
+import { LiteralClient } from '../src';
 
 const literalClient = new LiteralClient();
diff --git a/package-lock.json b/package-lock.json
index 86339e2..ceef4c7 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -9,7 +9,6 @@
       "version": "0.0.515",
       "license": "Apache-2.0",
       "dependencies": {
-        "@langchain/openai": "^0.2.7",
         "axios": "^1.6.2",
         "form-data": "^4.0.0",
         "mustache": "^4.2.0",
@@ -44,6 +43,7 @@
       },
       "peerDependencies": {
         "@ai-sdk/openai": "0.0.x",
+        "@langchain/openai": "^0.2.7",
         "ai": "3.x",
         "langchain": "0.1.x",
         "llamaindex": "0.3.x",
@@ -4208,6 +4208,7 @@
       "version": "0.2.28",
       "resolved": "https://registry.npmjs.org/@langchain/core/-/core-0.2.28.tgz",
       "integrity": "sha512-xN3+UdfxFaBcm29auMHFHGEYRh+3HwBc/dICHtwfk2wTSmw4HzWmBtZMx3BG+TOgh5Et7+mT6eF6E3omDLfk+A==",
+      "peer": true,
       "dependencies": {
         "ansi-styles": "^5.0.0",
         "camelcase": "6",
@@ -4229,6 +4230,7 @@
       "version": "5.2.0",
       "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-5.2.0.tgz",
       "integrity": "sha512-Cxwpt2SfTzTtXcfOlzGEee8O+c+MmUgGrNiBcXnuWxuFJHe6a5Hz7qwhwe5OgaSYI0IJvkLqWX1ASG+cJOkEiA==",
+      "peer": true,
       "engines": {
         "node": ">=10"
       },
@@ -4240,6 +4242,7 @@
       "version": "6.3.0",
       "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-6.3.0.tgz",
       "integrity": "sha512-Gmy6FhYlCY7uOElZUSbxo2UCDH8owEk996gkbrpsgGtrJLM3J7jGxl9Ic7Qwwj4ivOE5AWZWRMecDdF7hqGjFA==",
+      "peer": true,
       "engines": {
         "node": ">=10"
       },
@@ -4255,6 +4258,7 @@
         "https://github.com/sponsors/broofa",
         "https://github.com/sponsors/ctavan"
       ],
+      "peer": true,
       "bin": {
         "uuid": "dist/bin/uuid"
       }
@@ -4263,6 +4267,7 @@
       "version": "0.2.7",
       "resolved": "https://registry.npmjs.org/@langchain/openai/-/openai-0.2.7.tgz",
       "integrity": "sha512-f2XDXbExJf4SYsy17QSiq0YY/UWJXhJwoiS8uRi/gBa20zBQ8+bBFRnb9vPdLkOkGiaTy+yXZVFro3a9iW2r3w==",
+      "peer": true,
       "dependencies": {
         "@langchain/core": ">=0.2.26 <0.3.0",
         "js-tiktoken": "^1.0.12",
@@ -5873,7 +5878,8 @@
     "node_modules/@types/retry": {
       "version": "0.12.0",
       "resolved": "https://registry.npmjs.org/@types/retry/-/retry-0.12.0.tgz",
-      "integrity": "sha512-wWKOClTTiizcZhXnPY4wikVAwmdYHp8q6DmC+EJUzAMsycb7HB32Kh9RN4+0gExjmPmZSAQjgURXIGATPegAvA=="
+      "integrity": "sha512-wWKOClTTiizcZhXnPY4wikVAwmdYHp8q6DmC+EJUzAMsycb7HB32Kh9RN4+0gExjmPmZSAQjgURXIGATPegAvA==",
+      "peer": true
     },
     "node_modules/@types/semver": {
       "version": "7.5.8",
@@ -7465,6 +7471,7 @@
       "version": "10.0.1",
       "resolved": "https://registry.npmjs.org/commander/-/commander-10.0.1.tgz",
       "integrity": "sha512-y4Mg2tXshplEbSGzx7amzPwKKOCGuoSRP/CjEdwwk0FOGlUbq6lKuoyDZTNZkmxHdJtp54hdfY/JUrdL7Xfdug==",
+      "peer": true,
       "engines": {
         "node": ">=14"
       }
@@ -7648,6 +7655,7 @@
       "version": "1.2.0",
       "resolved": "https://registry.npmjs.org/decamelize/-/decamelize-1.2.0.tgz",
       "integrity": "sha512-z2S+W9X73hAUUki+N+9Za2lBlun89zigOyGrsax+KUQ6wKW4ZoWpEYBkGhQjwAjjDCkWxhY0VKEhk8wzY7F5cA==",
+      "peer": true,
      "engines": {
         "node": ">=0.10.0"
       }
@@ -10619,6 +10627,7 @@
       "version": "0.1.43",
       "resolved": "https://registry.npmjs.org/langsmith/-/langsmith-0.1.43.tgz",
       "integrity": "sha512-+IL59ye/je9HmMttJU50epJneEbEwlMJ8i5tEFjJC6l2+SWPtedT0UPuAnPEybMhfjU3ziNfqAxck7WTEncL8w==",
+      "peer": true,
       "dependencies": {
         "@types/uuid": "^9.0.1",
         "commander": "^10.0.1",
@@ -12032,6 +12041,7 @@
       "version": "1.0.0",
       "resolved": "https://registry.npmjs.org/p-finally/-/p-finally-1.0.0.tgz",
       "integrity": "sha512-LICb2p9CB7FS+0eR1oqWnHhp0FljGLZCWBE9aix0Uye9W8LTQPwMTYVGWQWIw9RdQiDg4+epXQODwIYJtSJaow==",
+      "peer": true,
       "engines": {
         "node": ">=4"
       }
@@ -12070,6 +12080,7 @@
       "version": "6.6.2",
       "resolved": "https://registry.npmjs.org/p-queue/-/p-queue-6.6.2.tgz",
       "integrity": "sha512-RwFpb72c/BhQLEXIZ5K2e+AhgNVmIejGlTgiB9MzZ0e93GRvqZ7uSi0dvRF7/XIXDeNkra2fNHBxTyPDGySpjQ==",
+      "peer": true,
       "dependencies": {
         "eventemitter3": "^4.0.4",
         "p-timeout": "^3.2.0"
@@ -12084,12 +12095,14 @@
     "node_modules/p-queue/node_modules/eventemitter3": {
       "version": "4.0.7",
       "resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-4.0.7.tgz",
-      "integrity": "sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw=="
+      "integrity": "sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==",
+      "peer": true
     },
     "node_modules/p-retry": {
       "version": "4.6.2",
       "resolved": "https://registry.npmjs.org/p-retry/-/p-retry-4.6.2.tgz",
       "integrity": "sha512-312Id396EbJdvRONlngUx0NydfrIQ5lsYu0znKVUzVvArzEIt08V1qhtyESbGVd1FGX7UKtiFp5uwKZdM8wIuQ==",
+      "peer": true,
       "dependencies": {
         "@types/retry": "0.12.0",
         "retry": "^0.13.1"
@@ -12102,6 +12115,7 @@
       "version": "3.2.0",
       "resolved": "https://registry.npmjs.org/p-timeout/-/p-timeout-3.2.0.tgz",
       "integrity": "sha512-rhIwUycgwwKcP9yTOOFK/AKsAopjjCakVqLHePO3CC6Mir1Z99xT+R63jZxAT5lFZLa2inS5h+ZS2GvR99/FBg==",
+      "peer": true,
       "dependencies": {
         "p-finally": "^1.0.0"
       },
@@ -13185,6 +13199,7 @@
       "version": "0.13.1",
       "resolved": "https://registry.npmjs.org/retry/-/retry-0.13.1.tgz",
       "integrity": "sha512-XQBQ3I8W1Cge0Seh+6gjj03LbmRFWuoszgK9ooCpwYIrhhoO80pfq4cUkU5DkknwfOfFteRwlZ56PYOGYyFWdg==",
+      "peer": true,
       "engines": {
         "node": ">= 4"
       }
diff --git a/package.json b/package.json
index cdabc38..2ee035a 100644
--- a/package.json
+++ b/package.json
@@ -52,7 +52,6 @@
     "zod-to-json-schema": "^3.23.0"
   },
   "dependencies": {
-    "@langchain/openai": "^0.2.7",
     "axios": "^1.6.2",
     "form-data": "^4.0.0",
     "mustache": "^4.2.0",
@@ -60,6 +59,7 @@
   },
   "peerDependencies": {
     "@ai-sdk/openai": "0.0.x",
+    "@langchain/openai": "^0.2.7",
     "ai": "3.x",
     "langchain": "0.1.x",
     "llamaindex": "0.3.x",
diff --git a/src/instrumentation/langchain.ts b/src/instrumentation/langchain.ts
index dc221a5..737228d 100644
--- a/src/instrumentation/langchain.ts
+++ b/src/instrumentation/langchain.ts
@@ -513,7 +513,6 @@ export class LiteralCallbackHandler extends BaseCallbackHandler {
         }
       }
     } catch (e) {
-      console.error(e);
       console.log('Error in handleLLMEnd', e);
     }
   }
diff --git a/src/prompt-engineering/prompt.ts b/src/prompt-engineering/prompt.ts
index cf747d6..bee2abe 100644
--- a/src/prompt-engineering/prompt.ts
+++ b/src/prompt-engineering/prompt.ts
@@ -34,7 +34,6 @@ class PromptFields extends Utils {
   type!: GenerationType;
   createdAt!: string;
   name!: string;
-  url!: string;
   version!: number;
   url?: Maybe<string>;
   versionDesc?: Maybe<string>;
diff --git a/tests/api.test.ts b/tests/api.test.ts
index aa8395e..97cf2fc 100644
--- a/tests/api.test.ts
+++ b/tests/api.test.ts
@@ -6,10 +6,10 @@ import { Dataset } from '../src/evaluation/dataset';
 import { Score } from '../src/evaluation/score';
 import { sleep } from './utils';
 
-describe('End to end tests for the SDK', function() {
+describe('End to end tests for the SDK', function () {
   let client: LiteralClient;
 
-  beforeAll(function() {
+  beforeAll(function () {
     const url = process.env.LITERAL_API_URL;
     const apiKey = process.env.LITERAL_API_KEY;
 
@@ -20,7 +20,7 @@ describe('End to end tests for the SDK', function() {
     client = new LiteralClient({ apiKey, apiUrl: url });
   });
 
-  it('should test user', async function() {
+  it('should test user', async function () {
     const identifier = `test_user_${uuidv4()}`;
     const user = await client.api.createUser(identifier, { foo: 'bar' });
 
@@ -47,7 +47,7 @@ describe('End to end tests for the SDK', function() {
     expect(deletedUser).toBeUndefined();
   });
 
-  it('should test generation', async function() {
+  it('should test generation', async function () {
     const generation = await client.api.createGeneration({
       provider: 'test',
       model: 'test',
@@ -66,7 +66,7 @@ describe('End to end tests for the SDK', function() {
     expect(generations.data[0].id).toBe(generation.id);
   });
 
-  it('should test thread with a single argument', async function() {
+  it('should test thread with a single argument', async function () {
     const thread = await client.api.upsertThread({
       threadId: uuidv4(),
       name: 'name',
@@ -90,14 +90,11 @@ describe('End to end tests for the SDK', function() {
 
     await client.api.deleteThread(thread.id);
 
-    // We have to wait 5 seconds for the thread to disappear from the cache
-    await sleep(5000);
-
     const deletedThread = await client.api.getThread(thread.id);
     expect(deletedThread).toBeNull();
   });
 
-  it('should test thread (deprecated)', async function() {
+  it('should test thread (deprecated)', async function () {
     const thread = await client.api.upsertThread(
       uuidv4(),
       'name',
@@ -130,7 +127,7 @@ describe('End to end tests for the SDK', function() {
     expect(deletedThread).toBeNull();
   });
 
-  it('should test export thread', async function() {
+  it('should test export thread', async function () {
     const thread = await client.api.upsertThread({
       threadId: uuidv4(),
       name: 'test',
@@ -166,7 +163,7 @@ describe('End to end tests for the SDK', function() {
     expect(deletedThread).toBeNull();
   });
 
-  it('should test run', async function() {
+  it('should test run', async function () {
     const step = await client
       .run({
         name: 'test',
@@ -197,7 +194,7 @@ describe('End to end tests for the SDK', function() {
     expect(deletedStep).toBeNull();
   });
 
-  it('should test step', async function() {
+  it('should test step', async function () {
     const thread = await client.thread({ id: uuidv4() });
     const step = await thread
       .step({
@@ -231,7 +228,7 @@ describe('End to end tests for the SDK', function() {
     expect(deletedStep).toBeNull();
   });
 
-  it('should test steps', async function() {
+  it('should test steps', async function () {
     const thread = await client.thread({ id: uuidv4() });
 
     const step = await thread
@@ -266,7 +263,7 @@ describe('End to end tests for the SDK', function() {
     await client.api.deleteThread(thread.id);
   });
 
-  it('should test score', async function() {
+  it('should test score', async function () {
     const thread = await client.thread({ id: uuidv4() });
     const step = await thread
       .step({
@@ -306,7 +303,7 @@ describe('End to end tests for the SDK', function() {
     await client.api.deleteThread(thread.id);
  });
 
-  it('should test scores', async function() {
+  it('should test scores', async function () {
     const thread = await client.thread({ id: uuidv4() });
     const step = await thread
       .step({
diff --git a/tests/attachments.test.ts b/tests/attachments.test.ts
index f40ab48..44ccce2 100644
--- a/tests/attachments.test.ts
+++ b/tests/attachments.test.ts
@@ -34,7 +34,7 @@ describe('Attachments', () => {
     { type: 'ArrayBuffer', content: arrayBuffer! },
     { type: 'Blob', content: blob! },
     { type: 'File', content: file! }
-  ])('handles $type objects', async function({ type, content }) {
+  ])('handles $type objects', async function ({ type, content }) {
     const attachment = await client.api.createAttachment({
       content,
       mime,