chore: update imports to workspace packages for examples and docs #1626

Merged · 16 commits · Feb 10, 2025
5 changes: 5 additions & 0 deletions .changeset/flat-mirrors-dream.md
@@ -0,0 +1,5 @@
---
"@llamaindex/doc": patch
---

chore: update examples and docs to use unified imports
14 changes: 10 additions & 4 deletions apps/next/src/app/(home)/page.tsx
@@ -76,15 +76,19 @@ export default function HomePage() {
>
<MagicMove
code={[
-`import { OpenAI } from "llamaindex";
+`import { OpenAI } from "@llamaindex/openai";

const llm = new OpenAI();
const response = await llm.complete({ prompt: "How are you?" });`,
-`import { OpenAI } from "llamaindex";
+`import { OpenAI } from "@llamaindex/openai";

const llm = new OpenAI();
const response = await llm.chat({
messages: [{ content: "Tell me a joke.", role: "user" }],
});`,
-`import { OpenAI, ChatMemoryBuffer } from "llamaindex";
+`import { ChatMemoryBuffer } from "llamaindex";
+import { OpenAI } from "@llamaindex/openai";

const llm = new OpenAI({ model: 'gpt4o-turbo' });
const buffer = new ChatMemoryBuffer({
tokenLimit: 128_000,
@@ -94,7 +98,9 @@ const response = await llm.chat({
messages: buffer.getMessages(),
stream: true
});`,
-`import { OpenAIAgent, ChatMemoryBuffer } from "llamaindex";
+`import { ChatMemoryBuffer } from "llamaindex";
+import { OpenAIAgent } from "@llamaindex/openai";

const agent = new OpenAIAgent({
llm,
tools: [...myTools]
20 changes: 20 additions & 0 deletions apps/next/src/content/docs/llamaindex/examples/agent_gemini.mdx
@@ -5,4 +5,24 @@ title: Gemini Agent
import { DynamicCodeBlock } from 'fumadocs-ui/components/dynamic-codeblock';
import CodeSourceGemini from "!raw-loader!../../../../../../../examples/gemini/agent.ts";

## Installation

import { Tab, Tabs } from "fumadocs-ui/components/tabs";

<Tabs groupId="install" items={["npm", "yarn", "pnpm"]} persist>
```shell tab="npm"
npm install llamaindex @llamaindex/google
```

```shell tab="yarn"
yarn add llamaindex @llamaindex/google
```

```shell tab="pnpm"
pnpm add llamaindex @llamaindex/google
```
</Tabs>

## Source

<DynamicCodeBlock lang="ts" code={CodeSourceGemini} />
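For orientation, a minimal Gemini agent along these lines might look like the sketch below. This is illustrative only — the tool, model choice, and response handling are assumptions, not the contents of `examples/gemini/agent.ts`:

```ts
import { FunctionTool, ReActAgent } from "llamaindex";
import { Gemini, GEMINI_MODEL } from "@llamaindex/google";

// Illustrative tool — the real example defines its own.
const sumNumbers = FunctionTool.from(
  ({ a, b }: { a: number; b: number }) => `${a + b}`,
  {
    name: "sumNumbers",
    description: "Use this function to sum two numbers",
    parameters: {
      type: "object",
      properties: {
        a: { type: "number", description: "First number" },
        b: { type: "number", description: "Second number" },
      },
      required: ["a", "b"],
    },
  },
);

// Drive the tool with a Gemini LLM via a generic ReAct agent.
const agent = new ReActAgent({
  llm: new Gemini({ model: GEMINI_MODEL.GEMINI_PRO }),
  tools: [sumNumbers],
});

const response = await agent.chat({ message: "How much is 5 + 5?" });
console.log(response.message.content); // exact response shape may vary by version
```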
@@ -12,9 +12,8 @@ Here's a simple example of how to use the Context-Aware Agent:
 import {
   Document,
   VectorStoreIndex,
-  OpenAIContextAwareAgent,
-  OpenAI,
 } from "llamaindex";
+import { OpenAI, OpenAIContextAwareAgent } from "@llamaindex/openai";

async function createContextAwareAgent() {
// Create and index some documents
29 changes: 26 additions & 3 deletions apps/next/src/content/docs/llamaindex/examples/other_llms.mdx
@@ -7,14 +7,36 @@ import CodeSource from "!raw-loader!../../../../../../../examples/mistral";

By default, LlamaIndex.TS uses OpenAI's LLMs and embedding models, but we support [lots of other LLMs](../modules/llms), including models from Mistral (Mistral, Mixtral), Anthropic (Claude) and Google (Gemini).

-If you don't want to use an API at all you can [run a local model](../../examples/local_llm)
+If you don't want to use an API at all you can [run a local model](../../examples/local_llm).

This example walks you through the process of setting up a Mistral model:


## Installation

import { Tab, Tabs } from "fumadocs-ui/components/tabs";

<Tabs groupId="install" items={["npm", "yarn", "pnpm"]} persist>
```shell tab="npm"
npm install llamaindex @llamaindex/mistral
```

```shell tab="yarn"
yarn add llamaindex @llamaindex/mistral
```

```shell tab="pnpm"
pnpm add llamaindex @llamaindex/mistral
```
</Tabs>

## Using another LLM

You can specify what LLM LlamaIndex.TS will use on the `Settings` object, like this:

```typescript
-import { MistralAI, Settings } from "llamaindex";
+import { MistralAI } from "@llamaindex/mistral";
+import { Settings } from "llamaindex";

Settings.llm = new MistralAI({
model: "mistral-tiny",
@@ -29,7 +51,8 @@ You can see examples of other APIs we support by checking out "Available LLMs" i
A frequent gotcha when trying to use a different API as your LLM is that LlamaIndex will also by default index and embed your data using OpenAI's embeddings. To completely switch away from OpenAI you will need to set your embedding model as well, for example:

```typescript
-import { MistralAIEmbedding, Settings } from "llamaindex";
+import { MistralAIEmbedding } from "@llamaindex/mistral";
+import { Settings } from "llamaindex";

Settings.embedModel = new MistralAIEmbedding();
```
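Putting the two together, a complete switch to Mistral looks roughly like this — a sketch only, where the sample text and query are placeholders and a `MISTRAL_API_KEY` environment variable is assumed:

```typescript
import { Document, Settings, VectorStoreIndex } from "llamaindex";
import { MistralAI, MistralAIEmbedding } from "@llamaindex/mistral";

// Route both completion and embedding calls to Mistral.
Settings.llm = new MistralAI({ model: "mistral-tiny" });
Settings.embedModel = new MistralAIEmbedding();

// Index a document and query it — no OpenAI credentials involved.
const document = new Document({ text: "The quick brown fox jumped over the lazy dog." });
const index = await VectorStoreIndex.fromDocuments([document]);
const response = await index.asQueryEngine().query({ query: "Who jumped over the dog?" });
console.log(response.toString());
```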
21 changes: 21 additions & 0 deletions apps/next/src/content/docs/llamaindex/getting_started/index.mdx
@@ -5,6 +5,8 @@ description: Install llamaindex by running a single command.

import { Tab, Tabs } from "fumadocs-ui/components/tabs";

To install llamaindex, run the following command:

<Tabs groupId="install" items={["npm", "yarn", "pnpm"]} persist>
```shell tab="npm"
npm install llamaindex
@@ -19,6 +21,25 @@ ```
```
</Tabs>

In most cases, you'll also need an LLM package to use LlamaIndex. For example, to use the OpenAI LLM, you would install the following:

<Tabs groupId="install" items={["npm", "yarn", "pnpm"]} persist>
```shell tab="npm"
npm install @llamaindex/openai
```

```shell tab="yarn"
yarn add @llamaindex/openai
```

```shell tab="pnpm"
pnpm add @llamaindex/openai
```
</Tabs>

Go to [Using other LLM APIs](/docs/llamaindex/examples/other_llms) to find out how to use other LLMs.
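To sanity-check the install, a minimal end-to-end snippet might look like the following — a sketch, where the model name and sample text are placeholders and an `OPENAI_API_KEY` environment variable is assumed:

```ts
import { Document, Settings, VectorStoreIndex } from "llamaindex";
import { OpenAI } from "@llamaindex/openai";

// Use OpenAI as the LLM; the client reads OPENAI_API_KEY from the environment.
Settings.llm = new OpenAI({ model: "gpt-4o-mini" });

// Index a single document and ask a question about it.
const document = new Document({ text: "LlamaIndex.TS helps you build LLM applications over your data." });
const index = await VectorStoreIndex.fromDocuments([document]);
const response = await index.asQueryEngine().query({ query: "What does LlamaIndex.TS help with?" });
console.log(response.toString());
```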


## What's next?

<Cards>
@@ -9,7 +9,7 @@ LlamaIndex.TS is written in TypeScript and designed to be used in TypeScript pro
We do lots of work on strong typing to make sure you have a great typing experience with LlamaIndex.TS.

```ts twoslash
-import { PromptTemplate } from '@llamaindex/core/prompts'
+import { PromptTemplate } from 'llamaindex'
const promptTemplate = new PromptTemplate({
template: `Context information from multiple sources is below.
---------------------
@@ -29,7 +29,7 @@ promptTemplate.format({
```

```ts twoslash
-import { FunctionTool } from '@llamaindex/core/tools'
+import { FunctionTool } from 'llamaindex'
import { z } from 'zod'

// ---cut-before---
@@ -31,7 +31,8 @@ First we'll need to pull in our dependencies. These are:
- Dotenv to load our API key from the .env file

```javascript
-import { OpenAI, FunctionTool, OpenAIAgent, Settings } from "llamaindex";
+import { FunctionTool, Settings } from "llamaindex";
+import { OpenAI, OpenAIAgent } from "@llamaindex/openai";
import "dotenv/config";
```
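With the dependencies in place, the rest of the setup follows the shape below. This is a sketch of the pattern, not the tutorial's exact code — the tool name and prompt are illustrative:

```javascript
import { FunctionTool, Settings } from "llamaindex";
import { OpenAI, OpenAIAgent } from "@llamaindex/openai";
import "dotenv/config";

// Configure the LLM; dotenv has already loaded OPENAI_API_KEY.
Settings.llm = new OpenAI({ model: "gpt-4o", temperature: 0 });

// A function the agent is allowed to call.
const sumNumbers = FunctionTool.from(
  ({ a, b }) => `${a + b}`,
  {
    name: "sumNumbers",
    description: "Use this function to sum two numbers",
    parameters: {
      type: "object",
      properties: {
        a: { type: "number", description: "First number to sum" },
        b: { type: "number", description: "Second number to sum" },
      },
      required: ["a", "b"],
    },
  },
);

// Hand the tool to an agent and ask a question that requires it.
const agent = new OpenAIAgent({ tools: [sumNumbers] });
const response = await agent.chat({ message: "Add 101 and 303" });
console.log(response.message.content); // exact response shape may vary by version
```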

@@ -13,22 +13,34 @@ To learn more about RAG, we recommend this [introduction](https://docs.llamainde

We're going to start with the same agent we [built in step 1](https://github.com/run-llama/ts-agents/blob/main/1_agent/agent.ts), but make a few changes. You can find the finished version [in the repository](https://github.com/run-llama/ts-agents/blob/main/2_agentic_rag/agent.ts).

## Installation

import { Tab, Tabs } from "fumadocs-ui/components/tabs";

<Tabs groupId="install" items={["npm", "yarn", "pnpm"]} persist>
```shell tab="npm"
npm install llamaindex @llamaindex/openai @llamaindex/huggingface
```

```shell tab="yarn"
yarn add llamaindex @llamaindex/openai @llamaindex/huggingface
```

```shell tab="pnpm"
pnpm add llamaindex @llamaindex/openai @llamaindex/huggingface
```
</Tabs>


### New dependencies

We'll be bringing in `SimpleDirectoryReader`, `HuggingFaceEmbedding`, `VectorStoreIndex`, `QueryEngineTool`, and `OpenAIContextAwareAgent` from LlamaIndex.TS, as well as the dependencies we previously used.

```javascript
-import {
-  OpenAI,
-  FunctionTool,
-  OpenAIAgent,
-  OpenAIContextAwareAgent,
-  Settings,
-  SimpleDirectoryReader,
-  HuggingFaceEmbedding,
-  VectorStoreIndex,
-  QueryEngineTool,
-} from "llamaindex";
+import { FunctionTool, QueryEngineTool, Settings, VectorStoreIndex } from "llamaindex";
+import { OpenAI, OpenAIAgent } from "@llamaindex/openai";
+import { HuggingFaceEmbedding } from "@llamaindex/huggingface";
+import { SimpleDirectoryReader } from "llamaindex";
```
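Wired together, those pieces take roughly the shape below — a sketch in which the directory path, model name, and tool description are placeholders; the tutorial's own sections that follow are authoritative:

```javascript
import { QueryEngineTool, Settings, VectorStoreIndex } from "llamaindex";
import { HuggingFaceEmbedding } from "@llamaindex/huggingface";
import { SimpleDirectoryReader } from "llamaindex";

// Embed locally with HuggingFace instead of calling OpenAI's embedding API.
Settings.embedModel = new HuggingFaceEmbedding({
  modelType: "BAAI/bge-small-en-v1.5",
});

// Load documents from a local folder and build a vector index over them.
const documents = await new SimpleDirectoryReader().loadData("./data");
const index = await VectorStoreIndex.fromDocuments(documents);

// Expose the index to the agent as a callable tool.
const ragTool = new QueryEngineTool({
  queryEngine: index.asQueryEngine(),
  metadata: {
    name: "my_documents",
    description: "Answers questions about the loaded documents",
  },
});
// ragTool would then be passed to the agent's tools in the next step.
```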

### Add an embedding model
12 changes: 6 additions & 6 deletions apps/next/src/content/docs/llamaindex/guide/loading/index.mdx
@@ -33,11 +33,11 @@ We offer readers for different file formats.

<Tabs groupId="llamaindex-or-readers" items={["llamaindex", "@llamaindex/readers"]} persist>
```ts twoslash tab="llamaindex"
-import { CSVReader } from 'llamaindex'
-import { PDFReader } from 'llamaindex'
-import { JSONReader } from 'llamaindex'
-import { MarkdownReader } from 'llamaindex'
-import { HTMLReader } from 'llamaindex'
+import { CSVReader } from '@llamaindex/readers/csv'
+import { PDFReader } from '@llamaindex/readers/pdf'
+import { JSONReader } from '@llamaindex/readers/json'
+import { MarkdownReader } from '@llamaindex/readers/markdown'
+import { HTMLReader } from '@llamaindex/readers/html'
// you can find more readers in the documentation
```

@@ -71,7 +71,7 @@ We offer readers for different file formats.
```

```ts twoslash tab="@llamaindex/readers"
-import { SimpleDirectoryReader } from "@llamaindex/readers/directory";
+import { SimpleDirectoryReader } from "llamaindex";

const reader = new SimpleDirectoryReader()
const documents = await reader.loadData("./data")
@@ -15,7 +15,7 @@ By default, we will use `Settings.nodeParser` to split the document into nodes.

```ts twoslash
import { TextFileReader } from '@llamaindex/readers/text'
-import { SentenceSplitter } from '@llamaindex/core/node-parser';
+import { SentenceSplitter } from 'llamaindex';
import { Settings } from 'llamaindex';

const nodeParser = new SentenceSplitter();
@@ -28,7 +28,7 @@ Settings.nodeParser = nodeParser;
The underlying text splitter will split text by sentences. It can also be used as a standalone module for splitting raw text.

```ts twoslash
import { SentenceSplitter } from "@llamaindex/core/node-parser";
import { SentenceSplitter } from "llamaindex";

const splitter = new SentenceSplitter({ chunkSize: 1 });

@@ -42,7 +42,7 @@ The `MarkdownNodeParser` is a more advanced `NodeParser` that can handle markdow

<Tabs items={["with reader", "with node:fs"]}>
```ts twoslash tab="with reader"
-import { MarkdownNodeParser } from "@llamaindex/core/node-parser";
+import { MarkdownNodeParser } from "llamaindex";
import { MarkdownReader } from '@llamaindex/readers/markdown'

const reader = new MarkdownReader();
@@ -56,8 +56,7 @@ The `MarkdownNodeParser` is a more advanced `NodeParser` that can handle markdow

```ts twoslash tab="with node:fs"
import fs from 'node:fs/promises';
-import { MarkdownNodeParser } from "@llamaindex/core/node-parser";
-import { Document } from '@llamaindex/core/schema';
+import { MarkdownNodeParser, Document } from "llamaindex";

const markdownNodeParser = new MarkdownNodeParser();
const text = await fs.readFile('path/to/file.md', 'utf-8');
@@ -69,7 +69,7 @@ streamText({
For production deployments, you can use LlamaCloud to store and manage your documents:

```typescript
-import { LlamaCloudIndex } from "llamaindex";
+import { LlamaCloudIndex } from "@llamaindex/cloud";

// Create a LlamaCloud index
const index = await LlamaCloudIndex.fromDocuments({
@@ -6,10 +6,28 @@ A simple JSON data loader with various options.
It either parses the entire string, cleaning it and treating each line as an embedding, or performs a recursive depth-first traversal that yields JSON paths.
Streaming of large JSON data is supported via [@discoveryjs/json-ext](https://github.com/discoveryjs/json-ext).

## Installation

import { Tab, Tabs } from "fumadocs-ui/components/tabs";

<Tabs groupId="install" items={["npm", "yarn", "pnpm"]} persist>
```shell tab="npm"
npm install llamaindex @llamaindex/readers
```

```shell tab="yarn"
yarn add llamaindex @llamaindex/readers
```

```shell tab="pnpm"
pnpm add llamaindex @llamaindex/readers
```
</Tabs>

## Usage

```ts
-import { JSONReader } from "llamaindex";
+import { JSONReader } from "@llamaindex/readers/json";

const file = "../../PATH/TO/FILE";
const content = new TextEncoder().encode("JSON_CONTENT");
@@ -4,6 +4,24 @@ title: Image Retrieval

LlamaParse `json` mode supports extracting any images found in a page object by using the `getImages` function. They are downloaded to a local folder and can then be sent to a multimodal LLM for further processing.

## Installation

import { Tab, Tabs } from "fumadocs-ui/components/tabs";

<Tabs groupId="install" items={["npm", "yarn", "pnpm"]} persist>
```shell tab="npm"
npm install llamaindex @llamaindex/cloud @llamaindex/openai
```

```shell tab="yarn"
yarn add llamaindex @llamaindex/cloud @llamaindex/openai
```

```shell tab="pnpm"
pnpm add llamaindex @llamaindex/cloud @llamaindex/openai
```
</Tabs>

## Usage

We pass our array of JSON objects to the `getImages` method, which downloads the images to a specified folder and returns a list of `ImageNode`s.
@@ -19,14 +37,10 @@ const imageDicts = await reader.getImages(jsonObjs, "images");
You can create an index across both text and image nodes by requesting alternative text for each image from a multimodal LLM.

```ts
-import {
-  Document,
-  ImageNode,
-  LlamaParseReader,
-  OpenAI,
-  VectorStoreIndex,
-} from "llamaindex";
-import { createMessageContent } from "llamaindex/synthesizers/utils";
+import { Document, ImageNode, VectorStoreIndex } from "llamaindex";
+import { LlamaParseReader } from "@llamaindex/cloud";
+import { OpenAI } from "@llamaindex/openai";
+import { createMessageContent } from "llamaindex";

const reader = new LlamaParseReader();
async function main() {