chore: update import to workspace pages for examples and docs #1626

Merged
merged 16 commits into from
Feb 10, 2025

Changes from 3 commits
@@ -13,8 +13,8 @@ import {
Document,
VectorStoreIndex,
OpenAIContextAwareAgent,
OpenAI,
} from "llamaindex";
import { OpenAI } from "@llamaindex/openai";

async function createContextAwareAgent() {
// Create and index some documents
6 changes: 4 additions & 2 deletions apps/next/src/content/docs/llamaindex/examples/other_llms.mdx
@@ -14,7 +14,8 @@ If you don't want to use an API at all you can [run a local model](../../example
You can specify what LLM LlamaIndex.TS will use on the `Settings` object, like this:

```typescript
import { MistralAI, Settings } from "llamaindex";
import { MistralAI } from "@llamaindex/mistral";
import { Settings } from "llamaindex";

Settings.llm = new MistralAI({
model: "mistral-tiny",
@@ -29,7 +30,8 @@ You can see examples of other APIs we support by checking out "Available LLMs" i
A frequent gotcha when trying to use a different API as your LLM is that LlamaIndex will also by default index and embed your data using OpenAI's embeddings. To completely switch away from OpenAI you will need to set your embedding model as well, for example:

```typescript
import { MistralAIEmbedding, Settings } from "llamaindex";
import { MistralAIEmbedding } from "@llamaindex/mistral";
import { Settings } from "llamaindex";

Settings.embedModel = new MistralAIEmbedding();
```
@@ -31,7 +31,8 @@ First we'll need to pull in our dependencies. These are:
- Dotenv to load our API key from the .env file

```javascript
import { OpenAI, FunctionTool, OpenAIAgent, Settings } from "llamaindex";
import { FunctionTool, Settings } from "llamaindex";
import { OpenAI, OpenAIAgent } from "@llamaindex/openai";
import "dotenv/config";
```

@@ -18,17 +18,10 @@ We're going to start with the same agent we [built in step 1](https://github.com
We'll be bringing in `SimpleDirectoryReader`, `HuggingFaceEmbedding`, `VectorStoreIndex`, and `QueryEngineTool`, `OpenAIContextAwareAgent` from LlamaIndex.TS, as well as the dependencies we previously used.

```javascript
import {
OpenAI,
FunctionTool,
OpenAIAgent,
OpenAIContextAwareAgent,
Settings,
SimpleDirectoryReader,
HuggingFaceEmbedding,
VectorStoreIndex,
QueryEngineTool,
} from "llamaindex";
import { FunctionTool, QueryEngineTool, Settings, VectorStoreIndex } from "llamaindex";
import { OpenAI, OpenAIAgent } from "@llamaindex/openai";
import { HuggingFaceEmbedding } from "@llamaindex/huggingface";
import { SimpleDirectoryReader } from "@llamaindex/readers/directory";
```

### Add an embedding model
12 changes: 6 additions & 6 deletions apps/next/src/content/docs/llamaindex/guide/loading/index.mdx
@@ -33,11 +33,11 @@ We offer readers for different file formats.

<Tabs groupId="llamaindex-or-readers" items={["llamaindex", "@llamaindex/readers"]} persist>
```ts twoslash tab="llamaindex"
import { CSVReader } from 'llamaindex'
import { PDFReader } from 'llamaindex'
import { JSONReader } from 'llamaindex'
import { MarkdownReader } from 'llamaindex'
import { HTMLReader } from 'llamaindex'
import { CSVReader } from '@llamaindex/readers/csv'
import { PDFReader } from '@llamaindex/readers/pdf'
import { JSONReader } from '@llamaindex/readers/json'
import { MarkdownReader } from '@llamaindex/readers/markdown'
import { HTMLReader } from '@llamaindex/readers/html'
// you can find more readers in the documentation
```

@@ -59,7 +59,7 @@ We offer readers for different file formats.
<Tabs groupId="llamaindex-or-readers" items={["llamaindex", "@llamaindex/readers"]} persist>

```ts twoslash tab="llamaindex"
import { SimpleDirectoryReader } from "llamaindex";
import { SimpleDirectoryReader } from "@llamaindex/readers/directory";

const reader = new SimpleDirectoryReader()
const documents = await reader.loadData("./data")
@@ -69,7 +69,7 @@ streamText({
For production deployments, you can use LlamaCloud to store and manage your documents:

```typescript
import { LlamaCloudIndex } from "llamaindex";
import { LlamaCloudIndex } from "@llamaindex/cloud";

// Create a LlamaCloud index
const index = await LlamaCloudIndex.fromDocuments({
@@ -9,7 +9,7 @@ Supports streaming of large JSON data using [@discoveryjs/json-ext](https://gith
## Usage

```ts
import { JSONReader } from "llamaindex";
import { JSONReader } from "@llamaindex/readers/json";

const file = "../../PATH/TO/FILE";
const content = new TextEncoder().encode("JSON_CONTENT");
@@ -19,14 +19,10 @@ const imageDicts = await reader.getImages(jsonObjs, "images");
You can create an index across both text and image nodes by requesting alternative text for the image from a multimodal LLM.

```ts
import {
Document,
ImageNode,
LlamaParseReader,
OpenAI,
VectorStoreIndex,
} from "llamaindex";
import { createMessageContent } from "llamaindex/synthesizers/utils";
import { Document, ImageNode, VectorStoreIndex } from "llamaindex";
import { LlamaParseReader } from "@llamaindex/cloud/reader";
import { OpenAI } from "@llamaindex/openai";
import { createMessageContent } from "@llamaindex/core/synthesizers";

const reader = new LlamaParseReader();
async function main() {
@@ -10,6 +10,8 @@ For Json mode, you need to use `loadJson`. The `resultType` is automatically set
More information about indexing the results on the next page.

```ts
import { LlamaParseReader } from "@llamaindex/cloud/reader";

const reader = new LlamaParseReader();
async function main() {
// Load the file and return an array of json objects
@@ -59,7 +61,8 @@ All Readers share a `loadData` method with `SimpleDirectoryReader` that promises
However, a simple work around is to create a new reader class that extends `LlamaParseReader` and adds a new method or overrides `loadData`, wrapping around JSON mode, extracting the required values, and returning a Document object.

```ts
import { LlamaParseReader, Document } from "llamaindex";
import { Document } from "llamaindex";
import { LlamaParseReader } from "@llamaindex/cloud/reader";

class LlamaParseReaderWithJson extends LlamaParseReader {
// Override the loadData method
@@ -34,13 +34,8 @@ provided, it will use the environment variables `PGHOST`, `PGUSER`,
`PGPASSWORD`, `PGDATABASE` and `PGPORT`.

```typescript
import {
Document,
VectorStoreIndex,
PostgresDocumentStore,
PostgresIndexStore,
storageContextFromDefaults,
} from "llamaindex";
import { Document, VectorStoreIndex, storageContextFromDefaults } from "llamaindex";
import { PostgresDocumentStore, PostgresIndexStore } from "@llamaindex/postgres";

const storageContext = await storageContextFromDefaults({
docStore: new PostgresDocumentStore(),
@@ -15,7 +15,8 @@ docker run -p 6333:6333 qdrant/qdrant

```ts
import fs from "node:fs/promises";
import { Document, VectorStoreIndex, QdrantVectorStore } from "llamaindex";
import { Document, VectorStoreIndex } from "llamaindex";
import { QdrantVectorStore } from "@llamaindex/qdrant";
```

## Load the documents
@@ -60,7 +61,8 @@ console.log(response.toString());

```ts
import fs from "node:fs/promises";
import { Document, VectorStoreIndex, QdrantVectorStore } from "llamaindex";
import { Document, VectorStoreIndex } from "llamaindex";
import { QdrantVectorStore } from "@llamaindex/qdrant";

async function main() {
const path = "node_modules/llamaindex/examples/abramov.txt";
@@ -14,13 +14,8 @@ Our metadata extractor modules include the following "feature extractors":
Then you can chain the `Metadata Extractors` with the `IngestionPipeline` to extract metadata from a set of documents.

```ts
import {
IngestionPipeline,
TitleExtractor,
QuestionsAnsweredExtractor,
Document,
OpenAI,
} from "llamaindex";
import { Document, IngestionPipeline, TitleExtractor, QuestionsAnsweredExtractor } from "llamaindex";
import { OpenAI } from "@llamaindex/openai";

async function main() {
const pipeline = new IngestionPipeline({
@@ -6,12 +6,8 @@ To use DeepInfra embeddings, you need to import `DeepInfraEmbedding` from llamai
Check out available embedding models [here](https://deepinfra.com/models/embeddings).

```ts
import {
DeepInfraEmbedding,
Settings,
Document,
VectorStoreIndex,
} from "llamaindex";
import { Document, Settings, VectorStoreIndex } from "llamaindex";
import { DeepInfraEmbedding } from "@llamaindex/deepinfra";

// Update Embed Model
Settings.embedModel = new DeepInfraEmbedding();
@@ -33,7 +29,7 @@ By default, DeepInfraEmbedding is using the sentence-transformers/clip-ViT-B-32
For example:

```ts
import { DeepInfraEmbedding } from "llamaindex";
import { DeepInfraEmbedding } from "@llamaindex/deepinfra";

const model = "intfloat/e5-large-v2";
Settings.embedModel = new DeepInfraEmbedding({
@@ -46,7 +42,8 @@ You can also set the `maxRetries` and `timeout` parameters when initializing `De
For example:

```ts
import { DeepInfraEmbedding, Settings } from "llamaindex";
import { Settings } from "llamaindex";
import { DeepInfraEmbedding } from "@llamaindex/deepinfra";

const model = "intfloat/e5-large-v2";
const maxRetries = 5;
@@ -62,7 +59,7 @@ Settings.embedModel = new DeepInfraEmbedding({
Standalone usage:

```ts
import { DeepInfraEmbedding } from "llamaindex";
import { DeepInfraEmbedding } from "@llamaindex/deepinfra";
import { config } from "dotenv";
// For standalone usage, you need to configure DEEPINFRA_API_TOKEN in .env file
config();
@@ -2,10 +2,11 @@
title: Gemini
---

To use Gemini embeddings, you need to import `GeminiEmbedding` from `llamaindex`.
To use Gemini embeddings, you need to import `GeminiEmbedding` from `@llamaindex/google`.

```ts
import { GeminiEmbedding, Settings } from "llamaindex";
import { Document, Settings, VectorStoreIndex } from "llamaindex";
import { GeminiEmbedding, GEMINI_MODEL } from "@llamaindex/google";

// Update Embed Model
Settings.embedModel = new GeminiEmbedding();
@@ -27,7 +28,7 @@ Per default, `GeminiEmbedding` is using the `gemini-pro` model. You can change t
For example:

```ts
import { GEMINI_MODEL, GeminiEmbedding } from "llamaindex";
import { GEMINI_MODEL, GeminiEmbedding } from "@llamaindex/google";

Settings.embedModel = new GeminiEmbedding({
model: GEMINI_MODEL.GEMINI_PRO_LATEST,
@@ -2,10 +2,11 @@
title: HuggingFace
---

To use HuggingFace embeddings, you need to import `HuggingFaceEmbedding` from `llamaindex`.
To use HuggingFace embeddings, you need to import `HuggingFaceEmbedding` from `@llamaindex/huggingface`.

```ts
import { HuggingFaceEmbedding, Settings } from "llamaindex";
import { Document, Settings, VectorStoreIndex } from "llamaindex";
import { HuggingFaceEmbedding } from "@llamaindex/huggingface";

// Update Embed Model
Settings.embedModel = new HuggingFaceEmbedding();
@@ -29,6 +30,8 @@ If you're not using a quantized model, set the `quantized` parameter to `false`.
For example, to use the not quantized `BAAI/bge-small-en-v1.5` model, you can use the following code:

```ts
import { HuggingFaceEmbedding } from "@llamaindex/huggingface";

Settings.embedModel = new HuggingFaceEmbedding({
modelType: "BAAI/bge-small-en-v1.5",
quantized: false,
@@ -2,10 +2,11 @@
title: MistralAI
---

To use MistralAI embeddings, you need to import `MistralAIEmbedding` from `llamaindex`.
To use MistralAI embeddings, you need to import `MistralAIEmbedding` from `@llamaindex/mistral`.

```ts
import { MistralAIEmbedding, Settings } from "llamaindex";
import { Document, Settings, VectorStoreIndex } from "llamaindex";
import { MistralAIEmbedding } from "@llamaindex/mistral";

// Update Embed Model
Settings.embedModel = new MistralAIEmbedding({
@@ -23,7 +23,8 @@ pnpm install llamaindex
Next, sign up for an API key at [mixedbread.ai](https://mixedbread.ai/). Once you have your API key, you can import the necessary modules and create a new instance of the `MixedbreadAIEmbeddings` class.

```ts
import { MixedbreadAIEmbeddings, Document, Settings } from "llamaindex";
import { MixedbreadAIEmbeddings } from "@llamaindex/mixedbread";
import { Document, Settings } from "llamaindex";
```

## Usage with LlamaIndex
@@ -2,7 +2,7 @@
title: Ollama
---

To use Ollama embeddings, you need to import `OllamaEmbedding` from `llamaindex`.
To use Ollama embeddings, you need to import `OllamaEmbedding` from `@llamaindex/ollama`.

Note that you need to pull the embedding model first before using it.

@@ -13,7 +13,8 @@ ollama pull nomic-embed-text
```

```ts
import { OllamaEmbedding, Settings } from "llamaindex";
import { OllamaEmbedding } from "@llamaindex/ollama";
import { Document, Settings, VectorStoreIndex } from "llamaindex";

Settings.embedModel = new OllamaEmbedding({ model: "nomic-embed-text" });

@@ -2,10 +2,11 @@
title: OpenAI
---

To use OpenAI embeddings, you need to import `OpenAIEmbedding` from `llamaindex`.
To use OpenAI embeddings, you need to import `OpenAIEmbedding` from `@llamaindex/openai`.

```ts
import { OpenAIEmbedding, Settings } from "llamaindex";
import { OpenAIEmbedding } from "@llamaindex/openai";
import { Document, Settings, VectorStoreIndex } from "llamaindex";

Settings.embedModel = new OpenAIEmbedding();

@@ -7,7 +7,8 @@ The embedding model in LlamaIndex is responsible for creating numerical represen
This can be explicitly updated through `Settings`

```typescript
import { OpenAIEmbedding, Settings } from "llamaindex";
import { OpenAIEmbedding } from "@llamaindex/openai";
import { Settings } from "llamaindex";

Settings.embedModel = new OpenAIEmbedding({
model: "text-embedding-ada-002",
@@ -23,7 +23,8 @@ export OPENAI_API_KEY=your-api-key
Import the required modules:

```ts
import { CorrectnessEvaluator, OpenAI, Settings, Response } from "llamaindex";
import { OpenAI } from "@llamaindex/openai";
import { CorrectnessEvaluator, Settings, Response } from "llamaindex";
```

Let's setup gpt-4 for better results:
@@ -25,12 +25,12 @@ export OPENAI_API_KEY=your-api-key
Import the required modules:

```ts
import { OpenAI } from "@llamaindex/openai";
import {
Document,
FaithfulnessEvaluator,
OpenAI,
VectorStoreIndex,
Settings,
VectorStoreIndex,
} from "llamaindex";
```

@@ -23,11 +23,11 @@ export OPENAI_API_KEY=your-api-key
Import the required modules:

```ts
import { OpenAI } from "@llamaindex/openai";
import {
Document,
RelevancyEvaluator,
OpenAI,
Settings,
Document,
VectorStoreIndex,
} from "llamaindex";
```