README.md (+1 -1)
@@ -123,7 +123,7 @@ Please see [here](https://python.langchain.com) for full documentation, which in
 - [Tutorials](https://python.langchain.com/docs/tutorials/): If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
 - [How-to guides](https://python.langchain.com/docs/how_to/): Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
 - [Conceptual guide](https://python.langchain.com/docs/concepts/): Conceptual explanations of the key parts of the framework.
-- [API Reference](https://api.python.langchain.com): Thorough documentation of every class and method.
+- [API Reference](https://python.langchain.com/api_reference/): Thorough documentation of every class and method.
docs/docs/concepts/retrieval.mdx (+1 -1)
@@ -221,7 +221,7 @@ They are particularly useful for storing and querying complex relationships betw
 LangChain provides a unified interface for interacting with various retrieval systems through the [retriever](/docs/concepts/retrievers/) concept. The interface is straightforward:

 1. Input: A query (string)
-2. Output: A list of documents (standardized LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects)
+2. Output: A list of documents (standardized LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects)

 You can create a retriever using any of the retrieval systems mentioned earlier. The query analysis techniques we discussed are particularly useful here, as they enable natural language interfaces for databases that typically require structured query languages.
 For example, you can build a retriever for a SQL database using text-to-SQL conversion. This allows a natural language query (string) to be transformed into a SQL query behind the scenes.
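To make the query-in, documents-out contract above concrete, here is a minimal sketch (not part of this diff). It assumes `langchain-core` plus an embeddings integration such as `langchain-openai` with its API key configured; any other retrieval system could be substituted.

```python
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings  # assumption: any embeddings integration works here

# Index a couple of documents in an in-memory vector store.
vector_store = InMemoryVectorStore.from_documents(
    [
        Document(page_content="Retrievers take a query string and return Document objects."),
        Document(page_content="Text-to-SQL lets a natural language query drive a SQL database."),
    ],
    embedding=OpenAIEmbeddings(),
)

# The retriever contract: a query string in, a list of Document objects out.
retriever = vector_store.as_retriever()
docs = retriever.invoke("What do retrievers return?")
```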
docs/docs/concepts/retrievers.mdx (+3 -3)
@@ -18,7 +18,7 @@ Because of their importance and variability, LangChain provides a uniform interf
 The LangChain [retriever](/docs/concepts/retrievers/) interface is straightforward:

 1. Input: A query (string)
-2. Output: A list of documents (standardized LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects)
+2. Output: A list of documents (standardized LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects)

 ## Key concept
@@ -29,7 +29,7 @@ All retrievers implement a simple interface for retrieving documents using natur
 ## Interface

 The only requirement for a retriever is the ability to accepts a query and return documents.
-In particular, [LangChain's retriever class](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html) only requires that the `_get_relevant_documents` method is implemented, which takes a `query: str` and returns a list of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects that are most relevant to the query.
+In particular, [LangChain's retriever class](https://python.langchain.com/api_reference/core/retrievers/langchain_core.retrievers.BaseRetriever.html#) only requires that the `_get_relevant_documents` method is implemented, which takes a `query: str` and returns a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects that are most relevant to the query.
 The underlying logic used to get relevant documents is specified by the retriever and can be whatever is most useful for the application.

 A LangChain retriever is a [runnable](/docs/how_to/lcel_cheatsheet/), which is a standard interface is for LangChain components.
@@ -39,7 +39,7 @@ This means that it has a few common methods, including `invoke`, that are used t
 docs = retriever.invoke(query)
 ```

-Retrievers return a list of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects, which have two attributes:
+Retrievers return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects, which have two attributes:

 * `page_content`: The content of this document. Currently is a string.
 * `metadata`: Arbitrary metadata associated with this document (e.g., document id, file name, source, etc).
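To make the `_get_relevant_documents` requirement in this hunk concrete, here is a minimal sketch (not part of this diff) of a hypothetical retriever subclassing `BaseRetriever`. It assumes only `langchain-core`; the class name, field, and substring-matching logic are illustrative.

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ToySubstringRetriever(BaseRetriever):
    """Hypothetical retriever: returns documents whose text contains the query."""

    documents: List[Document]

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [
            doc for doc in self.documents if query.lower() in doc.page_content.lower()
        ]


retriever = ToySubstringRetriever(
    documents=[
        Document(page_content="Retrievers return Document objects.", metadata={"source": "demo"})
    ]
)
docs = retriever.invoke("document")  # retrievers are runnables, so `invoke` works
print(docs[0].page_content, docs[0].metadata)
```

Because `BaseRetriever` implements the runnable interface, the subclass gets `invoke`, `batch`, and the async variants without any extra work.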
docs/docs/concepts/runnables.mdx (+1 -1)
@@ -125,7 +125,7 @@ Please see the [Configurable Runnables](#configurable-runnables) section for mor
 LangChain will automatically try to infer the input and output types of a Runnable based on available information.

-Currently, this inference does not work well for more complex Runnables that are built using [LCEL](/docs/concepts/lcel) composition, and the inferred input and / or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the `with_types` method ([API Reference](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types)).
+Currently, this inference does not work well for more complex Runnables that are built using [LCEL](/docs/concepts/lcel) composition, and the inferred input and / or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the `with_types` method ([API Reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types)).
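As a small illustration of the `with_types` override recommended above (not part of this diff; the composed chain is purely illustrative and assumes `langchain-core` is installed):

```python
from langchain_core.runnables import RunnableLambda

# An LCEL composition whose input/output types may be inferred too loosely.
chain = RunnableLambda(lambda x: x["question"]) | RunnableLambda(lambda q: q.upper())

# Explicitly declare the intended input and output types.
typed_chain = chain.with_types(input_type=dict, output_type=str)

print(typed_chain.invoke({"question": "what is a runnable?"}))
```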
-This API works with a list of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects.
+This API works with a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects.
 `Document` objects all have `page_content` and `metadata` attributes, making them a universal way to store unstructured text and associated metadata.

 ```python
@@ -126,7 +126,7 @@ to the documentation of the specific vectorstore you are using to see what simil
 Given a similarity metric to measure the distance between the embedded query and any embedded document, we need an algorithm to efficiently search over *all* the embedded documents to find the most similar ones.
 There are various ways to do this. As an example, many vectorstores implement [HNSW (Hierarchical Navigable Small World)](https://www.pinecone.io/learn/series/faiss/hnsw/), a graph-based index structure that allows for efficient similarity search.
 Regardless of the search algorithm used under the hood, the LangChain vectorstore interface has a `similarity_search` method for all integrations.
-This will take the search query, create an embedding, find similar documents, and return them as a list of [Documents](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html).
+This will take the search query, create an embedding, find similar documents, and return them as a list of [Documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html).
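A rough sketch of the `similarity_search` call described in this hunk (not part of the diff). It assumes `langchain-core` plus an embeddings integration such as `langchain-openai` with its API key configured; any vector store integration could stand in for the in-memory one used here.

```python
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings  # assumption: any embeddings model can be substituted

vector_store = InMemoryVectorStore(embedding=OpenAIEmbeddings())
vector_store.add_documents(
    [
        Document(page_content="HNSW is a graph-based index for approximate nearest-neighbor search."),
        Document(page_content="Vector stores embed documents and queries for similarity search."),
    ]
)

# Embeds the query, searches the index, and returns the most similar Documents.
docs = vector_store.similarity_search("How are similar documents found?", k=1)
print(docs[0].page_content)
```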
docs/docs/integrations/chat/cerebras.ipynb (+3 -3)
@@ -17,7 +17,7 @@
 "source": [
 "# ChatCerebras\n",
 "\n",
-"This notebook provides a quick overview for getting started with Cerebras [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatCerebras features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_cerebras.chat_models.ChatCerebras.html).\n",
+"This notebook provides a quick overview for getting started with Cerebras [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatCerebras features and configurations head to the [API reference](https://python.langchain.com/api_reference/cerebras/chat_models/langchain_cerebras.chat_models.ChatCerebras.html#).\n",
 "\n",
 "At Cerebras, we've developed the world's largest and fastest AI processor, the Wafer-Scale Engine-3 (WSE-3). The Cerebras CS-3 system, powered by the WSE-3, represents a new class of AI supercomputer that sets the standard for generative AI training and inference with unparalleled performance and scalability.\n",
 "\n",
@@ -37,7 +37,7 @@
 "\n",
 "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cerebras) | Package downloads | Package latest |\n",

-"For detailed documentation of all ChatCerebras features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_cerebras.chat_models.ChatCerebras.html"
+"For detailed documentation of all ChatCerebras features and configurations head to the API reference: https://python.langchain.com/api_reference/cerebras/chat_models/langchain_cerebras.chat_models.ChatCerebras.html#"
docs/docs/integrations/chat/oci_data_science.ipynb (+5 -5)
@@ -19,7 +19,7 @@
 "source": [
 "# ChatOCIModelDeployment\n",
 "\n",
-"This will help you getting started with OCIModelDeployment [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOCIModelDeployment features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ChatOCIModelDeployment.html).\n",
+"This will help you getting started with OCIModelDeployment [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOCIModelDeployment features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeployment.html).\n",
 "\n",
 "[OCI Data Science](https://docs.oracle.com/en-us/iaas/data-science/using/home.htm) is a fully managed and serverless platform for data science teams to build, train, and manage machine learning models in the Oracle Cloud Infrastructure. You can use [AI Quick Actions](https://blogs.oracle.com/ai-and-datascience/post/ai-quick-actions-in-oci-data-science) to easily deploy LLMs on [OCI Data Science Model Deployment Service](https://docs.oracle.com/en-us/iaas/data-science/using/model-dep-about.htm). You may choose to deploy the model with popular inference frameworks such as vLLM or TGI. By default, the model deployment endpoint mimics the OpenAI API protocol.\n",
 "\n",
@@ -30,7 +30,7 @@
 "\n",
 "| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",