| id | text | title |
|---|---|---|
149612 | One more complex type of example is one where the example is an entire conversation, usually one in which a model initially responds incorrectly and the user then tells the model how to correct its answer.
This is called a multi-turn example. Multi-turn examples can be useful for more nuanced tasks where it's useful to show commo... | |
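A multi-turn example of this kind can be represented as an ordered list of role/content pairs. The sketch below is purely illustrative (the conversation and the 🦜 operator are invented for demonstration); role names follow LangChain's chat conventions.

```python
# Illustrative only: a multi-turn few-shot example as (role, content) pairs,
# where the model first answers incorrectly and the user corrects it.
multi_turn_example = [
    ("human", "What is 2 🦜 9?"),
    ("ai", "2 🦜 9 is 11."),
    ("human", "No, 🦜 means the larger number minus the smaller number."),
    ("ai", "Apologies: 2 🦜 9 is 7."),
]

# The turns alternate between the user and the model.
roles = [role for role, _ in multi_turn_example]
assert roles == ["human", "ai", "human", "ai"]
```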
149614 | | Name | Index Type | Uses an LLM | When to Use | Description ... | |
149632 | # langchain
## 0.2.0
### Deleted
As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not by default instantiate any specific chat models, llms, embedding models, vectorstores etc; instead, the user will be required to specify those explicitly.
The fol... | |
149633 | # langchain-core
## 0.1.x
#### Deprecated
- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
- `Base... | |
149698 | ---
sidebar_position: 2
sidebar_label: Release policy
---
# LangChain release policy
The LangChain ecosystem is composed of different component packages (e.g., `langchain-core`, `langchain`, `langchain-community`, `langgraph`, `langserve`, partner packages etc.)
## Versioning
### `langchain`, `langchain-core`, and ... | |
149699 | # LangChain v0.3
*Last updated: 09.16.24*
## What's changed
* All packages have been upgraded from Pydantic 1 to Pydantic 2 internally. Use of Pydantic 2 in user code is fully supported with all packages without the need for bridges like `langchain_core.pydantic_v1` or `pydantic.v1`.
* Pydantic 1 will no longer be s... | |
149700 | ### Base packages
| Package | Latest | Recommended constraint |
|--------------------------|--------|------------------------|
| langchain | 0.3.0 | >=0.3,<0.4 |
| langchain-community | 0.3.0 | >=0.3,<0.4 |
| langchain-text-splitters | 0.3.0 | >=0.3... | |
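The recommended constraints in the table above pin the minor version. A minimal sketch of that check, using plain tuple comparison rather than a real resolver such as pip (which follows the full PEP 440 rules):

```python
# Sketch of a ">=lower,<upper" constraint check for simple dotted versions.
def satisfies(version: str, lower: str, upper: str) -> bool:
    """Return True if lower <= version < upper (numeric dotted versions only)."""
    def parse(v: str) -> tuple:
        return tuple(int(part) for part in v.split("."))

    ver, lo, hi = parse(version), parse(lower), parse(upper)
    # Pad to equal length so (0, 3) compares cleanly against (0, 3, 0).
    width = max(len(ver), len(lo), len(hi))
    pad = lambda t: t + (0,) * (width - len(t))
    return pad(lo) <= pad(ver) < pad(hi)

assert satisfies("0.3.0", "0.3", "0.4")      # langchain 0.3.0 fits >=0.3,<0.4
assert not satisfies("0.4.1", "0.3", "0.4")  # the next minor is excluded
```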
149702 | CustomTool(
name='custom_tool',
description="hello",
x=1,
)
```
### 4. model_rebuild()
When sub-classing from LangChain models, users may need to add relevant imports
to the file and rebuild the model.
You can read more about `model_rebuild` [here](https://docs.pydantic.dev/latest/concepts/models/#rebuil... | |
149704 | ---
sidebar_position: 0
---
# Overview
## What’s new in LangChain?
The following features have been added during the development of 0.1.x:
- Better streaming support via the [Event Streaming API](https://python.langchain.com/docs/expression_language/streaming/#using-stream-events).
- [Standardized tool calling supp... | |
149705 | ---
sidebar_position: 1
---
# Migration
LangChain v0.2 was released in May 2024. This release includes a number of [breaking changes and deprecations](/docs/versions/v0_2/deprecations). This document contains a guide on upgrading to 0.2.x.
:::note Reference
- [Breaking Changes & Deprecations](/docs/versions/v0_2/... | |
149706 | ---
sidebar_position: 3
sidebar_label: Changes
keywords: [retrievalqa, llmchain, conversationalretrievalchain]
---
# Deprecations and Breaking Changes
This page contains a list of deprecations and removals in the `langchain` and `langchain-core` packages.
New features and improvements are not listed here. See the [o... | |
149709 | {
"cells": [
{
"cell_type": "markdown",
"id": "ed78c53c-55ad-4ea2-9cc2-a39a1963c098",
"metadata": {},
"source": [
"# Migrating from StuffDocumentsChain\n",
"\n",
"[StuffDocumentsChain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.stuff.StuffDo... | |
149716 | {
"cells": [
{
"cell_type": "markdown",
"id": "2c7bdc91-9b89-4e59-bc27-89508b024635",
"metadata": {},
"source": [
"# Migrating from MapReduceDocumentsChain\n",
"\n",
"[MapReduceDocumentsChain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.map_r... | |
149720 | {
"cells": [
{
"cell_type": "markdown",
"id": "9db5ad7a-857e-46ea-9d0c-ba3fbe62fc81",
"metadata": {},
"source": [
"# Migrating from MapRerankDocumentsChain\n",
"\n",
"[MapRerankDocumentsChain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.map_r... | |
149722 | {
"cells": [
{
"cell_type": "markdown",
"id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
"metadata": {},
"source": [
"# Migrating from LLMChain\n",
"\n",
"[`LLMChain`](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.llm.LLMChain.html) combined a prompt template, LLM,... | |
149729 | {
"cells": [
{
"cell_type": "markdown",
"id": "d20aeaad-b3ca-4a7d-b02d-3267503965af",
"metadata": {},
"source": [
"# Migrating from ConversationalChain\n",
"\n",
"[`ConversationChain`](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.conversation.base.ConversationC... | |
149731 | {
"cells": [
{
"cell_type": "markdown",
"id": "292a3c83-44a9-4426-bbec-f1a778d00d93",
"metadata": {},
"source": [
"# Migrating from ConversationalRetrievalChain\n",
"\n",
"The [`ConversationalRetrievalChain`](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.convers... | |
149735 | {
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 1\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to migrate from v0.0 chains\n",
"\n",
"LangChai... | |
149747 | ---
sidebar_position: 1
---
# How to migrate to LangGraph memory
As of the v0.3 release of LangChain, we recommend that LangChain users take advantage of [LangGraph persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/) to incorporate `memory` into their LangChain application.
* Users that rely... | |
149749 | {
"cells": [
{
"cell_type": "markdown",
"id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
"metadata": {},
"source": [
"# Migrating off ConversationBufferWindowMemory or ConversationTokenBufferMemory\n",
"\n",
"Follow this guide if you're trying to migrate off one of the old memory classes listed ... | |
149751 | " include_system=True,\n",
" allow_partial=False,\n",
" )\n",
"\n",
" # highlight-end\n",
" response = model.invoke(selected_messages)\n",
" # We return a list, because this will get added to the existing list\n",
" return {\"messages\": response}\n",
"\n",
... | |
149788 | # INVALID_PROMPT_INPUT
A [prompt template](/docs/concepts#prompt-templates) received missing or invalid input variables.
## Troubleshooting
The following may help resolve this error:
- Double-check your prompt template to ensure that it is correct.
- If you are using the default f-string format and you are using ... | |
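One common cause of this error (an assumption here, since the text above is truncated) is a literal curly brace in an f-string-style template, which gets read as an input variable. Plain `str.format` reproduces the failure and the fix:

```python
# A template containing literal JSON braces: {"answer": ...} is parsed as a
# placeholder named '"answer"', which was never supplied as an input variable.
template = 'Return JSON like {"answer": ...} for: {question}'
try:
    template.format(question="What is 1 + 1?")
except KeyError as err:
    missing = err.args[0]  # '"answer"' — the accidental "variable"

# Escape literal braces by doubling them; only {question} remains a variable.
fixed = 'Return JSON like {{"answer": ...}} for: {question}'
result = fixed.format(question="What is 1 + 1?")
```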
149828 | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# MESSAGE_COERCION_FAILURE\n",
"\n",
"Instead of always requiring instances of `BaseMessage`, several modules in LangChain take `MessageLikeRepresentation`, which is defined as:"
]
},
{
"cell_type": "code",
"execut... | |
149830 | "File \u001b[0;32m~/langchain/oss-py/libs/core/langchain_core/language_models/chat_models.py:267\u001b[0m, in \u001b[0;36mBaseChatModel._convert_input\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m 265\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m StringPromptValue(text\u001b[38;5;241m=\u001b[39m\u001b[38;5;28... | |
149831 | "File \u001b[0;32m~/langchain/oss-py/libs/core/langchain_core/messages/utils.py:321\u001b[0m, in \u001b[0;36m_convert_to_message\u001b[0;34m(message)\u001b[0m\n\u001b[1;32m 319\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mKeyError\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;3... | |
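The coercion behind MESSAGE_COERCION_FAILURE can be sketched in plain Python. This is an illustration of the idea, not LangChain's actual `_convert_to_message` implementation: strings, two-tuples, and well-formed dicts coerce; anything else raises.

```python
# Normalize a message-like value to a {"role": ..., "content": ...} dict.
def coerce_message(message):
    if isinstance(message, str):
        return {"role": "human", "content": message}
    if isinstance(message, tuple) and len(message) == 2:
        role, content = message
        return {"role": role, "content": content}
    if isinstance(message, dict) and "role" in message and "content" in message:
        return message
    # Shapes that fit none of the above mirror a coercion failure.
    raise ValueError(f"cannot coerce {message!r} to a message")

assert coerce_message("hi")["role"] == "human"
assert coerce_message(("ai", "hello"))["content"] == "hello"
```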
149833 | # MODEL_AUTHENTICATION
Your model provider is denying you access to their service.
## Troubleshooting
The following may help resolve this error:
- Confirm that your API key or other credentials are correct.
- If you are relying on an environment variable to authenticate, confirm that the variable name is correct an... | |
149855 | {
"cells": [
{
"cell_type": "markdown",
"id": "8c5eb99a",
"metadata": {},
"source": [
"# How to inspect runnables\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Expression Language (LCEL)](/docs/concepts/#l... | |
149856 | {
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to add retrieval to chatbots\n",
"\n",
"Retrieval is a common technique chatbots use to augment th... | |
149857 | "docs = retriever.invoke(\"Can LangSmith help test my LLM applications?\")\n",
"\n",
"docs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that invoking the retriever above results in some parts of the LangSmith docs that contain information about testing that our ch... | |
149858 | " Document(page_content=\"does that affect the output?\\u200bSo you notice a bad output, and you go into LangSmith to see what's going on. You find the faulty LLM call and are now looking at the exact input. You want to try changing a word or a phrase to see what happens -- what do you do?We constantly ran into this i... | |
149859 | "query_transforming_retriever_chain = RunnableBranch(\n",
" (\n",
" lambda x: len(x.get(\"messages\", [])) == 1,\n",
" # If only one message, then we just pass that message's content to retriever\n",
" (lambda x: x[\"messages\"][-1].content) | retriever,\n",
" ),\n",
"... | |
149862 | {
"cells": [
{
"cell_type": "raw",
"id": "8165bd4c",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [memory]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f47033eb",
"metadata": {},
"source": [
"# How to add message... | |
149869 | "message_history = messages[\"messages\"]\n",
"\n",
"new_query = \"Pardon?\"\n",
"\n",
"messages = langgraph_agent_executor.invoke(\n",
" {\"messages\": message_history + [(\"human\", new_query)]}\n",
")\n",
"{\n",
" \"input\": new_query,\n",
" \"output\": messages[\"message... | |
149872 | " (\"system\", \"You are a helpful assistant.\"),\n",
" (\"placeholder\", \"{messages}\"),\n",
" ]\n",
")\n",
"\n",
"\n",
"def _modify_state_messages(state: AgentState):\n",
" return prompt.invoke({\"messages\": state[\"messages\"]}).to_messages()\n",
"\n",
"\n",
... | |
149878 | " 'content': 'San Francisco Weather Forecast for Apr 2024 - Risk of Rain Graph. Rain Risk Graph: Monthly Overview. Bar heights indicate rain risk percentages. Yellow bars mark low-risk days, while black and grey bars signal higher risks. Grey-yellow bars act as buffers, advising to keep at least one day clear from the... | |
149885 | {
"cells": [
{
"cell_type": "markdown",
"id": "ea37db49-d389-4291-be73-885d06c1fb7e",
"metadata": {},
"source": [
"# How to use prompting alone (no tool calling) to do extraction\n",
"\n",
"Tool calling features are not required for generating structured output from LLMs. LLMs that are able t... | |
149886 | "{\"$defs\": {\"Person\": {\"description\": \"Information about a person.\", \"properties\": {\"name\": {\"description\": \"The name of the person\", \"title\": \"Name\", \"type\": \"string\"}, \"height_in_meters\": {\"description\": \"The height of the person expressed in meters.\", \"title\": \"Height In Meters\", \"... | |
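When extracting with prompting alone, the model's reply must be parsed and checked against the schema by hand. A minimal sketch (not LangChain's parser; the `people` key and reply string are invented to match the `Person` schema fragment above):

```python
import json
import re

# Pull the first JSON object out of a raw model reply and check each person
# against the expected keys from the schema.
def extract_people(raw_reply: str) -> list[dict]:
    match = re.search(r"\{.*\}", raw_reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    payload = json.loads(match.group())
    people = payload.get("people", [])
    for person in people:
        assert {"name", "height_in_meters"} <= person.keys()
    return people

reply = 'Sure! {"people": [{"name": "Anna", "height_in_meters": 1.8}]}'
assert extract_people(reply)[0]["name"] == "Anna"
```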
149888 | {
"cells": [
{
"cell_type": "raw",
"id": "f781411d",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [charactertextsplitter]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "c3ee8d00",
"metadata": {},
"source": [
"# How... | |
149897 | {
"cells": [
{
"cell_type": "markdown",
"id": "0fee7096",
"metadata": {},
"source": [
"# How to use the output-fixing parser\n",
"\n",
"This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.\n",
"\n",
"B... | |
149900 | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to propagate callbacks constructor\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Callbacks](/docs/concepts/#callbacks)\n",
"- [C... | |
149901 | {
"cells": [
{
"cell_type": "markdown",
"id": "4d6c0c86",
"metadata": {},
"source": [
"# How to retry when a parsing error occurs\n",
"\n",
"While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the ou... | |
149902 | "File \u001b[0;32m~/workplace/langchain/libs/langchain/langchain/output_parsers/pydantic.py:35\u001b[0m, in \u001b[0;36mPydanticOutputParser.parse\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 33\u001b[0m name \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mpydantic_object\u001b... | |
149903 | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to better prompt when doing SQL question-answering\n",
"\n",
"In this guide we'll go over prompting strategies to improve SQL query generation using [create_sql_query_chain](https://python.langchain.com/api_reference/lang... | |
149909 | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to debug your LLM apps\n",
"\n",
"Like building any type of software, at some point you'll need to debug when building with LLMs. A model call will fail, or model output will be misformatted, or there will be some nested ... | |
149916 | # How to use LangChain with different Pydantic versions
As of the `0.3` release, LangChain uses Pydantic 2 internally.
Users should install Pydantic 2 and are advised to **avoid** using the `pydantic.v1` namespace of Pydantic 2 with
LangChain APIs.
If you're working with prior versions of LangChain, please see the ... | |
149922 | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to attach callbacks to a runnable\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Callbacks](/docs/concepts/#callbacks)\n",
"- [Cus... | |
149923 | # Text embedding models
:::info
Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers.
:::
The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Coher... | |
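The interface boils down to two methods: one for embedding documents in batch and one for embedding a query. A toy stand-in illustrating that shape (the hashing "embedding" is purely illustrative; real providers such as OpenAI, Cohere, and Hugging Face return learned vectors):

```python
# A toy sketch of the Embeddings interface: embed_documents vs embed_query.
class ToyEmbeddings:
    def _embed(self, text: str) -> list[float]:
        # Deterministic 4-dim vector from character codes; not semantically
        # meaningful, just enough to demonstrate the method shapes.
        return [sum(ord(c) for c in text[i::4]) % 97 / 97 for i in range(4)]

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        return [self._embed(t) for t in texts]

    def embed_query(self, text: str) -> list[float]:
        return self._embed(text)

emb = ToyEmbeddings()
vectors = emb.embed_documents(["hello world", "goodbye"])
assert len(vectors) == 2 and len(vectors[0]) == 4
```

Providers keep the two methods separate because some models embed queries and documents differently.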
149925 | {
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to add memory to chatbots\n",
"\n",
"A key feature of chatbots is their ability to use content of ... | |
149935 | " data = graph.query(description_query, params={\"candidate\": entity})\n",
" return data[0][\"context\"]\n",
" except IndexError:\n",
" return \"No information was found\""
]
},
{
"cell_type": "markdown",
"id": "bdecc24b-8065-4755-98cc-9c6d093d4897",
"metadata": {},
... | |
149938 | {
"cells": [
{
"cell_type": "markdown",
"id": "d9172545",
"metadata": {},
"source": [
"# How to retrieve using multiple vectors per document\n",
"\n",
"It can often be useful to store multiple vectors per document. There are multiple use cases where this is beneficial. For example, we can emb... | |
149940 | ]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieved_docs = retriever.invoke(\"justice breyer\")\n",
"\n",
"len(retrieved_docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "097a5396",
"metadata": {},
... | |
149941 | {
"cells": [
{
"cell_type": "raw",
"id": "45e0127d",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 4\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "d8ca736e",
"metadata": {},
"source": [
"# How to partially format prompt templates\n",
"\n",
":::info ... | |
149942 | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to create custom callback handlers\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Callbacks](/docs/concepts/#callbacks)\n",
"\n",
... | |
149945 | {
"cells": [
{
"cell_type": "markdown",
"id": "c95fcd15cd52c944",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"# How to split by HTML sections\n",
"## Description and motivation\n",
"Similar in concept to the [HTMLHeaderTextSplit... | |
149950 | {
"cells": [
{
"cell_type": "raw",
"id": "e2596041-9b76-4e74-836f-e6235086bbf0",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"keywords: [RunnableParallel, RunnableMap, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b022ab74-794d-4c54-ad47-ff9549ddb9d... | |
149957 | - [How to: recursively split text](/docs/how_to/recursive_text_splitter)
- [How to: split by HTML headers](/docs/how_to/HTML_header_metadata_splitter)
- [How to: split by HTML sections](/docs/how_to/HTML_section_aware_splitter)
- [How to: split by character](/docs/how_to/character_text_splitter)
- [How to: split code](... | |
149966 | {
"cells": [
{
"cell_type": "markdown",
"id": "4ef893cf-eac1-45e6-9eb6-72e9ca043200",
"metadata": {},
"source": [
"# How to get your RAG application to return sources\n",
"\n",
"Often in Q&A applications it's important to show users the sources that were used to generate the answer. The simpl... | |
149968 | " Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}, page_content='Fig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\\nThe system comprises of 4 stages:\\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There a... | |
149982 | {
"cells": [
{
"cell_type": "markdown",
"id": "a05c860c",
"metadata": {},
"source": [
"# How to split text by tokens \n",
"\n",
"Language models have a token limit. You should not exceed the token limit. When you split your text into chunks it is therefore a good idea to count the number of t... | |
149996 | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to pass callbacks in at runtime\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Callbacks](/docs/concepts/#callbacks)\n",
"- [Custo... | |
150005 | {
"cells": [
{
"cell_type": "markdown",
"id": "612eac0a",
"metadata": {},
"source": [
"# How to do retrieval with contextual compression\n",
"\n",
"One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into... | |
150007 | "\n",
"We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n"
]
}
],
"source": [
"from langchain.retrievers.document_compressors import EmbeddingsFilter\n",
"from langchain_openai import OpenAIEmbeddings\n",
... | |
150009 | "Params: {'model_name': 'CustomChatModel'}\n"
]
}
],
"source": [
"llm = CustomLLM(n=5)\n",
"print(llm)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8cd49199",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"'Thi... | |
150027 | {
"cells": [
{
"cell_type": "markdown",
"id": "fc0db1bc",
"metadata": {},
"source": [
"# How to reorder retrieved results to mitigate the \"lost in the middle\" effect\n",
"\n",
"Substantial performance degradations in [RAG](/docs/tutorials/rag) applications have been [documented](https://arx... | |
150028 | "Given these texts:\n",
"-----\n",
"{context}\n",
"-----\n",
"Please answer the following question:\n",
"{query}\n",
"\"\"\"\n",
"\n",
"prompt = PromptTemplate(\n",
" template=prompt_template,\n",
" input_variables=[\"context\", \"query\"],\n",
")\n",
"\n",
"# C... | |
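The mitigation itself is a simple reordering: given documents sorted most-relevant first, interleave them so the strongest hits sit at the start and end of the context and the weakest land in the middle. A sketch mirroring the idea behind `LongContextReorder` (not its exact code):

```python
# Reorder documents so the most relevant appear at the edges of the context.
def reorder_lost_in_middle(docs: list) -> list:
    reordered = []
    for i, doc in enumerate(reversed(docs)):  # walk least-relevant first
        if i % 2 == 1:
            reordered.append(doc)      # odd positions go to the back
        else:
            reordered.insert(0, doc)   # even positions go to the front
    return reordered

# Ranks 1 (best) .. 5 (worst): best ends up at the edges, worst in the middle.
assert reorder_lost_in_middle([1, 2, 3, 4, 5]) == [1, 3, 5, 4, 2]
```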
150029 | {
"cells": [
{
"cell_type": "markdown",
"id": "5436020b",
"metadata": {},
"source": [
"# How to create tools\n",
"\n",
"When constructing an agent, you will need to provide it with a list of `Tool`s that it can use. Besides the actual function that is called, the Tool consists of several comp... | |
150032 | "\n",
"print(multiply.invoke({\"a\": 2, \"b\": 3}))\n",
"print(await multiply.ainvoke({\"a\": 2, \"b\": 3}))"
]
},
{
"cell_type": "markdown",
"id": "97aba6cc-4bdf-4fab-aff3-d89e7d9c3a09",
"metadata": {},
"source": [
"## How to create async tools\n",
"\n",
"LangChain Tools implemen... | |
150039 | "id": "8ba1764d-0272-4f98-adcf-b48cb2c0a315",
"metadata": {},
"source": [
"### Invoking the tool\n",
"\n",
"Great! We're able to generate tool invocations. But what if we want to actually call the tool? To do so we'll need to pass the generated tool args to our tool. As a simple example we'll just ext... | |
150042 | "Note that we didn't need to wrap the custom function `(lambda x: x.content[:5])` in a `RunnableLambda` constructor because the `model` on the left of the pipe operator is already a Runnable. The custom function is **coerced** into a runnable. See [this section](/docs/how_to/sequence/#coercion) for more information.\n"... | |

"It can be helpful to return not only tool outputs but also tool inputs. We can easily do this with LCEL by `RunnablePassthrough.assign`-ing the tool output. This will take whatever the input is to the RunnablePassthrough components (assumed to be a dictionary) and add a key to it while still passing through everythin... | |
150047 | {
"cells": [
{
"cell_type": "raw",
"id": "beba2e0e",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "bb0735c0",
"metadata": {},
"source": [
"# How to use few shot examples in chat models\n",
"\n",
":::in... | |
150050 | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to combine results from multiple retrievers\n",
"\n",
"The [EnsembleRetriever](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) supports ensembling of r... | |
150054 | {
"cells": [
{
"cell_type": "markdown",
"id": "72b1b316",
"metadata": {},
"source": [
"# How to parse JSON output\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [O... | |
150056 | # How to create and query vector stores
:::info
Head to [Integrations](/docs/integrations/vectorstores/) for documentation on built-in integrations with 3rd-party vector stores.
:::
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding
vectors, and the... | |
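The embed-then-search idea can be illustrated with a toy in-memory store. Embeddings here are precomputed toy vectors; a real vector store would call an embedding model and typically use an approximate-nearest-neighbor index instead of a brute-force scan.

```python
import math

# Cosine similarity between two equal-length vectors.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class ToyVectorStore:
    def __init__(self):
        self._entries = []  # (vector, text) pairs

    def add(self, vector: list[float], text: str) -> None:
        self._entries.append((vector, text))

    def similarity_search(self, query_vector: list[float], k: int = 1) -> list[str]:
        ranked = sorted(
            self._entries, key=lambda e: cosine(e[0], query_vector), reverse=True
        )
        return [text for _, text in ranked[:k]]

store = ToyVectorStore()
store.add([1.0, 0.0], "about cats")
store.add([0.0, 1.0], "about finance")
assert store.similarity_search([0.9, 0.1]) == ["about cats"]
```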
150058 | {
"cells": [
{
"cell_type": "raw",
"id": "27598444",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 3\n",
"keywords: [structured output, json, information extraction, with_structured_output]\n",
"---"
]
},
{
"cell_type"... | |
150061 | "For models that support more than one means of structuring outputs (i.e., they support both tool calling and JSON mode), you can specify which method to use with the `method=` argument.\n",
"\n",
":::info JSON mode\n",
"\n",
"If using JSON mode you'll have to still specify the desired schema in the mod... | |
150063 | {
"cells": [
{
"cell_type": "markdown",
"id": "4da7ae91-4973-4e97-a570-fa24024ec65d",
"metadata": {},
"source": [
"# How to do query validation as part of SQL question-answering\n",
"\n",
"Perhaps the most error-prone part of any SQL chain or agent is writing valid and safe SQL queries. In th... | |
150066 | {
"cells": [
{
"cell_type": "raw",
"id": "b5fc1fc7-c4c5-418f-99da-006c604a7ea6",
"metadata": {},
"source": [
"---\n",
"title: Custom Retriever\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "ff6f3c79-0848-4956-9115-54f6b2134587",
"metadata": {},
"source": [
"# How to c... | |
150068 | {
"cells": [
{
"cell_type": "markdown",
"id": "e5715368",
"metadata": {},
"source": [
"# How to track token usage in ChatModels\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [Chat models](/docs/concepts/#chat-models)... | |
150075 | {
"cells": [
{
"cell_type": "markdown",
"id": "b8982428",
"metadata": {},
"source": [
"# Run models locally\n",
"\n",
"## Use case\n",
"\n",
"The popularity of projects like [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), [GPT4All](http... | |
150076 | "Neil| Armstrong|,| an| American| astronaut|.| He| stepped| out| of| the| lunar| module| Eagle| and| onto| the| surface| of| the| Moon| on| July| |20|,| |196|9|,| famously| declaring|:| \"|That|'s| one| small| step| for| man|,| one| giant| leap| for| mankind|.\"||"
]
}
],
"source": [
"for chunk in ll... | |
150077 | "For example, below we run inference on `llama2-13b` with 4 bit quantization downloaded from [HuggingFace](https://huggingface.co/TheBloke/Llama-2-13B-GGML/tree/main).\n",
"\n",
"As noted above, see the [API reference](https://python.langchain.com/api_reference/langchain/llms/langchain.llms.llamacpp.LlamaCpp.ht... | |
150080 | {
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [Runnable, Runnables, RunnableSequence, LCEL, chain, chains, chaining]\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"... | |
150087 | {
"cells": [
{
"cell_type": "markdown",
"id": "674a0d41-e3e3-4423-a995-25d40128c518",
"metadata": {},
"source": [
"# How to do question answering over CSVs\n",
"\n",
"LLMs are great for building question-answering systems over various types of data sources. In this section we'll go over how t... | |
150088 | "print(db.dialect)\n",
"print(db.get_usable_table_names())\n",
"print(db.run(\"SELECT * FROM titanic WHERE Age < 2;\"))"
]
},
{
"cell_type": "markdown",
"id": "42f5a3c3-707c-4331-9f5f-0cb4919763dd",
"metadata": {},
"source": [
"And create a [SQL agent](/docs/tutorials/sql_qa) to interact ... | |
150090 | " [\n",
" (\n",
" \"system\",\n",
" system,\n",
" ),\n",
" (\"human\", \"{question}\"),\n",
" # This MessagesPlaceholder allows us to optionally append an arbitrary number of messages\n",
" # at the end of the prompt using the 'chat... | |
150091 | " df_template.format(df_head=_df.head().to_markdown(), df_name=df_name)\n",
" for _df, df_name in [(df_1, \"df_1\"), (df_2, \"df_2\")]\n",
")\n",
"\n",
"system = f\"\"\"You have access to a number of pandas dataframes. \\\n",
"Here is a sample of rows from each dataframe and the python code th... | |
150097 | {
"cells": [
{
"cell_type": "raw",
"id": "d35de667-0352-4bfb-a890-cebe7f676fe7",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 5\n",
"keywords: [RunnablePassthrough, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2",
"me... | |
150101 | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to force models to call a tool\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [LangChain To... | |
150102 | {
"cells": [
{
"cell_type": "raw",
"id": "94c3ad61",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 3\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b91e03f1",
"metadata": {},
"source": [
"# How to use few shot examples\n",
"\n",
":::info Prerequisite... | |
150106 | {
"cells": [
{
"cell_type": "raw",
"id": "ee14951b",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "105cddce",
"metadata": {},
"source": [
"# How to use a vectorstore as a retriever\n",
"\n",
"A vector ... | |
150108 | {
"cells": [
{
"cell_type": "markdown",
"id": "9122e4b9-4883-4e6e-940b-ab44a70f0951",
"metadata": {},
"source": [
"# How to load documents from a directory\n",
"\n",
"LangChain's [DirectoryLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.docume... | |
150115 | {
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 6\n",
"keywords: [RunnablePassthrough, assign, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to add values to a chain's state\n",
"\n",
... | |
150119 | "File \u001b[0;32m~/langchain/.venv/lib/python3.11/site-packages/langchain_core/tools/base.py:659\u001b[0m, in \u001b[0;36mBaseTool.run\u001b[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, tool_call_id, **kwargs)\u001b[0m\n\u001b[1;32m 657\u001b[0m \u001b[38... | |
150124 | "\n",
"ts_splitter = RecursiveCharacterTextSplitter.from_language(\n",
" language=Language.TS, chunk_size=60, chunk_overlap=0\n",
")\n",
"ts_docs = ts_splitter.create_documents([TS_CODE])\n",
"ts_docs"
]
},
{
"cell_type": "markdown",
"id": "ee2361f8",
"metadata": {},
"source": ... | |
150125 | " Document(page_content='<style>\\n body {\\n font-family: Aria'),\n",
" Document(page_content='l, sans-serif;\\n }\\n h1 {'),\n",
" Document(page_content='color: darkblue;\\n }\\n </style>\\n </head'),\n",
" Document(page_content... | |
150127 | {
"cells": [
{
"cell_type": "markdown",
"id": "7414502a-4532-4da3-aef0-71aac4d0d4dd",
"metadata": {},
"source": [
"# How to load web pages\n",
"\n",
"This guide covers how to load web pages into the LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.... | |
150129 | "{'https://python.langchain.com/docs/how_to/chatbots_memory/': \"You'll need to install a few packages, and have your OpenAI API key set as an environment variable named OPENAI_API_KEY:\\n%pip install --upgrade --quiet langchain langchain-openai\\n\\n# Set env var OPENAI_API_KEY or load from a .env file:\\nimport doten... | |
150134 | "id": "d59823f5-9b9a-43c5-a213-34644e2f1d3d",
"metadata": {},
"source": [
":::note\n",
"Because the code above is relying on JSON auto-completion, you may see partial names of countries (e.g., `Sp` and `Spain`), which is not what one would want for an extraction result!\n",
"\n",
"We're focusing o... | |
150152 | "For the retriever, we will use [WebBaseLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) to load the content of a web page. Here we instantiate a `InMemoryVectorStore` vectorstore and then use its [.as_retriever](https://pyth... |