167579 | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Components Of LlamaIndex\n",
"\n",
"In this notebook we will demonstrate building a RAG application and customizing it using different components of LlamaIndex.\n",
"\n",
"1. Question Answering\n",
"2. Summarization.\... | |
167611 | "with a Spearman rank correlation of 0.55 at the model level. This provides an\n",
"additional data point suggesting that LLM-based automated evals could be a\n",
"cost-effective and reasonable alternative to human evals.\n",
"\n",
"### How to apply evals?\n",
"\n",
"**Building solid... | |
"**RAG has its roots in open-domain Q&A.** An early [Meta\n",
"paper](https://arxiv.org/abs/2005.04611) showed that retrieving relevant\n",
"documents via TF-IDF and providing them as context to a language model (BERT)\n",
"improved performance on an open-domain QA task. They converted each task into... | |
167614 | "embedding models. It comes with pre-trained embeddings for 157 languages and\n",
"is extremely fast, even without a GPU. It’s my go-to for early-stage proof of\n",
"concepts.\n",
"\n",
"Another good baseline is [sentence-\n",
"transformers](https://github.com/UKPLab/sentence-transformers)... | |
167618 | "tuning with the HHH prompt led to better performance compared to fine-tuning\n",
"with RLHF.\n",
"\n",
"\n",
"\n",
"Example of HHH prompt ([source](https://arxiv.org/abs/2204.05862))\n",
"\n",
"**A more common approach is to validate th... | |
167622 | "\n",
"Grusky, Max. [“Rogue Scores.”](https://aclanthology.org/2023.acl-long.107/)\n",
"Proceedings of the 61st Annual Meeting of the Association for Computational\n",
"Linguistics (Volume 1: Long Papers). 2023.\n",
"\n",
"Liu, Yang, et al. [“Gpteval: Nlg evaluation using gpt-4 with better... | |
167693 | "Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets->FlagEmbedding==1.2.11) (3.10.5)\n",
"Collecting scikit-learn (from sentence_transformers->FlagEmbedding==1.2.11)\n",
" Downloading scikit_learn-1.5.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl... | |
167695 | "Installing collected packages: xxhash, threadpoolctl, scipy, safetensors, requests, pyarrow, fsspec, dill, scikit-learn, multiprocess, huggingface-hub, tokenizers, accelerate, transformers, datasets, sentence_transformers, peft, FlagEmbedding\n",
" Attempting uninstall: requests\n",
" Found existing in... | |
167752 | # Basic Strategies
There are many easy things to try when you need to quickly squeeze out extra performance and optimize your RAG workflow.
## Prompt Engineering
If you're encountering failures related to the LLM, like hallucinations or poorly formatted outputs, then this
should be one of the first things you try.
... | |
167761 | # SimpleDirectoryReader
`SimpleDirectoryReader` is the simplest way to load data from local files into LlamaIndex. For production use cases it's more likely that you'll want to use one of the many Readers available on [LlamaHub](https://llamahub.ai/), but `SimpleDirectoryReader` is a great way to get started.
## Supp... | |
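As a rough, hypothetical sketch of what a directory reader does under the hood (walk a folder, read each file, wrap it in a document with metadata), not the actual `SimpleDirectoryReader` implementation:

```python
from pathlib import Path

def simple_directory_read(input_dir: str, required_exts=None) -> list[dict]:
    """Toy directory reader: read each text file in a folder and wrap it
    in a document dict with file-name metadata. Hypothetical helper."""
    docs = []
    for path in sorted(Path(input_dir).iterdir()):
        if path.is_file() and (required_exts is None or path.suffix in required_exts):
            docs.append({
                "text": path.read_text(encoding="utf-8"),
                "metadata": {"file_name": path.name},
            })
    return docs
```

The real reader additionally dispatches on file type (PDF, docx, images, and so on) to format-specific parsers, which is where the LlamaHub readers come in.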
167765 | # Defining and Customizing Documents
## Defining Documents
Documents can either be created automatically via data loaders, or constructed manually.
By default, all of our [data loaders](../connector/index.md) (including those offered on LlamaHub) return `Document` objects through the `load_data` function.
```python... | |
167785 | # Using LLMs
## Concept
Picking the proper Large Language Model (LLM) is one of the first steps you need to consider when building any LLM application over your data.
LLMs are a core component of LlamaIndex. They can be used as standalone modules or plugged into other core LlamaIndex modules (indices, retrievers, qu... | |
167787 | # Embeddings
## Concept
Embeddings are used in LlamaIndex to represent your documents using a sophisticated numerical representation. Embedding models take text as input, and return a long list of numbers used to capture the semantics of the text. These embedding models have been trained to represent text this way, a... | |
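That "long list of numbers" is compared across texts, most commonly via cosine similarity; a minimal stdlib sketch of the comparison step (the embedding model itself is assumed):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product of the two vectors divided by the
    product of their magnitudes. 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Semantically similar texts get embeddings with high cosine similarity, which is what makes nearest-neighbor retrieval over embeddings work.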
167791 | ## Usage Pattern
### Defining a custom prompt
Defining a custom prompt is as simple as creating a format string
```python
from llama_index.core import PromptTemplate
template = (
"We have provided context information below. \n"
"---------------------\n"
"{context_str}"
"\n---------------------\n"
... | |
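The template above is truncated; assuming it ends with a hypothetical `{query_str}` placeholder (the usual pattern), filling it is plain `str.format`-style substitution, which is what `PromptTemplate.format` delegates to:

```python
# Hypothetical completion of the truncated template; the real tail may differ.
template = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)

prompt = template.format(
    context_str="LlamaIndex is a data framework for LLM applications.",
    query_str="What is LlamaIndex?",
)
```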
167792 | # Prompts
## Concept
Prompting is the fundamental input that gives LLMs their expressive power. LlamaIndex uses prompts to build the index, do insertion,
perform traversal during querying, and to synthesize the final answer.
LlamaIndex uses a set of [default prompt templates](https://github.com/run-llama/llama_index... | |
167795 | # Customizing LLMs within LlamaIndex Abstractions
You can plugin these LLM abstractions within our other modules in LlamaIndex (indexes, retrievers, query engines, agents) which allow you to build advanced workflows over your data.
By default, we use OpenAI's `gpt-3.5-turbo` model. But you may choose to customize
the... | |
167799 | # Customizing Storage
By default, LlamaIndex hides away the complexities and lets you query your data in under 5 lines of code:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_... | |
167840 | # Retriever
## Concept
Retrievers are responsible for fetching the most relevant context given a user query (or chat message).
A retriever can be built on top of [indexes](../../indexing/index.md), but can also be defined independently.
It is used as a key building block in [query engines](../../deploying/query_engine/index.... | |
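The core contract (query in, ranked relevant context out) can be illustrated with a toy keyword-overlap retriever; real retrievers typically score by embedding similarity instead:

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: score each document by word overlap with the query
    and return the top_k highest-scoring documents."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]
```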
167846 | # Output Parsing Modules
LlamaIndex supports integrations with output parsing modules offered
by other frameworks. These output parsing modules can be used in the following ways:
- To provide formatting instructions for any prompt / query (through `output_parser.format`)
- To provide "parsing" for LLM outputs (throug... | |
167854 | # Usage Pattern
## Getting Started
An agent is initialized from a set of Tools. Here's an example of instantiating a ReAct
agent from a set of Tools.
```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.core.agent import ReActAgent
# define sample To... | |
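The Tools themselves wrap plain functions; as a toy illustration of the name-based dispatch an agent performs (a hypothetical registry, not the `FunctionTool` API):

```python
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b

# Toy "tool registry": the agent picks a tool by name and calls it with
# arguments parsed from the LLM's output.
tools = {fn.__name__: fn for fn in (multiply, add)}

def call_tool(name: str, **kwargs):
    return tools[name](**kwargs)
```

A ReAct agent interleaves reasoning steps with calls like these, feeding each tool's result back into the next LLM prompt.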
167859 | # Tools
## Concept
Having proper tool abstractions is at the core of building [data agents](./index.md). Defining a set of Tools is similar to defining any API interface, except that these Tools are meant for agent rather than human use. We allow users to define both a **Tool** and a **ToolSpec** c... | |
167866 | # Chatbots
Chatbots are another extremely popular use case for LLMs. Instead of single-shot question-answering, a chatbot can handle multiple back-and-forth queries and answers, getting clarification or answering follow-up questions.
LlamaIndex gives you the tools to build knowledge-augmented chatbots and agents. Thi... | |
167907 | # Large Language Models
##### FAQ
1. [How to define a custom LLM?](#1-how-to-define-a-custom-llm)
2. [How to use a different OpenAI model?](#2-how-to-use-a-different-openai-model)
3. [How can I customize my prompt](#3-how-can-i-customize-my-prompt)
4. [Is it required to fine-tune my model?]... | |
167908 | # Documents and Nodes
##### FAQ
1. [What is the default `chunk_size` of a Node object?](#1-what-is-the-default-chunk_size-of-a-node-object)
2. [How to add information like name, url in a `Document` object?](#2-how-to-add-information-like-name-url-in-a-document-object)
3. [How to update existing document in an Index?]... | |
167940 | # Frequently Asked Questions (FAQ)
!!! tip
If you haven't already, [install LlamaIndex](installation.md) and complete the [starter tutorial](starter_example.md). If you run into terms you don't recognize, check out the [high-level concepts](concepts.md).
In this section, we start with the code you wrote for the [... | |
167943 | # Starter Tutorial (OpenAI)
This is our famous "5 lines of code" starter example using OpenAI.
!!! tip
Make sure you've followed the [installation](installation.md) steps first.
!!! tip
Want to use local models?
If you want to do our starter tutorial using only local models, [check out this tutorial inst... | |
167949 | # Starter Tools
We have created a variety of open-source tools to help you bootstrap your generative AI projects.
## create-llama: Full-stack web application generator
The `create-llama` tool is a CLI tool that helps you create a full-stack web application with your choice of frontend and backend that indexes your d... | |
168097 | class DuckDBVectorStore(BasePydanticVectorStore):
"""DuckDB vector store.
In this vector store, embeddings are stored within a DuckDB database.
During query time, the index uses DuckDB to query for the top
k most similar nodes.
Examples:
`pip install llama-index-vector-stores-duckdb`
... | |
168102 | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# DuckDB\n",
"\n",
">[DuckDB](https://duckdb.org/docs/api/python/overview) is a fast in-process analytical database. DuckDB is under an MIT license.\n",
"\n",
"In this notebook we are going to show how to use DuckDB as ... | |
168298 | class MilvusVectorStore(BasePydanticVectorStore):
"""The Milvus Vector Store.
In this vector store we store the text, its embedding and
its metadata in a Milvus collection. This implementation
allows the use of an already existing collection.
It also supports creating a new one if the collection ... | |
168299 | def __init__(
self,
uri: str = "./milvus_llamaindex.db",
token: str = "",
collection_name: str = "llamacollection",
dim: Optional[int] = None,
embedding_field: str = DEFAULT_EMBEDDING_KEY,
doc_id_field: str = DEFAULT_DOC_ID_KEY,
similarity_metric: str = "I... | |
168345 | """DeepLake vector store index.
An index that is built within DeepLake.
"""
import logging
from typing import Any, List, Optional, cast
from llama_index.core.bridge.pydantic import PrivateAttr
from llama_index.core.schema import BaseNode, MetadataMode, TextNode
from llama_index.core.vector_stores.types import (
... | |
168346 | class DeepLakeVectorStore(BasePydanticVectorStore):
"""The DeepLake Vector Store.
In this vector store we store the text, its embedding and
a few pieces of its metadata in a deeplake dataset. This implementation
allows the use of an already existing deeplake dataset if it is one that was created
th... | |
168348 | import pytest
import jwt # noqa
from llama_index.core import Document
from llama_index.core.vector_stores.types import (
BasePydanticVectorStore,
MetadataFilter,
MetadataFilters,
FilterCondition,
FilterOperator,
)
from llama_index.vector_stores.deeplake import DeepLakeVectorStore
def test_class(... | |
168427 | """Azure AI Search vector store."""
import enum
import json
import logging
from enum import auto
from typing import Any, Callable, Dict, List, Optional, Tuple, Union, cast
from azure.search.documents import SearchClient
from azure.search.documents.aio import SearchClient as AsyncSearchClient
from azure.search.documen... | |
168428 | def _create_index(self, index_name: Optional[str]) -> None:
"""
Creates a default index based on the supplied index name, key field names and
metadata filtering keys.
"""
from azure.search.documents.indexes.models import (
ExhaustiveKnnAlgorithmConfiguration,
... | |
168438 | # LlamaIndex Vector_Stores Integration: MongoDB
## Setting up MongoDB Atlas as the Datastore Provider
MongoDB Atlas is a multi-cloud database service made by the same people who build MongoDB.
Atlas simplifies deploying and managing your databases while offering the versatility you need
to build resilient and perfor... | |
168444 | """MongoDB Vector store index.
An index that is built on top of an existing vector store.
"""
import logging
import os
from importlib.metadata import version
from typing import Any, Dict, List, Optional, cast
from llama_index.core.bridge.pydantic import PrivateAttr
from llama_index.core.schema import BaseNode, Meta... | |
168452 | # Astra DB Vector Store
A LlamaIndex vector store using Astra DB as the backend.
## Usage
Pre-requisite:
```bash
pip install llama-index-vector-stores-astra-db
```
A minimal example:
```python
from llama_index.vector_stores.astra_db import AstraDBVectorStore
vector_store = AstraDBVectorStore(
token="AstraCS:... | |
168470 | class ChromaVectorStore(BasePydanticVectorStore):
"""Chroma vector store.
In this vector store, embeddings are stored within a ChromaDB collection.
During query time, the index uses ChromaDB to query for the top
k most similar nodes.
Args:
chroma_collection (chromadb.api.models.Collection... | |
168652 | class AzureCosmosDBMongoDBVectorSearch(BasePydanticVectorStore):
"""Azure CosmosDB MongoDB vCore Vector Store.
To use, you should have both:
- the ``pymongo`` python package installed
- a connection string associated with an Azure CosmosDB MongoDB vCore cluster
Examples:
`pip install llama-... | |
168721 | from typing import Any, List, Literal, Optional
import fsspec
from llama_index.vector_stores.docarray.base import DocArrayVectorStore
class DocArrayInMemoryVectorStore(DocArrayVectorStore):
"""Class representing a DocArray In-Memory vector store.
This class is a document index provided by Docarray that stor... | |
168769 | class RedisVectorStore(BasePydanticVectorStore):
"""RedisVectorStore.
The RedisVectorStore takes a user-defined schema object and a Redis connection
client or URL string. The schema is optional, but useful for:
- Defining a custom index name, key prefix, and key separator.
- Defining *additional* m... | |
168825 | """Pathway Retriever."""
import json
from typing import List, Optional
import requests
from llama_index.core.base.base_retriever import BaseRetriever
from llama_index.core.callbacks.base import CallbackManager
from llama_index.core.constants import DEFAULT_SIMILARITY_TOP_K
from llama_index.core.schema import (
N... | |
168867 | class VertexAISearchRetriever(BaseRetriever):
"""`Vertex AI Search` retrieval.
For a detailed explanation of the Vertex AI Search concepts
and configuration parameters, refer to the product documentation.
https://cloud.google.com/generative-ai-app-builder/docs/enterprise-search-introduction
Args:
... | |
168931 | from typing import Any, Dict, List, Optional
from llama_index.core.base.base_retriever import BaseRetriever
from llama_index.core.callbacks.base import CallbackManager
from llama_index.core.constants import DEFAULT_SIMILARITY_TOP_K
from llama_index.core.schema import NodeWithScore, QueryBundle
from llama_index.core.se... | |
168950 | """
Vectara index.
An index that is built on top of Vectara.
"""
import json
import logging
from typing import Any, List, Optional, Tuple, Dict
from enum import Enum
import urllib.parse
from llama_index.core.base.base_retriever import BaseRetriever
from llama_index.core.callbacks.base import CallbackManager
from llam... | |
169733 | import logging
import os
from typing import Any, Callable, Optional, Tuple, Union
from llama_index.core.base.llms.generic_utils import get_from_param_or_env
from tenacity import (
before_sleep_log,
retry,
retry_if_exception_type,
stop_after_attempt,
stop_after_delay,
wait_exponential,
wait_... | |
169735 | """OpenAI embeddings file."""
from enum import Enum
from typing import Any, Dict, List, Optional, Tuple
import httpx
from llama_index.core.base.embeddings.base import BaseEmbedding
from llama_index.core.bridge.pydantic import Field, PrivateAttr
from llama_index.core.callbacks.base import CallbackManager
from llama_in... | |
169823 | [build-system]
build-backend = "poetry.core.masonry.api"
requires = ["poetry-core"]
[tool.codespell]
check-filenames = true
check-hidden = true
skip = "*.csv,*.html,*.json,*.jsonl,*.pdf,*.txt,*.ipynb"
[tool.llamahub]
contains_example = false
import_path = "llama_index.embeddings.instructor"
[tool.llamahub.class_auth... | |
169857 | [build-system]
build-backend = "poetry.core.masonry.api"
requires = ["poetry-core"]
[tool.codespell]
check-filenames = true
check-hidden = true
skip = "*.csv,*.html,*.json,*.jsonl,*.pdf,*.txt,*.ipynb"
[tool.llamahub]
contains_example = false
import_path = "llama_index.embeddings.langchain"
[tool.llamahub.class_autho... | |
169860 | from llama_index.embeddings.langchain.base import LangchainEmbedding
__all__ = ["LangchainEmbedding"] | |
169862 | """Langchain Embedding Wrapper Module."""
from typing import TYPE_CHECKING, List, Optional
from llama_index.core.base.embeddings.base import (
DEFAULT_EMBED_BATCH_SIZE,
BaseEmbedding,
)
from llama_index.core.bridge.pydantic import PrivateAttr
from llama_index.core.callbacks import CallbackManager
if TYPE_CHE... | |
169863 | from llama_index.core.base.embeddings.base import BaseEmbedding
from llama_index.embeddings.langchain import LangchainEmbedding
def test_langchain_embedding_class():
names_of_base_classes = [b.__name__ for b in LangchainEmbedding.__mro__]
assert BaseEmbedding.__name__ in names_of_base_classes | |
170048 | class WandbCallbackHandler(BaseCallbackHandler):
"""Callback handler that logs events to wandb.
NOTE: this is a beta feature. The usage within our codebase, and the interface
may change.
Use the `WandbCallbackHandler` to log trace events to wandb. This handler is
useful for debugging and visualizi... | |
170343 | def get_triplets(
self,
entity_names: Optional[List[str]] = None,
relation_names: Optional[List[str]] = None,
properties: Optional[dict] = None,
ids: Optional[List[str]] = None,
) -> List[Triplet]:
# TODO: handle ids of chunk nodes
cypher_statement = "MATCH (e... | |
170845 | # LlamaIndex Output_Parsers Integration: Langchain | |
170933 | # GCS File or Directory Loader
This loader parses any file stored on Google Cloud Storage (GCS), or the entire Bucket (with an optional prefix filter) if no particular file is specified. It now supports more advanced operations through the implementation of ResourcesReaderMixin and FileSystemReaderMixin.
## Features
... | |
170938 | class GCSReader(BasePydanticReader, ResourcesReaderMixin, FileSystemReaderMixin):
"""
A reader for Google Cloud Storage (GCS) files and directories.
This class allows reading files from GCS, listing resources, and retrieving resource information.
It supports authentication via service account keys and ... | |
171195 | """
Azure Storage Blob file and directory reader.
A loader that fetches a file or iterates through a directory from Azure Storage Blob.
"""
import logging
import math
import os
from pathlib import Path
import tempfile
import time
from typing import Any, Dict, List, Optional, Union
from azure.storage.blob import Cont... | |
171340 | # LlamaIndex Readers Integration: Milvus
## Overview
Milvus Reader is designed to load data from a Milvus vector store, which provides search functionality based on query vectors. It retrieves documents from the specified Milvus collection using the provided connection parameters.
### Installation
You can install M... | |
171394 | # Confluence Loader
```bash
pip install llama-index-readers-confluence
```
This loader loads pages from a given Confluence cloud instance. The user needs to specify the base URL for a Confluence
instance to initialize the ConfluenceReader; the base URL needs to end with `/wiki`.
The user can optionally specify OAuth 2.... | |
171399 | class ConfluenceReader(BaseReader):
"""Confluence reader.
Reads a set of Confluence pages given a space key and optionally a list of page ids.
For more on OAuth login, check out:
- https://atlassian-python-api.readthedocs.io/index.html
- https://developer.atlassian.com/cloud/confluence/oauth... | |
171605 | """Qdrant reader."""
from typing import Dict, List, Optional, cast
from llama_index.core.readers.base import BaseReader
from llama_index.core.schema import Document
class QdrantReader(BaseReader):
"""Qdrant reader.
Retrieve documents from existing Qdrant collections.
Args:
location:
... | |
171893 | class MultipartMixedResponse(StreamingResponse):
CRLF = b"\r\n"
def __init__(self, *args, content_type: str = None, **kwargs):
super().__init__(*args, **kwargs)
self.content_type = content_type
def init_headers(self, headers: Optional[Mapping[str, str]] = None) -> None:
super().ini... | |
172031 | """Azure Cognitive Search reader.
A loader that fetches documents from a specific index.
"""
from typing import List, Optional
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from llama_index.core.readers.base import BaseReader
from llama_index.core.schema import D... | |
172111 | import logging
from typing import List
from llama_index.core.readers.base import BaseReader
from llama_index.core.schema import Document
logger = logging.getLogger(__file__)
class UnstructuredURLLoader(BaseReader):
"""Loader that uses unstructured to load HTML files."""
def __init__(
self, urls: Li... | |
172254 | # Unstructured.io File Loader
```bash
pip install llama-index-readers-file
```
This loader extracts the text from a variety of unstructured text files using [Unstructured.io](https://github.com/Unstructured-IO/unstructured). Currently, the file extensions that are supported are `.csv`, `.tsv`, `.doc`, `.docx`, `.odt`... | |
172256 | """
Unstructured file reader.
A parser for unstructured text files using Unstructured.io.
Supports .csv, .tsv, .doc, .docx, .odt, .epub, .org, .rst, .rtf,
.md, .msg, .pdf, .heic, .png, .jpg, .jpeg, .tiff, .bmp, .ppt, .pptx,
.xlsx, .eml, .html, .xml, .txt, .json documents.
"""
import json
from pathlib import Path
fro... | |
172258 | # Paged CSV Loader
```bash
pip install llama-index-readers-file
```
This loader extracts the text from a local .csv file by formatting each row in an LLM-friendly way and inserting it into a separate Document. A single local file is passed in each time you call `load_data`. For example, a Document might look like:
`... | |
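A minimal sketch of that per-row formatting with a hypothetical helper (one "header: value" line per cell, one document per row), not the actual `PagedCSVReader` code:

```python
import csv
import io

def load_paged_csv(text: str) -> list[str]:
    """Toy paged CSV loader: turn each CSV row into its own LLM-friendly
    document string, rendering every cell as a 'header: value' line."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        "\n".join(f"{key}: {value}" for key, value in row.items())
        for row in reader
    ]
```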
172260 | """Paged CSV reader.
A parser for tabular data files.
"""
from pathlib import Path
from typing import Any, Dict, List, Optional
from llama_index.core.readers.base import BaseReader
from llama_index.core.schema import Document
class PagedCSVReader(BaseReader):
"""Paged CSV parser.
Displays each row in an... | |
172395 | """Weaviate reader."""
from typing import Any, List, Optional
from llama_index.core.readers.base import BaseReader
from llama_index.core.schema import Document
class WeaviateReader(BaseReader):
"""Weaviate reader.
Retrieves documents from Weaviate through vector lookup. Allows option
to concatenate ret... | |
172404 | """DeepLake reader."""
from typing import List, Optional, Union
import numpy as np
from llama_index.core.readers.base import BaseReader
from llama_index.core.schema import Document
distance_metric_map = {
"l2": lambda a, b: np.linalg.norm(a - b, axis=1, ord=2),
"l1": lambda a, b: np.linalg.norm(a - b, axis=1,... | |
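The numpy lambdas above compute standard vector distances over batches; for illustration, equivalent stdlib versions for a single pair of vectors:

```python
import math

def l2_distance(a: list[float], b: list[float]) -> float:
    """Euclidean (L2) distance: square root of the sum of squared differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def l1_distance(a: list[float], b: list[float]) -> float:
    """Manhattan (L1) distance: sum of absolute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))
```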
172572 | # LlamaIndex Readers Integration: Chroma
## Overview
Chroma Reader is a tool designed to retrieve documents from existing persisted Chroma collections. Chroma is a framework for managing document collections and their associated embeddings efficiently.
### Installation
You can install Chroma Reader via pip:
```bas... | |
172576 | """Chroma Reader."""
from typing import Any, List, Optional, Union
from llama_index.core.readers.base import BaseReader
from llama_index.core.schema import Document
class ChromaReader(BaseReader):
"""Chroma reader.
Retrieve documents from existing persisted Chroma collections.
Args:
collection... | |
172744 | from typing import List, Optional, Sequence
from llama_index.core.base.llms.types import ChatMessage, MessageRole
BOS, EOS = "<s>", "</s>"
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = """\
You are a helpful, respectful and honest assistant. \
Always answer ... | |
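These special tokens get assembled into a single Llama-2 chat prompt string; a hedged sketch of a single-turn assembly (the exact spacing and multi-turn handling in the real utility may differ):

```python
BOS, EOS = "<s>", "</s>"
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def format_single_turn(system_prompt: str, user_message: str) -> str:
    """Assemble one Llama-2 chat turn: the system prompt is wrapped in
    <<SYS>> tags and embedded inside the first [INST] block."""
    return f"{BOS}{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_message} {E_INST}"
```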
172753 | # LlamaIndex Llms Integration: Huggingface
## Installation
1. Install the required Python packages:
```bash
%pip install llama-index-llms-huggingface
%pip install llama-index-llms-huggingface-api
!pip install "transformers[torch]" "huggingface_hub[inference]"
!pip install llama-index
```
2. Set th... | |
172769 | # LlamaIndex Llms Integration: Azure Openai
### Installation
```bash
%pip install llama-index-llms-azure-openai
!pip install llama-index
```
### Prerequisites
Follow this to setup your Azure account: [Setup Azure account](https://docs.llamaindex.ai/en/stable/examples/llm/azure_openai/#prerequisites)
### Set the en... | |
172844 | # LlamaIndex Llms Integration: Llama Api
## Prerequisites
1. **API Key**: Obtain an API key from [Llama API](https://www.llama-api.com/).
2. **Python 3.x**: Ensure you have Python installed on your system.
## Installation
1. Install the required Python packages:
```bash
%pip install llama-index-program-opena... | |
172907 | def _chat(self, messages: Sequence[ChatMessage], **kwargs: Any) -> ChatResponse:
url = f"{self.api_base}/chat/completions"
payload = {
"model": self.model,
"messages": [
message.dict(exclude={"additional_kwargs"}) for message in messages
],
... | |
173201 | @llm_chat_callback()
async def astream_chat(
self, messages: Sequence[ChatMessage], **kwargs: Any
) -> ChatResponseAsyncGen:
try:
import httpx
# Prepare the data payload for the Maritalk API
formatted_messages = self.parse_messages_for_model(messages)
... | |
173266 | # LlamaIndex Llms Integration: Huggingface API
Integration with Hugging Face's Inference API for generating text.
For more information on Hugging Face's Inference API, visit [Hugging Face's Inference API documentation](https://huggingface.co/docs/api-inference/quicktour).
## Installation
```shell
pip install llama-... | |
173399 | # LlamaIndex Llms Integration: Ollama
## Installation
To install the required package, run:
```bash
%pip install llama-index-llms-ollama
```
## Setup
1. Follow the [Ollama README](https://ollama.com) to set up and run a local Ollama instance.
2. When the Ollama app is running on your local machine, it will serve a... | |
173405 | @llm_chat_callback()
def stream_chat(
self, messages: Sequence[ChatMessage], **kwargs: Any
) -> ChatResponseGen:
ollama_messages = self._convert_to_ollama_messages(messages)
tools = kwargs.pop("tools", None)
def gen() -> ChatResponseGen:
response = self.client.chat(... | |
173423 | # LlamaIndex Llms Integration: Langchain
## Installation
1. Install the required Python packages:
```bash
%pip install llama-index-llms-langchain
```
## Usage
### Import Required Libraries
```python
from langchain.llms import OpenAI
from llama_index.llms.langchain import LangChainLLM
```
### Initialize ... | |
173485 | # LlamaIndex Llms Integration: Text Generation Inference
Integration with [Text Generation Inference](https://huggingface.co/docs/text-generation-inference) from Hugging Face to generate text.
## Installation
```shell
pip install llama-index-llms-text-generation-inference
```
## Usage
```python
from llama_index.ll... | |
173496 | # LlamaIndex Llms Integration: Openai
## Installation
To install the required package, run:
```bash
%pip install llama-index-llms-openai
```
## Setup
1. Set your OpenAI API key as an environment variable. You can replace `"sk-..."` with your actual API key:
```python
import os
os.environ["OPENAI_API_KEY"] = "sk-... | |
173722 | def _handle_upserts(
self,
nodes: Sequence[BaseNode],
store_doc_text: bool = True,
) -> Sequence[BaseNode]:
"""Handle docstore upserts by checking hashes and ids."""
assert self.docstore is not None
doc_ids_from_nodes = set()
deduped_nodes_to_run = {}
... | |
173732 | """LlamaIndex data structures."""
# indices
from llama_index.core.indices.composability.graph import ComposableGraph
from llama_index.core.indices.document_summary import (
DocumentSummaryIndex,
GPTDocumentSummaryIndex,
)
from llama_index.core.indices.document_summary.base import DocumentSummaryIndex
from llam... | |
173737 | class PromptHelper(BaseComponent):
"""Prompt helper.
General prompt helper that can help deal with LLM context window token limitations.
At its core, it calculates the available context size by starting with the context
window size of an LLM and reserving token space for the prompt template and the
out... | |
173747 | ## 🌲 Tree Index
Currently the tree index refers to the `TreeIndex` class. It organizes external data into a tree structure that can be queried.
### Index Construction
The `TreeIndex` first takes in a set of text documents as input. It then builds up a tree-index in a bottom-up fashion; each parent node is able to s... | |
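The bottom-up build can be sketched as repeated grouping and summarizing; here is a toy version where "summarization" is just joining child text (the real index calls an LLM to summarize each group):

```python
def build_tree(leaves: list[str], num_children: int = 2) -> list[list[str]]:
    """Toy bottom-up tree build: repeatedly group num_children nodes and
    'summarize' each group (here, by joining text) until one root remains.
    Returns the list of levels, leaves first, root last."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        current = levels[-1]
        parents = [
            " | ".join(current[i : i + num_children])
            for i in range(0, len(current), num_children)
        ]
        levels.append(parents)
    return levels
```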
173779 | def upsert_triplet(
self, triplet: Tuple[str, str, str], include_embeddings: bool = False
) -> None:
"""Insert triplets and optionally embeddings.
Used for manual insertion of KG triplets (in the form
of (subject, relationship, object)).
Args:
triplet (tuple): K... | |
173820 | def insert_nodes(self, nodes: Sequence[BaseNode], **insert_kwargs: Any) -> None:
"""
Insert nodes.
NOTE: overrides BaseIndex.insert_nodes.
VectorStoreIndex only stores nodes in document store
if vector store does not store text
"""
for node in nodes:
... | |
173881 | """Retriever tool."""
from typing import TYPE_CHECKING, Any, List, Optional
from llama_index.core.base.base_retriever import BaseRetriever
if TYPE_CHECKING:
from llama_index.core.langchain_helpers.agents.tools import LlamaIndexTool
from llama_index.core.schema import MetadataMode, NodeWithScore, QueryBundle
fro... | |
173894 | """Embedding utils for LlamaIndex."""
import os
from typing import TYPE_CHECKING, List, Optional, Union
if TYPE_CHECKING:
from llama_index.core.bridge.langchain import Embeddings as LCEmbeddings
from llama_index.core.base.embeddings.base import BaseEmbedding
from llama_index.core.callbacks import CallbackManager
... | |
def _merge(self, splits: List[_Split], chunk_size: int) -> List[str]:
"""Merge splits into chunks."""
chunks: List[str] = []
cur_chunk: List[Tuple[str, int]] = [] # list of (text, length)
last_chunk: List[Tuple[str, int]] = []
cur_chunk_len = 0
new_chunk = True
def close_ch... | |
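The merging logic being implemented can be sketched as a greedy packer (character counts stand in for token lengths, and the split-tracking bookkeeping of the real method is omitted):

```python
def merge_splits(splits: list[str], chunk_size: int) -> list[str]:
    """Toy greedy merge: pack consecutive splits into a chunk until adding
    the next split would exceed chunk_size characters, then start a new chunk."""
    chunks: list[str] = []
    cur: list[str] = []
    cur_len = 0
    for split in splits:
        if cur and cur_len + len(split) > chunk_size:
            chunks.append("".join(cur))
            cur, cur_len = [], 0
        cur.append(split)
        cur_len += len(split)
    if cur:
        chunks.append("".join(cur))
    return chunks
```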
173936 | """Vector memory.
Memory backed by a vector database.
"""
import uuid
from typing import Any, Dict, List, Optional, Union
from llama_index.core.bridge.pydantic import field_validator
from llama_index.core.schema import TextNode
from llama_index.core.vector_stores.types import BasePydanticVectorStore
from llama_inde... | |
173939 | class ChatSummaryMemoryBuffer(BaseMemory):
"""Buffer for storing chat history that uses the full text for the latest
{token_limit}.
All older messages are iteratively summarized using the {llm} provided, with
the max number of tokens defined by the {llm}.
User can specify whether initial tokens (u... | |
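A toy sketch of the buffer's behavior (word counts stand in for tokens, and the "summary" is a placeholder line rather than an LLM call):

```python
def summarize_buffer(messages: list[str], token_limit: int) -> list[str]:
    """Toy chat-summary buffer: keep the most recent messages whose combined
    word count fits in token_limit, and collapse everything older into one
    summary line. Hypothetical simplification of the real class."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):
        n = len(msg.split())
        if used + n > token_limit:
            break
        kept.insert(0, msg)
        used += n
    older = len(messages) - len(kept)
    if older:
        kept.insert(0, f"[summary of {older} earlier messages]")
    return kept
```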
174024 | class ReActAgent(BaseAgent):
"""ReAct agent.
Uses a ReAct prompt that can be used in both chat and text
completion endpoints.
Can take in a set of tools that require structured inputs.
"""
def __init__(
self,
tools: Sequence[BaseTool],
llm: LLM,
memory: BaseMem... | |
174058 | from queue import Queue
from threading import Event
from typing import Any, Generator, List, Optional
from uuid import UUID
from llama_index.core.bridge.langchain import BaseCallbackHandler, LLMResult
class StreamingGeneratorCallbackHandler(BaseCallbackHandler):
"""Streaming callback handler."""
def __init_... |