
LlamaIndex

Document indexing and hybrid retrieval for LLMs

What it is

A specialized framework for connecting LLMs to your data: loaders for 100+ file formats, multiple index types (vector, keyword, summary, knowledge graph), hybrid retrievers, and built-in evaluators.

How Vaaani uses it

  • Multi-document QA with cross-document reasoning
  • Building Knowledge Graph indexes alongside vector indexes
  • Hierarchical summarization of long documents
  • Retrieval-augmented agents with cited sources
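The second bullet can be sketched in a few lines: build a vector index and a summary index over the same documents, using the former for targeted lookups and the latter for hierarchical summarization. The directory path and query strings below are illustrative assumptions, not part of any real deployment.

```python
# Sketch, assuming an OpenAI key is configured and ./pdfs exists.
from llama_index.core import SimpleDirectoryReader, SummaryIndex, VectorStoreIndex

docs = SimpleDirectoryReader("./pdfs").load_data()

# Vector index: embedding-based retrieval for pointed factual questions
vector_qe = VectorStoreIndex.from_documents(docs).as_query_engine()

# Summary index with tree summarization: condenses the whole corpus
# bottom-up, which suits long-document overviews
summary_qe = SummaryIndex.from_documents(docs).as_query_engine(
    response_mode="tree_summarize"
)

print(vector_qe.query("Which contract clauses mention renewal?"))
print(summary_qe.query("Summarize the key risks across all documents."))
```

The same `docs` list feeds both indexes, so the two query engines stay in sync as the source folder changes.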

Why it makes the cut

LangChain orchestrates; LlamaIndex retrieves. Together they handle 90% of the production RAG stack.

Sample code

from llama_index.core import KnowledgeGraphIndex, SimpleDirectoryReader

# Load every document in the folder, then extract up to 10
# subject-predicate-object triplets per chunk into a knowledge graph
docs = SimpleDirectoryReader("./pdfs").load_data()
kg = KnowledgeGraphIndex.from_documents(docs, max_triplets_per_chunk=10)

response = kg.as_query_engine().query("What did Q2 reveal?")
print(response)


Have a project that needs LlamaIndex?

30-min discovery call. You describe the busywork; I map it to an AI worker and a budget.