From the course: Introduction to AI Orchestration with LangChain and LlamaIndex


RAG with LlamaIndex


We already looked at a Hello World-level RAG application in the previous chapter. Let's flesh that out a bit. LlamaIndex is really in its happy place when we're indexing documents, so let's get into some of the details needed to make this into more of a real-world app. And while we're at it, let's make it run entirely locally.

Here's the directory structure we're going to use. Within Chapter 2, there's a directory called handbook. This contains some sample data, and a directory called handbook_index will be created by our code. A small bit of advice here: if you're following along with the code for the first time, use just the sample content, even if it's just a single document. Once that's working, feel free to play around and try your own documents.

Here's a summary of some of the code we looked at earlier. We had a vector store index. From that, we made a query engine, and from that, we could just call query, passing in text. Instead of calling query_engine.query, we…
