Retrieval continues to evolve, and it's playing a major role in making large language models (LLMs) more effective for real-world tasks, such as responding to RFPs. Anthropic's Contextual Retrieval method (https://lnkd.in/dVrZayka), which combines traditional techniques like BM25 with modern embeddings and enriches documents during pre-processing (similar to the recent DocETL https://www.docetl.org/), is a good example of how retrieval accuracy is improving. By capturing both exact matches and broader semantic meanings, this method reduces retrieval failures and leads to more relevant, higher-quality results.
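To make the hybrid idea concrete, here's a minimal sketch of combining BM25 with embedding similarity; the libraries (rank_bm25, sentence-transformers), the sample chunks, the model name, and the blending weight are my own assumptions for illustration, not Anthropic's implementation.

```python
# Sketch: score each chunk with BM25 (exact matches) and embedding similarity
# (semantic meaning), then blend the two scores.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

chunks = [
    "Vendor must demonstrate SOC 2 Type II compliance.",
    "Past performance: delivered a similar platform for a federal agency.",
    "Pricing must be fixed-fee with quarterly milestones.",
]

# Lexical index over whitespace-tokenized chunks.
bm25 = BM25Okapi([c.lower().split() for c in chunks])

# Semantic index using a small, widely available embedding model.
model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = model.encode(chunks, convert_to_tensor=True)

def hybrid_search(query: str, alpha: float = 0.5):
    """Blend normalized BM25 and cosine-similarity scores (alpha is a guess)."""
    bm25_scores = bm25.get_scores(query.lower().split())
    max_bm25 = max(bm25_scores) or 1.0
    sem_scores = util.cos_sim(model.encode(query, convert_to_tensor=True), chunk_vecs)[0]
    blended = [
        alpha * (b / max_bm25) + (1 - alpha) * float(s)
        for b, s in zip(bm25_scores, sem_scores)
    ]
    return sorted(zip(chunks, blended), key=lambda x: x[1], reverse=True)

print(hybrid_search("where do they talk about compliance?"))
```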
For complex use cases like RFPs (and many others), the ability to go beyond traditional document search is critical. While it's useful to have an LLM help with fuzzy searches like "Where in this document do they talk about compliance?" or to polish sections of text, the real long-term value comes from building data pipelines. These pipelines can automate the extraction of industry-specific factors, competitive analysis, and past performance metrics—creating dynamic workflows that allow the LLM to generate customized responses based on this rich input. In other words, converting unstructured data into structured data.
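As a rough illustration of one such pipeline stage, here's a sketch that extracts structured fields from an RFP excerpt via the Anthropic SDK; the field schema, prompt wording, and model name are assumptions I've made for the example, and a production pipeline would add validation and retries.

```python
import json
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# Hypothetical extraction prompt: the field names are illustrative, not a standard.
EXTRACTION_PROMPT = """Extract the following fields from the RFP excerpt as JSON:
industry_factors (list), competitors_mentioned (list), past_performance_requirements (list).
Return only JSON.

RFP excerpt:
{excerpt}"""

def extract_rfp_fields(excerpt: str) -> dict:
    """One pipeline stage: unstructured RFP text in, structured record out."""
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # model choice is illustrative
        max_tokens=1024,
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(excerpt=excerpt)}],
    )
    return json.loads(message.content[0].text)
```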
The real differentiator with LLMs is their ability to handle variability that deterministic systems struggle with. Traditional rule-based approaches would be brittle and hard to maintain because they can't adapt to the nuanced, shifting priorities that arise in tasks like RFPs. LLMs, on the other hand, excel because they can incorporate a wide range of factors—from industry trends to relationship history—into cohesive outputs. And they bring far more breadth beyond any single domain than most human experts. This adaptability is useful not only for RFPs but for contract analysis, market research, software development, and more.
With many models now handling long context windows increasingly well, you can work with sizable, complex documents while integrating APIs and external data sources to automatically enrich responses, streamlining many other complex tasks.
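For example, a drafting step might fold external data into the same long-context prompt as the document itself; the endpoint, parameters, and field names below are hypothetical placeholders, not a real API.

```python
import requests

def enrich_with_market_data(company: str) -> dict:
    """Pull external context to fold into the prompt (hypothetical endpoint)."""
    resp = requests.get(
        "https://example.com/api/market-data", params={"company": company}, timeout=10
    )
    resp.raise_for_status()
    return resp.json()

def build_context(rfp_text: str, company: str) -> str:
    """Combine the (possibly very long) RFP with enrichment data so the model
    sees both in a single long-context prompt."""
    market = enrich_with_market_data(company)
    return f"RFP DOCUMENT:\n{rfp_text}\n\nEXTERNAL MARKET DATA:\n{market}"
```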
This isn’t about "zero-shot" learning; these models improve over time, whether through fine-tuning or few-shot learning. You can build systems where LLMs become integral to responding to RFPs dynamically, and these methods can be applied to other complex workflows as well. It’s no longer about just document search—it’s about using LLMs to automate nuanced processes that used to require extensive manual effort.
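One lightweight way this improvement shows up is few-shot prompting, where prior (requirement, winning response) pairs steer the model, so the system gets better as more examples are curated. The example pairs and prompt format below are assumptions for illustration.

```python
# Hypothetical library of curated past responses; in practice these would be
# retrieved from a store of prior winning RFP answers.
FEW_SHOT_EXAMPLES = [
    ("Describe your incident response process.",
     "We operate a 24/7 SOC with a documented four-stage IR runbook..."),
    ("Summarize relevant past performance.",
     "Over the past three years we delivered comparable systems for..."),
]

def build_few_shot_prompt(new_requirement: str) -> str:
    """Prefix the new requirement with prior examples to guide the model."""
    shots = "\n\n".join(
        f"Requirement: {req}\nResponse: {resp}" for req, resp in FEW_SHOT_EXAMPLES
    )
    return f"{shots}\n\nRequirement: {new_requirement}\nResponse:"
```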
For businesses, this means moving beyond using LLMs as simple Q&A tools to a place where they actively help curate knowledge, constructing and refining responses based on structures extracted from both source and target documents, whether for RFPs or similar tasks. Shameless plug: if you're working on tools or apps in this area and struggling to balance data management, privacy and data sovereignty, and operational stability, check us out at www.kamiwaza.ai