In the first four posts of this series, we’ve covered a lot about Retrieval Augmented Generation (RAG)—how it works, why it matters, and the benefits it brings. Now, we’re shifting gears to show you how we’re taking RAG to the next level at iseek.ai. We’ve built a two-step retrieval process designed to deliver even more precise and relevant results.
Before the retrieval process even begins, iseek.ai converts source content—such as curricular materials or assessment data—into vector embeddings. We enhance these embeddings with domain-specific concepts, enabling the system to quickly group similar content or materials matching a particular topic-based query.
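To make the idea concrete, here is a rough sketch of what that preparation step could look like in code. This is not our production pipeline; the embedding model, the sample documents, and the "prepend the concepts" enrichment strategy are illustrative assumptions only.

```python
# Illustrative sketch: converting source content into concept-enriched embeddings.
# The model choice and the enrichment strategy are assumptions, not iseek.ai's implementation.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model would do

documents = [
    {"text": "Learning objectives for the renal physiology module ...",
     "concepts": ["nephrology", "physiology", "preclinical curriculum"]},
    {"text": "OSCE rubric for patient communication skills ...",
     "concepts": ["clinical skills", "assessment", "communication"]},
]

# Enrich each piece of content with its domain-specific concepts before embedding,
# so topically related materials land close together in vector space.
enriched = [" | ".join(d["concepts"]) + " :: " + d["text"] for d in documents]
doc_vectors = model.encode(enriched, normalize_embeddings=True)
```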
Step 1: The Search
The first step is the search itself. iseek.ai compares the query against those concept-enriched embeddings to identify the most relevant results. The outcome is a focused universe of content that is specific to the discipline, contextually appropriate, and closely aligned with the query and the institution's domain-specific needs.
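Continuing the sketch above, the search step might boil down to a nearest-neighbor lookup over those vectors. Again, the example query, the top-k cutoff, and the brute-force dot product are illustrative stand-ins, not the actual implementation.

```python
# Illustrative sketch of Step 1: narrow the corpus to the most relevant candidates.
# Reuses `model`, `documents`, and `doc_vectors` from the previous sketch.
import numpy as np

query = "How well does our preclinical curriculum cover renal physiology?"
query_vector = model.encode([query], normalize_embeddings=True)[0]

# With normalized vectors, the dot product is cosine similarity.
scores = doc_vectors @ query_vector
top_k = 5
top_idx = np.argsort(scores)[::-1][:top_k]
candidates = [documents[i] for i in top_idx]
```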
Step 2: Refining the Results
Once the initial results are retrieved, we move to the refinement stage. iseek.ai submits the top results to the LLM, which draws on its training to tailor and contextualize them according to the institution's unique intelligence, parameters, and criteria.
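Here is an equally rough sketch of what that refinement call could look like. The LLM client, model name, prompt, and institutional criteria below are placeholders chosen for illustration; the actual refinement step is considerably more involved.

```python
# Illustrative sketch of Step 2: pass the top candidates to an LLM along with
# the institution's own criteria, asking it to rank, explain, and tag the results.
# The client, model, prompt, and criteria are all hypothetical examples.
from openai import OpenAI

client = OpenAI()  # any chat-capable LLM client would work here

institutional_criteria = (
    "Prioritize materials aligned with our competency-based curriculum map "
    "and flag anything relevant to upcoming accreditation standards."
)

context = "\n\n".join(f"[{i}] {c['text']}" for i, c in enumerate(candidates))

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": ("Rank the retrieved passages for the query, explain why each is "
                     "relevant, and tag each one with the curricular topics it covers.")},
        {"role": "user",
         "content": f"Query: {query}\n\nCriteria: {institutional_criteria}\n\nPassages:\n{context}"},
    ],
)
print(response.choices[0].message.content)
```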
This two-step process helps ensure results are accurate, comprehensive, up-to-date, and deeply relevant, while also augmenting each result with contextual tags. For educational institutions, the approach is particularly well suited to applications such as curriculum design, continuous quality improvement (CQI), and accreditation preparation.
Join us next week for the final post in this series, where we’ll take a closer look at how our approach meets the unique needs of professional and higher education.