
Enhancing AI coding assistants with context using RAG and SEM-RAG

Tabnine Team / 6 minutes / May 27, 2024

Basic AI coding assistants, while helpful, often fall short of delivering the most relevant and contextually accurate code suggestions because they rely on a general understanding of software languages and the most common patterns for writing software. These assistants generate code that solves the problems they are given, but that code is often not aligned with an individual team’s coding standards, conventions, and styles. Developers typically need to modify or refine the suggestions before the code can be accepted into the application.

AI coding assistants typically function by leaning on the knowledge contained within a specific large language model (LLM) and applying general coding principles across various scenarios. As a result, typical AI assistants often lack the ability to understand the specific context of a project, leading to suggestions that, while syntactically correct, might not align with a team’s unique guidelines, expected approach, or architectural designs. The LLMs that underpin generative AI systems operate on a fixed set of training data, which does not evolve as the project progresses. This static approach can result in mismatches between the generated code and the project’s current state or requirements, forcing developers to make further manual adjustments.

Refining LLMs with RAG

There is a common misconception that AI assistants simply pass a prompt to an LLM to generate the results a user is looking for. Whether you are generating text, images, or code, the best AI assistants use a complex set of guidelines to ensure that what the user asks for (e.g., a software function that accomplishes a specific task) and what gets generated (a Java function, in the correct version, with the correct parameters for the application) are aligned.

One of the proven techniques to get the best outputs from any LLM is to provide additional context with the prompt. This approach, referred to as retrieval-augmented generation (RAG), has become a critical component of chatbots, AI assistants, and agents that successfully serve enterprise use cases.
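
To make the mechanism concrete, here is a minimal sketch of the RAG pattern: retrieve the stored snippets most similar to the user’s request and prepend them to the prompt before it reaches the LLM. The toy bag-of-words retriever and the example guideline snippets are illustrative assumptions; a production system would use an embedding model and a vector database instead.

```python
# Minimal RAG sketch: rank stored snippets by similarity to the query,
# then prepend the top matches to the prompt sent to the LLM.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use learned embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    q = vectorize(query)
    return sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def augment_prompt(query: str, documents: list[str]) -> str:
    # The retrieved context rides along with the user's request.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nTask:\n{query}"

# Hypothetical team guidelines serving as the retrieval corpus.
team_docs = [
    "Our services return Result<T> instead of throwing exceptions.",
    "All timestamps are stored as UTC Instant values.",
    "Use the builder pattern for domain objects with more than 3 fields.",
]
print(augment_prompt("Write a function that creates an Order with a creation timestamp", team_docs))
```

The augmented string, not the raw request, is what actually gets sent to the model, which is why the generated output can reflect context the LLM was never trained on.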

AI coding assistants, like all generative AI tools, use LLMs as the foundation for code generation. Bringing highly tailored RAG to coding assistants enables them to generate code that is of higher quality and more closely aligned with a company’s existing codebase and engineering standards.
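
As a rough illustration of what “highly tailored” retrieval might involve, the sketch below (continuing the toy retriever above) indexes a repository by chunking its source files, so that retrieval surfaces the team’s own code rather than generic patterns. The directory path, file extensions, and fixed-size line chunking are assumptions for the example; real coding assistants typically chunk by syntactic unit (functions, classes) and use code-aware embeddings.

```python
# Sketch of building a retrieval corpus from a team's codebase.
from pathlib import Path

def chunk_file(path: Path, lines_per_chunk: int = 40) -> list[str]:
    # Naive fixed-size chunking; real systems split on functions/classes.
    lines = path.read_text(encoding="utf-8", errors="ignore").splitlines()
    return ["\n".join(lines[i:i + lines_per_chunk])
            for i in range(0, len(lines), lines_per_chunk)]

def index_codebase(root: str, extensions=(".java", ".py", ".ts")) -> list[str]:
    # Collect chunks from every source file under the repo root.
    chunks: list[str] = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in extensions:
            chunks.extend(chunk_file(path))
    return chunks

# Feeding these chunks to the retriever from the previous sketch means a
# request like "add a service method" is answered with the team's own
# service classes as context ("./src" is a hypothetical repo path):
# corpus = index_codebase("./src")
# print(augment_prompt("Add a service method that cancels an order", corpus))
```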