Mistral’s Codestral is now available on Tabnine

Ameya Deshmukh · 3 min read · June 3, 2024

We’re thrilled to announce that Codestral, the newest high-performance model from Mistral, is now available on Tabnine. Starting today, you can use Codestral to power code generation, code explanations, documentation generation, AI-created tests, and much more. When you use Codestral as the LLM underpinning Tabnine, its outsized 32k context window supports Tabnine’s personalized AI coding recommendations while maintaining quick response times.

Codestral: Mistral’s first-ever code model 

Released on May 29, 2024, Codestral is Mistral’s first-ever code model that’s fluent in 80+ programming languages. Tabnine has observed excellent coding performance with popular languages including Python, Java, C, C++, and Bash, and it also performs well on less common languages like Swift and Fortran.

Mistral’s announcement blog post shared some fascinating data on the performance of Codestral benchmarked against three much larger models: CodeLlama 70B, DeepSeek Coder 33B, and Llama 3 70B. They tested it using HumanEval pass@1, MBPP sanitized pass@1, CruxEval, RepoBench EM, and the Spider benchmark. The languages they compared performance on included Python, SQL, C++, Bash, Java, PHP, TypeScript, and C#.

Based on Mistral’s performance benchmarking, you can expect Codestral to significantly outperform the other tested models in Python, Bash, Java, and PHP, with comparable performance on the remaining languages. What makes this notable is that Codestral is a 22B-parameter model, far smaller than the 33B and 70B models it was benchmarked against, so it delivers that performance with much greater efficiency and an excellent cost-to-performance ratio.
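For readers unfamiliar with these benchmarks, HumanEval and MBPP report pass@1: the probability that a single sampled completion passes the problem’s unit tests. As a rough illustration only (this is the standard unbiased pass@k estimator from the HumanEval paper, not part of Mistral’s or Tabnine’s tooling), a minimal Python sketch looks like this:

    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k estimate for one problem, given n generated
        samples of which c pass the unit tests."""
        if n - c < k:
            return 1.0  # every size-k draw must contain a passing sample
        return 1.0 - comb(n - c, k) / comb(n, k)

    # Example: 10 completions sampled for a problem, 7 pass the tests.
    print(pass_at_k(10, 7, 1))  # 0.7 -- pass@1 is simply c / n

The benchmark score is this value averaged over all problems in the suite, so a higher pass@1 means the model more often gets a working solution on its first try.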

Extending the switchable models capability

We launched the switchable models capability for Tabnine in April 2024, originally offering our customers two Tabnine models plus the most popular models from OpenAI. Since then, we’ve released support for GPT-4o, and with the addition of Codestral, Tabnine users now have six available models to select from: 

  • Tabnine Protected: Tabnine’s original model is designed to deliver high performance without the risks of intellectual property violations or exposing your code and data to others. 
  • Tabnine + Mistral: This model was developed by Tabnine to deliver the highest class of performance across the broadest variety of languages while still maintaining complete privacy over your data.
  • Codestral: Our newest integration demonstrates proficiency in both widely used and less common languages. This model is recommended for users looking for the best possible performance who are comfortable sharing their data externally and using models trained on any publicly available code.
  • OpenAI GPT-4o, GPT-4 Turbo, and GPT-3.5 Turbo: These are the industry’s most popular LLMs, proven to deliver the highest levels of performance for teams willing to share their data externally.

One of our goals is to always provide our users with immediate access to cutting-edge models as soon as they become available. The switchable models capability puts you in the driver’s seat and lets you choose the best model for each task, project, and team. You’re never locked into any one model and can switch instantly between them using the model selector in Tabnine. 

During model selection, Tabnine provides transparency into the behaviors and characteristics of each of the available models to help you decide which is right for your situation. The underlying LLM can be changed with just a few clicks — and Tabnine Chat adapts instantly.  

Check out this short video that shows the Codestral model in action:

[GIF: the Codestral model in Tabnine]

Starting today, Codestral is available to all Tabnine Pro users at no additional cost. Make sure you’re running the latest version of the Tabnine plugin for your IDE to get access. Codestral will be available to Enterprise users soon — contact your account representative for more details.