Tabnine goes hybrid, serving AI models both in the cloud and locally

Tabnine Team / <1 minute / June 8, 2022

Harnessing the combined power of both cloud and local servers is essential for providing the best possible code prediction experience for our users. 

That’s why we’re excited to introduce Tabnine’s new hybrid model, which uses both cloud and local inference. As of June 1, 2022, the hybrid model is enabled by default for all new installations, meaning that Tabnine uses your local machine and our cloud servers in tandem to provide contextual code completions.

Why is Tabnine moving to a hybrid model?

Tabnine’s cloud offering has evolved greatly over the past year, partly due to a significant increase in model size and model specialization. Inference with these larger models can no longer run effectively on a local machine alone.

Our cloud models, which require network connectivity, allow you to use Tabnine’s servers for GPU-accelerated completions, offering longer and more accurate predictions.

Local models run without network access and rely on local resources; as a result, they’re not as powerful as the cloud models.

Tabnine’s new hybrid model combines the benefits of both.
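
To make “in tandem” concrete, here is a minimal, purely illustrative sketch (not Tabnine’s actual client code) of how a hybrid completion client might query a local engine and a cloud engine in parallel and fall back gracefully when the cloud is unreachable. The engine names, scoring, and merging strategy are all hypothetical.

```typescript
// Hypothetical sketch only: illustrates one way a "hybrid" client could
// combine a fast, always-available local engine with a more powerful
// cloud engine, degrading to local-only results when offline.

interface Completion {
  text: string;
  score: number; // higher = more confident (hypothetical ranking signal)
}

type Engine = (prefix: string) => Promise<Completion[]>;

// Placeholder engines; real local/cloud inference is assumed, not shown.
const localEngine: Engine = async (prefix) => [
  { text: prefix + " /* local suggestion */", score: 0.4 },
];

const cloudEngine: Engine = async (prefix) => [
  { text: prefix + " /* cloud suggestion */", score: 0.9 },
];

async function hybridComplete(prefix: string): Promise<Completion[]> {
  // Query both engines concurrently; tolerate a cloud failure (e.g. offline).
  const [local, cloud] = await Promise.allSettled([
    localEngine(prefix),
    cloudEngine(prefix),
  ]);

  const results: Completion[] = [];
  if (local.status === "fulfilled") results.push(...local.value);
  if (cloud.status === "fulfilled") results.push(...cloud.value);

  // Rank the merged candidates so the strongest suggestion surfaces first.
  return results.sort((a, b) => b.score - a.score);
}

hybridComplete("const user =").then((suggestions) =>
  console.log(suggestions.map((s) => s.text)),
);
```

The key property of any such design is that the local engine keeps completions flowing even when the network drops, while cloud results, when available, are preferred because they come from larger, GPU-served models.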

What does this mean for you?

You still have full control

Although we recommend keeping the hybrid model enabled, you can easily switch to cloud-only or local-only mode at any time. Just visit the Tabnine Hub to change the default configuration.

We remain committed to your privacy 

Tabnine continues to place the highest value on your privacy, never storing or sharing any of your code. All communication with the cloud is strongly encrypted. Learn more about Tabnine’s code privacy here.

Always moving forward

This new hybrid model is an important step in our quest to provide ever more powerful coding experiences for our users, but it’s just one of many steps to come.

Stay tuned for more news coming soon…