5 AI trends we spotted at Google Cloud Next

Aydrian Howard / 4 minutes / April 18, 2024

The Tabnine crew hit the road again last week. This time, we took in the bright lights of Las Vegas as a sponsor of Google Cloud Next — and what happened there definitely didn’t stay there. It was another great opportunity to connect with more than 30,000 attendees. Those who visited our booth saw a demo of the latest Tabnine features in action and walked away with some stickers, a T-shirt, and one of our custom Tabnine Tab key caps.

Following up on last year’s topic of the exciting possibilities of generative AI, Google Cloud Next made it clear that the era of AI is here by showcasing how customers are using it to transform the way they work. Given our unique position in the AI ecosystem, we wanted to share some of our observations.

Here are five trends we noticed:

Where to run GenAI models?

There are a lot of models out there and some are very large. Now that we’re proficient at building and training these models, the question becomes: where will they run? The larger models require massive amounts of computing power on expensive servers to answer queries. Analysts have estimated that it could cost up to $700,000 a day to run ChatGPT. Google Cloud is putting a significant focus on its cloud hardware to become the most efficient place to run GenAI workloads, but cloud infrastructure remains more expensive than on-premises model deployments for the companies that have the staff to manage it. We know that most clients (and particularly large regulated ones) expect to be able to run these workloads anywhere. 

Because Tabnine uses specially trained models focused on code generation, they’re smaller and cheaper to deploy than consumption-priced API calls to a general-purpose model. Our customers can use our SaaS environment or self-host, either in a VPC or, for increased security, completely air-gapped on their own hardware. We can also help you source hardware to run Tabnine on-premises through our partnerships with NVIDIA and other hardware providers.

Data protection is top of mind

Large clouds plan to solve the GenAI problem with the largest possible models. Google just announced its Gemini family of models, placing it at the center of its GenAI story. As with most large models, you likely have no idea what data they’re being trained on. This could leave you open to copyright issues and the possible exposure of unlicensed code. 

Tabnine set out to alleviate these issues by training our own model exclusively on permissively licensed open source code, and by never persisting or training on your code. We were the first company to build an AI coding assistant: we released the first version of Tabnine in 2018, and it’s now used by over a million developers at thousands of companies, including those in highly secure and regulated industries like defense, electronics manufacturing, and healthcare. We also provide transparency through our Trust Center, where you can access a list of every repository used to train our models.

The emerging focus on AI agents

AI agents are intelligent systems programmed to perform tasks, make decisions, and interact with their environment just like humans do. Google has made this a focus area by previewing their new Vertex AI Agent Builder and showing off how customers such as Best Buy, Mercedes-Benz, and IHG Hotels & Resorts are leveraging them.

With the introduction of Tabnine Chat came AI agents that assist with the software development process. We added an Onboarding Agent that analyzes a project, gives new developers an overview, and answers their questions, cutting down the time it takes to ramp up on a new codebase. Other chat agents execute tasks such as test generation and code documentation. Going forward, we’ll add more agents to assist with every step of the software development life cycle.
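The agent pattern described above — observe the environment, decide on an action, act, and repeat until the goal is met — can be sketched in a few lines. This is a deliberately toy illustration of the loop, not how Tabnine or Vertex AI implements agents:

```python
# Toy sketch of the agent loop: observe state, decide on an action,
# act on the environment, repeat until the goal is reached.
def run_agent(goal: int, state: int = 0, max_steps: int = 10) -> int:
    for _ in range(max_steps):
        if state == goal:                    # observe: goal reached?
            break
        action = 1 if state < goal else -1   # decide
        state += action                      # act on the environment
    return state

print(run_agent(goal=3))  # → 3
```

Real agents replace the integer "environment" with tools, APIs, and model calls, but the observe/decide/act loop is the same skeleton.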

The proliferation of highly specific AI models for software development

New AI models for software development tasks are being released at a steady clip. The industry will likely see a proliferation of such models, and you’ll likely find that some are better than others for specific tasks or languages. Many companies are also exploring building their own models, or have already done so.

Tabnine recently released switchable models, which let customers choose the best-performing model for the job. We pride ourselves on being the AI coding assistant that you control: our switchable models capability lets you pick among four different models to power Tabnine Chat, giving you the flexibility to choose the best model for each project or task.
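The general idea behind switchable models is a registry that maps a task or project to a model, with a fallback default. The sketch below is a generic pattern with made-up model names, not Tabnine’s actual configuration or API:

```python
# Generic sketch of a switchable-model registry. Model names and task
# keys are illustrative only, not any product's real configuration.
MODELS = {
    "code_generation": "model-a",
    "long_context_chat": "model-b",
    "default": "model-c",
}

def select_model(task: str) -> str:
    """Return the task-specific model, falling back to the default."""
    return MODELS.get(task, MODELS["default"])

print(select_model("code_generation"))  # → model-a
print(select_model("unknown_task"))     # → model-c
```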

Increasing the quality of AI recommendations

AI hallucinations, meaning incorrect or misleading results generated by a model, are a real concern. They will never go away completely, but there are ways to reduce them, including grounding and adding context through retrieval-augmented generation (RAG).
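RAG reduces hallucinations by retrieving relevant material (here, workspace snippets) and grounding the model’s prompt in it. The sketch below uses a naive string-similarity retriever for illustration; all function names are hypothetical, and production systems would use embeddings and a vector index instead:

```python
# Minimal sketch of retrieval-augmented generation (RAG) for code.
# Function names are illustrative; real retrievers use embeddings.
from difflib import SequenceMatcher

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank workspace snippets by naive textual similarity to the query."""
    scored = sorted(
        documents,
        key=lambda doc: SequenceMatcher(None, query, doc).ratio(),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model by prepending retrieved context to the query."""
    context = "\n".join(retrieve(query, documents))
    return f"Context from workspace:\n{context}\n\nTask: {query}"

workspace = [
    "def get_user(user_id): ...  # fetches a user from the database",
    "def send_email(to, subject, body): ...",
    "class OrderService: ...",
]
prompt = build_prompt("write a function that fetches a user", workspace)
```

Because the prompt now carries real snippets from the developer’s codebase, the model’s completions can reference actual identifiers rather than inventing plausible-looking ones.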

Tabnine recently rolled out updates that added the ability to use RAG with the local workspace and with remote repositories. This yielded a 40% increase in acceptance of multiline code completions compared to suggestions generated without exposure to a developer’s codebase.

Another feature currently available in Tabnine is model customization: we fine-tune the Tabnine model on training data that you provide, further increasing its performance. Regardless of the tool or model you choose, some languages simply lack sufficient exposure in the data that informed the current crop of models. Fine-tuning Tabnine on your own code closes those gaps dramatically. Of course, the model we fine-tune for you remains 100% private and secure.

We look forward to returning to Las Vegas April 30–May 2 for Atlassian Team ‘24. We hope to see you there.