
How OpenLM Scaled Secure, Context-Aware AI Across Hundreds of Microservices with Tabnine

Christopher Good · 6 minute read · April 16, 2025

Scaling Innovation Across Microservices and Frontend UX with Contextual AI

OpenLM, a global leader in engineering license management, helps some of the world’s most sophisticated enterprises optimize software usage, enforce compliance, and cut licensing costs across vast technical environments. With customers spanning aerospace, defense, automotive, and semiconductors, OpenLM builds highly specialized infrastructure software that integrates with hundreds of engineering tools and platforms.

To serve these customers, OpenLM runs a lean but powerful engineering organization. Its teams manage an extensive microservices-based architecture across a Kubernetes environment, with hundreds of interconnected repositories, backend-heavy workloads, and growing frontend complexity. With limited time, limited headcount, and increasing demand, OpenLM turned to Tabnine to unlock new development velocity—without compromising quality, security, or control.

Why OpenLM Replaced Boilerplate with AI—and Never Looked Back

Like many fast-moving engineering teams, OpenLM’s developers were spending too much time on boilerplate and repetitive tasks. As Petru Betco, one of OpenLM’s development team leads, put it: “Like most engineering organizations, we wanted to minimize time spent on boilerplate and repetitive tasks. Our goal was to automate the mundane so developers could focus on the high-impact, intellectually demanding parts of the work.”

The pain was especially acute on the frontend. While most of OpenLM’s systems are backend-centric, user-facing products were growing in complexity. With fewer frontend specialists, Petru’s team needed to deliver faster—without letting their backend-focused developers drown in unfamiliar UI work.

Testing was another pressure point. “Testing is critical, but nobody wants to spend time doing it. It’s also one of the most time-consuming and least rewarding tasks. We needed a way to maintain high test coverage without slowing down velocity,” Petru noted. The team wanted to improve test coverage and quality while reducing the overhead of writing and maintaining complex unit and integration tests.
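To make that concrete, here is a minimal sketch of the kind of repetitive test scaffolding the team wanted to offload. The `calculateUtilization` helper and its shape are hypothetical stand-ins, not OpenLM code; the pattern (a happy path, a boundary case, a guard case) is the boilerplate an assistant can generate in seconds.

```typescript
import { describe, it, expect } from "@jest/globals";

// Hypothetical license-utilization helper, used only to illustrate
// the kind of repetitive test scaffolding an assistant can generate.
interface LicensePool {
  total: number;      // seats purchased
  checkedOut: number; // seats currently in use
}

function calculateUtilization(pool: LicensePool): number {
  if (pool.total <= 0) return 0; // guard against empty pools
  return Math.min(pool.checkedOut / pool.total, 1); // cap at 100%
}

describe("calculateUtilization", () => {
  it("returns the checked-out fraction for a healthy pool", () => {
    expect(calculateUtilization({ total: 10, checkedOut: 4 })).toBeCloseTo(0.4);
  });

  it("caps utilization at 1 when usage exceeds capacity", () => {
    expect(calculateUtilization({ total: 5, checkedOut: 7 })).toBe(1);
  });

  it("treats an empty pool as zero utilization", () => {
    expect(calculateUtilization({ total: 0, checkedOut: 0 })).toBe(0);
  });
});
```

None of these cases is hard to write, which is exactly the point: the cost is in volume, not difficulty.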

With Tabnine now driving an Automation Factor nearing 40%, the team has substantially reduced time spent on low-value tasks like boilerplate and test scaffolding. This has allowed engineers to shift more of their time toward architecture, optimization, and innovation—without sacrificing quality or consistency.

At the same time, OpenLM was growing. Squads were distributed across multiple offices, engineers were onboarding into unfamiliar projects, and technical leaders needed visibility into the codebase without slowing teams down. They needed a secure, scalable, context-aware AI solution.

From Generic AI Tools to Enterprise-Grade Engineering Assistants

Like many developers, Petru began experimenting with ChatGPT in the browser. But it quickly became clear that generic AI tooling wasn’t built for software teams operating at scale. What they needed was something purpose-built for professional development environments: tightly integrated, security-aware, and context-driven.

“I hadn’t realized that AI tools could connect directly to the codebase and respond based on context,” Petru shared. “That level of contextual awareness completely shifted my view—this wasn’t just autocomplete, it was an actual engineering assistant. That changed everything.”

That discovery led OpenLM to Tabnine—a platform designed to bring contextual intelligence directly into the IDE. Rather than flipping between tabs or pasting code into external chats, developers could now access real-time suggestions, code explanations, and test scaffolding inside their secure environment, grounded in the full context of their codebase.

Privacy, Trust, and Control: Meeting CISO-Level Standards for Secure AI Adoption

As a company dealing with sensitive customer environments, OpenLM had a strict policy around data handling. Trust, security, and compliance weren’t just talking points—they were gating requirements for any vendor.

“Naturally, we had concerns around trust, security, and compliance. But those were quickly addressed once we dug into Tabnine’s architecture,” Petru said.

OpenLM adopted a clear model policy: Tabnine Protected is used on all security-sensitive code, while Claude 3.5—also available inside Tabnine—is used for general-purpose development. This model-level control ensures developers can move fast with confidence, balancing productivity with compliance.

“The ability to switch models based on sensitivity is key. For regulated workloads, we default to Tabnine Protected. For general development, we use higher-speed models like Claude 3.5 Sonnet. That kind of flexibility gives us productivity without compromising control,” Petru said.
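For illustration only, here is that routing rule expressed as code. This is not Tabnine configuration (model selection happens through Tabnine’s own controls); it is a hypothetical TypeScript sketch of the decision Petru describes, with all names assumed.

```typescript
// Hypothetical sketch of OpenLM's model policy -- not actual Tabnine
// configuration, just the routing decision expressed as code.
type Model = "tabnine-protected" | "claude-3-5-sonnet";

interface WorkItem {
  repo: string;
  securitySensitive: boolean; // e.g., set by a repo classification, assumed here
}

function selectModel(item: WorkItem): Model {
  // Regulated or security-sensitive code defaults to Tabnine Protected.
  if (item.securitySensitive) return "tabnine-protected";
  // Everything else can use the faster general-purpose model.
  return "claude-3-5-sonnet";
}

// Example: a sensitive service routes to Protected.
console.log(selectModel({ repo: "license-audit", securitySensitive: true }));
// -> "tabnine-protected"
```

The design point is that the rule is simple and explicit: sensitivity, not developer preference, decides which model handles the code.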

Codebase-Aware AI at Scale: Navigating Hundreds of Repos with Confidence

OpenLM’s codebase reflects the scale and sophistication of its platform. After migrating from a monolithic legacy system, the company now maintains a fully containerized architecture built on Kubernetes and Docker, with dozens of services managed by independent squads.

“Each repository is a service. One squad is responsible for multiple services—usually ones that work in tandem,” Petru explained. With hundreds of interconnected repositories and countless dependencies, navigating and maintaining cohesion across the architecture was a serious challenge.

Tabnine changed that.

“It accelerates development by understanding the architectural context of what you’re working on—so it doesn’t just suggest code, it delivers value aligned to our structure and standards.”

“The biggest value for us is how well Tabnine understands the code context. When you dive into a project you’ve never seen before, it helps you grasp the dependencies, the architecture, and what’s going on without having to dig through documentation or message coworkers.”

This deep contextual awareness enabled faster collaboration across squads, accelerated onboarding into unfamiliar services, and helped new team members start contributing faster.

OpenLM’s overall Productivity Factor recently peaked at 89.58%—a powerful signal that developers are consistently integrating Tabnine into their core workflows. Rather than relying on occasional suggestions, engineers are using Tabnine as a daily accelerant for writing, reviewing, and reasoning through production code.

This high productivity signal reinforces Tabnine’s role not just as an autocomplete tool, but as an embedded agent that adapts to OpenLM’s engineering DNA.

Accelerated Onboarding, Smarter Collaboration

Tabnine adoption has surged across OpenLM’s globally distributed engineering organization, with near-total license utilization and strong daily usage across squads. Developers have embraced the platform not just as a productivity booster, but as a true engineering partner — one that understands their code, their architecture, and the way they work.

With distributed engineering teams across Europe and Israel, OpenLM needed to ensure that new developers could ramp quickly and contribute confidently.

“The onboarding curve is minimal. The interface is intuitive, and most developers start seeing value almost immediately—often without needing documentation. That ease of adoption has been key to our usage rates.”

All new engineers are onboarded with Tabnine, with environment-specific install guides for JetBrains, Visual Studio, and VS Code. From day one, developers have an intelligent assistant that helps them explore, understand, and contribute to unfamiliar code.

“We like to give new developers real responsibilities right away and just throw them in the water—” Petru joked. “Tabnine helps them stay afloat and productive from day one.”

From OKRs to Edge Cases: Using AI to Plan, Test, and Ship Faster

Petru also shared a powerful use case that extended far beyond day-to-day development. When setting team OKRs, he turned to Tabnine’s Claude integration to help structure his goals.

“I provided a rough list of OKRs and asked Tabnine to help quantify and refine them. It returned detailed metrics, action steps, and success criteria—turning a high-level vision into a structured, executable plan.”

That same flexibility shows up in OpenLM’s UI migration efforts. With complex components like paginated dropdowns, dynamic preload states, and edge-case behaviors, Tabnine has helped frontend developers reason through difficult scenarios and move faster.
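As a concrete example of those scenarios, the sketch below models a paginated dropdown loader with an in-flight preload state and two of the edge cases mentioned: duplicate load requests fired by rapid scrolling, and an empty final page. The `Option` shape and `fetchPage` endpoint are hypothetical, not OpenLM’s actual components.

```typescript
// Hypothetical paginated-dropdown loader illustrating the preload and
// edge-case handling described above; Option and fetchPage are assumptions.
interface Option { id: string; label: string }
interface Page { items: Option[]; hasMore: boolean }

type Fetcher = (page: number) => Promise<Page>;

class DropdownLoader {
  private options: Option[] = [];
  private page = 0;
  private loading = false;   // preload/in-flight state driving the UI spinner
  private exhausted = false; // edge case: no more pages to request

  constructor(private fetchPage: Fetcher) {}

  get items(): readonly Option[] { return this.options; }
  get isLoading(): boolean { return this.loading; }

  // Load the next page; ignores calls while a request is in flight
  // (edge case: rapid scroll events firing duplicate loads).
  async loadMore(): Promise<void> {
    if (this.loading || this.exhausted) return;
    this.loading = true;
    try {
      const { items, hasMore } = await this.fetchPage(this.page);
      this.options.push(...items);
      this.page += 1;
      this.exhausted = !hasMore || items.length === 0; // empty page ends paging
    } finally {
      this.loading = false; // always clear the spinner, even on errors
    }
  }
}
```

It is reasoning through exactly these small state-machine details, rather than typing the code itself, where a context-aware assistant saves frontend time.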

“There’s still a lot of work to do as a software engineer, but Tabnine really helps cut the time and friction,” Petru said.

How Tabnine Delivers Ongoing Value at OpenLM

OpenLM’s adoption of Tabnine isn’t just a cultural or process shift — it’s backed by consistent, measurable results that highlight the platform’s long-term value across velocity, quality, and developer experience.

Productivity is accelerating: OpenLM reached a peak Productivity Factor of 89.58%, signaling deep integration of AI into daily engineering workstreams. Developers are consistently accepting and acting on AI-suggested code — driving faster delivery without compromising standards.

Automation without risk: With a sustained Automation Factor nearing 40%, OpenLM’s engineers are streamlining repetitive tasks like boilerplate generation and test scaffolding — while applying granular security policies through Tabnine Protected wherever needed.

Context is everything: Tabnine’s codebase-aware intelligence is a perfect match for OpenLM’s architecture — enabling developers to confidently contribute across a containerized microservices environment with hundreds of interconnected repositories.

AI chat drives strategic value: Engineers aren’t just using Tabnine to generate code — they’re using it to shape direction. From defining OKRs to planning architecture migrations, Claude-powered chat has become as integral as completions, demonstrating Tabnine’s role as a full-lifecycle assistant.

These results underscore what’s possible when AI is integrated with context, governed with control, and trusted by engineers. Tabnine delivers not just short-term boosts — but long-term, compounding value across the SDLC.

Join industry leaders like OpenLM in transforming your software development lifecycle.

Tabnine is the only AI software development platform built from the ground up to be context-aware, developer-friendly, and enterprise-secure. What makes Tabnine unique isn’t just that it generates code — it generates the right code, for your codebase, grounded in your architecture, patterns, and practices.

Why does that matter? Because trust is the true unlock for enterprise AI adoption — and trust is built on accuracy. At Tabnine, we achieve that accuracy by grounding every suggestion, test plan, explanation, and refactor in the real-time context of your codebase. That’s why Tabnine helps developers move faster without hesitation — because they know the AI understands exactly what they’re working on.

We don’t just integrate into your workflow — we adapt to it. Whether you’re deploying across air-gapped environments, managing sensitive IP, or scaling development across dozens of squads, Tabnine delivers intelligence you can trust, in the tools your developers already use.

Let Tabnine help your developers do their best work — faster, safer, smarter, and with complete confidence.

Contact us today or register for an upcoming Tabnine Office Hours to see how Tabnine can accelerate your engineering velocity while meeting the highest standards of trust, privacy, and performance.