Coding assistance and coding agents
- AI code completions, from single-line suggestions to multi-line, full-function implementations
- AI-powered chat in the IDE that supports every step of the SDLC,
using leading LLMs from Anthropic, OpenAI, Google, Meta, Mistral, and others
- Works in all major IDEs
- Workflow AI agents: test case agent, Jira implementation agent, code review agent
- Autonomous agents, with optional user-in-the-loop oversight
- Can use various tools through the Model Context Protocol (MCP):
  - Code tools: Git operations, testing frameworks, linters
  - External services: Jira, Confluence, databases, APIs
  - Development tools: Docker, package managers, CI/CD systems
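As a rough sketch of how tool access via MCP is typically wired up on the client side, a configuration might look like the following. The server package names, commands, and environment variables here are illustrative assumptions, not Tabnine-specific settings:

```json
{
  "mcpServers": {
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git"]
    },
    "jira": {
      "command": "npx",
      "args": ["-y", "example-jira-mcp-server"],
      "env": { "JIRA_API_TOKEN": "<your-token>" }
    }
  }
}
```

Each entry declares how the MCP client launches a tool server as a subprocess; the assistant can then discover and invoke that server's tools (e.g. Git operations or Jira lookups) during a session.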
- Integration with Atlassian Jira Cloud and Data Center to inform AI responses and generation
Org-awareness
- A Context Engine that understands your organization and its standards
- Unlimited codebase connections for Bitbucket, GitHub, GitLab, and Perforce P4 (Helix Core)
- Connects to Git, Jira, Confluence, and more
- Uses your real-time development context and adapts outputs accordingly
- Applies your organizational coding standards consistently
- Surfaces the most relevant APIs, docs, and code examples for your teams
Security
- Flexible deployment options – SaaS, VPC, on-premises, or fully air-gapped. You choose where your code lives
- Zero code retention and total privacy – No storage, no training on your code, no sharing with third parties
- End-to-end encryption and TLS. Secure communication and data integrity in every environment
- SSO integration for simplified administration of private deployments
- Enterprise-grade compliance – Meets GDPR, SOC 2, ISO 27001, and more
- License-safe AI usage. Built-in protection against licensing risks**
Deployment flexibility
- Fully private deployment on SaaS or self-hosted (VPC, on-premises, with the option to be fully air-gapped)
- Compatible with all major IDEs, LLMs, languages, and cloud providers
- Works across legacy systems and modern stacks
- No lock-in and no forced tool changes
- Extensible via MCP
Steerability
- Define clear boundaries and behaviors for the AI
- Governance controls to manage permissions, scope, and usage
- Centralized analytics for visibility into adoption, cost, and compliance
- Adjust Tabnine to follow your rules, processes, and risk profile
- Auditability for all usage
- Access MCP governance controls
- Set pricing thresholds per user/team
- See usage metrics per user/team
- Control LLM access by user/team
- View code generation provenance
- Advanced analytics
Support
- Priority ticket-based support during business hours
- Training on AI-enabled software development for your entire team
* Unlimited usage when using your own on-prem LLM or your own cloud LLM endpoint. When using Tabnine-provided LLM access, payment is added for a reserved token-consumption quota, based on actual LLM provider prices plus a 5% handling fee
** IP indemnification – subject to terms and conditions