January Changelog

Christopher Good / 5 minutes / February 24, 2025

2025 starts with a powerhouse of enhancements to transform how enterprise teams leverage AI in their development workflows. We’re introducing image-based code generation for seamless design-to-code translation, expanding our LLM arsenal with Llama 3.3 and Qwen 2.5 support, adding unprecedented model flexibility for self-hosted environments, and giving teams granular control over context scoping. Plus, we’ve supercharged custom commands with @ mentions for more dynamic, reference-driven workflows, and completely rebuilt our chat interface with a modern, intuitive facelift that optimizes performance even in remote development environments. These updates reflect our unwavering commitment to delivering an AI software development platform that’s truly tailored to enterprise teams, with powerful capabilities that maintain security, compliance, and consistent coding standards across your organization.

Image as context

Our new Image as Context feature bridges the gap between visual design and code implementation. Now you can upload UI mockups, flowcharts, database diagrams, or even annotated screenshots directly into Tabnine Chat, and watch as our AI transforms these visual elements into actionable code. Whether you’re translating a Figma mockup into a React component, converting an ER diagram into SQL scripts, or using an annotated screenshot to pinpoint a bug, Tabnine understands the visual context and generates the appropriate code. This feature is particularly powerful when combined with our existing context awareness capabilities – Tabnine doesn’t just generate generic code from your images; it creates implementations that align with your team’s patterns and standards.

For enterprise engineering teams, this means accelerating development cycles while maintaining consistency across large, distributed teams. Instead of different developers interpreting visual requirements in their own way, Tabnine ensures standardized implementations that adhere to your organization’s established practices and architectural patterns. The result? Faster prototyping, smoother design-to-development handoffs, reduced technical debt, and more time for your engineers to focus on complex problem-solving rather than routine code translation tasks.
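To make the design-to-code idea concrete, here is the kind of scaffold a mockup-to-code translation might yield for a simple login form. This is an illustrative, framework-free sketch written by hand for this post – the `FieldSpec` type and `renderLoginForm` helper are hypothetical, not actual Tabnine output:

```typescript
// Illustrative only: a hand-written sketch of the kind of scaffold a
// mockup-to-code translation might produce for a simple login form.
// The FieldSpec type and field names are hypothetical, not Tabnine output.

interface FieldSpec {
  name: string;       // form field name, e.g. "email"
  label: string;      // visible label taken from the mockup
  inputType: string;  // HTML input type, e.g. "password"
}

// Render the fields a designer laid out in the mockup as an HTML form string.
function renderLoginForm(fields: FieldSpec[]): string {
  const rows = fields
    .map(
      (f) =>
        `<label for="${f.name}">${f.label}</label>` +
        `<input id="${f.name}" name="${f.name}" type="${f.inputType}" />`
    )
    .join("\n");
  return `<form method="post">\n${rows}\n<button type="submit">Sign in</button>\n</form>`;
}

// Example: the two fields visible in a hypothetical login mockup.
const loginHtml = renderLoginForm([
  { name: "email", label: "Email", inputType: "email" },
  { name: "password", label: "Password", inputType: "password" },
]);
```

In practice the generated code would follow your team’s framework and conventions – a React component for a React codebase, for instance – rather than raw HTML strings.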

Qwen 2.5 32B and Llama 3.3 70B support

We’re expanding our enterprise model offerings with native support for Llama 3.3 70B and Qwen 2.5 32B in Tabnine Enterprise Self-Hosted deployments. Llama brings exceptional performance to complex enterprise programming tasks, achieving a 42.5% success rate on challenging problems through enhanced context window handling and advanced attention mechanisms. This means more accurate understanding of complex, multi-file codebases and better use of Tabnine’s context engine – all while maintaining a 40% smaller memory footprint compared to similar models.

Meanwhile, Qwen shines in maintaining consistent code style across enterprise projects with a remarkable 94% success rate in pattern adherence, making it particularly valuable for teams working across multiple programming languages and frameworks. Together, these powerful models give enterprise teams new options for tackling everything from code modernization initiatives to technical debt reduction while maintaining strong security and efficient resource utilization.

Add any LLM to your Tabnine Enterprise Self-hosted environment

We’re revolutionizing how enterprise engineering teams can leverage AI with our new model flexibility capability for Tabnine Enterprise Self-hosted. Now your development organization can integrate any LLM of your choice – whether it’s models fine-tuned on your proprietary codebase, specialized third-party models for specific development tasks, or emerging open-source options – while still benefiting from Tabnine’s powerful context engine, AI agents, and deep IT system integrations.

This breakthrough is especially powerful for large engineering teams in regulated industries who need to maintain consistent development standards across distributed teams while protecting sensitive intellectual property. Deploy models within your own infrastructure to maintain complete data sovereignty, eliminate unpredictable usage-based pricing, and enforce unified coding standards through our code review agent and attribution features. For engineering leaders balancing innovation with compliance requirements, this means your teams can leverage cutting-edge AI capabilities while maintaining the security controls and architectural patterns your organization demands.
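Self-hosted LLMs are commonly exposed behind an OpenAI-compatible chat-completions API, which is what makes "bring your own model" setups practical. The sketch below shows the shape of the request such an integration typically sends; the endpoint, model name, and `buildChatRequest` helper are placeholders for illustration, not Tabnine’s actual configuration:

```typescript
// Illustrative only: a minimal sketch of the OpenAI-compatible request shape
// many self-hosted LLM gateways accept. Endpoint, model name, and helper
// are placeholders, not Tabnine's actual integration code.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  model: string;          // e.g. a model fine-tuned on your own codebase
  messages: ChatMessage[];
  temperature: number;
}

function buildChatRequest(model: string, prompt: string): ChatRequest {
  return {
    model,
    messages: [
      { role: "system", content: "You are a coding assistant." },
      { role: "user", content: prompt },
    ],
    temperature: 0.2, // low temperature for more deterministic code output
  };
}

// This payload would be POSTed to something like
// https://llm.internal.example.com/v1/chat/completions inside your own
// network, so prompts and code never leave your infrastructure.
const req = buildChatRequest("acme-code-llm", "Write a unit test for parseConfig().");
```

Because the traffic stays inside your perimeter, data sovereignty and cost control follow directly from where the model is deployed rather than from contractual guarantees.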

Context selection

With our new context-scoping capabilities, we’re giving development teams unprecedented control over how Tabnine understands and uses your codebase. Now, your engineers can precisely direct Tabnine’s focus – whether that’s searching across all of your connected repositories for architectural patterns or specialized implementations, or limiting context to the local workspace for targeted development tasks.

This granular control is particularly valuable for large enterprise teams working across multiple repositories and complex codebases. Want to ensure new code follows existing patterns in a specific component? Need to understand how CSV reports are implemented across different teams? Tabnine now lets you scope your search exactly where you need it, making it easier to maintain consistency across distributed teams while reducing the time spent searching through massive codebases.

For engineering managers, this means accelerated development cycles as teams can quickly find and replicate proven patterns, reduced technical debt by ensuring consistent implementations across services, and more confident releases through better code standardization. Developers can focus Tabnine on specific security-hardened components when implementing sensitive features, quickly understand how shared libraries are used across teams, and ensure their code aligns with established architectural patterns – all without having to context-switch between countless repositories and documentation.
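Conceptually, context scoping is a filter over the candidate files the AI is allowed to draw on. The sketch below models that idea in a few lines; the `Scope` type, repository names, and file paths are hypothetical and purely illustrative:

```typescript
// Illustrative only: context scoping modeled as a filter over candidate
// files. The Scope type, repo names, and paths are hypothetical.

type Scope =
  | { kind: "workspace"; root: string }        // limit to the local workspace
  | { kind: "repositories"; names: string[] }; // search selected connected repos

interface SourceFile {
  repo: string;
  path: string;
}

// A file is usable as context only if it falls inside the chosen scope.
function inScope(file: SourceFile, scope: Scope): boolean {
  if (scope.kind === "workspace") return file.path.startsWith(scope.root);
  return scope.names.includes(file.repo);
}

const files: SourceFile[] = [
  { repo: "billing", path: "/work/billing/src/invoice.ts" },
  { repo: "reports", path: "/repos/reports/src/csv.ts" },
];

// Scope a search to the repository where CSV reports live.
const scoped = files.filter((f) =>
  inScope(f, { kind: "repositories", names: ["reports"] })
);
```

Narrowing the candidate set this way is what turns a question like "how are CSV reports implemented?" into a search over the right code instead of the whole organization.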

Allow mentions in custom commands

We’ve supercharged our custom commands feature with the addition of @ mentions, giving enterprise development teams even more power to standardize and automate their AI workflows. Now, you can create reusable commands that dynamically reference specific code elements – methods, classes, or types – directly from your workspace.

Imagine creating a security audit command that automatically checks newly added dependencies against your approved @SecurityPolicy class, or a documentation generator that follows your team’s patterns by referencing your @DocumentationTemplate implementations. For enterprise engineering teams managing large codebases, this means you can create sophisticated, context-aware commands that enforce architectural standards, maintain consistent code quality, and streamline code reviews across distributed teams.
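The essence of an @ mention is that a bare name in a command resolves to a concrete symbol in your workspace before the command runs. The sketch below models that lookup; the symbol index, file paths, and command text are hypothetical, not Tabnine’s internal format:

```typescript
// Illustrative only: a toy model of resolving @ mentions in a custom
// command against a workspace symbol index. Paths and index contents
// are hypothetical, not Tabnine's internal representation.

const symbolIndex = new Map<string, string>([
  ["SecurityPolicy", "src/security/SecurityPolicy.ts"],
  ["DocumentationTemplate", "src/docs/DocumentationTemplate.ts"],
]);

// Replace each @Name token with the file that defines it, so the command
// carries real workspace context instead of a bare name.
function resolveMentions(command: string): string {
  return command.replace(/@(\w+)/g, (match, name: string) => {
    const file = symbolIndex.get(name);
    return file ? `${name} (defined in ${file})` : match;
  });
}

const resolved = resolveMentions(
  "Audit new dependencies against @SecurityPolicy."
);
```

The key property is that the same reusable command stays valid across repositories, because the mention is resolved against whatever workspace it runs in.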

AI Chat UX/UI facelift

The chat interface has been rebuilt from the ground up to deliver a more modern, intuitive user experience that matches the sophistication of your development workflow. The new design isn’t just about aesthetics — it’s about making every interaction with Tabnine more efficient and enjoyable.

We’ve optimized chat performance with a focus on remote development environments, ensuring smooth operation regardless of your team’s setup. The refreshed interface brings improved readability, more intuitive navigation, and faster access to frequently used features (as well as some new ones). We’ve added new fine-tuning functionality for improved personalization in the redesigned Settings menu, improved the accessibility and layout of our input types for added context, and updated the color scheme to provide a more modern look and feel. This modernization sets the stage for our powerful new personalization capabilities, creating a seamless environment where developers can focus on what matters most: writing great code.

These January updates demonstrate our focus on making AI more powerful, flexible, and intuitive for enterprise development teams. From transforming visual elements directly into code, to supporting the latest LLMs, to providing granular context control and enhanced custom commands – every feature is designed to help your teams work more efficiently while maintaining your organization’s standards. The modernized chat interface brings it all together in a more accessible, performant package that sets the foundation for even more innovations to come.

Want to see these new features in action and learn how to get more from Tabnine? Join our Tabnine Office Hours live demo and Q&A every Wednesday.