What LLM does Agentforce use?


Agentforce is Salesforce’s platform for building and deploying AI agents that help businesses automate tasks, make decisions, and improve customer service. If you’re wondering, “What LLM does Agentforce use?”, the answer isn’t just one model: it’s a mix of Salesforce’s own custom-built large language models (LLMs) and support for popular third-party options. This setup gives users flexibility while keeping things secure and efficient. In this guide, we’ll break it down in simple terms, based on the latest info as of August 2025.

What Is Agentforce?

Before diving into the LLMs, let’s quickly explain Agentforce. It’s part of Salesforce’s Einstein AI suite, launched to let companies create “agents”: smart AI helpers that can handle complex jobs like answering questions, processing data, or even writing code. These agents use LLMs to understand language, reason, and act. Agentforce stands out because it works seamlessly with Salesforce tools like CRM data, making it great for sales, service, and development teams.

Proprietary LLMs Powering Agentforce

Salesforce doesn’t rely only on outside models; it has its own LLMs, built by the Salesforce AI Research team. These are designed specifically for tasks like coding and business workflows, ensuring high performance and security within the Salesforce ecosystem.

CodeGen2.5

  • What it is: A compact, fast model released in 2023, focused on low-latency tasks like code completion.
  • How it’s used in Agentforce: It helps developers by auto-filling code, creating unit tests, and fixing bugs quickly. It’s trained on many programming languages, including Salesforce’s own Apex language.
  • Why it’s great: Keeps everything secure (your data stays inside Salesforce) and learns from expert feedback to get better over time.
  • Fun fact: Developers using this save about 125 minutes a week on average!

xGen-Code

  • What it is: The newest proprietary model, built for both text and code tasks.
  • How it’s used in Agentforce: Powers interactive features like chat-based coding help (e.g., Dev Assistant). It handles conversations, understands complex requests, and generates accurate responses.
  • Why it’s great: It beats many other models in accuracy for Salesforce-specific jobs, and it’s efficient to run, which is good for the environment.

These homegrown models are the core of Agentforce for Developers (formerly Einstein for Developers), making coding and AI tasks smoother and more reliable.

Supported Third-Party LLMs in Agentforce

Agentforce isn’t limited to Salesforce’s models. It supports a wide range of managed LLMs from partners like OpenAI, Anthropic, and Google. These are ready-to-use and geo-aware (meaning they follow local data rules). Here’s a simple table of key ones:

Model Provider | Key Models | Best For
Anthropic (on Amazon Bedrock) | Claude 3 Haiku, Claude 3.7 Sonnet, Claude Sonnet 4 | Secure, high-trust tasks with Salesforce boundaries
OpenAI / Azure OpenAI | GPT-4o, GPT-4o Mini, GPT-5, GPT-5 Mini | General chat, reasoning, and fast responses
Google Vertex AI | Gemini 2.0 Flash, Gemini 2.5 Pro | Creative tasks and data analysis

Older models like GPT-3.5 Turbo are rerouted to newer versions for better results. This variety lets you pick based on speed, cost, or features. For example, Claude models shine in detailed reasoning, while GPT models are versatile for everyday use.

Bring Your Own LLM (BYOLLM) Feature

Want even more control? Agentforce lets you bring your own LLM through integrations with:

  • Amazon Bedrock
  • Azure OpenAI
  • OpenAI
  • Vertex AI (Google)

This means you can connect custom or preferred models while still using Agentforce’s tools. It’s perfect for companies with specific needs, like extra privacy or specialized training. Just set it up via the Models API, and your agents can tap into these for prompts and actions.
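As a rough sketch of what calling a model looks like once it’s set up, here’s a Python example that posts a prompt to a managed model through the Models API. The endpoint path, headers, and model API name are assumptions based on Salesforce’s Models API documentation at the time of writing, so verify them against your org’s docs before relying on this.

```python
# Minimal sketch: calling a managed model through the Salesforce Models API.
# Assumptions (verify against current Salesforce docs): the generations endpoint
# path, the extra headers, and the model API name are illustrative and may differ
# in your org. ACCESS_TOKEN must come from a Connected App OAuth flow.
import requests

ACCESS_TOKEN = "<oauth-access-token>"          # obtained via your Connected App
MODEL = "sfdc_ai__DefaultOpenAIGPT4OmniMini"   # example model API name (assumed)

resp = requests.post(
    f"https://api.salesforce.com/einstein/platform/v1/models/{MODEL}/generations",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
        "x-sfdc-app-context": "EinsteinGPT",                        # assumed header
        "x-client-feature-id": "ai-platform-models-connected-app",  # assumed header
    },
    json={"prompt": "Summarize this week's open support cases."},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # inspect the generated text in the response payload
```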

How to Choose the Right LLM for Agentforce?

Picking an LLM depends on your goals:

  1. For coding and dev work: Stick with proprietary ones like CodeGen2.5 or xGen-Code for speed and accuracy.
  2. For general agents: Try GPT-4o Mini for quick, cost-effective chats, or Claude for secure, thoughtful responses.
  3. Custom needs: Use BYOLLM if you have a fine-tuned model.

Test a few: Salesforce doesn’t pick one “best” model for everything, so experiment based on your use case. Always check performance metrics like response time and accuracy.
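A simple way to run that comparison is to send the same prompt to each candidate and record how long each one takes. The sketch below assumes a generate(model, prompt) helper that wraps whichever endpoint you use (for example, the Models API call shown earlier); the model names are illustrative placeholders, not confirmed identifiers.

```python
# Minimal sketch: compare candidate models on latency for one test prompt.
# generate() is a stub; replace it with a real call to your configured endpoint.
import time

CANDIDATES = [
    "sfdc_ai__DefaultOpenAIGPT4OmniMini",             # assumed GPT-4o Mini name
    "sfdc_ai__DefaultBedrockAnthropicClaude37Sonnet", # assumed Claude 3.7 name
]
TEST_PROMPT = "Draft a polite reply to a customer asking about an order delay."

def generate(model: str, prompt: str) -> str:
    # Replace this stub with a real request (e.g. the Models API call above).
    return f"[stubbed response from {model}]"

for model in CANDIDATES:
    start = time.perf_counter()
    output = generate(model, TEST_PROMPT)
    elapsed = time.perf_counter() - start
    print(f"{model}: {elapsed:.2f}s, {len(output)} characters")
    # Review each output by hand (or against a rubric) for accuracy and tone.
```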

Conclusion

Agentforce doesn’t rely on a single model. The answer to “What LLM does Agentforce use?” is a multi-model approach: Salesforce’s proprietary LLMs like CodeGen2.5 and xGen-Code handle coding and business-specific workflows, while third-party options such as GPT, Claude, and Gemini cover general reasoning, conversations, and creativity. With the Bring Your Own LLM (BYOLLM) feature, businesses can also integrate their own fine-tuned or hosted models. This flexibility ensures that Agentforce can adapt to any workflow while maintaining Salesforce’s high standards for security, trust, and scalability.

FAQs

Does Agentforce use a single LLM?

No. Agentforce is multi-model. It combines Salesforce’s proprietary LLMs (like CodeGen2.5 and xGen-Code) with third-party models from OpenAI, Anthropic, and Google, plus a Bring Your Own LLM (BYOLLM) option.

What are Salesforce’s own LLMs in Agentforce?

  • CodeGen2.5: Optimized for coding tasks, fast completions, bug fixes, and Salesforce Apex support.
  • xGen-Code: Handles both text and code, powering developer assistants and conversational coding help.
These models are built by Salesforce AI Research and tuned for business workflows inside the Salesforce ecosystem.

Which third-party LLMs are supported in Agentforce?

Agentforce supports managed models from leading providers:
  • OpenAI / Azure OpenAI → GPT-4o, GPT-4o Mini, GPT-5, GPT-5 Mini
  • Anthropic (via Amazon Bedrock) → Claude 3 Haiku, Claude 3.7 Sonnet, Claude Sonnet 4
  • Google Vertex AI → Gemini 2.0 Flash, Gemini 2.5 Pro
Older models like GPT-3.5 are automatically rerouted to newer versions for better performance.

Can I bring my own LLM into Agentforce?

Yes. With the BYOLLM feature, you can integrate your own or third-party models via Amazon Bedrock, Azure OpenAI, OpenAI, or Google Vertex AI. This is useful if you have specialized or fine-tuned models.

How does Agentforce decide which LLM to use?

It depends on the task:
  • Coding/dev work → defaults to Salesforce’s proprietary models (CodeGen2.5 or xGen-Code).
  • General chat/agents → uses GPT or Claude, depending on your configuration.
  • Custom workflows → you can prioritize or switch models via Agentforce settings or the Models API.

Can I switch between LLMs during a workflow?

Yes. Agentforce lets you configure agents to use different LLMs for different steps. Example: use xGen-Code for generating code, then switch to Claude for reasoning over customer data.
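As a rough illustration of that per-step pattern (the generate helper and the model names are assumed placeholders, not Agentforce-confirmed identifiers):

```python
# Rough sketch: use a different model for each step of a workflow.
# generate() is a stub; swap in a real call to your configured endpoint.

def generate(model: str, prompt: str) -> str:
    # Replace with a real request to the model endpoint you configured.
    return f"[stubbed response from {model}]"

# Step 1: a code-focused model drafts an Apex snippet.
draft = generate("xgen-code", "Write an Apex trigger that logs Account updates.")

# Step 2: a reasoning-focused model reviews the draft against customer-data rules.
review = generate(
    "claude-3-7-sonnet",
    f"Review this Apex code for data-privacy issues:\n{draft}",
)
print(review)
```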

Are Agentforce LLMs secure?

Yes. Salesforce keeps proprietary and some partner LLMs inside its “trust boundary”, ensuring enterprise-grade security, compliance, and data privacy.

Can I use open-source LLMs in Agentforce?

Not directly. However, if you host them on supported platforms (like Amazon Bedrock or Google Vertex AI), you can connect them via the BYOLLM integration.