Wondering what LLM Agentforce uses? Agentforce combines Salesforce's own CodeGen2.5 and xGen-Code with GPT, Claude, and Gemini, plus BYOLLM flexibility.
Agentforce is Salesforce’s platform for building and deploying AI agents that help businesses automate tasks, make decisions, and improve customer service. If you’re wondering, “What LLM does Agentforce use?”, the answer isn’t just one model; it’s a mix of Salesforce’s own custom-built large language models (LLMs) and support for popular third-party options. This setup gives users flexibility while keeping things secure and efficient. In this guide, we’ll break it down in simple terms, based on the latest info as of August 2025.
Before diving into the LLMs, let’s quickly explain Agentforce. It’s part of Salesforce’s Einstein AI suite, launched to let companies create “agents”: smart AI helpers that can handle complex jobs like answering questions, processing data, or even writing code. These agents use LLMs to understand language, reason, and act. Agentforce stands out because it works seamlessly with Salesforce tools like CRM data, making it great for sales, service, and development teams.
Salesforce doesn’t rely only on outside models; they have their own LLMs built by the Salesforce AI Research team. These are designed specifically for tasks like coding and business workflows, ensuring high performance and security within the Salesforce ecosystem.
These homegrown models are the core of Agentforce for Developers (formerly Einstein for Developers), making coding and AI tasks smoother and more reliable.
Agentforce isn’t limited to Salesforce’s models. It supports a wide range of managed LLMs from partners like OpenAI, Anthropic, and Google. These are ready-to-use and geo-aware (meaning they follow local data rules). Here’s a simple table of key ones:
| Model Provider | Key Models | Best For |
|---|---|---|
| Anthropic (on Amazon Bedrock) | Claude 3 Haiku, Claude 3.7 Sonnet, Claude Sonnet 4 | Secure, high-trust tasks within Salesforce’s trust boundary |
| OpenAI / Azure OpenAI | GPT-4o, GPT-4o Mini, GPT-5, GPT-5 Mini | General chat, reasoning, and fast responses |
| Google Vertex AI | Gemini 2.0 Flash, Gemini 2.5 Pro | Creative tasks and data analysis |
Older models like GPT-3.5 Turbo are rerouted to newer versions for better results. This variety lets you pick based on speed, cost, or features; for example, Claude models shine in detailed reasoning, while GPT models are versatile for everyday use.
Want even more control? Agentforce lets you bring your own LLM through integrations with:
- Amazon Bedrock
- Azure OpenAI
- OpenAI
- Google Vertex AI
This means you can connect custom or preferred models while still using Agentforce’s tools. It’s perfect for companies with specific needs, like extra privacy or specialized training. Just set it up via the Models API, and your agents can tap into these for prompts and actions.
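To make that setup concrete, here is a minimal sketch of calling a configured model from Python through the Models API’s REST generations endpoint. The host, endpoint path, headers, and response shape are assumptions based on the general Models API pattern, and the model name is a placeholder, so check everything against your org’s Models API documentation before relying on it.

```python
# Minimal sketch (not an official sample) of calling a configured model through
# the Models API REST "generations" endpoint. The host, path, headers, and
# response shape are assumptions; confirm them against your org's Models API
# documentation and replace the placeholders with real values.
import requests

API_HOST = "https://api.salesforce.com"        # assumed Models API host
MODEL_API_NAME = "<your-model-api-name>"       # placeholder; exact name comes from Setup
ACCESS_TOKEN = "<OAuth access token from your connected app>"

def generate(prompt: str, model_api_name: str = MODEL_API_NAME) -> str:
    """Send a single prompt to the named model and return the generated text."""
    resp = requests.post(
        f"{API_HOST}/einstein/platform/v1/models/{model_api_name}/generations",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json;charset=utf-8",
            # The two headers below are assumed to be required by the Models API.
            "x-sfdc-app-context": "EinsteinGPT",
            "x-client-feature-id": "ai-platform-models-connected-app",
        },
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape; inspect the JSON your org actually returns.
    return resp.json()["generation"]["generatedText"]

print(generate("Summarize this week's open support cases in two sentences."))
```

Inside a Salesforce org, the same configured models can also be reached through Prompt Builder or Apex rather than raw REST calls.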
Picking an LLM depends on your goals:
- Coding and development work: Salesforce’s own CodeGen2.5 and xGen-Code
- General chat, reasoning, and fast responses: GPT or Claude models
- Creative tasks and data analysis: Gemini models

Test a few; Salesforce doesn’t pick one “best” for everything, so experiment based on your use case. Always check performance metrics like response time and accuracy. A simple comparison harness is sketched below.
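One rough way to run that comparison is a small harness that sends the same prompt to each candidate model and records latency. The sketch below reuses the hypothetical generate() helper from the earlier Models API example, and the bracketed model names are placeholders; replace them with the exact API names configured in your org.

```python
# Rough comparison harness: send one prompt to several configured models and
# time each call. Assumes the generate() helper from the earlier sketch is in
# scope; the bracketed model names are placeholders, not real API names.
import time

CANDIDATE_MODELS = [
    "<gpt-model-api-name>",      # e.g. a GPT-4o Mini configuration
    "<claude-model-api-name>",   # e.g. a Claude Sonnet configuration
    "<gemini-model-api-name>",   # e.g. a Gemini Flash configuration
]

def compare(prompt: str) -> None:
    """Print latency and output length for each candidate model."""
    for model in CANDIDATE_MODELS:
        start = time.perf_counter()
        text = generate(prompt, model_api_name=model)
        elapsed = time.perf_counter() - start
        print(f"{model}: {elapsed:.2f}s, {len(text)} characters returned")

compare("Draft a two-sentence reply to a customer asking about order status.")
```

Latency and output length are only a starting point; pair them with an accuracy check on prompts drawn from your real workload.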
Agentforce doesn’t rely on a single model. The answer to “What LLM does Agentforce use?” is a multi-model approach: Salesforce’s proprietary LLMs like CodeGen2.5 and xGen-Code handle coding and business-specific workflows, while third-party options such as GPT, Claude, and Gemini cover general reasoning, conversations, and creativity. With the Bring Your Own LLM (BYOLLM) feature, businesses can also integrate their own fine-tuned or hosted models. This flexibility lets Agentforce adapt to a wide range of workflows while maintaining Salesforce’s standards for security, trust, and scalability.
No. Agentforce is multi-model. It combines Salesforce’s proprietary LLMs (like CodeGen2.5 and xGen-Code) with third-party models from OpenAI, Anthropic, and Google, plus a Bring Your Own LLM (BYOLLM) option.
CodeGen2.5: Optimized for coding tasks, fast completions, bug fixes, and Salesforce Apex support.
xGen-Code: Handles both text and code, powering developer assistants and conversational coding help.
These models are built by Salesforce AI Research and tuned for business workflows inside the Salesforce ecosystem.
Agentforce supports managed models from leading providers:
OpenAI/Azure OpenAI → GPT-4o, GPT-4o Mini, GPT-5, GPT-5 Mini
Anthropic (via Amazon Bedrock) → Claude 3 Haiku, Claude 3.7 Sonnet, Claude Sonnet 4
Google Vertex AI → Gemini 2.0 Flash, Gemini 2.5 Pro
Older models like GPT-3.5 are automatically rerouted to newer versions for better performance.
Yes. With the BYOLLM feature, you can integrate your own or third-party models via Amazon Bedrock, Azure OpenAI, OpenAI, or Google Vertex AI. This is useful if you have specialized or fine-tuned models.
It depends on the task:
Coding/dev work → Defaults to Salesforce proprietary models (CodeGen2.5 or xGen-Code).
General chat/agents → Uses GPT or Claude, depending on your configuration.
Custom workflows → You can prioritize or switch models via Agentforce settings or the Models API.
Yes. Agentforce lets you configure agents to use different LLMs for different steps. Example: use xGen-Code for generating code, then switch to Claude for reasoning over customer data.
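As an illustration of that pattern driven from your own code rather than from agent configuration, the sketch below chains two calls through the hypothetical generate() helper from the earlier example: a coding-tuned model drafts Apex, then a second model reviews the draft. The bracketed model names are placeholders for whatever you have configured.

```python
# Two-step flow sketch: a coding-tuned model drafts Apex, then a reasoning-tuned
# model reviews the draft. Assumes the generate() helper from the earlier sketch;
# the bracketed model names are placeholders for your org's configured models.
CODE_MODEL = "<code-model-api-name>"          # e.g. a coding-focused configuration
REVIEW_MODEL = "<reasoning-model-api-name>"   # e.g. a Claude configuration

def draft_and_review(requirement: str) -> str:
    """Draft Apex for a requirement with one model, then review it with another."""
    draft = generate(
        f"Write an Apex trigger that does the following: {requirement}",
        model_api_name=CODE_MODEL,
    )
    review = generate(
        "Review this Apex code for bulkification and governor-limit issues, "
        "then list any problems you find:\n\n" + draft,
        model_api_name=REVIEW_MODEL,
    )
    return review

print(draft_and_review("deduplicate Contact records on email before insert"))
```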
Yes. Salesforce keeps proprietary and some partner LLMs inside its “trust boundary”, ensuring enterprise-grade security, compliance, and data privacy.
Not directly. However, if you host them on supported platforms (like Amazon Bedrock or Google Vertex AI), you can connect them via the BYOLLM integration.