Kedar
Principal Technical Architect
Salesforce
Summary
Agentforce represents a meaningful shift in how organizations can use Salesforce. Rather than simply recommending next best actions, AI agents can reason over enterprise context and act across sales, service, operations and internal workflows.
That capability is powerful, but it fundamentally changes the leadership question. The question is no longer “What can AI do?” but “What should AI be allowed to do, under what conditions, and with what controls in place?”
For CIOs and enterprise leaders, the answer is not to deploy agents as quickly as possible. The real work lies in preparing the foundations: trusted data, clear business policies, secure access models, defined escalation paths and a governance approach that supports safe scale.
Agentforce can be transformative, but only when it operates within a well-designed enterprise operating model.
Governance enables safe scale
Governance is often framed as a brake on innovation. In practice, it is what makes innovation safe enough to scale.
In an Agentforce context, governance spans far more than model configuration. It includes clarity on which business problems an agent is designed to address, what data it is allowed to access, the actions it can take autonomously and where human intervention is required. It also covers how agent decisions are monitored, reviewed and improved over time.
Data readiness comes first
Agentforce is only as effective as the data and context behind it. Where enterprise data is incomplete, inconsistent or fragmented across systems, agent behavior will inevitably degrade.
Data readiness therefore extends beyond basic data quality. It also includes how data is structured, how easily it can be accessed by agents and how it is governed over time. Salesforce Data 360 (formerly Data Cloud) can play an important role by helping unify and activate enterprise data for AI-driven use cases. In many environments, external platforms such as Snowflake or Databricks also form part of the broader data strategy and should be considered as complementary components.
The practical reality is simple: if data is not trusted, the agent will not be trusted either.
The Einstein Trust Layer is necessary, but not sufficient
Salesforce’s Einstein Trust Layer is a critical component of trusted AI. It helps organizations protect sensitive information, ground AI outputs in enterprise context and apply important controls around agent behavior.
However, it should not be viewed as a replacement for enterprise governance. The Trust Layer does not remove the need for thoughtful permission design, clearly articulated business policies, auditability and review, data stewardship and human oversight for higher‑risk decisions.
It provides a secure foundation. How that foundation is used, and where responsibility ultimately sits, remains an organizational decision.
The real readiness question: what can the agent do?
One of the most important readiness questions is not whether an agent can act, but whether it should.
Enterprise leaders need to be deliberate about where agents are limited to recommendations and where execution is appropriate, which actions carry acceptable risk, which require approval and how uncertainty and exceptions are handled. Just as importantly, business users need transparency around when agents have acted and why.
Agentic automation changes the operating model. Workflows are no longer designed solely for human users. Organizations must also define the boundaries of autonomous behavior in a way that aligns with risk appetite and regulatory expectations.
Infrastructure and automation hygiene matter
A strong technical foundation is essential, particularly in organizations with years of accumulated Salesforce customization.
The challenge is rarely performance alone. It is whether the existing platform is stable, predictable and governable enough to support autonomous execution. Issues such as poor-quality development components (Flow, Apex, etc.), overlapping automation, fragile integrations or inconsistent error handling may be manageable in human-led processes, but they become far more problematic when agents are executing actions at speed.
Readiness assessments should therefore look closely at automation design quality, integration stability, transaction side effects and operational monitoring. Governance in an Agentforce landscape must extend beyond AI policy to include automation hygiene.
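The operational gap between human-led and agent-led execution can be illustrated with a uniform action wrapper. This is a hypothetical Python sketch, not Salesforce code: `run_agent_action` and the in-memory `audit_log` stand in for whatever execution and monitoring layer an organization actually runs. The point is that every agent-initiated action gets consistent error capture and an auditable record, rather than the ad hoc handling that human-led processes tolerate.

```python
import logging
from datetime import datetime, timezone
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
audit_log: list[dict] = []  # stand-in for a real audit store

def run_agent_action(name: str, fn: Callable[[], Any]) -> dict:
    """Execute one agent-initiated action with uniform logging and error capture.

    Agents act at speed and at volume, so failures must be recorded
    consistently and surfaced to a review queue, never silently swallowed.
    """
    record = {"action": name, "at": datetime.now(timezone.utc).isoformat()}
    try:
        record["result"] = fn()
        record["status"] = "succeeded"
    except Exception as exc:
        record["status"] = "failed"
        record["error"] = repr(exc)
        logging.exception("agent action %s failed", name)
    audit_log.append(record)
    return record
```

A readiness assessment would ask whether every integration and automation an agent can trigger already behaves like this: known side effects, predictable failure modes and a trail that monitoring can act on.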
Operating model and skills
Deploying autonomous agents does not remove the need for people. It changes how accountability works.
Organizations need clarity around who owns each agent, how performance is monitored, how exceptions are handled and how policies and behaviors are refined over time. Business and technical teams alike must understand how agents make decisions, what constraints apply and how success is measured.
This does not require creating an entirely new organizational structure. It does require clearly assigned ownership and an operating model that supports continuous learning and improvement.
Start with a tightly scoped pilot
The best Agentforce programs will not begin with broad ambition. They will begin with a tightly scoped use case that is meaningful, measurable and operationally safe.
A strong first pilot should have a clear business owner, a narrowly defined set of permitted actions, explicit escalation paths and measurable success criteria.
The purpose of the pilot is not simply to “test AI”. It is to validate the organization’s ability to govern AI responsibly: can agent decisions be explained, can exceptions be escalated and resolved, and can agent behavior be audited and corrected over time?
Only after those questions are answered should scale be considered.
Agentforce is not just another Salesforce feature. It represents a new execution model.
As a result, governance is no longer a technical footnote. It is a leadership responsibility. Organizations that benefit most from agentic AI will not be those that deploy the highest number of agents, but those that deploy agents the business genuinely trusts.
The agentic era is gaining huge momentum, but lasting success will favor organizations that move with discipline.
Agentforce can deliver powerful autonomous automation, but only when it is grounded in trusted data, governed by clear policies and supported by a robust operating model. Data 360 provides the data context. The Einstein Trust Layer supports trusted AI behavior. Governance ensures accountability.
For CIOs, the message is straightforward: do not start with the agent. Start with governance.