
Dreamforce is always a showcase of what’s next: this year, agents are the headline act. They promise to automate decisions, anticipate needs, and act on behalf of humans. The demos are compelling. But before any of us rush to “create an agent,” there’s a quieter question worth asking: what conditions make an agent possible in the first place?
That’s where my two worlds intersect. As a sales director, I see how businesses want speed, autonomy, and measurable results from AI. As a sociology researcher specializing in our relationship to digital technologies, I spend time unpacking the less visible layers: how knowledge is structured, how humans trust or resist systems, how data itself becomes a form of language.
Because at its core, data functions as a language. Agents don’t reason like humans — they parse, assemble, and act within the grammar of structured information. If the vocabulary is inconsistent, if the syntax is broken, if the meaning is fragmented across silos, the agent will speak nonsense with confidence. Preparing for agents is therefore less about the code you’ll write at Dreamforce, and more about the language you’re already speaking at home: how your data is defined, governed, and shared.
And alongside that technical layer sits a human one. Agents change workflows, decision rights, and responsibilities. Without conscious change management, the new “language” risks excluding as much as it empowers.
In the next sections, I’d like to explore those two conditions — data readiness as linguistic clarity and change management as cultural translation — and why they matter more than the excitement of a live demo.
If agents are designed to “act,” they do so by reading the world through data. Data is not neutral input: it is the vocabulary and grammar that shape what the agent can perceive, decide, and execute. In other words, the quality of an agent is the quality of the language it is taught to speak.
Think of a language with missing words, fractured syntax, or contradictory meanings. Communication becomes fragile, misunderstandings multiply, and nuance gets lost. This is what happens when organizations attempt to deploy agents on top of fragmented datasets, legacy silos, or poorly defined governance structures. The agent may appear fluent, but beneath the surface it is improvising with gaps, half-truths, and inconsistencies.
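To make the metaphor concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: two hypothetical silos encode the same idea, an active customer, in different vocabularies, and an “agent” that only knows one of them returns a confident but wrong answer until a shared mapping is agreed.

```python
# Hypothetical example: two silos describe "active customer" differently.
# All field names and values are invented for illustration.

billing_system = [
    {"id": 1, "status": "ACTIVE"},
    {"id": 2, "status": "CHURNED"},
]

support_system = [
    {"id": 3, "state": "open"},    # here, "open" means the customer is active
    {"id": 4, "state": "closed"},
]

def naive_active_count(records):
    """An 'agent' that only speaks the billing vocabulary.

    Support records never match "ACTIVE", so they are silently ignored.
    """
    return sum(1 for r in records if r.get("status") == "ACTIVE")

merged = billing_system + support_system
print(naive_active_count(merged))  # -> 1, stated with full confidence

# A vocabulary agreed across teams restores coherence:
CANONICAL_ACTIVE = {("status", "ACTIVE"), ("state", "open")}

def harmonized_active_count(records):
    """Count records whose (field, value) pair the shared vocabulary marks active."""
    return sum(
        1
        for r in records
        if any((k, v) in CANONICAL_ACTIVE for k, v in r.items())
    )

print(harmonized_active_count(merged))  # -> 2
```

The bug in the naive version is exactly the failure mode described above: nothing crashes, the number simply arrives wrong, with no signal that half the organization’s vocabulary was never understood.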
These are not simply technical inconveniences. They reveal something deeper: data categories are social decisions. When we define what counts as a “customer,” when we decide which interactions to log and which to ignore, we are not just recording reality; we are shaping it. As Pierre Bourdieu reminds us, to name is to make exist. By structuring data, we effectively build the world that our agents will inhabit.
The sociological question that follows is: who decides on the grammar of this language? Often, technical teams formalize schemas and taxonomies. But the lived knowledge of frontline workers — how they describe a client, how they interpret a status, how they actually solve problems — rarely makes its way into the dataset. The result is a narrow vocabulary: agents “speak,” but they lack the richness of the human idiom they were meant to assist.
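One modest way to widen that vocabulary, sketched below with invented terms and categories, is to make the translation from frontline language to the canonical schema an explicit, reviewable artifact rather than a rule buried inside an integration pipeline, so the people who coined the words can see, and contest, how they are encoded.

```python
# Hypothetical sketch: the mapping from frontline idiom to canonical
# categories is written down where it can be reviewed and debated.
# Every term and category here is invented for illustration.

CANONICAL_STATUSES = {"active", "at_risk", "churned"}

# Frontline teams describe clients in their own words; capturing those
# phrases explicitly keeps their lived knowledge inside the dataset.
FRONTLINE_TO_CANONICAL = {
    "happy, renews every year": "active",
    "gone quiet lately": "at_risk",
    "asked to cancel": "churned",
}

def normalize(frontline_note: str) -> str:
    """Translate a frontline description into the canonical vocabulary,
    failing loudly instead of guessing when a term is unknown."""
    try:
        status = FRONTLINE_TO_CANONICAL[frontline_note]
    except KeyError:
        raise ValueError(
            f"Unmapped frontline term: {frontline_note!r}. "
            "Raise it with the people who use the term rather than guessing."
        )
    assert status in CANONICAL_STATUSES
    return status

print(normalize("gone quiet lately"))  # -> at_risk
```

Failing loudly on unmapped terms is a sociological choice as much as a technical one: unknown vocabulary becomes a prompt for conversation rather than a silent guess.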
The question is not simply “do you have enough data?” but rather: is it defined consistently across teams? Does it capture how people actually work and speak? And who decides what it means?
Agents, then, are only as powerful as the fluency of the language they inherit. Preparing for them is less a matter of adopting the latest features than of cultivating a shared, intelligible, and trustworthy vocabulary across the organization.
If data is the language of agents, then change management is the act of translation between that new language and the people expected to live with it. Too often, organizations underestimate this step: they assume that if the technology is powerful, adoption will follow. In practice, the opposite is true. Without translation, the new language remains foreign, unsettling, and even exclusionary.
Unlike earlier digital tools, agents don’t simply assist; they decide and act. This changes the fabric of work: workflows are reorganized, decision rights shift, and responsibilities are redistributed between people and systems.
In sociology, these are not just “operational adjustments.” They signal shifts in how power and responsibility circulate within an organization. If we don’t make these shifts explicit, they can appear neutral or automatic, even though they have real consequences for workplace dynamics.
Agents don’t only change tasks; they change how people feel about their place in the system. For some, agents bring relief — repetitive work is lifted, time is freed for more creative contributions. For others, they provoke anxiety — skills feel devalued, expertise appears less relevant, the future less certain. These reactions are rarely voiced in slide decks, but they shape adoption more than any technical training session.
Recognizing this emotional layer is just as important as providing technical training.
Effective change management treats the arrival of agents as a cultural shift that must be narrated, explained, and collectively absorbed. That means telling people why agents are being introduced, explaining what will change and for whom, and creating space for questions, feedback, and renegotiation along the way.
The goal is not blind adoption but genuine appropriation — when employees feel agents extend their agency rather than replace it. This requires organizations to act less like translators of a finished language and more like co-authors of a dialect: adapting, renegotiating, and refining as they go.
Dreamforce thrives on inspiration. The demos are designed to show what is possible: in just a few clicks, an agent appears, and suddenly complex processes seem effortless. That energy is valuable — it helps us imagine futures that might otherwise feel distant.
But inspiration is only the beginning. The real value comes from what happens after the event, once we return to our own organizations. Building agents that last requires preparation that is less visible on stage: aligning data so it speaks a coherent language, supporting people as they adapt to new roles, and making space for dialogue about how this technology changes daily work.
In practice, that means asking a few key questions: does our data speak a coherent, shared language? Are people prepared for how their roles and responsibilities will change? Is there space for dialogue about how this technology reshapes daily work?
Framed this way, Dreamforce can be both an exciting showcase and a useful checkpoint. It invites us to connect inspiration with preparation, and to see agents not just as technical projects but as organizational journeys.
The path from demo to daily practice is not about dampening excitement — it is about channeling it into responsible action. With strong data foundations and thoughtful change management, the agents that inspire us in San Francisco can also create real, sustainable value back home.
Agents may be the headline of this year’s Dreamforce, but their success depends on foundations that are far less visible. Data must be treated as a language — precise, coherent, and shared — if agents are to speak with clarity rather than confusion. Change management must act as cultural translation, making sure this new language is not only understood but also embraced by the people who will live with it. And inspiration must lead to action: the excitement of the demo should open onto the patient, deliberate work of integration.
In my daily role as a sales director, I see how urgent the demand for speed and automation has become. At the same time, as a researcher in sociology, I am reminded that technologies do not arrive in a vacuum. They reshape how we organize knowledge, how we distribute responsibilities, and how we trust one another.
The real opportunity, then, is not only to build agents, but to build the conditions in which they can thrive — conditions where data speaks a clear language, where people feel included in the translation, and where inspiration fuels long-term responsibility.
Dreamforce is a powerful stage for imagining the future. But the most meaningful transformations will happen afterwards, in the quieter work of aligning, translating, and preparing. That is where agents will stop being novelties and start becoming lasting companions in how we work and decide together. The question is: are we ready to listen to the language they speak?