Agents speak data: are we ready to listen?

Agents are the Headline. Foundations are the Story.

Dreamforce is always a showcase of what’s next: this year, agents are the headline act. They promise to automate decisions, anticipate needs, and act on behalf of humans. The demos are compelling. But before any of us rush to “create an agent,” there’s a quieter question worth asking: what conditions make an agent possible in the first place?

That’s where my two worlds intersect. As a sales director, I see how businesses want speed, autonomy, and measurable results from AI. As a sociology researcher, specializing in our relationship to digital technologies, I spend time unpacking the less visible layers: how knowledge is structured, how humans trust or resist systems, how data itself becomes a form of language.

Because at its core, data functions as a language. Agents don’t reason like humans — they parse, assemble, and act within the grammar of structured information. If the vocabulary is inconsistent, if the syntax is broken, if the meaning is fragmented across silos, the agent will speak nonsense with confidence. Preparing for agents is therefore less about the code you’ll write at Dreamforce, and more about the language you’re already speaking at home: how your data is defined, governed, and shared.

And alongside that technical layer sits a human one. Agents change workflows, decision rights, and responsibilities. Without conscious change management, the new “language” risks excluding as much as it empowers.

In the next sections, I’d like to explore those two conditions — data readiness as linguistic clarity and change management as cultural translation — and why they matter more than the excitement of a live demo.

Data as the Language of Agents

If agents are designed to “act,” they do so by reading the world through data. Data is not neutral input: it is the vocabulary and grammar that shape what the agent can perceive, decide, and execute. In other words, the quality of an agent is the quality of the language it is taught to speak.

Think of a language with missing words, fractured syntax, or contradictory meanings. Communication becomes fragile, misunderstandings multiply, and nuance gets lost. This is what happens when organizations attempt to deploy agents on top of fragmented datasets, legacy silos, or poorly defined governance structures. The agent may appear fluent, but beneath the surface it is improvising with gaps, half-truths, and inconsistencies.

Broken Vocabularies: When Data Misleads Agents

  • Customer identity: a single individual appears five times under different spellings, emails, or IDs. Humans spot the duplication; agents don’t. The agent’s “speech” will then be distorted, offering fragmented experiences or misfiring recommendations.
  • Product hierarchies: imagine a company where the same product is coded differently in sales, marketing, and supply chain systems. An agent asked to “optimize inventory” falls into a linguistic trap: the same object goes by three different names, so it cannot see the whole picture. A human would reconcile the labels instinctively; an agent takes each one at face value.
  • Cultural nuance: in global organizations, the word “client” might mean “end-consumer” in one market and “distribution partner” in another. When this nuance is lost in the data model, the agent risks acting on the wrong assumptions — a mistranslation with real business consequences.
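The duplication problem above can be made concrete in a few lines. In this sketch (the records, field names, and normalization rule are all hypothetical, purely for illustration), an agent that groups customers by raw identifiers sees three people where a human sees one; agreeing on even a minimal normalization rule restores the shared vocabulary.

```python
# Hypothetical CRM records: one person, three spellings and IDs.
records = [
    {"id": "C-001", "name": "Marie Dupont",  "email": "Marie.Dupont@example.com"},
    {"id": "C-087", "name": "M. Dupont",     "email": "marie.dupont@example.com "},
    {"id": "C-412", "name": "DUPONT, Marie", "email": "marie.dupont@example.com"},
]

def naive_key(rec):
    # What an agent sees without governance: raw, inconsistent identifiers.
    return rec["email"]

def normalized_key(rec):
    # A minimal shared "grammar": trim whitespace and lowercase before comparing.
    return rec["email"].strip().lower()

def count_customers(records, key):
    # Distinct identities as the agent perceives them under a given vocabulary.
    return len({key(r) for r in records})

print(count_customers(records, naive_key))       # raw values: 3 "customers"
print(count_customers(records, normalized_key))  # normalized: 1 customer
```

Real identity resolution is far harder than a lowercase-and-trim rule, but the point survives: the agent's picture of reality is only as coherent as the keys it is given.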

These are not simply technical inconveniences. They reveal something deeper: data categories are social decisions. When we define what counts as a “customer,” when we decide which interactions to log and which to ignore, we are not just recording reality; we are shaping it. As Pierre Bourdieu reminds us, to name is to make exist. By structuring data, we effectively build the world that our agents will inhabit.

Who Decides the Grammar?

The sociological question that follows is: who decides on the grammar of this language? Often, technical teams formalize schemas and taxonomies. But the lived knowledge of frontline workers — how they describe a client, how they interpret a status, how they actually solve problems — rarely makes its way into the dataset. The result is a narrow vocabulary: agents “speak,” but they lack the richness of the human idiom they were meant to assist.

From Collection to Coherence

The question is not simply “do you have enough data?” but rather:

  • Do you have a coherent “language” that your agents can understand and act upon?
  • Do the definitions of your data reflect how your organization actually works and makes sense of the world?
  • And perhaps most importantly: is there a shared process to align and enrich this language across business and technical communities?

Agents, then, are only as powerful as the fluency of the language they inherit. Preparing for them is less about adopting the latest features and more about cultivating a shared, intelligible, and trustworthy vocabulary across the organization.

Change Management as Cultural Translation

If data is the language of agents, then change management is the act of translation between that new language and the people expected to live with it. Too often, organizations underestimate this step: they assume that if the technology is powerful, adoption will follow. In practice, the opposite is true. Without translation, the new language remains foreign, unsettling, and even exclusionary.

When Technology Changes Roles and Rights

Unlike earlier digital tools, agents don’t simply assist; they decide and act. This changes the fabric of work:

  • Decision rights shift — who has authority when the agent acts autonomously?
  • Accountability becomes ambiguous — if an agent makes a mistake, who owns the outcome?
  • Roles are redefined — some tasks disappear, others require new interpretive skills.

In sociology, these are not just “operational adjustments.” They signal shifts in how power and responsibility circulate within an organization. If we don’t make these shifts explicit, they can appear neutral or automatic, even though they have real consequences for workplace dynamics.

The Emotional Layer of Adoption

Agents don’t only change tasks; they change how people feel about their place in the system. For some, agents bring relief — repetitive work is lifted, time is freed for more creative contributions. For others, they provoke anxiety — skills feel devalued, expertise appears less relevant, the future less certain. These reactions are rarely voiced in slide decks, but they shape adoption more than any technical training session.

Recognizing this emotional layer is just as important as providing technical training.

Translation in Practice

Effective change management treats the arrival of agents as a cultural shift that must be narrated, explained, and collectively absorbed. That means:

  • Framing the story: Why are we introducing agents? What problems do they solve for people, not just for KPIs?
  • Teaching the new grammar: Training is not only about buttons and dashboards; it’s about teaching employees how to interpret and question outputs, when to trust the system and when to intervene.
  • Creating dialogue: Building forums where resistance, concerns, and insights can be voiced, so that the “new language” is enriched rather than imposed.

From Adoption to Appropriation

The goal is not blind adoption but genuine appropriation — when employees feel agents extend their agency rather than replace it. This requires organizations to act less like translators of a finished language and more like co-authors of a dialect: adapting, renegotiating, and refining as they go.

From Dreamforce Inspiration to Responsible Action

Dreamforce thrives on inspiration. The demos are designed to show what is possible: in just a few clicks, an agent appears, and suddenly complex processes seem effortless. That energy is valuable — it helps us imagine futures that might otherwise feel distant.

The Spark vs. the Script

But inspiration is only the beginning. The real value comes from what happens after the event, once we return to our own organizations. Building agents that last requires preparation that is less visible on stage: aligning data so it speaks a coherent language, supporting people as they adapt to new roles, and making space for dialogue about how this technology changes daily work.

In practice, that means asking a few key questions:

  • Is our data ready to provide agents with a clear and consistent view of reality?
  • Do our teams have the context they need to understand and trust the outputs?
  • Are we approaching agents as extensions of human judgment rather than replacements for it?

Framed this way, Dreamforce can be both an exciting showcase and a useful checkpoint. It invites us to connect inspiration with preparation, and to see agents not just as technical projects but as organizational journeys.

The path from demo to daily practice is not about dampening excitement — it is about channeling it into responsible action. With strong data foundations and thoughtful change management, the agents that inspire us in San Francisco can also create real, sustainable value back home.

Speaking a Shared Language: Data, People, and Purpose

Agents may be the headline of this year’s Dreamforce, but their success depends on foundations that are far less visible. Data must be treated as a language — precise, coherent, and shared — if agents are to speak with clarity rather than confusion. Change management must act as cultural translation, making sure this new language is not only understood but also embraced by the people who will live with it. And inspiration must lead to action: the excitement of the demo should open onto the patient, deliberate work of integration.

In my daily role as a sales director, I see how urgent the demand for speed and automation has become. At the same time, as a researcher in sociology, I am reminded that technologies do not arrive in a vacuum. They reshape how we organize knowledge, how we distribute responsibilities, and how we trust one another.

The real opportunity, then, is not only to build agents, but to build the conditions in which they can thrive — conditions where data speaks a clear language, where people feel included in the translation, and where inspiration fuels long-term responsibility.

Dreamforce is a powerful stage for imagining the future. But the most meaningful transformations will happen afterwards, in the quieter work of aligning, translating, and preparing. That is where agents will stop being novelties and start becoming lasting companions in how we work and decide together. The question is: are we ready to listen to the language they speak?

Author: Andrea Gacanin, Sales Director and Researcher

Andrea is a Sales Director at OSF Digital with over 17 years of international experience spanning geopolitics, education, IT, and FMCG. She specializes in complex selling within the cloud environment, helping clients navigate digital transformation with confidence and clarity. Beyond her role at OSF, Andrea pursues doctoral research at Université Paris 8 on the intersection of technology, society, and surveillance capitalism, and she is also an avid ultrarunner.