
This article is the second in a three-part series entitled “Demystifying AI: Understanding the Core Concepts behind AllAi”. AllAi is the AI-powered productivity platform built by OSF Digital to enhance Salesforce-driven services.
In this article, we’ll dive into the world of Large Language Models (LLMs) and the fascinating art of prompting. As we unravel this topic, we’ll examine how technology like AllAi can revolutionize your productivity within the Salesforce ecosystem.
At its core, an LLM is designed to understand and generate human-like text. Think of it as having an incredibly smart, albeit text-bound, conversation partner. The magic behind it lies in language modeling—a process where the model predicts the likelihood of a sequence of words. When you provide an input (prompt), the LLM generates text one token (or word) at a time based on what it has learned from vast amounts of internet data. Picture this: You start with the sentence, "Salesforce is revolutionizing the way businesses...", and the model predicts what might come next based on the probabilities it has assigned to different words. Could it be "operate", "manage", or "grow"? The “temperature” setting influences this prediction: lower values lead to more deterministic responses, while higher values introduce creativity and variability.
Temperature influences the token probability distribution. Values greater than 1 make the distribution more uniform (flatter), increasing randomness; values smaller than 1 make it sharper, reducing randomness. A temperature of 1 leaves the model's learned distribution unchanged.
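A minimal sketch of how this works under the hood: the model assigns a raw score (logit) to each candidate token, and temperature rescales those scores before they are turned into probabilities. The logit values below are made up purely for illustration.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into next-token probabilities.

    temperature < 1 sharpens the distribution (more deterministic);
    temperature > 1 flattens it (more random); temperature == 1
    leaves the learned distribution unchanged.
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the next token after
# "Salesforce is revolutionizing the way businesses..."
logits = [2.0, 1.0, 0.5]  # "operate", "manage", "grow"

cold = softmax_with_temperature(logits, 0.5)  # sharper: strongly favors "operate"
warm = softmax_with_temperature(logits, 1.0)  # the learned distribution
hot = softmax_with_temperature(logits, 2.0)   # flatter: more variety
```

Sampling from `cold` almost always yields "operate", while sampling from `hot` gives "manage" or "grow" a real chance, which is exactly the determinism-versus-creativity trade-off described above.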
Initially, LLMs were primarily focused on text generation—completing sentences or phrases based on the given inputs. While this is impressive, it isn’t always human-like or contextually accurate. Enter instruction-tuning. This approach significantly enhances LLM capabilities by guiding models to follow explicit instructions seamlessly. Instruction-tuning allows the LLM to comprehend commands like, "Generate a report on quarterly sales," rather than just finishing sentences. This shift is essential for practical applications, making interactions with tools like AllAi more intuitive and efficient.
Instruction-tuning opens the door to two critical concepts: in-context learning and in-weight learning.
1. In-Weight Learning: The model’s knowledge is stored within its parameters or "weights" learned during training. This knowledge is static unless the model is retrained or fine-tuned.
2. In-Context Learning: This dynamic learning occurs during interaction when the model adapts based on the context provided in the prompt (user instructions). We harness these methods within the AllAi suite to deliver tailored experiences for Salesforce users, enabling tools like AllAi Chat and AllAi DevOps to be highly responsive and contextually accurate.
In-weight vs. in-context learning. In the former, all tasks the model can execute were seen during training; in the latter, the prompt dictates the task that needs to be done.
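One way to see in-context learning concretely: the model's weights stay frozen, and the task definition travels entirely inside the input. The role/content chat-message format below is a common convention across many LLM chat APIs (not tied to a specific product); the instruction and worked example are adapted from the one-shot prompt later in this article, and the final user query is a hypothetical one.

```python
# In-context learning: nothing about the model changes; the task is
# specified entirely by what we put into its input.
task_as_context = [
    # The instruction defines the task at inference time.
    {"role": "system",
     "content": "Act as an assistant capable of answering users' questions "
                "regarding Salesforce Marketing Cloud."},
    # A worked example supplied in context rather than stored in weights.
    {"role": "user",
     "content": "Can I override block prices for a given Block-priced line item?"},
    {"role": "assistant",
     "content": "Only if there is a standard set of block price records to choose "
                "from such as a standard silver/gold/platinum pricing model."},
    # The actual query the model should answer (hypothetical).
    {"role": "user",
     "content": "How do I segment an audience for a targeted email campaign?"},
]
```

Swapping the system message and example turns retargets the same frozen model to a completely different task; in-weight learning, by contrast, would require retraining or fine-tuning.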
The real magic happens with prompting—how you frame your task to the LLM can lead to different levels of performance. Let’s break down three types of prompting strategies using Salesforce-centric examples:
Zero-Shot Learning
Here, you provide no additional context or examples of how to solve the task. In principle, the model was not exposed to examples of this exact task during training, so it must draw on prior knowledge and analogical reasoning to infer a solution. For instance:
Prompt:
"Give me a step-by-step guide teaching me how to create a Salesforce flow that sends an email to clients every 30 days."
One-Shot Learning
Here, you give the model a single example of how to solve a problem of interest. The model needs to be able to generalize from a very limited amount of data, mimicking the human ability to recognize new concepts and abstractions from a single observation. For instance:
Prompt:
"Act as an assistant capable of answering users' questions regarding Salesforce Marketing Cloud.
Example:
User Question: Can I override block prices for a given Block-priced line item?
Answer: Only if there is a standard set of block price records to choose from such as a standard silver/gold/platinum pricing model."
Few-Shot Learning
Here, you provide several examples, increasing the chances of steering the LLM's behavior in the desired direction.
Prompt:
"Explain how to integrate Salesforce with a third-party email marketing platform specified by the user.
Example 1: Email marketing platform is Mailchimp. Solution: to integrate Salesforce with Mailchimp, follow these simple steps: [steps needed for integration with Mailchimp].
Example 2: Email marketing platform is Constant Contact. Solution: to link Salesforce with Constant Contact, follow these simple steps: [steps needed for integration with Constant Contact]."
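The three strategies differ only in how many examples the prompt carries. A small helper function (the name and format below are illustrative, not a standard API) makes that concrete: no examples gives a zero-shot prompt, one gives one-shot, and several give few-shot. The final query is a hypothetical user input.

```python
def build_prompt(instruction, examples=(), query=""):
    """Assemble a prompt: zero-shot (no examples), one-shot, or few-shot."""
    parts = [instruction]
    for i, (task, solution) in enumerate(examples, start=1):
        parts.append(f"Example {i}: {task} Solution: {solution}")
    if query:
        parts.append(query)
    return "\n".join(parts)

# Zero-shot: the instruction alone.
zero_shot = build_prompt(
    "Give me a step-by-step guide teaching me how to create a Salesforce "
    "flow that sends an email to clients every 30 days.")

# Few-shot: the examples steer the structure and style of the answer.
few_shot = build_prompt(
    "Explain how to integrate Salesforce with a third-party email "
    "marketing platform specified by the user.",
    examples=[
        ("Email marketing platform is Mailchimp.",
         "[steps needed for integration with Mailchimp]"),
        ("Email marketing platform is Constant Contact.",
         "[steps needed for integration with Constant Contact]"),
    ],
    query="Email marketing platform is Klaviyo.")  # hypothetical user input
```

With the examples in place, the model is far more likely to answer the new platform's query in the same "follow these simple steps" format demonstrated above.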
In this article, we've explored the nuances of LLMs, from their foundational workings to instruction-tuning and more advanced prompting techniques. Understanding these concepts is pivotal as we develop sophisticated tools like AllAi to augment productivity and efficiency in Salesforce environments. Stay tuned for our next article, where we will delve into even more advanced topics such as Retrieval-Augmented Generation (RAG), fine-tuning, and beyond.
Rodrigo C. Barros, PhD, is an AI Director at OSF Digital, specializing in artificial intelligence solutions for digital transformation. He holds a PhD in Computer Science and Computational Mathematics from the University of São Paulo, Brazil, and also serves as an Associate Professor at PUCRS.