Reltio Connect


Getting More Out of Reltio: Top AgentFlow Prompts

By Kash Mehdi posted an hour ago


Seven agents, twenty prompts, and one lesson repeated across all of them: the quality of what you get out depends entirely on the quality of what you put in.

I have been spending a considerable amount of time with AgentFlow agents lately, and the thing I keep returning to is this: the agent is never the bottleneck. The prompt is.

AgentFlow agents are not general-purpose chatbots. They are pre-trained specialists, each purpose-built for a specific domain of data operations. Matching. Address validation. Document extraction. Workload management. They know their subject deeply. What they need from you is not instruction on method. They need business context.

The prompts I have seen work best are the ones that open with why something matters, not what to look at. "We are experiencing service delays because of inaccurate address data" gives the agent stakes. "Check address data" gives it nothing. That distinction sounds minor. In practice, it changes everything about the quality of the response.

"The agent is never the bottleneck. The quality of what you get out depends entirely on the quality of what you put in."

What follows are the exact prompts I use across every major AgentFlow agent, written the way I actually write them, with context on what each one does and why it is constructed the way it is. They are ready to copy and adapt.


Anatomy of an effective AgentFlow prompt

Before the prompts themselves, a few principles worth internalizing. These apply across every agent in the suite.

Lead with the business stakes

Tell the agent why this matters. Stakes shape prioritization. "We must resolve duplicates for our top 100 customers due to service delays" gets a different response than "find duplicates."

State the outcome, not the steps

Ask for a recommendation, not a lookup. "Tell me whether this merge was done correctly" outperforms "look at these records." The agent determines the steps.

Ask to see the plan before execution

Add "before making any changes, show me the planned changes with your reasoning" to any write operation. You retain control. The agent still does all the work.

Describe the outcome, not the tools

Resist the urge to name which MCP tools the agent should call. It already knows. Specifying the method constrains the approach. Describe the outcome and step back.



The productivity gains from AgentFlow are real, but they are conditional. A vague prompt produces a vague result, and you end up doing manual work anyway. A well-framed prompt produces a recommendation you can approve or reject in seconds. The difference in practice:

10–15 minutes saved per address correction compared to manual lookup and update

4 manual steps replaced by a single prompt for match and unmerge operations

10,000+ records processable at a scale that is simply not achievable by human teams


Every agent below follows the same conversational pattern: orient, investigate, recommend, execute. You control the gate at each transition. The agent surfaces its reasoning and waits for your approval before writing anything to the system.
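The gated pattern above can be sketched in a few lines. This is an illustrative sketch, not Reltio's implementation: the stage names and stand-in lambdas are hypothetical, and the point is only that each transition runs through a human approval callback before the next stage fires.

```python
# Sketch of the orient -> investigate -> recommend -> execute pattern,
# where a human approves each transition before anything is written.

def run_gated_workflow(agent_steps, approve):
    """Run each step only while the approver accepts the previous result.

    agent_steps: list of (name, fn) pairs; fn returns a summary string.
    approve: callback taking (step_name, summary) -> bool.
    """
    results = []
    for name, step in agent_steps:
        summary = step()
        results.append((name, summary))
        if not approve(name, summary):   # the human gate
            break                        # stop before the next stage runs
    return results

# Hypothetical stages standing in for the agent's work:
steps = [
    ("orient", lambda: "43 tools available"),
    ("investigate", lambda: "2 suspect merges found"),
    ("recommend", lambda: "unmerge A, remerge into B"),
    ("execute", lambda: "changes written"),
]

# Approve everything except the final write; execution never runs.
log = run_gated_workflow(steps, lambda name, s: name != "recommend")
```

Rejecting at the "recommend" gate leaves the system untouched, which is exactly the control the prompts below ask for when they say "show me the planned changes first."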


1. Data Explorer: Read-only access to your data model and entity graph. No SQL required, no IT ticket needed.
The answer is somewhere in the data. You are just not sure which screen to open, which attribute to query, or who to ask. Data Explorer gives anyone, regardless of technical background, a direct line to the data through plain conversation.


Prompt 1 of 3 Orientation
Use this every time you open Data Explorer in an unfamiliar environment. One prompt produces a complete map of the tenant before you begin.
Tell me about your capabilities, including the list of tools.
What follows: A full inventory of 43+ specialized tools available in this session: search, relationship traversal, match analytics, and more. Consider it a mandatory orientation before anything else.

Prompt 2 of 3 Data Model
Before querying, understand what you are working with. Entity types, relationships, primary use cases, all returned in a single response.
Show me the data model.
What follows: Every entity type with description, primary use cases (B2B management, B2C analytics, product and supplier), and relationship types. You go from knowing nothing about an environment to understanding it completely in one exchange.

Prompt 3 of 3 Business Question
Once the model is understood, ask the question you actually needed answered. The agent handles everything beneath the surface.
How many customers do we have in New York City?
What follows: Entity search, geographic filtering, exact count returned immediately. Questions that formerly required a BI ticket or a SQL query answered in seconds.

2. Match Resolver: Forensic investigation and correction of incorrect merges, including web research the human team would never have time to conduct.
Incorrect merges are silent data quality debt. They compound over time, erode downstream trust, and are tedious beyond measure to fix by hand: unmerge each record, locate the correct target, re-merge, repeat. Match Resolver handles the entire sequence through conversation.

Prompt 1 of 3 Investigation
Deploy when a recent merge is suspect. Provide the user responsible and let the agent conduct the forensic review. The phrase "all demographic attributes" is deliberate — it is what surfaces the edge cases rules-based systems miss.
Analyze the most recent merge operation performed by [user@company.com]. Give me your recommendation on whether the merge was done correctly. Look for name variations and discrepancies across all demographic attributes that could indicate an incorrect merge was performed.
What follows: Full merge activity review, attribute-by-attribute comparison across merged records, flagged discrepancies, and pattern detection for suspicious behavior such as rapid successive merges without validation.

Prompt 2 of 3 Target Identification
Once the problem is confirmed, find the correct destination for the records to be unmerged. The agent will conduct external web research where necessary, cross-referencing identifiers such as NPI numbers against public directories.
Are there any good candidates to merge these records into once we unmerge them?
What follows: Candidate search across the tenant, web-sourced identity validation, and a specific unmerge-and-remerge strategy with reasoning for each affected record.

Prompt 3 of 3 Execution
Review the plan. When satisfied, approve. The agent executes the full operation and presents a summary of impact.
Yes, please proceed per your recommendations.
What follows: Sequential unmerge and remerge operations, followed by a complete summary: data quality improvement, match accuracy delta, compliance impact, and a full log of every MCP tool invoked.
Also worth knowing: Match Resolver handles forward-looking deduplication as well. Try: "We must resolve duplicates for organizations in our top 100 segment as soon as possible due to service delays. How can you help me accomplish this?" The agent walks through identify, enrich, compare, recommend, and execute.

3. Address Enricher: Evidence-based address correction drawing from three or more authoritative sources, with full audit lineage and human approval before any change is committed.
Validation rules produce binary labels: verified, partially verified, ambiguous. What they do not produce is a fix. At scale — tens of thousands of records — the manual correction backlog is simply not solvable by human teams. Two minutes per record for the agent. Fifteen for a person. The arithmetic is unambiguous.

Prompt 1 of 3 Identification and Analysis
Open with the business problem, not the data problem. The final sentence requesting the plan before any changes is the most important element of this prompt.
We are experiencing service delays at our top 100 customers because of inaccurate address information. We must have accurate addresses as soon as possible for these customers. Get an organization from our "Top 100" segment that has a partially verified address for analysis and enrichment. Before making any changes to the data, first show me the planned changes along with the reasoning for each change.
What follows: Data model discovery, search for qualifying Top 100 organizations with partially verified addresses, presentation of multiple candidates, and a request for your selection before proceeding.


Prompt 2 of 3 Record Selection
Name your selection. The agent retrieves the full record, presents the address issues in tabular form, conducts multi-source web research, and returns proposed changes with an evidence threshold score — nothing written until you approve.
Let's go with [Organization Name].
What follows: Full record retrieval, issue documentation, web research across three or more credible sources, and a proposed change set with reasoning and evidence confidence score.


Prompt 3 of 3 Approval
Review the evidence. When the reasoning satisfies you, approve. The Sources tab on the entity will subsequently reflect Reltio Data Cleanser as the contributing source, with complete lineage of what changed and when.
Approved to make the change.
What follows: Attribute update via the update_entity_attributes_tool, confirmation of successful write, and a summary of data provenance including every source consulted in the correction.
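The "three or more credible sources" rule lends itself to a simple gate. The sketch below is an assumption about how such a gate could work, not Reltio's actual scoring model; the source names, threshold values, and field layout are all illustrative.

```python
# Illustrative evidence gate: a proposed correction is only eligible
# for human review when enough independent sources agree with it.
# Thresholds and source names here are assumptions, not Reltio's model.

def evidence_score(proposed_value, sources):
    """Fraction of consulted sources that agree with the proposal."""
    if not sources:
        return 0.0
    agreeing = sum(1 for s in sources if s["value"] == proposed_value)
    return agreeing / len(sources)

def ready_for_review(proposed_value, sources, min_sources=3, min_score=0.66):
    """True if at least min_sources independent sources back the change
    and overall agreement clears the confidence threshold."""
    agreeing = [s for s in sources if s["value"] == proposed_value]
    return (len(agreeing) >= min_sources
            and evidence_score(proposed_value, sources) >= min_score)

sources = [
    {"name": "postal-directory", "value": "100 Main St, Suite 200"},
    {"name": "company-website",  "value": "100 Main St, Suite 200"},
    {"name": "public-registry",  "value": "100 Main St, Suite 200"},
    {"name": "legacy-crm",       "value": "100 Main Street"},
]
print(ready_for_review("100 Main St, Suite 200", sources))  # True: 3 of 4 agree
```

A correction backed by a single source never reaches the approval step, which mirrors the agent's behavior of presenting evidence before asking for sign-off.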

4. Unstructured Data Studio: Structured entity and relationship extraction from contracts, correspondence, and documents, mapped directly into the Intelligent Data Graph with full source lineage.
Most organizations claim a unified view of their customers and suppliers. What they mean, usually, is that their transactional records are consolidated. The context locked inside contracts, service agreements, and email threads rarely makes it into the MDM system. Missed entitlements, broken service moments, and strategic decisions made on incomplete information are the predictable result.

Extraction Prompt Contract Analysis
This prompt belongs in the Studio's prompt library and runs against your document batch. The instruction to take time and double-check carries more practical weight than it may appear, particularly for high-stakes commercial contracts.
You are an expert in contract document analysis. Your task is to read the following contract and extract key structured information in valid JSON format. This is an important task and we cannot afford errors in entity extraction, so take your time and double-check your work.
Identify the name of the contract.
Identify the effective date.
Identify the governing law (jurisdiction).
Identify the contract term.
Identify the parties associated with the contract (Section 1 - Parties).
Identify the list of products and services associated with the contract (Section 2 - Scope and Deliverables).
What follows: Text extraction, entity and relationship identification, structured JSON output, and automatic mapping to the correct Reltio entity types — with full lineage preserved back to the source document.
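Because the prompt asks for valid JSON, it is worth checking the output shape before mapping it to entity types. The snippet below shows one plausible shape for the fields the prompt requests; the key names and sample values are illustrative assumptions, so align them with your own prompt wording and data model.

```python
import json

# A plausible shape for the extraction output. Field names and the
# sample contract below are illustrative, not a Reltio-defined schema.
sample = json.loads("""
{
  "contract_name": "Master Services Agreement",
  "effective_date": "2024-01-15",
  "governing_law": "State of New York",
  "contract_term": "36 months",
  "parties": ["Acme Corp", "Globex LLC"],
  "products_and_services": ["Platform subscription", "Support"]
}
""")

REQUIRED = {"contract_name", "effective_date", "governing_law",
            "contract_term", "parties", "products_and_services"}

# A cheap downstream check: reject extractions with missing keys
# before mapping them into entity types.
missing = REQUIRED - sample.keys()
assert not missing, f"extraction incomplete: {missing}"
```

Validating for required keys catches partial extractions early, before a half-populated contract lands in the graph.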


5. Data Profiler: Source data profiled and qualified before it reaches the tenant. Only what passes the quality threshold is loaded. The reload cycle ends here.
Loading poor-quality data into Reltio creates downstream problems that are expensive to diagnose and fix. The Data Profiler eliminates that risk: connect it to your source file, set a quality threshold, and load only what meets the standard.



Prompt 1 of 4 Initial Profile
Direct the agent to the source file. It conducts a structural preview first and pauses for schema confirmation before running the full profiling job.
Perform a data quality check for the CSV file "/your_file.csv" from AWS S3 bucket: "your-bucket-name". My credentials are:
"role": [your-role-arn]
"externalID": [your-external-id]
"region": [your-region]
What follows: File structure preview, per-column validation patterns, high-level data quality observations, and a request for confirmation before the full job runs.




Prompt 2 of 4 Attribute Analysis
Confirm the schema and initiate the deep attribute-level analysis. The agent returns findings ranked by severity.
Confirmed schema is correct. Please analyze this data and give me suggestions per attribute on possible cleanup I may need to perform prior to loading.
What follows: Asynchronous profiling job. On completion: a detailed quality summary per attribute with critical, high, and medium priority cleanup actions ranked by severity.


Prompt 3 of 4 Selective Mapping
Do not load everything. Ask the agent to build a mapping that only includes attributes above a quality threshold. It presents inclusions and exclusions with reasoning before proceeding.
Create a data loader mapping that only uses the attributes with a quality score above 90%. Show me the mapping prior to executing the data loader job.
What follows: Reltio data model consultation, qualified attribute mapping, excluded attribute documentation with rationale, and presentation of the complete data loader configuration for review.


Prompt 4 of 4 Load
Review the mapping. When it reflects the correct attributes at the correct threshold, proceed.
Proceed.
What follows: Data loader job execution using only approved attributes, completion monitoring, and confirmation of success.
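The "quality score above 90%" idea from Prompt 3 can be made concrete with a minimal sketch. This is an assumption about one way to score attributes: real profiling also covers validity patterns and outliers, but here "quality" is simplified to completeness, the fraction of non-empty values per column.

```python
import csv
import io

# Minimal sketch of threshold-based attribute selection.
# "Quality" here is simplified to column completeness; the sample
# data and 0.90 default threshold are illustrative assumptions.

def column_quality(rows, fieldnames):
    """Return a completeness score (0.0-1.0) per column."""
    scores = {}
    for col in fieldnames:
        filled = sum(1 for r in rows if (r.get(col) or "").strip())
        scores[col] = filled / len(rows) if rows else 0.0
    return scores

def qualifying_attributes(csv_text, threshold=0.90):
    """Columns whose completeness meets the quality threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = list(reader)
    scores = column_quality(rows, reader.fieldnames)
    return [c for c in reader.fieldnames if scores[c] >= threshold]

data = "name,phone,fax\nAcme,555-0100,\nGlobex,555-0101,\nInitech,,\n"
print(qualifying_attributes(data))  # ['name'] -- phone is 2/3, fax is 0/3
```

Everything below the threshold is excluded from the mapping with its score documented, which is the same inclusion-and-exclusion review the agent presents before the load runs.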

6. Work Assigner: Stewardship workloads balanced across regions and teams before service level agreements begin to slip.
Workflow management in large stewardship teams produces predictable failure modes: some users are overwhelmed, others are idle, assignments consistently reach people who are unavailable. Work Assigner examines the full task queue, identifies the imbalance, and recommends a redistribution strategy.


Prompt 1 of 2 Capability Briefing
A useful starting point when introducing Work Assigner to a new team or administrator. Sets appropriate expectations before the main analysis.
Tell me about your capabilities.
What follows: A description of the agent's core functions: task queue management, activity-based user filtering, and redistribution planning.


Prompt 2 of 2 Workload Analysis
The primary prompt. Replace the group name with your own. The request for table format ordered by task count is specific by design: it makes the output immediately actionable.
Analyze the workflow tasks for the users belonging to the group called "[your-group-name]". Present the information in a table format ordered by task count. Give me your recommendation on the best approach to balance the workload across these users.
What follows: User identification within the group, recent activity assessment, a task count table, overloaded user flagging, and a per-user redistribution recommendation with explicit reasoning.
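The redistribution recommendation boils down to evening out task counts. The greedy sketch below is a hypothetical simplification of that logic: it ignores recent activity and availability, which the agent does weigh, and just moves one task at a time from the busiest steward to the least busy until the spread is within tolerance.

```python
# Illustrative greedy rebalancing, a simplification of the kind of
# redistribution the agent recommends. Real planning would also weigh
# recent activity and user availability.

def rebalance(task_counts, max_spread=1):
    """task_counts: dict of user -> open task count.
    Returns the list of (from_user, to_user) moves performed."""
    counts = dict(task_counts)
    moves = []
    while True:
        busiest = max(counts, key=counts.get)
        idlest = min(counts, key=counts.get)
        if counts[busiest] - counts[idlest] <= max_spread:
            return moves
        counts[busiest] -= 1          # take one task off the busiest queue
        counts[idlest] += 1           # hand it to the least-loaded user
        moves.append((busiest, idlest))

# Hypothetical queue: 15 tasks unevenly spread across three stewards.
moves = rebalance({"alice": 9, "bob": 2, "carol": 4})
print(moves)  # four moves bring every steward to 5 open tasks
```

Presenting the move list rather than silently reassigning keeps the same approval gate as every other agent: you see the plan, then you say proceed.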


7. Product Recommender: Personalized recommendations drawn from the full Intelligent Data Graph: profile attributes, relationship context, interaction history, and source systems — with probability scores and transparent reasoning.
Sales and service teams routinely operate with an incomplete picture. The relevant context is present in the data, but distributed across a dozen source systems, and assembling it by hand is not a realistic option. One example from a recent session: 76 data points across 23 attributes, 35 relationships, 11 source systems, and 18 interaction records — analyzed in seconds.


Prompt 1 of 3 Individual Recommendation
The foundational prompt. The inclusion of "profile, relationships and interactions" is not incidental — it is what compels the agent to traverse the full graph rather than surface-level attributes alone.
Get information for customer "[Customer Full Name]" and provide three compelling product recommendations based on their profile, relationships, and interactions.
What follows: Full profile retrieval, source system aggregation, relationship graph traversal, interaction history analysis, product catalog cross-reference, and three recommendations with probability scores, reasoning, and a customer snapshot of top driving signals.


Prompt 2 of 3 Household Analysis
Extend the analysis to everyone connected in the household graph. The agent returns product recommendations for the household and a channel strategy based on consent data collected across communication touchpoints.
Analyze the household relationships. What additional products could we offer to the household and why?
What follows: Household graph traversal, member identification, demographic and preference analysis, product recommendations for the household, and a communication strategy grounded in collected consent across channels.

Prompt 3 of 3 Transparent Reasoning
The prompt I end on, invariably. Ask the agent to enumerate every data point it considered and allow that number to make the case on its own.
How many data points did you use to generate these recommendations?
What follows: A full breakdown: attributes examined, relationships traversed, sources consulted, interactions reviewed. Work that would require hours from a human analyst, completed in seconds by the agent.

"Give the agent the business context, state the outcome you need, and step back."

All seven agents in this piece operate on the same underlying infrastructure: the Reltio Intelligent Data Graph, accessed through the MCP Server with 43+ platform tools available automatically. You do not need to specify which tools to call or how to sequence the operations. The agent handles that. Your responsibility is the prompt — and now you know how to write one.
