JLL CASE STUDIES

Turning messy CRM data into a system teams can trust
How JLL Digital Marketing Ops rebuilt contact and account data for accuracy, confidence, and scale — and finally stopped guessing which fields were real.
Flag types applied: 4
Records with confidence scores: 100%
Feedback cycles completed: 2
Region: Global
Industry: Commercial Real Estate
Use case: CRM enrichment & validation
Team: Digital Marketing Ops
At a glance
JLL's Digital Marketing Ops team manages large volumes of contact and account data that power segmentation, scoring, and downstream campaigns. They had plenty of data. What they didn't have: trust.
Fields were partially filled. Standards were inconsistently applied. Some values were outdated. Others were flat-out wrong. Before scaling campaigns or analytics, the data foundation needed to be fixed.
Jeeva didn't just fill in missing fields. They validated what existed, flagged discrepancies, and built a system where the team could finally see what they were working with.
"Clean data isn't about filling every field. It's about knowing which fields you can trust."
— Lisa Huang, Director of Marketing Operations, JLL
The challenge: data exists, confidence doesn't
The CRM was full. Contacts had names, emails, titles. Accounts had revenue, industry, address. On the surface, it looked usable.
"But the more we dug in, the less we trusted," recalls David Park, a data analyst on Lisa's team. "Email domains didn't match employers. Titles were outdated. Parent-child relationships were a mess."
"We had a contact listed as VP of Marketing at a company that went out of business two years ago. Her email still worked — it just bounced to some holding company. That's the kind of data we were building campaigns on."
David Park, Data Analyst, JLL Marketing Ops
The problem wasn't a lack of data. It was that no one knew which data was real.
The data quality issues
• Fields partially filled
• Standards inconsistent: naming conventions varied by team
• Values outdated
• Relationships unmapped: parent/child/subsidiary status unclear
Lead scoring had become unreliable. Segmentation was inconsistent. The team was making decisions on data they couldn't verify.
The mandate: enrich, validate, make quality visible
Lisa set a clear goal: don't just fix the data. Make data quality visible.
"I didn't want a one-time cleanup," she explains. "I wanted a system where my team could look at any record and know: is this verified? Is this questionable? Is this missing?"
"We needed more than enrichment. We needed transparency. Every field should have a confidence indicator. Every correction should be traceable."
Lisa Huang, Director of Marketing Operations
The requirements were precise:
• Enrich what's missing — contacts and accounts, including extended fields
• Validate what exists — flag anything incorrect or out of standard
• Standardize everything — names, titles, countries, parent/child rules
• Document every change — source URLs, confidence scores, data lineage
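In practice, these requirements amount to tracking quality metadata at the field level, not the record level. A minimal sketch of what that can look like, written in Python purely for illustration (the flag labels mirror the ones described later in this case study; the structure, names, and values are assumptions):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Flag(Enum):
    """The four flag types applied in the delivered dataset."""
    VERIFIED = "Verified"
    NEEDS_UPDATE = "Needs Update"
    INCORRECT = "Incorrect"
    MISSING = "Missing"


@dataclass
class FieldValue:
    """One CRM field carrying its own quality metadata."""
    name: str                         # e.g. "title" or "email"
    value: Optional[str]              # None when the field is missing
    flag: Flag                        # quality status for this field
    confidence: float = 0.0           # confidence score, 0.0 to 1.0
    source_url: Optional[str] = None  # where the value was verified


# Example: an enriched title with its lineage recorded (values invented)
title = FieldValue(
    name="title",
    value="VP of Marketing",
    flag=Flag.VERIFIED,
    confidence=0.92,
    source_url="https://example.com/profile",
)
```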
Going live
Jeeva worked through the dataset in structured phases:
Phase 1: Scope & standards alignment. Confirmed the complete field list — core, extended, and account-level. Locked data standards: allowed values, formats, naming conventions, parent/child rules.
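Locking the standards first matters because every later check refers back to them. A simplified, illustrative sketch of what such a standards definition might contain (the values are invented, not JLL's actual rules):

```python
# Illustrative data standards, agreed before enrichment begins
STANDARDS = {
    "country": {
        "allowed": ["United States", "United Kingdom", "Germany"],
        "aliases": {"USA": "United States", "UK": "United Kingdom"},
    },
    "title": {
        "aliases": {"VP Marketing": "Vice President, Marketing"},
    },
    "phone_format": "E.164",  # e.g. +14155550100
}


def normalize(field: str, raw: str) -> str:
    """Map a raw value onto the locked standard, falling back to itself."""
    return STANDARDS.get(field, {}).get("aliases", {}).get(raw, raw)


print(normalize("country", "UK"))  # United Kingdom
```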
Phase 2: Enrichment. Filled missing contact fields: titles, emails, phones, LinkedIn, plus extended fields. Added account-level data: industry, revenue, address, global rankings (Fortune 500, Forbes 2000). Recorded source and confidence for every new value.
Phase 3: Validation. Compared existing values against authoritative sources. Flagged mismatches and outdated information. Ran format checks on emails and phone numbers. Identified duplicates. Normalized names, titles, countries, states. Ensured parent/child/subsidiary consistency.
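The validation logic itself isn't published in this case study, but the checks described here (format validation, email-domain versus employer mismatches, missing values) are straightforward to sketch. A simplified, illustrative version with assumed field names:

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@([^@\s]+\.[a-zA-Z]{2,})$")


def check_contact(contact: dict, company_domains: set[str]) -> list[str]:
    """Return a list of issues found for a single contact record."""
    issues = []

    email = contact.get("email")
    if not email:
        issues.append("email: Missing")
    else:
        match = EMAIL_RE.match(email)
        if not match:
            issues.append("email: Incorrect (bad format)")
        elif match.group(1).lower() not in company_domains:
            # Domain doesn't belong to the listed employer
            issues.append("email: Incorrect (domain/employer mismatch)")

    if not contact.get("title"):
        issues.append("title: Missing")

    return issues


# Illustrative usage with made-up data
contact = {"email": "jane.doe@oldholdingco.com", "title": "VP of Marketing"}
print(check_contact(contact, company_domains={"jll.com"}))
# ['email: Incorrect (domain/employer mismatch)']
```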
Phase 4: Output & QA. Delivered a fully standardized dataset with a Flags column: Incorrect, Needs Update, Missing, or Verified. Included data lineage with source URLs. Added a summary report: fill rates, corrected values, flagged records, remaining gaps.
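The summary report reduces to simple aggregation once every field carries a flag: fill rate is the share of non-empty values, and flagged records are a count per flag type. A short sketch using pandas (the column names and data are assumptions):

```python
import pandas as pd

# Illustrative flagged dataset; in practice this is the delivered file
df = pd.DataFrame({
    "email":      ["a@jll.com", None, "c@oldco.com", "d@jll.com"],
    "email_flag": ["Verified", "Missing", "Incorrect", "Verified"],
})

fill_rate = df["email"].notna().mean()                  # share of non-empty values
flag_counts = df["email_flag"].value_counts().to_dict() # records per flag type

print(f"email fill rate: {fill_rate:.0%}")  # email fill rate: 75%
print(flag_counts)                          # Verified: 2, Missing: 1, Incorrect: 1
```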
The "aha" moment
David remembers opening the first validated batch.
"I filtered by 'Incorrect' flags. There were 847 records where the email domain didn't match the listed employer. Eight hundred and forty-seven. We would have been sending campaigns to dead ends."
"For the first time, I could see what was wrong. Not guess — see. That changed everything about how we approached the rest of the cleanup."
— David Park, Data Analyst
The team ran two structured feedback cycles with Jeeva to refine the output:
Feedback Round 1:
• Adjusted lead scoring to prioritize positive evidence over data completeness (see the sketch after this list)
• Reduced over-weighting of internal usage signals
• Added explicit columns for parent companies, subsidiaries, and sister companies
• Highlighted duplicates more clearly
• Added color coding for data corrections vs. original values
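The first adjustment is worth pausing on: a record with every field filled isn't necessarily a better lead than one with a few verified fields. A deliberately simplified sketch of the difference (field names, flags, and scoring logic are invented for illustration):

```python
def score_by_completeness(record: dict) -> float:
    """Earlier approach (illustrative): more filled fields, higher score."""
    return sum(1 for v in record.values() if v) / len(record)


def score_by_evidence(record: dict, flags: dict) -> float:
    """Adjusted approach (illustrative): only verified fields count."""
    verified = sum(1 for field in record if flags.get(field) == "Verified")
    return verified / len(record)


record = {"email": "x@co.com", "title": "CMO", "phone": "555-0100", "revenue": "unknown"}
flags = {"email": "Verified", "title": "Needs Update", "phone": "Incorrect", "revenue": "Needs Update"}

print(score_by_completeness(record))     # 1.0: every field is filled
print(score_by_evidence(record, flags))  # 0.25: only one field is trustworthy
```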
Feedback Round 2:
• Updated and verified job titles and seniority mapping
• Improved hierarchy representation for segmentation
• Added source links for all validation comments
• Standardized each column for easy filtering
• Defined account status: active, inactive, or out of business
• Added global rankings (Fortune 500, Forbes 2000)
The outcome
JLL Digital Marketing Ops now operates on data they can actually trust.
What changed
• Every record has a status: Verified, Needs Update, Incorrect, or Missing
• Every enriched value has a source: URLs and confidence scores tracked
• Parent/child relationships mapped: hierarchy visible for segmentation
• Lead scoring accuracy: based on evidence, not data presence
"We stopped having arguments about whether data was right," says Lisa. "Now we just look at the flags. It's objective."
Campaign targeting improved immediately. Bounce rates dropped. The team could segment by company hierarchy for the first time — targeting subsidiaries of key accounts without manual mapping.
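Segmenting by hierarchy, once the parent/child links exist as data, is just a walk over those links. A minimal sketch assuming a simple parent-to-children mapping (the account names are made up):

```python
from collections import defaultdict

# Parent account -> direct children, as illustrative data
children = defaultdict(list, {
    "Acme Global": ["Acme EMEA", "Acme APAC"],
    "Acme EMEA": ["Acme UK Ltd"],
})


def all_subsidiaries(account: str) -> list[str]:
    """Collect every subsidiary below an account, depth first."""
    result = []
    for child in children[account]:
        result.append(child)
        result.extend(all_subsidiaries(child))
    return result


print(all_subsidiaries("Acme Global"))
# ['Acme EMEA', 'Acme UK Ltd', 'Acme APAC']
```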
"Our last campaign had a 23% lower bounce rate than the one before. Same audience definition. The only difference was the data quality."
David Park, Data Analyst
What's next
The team is building this into an ongoing process, not a one-time project.
"We're setting up quarterly validation cycles," Lisa explains. "New data comes in, it gets enriched and flagged. Existing data gets re-validated. The system stays clean."
David is already training other JLL teams on how to interpret the flag system and use the hierarchy data for their own campaigns.