Haleon DAM

The Challenge

Users struggled to locate the assets they needed within the Haleon DAM. Search results often felt unfocused, and the volume of content made it hard to quickly identify the right files.
Scope

The scope of this project was to uncover the root causes of search issues across multiple user groups spanning 14 global business units, and to define an actionable plan, including potential AI-driven enhancements, that aligns with Haleon’s existing infrastructure.

MY ROLE

Senior UX Designer

Competitor analysis | Audit | User Interviews | Surveys | Qualitative and Quantitative Data Analysis | Client Workshops

THE CLIENT

Haleon

TOOLS

Miro | Survey Monkey | PowerPoint | Figma

DAM Stakeholder Run-Through & Audit

To build a clear understanding of Haleon’s existing DAM platform, we scheduled a walkthrough with a key stakeholder. This session helped us observe how the platform is currently used: the types of assets users search for, how different teams upload and manage content, and the specific pain points experienced across roles.

Before the walkthrough, we conducted a comparative review of other industry DAM platforms. This gave us a baseline understanding of common features such as metadata structures, tagging conventions, search behaviours, and filtering capabilities. Having this context allowed us to enter the walkthrough with a strong sense of industry standards and ask the right questions from the outset.

 

Key findings
  • No AI support across the DAM workflow, resulting in fully manual metadata entry and no automation to speed up tagging or categorisation.
  • Users were required to upload assets individually, often completing 30+ metadata fields per file, even when assets belonged to the same campaign or folder.
  • The end‑to‑end upload and tagging process fell below industry standards, making it slow, repetitive, and difficult to scale across global teams.

Internal Audit: Key Priorities

Our internal audit highlighted two tiers of challenges:
Primary challenges that required immediate attention to improve usability and workflow efficiency, and secondary challenges that, while not urgent, would significantly enhance the overall user experience.

 

These challenges had a direct impact on usability, increasing cognitive load, slowing down workflows, and creating inconsistencies across markets. This highlighted the need for clearer information architecture, more intuitive discovery patterns, and automation to reduce manual effort.

Primary Challenges:

  • Complex and restrictive asset discovery process
  • Fragmented experience across markets
  • Limited workflow customisation

Secondary Challenges:

  • Simplify and modernise the search experience
  • Create seamless cross‑market asset management
  • Enable intuitive asset discovery workflows

Client Workshop & Hypotheses

Hypotheses Development

At this stage of the project, we had gathered a large number of assumptions about the challenges within the DAM. Rather than treating these as facts, we wanted to pressure‑test them with the client. Creating hypotheses allowed us to:

  • Turn assumptions into clear, testable statements
  • Show the client the thinking behind our research
  • Align on what problems genuinely matter before moving into solutions
  • Prioritise challenges based on evidence, not opinion

How we validated

We ran a client workshop to share our early research and review the assumptions that had emerged. Because many overlapped, we grouped them into clear thematic buckets.

On the Miro board, the client placed emoji markers against the assumptions they felt were most important. This kept the session quick, visual, and highly collaborative.

 

Why Hypotheses Were Created Before The Workshop

Before the session, we had already formed six hypotheses, three primary and three secondary, grounded in our research.

These reflected the challenges we felt most confident about moving forward with. The workshop acted as a validation step to confirm whether the client aligned with our thinking.

The Hypotheses We Prioritised

Primary Hypotheses

  1. Unintuitive Search Function
  2. Limited Filter Granularity
  3. Unclear Asset Type Differentiation

Secondary Hypotheses

  1. Limited Navigation Flexibility
  2. Inconsistent Asset Upload Practices
  3. Training and Support Limitations

Data-Backed Personas

Discussion Guide

Moving into user testing was an exciting step. We interviewed 18 users across different disciplines, regions, and use cases, each engaging with the DAM in different ways. We had 1.5 hours with each participant, giving us enough depth to observe real behaviours and uncover underlying issues.

Our research goals were to:

  • Evaluate usability: How effectively users navigate the DAM, locate assets, and use search and filtering.
  • Identify pain points: Uncover specific issues and understand the root causes behind difficulties in finding assets.
  • Inform recommendations: Use insights to propose targeted optimisations to improve the overall user experience.

Relevance By User Group

Because different user groups interacted with the DAM in different ways, not every hypothesis applied to every participant, so each hypothesis was tested only with the users to whom it was relevant.

 

 

Data analysis from card sorting

xxxxxx

Insights

xxxx

User Testing

xxx

Insights we are looking at:
  • xxxx

Tree test visual (figma)

Survey Monkey

xxx

Insights we are looking at:
  • xxx

Tree test visual (figma)

Presentation and Ideation

xxxx

Insights we are looking at:
  • xxxx

Tree test visual (figma)

Next steps 

xxxxx