Verily

Enable researchers to set up studies to discover new treatments

I have omitted confidential information in this case study. All information in this case study is my own and does not reflect the views of Verily. 

Role
Lead UX designer coordinating a 4-person UX team.

Team
Coordinating 1 UXR, 2 UXD, 1 UXW, and 6 PMs/POs across 4 product areas.

Duration
2022 - present

GOAL

Translate complex study plans into logic to be executed by the study team, via a ground-up redesign of the legacy product to fit study designers' (clinical data managers') mental models.

COMPLEXITY

I led UX strategy and execution for Builder from 2022 to present: coordinating a 4-person UX team across dependencies with 4 product teams, spanning 6 product managers and 30+ engineers, to deliver the product redesign.

IMPACT

The 2024 launch has shown study set-up to be significantly easier, more efficient, and more accurate. 90%+ estimated* time saved per study set-up vs. the legacy product. First major client signed (# million contract*).

*  For confidentiality reasons I have omitted the actual values for these metrics.  


BACKGROUND

Translating complex study plans into logic to be executed by the study team is difficult for users


Research protocol

The research protocol is a document that describes the plan for a study: background, rationale, objective, design, method, etc.

Schedule of activities

Within the protocol, the schedule of activities maps the study's research activities to the visits in which participants complete them.


CHALLENGES

Our legacy product was unusable: unsafe, error-prone, and slow

Too slow
Many weeks* from start to study-live, including business processes. Industry benchmark is 3-4 weeks.

High engineering overhead
Many hours* spent per study by engineering, manually configuring visits & activities.

Error prone
CAPAs* (compliance errors) in 2021, vs. 0-1 expected for the competitor average.

Unscalable
Platform had years of growing UX and engineering debt from many* studies.

* For confidentiality reasons I have omitted the actual values for these metrics.  


RESEARCH

I generated UXR proposals and advocated for bringing UXR onto the team. With UXR's help, I gained a thorough understanding of why the current study set-up process was so hard, and presented insights across product verticals

Analyzed 8 study design products with UXR 

Joined NIH All of Us study + 2 others 

Interviewed 20+ users with UXR 

Used consumer research apps 

Set up studies myself 

Attended clinical trial conferences 

Visited study sites in CA with UXR 


INSIGHTS

Clinical data managers (CDMs) were the key users most involved in study set-up, so I advocated for the team to focus on deeply understanding CDM journeys 

4 key phases in their journey emerged

Select study (Registry, clinical trial, etc)

Create visits and activities (Surveys, remote visits, skin photos, etc)

Add activities to the right visit(s) (COVID-19 survey on visit 2, etc)

Set up complex study logic (Notifications, SMS, email, reminder calls, edit checks)

Working with study operations and test engineers, I gathered examples (here is an actual one, blurred for anonymity) of the sheets detailing the configuration the design must accommodate across this journey. 150+ activities must reach the right study participants, each with different timeframes for completion and with logic dependencies between them that trigger other activities.

E.g. [Send] the [post-treatment symptoms] [survey] to [enrolled] participants in the [control group] and the [treatment group] on [visit 1, 4, 5, 7, 8, and 11] but not [visit 2, 3, 6, 9, 10]. [7 days] [before] [visit 5], [7 days] [before] [visit 6], [7 days] [before] [visit 9], [7 days] [before] [visit 10]. Do not send any email notification to participants after survey completion. Do not send any payments to participants after survey completion. 
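To make the shape of these rules concrete, here is a minimal sketch of how a single rule like the one above could be captured as structured data. This is my own illustration for the case study; the field names and types are assumptions, not Verily's actual data model.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of one scheduling rule; field names are
# illustrative only and do not reflect the product's actual schema.
@dataclass
class ActivityRule:
    activity: str                        # e.g. a survey name
    participant_statuses: list[str]      # statuses the rule applies to, e.g. "enrolled"
    arms: list[str]                      # study arms (cohorts) the rule applies to
    send_on_visits: list[int]            # visits on which the activity is sent
    days_before_visit: dict[int, int] = field(default_factory=dict)  # visit -> days before
    email_on_completion: bool = False    # whether to email participants after completion
    pay_on_completion: bool = False      # whether to pay participants after completion

# The example rule above, encoded with this structure.
post_treatment_symptoms = ActivityRule(
    activity="post-treatment symptoms survey",
    participant_statuses=["enrolled"],
    arms=["control group", "treatment group"],
    send_on_visits=[1, 4, 5, 7, 8, 11],
    days_before_visit={5: 7, 6: 7, 9: 7, 10: 7},
)

print(post_treatment_symptoms)
```

A study can carry 150+ activities, each with one or more rules like this, which is the volume of logic the set-up flow has to make legible.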


INSIGHTS

Using this new knowledge of the CDM journey and needs, I led a heuristic analysis with UXD and UXR colleagues to learn how misaligned the current product was

It was difficult for the CDM to find the right study

There is no clear way to filter or differentiate studies, and the UI is cluttered. Each of these icons is a separate study set-up tool for an individual study. Study tools live outside the study itself, so CDMs keep several tabs open per tool, per study. There is no integration. 

Setting up study visits was complicated for CDMs

ENG sets up visits on behalf of the CDM. Mistakes are often made because of communication errors, as engineers are not study experts.

Adding activities to the right visit(s) is difficult

Users can’t see which activities are under which visit(s). The only way to check is to click into each visit folder one by one (there can be dozens of visits per study) to see the activities for that visit.

Adding activities to the study is time intensive

Activities (of which there could be 80+ per study) are added to a visit one by one, with no preview of what they look like to the study participant.


DESIGN

The study schedule was at the center of the design. It had to clearly show which activities were in which visit(s): the backbone on which all other study logic is configured.

An example study schedule from a study protocol. Activities are on the left. Visits are on top. The “X” shows which visit an activity should be completed in.
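As a rough sketch of the structure behind this view (my own illustration, not the product's implementation), the schedule reduces to a mapping from each activity to the set of visits it belongs to, which can be rendered as the familiar protocol-style grid:

```python
# Hypothetical example data: activity -> visits in which it is completed.
schedule = {
    "Informed consent": {1},
    "Demographics survey": {1},
    "Skin photos": {1, 4, 8},
    "Post-treatment symptoms survey": {1, 4, 5, 7, 8, 11},
}

visits = range(1, 12)
col = 4  # column width per visit

# Print activities as rows, visits as columns, "X" where the activity is assigned.
print("Activity".ljust(34) + "".join(f"V{v}".ljust(col) for v in visits))
for activity, assigned in schedule.items():
    row = "".join(("X" if v in assigned else ".").ljust(col) for v in visits)
    print(activity.ljust(34) + row)
```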

I led a design workshop with CDMs to understand their study schedule mental model. 
I then translated that model into 3 explorations. I hypothesized “Concept B” would test best, as it was closest to other task-scheduling products, like Google Calendar.

Concept A: schedule over time 

Great for site staff who need to know their schedule, but offers CDMs only a limited view of the full schedule 

Concept B: kanban board

Like Trello: easy to adjust activities, but it involves lots of copy/pasting, and adjustments don’t happen often enough to warrant this model

Concept C: table

The view is complex and information-dense, but it offers the fullest picture at any one time

I ran RITE (Rapid Iterative Testing and Evaluation) sessions, and CDM feedback guided me to Concept C because it was closest to the tabular way they read research protocols

It focuses on activity-to-visit relationships, with high information density


SOLUTION

Redesign of the entire legacy product to fit users' (clinical data managers') mental models 

Quickly navigate to the right study

All study tools are within the study itself. No need to open several tabs just to work on one study.

Auto-generate visit(s) so that CDMs don’t have to

By uploading study documentation and letting the system set up visits, CDMs save days of configuration time.

Easily build activities without multiple windows

All study tools, like activity creation, are within the study. Instead of keeping separate windows open as in the legacy product, CDMs can easily navigate between the tools they need.

Import multiple activities at once

Instead of importing activities one at a time, each taking minutes (and a study can have 100+ activities), users import all the activities they need for the study at once.


IMPACT

The 2024 internal launch has shown study set-up to be significantly faster, easier, and more accurate

“Checkbox way is clear. It’s pretty similar to protocol so that’s nice.” 

-Clinical Data Manager 1

“It’s helpful to view the entire study and see which surveys I’ve added to each visit.”

-Clinical Data Manager 2

“I liked the clean interface. It’s minimal, reduces distractions, and very simple. Other EDCs can be very cluttered.”

-Clinical Data Manager 3

“I definitely like it and think it’ll be super helpful. I’m really excited to be able to actually use it.” 

-Clinical Data Manager 4

Efficient
95%+* reduction in time to configure the care use case vs. the legacy product

Faster study config
90%+ estimated* engineering time saved per study set-up vs. the legacy product

UXR testing
20+ users tested pre-release, with strongly positive qualitative feedback across all of them

Low errors
No CAPAs* (compliance errors) thus far after launch

* For confidentiality reasons I have omitted the actual values for these metrics.