Surfacing Clinical Trial Insights
User research | Design | Usability Evaluation | Data Visualization | UX Management | Design Systems
DURATION
April 2022 - May 2024
TEAM
- Product Owners
- Designers (2)
- Subject Matter Expert
- Business Analysts
- Developers
CLIENT
Large, globally distributed life sciences organization
My Roles
As a Designer (Individual Contributor):
I performed research and design work alongside the lead designer, and later became the lead designer myself. I provided the principal design work for key features, including the information architecture, navigation, dashboards, and AI design.
As a UX Director:
I oversaw the team of designers working on the encompassing product suite, monitoring the quality of their work while introducing methods to improve collaboration, productivity, roadmapping, and the application of best practices.
Situation
This project followed earlier attempts by the company to build applications that use analytics to oversee clinical trial execution. For this new effort, the team had chosen Power BI as the technical platform for analytics, positioned within a light React UI shell. This hybrid application was to be the culminating app within a new product suite that ingested, mapped, reviewed, and analyzed data from a clinical trial.
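As an illustration only, and not the project’s actual code, here is a minimal sketch of how such a hybrid shell is commonly wired together, assuming the publicly available powerbi-client-react wrapper; the ids, token handling, and settings are placeholders.

```tsx
// Hypothetical sketch: a light React shell hosting an embedded Power BI report.
// Assumes the open-source `powerbi-client-react` wrapper; values are placeholders.
import { PowerBIEmbed } from 'powerbi-client-react';
import { models } from 'powerbi-client';

type DashboardShellProps = {
  reportId: string;     // e.g., a protocol deviations dashboard
  embedUrl: string;     // embed URL returned by a backend service
  accessToken: string;  // embed token issued by a backend service
};

export function DashboardShell({ reportId, embedUrl, accessToken }: DashboardShellProps) {
  return (
    <PowerBIEmbed
      embedConfig={{
        type: 'report',
        id: reportId,
        embedUrl,
        accessToken,
        tokenType: models.TokenType.Embed,
        // Keep the React portion light: hide Power BI's built-in panes and let the
        // surrounding shell own study selection, navigation, and filtering.
        settings: {
          panes: {
            filters: { visible: false },
            pageNavigation: { visible: false },
          },
        },
      }}
      cssClassName="dashboard-embed"
    />
  );
}
```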
Challenges
- There was a project goal to keep the React portion of the app light and rely mostly on Power BI’s data visualization capabilities.
- The app had to serve a diverse set of user roles and clinical trials, each differing in the focus and purpose of their data analysis.
- Clinical trial data analysis is complex, involving many data domains and relationships.
- AI was being introduced in an environment with low risk tolerance, where subject health was at stake.
- The product vision was vague and not yet elaborated.
- Obtaining access to users with very specialized roles was difficult.
- UX design was not well understood by the project team.
Solution
Provide clear and flexible views into the workings of clinical trials in order to promote their success through early problem detection and resolution.
- Pragmatically designed the application’s initial release to provide immediate value for users while fitting within project constraints.
- Performed research with users and SMEs, and conducted usability testing, to understand user needs.
- Designed rich, effective dashboards to aid analysis of clinical trials.
- Designed the UI for a new AI-based conversational method to augment traditional analysis and leverage history.
- Created a framework for designing AI-based features that could be extended to other projects.
- Extended the design system with additional chart types for greater dashboard versatility and effectiveness.
- Improved UX integration through collaboration planning, better requirements definition, a UX roadmap, and work tracking.
To comply with an NDA, I have intentionally hidden and replaced content in this case study.
DISCOVERY
An interwoven quilt of clinical trial data with a story to tell
For this application to be successful, it had to efficiently provide a revealing window into the inner workings of an ongoing clinical trial, while being flexible enough to support the range of analysis performed by the various user roles, as well as the unique needs of each clinical trial.
User research with the clinical trial monitor role
Exactly how users analyzed clinical trial data was not well understood. Users in this space are specialized and were challenging to access, even internally within our own company. It was important that we overcome this challenge and reveal their needs. The study monitor was one of the key user roles involved, and during a previous related project I had set out to learn more about their work.
First I created a survey and sent it to a group of monitors, which allowed me to identify user concerns that I could explore in more depth during interviews. Results indicated that monitors were facing issues with:
- quickly accessing trustworthy data,
- visualizing it for insight,
- recording their observations, and
- sharing out their analysis.
I explored this further in a series of interviews where I observed examples of how study monitors currently analyzed clinical trial data. These interviews revealed important unmet needs and design opportunities that would allow the application to be competitive and deliver sought-after value.
Analyzing results
I used an empathy map and scenario map to organize and analyze what I found from my user interviews. From this, I drew out important learnings about the role’s persona, goals, task steps, pain points, and opportunities for improvement.
The workflow was collaborative and ongoing
Identifying trial issues was just the first step within a larger workflow to share out findings, take remediating actions, and follow up on the impact of those actions. That process was cumbersome at the time.
Issues were narrated and shared
Users needed support for assembling a clear story to share with teams and sponsors about the nature of issues found, as well as conversing with others about those issues.
A portion of the analysis was specific to each trial
Each trial had unique needs for analysis and sponsor goals. Users needed the ability to create ad hoc data visualizations that went beyond the standard out-of-the-box dashboards.
Key findings
Data accuracy needed to be obvious
It was important that users had ways to gain confidence in the accuracy of the data so that they could rely on it for analysis.
Analysis was schedule driven
The analysis users performed on any given day was driven by a defined plan and the lifecycle of the trial, including unique points of emphasis for a given trial.
The data hierarchy must be easy to traverse
Analysis involved being able to move easily from high level issue identification to low level root cause, and back again.
Lean UX Canvas
The project team struggled with defining a shared vision for the application. To help them bring this into focus around valuable outcomes for the customer, as opposed to centering on features, I introduced them to the concept of a Lean UX canvas. This provided a structure to clarify the problem we were solving, identify the outcomes for the users and the business that we were aiming for, and list the evidence that would show we were successful. Considering the user research findings within the structure of the Lean UX canvas helped the team brainstorm many ideas for how the tool could provide unmistakable value to customers and the business.
Example takeaways
Giving users visibility into how the underlying clinical study data was sourced, mapped, and reviewed before being visualized on a dashboard would build confidence in its accuracy, while also opening up key task flows between our application and the other applications in the encompassing data management suite that performed these functions.
By integrating a study’s plan and schedule for data analysis, the application could anticipate user needs and automatically deliver the most appropriate data views at any given moment.
SME panel and user interviews
I worked with the product owner to facilitate sessions with a panel of SMEs and users that she organized to give us ongoing input into our designs.
Talking with the SMEs and users gave me a deeper understanding of goals, tasks, and pain points, as well as the larger context of the work. This included learning about the differing task focus of one role vs. another. It also allowed us to obtain feedback on early designs and refine them to address user needs.
Competitive research
This allowed me to see how competitors were solving similar design challenges. For example, it showed me how navigation among clinical trial studies, data domains and dashboards was being supported. It also showed examples of terminology and screen layout. In that way, it could spawn ideas, as well as show areas where we could gain a competitive advantage by providing a better solution in our product.
Information architecture
DESIGN
Enabling accelerated insight
Overall design
Given that the application, particularly in its early manifestation, would contain a large collection of dashboards and reports, and that users needed flexibility to explore the data space based on situation and need, the ability to easily find the right analytical view for the task at hand would be paramount. Users also needed the ability to easily change their focus from one clinical trial to another, while being able to resume previous work.
The information space was a multi-level hierarchy with various intersecting filtering parameters. For example, oncology studies had data specific to that therapeutic area that needed to be visualized in a certain way. Also, depending on their role, users might focus only on a specific level of the data hierarchy, such as the subject (patient) level.
Subset of the parameters that shape the user's desired view of the data
The application would need to eventually grow to provide a comprehensive, well organized set of dashboards and reports, covering the operational, clinical, and data health dimensions of a clinical trial. It would also need to support the users in traversing the natural relationships between these views, so that they could troubleshoot issues within the clinical trial and understand how one occurrence (ex. trend or outlier) in the trial was affecting another. Therapeutic areas like oncology could have their own specialized views (ex. looking at change in tumor size over time).
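To make this concrete, the sketch below (hypothetical TypeScript, not the project’s actual schema) models the kinds of parameters that shaped which views a user needed, and how a dashboard catalog could be narrowed by them.

```ts
// Hypothetical model of the parameters shaping a user's desired view of the data.
// Names and values are illustrative only.
type HierarchyLevel = 'cross-study' | 'study' | 'country' | 'site' | 'subject';
type DataDomain = 'operational' | 'clinical' | 'data-health';

interface ViewContext {
  studyId: string;
  role: 'clinical-monitor' | 'medical-monitor' | 'data-reviewer';
  level: HierarchyLevel;        // the rung of the hierarchy the user is focused on
  domain: DataDomain;           // the dimension of the trial being analyzed
  therapeuticArea?: string;     // e.g., 'oncology' studies needed specialized views
}

interface DashboardDescriptor {
  id: string;
  title: string;
  description: string;          // short description surfaced in navigation
  levels: HierarchyLevel[];     // hierarchy levels the dashboard covers
  domain: DataDomain;
  therapeuticAreas?: string[];  // present only for specialized views
}

// Narrow a dashboard catalog to the views relevant to the user's current context.
function relevantDashboards(catalog: DashboardDescriptor[], ctx: ViewContext): DashboardDescriptor[] {
  return catalog.filter(d =>
    d.domain === ctx.domain &&
    d.levels.includes(ctx.level) &&
    (!d.therapeuticAreas ||
      (ctx.therapeuticArea !== undefined && d.therapeuticAreas.includes(ctx.therapeuticArea)))
  );
}
```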
Early information architecture concept
User Goals
I considered the goals that users would have when they used the application. Below are a few examples.
First time user
Determine what the application allows me to do.
Determine where I should begin.
Understand how the application is organized.
Inexperienced user
Obtain guidance on how to analyze a given domain and type of study.
Experienced user
Determine what analysis I need to perform today.
Resume analysis from previous session or time period.
Access the protocol for a study to provide context for my analysis.
Perform analysis not covered by the out-of-the-box dashboards and reports.
Review the history of analysis performed on a previous study.
I mapped the typical user flow as they perform their clinical trial analysis.
Early Design
Final Design
I thought about how I might help users to:
- Identify the study or studies they wished to analyze
- Orient themselves to the application
- Quickly identify views that met their current goal through a flexible set of pathways
- Easily resume previous analysis
I first looked at providing multiple navigational paths, filtering options, and categorization. I also exposed key attributes of each dashboard up front, including a short description and the portion of the data it focused on. I provided shortcuts to return to recently viewed dashboards and featured overview dashboards that provided starting points for analysis. A keyword search was included to quickly identify data views that displayed a specific KRI or data field.
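As a rough illustration of the keyword search idea (hypothetical code, with made-up field names), a query could be matched against dashboard titles, descriptions, and the KRIs or data fields each view displays:

```ts
// Hypothetical sketch of the keyword search concept explored for navigation.
interface DashboardEntry {
  title: string;
  description: string;   // short description exposed up front
  fields: string[];      // KRIs and data fields the view displays
}

function searchDashboards(entries: DashboardEntry[], query: string): DashboardEntry[] {
  const q = query.trim().toLowerCase();
  if (!q) return entries;
  return entries.filter(e =>
    e.title.toLowerCase().includes(q) ||
    e.description.toLowerCase().includes(q) ||
    e.fields.some(f => f.toLowerCase().includes(q))
  );
}

// Example: searching 'screen failure' would surface any view displaying a
// screen-failure KRI, regardless of where it sits in the dashboard hierarchy.
```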
The final design for the early release removed filtering and search, pushing them to a future release due to feasibility and project timelines. Thumbnails were added to provide a dashboard preview, and descriptions were moved to an overlay that could be exposed on demand. Typography was refined to establish a clear visual hierarchy. I discussed with the team that some of the terminology they suggested, such as ‘Self-Service Reports’, could be replaced with more familiar language going forward.
Solution
Future Concept
While the first release of the application would need to be kept simple due to time and budget constraints, I envisioned that the application could later grow to provide much more task support and flexibility.
- A tab structure could allow multiple studies and dashboards to be open at the same time, supporting data comparison and cross-study work.
- Intelligence could be brought to the tool to proactively surface insights and conditions of interest to the user up front, helping them to know where to focus their analysis.
- Suggested mitigating actions could be offered based on what has worked in past similar situations.
- Support for aligning to a team’s schedule for analysis and for collaboration among roles could also be added. For example, shortcut links to alerts, schedule of analysis, team collaborators and related conversations for each dashboard could be present to help coordinate clinical trial analysis and retain context.
Dashboards
This clinical analytics application was to provide a whole range of dashboards and reports covering the clinical and operational aspects of a clinical trial. I led the design of some of the dashboards myself and guided the work of another designer on others.
Addressing process challenges
Observing that the project team was struggling to integrate a healthy UX design process and to collaborate with the other designer as they worked on dashboards, I diagnosed the issues and introduced changes to improve the process.
Improving UX integration and team collaboration
To provide a model for how to navigate the UX design process and to allow for key meetings to occur when needed, I created a process overview to guide the team.
I also incorporated the design process directly into a Figma file template, to both guide a designer and to orient a project team as to where to find artifacts for each process step.
Page structure within the Figma template that I created to help guide and organize dashboard design projects
Establishing a better starting point
The requirements we were receiving for a dashboard were not effectively defining the design problem; instead, they tended to lean toward a presumed solution. This contributed to a slow start for the design effort. To address this, I suggested changes to the SAFe Epic Hypothesis Statement template and worked with the product owner to gain its adoption. I suggested including coverage of:
- What changes in customer behavior will indicate you have solved a real problem in a way that adds value to your customers?
- A well-structured user story statement
- What will the dashboard be used to monitor, and what objectives will it support?
- Who will use the dashboard, and what questions should the dashboard answer for those users?
Contributing to the design system
A design system is a way not only to inject consistency and best practice, but also to drive innovation.
We were looking for ways to bring more innovation to our applications in order to meet the challenges that users faced in performing specialized, complex tasks informed by large amounts of interrelated data. Creating new design patterns is a way to introduce such innovations, bridging the needs of multiple applications and combining the ideas of the team. I proposed creating a multi-panel layout that could be used across apps to clearly organize information while providing users the flexibility to expose what they needed at a given point in their task.
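To give a feel for the pattern (a hypothetical sketch, not the design system component we built), a multi-panel layout lets the user expose or collapse supporting panels as their task requires:

```tsx
// Hypothetical sketch of a multi-panel layout pattern: a primary content area with
// supporting panels that the user can expose or collapse as needed.
import { ReactNode, useState } from 'react';

type PanelId = 'context' | 'details' | 'conversation';

export function MultiPanelLayout(props: { main: ReactNode; panels: Record<PanelId, ReactNode> }) {
  const [open, setOpen] = useState<PanelId[]>([]);

  const toggle = (id: PanelId) =>
    setOpen(prev => (prev.includes(id) ? prev.filter(p => p !== id) : [...prev, id]));

  return (
    <div style={{ display: 'flex' }}>
      <main style={{ flex: 1 }}>{props.main}</main>
      {(Object.keys(props.panels) as PanelId[]).map(id => (
        <aside key={id}>
          {/* Panels stay collapsed until needed, keeping the main view uncluttered */}
          <button onClick={() => toggle(id)}>
            {open.includes(id) ? `Hide ${id}` : `Show ${id}`}
          </button>
          {open.includes(id) && <section>{props.panels[id]}</section>}
        </aside>
      ))}
    </div>
  );
}
```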
Protocol deviations dashboard
This was one of the dashboards on which I performed the hands-on design work. It was to surface data on protocol deviations, which are instances where a clinical trial did not follow the protocol, i.e., the rules and procedures for the trial.
User roles
For each dashboard that I worked on, I would identify the user roles that would use it and the pertinent questions that those users needed to answer via the data. I worked with the product owner as well as SMEs. I also leveraged my understanding from previous related project work and user research.
Study level
Project lead
Clinical lead
Study lead
Trial manager
Country level
Country manager
Site level
Site manager
Lead CRAs / CRAs
Other
Sponsor
Medical Advisor
Therapeutic area lead
Program director
Early Concept
Based on what I learned about the user needs, I created a concept diagram to arrange the user questions of interest on the screen, along with the appropriate data relationships to make the answers salient. This allowed us to focus on providing the insights that the users needed, rather than immediately designing the visualizations. In the concept below, I arranged the questions in tiers to allow the user to focus at the cross-study, study, country, and site level, while zeroing in on the type, location, and rate at which protocol deviations (abbreviated ‘PDs’) were occurring.
Early Wireframe
I translated the concept into a wireframe, filling in the appropriate visualizations based on best practice. These could be shared with the subject matter experts and the product team for feedback to further refine the design.
‘Per Subject’ and ‘Per Visit’ rates were included to act as leveling metrics, since studies with more subjects would naturally have more protocol deviations.
The design also attempted to show how the protocol deviations could be impacting subjects dropping out of the study (early termination rate).
CRAs are a clinical trial role whose responsibilities include reporting protocol deviations. Using the chart at the bottom, it would be easy to see which CRAs were reporting the fewest protocol deviations, which could indicate a problem, and which were taking the longest to do so.
A box plot was used to convey how consistent protocol deviations per subject were across sites and which sites stood out as outliers.
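The leveling metrics above amount to a simple normalization; here is a hypothetical sketch with illustrative field names:

```ts
// Hypothetical sketch of the 'Per Subject' and 'Per Visit' leveling metrics: raw protocol
// deviation (PD) counts are normalized so sites and studies of different sizes compare fairly.
interface SiteStats {
  siteId: string;
  protocolDeviations: number;
  enrolledSubjects: number;
  completedVisits: number;
}

function pdRates(s: SiteStats) {
  return {
    perSubject: s.enrolledSubjects ? s.protocolDeviations / s.enrolledSubjects : 0,
    perVisit: s.completedVisits ? s.protocolDeviations / s.completedVisits : 0,
  };
}

// Example: 30 PDs at a 120-subject site (0.25 per subject) reads very differently from
// 30 PDs at a 40-subject site (0.75 per subject), even though the raw counts match.
```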
Feedback from subject matter experts
We used panel discussions with subject matter experts to help refine the designs. Sometimes they raised needs that were not anticipated by the original requirements provided by the product owner. But they pointed us toward valuable outcomes that we could provide users in future releases. One example is being able to compare the prevalence of protocol deviations under different protocols. Another is having a subject-level view of the protocol deviations affecting a given subject. Often in dashboard design, fulfilling such needs requires new data to be captured and structured to make such views possible.
Final Design
The final design provided a well rounded view of protocol deviations occurring within a single study. At the top of the dashboard, it provided context in terms of the size of the study, while also listing key metrics of protocol deviation occurrence and impact. It also provided insight into which types of protocol deviations were most pervasive, as well as the countries and sites most affected.
I used a consistent arrangement of metrics at the country and site level, allowing for faster digestion of information. This included placing rates of screen failure and early termination alongside protocol deviation occurrence to support users in weighing the impact of protocol deviations on study enrollment and dropout.
I also used an indicator (color and shape) to draw a user’s attention to a condition of interest, where they needed to take action because a particular severity level of protocol deviation was occurring too often.
Monthly trends were included pervasively to help users see where situations were getting better or worse. They would also help users determine whether actions being taken to reduce protocol deviations at a site, for example, were working.
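The action indicator described above is essentially a thresholding rule; here is a hypothetical sketch with made-up threshold values:

```ts
// Hypothetical sketch of the condition-of-interest indicator: flag when a given severity of
// protocol deviation occurs too often. Thresholds are illustrative only, not clinical guidance.
type Severity = 'minor' | 'major' | 'critical';
type Indicator = 'none' | 'attention';   // 'attention' renders as a distinct color and shape

const MAX_PD_PER_SUBJECT: Record<Severity, number> = {
  minor: 0.5,
  major: 0.2,
  critical: 0.05,
};

function indicatorFor(severity: Severity, pdCount: number, enrolledSubjects: number): Indicator {
  if (enrolledSubjects === 0) return 'none';
  const ratePerSubject = pdCount / enrolledSubjects;
  return ratePerSubject > MAX_PD_PER_SUBJECT[severity] ? 'attention' : 'none';
}
```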
AI Chatbot
Design Problem
The product team wanted to add AI capabilities to the product in the form of a chatbot that would allow users to ask questions about the clinical trial data and receive answers as text and data visualizations. This was meant to help the product be competitive through the application of AI, but also to fill gaps in analysis that the pre-defined dashboards might not cover.
More specifically, it could help users understand the data and whether any of it was missing, create custom visualizations quickly, and ask specific investigative questions. In this way, it could lead to additional data insights, save time spent creating charts manually, and support the nuances of each clinical trial.
We focused the design of this feature on a subset of the user roles:
Clinical Monitors
Medical Monitors
Data Reviewers
Discovery
To identify user needs, I gathered information from discussions with subject matter experts from the application’s target user roles. I also drew upon what I had learned in the earlier user research with clinical monitors.
Brainstorming how AI could be used
Since the product team came with a preconceived AI feature, I did not have the opportunity up front to match AI capabilities to user needs. So I played some catch-up by thinking creatively about how AI could be used in this application.
For example, the user research showed that clinical monitors could be asked questions by sponsors that were specific to a clinical trial and not covered by existing dashboards. It could take considerable time to respond to such a question by downloading data, manually doing calculations, and creating the right visualization. Users’ skills for doing this varied, as it might require them to know Python, for example. The AI chatbot could potentially allow the clinical monitor to respond to such questions on the spot by using the right prompts, thus saving time and accounting for skill gaps the user may have.
Another example from the research is that I found it took a significant amount of time for new monitors to learn how to perform analysis. By keeping a historical record of AI conversations, new monitors could potentially leverage real examples in an efficient manner to both learn and execute effective lines of questioning of the data via the right sequence of prompts.
Understanding how the AI worked
To be able to understand how the AI capabilities could be brought to bear on the challenges users faced, and to best design the UI, I felt it was important to understand how the AI chatbot was to be implemented, as well as to have overall knowledge of the capabilities that AI offers. For this reason, I took internal classes that the company offered in the technical aspects of AI, and read a good book on designing for AI written by a designer with experience in this area.
I also met with the data scientist who was working on the AI to learn about the architecture, their vision for the feature, and their needs for gathering user feedback on its effectiveness.
Here are some of the technical questions that I considered:
- Will the AI engine be trained on past prompts? One time or continually?
- What data will the AI have access to?
- Will the AI engine have access to both how the data looks currently and how it looked at various points in the past, for example so that it could calculate the difference between this month’s values and last month’s?
- What training of the AI will take place?
- Will the AI already be trained to identify conditions in the clinical trial data that need attention, ex. serious adverse events trending upwards, labs that are outside of the reference range, protocol deviations that are oddly numerous at a particular site, etc.?
- Could the AI communicate with the data visualization platform we were using?
The data scientist helped me understand the strengths and limitations of the AI platform, and where I could help them create a feedback loop to advance the AI going forward.
Design
I used a Crazy 8s exercise to brainstorm various ways that the AI chatbot capabilities could be integrated into the application. Although the team was going to start small by having a dedicated screen for the chatbot, these ideas opened up possibilities for allowing the AI to work more hand-in-hand with users as they worked within the rest of the application.
Based on one of the charts on the dashboard, potential issues to investigate could be surfaced by AI. The user could then opt to see the logic behind how the AI identified the issue.
Since users needed to understand the completeness and accuracy of the data, the AI could surface a visualization of the health of the data that is behind each chart.
Users could ask the AI questions about a specific chart.
Early Design Concepts
I began framing out what the UI might look like as a panel to the right of a dashboard or in a larger footprint. I thought about what pieces needed to be in place, such as how we would introduce the feature to new users, guide them in creating prompts, provide them a way to give feedback, and expose choices for what to do next. Thinking of AI in terms of building trust, and what elements needed to be present to do that, eventually led to a framework that I proposed to my team of designers for how to design for AI.
Creating a framework to use when designing for AI
Based on what I was learning, I created a framework to provide designers guidance on all of the aspects to be mindful of when designing an AI-driven experience, particularly with the objective of building trust with users. Not all of these aspects apply in every situation, so I listed design questions to consider. As my team was also in the process of building design system components, I indicated where there were opportunities to standardize particular elements.
Evolving the design
I took over the design of the AI chatbot at an early stage after another designer had started some work. My goals for the design included:
- Integrating the AI chatbot with the rest of the application, including allowing a user to retain the context of the study they were currently analyzing through use of the established dashboards and reports.
- Introducing the function to users and ensuring that they knew where to begin.
- Providing users an understanding of what was meant by a ‘conversation’ and how to begin a new one.
- Making it easy for users to navigate between conversations as well as studies.
- Providing users helpful options, such as the ability to copy responses for sharing.
- Supporting users’ ability to search for past conversations related to their present goal.
- Setting the right expectations about the scope of what the AI could be asked about and the accuracy of its responses.
Testing the design
At the time I left the project, we had not yet gotten to the point of testing the design. I stressed to the product team the importance of testing, which is particularly critical for AI-driven experiences, where each situation and user audience presents different levels of need with regard to task support and trust-building elements. It's also important to test the actual AI responses. The technical team was still working on the implementation and obtaining the right data to drive the AI.
Laying the groundwork for future increased user involvement
In the environment in which this project sat, it was often challenging to obtain users. There were a number of reasons for this, including the fact that the user roles were specialized and hard to recruit, both internally and externally. Internal users were busy, and there was no budget for recruiting. Also, the project teams were not versed in user research or usability testing, and these methods were not established. In response, I collaborated with our user research lead to find ways to overcome the barriers. Here is what I did:
- Partnered with the user research lead to perform some usability testing
- Identified potential recruitment and incentive methods
- Created a database of participants by using tools to search employees by role and experience.
- Advocated with my design org for user research tools to accelerate research and extend designer bandwidth.
Obtaining user feedback to drive the AI forward
It was a priority of the data scientist to include in the UI a way for users to provide feedback on the AI responses in terms of their accuracy, relevance, and completeness. They wanted this information so they could improve the AI. My objective was to encourage such feedback by making it inviting, intuitive, and quick for busy users, who may often prefer to skip this step.
I looked at how other applications supported user feedback to discern trends and effective designs. I also considered the following:
- the terminology that would be familiar to users
- minimizing the interaction needed
- minimizing the ‘clutter’ in the conversational UI
- applying the company design system
- providing users an incentive to give their feedback
- steering users away from entering private information
What I did:
- Only exposed the feedback option by default for the latest response.
- Used company design system components to assemble the feedback UI.
- Instead of the three 5-point scales envisioned by the product team for accuracy, relevance, and completeness, I used simple toggles with simpler labels for the dimensions, along with an optional comment box (see the sketch after this list).
- Provided a color-filled banner to confirm feedback submission and thank the user, helping to encourage future feedback.
- In addition to mockups, I provided a flow chart to the development team to help them understand the intended interaction behavior.
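Below is a hypothetical sketch of that interaction (not the delivered component): per-dimension toggles plus an optional comment, shown only for the latest response, with a confirmation state after submission.

```tsx
// Hypothetical sketch of the response-feedback UI. Dimension labels are illustrative.
import { useState } from 'react';

const DIMENSIONS = ['Accurate', 'Relevant', 'Complete'] as const;
type Dimension = (typeof DIMENSIONS)[number];
type Feedback = { ratings: Record<Dimension, boolean>; comment: string };

export function ResponseFeedback(props: { onSubmit: (feedback: Feedback) => void }) {
  const [ratings, setRatings] = useState<Record<Dimension, boolean>>({
    Accurate: false,
    Relevant: false,
    Complete: false,
  });
  const [comment, setComment] = useState('');
  const [submitted, setSubmitted] = useState(false);

  if (submitted) {
    // Confirmation banner thanks the user and encourages future feedback.
    return <div role="status">Thanks! Your feedback helps improve the AI's responses.</div>;
  }

  return (
    <form
      onSubmit={e => {
        e.preventDefault();
        props.onSubmit({ ratings, comment });
        setSubmitted(true);
      }}
    >
      {DIMENSIONS.map(d => (
        <label key={d}>
          <input
            type="checkbox"
            checked={ratings[d]}
            onChange={() => setRatings(r => ({ ...r, [d]: !r[d] }))}
          />
          {d}
        </label>
      ))}
      {/* Optional comment; users are steered away from entering private information. */}
      <textarea
        value={comment}
        onChange={e => setComment(e.target.value)}
        placeholder="Optional comment (please do not include private information)"
      />
      <button type="submit">Send feedback</button>
    </form>
  );
}
```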
RESULTS
Delivering early value and a foundation for growth
We were able to rapidly deliver an initial release that provided powerful out-of-the-box dashboards, along with the ability for users to respond to specific study needs through custom dashboard creation and AI prompts.
- Delivered a set of dashboards covering both clinical and operational aspects of a clinical trial, helping users spot issues and address them for the success of the trial.
- Provided users the ability to create custom dashboards for flexibility and greater applicability to study needs.
- Introduced a game-changing AI feature that would allow users to interact with the data in new ways, accelerating analysis.
- Provided simple navigation between studies and dashboards.
REFLECTION
Discovering new paths to value
Driving the design of this application within a specialized health domain, while also learning to be a UX manager, was not easy. I was able to strike an effective balance: leveraging best practices in dashboard and AI design, solving team collaboration issues, and making important compromises to meet an aggressive schedule. I also learned a few lessons to take forward.
Design the Path to User Involvement
It's important to start early in establishing conduits to users in specialized roles, working with stakeholders to provide the rationale for doing so and imaginatively overcoming barriers. Exposing stakeholders to the power of early research and usability testing can be very persuasive, and starting small is key.
Integrate AI Thoughtfully
Design can help ensure AI is applied properly by proactively looking at where in the user's journey the capabilities of AI can address pain points and accelerate work, and by prioritizing those opportunities based on AI capability and risk.
Go Short and Long
Designing for both the short and long term can help gain alignment around a product vision, so that future growth is taken into account by the application's architecture. Also, feedback we received from SMEs identified unanticipated needs that might require new data to be captured or new views to be planned.
Maintain the throughline from user research findings to delivered outcomes
Ensure that user needs identified in research are accounted for as near- and long-term solutions are prioritized.
Establish Success Metrics Early
Metrics need to be established to objectively determine if customers are obtaining the intended value from the application. This can help a lot with getting buy-in for improved designs, avoiding development of the wrong solution, and demonstrating the value of leading with design.