DIABETES MANAGEMENT APP VALIDATION
in-person testing, healthcare, international study
Project lead, moderator, root cause table writer
Bold Insight
CHALLENGES
A medical technology company focused on diabetes management engaged our team to conduct a summative human factors study evaluating its product with intended users in the United Kingdom.
"How might we evaluate our client's diabetes management app to ensure that recent design and usability updates still support safe and effective use for users with diabetes in the UK?"

Process
Our team worked with the client to select London, UK, as the ideal location for in-person summative testing. The client also provided the participant screening criteria needed to recruit adults with diabetes; these participants had a mix of experience with continuous glucose monitors (CGMs) and were approximately evenly split between adults with Type 1 and Type 2 diabetes. Our US team worked closely with our London-based recruiter and research facility contact to ensure smooth recruitment and facility setup ahead of our arrival, and we held weekly meetings with the client to review and finalize study materials and to track recruitment against quotas.
To ensure that the validation study met the required minimum of n=15 complete participant sessions within 5 testing days, our team implemented a two-pronged strategy: over-recruiting participants for additional sessions and running two testing rooms simultaneously.
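As a rough illustration of the scheduling math (the per-day session counts here are assumptions, not the actual study schedule): two rooms across five testing days gives ten room-days, and at two to three 180-minute sessions per room per day that is roughly 20 to 30 available slots against the 15 completed sessions required, leaving a buffer for no-shows, late cancellations, and incomplete sessions.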
The study design followed a simulated-use, in-depth interview human factors research approach in which participants completed tasks in a usability lab environment as if they were completing those same tasks in real life. Participants interacted with an iOS version of the app for simulated-use tasks and with a prototype of the app for knowledge task assessment questions, on a smartphone and an Apple smartwatch.
Each 180-minute in-person session began with a brief introduction and overview of the app. Participants were then asked to complete tasks related to typical use of the app and were provided with the other components or supplies needed to realistically simulate those tasks, such as a CGM applicator and twist cap. They were also asked about the information displayed throughout the app and their understanding of it. Our team implemented a multi-camera setup to ensure that all participant interactions were recorded and viewable by international study personnel via livestream.
Analysis
During the participant interviews, we collected performance data on simulated-use tasks and knowledge task assessment questions, as well as subjective feedback from participants. Root cause probing was conducted on all tasks the client identified as critical and medically relevant, and on non-critical or non-medically relevant tasks as time permitted.
Daily debriefs were held with the on-site client team to align on task scoring. Our team also conducted two-person verification between each moderator and notetaker pair on all tasks and scores to ensure alignment and accuracy.
Outcomes
Participants' performance data, subjective feedback, and root cause analyses of observed use errors were collected and delivered to the client in a final human factors engineering (HFE) report. Submission of the study and its findings in support of validation is currently in progress.