Live Member Checking
- Sarah Garrett
- Jun 21, 2022
MCL Guidance
In many traditions, qualitative data analysis occurs over a period of months or years, much of it after engagement with research participants has concluded. Researchers often conduct this analytic work without further participant input, but some embrace the practice of “member checking”: sharing their analysis with research participants to learn how well it aligns with participants’ experiences and perceptions.
Member checks reconnect participants with analysis, but they typically occur only after data collection has concluded or within longitudinal studies. This can raise practical dilemmas. Returning to the field months or years after collecting data may be difficult. Respondents may no longer recall the study or remain interested in it. Budgets may not include support for post-fieldwork follow-up or longitudinal data collection. Key members of the research team may have moved on to new work. In this blog, I share an approach to member checking that my colleagues and I developed, one that is sensitive to these challenges, has the potential to augment data quality, and can advance principles of team science and community engagement in research.
***
Scientists of all stripes are committed to “getting it right”: making sure their interpretation of data is valid. One tool for doing so is checking the analysis back with the individuals from whom the researcher collected data. Most studies in academic medicine, however, are not designed to support this, and circling back after analysis is complete can be impractical for the timelines of both the project and the participants. Moreover, on projects for which rapid analytic insights are needed, it may be problematic to delay analysis to elicit feedback. To address these challenges, on a project launched in 2016, I helped develop a method that colleagues and I came to call “live member checking.”
I was co-directing a study that used one-time in-depth interviews to understand the factors that shaped cancer patients’ adherence to oral anti-cancer medications. In recent decades, new generations of oral anti-cancer medications have emerged as an alternative to clinic-based intravenous therapies. For many patients, oral medications represent an exciting and more accessible treatment format, but the drugs can have severe side effects, must be taken according to a precise schedule, and can be remarkably expensive. We sought to understand what factors helped or hindered patients in adhering to their prescriptions.
We used a semi-structured interview guide, which included prompts about a number of factors based on the literature and our preliminary data. We systematically probed to understand participants’ health history, cancer and medication experiences, and how their priorities, resources, and relationships shaped their ability and willingness to regularly take their oral anti-cancer pills. We also asked questions to gather contextual information about their lives and to elicit their thoughts on experiences or views that fell outside the boundaries of our interview guide. Throughout, we were clear about our highest priority: according to the patient, what were the factors that made taking their medication feasible and desirable—or not?
To ensure we comprehensively captured factors relevant to the patient, we built a member checking process into the interview itself. I developed a graphics-based template, matched to the different prompts in the interview, that made it easy to take rapid structured notes. During the interviews, whether in person or over the phone, I took notes directly on the grid. At the end of the interview, I then shared back the factors I had noted, sorted by topic: side effects, cost, logistics, and so on. The goal was to provide a real-time check-in with the interviewee about whether I had captured everything they felt was relevant in each category. Reviewing these structured notes gave participants a chance to clarify items I had misunderstood, add factors I had missed, and correct notes I had recorded inaccurately. The process yielded high-quality and comprehensive data. As importantly, respondents felt validated and listened to. Many expressed their appreciation for the process.
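The template itself was a paper-based graphics grid rather than software, but for readers who want a concrete picture of its structure, here is a minimal sketch of the same idea in code. Everything in it (the category names beyond the topics mentioned above, the NoteGrid class, and the example entries) is a hypothetical illustration of the approach, not an artifact from the study.

```python
# Hypothetical sketch only: the actual template was a paper-based graphics
# grid. This digital analogue illustrates the structure of live member
# checking: one bucket of factors per interview topic, read back to the
# participant at the end of the encounter.

# Example topic categories, matched to the interview guide's prompts
# (the post names side effects, cost, and logistics; the rest are invented).
CATEGORIES = ["side effects", "cost", "logistics", "relationships", "other"]


class NoteGrid:
    """A minimal structured note grid, keyed by topic."""

    def __init__(self, categories):
        self.notes = {category: [] for category in categories}

    def jot(self, category, factor):
        """Record a factor under its topic as it comes up in the interview."""
        self.notes.setdefault(category, []).append(factor)

    def read_back(self):
        """Share the notes back, sorted by topic, for the member check."""
        for category, factors in self.notes.items():
            if factors:
                print(f"{category.title()}:")
                for factor in factors:
                    print(f"  - {factor}")


# During the interview, jot factors as the participant raises them
# (illustrative entries, not actual study data)...
grid = NoteGrid(CATEGORIES)
grid.jot("cost", "co-pay changes from month to month")
grid.jot("logistics", "pills must be taken on an empty stomach")

# ...then read them back so the participant can add, correct, or clarify.
grid.read_back()
```

Whatever the medium, the essential design is the same: a fixed set of topic buckets that mirror the interview guide, so notes can be taken quickly during the conversation and read back in an organized way at its close.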
In the past year, I used the same process for the focus groups and in-depth interviews I conducted for the MEND study: Multi-Stakeholder Engagement with State Policies to Advance Anti-racism in Maternal Health. In 2020, California passed a law mandating that all clinicians involved in perinatal care undergo training to decrease implicit bias. California lawmakers hoped such training could help address the persistent and immoral inequities in maternal health, particularly those borne by Black women and birthing people. Using community-based participatory methods, MEND aims to develop evidence to guide the development and implementation of clinician anti-bias training.
MEND launched during the COVID-19 pandemic, so all data collection was conducted over the phone or Zoom. We conducted one-time focus groups with Black women who had recently given birth in a hospital and individual interviews with perinatal clinicians. Our goal for both types of respondents was to identify the challenges, opportunities, and recommendations for effective implicit bias training. How could such training achieve improvements in the care and clinical outcomes of Black women and birthing people?
As I did in the cancer medication project, I developed templates to record notes about challenges, opportunities, and recommendations as I heard them. The focus groups were facilitated on Zoom by a community partner co-investigator; I took notes “behind the scenes” while off camera. At each group’s conclusion, I presented my notes and invited all participants to add, correct, and clarify. Interviews with clinicians followed the cancer project format: I took notes while interviewing respondents. In some cases, the checking prompted focus group participants and interviewees to add more factors or to clarify an idea. In all cases, respondents ultimately affirmed the analytic notes. And as in the cancer project, many participants expressed appreciation for the opportunity to hear their ideas presented back to them and accurately represented. After presenting my notes, I heard things like “you did a good job reiterating a lot of what we said” (focus group), “you captured everything really well” (focus group), “nice summary” (interview), and “I appreciate it” (interview).
In addition to enhancing data quality, I appreciate that the live member checking method supports participants’ involvement in analysis. The approach engages participants in the first moment of analysis, when the interviewer is trying out their interpretation of how a participant’s narrative fits with the research question. By creating opportunities for respondents to clarify, correct, and expand on what we discussed, the approach makes data collection more robust and democratic. For investigators whose timeframe or funding limits their ability to engage community members as advisors or co-investigators, live member checking is a modest way to enrich the research, enhance validity, and advance research justice during already-planned data collection.
Additionally, live member checking suits team science. The process yields real-time, participant-vetted insights from each interview or focus group. These insights can alert the team to emerging issues or help them to refine core study concepts. Shared with collaborators throughout the data collection, they support team self-reflection and better appreciation of the data.
For example, I often shared insights from the week’s data collection at MEND research and community advisor meetings. Approximately one-third of the way through data collection, before we had initiated data coding, I conducted a quick preliminary thematic analysis based on focus group and interview insights on our key research questions. Both the insights from specific cases and the early analysis gave our team a strong, participant-validated understanding of the data and helped us make informed decisions about ongoing data collection and analysis. I am confident we would not have been as well equipped to do this with transcript excerpts alone, or if we had had to wait until transcription and coding were complete to begin thematic analysis.
Despite its strengths, live member checking, like traditional member checking, is not right for all research projects. It may not work well for complex research questions or for analyses that require knowledge of particular settings, concepts, or theories. The projects I described above had straightforward goals: identify factors that constrained or enhanced adherence to oral anti-cancer medications or the effectiveness of clinician implicit bias training. The goal of each encounter was to answer those questions according to the respondents’ perception of the topic. Live member checking would not work as well for less straightforward research questions whose analysis could not be performed sufficiently “on the spot.”
Even when the research question lends itself to live member checking, it is not a panacea. The process must be implemented thoughtfully. It does not automatically produce valid data or overcome the challenges that face any qualitative project. Additionally, doing analysis while collecting data requires substantial expertise. It takes practice to navigate the multitasking of recording data and jotting insightful analytic notes during the live encounter. Live member checking also requires thoughtful design decisions. I’ve used the method in person, via Zoom, and over the phone. My notes from audio-only encounters proved most detailed because I was comfortable looking at my notes for long stretches; face-to-face interaction meant keeping my attention primarily on the participant.
Finally, there are power dynamics to consider. Some study participants comfortably embrace their role in confirming, correcting, and expanding on data. Others feel less empowered or less comfortable doing so. Additional strategies may help legitimize and support the process, such as explaining how critical feedback benefits the study or normalizing the task by describing how other participants have approached it.
Live member checking has served me and my collaborators well in the two studies described above. I expect to improve the method and its application by learning from thoughtful and theoretically informed examples of member checking in longitudinal research. New MCL member Jen James, PhD, for example, has developed a “collective dialogue process” to embed collective dialogue, a key tenet of Black feminist epistemology, into the longitudinal research encounter.
Having shared my humble addition to the subfield of member checking, I’d love to hear about the experiences of others who have used similar methods, as well as questions, critiques, and suggestions.