Introduction to reviewing and synthesizing qualitative evidence

Joann Starks: Good afternoon, everyone. I
am Joann Starks of SEDL in Austin, Texas and I will be moderating today’s webinar entitled
Introduction to Reviewing and Synthesizing Qualitative Evidence. It is the first in a
series of four webinars that make up an online workshop on qualitative research synthesis.
I also want to thank my colleague, Ann Williams, for her logistical and technical support for
today’s session. The webinar is offered through the Center on Knowledge Translation
for Disability and Rehabilitation Research, KTDRR, which is funded by the National Institute
on Disability and Rehabilitation Research. The KTDRR is sponsoring a community of practice
on evidence for disability and rehabilitation or D&R research. Evidence in the field of
disability and rehabilitation often includes studies that follow a variety of qualitative
research paradigms. Such evidence is difficult to summarize using traditional systematic
research review procedures. The goal of this series of web-based workshops is to introduce
D&R researchers to the methodology of qualitative evidence reviews. Participants will be provided
a state-of-the-art overview on current approaches and will learn to apply those to the literature
base. Ongoing innovative initiatives at review-producing institutions will be highlighted. Today, our
speaker is Karin Hannes, assistant professor at the Methodology of Educational Sciences
research group at the Catholic University of Leuven, or KU Leuven, in Belgium. She has a background
in adult education as well as medical and social sciences. Karin currently teaches qualitative
research methodology to undergraduates and masters students. She has been teaching evidence-based
practice and systematic review courses for over a decade, both in public health and educational
sciences. Karin is the founder of the Belgian Campbell Group, co-convenor of the Cochrane
Qualitative Research Group, and co-author of the Cochrane Handbook for Systematic Reviews
of Effectiveness. She has published several books and articles on qualitative evidence
synthesis, particularly on the critical appraisal of qualitative research. She also specializes
in visual research methodology. Thank you, Karin, for agreeing to conduct this introductory
session today on reviewing and synthesizing qualitative evidence. If you’re ready to go, please take it away.

Karin Hannes: Yes. Thank you, Joann, for such a nice introduction.
I have indeed been asked to introduce you to the fantastic and exciting world of systematic
reviews and then, more specifically, the qualitative evidence part of it. I’ll do my very best
to give it a bit of sex appeal, and I hope that by the end of this talk you will all be motivated to start your own review projects. I just want to outline what I’m going to talk about within this particular presentation. What I want to speak to you about is how I
actually got triggered by qualitative evidence synthesis, hoping that this would also lead
you into seeing that searching for evidence, looking at evidence, actually, has nothing
to do with research or science so much, but is or should be some sort of a common attitude
that people should adopt. I also want to clarify what qualitative research is and in my opinion
what sort of evidence it may generate. On top of that, I’d like to show how it’s
going to contribute to effectiveness reviews, so how it differs from them, and I’ll give
a quick but very brief insight in potential approaches that can be used when you consider
qualitative evidence synthesis, how you can build your own review protocol, and I will
illustrate these things with some work examples. How did I get triggered by qualitative evidence
synthesis? Let’s start at the very beginning. It’s always a good place to start. Meet
Emma, and Emma is the youngest in our family and I’m going to use her as a case to explain
why effectiveness reviews have failed me and, more specifically, what has been my worst evidence-based case scenario so far. Emma was actually born on the sixth of October, 2010, and is the little sister of Door and Polle, whom you might see on this slide. Apart from a lot of joy, she also brought me a lot of frustration, and I don’t know how many mothers
I have in the room but believe me after having been pregnant for the third time, it becomes
really, really hard to control your body especially your weight, and many moms will be able to
confirm that. After my third pregnancy, I not only gained 6 pounds that did not automatically
disappear again, but I further gained weight to the extent that I did not fit into half of my closet anymore. I was interested in knowing: what can I do to actually control the weight gain and get rid of the extra pounds? If you don’t know the answer
to your question, think a minute about where you would go look for it. So I did that and
I went looking in the Cochrane and Campbell library to see whether I could find reviews
that could provide me with an answer to that query. I found this Cochrane review on “Diet
or exercise, or both, for weight reduction in women after childbirth.” The answer to
my question from that review was that women who exercise did not lose significantly more
weight than women who were in the usual care group. That sort of comforted me so it meant
that I didn’t have to go out running or cycling for the upcoming five months. I also
learned that women who took part in a diet or diet plus exercise lost more weight than
women in the usual care. There was no difference in the magnitude of weight loss between diet
and diet plus exercise groups, and the intervention seemed not to affect breastfeeding performance. Then I came across something that I thought was a very important trigger for me. I found this study
in the Journal of the American College of Nutrition stating that those who ate cereals had lower rates of overweight compared to those who ate meat and eggs, bread, or even skipped
breakfast. So my simple logical reasoning actually was that if a diet helps to lose
weight after pregnancy and if cereals are proven to work well as a diet, then actually
the consumption of those cereals should lead to weight loss after my pregnancy. Right?
Wrong, because it didn’t. After having consumed bowls of cereal morning after morning for several
months, I didn’t achieve any effect and yes, that’s the moment where you actually
start panicking and thinking about, “Gosh, I’m not normal. I’m not like this average
person where it works. What is happening to me and what am I doing wrong? Am I not following
the protocol? Did I maybe buy the wrong type of cereals?” I was thinking and thinking,
and then realized that there must have been something that I had overlooked. Maybe there
was some sort of alternative explanation for not achieving the effectiveness the review
actually had promised to me. Instead of mourning about my weight, I started to go and dig a
little bit deeper into a different sort of literature and I came across a few qualitative
articles discussing, for example, the role of social support in weight loss, diet issues,
and so on, and also some of the barriers that had been perceived by mothers who engaged in weight loss programs. From these studies, I learned a lot of things about why I had such a hard time. I learned from the first study that female relatives, husbands,
and the right sort of people around you are the primary source of emotional, instrumental,
and informational support. Having just moved from Australia to Belgium at that point in
time, my social network, for example, was really thin. My family was living far from
me, and while I tried to reserve some time to exercise, my opportunities were actually very limited. Only I didn’t see that at that point in time. So I learned a lot about facilitating factors for engaging in weight loss programs. The second study highlighted a lot of barriers and facilitators that women had experienced, two of which may have applied to me. The first was unhealthy eating habits, and because of that statement in the
study, I started logging what I actually ate during daytime. While it wasn’t a lot, I
think the things I did eat contained a lot of fat, not in the least the cheese crackers
I was consuming on a daily basis. Secondly, I also suffered from some sort of light depression.
I wasn’t feeling good about myself. I was no longer able to hold my breath long enough
for diving. I couldn’t get my foot off the ground in dance class, and a lot of these
things actually came together in that situation. So I looked carefully at the conclusions of
the studies and they pleaded actually for community-based family-oriented programs to
increase the chance of successful weight reduction, which was not something that I had found in
the previous effectiveness review. The conclusion of Study 2 – weight loss intervention should
address the psychological effects of childbearing, affordability, and perceptions of body image
– was not something that was particularly taken into account in the programs described
in that particular review. So it reminds me a bit of this advertisement that displays a bald middle-aged man in his early 50s, and then they show him some sort of liquid with the message that it’s the only one approved in clinical tests to grow hair. Then if you turn
to the next image, then you see the same bald middle-aged man with hair growing all over
his body, his nose, his ears, his hands, except on his head with the message that individual
results may actually vary. After seeing that, I thought this is a perfect example of a wrong
effect but I now no longer panic because I’ve learned I may not be that average person and
there’s nothing abnormal about that. It happens to a lot of other people as well.
What I learned was that there are different sources of evidence that may need to be considered
and that qualitative evidence had been proven to be very valuable to me to explain a certain
situation. This is one of the most famous quotes in the history of systematic reviews.
It’s from Archie Cochrane, and the Cochrane Collaboration, which disseminates systematic reviews in healthcare, actually named the organization after this person. He stated that, “It’s surely a
great criticism of our profession,” meaning the health profession, “that we have not
organized a critical summary adapted periodically of all relevant randomized controlled trials.”
While I’m thinking that’s very true, I think it’s also a great criticism of our
profession that we have been foolish enough to think that critical summaries of relevant
randomized controlled trials would provide us with the right answer for each type of
query because we already learned that individual results may vary and that RCTs can’t explain
every sort of outcome. What we are now about to learn is that RCTs are also very limited in the number of questions they are able to answer. We used to see evidence in
terms of effectiveness research. It’s often mentioned in the context of trying to establish
some kind of causal relationship. I don’t know whether anyone of you ever looks at the
television series Sherlock Holmes, but Sherlock always goes like, “Watson, I know what caused
that.” Then Watson, who’s down to earth, says, “But we have only administered a few interviews and gone on site visits. Should you not collect evidence that is more robust?” Indeed, if you talk in terms of causal effects, qualitative techniques may be the worst choice to make. But if you talk about evidence in a different fashion and consider, for example, evidence of feasibility, the extent to which an intervention is practically, culturally, and financially possible within a given context, then the picture actually changes. The same goes when you want to assess the appropriateness of interventions, which is the extent to which an intervention fits with a situation and how it relates to the context in which it is given. RCTs are not able to provide you with a lot of relevant information for that. The same
with evidence of meaningfulness or the extent to which an intervention is positively or
negatively experienced by your target group or how it relates to people’s personal experience,
opinions, values, beliefs, or interpretations. So we have long neglected a whole bunch of questions because we couldn’t quite fit them into the straitjacket of an RCT. Apart
from these types of evidence that firmly link into intervention research, there are other
questions we might be asking like what’s the evidence of the cost benefit of a particular
intervention? What are the lived experiences of people with a certain condition or living
in some sort of deprived situation that we do not know a lot about? What actually do
people value or not in an intervention or maybe just in daily life? So I always wondered
what if Archie Cochrane had thought about organizing a critical summary adapted periodically
of all relevant qualitative research studies. Now, that would’ve made the difference because
then we might have had about 6,000 mixed-method reviews that provide us with a much more in-depth
understanding of a condition or an intervention. Not all sorts of questions require a mixed-method
approach. For example, questions related to understanding the meaning of a particular
phenomenon such as how people make sense of a particular chronic disease, or why they
behave or feel the way they do. These questions may be explored in a stand-alone qualitative
evidence synthesis. These would provide relevant information on their own, but that would be the easy way out, because mixing evidence is really hard. It’s methodologically challenging
and we’re still working on the development of methods to actually smoothen the integration
of combining quantitative and qualitative evidence. Why is that so? Some people would
argue that it’s ridiculous to think that that is going to work, that we can just in
fact mix apples with oranges. But I actually support Gene Glass in saying that mixed-method reviews are, of course, about mixing apples and oranges. In the study of fruit, nothing else would be sensible; to him, comparing apples to oranges would be the only endeavor worthy of true scientists. When you look at it, comparing apples to apples
is trivial, in a sense. In order to be able to mix different strands of evidence, so in
order to let them inform each other, you need to be able to understand what exactly a quantitative
and a qualitative study is and what you can do with it at the meta level. To my understanding,
most of you have already had some sort of introduction to quantitative studies. My job is actually to reveal a bit more about basic qualitative studies and how they can be used on
a meta level. First of all, I think it’s an inquiry of meaning. It addresses a different
sort of questions that go into the what of a phenomenon, the why things are what they
are, and the how people cope or deal with them. I very often tell my kids the story
of The Three Little Pigs and in case you’ve never heard about it, it’s a story about
Mama Pig who goes to the market and tells her three little piggies to find some sort
of shelter because there’s a big bad wolf running around. So each of the pigs builds
a house. The first piggy builds a house of straw, which is in effect the least solid, but the piggy then has plenty of time to play afterwards. The second one builds a wooden house that
requires a bit more work, but still there’s plenty of time to play. The third one builds
a brick house and is laughed at by his brothers because he’s all sweating while they are
actually playing and having all the fun. Then the wolf comes. He blows down the straw and
the wooden house, but he can’t blow down the brick house. That house then gave shelter
to the three pigs in the end. When I tell this story, what occurs to me is that my kids
start asking many questions about it such as, “Why weren’t the first two pigs smart
enough to build a brick house? Were they too lazy? Why did Mommy not take them with her
to the market?” These are all questions about the meaning of the story. They seldom
ask, “Well, did these pigs really exist or could they really talk with each other?”
That would be easy because I could answer that with a yes or a no. The other questions
though, they require some sort of deeper understanding. It’s what I would call a rich, deep, thick, insightful, or illuminating understanding. So many of our kids, and many adults, actually see
the world in terms of meaning and that’s what qualitative studies try to understand.
Looking at evidence, we can look at both strengths and how they can inform each other. For example,
I came across this effectiveness review on rehabilitation to enhance community integration after an acute traumatic brain injury. It’s a systematic review presented to me by Chet
Meyers who is one of the facilitators of the series. This effectiveness review actually
found that a lot of community integration programs show positive results and should
be studied more rigorously. The authors actually recommend that, to further establish whether post-acute traumatic brain injury rehabilitation interventions improve community integration, researchers should think about intervention strategies that are based on injury severity, for example.
They should take better care about their control groups. They should engage in longer-term
follow-up. If you look at it, these are all very instrumental suggestions to actually
improve the designs more than anything else. If you now look at qualitative evidence, looking
at the same topic of injuries and how people deal with it, then you see that these papers
actually look beyond numbers. Numbers are very bad at capturing experiences, nor do
they allow us to really conduct an in-depth exploration of a phenomenon. So I came across
this study from Gauvin-Lepage that really spoke to me in the sense that it succeeded
in capturing the lived experience of these people and how difficult it was to actually
reintegrate. In fact, the difficulties mentioned were not that much related to severity of
the condition, which was suggested in the quantitative review, but rather to the response
from the environment. This should certainly be taken into account when we start promoting interventions for community integration, because it tells us to focus on knowing and understanding what life was like before for these people, on managing the psychological imbalance that this condition actually evokes, and on reframing the expectations of the environment rather than focusing on the individual with the limitation itself. These are all things that would help people recover their social roles from before the injury. We know that famous cartoon where an interviewer goes up to a man in the street
and tells him to describe how he feels. The man then actually says, “The way I feel
is actually hard to quantify.” “Well,” said the interviewer, “how hard on a scale
from 1 to 10 would that be?” So it really shows that if we’ve been trained in a particular tradition of research, all sorts of interesting layers may not appear on our personal radar. Another example of what qualitative evidence can actually mean to enrich the insights
of quantitative studies is this cross-sectional study from Carpenter that indicates that life
satisfaction is more strongly related to community participation than to the impairment itself
and activity limitations. So in the article, Carpenter really pleads for community participation,
and that’s actually a good thing but then again, I found this study from Newman and
I included a short video, or a fragment of it, that actually states a lot of barriers to and facilitators of people functioning in their own community that should be taken care of when we intend to promote community participation. I’d like to look at this video with you.
Okay. At the end of this video, I just want to pick up on two points. The first one is
that the video really shows that we’re actually not limited to textual accounts in qualitative research; we use a lot of our senses to actually reach out to what people want to express. The second thing is that it also shows that we can address a different layer
of knowledge here in community participation, and it’s probably a layer of knowledge that
quantitative reviews just can’t access. The next point I want to make is that in line
with what Archie Cochrane actually mentioned is that it would be great if we could summarize
or synthesize all the qualitative evidence that has been generated from different contexts
while still remaining sensitive to that context and bringing that together into some sort
of new theoretical, practical insight. What I wanted to present next is a short definition
of qualitative evidence synthesis that we came up with in our Cochrane guidance chapter on dealing with qualitative research. Actually, it is a process of summarizing qualitative
research findings by comparing and analyzing textual, visual, or other sorts of research
evidence that might even be performance or dance-related type of evidence. It can be
derived from multiple accounts or just one event, one phenomenon or a situation. The
important thing is that it has to be reported in basic qualitative research studies. Why
am I saying this? The term qualitative review, or qualitative evidence synthesis, is actually very often misused. It is often applied to reviews of quantitative research in which statistical pooling is just not an option, where the authors then summarize the information from the studies in some sort of narrative. This is actually not a qualitative synthesis, because
the basic ingredient of that synthesis is still quantitative. It remains quantitative
information. These narratives are then actually very descriptive in nature. They don’t go
beyond the descriptives into a more profound level of interpretive understanding. You really
need to be able to distinguish between these two. Now, what this qualitative evidence synthesis
can do is explore questions such as how do people experience a condition or a situation?
I think I’ve just shown that through the examples we’ve discussed. Why does an intervention
work or may not work, for whom? In what particular circumstances does it work or does not work?
What are potential barriers and facilitators related to a program you’re trying to implement,
and what impact do specific barriers and facilitators have not only on the program but also on the
people, their experiences, their behavior? The definition of qualitative evidence synthesis
is actually very close to the definition of synthesis displayed in the Oxford English
Dictionary. It states that synthesis is actually a process or a result of building up separate
elements, especially ideas, into a connected whole, especially a theory or a system. So
I think that there are three components in my definition of qualitative evidence synthesis.
I always call it a systematic empirical inquiry into meaning. It is systematic in the sense
that a qualitative evidence synthesis also has some sort of protocol as a starting point.
They’re planned. They’re ordered and structured. The process of conducting a qualitative evidence
synthesis may not be as linear as a quantitative review but what we do as an author is actually
reconstruct that logic of science into some sort of linear report of what we’ve done.
It’s empirical in the sense that it comes very close to the original intention of the word
“empirical” which is that it depends upon the world of our experience. It builds on
what we can capture with our senses, with all of our senses. It is above all an inquiry into meaning, because we try to develop a more complex picture of a phenomenon, and that is what I called rich, deep, thick, textured, and insightful as well when I was talking about The Three Little Pigs and the questions it generated in my children. Now that we know
what qualitative evidence synthesis is, it’s maybe interesting to look into how it can
specifically contribute to treatment effectiveness reviews. It contributes in many ways. We already
learned from the pregnancy and the hair growth example that it contributes to the understanding
of heterogeneity between effects and between individuals, but it can do more than that.
It also provides a research-based context for interpreting and explaining trial results. We can take up a set of different questions, such as how to achieve change more effectively, how to improve our interventions, how to fit the subjective needs of our target group or even of the ones who are in charge of delivering the program, and what other types of interventions might be needed to make it more successful. I think that overall this would greatly increase the quality of the interventions we are engaged in. It provides evidence on subjective
experiences of those involved in developing, delivering, receiving interventions, or even
living with a particular condition or in a specific vulnerable environment, for example.
It can also reveal the extent to which effective interventions are actually adopted in policies
and practice. It engages a lot with questions related to implementation. This is actually
a short excerpt from our Chapter 20 in the Cochrane Handbook on the role of qualitative
evidence into Cochrane effectiveness reviews. As you can see, we identified four different
roles of qualitative evidence synthesis. It can inform a review by using evidence from
qualitative research to help define and refine questions. We are very used to developing our
own research questions and in the past before I engaged with qualitative evidence synthesis,
I wasn’t at all too sensitive to the fact of whether or not my question was actually
relevant to those out in the field and those having to implement what we academics were
actually suggesting. By qualitative research, you can actually probe people to refine your
question, make it a better match for them. We can also enhance reviews by synthesizing
evidence from qualitative research that is identified while you’re actually looking
at evidence of effectiveness. A lot of the RCTs we include in quantitative reviews, they
actually contain some implementation of process-related qualitative information that we could summarize
but that we tend to neglect too much. A lot of these studies may also have sibling studies floating around on the web that are closely connected to the effectiveness question but not actually incorporated in the same study. So we can enhance these reviews by actually looking at that evidence. We can extend them by searching specifically for evidence from
qualitative studies to address questions that relate to the effectiveness reviews but are
not focused on effectiveness itself. For example, if we didn’t achieve the intended outcome, why not? Another question is: do we see deviations from the protocol, and how can we deal with them? How can we change the program so that it would be sensitive to the concerns of those involved? Or we can also supplement effectiveness reviews
by just synthesizing qualitative evidence within a stand-alone, complementary review
and in these types of reviews, we addressed questions on aspects other than effectiveness.
For example, how do people experience living with a particular condition? There have been a lot of drivers for mixing both strands of evidence. One of them is a
greater recognition of the value of qualitative research in the evidence policy movement.
We have also been faced in the past with a lot of empty reviews, stating, “Oh, we didn’t
find any RCTs,” and we all know how hard it is to conduct an RCT, for example, in a
school setting. You can’t just randomize pupils across interventions. You have to work
with groups. So there are a lot of practical limitations that lead to these empty reviews, while there is lots of evidence that could be taken into account. There’s also an increasing demand from funders for an incorporation of public perspectives and experiences in those reviews, which has sparked the mixed-methods debate. The most simple interventions, within Cochrane at least, have long been researched, and we’re now facing more complex questions and interdisciplinary fields that can’t be accessed through RCTs alone. There’s a lot of interest in issues of process and implementation to actually optimize programs. There are also traditions growing in the primary research area: a lot of mixed-method primary research studies are coming up, and these are also used to motivate reviewers to take a mixed-method approach.
There’s a lot more funding for these types of reviews as well, and the most dedicated
methods groups that have long catered for qualitative evidence are actually now moving
into a more mixed-method type of thinking. I haven’t spoken yet about how different qualitative evidence syntheses are from reviews of effectiveness. We know the questions they might ask, but
there are certain differences that are more on a conceptual level which I’d like to
speak to you about. I’ve stolen this metaphor from David Gough and James Thomas. James Thomas
might be one of the speakers coming up in a later phase of this series, but they positioned
quantitative and qualitative reviews on a spectrum between aggregative and configurative
types of reviews. Let’s start with the easy one that everyone is familiar with, which
is the aggregative type of metaphor. Metaphors are really interesting because they make visible what we otherwise lack the capacity to express. So this is actually the metaphor of the pile of stones for meta-analysis, for quantitative types of reviews, because what they actually do is identify individual studies and then pool the results of each of these studies, but each of these studies in the meta-analysis remains visible. They don’t tear them apart.
They just create one overall measure of effect across these studies. What they do is they
increase the power of the measure there. I’m not sure whether you’ve been hiking in Wales,
for example, in the UK but there’s a lot of mist there. What they do for hikers is
they pile these stones up so that it makes it easy for people to actually find their
way. This is some sort of metaphor to say like you increase the power compared to just
one stone lying there flat with an arrow on it. The next slide is actually an example
from Cochrane of a quantitative bit from a mixed-method review, and this review actually
evaluated the impact of lay health workers in primary and community health care for maternal
and child health and the management of infectious diseases. What this review concluded as one
of the main conclusions is that lay health workers can increase immunization uptake in
children below the age of five years old. You can see the parallel with the pile of
stones here. You see all the individual studies on the left of your screen, from the Barnes study to the Rodewald study. Then on the right-hand side, you see the effect measured
for each of these individual studies. At the end of the plot, you can see this diamond
that actually shows that lay health workers have a positive impact on the maternal and
child health. If you look at the individual studies, the picture is much more confusing.
Not all of them are on the same side. Not all of them have confidence intervals that
fall within the positive area. It’s only by pooling them that you get some sort of
clear picture of where the true effect may lie. Now, the second metaphor I’d like to
use is one of a mosaic and this really links into the idea of configuration that is very
central to qualitative evidence synthesis. What we do, we rearrange. We configure the
findings of primary studies in order to generate new theory or explore the salience of existing
theory in particular situations. What we actually do is piece together research knowledge from
different contexts. As you can see in that picture of the mosaic, the individual studies
are no longer recognizable. We actually turn them into a new holistic understanding of
a particular phenomenon. It becomes much more diffuse what the actual role of one particular
study is in these reviews. What is so important in qualitative evidence synthesis is that
you are very sensitive to context because what we do is we piece together research knowledge
from different contexts, and you can’t just pile that up. You need to remain sensitive
to where this evidence comes from and whether it would be generalizable to other countries
or not. Just to give you one small anecdote on the role of context in our daily life,
this is an anecdote I’ve stolen from one of my colleagues, Cynthia, who is from the
U.S. She was telling me that she phoned her husband, saying, “Honey, I’m
running late. Can you please put the chicken on the stove? I love you a lot.”
“I hope he does it,” she thought to herself. Then she came home, and what she found was
the chicken on the stove, but not particularly in the way she had meant it, because she actually
meant, “I’m late, so I want you to cook dinner for us tonight.” He had interpreted
it as, “Yes, I’ll just put the box on the stove and it will have thawed by the
moment Cynthia arrives.” So there are lots of examples in our daily life where we
can see that context is very important. What strikes me the most is that in quantitative
studies, we filter out context as much as possible. We create experimental studies in
some sort of virtual laboratory environment, so it’s remarkable that these streams
have not been brought together much, much earlier in history, because we know that we
can’t do the one without the other to create some sort of full understanding of phenomena.
So what you see here is the qualitative part of that mixed-method review produced by Claire
Glenton on barriers and facilitators to the implementation of these lay health worker
programs to improve access to maternal and child health care. What it actually addressed
instead of effectiveness of these lay health workers was the program acceptability, feasibility,
and appropriateness. It looked into the lay health workers’ relationships with other health
professionals and with their clients, the patient flow process, service integration,
socio-cultural conditions, and so on. The nice thing about it is that it gives a clear
overview of what people liked from the point of view of the lay health workers, from
the point of view of the clients, and from the point of view of the professional workers
who needed to work with these lay health workers on a daily basis. What this information can
contribute to the whole is that it allows you to see what is going on underneath. I’m
fully aware that this is very small for you to see, but on the right-hand side, you see
the overall outcome of the review, improved health outcomes among mothers and children,
and some of the more secondary outcomes like better quality of service, including
appropriateness of consultation services. That part was actually evaluated through the
quantitative synthesis, but the nice thing is that the qualitative synthesis identified
a lot of negative and positive moderators in the relation between lay health
workers and clients or other health professionals. For example, one of the negative moderators
was that health professionals were really concerned about their own loss of authority,
and about the knowledge of these lay people and whether it would be enough. On the positive
moderator side, people explained that they really liked the lay health workers because
of the fact that they had more time. They were really supportive, but they did feel
a lack of knowledge as well, so they actually found each other in that particular group.
What you can do is you can bring in all that qualitative evidence in your flow chart of
your review and it provides you with a much clearer understanding of what is actually
going on. I’m going to sum this up with some of the differences between the
meta-analysis and the meta-synthesis streams of evidence. Meta-analysis is actually trying
to accumulate knowledge. The studies that you include in such a review have to be strictly
comparable to be able to pool them statistically, and it aims at creating more power. Think
about the pile of stones that I’ve shown you earlier today. They do that through numerical
data, while the meta-synthesis branch is actually trying to make sense of data. So the studies
that you include don’t have to give the exact same account. They can be picked
from different contexts, even different target groups. They need to have some sort of basic
comparability on the level of the phenomenon or the topic of interest of your research,
and they add value in content through interpretation. We can move on to some of the general approaches
that can be used in qualitative evidence synthesis. To me, that’s a very difficult thing to
do. It used to be simple in the quantitative world, where you only had to choose between
a random-effects or a fixed-effect model. To me, the approaches to qualitative evidence synthesis remind me of a big
huge circus tent that hosts all these different approaches, and they’re very different in
terms of methodologies, perspectives, strategies they use. It is really a little bit more complex
than choosing between a fixed or a random model in meta-analysis. Qualitative evidence synthesis
is actually just an umbrella term which encompasses many different approaches. In other words,
there’s room for many different types of views about qualitative evidence synthesis
that can fit comfortably underneath this big tent. One thing that helps you to choose
between all these methods is to think about the purpose of your qualitative evidence
review, and some purposes can be linked to certain types of qualitative evidence synthesis.
If you’re interested in bringing together separate findings into some sort of interpretive
explanation, you want to generate theory. You want to bring some newness, some holistic
point of view. You could choose, for example, meta-ethnography, and I’m very sure that
this type of approach will return in some of the other lectures. It’s one of the most
commonly used approaches. On the other hand, if you critically approach the literature
in terms of deconstructing research traditions or theoretical assumptions, if you want to
critique the work of others, then critical interpretive synthesis would be something
you should consider. Other reviewers like to produce theories or models that are based
on their topic of interest and may involve, for example, interactive processes, contextualized
understanding and action. They would rather move into grounded theory on a meta-level.
Some data you encounter are more or less descriptive. They don’t call for a lot of interpretation,
and in that case meta-aggregation would be a potential option. I personally work a lot
with meta-aggregation. It’s an approach that mirrors the linear approach of quantitative
reviews and summarizes evidence in order to develop lines of actions for practice and
policy. There are a lot of more complex review types that really try to unpick relationships
between persons and environments, that try to formulate patterns. For example, with such
an intervention, what outcomes would I expect, and how do I think that links into different
populations, for example, on the level of gender, or different settings – schools,
hospitals, and so on? There are also approaches that bring together research of different
designs and paradigms. Meta-narrative is an approach that caters for quantitative as well
as qualitative basic research, for example. So that’s one area that would help you to
choose. Another one is that you need to take some sort of stance epistemologically if you
want to choose a particular approach. As you can see, all of these approaches are somewhere
situated between a realist perspective and an idealist perspective. The idealist perspective
is much closer to interpretive science. The realist perspective is a bit closer to what
positivists would believe in, in the sense that they believe that there is some sort
of external truth out there that we can search for using qualitative evidence. An idealist
type of researcher, on the other hand, claims that there is actually no shared reality that is independent
of human construction. That means that every synthesis, every meta-level type of
product is actually colored through the lens of the review author. So depending on where
you are, some of these approaches would link better to your personal position than others.
That’s a different axis for you to consider. So there’s the purpose and the aim of your
review, and there’s the position that you take on what knowledge is and how it can be
generated. It leads a bit into the difference between people who are rather on the inquiry side
of qualitative research and those on the more empirical, scientific side of generating qualitative evidence.
So it links into that debate. Then there are a few other axes you need to consider if you want to choose
between approaches. That is what experience do I have in my team and what sort of resources
do I have? That’s very important to consider. Some of these approaches are very complex
to execute, and some require a lot more time than others. You can actually try to jot down
all of the information on the nature of the research, including the aims, and on the resource
requirements. Is it something that needs a lot of work? Does it search comprehensively
or more purposefully? You can jot down elements on the nature of your research team:
what experience do I have? Do I have quantitative researchers who want to move into
qualitative synthesis, or do I have experienced qualitative researchers? That could move us
along toward the more interpretive or the more aggregative side. There’s also the question
of how much structure I prefer: do I prefer a linear approach, or am I comfortable with
iterative, non-linear approaches to evidence synthesis, and so on? Based on how you weigh each of these categories,
you can then choose the approaches that my colleagues are going to outline in the next
series of lectures. This whole choosing process is actually outlined in one of the books we
produced with some colleagues of mine in the Methods Group. It’s available, and
we also have a mailing list for the Cochrane Qualitative and Implementation Methods Group where
people can actually post questions for discussion. If you’re stuck in choosing things, if you’re
stuck on definitions, or you don’t know how to approach things, people sometimes post
questions to us and we do our best to try and answer those as well. Joann Starks: Well,
thank you very much, Karin. That was a wonderful presentation. I also want to thank everyone
for participating today. We hope you found the session to be informative and that you
will join us for the next three webinars. As you can see on the final slide – let me move
to that slide – we have a link to a brief evaluation form, and we’d really appreciate your input.
We will also be sending an email with the link for the evaluation. On this final note,
I would like to conclude today’s webinar with a big “thank you” to our speaker,
Karin Hannes, from myself, Ann Williams, and all of the staff at the KTDRR. We appreciate
the support from NIDRR to carry out the webinars and other activities. We look forward to your
participation in the following sessions. Thank you.
