In This Article

  • Introduction
  • Sampling Strategies
  • Sample Size
  • Qualitative Design Considerations
  • Discipline Specific and Special Considerations
  • Sampling Strategies Unique to Mixed Methods Designs

Related Articles

  • Mixed Methods Research
  • Qualitative Research Design
  • Quantitative Research Designs in Educational Research

Qualitative, Quantitative, and Mixed Methods Research Sampling Strategies

By Timothy C. Guetterman

LAST REVIEWED: 26 February 2020
LAST MODIFIED: 26 February 2020
DOI: 10.1093/obo/9780199756810-0241

Introduction

Sampling is a critical, often overlooked aspect of the research process. The importance of sampling extends to the ability to draw accurate inferences, and it is an integral part of qualitative guidelines across research methods. Sampling considerations are important in quantitative and qualitative research when considering a target population and when drawing a sample that will either allow us to generalize (i.e., quantitatively) or go into sufficient depth (i.e., qualitatively). While quantitative research is generally concerned with probability-based approaches, qualitative research typically uses nonprobability, purposeful sampling approaches. Scholars generally focus on two major sampling topics: sampling strategies and sample sizes. Put simply, researchers must decide whom to include and how many; both of these concerns are key. Mixed methods studies have both qualitative and quantitative sampling considerations. However, they also have unique considerations that arise from the relationship between the quantitative and qualitative strands within the study.

Sampling in Qualitative Research

Sampling in qualitative research may be divided into two major areas: overall sampling strategies and issues around sample size. Sampling strategy refers to the process of sampling and how a sampling plan is designed. Qualitative sampling typically follows a nonprobability-based approach, such as purposive or purposeful sampling, in which participants or other units of analysis are selected intentionally for their ability to provide information that addresses the research questions. Sample size refers to how many participants or other units are needed to address the research questions. The methodological literature about sampling tends to fall into these two broad categories, though some articles, chapters, and books cover both concepts. Others have connected sampling to the type of qualitative design that is employed. Additionally, researchers might consider discipline-specific sampling issues, as much research operates within disciplinary views and constraints. Scholars in many disciplines have examined sampling around specific topics, research problems, or disciplines and provide guidance on making sampling decisions, such as appropriate strategies and sample size.
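
To make the contrast concrete, the sketch below is illustrative only (the participant roster, field names, and thresholds are invented, not drawn from the article): it shows how a probability-based draw differs from a purposeful, criterion-based selection in which cases are chosen for what they can tell us.

    # Illustrative sketch only: contrasting a probability draw with a purposeful,
    # criterion-based selection from a hypothetical participant roster.
    import random

    random.seed(42)
    pool = [{"id": i,
             "years_experience": random.randint(0, 20),   # invented attribute
             "uses_practice": random.random() < 0.4}       # invented attribute
            for i in range(200)]

    # Probability sampling (quantitative aim: generalize to the pool).
    probability_sample = random.sample(pool, k=30)

    # Purposeful (criterion) sampling (qualitative aim: depth from information-rich
    # cases): keep only experienced practitioners who have used the practice of interest.
    purposeful_sample = [p for p in pool
                         if p["uses_practice"] and p["years_experience"] >= 5]

    print(len(probability_sample), len(purposeful_sample))

The random draw tries to mirror the pool; the purposeful selection deliberately does not, trading representativeness for information-rich cases.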


Related Documents

Linking Research Questions to Mixed Methods Data Analysis Procedures

The purpose of this paper is to discuss the development of research questions in mixed methods studies. First, we discuss the ways that the goal of the study, the research objective(s), and the research purpose shape the formation of research questions. Second, we compare and contrast quantitative research questions and qualitative research questions. Third, we describe how to write mixed methods research questions, which we define as questions that embed quantitative and qualitative research questions. Finally, we provide a framework for linking research questions to mixed methods data analysis techniques. A major goal of our framework is to illustrate that the development of research questions and data analysis procedures in mixed method studies should occur logically and sequentially.

Mixed methods research in pedagogy: Characteristics, advantages and difficulties in application

Mixed methods research, as a new type of research, is discussed in the context of re-examining the relation between the two approaches, i.e., the possibility of not opposing them but of connecting, combining, and integrating them within a single study. The possibility of grounding research in this way could be quite important for pedagogy, since most of the phenomena examined in this field have both quantitative and qualitative aspects. In order to explain the essence of mixed methods research, the paper analyses its characteristics: first of all, what elements are combined, what the nature of the relation between the combined elements is, and why they are combined, as well as its advantages and difficulties in application. The essential point of mixed methods research is that combining refers to the research process as a whole, including the ontological and epistemological assumptions on which it is based, which implies that the elements being combined are understood rather broadly. The advantages and difficulties in application are considered in the context of discussing the possibility of combining all research elements, neutralising the limitations of quantitative and qualitative research methodology, implementing the complex combining procedures, etc.

How Marketers Conduct Mixed Methods Research

The complementary nature of qualitative and quantitative research methods is examined with respect to a study assessing the market's view of a training and development institute in the Middle East. The qualitative portion consisted of focus groups conducted with seven distinct market segments served by the institute. The results proved insightful in uncovering and understanding differences of opinion among the seven groups; however, taken alone, the qualitative research would have been very misleading with respect to the institute's standing in the Middle East.

Using Mixed-methods Research in Health & Education in Nepal

In the areas of health promotion and health education, the mixed-methods research approach has become widely used. In mixed-methods research, also called multi-methods research, researchers combine quantitative and qualitative research designs in a single study. This paper introduces the mixed-methods approach for use in health education research. To illustrate this pragmatic research approach, we include an example of mixed-methods research as applied in Nepalese research. (Journal of Health Promotion, Vol. 6, 2008, pp. 45-48)

Research Methods

This paper discusses three common research approaches, qualitative, quantitative, and mixed methods, along with the various research designs commonly used when conducting research within the framework of each approach. Creswell (2002) noted that quantitative research is the process of collecting, analyzing, interpreting, and writing the results of a study, while qualitative research is an approach to data collection, analysis, and report writing that differs from traditional quantitative approaches. This paper provides a further distinction between quantitative and qualitative research methods and presents a summary of the different methods used to conduct quantitative, qualitative, and mixed methods studies.

Distinguishing Between Quantitative and Qualitative Research: A Response to Morgan

This is a response to Morgan's article (Journal of Mixed Methods Research, 12(3), 268-279) on the qualitative/quantitative distinction. I argue that Morgan has mischaracterized my views on this distinction, and on the value of design typologies in mixed methods research, and that the qualitative/quantitative distinction is more productively framed on a different basis than the one he proposed.

Analysis of Community Preparedness Facing Erosion Disaster in Sambas Regency

This research was conducted to analyze the community's preparedness to face erosion in Sambas Regency, namely erosion caused by river water. The study is mixed methods research, combining qualitative and quantitative research, with a sample of people living in riverbank areas. Heads of local households were also asked for information on preparedness to face erosion disasters. The results on community preparedness for erosion in Sambas Regency indicated, in the researchers' view, that the community was not ready: only about 15% of those who completed the research questionnaire were prepared. Interviews with residents indicated that the absence of counselling and training on preparedness was also among the causes of the low level of community preparedness.

Kesejahteraan Anak Adopsi Usia Prasekolah (3-5 Tahun) [The Welfare of Preschool-Age Adopted Children (3-5 Years)]

Child welfare is the responsibility of the family, because the child is part of the family. In reality, however, many children are still neglected, so that their welfare is threatened. Abandoned children need protection to ensure their survival. One effort to address the problem of neglected children is an institution-based child service program delivered through child social service institutions. However, institution-based child services have not been optimal in realizing children's welfare. Thus, children in institution-based care need to be transferred to family-based care so that their welfare can improve. One permanent form of child care is adoption. The method used in this research is mixed methods, with an Explanatory Sequential Mixed Methods design: the researchers first measured the level of children's welfare quantitatively and then followed up with qualitative research. The quantitative results regarding the welfare of preschool-age adopted children show that the welfare of adopted children is basically in the good category. The qualitative results found that the background, reasons, and motivation of adoptive parents to adopt affect the care of the adopted child, so that the child's welfare can improve. Most adoptive parents do not yet have biological children, so the presence of an adopted child completes their long-awaited family. The opportunity they receive to adopt leads them to care for, nurture, and treat their adopted child very well. They pay close attention to the child's physical, psychological, social, and cognitive development so that the child's welfare can be achieved.

Change, Challenges, and Mixed Methods

This chapter discusses three ongoing issues related to the evaluation of qualitative research. First, the chapter considers whether a set of evaluation criteria is either determinative or changeable. Due to the evolving nature of qualitative research, it is likely that the way in which qualitative research is evaluated can change—not all at once, but gradually. Second, qualitative research has been criticized by newly resurrected positivists whose definitions of scientific research and evaluation criteria are narrow. “Politics of evidence” and a recent big-tent evaluation strategy are examined. Last, this chapter analyzes how validity criteria of qualitative research are incorporated into the evaluation of mixed methods research. The elements of qualitative research seem to be fairly represented but are largely treated as trivial. A criterion, the fit of research questions to design, is identified as distinctive in the review guide of the Journal of Mixed Methods Research.

Mixed Methods Research in Developing Country Contexts: Lessons From Field Research in Six Countries Across Africa and the Caribbean

Mixed methods research in developing countries has been increasing since the turn of the century. Given this, there is a need to consolidate insights for future researchers. This article contributes to the methodological literature by exploring how cultural factors and logistical challenges in developing contexts interplay with mixed methods research design and implementation. Insights are based on the author's research experience of using mixed methods in six projects across three African and three Caribbean countries. Three lessons are provided to aid researchers using mixed methods in developing countries. First, cultural factors call for more reflexivity. Second, adopting a pragmatic research paradigm is necessary. And third, the research process should be iterative and adaptive.


Purposeful sampling for qualitative data collection and analysis in mixed method implementation research

Lawrence A. Palinkas,1 Sarah M. Horwitz,2 Carla A. Green,3 Jennifer P. Wisdom,4 Naihua Duan,5 and Kimberly Hoagwood

1 School of Social Work, University of Southern California, Los Angeles, CA 90089-0411
2 Department of Child and Adolescent Psychiatry, New York University, New York, NY
3 Center for Health Research, Kaiser Permanente Northwest, Portland, OR
4 George Washington University, Washington, DC
5 New York State Neuropsychiatric Institute and Department of Psychiatry, Columbia University, New York, NY
Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.

Recently there have been several calls for the use of mixed method designs in implementation research ( Proctor et al., 2009 ; Landsverk et al., 2012 ; Palinkas et al. 2011 ; Aarons et al., 2012). This has been precipitated by the realization that the challenges of implementing evidence-based and other innovative practices, treatments, interventions and programs are sufficiently complex that a single methodological approach is often inadequate. This is particularly true of efforts to implement evidence-based practices (EBPs) in statewide systems where relationships among key stakeholders extend both vertically (from state to local organizations) and horizontally (between organizations located in different parts of a state). As in other areas of research, mixed method designs are viewed as preferable in implementation research because they provide a better understanding of research issues than either qualitative or quantitative approaches alone ( Palinkas et al., 2011 ). In such designs, qualitative methods are used to explore and obtain depth of understanding as to the reasons for success or failure to implement evidence-based practice or to identify strategies for facilitating implementation while quantitative methods are used to test and confirm hypotheses based on an existing conceptual model and obtain breadth of understanding of predictors of successful implementation ( Teddlie & Tashakkori, 2003 ).

Sampling strategies for quantitative methods used in mixed methods designs in implementation research are generally well-established and based on probability theory. In contrast, sampling strategies for qualitative methods in implementation studies are less explicit and often less evident. Although the samples for qualitative inquiry are generally assumed to be selected purposefully to yield cases that are “information rich” (Patton, 2001), there are no clear guidelines for conducting purposeful sampling in mixed methods implementation studies, particularly when studies have more than one specific objective. Moreover, it is not entirely clear what forms of purposeful sampling are most appropriate for the challenges of using both quantitative and qualitative methods in the mixed methods designs used in implementation research. Such a consideration requires a determination of the objectives of each methodology and the potential impact of selecting one strategy to achieve one objective on the selection of other strategies to achieve additional objectives.

In this paper, we present different approaches to the use of purposeful sampling strategies in implementation research. We begin with a review of the principles and practice of purposeful sampling in implementation research, a summary of the types and categories of purposeful sampling strategies, and a set of recommendations for matching the appropriate single strategy or multistage strategy to study aims and quantitative method designs.

Principles of Purposeful Sampling

Purposeful sampling is a technique widely used in qualitative research for the identification and selection of information-rich cases for the most effective use of limited resources ( Patton, 2002 ). This involves identifying and selecting individuals or groups of individuals that are especially knowledgeable about or experienced with a phenomenon of interest ( Cresswell & Plano Clark, 2011 ). In addition to knowledge and experience, Bernard (2002) and Spradley (1979) note the importance of availability and willingness to participate, and the ability to communicate experiences and opinions in an articulate, expressive, and reflective manner. In contrast, probabilistic or random sampling is used to ensure the generalizability of findings by minimizing the potential for bias in selection and to control for the potential influence of known and unknown confounders.

As Morse and Niehaus (2009) observe, whether the methodology employed is quantitative or qualitative, sampling methods are intended to maximize efficiency and validity. Nevertheless, sampling must be consistent with the aims and assumptions inherent in the use of either method. Qualitative methods are, for the most part, intended to achieve depth of understanding, while quantitative methods are intended to achieve breadth of understanding (Patton, 2002). Qualitative methods place primary emphasis on saturation (i.e., obtaining a comprehensive understanding by continuing to sample until no new substantive information is acquired) (Miles & Huberman, 1994). Quantitative methods place primary emphasis on generalizability (i.e., ensuring that the knowledge gained is representative of the population from which the sample was drawn). Each methodology, in turn, has different expectations and standards for determining the number of participants required to achieve its aims. Quantitative methods rely on established formulae for avoiding Type I and Type II errors, while qualitative methods often rely on precedents for determining the number of participants based on the type of analysis proposed (e.g., 3-6 participants interviewed multiple times in a phenomenological study versus 20-30 participants interviewed once or twice in a grounded theory study), the level of detail required, and the emphasis on homogeneity (requiring smaller samples) versus heterogeneity (requiring larger samples) (Guest, Bunce & Johnson, 2006; Morse & Niehaus, 2009; Padgett, 2008).
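
As a minimal illustration of the kind of "established formula" used on the quantitative side (the alpha, power, and effect-size values below are assumptions for the example, not values from the paper), the normal-approximation sample-size calculation for comparing two group means can be sketched as follows; the qualitative precedents cited above have no comparable closed-form rule.

    # Sketch of a standard power calculation (normal approximation) for a
    # two-group mean comparison; alpha, power, and effect size are illustrative.
    import math
    from statistics import NormalDist

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        """Approximate participants per group needed to detect a given Cohen's d."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # guards against Type I error (two-sided)
        z_beta = NormalDist().inv_cdf(power)           # guards against Type II error
        return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    print(n_per_group(0.5))  # roughly 63-64 per group for a medium effect at 80% power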

Types of purposeful sampling designs

There exist numerous purposeful sampling designs. Examples include the selection of extreme or deviant (outlier) cases for the purpose of learning from unusual manifestations of the phenomenon of interest; the selection of cases with maximum variation for the purpose of documenting unique or diverse variations that have emerged in adapting to different conditions and of identifying important common patterns that cut across variations; and the selection of homogeneous cases for the purpose of reducing variation, simplifying analysis, and facilitating group interviewing. A list of some of these strategies and examples of their use in implementation research is provided in Table 1.

Table 1. Purposeful sampling strategies in implementation research

Emphasis on similarity

Criterion-i
  Objective: To identify and select all cases that meet some predetermined criterion of importance.
  Example: Selection of consultant trainers and program leaders at study sites to identify facilitators and barriers to EBP implementation.
  Considerations: Can be used to identify cases from standardized questionnaires for in-depth follow-up.

Criterion-e
  Objective: To identify and select all cases that exceed or fall outside a specified criterion.
  Example: Selection of directors of agencies that failed to move to the next stage of implementation within the expected period of time.

Typical case
  Objective: To illustrate or highlight what is typical, normal, or average.
  Example: A child undergoing treatment for trauma.
  Considerations: The purpose is to describe and illustrate what is typical to those unfamiliar with the setting, not to make generalized statements about the experiences of all participants.

Homogeneity
  Objective: To describe a particular subgroup in depth, to reduce variation, simplify analysis, and facilitate group interviewing.
  Example: Selecting Latino/a directors of mental health services agencies to discuss challenges of implementing evidence-based treatments for mental health problems with Latino/a clients.
  Considerations: Often used for selecting focus group participants.

Snowball
  Objective: To identify cases of interest by sampling people who know people that generally have similar characteristics, who in turn know people, also with similar characteristics.
  Example: Asking recruited program managers to identify clinicians, administrative support staff, and consumers for project recruitment.
  Considerations: Begins by asking key informants or well-situated people "Who knows a lot about…" (Patton, 2001).

Extreme or deviant case
  Objective: To illuminate both the unusual and the typical.
  Example: Selecting clinicians from state mental health agencies with the best and worst performance records or implementation outcomes.
  Considerations: Extreme successes or failures may be discredited as being too extreme or unusual to yield useful information, leading one to select cases that manifest sufficient intensity to illuminate the nature of success or failure, but not in the extreme.

Emphasis on variation

Intensity
  Objective: Same objective as extreme case sampling but with less emphasis on extremes.
  Example: Clinicians providing usual care and clinicians who dropped out of a study prior to consent, to contrast with clinicians who provided the intervention under investigation.
  Considerations: Requires the researcher to do some exploratory work to determine the nature of the variation of the situation under study, then sampling intense examples of the phenomenon of interest.

Maximum variation
  Objective: To identify important shared patterns that cut across cases and derive their significance from having emerged out of heterogeneity.
  Example: Sampling mental health services programs in urban and rural areas in different parts of the state (north, central, south) to capture maximum variation in location.
  Considerations: Can be used to document unique or diverse variations that have emerged in adapting to different conditions.

Critical case
  Objective: To permit logical generalization and maximum application of information, because if it is true in this one case, it is likely to be true of all other cases.
  Example: Investigation of a group of agencies that decided to stop using an evidence-based practice to identify reasons for lack of EBP sustainment.
  Considerations: Depends on recognition of the key dimensions that make for a critical case. Particularly important when resources may limit the study to only one site (program, community, population).

Theory-based
  Objective: To find manifestations of a theoretical construct so as to elaborate and examine the construct and its variations.
  Example: Sampling therapists based on academic training to understand the impact of CBT training versus psychodynamic training in graduate school on acceptance of EBPs.
  Considerations: Sample on the basis of potential manifestation or representation of important theoretical constructs. Sampling on the basis of emerging concepts, with the aim being to explore the dimensional range or varied conditions along which the properties of concepts vary.

Confirming and disconfirming case
  Objective: To confirm the importance and meaning of possible patterns and to check the viability of emergent findings with new data and additional cases.
  Example: Once trends are identified, deliberately seeking examples that are counter to the trend.
  Considerations: Usually employed in later phases of data collection. Confirmatory cases are additional examples that fit already emergent patterns and add richness, depth, and credibility. Disconfirming cases are a source of rival interpretations as well as a means for placing boundaries around confirmed findings.

Stratified purposeful
  Objective: To capture major variations rather than to identify a common core, although the latter may emerge in the analysis.
  Example: Combining typical case sampling with maximum variation sampling by taking a stratified purposeful sample of above-average, average, and below-average cases of health care expenditures for a particular problem.
  Considerations: This represents less than the full maximum variation sample, but more than simple typical case sampling.

Purposeful random
  Objective: To increase the credibility of results.
  Example: Selecting for interviews a random sample of providers to describe experiences with EBP implementation.
  Considerations: Not as representative of the population as a probability random sample.

Nonspecific emphasis

Opportunistic or emergent
  Objective: To take advantage of circumstances, events, and opportunities for additional data collection as they arise.
  Considerations: Usually employed when it is impossible to identify the sample or the population from which a sample should be drawn at the outset of a study. Used primarily in conducting ethnographic fieldwork.

Convenience
  Objective: To collect information from participants who are easily accessible to the researcher.
  Example: Recruiting providers attending a staff meeting for study participation.
  Considerations: Although commonly used, it is neither purposeful nor strategic.

Embedded in each strategy is the ability to compare and contrast, to identify similarities and differences in the phenomenon of interest. Nevertheless, some of these strategies (e.g., maximum variation sampling, extreme case sampling, intensity sampling, and purposeful random sampling) are used to identify and expand the range of variation or differences, similar to the use of quantitative measures to describe the variability or dispersion of values for a particular variable or variables, while other strategies (e.g., homogeneous sampling, typical case sampling, criterion sampling, and snowball sampling) are used to narrow the range of variation and focus on similarities. The latter are similar to the use of quantitative central tendency measures (e.g., mean, median, and mode). Moreover, certain strategies, like stratified purposeful sampling or opportunistic or emergent sampling, are designed to achieve both goals. As Patton (2002 , p. 240) explains, “the purpose of a stratified purposeful sample is to capture major variations rather than to identify a common core, although the latter may also emerge in the analysis. Each of the strata would constitute a fairly homogeneous sample.”
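
The dispersion-versus-central-tendency analogy can be made concrete with a small sketch (the attitude scores and case labels below are invented): variation-expanding strategies pull cases from the tails or across the whole range, while variation-narrowing strategies cluster cases around the middle.

    # Sketch with invented scores: how variation-expanding vs variation-narrowing
    # purposeful strategies behave on a hypothetical 0-100 attitude measure.
    import statistics

    scores = {f"clinician_{i}": s for i, s in enumerate(
        [12, 25, 33, 41, 47, 50, 52, 55, 58, 61, 66, 72, 80, 88, 95])}
    ranked = sorted(scores, key=scores.get)

    # Extreme/deviant case sampling: outliers at both ends (expands variation).
    extreme_cases = ranked[:2] + ranked[-2:]

    # Maximum variation sampling: picks spread evenly across the range.
    maximum_variation = ranked[::len(ranked) // 5][:5]

    # Typical case sampling: cases nearest the median (narrows variation).
    median = statistics.median(scores.values())
    typical_cases = sorted(ranked, key=lambda k: abs(scores[k] - median))[:3]

    for label, picks in [("extreme", extreme_cases),
                         ("maximum variation", maximum_variation),
                         ("typical", typical_cases)]:
        spread = max(scores[k] for k in picks) - min(scores[k] for k in picks)
        print(label, sorted(picks), "score spread:", spread)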

Challenges to use of purposeful sampling

Despite its wide use, there are numerous challenges in identifying and applying the appropriate purposeful sampling strategy in any study. For instance, the range of variation in a sample from which a purposive sample is to be taken is often not really known at the outset of a study. To set as the goal the sampling of information-rich informants that cover the range of variation assumes one knows that range of variation. Consequently, an iterative approach of sampling and re-sampling to draw an appropriate sample is usually recommended to make certain that theoretical saturation occurs (Miles & Huberman, 1994). However, that saturation may be determined a priori on the basis of an existing theory or conceptual framework, or it may emerge from the data themselves, as in a grounded theory approach (Glaser & Strauss, 1967). Second, there are a not insignificant number of researchers in the qualitative methods field who resist or refuse systematic sampling of any kind and reject the limiting nature of such realist, systematic, or positivist approaches. This includes critics of interventions and "bottom up" case studies and critiques. However, even those who equate purposeful sampling with systematic sampling must offer a rationale for selecting study participants that is linked with the aims of the investigation (i.e., why recruit these individuals for this particular study? What qualifies them to address the aims of the study?). While systematic sampling may be associated with a post-positivist tradition of qualitative data collection and analysis, such sampling is not inherently limited to such analyses, and the need for such sampling is not inherently limited to post-positivist qualitative approaches (Patton, 2002).

Purposeful Sampling in Implementation Research

Characteristics of implementation research

In implementation research, quantitative and qualitative methods often play important roles, either simultaneously or sequentially, for the purpose of answering the same question through convergence of results from different sources, answering related questions in a complementary fashion, using one set of methods to expand or explain the results obtained from use of the other set of methods, using one set of methods to develop questionnaires or conceptual models that inform the use of the other set, and using one set of methods to identify the sample for analysis using the other set of methods (Palinkas et al., 2011). A review of mixed method designs in implementation research conducted by Palinkas and colleagues (2011) revealed seven different sequential and simultaneous structural arrangements, five different functions of mixed methods, and three different ways of linking quantitative and qualitative data together. However, this review did not consider the sampling strategies involved in the types of quantitative and qualitative methods common to implementation research, nor did it consider the consequences of the sampling strategy selected for one method or set of methods on the choice of sampling strategy for the other method or set of methods. For instance, one of the most significant challenges to sampling in sequential mixed method designs lies in the limitations the initial method may place on sampling for the subsequent method. As Morse and Niehaus (2009) observe, when the initial method is qualitative, the sample selected may be too small and lack the randomization necessary to fulfill the assumptions for a subsequent quantitative analysis. On the other hand, when the initial method is quantitative, the sample selected may be too large for each individual to be included in qualitative inquiry and lack the purposeful selection needed to reduce the sample size to one more appropriate for qualitative research. The fact that potential participants were recruited and selected at random does not necessarily make them information rich.
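
A small sketch of that narrowing step (the survey values and cut-offs are invented for illustration): rather than drawing follow-up interviewees at random from the quantitative sample, the researcher purposefully keeps the cases most likely to be information rich, here the strongest supporters and strongest skeptics.

    # Sketch with invented survey responses: purposefully narrowing a quantitative
    # sample to an information-rich qualitative subsample instead of drawing at random.
    import random

    random.seed(11)
    survey = [{"id": i, "perceived_value": random.randint(1, 7)} for i in range(120)]

    # Intensity-style narrowing: strongest skeptics and strongest supporters.
    ranked = sorted(survey, key=lambda r: r["perceived_value"])
    interviewees = ranked[:8] + ranked[-8:]

    # A same-sized random draw would not guarantee information-rich cases.
    random_draw = random.sample(survey, k=16)

    print(sorted(r["id"] for r in interviewees))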

A re-examination of the 22 studies and an additional 6 studies published since 2009 revealed that only 5 studies (Aarons & Palinkas, 2007; Bachman et al., 2009; Palinkas et al., 2011; Palinkas et al., 2012; Slade et al., 2003) made a specific reference to purposeful sampling. An additional three studies (Henke et al., 2008; Proctor et al., 2007; Swain et al., 2010) did not make explicit reference to purposeful sampling but did provide a rationale for sample selection. The remaining 20 studies provided no description of the sampling strategy used to identify participants for qualitative data collection and analysis; however, a rationale could be inferred based on a description of who were recruited and selected for participation. Of the 28 studies, 3 used more than one sampling strategy. Twenty-one of the 28 studies (75%) used some form of criterion sampling. In most instances, the criterion used is related to the individual's role, either in the research project (i.e., trainer, team leader) or the agency (program director, clinical supervisor, clinician); in other words, a criterion of inclusion in a certain category (criterion-i), in contrast to cases that are external to a specific criterion (criterion-e). For instance, in a series of studies based on the National Implementing Evidence-Based Practices Project, data collection included semi-structured interviews with consultant trainers and program leaders at each study site (Brunette et al., 2008; Marshall et al., 2008; Marty et al., 2007; Rapp et al., 2010; Woltmann et al., 2008). Six studies used some form of maximum variation sampling to ensure representativeness and diversity of organizations and individual practitioners. Two studies used intensity sampling to make contrasts. Aarons and Palinkas (2007), for example, purposefully selected 15 child welfare case managers representing those having the most positive and those having the most negative views of SafeCare, an evidence-based prevention intervention, based on results of a web-based quantitative survey asking about the perceived value and usefulness of SafeCare. Kramer and Burns (2008) recruited and interviewed clinicians providing usual care and clinicians who dropped out of a study prior to consent to contrast with clinicians who provided the intervention under investigation. One study (Hoagwood et al., 2007) used a typical case approach to identify participants for a qualitative assessment of the challenges faced in implementing a trauma-focused intervention for youth. One study (Green & Aarons, 2011) used a combined snowball sampling/criterion-i strategy by asking recruited program managers to identify clinicians, administrative support staff, and consumers for project recruitment. County mental health directors, agency directors, and program managers were recruited to represent the policy interests of implementation, while clinicians, administrative support staff, and consumers were recruited to represent the direct practice perspectives of EBP implementation.
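
The combined snowball/criterion-i strategy can be sketched as a short referral-chasing routine (the roster and nomination lists below are hypothetical, not data from the studies reviewed): start from everyone meeting the role criterion and follow nominations for a fixed number of waves.

    # Sketch with hypothetical nominations: criterion-i seeding (program managers)
    # followed by snowball referral chasing, limited to a fixed number of waves.
    from collections import deque

    referrals = {
        "manager_A": ["clinician_1", "staff_1"],
        "manager_B": ["clinician_2", "consumer_1"],
        "clinician_1": ["consumer_2"],
        "clinician_2": [],
        "staff_1": ["consumer_1"],
        "consumer_1": [],
        "consumer_2": [],
    }

    def snowball(seeds, max_waves=2):
        """Breadth-first referral chasing from the seed participants."""
        sampled = set(seeds)
        frontier = deque((s, 0) for s in seeds)
        while frontier:
            person, wave = frontier.popleft()
            if wave == max_waves:
                continue
            for nominee in referrals.get(person, []):
                if nominee not in sampled:
                    sampled.add(nominee)
                    frontier.append((nominee, wave + 1))
        return sampled

    # Criterion-i seed: everyone whose role is "program manager".
    seeds = [p for p in referrals if p.startswith("manager_")]
    print(sorted(snowball(seeds)))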

Table 2 below provides a description of the use of different purposeful sampling strategies in mixed methods implementation studies. Criterion-i sampling was most frequently used in mixed methods implementation studies that employed a simultaneous design where the qualitative method was secondary to the quantitative method or studies that employed a simultaneous structure where the qualitative and quantitative methods were assigned equal priority. These mixed method designs were used to complement the depth of understanding afforded by the qualitative methods with the breadth of understanding afforded by the quantitative methods (n = 13), to explain or elaborate upon the findings of one set of methods (usually quantitative) with the findings from the other set of methods (n = 10), or to seek convergence through triangulation of results or quantifying qualitative data (n = 8). The process of mixing methods in the large majority (n = 18) of these studies involved embedding the qualitative study within the larger quantitative study. In one study (Goia & Dziadosz, 2008), criterion sampling was used in a simultaneous design where quantitative and qualitative data were merged together in a complementary fashion, and in two studies (Aarons et al., 2012; Zazelli et al., 2008 ), quantitative and qualitative data were connected together, one in sequential design for the purpose of developing a conceptual model ( Zazelli et al., 2008 ), and one in a simultaneous design for the purpose of complementing one another (Aarons et al., 2012). Three of the six studies that used maximum variation sampling used a simultaneous structure with quantitative methods taking priority over qualitative methods and a process of embedding the qualitative methods in a larger quantitative study ( Henke et al., 2008 ; Palinkas et al., 2010; Slade et al., 2008 ). Two of the six studies used maximum variation sampling in a sequential design ( Aarons et al., 2009 ; Zazelli et al., 2008 ) and one in a simultaneous design (Henke et al., 2010) for the purpose of development, and three used it in a simultaneous design for complementarity ( Bachman et al., 2009 ; Henke et al., 2008; Palinkas, Ell, Hansen, Cabassa, & Wells, 2011 ). The two studies relying upon intensity sampling used a simultaneous structure for the purpose of either convergence or expansion, and both studies involved a qualitative study embedded in a larger quantitative study ( Aarons & Palinkas, 2007 ; Kramer & Burns, 2008 ). The single typical case study involved a simultaneous design where the qualitative study was embedded in a larger quantitative study for the purpose of complementarity ( Hoagwood et al., 2007 ). The snowball/maximum variation study involved a sequential design where the qualitative study was merged into the quantitative data for the purpose of convergence and conceptual model development ( Green & Aarons, 2011 ). Although not used in any of the 28 implementation studies examined here, another common sequential sampling strategy is using criteria sampling of the larger quantitative sample to produce a second-stage qualitative sample in a manner similar to maximum variation sampling, except that the former narrows the range of variation while the latter expands the range.

Table 2. Purposeful sampling strategies and mixed method designs in implementation research

Single stage sampling (n = 22)

Criterion (n = 18)
  Structure: Simultaneous (n = 17); Sequential (n = 6)
  Design: Merged (n = 9); Connected (n = 9); Embedded (n = 14)
  Function: Convergence (n = 6); Complementarity (n = 12); Expansion (n = 10); Development (n = 3); Sampling (n = 4)

Maximum variation (n = 4)
  Structure: Simultaneous (n = 3); Sequential (n = 1)
  Design: Merged (n = 1); Connected (n = 1); Embedded (n = 2)
  Function: Convergence (n = 1); Complementarity (n = 2); Expansion (n = 1); Development (n = 2)

Intensity (n = 1)
  Structure: Simultaneous; Sequential
  Design: Merged; Connected; Embedded
  Function: Convergence; Complementarity; Expansion; Development

Typical case (n = 1)
  Structure: Simultaneous
  Design: Embedded
  Function: Complementarity

Multistage sampling (n = 4)

Criterion/maximum variation (n = 2)
  Structure: Simultaneous; Sequential
  Design: Embedded; Connected
  Function: Complementarity; Development

Criterion/intensity (n = 1)
  Structure: Simultaneous
  Design: Embedded
  Function: Convergence; Complementarity; Expansion

Criterion/snowball (n = 1)
  Structure: Sequential
  Design: Connected
  Function: Convergence; Development

Criterion-i sampling as a purposeful sampling strategy shares many characteristics with random probability sampling, despite having different aims and different procedures for identifying and selecting potential participants. In both instances, study participants are drawn from agencies, organizations or systems involved in the implementation process. Individuals are selected based on the assumption that they possess knowledge and experience with the phenomenon of interest (i.e., the implementation of an EBP) and thus will be able to provide information that is both detailed (depth) and generalizable (breadth). Participants for a qualitative study, usually service providers, consumers, agency directors, or state policy-makers, are drawn from the larger sample of participants in the quantitative study. They are selected from the larger sample because they meet the same criteria, in this case, playing a specific role in the organization and/or implementation process. To some extent, they are assumed to be “representative” of that role, although implementation studies rarely explain the rationale for selecting only some and not all of the available role representatives (i.e., recruiting 15 providers from an agency for semi-structured interviews out of an available sample of 25 providers). From the perspective of qualitative methodology, participants who meet or exceed a specific criterion or criteria possess intimate (or, at the very least, greater) knowledge of the phenomenon of interest by virtue of their experience, making them information-rich cases.

However, criterion sampling may not be the most appropriate strategy for implementation research because by attempting to capture both breadth and depth of understanding, it may actually be inadequate to the task of accomplishing either. Although qualitative methods are often contrasted with quantitative methods on the basis of depth versus breadth, they actually require elements of both in order to provide a comprehensive understanding of the phenomenon of interest. Ideally, the goal of achieving theoretical saturation by providing as much detail as possible involves selection of individuals or cases that can ensure all aspects of that phenomenon are included in the examination and that any one aspect is thoroughly examined. This goal, therefore, requires an approach that sequentially or simultaneously expands and narrows the field of view, respectively. By selecting only individuals who meet a specific criterion defined on the basis of their role in the implementation process or who have a specific experience (e.g., engaged only in an implementation defined as successful or only in one defined as unsuccessful), one may fail to capture the experiences or activities of other groups playing other roles in the process. For instance, a focus only on practitioners may fail to capture the insights, experiences, and activities of consumers, family members, agency directors, administrative staff, or state policy leaders in the implementation process, thus limiting the breadth of understanding of that process. On the other hand, selecting participants on the basis of whether they were a practitioner, consumer, director, staff, or any of the above, may fail to identify those with the greatest experience or most knowledgeable or most able to communicate what they know and/or have experienced, thus limiting the depth of understanding of the implementation process.

To address the potential limitations of criterion sampling, other purposeful sampling strategies should be considered and possibly adopted in implementation research (Figure 1). For instance, strategies placing greater emphasis on breadth and variation, such as maximum variation, extreme case, and confirming and disconfirming case sampling, are better suited for an examination of differences, while strategies placing greater emphasis on depth and similarity, such as homogeneous, snowball, and typical case sampling, are better suited for an examination of commonalities or similarities, even though both types of sampling strategies include a focus on both differences and similarities. Alternatives to criterion sampling may also be better matched to the specific functions of mixed methods. For instance, using qualitative methods for the purpose of complementarity may require a sampling strategy that emphasizes similarity if it is to achieve depth of understanding or to explore and develop hypotheses that complement a quantitative probability sampling strategy achieving breadth of understanding and testing hypotheses (Kemper et al., 2003). Similarly, mixed methods that address related questions for the purpose of expanding or explaining results or developing new measures or conceptual models may require a purposeful sampling strategy aiming for similarity that complements probability sampling aiming for variation or dispersion. A narrowly focused purposeful sampling strategy for qualitative analysis that “complements” a broader probability sample for quantitative analysis may help to achieve a balance between increasing inference quality/trustworthiness (internal validity) and generalizability/transferability (external validity); a single method that focuses only on a broad view may gain external validity at the expense of internal validity (Kemper et al., 2003). On the other hand, the aim of convergence (answering the same question with either method) may suggest use of a purposeful sampling strategy that aims for breadth and parallels the quantitative probability sampling strategy.

Figure 1. Purposeful and Random Sampling Strategies for Mixed Method Implementation Studies

  • (1) Priority and sequencing of Qualitative (QUAL) and Quantitative (QUAN) can be reversed.
  • (2) Refers to emphasis of sampling strategy.


Furthermore, the specific nature of implementation research suggests that a multistage purposeful sampling strategy be used. Three different multistage sampling strategies are illustrated below. Several qualitative methodologists recommend sampling for variation (breadth) before sampling for commonalities (depth) (Glaser, 1978; Bernard, 2002) (Multistage I). Also known as a “funnel approach,” this strategy is often recommended when conducting semi-structured interviews (Spradley, 1979) or focus groups (Morgan, 1997). This approach begins with a broad view of the topic and then proceeds to narrow the conversation down to very specific components of the topic. However, as noted earlier, the lack of a clear understanding of the nature of the range may require an iterative approach in which each stage of data analysis helps to determine subsequent means of data collection and analysis (Denzin, 1978; Patton, 2001) (Multistage II). Similarly, multistage purposeful sampling designs, such as opportunistic or emergent sampling, allow the option of adding to a sample to take advantage of unforeseen opportunities after data collection has been initiated (Patton, 2001, p. 240) (Multistage III). Multistage I models generally involve two stages, while a Multistage II model requires a minimum of three stages, alternating from sampling for variation to sampling for similarity. A Multistage III model begins with sampling for variation and ends with sampling for similarity, but may involve one or more intervening stages of sampling for variation or similarity as the need or opportunity arises.
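To make the funnel idea concrete, the sketch below shows one way the two stages of a Multistage I design might be operationalized when the qualitative sample is drawn from an existing quantitative roster. It is only an illustration: the dataset and variable names (quant_sample, agency_size, role) are hypothetical, and the stratified random draw in stage 1 stands in for whatever purposeful selection for maximum variation a research team would actually apply; in practice, stage 2 would be guided by the stage 1 analysis rather than fixed in advance.

/* Illustrative two-stage "funnel" (Multistage I): sample for variation,   */
/* then for similarity. Dataset and variable names are hypothetical.       */

/* Stage 1: maximum variation - a few participants from every combination  */
/* of agency size and role (the strata must be sorted first).              */
proc sort data=quant_sample out=sorted;
    by agency_size role;
run;

proc surveyselect data=sorted out=stage1_variation
                  method=srs sampsize=2 seed=101;
    strata agency_size role;   /* two participants per stratum */
run;

/* Stage 2: homogeneous sampling - narrow to one information-rich subgroup */
/* identified during the stage 1 analysis for deeper follow-up interviews. */
data stage2_similarity;
    set stage1_variation;
    where role = 'provider' and agency_size = 'small';
run;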

Multistage purposeful sampling is also consistent with the use of hybrid designs to simultaneously examine intervention effectiveness and implementation. An extension of the concept of “practical clinical trials” (Tunis, Stryer, & Clancy, 2003), effectiveness-implementation hybrid designs provide benefits such as more rapid translational gains in clinical intervention uptake, more effective implementation strategies, and more useful information for researchers and decision makers (Curran et al., 2012). Such designs may give equal priority to the testing of clinical treatments and implementation strategies (Hybrid Type 2), or give priority to the testing of treatment effectiveness (Hybrid Type 1) or of the implementation strategy (Hybrid Type 3). Curran and colleagues (2012) suggest that evaluation of an intervention’s effectiveness will require or involve use of quantitative measures, while evaluation of the implementation process will require or involve use of mixed methods. When conducting a Hybrid Type 1 design (a process evaluation of implementation in the context of a clinical effectiveness trial), the qualitative data can be used to inform the findings of the effectiveness trial. Thus, an effectiveness trial that finds substantial variation might purposefully select participants using a broader strategy, such as sampling for disconfirming cases, to account for that variation. For instance, group randomized trials require knowledge of the contexts and circumstances that are similar and different across sites in order to account for inevitable site differences in interventions and to assist local implementations of an intervention (Bloom & Michalopoulos, 2013; Raudenbush & Liu, 2000). Alternatively, a narrow strategy may be used to account for a lack of variation. In either instance, the choice of a purposeful sampling strategy is determined by the outcomes of the quantitative analysis that is based on a probability sampling strategy. In Hybrid Type 2 and Type 3 designs, where the implementation process is given equal or greater priority than the effectiveness trial, the purposeful sampling strategy must be first and foremost consistent with the aims of the implementation study, which may be to understand variation, central tendencies, or both. In all three instances, the sampling strategy employed for the implementation study may vary based on the priority assigned to that study relative to the effectiveness trial. For instance, purposeful sampling for a Hybrid Type 1 design may give higher priority to variation and comparison to understand the parameters of implementation processes or context as a contribution to an understanding of effectiveness outcomes (i.e., using qualitative data to expand upon or explain the results of the effectiveness trial); in effect, these process measures could be seen as modifiers of innovation/EBP outcomes. In contrast, purposeful sampling for a Hybrid Type 3 design may give higher priority to similarity and depth to understand the core features of successful outcomes only.

Finally, multistage sampling strategies may be more consistent with innovations in experimental design that offer alternatives to the classic randomized controlled trial (RCT) in community-based settings and that have greater feasibility, acceptability, and external validity. While RCT designs provide the highest level of evidence, “in many clinical and community settings, and especially in studies with underserved populations and low resource settings, randomization may not be feasible or acceptable” (Glasgow et al., 2005, p. 554). Randomized trials are also “relatively poor in assessing the benefit from complex public health or medical interventions that account for individual preferences for or against certain interventions, differential adherence or attrition, or varying dosage or tailoring of an intervention to individual needs” (Brown et al., 2009, p. 2). Several alternatives to the randomized design have been proposed, such as “interrupted time series,” “multiple baseline across settings,” or “regression-discontinuity” designs. Optimal designs represent one such alternative to the classic RCT and are addressed in detail by Duan and colleagues (this issue). Like purposeful sampling, optimal designs are intended to capture information-rich cases, usually identified as the individuals most likely to benefit from the experimental intervention. The goal here is not to identify the typical or average patient but to identify patients who represent one end of the variation, as in an extreme case, intensity, or criterion sampling strategy. Hence, a sampling strategy that begins by sampling for variation at the first stage and then samples for homogeneity within a specific parameter of that variation (i.e., one end or the other of the distribution) at the second stage would seem the best approach for identifying an “optimal” sample for the clinical trial.
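A minimal sketch of that two-stage logic, assuming a hypothetical screening dataset (screened_patients) with a continuous severity score (symptom_severity): stage 1 describes the range of variation, and stage 2 retains only one end of the distribution as the candidate “optimal” sample. Both names, and the choice of the upper quartile as the cutoff, are illustrative assumptions.

/* Hypothetical sketch of selecting an "optimal" sample: examine variation */
/* first, then sample homogeneously from one end of the distribution.      */

/* Stage 1: describe the variation in the screening measure.               */
proc means data=screened_patients min p25 median p75 max;
    var symptom_severity;
run;

/* Stage 2: keep only the upper quartile - the patients judged most likely */
/* to benefit from the experimental intervention.                          */
proc rank data=screened_patients out=ranked groups=4;
    var symptom_severity;
    ranks severity_quartile;   /* 0 = lowest quartile, 3 = highest */
run;

data optimal_sample;
    set ranked;
    where severity_quartile = 3;
run;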

Another alternative to the classic RCT is the family of adaptive designs proposed by Brown and colleagues (Brown et al., 2006; Brown et al., 2008; Brown et al., 2009). Adaptive designs are a sequence of trials that draw on the results of existing studies to determine the next stage of evaluation research. They use cumulative knowledge of current treatment successes or failures to change qualities of the ongoing trial. An adaptive intervention modifies what an individual subject (or community, for a group-based trial) receives in response to his or her preferences or initial responses to an intervention. Consistent with multistage sampling in qualitative research, the design is iterative in the sense that information gained from analysis of data collected at the first stage influences the nature of the data collected, and the way they are collected, at subsequent stages (Denzin, 1978). Furthermore, many of these adaptive designs may benefit from a multistage purposeful sampling strategy at early phases of the clinical trial to identify the range of variation and core characteristics of study participants. This information can then be used to identify the optimal dose of treatment, limit sample size, randomize participants into different enrollment procedures, determine who should be eligible for random assignment (as in the optimal design) to maximize treatment adherence and minimize dropout, or identify incentives and motives that may be used to encourage participation in the trial itself.

Alternatives to the classic RCT design may also be desirable in studies that adopt a community-based participatory research framework (Minkler & Wallerstein, 2003), considered an important tool in conducting implementation research (Palinkas & Soydan, 2012). Such frameworks suggest that identification and recruitment of potential study participants will place greater emphasis on the priorities and “local knowledge” of community partners than on the need to sample for variation or uniformity. In this instance, the first stage of sampling may approximate the strategy of sampling politically important cases (Patton, 2002), followed by other sampling strategies intended to maximize variation in stakeholder opinions or experience.

On the basis of this review, the following recommendations are offered for the use of purposeful sampling in mixed method implementation research. First, many mixed methods studies in health services research and implementation science do not clearly identify or provide a rationale for the sampling procedure for either quantitative or qualitative components of the study (Wisdom et al., 2011), so a primary recommendation is for researchers to clearly describe their sampling strategies and provide the rationale for the strategy.

Second, use of a single-stage strategy for purposeful sampling for qualitative portions of a mixed methods implementation study should adhere to the same general principles that govern all forms of sampling, qualitative or quantitative. Kemper and colleagues (2003) identify seven such principles:

  • the sampling strategy should stem logically from the conceptual framework as well as the research questions being addressed by the study;
  • the sample should be able to generate a thorough database on the type of phenomenon under study;
  • the sample should at least allow the possibility of drawing clear inferences and credible explanations from the data;
  • the sampling strategy must be ethical;
  • the sampling plan should be feasible;
  • the sampling plan should allow the researcher to transfer/generalize the conclusions of the study to other settings or populations; and
  • the sampling scheme should be as efficient as practical.

Third, the field of implementation research is itself at a stage where qualitative methods are intended primarily to explore the barriers and facilitators of EBP implementation and to develop new conceptual models of implementation process and outcomes. This is especially important in state implementation research, where fiscal necessities are driving policy reforms for which knowledge about EBP implementation barriers and facilitators is urgently needed. Thus, a multistage strategy for purposeful sampling should begin with a broader view that emphasizes variation or dispersion and then move to a narrower view that emphasizes similarity or central tendencies. Such a strategy is necessary for the task of finding the optimal balance between internal and external validity.

Fourth, if we assume that probability sampling will be the preferred strategy for the quantitative components of most implementation research, the selection of a single-stage or multistage purposeful sampling strategy should be based, in part, on how it relates to the probability sample: either for the purpose of answering the same question (in which case a strategy emphasizing variation and dispersion is preferred) or for answering related questions (in which case a strategy emphasizing similarity and central tendencies is preferred).

Fifth, it should be kept in mind that all sampling procedures, whether purposeful or probability, are designed to capture elements of both similarity and difference, of both centrality and dispersion, because both elements are essential to the task of generating new knowledge through the processes of comparison and contrast. Selecting a strategy that gives emphasis to one does not mean that it cannot be used for the other. Having said that, our analysis has assumed at least some degree of concordance between the breadth of understanding associated with quantitative probability sampling and purposeful sampling strategies that emphasize variation, on the one hand, and between depth of understanding and purposeful sampling strategies that emphasize similarity, on the other. While there may be some merit to that assumption, depth of understanding requires an understanding of both variation and common elements.

Finally, it should also be kept in mind that quantitative data can be generated from a purposeful sampling strategy and qualitative data can be generated from a probability sampling strategy. Each set of data is suited to a specific objective and each must adhere to a specific set of assumptions and requirements. Nevertheless, the promise of mixed methods, like the promise of implementation science, lies in its ability to move beyond the confines of existing methodological approaches and develop innovative solutions to important and complex problems. For states engaged in EBP implementation, the need for these solutions is urgent.

Figure. Multistage Purposeful Sampling Strategies

Acknowledgments

This study was funded through a grant from the National Institute of Mental Health (P30-MH090322: K. Hoagwood, PI).

Sampling Techniques for Qualitative Research

  • First Online: 27 October 2022


  • Heather Douglas


This chapter explains how to design suitable sampling strategies for qualitative research. The focus of this chapter is purposive (or theoretical) sampling to produce credible and trustworthy explanations of a phenomenon (a specific aspect of society). A specific research question (RQ) guides the methodology (the study design or approach ). It defines the participants, location, and actions to be used to answer the question. Qualitative studies use specific tools and techniques ( methods ) to sample people, organizations, or whatever is to be examined. The methodology guides the selection of tools and techniques for sampling, data analysis, quality assurance, etc. These all vary according to the purpose and design of the study and the RQ. In this chapter, a fake example is used to demonstrate how to apply your sampling strategy in a developing country.




Author information

Authors and affiliations.

The University of Queensland, The Royal Society of Queensland, Activation Australia, Brisbane, Australia

Heather Douglas


Corresponding author

Correspondence to Heather Douglas.

Editor information

Editors and affiliations.

Centre for Family and Child Studies, Research Institute of Humanities and Social Sciences, University of Sharjah, Sharjah, United Arab Emirates

M. Rezaul Islam

Department of Development Studies, University of Dhaka, Dhaka, Bangladesh

Niaz Ahmed Khan

Department of Social Work, School of Humanities, University of Johannesburg, Johannesburg, South Africa

Rajendra Baikady


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Douglas, H. (2022). Sampling Techniques for Qualitative Research. In: Islam, M.R., Khan, N.A., Baikady, R. (eds) Principles of Social Research Methodology. Springer, Singapore. https://doi.org/10.1007/978-981-19-5441-2_29


DOI: https://doi.org/10.1007/978-981-19-5441-2_29

Published: 27 October 2022

Publisher Name: Springer, Singapore

Print ISBN: 978-981-19-5219-7

Online ISBN: 978-981-19-5441-2

eBook Packages: Social Sciences (R0)



YRBSS Data and Documentation

What to know

  • Datasets and documentation for national, state, and district Youth Risk Behavior Surveys can be downloaded directly from this page.
  • The datasets are available in two file formats: Access and ASCII.
  • YRBS data are used by health departments, educators, lawmakers, doctors, and community organizations.
  • The Access and ASCII data can be downloaded and used as is.


National datasets & documentation

  • SAS Format Program
  • SAS Input Program
  • SPSS Syntax
  • Data User’s Guide

National datasets & documentation by year


Combined high school

  • National (dat)
  • District (dat)
  • SPSS Input Program
  • States A-M (dat)
  • States N-Z (dat)
  • Districts (dat)
  • SAS Format Program (sas)
  • SAS Input Program (sas)
  • SPSS Syntax (sps)
  • Combined Datasets User’s Guide
  • National (zip)
  • States A-M (zip)
  • States N-Z (zip)

Combined middle school

  • State A-M (dat)
  • State N-Z (dat)
  • Datasets User’s Guide
  • Districts (zip)

Dataset file formats

ASCII files

Using ASCII files

  • Save the ASCII data file to a folder on your computer.
  • Specify column locations for each variable as needed by the software you are using.

Note: The SAS and SPSS programs must be used to convert the ASCII data into SAS and SPSS datasets. How to use the ASCII data varies from one software package to another; column positions usually have to be specified for each variable, and these positions are listed in the documentation for each year's data. Consult your software documentation for more information.
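As an illustration only, a SAS program that reads a fixed-width YRBS ASCII file by column position might look like the sketch below. The file path, variable names, and column ranges are placeholders; the real values must be taken from the year-specific documentation (the provided SAS Input Program already does this for you).

/* Illustrative only: reading a fixed-width ASCII file by column position. */
/* Replace the path, variable names, and column ranges with the values     */
/* given in the year-specific documentation.                               */
data work.yrbs_raw;
    infile 'c:\data\yrbs2005.dat' lrecl=500 pad missover;
    input record_id  1-5
          q1         6
          q2         7;
run;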

Access files

Using Access files

  • Save the Access file to a folder on your computer.
  • Uncompress the Access file.

Note: The file is stored in compressed form to improve download time; it will need to be uncompressed before it can be used. See file formats help for more information on uncompressing the file.

Using SAS files

  • Save the SAS Format Program, SAS Input Program, and the ASCII data files to a folder on your computer.
  • Open the SAS Format Program in SAS and edit it according to the instructions included in the comments in the program.
  • Run the program. This will create a permanent format library in the folder specified in the SAS program.
  • Open the SAS Input Program in SAS and edit it according to the instructions included in the comments in the program.
  • Run the program. This will read the ASCII data file and convert it into a permanent SAS dataset for the particular year in the folder specified in the SAS program.

Note: Each year of YRBSS data should go in its own folder because each year has its own format library. Format libraries are not comparable across years.

SAS format library

The SAS format library contains the formats used to make SAS output more readable. Formats are linked to the data so that results are displayed as words ("Male" or "Female", for instance) instead of numbers (1 or 2). The SAS YRBS data file is designed to use its companion format library.

The following example SAS program shows how to use the format library. It assumes that both the data file and the format library are in "c:\data". Note that the program contains two libname statements: the first indicates where the data file is located; the second indicates where the format library is located.

libname mydata 'c:\data';    /* tells SAS where the data are */
libname library 'c:\data';   /* tells SAS where the formats are */

proc freq data=mydata.yrbs2005;
    tables q2;
run;

Using the format library is recommended but technically is optional. If you do not want to use the format library, include the following statement at the start of your SAS program:

options nofmterr;   /* tells SAS not to look for formats */

Please note that each year of YRBS data has its own format library. Format libraries are not the same across years of data.

For further information on using format libraries, please consult your SAS documentation.

Using SPSS files

  • Save the SPSS syntax file and the ASCII data files to a folder on your computer.
  • Open the SPSS syntax file in SPSS and edit it according to the instructions included in the comments in the file.
  • Run the syntax file. This will read the ASCII data file and convert it into a permanent SPSS data file that includes labels and formats.

Data availability & requesting data

Are YRBSS results available by zip code, census tract, school, local school district, or county? Are YRBSS results available for my town/city/local school district?

YRBSS data are not available by zip code, census tract, or school. Sample size limitations and confidentiality requirements do not support analyses at these levels.

YRBSS data are available for a small number of specifically funded local school districts or counties. CDC funds certain local school districts to conduct the YRBSS. Some of those local school districts are county-based. See Participation Maps & History for more information about county-based local school districts with YRBSS data. Data are only available for local school districts or counties on the list; no other local YRBSS data are available.

County-level identifiers are not available in the national YRBS dataset or in most state datasets.

For which jurisdictions are YRBSS datasets available?

YRBSS datasets are available for the United States overall, most states, some territories, some local school districts, and some tribal governments. Availability depends on YRBSS participation, data quality, and data-sharing policies. See Participation Maps & History for more information about data availability.

How can I get state, local school district, territory, or tribal government YRBSS data?

National YRBS datasets and documentation are available for download at YRBSS Data & Documentation . There is no charge for the data nor is permission needed to download or use the data.

Why are results not available from every state?

Results are not available from every state for several reasons. First, four states (Minnesota, Oregon, Washington, and Wyoming) do not participate in the YRBSS. Second, some states that do participate do not achieve a high enough overall response rate to receive weighted results. Therefore, their results are not posted on the CDC website and CDC does not distribute their data.

The Participation Map and the Participation History & Data Quality tables provide more details on which states participated in the YRBSS and whether they obtained weighted data.

How can I get national YRBSS datasets?

National YRBSS datasets and documentation are available for download at YRBSS Data & Documentation . There is no charge for the data nor is permission needed to download or use the data.

Do the national YRBS datasets have state identifiers?

The national YRBS datasets posted on the YRBSS website do not contain state or region identifiers because the national samples are not constructed to provide representative data at state or region levels. However, national YRBS datasets with state identifiers included are available upon request using the YRBSS Data Request Form .

The national YRBS datasets will not contain data from every state because the national YRBS is an independent sample; it is not the aggregate of individual state and local datasets.

Is the national YRBS dataset a combination of all the datasets from the states conducting their own YRBSS?

No, the national YRBS results are not the combination of state and local school district data. The national YRBS data are a separately drawn sample of high school students in grades 9–12 in the U.S. Some states may not have any schools chosen as part of the sample. States and local school districts each use a sample design that produces a representative sample of students in grades 9–12 for their state or local school district.

If you want to analyze state or local school district data please see Participation Maps & History for more information about data availability.

Do the national YRBS datasets include schools and students from every state?

No. The national YRBS sample is designed to be representative of students in grades 9–12 in the United States overall but does not necessarily include students from every state.

In which data file formats are national YRBSS data available?

National YRBS datasets are available in two file formats: Access and ASCII. Additionally, SAS and SPSS programs are provided to convert the ASCII data into SAS and SPSS datasets. They can be downloaded at YRBSS Data & Documentation .

State, local school district, territory, and tribal government data sets from surveys conducted since 1999 are available in SAS, SPSS, ASCII, and Access formats. State, local school district, territory, and tribal government datasets from surveys conducted prior to 1999 are available in ASCII only.

Additional resources


Youth Risk Behavior Surveillance System (YRBSS)

YRBSS is the largest public health surveillance system in the U.S., monitoring multiple health-related behaviors among high school students.
