Marilyn Price-Mitchell Ph.D.

What Is Metacognition? How Does It Help Us Think?

Metacognitive strategies like self-reflection empower students for a lifetime.

Posted October 9, 2020 | Reviewed by Abigail Fagan


Metacognition is a higher-order thinking skill that is emerging from the shadows of academia to take its rightful place in classrooms around the world. As online classrooms extend into homes, this is an important time for parents and teachers to understand metacognition and how metacognitive strategies affect learning. These skills enable children to become better thinkers and decision-makers.

Metacognition: The Neglected Skill Set for Empowering Students is a new research-based book by educational consultants Dr. Robin Fogarty and Brian Pete that not only gets to the heart of why metacognition is important but gives teachers and parents insightful strategies for teaching metacognition to children from kindergarten through high school. This article summarizes several concepts from their book and shares three of their thirty strategies to strengthen metacognition.

What Is Metacognition?

Metacognition is the practice of being aware of one’s own thinking. Some scholars refer to it as “thinking about thinking.” Fogarty and Pete give a great everyday example of metacognition:

Think about the last time you reached the bottom of a page and thought to yourself, “I’m not sure what I just read.” Your brain just became aware of something you did not know, so instinctively you might reread the last sentence or rescan the paragraphs of the page. Maybe you will read the page again. In whatever ways you decide to capture the missing information, this momentary awareness of knowing what you know or do not know is called metacognition.

When we notice ourselves having an inner dialogue about our thinking and it prompts us to evaluate our learning or problem-solving processes, we are experiencing metacognition at work. This skill helps us think better, make sound decisions, and solve problems more effectively. In fact, research suggests that as a young person’s metacognitive abilities increase, they achieve at higher levels.

Fogarty and Pete outline three aspects of metacognition that are vital for children to learn: planning, monitoring, and evaluation. They convincingly argue that metacognition is best when it is infused in teaching strategies rather than taught directly. The key is to encourage students to explore and question their own metacognitive strategies in ways that become spontaneous and seemingly unconscious.

Metacognitive skills provide a basis for broader, psychological self-awareness, including how children gain a deeper understanding of themselves and the world around them.

Metacognitive Strategies to Use at Home or School

Fogarty and Pete successfully demystify metacognition and provide simple ways teachers and parents can strengthen children’s abilities to use these higher-order thinking skills. Below is a summary of metacognitive strategies from the three areas of planning, monitoring, and evaluation.

1. Planning Strategies

As students learn to plan, they learn to anticipate the strengths and weaknesses of their ideas. Planning strategies used to strengthen metacognition help students scrutinize plans at a time when they can most easily be changed.

One of ten metacognitive strategies outlined in the book is called “Inking Your Thinking.” It is a simple writing log that requires students to reflect on a lesson they are about to begin. Sample starters may include: “I predict…” “A question I have is…” or “A picture I have of this is…”

Writing logs are also helpful in the middle or end of assignments. For example, “The homework problem that puzzles me is…” “The way I will solve this problem is to…” or “I’m choosing this strategy because…”

2. Monitoring Strategies

Monitoring strategies used to strengthen metacognition help students check their progress and review their thinking at various stages. Different from scrutinizing, this strategy is reflective in nature. It also allows for adjustments while the plan, activity, or assignment is in motion. Monitoring strategies encourage recovery of learning, as in the example cited above when we are reading a book and notice that we forgot what we just read. We can recover our memory by scanning or re-reading.

One of many metacognitive strategies shared by Fogarty and Pete, called the “Alarm Clock,” is used to recover or rethink an idea once the student realizes something is amiss. The idea is to develop internal signals that sound an alarm. This signal prompts the student to recover a thought, rework a math problem, or capture an idea in a chart or picture. Metacognitive reflection involves thinking about “What I did,” then reviewing the pluses and minuses of one’s action. Finally, it means asking, “What other thoughts do I have moving forward?”


Teachers can easily build monitoring strategies into student assignments. Parents can reinforce these strategies too. Remember, the idea is not to tell children what they did correctly or incorrectly. Rather, help children monitor and think about their own learning. These are formative skills that last a lifetime.

3. Evaluation Strategies

According to Fogarty and Pete, the evaluation strategies of metacognition “are much like the mirror in a powder compact. Both serve to magnify the image, allow for careful scrutiny, and provide an up-close and personal view. When one opens the compact and looks in the mirror, only a small portion of the face is reflected back, but that particular part is magnified so that every nuance, every flaw, and every bump is blatantly in view.” Having this enlarged view makes inspection much easier.

When students inspect parts of their work, they learn about the nuances of their thinking processes. They learn to refine their work. They grow in their ability to apply their learning to new situations. “Connecting Elephants” is one of many metacognitive strategies to help students self-evaluate and apply their learning.

In this exercise, the metaphor of three imaginary elephants is used. The elephants walk together in a circle, each connected to the next by trunk and tail. The three elephants represent three vital questions: 1) What is the big idea? 2) How does this connect to other big ideas? 3) How can I use this big idea? Using the image of a “big idea” helps students magnify and synthesize their learning. It encourages them to think about big ways their learning can be applied to new situations.

Metacognition and Self-Reflection

Reflective thinking is at the heart of metacognition. In today’s world of constant chatter, technology and reflective thinking can be at odds. In fact, mobile devices can prevent young people from seeing what is right before their eyes.

John Dewey, a renowned psychologist and education reformer, claimed that experiences alone were not enough. What is critical is an ability to perceive and then weave meaning from the threads of our experiences.

The function of metacognition and self-reflection is to make meaning. The creation of meaning is at the heart of what it means to be human.

Everyone can help foster self-reflection in young people.


Marilyn Price-Mitchell, Ph.D., is an Institute for Social Innovation Fellow at Fielding Graduate University and author of Tomorrow’s Change Makers.



TEAL Center Fact Sheet No. 4: Metacognitive Processes

Metacognition is one’s ability to use prior knowledge to plan a strategy for approaching a learning task, take necessary steps to problem solve, reflect on and evaluate results, and modify one’s approach as needed. It helps learners choose the right cognitive tool for the task and plays a critical role in successful learning.

What Is Metacognition?

Metacognition refers to awareness of one’s own knowledge—what one does and doesn’t know—and one’s ability to understand, control, and manipulate one’s cognitive processes (Meichenbaum, 1985). It includes knowing when and where to use particular strategies for learning and problem solving as well as how and why to use specific strategies. Metacognition is the ability to use prior knowledge to plan a strategy for approaching a learning task, take necessary steps to problem solve, reflect on and evaluate results, and modify one’s approach as needed. Flavell (1976), who first used the term, offers the following example: “I am engaging in metacognition if I notice that I am having more trouble learning A than B; if it strikes me that I should double check C before accepting it as fact” (p. 232).

Cognitive strategies are the basic mental abilities we use to think, study, and learn (e.g., recalling information from memory, analyzing sounds and images, making associations between or comparing/contrasting different pieces of information, and making inferences or interpreting text). They help an individual achieve a particular goal, such as comprehending text or solving a math problem, and they can be individually identified and measured. In contrast, metacognitive strategies are used to ensure that an overarching learning goal is being or has been reached. Examples of metacognitive activities include planning how to approach a learning task, using appropriate skills and strategies to solve a problem, monitoring one’s own comprehension of text, self-assessing and self-correcting in response to the self-assessment, evaluating progress toward the completion of a task, and becoming aware of distracting stimuli.

Elements of Metacognition

Researchers distinguish between metacognitive knowledge and metacognitive regulation (Flavell, 1979, 1987; Schraw & Dennison, 1994). Metacognitive knowledge refers to what individuals know about themselves as cognitive processors, about different approaches that can be used for learning and problem solving, and about the demands of a particular learning task. Metacognitive regulation refers to adjustments individuals make to their processes to help control their learning, such as planning, information management strategies, comprehension monitoring, de-bugging strategies, and evaluation of progress and goals. Flavell (1979) further divides metacognitive knowledge into three categories:

  • Person variables: What one recognizes about his or her strengths and weaknesses in learning and processing information.
  • Task variables: What one knows or can figure out about the nature of a task and the processing demands required to complete the task—for example, knowledge that it will take more time to read, comprehend, and remember a technical article than it will a similar-length passage from a novel.
  • Strategy variables: The strategies a person has “at the ready” to apply in a flexible way to successfully accomplish a task; for example, knowing how to activate prior knowledge before reading a technical article, using a glossary to look up unfamiliar words, or recognizing that sometimes one has to reread a paragraph several times before it makes sense.

Livingston (1997) provides an example of all three variables: “I know that I (person variable) have difficulty with word problems (task variable), so I will answer the computational problems first and save the word problems for last (strategy variable).”

Why Teach Metacognitive Skills?

Research shows that metacognitive skills can be taught to students to improve their learning (Nietfeld & Schraw, 2002; Thiede, Anderson, & Therriault, 2003).

Constructing understanding requires both cognitive and metacognitive elements. Learners “construct knowledge” using cognitive strategies, and they guide, regulate, and evaluate their learning using metacognitive strategies. It is through this “thinking about thinking,” this use of metacognitive strategies, that real learning occurs. As students become more skilled at using metacognitive strategies, they gain confidence and become more independent as learners.

Individuals with well-developed metacognitive skills can think through a problem or approach a learning task, select appropriate strategies, and make decisions about a course of action to resolve the problem or successfully perform the task. They often think about their own thinking processes, taking time to think about and learn from mistakes or inaccuracies (North Central Regional Educational Laboratory, 1995). Some instructional programs encourage students to engage in “metacognitive conversations” with themselves so that they can “talk” with themselves about their learning, the challenges they encounter, and the ways in which they can self-correct and continue learning.

Moreover, individuals who demonstrate a wide variety of metacognitive skills perform better on exams and complete work more efficiently—they use the right tool for the job, and they modify learning strategies as needed, identifying blocks to learning and changing tools or strategies to ensure goal attainment. Because metacognition plays a critical role in successful learning, it is imperative that instructors help learners develop metacognitively.

What’s the Research?

Metacognitive strategies can be taught (Halpern, 1996), and they are associated with successful learning (Borkowski, Carr, & Pressley, 1987). Successful learners have a repertoire of strategies to select from and can transfer them to new settings (Pressley, Borkowski, & Schneider, 1987). Instructors need to set tasks at an appropriate level of difficulty (i.e., challenging enough so that students need to apply metacognitive strategies to monitor success but not so challenging that students become overwhelmed or frustrated), and instructors need to prompt learners to think about what they are doing as they complete these tasks (Biemiller & Meichenbaum, 1992). Instructors should take care not to do the thinking for learners or tell them what to do because this runs the risk of making students experts at seeking help rather than experts at thinking about and directing their own learning. Instead, effective instructors continually prompt learners, asking “What should you do next?”

McKeachie (1988) found that few college instructors explicitly teach strategies for monitoring learning. They assume that students have already learned these strategies in high school. But many have not and are unaware of the metacognitive process and its importance to learning. Rote memorization is the usual—and often the only—learning strategy employed by high school students when they enter college (Nist, 1993). Simpson and Nist (2000), in a review of the literature on strategic learning, emphasize that instructors need to provide explicit instruction on the use of study strategies. The implication for adult basic education (ABE) programs is that ABE learners likely need explicit instruction in both cognitive and metacognitive strategies. They need to know that they have choices about the strategies they can employ in different contexts, and they need to monitor their use of and success with these strategies.

Recommended Instructional Strategies

Instructors can encourage ABE learners to become more strategic thinkers by helping them focus on the ways they process information. Self-questioning, reflective journal writing, and discussing their thought processes with other learners are among the ways that teachers can encourage learners to examine and develop their metacognitive processes.

Fogarty (1994) suggests that metacognition is a process that spans three distinct phases, and that, to be successful thinkers, students must do the following:

  • Develop a plan before approaching a learning task, such as reading for comprehension or solving a math problem.
  • Monitor their understanding; use “fix-up” strategies when meaning breaks down.
  • Evaluate their thinking after completing the task.

Instructors can model the application of questions, and they can prompt learners to ask themselves questions during each phase. They can incorporate into lesson plans opportunities for learners to practice using these questions during learning tasks, as illustrated in the following examples:

  • During the planning phase, learners can ask, What am I supposed to learn? What prior knowledge will help me with this task? What should I do first? What should I look for in this reading? How much time do I have to complete this? In what direction do I want my thinking to take me?
  • During the monitoring phase, learners can ask, How am I doing? Am I on the right track? How should I proceed? What information is important to remember? Should I move in a different direction? Should I adjust the pace because of the difficulty? What can I do if I do not understand?
  • During the evaluation phase, learners can ask, How well did I do? What did I learn? Did I get the results I expected? What could I have done differently? Can I apply this way of thinking to other problems or situations? Is there anything I don’t understand—any gaps in my knowledge? Do I need to go back through the task to fill in any gaps in understanding? How might I apply this line of thinking to other problems?

Rather than viewing reading, writing, science, social studies, and math only as subjects or content to be taught, instructors can see them as opportunities for learners to reflect on their learning processes. Examples follow for each content area:

  • Reading: Teach learners how to ask questions during reading and model “think-alouds.” Ask learners questions during read-alouds and teach them to monitor their reading by constantly asking themselves if they understand what the text is about. Teach them to take notes or highlight important details, asking themselves, “Why is this a key phrase to highlight?” and “Why am I not highlighting this?”
  • Writing: Model prewriting strategies for organizing thoughts, such as brainstorming ideas using a word web, or using a graphic organizer to put ideas into paragraphs, with the main idea at the top and the supporting details below it.
  • Social Studies and Science: Teach learners the importance of using organizers such as KWL charts, Venn diagrams, concept maps, and anticipation/reaction charts to sort information and help them learn and understand content. Learners can use organizers prior to a task to focus their attention on what they already know and identify what they want to learn. They can use a Venn diagram to identify similarities and differences between two related concepts.
  • Math: Teach learners to use mnemonics to recall steps in a process, such as the order of mathematical operations. Model your thought processes in solving problems—for example, “This is a lot of information; where should I start? Now that I know____, is there something else I know?”

The goal of teaching metacognitive strategies is to help learners become comfortable with these strategies so that they employ them automatically to learning tasks, focusing their attention, deriving meaning, and making adjustments if something goes wrong. They do not think about these skills while performing them but, if asked what they are doing, they can usually accurately describe their metacognitive processes.

Biemiller, A., & Meichenbaum, D. (1992). The nature and nurture of the self-directed learner. Educational Leadership, 50, 75–80.

Borkowski, J., Carr, M., & Pressley, M. (1987). “Spontaneous” strategy use: Perspectives from metacognitive theory. Intelligence, 11, 61–75.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906–911.

Flavell, J. H. (1976). Metacognitive aspects of problem solving. In L. B. Resnick (Ed.), The nature of intelligence (pp. 231–236). Hillsdale, NJ: Lawrence Erlbaum Associates.

Flavell, J. H. (1987). Speculations about the nature and development of metacognition. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 21–29). Hillside, NJ: Lawrence Erlbaum Associates.

Fogarty, R. (1994). How to teach for metacognition. Palatine, IL: IRI/Skylight Publishing.

Halpern, D. F. (1996). Thought and knowledge: An introduction to critical thinking. Mahwah, NJ: Lawrence Erlbaum Associates.

Livingston, J. A. (1997). Metacognition: An overview. Retrieved December 27, 2011 from http://gse.buffalo.edu/fas/shuell/CEP564/Metacog.htm

McKeachie, W. J. (1988). The need for study strategy training. In C. E. Weinstein, E. T. Goetz, & P. A. Alexander (Eds.), Learning and study strategies: Issues in assessment, instruction, and evaluation (pp. 3–9). New York: Academic Press.

Meichenbaum, D. (1985). Teaching thinking: A cognitive-behavioral perspective. In S. F., Chipman, J. W. Segal, & R. Glaser (Eds.), Thinking and learning skills, Vol. 2: Research and open questions. Hillsdale, NJ: Lawrence Erlbaum Associates.

North Central Regional Educational Laboratory. (1995). Strategic teaching and reading project guidebook. Retrieved December 27, 2011

Nietfeld, J. L., & Schraw, G. (2002). The effect of knowledge and strategy explanation on monitoring accuracy. Journal of Educational Research, 95, 131–142.

Nist, S. (1993). What the literature says about academic literacy. Georgia Journal of Reading, Fall-Winter, 11–18.

Pressley, M., Borkowski, J. G., & Schneider, W. (1987). Cognitive strategies: Good strategy users coordinate metacognition and knowledge. In R. Vasta, & G. Whitehurst (Eds.), Annals of child development, 4, 80–129. Greenwich, CT: JAI Press.

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19, 460–475.

Simpson, M. L., & Nist, S. L. (2000). An update on strategic learning: It’s more than textbook reading strategies. Journal of Adolescent and Adult Literacy, 43 (6) 528–541.

Thiede, K. W., Anderson, M. C., & Therriault, D. (2003). Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology, 95, 66–73.

Authors: TEAL Center staff

Reviewed by: David Scanlon, Boston College

About the TEAL Center: The Teaching Excellence in Adult Literacy (TEAL) Center is a project of the U.S. Department of Education, Office of Career, Technical, and Adult Education (OCTAE), designed to improve the quality of teaching in adult education in the content areas.

Metacognition

Metacognitive thinking skills are important for instructors and students alike. This resource provides instructors with an overview of the what and why of metacognition and general “getting started” strategies for teaching for and with metacognition.

In this page:

  • What is metacognition?
  • Why use metacognition?
  • Getting started: How to teach both for and with metacognition
  • Metacognition at Columbia

Cite this resource: Columbia Center for Teaching and Learning (2018). Metacognition Resource. Columbia University. Retrieved [today’s date] from https://ctl.columbia.edu/resources-and-technology/resources/metacognition/


Students who engage in metacognitive thinking are able to:

  • assess the task.
  • plan for and use appropriate strategies and resources.
  • monitor task performance.
  • evaluate processes and products of their learning and revise their goals and strategies accordingly.

The Center for Teaching and Learning encourages instructors to teach metacognitively. This means to teach “with and for metacognition.” To teach with metacognition involves instructors “thinking about their own thinking regarding their teaching” (Hartman, 2001: 149). To teach for metacognition involves instructors thinking about how their instruction helps to elucidate learning and problem solving strategies to their students (Hartman, 2001).

Learners with metacognitive skills are:

  • More self-aware as critical thinkers and problem solvers, enabling them to actively approach knowledge gaps and problems and to rely on themselves.
  • Able to monitor, plan, and control their mental processes.
  • Better able to assess the depth of their knowledge.
  • Able to transfer/apply their knowledge and skills to new situations.
  • Able to choose more effective learning strategies.
  • More likely to perform better academically.

Instructors who teach metacognitively, that is, who think about their own teaching, are:

  • More self-aware of their instructional capacities: they know what teaching strategies they rely upon, when and why they use these strategies, and how to use them effectively and inclusively.
  • Better able to regulate their instruction before, during, and after conducting a class session (i.e., to plan what and how to teach, monitor how lessons are going and make adjustments, and evaluate how a lesson went afterwards).
  • Better able to communicate, helping students understand the what, why, and how of their learning, which can lead to better learning outcomes.
  • Able to use their knowledge of students’ metacognitive skills to plan instruction designed to improve students’ metacognition and to create inclusive course climates.

Teaching for metacognition — Metacognitive strategies that serve students and their learning:

Design homework assignments that ask students to focus on their learning process. This includes having students monitor progress, identify and correct mistakes, and plan next steps.

Provide structures to guide students in creating implementable action plans for improvement.

Show students how to move stepwise from reflection to action. Use appropriate technology to support student self-regulation: many platforms such as CourseWorks provide tools that students can use to keep up with their course work and monitor their progress.

Teaching with metacognition — Metacognitive strategies that serve the course and the instructor’s teaching practice:

Create an evaluation plan to periodically evaluate one’s teaching and course design, set-up, and content.

Structure the course to provide time for students to give feedback on the course and teaching. Evaluate course progress and the success of teaching, using course and instructional objectives to measure progress.

Schedule mid-course feedback surveys with students.

Request a mid-course review (offered as a service for graduate students).

Review end-of-course evaluations and reflect on the changes that will be made to maximize student learning. Build in time for metacognitive work: set aside time before, during, and after a course to reflect on one’s teaching practice, relationship with students, course climate and dynamics, as well as assumptions about the course material and its accessibility to students.

Metacognition and Memory Lab  |  Dr. Janet Metcalfe (Professor of Psychology and of Neurobiology and Behavior) runs a lab that focuses on how people use their metacognition to improve self-awareness and to guide their own learning and behavior. Dr. Metcalfe is author of Metacognition: A Textbook for Cognitive, Educational, Life Span & Applied Psychology (2009), co-authored with John Dunlosky.

In Fall 2018, the CTL and the Science of Learning Research (SOLER) initiative co-organized the inaugural Science of Learning Symposium, “Metacognition: From Research to Classroom,” which brought together Columbia faculty, staff, graduate students, and experts in the science of learning to share the research on metacognition in learning and to translate it into strategies that maximize student learning. A video recording of the event is available.

Ambrose, S. A., Lovett, M., Bridges, M. W., DiPietro, M., & Norman, M. K. (2010). How Learning Works: Seven Research-Based Principles for Smart Teaching. San Francisco: John Wiley & Sons.

Dunlosky, J. and Metcalfe, J. (2009). Metacognition. Thousand Oaks, CA: Sage.

Flavell, J.H. (1976). Metacognitive Aspects of Problem Solving. In L.B. Resnick (Ed.), The Nature of Intelligence (pp. 231-236). Hillsdale, NJ: Erlbaum.

Hacker, D.J. (1998). Chapter 1. Definitions and Empirical Foundations. In Hacker, D.J.; Dunlosky, J.; and Graesser, A.C. (1998). Metacognition in Educational Theory and Practice. Mahwah, N.J.: Routledge.

Hartman, H.J. (2001). Chapter 8: Teaching Metacognitively. In Metacognition in Learning and Instruction. Kluwer Academic Publishers, 149 – 172.

Lai, E.R. (2011). Metacognition: A Literature Review. Pearson’s Research Reports. Retrieved from https://images.pearsonassessments.com/images/tmrs/Metacognition_Literature_Review_Final.pdf

McGuire, S.Y. (2015). Teach Students How to Learn: Strategies You Can Incorporate Into Any Course to Improve Student Metacognition, Study Skills, and Motivation. Sterling, VA: Stylus.

National Research Council (2000). How People Learn: Brain, Mind, Experience, and School. Expanded Edition. Washington, DC: The National Academies Press. https://doi.org/10.17226/9853

Nilson, L. (2013). Creating Self-Regulated Learners: Strategies to Strengthen Students’ Self-Awareness and Learning Skills. Sterling, VA: Stylus.

Schraw, G. and Dennison, R.S. (1994). Assessing Metacognitive Awareness. Contemporary Educational Psychology. 19(4): 460-475.


Center for Teaching

Metacognition.

Chick, N. (2013). Metacognition. Vanderbilt University Center for Teaching. Retrieved [today’s date] from https://cft.vanderbilt.edu/guides-sub-pages/metacognition/.


Thinking about One’s Thinking


Initially studied for its development in young children (Baker & Brown, 1984; Flavell, 1985), researchers soon began to look at how experts display metacognitive thinking and how, then, these thought processes can be taught to novices to improve their learning (Hatano & Inagaki, 1986).  In How People Learn, the National Academy of Sciences’ synthesis of decades of research on the science of learning, one of the three key findings of this work is the effectiveness of a “‘metacognitive’ approach to instruction” (Bransford, Brown, & Cocking, 2000, p. 18).

Metacognitive practices increase students’ abilities to transfer or adapt their learning to new contexts and tasks (Bransford, Brown, & Cocking, p. 12; Palincsar & Brown, 1984; Scardamalia et al., 1984; Schoenfeld, 1983, 1985, 1991).  They do this by gaining a level of awareness above the subject matter: they also think about the tasks and contexts of different learning situations and about themselves as learners in these different contexts.  When Pintrich (2002) asserts that “Students who know about the different kinds of strategies for learning, thinking, and problem solving will be more likely to use them” (p. 222), notice that students must “know about” these strategies, not just practice them.  As Zohar and David (2009) explain, there must be a “conscious meta-strategic level of H[igher] O[rder] T[hinking]” (p. 179).

Metacognitive practices help students become aware of their strengths and weaknesses as learners, writers, readers, test-takers, group members, etc.  A key element is recognizing the limit of one’s knowledge or ability and then figuring out how to expand that knowledge or extend the ability. Those who know their strengths and weaknesses in these areas will be more likely to “actively monitor their learning strategies and resources and assess their readiness for particular tasks and performances” (Bransford, Brown, & Cocking, p. 67).

The absence of metacognition connects to the research by Dunning, Johnson, Ehrlinger, and Kruger on “Why People Fail to Recognize Their Own Incompetence” (2003).  They found that “people tend to be blissfully unaware of their incompetence,” lacking “insight about deficiencies in their intellectual and social skills.”  They identified this pattern across domains—from test-taking, writing grammatically, and thinking logically, to recognizing humor, to hunters’ knowledge about firearms and medical lab technicians’ knowledge of medical terminology and problem-solving skills (p. 83-84).  In short, “if people lack the skills to produce correct answers, they are also cursed with an inability to know when their answers, or anyone else’s, are right or wrong” (p. 85).  This research suggests that increased metacognitive abilities—to learn specific (and correct) skills, how to recognize them, and how to practice them—are needed in many contexts.

Putting Metacognition into Practice

In “Promoting Student Metacognition,” Tanner (2012) offers a handful of specific activities for biology classes, but they can be adapted to any discipline. She first describes four assignments for explicit instruction (p. 116):

  • Preassessments—Encouraging Students to Examine Their Current Thinking: “What do I already know about this topic that could guide my learning?”

  • The Muddiest Point—Giving Students Practice in Identifying Confusions: “What was most confusing to me about the material explored in class today?”

  • Retrospective Postassessments—Pushing Students to Recognize Conceptual Change: “Before this course, I thought evolution was… Now I think that evolution is ….” or “How is my thinking changing (or not changing) over time?”
  • Reflective Journals—Providing a Forum in Which Students Monitor Their Own Thinking: “What about my exam preparation worked well that I should remember to do next time? What did not work so well that I should not do next time or that I should change?”

Next are recommendations for developing a “classroom culture grounded in metacognition” (p. 116-118):

  • Giving Students License to Identify Confusions within the Classroom Culture:  ask students what they find confusing, acknowledge the difficulties
  • Integrating Reflection into Credited Course Work: integrate short reflections (oral or written) that ask students what they found challenging or what questions arose during an assignment/exam/project
  • Metacognitive Modeling by the Instructor for Students: model the thinking processes involved in your field and sought in your course by being explicit about “how you start, how you decide what to do first and then next, how you check your work, how you know when you are done” (p. 118)

To facilitate these activities, she also offers three useful tables:

  • Questions for students to ask themselves as they plan, monitor, and evaluate their thinking within four learning contexts—in class, assignments, quizzes/exams, and the course as a whole (p. 115)
  • Prompts for integrating metacognition into pair discussions during clicker activities, assignments, and quiz or exam preparation (p. 117)
  • Questions to help faculty metacognitively assess their own teaching (p. 119)

Weimer’s “Deep Learning vs. Surface Learning: Getting Students to Understand the Difference” (2012) offers additional recommendations for developing students’ metacognitive awareness and improvement of their study skills:

“[I]t is terribly important that in explicit and concerted ways we make students aware of themselves as learners. We must regularly ask, not only ‘What are you learning?’ but ‘How are you learning?’ We must confront them with the effectiveness (more often ineffectiveness) of their approaches. We must offer alternatives and then challenge students to test the efficacy of those approaches. ” (emphasis added)

She points to a tool developed by Stanger-Hall (2012, p. 297) for her students to identify their study strategies, which she divided into “cognitively passive” behaviors (“I previewed the reading before class,” “I came to class,” “I read the assigned text,” “I highlighted the text,” et al.) and “cognitively active study behaviors” (“I asked myself: ‘How does it work?’ and ‘Why does it work this way?’” “I wrote my own study questions,” “I fit all the facts into a bigger picture,” “I closed my notes and tested how much I remembered,” et al.).  The specific focus of Stanger-Hall’s study is tangential to this discussion, 1 but imagine giving students lists like hers adapted to your course and then, after a major assignment, having students discuss which ones worked and which types of behaviors led to higher grades. Even further, follow Lovett’s advice (2013) by assigning “exam wrappers,” which include students reflecting on their previous exam-preparation strategies, assessing those strategies and then looking ahead to the next exam, and writing an action plan for a revised approach to studying. A common assignment in English composition courses is the self-assessment essay in which students apply course criteria to articulate their strengths and weaknesses within single papers or over the course of the semester. These activities can be adapted to assignments other than exams or essays, such as projects, speeches, discussions, and the like.

As these examples illustrate, for students to become more metacognitive, they must be taught the concept and its language explicitly (Pintrich, 2002; Tanner, 2012), though not in a content-delivery model (simply a reading or a lecture) and not in one lesson. Instead, the explicit instruction should be “designed according to a knowledge construction approach,” or students need to recognize, assess, and connect new skills to old ones, “and it needs to take place over an extended period of time” (Zohar & David, p. 187).  This kind of explicit instruction will help students expand or replace existing learning strategies with new and more effective ones, give students a way to talk about learning and thinking, compare strategies with their classmates’ and make more informed choices, and render learning “less opaque to students, rather than being something that happens mysteriously or that some students ‘get’ and learn and others struggle and don’t learn” (Pintrich, 2002, p. 223).

Concepción (2004), for example, describes explicitly teaching students how to read philosophy and shares a handout for students whose sections include:

  • What to Expect (when reading philosophy)
  • The Ultimate Goal (of reading philosophy)
  • Basic Good Reading Behaviors
  • Important Background Information, or discipline- and course-specific reading practices, such as “reading for enlightenment” rather than information, and “problem-based classes” rather than historical or figure-based classes
  • A Three-Part Reading Process (pre-reading, understanding, and evaluating)
  • Flagging, or annotating the reading
  • Linear vs. Dialogical Writing (Philosophical writing is rarely straightforward but instead “a monologue that contains a dialogue” [p. 365].)

What would such a handout look like for your discipline?

Students can even be metacognitively prepared (and then prepare themselves) for the overarching learning experiences expected in specific contexts. Salvatori and Donahue’s The Elements (and Pleasures) of Difficulty (2004) encourages students to embrace difficult texts (and tasks) as part of deep learning, rather than as an obstacle.  Their “difficulty paper” assignment helps students reflect on and articulate the nature of the difficulty and work through their responses to it (p. 9).  Similarly, in courses with sensitive subject matter, a different kind of learning occurs, one that involves complex emotional responses.  In “Learning from Their Own Learning: How Metacognitive and Meta-affective Reflections Enhance Learning in Race-Related Courses” (Chick, Karis, & Kernahan, 2009), students were informed about the common reactions to learning about racial inequality (Helms, 1995; Adams, Bell, & Griffin, 1997; see student handout, Chick, Karis, & Kernahan, p. 23-24) and then regularly wrote about their cognitive and affective responses to specific racialized situations.  The students with the most developed metacognitive and meta-affective practices at the end of the semester were able to “clear the obstacles and move away from” oversimplified thinking about race and racism “to places of greater questioning, acknowledging the complexities of identity, and redefining the world in racial terms” (p. 14).

Ultimately, metacognition requires students to “externalize mental events” (Bransford, Brown, & Cocking, p. 67), such as understanding what it means to learn, being aware of one’s strengths and weaknesses with specific skills or in a given learning context, planning what’s required to accomplish a specific learning goal or activity, identifying and correcting errors, and preparing ahead for learning processes.

————————

1 Students who were tested with short answer in addition to multiple-choice questions on their exams reported more cognitively active behaviors than those tested with just multiple-choice questions, and these active behaviors led to improved performance on the final exam.

  • Adams, Maurianne, Bell, Lee Ann, and Griffin, Pat. (1997). Teaching for diversity and social justice: A sourcebook . New York: Routledge.
  • Bransford, John D., Brown Ann L., and Cocking Rodney R. (2000). How people learn: Brain, mind, experience, and school . Washington, D.C.: National Academy Press.
  • Baker, Linda, and Brown, Ann L. (1984). Metacognitive skills and reading.  In Paul David Pearson, Michael L. Kamil, Rebecca Barr, & Peter Mosenthal (Eds.), Handbook of research in reading: Volume III (pp. 353–395).  New York: Longman.
  • Brown, Ann L. (1980). Metacognitive development and reading. In Rand J. Spiro, Bertram C. Bruce, and William F. Brewer, (Eds.), Theoretical issues in reading comprehension: Perspectives from cognitive psychology, linguistics, artificial intelligence, and education (pp. 453-482). Hillsdale, NJ: Erlbaum.
  • Chick, Nancy, Karis, Terri, and Kernahan, Cyndi. (2009). Learning from their own learning: how metacognitive and meta-affective reflections enhance learning in race-related courses . International Journal for the Scholarship of Teaching and Learning, 3(1). 1-28.
  • Commander, Nannette Evans, and Valeri-Gold, Marie. (2001). The learning portfolio: A valuable tool for increasing metacognitive awareness . The Learning Assistance Review, 6 (2), 5-18.
  • Concepción, David. (2004). Reading philosophy with background knowledge and metacognition . Teaching Philosophy , 27 (4). 351-368.
  • Dunning, David, Johnson, Kerri, Ehrlinger, Joyce, and Kruger, Justin. (2003) Why people fail to recognize their own incompetence . Current Directions in Psychological Science, 12 (3). 83-87.
  • Flavell,  John H. (1985). Cognitive development. Englewood Cliffs, NJ: Prentice Hall.
  • Hatano, Giyoo and Inagaki, Kayoko. (1986). Two courses of expertise. In Stevenson, Harold, Azuma, Hiroshi, and Hakuta, Kenji (Eds.), Child development and education in Japan. New York: W.H. Freeman.
  • Helms, Janet E. (1995). An update of Helms’ white and people of color racial identity models. In Ponterotto, Joseph G., Casas, Manuel, Suzuki, Lisa A., and Alexander, Charlene M. (Eds.), Handbook of multicultural counseling (pp. 181-198). Thousand Oaks, CA: Sage.
  • Lovett, Marsha C. (2013). Make exams worth more than the grade. In Matthew Kaplan, Naomi Silver, Danielle LaVague-Manty, and Deborah Meizlish (Eds.), Using reflection and metacognition to improve student learning: Across the disciplines, across the academy . Sterling, VA: Stylus.
  • Palincsar, Annemarie Sullivan, and Brown, Ann L. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities . Cognition and Instruction, 1 (2). 117-175.
  • Pintrich, Paul R. (2002). The Role of metacognitive knowledge in learning, teaching, and assessing . Theory into Practice, 41 (4). 219-225.
  • Salvatori, Mariolina Rizzi, and Donahue, Patricia. (2004). The Elements (and pleasures) of difficulty . New York: Pearson-Longman.
  • Scardamalia, Marlene, Bereiter, Carl, and Steinbach, Rosanne. (1984). Teachability of reflective processes in written composition . Cognitive Science , 8, 173-190.
  • Schoenfeld, Alan H. (1991). On mathematics as sense making: An informal attack on the fortunate divorce of formal and informal mathematics. In James F. Voss, David N. Perkins, and Judith W. Segal (Eds.), Informal reasoning and education (pp. 311-344). Hillsdale, NJ: Erlbaum.
  • Stanger-Hall, Kathrin F. (2012). Multiple-choice exams: An obstacle for higher-level thinking in introductory science classes . Cell Biology Education—Life Sciences Education, 11(3), 294-306.
  • Tanner, Kimberly D.  (2012). Promoting student metacognition . CBE—Life Sciences Education, 11, 113-120.
  • Weimer, Maryellen.  (2012, November 19). Deep learning vs. surface learning: Getting students to understand the difference . Retrieved from the Teaching Professor Blog from http://www.facultyfocus.com/articles/teaching-professor-blog/deep-learning-vs-surface-learning-getting-students-to-understand-the-difference/ .
  • Zohar, Anat, and David, Adi Ben. (2009). Paving a clear path in a thick forest: a conceptual analysis of a metacognitive component . Metacognition Learning , 4 , 177-195.



What are metacognitive skills? Examples in everyday life



When facing a career change or deciding to switch jobs, you might update the hard and soft skills on your resume. You could even take courses to upskill and expand your portfolio.

But some growth happens off the page. Your metacognitive skills contribute to your learning process and help you look inward to self-reflect and monitor your growth. They’re like a golden ticket toward excellence in both academia and your career path, always pushing you further.

A deeper understanding of metacognition, along with effective strategies for developing related skills, opens the door to heightened personal and professional development. Metacognitive thinking might just be the tool you need to reach your academic and career goals.

Metacognitive skills are the soft skills you use to monitor and control your learning and problem-solving processes, or your thinking about thinking. This self-understanding is known as metacognition, a term that the American developmental psychologist John H. Flavell coined in the 1970s.

It might sound abstract, but these skills are mostly about self-awareness, learning, and organizing your thoughts. Metacognitive strategies include thinking out loud and answering reflective questions. They’re often relevant for students who need to memorize concepts fast or absorb lots of information at once.

But metacognition is important for everyone because it helps you retain information more efficiently and feel more confident about what you know. One meta-analysis showed that awareness of metacognitive strategies has a strong positive impact on teaching and learning, and that knowing how to plan ahead was a key indicator of future success.

Understanding your cognition and how you learn is a fundamental step in optimizing your educational process. To make the concept more tangible, here are a few examples of metacognitive skills:

Goal setting

One of the foremost metacognitive skills is knowing how to set goals — recognizing what your ambitions are and fine-tuning them into manageable and attainable objectives. The SMART goal framework (specific, measurable, achievable, relevant, and time-bound) is a good place to start because it pushes you to spell out what you can realistically achieve.

Whether it’s a personal goal of grasping a complex concept, a professional goal of developing a new skill set, or a financial goal of achieving a budgeting milestone, setting a concrete goal helps you know what you’re working toward. It’s the first step to self-directed learning and achievement, giving you a destination for your path.

Planning and organization

Planning is an essential example of metacognition because it sketches out the route you’ll take to reach your goal and identifies the specific strategies, resources, and support mechanisms you’ll need along the way. It’s an in-demand skill for many jobs, but it also helps you learn new things.

Creating and organizing a plan is where you contemplate the best methods for learning, evaluate the materials and resources at your disposal, and determine the most efficient time management strategies. Even though it’s a concrete skill, it falls under the umbrella of metacognition because it involves self-awareness about your learning style and abilities.


Problem-solving

Central to metacognition is problem-solving, a higher-order cognitive process requiring both creative and critical thinking skills. Solving problems both at work and during learning begins with recognizing the issue at hand, analyzing the details, and considering potential solutions. The next step is selecting the most promising solution from the pool of possibilities and evaluating the results after implementation.

The problem-solving process gives you the opportunity to grow from your mistakes and practice trial and error. It also helps you reflect and refine your approach for future endeavors. These qualities make it central to metacognition’s inward-facing yet action-oriented processes.

Concentration

Concentration allows you to fully engage with the information you’re processing and retain new knowledge. It involves a high degree of mental fitness, which you can develop with metacognition. Most tasks require the ability to ignore distractions, resist procrastination, and maintain a steady focus on the task at hand.

This skill is paramount when it comes to work-from-home settings or jobs with lots of moving parts where countless distractions are constantly vying for your attention. And training your mind to focus better in general can also increase your learning efficacy and overall productivity.

Self-reflection

The practice of self-reflection involves continually assessing your performance, cognitive strategies, and experiences to foster self-improvement. It's a type of mental debriefing where you look back on your actions and outcomes, examining them critically to gain insight and draw valuable lessons.

Reflective practice can help you identify what worked well, what didn't, and why, giving you the opportunity to make necessary adjustments for future actions. This continuous process enhances your learning and helps you adapt to change and refine your strategies.


Metacognition turns you into a self-aware problem solver, empowering you to take control of your education and become a more efficient thinker. Although it’s helpful for students, you can also apply it in the workplace while brainstorming and discovering new ways to fulfill your roles and responsibilities.

Here are some examples of metacognitive strategies and how to cultivate your abilities:

1. Determine your learning style

Are you a visual learner who thrives on images, diagrams, and color-coded notes? Are you an auditory learner who benefits more from verbal instructions, podcasts , or group discussions? Or are you a kinesthetic learner who enjoys hands-on experiences, experiments, or physical activities?

Metacognition in education is critical because it teaches you to recognize the way you intake information — the first step to effective strategies that help you truly retain information. By identifying your learning style, you can tailor your goals and study strategies to suit your strengths, maximizing your cognitive potential and improving your understanding of new material.

2. Find deeper meaning in what you read

Merely skimming the surface of the text you read won't lead to profound understanding or long-term retention. Instead, dive deep into the material. Employ reading strategies like note-taking, highlighting, and summarizing to help information enter your brain. 

If that process doesn’t work for you, try using brainstorming techniques like mind mapping to tease out the underlying themes and messages. This depth of processing enhances comprehension and allows you to connect new information to prior knowledge, promoting meaningful learning.

man-reading-book-outdoors-metacognitive-skills

3. Write organized plans

Deconstruct your tasks into manageable units and create a comprehensive, step-by-step plan. Having a detailed guide breaks down large, intimidating tasks into bite-sized, achievable parts, reduces the risk of procrastination, and helps manage cognitive load. This process frees up your mental energy for higher-order thinking.

4. Ask yourself open-ended questions

Metacognitive questioning is a powerful tool for fostering self-awareness. Asking good questions like “What am I trying to achieve?” and “Why did this approach work or not work?” facilitates a deeper understanding of your education style, promotes critical thinking, and enables self-directed learning. Your answers will pave the way for improved processes.

5. Ask for feedback

External perspectives offer valuable insights into your thinking patterns and strategies. Seek feedback from teachers, peers, or mentors to gain the metacognitive knowledge you need to identify strengths to harness and weaknesses to address. Remember, the objective isn’t to nitpick or micromanage. It’s constructive criticism to help refine your learning process.

6. Self-evaluate

Cultivate a habit of self-assessment and self-monitoring, whether you’re experiencing something new or working on an innovative project. Check in on progress regularly, and compare current performance with your goals. This continuous self-evaluation helps you maintain focus on your objectives and identify when you're going off track, allowing for timely adjustments when necessary. 

Introspection is a powerful tool, and you can’t overstate the importance of knowing yourself. After all, building your metacognitive skills begins with a strong foundation of self-awareness and accountability.

7. Focus on solutions

It's easy to let problems and obstacles discourage you during the learning process. But metacognitive skills encourage a solutions-oriented mindset. Instead of fixating on the challenges, shift your focus to identifying, analyzing, and implementing creative solutions . 

This proactive approach fosters resilience and adaptability skills in the face of adversity, helping you overcome whatever comes your way. Cultivating this mindset — sometimes known as a growth mindset — also boosts your problem-solving prowess and transforms challenges into opportunities for growth.

8. Keep a journal

The simple act of writing about your learning experiences can heighten your metacognitive awareness. Journaling provides a space to reflect on your thought processes, emotions, and struggles, which can reveal patterns and trends in your behavior. It’s a springboard for improvement that helps you recognize and solve problems as they come.


In the journey of learning and career advancement, metacognitive skills are your compass toward improvement. They empower you to understand your cognitive processes, enhance your strategies, and become a more effective thinker. They’re useful whether you’re just starting a master’s degree or upskilling to earn a promotion.

Remember, the journey to gain metacognitive skills isn’t a race. It’s a personal voyage of self-discovery and growth. Each stride you take toward honing your metacognitive skills is a step toward a more successful, fulfilling, and self-aware life.


Elizabeth Perry, ACC

Elizabeth Perry is a Coach Community Manager at BetterUp. She uses strategic engagement strategies to cultivate a learning community across a global network of Coaches through in-person and virtual experiences, technology-enabled platforms, and strategic coaching industry partnerships. With over 3 years of coaching experience and a certification in transformative leadership and life coaching from Sofia University, Elizabeth leverages transpersonal psychology expertise to help coaches and clients gain awareness of their behavioral and thought patterns, discover their purpose and passions, and elevate their potential. She is a lifelong student of psychology, personal growth, and human potential as well as an ICF-certified ACC transpersonal life and leadership Coach.


Center for Educational Innovation


Metacognitive strategies improve learning

Metacognition refers to thinking about one's thinking and is a skill students can use as part of a broader collection of skills known as self-regulated learning. Metacognitive strategies for learning include planning and goal setting, monitoring, and reflecting on learning. Students can be instructed in the use of metacognitive strategies. Classroom interventions designed to improve students’ metacognitive approaches are associated with improved learning (Cogliano, 2021; Theobald, 2021).

Strategies to encourage students to use metacognitive techniques

  • Prompt students to develop study plans and to evaluate their approaches to planning for, monitoring, and evaluating their learning. Early in the term, advise and support students in making a study plan. After they receive feedback on the first and subsequent assessments, ask students to reflect on their performance and determine which study strategies worked and which did not. Encourage them to revise their study plans if needed. One way to support this is a personal learning environment activity, in which students map out the various resources and supports available to them.
  • Offer practice tests. Explain to students the benefits of practice testing for improving retention and performance on exams. Create practice tests with an answer key to help students prepare for exams. Use practice questions for in-class formative feedback throughout the term. Consider creating a bank of practice questions from previous exams to share with students (Stanton, 2021).
  • Call attention to strategies students can adopt to space their practice. This can include explaining the benefits of spaced practice and encouraging students to map out weekly study sessions for your course on their calendar. These study sessions should include the most recent material and revisit older material, perhaps in the form of practice tests (Stanton, 2021).
  • Model your metacognitive processes with students. Show students the thinking process behind your approach to solving problems (Ambrose, 2010). This can take the form of a think-aloud where you talk through the steps you would take to plan, monitor, and reflect on your problem-solving approach.

The Meta Model

The Meta Model Problem Solving Strategies

The Meta Model is a model for changing our maps of the world. It provides a number of problem-solving strategies. We cause many of our problems through our unconscious, rule-governed behavior.

We have problems not because the world isn’t rich enough, but because our maps aren’t. Alfred Korzybski’s work demonstrated that we don’t operate on the world directly but through our maps or models.

This model is the foundation of NLP. It evolved from watching extraordinary therapists and the kinds of interactions they had with clients that got results.

Our nervous system deletes and distorts whole portions of reality in order to make the world manageable. Our maps determine our behavioral options by creating rules and programs for how we do things.

We delete information to avoid being overwhelmed. We don’t see all the choices we have available. We attend to our priorities and overlook other things that might be valuable.

We generalize information in order to summarize and synthesize. Dealing with categories is much less demanding than dealing with individual cases. For example, we talk about dogs as a category rather than all the individual dogs that we have met.

Lastly we distort information, for instance when we plan or visualize the future.

How we build the maps that control our behavior

We use three universal modeling processes to build our maps or models. The Meta Model uses these three processes. Its terminology is from the field of linguistics and may seem quite strange.

Meta Model Deletions

We pay attention to some parts of our experiences and not others. The millions of sights, sounds, smells and feelings in the external environment and our internal world would overwhelm us if we didn’t delete most of them.

Deleting enables us for instance to talk on the phone in the middle of a crowded room. We tune in to what is important like hearing our name mentioned at a party. We are also deleting information when we think of ourselves as having limited choices. We often overlook problem-solving strategies that recover deleted choices.

Deletion Patterns

  • Unspecified Nouns – Who or What
  • Unspecified Verbs – Understanding the Process
  • Simple Deletions
  • Comparative Deletions
  • Ly Adverbs – Obviously this is Useful

Meta Model Generalizations

We categorize and summarize in order to manage our experience. We do this by choosing a representative experience, so one particular dog (real or a combination) will represent our category of dogs.

Generalizing enables us to transfer learning from one area to another. We learn the doorknob principle and use it to open doors we’ve never seen before.

Generalization Patterns

  • Universal Quantifiers – a Meta Model Generalization
  • Modal Operators – a Meta Model Generalization
  • Complex Equivalences – a Meta Model Generalization

The Meta Model Distortions

Our ability to distort experiences enables us to imagine new things and plan for the future. Distortion is useful in planning a trip, choosing new clothes and decorating a room.

On the other hand, distortions probably cause us the most problems. Distortion can be limiting when we imagine negative events and become unresourceful. For instance, jealousy can be a response to imagining a partner being unfaithful and then responding as though the imagined scene were real.

Distortion Patterns

  • Nominalizations – Recipe for Misunderstanding
  • Mind reading – Jumping to Conclusions
  • Cause effects – How our world works
  • Lost Performatives – Not my Beliefs
  • Linguistic Presuppositions – Accepting What I Say

Using the Model

This model provides a way to recover deleted information, uncover our rules, and untangle misunderstandings in our own and others’ communication. It is particularly useful in business communication, where clear, unambiguous directions can be critical.

Meta model questions

The model is, at its core, a set of questions. By listening for how someone has created his or her maps, we can ask an appropriate question to recover what has been deleted, generalized, or distorted. This then expands and enriches the person’s choices for solving the problem.

Further Reading: The Secrets of Magic by L. Michael Hall, reprinted as Communication Magic.


The Berkeley Well-Being Institute


What Is Metacognition? (A Definition)

Metacognition vs Cognition


Why Metacognition Is Important

Examples of Metacognition

  • When trying to decide how much to get your hopes up about receiving a particular job offer, you might ask yourself whether you are accurately remembering and interpreting everything that happened during the interview.
  • When providing feedback to somebody you supervise, you might consider whether there are any extenuating factors that influenced their behavior that you haven’t taken into account.
  • When a therapist meets with a client for the first time, they will typically monitor their approach to information-gathering, and likely change their style if they observe that the client is shutting down or giving minimal responses.

Metacognition Theory


Metacognition Strategies

  • Connecting new information to things we already know—like when we put a friend’s recent grumpiness in the context of his having gotten a bad performance review at work.
  • Selecting thinking strategies—like when I choose to apply a growth mindset instead of a fixed mindset to my experience of learning a new instrument.
  • Planning, monitoring, and evaluating thinking—like when I watch another guitarist perform having chosen to monitor myself for judgmental thoughts, then I reflect after the fact on how well my efforts to reframe went.

Metacognition Questions

  • “What might I not be considering right now?”
  • “What is my usual response in a situation like this? Could I do something different?”
  • “What information do I not have that would help me make this decision?”
  • “How do I know what I think I know right now? Can I be truly certain about this?”

Metacognition in Education



Books Related to Metacognition

  • Fifty Strategies to Boost Cognitive Engagement: Creating a Thinking Culture in the Classroom (50 Teaching Strategies to Support Cognitive Development)
  • The Complete Learner's Toolkit: Metacognition and Mindset - Equipping the modern learner with the thinking, social and self-regulation skills to succeed at school and in life
  • Metacognition: The Neglected Skill Set for Empowering Students, Revised Edition (Your planning guide to teaching mindful, reflective, proficient thinkers and problem solvers)

Final Thoughts on Metacognition


  • Brown, R., & McNeill, D. (1966). The “tip of the tongue” phenomenon. Journal of Verbal Learning and Verbal Behavior, 5, 325–337.
  • Carruthers, P. (2014). Two concepts of metacognition. Journal of Comparative Psychology, 128(2), 138-139.
  • Dirkes, M. A. (1985). Metacognition: Students in charge of their thinking. Roeper Review, 8(2), 96-100.
  • Dunlosky, J., & Metcalfe, J. (2008). Metacognition. Los Angeles, CA: SAGE.
  • Efklides, A. (2011). Interactions of metacognition with motivation and affect in self-regulated learning: The MASRL model. Educational Psychologist, 46, 6–25. doi:10.1080/00461520.2011.538645
  • Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906–911.
  • Hart, J. T. (1965). Memory and the feeling-of-knowing experience. Journal of Educational Psychology, 5(6), 208–216.
  • Mahdavi, M. (2014). An overview: Metacognition in education. International Journal of Multidisciplinary and current research, 2(6), 529-535.
  • Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (pp. 125–173). New York: Academic Press.
  • Nelson, T. O., Stuart, R. B., Howard, C., & Crowley, M. (1999). Metacognition and clinical psychology: A preliminary framework for research and practice. Clinical Psychology & Psychotherapy: An International Journal of Theory & Practice, 6(2), 73-79.
  • Rhodes, M. G. (2019). Metacognition. Teaching of Psychology, 46(2), 168–175.
  • Smith, J. D., Shields, W. E., & Washburn, D. A. (2003). The comparative psychology of uncertainty monitoring and metacognition. Behavioral and Brain Sciences, 26(3), 317-339.


How Did You Solve It? Metacognition in Mathematics


The Role of Metacognition in Problem-Solving



Charlie has a giant bag of gumballs and wants to share them with his friends. He gives half of what he has to his buddy, Jaysen. He gives half of what's left after that to Marinda. Then he gives half of what's left now to Zack. His mom makes him give 5 gumballs to his sister. Now he has 10 gumballs left. How many gumballs did Charlie have to begin with?
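One common way to crack this puzzle is to work backwards from the 10 gumballs Charlie ends with, then replay the story forward to check the answer, which is exactly the kind of monitoring move metacognition encourages. The short sketch below is our illustration of that reasoning, not part of the original article.

```python
# Working backwards through the gumball problem (an illustrative sketch;
# the function names and the forward check are ours, not the article's).

def gumballs_at_start() -> int:
    remaining = 10      # gumballs Charlie has at the end
    remaining += 5      # undo giving 5 gumballs to his sister
    remaining *= 2      # undo giving half of what's left to Zack
    remaining *= 2      # undo giving half of what's left to Marinda
    remaining *= 2      # undo giving half to Jaysen
    return remaining

def check(start: int) -> int:
    """Replay the story forward to verify the answer."""
    left = start
    left //= 2          # half to Jaysen
    left //= 2          # half of what's left to Marinda
    left //= 2          # half of what's left to Zack
    left -= 5           # 5 gumballs to his sister
    return left

start = gumballs_at_start()
print(start)            # 120
print(check(start))     # 10, matching the story
```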



Sam Rhodes is an assistant professor of elementary mathematics education at Georgia Southern University and a former secondary mathematics teacher.



Metacognition: The Power Behind Problem Solving

By Annabel Furber

As Executive Function coaches , we help our students develop their metacognition in order to be more effective learners. Studies indicate "...that explicit instruction in meta-cognition—the ability to monitor our own thinking and learning—can lead to learning success across subjects and grade levels from primary school through college” (Baker, 2013; Dunlosky & Metcalfe, 2009; Hattie, 2009; Wang, Haertel, & Walberg, 1993, as referenced in "The Boss of My Brain" by Donna Wilson and Marcus Conyers, 2014). 

Why would the ability to think about our own thinking help a student with problem-solving? Well, when you learn how to metacognate, you are also learning how to be the “driver” of your own brain. You are consciously taking control of your thinking process, but even more importantly, you are allowing for and seeking ways to make changes and ultimately learn.  

Problem-solving requires a lot of mental processing. Without an awareness of one’s own thinking and processes, this is extremely difficult. For instance, I remember meeting with a student who was highly resistant to organizing his folders and binders. My explanations and reasoning behind why this might be important were seen as unnecessary and burdensome. Respecting the student’s mindset, I did not push the task, but continued to ask related questions that would help him to uncover his resistance. He was willing to engage in an activity we call the Decisional Balance Sheet. In a 2x2 table we outlined the cost and benefit of keeping things as they are, i.e., no organizational strategies, and then the cost and benefit of making a change, i.e., maintaining better management of his materials. Time was the biggest factor, as the student believed that organizing his materials would take too much time away from the pressing matter of completing homework each night.

Mid-terms rolled around. He needed to review his notes and handouts, review graded materials, and complete a study guide. After an hour and a half, we were still searching for missing handouts and notes that seemed to have disappeared. It was at this point that the student was able to truly see the benefit of keeping his materials organized — but the biggest step forward was that the student was able to see that his resistance to organizing had been misaligned with his own thinking process. It actually took more time to start organizing later in the semester than it would have taken to build the habit from the beginning of the semester.

To be a strong learner and problem-solver you have to be aware of your own strengths and weaknesses. This requires metacognition. You have to be able to understand the nature of the problem and the demands of completing the task, which also requires metacognition. You have to know which strategies are available to you and which ones you are going to use. I suspect you can guess that this, too, is powered by metacognition. In fact, metacognition is so central to learning and becoming an efficient student that I like to think about it as if it were a sandwich. Wait...a sandwich?

OK, stick with me on this point.

Learning requires mental power and control, the way food fuels our body’s energy supply. As you build a sandwich, you begin with a piece of bread. Let’s consider this piece of bread as a slice of metacognition. You need to know and leverage your strengths and to be conscious of your own thought processes as a foundation to learning and problem-solving. Facts and learning challenges come along in the shape of sliced tomatoes, some deli meat, lettuce, and onion. But in order to really complete that sandwich, we’re going to need another slice of "metacognitive bread." We’re going to need to review and reflect again on the problem to ensure we fully comprehend the issue and have considered all the options. (Also, topless sandwiches are just awkward and messy to eat.)

Metacognition is considered by many experts to be the pinnacle of Executive Function skills. It is also an underlying process that serves as the foundation for other Executive Function skills such as organizing, planning, prioritizing, and more. Just like a sandwich, beginning and ending with a slice of "metacognitive bread" is an efficient and masterful (not to mention tasty) way to increase students' learning potential and problem-solving strategies.


About the Author

Annabel Furber

Annabel Furber is a Senior Level Executive Function coach and Supervisor with Beyond BookSmart. She also works as a college instructor and has a background in special education, psychology and neuroscience. She has experience in both educational practices and educational research. Annabel earned a Master’s degree in the field of Mind, Brain, and Education from Harvard Graduate School of Education. Annabel believes in Neurodiversity—that each mind is as unique as a thumb print and no single approach to teaching is useful for all—and that learning challenges often accompany unique skills and talents that require an understanding of the impact of context, motivation, and personal goals. In addition to Executive Function coaching and supervising other coaches, Annabel conducts research and development for Beyond BookSmart. Annabel also serves as an instructor for CAST (Center for Applied Special Technologies) through the Mass Focus program, a graduate course aimed to instruct teachers on how to Universally Design for Learning (UDL).



Review Article | Open access | Published: 11 January 2023

The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature

Enwei Xu (ORCID: 0000-0001-6424-8169), Wei Wang & Qingxia Wang

Humanities and Social Sciences Communications, volume 10, Article number: 16 (2023)


Collaborative problem-solving has been widely embraced in the classroom instruction of critical thinking, which is regarded as the core of curriculum reform based on key competencies in the field of education as well as a key competence for learners in the 21st century. However, the effectiveness of collaborative problem-solving in promoting students’ critical thinking remains uncertain. This research presents the major findings of a meta-analysis of 36 studies published in worldwide educational periodicals during the 21st century, conducted to identify the effectiveness of collaborative problem-solving in promoting students’ critical thinking and to determine, based on evidence, whether and to what extent collaborative problem-solving can result in a rise or decrease in critical thinking. The findings show that (1) collaborative problem-solving is an effective teaching approach for fostering students’ critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]); (2) with respect to the dimensions of critical thinking, collaborative problem-solving can significantly enhance students’ attitudinal tendencies (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]), but it falls short in improving students’ cognitive skills, having only an upper-middle impact (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]); and (3) the teaching type (χ² = 7.20, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), and learning scaffold (χ² = 9.03, P < 0.01) all have an impact on critical thinking and can be viewed as important moderating factors that affect how critical thinking develops. On the basis of these results, recommendations are made for further study and instruction to better support students’ critical thinking in the context of collaborative problem-solving.


Introduction

Although critical thinking has a long history in research, the concept of critical thinking, which is regarded as an essential competence for learners in the 21st century, has recently attracted more attention from researchers and teaching practitioners (National Research Council, 2012 ). Critical thinking should be the core of curriculum reform based on key competencies in the field of education (Peng and Deng, 2017 ) because students with critical thinking can not only understand the meaning of knowledge but also effectively solve practical problems in real life even after knowledge is forgotten (Kek and Huijser, 2011 ). The definition of critical thinking is not universal (Ennis, 1989 ; Castle, 2009 ; Niu et al., 2013 ). In general, the definition of critical thinking is a self-aware and self-regulated thought process (Facione, 1990 ; Niu et al., 2013 ). It refers to the cognitive skills needed to interpret, analyze, synthesize, reason, and evaluate information as well as the attitudinal tendency to apply these abilities (Halpern, 2001 ). The view that critical thinking can be taught and learned through curriculum teaching has been widely supported by many researchers (e.g., Kuncel, 2011 ; Leng and Lu, 2020 ), leading to educators’ efforts to foster it among students. In the field of teaching practice, there are three types of courses for teaching critical thinking (Ennis, 1989 ). The first is an independent curriculum in which critical thinking is taught and cultivated without involving the knowledge of specific disciplines; the second is an integrated curriculum in which critical thinking is integrated into the teaching of other disciplines as a clear teaching goal; and the third is a mixed curriculum in which critical thinking is taught in parallel to the teaching of other disciplines for mixed teaching training. Furthermore, numerous measuring tools have been developed by researchers and educators to measure critical thinking in the context of teaching practice. These include standardized measurement tools, such as WGCTA, CCTST, CCTT, and CCTDI, which have been verified by repeated experiments and are considered effective and reliable by international scholars (Facione and Facione, 1992 ). In short, descriptions of critical thinking, including its two dimensions of attitudinal tendency and cognitive skills, different types of teaching courses, and standardized measurement tools provide a complex normative framework for understanding, teaching, and evaluating critical thinking.

Cultivating critical thinking in curriculum teaching can start with a problem, and one of the most popular critical thinking instructional approaches is problem-based learning (Liu et al., 2020 ). Duch et al. ( 2001 ) noted that problem-based learning in group collaboration is progressive active learning, which can improve students’ critical thinking and problem-solving skills. Collaborative problem-solving is the organic integration of collaborative learning and problem-based learning, which takes learners as the center of the learning process and uses problems with poor structure in real-world situations as the starting point for the learning process (Liang et al., 2017 ). Students learn the knowledge needed to solve problems in a collaborative group, reach a consensus on problems in the field, and form solutions through social cooperation methods, such as dialogue, interpretation, questioning, debate, negotiation, and reflection, thus promoting the development of learners’ domain knowledge and critical thinking (Cindy, 2004 ; Liang et al., 2017 ).

Collaborative problem-solving has been widely used in the teaching practice of critical thinking, and several studies have attempted to conduct a systematic review and meta-analysis of the empirical literature on critical thinking from various perspectives. However, little attention has been paid to the impact of collaborative problem-solving on critical thinking. Therefore, the best approach for developing and enhancing critical thinking throughout collaborative problem-solving is to examine how to implement critical thinking instruction; however, this issue is still unexplored, which means that many teachers are incapable of better instructing critical thinking (Leng and Lu, 2020 ; Niu et al., 2013 ). For example, Huber ( 2016 ) provided the meta-analysis findings of 71 publications on gaining critical thinking over various time frames in college with the aim of determining whether critical thinking was truly teachable. These authors found that learners significantly improve their critical thinking while in college and that critical thinking differs with factors such as teaching strategies, intervention duration, subject area, and teaching type. The usefulness of collaborative problem-solving in fostering students’ critical thinking, however, was not determined by this study, nor did it reveal whether there existed significant variations among the different elements. A meta-analysis of 31 pieces of educational literature was conducted by Liu et al. ( 2020 ) to assess the impact of problem-solving on college students’ critical thinking. These authors found that problem-solving could promote the development of critical thinking among college students and proposed establishing a reasonable group structure for problem-solving in a follow-up study to improve students’ critical thinking. Additionally, previous empirical studies have reached inconclusive and even contradictory conclusions about whether and to what extent collaborative problem-solving increases or decreases critical thinking levels. As an illustration, Yang et al. ( 2008 ) carried out an experiment on the integrated curriculum teaching of college students based on a web bulletin board with the goal of fostering participants’ critical thinking in the context of collaborative problem-solving. These authors’ research revealed that through sharing, debating, examining, and reflecting on various experiences and ideas, collaborative problem-solving can considerably enhance students’ critical thinking in real-life problem situations. In contrast, collaborative problem-solving had a positive impact on learners’ interaction and could improve learning interest and motivation but could not significantly improve students’ critical thinking when compared to traditional classroom teaching, according to research by Naber and Wyatt ( 2014 ) and Sendag and Odabasi ( 2009 ) on undergraduate and high school students, respectively.

The above studies show that there is inconsistency regarding the effectiveness of collaborative problem-solving in promoting students’ critical thinking. Therefore, it is essential to conduct a thorough and trustworthy review to detect and decide whether and to what degree collaborative problem-solving can result in a rise or decrease in critical thinking. Meta-analysis is a quantitative analysis approach that is utilized to examine quantitative data from various separate studies that are all focused on the same research topic. This approach characterizes the effectiveness of its impact by averaging the effect sizes of numerous qualitative studies in an effort to reduce the uncertainty brought on by independent research and produce more conclusive findings (Lipsey and Wilson, 2001 ).

This paper used a meta-analytic approach and carried out a meta-analysis to examine the effectiveness of collaborative problem-solving in promoting students’ critical thinking in order to make a contribution to both research and practice. The following research questions were addressed by this meta-analysis:

What is the overall effect size of collaborative problem-solving in promoting students’ critical thinking and its impact on the two dimensions of critical thinking (i.e., attitudinal tendency and cognitive skills)?

How are the disparities between the study conclusions impacted by various moderating variables if the impacts of various experimental designs in the included studies are heterogeneous?

This research followed the strict procedures (e.g., database searching, identification, screening, eligibility, merging, duplicate removal, and analysis of included studies) of Cooper’s ( 2010 ) proposed meta-analysis approach for examining quantitative data from various separate studies that are all focused on the same research topic. The relevant empirical research that appeared in worldwide educational periodicals within the 21st century was subjected to this meta-analysis using Rev-Man 5.4. The consistency of the data extracted separately by two researchers was tested using Cohen’s kappa coefficient, and a publication bias test and a heterogeneity test were run on the sample data to ascertain the quality of this meta-analysis.
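The inter-coder consistency check mentioned above relies on Cohen's kappa, which compares the observed agreement between two coders with the agreement expected by chance. Below is a minimal, self-contained sketch of that calculation; the example codes are hypothetical and are not the authors' data.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning categorical codes to the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Expected agreement if both coders labelled items independently at their own base rates.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes for eight candidate studies (e.g., "include" / "exclude"):
a = ["include", "include", "exclude", "include", "exclude", "include", "include", "exclude"]
b = ["include", "exclude", "exclude", "include", "exclude", "include", "include", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 0.75 for this made-up example
```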

Data sources and search strategies

There were three stages to the data collection process for this meta-analysis, as shown in Fig. 1, which gives the number of articles included and eliminated during the selection process based on the stated study eligibility criteria.

Figure 1. Flowchart of the records identified, included, and excluded during article selection.

First, the databases used to systematically search for relevant articles were the journal papers of the Web of Science Core Collection and the Chinese Core source journal, as well as the Chinese Social Science Citation Index (CSSCI) source journal papers included in CNKI. These databases were selected because they are credible platforms that are sources of scholarly and peer-reviewed information with advanced search tools and contain literature relevant to the subject of our topic from reliable researchers and experts. The search string with the Boolean operator used in the Web of Science was “TS = (((“critical thinking” or “ct” and “pretest” or “posttest”) or (“critical thinking” or “ct” and “control group” or “quasi experiment” or “experiment”)) and (“collaboration” or “collaborative learning” or “CSCL”) and (“problem solving” or “problem-based learning” or “PBL”))”. The research area was “Education Educational Research”, and the search period was “January 1, 2000, to December 30, 2021”. A total of 412 papers were obtained. The search string with the Boolean operator used in the CNKI was “SU = (‘critical thinking’*‘collaboration’ + ‘critical thinking’*‘collaborative learning’ + ‘critical thinking’*‘CSCL’ + ‘critical thinking’*‘problem solving’ + ‘critical thinking’*‘problem-based learning’ + ‘critical thinking’*‘PBL’ + ‘critical thinking’*‘problem oriented’) AND FT = (‘experiment’ + ‘quasi experiment’ + ‘pretest’ + ‘posttest’ + ‘empirical study’)” (translated into Chinese when searching). A total of 56 studies were found throughout the search period of “January 2000 to December 2021”. From the databases, all duplicates and retractions were eliminated before exporting the references into Endnote, a program for managing bibliographic references. In all, 466 studies were found.

Second, the studies that matched the inclusion and exclusion criteria for the meta-analysis were chosen by two researchers after they had reviewed the abstracts and titles of the gathered articles, yielding a total of 126 studies.

Third, two researchers thoroughly reviewed each included article’s whole text in accordance with the inclusion and exclusion criteria. Meanwhile, a snowball search was performed using the references and citations of the included articles to ensure complete coverage of the articles. Ultimately, 36 articles were kept.

Two researchers worked together to carry out this entire process, and a consensus rate of almost 94.7% was reached after discussion and negotiation to clarify any emerging differences.

Eligibility criteria

Since not all the retrieved studies matched the criteria for this meta-analysis, eligibility criteria for both inclusion and exclusion were developed as follows:

The publication language of the included studies was limited to English and Chinese, and the full text could be obtained. Articles that did not meet the publication language and articles not published between 2000 and 2021 were excluded.

The research design of the included studies must be empirical and quantitative studies that can assess the effect of collaborative problem-solving on the development of critical thinking. Articles that could not identify the causal mechanisms by which collaborative problem-solving affects critical thinking, such as review articles and theoretical articles, were excluded.

The research method of the included studies must feature a randomized control experiment or a quasi-experiment, or a natural experiment, which have a higher degree of internal validity with strong experimental designs and can all plausibly provide evidence that critical thinking and collaborative problem-solving are causally related. Articles with non-experimental research methods, such as purely correlational or observational studies, were excluded.

The participants of the included studies were only students in school, including K-12 students and college students. Articles in which the participants were non-school students, such as social workers or adult learners, were excluded.

The research results of the included studies must mention definite signs that may be utilized to gauge critical thinking’s impact (e.g., sample size, mean value, or standard deviation). Articles that lacked specific measurement indicators for critical thinking and could not calculate the effect size were excluded.

Data coding design

In order to perform a meta-analysis, it is necessary to collect the most important information from the articles, codify that information’s properties, and convert descriptive data into quantitative data. Therefore, this study designed a data coding template (see Table 1 ). Ultimately, 16 coding fields were retained.

The designed data-coding template consisted of three pieces of information. Basic information about the papers was included in the descriptive information: the publishing year, author, serial number, and title of the paper.

The variable information for the experimental design had three variables: the independent variable (instruction method), the dependent variable (critical thinking), and the moderating variable (learning stage, teaching type, intervention duration, learning scaffold, group size, measuring tool, and subject area). Depending on the topic of this study, the intervention strategy, as the independent variable, was coded into collaborative and non-collaborative problem-solving. The dependent variable, critical thinking, was coded as a cognitive skill and an attitudinal tendency. And seven moderating variables were created by grouping and combining the experimental design variables discovered within the 36 studies (see Table 1 ), where learning stages were encoded as higher education, high school, middle school, and primary school or lower; teaching types were encoded as mixed courses, integrated courses, and independent courses; intervention durations were encoded as 0–1 weeks, 1–4 weeks, 4–12 weeks, and more than 12 weeks; group sizes were encoded as 2–3 persons, 4–6 persons, 7–10 persons, and more than 10 persons; learning scaffolds were encoded as teacher-supported learning scaffold, technique-supported learning scaffold, and resource-supported learning scaffold; measuring tools were encoded as standardized measurement tools (e.g., WGCTA, CCTT, CCTST, and CCTDI) and self-adapting measurement tools (e.g., modified or made by researchers); and subject areas were encoded according to the specific subjects used in the 36 included studies.
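To make the coding template more concrete, here is a hypothetical coded record expressed as a plain Python dictionary. The field names and values are illustrative only and are not copied from the authors' Table 1.

```python
# A hypothetical coded record following the spirit of the data coding template.
# Field names and values are illustrative only.
study_record = {
    # descriptive information
    "serial_number": 7,
    "author": "Example et al.",
    "year": 2018,
    "title": "An example collaborative problem-solving study",
    # variable information
    "independent_variable": "collaborative problem solving",  # vs. non-collaborative
    "dependent_variable": "cognitive skills",                  # or "attitudinal tendency"
    "learning_stage": "higher education",
    "teaching_type": "integrated course",
    "intervention_duration": "4-12 weeks",
    "group_size": "4-6 persons",
    "learning_scaffold": "teacher-supported",
    "measuring_tool": "standardized (CCTST)",
    "subject_area": "science",
    # data information
    "sample_size_experimental": 42,
    "sample_size_control": 40,
    "mean_experimental": 3.8,
    "mean_control": 3.4,
    "sd_experimental": 0.6,
    "sd_control": 0.7,
}
```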

The data information contained three metrics for measuring critical thinking: sample size, average value, and standard deviation. It is vital to remember that studies with different experimental designs frequently adopt different formulas to determine the effect size; this paper used the standardized mean difference (SMD) formula proposed by Morris (2008, p. 369; see Supplementary Table S3).
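As a rough illustration of what a standardized mean difference computation looks like, here is a generic pooled-SD version with a small-sample correction. This is only a sketch: the paper follows Morris (2008), whose formula for pretest-posttest-control designs standardizes the difference in gain scores rather than posttest means, and the inputs below are hypothetical.

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Generic standardized mean difference between a treatment and a control group.

    A minimal sketch: pooled-SD standardization with a small-sample correction.
    The paper's Morris (2008) formula for pretest-posttest-control designs
    differs in detail (it standardizes the difference of gain scores).
    """
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)  # Hedges' small-sample correction
    return d * correction

# Hypothetical posttest summary statistics for one study:
print(round(smd(3.8, 0.6, 42, 3.4, 0.7, 40), 2))  # about 0.61
```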

Procedure for extracting and coding data

According to the data coding template (see Table 1 ), the 36 papers’ information was retrieved by two researchers, who then entered them into Excel (see Supplementary Table S1 ). The results of each study were extracted separately in the data extraction procedure if an article contained numerous studies on critical thinking, or if a study assessed different critical thinking dimensions. For instance, Tiwari et al. ( 2010 ) used four time points, which were viewed as numerous different studies, to examine the outcomes of critical thinking, and Chen ( 2013 ) included the two outcome variables of attitudinal tendency and cognitive skills, which were regarded as two studies. After discussion and negotiation during data extraction, the two researchers’ consistency test coefficients were roughly 93.27%. Supplementary Table S2 details the key characteristics of the 36 included articles with 79 effect quantities, including descriptive information (e.g., the publishing year, author, serial number, and title of the paper), variable information (e.g., independent variables, dependent variables, and moderating variables), and data information (e.g., mean values, standard deviations, and sample size). Following that, testing for publication bias and heterogeneity was done on the sample data using the Rev-Man 5.4 software, and then the test results were used to conduct a meta-analysis.

Publication bias test

Publication bias arises when the sample of studies included in a meta-analysis does not accurately reflect the overall state of research on the relevant subject. It can compromise the reliability and accuracy of the meta-analysis, so the sample data must be checked for it (Stewart et al., 2006). A popular check is the funnel plot: publication bias is unlikely when the data points are distributed evenly on either side of the average effect size and concentrated toward the top of the plot. The data in this analysis are evenly dispersed within the upper portion of the funnel (see Fig. 2), indicating that publication bias is unlikely in this situation.

Figure 2. Funnel plot of the publication bias test for the 79 effect quantities across the 36 studies.
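The authors generated their funnel plot in RevMan 5.4. For readers who want to reproduce the idea, a rough matplotlib equivalent is sketched below with simulated effect sizes and standard errors; none of these numbers come from the paper.

```python
import matplotlib.pyplot as plt
import numpy as np

# Simulated effect sizes (SMD) and standard errors for illustration only;
# the paper's funnel plot was produced in RevMan 5.4 from its 79 effect quantities.
rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.45, size=79)
effects = rng.normal(loc=0.82, scale=se)

plt.scatter(effects, se, s=12)
plt.axvline(effects.mean(), linestyle="--")
plt.gca().invert_yaxis()  # small standard errors (large studies) at the top
plt.xlabel("Standardized mean difference")
plt.ylabel("Standard error")
plt.title("Funnel plot (illustrative)")
plt.show()
```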

Heterogeneity test

To select the appropriate effect model for the meta-analysis, one can use the results of a heterogeneity test on the effect sizes. In a meta-analysis, it is common practice to gauge the degree of data heterogeneity using the I² value, and I² ≥ 50% is typically understood to denote medium-to-high heterogeneity, which calls for the adoption of a random-effects model; otherwise, a fixed-effect model ought to be applied (Lipsey and Wilson, 2001). The findings of the heterogeneity test in this paper (see Table 2) revealed that I² was 86% and displayed significant heterogeneity (P < 0.01). To ensure accuracy and reliability, the overall effect size ought to be calculated using the random-effects model.
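The I² decision rule described above can be illustrated with a short calculation of Cochran's Q and I² from a set of effect sizes and their sampling variances. This sketch uses made-up inputs; the paper's values were computed in RevMan 5.4.

```python
import numpy as np

def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic for a set of effect sizes."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled_fixed = np.sum(weights * effects) / np.sum(weights)
    q = np.sum(weights * (effects - pooled_fixed) ** 2)  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical effect sizes and sampling variances:
q, i2 = heterogeneity([0.4, 0.9, 1.2, 0.3, 1.6], [0.04, 0.06, 0.05, 0.03, 0.08])
model = "random effects" if i2 >= 50 else "fixed effect"
print(round(q, 2), round(i2, 1), model)  # high I^2 here, so a random-effects model
```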

The analysis of the overall effect size

Given this heterogeneity, the meta-analysis used a random-effects model to pool the 79 effect quantities from 36 studies. Judged against Cohen’s criterion (Cohen, 1992), the forest plot of the overall effect (see Fig. 3) shows that the overall effect size of collaborative problem-solving is 0.82, which is statistically significant (z = 12.78, P < 0.01, 95% CI [0.69, 0.95]) and indicates that it can encourage learners to practice critical thinking.

Figure 3. This forest plot shows the analysis result of the overall effect size across 36 studies.
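
Pooled estimates, z values, and confidence intervals of this kind are commonly obtained with DerSimonian-Laird random-effects pooling. The Python sketch below illustrates the general calculation on hypothetical inputs; it is not necessarily the exact procedure that RevMan applies:

```python
import numpy as np

def random_effects_pool(es, se):
    """DerSimonian-Laird random-effects pooling of effect sizes (sketch)."""
    es, se = np.asarray(es), np.asarray(se)
    w = 1 / se**2
    fixed = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - fixed) ** 2)           # Cochran's Q
    df = len(es) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    w_star = 1 / (se**2 + tau2)                 # random-effects weights
    pooled = np.sum(w_star * es) / np.sum(w_star)
    se_pooled = np.sqrt(1 / np.sum(w_star))
    z = pooled / se_pooled
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, z, ci

# Hypothetical effect sizes and standard errors (illustrative only)
pooled, z, ci = random_effects_pool([0.5, 0.9, 1.4, 0.3, 0.8],
                                    [0.12, 0.20, 0.25, 0.15, 0.18])
print(f"ES = {pooled:.2f}, z = {z:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```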

In addition, this study examined two distinct dimensions of critical thinking to better understand the precise contributions that collaborative problem-solving makes to its growth. The findings (see Table 3) indicate that collaborative problem-solving improves both cognitive skills (ES = 0.70) and attitudinal tendency (ES = 1.17), with significant intergroup differences (chi2 = 7.95, P < 0.01). Although collaborative problem-solving improves both dimensions, the improvement in students’ attitudinal tendency is much more pronounced, with a significant comprehensive effect (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]), whereas the gains in learners’ cognitive skills are more modest, falling just above the average level (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).

The analysis of moderator effect size

A two-tailed test of all 79 effect quantities in the forest plot revealed significant heterogeneity (I² = 86%, z = 12.78, P < 0.01), indicating differences between effect sizes that may reflect moderating factors beyond sampling error. Subgroup analysis was therefore used to explore the moderating factors that might produce this heterogeneity, namely the learning stage, learning scaffold, teaching type, group size, duration of the intervention, measuring tool, and subject area covered by the 36 experimental designs, in order to identify the key factors that influence critical thinking. The findings (see Table 4) indicate that the moderating factors generally have advantageous effects on critical thinking. Specifically, the subject area (chi2 = 13.36, P < 0.05), group size (chi2 = 8.77, P < 0.05), intervention duration (chi2 = 12.18, P < 0.01), learning scaffold (chi2 = 9.03, P < 0.01), and teaching type (chi2 = 7.20, P < 0.05) are all significant moderators that can be leveraged to support the cultivation of critical thinking. However, because the learning stage (chi2 = 3.15, P = 0.21 > 0.05) and the measuring tool (chi2 = 0.08, P = 0.78 > 0.05) did not differ significantly between groups, we are unable to explain why these two factors are crucial in supporting the cultivation of critical thinking in the context of collaborative problem-solving. The precise outcomes are as follows (a computational sketch of the intergroup test appears after these outcomes):

Various learning stages influenced critical thinking positively, without significant intergroup differences (chi 2  = 3.15, P  = 0.21 > 0.05). High school was first on the list of effect sizes (ES = 1.36, P  < 0.01), then higher education (ES = 0.78, P  < 0.01), and middle school (ES = 0.73, P  < 0.01). These results show that, despite the learning stage’s beneficial influence on cultivating learners’ critical thinking, we are unable to explain why it is essential for cultivating critical thinking in the context of collaborative problem-solving.

Different teaching types had varying degrees of positive impact on critical thinking, with significant intergroup differences (chi 2  = 7.20, P  < 0.05). The effect size was ranked as follows: mixed courses (ES = 1.34, P  < 0.01), integrated courses (ES = 0.81, P  < 0.01), and independent courses (ES = 0.27, P  < 0.01). These results indicate that the most effective approach to cultivate critical thinking utilizing collaborative problem solving is through the teaching type of mixed courses.

Various intervention durations significantly improved critical thinking, and there were significant intergroup differences (chi 2  = 12.18, P  < 0.01). The effect sizes related to this variable showed a tendency to increase with longer intervention durations. The improvement in critical thinking reached a significant level (ES = 0.85, P  < 0.01) after more than 12 weeks of training. These findings indicate that the intervention duration and critical thinking’s impact are positively correlated, with a longer intervention duration having a greater effect.

Different learning scaffolds influenced critical thinking positively, with significant intergroup differences (chi 2  = 9.03, P  < 0.01). The resource-supported learning scaffold (ES = 0.69, P  < 0.01) acquired a medium-to-higher level of impact, the technique-supported learning scaffold (ES = 0.63, P  < 0.01) also attained a medium-to-higher level of impact, and the teacher-supported learning scaffold (ES = 0.92, P  < 0.01) displayed a high level of significant impact. These results show that the learning scaffold with teacher support has the greatest impact on cultivating critical thinking.

Various group sizes influenced critical thinking positively, and the intergroup differences were statistically significant (chi2 = 8.77, P < 0.05). Critical thinking showed a general declining trend with increasing group size. The overall effect size for groups of 2–3 people was the largest (ES = 0.99, P < 0.01), and when the group size was greater than 7 people, the improvement in critical thinking fell to the lower-middle level (ES < 0.5, P < 0.01). These results show that the impact on critical thinking is negatively associated with group size: as group size grows, the overall impact declines.

Various measuring tools influenced critical thinking positively, but without significant intergroup differences (chi2 = 0.08, P = 0.78 > 0.05). The self-adapted measurement tools obtained an upper-medium level of effect (ES = 0.78), whereas the overall effect size of the standardized measurement tools was the largest, reaching a significant level of effect (ES = 0.84, P < 0.01). These results show that, despite the beneficial influence of the measuring tool on cultivating critical thinking, we are unable to explain why it is crucial in fostering the growth of critical thinking through collaborative problem-solving.

Different subject areas showed varying degrees of impact on critical thinking, and the intergroup differences were statistically significant (chi2 = 13.36, P < 0.05). Mathematics had the greatest overall impact, reaching a significant level of effect (ES = 1.68, P < 0.01), followed by science (ES = 1.25, P < 0.01) and medical science (ES = 0.87, P < 0.01), both of which also reached a significant level of effect. Programming technology was the least effective (ES = 0.39, P < 0.01), with only a medium-low degree of effect compared to education (ES = 0.72, P < 0.01) and other fields such as language, art, and the social sciences (ES = 0.58, P < 0.01). These results suggest that scientific fields (e.g., mathematics, science) may be the most effective subject areas for cultivating critical thinking through collaborative problem-solving.
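
As noted above, the intergroup chi-square values for each moderator are between-group heterogeneity statistics (Q-between). The sketch below illustrates, on hypothetical subgroups, how such a statistic can be computed; it uses fixed-effect weights within each subgroup for brevity and is not the exact computation used in this paper:

```python
import numpy as np

def subgroup_chi2(groups):
    """Between-group heterogeneity (Q_between) for a moderator analysis.

    Each group is a (effect_sizes, standard_errors) pair. Q_between compares
    the subgroup pooled means; its degrees of freedom are (number of
    subgroups - 1).
    """
    means, weights = [], []
    for es, se in groups:
        w = 1 / np.asarray(se) ** 2
        means.append(np.sum(w * np.asarray(es)) / np.sum(w))
        weights.append(np.sum(w))
    means, weights = np.asarray(means), np.asarray(weights)
    grand = np.sum(weights * means) / np.sum(weights)
    return np.sum(weights * (means - grand) ** 2)

# Hypothetical subgroups (e.g., two teaching types), illustrative only
q_between = subgroup_chi2([
    ([1.2, 1.5, 1.3], [0.20, 0.25, 0.30]),
    ([0.3, 0.2, 0.4], [0.20, 0.22, 0.28]),
])
print(round(q_between, 2))
```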

The effectiveness of collaborative problem solving with regard to teaching critical thinking

According to this meta-analysis, using collaborative problem-solving as an intervention strategy in critical thinking teaching has a considerable overall impact on cultivating learners’ critical thinking and a favorable promotional effect on both dimensions of critical thinking. Several studies have argued that collaborative problem-solving, the most frequently used critical thinking teaching strategy in curriculum instruction, can considerably enhance students’ critical thinking (e.g., Liang et al., 2017; Liu et al., 2020; Cindy, 2004), and this meta-analysis provides convergent data support for that view. Thus, the findings not only address the first research question regarding the overall effect of collaborative problem-solving on critical thinking and on its two dimensions (i.e., attitudinal tendency and cognitive skills), but also strengthen our confidence in cultivating critical thinking through collaborative problem-solving interventions in classroom teaching.

Furthermore, the associated improvements in attitudinal tendency are much stronger, whereas the corresponding improvements in cognitive skills are only marginally better. Some studies suggest that cognitive skills differ from attitudinal tendency in classroom instruction: the cultivation of the former, as a key ability, is a process of gradual accumulation, while the latter, as an attitude, is affected by the teaching context (e.g., a novel and exciting teaching approach, challenging and rewarding tasks) (Halpern, 2001; Wei and Hong, 2022). Collaborative problem-solving as a teaching approach is exciting, interesting, rewarding, and challenging because it puts learners at the center and examines ill-structured problems in real situations, and it can inspire students to fully realize their problem-solving potential, which significantly improves their attitudinal tendency toward solving problems (Liu et al., 2020). Just as collaborative problem-solving influences attitudinal tendency, attitudinal tendency influences cognitive skills when attempting to solve a problem (Liu et al., 2020; Zhang et al., 2022), and stronger attitudinal tendencies are associated with improved learning achievement and cognitive ability in students (Sison, 2008; Zhang et al., 2022). It can be seen that collaborative problem-solving affects the two specific dimensions of critical thinking as well as critical thinking as a whole, and this study illuminates the nuanced links between cognitive skills and attitudinal tendencies within these two dimensions. To fully develop students’ capacity for critical thinking, future empirical research should pay closer attention to cognitive skills.

The moderating effects of collaborative problem solving with regard to teaching critical thinking

In order to further explore the key factors that influence critical thinking, subgroup analysis was used to examine possible moderating effects that might produce the considerable heterogeneity. The findings show that the moderating factors included in the 36 experimental designs, namely the teaching type, learning stage, group size, learning scaffold, duration of the intervention, measuring tool, and subject area, could all support the cultivation of critical thinking through collaborative problem-solving. Among them, the effect size differences for the learning stage and the measuring tool were not significant, which does not explain why these two factors are crucial in supporting the cultivation of critical thinking through collaborative problem-solving.

In terms of the learning stage, the various learning stages influenced critical thinking positively but without significant intergroup differences, so we are unable to explain why the learning stage is crucial in fostering the growth of critical thinking.

Although higher education accounts for 70.89% of all the empirical studies included, high school may be the most suitable learning stage for fostering students’ critical thinking through collaborative problem-solving, since it has the largest overall effect size. This phenomenon may be related to students’ cognitive development and needs to be examined further in follow-up research.

With regard to teaching type, mixed course teaching may be the best way to cultivate students’ critical thinking. Relevant studies have shown that if students are trained in thinking methods alone, the methods they learn remain isolated and divorced from subject knowledge, which hinders transfer; conversely, if students’ thinking is trained only within subject teaching, without systematic method training, it is difficult to apply to real-world circumstances (Ruggiero, 2012; Hu and Liu, 2015). Teaching critical thinking as a mixed course, in parallel with other subject teaching, can achieve the best effect on learners’ critical thinking, and explicit critical thinking instruction is more effective than less explicit instruction (Bensley and Spero, 2014).

In terms of the intervention duration, the overall effect size tends to increase with longer intervention times; thus, intervention duration and the impact on critical thinking are positively correlated. Critical thinking, as a key competency for students in the 21st century, is difficult to improve meaningfully within a brief intervention. Instead, it develops over a lengthy period of time through consistent teaching and the progressive accumulation of knowledge (Halpern, 2001; Hu and Liu, 2015). Therefore, future empirical studies ought to take these constraints into account and use longer periods of critical thinking instruction.

With regard to group size, a group size of 2–3 persons has the highest effect size, and the comprehensive effect size generally decreases as group size increases. This outcome is in line with earlier findings; for example, a group composed of two to four members is most appropriate for collaborative learning (Schellens and Valcke, 2006). The meta-analysis results also indicate that once the group size exceeds 7 people, larger groups do not produce better interaction and performance than smaller ones. This may be because learning scaffolds of technique support, resource support, and teacher support improve the frequency and effectiveness of interaction among members of smaller groups, whereas the greater diversity of views in a larger group does not, by itself, translate into the kind of interaction that cultivates critical thinking through collaborative problem-solving.

With regard to the learning scaffold, the three different kinds of learning scaffolds can all enhance critical thinking. Among them, the teacher-supported learning scaffold has the largest overall effect size, demonstrating the interdependence of effective learning scaffolds and collaborative problem-solving. This outcome is in line with some research findings; as an example, a successful strategy is to encourage learners to collaborate, come up with solutions, and develop critical thinking skills by using learning scaffolds (Reiser, 2004 ; Xu et al., 2022 ); learning scaffolds can lower task complexity and unpleasant feelings while also enticing students to engage in learning activities (Wood et al., 2006 ); learning scaffolds are designed to assist students in using learning approaches more successfully to adapt the collaborative problem-solving process, and the teacher-supported learning scaffolds have the greatest influence on critical thinking in this process because they are more targeted, informative, and timely (Xu et al., 2022 ).

With respect to the measuring tool, although standardized measurement tools (such as the WGCTA, CCTT, and CCTST) have been acknowledged as trustworthy and effective by experts worldwide, only 54.43% of the studies included in this meta-analysis adopted them, and the results indicated no intergroup differences. These results suggest that not all teaching circumstances are suited to measuring critical thinking with standardized tools. According to Simpson and Courtney (2002, p. 91), “the measuring tools for measuring thinking ability have limits in assessing learners in educational situations and should be adapted appropriately to accurately assess the changes in learners’ critical thinking.” Consequently, to gauge more fully and precisely how learners’ critical thinking has evolved, standardized measuring tools must be properly adapted to collaborative problem-solving learning contexts.

With regard to the subject area, the comprehensive effect size for scientific subjects (e.g., mathematics, science, medical science) is larger than for language arts and the social sciences. Recent international education reforms have noted that critical thinking is a basic part of scientific literacy. Students with scientific literacy can justify their judgments with accurate evidence and reasonable standards when facing challenges or ill-structured problems (Kyndt et al., 2013), which makes critical thinking crucial for developing scientific understanding and applying that understanding to practical problem-solving related to science, technology, and society (Yore et al., 2007).

Suggestions for critical thinking teaching

Other than those stated in the discussion above, the following suggestions are offered for critical thinking instruction utilizing the approach of collaborative problem-solving.

First, teachers should put special emphasis on the two core elements of collaboration and problem-solving when designing real problems based on collaborative situations. This meta-analysis provides evidence that collaborative problem-solving has a strong synergistic effect on promoting students’ critical thinking. Asking questions about real situations and allowing learners to take part in critical discussions of real problems during class instruction are key ways to teach critical thinking, rather than simply reading speculative articles without practice (Mulnix, 2012). Furthermore, the improvement of students’ critical thinking is realized through cognitive conflict with other learners in the problem situation (Yang et al., 2008). Consequently, teachers should design real problems and encourage students to discuss, negotiate, and argue in collaborative problem-solving situations.

Second, teachers should design and implement mixed courses to cultivate learners’ critical thinking through collaborative problem-solving. Critical thinking can be taught through curriculum instruction (Kuncel, 2011; Leng and Lu, 2020), with the goal of cultivating learners’ critical thinking for flexible transfer and application in real problem-solving situations. This meta-analysis shows that mixed course teaching has a highly substantial impact on the cultivation and promotion of learners’ critical thinking. Therefore, teachers should design and implement mixed course teaching that embeds real collaborative problem-solving situations in the knowledge content of specific disciplines, teach critical thinking methods and strategies based on ill-structured problems, and provide practical activities in which students interact with each other to develop knowledge construction and critical thinking through collaborative problem-solving.

Third, teachers, particularly preservice teachers, should receive more training in critical thinking, and they should also be aware of how teacher-supported learning scaffolds can promote critical thinking. The learning scaffold supported by teachers had the greatest impact on learners’ critical thinking, being more directive, targeted, and timely (Wood et al., 2006). Critical thinking can only be taught effectively when teachers recognize its significance for students’ growth and use appropriate approaches while designing instructional activities (Forawi, 2016). Therefore, to enable teachers to create learning scaffolds that cultivate learners’ critical thinking through collaborative problem-solving, it is essential to concentrate on teacher-supported learning scaffolds and to enhance instruction in teaching critical thinking for teachers, especially preservice teachers.

Implications and limitations

There are certain limitations in this meta-analysis that future research can address. First, the search languages were restricted to English and Chinese, so pertinent studies written in other languages may have been overlooked, limiting the number of articles available for review. Second, some data in the included studies are missing, such as whether teachers were trained in the theory and practice of critical thinking, the average age and gender of learners, and differences in critical thinking among learners of various ages and genders. Third, as is typical for review articles, additional studies were published while this meta-analysis was being conducted, so the review is limited by its search cutoff. As relevant research develops, future studies focusing on these issues are highly relevant and needed.

Conclusions

This study addressed the question of how much collaborative problem-solving contributes to fostering students’ critical thinking, a topic that had received scant attention in earlier research. The following conclusions can be made:

Regarding the results obtained, collaborative problem-solving is an effective teaching approach for fostering learners’ critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]). With respect to the dimensions of critical thinking, collaborative problem-solving significantly and effectively improves students’ attitudinal tendency, with a significant comprehensive effect (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]); its effect on students’ cognitive skills is more modest, reaching only an upper-middle level of impact (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).

As demonstrated by both the results and the discussion, there are varying degrees of beneficial effects on students’ critical thinking from all seven moderating factors, which were found across 36 studies. In this context, the teaching type (chi 2  = 7.20, P  < 0.05), intervention duration (chi 2  = 12.18, P  < 0.01), subject area (chi 2  = 13.36, P  < 0.05), group size (chi 2  = 8.77, P  < 0.05), and learning scaffold (chi 2  = 9.03, P  < 0.01) all have a positive impact on critical thinking, and they can be viewed as important moderating factors that affect how critical thinking develops. Since the learning stage (chi 2  = 3.15, P  = 0.21 > 0.05) and measuring tools (chi 2  = 0.08, P  = 0.78 > 0.05) did not demonstrate any significant intergroup differences, we are unable to explain why these two factors are crucial in supporting the cultivation of critical thinking in the context of collaborative problem-solving.

Data availability

All data generated or analyzed during this study are included within the article and its supplementary information files, and the supplementary information files are available in the Dataverse repository: https://doi.org/10.7910/DVN/IPFJO6 .

Bensley DA, Spero RA (2014) Improving critical thinking skills and meta-cognitive monitoring through direct infusion. Think Skills Creat 12:55–68. https://doi.org/10.1016/j.tsc.2014.02.001


Castle A (2009) Defining and assessing critical thinking skills for student radiographers. Radiography 15(1):70–76. https://doi.org/10.1016/j.radi.2007.10.007

Chen XD (2013) An empirical study on the influence of PBL teaching model on critical thinking ability of non-English majors. J PLA Foreign Lang College 36 (04):68–72


Cohen A (1992) Antecedents of organizational commitment across occupational groups: a meta-analysis. J Organ Behav. https://doi.org/10.1002/job.4030130602

Cooper H (2010) Research synthesis and meta-analysis: a step-by-step approach, 4th edn. Sage, London, England

Cindy HS (2004) Problem-based learning: what and how do students learn? Educ Psychol Rev 51(1):31–39

Duch BJ, Gron SD, Allen DE (2001) The power of problem-based learning: a practical “how to” for teaching undergraduate courses in any discipline. Stylus Educ Sci 2:190–198

Ennis RH (1989) Critical thinking and subject specificity: clarification and needed research. Educ Res 18(3):4–10. https://doi.org/10.3102/0013189x018003004

Facione PA (1990) Critical thinking: a statement of expert consensus for purposes of educational assessment and instruction. Research findings and recommendations. Eric document reproduction service. https://eric.ed.gov/?id=ed315423

Facione PA, Facione NC (1992) The California Critical Thinking Dispositions Inventory (CCTDI) and the CCTDI test manual. California Academic Press, Millbrae, CA

Forawi SA (2016) Standard-based science education and critical thinking. Think Skills Creat 20:52–62. https://doi.org/10.1016/j.tsc.2016.02.005

Halpern DF (2001) Assessing the effectiveness of critical thinking instruction. J Gen Educ 50(4):270–286. https://doi.org/10.2307/27797889

Hu WP, Liu J (2015) Cultivation of pupils’ thinking ability: a five-year follow-up study. Psychol Behav Res 13(05):648–654. https://doi.org/10.3969/j.issn.1672-0628.2015.05.010

Huber K (2016) Does college teach critical thinking? A meta-analysis. Rev Educ Res 86(2):431–468. https://doi.org/10.3102/0034654315605917

Kek MYCA, Huijser H (2011) The power of problem-based learning in developing critical thinking skills: preparing students for tomorrow’s digital futures in today’s classrooms. High Educ Res Dev 30(3):329–341. https://doi.org/10.1080/07294360.2010.501074

Kuncel NR (2011) Measurement and meaning of critical thinking (Research report for the NRC 21st Century Skills Workshop). National Research Council, Washington, DC

Kyndt E, Raes E, Lismont B, Timmers F, Cascallar E, Dochy F (2013) A meta-analysis of the effects of face-to-face cooperative learning. Do recent studies falsify or verify earlier findings? Educ Res Rev 10(2):133–149. https://doi.org/10.1016/j.edurev.2013.02.002

Leng J, Lu XX (2020) Is critical thinking really teachable?—A meta-analysis based on 79 experimental or quasi experimental studies. Open Educ Res 26(06):110–118. https://doi.org/10.13966/j.cnki.kfjyyj.2020.06.011

Liang YZ, Zhu K, Zhao CL (2017) An empirical study on the depth of interaction promoted by collaborative problem solving learning activities. J E-educ Res 38(10):87–92. https://doi.org/10.13811/j.cnki.eer.2017.10.014

Lipsey M, Wilson D (2001) Practical meta-analysis. International Educational and Professional, London, pp. 92–160

Liu Z, Wu W, Jiang Q (2020) A study on the influence of problem based learning on college students’ critical thinking-based on a meta-analysis of 31 studies. Explor High Educ 03:43–49

Morris SB (2008) Estimating effect sizes from pretest-posttest-control group designs. Organ Res Methods 11(2):364–386. https://doi.org/10.1177/1094428106291059


Mulnix JW (2012) Thinking critically about critical thinking. Educ Philos Theory 44(5):464–479. https://doi.org/10.1111/j.1469-5812.2010.00673.x

Naber J, Wyatt TH (2014) The effect of reflective writing interventions on the critical thinking skills and dispositions of baccalaureate nursing students. Nurse Educ Today 34(1):67–72. https://doi.org/10.1016/j.nedt.2013.04.002

National Research Council (2012) Education for life and work: developing transferable knowledge and skills in the 21st century. The National Academies Press, Washington, DC

Niu L, Behar HLS, Garvan CW (2013) Do instructional interventions influence college students’ critical thinking skills? A meta-analysis. Educ Res Rev 9(12):114–128. https://doi.org/10.1016/j.edurev.2012.12.002

Peng ZM, Deng L (2017) Towards the core of education reform: cultivating critical thinking skills as the core of skills in the 21st century. Res Educ Dev 24:57–63. https://doi.org/10.14121/j.cnki.1008-3855.2017.24.011

Reiser BJ (2004) Scaffolding complex learning: the mechanisms of structuring and problematizing student work. J Learn Sci 13(3):273–304. https://doi.org/10.1207/s15327809jls1303_2

Ruggiero VR (2012) The art of thinking: a guide to critical and creative thought, 4th edn. Harper Collins College Publishers, New York

Schellens T, Valcke M (2006) Fostering knowledge construction in university students through asynchronous discussion groups. Comput Educ 46(4):349–370. https://doi.org/10.1016/j.compedu.2004.07.010

Sendag S, Odabasi HF (2009) Effects of an online problem based learning course on content knowledge acquisition and critical thinking skills. Comput Educ 53(1):132–141. https://doi.org/10.1016/j.compedu.2009.01.008

Sison R (2008) Investigating Pair Programming in a Software Engineering Course in an Asian Setting. 2008 15th Asia-Pacific Software Engineering Conference, pp. 325–331. https://doi.org/10.1109/APSEC.2008.61

Simpson E, Courtney M (2002) Critical thinking in nursing education: literature review. Int J Nurs Pract 8(2):89–98

Stewart L, Tierney J, Burdett S (2006) Do systematic reviews based on individual patient data offer a means of circumventing biases associated with trial publications? Publication bias in meta-analysis. John Wiley and Sons Inc, New York, pp. 261–286

Tiwari A, Lai P, So M, Yuen K (2010) A comparison of the effects of problem-based learning and lecturing on the development of students’ critical thinking. Med Educ 40(6):547–554. https://doi.org/10.1111/j.1365-2929.2006.02481.x

Wood D, Bruner JS, Ross G (2006) The role of tutoring in problem solving. J Child Psychol Psychiatry 17(2):89–100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x

Wei T, Hong S (2022) The meaning and realization of teachable critical thinking. Educ Theory Practice 10:51–57

Xu EW, Wang W, Wang QX (2022) A meta-analysis of the effectiveness of programming teaching in promoting K-12 students’ computational thinking. Educ Inf Technol. https://doi.org/10.1007/s10639-022-11445-2

Yang YC, Newby T, Bill R (2008) Facilitating interactions through structured web-based bulletin boards: a quasi-experimental study on promoting learners’ critical thinking skills. Comput Educ 50(4):1572–1585. https://doi.org/10.1016/j.compedu.2007.04.006

Yore LD, Pimm D, Tuan HL (2007) The literacy component of mathematical and scientific literacy. Int J Sci Math Educ 5(4):559–589. https://doi.org/10.1007/s10763-007-9089-4

Zhang T, Zhang S, Gao QQ, Wang JH (2022) Research on the development of learners’ critical thinking in online peer review. Audio Visual Educ Res 6:53–60. https://doi.org/10.13811/j.cnki.eer.2022.06.08


Acknowledgements

This research was supported by the graduate scientific research and innovation project of Xinjiang Uygur Autonomous Region named “Research on in-depth learning of high school information technology courses for the cultivation of computing thinking” (No. XJ2022G190) and the independent innovation fund project for doctoral students of the College of Educational Science of Xinjiang Normal University named “Research on project-based teaching of high school information technology courses from the perspective of discipline core literacy” (No. XJNUJKYA2003).

Author information

Authors and Affiliations

College of Educational Science, Xinjiang Normal University, 830017, Urumqi, Xinjiang, China

Enwei Xu, Wei Wang & Qingxia Wang


Corresponding authors

Correspondence to Enwei Xu or Wei Wang .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Additional information.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary tables

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Xu, E., Wang, W. & Wang, Q. The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature. Humanit Soc Sci Commun 10 , 16 (2023). https://doi.org/10.1057/s41599-023-01508-1


Received : 07 August 2022

Accepted : 04 January 2023

Published : 11 January 2023

DOI : https://doi.org/10.1057/s41599-023-01508-1


Learning Center

Metacognitive Study Strategies

Do you spend a lot of time studying but feel like your hard work doesn’t help your performance on exams? You may not realize that your study techniques, which may have worked in high school, don’t necessarily translate to how you’re expected to learn in college. But don’t worry—we’ll show you how to analyze your current strategies, see what’s working and what isn’t, and come up with new, more effective study techniques. To do this, we’ll introduce you to the idea of “metacognition,” tell you why metacognition helps you learn better, and introduce some strategies for incorporating metacognition into your studying.

What is metacognition and why should I care?

Metacognition is thinking about how you think and learn. The key to metacognition is asking yourself self-reflective questions, which are powerful because they allow us to take inventory of where we currently are (thinking about what we already know), how we learn (what is working and what is not), and where we want to be (accurately gauging if we’ve mastered the material). Metacognition helps you to be a self-aware problem solver and take control of your learning. By using metacognition when you study, you can be strategic about your approach. You will be able to take stock of what you already know, what you need to work on, and how best to approach learning new material.

Strategies for using metacognition when you study

Below are some ideas for how to engage in metacognition when you are studying. Think about which of these resonate with you and plan to incorporate them into your study routine on a regular basis.

Use your syllabus as a roadmap

Look at your syllabus. Your professor probably included a course schedule, reading list, learning objectives or something similar to give you a sense of how the course is structured. Use this as your roadmap for the course. For example, for a reading-based course, think about why your professor might have assigned the readings in this particular order. How do they connect? What are the key themes that you notice? What prior knowledge do you have that could inform your reading of this new material? You can do this at multiple points throughout the semester, as you gain additional knowledge that you can piece together.

Summon your prior knowledge

Before you read your textbook or attend a lecture, look at the topic that is covered and ask yourself what you know about it already. What questions do you have? What do you hope to learn? Answering these questions will give context to what you are learning and help you start building a framework for new knowledge. It may also help you engage more deeply with the material.

Think aloud

Talk through your material. You can talk to your classmates, your friends, a tutor, or even a pet. Just verbalizing your thoughts can help you make more sense of the material and internalize it more deeply. Talking aloud is a great way to test yourself on how well you really know the material. In courses that require problem solving, explaining the steps aloud will ensure you really understand them and expose any gaps in knowledge that you might have. Ask yourself questions about what you are doing and why.

Ask yourself questions

Asking self-reflective questions is key to metacognition. Take the time to be introspective and honest with yourself about your comprehension. Below are some suggestions for metacognitive questions you can ask yourself.

  • Does this answer make sense given the information provided?
  • What strategy did I use to solve this problem that was helpful?
  • How does this information conflict with my prior understanding?
  • How does this information relate to what we learned last week?
  • What questions will I ask myself next time I’m working these types of problems?
  • What is confusing about this topic?
  • What are the relationships between these two concepts?
  • What conclusions can I make?

Try brainstorming some of your own questions as well.

Use writing

Writing can help you organize your thoughts and assess what you know. Just like thinking aloud, writing can help you identify what you do and don’t know, and how you are thinking about the concepts that you’re learning. Write out what you know and what questions you have about the learning objectives for each topic you are learning.

Organize your thoughts

Using concept maps or graphic organizers is another great way to visualize material and see the connections between the various concepts you are learning. Creating your concept map from memory is also a great study strategy because it is a form of self-testing.

Take notes from memory

Many students take notes as they are reading. Often this can turn notetaking into a passive activity, since it can be easy to fall into just copying directly from the book without thinking about the material and putting your notes in your own words. Instead, try reading short sections at a time and pausing periodically to summarize what you read from memory. This technique ensures that you are actively engaging with the material as you are reading and taking notes, and it helps you better gauge how much you’re actually remembering from what you read; it also engages your recall, which makes it more likely you’ll be able to remember and understand the material when you’re done.

Review your exams

Reviewing an exam that you’ve recently taken is a great time to use metacognition. Look at what you knew and what you missed. Try using this handout to analyze your preparation for the exam and track the items you missed, along with the reasons that you missed them. Then take the time to fill in the areas you still have gaps and make a plan for how you might change your preparation next time.

Take a timeout

When you’re learning, it’s important to periodically take a time out to make sure you’re engaging in metacognitive strategies. We often can get so absorbed in “doing” that we don’t always think about the why behind what we are doing. For example, if you are working through a math problem, it’s helpful to pause as you go and think about why you are doing each step, and how you knew that it followed from the previous step. Throughout the semester, you should continue to take timeouts before, during or after assignments to see how what you’re doing relates to the course as a whole and to the learning objectives that your professor has set.

Test yourself

You don’t want your exam to be the first time you accurately assess how well you know the material. Self-testing should be an integral part of your study sessions so that you have a clear understanding of what you do and don’t know. Many of the methods described are about self-testing (e.g., thinking aloud, using writing, taking notes from memory) because they help you discern what you do and don’t actually know. Other common methods include practice tests and flash cards—anything that asks you to summon your knowledge and check if it’s correct.

Figure out how you learn

It is important to figure out what learning strategies work best for you. It will probably vary depending on what type of material you are trying to learn (e.g., chemistry vs. history), but it will be helpful to be open to trying new things and paying attention to what is effective for you. If flash cards never help you, stop using them and try something else instead. Making an appointment with an academic coach at the Learning Center is a great chance to reflect on what you have been doing and figure out what works best for you.



Assessing Metacognitive Regulation during Problem Solving: A Comparison of Three Measures

Cristina D. Zepeda

1 Department of Psychology and Human Development, Vanderbilt University, Nashville, TN 37235, USA

Timothy J. Nokes-Malach

2 Department of Psychology, Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA 15260, USA

Associated Data

Summary levels of the data presented in this study are available on request from the corresponding author. The data are not publicly available to protect the privacy of the participants.

Metacognition is hypothesized to play a central role in problem solving and self-regulated learning. Various measures have been developed to assess metacognitive regulation, including survey items in questionnaires, verbal protocols, and metacognitive judgments. However, few studies have examined whether these measures assess the same metacognitive skills or are related to the same learning outcomes. To explore these questions, we investigated the relations between three metacognitive regulation measures given at various points during a learning activity and subsequent test. Verbal protocols were collected during the learning activity, questionnaire responses were collected after the learning tasks but before the test, and judgments of knowing (JOKs) were collected during the test. We found that the number of evaluation statements as measured via verbal protocols was positively associated with students’ responses on the control/debugging and evaluation components of the questionnaire. There were also two other positive trends. However, the number of monitoring statements was negatively associated with students’ responses on the monitoring component of the questionnaire and their JOKs on the later test. Each measure was also related to some aspect of performance, but the particular metacognitive skill, the direction of the effect, and the type of learning outcome differed across the measures. These results highlight the heterogeneity of outcomes across the measures, with each having different affordances and constraints for use in research and educational practice.

1. Introduction

Metacognition is a multi-faceted phenomenon that involves both the awareness and regulation of one’s cognitions ( Flavell 1979 ). Past research has shown that metacognitive regulation, or the skills learners use to manage their cognitions, is positively related to effective problem-solving ( Berardi-Coletta et al. 1995 ), transfer ( Lin and Lehman 1999 ), and self-regulated learning ( Zepeda et al. 2015 ). Furthermore, these skills have been shown to benefit student learning across a variety of academic domains, including math, science, reading, and writing ( Hacker et al. 2009 ). With research on metacognition advancing, multiple metacognitive skills have been proposed and evaluated, with researchers using different measures to assess each one ( Azevedo 2020 ). Although many measures and approaches have been proposed (e.g., verbal protocols, questionnaires, metacognitive judgments), less work has compared and contrasted these different measures with one another. This has led to questions about the relations of the measures to one another and concerns about measurement validity ( Veenman 2005 ; Veenman et al. 2003 ). To better understand metacognition conceptually and measure it practically, we need to compare how these different measures are similar to and different from one another.

In this work, we evaluate three types of metacognitive regulation measures: verbal protocols (e.g., students speaking their thoughts aloud during learning activities and then a researcher recording, transcribing, and coding those utterances for evidence of different metacognitive processes), a task-based questionnaire (e.g., asking students questions about how often they think they used different metacognitive processes during the learning task), and metacognitive judgments (specifically, judgments of knowing [JOKs]—a type of metacognitive judgment that asks students how confident they are about their answers on a test that is often based on content from a learning activity, sometimes referred to as retrospective confidence judgments). All three measures have been proposed to capture some aspect of metacognitive regulation. To evaluate the potential overlap of these measures, we conducted a theoretical analysis of each measure to better understand what it is intended to measure and how it has been typically used in research. We do so by reviewing the literature with consideration of each of the three measures in regard to their background theory, implications for what is learned, and attention to different aspects of validity. After this analysis, we investigate the measures in an empirical study, comparing and contrasting whether and how they are related to one another and learning outcomes during a problem-solving learning activity. Critically, this investigation has implications for practitioners trying to understand which aspects of their students’ metacognitive skills need support, as well as for theory and measurement development. Below, we first describe reasons why there might be some misalignment among the measures and then provide a detailed review of prior work using each type of measure and the validity of those measures.

1.1. Theory and Measurement: An Issue of Grain Size

One source of the variation in measurement is likely due to the variation in theories of metacognition (e.g., Brown 1987 ; Brown et al. 1983 ; Flavell 1979 ; Jacobs and Paris 1987 ; Nelson and Narens 1990 ; Schraw and Moshman 1995 ). Although most theories hypothesize that metacognition involves the ability to assess and regulate one’s thoughts, they differ in how they operationalize these constructs and their level of specificity ( Pintrich et al. 2000 ; e.g., Nelson and Narens 1990 ; Schraw and Dennison 1994 ). Two common differences across models of metacognition are the number of constructs specified and the level of analysis at which those constructs are described. Relevant to this study, metacognitive regulation has been represented across models as containing a variety of skills, such as planning, monitoring, control, and evaluating.

To illustrate the different number of constructs and the different levels of description, we compare a few models that conceptualize metacognitive regulation to one another. For example, Nelson and Narens’ ( 1990 ) model describes two higher-level constructs, whereas Schraw and Dennison’s ( 1994 ) model describes five constructs (see Figure 1 for an illustration). Nelson and Narens’ ( 1990 ) model consists of monitoring and control processes that assess the current state of working memory; it then uses that information to regulate and guide subsequent actions. These processes are described at a coarse grain level of analysis (see Figure 1 ), but the measurements of these constructs are operationalized at a more fine-grained level, focusing on different types of metacognitive judgments. Winne and Hadwin ( 1998 ) built upon this model and included additional higher-level metacognitive skills, such as planning and evaluating. Although Nelson and Narens’ ( 1990 ) model does contain aspects of planning (e.g., selection of processing) and evaluation (e.g., confidence in retrieved answers), these are included at the fine-grain level of description of monitoring and control and are not proposed as separate higher-level constructs.

Figure 1

A comparison of the coarse-grain skills of two models that conceptualize metacognitive regulation. The gray and patterned rectangles represent the coarse-grain skills represented in each model. The rounded white rectangles connect to the coarse-grain skills that they are associated with for each of the models, highlighting the potential (mis)alignment between the constructs and measures. The rounded white rectangles also contain the definition for each of the coarse-grain skills and measures we aim to measure in this work. Note that judgments of knowing (JOKs) are shown in gray to represent the misalignment across the models with associations to evaluation for Schraw and Dennison ( 1994 ) and monitoring for Nelson and Narens ( 1990 ).

Schraw and Dennison’s ( 1994 ) model also includes planning, monitoring, evaluating, as well as two additional higher-level skills, information management, and debugging. Similarly, Zimmerman’s ( 2001 ) self-regulated learning model includes the same metacognitive skills of planning, monitoring, and evaluation. Across these different models, each skill is hypothesized to have a distinct process that interacts with the other skills. To further illustrate some of these differences and similarities in the conceptualization of metacognitive regulation, in Figure 1 , we compare Schraw and Dennison’s ( 1994 ) model with Nelson and Narens’ ( 1990 ) model. One clear difference between the two models is that JOKs are represented under monitoring in Nelson and Narens’ representation; however, given the definitions of monitoring and evaluation in Schraw and Dennison’s representation (as well as the other two models mentioned earlier), this might also be related to evaluation. This difference in particular highlights both the misalignment across the theories and the misalignment across theories and measurement.

This misalignment across theories and measurement is also seen with other measures. For example, although some researchers initially sought to capture specific metacognitive skills via a questionnaire, they often ended up combining them into a single factor due to the challenges of establishing each one as a separate construct (e.g., combining metacognitive skills such as monitoring and evaluating, among others, into a single component called metacognitive regulation— Schraw and Dennison 1994 ). Similarly, Pressley and Afflerbach ( 1995 ) had difficulty differentiating monitoring from control processes in verbal protocols and found that they tend to occur at the same time. The challenges in differentiating between the metacognitive skills of monitoring, control, and evaluating could also explain why other researchers have proposed fewer interactive skills ( Howard-Rose and Winne 1993 ; Pintrich et al. 2000 ). In contrast, within hypermedia contexts, some researchers have been able to differentiate between specific, fine-grain skills, which they refer to as micro skills (e.g., learners questioning whether they understand the content) and larger-grain skills, which they refer to as macro skills (e.g., monitoring) ( Azevedo and Witherspoon 2009 ; Greene and Azevedo 2009 ).

In this work, we examine the relation between theory and measurement with respect to a subset of metacognitive skills. This subset includes monitoring, control/debugging, and evaluating. We define monitoring as one’s awareness of one’s thinking and knowledge during the task, control/debugging as goal-directed activities that aim to improve one’s understanding during the task, and evaluation as an assessment of one’s understanding, accuracy, and/or strategy use once the task is completed. For example, if a student identifies what they do not understand (monitoring) while attempting to solve a problem, then they have an opportunity to fill the gap in their knowledge by seeking new information, rereading, summarizing the instructions, trying out new ideas, and so forth (control/debugging). Then, once the solution has been generated, they can reflect on their accuracy, as well as which strategies or knowledge they found most beneficial to prepare them for future tasks (evaluation). We chose this subset of metacognitive skills as they are commonly represented across theories of metacognition and measurements. Students are also more likely to engage in monitoring, control/debugging, and evaluation during problem-solving activities compared to other metacognitive skills such as planning, which appears to happen less frequently, as students often just dive right into solving the problem (e.g., Schoenfeld 1992 ).

1.2. Relation among Measures

In addition to the issues of grain size, there are two additional factors that differ across the measures. These factors concern when (e.g., prospective, concurrent, or retrospective) and how (e.g., think aloud vs. questionnaire vs. judgment) metacognition is assessed. Concurrent or “online” measures such as verbal protocols (e.g., Chi et al. 1989 ) attempt to examine people’s metacognition as it is occurring, whereas retrospective measures such as questionnaires (e.g., Schraw and Dennison 1994 ) and JOKs (i.e., retrospective confidence judgments; see Dunlosky and Metcalfe 2009 for an overview) evaluate metacognition after the skills have been employed and/or a solution has been generated or the answer has been given. Unlike a task-based questionnaire, which typically takes place at a longer interval after completing a learning activity, JOKs that assess one’s confidence on test items take place immediately after each problem is solved. Therefore, in Figure 2 , there is more overlap between the JOKs and the test than there is between the task-based questionnaire and the learning activity. A key difference between the timing of all these measures is that, in contrast with the retrospective measures, concurrent verbal protocols allow access to the contents of working memory without having to rely on one’s long-term memory ( Ericsson and Simon 1980 ). Given that JOKs occur after a problem is solved, but also while the information is still present, they may act more like a concurrent measure than a retrospective measure. See Figure 2 for a visual representation of where some of these measures take place during the learning and assessment sequence that we used in the present study.

Figure 2.

Visual representation of our across-methods-and-time design. The arrows indicate what each measure references. The verbal protocols were collected as a concurrent measure in reference to the learning activity. The task-based questionnaire was collected as a delayed retrospective measure in reference to the learning activity. The JOKs were collected as an immediate retrospective measure in reference to the test that was based on the learning content. Note, however, that the JOKs may act more like concurrent measures, as they are generated with the information still present (e.g., problem content); therefore, the box with JOKs overlaps more with the test on the learning activity, whereas the task-based questionnaire does not overlap with the learning activity.

Critically, few studies have directly compared these measures to one another. Those that have show that student responses to questionnaires rarely correspond to concurrent measures ( Cromley and Azevedo 2006 ; Van Hout-Wolters 2009 ; Veenman 2005 ; Veenman et al. 2003 ; Winne and Jamieson-Noel 2002 ; Winne et al. 2002 ). For example, Veenman et al. ( 2003 ) found weak associations ( r ’s = −.18 to .29) between verbal protocols and a questionnaire assessing students’ metacognitive study habits. Van Hout-Wolters’ ( 2009 ) work revealed similar findings, in which correlations between verbal protocols and dispositional questionnaires were weak ( r ’s = −.07 to .22). In addition, Zepeda et al. ( 2015 ) found that students who received metacognitive training differed from a comparison condition in the discrimination accuracy of their JOKs, but not in their general questionnaire responses. Schraw and Dennison ( 1994 ) and Sperling et al. ( 2004 ) showed similar findings, in which student accuracy regarding their JOKs was not related to their responses on the Metacognitive Awareness Inventory’s (MAI) metacognitive regulation dimension. The lack of associations among the different metacognitive measures may be due to the measures assessing different processes, imprecise measurement, or a combination of the two. Veenman et al. ( 2006 ) suggested that researchers should use a multi-method design to explicitly compare different methodologies and determine their convergent and external validity.

1.3. Relations to Robust Learning

Another way to examine the similarity of the measures is to examine whether they predict similar learning outcomes (e.g., external validity). To what degree do these different measures of the same construct predict similar learning outcomes? Prior research provides some evidence that metacognition is related to school achievement (e.g., grades or GPA) and performance on tests (e.g., quizzes, standardized assessments). However, no work has examined whether all three measures of the same construct predict the same type of learning outcome. Therefore, we investigated whether the different measures predicted different types of robust learning outcomes.

Robust learning is the acquisition of new knowledge or skills that can be applied to new contexts (transfer) or that prepares students for future learning (PFL) ( Bransford and Schwartz 1999 ; Koedinger et al. 2012 ; Schwartz et al. 2005 ; Richey and Nokes-Malach 2015 ). Transfer is defined as the ability to use and apply prior knowledge to solve new problems, and PFL is defined as the ability to use prior knowledge to learn new material (see Figure 3 for a comparison). For example, to assess transfer in the current study, learners attempt to apply knowledge (e.g., concept A) acquired from a statistics learning activity to new questions on a post-test that address the same concept (e.g., concept A’). Schwartz et al. refer to this process as ‘transferring out’ knowledge from learning to test. To assess PFL, an embedded resource (concept B) is incorporated into the post-test, in which learners have to apply what they learned in the earlier learning activity (i.e., their prior knowledge, concept A) to understand the content in the resource. This is what Schwartz et al. refer to as ‘transferring in’. Then, that knowledge is assessed with a question to determine how well the students learned that information (i.e., testing with concept B’). To our knowledge, there is no work examining the relation between metacognition and PFL using these different metacognitive regulation measures. To gain an understanding of how these measures have been related to different learning outcomes, we surveyed the literature.

Figure 3.

A comparison of the flow of information and knowledge between transfer and PFL, as derived from Bransford and Schwartz ( 1999 ) and Schwartz et al. ( 2005 ). The top light-gray box represents transfer, and the bottom white box represents PFL. “Out” means that the knowledge learned is then demonstrated on an outside assessment. “In” means the learner takes in the information from the learning activity to inform how they interpret later information. The A’ and B’ on the assessment designate that the problems are not identical to the original problems presented in the learning activity.

1.3.1. Verbal Protocols and Learning

Past work has examined the relation of verbal protocols to different types of learning. For example, Van der Stel and Veenman ( 2010 ) found that increased use of metacognitive skills (e.g., planning, monitoring, and evaluating) was associated with better near transfer (e.g., performance on isomorphic problems with the same problem structure but different surface features). In other work, Renkl ( 1997 ) found that the frequency of positive monitoring statements (e.g., “that makes sense”) was unrelated to transfer performance, but the frequency of negative monitoring statements (e.g., “I do not understand this”) was negatively related to transfer. This result shows that different types of metacognitive phenomena are differentially related to transfer. In this case, monitoring behaviors can be useful for identifying when a learner does not understand something.

1.3.2. Questionnaires and Learning

Metacognitive questionnaires are typically used to capture the relation between metacognitive skills and measures of student achievement as assessed by class grades, GPA, or standardized tests ( Pintrich and De Groot 1990 ; Pintrich et al. 1993 ; Sperling et al. 2004 ). However, a focus on achievement measures makes it difficult to determine how much and what type of knowledge a student gained because the measures are coarse grained and often do not account for prior knowledge. For example, class grades (which determine GPA) typically include other factors in addition to individual learning assessments, such as participation and group work. Unlike prior work with verbal protocols, research using questionnaires has not evaluated the relations between metacognitive skills and different types of learning outcomes, such as transfer or PFL.

1.3.3. Metacognitive Judgments—JOKs and Learning

Judgments of knowing (JOKs) have typically been used in paired-associate learning paradigms ( Winne 2011 ). However, some work has examined JOKs and their relation to test performance and GPA ( Nietfeld et al. 2005 , 2006 ). For example, Nietfeld et al. ( 2005 ) found that students’ JOKs were positively related to learning outcomes across different tests (that included transfer items), even when controlling for GPA.

1.3.4. Summary of the Relations to Robust Learning

From this brief survey of the prior literature, we see that different metacognitive measures have been related to different types of learning outcomes. Questionnaires have primarily been related to achievement outcomes (e.g., grades and GPA), whereas verbal protocols and JOKs have been related to multiple learning outcomes, including achievement and transfer. This variation makes it difficult to determine whether these measures predict the same types of learning. To gain a better understanding of how metacognition is related to learning, we examine the relations of all three measures to transfer and PFL. These empirical and theoretical challenges have direct implications for determining measurement validity.

1.4. Measurement Validity

Given the different approaches used across the three metacognitive measures and drawing inspiration from Pintrich et al.’s ( 2000 ) review, we used aspects of Messick’s ( 1989 ) validity framework to structure our review for the validity and scope of each measure. The components of measurement validity that we focus on include substantive validity, external validity, content validity, generality of the meaning (generality for short), and relevance and utility (utility for short). Substantive validity concerns whether the measure produces the predicted structure of the theoretical constructs (e.g., the type and number of metacognitive skills). External validity concerns the predictive or convergent relations to variables that the theory predicts (e.g., the relation to similar types of learning outcomes and the relation between metacognitive measures). Content validity concerns whether the measure is tailored to a specific activity or material. Generality concerns the applicability of the measure to different populations, while utility examines the ease of implementation. Below, we describe each metacognitive measure and their alignment with each of the five aspects of validity.

1.4.1. Validity of Verbal Protocols

Verbal protocols provide fine-grained verbal data to test hypotheses about what and how metacognition is used when a participant is engaged in some learning or problem-solving activity. However, the level of theoretical specificity depends on the research goals of the work, the research questions asked, and the coding rubrics constructed. For example, Renkl ( 1997 ) only examined negative versus positive monitoring, whereas other verbal protocol analyses have attempted to create a detailed taxonomy for evaluating the metacognitive activity of a learner, regardless of the valence ( Greene and Azevedo 2009 ; Meijer et al. 2006 ). Although Meijer et al. ( 2006 ) originally sought to develop a fine-grain taxonomy, due to difficulties obtaining interrater reliability, they condensed their codes into fewer, more generalized aspects of metacognition. These examples show that existing protocols have not converged on a common level of analysis, indicating mixed results for the substantive validity of this approach.

Verbal protocols also have mixed results regarding external validity, as they have been shown to correlate with learning outcomes in some studies (e.g., Van der Stel and Veenman 2010 ), but not others ( Meijer et al. 2012 ; Renkl 1997 ). However, this might be attributed to the way in which the verbal protocols were coded. Some coding rubrics differ in whether they code for the quality of metacognition (e.g., accuracy in application) versus the quantity of a specific metacognitive activity (e.g., the frequency of occurrence) ( Meijer et al. 2012 ).

Within a specific coding rubric, there is evidence that verbal protocols have some content validity, as the approach is domain general. Veenman et al. ( 1997 ) found that the same coding rubric could be applied across three domains and was predictive of learning outcomes within each domain. Verbal protocols have also been successfully employed with a variety of populations (e.g., Veenman et al. 2004 ) and can be applied to a variety of contexts and tasks. They have been used in physics ( Chi et al. 1989 ), biology ( Gadgil et al. 2012 ), probability ( Renkl 1997 ), and reading ( Pressley and Afflerbach 1995 ), among others. These applications demonstrate the flexibility of the approach across different content and contexts.

One drawback of verbal protocols is that they take a substantial amount of time to administer and evaluate. Instead of administering the measurement to groups of students, researchers typically focus on one student at a time because of the challenges of recording multiple speakers and potential verbal interference across speakers in the same room. These protocols also require more time to transcribe and code, making this a time-consuming task for researchers and practically challenging to use in the classroom. Although think-aloud protocols are more difficult to employ in classrooms, they provide benefits to researchers, such as a fine-grained source of trace data ( Ericsson and Simon 1980 ). So, while there is utility in the fine-grain products, there is a lack of practical utility in classrooms.

1.4.2. Validity of Questionnaires

Questionnaires are often used to determine the degree to which students perceive using various metacognitive skills. The majority of questionnaires ask students to report on their dispositional use of the skills, although a few are specific to a task or context. The structure of these measurements is often not well aligned with theory. Although many questionnaires attempt to assess fine-grain distinctions between metacognitive skills, they often have difficulty doing so empirically. For example, Schraw and Dennison ( 1994 ) originally sought to capture five distinct metacognitive skills within the MAI; however, the results revealed only a single factor. As with verbal protocols, this misalignment indicates that existing questionnaires have not converged on a common level of analysis, yielding mixed results for the substantive validity of this approach.

In contrast, there is more evidence for the external validity of questionnaires. Prior work has shown that questionnaires relate to other variables predicted by metacognitive theory, such as achievement ( Pintrich and De Groot 1990 ; Pintrich et al. 1993 ) as well as convergence with similar questionnaires assessing similar processes ( Sperling et al. 2004 ; Muis et al. 2007 ). For example, Sperling et al. ( 2004 ) found that the Regulation of Cognition dimension of the MAI was related to the Metacognitive Self-Regulation scale of the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich et al. 1991 ) (r = .46).

The content validity of a questionnaire depends on its intended scope. Some questionnaires are designed to capture the general use of metacognitive skills such as the MAI or MSLQ. Other questionnaires assess metacognitive skills for a particular task. For example, work by Van Hout-Wolters ( 2009 ) demonstrated that task-based measures have a stronger positive relation to verbal protocols than dispositional questionnaires. It is difficult to assess the strength of these different types of questionnaires because dispositional questionnaires typically focus on a generalization of the skills over a longer time-period than task-based questionnaires.

Additionally, metacognitive questionnaires have been reliably adapted to serve a variety of ages (e.g., Jr. MAI; Sperling et al. 2002 ). Of particular interest to educators and researchers is the utility of the measure, that is, the ease of administering and scoring the instrument. Researchers have sought to develop easy-to-use retrospective questionnaires that take just a few minutes to complete. Perhaps the ease of this measure is the reason why there are so many questionnaires aimed at capturing different types of content, which in turn makes it difficult to assess the validity of such measures.

1.4.3. Validity of Metacognitive Judgments—JOKs

JOKs assess students’ accuracy in their monitoring of how well they know what they know after they have solved a problem or answered a question. Although often referred to as a monitoring component, some work also refers to these judgments as an evaluative skill (e.g., Winne 2011 ). Therefore, JOKs might measure monitoring, evaluating, or both. In some studies, these skills are collapsed together (e.g., Kistner et al. 2010 ). JOKs are one of many types of metacognitive judgments (see Alexander 2013 for an overview). We used JOKs because there is some evidence suggesting that they have stronger relations to performance outcomes than other types of metacognitive judgments ( Hunter-Blanks et al. 1988 ). JOKs also allowed us to gather multiple observations during an assessment, whereas we would have been limited in the number of observations for the other types of judgments given the nature of the learning task (see below for a description of the learning task).

Different types of calculations have been applied to determine the accuracy and consistency of student judgments (see Schraw 2009 ; Schraw et al. 2013 ). The prior literature has shown some evidence for substantive validity in that the measure is designed to capture one or two metacognitive skills, referred to as monitoring and evaluating. However, this structure may differ depending on the calculations used to assess different types of accuracy (see Schraw 2009 for a review). JOKs also have some evidence of external validity, as Nietfeld et al. ( 2005 , 2006 ) showed that student judgments were related to learning performance and GPA.

The content validity of JOKs is unclear. Some work has demonstrated that it is domain general ( Mazancieux et al. 2020 ; Schraw 1996 ; Schraw et al. 1995 ), whereas other work has shown that it is domain specific ( Kelemen et al. 2000 ). For example, Schraw ( 1996 ) showed that when controlling for test difficulty, confidence ratings from three unrelated tests (math, reading comprehension, and syllogism) were moderately related to each other (average r = .42). More recently, work has compared the types of calculations that have been applied to JOKs ( Dentakos et al. 2019 ). For example, calibrations of JOKs are positively related across tasks, but the resolution of the JOKs (e.g., relative accuracy and discrimination) is not, suggesting that the type of calculation applied to determine one’s accuracy has implications for when and how JOKs are related. Despite these limitations, JOKs have been applied to multiple domains (e.g., physics, general facts) and age groups ( Dunlosky and Metcalfe 2009 ).

In terms of utility, JOKs are moderately easy to implement. It takes more time to determine the accuracy calculations of these judgments than it does to evaluate questionnaire responses, but it is not as time intensive as verbal protocols. Thus, from a practical standpoint, JOKs are easy to administer, but applying the accuracy calculations is more demanding, as it requires additional time as well as knowledge of which calculations to apply and how.

Drawing from Zepeda et al. ( 2015 ), we focus on the relation between three types of JOK calculations: absolute accuracy, gamma, and discrimination. They found differences in an experimental manipulation for one type of calculation (discrimination) but not others (absolute accuracy and gamma), suggesting that they captured different metacognitive processes. Therefore, in this study, we employ three different types of calculations: absolute accuracy and two measures of relative accuracy, gamma and discrimination. Absolute accuracy compares judgments to performance, whereas gamma evaluates confidence judgment accuracy on one item relative to another ( Nelson 1996 ). Schraw ( 1995 ) suggested that since there is not a one-to-one relation between gamma and absolute accuracy, research should report both. Discrimination examines the degree to which students can distinguish their confidence regarding an incorrect or correct performance ( Schraw 2009 ). Positive discrimination indicates that a learner gave higher confidence ratings for correct trials compared to incorrect trials, a negative value indicates higher confidence ratings for incorrect trials compared to correct trials, and a zero indicates no relation between the two. It can be interpreted that those with positive discrimination are aware of their correct performance. In addition to these calculations, we also examined average JOK ratings, given that students are typically poor at calibrating their understanding when the task is difficult ( Howie and Roebers 2007 ).
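To make these indices concrete, the sketch below shows one common formulation of each calculation in Python, informed by the descriptions above and by Schraw ( 2009 ) and Nelson ( 1996 ). The exact formulas, confidence scaling, and tie handling used in the present study are not reproduced here, so the function definitions and the example ratings should be read as illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from itertools import combinations

def absolute_accuracy(conf, correct, scale=(1, 5)):
    """Mean squared difference between confidence (rescaled to 0-1) and accuracy (0/1).
    Values closer to 0 indicate better-calibrated judgments."""
    lo, hi = scale
    c = (np.asarray(conf, dtype=float) - lo) / (hi - lo)
    p = np.asarray(correct, dtype=float)
    return float(np.mean((c - p) ** 2))

def goodman_kruskal_gamma(conf, correct):
    """Relative accuracy: (concordant - discordant) / (concordant + discordant) pairs.
    Undefined (NaN) when every rating or every outcome is tied, e.g., a participant
    who gives the same confidence rating on all items."""
    conf, correct = np.asarray(conf), np.asarray(correct)
    concordant = discordant = 0
    for i, j in combinations(range(len(conf)), 2):
        d = (conf[i] - conf[j]) * (correct[i] - correct[j])
        if d > 0:
            concordant += 1
        elif d < 0:
            discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else float("nan")

def discrimination(conf, correct):
    """Mean confidence on correct items minus mean confidence on incorrect items.
    Positive values mean higher confidence when the answer was actually correct."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    if correct.all() or not correct.any():
        return float("nan")  # needs at least one correct and one incorrect item
    return float(conf[correct].mean() - conf[~correct].mean())

# Hypothetical 1-5 confidence ratings and item accuracy (1/0) for one participant
jok = [4, 5, 2, 3, 5, 1, 4]
acc = [1, 1, 0, 1, 1, 0, 1]
print(absolute_accuracy(jok, acc), goodman_kruskal_gamma(jok, acc), discrimination(jok, acc))
```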

1.4.4. Summary of Measurement Validity

Across the three types of measurement, two consistent patterns emerge: all have been applied to different age groups (generality), and all have mixed or only partial support for their substantive validity. For the remaining three types of validity, the measures diverge. Both questionnaires and JOKs have evidence of external validity, and their content validity tends to be more sensitive to context. In contrast, for verbal protocols, there is mixed support for external validity and evidence of content validity. Additionally, the three measurements range in their ease of implementation (their utility), such that questionnaires are more easily applied and scored in educational contexts than JOKs, and both are easier to implement than verbal protocols. Given this landscape, we paid particular attention to the design and development of each measure, especially their alignment with theory (i.e., substantive validity) and their framing to content (e.g., using a task-based questionnaire versus a general one and examining different calculations of JOK accuracy).

1.5. Underlying Processes of the Measures

In addition to their relations to learning outcomes and past work evaluating their validity and scope, these measures likely capture similar and different processes. For example, for the monitoring skills, all three measures likely capture some aspect of monitoring, such as reflecting on one’s use of monitoring during a task-based questionnaire, the actual verbalization of monitoring, and the monitoring that contributes to one’s JOKs. At the same time, each of these measures might also reflect other processes. Reporting one’s use of monitoring requires the person to be aware of their use of monitoring, to monitor their monitoring, and rely on their long-term memory, whereas the verbal protocols capture the monitoring as it unfolds. These verbal protocols also likely contain more information about how the monitoring unfolds and might be more accurate at distinguishing between monitoring, control/debugging, and evaluating one’s learning process. In contrast, self-reporting on these skills might have more cross-over effects when students reflect on using these skills and determining the boundaries between them. The JOKs are similar to the task-based questionnaire, such that they may rely on monitoring the monitoring that took place during the learning task and one’s long-term memory of that experience, but they are different in that JOKs mainly involve monitoring one’s monitoring during the test. Some recent work supports the idea that there may be different monitoring skills at play among the measures. For example, McDonough et al. ( 2021 ) revealed that there appear to be two types of monitoring skills among metacognitive judgments: monitoring skills that occur during the encoding stage versus monitoring skills that occur at the retrieval stage, such that they rely on different pieces of information and cues (e.g., the difficulty of the learning task versus post-test).

As described when comparing the monitoring task-based questionnaire and monitoring statements, the control/debugging skills represented in the task-based questionnaire and the verbal protocols likely have similar overlaps, with some additional differences. Reporting one’s use of control/debugging requires learners to be aware of and monitor their control/debugging while also relying on their long-term memory. In contrast, the verbalizations capture the control/debugging as it unfolds. The degree to which learners need to control/debug their learning might also have implications for their reports on the questionnaire, such that in their reports, they might focus on the quantity as well as the productivity of their controlling/debugging.

Evaluating can also be captured across all three types of measures, but more directly by the verbal protocols and the task-based survey. For instance, the processes captured in the task-based questionnaire require learners to be aware of their evaluation process and know of the boundary between the skills. The verbal protocols more directly capture the evaluations as they occur and allow for a potentially more accurate differentiation between monitoring and evaluating. Additionally, the JOKs require students to reflect on their current understanding (i.e., monitoring) but also include aspects in which they evaluate how well they solved the present problem and learned the material during the learning activity. Thus, those measures may be related as well.

Given the different processes, boundaries, and demands of these different types of measures that aim to capture the same set of metacognitive skills, some aspects suggest that they should be related across the measures. Other aspects suggest that these measures may not be well aligned with one another because of the different processes that are required for each skill and measurement type. Therefore, the question remains: when the measures are developed to capture the same metacognitive skills, do they have similar relations to each other and learning outcomes?

1.6. Current Work

In this work, we assessed the relations among three metacognitive regulation measures: a retrospective task-based questionnaire, concurrent verbal protocols recorded during a learning activity, and JOKs elicited during a post-test (outlined in Table 1 ). The overall goal of this study was to investigate how these measures related to each other and to determine the degree to which they predict similar outcomes for the same task.

Overview of the metacognitive regulation measurements.

Measurement | Metacognitive Skill | Timing | Framing of the Assessment | Analytical Measures | Predicted Learning Outcome
Verbal Protocols | Monitoring, Control/Debugging, and Evaluating | Concurrent | Task based | Inter-rater reliability, Cronbach’s alpha | Learning, transfer, and PFL
Questionnaires | Monitoring, Control/Debugging, and Evaluating | Retrospective | Task based | Second-Order CFA, Cronbach’s alpha | Learning, transfer, and PFL
Metacognitive Judgments—JOKs | Monitoring and Monitoring Accuracy | Retrospective | Test items | Cronbach’s alpha, Average, Mean Absolute Accuracy, Gamma, and Discrimination measures | Learning, transfer, and PFL

Therefore, we hypothesized that:

Given that prior work tends to use these measures interchangeably and that they were developed to capture the same set of metacognitive skills, one hypothesis is that they will be positively related, as they assess similar metacognitive processes. Monitoring and evaluating assessed by JOKs will have a small positive association with the monitoring and evaluating assessed by the verbal protocols and the task-based questionnaire (rs between .20 and .30), but the associations for one type of skill might be higher than the other. This relation is expected to be small given that they occur at different time points with different types of activities, although all on the same learning content. We also predict a moderate relation between the verbal protocols and the task-based questionnaire for monitoring, control/debugging, and evaluating (rs between .30 and .50), which would be consistent with past work examining the relations between questionnaire and verbal protocols by Schellings and Van Hout-Wolters ( 2011 ) and Schellings et al. ( 2013 ). This relation is larger given that both measures are used in the same learning task but at slightly different time points. Alternatively, given the lengthy review we conducted showing that these measures are often not positively related with each other and that the measures themselves may require different processes, these measures might not be related to one another, even after being carefully matched across the metacognitive skills.

Although there are nuances between each of the measures, the prior work we reviewed suggests that they all should predict performance on learning, transfer, and PFL.

Prior studies examining metacognition tend to utilize tell-and-practice activities in which students receive direct instruction on the topic (e.g., Meijer et al. 2006 ). In contrast, we chose a structured-inquiry activity, as it might provide more opportunities for students to engage in metacognitive regulation ( Schwartz and Bransford 1998 ; Schwartz and Martin 2004 ). A core feature of these activities is that students try to invent new ways to think about, explain, and predict various patterns observed in the data. In the task we chose, students attempt to solve a challenging statistics problem in which they have an opportunity to monitor their progress and understanding, try out different strategies, and evaluate their performance. Although there is controversy in the learning sciences about the benefits of inquiry-based instruction ( Alfieri et al. 2011 ), several research groups have accumulated evidence of the benefits of these types of structured inquiry activities in math and science domains (e.g., Belenky and Nokes-Malach 2012 ; Kapur 2008 ; Roll et al. 2009 ; Schwartz and Martin 2004 ). For example, these activities have been shown to engage students in more constructive cognitive processes ( Roll et al. 2009 ) and to facilitate learning and transfer ( Kapur and Bielaczyc 2012 ; Kapur 2008 , 2012 ; Roll et al. 2009 ).

2. Materials and Methods

2.1. Participants

Sixty-four undergraduates (13 female, 51 male) enrolled in an Introductory Psychology course at a large Mid-Atlantic university participated in the study. All students consented to participate in the study and received credit for their participation. We excluded data from 19 students from the analyses, as they were able to correctly solve for mean deviation and/or standard deviation on the pre-test, which were the two mathematical concepts to be learned during the learning activity. The remaining 45 students (9 female, 36 male) were included in the analyses, as they still had an opportunity to learn the material. Within this sample, student GPAs included a broad range, with students self-reporting below a 2.0 (4.4%), 2.0–2.5 (20%), 2.5–3.0 (28.9%), 3.0–3.5 (24.4%), and 3.5–4.0 (22.2%). Within this sample, 77.8% of the students identified as white, 6.7% as African American, 6.7% as Biracial, 4.4% as Hispanic, 2.2% as Asian Indian, and 2.2% did not specify.

2.2. Design

Using an across-method-and-time design, we recorded student behaviors with video-recording software during a learning activity and collected student responses to a task-based questionnaire and JOKs. See Figure 4 for an overview of the experimental design, materials, and procedure.

Figure 4.

Design summary. The three metacognitive measures captured in this work are italicized. The arrow shows the direction of the ordered activities.

2.3. Materials

The materials consisted of a pre-test, a learning task, a task-based questionnaire, a post-test, and an additional questionnaire that captured demographic information. The learning task was divided into three segments: an invention task on variability, a lecture on mean deviation, and a learning activity on standard deviation. These were identical to those used by Belenky and Nokes-Malach ( 2012 ), which were adapted from Schwartz and Martin ( 2004 ). Parts of the questionnaires assessed student metacognition, motivation, and cognitive processes; however, for this paper, we focus only on the metacognitive components.

2.3.1. Learning Pre-Test

The pre-test was used as a screening tool to remove data from participants who already knew how to solve mean, mean deviation, and standard deviation problems. These items were adapted from Belenky and Nokes-Malach ( 2012 ) and Schwartz and Martin ( 2004 ). All students completed a pre-test with three types of items targeting procedural and conceptual knowledge. All items were scored as either correct (1) or incorrect (0). Two questions assessed basic procedural knowledge of mean and mean deviation, and one conceptual question was matched to the preparation for future learning (PFL) problem in the post-test ( Bransford and Schwartz 1999 ).

2.3.2. Learning Task

The learning task consisted of two activities and a lecture. The first learning activity was based on calculating variability. Students were asked to invent a mathematical procedure to determine which of four pitching machines was most reliable (see Belenky and Nokes-Malach 2012, Figure 4, p. 12 ; Schwartz and Martin 2004, p. 135 ). The consolidation lecture provided an example that explained how to calculate variability using mean deviation and two practice problems with feedback on how to correctly solve the problems. The second activity asked students to invent a procedure to determine which of two track stars on two different events performed better (Bill on the high jump versus Joe on the long jump). Students received scratch paper and a calculator.

Scoring of the learning activities . Learning materials were evaluated based on the use of correct procedures and the selection of the correct response. Since students could determine the correct answer based on evaluating the means, we coded for every step students took and their interpretations of their final answers. For the variability activity, students could receive a total of 4 points. They received 1 point for calculating the mean, 1 for subtracting the numbers from the mean and taking the absolute value, 1 for taking the mean of those numbers, and 1 for stating that the Fireball Pitching Machine was the most reliable. For the second activity, the standardization activity, students could receive a total of 5 points. They received 1 point for calculating the mean, 1 for subtracting the numbers from the mean and squaring that value, 1 for taking the mean of those numbers, 1 for taking the square root of that value, and 1 for stating that Joe was more reliable.
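As a concrete illustration of this point-based rubric, the sketch below scores one hypothetical coding sheet for the variability activity in Python; the step names and the example record are ours, not the study's actual coding materials.

```python
# Hypothetical coding sheet for the variability (pitching machine) activity:
# one point per rubric step, following the description above.
VARIABILITY_RUBRIC = [
    "calculated_mean",
    "subtracted_from_mean_absolute_value",
    "averaged_deviations",
    "chose_fireball_as_most_reliable",
]

def score_activity(coded_steps, rubric):
    """Award one point for each rubric step the coder marked as present."""
    return sum(1 for step in rubric if coded_steps.get(step, False))

# Example: a student who computed the mean and deviations but picked the wrong machine
student = {
    "calculated_mean": True,
    "subtracted_from_mean_absolute_value": True,
    "averaged_deviations": True,
    "chose_fireball_as_most_reliable": False,
}
print(score_activity(student, VARIABILITY_RUBRIC))  # 3 out of 4
```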

2.3.3. Learning Post-Test

Similar to the pretest, many of the post-test items were identical to or adapted from Belenky and Nokes-Malach ( 2012 ), Gadgil ( 2014 ), and Schwartz and Martin ( 2004 ). The post-test contained seven items that measured students’ conceptual and procedural knowledge of the mean deviation. It also assessed students’ abilities to visually represent and reason about data. These items assess a variety of different types of transfer such as near and immediate (e.g., Nokes-Malach et al. 2013 ). For this work, we do not analyze these levels of transfer separately as there are not enough items for each transfer type to effectively examine outcomes.

Within the assessment, there was also a PFL problem that evaluated students’ abilities to apply information from an embedded resource to this standard deviation problem. The embedded learning resource was presented as a worked example in the post-test and showed students how to calculate a standardized score with a simple data set which was identical to Belenky and Nokes-Malach ( 2012, Figure 8, p. 16 ; adapted from Schwartz and Martin 2004, pp. 177–78 ). This resource also gave another simple problem using standardized scores. The PFL transfer problem appeared five problems after the worked example. The problem was presented later in the post-test so that the application of the information was not due to mere temporal proximity (i.e., the next problem), but instead, it required students to notice, recall, and apply the relevant information at a later time. The PFL problem required students to determine which value from two different distributions was more impressive than the other. During the post-test, students were also asked to respond to a JOK for each problem in which they rated how confident they were in their answer from 1 (not at all confident) to 5 (very confident).

Scoring of post-test items. Each item was coded for accuracy. The post-test comprised two types of problems: 6 transfer items focused on applying the correct procedure and understanding the concepts of mean deviation (α = .39) and 1 PFL problem. Two transfer problems involved the use of the correct procedure, in which a correct response was coded as 1 and an incorrect response as 0. The other four transfer problems involved reasoning and were coded for the amount of detail within their reasoning. Each of these conceptual problems included different types of reasoning. One point was granted for a complete understanding of the concept; partial understanding received .67, .50, or .33 (depending on how many ideas were needed to represent the complete concept); otherwise, a 0 was given. The post-test transfer items were scored out of a total of 6 points. The PFL problem was scored as correct (1) or incorrect (0).

2.3.4. Verbal Protocols

To provide practice with talking aloud, we included a 3 min activity where participants solved multiplication problems. Specifically, participants were told, “As you go through the rest of the experiment, there are going to be parts where I ask you to talk aloud, say whatever you are thinking. It is not doing any extra thinking, different thinking, or filtering what you say. Just say whatever it is you are naturally thinking. We’re going to record what you say in order to understand your thinking. So, to practice that, I will give you some multiplication problems; try solving them out loud to practice.” Then, the experimenter was instructed to give them feedback about how they were talking aloud with prompts such as, “That is exactly right, just say what you’re thinking. Talk aloud your thoughts,” “Remember to say your thoughts out loud,” or “Naturally say whatever you are thinking, related or unrelated to this. Please do not filter what you’re saying.” Once participants completed the practice talking aloud activity, they were instructed to talk aloud for the different learning activities.

Processing and coding of the verbal protocols . To capture the metacognitive processes, we used prior rubrics for monitoring, control/debugging, and evaluating ( Chi et al. 1989 ; Gadgil et al. 2012 ; Renkl 1997 ; see Table 2 ). We also coded for two distinct types of debugging—conceptual error correction and calculation error correction. These were coded separately, as these types of corrections might be more directly related to better performance. Students who focus on their conceptual or procedural (calculation) understanding are aiming to increase a different type of understanding than those who are rereading or trying out other strategies. Those who reread and try out different strategies are still on the path of figuring out what the question is asking them to achieve, whereas those who are focusing on conceptual and calculation errors are further in their problem-solving process. Critically, we coded for the frequency of each metacognitive process as it aligned with prior rubrics that have measured verbal protocols in the past. We hypothesized that the first learning activity would have representative instances of metacognitive regulation, since it was an invention task.

Verbal coding rubric.

Code Type | Definition | Transcript Examples
Monitoring | Checking one’s understanding about what the task is asking them to do; making sure they understand what they are learning/doing. | “I’m gonna figure out a pretty much the range of them from vertically and horizontally? I’m not sure if these numbers work (inaudible)”. “That doesn’t make sense”.
Control/Debugging | An action to correct one’s understanding or to enhance one’s understanding/progress. Often involves using a different strategy or rereading. | “I’m re-reading the instructions a little bit”. “So try a different thing”.
Conceptual Error Correction | A statement that reflects an understanding that something is incorrect with their strategy or reflects noticing a misconception about the problem. | “I’m thinking of finding a better system because, most of these it works but not for Smythe’s finest because it’s accurate, it’s just drifting”.
Calculation Error Correction | Noticing of a small error that is not explicitly conceptual. Small calculator errors would fall into this category. | “4, whoops”.
Evaluation | Reflects on their work to make sure they solved the problem accurately. Reviews for understanding of concepts as well as reflects on accurate problem-solving procedures such as strategies. | “Gotta make sure I added all that stuff together correctly”. “Let’s see, that looks pretty good”. “Let’s check the match on these.”

All videos were transcribed and coded from the first learning activity on variability. Statement length was identified by clauses and natural pauses in speech. Then, two coders independently coded 20% of the data and reached agreement as indexed by an inter-coder reliability analysis (κ > .7). The coders discussed and resolved their discrepancies. Then, they independently coded the rest of the transcripts. The verbal protocol coding was based on prior rubrics and is represented with examples from the transcripts in Table 2 . Due to an experimental error, one participant was not recorded and was therefore excluded from all analyses involving the verbal protocols. For each student, we counted the number of statements generated for each coding category and divided this number by their total number of statements. On average, students generated 58.79 statements, with substantial variation ( SD = 34.10). Students engaged in monitoring the most ( M = 3.05 statements per student), followed by evaluation ( M = 2.71 statements per student). Students rarely employed control/debugging, conceptual error correction, and calculation error correction ( M = .23, .05, and .61, respectively). Therefore, we combined these scores into one control/debugging verbal protocol code ( M = .88 statements per student).

We also examined the relations between the total number of statements generated (i.e., verbosity) and the number of statements for each type of metacognitive category. The amount students monitored ( r = .59, p < .001), control/debugged ( r = .69, p < .001), and evaluated ( r = .72, p < .001) their understanding was related to the total number of utterances. Given this relationship, we divided each type of verbal protocol by the total number of utterances to control for the number of utterances.
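A minimal sketch of this normalization step is shown below, assuming hypothetical per-student counts; the column names and values are illustrative only.

```python
import pandas as pd

# Hypothetical per-student counts of coded statements and total utterances
counts = pd.DataFrame({
    "monitoring": [3, 5, 0],
    "control_debugging": [1, 0, 2],
    "evaluation": [2, 4, 1],
    "total_utterances": [58, 102, 24],
})

# Divide each code's frequency by the student's total number of utterances
# so that more talkative students do not receive inflated metacognition scores.
proportions = counts[["monitoring", "control_debugging", "evaluation"]].div(
    counts["total_utterances"], axis=0
)
print(proportions.round(3))
```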

2.3.5. Task-Based Metacognitive Questionnaire

We adapted questionnaire items from previously validated questionnaires and verbal protocol coding rubrics ( Chi et al. 1989 ; Gadgil et al. 2012 ; Renkl 1997 ) as indicated in Table 3 . Informed by this research and Schellings and Van Hout-Wolters’ ( 2011 ) in-depth analysis of the use of questionnaires and their emphasis on selecting an appropriate questionnaire given the nature of the to-be-assessed activity, we created a task-based questionnaire and adapted items from the MAI, MSLQ, Awareness of Independent Learning Inventory (AILI, Meijer et al. 2013 ), a problem-solving based questionnaire ( Howard et al. 2000 ; Inventory of Metacognitive Self-Regulation [IMSR] that was developed from the MAI and Jr. MAI as well as Fortunato et al. 1991 ), and a state-based questionnaire ( O’Neil and Abedi 1996 ; State Metacognitive Inventory [SMI]). In total, there were 24 metacognitive questions: 8 for monitoring, 9 for control/debugging, and 7 for evaluation. Students responded to each item using a Likert scale ranging from 1, strongly disagree, to 7, strongly agree . All items and their descriptive statistics are presented in Table 3 . We chose to develop and validate a task-based metacognitive questionnaire for three reasons. First, there is mixed evidence about the generality of metacognitive skills ( Van der Stel and Veenman 2014 ). Second, there are no task-based metacognitive measures for a problem-solving activity. Third, to our knowledge, no existing domain-general questionnaires reliably distinguish between the metacognitive skills of monitoring, control/debugging, and evaluation.

Descriptive statistics and factor loading for questionnaire items.

Item | Original Construct | [Min, Max] | M (SD) | Standardized Factor Loading | R² | Residual Variance
Monitoring
During the activity, I found myself pausing regularly to check my comprehension. | MAI | [1, 7] | 4.20 (1.78) | .90 | .81 | .19
During the activity, I kept track of how much I understood the material, not just if I was getting the right answers. | MSLQ Adaptation | [1, 7] | 4.18 (1.60) | .83 | .69 | .31
During the activity, I checked whether my understanding was sufficient to solve new problems. | Based on verbal protocols | [1, 7] | 4.47 (1.59) | .77 | .59 | .41
During the activity, I tried to determine which concepts I didn’t understand well. | MSLQ | [1, 7] | 4.44 (1.65) | .85 | .73 | .27
During the activity, I felt that I was gradually gaining insight into the concepts and procedures of the problems. | AILI | [2, 7] | 5.31 (1.28) | .75 | .56 | .44
During the activity, I made sure I understood how to correctly solve the problems. | Based on verbal protocols | [1, 7] | 4.71 (1.46) | .90 | .80 | .20
During the activity, I tried to understand why the procedure I was using worked. | Strategies | [1, 7] | 4.40 (1.74) | .78 | .62 | .39
During the activity, I was concerned with how well I understood the procedure I was using. | Strategies | [1, 7] | 4.38 (1.81) | .74 | .55 | .45
Control/Debugging
During the activity, I reevaluated my assumptions when I got confused. | MAI | [2, 7] | 5.09 (1.58) | .94 | .89 | .11
During the activity, I stopped and went back over new information that was not clear. | MAI | [1, 7] | 5.09 (1.54) | .65 | .42 | .58
During the activity, I changed strategies when I failed to understand the problem. | MAI | [1, 7] | 4.11 (1.67) | .77 | .60 | .40
During the activity, I kept track of my progress and, if necessary, I changed my techniques or strategies. | SMI | [1, 7] | 4.51 (1.52) | .89 | .79 | .21
During the activity, I corrected my errors when I realized I was solving problems incorrectly. | SMI | [2, 7] | 5.36 (1.35) | .50 | .25 | .75
During the activity, I went back and tried to figure something out when I became confused about something. | MSLQ | [2, 7] | 5.20 (1.58) | .87 | .75 | .25
During the activity, I changed the way I was studying in order to make sure I understood the material. | MSLQ | [1, 7] | 3.82 (1.48) | .70 | .49 | .52
During the activity, I asked myself questions to make sure I understood the material. | MSLQ | [1, 7] | 3.60 (1.59) | .49 | .25 | .76
REVERSE During the activity, I did not think about how well I was understanding the material, instead I was trying to solve the problems as quickly as possible. | Based on verbal protocols | [1, 7] | 3.82 (1.72) | .54 | .30 | .71
Evaluation
During the activity, I found myself analyzing the usefulness of strategies I was using. | MAI | [1, 7] | 5.02 (1.55) | .48 | .23 | .77
During the activity, I reviewed what I had learned. | Based on verbal protocols | [2, 7] | 5.04 (1.40) | .57 | .33 | .67
During the activity, I checked my work all the way through each problem. | IMSR | [1, 7] | 4.62 (1.72) | .94 | .88 | .12
During the activity, I checked to see if my calculations were correct. | IMSR | [1, 7] | 4.73 (1.97) | .95 | .91 | .09
During the activity, I double-checked my work to make sure I did it right. | IMSR | [1, 7] | 4.38 (1.87) | .89 | .79 | .21
During the activity, I reviewed the material to make sure I understood the information. | MAI | [1, 7] | 4.49 (1.71) | .69 | .48 | .52
During the activity, I checked to make sure I understood how to correctly solve each problem. | Based on verbal protocols | [1, 7] | 4.64 (1.57) | .86 | .75 | .26

Note. Items are listed under their respective factors (monitoring, control/debugging, and evaluation).

To evaluate the substantive validity of the questionnaire, we used a second-order CFA model consisting of three correlated factors (i.e., monitoring, control/debugging, and evaluation) and one superordinate factor (i.e., metacognitive regulation) with MPlus version 6.11. A robust weighted least squares estimation (WLSMV) was applied. Prior to running the model, normality assumptions were tested and met. The resulting second-order CFA model had an adequate goodness of fit, CFI = .96, TLI = .96, RMSEA = .096, χ 2 (276) = 2862.30, p < .001 ( Hu and Bentler 1999 ). This finalized model also had a high internal reliability for each of the factors: superordinate, α = .95, monitoring, α = .92, control/debugging, α = .86, and evaluation, α = .87. For factor loadings and item descriptive statistics, see Table 3 . On average, students reported a moderate use of monitoring ( M = 4.51), control/debugging ( M = 4.51), and evaluation ( M = 4.70).
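The CFA itself was fit in MPlus and is not reproduced here; the sketch below only illustrates how the reported internal-consistency values (Cronbach's alpha) for a subscale could be computed from raw item responses. The item names and responses are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-7 Likert responses to three monitoring items from five students
monitoring_items = pd.DataFrame({
    "mon_1": [4, 6, 3, 5, 7],
    "mon_2": [5, 6, 2, 5, 6],
    "mon_3": [4, 7, 3, 4, 6],
})
print(round(cronbach_alpha(monitoring_items), 2))
```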

2.3.6. Use of JOKs

We also analyzed the JOKs (α = .86) using different calculations. As mentioned in the introduction, we calculated the mean absolute accuracy, gamma, and discrimination (see Schraw 2009 for the formulas). Gamma could not be computed for 9 participants (25% of the sample) since they responded with the same confidence rating for all seven items. Therefore, we did not examine gamma in our analyses. Absolute accuracy ranged from .06 to .57, with a lower score indicating better precision in the judgments, whereas discrimination in this study ranged from −3.75 to 4.50, with more positive scores indicating that students could distinguish when they knew something.

2.4. Procedure

The study took approximately 120 min to complete (see Figure 4 for an overview). At the beginning of the study, students were informed that they were going to be videotaped during the experiment and consented to participate in the study. Then, they moved on to complete the pre-test (15 min), followed by the experimenter instructing students to say their thoughts aloud. Then, the experimenter gave the students a sheet of paper with three multiplication problems on it. If students struggled to think aloud while solving problems (i.e., they did not say anything), then the experimenter modeled how to think aloud. Once students completed all three problems and the experimenter was satisfied that they understood how to think aloud (3 min), the experimenter moved on to the learning activity. Students had 15 min to complete the variability learning activity. After the variability activity, students watched a consolidation video (15 min) and worked through a standard deviation activity (15 min). Then, they were asked to complete the task-based questionnaire (10 min). Once the questionnaire was completed, the students had 35 min to complete the post-test. Upon completion of the post-test, students completed several questionnaires and a demographic survey, and were then debriefed (12 min).

3. Results

The first set of analyses examined whether the three measures were related to one another. The second set of analyses evaluated the degree to which the different measures related to learning, transfer, and PFL, providing evidence for the external validity of the measurements. Descriptive statistics for each measure are presented in Table 4 . For all analyses, alpha was set to .05 and results were interpreted as trending if p < .10.

Descriptive statistics for each measure.

Measure | Variable | n | Min | Max | M | SE | SD
Verbal Protocols | Monitoring | 44 | 0.00 | 0.29 | 0.05 | 0.01 | 0.06
Verbal Protocols | Control/Debugging | 44 | 0.00 | 0.06 | 0.01 | 0.002 | 0.02
Verbal Protocols | Evaluation | 44 | 0.00 | 0.16 | 0.04 | 0.01 | 0.04
Questionnaire | Monitoring | 45 | 1.13 | 6.75 | 4.51 | 0.19 | 1.29
Questionnaire | Control/Debugging | 45 | 2.33 | 6.44 | 4.51 | 0.16 | 1.08
Questionnaire | Evaluation | 45 | 2.14 | 7.00 | 4.70 | 0.19 | 1.28
JOKs | Mean | 45 | 2.00 | 5.00 | 4.31 | 0.09 | 0.60
JOKs | Mean Absolute Accuracy | 45 | 0.06 | 0.57 | 0.22 | 0.02 | 0.13
JOKs | Discrimination | 45 | −3.75 | 4.50 | 1.43 | 0.33 | 2.21

Note. To control for the variation in the length of the verbal protocols across participants, the verbal protocol measures were calculated by taking the total number of times the specified verbal protocol measure occurred by a participant and dividing that by the total number of utterances that participant made during the learning activity.

3.1. Relation within and across Metacognitive Measures

To evaluate whether the measures revealed similar associations between the different skills both within and across the measures, we used Pearson correlation analyses. See Table 5 for all correlations. Within the measures, we found that there were no associations among the skills in the verbal protocol codes, but there were positive associations between all the skills in the task-based questionnaire (monitoring, control/debugging, and evaluation). For the JOKs, there was a negative association between mean absolute accuracy and discrimination, meaning that the more accurate participants were at judging their confidence (a score closer to zero for absolute accuracy), the more aware they were of their correct performance (positive discrimination score). There was also a positive association between the average ratings of the JOKs and discrimination, meaning that those who assigned higher confidence ratings were also more aware of their correct performance.

Correlations between the task-based questionnaire, verbal protocols, and judgments of knowing.

Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
VPs 1. Monitoring | - | .09 | .01 | −.36 * | −.10 | −.16 | −.41 * | −.07 | −.14
2. Control/Debugging | | - | .16 | .12 | −.08 | .14 | −.16 | .03 | −.08
3. Evaluation | | | - | .29 † | .31 * | .37 * | −.10 | .02 | .01
Qs 4. Monitoring | | | | - | .73 ** | .73 ** | .26 † | .06 | .02
5. Control/Debugging | | | | | - | .65 ** | .02 | −.02 | −.03
6. Evaluation | | | | | | - | .15 | .11 | −.09
JOKs 7. Average | | | | | | | - | .14 | .39 **
8. Mean Absolute Accuracy | | | | | | | | - | −.76 **
9. Discrimination | | | | | | | | | -

Note. VPs = Verbal Protocols, Qs = Questionnaire, JOKs = Judgments of Knowing, † = p < .10, * = p < .05, and ** = p < .01.

Across the measures, an interesting pattern emerged. The proportion of monitoring statements was negatively associated with the monitoring questionnaire and the average JOK ratings. However, there was no relationship between the monitoring questionnaire and the average JOK ratings. For the other skills, control/debugging and evaluation questionnaire responses positively correlated with the proportion of evaluation statements. There were also two trends for the monitoring questionnaire, such that it was positively related to the proportion of evaluation statements and the average JOK ratings. Otherwise, there were no other associations.
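For completeness, the sketch below illustrates how one of these pairwise Pearson correlations and its p-value could be computed; the variable names and values are hypothetical and do not reproduce the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-student scores: proportion of monitoring statements (verbal protocols)
# and self-reported monitoring (task-based questionnaire)
vp_monitoring = np.array([0.05, 0.12, 0.00, 0.08, 0.03, 0.10])
q_monitoring = np.array([5.1, 3.2, 4.8, 4.0, 5.5, 3.9])

r, p = stats.pearsonr(vp_monitoring, q_monitoring)
print(f"r = {r:.2f}, p = {p:.3f}")
```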

3.2. Relation between Metacognitive Measures and Learning

3.2.1. Learning and Test Performance

The learning materials included the first and second learning activities, and a post-test that included transfer items and a PFL item. For the first learning activity, the scores ranged from 0 to 3 (out of 4) with an average score of 1.6 points ( SD = .72, 40%). For the second learning activity, the scores ranged between 0 and 2 (out of 5) with an average score of 1.56 points ( SD = .59; 31%). Given the low performance on the second activity and the observation that most students applied mean deviation to it instead of inventing a new procedure, we did not analyze these results. For the post-test transfer items, the scores ranged from 1 to 5.67 (out of 6) with an average score of 3.86 points ( SD = 1.26). We did not include the PFL in the transfer score, as we were particularly interested in examining the relation between the metacognitive measures and PFL. The PFL scores ranged from 0 to 1 (out of 1) with an average score of 0.49 ( SD = 0.51). For ease of interpretation, we converted student scores for all learning measures into proportion correct in Table 6 .

Descriptive statistics for each learning measure.

Measure | n | Min | Max | M | SE | SD
First Learning Activity | 45 | 0.00 | 0.75 | 0.40 | 0.03 | 0.18
Transfer | 45 | 0.17 | 0.94 | 0.64 | 0.03 | 0.21
PFL | 45 | 0.00 | 1.00 | 0.49 | 0.08 | 0.51

To evaluate the relation between each metacognitive measure and the learning materials, we used a series of regressions. We used multiple linear regressions to test the amount of variance explained in the first learning activity and post-test performance by each measure. Then, to test the amount of variance explained by each metacognitive measure in the PFL performance, we used multiple logistic regression. In addition to these models, we also regressed the learning outcomes on the most predictive variables from each of the measures and entered them into a competing model to evaluate whether and how much they uniquely contribute to the overall variance.
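A minimal sketch of this analysis strategy is shown below using Python's statsmodels, assuming a hypothetical per-student data frame; the column names are illustrative, and the Nagelkerke pseudo-R² shown for the logistic model is one common formulation rather than the exact routine used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Hypothetical per-student data: verbal-protocol proportions and learning outcomes
df = pd.DataFrame({
    "monitoring": rng.uniform(0, 0.3, 45),
    "control_debugging": rng.uniform(0, 0.06, 45),
    "evaluation": rng.uniform(0, 0.16, 45),
    "first_activity": rng.uniform(0, 1, 45),
    "pfl_correct": rng.integers(0, 2, 45),
})

# Multiple linear regression: continuous outcome (e.g., first learning activity score)
ols_fit = smf.ols("first_activity ~ monitoring + control_debugging + evaluation", data=df).fit()
print(ols_fit.rsquared_adj)

# Logistic regression: binary outcome (e.g., solving the PFL problem)
logit_fit = smf.logit("pfl_correct ~ monitoring + control_debugging + evaluation", data=df).fit(disp=0)

# Nagelkerke pseudo-R^2 from the fitted and null log-likelihoods
n = len(df)
cox_snell = 1 - np.exp((2 / n) * (logit_fit.llnull - logit_fit.llf))
nagelkerke = cox_snell / (1 - np.exp((2 / n) * logit_fit.llnull))
print(nagelkerke)
```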

3.2.2. Verbal Protocols and Learning Outcomes

For verbal protocols, we entered each of the codes into the model. The model predicting performance on the first learning activity explained 14.2% of the variance as indexed by the adjusted R 2 statistic, F (3, 40) = 2.21, p = .10. Within the model, there was only an effect of monitoring, β = −0.37, t = −2.51, p = .02, VIF = 1.00 ( Table 7 ). The models predicting transfer, F (3, 40) = 0.19, p = .90, and PFL scores, χ 2 (3, N = 44) = 5.05, p = .17, were not significant.

Multiple linear regression model predicting performance on the first activity with verbal protocols.

Variable | β | t | p | VIF
Monitoring statements | −0.37 | −2.51 | .02 * | 1.01
Control/Debugging statements | −0.05 | −0.32 | .75 | 1.03
Evaluation statements | −0.03 | −0.17 | .87 | 1.02
Constant | | 10.06 | <.001 *** |

Note. * = p < .05 and *** = p < .001.

3.2.3. Task-Based Questionnaire and Learning Outcomes

For the task-based questionnaire, we computed two types of models: one with all three metacognitive skills entered together and one with each metacognitive skill entered separately. Entering all three skills simultaneously led to no significant relations for the first learning activity, F(3, 41) = 1.46, p = .24, transfer, F(3, 41) = 0.15, p = .93, or PFL, χ²(3, N = 45) = 2.97, p = .40. However, because the three factors were highly correlated, we also entered each factor into three separate models ( Kraha et al. 2012 ).
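One way to check that this multicollinearity concern is warranted is to inspect variance inflation factors for the three subscales before interpreting the combined model. The snippet below is a hedged illustration only, not the authors' code, and the column names are hypothetical.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("metacognition_measures.csv")
X = sm.add_constant(df[["q_monitoring", "q_control_debugging", "q_evaluation"]])

# A VIF well above roughly 5 (some use 10) signals that a predictor is largely
# redundant with the others, supporting separate single-predictor models.
for i, name in enumerate(X.columns):
    print(name, round(variance_inflation_factor(X.values, i), 2))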

Entering the skills into separate models revealed a marginal effect of self-reported monitoring, β = 0.27, t = 1.87, p = .07, VIF = 1.00, and of self-reported evaluation, β = 0.29, t = 2.0, p = .05, VIF = 1.00, on the first learning activity. The model predicting performance on the first learning activity with self-reported monitoring explained 7.5% of the variance as indexed by the adjusted R² statistic, F(1, 43) = 3.50, p = .07, whereas the model with self-reported evaluation explained 8.5% of the variance as indexed by the adjusted R² statistic, F(1, 43) = 4.01, p = .05. Otherwise, there were no significant relations. Self-reported monitoring and evaluation were not related to performance on transfer (F(1, 43) = 0.1, p = .75 and F(1, 43) = 0.02, p = .88, respectively) or to PFL scores (χ²(1, N = 45) = 0.01, p = .91 and χ²(1, N = 45) = 1.29, p = .26, respectively), and self-reported control/debugging had no relation to any of the learning outcomes (learning activity: F(1, 43) = 1.52, p = .22; transfer: F(1, 43) = 0.07, p = .79; PFL: χ²(1, N = 45) = .69, p = .41).

3.2.4. JOKs and Learning Outcomes

The JOK calculations were entered into three separate models for each learning outcome, since they were highly correlated with each other.
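For concreteness, the three JOK indices can be computed from per-item confidence ratings and per-item correctness. The sketch below is illustrative only: it assumes ratings rescaled to 0–1, treats mean absolute accuracy as the average absolute difference between confidence and correctness, and treats discrimination as the difference in mean confidence between correct and incorrect items, consistent with the descriptions later in the paper; the authors' exact formulas may differ.

import numpy as np

def jok_indices(confidence, correct):
    # confidence: per-item JOK ratings rescaled to [0, 1]
    # correct: per-item accuracy coded 0/1 (same length as confidence)
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)

    average_rating = confidence.mean()
    # Mean absolute accuracy: 0 = perfectly calibrated, 1 = maximally miscalibrated.
    mean_absolute_accuracy = np.abs(confidence - correct).mean()
    # Discrimination: how much higher confidence is on correct than on incorrect items
    # (undefined if every item was answered correctly or every item incorrectly).
    discrimination = (confidence[correct == 1].mean()
                      - confidence[correct == 0].mean())
    return average_rating, mean_absolute_accuracy, discrimination

# Example: a student who is confident and mostly right shows positive discrimination.
print(jok_indices([0.9, 0.8, 0.4, 0.7], [1, 1, 0, 1]))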

Average ratings. The model predicting the first activity explained 10.4% of the variance as indexed by the adjusted R² statistic, F(1, 43) = 6.11, p = .02, in which there was an effect of average JOK ratings, β = 0.35, t = 2.47, p = .02, VIF = 1.00. The model predicting transfer explained 14.1% of the variance as indexed by the adjusted R² statistic, F(1, 43) = 7.07, p = .01, in which there was an effect of average JOK ratings, β = 0.38, t = 2.66, p = .01, VIF = 1.00. The logistic model predicting PFL scores explained 15.6% of the variance as indexed by the adjusted Nagelkerke R² statistic, χ²(1, N = 43) = 5.6, p < .05. There was an effect of average JOK ratings, B = 4.17, Exp(B) = 64.71, Wald’s χ²(1, N = 44) = 4.21, p = .04. Thus, higher average JOK ratings were associated with an increase in the likelihood of solving the PFL problem.

Mean absolute accuracy. The model predicting the first activity explained 4.2% of the variance as indexed by the adjusted R² statistic, F(1, 42) = 1.85, p = .18. The model predicting transfer explained 50.8% of the variance as indexed by the adjusted R² statistic, F(1, 42) = 43.42, p < .001, in which there was an effect of mean absolute accuracy, β = −0.71, t = −6.59, p < .001, VIF = 1.00. The logistic model predicting PFL scores explained 8.9% of the variance as indexed by the adjusted Nagelkerke R² statistic, χ²(1, N = 43) = 3.03, p = .08, in which there was a marginal effect of mean absolute accuracy, B = −4.26, Exp(B) = 0.01, Wald’s χ²(1, N = 44) = 2.74, p = .098. Thus, increasing mean absolute accuracy (i.e., worse accuracy) was associated with a reduction in the likelihood of solving the PFL problem.

Discrimination. The model predicting performance on the first activity explained 0.1% of the variance as indexed by the adjusted R² statistic, F(1, 42) = 0.05, p = .83. The model predicting transfer explained 88.1% of the variance as indexed by the adjusted R² statistic, F(1, 42) = 318.61, p < .001, in which there was an effect of discrimination, β = 0.94, t = 17.85, p < .001, VIF = 1.00. The logistic model predicting PFL scores explained 33.6% of the variance as indexed by the adjusted Nagelkerke R² statistic, χ²(1, N = 43) = 12.80, p < .001, in which there was an effect of discrimination, B = 0.60, Exp(B) = 1.82, Wald’s χ²(1, N = 44) = 8.88, p = .003. Thus, increasing discrimination was associated with an increased likelihood of solving the PFL problem.

3.2.5. Competing Models

We evaluated a competing model for the first learning activity to determine whether constructs from the different measures predicted unique variance in this outcome. Competing models for transfer and PFL were not computed, as only the JOK measures predicted those outcomes. For the model predicting the first learning activity, we regressed performance on self-reported evaluation, monitoring statements, and the JOK average. The model explained 24.7% of the variance as indexed by the adjusted R² statistic, F(3, 40) = 4.37, p = .009. Within the model, there was a marginal effect of self-reported evaluation, β = 0.24, t = 1.71, p = .095, VIF = 1.03. Otherwise, there were no other significant effects (Table 8).

Multiple linear regression model predicting performance on the first activity with self-reported evaluation, monitoring statements, and JOK average.

Variable | β | t | p | VIF
Self-reported Evaluation | 0.24 | 1.71 | .095 | 1.03
Monitoring Statements | −0.24 | −1.60 | .12 | 1.22
JOK Average | 0.23 | 1.53 | .13 | 1.21
Constant | | −0.08 | .93 |

4. Discussion

These results raise some important questions about measures of metacognitive regulation, specifically those that assess the skills of monitoring, control/debugging, and evaluation. Not only do the task-based questionnaire, verbal protocols, and JOK measures assessing these skills show little relation to one another, but they also predict different learning outcomes. Although these results suggest that the measures capture different processes, one aspect of the results suggests that they also share some overlapping variance: when the most predictive variables from the different types of measures were entered together in the competing model for the learning activity, none uniquely predicted performance. Below, we discuss these results further by first focusing on the relations among the measures and their relations to learning outcomes, and then turning to their implications and areas for future research.

4.1. Relation of Measures

A central goal of this study was to examine the degree to which these different measures of metacognitive regulation relate to each other for a subset of metacognitive skills (monitoring, control/debugging, and evaluation). The results demonstrated that there is little association between the task-based metacognitive regulation questionnaire and the corresponding verbal protocols, suggesting that these measurements are inaccurate, measure different processes than intended, or some combination of the two. For example, self-reported monitoring was negatively related to the monitoring statements. This finding suggests that the more students monitored their understanding, the less likely they were to report doing so on a questionnaire, reflecting a disconnect between what students do and what they think they do. This misalignment might be particularly pronounced for students who are struggling with the content and are making more monitoring statements. It also implies that students are unaware of how much they are struggling or, worse, that they are aware of it but are biased to report the opposite when asked, perhaps because they do not want to appear incompetent. This speculation is also related to the observational finding that when students monitored their understanding, they were more likely to share negative monitoring statements such as “I do not understand this.” Therefore, a more in-depth analysis of the monitoring statements might provide clarity on the relation between these two measures. Another possibility is a mismatch of monitoring valence across the two measures: the monitoring questionnaire items are almost all positively framed (e.g., “During the activity, I felt that I was gradually gaining insight into the concepts and procedures of the problems”), whereas the verbal protocols could capture either positive or negative framings. If what is being expressed in the verbal protocols is mainly monitoring of what one does not understand, then we would expect to see a negative correlation such as the one we found. That is, self-reported monitoring is likely to be negatively aligned with negative monitoring statements but not necessarily with positive monitoring statements. A similar pattern might also hold for the JOK average ratings and the monitoring statements, as they were also negatively associated with each other, especially since the JOKs capture one’s confidence.

The frequency of evaluation statements was associated with both self-reported evaluation and self-reported control/debugging, which suggests that these self-reported constructs capture a similar aspect of metacognitive behavior. There was also a trend in which self-reported monitoring was positively related to evaluation statements. This partial alignment between the questionnaire and the verbal protocols might reflect students’ in-the-moment awareness, in that some processes are more explicit (e.g., evaluation) than others (e.g., control/debugging). The lack of differentiation on the questionnaire could also be attributed to students not being very accurate at knowing what they did and did not do during a learning task. This interpretation is consistent with work by Veenman et al. (2003), in which students’ self-reports had little relation to their actual behaviors. Instead, students might be self-reporting the gist of their actions rather than the specific behaviors captured in the verbal protocols. There might also have been more overlap between the two measures if we had coded the verbal protocols for the entire set of learning activities that the students were self-reporting about (not just the first learning activity). It is also unclear what students were referencing when answering the self-reports. They could have been referencing their behaviors on the most recent task (i.e., the standard deviation activity), for which we did not code their metacognitive verbalizations.

There was also a trend in which the average JOK ratings were positively related to self-reported monitoring, suggesting that the average JOK ratings reflected some aspects of monitoring that were captured in the questionnaire. Otherwise, there were no associations between the JOKs and the monitoring and evaluation statements or questionnaire responses. As mentioned earlier, JOKs capture the accuracy of one’s monitoring and evaluating, not just the act of performing a skill or a recollection of how often one engaged in it. This result suggests that being able to identify when one engages in these skills is different from accurately gauging one’s understanding or self-reporting whether one checked one’s understanding. Another interpretation is that JOK accuracy might benefit from the additional learning experiences that took place after the verbal protocols (i.e., the consolidation video) and after the questionnaire (i.e., the embedded resource). These additional resources may provide a more comprehensive picture of the learner’s understanding and might have allowed students to resolve some of their misunderstandings. Prior research also shows that students can learn from a test (Pan and Rickard 2018), providing them with additional information to inform their judgments.

The learning activity might have also played a role in the relationship across the different measures. As mentioned, the structured inquiry task allows for more opportunities to engage in metacognition. This opportunity might also allow for instances in which the metacognitive skills are difficult to distinguish, as they might co-occur or overlap with each other. Perhaps if the learning activity were designed to elicit a specific metacognitive behavior, different associations would emerge.

4.2. Robust Learning

In terms of learning, students’ self-reported use of monitoring and evaluation had a marginal relation to their performance on the first activity, which provides some external validity for those two components. However, there was no relation between the self-reports and transfer or PFL performance. It could be that the monitoring and evaluation components of the questionnaire were able to predict performance on the task on which they were based, but not the application of that knowledge beyond the task. This finding suggests that these questionnaire measures are limited in the types of learning outcomes they can predict. It is also important to note the differences between this work and past work; here, the questionnaire was task specific and involved a problem-solving activity, whereas other work has examined more domain-general content and related the questionnaires to achievement. Therefore, it is difficult to know whether the task-specific framing of the questionnaire, the change in assessment, or both limit its predictive power.

The low internal reliability of the transfer post-test could have also posed difficulties in examining these analyses, as students were responding very differently across the items. The lack of internal reliability might be attributed to the combination of different types of transfer items within the assessment. Future work could employ an assessment with multiple items per concept and per transfer type (e.g., near versus intermediate) to determine the extent to which the reliability of the test items impacted the results.

As predicted, there was an association between the monitoring verbal protocols and performance on the first learning activity. The negative association, as well as the observation that the majority of the metacognitive statements reflected a lack of understanding, aligns well with Renkl’s (1997) findings, in which negative monitoring was related to transfer outcomes. Although monitoring was not a positive predictor, our verbal protocol rubric differs from those used in studies that have found positive learning outcomes, as we coded for the frequency of metacognitive statements and not other aspects of a metacognitive event, such as its quality or valence (e.g., Van der Stel and Veenman 2010). For example, the quality of a metacognitive event can be meaningful and add precision to the outcomes it predicts (Binbasaran-Tuysuzoglu and Greene 2015). We did not see an association between the verbal protocols and performance on the transfer or PFL problems. One reason for the lack of relationship might be that the verbal protocols occurred during the encoding stage with materials that were not identical to the retrieval- and application-based materials used at the post-test. Although there is no prior work evaluating PFL with verbal protocols, other work evaluating transfer suggests that we would have found some relation (e.g., Renkl 1997). It would be productive for future research to explore how different verbal protocol rubrics relate to one another and whether the types of verbal protocols elicited by different tasks show different relations to robust learning.

Students’ average JOK ratings, absolute accuracy (knowing when they knew something), and discrimination (rating correct items with higher confidence than incorrect items) were strong predictors of performance on transfer and PFL. These relations could be due to the time-contingent and content-dependent aspects of JOKs, as they were tied to the test which occurred after the learning, whereas the verbal protocols and questionnaires were tied to the learning materials and occurred during and after the learning materials, respectively. Regardless, these findings suggest that being able to monitor one’s understanding is important for learning outcomes. Given there was a strong negative relation between the average JOK ratings and monitoring questionnaire and no relationship between the questionnaire and discrimination and absolute accuracy, it also supports that these measures capture different aspects of metacognition. JOKs might be assessing one’s accuracy at identifying their understanding (i.e., monitoring accuracy) whereas the average JOKs and the monitoring questionnaire might be assessing one’s awareness of checking one’s understanding. However, when comparing the average JOK ratings to the monitoring questionnaire on performance for the first learning activity, the average JOKs have a stronger relationship, implying that after a learning experience and consolidation lecture, students are more accurate at recognizing their understanding.

Although prior work has argued that JOKs are domain general ( Schraw 1996 ), we do not find discrimination or absolute accuracy to be predictive of the learning activity; however, the average JOK ratings were predictive. Students who had higher average JOKs performed better on the learning activity, but it did not matter how accurate their JOKs were. However, for transfer and PFL measures, their accuracy in their monitoring did matter. This finding suggests that students’ ability to monitor their understanding might transfer across different learning measures, but their accuracy is more dependent on the actual learning measure. This assumption is consistent with prior work in which students’ monitoring accuracy varied as a function of the item difficulty ( Pulford and Colman 1997 ).

When generating competing models across the metacognitive measures, we were only able to examine one, in which we predicted performance on the first activity from the evaluation questionnaire, monitoring statements, and the JOK average. Although the overall model was significant, no individual predictor reached significance. This pattern suggests that the measures captured shared variance in their relation to learning, even though they were not associated with each other.

4.3. Theoretical and Educational Implications

One goal of this study was to explore the relations between the different skills and the level of specificity at which to describe the constructs. With the task-based survey, we were able to establish a second-order factor structure in which the different skills loaded on the higher-order factor of metacognitive regulation while also forming unique, distinguishable factors for each skill. We were also able to distinguish between the different metacognitive skills in the verbal protocols, with adequate inter-rater reliability between the two coders and differential relations among the codes and between the codes and the learning and robust learning outcomes. The lack of correlation between the verbal protocol codes suggests that they capture different skills. This interpretation is further supported by the finding that the verbal protocol codes predicted different types of learning outcomes. This work highlights the need for future theory building to incorporate specific types of metacognitive skills and measures into a more cohesive metacognitive framework. Doing so would inform both future research examining how these processes operate and educators who want to understand which particular aspects of metacognition their students need more or less support in using.

This work also has practical implications for education. Although verbal protocols provide insight into what participants were thinking, they were the least predictive of subsequent learning performance. Even so, verbal protocols can still be meaningful and relevant in certain classroom situations. Of course, a teacher could not conduct verbal protocols with all of their students, but the approach could be applied when a teacher is concerned about how a particular student is engaging in the problem-solving process. In this case, a productive exercise might be to ask the student to verbalize their thoughts as they solve the problem while the teacher takes notes on whether certain metacognitive prompts might help guide the student through the problem-solving process.

The task-based questionnaire and the metacognitive judgment measures, which can be administered to several students at one time and thus are more easily applied in educational contexts, had stronger relations to learning outcomes. Given that the JOKs in this study were positively related to multiple learning outcomes, they might have more utility in classroom settings. JOKs allow teachers to measure how well students are able to monitor their learning performance. To complement this approach, if teachers want to understand whether their students are engaging in different types of metacognitive skills as they learn course content, the task-based questionnaire can readily capture which types of metacognitive skills they are employing. These measures can thus be used in complementary ways, depending on the goals of the teacher.

4.4. Future Research

This work examines a subset of metacognitive measures, but there are many more in the literature that should be compared to evaluate how metacognitive regulation functions. Given the nature of the monitoring examined in this work, it would be particularly interesting to examine how different metacognitive judgments such as judgments of learning relate to the monitoring assessed by the verbal protocols and the questionnaire. Kelemen et al. ( 2000 ) provide evidence that different metacognitive judgments assess different processes, so we might expect to find different associations. For example, perhaps judgments of learning are more related to monitoring statements than JOKs. Judgments of learning have a closer temporal proximity to the monitoring statements and target the same material as the verbal protocols. In contrast, JOKs typically occur at a delay and assess post-test materials that are not identical to the material presented in the learning activity. In this work, we were not able to capture both judgments of learning and JOKs because the learning activity did not allow for multiple measures of judgments of learning. Therefore, if a learning activity allowed for more flexibility in capturing multiple judgments of learning, then we might see different relations emerge due to the timing of the measures.

Future work could also explore the predictive power of the task-based questionnaire relative to other validated self-report measures, such as a domain-based adaptation of the MAI or MSLQ. It would also be interesting to examine how these different measures relate to other external factors predicted by theories of self-regulated learning, including the degree to which the task-based questionnaire, JOKs, and verbal protocols relate to motivational constructs such as achievement goal orientations, as well as to more cognitive sense-making processes such as analogical comparison and self-explanation. Such research might provide more support for some self-regulated learning theories over others, given their hypothesized relationships. More pertinent to this line of work, this approach has the potential to help refine theories of metacognitive regulation and their associated measures by providing greater insight into the different processes captured by each measure and skill.

Acknowledgments

We thank Christian Schunn, Vincent Aleven, and Ming-Te Wang for their feedback on the study. We also thank research assistants Christina Hlutkowsky, Morgan Everett, Sarah Honsaker, and Christine Ebdlahad for their help in transcribing and/or coding the data.

Funding Statement

This research was supported by National Science Foundation (SBE 0836012) to the Pittsburgh Science of Learning Center ( http://www.learnlab.org ).

Author Contributions

Conceptualization, C.D.Z. and T.J.N.-M.; Formal analysis, C.D.Z.; Writing—original draft, C.D.Z.; Writing—review & editing, C.D.Z. and T.J.N.-M.; Project administration, C.D.Z. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of the University of Pittsburgh (PRO13070080, approved on 2/3/2014).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

  • Alexander Patricia A. Calibration: What is it and why it matters? An introduction to the special issue on calibrating calibration. Learning and Instruction. 2013; 24 :1–3. doi: 10.1016/j.learninstruc.2012.10.003. [ CrossRef ] [ Google Scholar ]
  • Alfieri Louis, Brooks Patricia J., Aldrich Naomi J., Tenenbaum Harriet R. Does discovery-based instruction enhance learning? Journal of Educational Psychology. 2011; 103 :1–18. doi: 10.1037/a0021017. [ CrossRef ] [ Google Scholar ]
  • Azevedo Roger, Witherspoon Amy M. Self-regulated use of hypermedia. In: Hacker Douglas J., Dunlosky John, Graesser Arthur C., editors. Handbook of Metacognition in Education. Erlbaum; Mahwah: 2009. [ Google Scholar ]
  • Azevedo Roger. Reflections on the field of metacognition: Issues, challenges, and opportunities. Metacognition Learning. 2020; 15 :91–98. doi: 10.1007/s11409-020-09231-x. [ CrossRef ] [ Google Scholar ]
  • Belenky Daniel M., Nokes-Malach Timothy J. Motivation and transfer: The role of mastery-approach goals in preparation for future learning. Journal of the Learning Sciences. 2012; 21 :399–432. doi: 10.1080/10508406.2011.651232. [ CrossRef ] [ Google Scholar ]
  • Berardi-Coletta Bernadette, Buyer Linda S., Dominowski Roger L., Rellinger Elizabeth R. Metacognition and problem solving: A process-oriented approach. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1995; 21 :205–23. doi: 10.1037/0278-7393.21.1.205. [ CrossRef ] [ Google Scholar ]
  • Binbasaran-Tuysuzoglu Banu, Greene Jeffrey Alan. An investigation of the role of contingent metacognitive behavior in self-regulated learning. Metacognition and Learning. 2015; 10 :77–98. doi: 10.1007/s11409-014-9126-y. [ CrossRef ] [ Google Scholar ]
  • Bransford John D., Schwartz Daniel L. Rethinking transfer: A simple proposal with multiple implications. Review of Research in Education. 1999; 24 :61–100. doi: 10.3102/0091732x024001061. [ CrossRef ] [ Google Scholar ]
  • Brown Ann L. Metacognition, executive control, self-regulation, and other more mysterious mechanisms. In: Weinert Franz Emanuel, Kluwe Rainer H., editors. Metacognition, Motivation, and Understanding. Lawrence Erlbaum Associates; Hillsdale: 1987. pp. 65–116. [ Google Scholar ]
  • Brown Ann L., Bransford John D., Ferrara Roberta A., Campione Joseph C. Learning, remembering, and understanding. In: Flavell John H., Markman Ellen M., editors. Handbook of Child Psychology: Vol. 3. Cognitive Development. 4th ed. Wiley; New York: 1983. pp. 77–166. [ Google Scholar ]
  • Chi Michelene T. H., Bassok Miriam, Lewis Matthew W., Reimann Peter, Glaser Robert. Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science. 1989; 13 :145–82. doi: 10.1207/s15516709cog1302_1. [ CrossRef ] [ Google Scholar ]
  • Cromley Jennifer G., Azevedo Roger. Self-report of reading comprehension strategies: What are we measuring? Metacognition and Learning. 2006; 1 :229–47. doi: 10.1007/s11409-006-9002-5. [ CrossRef ] [ Google Scholar ]
  • Dentakos Stella, Saoud Wafa, Ackerman Rakefet, Toplak Maggie E. Does domain matter? Monitoring accuracy across domains. Metacognition and Learning. 2019; 14 :413–36. doi: 10.1007/s11409-019-09198-4. [ CrossRef ] [ Google Scholar ]
  • Dunlosky John, Metcalfe Janet. Metacognition. Sage Publications, Inc.; Thousand Oaks: 2009. [ Google Scholar ]
  • Ericsson K. Anders, Simon Herbert A. Verbal reports as data. Psychological Review. 1980; 87 :215–51. doi: 10.1037/0033-295X.87.3.215. [ CrossRef ] [ Google Scholar ]
  • Flavell John H. Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist. 1979; 34 :906–11. doi: 10.1037/0003-066X.34.10.906. [ CrossRef ] [ Google Scholar ]
  • Fortunato Irene, Hecht Deborah, Tittle Carol Kehr, Alvarez Laura. Metacognition and problem solving. Arithmetic Teacher. 1991; 38 :38–40. doi: 10.5951/AT.39.4.0038. [ CrossRef ] [ Google Scholar ]
  • Gadgil Soniya, Nokes-Malach Timothy J., Chi Michelene T. H. Effectiveness of holistic mental model confrontation in driving conceptual change. Learning and Instruction. 2012; 22 :47–61. doi: 10.1016/j.learninstruc.2011.06.002. [ CrossRef ] [ Google Scholar ]
  • Gadgil Soniya. Doctoral dissertation. University of Pittsburgh; Pittsburgh, PA, USA: 2014. Understanding the Interaction between Students’ Theories of Intelligence and Learning Activities. [ Google Scholar ]
  • Greene Jeffrey Alan, Azevedo Roger. A macro-level analysis of SRL processes and their relations to the acquisition of a sophisticated mental model of a complex system. Contemporary Educational Psychology. 2009; 34 :18–29. doi: 10.1016/j.cedpsych.2008.05.006. [ CrossRef ] [ Google Scholar ]
  • Hacker Douglas J., Dunlosky John, Graesser Arthur C. Handbook of Metacognition in Education. Routledge; New York: 2009. [ Google Scholar ]
  • Howard Bruce C., McGee Steven, Shia Regina, Hong Namsoo S. Metacognitive self-regulation and problem-solving: Expanding the theory base through factor analysis; Paper presented at the Annual Meeting of the American Educational Research Association; New Orleans, LA, USA. April 24–28; 2000. [ Google Scholar ]
  • Howard-Rose Dawn, Winne Philip H. Measuring component and sets of cognitive processes in self-regulated learning. Journal of Educational Psychology. 1993; 85 :591–604. doi: 10.1037/0022-0663.85.4.591. [ CrossRef ] [ Google Scholar ]
  • Howie Pauline, Roebers Claudia M. Developmental progression in the confidence-accuracy relationship in event recall: Insights provided by a calibration perspective. Applied Cognitive Psychology. 2007; 21 :871–93. doi: 10.1002/acp.1302. [ CrossRef ] [ Google Scholar ]
  • Hu Li-tze, Bentler Peter M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal. 1999; 6 :1–55. doi: 10.1080/10705519909540118. [ CrossRef ] [ Google Scholar ]
  • Hunter-Blanks Patricia, Ghatala Elizabeth S., Pressley Michael, Levin Joel R. Comparison of monitoring during study and during testing on a sentence-learning task. Journal of Educational Psychology. 1988; 80 :279–83. doi: 10.1037/0022-0663.80.3.279. [ CrossRef ] [ Google Scholar ]
  • Jacobs Janis E., Paris Scott G. Children’s metacognition about reading: Issues in definition, measurement, and instruction. Educational Psychologist. 1987; 22 :255–78. doi: 10.1080/00461520.1987.9653052. [ CrossRef ] [ Google Scholar ]
  • Kapur Manu, Bielaczyc Katerine. Designing for productive failure. Journal of the Learning Sciences. 2012; 21 :45–83. doi: 10.1080/10508406.2011.591717. [ CrossRef ] [ Google Scholar ]
  • Kapur Manu. Productive failure. Cognition and Instruction. 2008; 26 :379–424. doi: 10.1080/07370000802212669. [ CrossRef ] [ Google Scholar ]
  • Kapur Manu. Productive failure in learning the concept of variance. Instructional Science. 2012; 40 :651–72. doi: 10.1007/s11251-012-9209-6. [ CrossRef ] [ Google Scholar ]
  • Kelemen William L., Frost Peter J., Weaver Charles A. Individual differences in metacognition: Evidence against a general metacognitive ability. Memory & Cognition. 2000; 28 :92–107. doi: 10.3758/BF03211579. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kistner Saskia, Rakoczy Katrin, Otto Barbara, Ewijk Charlotte Dignath-van, Büttner Gerhard, Klieme Eckhard. Promotion of self-regulated learning in classrooms: Investigating frequency, quality, and consequences for student performance. Metacognition and Learning. 2010; 5 :157–71. doi: 10.1007/s11409-010-9055-3. [ CrossRef ] [ Google Scholar ]
  • Koedinger Kenneth R., Corbett Albert T., Perfetti Charles. The knowledge-learning-instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive Science. 2012; 36 :757–98. doi: 10.1111/j.1551-6709.2012.01245.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kraha Amanda, Turner Heather, Nimon Kim, Zientek Linda Reichwein, Henson Robin K. Tools to support interpreting multiple regression in the face of multicollinearity. Frontiers in Psychology. 2012; 3 :44. doi: 10.3389/fpsyg.2012.00044. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lin Xiaodong, Lehman James D. Supporting learning of variable control in a computer-based biology environment: Effects of prompting college students to reflect on their own thinking. Journal of Research in Science Teaching. 1999; 36 :837–58. doi: 10.1002/(SICI)1098-2736(199909)36:7<837::AID-TEA6>3.0.CO;2-U. [ CrossRef ] [ Google Scholar ]
  • Mazancieux Audrey, Fleming Stephen M., Souchay Céline, Moulin Chris J. A. Is there a G factor for metacognition? Correlations in retrospective metacognitive sensitivity across tasks. Journal of Experimental Psychology: General. 2020; 149 :1788–99. doi: 10.1037/xge0000746. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • McDonough Ian M., Enam Tasnuva, Kraemer Kyle R., Eakin Deborah K., Kim Minjung. Is there more to metamemory? An argument for two specialized monitoring abilities. Psychonomic Bulletin & Review. 2021; 28 :1657–67. doi: 10.3758/s13423-021-01930-z. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Meijer Joost, Veenman Marcel V. J., van Hout-Wolters Bernadette H. A. M. Metacognitive activities in text studying and problem solving: Development of a taxonomy. Educational Research and Evaluation. 2006; 12 :209–37. doi: 10.1080/13803610500479991. [ CrossRef ] [ Google Scholar ]
  • Meijer Joost, Veenman Marcel V. J., van Hout-Wolters Bernadette H. A. M. Multi-domain, multi-method measures of metacognitive activity: What is all the fuss about metacognition … indeed? Research Papers in Education. 2012; 27 :597–627. doi: 10.1080/02671522.2010.550011. [ CrossRef ] [ Google Scholar ]
  • Meijer Joost, Sleegers Peter, Elshout-Mohr Marianne, van Daalen-Kapteijns Maartje, Meeus Wil, Tempelaar Dirk. The development of a questionnaire on metacognition for students in higher education. Educational Research. 2013; 55 :31–52. doi: 10.1080/00131881.2013.767024. [ CrossRef ] [ Google Scholar ]
  • Messick Samuel. Validity. In: Linn Robert L., editor. Educational Measurement. 3rd ed. Macmillan; New York: 1989. pp. 13–103. [ Google Scholar ]
  • Muis Krista R., Winne Philip H., Jamieson-Noel Dianne. Using a multitrait-multimethod analysis to examine conceptual similarities of three self-regulated learning inventories. The British Journal of Educational Psychology. 2007; 77 :177–95. doi: 10.1348/000709905X90876. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Nelson Thomas O. Gamma is a measure of the accuracy of predicting performance on one item relative to another item, not the absolute performance on an individual item Comments on Schraw. Applied Cognitive Psychology. 1996; 10 :257–60. doi: 10.1002/(SICI)1099-0720(199606)10:3<257::AID-ACP400>3.0.CO;2-9. [ CrossRef ] [ Google Scholar ]
  • Nelson Thomas O., Narens L. Metamemory: A theoretical framework and new findings. Psychology of Learning and Motivation. 1990; 26 :125–73. doi: 10.1016/S0079-7421(08)60053-5. [ CrossRef ] [ Google Scholar ]
  • Nietfeld John L., Cao Li, Osborne Jason W. Metacognitive monitoring accuracy and student performance in the postsecondary classroom. The Journal of Experimental Education. 2005; 74 :7–28. [ Google Scholar ]
  • Nietfeld John L., Cao Li, Osborne Jason W. The effect of distributed monitoring exercises and feedback on performance, monitoring accuracy, and self-efficacy. Metacognition and Learning. 2006; 1 :159–79. doi: 10.1007/s10409-006-9595-6. [ CrossRef ] [ Google Scholar ]
  • Nokes-Malach Timothy J., Van Lehn Kurt, Belenky Daniel M., Lichtenstein Max, Cox Gregory. Coordinating principles and examples through analogy and self-explanation. European Journal of Education of Psychology. 2013; 28 :1237–63. doi: 10.1007/s10212-012-0164-z. [ CrossRef ] [ Google Scholar ]
  • O’Neil Harold F., Jr., Abedi Jamal. Reliability and validity of a state metacognitive inventory: Potential for alternative assessment. Journal of Educational Research. 1996; 89 :234–45. doi: 10.1080/00220671.1996.9941208. [ CrossRef ] [ Google Scholar ]
  • Pan Steven C., Rickard Timothy C. Transfer of test-enhanced learning: Meta-analytic review and synthesis. Psychological Bulletin. 2018; 144 :710–56. doi: 10.1037/bul0000151. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pintrich Paul R., De Groot Elisabeth V. Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology. 1990; 82 :33–40. doi: 10.1037/0022-0663.82.1.33. [ CrossRef ] [ Google Scholar ]
  • Pintrich Paul R., Wolters Christopher A., Baxter Gail P. Assessing metacognition and self-regulated learning. In: Schraw Gregory, Impara James C., editors. Issues in the Measurement of Metacognition. Buros Institute of Mental Measurements; Lincoln: 2000. [ Google Scholar ]
  • Pintrich Paul R., Smith David A. F., Garcia Teresa, McKeachie Wilbert J. A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ) The University of Michigan; Ann Arbor: 1991. [ Google Scholar ]
  • Pintrich Paul R., Smith David A. F., Garcia Teresa, McKeachie Wilbert J. Predictive validity and reliability of the Motivated Strategies for Learning Questionnaire (MSLQ) Educational and Psychological Measurement. 1993; 53 :801–13. doi: 10.1177/0013164493053003024. [ CrossRef ] [ Google Scholar ]
  • Pressley Michael, Afflerbach Peter. Verbal Protocols of Reading: The Nature of Constructively Responsive Reading. Routledge; New York: 1995. [ Google Scholar ]
  • Pulford Briony D., Colman Andrew M. Overconfidence: Feedback and item difficulty effects. Personality and Individual Differences. 1997; 23 :125–33. doi: 10.1016/S0191-8869(97)00028-7. [ CrossRef ] [ Google Scholar ]
  • Renkl Alexander. Learning from worked-out examples: A study on individual differences. Cognitive Science. 1997; 21 :1–29. doi: 10.1207/s15516709cog2101_1. [ CrossRef ] [ Google Scholar ]
  • Richey J. Elizabeth, Nokes-Malach Timothy J. Comparing four instructional techniques for promoting robust learning. Educational Psychology Review. 2015; 27 :181–218. doi: 10.1007/s10648-014-9268-0. [ CrossRef ] [ Google Scholar ]
  • Roll Ido, Aleven Vincent, Koedinger Kenneth R. Helping students know “further”—Increasing the flexibility of students ’ knowledge using symbolic invention tasks. In: Taatgen Niels A., Van Rijn Hedderik., editors. Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Cognitive Science Society; Austin: 2009. pp. 1169–74. [ Google Scholar ]
  • Schellings Gonny L. M., van Hout-Wolters Bernadette H. A. M., Veenman Marcel V. J., Meijer Joost. Assessing metacognitive activities: The in-depth comparison of a task-specific questionnaire with think-aloud protocols. European Journal of Psychology of Education. 2013; 28 :963–90. doi: 10.1007/s10212-012-0149-y. [ CrossRef ] [ Google Scholar ]
  • Schellings Gonny, Van Hout-Wolters Bernadette. Measuring strategy use with self-report instruments: Theoretical and empirical considerations. Metacognition and Learning. 2011; 6 :83–90. doi: 10.1007/s11409-011-9081-9. [ CrossRef ] [ Google Scholar ]
  • Schoenfeld Alan H. Learning to think mathematically: Problem solving, metacognition, and sense making in mathematics. In: Grouws Douglas., editor. Handbook for Research on Mathematics Teaching and Learning. Macmillan; New York: 1992. pp. 334–70. [ Google Scholar ]
  • Schraw Gregory, Moshman David. Metacognitive theories. Educational Psychology Review. 1995; 7 :351–71. doi: 10.1007/BF02212307. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory, Dennison Rayne Sperling. Assessing metacognitive awareness. Contemporary Educational Psychology. 1994; 19 :460–75. doi: 10.1006/ceps.1994.1033. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory, Kuch Fred, Gutierrez Antonio P. Measure for measure: Calibrating ten commonly used calibration scores. Learning and Instruction. 2013; 24 :48–57. doi: 10.1016/j.learninstruc.2012.08.007. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory, Dunkle Michael E., Bendixen Lisa D., Roedel Teresa DeBacker. Does a general monitoring skill exist? Journal of Educational Psychology. 1995; 87 :433–444. doi: 10.1037/0022-0663.87.3.433. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory. Measures of feeling-of-knowing accuracy: A new look at an old problem. Applied Cognitive Psychology. 1995; 9 :321–32. doi: 10.1002/acp.2350090405. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory. The effect of generalized metacognitive knowledge on test performance and confidence judgments. The Journal of Experimental Education. 1996; 65 :135–46. doi: 10.1080/00220973.1997.9943788. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory. A conceptual analysis of five measures of metacognitive monitoring. Metacognition and Learning. 2009; 4 :33–45. doi: 10.1007/s11409-008-9031-3. [ CrossRef ] [ Google Scholar ]
  • Schwartz Daniel L., Bransford John D. A time for telling. Cognition and Instruction. 1998; 16 :475–522. doi: 10.1207/s1532690xci1604_4. [ CrossRef ] [ Google Scholar ]
  • Schwartz Daniel L., Martin Taylor. Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction. 2004; 22 :129–84. doi: 10.1207/s1532690xci2202_1. [ CrossRef ] [ Google Scholar ]
  • Schwartz Daniel L., Bransford John D., Sears David. Efficiency and innovation in transfer. In: Mestre Jose., editor. Transfer of Learning from a Modern Multidisciplinary Perspective. Information Age Publishers; Greenwich: 2005. pp. 1–51. [ Google Scholar ]
  • Sperling Rayne A., Howard Bruce C., Miller Lee Ann, Murphy Cheryl. Measures of children’s knowledge and regulation of cognition. Contemporary Educational Psychology. 2002; 27 :51–79. doi: 10.1006/ceps.2001.1091. [ CrossRef ] [ Google Scholar ]
  • Sperling Rayne A., Howard Bruce C., Staley Richard, DuBois Nelson. Metacognition and self-regulated learning constructs. Educational Research and Evaluation. 2004; 10 :117–39. doi: 10.1076/edre.10.2.117.27905. [ CrossRef ] [ Google Scholar ]
  • Van der Stel Manita, Veenman Marcel V. J. Development of metacognitive skillfulness: A longitudinal study. Learning and Individual Differences. 2010; 20 :220–24. doi: 10.1016/j.lindif.2009.11.005. [ CrossRef ] [ Google Scholar ]
  • Van der Stel Manita, Veenman Marcel V. J. Metacognitive skills and intellectual ability of young adolescents: A longitudinal study from a developmental perspective. European Journal of Psychology of Education. 2014; 29 :117–37. doi: 10.1007/s10212-013-0190-5. [ CrossRef ] [ Google Scholar ]
  • Van Hout-Wolters B. H. A. M. Leerstrategieën meten. Soorten meetmethoden en hun bruikbaarheid in onderwijs en onderzoek. [Measuring learning strategies. Different kinds of assessment methods and their usefulness in education and research] Pedagogische Studiën. 2009; 86 :103–10. [ Google Scholar ]
  • Veenman Marcel V. J. The assessment of metacognitive skills: What can be learned from multi- method designs? In: Artelt Cordula, Moschner Barbara., editors. Lernstrategien und Metakognition: Implikationen für Forschung und Praxis. Waxmann; Berlin: 2005. pp. 75–97. [ Google Scholar ]
  • Veenman Marcel V. J., Van Hout-Wolters Bernadette H. A. M., Afflerbach Peter. Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning. 2006; 1 :3–14. doi: 10.1007/s11409-006-6893-0. [ CrossRef ] [ Google Scholar ]
  • Veenman Marcel V. J., Prins Frans J., Verheij Joke. Learning styles: Self-reports versus thinking-aloud measures. British Journal of Educational Psychology. 2003; 73 :357–72. doi: 10.1348/000709903322275885. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Veenman Marcel V. J., Elshout Jan J., Meijer Joost. The generality vs. domain-specificity of metacognitive skills in novice learning across domains. Learning and Instruction. 1997; 7 :187–209. doi: 10.1016/S0959-4752(96)00025-4. [ CrossRef ] [ Google Scholar ]
  • Veenman Marcel V. J., Wilhelm Pascal, Beishuizen Jos J. The relation between intellectual and metacognitive skills from a developmental perspective. Learning and Instruction. 2004; 14 :89–109. doi: 10.1016/j.learninstruc.2003.10.004. [ CrossRef ] [ Google Scholar ]
  • Winne Philip H. A cognitive and metacognitive analysis of self-regulated learning. In: Zimmerman Barry J., Schunk Dale H., editors. Handbook of Self-Regulation of Learning and Performance. Routeledge; New York: 2011. pp. 15–32. [ Google Scholar ]
  • Winne Philip H., Hadwin Allyson F. Studying as self-regulated learning. In: Hacker Douglas J., Dunlosky John, Graesser Arthur C., editors. Metacognition in Educational Theory and Practice. Erlbaum; Hillsdale: 1998. pp. 277–304. [ Google Scholar ]
  • Winne Philip H., Jamieson-Noel Dianne. Exploring students’ calibration of self reports about study tactics and achievement. Contemporary Educational Psychology. 2002; 27 :551–72. doi: 10.1016/S0361-476X(02)00006-1. [ CrossRef ] [ Google Scholar ]
  • Winne Philip H., Jamieson-Noel Dianne, Muis Krista. Methodological issues and advances in researching tactics, strategies, and self-regulated learning. In: Pintrich Paul R., Maehr Martin L., editors. Advances in Motivation and Achievement: New Directions in Measures and Methods. Vol. 12. JAI Press; Greenwich: 2002. pp. 121–55. [ Google Scholar ]
  • Wolters Christopher A. Advancing achievement goal theory: Using goal structures and goal orientations to predict students’ motivation, cognition, and achievement. Journal of Educational Psychology. 2004; 96 :236–50. doi: 10.1037/0022-0663.96.2.236. [ CrossRef ] [ Google Scholar ]
  • Zepeda Cristina D., Richey J. Elizabeth, Ronevich Paul, Nokes-Malach Timothy J. Direct Instruction of Metacognition Benefits Adolescent Science Learning, Transfer, and Motivation: An In Vivo Study. Journal of Educational Psychology. 2015; 107 :954–70. doi: 10.1037/edu0000022. [ CrossRef ] [ Google Scholar ]
  • Zimmerman Barry J. Theories of self-regulated learning and academic achievement: An overview and analysis. In: Zimmerman Barry J., Schunk Dale H., editors. Self-Regulated Learning and Academic Achievement: Theoretical Perspectives. Erlbaum; Mahwah: 2001. pp. 1–37. [ Google Scholar ]

Meta-Complexity: A Basic Introduction for the Meta-Perplexed


by Adam Becker (science communicator in residence, Spring 2023)

Think about the last time you faced a problem you couldn’t solve. Say it was something practical, something that seemed small — a leaky faucet, for example. There’s an exposed screw right on the top of the faucet handle, so you figure all you need to do is turn the faucet off as far as it will go, and then tighten that screw. So you try that, and it doesn’t work. You get a different screwdriver, a better fit for the screw, but you can’t get it to budge. You grab a wrench and take apart the faucet handle, and that doesn’t help much either — it turns out there’s far more under there than you’d expected, and you can barely put it back together again. You’re about to give up and call a plumber, but first you want to see whether you’re close. Maybe it really is easy to fix the problem, and you just need to know where to look. Or maybe it’s far more difficult than you think. So now you’re trying to solve a new problem, a meta-problem: instead of fixing the leaky faucet, you’re trying to figure out how hard it will be to fix the leaky faucet. You turn to the internet, and find that there are many different kinds of faucets and sinks, some of which are practically indistinguishable, and there are different reasons they can leak, unique to each type of sink. Simply determining the difficulty of fixing your leaky faucet is itself turning out to be more difficult than you expected.

Theoretical computer scientists have been facing their own version of this problem for decades. Many of the problems they ask are about complexity: How hard must a computer (really, an idealized version of one) work to perform a particular task? One such task, famous in the annals of both mathematics and computer science — theoretical computer science is where the two disciplines meet — is the traveling salesperson problem. Imagine a traveling salesperson, going from city to city. Starting from her home, she has a list of cities she must visit, and a map with the distances between those cities. Her budget limits the total distance she can travel to a certain maximum, so she’d like to find a route shorter than that maximum distance that allows her to visit each of the cities on her list, returning to her home city at the end. Given her list of cities and her budget, does such a route exist?

There is no known method for solving this problem quickly in a general way — a method that would work for all possible budgets and lists of cities that the salesperson might have. There are ways of doing it, but all of them require a large number of calculations relative to the number of cities on the list, and thus take a great deal of time, especially as the number of cities increases. In fact, the fastest guaranteed method known for solving the traveling salesperson problem takes, in general, an exponentially larger amount of time as the number of cities on the list increases, because there’s no known way to do this that’s significantly faster than brute-forcing the problem by checking every possible route. Compare this with verifying a solution to the traveling salesperson problem: that’s easy. All you have to do is confirm that the proposed route does in fact visit every city once, and that its total distance is shorter than the maximum allowed by the salesperson’s budget.
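To make the asymmetry concrete, here is a small illustrative sketch (not from the article): verifying a proposed route takes only a quick pass over the cities, while the general-purpose solver shown here falls back on trying every possible ordering, which grows factorially with the number of cities. The distance table and budget are made up for the example.

from itertools import permutations

def route_length(route, dist):
    # Total length of a closed tour that starts and ends at route[0].
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

def verify(route, cities, dist, budget):
    # Fast check: every city visited exactly once, and the tour fits the budget.
    return sorted(route) == sorted(cities) and route_length(route, dist) <= budget

def brute_force(cities, dist, budget):
    # Slow search: try every ordering of the remaining cities (factorial growth).
    home, rest = cities[0], cities[1:]
    for perm in permutations(rest):
        route = [home] + list(perm)
        if route_length(route, dist) <= budget:
            return route
    return None

# Made-up distances between four cities and a budget of 9.
dist = {"A": {"B": 2, "C": 4, "D": 3},
        "B": {"A": 2, "C": 1, "D": 5},
        "C": {"A": 4, "B": 1, "D": 2},
        "D": {"A": 3, "B": 5, "C": 2}}
cities = ["A", "B", "C", "D"]

tour = brute_force(cities, dist, budget=9)
print(tour)                                  # ['A', 'B', 'C', 'D'] (length 8)
print(verify(tour, cities, dist, budget=9))  # True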

This property of the traveling salesperson problem — it seems like it can be solved in general only by a lengthy brute-force method, but it’s fast to verify a given solution — places it into a class of “computational complexity” known as NP. (This stands for “nondeterministic polynomial time,” and it’s not particularly important to understand that name in order to understand what’s going on here.) Compare this with a problem like determining whether the last entry on a list of numbers is the largest, for which there are known (and straightforward) methods that don’t scale exponentially with the length of the list. Such problems, which can be solved and verified quickly, are in a complexity class called P, a special subset of NP.

On the face of it, NP and P seem to be different; the traveling salesperson problem (TSP) can’t be solved quickly by any known method. But the trouble, for computer scientists, begins with those words “known method.” While nobody knows a fast way of solving a problem like the traveling salesperson problem, that doesn’t mean no such method exists. Finding such a method would show that TSP actually belongs in P. In fact, it would show more than that, because computer scientists have proved that TSP is not just a member of NP — it is NP-complete: if there were an efficient solution to TSP, it could be adapted to solve every other problem in NP quickly too. Therefore, a fast solution to TSP wouldn’t just show that TSP is part of P — it would show that every problem in NP is a member of P, making P and NP the same complexity class. But if instead someone were to prove that there is no universally fast method for solving TSP, this would mean that TSP and many other similarly difficult problems in NP aren’t in P, meaning that P and NP are not the same complexity class.

So which is it? Does P = NP or not? Nobody knows. This question has haunted theoretical computer science for well over half a century, resisting all attempts at solution — or even reasonable progress. And like the leaky faucet, this difficulty has prompted computer scientists to think about a meta-problem: What’s the complexity of proving whether P = NP? How intricate must a proof that resolves this question be? Is there a trick to it — is it the kind of thing that looks simple in retrospect? Or is it the sort of proof that requires a great deal of intricate mathematics and novel proof techniques? This is meta-complexity: evaluating the complexity of questions that are themselves about computational complexity. The Simons Institute held a research program on the topic in Spring 2023.

Meta-complexity isn’t a new idea. Starting in the late 1940s, pioneers in early computer science on both sides of the Iron Curtain were considering an optimization problem, like TSP, but about idealized computers rather than an idealized salesperson. Specifically, they were thinking about small computers of unknown architecture: black boxes that can be studied only through their behavior. Say you have one of these computers, a little black box that lets you input any whole number you like, up to a certain size. When you do, the box gives you either a 0 or a 1 as output. You want to know what’s in the box, so you start going through inputs and outputs systematically, making a table. 0 gives you 1, 1 gives you 0, 2 gives you 1, and so on. The question these early computer scientists were asking was this: Given a particular table of inputs and outputs, what is the least complex architecture that could be inside this black box doing the computing? If you have a “circuit size budget” — like the traveling salesperson’s travel budget — is there a circuit small enough to fit within your budget that could do what the black box does? These questions became known as the minimum circuit size problem (MCSP). Once these questions had been asked, the next one was: What’s the computational complexity of MCSP itself? 

This is another form of meta-complexity: a question about the complexity of a problem that is itself about complexity. And this time, there’s a known answer. MCSP (at least the second version of it, asking about circuits smaller than a certain size) is in NP: it’s easy to confirm that a solution is correct, but there doesn’t seem to be a general solution to the problem other than a brute-force search. But is MCSP NP-complete? Is it as hard as the hardest problems in NP, like TSP is, and would a fast way of solving it — like solving TSP — mean proving all problems in NP are actually in P? MCSP “seems to really capture that kind of flavor of an unstructured search space — circuits that don’t necessarily have much to do with each other — so shouldn’t you be able to show that not only is MCSP contained in NP, but it is one of the hardest problems in NP, it is NP-complete?” said Marco Carmosino, research scientist at IBM, last year. “It is 2023 and we still have not proved that MCSP is NP-[complete].”

These two forms of meta-complexity — questions about the difficulty of proofs about complexity classes, and questions about the complexity of problems about complexity — are linked. The first kind of meta-complexity, about the difficulty of proofs about complexity, has roots stretching as far back as the work of legendary logician Kurt Gödel in the mid-20th century, as well as the origins of modern logic and meta-mathematics around the turn of the 20th century, in the generations immediately preceding Gödel. But starting in the 1970s — not long after the first formal introduction of the P = NP question — and continuing ever since, computer scientists started proving rigorous results about why such problems were difficult to solve. These “barrier” proofs showed that many common proof techniques used in computer science simply could not solve questions like P vs. NP. Going back to the analogy of fixing the leaky faucet, these barrier proofs would be like finding out that using a screwdriver or a wrench at all would doom you to failure.

But while barrier proofs could be seen as disheartening, they were also informative: they told computer scientists that they would be wasting their time to attempt a solution using those tools, and that any real solution to the problem must lie elsewhere. As work continued over the following decades, computer scientists found further barriers and proofs. But recently, examining the structure of those barriers has led to a burst of activity in meta-complexity, with new results making progress toward old problems like whether P = NP, as well as revealing unexpected connections within the field. Computer scientists working in meta-complexity have not only shown links between various measures of complexity, but have also found deep connections between their own subfield and other areas of computer science, like learning theory and cryptography. “The scope and centrality of meta-complexity has dramatically expanded over the past 10-ish years or so, as breakthroughs show that cryptographic primitives and learning primitives end up being not just reducible to but equivalent to solutions to meta-computational problems. And that attracts attention — that attracts excitement. And the proof techniques are very cool,” said Carmosino, who was a research fellow with the Institute's Meta-Complexity program. “And so it’s very rich, what’s going on right now. A dense network of connections is all jelling together all at once. It's very exciting. … We can use [meta-complexity] as a tool to migrate techniques between these disparate areas of theoretical computer science and show that, really, the field is more unified than it looks.” And with the perspective afforded by meta-complexity, perhaps P vs. NP — the leaky faucet that has been dripping away in the heart of computer science for half a century — will, someday, yield to a solution.


Whitney Humanities Center

The meta-problem of consciousness.

The hard problem of consciousness is the problem of explaining how physical systems give rise to subjective experience. The hard problem typically contrasts with the easy problems of explaining behavior. However, there is one behavior with an especially close tie to the hard problem: people sometimes make verbal reports such as “consciousness is puzzling” and “there is a hard problem of consciousness.” The meta-problem of consciousness is the problem of explaining these reports. The meta-problem is strictly speaking an easy problem, and solving it is a tractable empirical project for cognitive scientists. At the same time, a solution will almost certainly have consequences for the hard problem of consciousness. In this talk I will lay out the meta-problem research program, I will examine some recent experimental evidence, and I will evaluate some potential solutions.

David Chalmers is University Professor of Philosophy and Neural Science and codirector of the Center for Mind, Brain, and Consciousness at New York University. He is the author of The Conscious Mind (1996), Constructing the World (2012), and Reality+: Virtual Worlds and the Problems of Philosophy (2022).

The spring 2022 Shulman Lectures have been organized in conjunction with the Yale College seminar “Metaphysics Meets Cognitive Science” taught by Brian Scholl (Psychology) and L. A. Paul (Philosophy).


Meta Analysis: definition, meaning and steps to conduct


Meta-analysis: This article explains the concept of meta-analysis in a practical way. The article begins with an introduction to this concept, followed by a definition and a general explanation. You will also find a practical example and tips for conducting a simple analysis yourself. Enjoy reading!

What is a meta-analysis?

Have you ever wondered how doctors and researchers often make the right decisions about complex (medical) treatments? A powerful tool they use is the so-called meta-analysis. With this approach, they combine the results of multiple scientific studies to get a clearer picture of the overall effectiveness of a treatment.

Definition and meaning

But what exactly is meta-analysis? It’s a research process that systematically brings together the findings of individual studies and uses statistical methods to calculate an overall, pooled effect.


It’s not just about merging data from smaller studies to increase sample size. Analysts also use systematic methods to account for differences in research approaches, treatment outcomes, and sample sizes.

For example, they also test the sensitivity and validity of their results with respect to their own research protocols and statistical analyses.

Admittedly, that sounds difficult. It can also be described as putting puzzle pieces together to see the bigger picture. According to experts, scientists are often confronted with valuable but sometimes contradictory results in individual studies.

Meta-analyses play an important role in putting these puzzle pieces together and combining the findings of multiple studies to provide a more complete understanding.

Because it combines several scientific studies, a meta-analysis is often considered the most comprehensive form of scientific evidence. This creates more confidence in the conclusions drawn, as a larger body of research is considered.

A practical example

Imagine this: there are several studies examining the same medical treatment, and each study reports slightly different results due to some degree of error.

Meta-analysis helps the researcher by combining these results to get closer to the truth.

By using statistical approaches, an estimated mean can be derived that reflects the common effect observed in the studies.
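
As a rough illustration of that pooling step, the following sketch (with invented numbers) applies the inverse-variance weighting used in a fixed-effect meta-analysis: each study's effect estimate is weighted by the inverse of its squared standard error, and a confidence interval for the combined effect follows from the summed weights.

```python
import math

# Hypothetical effect estimates (e.g., standardized mean differences) and
# standard errors for four studies; a real meta-analysis extracts these
# from each published paper.
effects = [0.42, 0.31, 0.55, 0.28]
std_errors = [0.10, 0.15, 0.20, 0.12]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2.
weights = [1.0 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled effect (normal approximation).
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```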

Steps in conducting a meta-analysis

Meta-analyses are usually preceded by a systematic review, as this helps identify and assess all relevant facts. It is an extremely precise and complex process, which is almost exclusively performed in a scientific research setting.

The general steps are as follows:

  • Formulating the research question, for example by using the PICO model
  • Searching the literature
  • Selecting studies on a well-defined topic according to predefined criteria
  • Deciding whether to include unpublished studies to avoid publication bias
  • Determining which dependent variables or summary measures are allowed
  • Selecting the right model, for example a fixed-effect or random-effects meta-analysis
  • Investigating sources of heterogeneity between studies, for example by meta-regression or by subgroup analysis
  • Following formal guidelines for conducting and reporting the analysis, as described in the Cochrane Handbook
  • Using reporting guidelines

By following these steps, meta-analyses can be performed to obtain reliable summaries and conclusions from a wide range of research data.

Meta-analyses offer valuable advantages.

First, a meta-analysis provides an estimate of the unknown effect size, which helps us understand how effective a treatment really is.

It also allows us to compare and contrast results from different studies. It helps identify patterns across findings, uncover sources of disagreement, and reveal connections that emerge only when multiple studies are analyzed together.

However, like any research method, meta-analysis also has its limitations. A concern is possible bias in individual studies due to questionable research practices or publication bias.

If such biases are present, the overall treatment effect calculated via this type of analysis may not reflect the true efficacy of a treatment.

Another challenge lies in dealing with heterogeneous studies.

Each study can have its own unique characteristics and produce different results. When we average these differences in a meta-analysis, the result may not accurately represent a specific group studied.

It’s like averaging the weight of apples and oranges – the result may not accurately represent both the apples and the oranges.
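
One common way to check how serious this apples-and-oranges problem is for a given set of studies is to compute Cochran's Q and the I² statistic, which estimate how much the study results disagree beyond what sampling error alone would produce. The sketch below reuses the invented numbers from the pooling example above and is illustrative only.

```python
# Hypothetical per-study effects and standard errors (the same illustrative
# numbers as the pooling sketch above).
effects = [0.42, 0.31, 0.55, 0.28]
std_errors = [0.10, 0.15, 0.20, 0.12]
weights = [1.0 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q measures how far the individual studies scatter around the
# pooled estimate; I^2 expresses the share of that scatter not explained
# by sampling error alone (0% = homogeneous, larger = more heterogeneous).
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} degrees of freedom, I^2 = {i_squared:.1f}%")
```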

This means that researchers must make careful choices during the analysis process, such as how to search for studies, which studies to select based on specific criteria, how to handle incomplete data, how to analyze the data, and how to take publication bias into account.

Despite these challenges, meta-analysis remains a valuable tool in evidence-based research.

It is often an essential part of systematic reviews, where multiple studies are extensively analyzed. By combining evidence from different sources, it provides a more comprehensive insight into the effectiveness of medical treatments, for example.

Meta-analysis in psychology

Meta-analysis plays an important role in various fields, including psychology. It provides value primarily through its ability to bring together results from different studies.

Imagine there are many little puzzle pieces of information scattered across different studies. Meta-analysis helps us put all those pieces together and get a complete picture.

It helps psychologists discover patterns and trends and draw more reliable conclusions about certain topics, such as the effectiveness of a treatment or the relationship between certain factors.




Ben Janse




What is CRM?

Manage, track, and store information related to potential customers using a centralized, data-driven software solution.

Defining CRM

Customer relationship management (CRM) is a set of integrated, data-driven software solutions that help manage, track, and store information related to your company’s current and potential customers. By keeping this information in a centralized system, business teams have access to the insights they need, the moment they need them.

Without the support of an integrated CRM solution, your company may miss growth opportunities and lose potential revenue because it’s not optimizing operating processes or making the most of customer relationships and sales leads.

What does a CRM do?

Not too long ago, companies tracked customer-related data with spreadsheets, email, address books, and other siloed, often paper-based CRM solutions. A lack of integration and automation prevented people within and across teams from quickly finding and sharing up-to-date information, slowing their ability to create marketing campaigns, pursue new sales leads, and service customers.

Fast forward to today. CRM systems automatically collect a wealth of information about existing and prospective customers. This data includes email addresses, phone numbers, company websites, social media posts, purchase histories, and service and support tickets. The system next integrates the data and generates consolidated profiles to be shared with appropriate teams.
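
As a loose illustration of what a consolidated profile can look like, the following sketch merges records arriving from different channels into a single profile per customer, keyed here by email address. The field names and records are invented and do not reflect any particular CRM product's schema.

```python
from dataclasses import dataclass, field

# Illustrative only: a toy customer profile, not a vendor's data model.
@dataclass
class Profile:
    email: str
    name: str = ""
    phones: set = field(default_factory=set)
    purchases: list = field(default_factory=list)
    tickets: list = field(default_factory=list)

def consolidate(records):
    """Merge raw records from different channels into one profile per
    customer, keyed by email address."""
    profiles = {}
    for rec in records:
        profile = profiles.setdefault(rec["email"], Profile(email=rec["email"]))
        profile.name = rec.get("name") or profile.name
        if rec.get("phone"):
            profile.phones.add(rec["phone"])
        profile.purchases.extend(rec.get("purchases", []))
        profile.tickets.extend(rec.get("tickets", []))
    return profiles

records = [
    {"email": "ada@example.com", "name": "Ada L.", "phone": "555-0100"},  # web form
    {"email": "ada@example.com", "purchases": ["starter plan"]},          # commerce
    {"email": "ada@example.com", "tickets": ["login issue"]},             # support
]
print(consolidate(records)["ada@example.com"])
```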

CRM systems also connect with other business tools, including online chat and document sharing apps. In addition, they have built-in business intelligence and artificial intelligence (AI) capabilities that accelerate administrative tasks and provide actionable insights.

In other words, modern CRM tools give sales, marketing, commerce, field service, and customer service teams immediate visibility into—and access to—everything crucial to developing, improving, and retaining customer relationships.

Some ways you can use CRM capabilities to benefit your company are to:

  • Monitor each opportunity through the sales funnel for better sales. CRM solutions help track lead-related data, accompanied by insights, so sales and marketing teams can stay organized, understand where each lead is in the sales process, and know who has worked on each opportunity.
  • Use sales monitoring to get real-time performance data. Link sales data into your CRM solution to provide an immediate, accurate picture of sales. With a real-time view of your pipeline, you’ll be aware of any slowdowns and bottlenecks—or if your team won a major deal.
  • Plan your next step with insight generation. Focus on what matters most using AI and built-in intelligence to identify the top priorities and how your team can make the most of their time and efforts. For example, sales teams can identify which leads are ready to hand off and which need follow-up.
  • Optimize workflows with automation. Build sales quotes, gather customer feedback, and send email campaigns with task automation, which helps streamline marketing, sales, and customer service. This helps eliminate repetitive tasks so your team can focus on high-impact activities.
  • Track customer interactions for greater impact. CRM solutions include features that tap into customer behavior and surface opportunities for optimization to help you better understand engagement across various customer touchpoints.
  • Connect across multiple platforms for superior customer engagement. Whether through live chat, calls, email, or social interactions, CRM solutions help you connect with customers where they are, helping build the trust and loyalty that keeps your customers coming back.
  • Grow with agility and gain a competitive advantage. A scalable, integrated CRM solution built on a security-rich platform helps meet the ever-changing needs of your business and the marketplace. Quickly launch new marketing, e-commerce, and other initiatives and deliver rapid responses to consumer demands and marketplace conditions.

Why implement a CRM solution?

As you define your CRM strategy and evaluate customer relationship management solutions, look for one that provides a complete view of each customer relationship. You also need a solution that collects relevant data at every customer touchpoint, analyzes it, and surfaces the insights intelligently.

Learn how to choose the right CRM for your needs in The CRM Buyer’s Guide for Today’s Business. With the right CRM system, your company can enhance communications and ensure excellent experiences at each stage of the customer journey, as outlined below:

  • Identify and engage the right customers. Predictive insight and data-driven buyer behavior help you learn how to identify, target, and attract the right leads—and then turn them into customers.
  • Improve customer interaction. With a complete view of the customer, every member of the sales team will know a customer’s history, purchasing patterns, and any specific data that’ll help your team provide the most attentive service to each individual customer.
  • Track progress across the customer journey. Knowing where a customer is in your overall sales lifecycle helps you target campaigns and opportunities for the highest engagement.
  • Increase team productivity. Improved visibility and streamlined processes help increase productivity, helping your team focus on what matters most.

How can a CRM help your company?

Companies of all sizes benefit from CRM software. For small businesses seeking to grow, CRM helps automate business processes, freeing employees to focus on higher-value activities. For enterprises, CRM helps simplify and improve even the most complex customer engagements.

Take a closer look at how a CRM system helps benefit your individual business teams.

Marketing teams

Improve your customers’ journey. With the ability to generate multichannel marketing campaigns, nurture sales-ready leads with targeted buyer experiences, and align your teams with planning and real-time tracking tools, you’re able to present curated marketing strategies that’ll resonate with your customers.

As you gain insights into your brand reputation and market through customized dashboards of data analysis, you’re able to prioritize the leads that matter most to your business and adapt quickly with insights and business decisions fueled by the results of targeted, automated processes.

Sales teams

Empower sellers to engage with customers to truly understand their needs, and effectively win more deals. As the business grows, finding the right prospects and customers with targeted sales strategies becomes easier, resulting in a successful plan of action for the next step in your pipeline.

Building a smarter selling strategy with embedded insights helps foster relationships, boost productivity, accelerate sales performance, and innovate with a modern and adaptable platform. And by using AI capabilities that can measure past and present leading indicators, you can track customer relationships from start to finish and automate sales execution with contextual prompts that deliver a personalized experience and align with the buyer’s journey anytime, anywhere.

Customer service teams

Provide customers with an effortless omnichannel experience. With the use of service bots, your customer service teams will have the tools to deliver value and improve engagement with every interaction. By offering personalized services, agents can upsell or cross-sell using relevant, contextual data and, drawing on feedback, surveys, and social listening, optimize their resources based on real-time service trends.

By delivering guided, intelligent service supported across all channels, you make it easy for customers to connect with agents and quickly resolve their issues, resulting in a first-class customer experience.

Field service teams

Empower your agents to create a better in-person experience. By integrating the Internet of Things (IoT) into your operations, you’re able to detect problems faster and automate work orders, scheduling, and technician dispatch in just a few clicks. By streamlining scheduling and inventory management, you can boost onsite efficiency, deliver a more personalized service, and reduce costs.

Transparent communication, including real-time technician location tracking, appointment reminders, quotes, contracts, and scheduling information, keeps customers connected to your field agents and builds trust in your business.

Project service automation teams

Improve your profitability with integrated planning tools and analytics that help build your customer-centric delivery model. By gaining transparency into costs and revenue using robust project planning capabilities and intuitive dashboards, you’re able to anticipate demand, determine resource capacity, and forecast project profitability.

And with the ability to measure utilization with real-time dashboards, you can empower your service professionals to apply those insights to their own workflows and optimize resources at any given time. With visibility into those insights, teams are more likely to simplify processes internally, seamlessly collaborate, and increase productivity.

Why use Dynamics 365 for your CRM solution?

With Dynamics 365 , you get a flexible and customizable solution suited to your business requirements. Choose a standalone application to meet the needs of a specific line of business or use multiple CRM applications that work together as a powerful, comprehensive solution.


It’s the 21st Century Energy Economy, Stupid—Undercut by Dysfunction


Dysfunctional governance and contentious politics could uproot the American economic transition centered on big data and clean energy. The growth of artificial intelligence, data centers, and electric vehicles is contingent on the expansion of the underlying infrastructure.

Decarbonization is happening, led by consumer demand and public policies. However, it won’t be easy, and it must balance climate goals with economic growth. New industries will spark American commerce, requiring all hands on deck — from industry to policymakers to local communities.

“I think the good news is that policymakers are drawing a straight line between power availability and maintaining U.S. leadership in technology like artificial intelligence,” says Christopher Wellise, vice president for sustainability at Equinix, during a United States Energy Association briefing in which I took part. “And they've noted that if we fail to solve for power availability, we risk ceding this leadership to foreign adversaries who can and will solve this.”

Artificial intelligence, whose queries each consume roughly 10 times the energy of a Google search, will only grow—to as much as 10% of the nation’s electricity. “We’re hearing data service requests for four to five gigawatts for co-located facilities. That's a significant requirement in terms of additional supply and delivery capacity that has to be planned, sited, permitted, and constructed,” says Daniel Brooks, with the Electric Power Research Institute. However, there is not enough grid capacity.
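
Some rough arithmetic shows why a four-to-five-gigawatt request is significant: run continuously, a campus of that size would draw tens of terawatt-hours a year, on the order of one percent of total U.S. generation (assumed below to be roughly 4,000 TWh annually).

```python
# Rough, illustrative arithmetic only. Assumes a hypothetical 4.5 GW
# co-located data center campus running around the clock, and roughly
# 4,000 TWh of total annual U.S. electricity generation (an assumed
# order-of-magnitude figure used here only for scale).
load_gw = 4.5                                  # midpoint of the 4-5 GW request
hours_per_year = 8760
annual_twh = load_gw * hours_per_year / 1000   # GWh -> TWh
us_generation_twh = 4000                       # assumption
share_pct = annual_twh / us_generation_twh * 100
print(f"{annual_twh:.0f} TWh per year, about {share_pct:.1f}% of U.S. generation")
```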

That phenomenon coincides with decarbonization efforts and the pursuit of carbon neutrality goals by 2050. To that end, renewable energy capacity grew by 250,000 megawatts globally over the last decade. Meanwhile, U.S. policymakers want electric vehicles to reach 50% of all new sales by 2030.


The decreased use of fossil fuels means increased consumption of clean electricity, which could skyrocket from 20% to 60% by 2050 in this country. But that will require an expansion of the central transmission system. Currently, roughly 2.6 million megawatts of proposed generation and storage are waiting to connect to the U.S. grid—more than double the current fleet of power generators. Ninety-five percent of the generation in the queue is solar, wind, and battery storage, according to the Lawrence Berkeley National Laboratory.

Setting Realistic Goals


When I covered the energy transition in the late 1990s, utilities didn’t even acknowledge that climate change was a thing. In 2012, I wrote a column asking “what if” climate change is real and urging stakeholders to make small investments to avoid catastrophe later. Many readers viscerally responded.

Much has changed. Power companies smell opportunity—to sell more electricity, as long as they can reliably deliver their product, especially as their loads have evolved to include big data. “We're growing about 10% a year just in organic growth. And then you add the data centers that are coming in. They are basically doubling or even tripling our overall size,” says David Naylor, president of the Rayburn Electric Cooperative.

Investor-owned utilities are stepping up, too. American Electric Power, CenterPoint Energy, and Duke Energy are among the utilities that are shedding coal-fired power and replacing it with renewables and natural gas. Overall, 70% of the largest U.S. electric and gas utilities have net-zero goals.

If policymakers set realistic aims, power producers will come through, says Todd Snitchler, president of the Electric Power Supply Association. However, they must “recognize the delta between their aspirational policy goals and the operational realities of the system. If they set the guardrails and allow industry the opportunity to perform, we have demonstrated over a century that we can meet the expectations.”

We need more transmission and natural gas pipelines to keep the economy running. The U.S. Department of Energy says the grid may need to expand by 60% by 2030 and triple by 2050 to meet clean energy demands. Meanwhile, the Interstate Natural Gas Association of America said the country must have 24,000 miles of new natural gas pipelines by 2035. Is this doable, especially when building new infrastructure can take a decade?

What Problem Are We Solving?


Luckily, high-tech enterprises like Google, Meta, and Microsoft are sinking money into on-site generation—even carbon-free small modular nuclear reactors. Those kinds of investments are required to commercialize small nuclear projects and for global economies to hit their net-zero goals.

“If the data center issues that we've talked about are the catalyst for really advancing our understanding of nuclear, I think that's a great contribution,” says Jim Robb, president of North American Electric Reliability Corporation.

The Biden Administration aims to reduce greenhouse gas emissions by 40% by 2030 from a 2005 baseline—a key reason behind the Inflation Reduction Act, which is set to kick in $369 billion for 21st-century energy and climate projects. However, these shifting sands, coupled with political infighting, create a perilous environment.

Dan Brouillette, chief executive of the Edison Electric Institute, says utilities want to reduce emissions. He insists, though, that natural gas is critical to stabilizing the grid and fulfilling the economic promise of artificial intelligence and big data.

“What's the problem we're solving? Is it national security? Is it climate change? Is it a 2015 Paris goal? That's an important question for all of us to ask,” says Brouillette. “Some of the dysfunction is that policymakers come to these questions with very different views on the priority. If we coalesce around one of those and allow the industry to work, it will produce the solutions that we need as Americans and citizens of the world.”

It’s not a post-World War II economy. It’s the 21st century, complete with artificial intelligence, internet enterprises, and electric vehicles, requiring modern infrastructure and collaboration. Dysfunctional governance is a threat. However, success would bring enormous rewards—millions of jobs and much cleaner air.

Ken Silverstein


Do We Need Language to Think?

A group of neuroscientists argue that our words are primarily for communicating, not for reasoning.


By Carl Zimmer

For thousands of years, philosophers have argued about the purpose of language. Plato believed it was essential for thinking. Thought “is a silent inner conversation of the soul with itself,” he wrote.

Many modern scholars have advanced similar views. Starting in the 1960s, Noam Chomsky, a linguist at M.I.T., argued that we use language for reasoning and other forms of thought. “If there is a severe deficit of language, there will be severe deficit of thought,” he wrote.

As an undergraduate, Evelina Fedorenko took Dr. Chomsky’s class and heard him describe his theory. “I really liked the idea,” she recalled. But she was puzzled by the lack of evidence. “A lot of things he was saying were just stated as if they were facts — the truth,” she said.

Dr. Fedorenko went on to become a cognitive neuroscientist at M.I.T., using brain scanning to investigate how the brain produces language. And after 15 years, her research has led her to a startling conclusion: We don’t need language to think.

“When you start evaluating it, you just don’t find support for this role of language in thinking,” she said.

When Dr. Fedorenko began this work in 2009, studies had found that the same brain regions required for language were also active when people reasoned or carried out arithmetic.

