Center for Teaching Innovation


Self-Assessment

Self-assessment activities help students become realistic judges of their own performance and improve their work.

Why Use Self-Assessment? 

  • Promotes the skills of reflective practice and self-monitoring. 
  • Promotes academic integrity through student self-reporting of learning progress. 
  • Develops self-directed learning. 
  • Increases student motivation. 
  • Helps students develop a range of personal, transferable skills.

Considerations for Using Self-Assessment 

  • The difference between self-assessment and self-grading will need clarification. 
  • The process of effective self-assessment will require instruction and sufficient time for students to learn. 
  • Students are used to a system where they have little or no input in how they are assessed and are often unaware of assessment criteria. 
  • Students will want to know how much self-assessed assignments will count toward their final grade in the course. 
  • Incorporating self-assessment can motivate students to engage with the material more deeply. 
  • Self-assessment assignments can take more time. 
  • Research shows that students can be more stringent in their self-assessment than the instructor. 

Getting Started with Self-Assessment 

  • Identify which assignments and criteria are to be assessed. 
  • Articulate expectations and clear criteria for the task. This can be accomplished with a rubric. You may also ask students to complete a checklist before turning in an assignment.
  • Motivate students by framing the assignment as an opportunity to reflect objectively on their work, determine how this work aligns with the assignment criteria, and determine ways for improvement. 
  • Provide an opportunity for students to agree upon and take ownership of the assessment criteria. 
  • Draw attention to the inner dialogue that people engage in as they produce a piece of work. You can model this by talking out loud as you solve a problem, or by explaining the types of decisions you had to think about and make as you moved along through a project. 
  • Consider using an “exam wrapper” or “assignment wrapper.” These short worksheets ask students to reflect on their performance on the exam or assignment, how they studied or prepared, and what they might do differently in the future. Examples of exam and homework wrappers can be found through Carnegie Mellon University’s Eberly Center. 

Education Corner

Helping Students Thrive by Using Self-Assessment


As a teacher, when you design a lesson or unit, you design it with the hope that everything will go according to plan, your students will learn the content, and they’ll be ready to move on to the next concept. If you’ve been a teacher for more than a day or two, however, you know that this often isn’t the case.

Some students will pick up the information and quickly get bored while others will be lost and quickly fall behind. And sometimes, the lesson will fall flat and none of your students will understand much of anything.

Other times, a lesson will work really well with one group of students, but it will flop with another. This is all just par for the course with teaching, and you never know what you’re going to get on any given day.

Thankfully, there is a way you can make your lessons better, more achievable, and more appropriate for all students. The solution is to teach them how to use self-assessment.

Self-assessment is one of those “teach a man to fish” concepts: once students understand how to self-assess, they'll be better equipped to learn in all aspects of their lives. At its most basic, self-assessment is simple: students need to ask themselves:

  • What was I supposed to learn?
  • Did I learn it?
  • What questions do I still have?

This formative assessment helps students and teachers understand where they are in their learning. The more students practice this at your direction, and the more techniques they have for self-assessing, the more likely they are to do it on their own.

What does self-assessment look like?

Self-assessment can take many forms; it can be quick and informal, or it can be more structured and formal. In essence, though, self-assessment looks like students pausing to examine what they do and don't know. However, if you simply say, “OK, class, time to self-assess,” you'll likely be met with blank stares.

The more you’re able to walk students through strategies for self-assessment, the more they’ll understand the purpose, process, and value of thinking about their learning. For the best results to reach the most students, aim to incorporate different types of self-assessment, just as you aim to incorporate different ways of teaching into your lessons.

Why self-assessment works

One of the reasons self-assessment is so effective is because it helps students stay within their zone of proximal development when they’re learning. In this zone, students are being challenged, which means they’re learning, but they’re not being pushed too hard into frustration.

The reason this is so helpful is because teachers can see anywhere from 15-150+ students every day, so it’s hard for a teacher to know where every single student is at in his or her learning. Without stopping for self-assessment, it’s easy for a teacher to move on before students are ready or to belabor a concept students mastered days ago.

When students are able to self-assess, they take control of their learning and realize when they need to ask more questions or spend more time working on a concept. Self-assessment that is relayed back to the teacher, either formally or informally, helps the teacher get a better idea of where students are at with their learning.

Another benefit of self-assessment is that students tend to take more ownership and find more value in their learning, according to a study out of Duquesne University. According to the study, formative assessments like self-assessment “give students the means, motive, and opportunity to take control of their own learning.” When teachers give students those opportunities, they empower their students and help turn them into active, rather than passive learners.

Self-assessment also helps students practice learning independently, which is a key skill for life, and especially for students who are pursuing higher education.

How to execute self-assessment

To truly make this part of your classroom, you’ll need to explain to students what you’re doing, why you’re doing it, and you’ll need to hold them accountable for their self assessment. The following steps can help you successfully set up self-assessment in your classroom.

Step 1: Explain what self-assessment is and why it’s important

Sometimes teachers have a tendency to surprise students with what’s coming next or to not explain the reasoning behind a teaching strategy or decision. While this is often done out of a desire for control and power as the leader of the classroom, it doesn’t do much to help students and their learning.

If students don’t understand why they’re doing what they’re doing, they usually won’t do it at all, or they’ll do just the bare minimum to go through the motions and get the grade. If students don’t understand the purpose of a learning strategy, they often see it as busy work. Most students are very used to being assessed only by their teachers, so they may not understand why they’re suddenly being asked to take stock of their own learning.

Make sure you take the time to explain why you’re implementing this new learning strategy and how it is going to directly benefit them. That explanation is going to vary based on the age of your students and other factors, but you can give students some variation of the explanation of why self-assessment works above.

Step 2: Always show a model

As you scroll down, you’ll see some examples of ways to use self-assessment; each time you try one of these techniques, be sure to create an exemplar model for your students. If you want this to work, students need to know what the goal they’re working toward looks like.

Depending on the type of self-assessment you’re working with, a simple model might be enough, or students might need to practice with the work of others. A low stakes way to start this out is with examples from past students. Pull out an old project from years past and have students assess the project as if it were their own.

Once students learn how to be respectful and constructive with this peer assessment, they can practice with the peers in their class. Including this step often makes it easier for students to assess their own work. It can be hard to look back at your own work or thought process, especially if not much time has passed since you did the work.

Step 3: Teach students different strategies of self-assessment

We all learn best by doing, so rather than just giving students a list of self-assessment strategies, take your time walking through different strategies together. Also remember that the strategy that works best for Jimmy might not work well for Susan, so the more you can diversify self-assessment for your students, the more students you’re going to be able to reach.

Try starting with just one type of self assessment, give students time to master that type, then add another type. As time goes on, you can offer students choice in the type of self-assessment they want to use.

Step 4: Practice

Before you ask students to actively assess their own work, let them practice with some low stakes examples. It’s hard for many people to critique themselves and to recognize they have room for improvement, yet it’s essential.

Give students some examples of work from past students (names always removed) and walk through “self” assessment with those examples together as a class.

Step 5: Create a way to hold students accountable

Self-assessment shouldn’t always be tied to a grade, but students will catch on quickly if you’re not somehow holding them accountable. There are many ways to do this, for example:

  • Conference with each student throughout the process
  • Make self-assessment part of the final grade for a project or unit
  • Create a self-assessment reward chart

The important thing to remember with holding students accountable for their self-assessment is that you should be holding them accountable for doing the self-assessment, but not for what they do or don’t know, nor for the changes they make based on their self-assessment.

Step 6: Don’t stop

Sometimes we have a tendency to try a strategy once or twice and then let it slide as the school year goes on, but as students learn that they’re no longer being held accountable, they will stop. You can’t ever assume a student will keep using a strategy unless you give them explicit instructions and hold them accountable.

Remember that as with anything, students will get better at self-assessment the more they practice it. The more you explicitly assign self-assessment, the more it will become a normal part of the learning process.

Examples of self-assessment

Remember that it’s good to use a variety of self-assessment strategies so all students have a chance to find a style that works best for them. Any time you introduce a new strategy or assign self-assessment, be very clear about what students should do and how they should do it.

The strategies we suggest are broken down by age, but always use your best judgment regarding which strategies will be best for your students.

KWL chart: Before starting a lesson or unit, have students write or say what they already know (K) and what they want to know (W) about the topic. After the lesson or unit, they write or say what they learned (L). This can easily evolve into larger discussions and assignments.

Goals: At the end of each lesson, day, week, etc. students write one learning goal they would like to achieve. This can be very open-ended, or it could be very focused, asking students to reflect on one specific subject or topic. You can expand on this by having students return to their goal to see if they met it, encouraging them to ask for help if they haven’t met their goal.

Red, yellow, green: Give each student three circles: one red, one yellow, and one green. Throughout the school day, students place their red circle on their desk if they’re lost or confused, yellow if they’re struggling a little bit, and green if they understand and are good to go. You can also stop and check students’ understanding by asking them to hold up a color. Some students feel shy about admitting they’re confused, so this strategy can also work really well if you have students place their heads down before holding up their circle.

Objective check: In the morning, give students a list of objectives you will cover in school today. Have each student write down an objective they would really like to learn today. At the end of the day, students return to the objective and determine whether they learned it or not.

Tricky spots: Work with students to identify where they struggle (for example, “I have trouble with word problems in math,” or “I have trouble spelling new words”). When starting a new lesson or unit, have each student identify one tricky spot they want to focus on. Be sure to check in with students often on their tricky spot to make sure they are making progress and not getting frustrated.

Highlighting: Have students go back to a writing assignment, worksheet, or project and highlight the section that they think was their best work. As an extension, have them explain why this was their best work. This is an excellent strategy to use with students who struggle or lack confidence in their work.

Self reflection: After a speech or presentation, have students write down three things they did well and one thing they can improve on. Extend this by returning to these during the next speech or presentation; you could even make them part of the rubric for the next assignment.

Exit tickets: Before students can leave the room, they must fill out an exit ticket and hand it to the teacher. You might ask them to write one thing they learned today and one thing they want to learn tomorrow, for example.

Think, pair, share: Pose a reflective question or prompt to students, for example you might tell them to think about or even write down the most important thing they learned in class today. Next, have them pair with a partner or small group to discuss their answer to the question or prompt, and finally, have students report back to the whole class.

Grades 9-12

Rubrics: Before completing a project, give students the rubric you will use to grade their effort. Have students complete a draft of the project and assess themselves using the rubric. After they do this, you might conference with them, give them feedback, or have them complete a reflective assignment. Then, have students complete a second draft that they will turn in for their grade (or to continue to work and improve upon).

Writing conferences: After students write an outline or first draft of an essay, hold an individual conference with each student. Before you provide your input, have students identify the strengths and weaknesses of their work. Use their self-assessment to guide what you discuss during the conference. You might even find that students are more critical of themselves than you would have been.

Empty rubrics: At the beginning of a project, leave a space on the rubric empty. Help each student fill in the empty spot with something they need to work on, whether it’s a strength they want to develop further or a weakness they want to improve.


Student Self-Assessment

Self-assessments encourage students to reflect on their growing skills and knowledge, learning goals and processes, products of their learning, and progress in the course. Student self-assessment can take many forms, from low-stakes check-ins on their understanding of the day’s lecture content to self-assessment and self-evaluation of their performance on major projects. Student self-assessment is also an important practice in courses that use alternative grading approaches. While the foci and mechanisms of self-assessment vary widely, at their core the purpose of all self-assessment is to “generate feedback that promotes learning and improvements in performance” (Andrade, 2019). Fostering students’ self-assessment skills can also help them develop an array of transferable lifelong learning skills, including:

  • Metacognition: Thinking about one’s own thinking. Metacognitive skills allow learners to “monitor, plan, and control their mental processing and accurately judge how well they’ve learned something” (McGuire & McGuire, 2015).
  • Critical thinking: Carefully reasoning about the evidence and strength of evidence presented in support of a claim or argument.
  • Reflective thinking: Examining or questioning one’s own assumptions, positionality, the basis of one’s beliefs, growth, etc.
  • Self-regulated learning: Setting goals, checking in on one’s own progress, reflecting on what learning or study strategies are working well or not so well, being intentional about where/when/how one studies, etc.

Students' skills to self-assess can vary, especially if they have not encountered many opportunities for structured self-assessment. Therefore, it is important to provide structure, guidance, and support to help them develop these skills over time.

  • Create a supportive learning environment so that students feel comfortable sharing their self-assessment experiences (Create a Supportive Course Climate).
  • Foster a growth mindset in students by using strategies that show students that abilities can be grown through hard work, effective strategies, and help from others when needed (Fostering Growth Mindset; Identifying teaching behaviors that foster growth mindset classroom cultures).
  • Set clear, specific, measurable, and achievable learning outcomes so that students know what is expected of them and can better assess their progress (Creating and Using Learning Outcomes).
  • Explain the concept of self-assessment and some of the benefits (above).
  • Provide students with specific prompts and/or rubrics to guide self-assessment (Assessing Student Learning with Rubrics).
  • Provide clear instructions (see an example under Rubrics below).
  • Encourage students to make adjustments to their learning strategies (e.g., retrieval, spacing, interleaving, elaboration, generation, reflection, calibration; Make It Stick, pp. 200-225) and/or set new goals based on their identified areas for improvement.

Self-Assessment Techniques

The sections below describe techniques you can use to engage students in self-assessment; decide which would work best for your context.

Embedded Self-Assessment Prompts

To foster self-assessment as part of students’ regular learning practice, you can embed prompts directly into your formative and summative assignments and assessments. For example:

  • What do you think is a fair grade for the work you have handed in, and why do you think so?
  • What did you do best in this task?
  • What did you do least well in this task?
  • What did you find was the hardest part of completing this task?
  • What was the most important thing you learned in doing this task?
  • If you had more time to complete the task, what (if anything) would you change, and why?

Reflective Writing

Providing students the opportunity to regularly engage in writing that allows them to reflect on their learning experiences, habits, and practices can help students retain learning, identify challenges, and strengthen their metacognitive skills. Reflective writing may take the form of short writing prompts related to assignments (see Embedded Self-Assessment Prompts above and Classroom Assessment Techniques) or writing more broadly about recent learning experiences (e.g., What? So What? Now What? Journals). Reflective writing is a skill that takes practice and is most effective when done regularly throughout the course (Using Reflective Writing to Deepen Student Learning).

Rubrics

Rubrics are an important tool to help students self-assess their work, especially for self-assessment that includes multiple prompts about the same piece of work. If you’re providing a rubric to guide self-assessment, it is important to also provide instructions on how to use the rubric.

Suppose students are using a rubric (e.g., a grading rubric for written assignments (docx)) to self-assess a draft essay before turning it in or making revisions. As part of that process, you want them to assess their use of textual evidence to support their claim. Here are example instructions you could provide (adapted from Beard, 2021):

To self-assess your use of textual evidence to support your claim, please follow these steps:

  • In your draft, highlight your claim sentence and where you used textual evidence to support your claim
  • Based on the textual evidence you used, circle your current level of skill on the provided rubric
  • Use the information on the provided rubric to list one action you can take to make your textual evidence stronger

Self-Assessment Surveys

Self-assessment surveys can be helpful if you are asking students to self-assess their skills, knowledge, attitudes, and/or the effectiveness of study methods they used. These may take the form of 2-3 free-response questions or a questionnaire where students rate their agreement with a series of statements (e.g., “I am skilled at creating formulas in Excel”, “I can define ‘promissory coup’”, “I feel confident in my study skills”). A Background Knowledge Probe administered at the very beginning of the course (or when starting a new unit) can help you better understand what students already know (or don’t know) about the class subject. Self-assessment surveys administered over time can help you and students assess their progress toward meeting defined learning outcomes (and provide you with feedback on the effectiveness of your teaching methods). Student Assessment of their Learning Gains is a free tool that you can use to create and administer self-assessment surveys for your course.

Wrappers

Wrappers are tools that learners use after completing and receiving feedback on an exam or assignment (exam and assignment wrappers, post-test analysis) or even after listening to a lecture (lecture wrappers). Instead of focusing on content, wrappers focus on the process of learning and are designed to provide students with a chance to reflect on their learning strategies and plan new strategies before the next assignment or assessment. The Eberly Center at Carnegie Mellon provides multiple examples of exam, homework, and paper wrappers for several disciplines.

References:

Andrade, H. L. (2019). A critical review of research on student self-assessment. Frontiers in Education, 4, Article 87.

Beard, E. (2021, April 27). The importance of student self-assessment. Northwest Evaluation Association (NWEA).

Brown, P. C., Roediger, H. L., III, & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Cambridge, MA: Harvard University Press.

McGuire, S. Y., & McGuire, S. (2015). Teach students how to learn: Strategies you can incorporate into any course to improve student metacognition, study skills, and motivation. New York, NY: Routledge.

McMillan, J. H., & Hearn, J. (2008). Student self-assessment: The key to stronger student motivation and higher achievement. Educational Horizons, 87(1), 40–49.

Race, P. (2001). A briefing on self, peer and group assessment (pdf). LTSN Generic Centre, Assessment Series No. 9.

RCampus. (2023, June 7). Student self-assessments: Importance, benefits, and implementation.

Teaching (n.d.). Student self-assessment. University of New South Wales Sydney.

Further Reading & Resources: 

Bjork, R. (n.d.). Applying cognitive psychology to enhance educational practice. UCLA Bjork Learning and Forgetting Lab.

Center for Teaching and Learning (n.d.). Classroom Assessment Techniques. University of Colorado Boulder.

Center for Teaching and Learning (n.d.). Formative Assessments. University of Colorado Boulder.

Center for Teaching and Learning (n.d.). Student Peer Assessment. University of Colorado Boulder.

Center for Teaching and Learning (n.d.). Summative Assessments. University of Colorado Boulder.

Center for Teaching and Learning (n.d.). Summative Assessments: Types. University of Colorado Boulder.

University of Pittsburgh, University Center for Teaching and Learning

Self-Assessment

Self-assessments allow instructors to reflect upon and describe their teaching and learning goals, challenges, and accomplishments. The format of self-assessments varies and can include reflective statements, activity reports, annual goal setting and tracking, or the use of a tool like the Wieman Teaching Practices Inventory. Teaching Center staff can offer individual instructors feedback on their self-assessments and recommendations for how to use results to improve teaching. The Teaching Center can also help schools and departments select, design, and teach instructors to use self-assessment tools.

Sample Self-Assessment Tools

  • The Teaching Practices Inventory, a 72-item reflective, self-reporting tool developed by Carl Wieman and Sarah Gilbert, was created for instructors teaching undergraduate STEM courses. It helps instructors determine the extent to which they use research-based teaching practices.
  • The Teaching Perspectives Inventory, a 45-item inventory that can be used to determine your teaching orientation. This inventory can be a helpful tool for reflection and improvement of teaching. It can also help you prepare to write or revise a statement of teaching philosophy.
  • Instructor Self-Evaluation, created by the Measurement and Research Division of the Office of Instructional Resources at the University of Illinois Urbana-Champaign.
  • The Inventory of Inclusive Teaching Strategies, created by the University of Michigan’s CRLT.
  • Faculty Teaching Self-Assessment form, created by Central Piedmont Community College.
  • Faculty Self-Evaluation of Teaching, created by the University of Dayton, contains self-evaluation rubrics, a narrative self-evaluation form, and several series of reflective questions.

Resources and Readings for Self-Assessment

Blumberg, P. (2014). Assessing and improving your teaching: Strategies and rubrics for faculty growth and student learning. Jossey-Bass.

Collins, J. B., & Pratt, D. D. (2011). The Teaching Perspectives Inventory at 10 years and 100,000 respondents: Reliability and validity of a teacher self-report inventory. Adult Education Quarterly, 61(4), 358–375.

Holmgren, R. A. (2004, March 26). Structuring self-evaluations. Allegheny College.

Rico-Reintsch, K. I. (2019). Using faculty self-evaluation as an innovative tool to improve university courses. Revista CEA, 5(10), 69-81. doi:10.22430/24223182.1445

Wieman, C., & Gilbert, S. (2014). The Teaching Practices Inventory: A new tool for characterizing college and university teaching in mathematics and science. CBE Life Sciences Education, 13(3). doi:10.1187/cbe.14-02-0023

Education World
The Power of Reflection and Self-Assessment in Student Learning


Learning is so much more than facts. Facts can be memorized and forgotten. But real learning stays with you for life. It involves developing critical thinking skills, problem-solving abilities, and the capacity for self-improvement. Reflection and self-assessment are vital in deepening understanding, fostering growth, and enhancing student learning. 

Reflection Involves Contemplation and Self-Analysis

Reflection is thinking deeply about one's experiences, actions, and thoughts. When students focus on these, they connect theory and practice, and their learning takes on a whole new direction. Through reflection, students can better understand the underlying concepts, ideas, and principles they have encountered, leading to more profound subject matter comprehension.

Try one-minute essays. At the end of a lesson, ask your students to write down their thoughts for one minute. What did they struggle with? What were they good at? The simple act of writing down their thoughts will start a deeper self-analysis process.

By reflecting on their thinking, students can recognize their own strengths and weaknesses, leading to more effective learning strategies and problem-solving skills. When students are given the time and wherewithal to reflect, they develop accountability for their own learning process.

Self-Assessment Follows Self-Reflection

Self-assessment is closely linked to reflection and involves students evaluating their learning and performance. It empowers students to take ownership of their education by actively participating in the evaluation process. Through self-assessment, students develop a deep sense of responsibility and accountability for their progress, contributing to intrinsic motivation and a growth mindset. 

Within your grading rubric, allow your students to grade themselves. Did they feel like they gave their all? Could they have done better? Allowing your students the chance to be honest with their work will stimulate academic responsibility. 

By examining their work, students can identify their strengths and weaknesses, enabling them to set realistic goals and develop strategies to improve their learning outcomes. Self-assessment also encourages students to take risks and embrace challenges, as they see these as opportunities for growth rather than failures.

Show them the path to continuous improvement, where students are not afraid to make mistakes but view them as valuable learning experiences.

Combine the Two to Develop Critical Thinking Skills

How often do we ask our students to think critically? We need to ask ourselves whether they have actually developed those skills. Thankfully, one significant benefit of reflection and self-assessment is the development of critical thinking skills.

Critical thinking involves analyzing information, evaluating evidence, and making informed judgments. Through reflection, students are encouraged to question assumptions, challenge their own beliefs, and consider alternative perspectives.

By critically examining their experiences and knowledge, students can develop a deeper understanding of the subject matter and become more independent thinkers. Furthermore, they engage in higher-order thinking processes, such as analyzing, synthesizing, and evaluating. These skills are essential not only for academic success but also for lifelong learning and professional development.

Students Begin Looking at the Process, Rather than the Outcome

When students engage in reflection and self-assessment, they shift their focus from grades and external validation to the learning process. They begin to see challenges and setbacks as opportunities for growth and improvement rather than as indicators of failure. This mindset is a breeding ground for resilience, perseverance, and a love for learning.

Recently there has been a shift among high school seniors; they celebrate their college rejection letters, rejoicing in the fact that they put themselves out there and know their failure is only another opportunity for growth. 

Students become more willing to take risks, seek feedback, and embrace new challenges, knowing their abilities can be developed over time. When students can reflect on their learning experiences, they develop a deeper connection to the material. They become active participants in their own education rather than passive recipients of information.

And that, as educators, makes our hearts soar!

Motivation and Engagement Come Through Reflection and Self-Assessment

By assessing their progress and setting goals, students become more motivated to strive for excellence and take responsibility for their learning outcomes. Reflection also provides students with a sense of purpose and meaning, as they can see the relevance and application to real-life situations. This intrinsic motivation is a powerful driver for sustained engagement and continuous improvement both in and out of the classroom.

As educators, creating opportunities for students to reflect on their learning experiences and assess their progress is crucial. By doing so, we equip them with the necessary skills and mindset to become lifelong learners who can confidently and purposefully navigate the world's complexities.


Systematic Review Article

A Critical Review of Research on Student Self-Assessment


Educational Psychology and Methodology, University at Albany, Albany, NY, United States

This article is a review of research on student self-assessment conducted largely between 2013 and 2018. The purpose of the review is to provide an updated overview of theory and research. The treatment of theory involves articulating a refined definition and operationalization of self-assessment. The review of 76 empirical studies offers a critical perspective on what has been investigated, including the relationship between self-assessment and achievement, consistency of self-assessment and others' assessments, student perceptions of self-assessment, and the association between self-assessment and self-regulated learning. An argument is made for less research on consistency and summative self-assessment, and more on the cognitive and affective mechanisms of formative self-assessment.

This review of research on student self-assessment expands on a review published as a chapter in the Cambridge Handbook of Instructional Feedback (Andrade, 2018, reprinted with permission). The timespan for the original review was January 2013 to October 2016. A lot of research has been done on the subject since then, including at least two meta-analyses; hence this expanded review, in which I provide an updated overview of theory and research. The treatment of theory presented here involves articulating a refined definition and operationalization of self-assessment through a lens of feedback. My review of the growing body of empirical research offers a critical perspective, in the interest of provoking new investigations into neglected areas.

Defining and Operationalizing Student Self-Assessment

Without exception, reviews of self-assessment ( Sargeant, 2008 ; Brown and Harris, 2013 ; Panadero et al., 2016a ) call for clearer definitions: What is self-assessment, and what is not? This question is surprisingly difficult to answer, as the term self-assessment has been used to describe a diverse range of activities, such as assigning a happy or sad face to a story just told, estimating the number of correct answers on a math test, graphing scores for dart throwing, indicating understanding (or the lack thereof) of a science concept, using a rubric to identify strengths and weaknesses in one's persuasive essay, writing reflective journal entries, and so on. Each of those activities involves some kind of assessment of one's own functioning, but they are so different that distinctions among types of self-assessment are needed. I will draw those distinctions in terms of the purposes of self-assessment which, in turn, determine its features: a classic form-fits-function analysis.

What is Self-Assessment?

Brown and Harris (2013) defined self-assessment in the K-16 context as a “descriptive and evaluative act carried out by the student concerning his or her own work and academic abilities” (p. 368). Panadero et al. (2016a) defined it as a “wide variety of mechanisms and techniques through which students describe (i.e., assess) and possibly assign merit or worth to (i.e., evaluate) the qualities of their own learning processes and products” (p. 804). Referring to physicians, Epstein et al. (2008) defined “concurrent self-assessment” as “ongoing moment-to-moment self-monitoring” (p. 5). Self-monitoring “refers to the ability to notice our own actions, curiosity to examine the effects of those actions, and willingness to use those observations to improve behavior and thinking in the future” (p. 5). Taken together, these definitions include self-assessment of one's abilities, processes, and products—everything but the kitchen sink. This very broad conception might seem unwieldy, but it works because each object of assessment—competence, process, and product—is subject to the influence of feedback from oneself.

What is missing from each of these definitions, however, is the purpose of the act of self-assessment. Their authors might rightly point out that the purpose is implied, but a formal definition requires us to make it plain: Why do we ask students to self-assess? I have long held that self-assessment is feedback ( Andrade, 2010 ), and that the purpose of feedback is to inform adjustments to processes and products that deepen learning and enhance performance; hence the purpose of self-assessment is to generate feedback that promotes learning and improvements in performance. This learning-oriented purpose of self-assessment implies that it should be formative: if there is no opportunity for adjustment and correction, self-assessment is almost pointless.

Why Self-Assess?

Clarity about the purpose of self-assessment allows us to interpret what otherwise appear to be discordant findings from research, which has produced mixed results in terms of both the accuracy of students' self-assessments and their influence on learning and/or performance. I believe the source of the discord can be traced to the different ways in which self-assessment is carried out, such as whether it is summative and formative. This issue will be taken up again in the review of current research that follows this overview. For now, consider a study of the accuracy and validity of summative self-assessment in teacher education conducted by Tejeiro et al. (2012) , which showed that students' self-assigned marks tended to be higher than marks given by professors. All 122 students in the study assigned themselves a grade at the end of their course, but half of the students were told that their self-assigned grade would count toward 5% of their final grade. In both groups, students' self-assessments were higher than grades given by professors, especially for students with “poorer results” (p. 791) and those for whom self-assessment counted toward the final grade. In the group that was told their self-assessments would count toward their final grade, no relationship was found between the professor's and the students' assessments. Tejeiro et al. concluded that, although students' and professor's assessments tend to be highly similar when self-assessment did not count toward final grades, overestimations increased dramatically when students' self-assessments did count. Interviews of students who self-assigned highly discrepant grades revealed (as you might guess) that they were motivated by the desire to obtain the highest possible grades.

Studies like Tejeiro et al.'s (2012) are interesting in terms of the information they provide about the relationship between consistency and honesty, but the purpose of the self-assessment, beyond addressing interesting research questions, is unclear. There is no feedback purpose. This is also true for another example of a study of summative self-assessment of competence, during which elementary-school children took the Test of Narrative Language and then were asked to self-evaluate “how you did in making up stories today” by pointing to one of five pictures, from a “very happy face” (rating of five) to a “very sad face” (rating of one) (Kaderavek et al., 2004, p. 37). The usual results were reported: Older children and good narrators were more accurate than younger children and poor narrators, and males tended to more frequently overestimate their ability.

Typical of clinical studies of accuracy in self-evaluation, this study rests on a definition and operationalization of self-assessment with no value in terms of instructional feedback. If those children were asked to rate their stories and then revise or, better yet, if they assessed their stories according to clear, developmentally appropriate criteria before revising, the valence of their self-assessments in terms of instructional feedback would skyrocket. I speculate that their accuracy would too. In contrast, studies of formative self-assessment suggest that when the act of self-assessing is given a learning-oriented purpose, students' self-assessments are relatively consistent with those of external evaluators, including professors (Lopez and Kossack, 2007; Barney et al., 2012; Leach, 2012), teachers (Bol et al., 2012; Chang et al., 2012, 2013), researchers (Panadero and Romero, 2014; Fitzpatrick and Schulz, 2016), and expert medical assessors (Hawkins et al., 2012).

My commitment to keeping self-assessment formative is firm. However, Gavin Brown (personal communication, April 2011) reminded me that summative self-assessment exists and we cannot ignore it; any definition of self-assessment must acknowledge and distinguish between formative and summative forms of it. Thus, the taxonomy in Table 1 depicts self-assessment as serving formative and/or summative purposes, and as focusing on competence, processes, and/or products.

Table 1. A taxonomy of self-assessment.

Fortunately, a formative view of self-assessment seems to be taking hold in various educational contexts. For instance, Sargeant (2008) noted that all seven authors in a special issue of the Journal of Continuing Education in the Health Professions “conceptualize self-assessment within a formative, educational perspective, and see it as an activity that draws upon both external and internal data, standards, and resources to inform and make decisions about one's performance” (p. 1). Sargeant also stresses the point that self-assessment should be guided by evaluative criteria: “Multiple external sources can and should inform self-assessment, perhaps most important among them performance standards” (p. 1). Now we are talking about the how of self-assessment, which demands an operationalization of self-assessment practice. Let us examine each object of self-assessment (competence, processes, and/or products) with an eye for what is assessed and why.

What is Self-Assessed?

Monitoring and self-assessing processes are practically synonymous with self-regulated learning (SRL), or at least central components of it such as goal-setting and monitoring, or metacognition. Research on SRL has clearly shown that self-generated feedback on one's approach to learning is associated with academic gains (Zimmerman and Schunk, 2011). Self-assessment of products, such as papers and presentations, is the easiest to defend as feedback, especially when those self-assessments are grounded in explicit, relevant, evaluative criteria and followed by opportunities to relearn and/or revise (Andrade, 2010).

Including the self-assessment of competence in this definition is a little trickier. I hesitated to include it because of the risk of sneaking in global assessments of one's overall ability, self-esteem, and self-concept (“I'm good enough, I'm smart enough, and doggone it, people like me,” Franken, 1992 ), which do not seem relevant to a discussion of feedback in the context of learning. Research on global self-assessment, or self-perception, is popular in the medical education literature, but even there, scholars have begun to question its usefulness in terms of influencing learning and professional growth (e.g., see Sargeant et al., 2008 ). Eva and Regehr (2008) seem to agree in the following passage, which states the case in a way that makes it worthy of a long quotation:

Self-assessment is often (implicitly or otherwise) conceptualized as a personal, unguided reflection on performance for the purposes of generating an individually derived summary of one's own level of knowledge, skill, and understanding in a particular area. For example, this conceptualization would appear to be the only reasonable basis for studies that fit into what Colliver et al. (2005) has described as the “guess your grade” model of self-assessment research, the results of which form the core foundation for the recurring conclusion that self-assessment is generally poor. This unguided, internally generated construction of self-assessment stands in stark contrast to the model put forward by Boud (1999) , who argued that the phrase self-assessment should not imply an isolated or individualistic activity; it should commonly involve peers, teachers, and other sources of information. The conceptualization of self-assessment as enunciated in Boud's description would appear to involve a process by which one takes personal responsibility for looking outward, explicitly seeking feedback, and information from external sources, then using these externally generated sources of assessment data to direct performance improvements. In this construction, self-assessment is more of a pedagogical strategy than an ability to judge for oneself; it is a habit that one needs to acquire and enact rather than an ability that one needs to master (p. 15).

As in the K-16 context, self-assessment is coming to be seen as having value as much or more so in terms of pedagogy as in assessment ( Silver et al., 2008 ; Brown and Harris, 2014 ). In the end, however, I decided that self-assessing one's competence to successfully learn a particular concept or complete a particular task (which sounds a lot like self-efficacy—more on that later) might be useful feedback because it can inform decisions about how to proceed, such as the amount of time to invest in learning how to play the flute, or whether or not to seek help learning the steps of the jitterbug. An important caveat, however, is that self-assessments of competence are only useful if students have opportunities to do something about their perceived low competence—that is, it serves the purpose of formative feedback for the learner.

How to Self-Assess?

Panadero et al. (2016a) summarized five very different taxonomies of self-assessment and called for the development of a comprehensive typology that considers, among other things, its purpose, the presence or absence of criteria, and the method. In response, I propose the taxonomy depicted in Table 1, which focuses on the what (competence, process, or product), the why (formative or summative), and the how (methods, including whether or not they include standards, e.g., criteria) of self-assessment. The collection of example methods in the table is not exhaustive.

I put the methods in Table 1 where I think they belong, but many of them could be placed in more than one cell. Take self-efficacy, for instance, which is essentially a self-assessment of one's competence to successfully undertake a particular task (Bandura, 1997). Summative judgments of self-efficacy are certainly possible but they seem like a silly thing to do—what is the point, from a learning perspective? Formative self-efficacy judgments, on the other hand, can inform next steps in learning and skill building. There is reason to believe that monitoring and making adjustments to one's self-efficacy (e.g., by setting goals or attributing success to effort) can be productive (Zimmerman, 2000), so I placed self-efficacy in the formative row.

It is important to emphasize that self-efficacy is task-specific, more or less ( Bandura, 1997 ). This taxonomy does not include general, holistic evaluations of one's abilities, for example, “I am good at math.” Global assessment of competence does not provide the leverage, in terms of feedback, that is provided by task-specific assessments of competence, that is, self-efficacy. Eva and Regehr (2008) provided an illustrative example: “We suspect most people are prompted to open a dictionary as a result of encountering a word for which they are uncertain of the meaning rather than out of a broader assessment that their vocabulary could be improved” (p. 16). The exclusion of global evaluations of oneself resonates with research that clearly shows that feedback that focuses on aspects of a task (e.g., “I did not solve most of the algebra problems”) is more effective than feedback that focuses on the self (e.g., “I am bad at math”) ( Kluger and DeNisi, 1996 ; Dweck, 2006 ; Hattie and Timperley, 2007 ). Hence, global self-evaluations of ability or competence do not appear in Table 1 .

Another approach to student self-assessment that could be placed in more than one cell is traffic lights. The term traffic lights refers to asking students to use green, yellow, or red objects (or thumbs up, sideways, or down—anything will do) to indicate whether they think they have good, partial, or little understanding (Black et al., 2003). It would be appropriate for traffic lights to appear in multiple places in Table 1, depending on how they are used. Traffic lights seem to be most effective at supporting students' reflections on how well they understand a concept or have mastered a skill, which is in line with their creators' original intent, so they are categorized as formative self-assessments of one's learning—which sounds like metacognition.

In fact, several of the methods included in Table 1 come from research on metacognition, including self-monitoring, such as checking one's reading comprehension, and self-testing, e.g., checking one's performance on test items. These last two methods have been excluded from some taxonomies of self-assessment (e.g., Boud and Brew, 1995) because they do not engage students in explicitly considering relevant standards or criteria. However, new conceptions of self-assessment are grounded in theories of the self- and co-regulation of learning (Andrade and Brookhart, 2016), which includes self-monitoring of learning processes with and without explicit standards.

However, my research favors self-assessment with regard to standards ( Andrade and Boulay, 2003 ; Andrade and Du, 2007 ; Andrade et al., 2008 , 2009 , 2010 ), as does related research by Panadero and his colleagues (see below). I have involved students in self-assessment of stories, essays, or mathematical word problems according to rubrics or checklists with criteria. For example, two studies investigated the relationship between elementary or middle school students' scores on a written assignment and a process that involved them in reading a model paper, co-creating criteria, self-assessing first drafts with a rubric, and revising ( Andrade et al., 2008 , 2010 ). The self-assessment was highly scaffolded: students were asked to underline key phrases in the rubric with colored pencils (e.g., underline “clearly states an opinion” in blue), then underline or circle in their drafts the evidence of having met the standard articulated by the phrase (e.g., his or her opinion) with the same blue pencil. If students found they had not met the standard, they were asked to write themselves a reminder to make improvements when they wrote their final drafts. This process was followed for each criterion on the rubric. There were main effects on scores for every self-assessed criterion on the rubric, suggesting that guided self-assessment according to the co-created criteria helped students produce more effective writing.

Panadero and his colleagues have also done quasi-experimental and experimental research on standards-referenced self-assessment, using rubrics or lists of assessment criteria that are presented in the form of questions ( Panadero et al., 2012 , 2013 , 2014 ; Panadero and Romero, 2014 ). Panadero calls the list of assessment criteria a script because his work is grounded in research on scaffolding (e.g., Kollar et al., 2006 ): I call it a checklist because that is the term used in classroom assessment contexts. Either way, the list provides standards for the task. Here is a script for a written summary that Panadero et al. (2014) used with college students in a psychology class:

• Does my summary transmit the main idea from the text? Is it at the beginning of my summary?

• Are the important ideas also in my summary?

• Have I selected the main ideas from the text to make them explicit in my summary?

• Have I thought about my purpose for the summary? What is my goal?

Taken together, the results of the studies cited above suggest that students who engaged in self-assessment using scripts or rubrics were more self-regulated, as measured by self-report questionnaires and/or think-aloud protocols, than were students in the comparison or control groups. Effect sizes were very small to moderate (η² = 0.06–0.42) and statistically significant. Most interesting, perhaps, is one study ( Panadero and Romero, 2014 ) that demonstrated an association between rubric-referenced self-assessment activities and all three phases of SRL: forethought, performance, and reflection.

There are surely many other methods of self-assessment to include in Table 1 , as well as interesting conversations to be had about which method goes where and why. In the meantime, I offer the taxonomy in Table 1 as a way to define and operationalize self-assessment in instructional contexts and as a framework for the following overview of current research on the subject.

An Overview of Current Research on Self-Assessment

Several recent reviews of self-assessment are available ( Brown and Harris, 2013 ; Brown et al., 2015 ; Panadero et al., 2017 ), so I will not summarize the entire body of research here. Instead, I chose to take a bird's-eye view of the field, with the goal of reporting on what has been sufficiently researched and what remains to be done. I used the reference lists from those reviews, as well as other relevant sources, as a starting point. In order to update the list of sources, I directed two new searches1, the first of the ERIC database and the second of both ERIC and PsycINFO. Both searches included two search terms, “self-assessment” OR “self-evaluation.” Advanced search options had four delimiters: (1) peer-reviewed, (2) January, 2013–October, 2016 and then October 2016–March 2019, (3) English, and (4) full-text. Because the focus was on K-20 educational contexts, sources were excluded if they were about early childhood education or professional development.

The first search yielded 347 hits; the second 1,163. Research that was unrelated to instructional feedback was excluded, such as studies limited to self-estimates of performance before or after taking a test, guesses about whether a test item was answered correctly, and estimates of how many tasks could be completed in a certain amount of time. Although some of the excluded studies might be thought of as useful investigations of self-monitoring, as a group they seemed too unrelated to theories of self-generated feedback to be appropriate for this review. Seventy-six studies were selected for inclusion in Table S1 (Supplementary Material), which also contains a few studies published before 2013 that were not included in key reviews, as well as studies solicited directly from authors.

Table S1 in the Supplementary Material contains a complete list of studies included in this review, organized by the focus or topic of the study, as well as brief descriptions of each. The “type” column in Table S1 indicates whether the study focused on formative or summative self-assessment. This distinction was often difficult to make due to a lack of information. For example, Memis and Seven (2015) frame their study in terms of formative assessment, and note that the purpose of the self-evaluation done by the sixth grade students is to “help students improve their [science] reports” (p. 39), but they do not indicate how the self-assessments were done, nor whether students were given time to revise their reports based on their judgments or supported in making revisions. A sentence or two of explanation about the process of self-assessment in the procedures sections of published studies would be most useful.

Figure 1 graphically represents the number of studies in the four most common topic categories found in the table—achievement, consistency, student perceptions, and SRL. The figure reveals that research on self-assessment is on the rise, with consistency the most popular topic. Of the 76 studies in Table S1 (Supplementary Material), 44 were inquiries into the consistency of students' self-assessments with other judgments (e.g., a test score or teacher's grade). Twenty-five studies investigated the relationship between self-assessment and achievement. Fifteen explored students' perceptions of self-assessment. Twelve studies focused on the association between self-assessment and self-regulated learning. One examined self-efficacy, and two qualitative studies documented the mental processes involved in self-assessment. The topic counts sum to more than 76 ( n = 99) because several studies had multiple foci. In the remainder of this review I examine each topic in turn.


Figure 1. Topics of self-assessment studies, 2013–2018.

Consistency

Table S1 (Supplementary Material) reveals that much of the recent research on self-assessment has investigated the accuracy or, more accurately, consistency, of students' self-assessments. The term consistency is more appropriate in the classroom context because the quality of students' self-assessments is often determined by comparing them with their teachers' assessments and then generating correlations. Given the evidence of the unreliability of teachers' grades ( Falchikov, 2005 ), the assumption that teachers' assessments are accurate might not be well-founded ( Leach, 2012 ; Brown et al., 2015 ). Ratings of student work done by researchers are also suspect, unless evidence of the validity and reliability of the inferences made about student work by researchers is available. Consequently, much of the research on classroom-based self-assessment should use the term consistency , which refers to the degree of alignment between students' and expert raters' evaluations, avoiding the purer, more rigorous term accuracy unless it is fitting.
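
To make that operationalization concrete, the brief sketch below (in Python, with entirely hypothetical rubric scores) shows how a consistency coefficient is typically computed: as a correlation between students' self-assigned scores and an external rater's scores of the same work. It illustrates the general approach only and does not reproduce any particular study's procedure.

```python
# Minimal, illustrative sketch (hypothetical data): "consistency" operationalized
# as the correlation between students' self-assessments and a teacher's ratings.
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Hypothetical rubric scores (1-6 scale) for ten students
self_scores = [4, 5, 3, 6, 4, 5, 2, 4, 5, 3]
teacher_scores = [3, 4, 4, 5, 3, 4, 3, 3, 5, 4]

r = correlation(self_scores, teacher_scores)
print(f"self-teacher consistency: r = {r:.2f}")  # ≈ 0.6 with these made-up scores
```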

In their review, Brown and Harris (2013) reported that correlations between student self-ratings and other measures tended to be weakly to strongly positive, ranging from r ≈ 0.20 to 0.80, with few studies reporting correlations >0.60. But their review included results from studies of any self-appraisal of school work, including summative self-rating/grading, predictions about the correctness of answers on test items, and formative, criteria-based self-assessments, a combination of methods that makes the correlations they reported difficult to interpret. Qualitatively different forms of self-assessment, especially summative and formative types, cannot be lumped together without obfuscating important aspects of self-assessment as feedback.

Given my concern about combining studies of summative and formative assessment, you might anticipate a call for research on consistency that distinguishes between the two. I will make no such call for three reasons. One is that we have enough research on the subject, including the 22 studies in Table S1 (Supplementary Material) that were published after Brown and Harris's (2013) review. Drawing only on studies included in Table S1 (Supplementary Material), we can say with confidence that summative self-assessment tends to be inconsistent with external judgments ( Baxter and Norman, 2011 ; De Grez et al., 2012 ; Admiraal et al., 2015 ), with males tending to overrate and females to underrate ( Nowell and Alston, 2007 ; Marks et al., 2018 ). There are exceptions ( Alaoutinen, 2012 ; Lopez-Pastor et al., 2012 ) as well as mixed results, with students being consistent regarding some aspects of their learning but not others ( Blanch-Hartigan, 2011 ; Harding and Hbaci, 2015 ; Nguyen and Foster, 2018 ). We can also say that older, more academically competent learners tend to be more consistent ( Hacker et al., 2000 ; Lew et al., 2010 ; Alaoutinen, 2012 ; Guillory and Blankson, 2017 ; Butler, 2018 ; Nagel and Lindsey, 2018 ). There is evidence that consistency can be improved through experience ( Lopez and Kossack, 2007 ; Yilmaz, 2017 ; Nagel and Lindsey, 2018 ), the use of guidelines ( Bol et al., 2012 ), feedback ( Thawabieh, 2017 ), and standards ( Baars et al., 2014 ), perhaps in the form of rubrics ( Panadero and Romero, 2014 ). Modeling and feedback also help ( Labuhn et al., 2010 ; Miller and Geraci, 2011 ; Hawkins et al., 2012 ; Kostons et al., 2012 ).

An outcome typical of research on the consistency of summative self-assessment can be found in row 59, which summarizes the study by Tejeiro et al. (2012) discussed earlier: Students' self-assessments were higher than marks given by professors, especially for students with poorer results, and no relationship was found between the professors' and the students' assessments in the group in which self-assessment counted toward the final mark. Students are not stupid: if they know that they can influence their final grade, and that their judgment is summative rather than intended to inform revision and improvement, they will be motivated to inflate their self-evaluation. I do not believe we need more research to demonstrate that phenomenon.

The second reason I am not calling for additional research on consistency is that much of it seems somewhat irrelevant. This might be because the interest in accuracy is rooted in clinical research on calibration, which has very different aims. Calibration accuracy is the “magnitude of consent between learners' true and self-evaluated task performance. Accurately calibrated learners' task performance equals their self-evaluated task performance” ( Wollenschläger et al., 2016 ). Calibration research often asks study participants to predict or postdict the correctness of their responses to test items. I caution about generalizing from clinical experiments to authentic classroom contexts because the dismal picture of our human potential to self-judge was painted by calibration researchers before study participants were effectively taught how to predict with accuracy, or provided with the tools they needed to be accurate, or motivated to do so. Calibration researchers know that, of course, and have conducted intervention studies that attempt to improve accuracy, with some success (e.g., Bol et al., 2012 ). Studies of formative self-assessment also suggest that consistency increases when it is taught and supported in many of the ways any other skill must be taught and supported ( Lopez and Kossack, 2007 ; Labuhn et al., 2010 ; Chang et al., 2012 , 2013 ; Hawkins et al., 2012 ; Panadero and Romero, 2014 ; Lin-Siegler et al., 2015 ; Fitzpatrick and Schulz, 2016 ).

Even clinical psychological studies that go beyond calibration to examine the associations between monitoring accuracy and subsequent study behaviors do not transfer well to classroom assessment research. After repeatedly encountering claims that, for example, low self-assessment accuracy leads to poor task-selection accuracy and “suboptimal learning outcomes” ( Raaijmakers et al., 2019 , p. 1), I dug into the cited studies and discovered two limitations. The first is that the tasks in which study participants engage are quite inauthentic. A typical task involves studying “word pairs (e.g., railroad—mother), followed by a delayed judgment of learning (JOL) in which the students predicted the chances of remembering the pair… After making a JOL, the entire pair was presented for restudy for 4 s [ sic ], and after all pairs had been restudied, a criterion test of paired-associate recall occurred” ( Dunlosky and Rawson, 2012 , p. 272). Although memory for word pairs might be important in some classroom contexts, it is not safe to assume that results from studies like that one can predict students' behaviors after criterion-referenced self-assessment of their comprehension of complex texts, lengthy compositions, or solutions to multi-step mathematical problems.

The second limitation of studies like the typical one described above is more serious: Participants in research like that are not permitted to regulate their own studying, which is experimentally manipulated by a computer program. This came as a surprise: many of the claims were about students' poor study choices, yet participants were rarely allowed to make actual choices. For example, Dunlosky and Rawson (2012) permitted participants to “use monitoring to effectively control learning” by programming the computer so that “a participant would need to have judged his or her recall of a definition entirely correct on three different trials, and once they judged it entirely correct on the third trial, that particular key term definition was dropped [by the computer program] from further practice” (p. 272). The authors note that this study design is an improvement on designs that did not require all participants to use the same regulation algorithm, but it does not reflect the kinds of decisions that learners make in class or while doing homework. In fact, a large body of research shows that students can make wise choices when they self-pace the study of to-be-learned materials and then allocate study time to each item ( Bjork et al., 2013 , p. 425):

In a typical experiment, the students first study all the items at an experimenter-paced rate (e.g., study 60 paired associates for 3 s each), which familiarizes the students with the items; after this familiarity phase, the students then either choose which items they want to restudy (e.g., all items are presented in an array, and the students select which ones to restudy) and/or pace their restudy of each item. Several dependent measures have been widely used, such as how long each item is studied, whether an item is selected for restudy, and in what order items are selected for restudy. The literature on these aspects of self-regulated study is massive (for a comprehensive overview, see both Dunlosky and Ariel, 2011 and Son and Metcalfe, 2000 ), but the evidence is largely consistent with a few basic conclusions. First, if students have a chance to practice retrieval prior to restudying items, they almost exclusively choose to restudy unrecalled items and drop the previously recalled items from restudy ( Metcalfe and Kornell, 2005 ). Second, when pacing their study of individual items that have been selected for restudy, students typically spend more time studying items that are more, rather than less, difficult to learn. Such a strategy is consistent with a discrepancy-reduction model of self-paced study (which states that people continue to study an item until they reach mastery), although some key revisions to this model are needed to account for all the data. For instance, students may not continue to study until they reach some static criterion of mastery, but instead, they may continue to study until they perceive that they are no longer making progress.

I propose that this research, which suggests that students' unscaffolded, unmeasured, informal self-assessments tend to lead to appropriate task selection, is better aligned with research on classroom-based self-assessment. Nonetheless, even this comparison is inadequate because the study participants were not taught to compare their performance to the criteria for mastery, as is often done in classroom-based self-assessment.

The third and final reason I do not believe we need additional research on consistency is that I think it is a distraction from the true purposes of self-assessment. Many if not most of the articles about the accuracy of self-assessment are grounded in the assumption that accuracy is necessary for self-assessment to be useful, particularly in terms of subsequent studying and revision behaviors. Although it seems obvious that accurate evaluations of their performance positively influence students' study strategy selection, which should produce improvements in achievement, I have not seen relevant research that tests those conjectures. Some claim that inaccurate estimates of learning lead to the selection of inappropriate learning tasks ( Kostons et al., 2012 ) but they cite research that does not support their claim. For example, Kostons et al. cite studies that focus on the effectiveness of SRL interventions but do not address the accuracy of participants' estimates of learning, nor the relationship of those estimates to the selection of next steps. Other studies produce findings that support my skepticism. Take, for instance, two relevant studies of calibration. One suggested that performance and judgments of performance had little influence on subsequent test preparation behavior ( Hacker et al., 2000 ), and the other showed that study participants followed their predictions of performance to the same degree, regardless of monitoring accuracy ( van Loon et al., 2014 ).

Eva and Regehr (2008) believe that:

Research questions that take the form of “How well do various practitioners self-assess?” “How can we improve self-assessment?” or “How can we measure self-assessment skill?” should be considered defunct and removed from the research agenda [because] there have been hundreds of studies into these questions and the answers are “Poorly,” “You can't,” and “Don't bother” (p. 18).

I almost agree. A study that could change my mind about the importance of accuracy of self-assessment would be an investigation that goes beyond attempting to improve accuracy just for the sake of accuracy by instead examining the relearning/revision behaviors of accurate and inaccurate self-assessors: Do students whose self-assessments match the valid and reliable judgments of expert raters (hence my use of the term accuracy ) make better decisions about what they need to do to deepen their learning and improve their work? Here, I admit, is a call for research related to consistency: I would love to see a high-quality investigation of the relationship between accuracy in formative self-assessment, and students' subsequent study and revision behaviors, and their learning. For example, a study that closely examines the revisions to writing made by accurate and inaccurate self-assessors, and the resulting outcomes in terms of the quality of their writing, would be most welcome.

Table S1 (Supplementary Material) indicates that by 2018 researchers began publishing studies that more directly address the hypothesized link between self-assessment and subsequent learning behaviors, as well as important questions about the processes learners engage in while self-assessing ( Yan and Brown, 2017 ). One, a study by Nugteren et al. (2018; row 19 in Table S1, Supplementary Material), asked “How do inaccurate [summative] self-assessments influence task selections?” (p. 368) and employed a clever exploratory research design. The results suggested that most of the 15 students in their sample over-estimated their performance and made inaccurate learning-task selections. Nugteren et al. recommended helping students make more accurate self-assessments, but I think the more interesting finding is related to why students made task selections that were too difficult or too easy, given their prior performance: They based most task selections on interest in the content of particular items (not the overarching content to be learned), and infrequently considered task difficulty and support level. For instance, while working on the genetics tasks, students reported selecting tasks because they were fun or interesting, not because they addressed self-identified weaknesses in their understanding of genetics. Nugteren et al. proposed that students would benefit from instruction on task selection. I second that proposal: Rather than focusing our efforts on accuracy in the service of improving subsequent task selection, let us simply teach students to use the information at hand to select next best steps, among other things.

Butler (2018; row 76 in Table S1, Supplementary Material) has conducted at least two studies of learners' processes of responding to self-assessment items and how they arrived at their judgments. Comparing generic, decontextualized items to task-specific, contextualized items (which she calls after-task items), she drew two unsurprising conclusions: the task-specific items “generally showed higher correlations with task performance,” and older students “appeared to be more conservative in their judgment compared with their younger counterparts” (p. 249). The contribution of the study is the detailed information it provides about how students generated their judgments. For example, Butler's qualitative data analyses revealed that when asked to self-assess in terms of vague or non-specific items, the children often “contextualized the descriptions based on their own experiences, goals, and expectations” (p. 257), focused on the task at hand, and situated items in the specific task context. Perhaps as a result, the correlation between after-task self-assessment and task performance was generally higher than for generic self-assessment.

Butler (2018) notes that her study enriches our empirical understanding of the processes by which children respond to self-assessment. This is a very promising direction for the field. Similar studies of processing during formative self-assessment of a variety of task types in a classroom context would likely produce significant advances in our understanding of how and why self-assessment influences learning and performance.

Student Perceptions

Fifteen of the studies listed in Table S1 (Supplementary Material) focused on students' perceptions of self-assessment. The studies of children suggest that they tend to have unsophisticated understandings of its purposes ( Harris and Brown, 2013 ; Bourke, 2016 ) that might lead to shallow implementation of related processes. In contrast, results from the studies conducted in higher education settings suggested that college and university students understood the function of self-assessment ( Ratminingsih et al., 2018 ) and generally found it to be useful for guiding evaluation and revision ( Micán and Medina, 2017 ), understanding how to take responsibility for learning ( Lopez and Kossack, 2007 ; Bourke, 2014 ; Ndoye, 2017 ), prompting them to think more critically and deeply ( van Helvoort, 2012 ; Siow, 2015 ), applying newfound skills ( Murakami et al., 2012 ), and fostering self-regulated learning by guiding them to set goals, plan, self-monitor and reflect ( Wang, 2017 ).

Not surprisingly, positive perceptions of self-assessment were typically developed by students who actively engaged in the formative type by, for example, developing their own criteria for an effective self-assessment response ( Bourke, 2014 ), or using a rubric or checklist to guide their assessments and then revising their work ( Huang and Gui, 2015 ; Wang, 2017 ). Earlier research suggested that children's attitudes toward self-assessment can become negative if it is summative ( Ross et al., 1998 ). However, even summative self-assessment was reported by adult learners to be useful in helping them become more critical of their own and others' writing throughout the course and in subsequent courses ( van Helvoort, 2012 ).

Achievement

Twenty-five of the studies in Table S1 (Supplementary Material) investigated the relation between self-assessment and achievement, including two meta-analyses. Twenty of the 25 clearly employed the formative type. Without exception, those 20 studies, plus the two meta-analyses ( Graham et al., 2015 ; Sanchez et al., 2017 ) demonstrated a positive association between self-assessment and learning. The meta-analysis conducted by Graham and his colleagues, which included 10 studies, yielded an average weighted effect size of 0.62 on writing quality. The Sanchez et al. meta-analysis revealed that, although 12 of the 44 effect sizes were negative, on average, “students who engaged in self-grading performed better ( g = 0.34) on subsequent tests than did students who did not” (p. 1,049).
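
For readers less familiar with the metrics reported in these meta-analyses, the sketch below illustrates how a standardized mean difference such as Hedges' g is computed from group means and standard deviations. The group statistics are hypothetical and were chosen only so that the result lands near the g = 0.34 reported by Sanchez et al. (2017); this is an illustrative calculation, not a reanalysis of their data.

```python
# Illustrative only: computing a standardized mean difference (Hedges' g) from
# hypothetical group statistics. These numbers are NOT from the cited
# meta-analyses; they simply show what an effect size near 0.34 represents.
from math import sqrt

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Cohen's d with Hedges' small-sample bias correction."""
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

# Hypothetical example: a self-grading group vs. a control group on a later test
g = hedges_g(m1=78.0, s1=10.0, n1=30, m2=74.5, s2=10.5, n2=30)
print(f"Hedges' g = {g:.2f}")  # ≈ 0.34, i.e., roughly a third of a standard
                               # deviation advantage for the self-grading group
```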

All but two of the non-meta-analytic studies of achievement in Table S1 (Supplementary Material) were quasi-experimental or experimental, providing relatively rigorous evidence that their treatment groups outperformed their comparison or control groups in terms of everything from writing to dart-throwing, map-making, speaking English, and exams in a wide variety of disciplines. One experiment on summative self-assessment ( Miller and Geraci, 2011 ), in contrast, resulted in no improvements in exam scores, while the other one did ( Raaijmakers et al., 2017 ).

It would be easy to overgeneralize and claim that the question about the effect of self-assessment on learning has been answered, but there are unanswered questions about the key components of effective self-assessment, especially social-emotional components related to power and trust ( Andrade and Brown, 2016 ). The trends are pretty clear, however: it appears that formative forms of self-assessment can promote knowledge and skill development. This is not surprising, given that it involves many of the processes known to support learning, including practice, feedback, revision, and especially the intellectually demanding work of making complex, criteria-referenced judgments ( Panadero et al., 2014 ). Boud (1995a , b) predicted this trend when he noted that many self-assessment processes undermine learning by rushing to judgment, thereby failing to engage students with the standards or criteria for their work.

Self-Regulated Learning

The association between self-assessment and learning has also been explained in terms of self-regulation ( Andrade, 2010 ; Panadero and Alonso-Tapia, 2013 ; Andrade and Brookhart, 2016 , 2019 ; Panadero et al., 2016b ). Self-regulated learning (SRL) occurs when learners set goals and then monitor and manage their thoughts, feelings, and actions to reach those goals. SRL is moderately to highly correlated with achievement ( Zimmerman and Schunk, 2011 ). Research suggests that formative assessment is a potential influence on SRL ( Nicol and Macfarlane-Dick, 2006 ). The 12 studies in Table S1 (Supplementary Material) that focus on SRL demonstrate the recent increase in interest in the relationship between self-assessment and SRL.

Conceptual and practical overlaps between the two fields are abundant. In fact, Brown and Harris (2014) recommend that student self-assessment no longer be treated as an assessment, but as an essential competence for self-regulation. Butler and Winne (1995) introduced the role of self-generated feedback in self-regulation years ago:

[For] all self-regulated activities, feedback is an inherent catalyst. As learners monitor their engagement with tasks, internal feedback is generated by the monitoring process. That feedback describes the nature of outcomes and the qualities of the cognitive processes that led to those states (p. 245).

The outcomes and processes referred to by Butler and Winne are many of the same products and processes I referred to earlier in the definition of self-assessment and in Table 1 .

In general, research and practice related to self-assessment has tended to focus on judging the products of student learning, while scholarship on self-regulated learning encompasses both processes and products. The very practical focus of much of the research on self-assessment means it might be playing catch-up, in terms of theory development, with the SRL literature, which is grounded in experimental paradigms from cognitive psychology ( de Bruin and van Gog, 2012 ), while self-assessment research is ahead in terms of implementation (E. Panadero, personal communication, October 21, 2016). One major exception is the work done on Self-regulated Strategy Development ( Glaser and Brunstein, 2007 ; Harris et al., 2008 ), which has successfully integrated SRL research with classroom practices, including self-assessment, to teach writing to students with special needs.

Nicol and Macfarlane-Dick (2006) have been explicit about the potential for self-assessment practices to support self-regulated learning:

To develop systematically the learner's capacity for self-regulation, teachers need to create more structured opportunities for self-monitoring and the judging of progression to goals. Self-assessment tasks are an effective way of achieving this, as are activities that encourage reflection on learning progress (p. 207).

The studies of SRL in Table S1 (Supplementary Material) provide encouraging findings regarding the potential role of self-assessment in promoting achievement, self-regulated learning in general, and metacognition and study strategies related to task selection in particular. The studies also represent a solution to the “methodological and theoretical challenges involved in bringing metacognitive research to the real world, using meaningful learning materials” ( Koriat, 2012 , p. 296).

Future Directions for Research

I agree with Yan and Brown's (2017) statement that “from a pedagogical perspective, the benefits of self-assessment may come from active engagement in the learning process, rather than by being ‘veridical' or coinciding with reality, because students' reflection and metacognitive monitoring lead to improved learning” (p. 1,248). Future research should focus less on accuracy/consistency/veridicality, and more on the precise mechanisms of self-assessment ( Butler, 2018 ).

An important aspect of research on self-assessment that is not explicitly represented in Table S1 (Supplementary Material) is practice, or pedagogy: Under what conditions does self-assessment work best, and how are those conditions influenced by context? Fortunately, the studies listed in the table, as well as others (see especially Andrade and Valtcheva, 2009 ; Nielsen, 2014 ; Panadero et al., 2016a ), point toward an answer. But we still have questions about how best to scaffold effective formative self-assessment. One area of inquiry is about the characteristics of the task being assessed, and the standards or criteria used by learners during self-assessment.

Influence of Types of Tasks and Standards or Criteria

Type of task or competency assessed seems to matter (e.g., Dolosic, 2018 ; Nguyen and Foster, 2018 ), as do the criteria ( Yilmaz, 2017 ), but we do not yet have a comprehensive understanding of how or why. There is some evidence that it is important that the criteria used to self-assess are concrete, task-specific ( Butler, 2018 ), and graduated. For example, Fastre et al. (2010) revealed an association between self-assessment according to task-specific criteria and task performance: In a quasi-experimental study of 39 novice vocational education students studying stoma care, they compared concrete, task-specific criteria (“performance-based criteria”) such as “Introduces herself to the patient” and “Consults the care file for details concerning the stoma” to vaguer, “competence-based criteria” such as “Shows interest, listens actively, shows empathy to the patient” and “Is discrete with sensitive topics.” The performance-based criteria group outperformed the competence-based group on tests of task performance, presumably because “performance-based criteria make it easier to distinguish levels of performance, enabling a step-by-step process of performance improvement” (p. 530).

This finding echoes the results of a study of self-regulated learning by Kitsantas and Zimmerman (2006) , who argued that “fine-grained standards can have two key benefits: They can enable learners to be more sensitive to small changes in skill and make more appropriate adaptations in learning strategies” (p. 203). In their study, 70 college students were taught how to throw darts at a target. The purpose of the study was to examine the role of graphing of self-recorded outcomes and self-evaluative standards in learning a motor skill. Students who were provided with graduated self-evaluative standards surpassed “those who were provided with absolute standards or no standards (control) in both motor skill and in motivational beliefs (i.e., self-efficacy, attributions, and self-satisfaction)” (p. 201). Kitsantas and Zimmerman hypothesized that setting high absolute standards would limit a learner's sensitivity to small improvements in functioning. This hypothesis was supported by the finding that students who set absolute standards reported significantly less awareness of learning progress (and hit the bull's-eye less often) than students who set graduated standards. “The correlation between the self-evaluation and dart-throwing outcomes measures was extraordinarily high ( r = 0.94)” (p. 210). Classroom-based research on specific, graduated self-assessment criteria would be informative.

Cognitive and Affective Mechanisms of Self-Assessment

There are many additional questions about pedagogy, such as the hoped-for investigation mentioned above of the relationship between accuracy in formative self-assessment, students' subsequent study behaviors, and their learning. There is also a need for research on how to help teachers give students a central role in their learning by creating space for self-assessment (e.g., see Hawe and Parr, 2014 ), and the complex power dynamics involved in doing so ( Tan, 2004 , 2009 ; Taras, 2008 ; Leach, 2012 ). However, there is an even more pressing need for investigations into the internal mechanisms experienced by students engaged in assessing their own learning. Angela Lui and I call this the next black box ( Lui, 2017 ).

Black and Wiliam (1998) used the term black box to emphasize the fact that what happened in most classrooms was largely unknown: all we knew was that some inputs (e.g., teachers, resources, standards, and requirements) were fed into the box, and that certain outputs (e.g., more knowledgeable and competent students, acceptable levels of achievement) would follow. But what, they asked, is happening inside, and what new inputs will produce better outputs? Black and Wiliam's review spawned a great deal of research on formative assessment, some but not all of which suggests a positive relationship with academic achievement ( Bennett, 2011 ; Kingston and Nash, 2011 ). To better understand why and how the use of formative assessment in general and self-assessment in particular is associated with improvements in academic achievement in some instances but not others, we need research that looks into the next black box: the cognitive and affective mechanisms of students who are engaged in assessment processes ( Lui, 2017 ).

The role of internal mechanisms has been discussed in theory but not yet fully tested. Crooks (1988) argued that the impact of assessment is influenced by students' interpretation of the tasks and results, and Butler and Winne (1995) theorized that both cognitive and affective processes play a role in determining how feedback is internalized and used to self-regulate learning. Other theoretical frameworks about the internal processes of receiving and responding to feedback have been developed (e.g., Nicol and Macfarlane-Dick, 2006 ; Draper, 2009 ; Andrade, 2013 ; Lipnevich et al., 2016 ). Yet, Shute (2008) noted in her review of the literature on formative feedback that “despite the plethora of research on the topic, the specific mechanisms relating feedback to learning are still mostly murky, with very few (if any) general conclusions” (p. 156). This area is ripe for research.

Conclusion

Self-assessment is the act of monitoring one's processes and products in order to make adjustments that deepen learning and enhance performance. Although it can be summative, the evidence presented in this review strongly suggests that self-assessment is most beneficial, in terms of both achievement and self-regulated learning, when it is used formatively and supported by training.

What is not yet clear is why and how self-assessment works. Those of you who like to investigate phenomena that are maddeningly difficult to measure will rejoice to hear that the cognitive and affective mechanisms of self-assessment are the next black box. Studies of the ways in which learners think and feel, the interactions between their thoughts and feelings and their context, and the implications for pedagogy will make major contributions to our field.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2019.00087/full#supplementary-material

1. I am grateful to my graduate assistants, Joanna Weaver and Taja Young, for conducting the searches.

References

Admiraal, W., Huisman, B., and Pilli, O. (2015). Assessment in massive open online courses. Electron. J. e-Learning 13, 207–216.

Alaoutinen, S. (2012). Evaluating the effect of learning style and student background on self-assessment accuracy. Comput. Sci. Educ. 22, 175–198. doi: 10.1080/08993408.2012.692924

Al-Rawahi, N. M., and Al-Balushi, S. M. (2015). The effect of reflective science journal writing on students' self-regulated learning strategies. Int. J. Environ. Sci. Educ. 10, 367–379. doi: 10.12973/ijese.2015.250a

Andrade, H. (2010). “Students as the definitive source of formative assessment: academic self-assessment and the self-regulation of learning,” in Handbook of Formative Assessment , eds H. Andrade and G. Cizek (New York, NY: Routledge), 90–105.

Andrade, H. (2013). “Classroom assessment in the context of learning theory and research,” in Sage Handbook of Research on Classroom Assessment , ed J. H. McMillan (New York, NY: Sage), 17–34. doi: 10.4135/9781452218649.n2

Andrade, H. (2018). “Feedback in the context of self-assessment,” in Cambridge Handbook of Instructional Feedback , eds A. Lipnevich and J. Smith (Cambridge: Cambridge University Press), 376–408.

Andrade, H., and Boulay, B. (2003). The role of rubric-referenced self-assessment in learning to write. J. Educ. Res. 97, 21–34. doi: 10.1080/00220670309596625

Andrade, H., and Brookhart, S. (2019). Classroom assessment as the co-regulation of learning. Assessm. Educ. Principles Policy Pract. doi: 10.1080/0969594X.2019.1571992

Andrade, H., and Brookhart, S. M. (2016). “The role of classroom assessment in supporting self-regulated learning,” in Assessment for Learning: Meeting the Challenge of Implementation , eds D. Laveault and L. Allal (Heidelberg: Springer), 293–309. doi: 10.1007/978-3-319-39211-0_17

Andrade, H., and Du, Y. (2007). Student responses to criteria-referenced self-assessment. Assess. Evalu. High. Educ. 32, 159–181. doi: 10.1080/02602930600801928

Andrade, H., Du, Y., and Mycek, K. (2010). Rubric-referenced self-assessment and middle school students' writing. Assess. Educ. 17, 199–214. doi: 10.1080/09695941003696172

Andrade, H., Du, Y., and Wang, X. (2008). Putting rubrics to the test: The effect of a model, criteria generation, and rubric-referenced self-assessment on elementary school students' writing. Educ. Meas. 27, 3–13. doi: 10.1111/j.1745-3992.2008.00118.x

Andrade, H., and Valtcheva, A. (2009). Promoting learning and achievement through self- assessment. Theory Pract. 48, 12–19. doi: 10.1080/00405840802577544

Andrade, H., Wang, X., Du, Y., and Akawi, R. (2009). Rubric-referenced self-assessment and self-efficacy for writing. J. Educ. Res. 102, 287–302. doi: 10.3200/JOER.102.4.287-302

Andrade, H. L., and Brown, G. T. L. (2016). “Student self-assessment in the classroom,” in Handbook of Human and Social Conditions in Assessment , eds G. T. L. Brown and L. R. Harris (New York, NY: Routledge), 319–334.

Baars, M., Vink, S., van Gog, T., de Bruin, A., and Paas, F. (2014). Effects of training self-assessment and using assessment standards on retrospective and prospective monitoring of problem solving. Learn. Instruc. 33, 92–107. doi: 10.1016/j.learninstruc.2014.04.004

Balderas, I., and Cuamatzi, P. M. (2018). Self and peer correction to improve college students' writing skills. Profile. 20, 179–194. doi: 10.15446/profile.v20n2.67095

Bandura, A. (1997). Self-efficacy: The Exercise of Control . New York, NY: Freeman.

Barney, S., Khurum, M., Petersen, K., Unterkalmsteiner, M., and Jabangwe, R. (2012). Improving students with rubric-based self-assessment and oral feedback. IEEE Transac. Educ. 55, 319–325. doi: 10.1109/TE.2011.2172981

Baxter, P., and Norman, G. (2011). Self-assessment or self deception? A lack of association between nursing students' self-assessment and performance. J. Adv. Nurs. 67, 2406–2413. doi: 10.1111/j.1365-2648.2011.05658.x

Bennett, R. E. (2011). Formative assessment: a critical review. Assess. Educ. 18, 5–25. doi: 10.1080/0969594X.2010.513678

Birjandi, P., and Hadidi Tamjid, N. (2012). The role of self-, peer and teacher assessment in promoting Iranian EFL learners' writing performance. Assess. Evalu. High. Educ. 37, 513–533. doi: 10.1080/02602938.2010.549204

Bjork, R. A., Dunlosky, J., and Kornell, N. (2013). Self-regulated learning: beliefs, techniques, and illusions. Annu. Rev. Psychol. 64, 417–444. doi: 10.1146/annurev-psych-113011-143823

Black, P., Harrison, C., Lee, C., Marshall, B., and Wiliam, D. (2003). Assessment for Learning: Putting it into Practice . Berkshire: Open University Press.

Black, P., and Wiliam, D. (1998). Inside the black box: raising standards through classroom assessment. Phi Delta Kappan 80, 139–144; 146–148.

Blanch-Hartigan, D. (2011). Medical students' self-assessment of performance: results from three meta-analyses. Patient Educ. Counsel. 84, 3–9. doi: 10.1016/j.pec.2010.06.037

Bol, L., Hacker, D. J., Walck, C. C., and Nunnery, J. A. (2012). The effects of individual or group guidelines on the calibration accuracy and achievement of high school biology students. Contemp. Educ. Psychol. 37, 280–287. doi: 10.1016/j.cedpsych.2012.02.004

Boud, D. (1995a). Implementing Student Self-Assessment, 2nd Edn. Australian Capital Territory: Higher Education Research and Development Society of Australasia.

Boud, D. (1995b). Enhancing Learning Through Self-Assessment. London: Kogan Page.

Boud, D. (1999). Avoiding the traps: Seeking good practice in the use of self-assessment and reflection in professional courses. Soc. Work Educ. 18, 121–132. doi: 10.1080/02615479911220131

Boud, D., and Brew, A. (1995). Developing a typology for learner self-assessment practices. Res. Dev. High. Educ. 18, 130–135.

Bourke, R. (2014). Self-assessment in professional programmes within tertiary institutions. Teach. High. Educ. 19, 908–918. doi: 10.1080/13562517.2014.934353

Bourke, R. (2016). Liberating the learner through self-assessment. Cambridge J. Educ. 46, 97–111. doi: 10.1080/0305764X.2015.1015963

Brown, G., Andrade, H., and Chen, F. (2015). Accuracy in student self-assessment: directions and cautions for research. Assess. Educ. 22, 444–457. doi: 10.1080/0969594X.2014.996523

Brown, G. T., and Harris, L. R. (2013). “Student self-assessment,” in Sage Handbook of Research on Classroom Assessment , ed J. H. McMillan (Los Angeles, CA: Sage), 367–393. doi: 10.4135/9781452218649.n21

Brown, G. T. L., and Harris, L. R. (2014). The future of self-assessment in classroom practice: reframing self-assessment as a core competency. Frontline Learn. Res. 3, 22–30. doi: 10.14786/flr.v2i1.24

Butler, D. L., and Winne, P. H. (1995). Feedback and self-regulated learning: a theoretical synthesis. Rev. Educ. Res. 65, 245–281. doi: 10.3102/00346543065003245

Butler, Y. G. (2018). “Young learners' processes and rationales for responding to self-assessment items: cases for generic can-do and five-point Likert-type formats,” in Useful Assessment and Evaluation in Language Education , eds J. Davis et al. (Washington, DC: Georgetown University Press), 21–39. doi: 10.2307/j.ctvvngrq.5

Chang, C.-C., Liang, C., and Chen, Y.-H. (2013). Is learner self-assessment reliable and valid in a Web-based portfolio environment for high school students? Comput. Educ. 60, 325–334. doi: 10.1016/j.compedu.2012.05.012

Chang, C.-C., Tseng, K.-H., and Lou, S.-J. (2012). A comparative analysis of the consistency and difference among teacher-assessment, student self-assessment and peer-assessment in a Web-based portfolio assessment environment for high school students. Comput. Educ. 58, 303–320. doi: 10.1016/j.compedu.2011.08.005

Colliver, J., Verhulst, S., and Barrows, H. (2005). Self-assessment in medical practice: a further concern about the conventional research paradigm. Teach. Learn. Med. 17, 200–201. doi: 10.1207/s15328015tlm1703_1

Crooks, T. J. (1988). The impact of classroom evaluation practices on students. Rev. Educ. Res. 58, 438–481. doi: 10.3102/00346543058004438

de Bruin, A. B. H., and van Gog, T. (2012). Improving self-monitoring and self-regulation: From cognitive psychology to the classroom , Learn. Instruct. 22, 245–252. doi: 10.1016/j.learninstruc.2012.01.003

De Grez, L., Valcke, M., and Roozen, I. (2012). How effective are self- and peer assessment of oral presentation skills compared with teachers' assessments? Active Learn. High. Educ. 13, 129–142. doi: 10.1177/1469787412441284

Dolosic, H. (2018). An examination of self-assessment and interconnected facets of second language reading. Read. Foreign Langu. 30, 189–208.

Draper, S. W. (2009). What are learners actually regulating when given feedback? Br. J. Educ. Technol. 40, 306–315. doi: 10.1111/j.1467-8535.2008.00930.x

Dunlosky, J., and Ariel, R. (2011). “Self-regulated learning and the allocation of study time,” in Psychology of Learning and Motivation , Vol. 54 ed B. Ross (Cambridge, MA: Academic Press), 103–140. doi: 10.1016/B978-0-12-385527-5.00004-8

Dunlosky, J., and Rawson, K. A. (2012). Overconfidence produces underachievement: inaccurate self evaluations undermine students' learning and retention. Learn. Instr. 22, 271–280. doi: 10.1016/j.learninstruc.2011.08.003

Dweck, C. (2006). Mindset: The New Psychology of Success. New York, NY: Random House.

Epstein, R. M., Siegel, D. J., and Silberman, J. (2008). Self-monitoring in clinical practice: a challenge for medical educators. J. Contin. Educ. Health Prof. 28, 5–13. doi: 10.1002/chp.149

Eva, K. W., and Regehr, G. (2008). “I'll never play professional football” and other fallacies of self-assessment. J. Contin. Educ. Health Prof. 28, 14–19. doi: 10.1002/chp.150

Falchikov, N. (2005). Improving Assessment Through Student Involvement: Practical Solutions for Aiding Learning in Higher and Further Education . London: Routledge Falmer.

Fastre, G. M. J., van der Klink, M. R., Sluijsmans, D., and van Merrienboer, J. J. G. (2012). Drawing students' attention to relevant assessment criteria: effects on self-assessment skills and performance. J. Voc. Educ. Train. 64, 185–198. doi: 10.1080/13636820.2011.630537

Fastre, G. M. J., van der Klink, M. R., and van Merrienboer, J. J. G. (2010). The effects of performance-based assessment criteria on student performance and self-assessment skills. Adv. Health Sci. Educ. 15, 517–532. doi: 10.1007/s10459-009-9215-x

Fitzpatrick, B., and Schulz, H. (2016). “Teaching young students to self-assess critically,” Paper presented at the Annual Meeting of the American Educational Research Association (Washington, DC).

Franken, A. S. (1992). I'm Good Enough, I'm Smart Enough, and Doggone it, People Like Me! Daily affirmations by Stuart Smalley. New York, NY: Dell.

Glaser, C., and Brunstein, J. C. (2007). Improving fourth-grade students' composition skills: effects of strategy instruction and self-regulation procedures. J. Educ. Psychol. 99, 297–310. doi: 10.1037/0022-0663.99.2.297

Gonida, E. N., and Leondari, A. (2011). Patterns of motivation among adolescents with biased and accurate self-efficacy beliefs. Int. J. Educ. Res. 50, 209–220. doi: 10.1016/j.ijer.2011.08.002

Graham, S., Hebert, M., and Harris, K. R. (2015). Formative assessment and writing. Elem. Sch. J. 115, 523–547. doi: 10.1086/681947

Guillory, J. J., and Blankson, A. N. (2017). Using recently acquired knowledge to self-assess understanding in the classroom. Sch. Teach. Learn. Psychol. 3, 77–89. doi: 10.1037/stl0000079

Hacker, D. J., Bol, L., Horgan, D. D., and Rakow, E. A. (2000). Test prediction and performance in a classroom context. J. Educ. Psychol. 92, 160–170. doi: 10.1037/0022-0663.92.1.160

Harding, J. L., and Hbaci, I. (2015). Evaluating pre-service teachers math teaching experience from different perspectives. Univ. J. Educ. Res. 3, 382–389. doi: 10.13189/ujer.2015.030605

Harris, K. R., Graham, S., Mason, L. H., and Friedlander, B. (2008). Powerful Writing Strategies for All Students . Baltimore, MD: Brookes.

Harris, L. R., and Brown, G. T. L. (2013). Opportunities and obstacles to consider when using peer- and self-assessment to improve student learning: case studies into teachers' implementation. Teach. Teach. Educ. 36, 101–111. doi: 10.1016/j.tate.2013.07.008

Hattie, J., and Timperley, H. (2007). The power of feedback. Rev. Educ. Res. 77, 81–112. doi: 10.3102/003465430298487

Hawe, E., and Parr, J. (2014). Assessment for learning in the writing classroom: an incomplete realization. Curr. J. 25, 210–237. doi: 10.1080/09585176.2013.862172

Hawkins, S. C., Osborne, A., Schofield, S. J., Pournaras, D. J., and Chester, J. F. (2012). Improving the accuracy of self-assessment of practical clinical skills using video feedback: the importance of including benchmarks. Med. Teach. 34, 279–284. doi: 10.3109/0142159X.2012.658897

Huang, Y., and Gui, M. (2015). Articulating teachers' expectations afore: Impact of rubrics on Chinese EFL learners' self-assessment and speaking ability. J. Educ. Train. Stud. 3, 126–132. doi: 10.11114/jets.v3i3.753

Kaderavek, J. N., Gillam, R. B., Ukrainetz, T. A., Justice, L. M., and Eisenberg, S. N. (2004). School-age children's self-assessment of oral narrative production. Commun. Disord. Q. 26, 37–48. doi: 10.1177/15257401040260010401

Karnilowicz, W. (2012). A comparison of self-assessment and tutor assessment of undergraduate psychology students. Soc. Behav. Person. 40, 591–604. doi: 10.2224/sbp.2012.40.4.591

Kevereski, L. (2017). (Self) evaluation of knowledge in students' population in higher education in Macedonia. Res. Pedag. 7, 69–75. doi: 10.17810/2015.49

Kingston, N. M., and Nash, B. (2011). Formative assessment: a meta-analysis and a call for research. Educ. Meas. 30, 28–37. doi: 10.1111/j.1745-3992.2011.00220.x

Kitsantas, A., and Zimmerman, B. J. (2006). Enhancing self-regulation of practice: the influence of graphing and self-evaluative standards. Metacogn. Learn. 1, 201–212. doi: 10.1007/s11409-006-9000-7

Kluger, A. N., and DeNisi, A. (1996). The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol. Bull. 119, 254–284. doi: 10.1037/0033-2909.119.2.254

Kollar, I., Fischer, F., and Hesse, F. (2006). Collaboration scripts: a conceptual analysis. Educ. Psychol. Rev. 18, 159–185. doi: 10.1007/s10648-006-9007-2

Kolovelonis, A., Goudas, M., and Dermitzaki, I. (2012). Students' performance calibration in a basketball dribbling task in elementary physical education. Int. Electron. J. Elem. Educ. 4, 507–517.

Koriat, A. (2012). The relationships between monitoring, regulation and performance. Learn. Instru. 22, 296–298. doi: 10.1016/j.learninstruc.2012.01.002

Kostons, D., van Gog, T., and Paas, F. (2012). Training self-assessment and task-selection skills: a cognitive approach to improving self-regulated learning. Learn. Instruc. 22, 121–132. doi: 10.1016/j.learninstruc.2011.08.004

Labuhn, A. S., Zimmerman, B. J., and Hasselhorn, M. (2010). Enhancing students' self-regulation and mathematics performance: the influence of feedback and self-evaluative standards. Metacogn. Learn. 5, 173–194. doi: 10.1007/s11409-010-9056-2

Leach, L. (2012). Optional self-assessment: some tensions and dilemmas. Assess. Evalu. High. Educ. 37, 137–147. doi: 10.1080/02602938.2010.515013

Lew, M. D. N., Alwis, W. A. M., and Schmidt, H. G. (2010). Accuracy of students' self-assessment and their beliefs about its utility. Assess. Evalu. High. Educ. 35, 135–156. doi: 10.1080/02602930802687737

Lin-Siegler, X., Shaenfield, D., and Elder, A. D. (2015). Contrasting case instruction can improve self-assessment of writing. Educ. Technol. Res. Dev. 63, 517–537. doi: 10.1007/s11423-015-9390-9

Lipnevich, A. A., Berg, D. A. G., and Smith, J. K. (2016). “Toward a model of student response to feedback,” in The Handbook of Human and Social Conditions in Assessment , eds G. T. L. Brown and L. R. Harris (New York, NY: Routledge), 169–185.

Lopez, R., and Kossack, S. (2007). Effects of recurring use of self-assessment in university courses. Int. J. Learn. 14, 203–216. doi: 10.18848/1447-9494/CGP/v14i04/45277

Lopez-Pastor, V. M., Fernandez-Balboa, J.-M., Santos Pastor, M. L., and Aranda, A. F. (2012). Students' self-grading, professor's grading and negotiated final grading at three university programmes: analysis of reliability and grade difference ranges and tendencies. Assess. Evalu. High. Educ. 37, 453–464. doi: 10.1080/02602938.2010.545868

Lui, A. (2017). Validity of the responses to feedback survey: operationalizing and measuring students' cognitive and affective responses to teachers' feedback (Doctoral dissertation). Albany, NY: University at Albany, SUNY.

Marks, M. B., Haug, J. C., and Hu, H. (2018). Investigating undergraduate business internships: do supervisor and self-evaluations differ? J. Educ. Bus. 93, 33–45. doi: 10.1080/08832323.2017.1414025

Memis, E. K., and Seven, S. (2015). Effects of an SWH approach and self-evaluation on sixth grade students' learning and retention of an electricity unit. Int. J. Prog. Educ. 11, 32–49.

Metcalfe, J., and Kornell, N. (2005). A region of proximal learning model of study time allocation. J. Mem. Langu. 52, 463–477. doi: 10.1016/j.jml.2004.12.001

Meusen-Beekman, K. D., Joosten-ten Brinke, D., and Boshuizen, H. P. A. (2016). Effects of formative assessments to develop self-regulation among sixth grade students: results from a randomized controlled intervention. Stud. Educ. Evalu. 51, 126–136. doi: 10.1016/j.stueduc.2016.10.008

Micán, D. A., and Medina, C. L. (2017). Boosting vocabulary learning through self-assessment in an English language teaching context. Assess. Evalu. High. Educ. 42, 398–414. doi: 10.1080/02602938.2015.1118433

Miller, T. M., and Geraci, L. (2011). Training metacognition in the classroom: the influence of incentives and feedback on exam predictions. Metacogn. Learn. 6, 303–314. doi: 10.1007/s11409-011-9083-7

Murakami, C., Valvona, C., and Broudy, D. (2012). Turning apathy into activeness in oral communication classes: regular self- and peer-assessment in a TBLT programme. System 40, 407–420. doi: 10.1016/j.system.2012.07.003

Nagel, M., and Lindsey, B. (2018). The use of classroom clickers to support improved self-assessment in introductory chemistry. J. College Sci. Teach. 47, 72–79.

Ndoye, A. (2017). Peer/self-assessment and student learning. Int. J. Teach. Learn. High. Educ. 29, 255–269.

Nguyen, T., and Foster, K. A. (2018). Research note—multiple time point course evaluation and student learning outcomes in an MSW course. J. Soc. Work Educ. 54, 715–723. doi: 10.1080/10437797.2018.1474151

Nicol, D., and Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Stud. High. Educ. 31, 199–218. doi: 10.1080/03075070600572090

Nielsen, K. (2014). Self-assessment methods in writing instruction: a conceptual framework, successful practices and essential strategies. J. Res. Read. 37, 1–16. doi: 10.1111/j.1467-9817.2012.01533.x

Nowell, C., and Alston, R. M. (2007). I thought I got an A! Overconfidence across the economics curriculum. J. Econ. Educ. 38, 131–142. doi: 10.3200/JECE.38.2.131-142

Nugteren, M. L., Jarodzka, H., Kester, L., and Van Merriënboer, J. J. G. (2018). Self-regulation of secondary school students: self-assessments are inaccurate and insufficiently used for learning-task selection. Instruc. Sci. 46, 357–381. doi: 10.1007/s11251-018-9448-2

Panadero, E., and Alonso-Tapia, J. (2013). Self-assessment: theoretical and practical connotations. When it happens, how is it acquired and what to do to develop it in our students. Electron. J. Res. Educ. Psychol. 11, 551–576. doi: 10.14204/ejrep.30.12200

Panadero, E., Alonso-Tapia, J., and Huertas, J. A. (2012). Rubrics and self-assessment scripts effects on self-regulation, learning and self-efficacy in secondary education. Learn. Individ. Differ. 22, 806–813. doi: 10.1016/j.lindif.2012.04.007

Panadero, E., Alonso-Tapia, J., and Huertas, J. A. (2014). Rubrics vs. self-assessment scripts: effects on first year university students' self-regulation and performance. J. Study Educ. Dev. 3, 149–183. doi: 10.1080/02103702.2014.881655

Panadero, E., Alonso-Tapia, J., and Reche, E. (2013). Rubrics vs. self-assessment scripts effect on self-regulation, performance and self-efficacy in pre-service teachers. Stud. Educ. Evalu. 39, 125–132. doi: 10.1016/j.stueduc.2013.04.001

Panadero, E., Brown, G. L., and Strijbos, J.-W. (2016a). The future of student self-assessment: a review of known unknowns and potential directions. Educ. Psychol. Rev. 28, 803–830. doi: 10.1007/s10648-015-9350-2

Panadero, E., Jonsson, A., and Botella, J. (2017). Effects of self-assessment on self-regulated learning and self-efficacy: four meta-analyses. Educ. Res. Rev. 22, 74–98. doi: 10.1016/j.edurev.2017.08.004

Panadero, E., Jonsson, A., and Strijbos, J. W. (2016b). “Scaffolding self-regulated learning through self-assessment and peer assessment: guidelines for classroom implementation,” in Assessment for Learning: Meeting the Challenge of Implementation , eds D. Laveault and L. Allal (New York, NY: Springer), 311–326. doi: 10.1007/978-3-319-39211-0_18

Panadero, E., and Romero, M. (2014). To rubric or not to rubric? The effects of self-assessment on self-regulation, performance and self-efficacy. Assess. Educ. 21, 133–148. doi: 10.1080/0969594X.2013.877872

Papanthymou, A., and Darra, M. (2018). Student self-assessment in higher education: The international experience and the Greek example. World J. Educ. 8, 130–146. doi: 10.5430/wje.v8n6p130

Punhagui, G. C., and de Souza, N. A. (2013). Self-regulation in the learning process: actions through self-assessment activities with Brazilian students. Int. Educ. Stud. 6, 47–62. doi: 10.5539/ies.v6n10p47

Raaijmakers, S. F., Baars, M., Paas, F., van Merriënboer, J. J. G., and van Gog, T. (2019). Metacognition and Learning , 1–22. doi: 10.1007/s11409-019-09189-5

Raaijmakers, S. F., Baars, M., Schapp, L., Paas, F., van Merrienboer, J., and van Gog, T. (2017). Training self-regulated learning with video modeling examples: do task-selection skills transfer? Instr. Sci. 46, 273–290. doi: 10.1007/s11251-017-9434-0

Ratminingsih, N. M., Marhaeni, A. A. I. N., and Vigayanti, L. P. D. (2018). Self-assessment: the effect on students' independence and writing competence. Int. J. Instruc. 11, 277–290. doi: 10.12973/iji.2018.11320a

Ross, J. A., Rolheiser, C., and Hogaboam-Gray, A. (1998). “Impact of self-evaluation training on mathematics achievement in a cooperative learning environment,” Paper presented at the annual meeting of the American Educational Research Association (San Diego, CA).

Ross, J. A., and Starling, M. (2008). Self-assessment in a technology-supported environment: the case of grade 9 geography. Assess. Educ. 15, 183–199. doi: 10.1080/09695940802164218

Samaie, M., Nejad, A. M., and Qaracholloo, M. (2018). An inquiry into the efficiency of whatsapp for self- and peer-assessments of oral language proficiency. Br. J. Educ. Technol. 49, 111–126. doi: 10.1111/bjet.12519

Sanchez, C. E., Atkinson, K. M., Koenka, A. C., Moshontz, H., and Cooper, H. (2017). Self-grading and peer-grading for formative and summative assessments in 3rd through 12th grade classrooms: a meta-analysis. J. Educ. Psychol. 109, 1049–1066. doi: 10.1037/edu0000190

Sargeant, J. (2008). Toward a common understanding of self-assessment. J. Contin. Educ. Health Prof. 28, 1–4. doi: 10.1002/chp.148

Sargeant, J., Mann, K., van der Vleuten, C., and Metsemakers, J. (2008). “Directed” self-assessment: practice and feedback within a social context. J. Contin. Educ. Health Prof. 28, 47–54. doi: 10.1002/chp.155

Shute, V. (2008). Focus on formative feedback. Rev. Educ. Res. 78, 153–189. doi: 10.3102/0034654307313795

Silver, I., Campbell, C., Marlow, B., and Sargeant, J. (2008). Self-assessment and continuing professional development: the Canadian perspective. J. Contin. Educ. Health Prof. 28, 25–31. doi: 10.1002/chp.152

Siow, L.-F. (2015). Students' perceptions on self- and peer-assessment in enhancing learning experience. Malaysian Online J. Educ. Sci. 3, 21–35.

Son, L. K., and Metcalfe, J. (2000). Metacognitive and control strategies in study-time allocation. J. Exp. Psychol. 26, 204–221. doi: 10.1037/0278-7393.26.1.204

Tan, K. (2004). Does student self-assessment empower or discipline students? Assess. Evalu. Higher Educ. 29, 651–662. doi: 10.1080/0260293042000227209

Tan, K. (2009). Meanings and practices of power in academics' conceptions of student self-assessment. Teach. High. Educ. 14, 361–373. doi: 10.1080/13562510903050111

Taras, M. (2008). Issues of power and equity in two models of self-assessment. Teach. High. Educ. 13, 81–92. doi: 10.1080/13562510701794076

Tejeiro, R. A., Gomez-Vallecillo, J. L., Romero, A. F., Pelegrina, M., Wallace, A., and Emberley, E. (2012). Summative self-assessment in higher education: implications of its counting towards the final mark. Electron. J. Res. Educ. Psychol. 10, 789–812.

Thawabieh, A. M. (2017). A comparison between students' self-assessment and teachers' assessment. J. Curri. Teach. 6, 14–20. doi: 10.5430/jct.v6n1p14

Tulgar, A. T. (2017). Selfie@ssessment as an alternative form of self-assessment at undergraduate level in higher education. J. Langu. Linguis. Stud. 13, 321–335.

van Helvoort, A. A. J. (2012). How adult students in information studies use a scoring rubric for the development of their information literacy skills. J. Acad. Librarian. 38, 165–171. doi: 10.1016/j.acalib.2012.03.016

van Loon, M. H., de Bruin, A. B. H., van Gog, T., van Merriënboer, J. J. G., and Dunlosky, J. (2014). Can students evaluate their understanding of cause-and-effect relations? The effects of diagram completion on monitoring accuracy. Acta Psychol. 151, 143–154. doi: 10.1016/j.actpsy.2014.06.007

van Reybroeck, M., Penneman, J., Vidick, C., and Galand, B. (2017). Progressive treatment and self-assessment: Effects on students' automatisation of grammatical spelling and self-efficacy beliefs. Read. Writing 30, 1965–1985. doi: 10.1007/s11145-017-9761-1

Wang, W. (2017). Using rubrics in student self-assessment: student perceptions in the English as a foreign language writing context. Assess. Evalu. High. Educ. 42, 1280–1292. doi: 10.1080/02602938.2016.1261993

Wollenschläger, M., Hattie, J., Machts, N., Möller, J., and Harms, U. (2016). What makes rubrics effective in teacher-feedback? Transparency of learning goals is not enough. Contemp. Educ. Psychol. 44–45, 1–11. doi: 10.1016/j.cedpsych.2015.11.003

Yan, Z., and Brown, G. T. L. (2017). A cyclical self-assessment process: towards a model of how students engage in self-assessment. Assess. Evalu. High. Educ. 42, 1247–1262. doi: 10.1080/02602938.2016.1260091

Yilmaz, F. N. (2017). Reliability of scores obtained from self-, peer-, and teacher-assessments on teaching materials prepared by teacher candidates. Educ. Sci. 17, 395–409. doi: 10.12738/estp.2017.2.0098

Zimmerman, B. J. (2000). Self-efficacy: an essential motive to learn. Contemp. Educ. Psychol. 25, 82–91. doi: 10.1006/ceps.1999.1016

Zimmerman, B. J., and Schunk, D. H. (2011). “Self-regulated learning and performance: an introduction and overview,” in Handbook of Self-Regulation of Learning and Performance , eds B. J. Zimmerman and D. H. Schunk (New York, NY: Routledge), 1–14.

Keywords: self-assessment, self-evaluation, self-grading, formative assessment, classroom assessment, self-regulated learning (SRL)

Citation: Andrade HL (2019) A Critical Review of Research on Student Self-Assessment. Front. Educ. 4:87. doi: 10.3389/feduc.2019.00087

Received: 27 April 2019; Accepted: 02 August 2019; Published: 27 August 2019.

Copyright © 2019 Andrade. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Heidi L. Andrade, handrade@albany.edu

This article is part of the Research Topic: Advances in Classroom Assessment Theory and Practice.

  • Student Self-Assessment

Student self-assessment occurs when learners assess their own performance. With practice, they learn to:

  • objectively reflect on and critically evaluate their own progress and skill development
  • identify gaps in their understanding and capabilities
  • discern how to improve their performance
  • learn independently and think critically.

Use self-assessment to develop the learning skills students will need for professional competence, and to make them aware of and more responsible for their own learning processes.

Sometimes teachers use self-assessment and peer assessment together. For example, they might require students to use a rubric to critique the work of their peers, and then to apply the same criteria to their own work. Nulty (n.d.) argues that students must first learn to peer assess if they are to self-assess effectively.

Skilled self-assessment can be as reliable as other forms of assessment, but teachers must provide students with training and practice if they want results to closely align with other assessors' results.

You can introduce students to the idea of self-assessment using:

  • ongoing structured formative learning (for example, by using online quizzes that give students immediate feedback on their performance) or
  • summative assessment (for example, requiring students to grade their own performance).

The literature suggests that self-assessment may be more useful as a formative learning tool than as a summative assessment.

Self-assessment benefits students by:

  • helping them develop important meta-cognitive skills that contribute to a range of graduate capabilities. All professionals must be able to evaluate their own performance; thus, this practice should be embedded in higher-education learning as early as possible.
  • increasing their self-awareness through reflective practice, making the criteria for self-evaluation explicit, and making performance-improvement practices intrinsic to ongoing learning.
  • contributing to the development of critical reviewing skills, enabling students to more objectively evaluate their own performance and, when used in conjunction with peer assessment, others' performance as well. With peer assessment, students become more practised in giving constructive feedback and receiving and acting on the feedback they receive.
  • helping them take control of their own learning and assessment, and giving them the chance to manage their own learning and development more independently.
  • giving them greater agency regarding assessment, thus enriching their learning.
  • possibly, in the long run, reducing the teacher's assessment workload – although this benefit is not sufficient on its own to introduce student self-assessment.

Although studies have shown that most students are fairly capable self-assessors, introducing self-assessment can raise dilemmas and challenges. For example:

  • Lower-performing and less experienced students tend to overestimate their achievements. As with peer assessment, students' ability to self-assess accurately must be developed over time, and with substantial guidance. It is definitely not initially a time-saving exercise for the teacher.
  • Students may resist self-assessment, perceiving assessment and grading to be the teacher's job, or having no confidence in their ability to assess themselves.
  • Issues can arise if students' self-assessments are not consistent with peer or staff assessments.

Designing self-assessment

Students often readily accept the use of self-assessment as part of a formative learning process. It satisfies their need for formal self-reflection on their progress, and gives them agency when they are planning their learning. It may also give them valuable experience in self-assessment that they can apply throughout the course.

Design self-assessment carefully, and ensure that you integrate its use into the assessment plan. This way you optimise the benefits to learning, appropriately engage students in the process by giving them clear directions and explanations, and ensure that contingency plans are in place if issues arise.

Here are some factors to consider when including student self-assessment in your learning design:

  • It is unreasonable to expect students to become experts in self-assessment after a single course.
  • It is reasonable to expect that they will be capable self-assessors by the end of their undergraduate program.
  • Consider students' different experience levels when designing tasks, and support the development of their self-assessment capabilities accordingly.
  • Provide more guidance and facilitation for less experienced students.
  • Make clear to students the rationale for self-assessment and its intended benefits to their learning, so that they do not misconstrue the strategy as indicating that the teacher is lazy.
  • At first, you can provide predetermined assessment criteria for students to use in self-assessing their work. In some areas and at higher levels of study, these may be best determined by the teacher.
  • Students may find it significantly more interesting and motivating if you involve them in developing the assessment criteria. This also encourages their autonomy and self-management as learners.
  • Helping develop assessment criteria develops students' assessment literacy and promotes a shared understanding of tasks and assessment standards.
  • Students can be capable assessors of their own and their peers' performance. Help them build their meta-awareness about this capability so that they can articulate and defend their critiques of their own work, and clarify what they can do to improve their performance.
  • Providing an expert assessment of students' work allows them to cross-check their self-assessment, as does combining self-assessment with peer assessment.
  • Use assessed examples of students' work to illustrate different levels of achievement. This will clarify the standards and show how criteria are applied.
  • Deciding whether self-assessment will contribute to students' grades is a complex decision. Self-assessment for grading may be more appropriate in high-level undergraduate or postgraduate courses, especially where class sizes are smaller.
  • If you decide that self-assessment will contribute to the grade, precisely state to both students and assessors, at the outset, how much it will contribute.
  • Introduce self-assessment for practice and familiarisation before you use it to contribute to grading. For example, have students attach a self-assessment report to their submitted work.
  • Assessment of learning is intrinsically inexact and subjective. Use assessment rubrics, whether predetermined by the teacher or negotiated with students, to specify expected standards of performance against stated criteria.
  • Shared use of a rubric by staff and students can prompt valuable conversations about assessment principles and quality standards.
  • The more a student's self-assessment contributes to the grade, the greater will be the need for the teacher to moderate the grade with their own assessment. Remember, though, that "if tutors moderate student self-assessments with anything other than a light touch, students do not put their hearts into being objective in their self-assessment" (Race, 2001, p. 14). But if self-assessment results are not moderated, the fairness of the process will be questionable, no matter how capable the students may be as self-assessors. A moderation process can simply consist of comparing the tutor's and/or peer's grade and the student's self-assessed grade (see the sketch below). Where they are very different, you can discuss the discrepancy with the student, with an eye to possibly reviewing the grade. Such processes are more difficult to manage in very large classes.
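To make that light-touch moderation step concrete, here is a minimal sketch in Python. The grade records, the field names, and the ten-point discrepancy threshold are illustrative assumptions rather than figures drawn from Race or any institutional policy; the idea is simply to surface large gaps between self-assessed and tutor grades for a follow-up conversation, not to adjust grades automatically.

```python
# Minimal sketch of a light-touch moderation check: compare each student's
# self-assessed grade with the tutor's grade and flag large discrepancies for
# a follow-up conversation. The records and the 10-point threshold are
# illustrative assumptions, not institutional policy.

DISCREPANCY_THRESHOLD = 10  # percentage points; adjust to suit the course

grades = [
    {"student": "A. Lee",   "self": 78, "tutor": 74},
    {"student": "B. Singh", "self": 92, "tutor": 70},
    {"student": "C. Okoro", "self": 65, "tutor": 68},
]

def flag_for_discussion(records, threshold=DISCREPANCY_THRESHOLD):
    """Return records whose self-assessed and tutor grades differ by more
    than the threshold, so the discrepancy can be discussed with the student."""
    flagged = []
    for record in records:
        gap = abs(record["self"] - record["tutor"])
        if gap > threshold:
            flagged.append({**record, "gap": gap})
    return flagged

for record in flag_for_discussion(grades):
    print(f"{record['student']}: self {record['self']}, tutor {record['tutor']} "
          f"(gap {record['gap']} points) - discuss before finalising the grade")
```

In practice the threshold and the follow-up action would be stated in the assessment plan at the outset, alongside how much the self-assessment counts toward the grade.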

Practical methods

Reflective journal.

Having students produce a reflective journal about their own learning and achievements is a logical way to engage them in self-assessment, as it gives both them and their assessors insights into their learning.

Extend the reflective-journal task to include their thoughts on how they can improve their performance.

You can assess the reflective journal, or students can reflect on their own reflections, or assess their peers' journals and give feedback.

One version of this type of assessment task is the "self-assessment schedule" (Boud, 1992), a formal document prepared by the student that presents their achievements alongside their learning goals and comments on what they feel they have achieved.

Self-assessment prompts for students

You can incorporate self-assessment into almost any assessment task, either at the point of submission or after students submit their assignments.

Race (2001) suggests structuring the self-assessment by prompting students; for example, by asking them:

  • What do you think is a fair grade for the work you have handed in, and why do you think so?
  • What did you do best in this assessment task?
  • What did you do least well in this assessment task?
  • What did you find was the hardest part?
  • What was the most important thing you learned in doing this assessment task?
  • If you had more time to complete the task, what (if anything) would you change, and why?

Self-assessment in group work

Self-assessment can focus on aspects of a task that only the student can comment on, such as their contribution to teamwork and the collaborative production of a group's outputs. When students are allowed to do this, they see it as reducing the risk of being judged unfairly (Nulty, n.d.).

Self-assessment of class participation

Assessing students' participation in class discussions and activities is often seen as an overly subjective process. If students can see that you value their perceptions of the quality of their own and their peers' contributions, they are likely to become more active in the classroom. Combining student self- and peer assessment with tutor assessment makes for a more reliable grade (Dancer & Kamvounias, 2005).
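As a rough illustration of how the three perspectives might be combined, here is a minimal sketch in Python. The 50/30/20 weighting and the marks are assumptions made for the example; Dancer and Kamvounias (2005) describe combining tutor, peer, and self-assessment in general, not these particular numbers.

```python
# Illustrative sketch of combining tutor, peer, and self-assessed participation
# marks into one grade. The 50/30/20 weights and the marks are assumptions for
# the example, not figures from Dancer and Kamvounias (2005).

WEIGHTS = {"tutor": 0.5, "peer": 0.3, "self": 0.2}

def participation_grade(tutor_mark, peer_marks, self_mark):
    """Weighted combination of a tutor mark, the mean of peer marks,
    and the student's self-assessed mark (all out of 100)."""
    peer_mean = sum(peer_marks) / len(peer_marks)
    components = {"tutor": tutor_mark, "peer": peer_mean, "self": self_mark}
    return sum(WEIGHTS[key] * value for key, value in components.items())

print(round(participation_grade(tutor_mark=70, peer_marks=[75, 80, 72], self_mark=85), 1))  # 74.7
```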

Using technology

Use online tools such as journals or blogs to manage self-assessment based on reflective activities. You might, for example, require students to publish regular reflections in response to question prompts. Both you and they can then assess their learning process. You can set up a private journal for this purpose, or a blog that can be shared with other students (or made public) and comments invited.

For more objective tasks, such as scientific or mathematical calculation, you can provide automatically marked online tests. Invite students to create questions to contribute to the test database; this adds a meta-cognitive layer to the exercise. Online tools such as PeerWise have been developed for this purpose, and the Moodle learning management system allows the compilation of question banks; self-assessment can also be incorporated into a workshop activity.
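As a small illustration of the kind of automatically marked, immediate-feedback exercise described above, here is a minimal command-line sketch in Python. The questions, answers, and feedback strings are placeholders; in practice an LMS question bank (for example in Moodle) would supply and mark the items.

```python
# Minimal command-line sketch of an automatically marked quiz with immediate
# feedback. Questions, answers, and feedback are placeholders; a real course
# would draw them from an LMS question bank.

QUESTIONS = [
    {"prompt": "2 + 2 * 3 = ?", "answer": "8",
     "feedback": "Multiplication is evaluated before addition."},
    {"prompt": "Derivative of x**2 with respect to x?", "answer": "2x",
     "feedback": "Apply the power rule: d/dx x^n = n * x^(n-1)."},
]

def run_quiz(questions):
    score = 0
    for question in questions:
        response = input(question["prompt"] + " ").strip().lower()
        if response == question["answer"].lower():
            score += 1
            print("Correct.")
        else:
            print("Not quite. " + question["feedback"])
    print(f"Score: {score}/{len(questions)}")

if __name__ == "__main__":
    run_quiz(QUESTIONS)
```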

Boud, D. (1992). The use of self-assessment schedules in negotiated learning. Studies in Higher Education, 17(2), 185-201.

Brew, C., Riley, P. & Walta, C. (2009). Education students and their teachers: Comparing views on participative assessment practices. Assessment and Evaluation in Higher Education, 36(4), 641-657.

Dancer, D. & Kamvounias, P. (2005). Student involvement in assessment: a project to assess class participation fairly and reliably. Assessment and Evaluation in Higher Education, 30(4), 445-454.

Kearney, S. (2019). Transforming the first-year experience through self and peer assessment. Journal of University Teaching & Learning Practice, 16(5). https://doi.org/10.53761/1.16.5.3

McDonald, B. & Boud, D. (2003). The impact of self-assessment on achievement: The effects of self-assessment training on performance in external examinations. Assessment in Education, 10(2), 209-220.

Nulty, D.D. (n.d.). A guide to peer and self assessment: approaches and practice strategies for academics. Griffith Institute for Higher Education, Griffith University.

Nulty, D.D. (2010). Peer and self assessment in the first year of university. Assessment and Evaluation in Higher Education, 1469–297X.

Panadero, E., Pérez, D.G., Ruiz, J.F., Fraile, J., Sánchez-Iglesias, I. & Brown, G. T. L. (2022). University students' strategies and criteria during self-assessment: instructor's feedback, rubrics, and year level effects. European Journal of Psychology of Education (October). https://doi.org/10.1007/s10212-022-00639-4

Race, P. (2001). A Briefing on Self, Peer and Group Assessment. Assessment Series No. 9, LTSN Generic Centre.

UNSW Teaching Gateway. The Moodle Workshop tool.


Self-Report is Indispensable to Assess Students’ Learning


Self-report is required to assess mental states in nuanced ways. By implication, self-report is indispensable to capture the psychological processes driving human learning, such as learners' emotions, motivation, strategy use, and metacognition. As shown in the contributions to this special issue, self-report related to learning shows convergent and predictive validity, and there are ways to further strengthen its power. However, self-report is limited to assessing conscious contents, lacks temporal resolution, and is subject to response sets and memory biases. As such, it needs to be complemented by alternative measures. Future research on self-report should consider not only closed-response quantitative measures but also alternative self-report methodologies, make use of within-person analysis, and investigate the impact of respondents' emotions on processes and outcomes of self-report assessments.

Article Details

FLR adopts the Attribution-NonCommercial-NoDerivs Creative Common License (BY-NC-ND). That is, Copyright for articles published in this journal is retained by the authors with, however, first publication rights granted to the journal. By virtue of their appearance in this open access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.



What Is a Teacher Self-Assessment? Tools, Types and Benefits

The number of public school teachers has grown in the past decade, the National Center for Education Statistics reports: A comparison between the 2011-12 and 2020-21 academic years reveals an 11 percent increase. However, according to the Government Accountability Office (GAO), there are teacher shortages in approximately three-quarters of U.S. states. During the pandemic, between 2019 and 2021, 7 percent of public school teachers decided to leave the profession, further exacerbating the challenge, the GAO reports.

Despite the many difficulties they face, teachers remain dedicated to providing the best education possible to their students. As teachers seek to nurture their students’ curiosity and creativity, teacher self-assessments emerge as a valuable practice that can benefit everyone. These reflective exercises, combined with pursuing an advanced education in inclusive instruction, can empower educators to gauge their effectiveness in the classroom.

What Is a Teacher Self-Assessment?

A teacher self-assessment is a tool that a teacher can use to measure their teaching performance against their goals. The process often involves answering questions about their teaching methods and what impact they have on student learning.

Questions may focus on key elements of teaching practice, such as:

  • Am I creating a conducive learning environment?
  • How is my teaching practice facilitating student-centered learning?
  • What enrichment activities am I offering? Are they encouraging open discussions?

A teacher self-assessment may involve collecting and analyzing relevant evidence, for example, student work samples or questionnaire responses.

Since objectivity is key in self-assessments, teachers use specific criteria to assess the effectiveness of their teaching practice. Another key is to be straightforward and honest when answering questions. Self-assessments are best conducted in a setting conducive to reflection, whether that’s at home, in a classroom after school hours, or in a quiet space of choice.

Different Types of Teacher Self-Assessments

A teacher self-assessment can be approached in a number of different ways. A common practice is for teachers to observe themselves through video recordings. This approach can help them rate their own effectiveness in delivering instruction, engaging students and managing the classroom.

In another example, teachers compile a portfolio containing lesson plans and sample materials. Teachers could include a collection of work that showcases their growth, reflections, achievements and evidence of development. Teachers can then refer to these materials when they set goals for the future as well.

Teacher self-assessments can also include activities such as journaling. Teachers can record their experiences throughout a school year — both good and bad — and then review their notes at the end of the year. This approach can provide insights into what was effective and what could be improved upon.

A teacher’s self-assessment can include observation from peers as well. In this approach, teachers ask their colleagues, typically more experienced teachers, to observe them in their classroom. This can lead to constructive feedback and new ideas that can help teachers improve their teaching practice and their students’ educational outcomes.

These self-assessment approaches can empower teachers to gain insights, identify growth areas and improve their overall effectiveness.

The Benefits of Self-Assessment

Teacher self-assessments provide a systematic approach to building new skills, which can help teachers become more competitive and position themselves for better pay. Of course, many criteria influence teacher salaries, and salaries also vary by state.

By engaging in a self-assessment, teachers can gain insights into their teaching methods and strategies, which can enable them to identify areas for improvement and make necessary adjustments to improve student outcomes.

Teacher self-assessments offer numerous other benefits for educators as well, such as the following:

  • They can uncover challenges and skills teachers may have overlooked or not fully recognized. A deep exploration of their teaching practice can illuminate areas for further development.
  • They can help teachers identify pressing problems. For example, teachers might recognize the need to prioritize certain important issues for students, leading them to work with an instructional coordinator to address specific concerns through targeted solutions and improvements.
  • They can allow teachers to delve into specific aspects of their teaching practice. Teachers can move beyond vague labels, such as “good performance,” to arrive at concrete evaluations of their instructional performance.

Another benefit of teacher self-assessments is that they allow teachers to make informed decisions, implement effective strategies and be honest with themselves about what’s working in the classroom. Making an effort to conduct self-assessments also demonstrates a teacher’s willingness to improve, their commitment to pursuing teaching excellence and their desire to achieve professional growth.

Teacher Self-Assessment vs. Traditional Evaluations: What’s Different?

Teacher self-assessments and traditional evaluations may be different, but they both can play an important role in teachers’ professional development. Self-assessment is a personalized, reflective approach. The source of the evaluation is the teacher themselves. In other words, teachers consider their own teaching practice to judge their strengths and weaknesses and identify areas for improvement.

Self-assessments are also voluntary and tend to be informal. Teachers may choose to ask their peers to observe them in class for the purpose of self-improvement.

On the other hand, traditional evaluations are based on the perspectives of school administrators or teachers’ supervisors. Unlike a self-assessment, a traditional evaluation is a formal process that relies on standardized criteria, accountability measures and observation protocols defined by a school district.

Teacher self-assessments empower teachers to practice autonomy in setting goals and creating action plans for improvement. In contrast, traditional evaluations are typically a requirement imposed by a supervisor or external authority such as a school district administrator. This approach may limit a teacher’s ability to shape the evaluation process.

Combining these two approaches can provide a comprehensive and well-rounded evaluation of a teacher’s performance and professional growth.

Self-Assessment Tools

Teachers can use a variety of tools for their self-assessments, including self-reflection, surveys and questionnaires. For instance, they can use rubrics containing checklists that outline teaching criteria. Before the start of the school year, the teacher develops a checklist to measure areas such as planning, lesson content, classroom organization, instruction delivery, student engagement and classroom management. Throughout the year, they rate themselves and identify areas for improvement.
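A lightweight way to keep such a checklist consistent across the year is to record the ratings in a simple structure and surface the lowest-rated areas as priorities. The sketch below, in Python, uses invented checkpoints and ratings purely for illustration.

```python
# Illustrative sketch of a teacher self-assessment checklist: rate each area
# on a 1-5 scale at a few points in the year, then surface the lowest-rated
# areas as priorities. Checkpoints and ratings are invented example data.

# ratings[checkpoint][area] = self-rating from 1 (needs work) to 5 (strong)
ratings = {
    "start of year": {"planning": 4, "lesson content": 4,
                      "classroom organization": 3, "instruction delivery": 3,
                      "student engagement": 2, "classroom management": 4},
    "mid-year":      {"planning": 4, "lesson content": 5,
                      "classroom organization": 4, "instruction delivery": 4,
                      "student engagement": 3, "classroom management": 4},
}

def priorities(checkpoint_ratings, max_items=2):
    """Return the lowest-rated areas for one checkpoint."""
    return sorted(checkpoint_ratings, key=checkpoint_ratings.get)[:max_items]

latest = ratings["mid-year"]
print("Areas to focus on next:", ", ".join(priorities(latest)))
```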

Teachers can also use readily available self-assessment tools that offer structured frameworks. These tools pose questions that cover aspects of teaching such as subject matter knowledge, planning skills and effectiveness in delivering instruction. Teachers assess themselves and receive personalized feedback. For example, the American Institutes for Research offers teachers a self-assessment tool to help them reflect on their teaching practices that support social and emotional learning for students.

Another teacher self-assessment method is collaboration with others. For example, a teacher can ask their students about the students’ perception of what’s being taught, the classroom environment, their level of engagement and their level of satisfaction. Students can share their feedback through a survey or questionnaire the teacher provides.

Advance Your Teaching Career

Teacher self-assessments can unlock teachers’ potential to advance in their careers. They can help teachers overcome challenges and harness their strengths. They are also a powerful tool for lifelong learning, enabling teachers to adapt to challenges, stay ahead of the curve and embrace the latest teaching practices.

Teachers with a growth mindset and a commitment to continuous improvement can further enhance their careers by advancing their education. Augusta University Online’s Master of Education in Instruction program offers a curriculum focused on classroom management, pedagogical theory, assessment analysis and curriculum design. The program focuses on preparing teachers with the skills and knowledge they need to foster inclusive and student-focused learning environments.

Learn more about how the program can help you reach your professional goals as a teacher.

Recommended Reading: What Can You Do With a Master of Education?

Sources:

  • Abeka, Teacher Self-Evaluation
  • American Institutes for Research, Self-Assessing Social and Emotional Instruction and Competencies: A Tool for Teachers
  • Center on Great Teachers and Leaders, Teacher Leadership: Self-Assessment and Readiness Tools
  • Education Week, “The Status of the Teaching Profession Is at a 50-Year Low. What Can We Do About It?”
  • EF, “Why This One Habit Can Transform Your Teaching”
  • National Center for Education Statistics, Characteristics of Public School Teachers
  • National Education Association, NEA Teacher Evaluation and Accountability Toolkit
  • SafetyCulture, “Teacher Evaluation Methods for Effective Quality Teaching”
  • U.S. Government Accountability Office, “Pandemic Learning: Less Academic Progress Overall, Student and Teacher Strain, and Implications for the Future”



Teach. Learn. Grow. (the education blog)

Erin Beard

The importance of student self-assessment

Want to know a secret? I didn’t mean to become a secondary ELA teacher. One of the reasons why an ELA endorsement wasn’t originally at the top of my list was this persistent worry: How does one keep up with marking and grading all those ELA assignments? I do love a good challenge, though, so despite my concerns, I jumped into the adventure, and I am so glad that I did.

Over time, I faced that ELA assignment volume fear. Through support from mentors, observations of experts, and practice in professional development settings, I learned how to embed student self-assessment into learning processes so that my workload concerns were alleviated. More importantly, my students were able to build metacognition and self-efficacy skills .

I don’t want to give the impression that learning how to embed self-assessment processes was smooth or linear, or that the process is complete for me. There was trial and error, zigs and zags, and I am still learning how to improve my practice. I hope that sharing the things I’ve learned along the way can bring relief and support you in your work.

The topic of student self-assessment is huge, so for the purposes of this blog post, I will focus on three things that I’ve learned along the way. If you want to know more about the components and benefits of self-assessment, check out this short Dylan Wiliam video. I also encourage you to read Heidi Andrade’s “A critical review of research on student self-assessment.” She asks and answers several key questions about self-assessment including, “What is self-assessment?” and “Why self-assess?” She also digs into how it relates to feedback.

1. Reflect on your role

When I first started teaching, I was prepared to operate as a learner-manager instead of a learner-empowerer. As a learner-manager ELA teacher, I would give directions for an essay, set a due date, collect essays, take hours to mark and grade the essays, and then hand back the papers. Inevitably I would be frustrated when students ignored my marks or tossed out the paper, only to ask how to raise their grade at the end of the quarter. Ugh! Wasn’t this precisely why I hadn’t planned on being an ELA teacher?

Slowly I came to understand how important it was for me to make the shift from thinking of myself as a learner-manager (an “I say it. You do it” approach) to thinking of myself as a learner-empowerer (a “How do I partner with my students to build knowledge, skills, and self-efficacy throughout the learning journey?” approach). (For more about learner-manager versus learner-empowerer as well as information about the connections to equity and a trauma-informed practice, see my post “6 ways to help heal toxic stress, trauma, and inequity in your virtual or in-person classroom”.)

Once I committed to being a learner-empowerer, my actions followed suit. I engineered learning goal paths that made students active agents in the learning processes. For example, I built in small, quick opportunities to practice self-assessing along the way so that by the time we arrived at an end point, students’ work was solid and they could reflect on the goal or explain a grade. More on that in the next tip.

2. Use reliable strategies, processes, and tools

Here are four “moves”—examples of specific actions I learned to embed in learning processes—that I think can help you as you consider the role of self-assessment in your classroom.

Nurture a community of learning

Authentic, meaningful, and effective student self-assessment requires participants to be honest and vulnerable. I had to deliberately foster a safe, respectful, and inclusive learning environment. In my blog post “5 little things that are really big” I explain specific ways to partner with students to make this happen. My colleague Cara Holt outlines 10 useful community-building strategies in her blog post, and another colleague, Vicki McCoy, explores self-assessment and metacognition in “How formative assessment boosts metacognition—and learning.”

Reallocate time and energy

Early in my teaching career, I was hesitant to fully dive into processes for successful student self-assessment. It felt like the learning experiences in a lesson or unit would take longer if I did because of the need to make room for the exercises that make self-assessment fruitful, such as clarifying goals, using examples, and engaging in peer feedback. I had to trust that reworking how I used teaching and learning time would pay off in the end, and it did.

By making time in the lesson or unit for the short, frequent exercises that make self-assessment successful, I spent less time tracking down unfinished work and nagging students about revisions. I also ultimately saved time grading. Once my students and I got the hang of self-assessment processes, students could reflect on the learning goals and articulate their grade rationale—and they were usually right on! No more frustration of unread markups and ignored grades.

Use examples of work

Self-assessment is even more fruitful when students can process examples of work that illustrate the learning goals and success criteria. In other words, for meaningful self-assessment, we had to work to make sure there was a solid foundation of understanding about examples and how to use the examples as a guide. After creating that solid understanding of success through examples, my students and I could take next steps with effective self, peer, and teacher feedback, which ultimately led to successful self-assessment.

I learned to start with strong examples so that students were sure to have a sound reference for what the end result should look like. Sometimes I could access these examples from the provided curriculum materials or from previous students; sometimes I made the examples myself, especially if the learning goals or path to the learning goals were specific to my students’ motivations and interests.

If my students were working on a learning goal (such as building argumentation claims and counterclaims) expressed in a multi-step product (such as a multiple paragraph argument), we would process a whole example. We would also examine specific pieces (e.g., paragraphs or even sentences). The students and I would look for the success criteria together using processes that aligned to the learning goal(s). For instance, if we were working on argumentation learning goals, we would use an argument rubric to guide our processing of the example, usually a few parts of the rubric at a time.

This practice set forth the words and procedures for effective teacher, self, and peer feedback grounded in concrete illustrations of the learning goals and success criteria. With the provided examples, we could practice a feedback strategy such as Stars (strengths) and Stairs (next steps): Using the language of the rubric, what is a strength of this argumentation example? Using the language of the rubric, what is a next step for this argumentation example? Students had plenty of practice with the strategy first for processing examples, then for practicing feedback, and finally as a frame for self-assessment: Using the language of the rubric, what is a strength of your argumentation? Using the language of the rubric, what is a next step for your argumentation?

Once a solid understanding of the end result and its pieces is established through sound examples, it can be fun to process silly non-examples with students. For example, one of my favorite silly non-examples to use when practicing reading and writing for information is a YouTube video of a dad following his children’s written instructions for making a peanut butter and jelly sandwich. You can even ask students to help you make those silly non-examples, which is another way for students to both become active partners in learning and internalize what does and does not illustrate the learning goal(s) and success criteria. Making time for processing examples and non-examples equips students with a clear picture of the end result and the frames for self-assessment success.

Include self-assessment prompts during the journey and at the end

For far too long, I would tell students to self-assess and hope that they followed through. Eventually I made it a habit to embed self-assessment prompts, space, and time directly on formative (practice) and summative materials.

When relevant, I would simply embed a one-sentence self-assessment frame (e.g., On a scale of 1–5, my claim sentence is currently a ___ because ___.) Other times it was better to prompt more than a one-sentence self-assessment (e.g., To self-assess your use of textual evidence to support your claim, please follow these steps: 1. In your draft, highlight where you used textual evidence to support your claim. 2. Based on the textual evidence that you used, circle your current level of skill on the provided rubric. 3. Use the information on the provided rubric to list one action you can take to make your textual evidence even stronger.) The students and I would use their self-assessment answers to plan next steps, which sometimes looked like adjusting the lesson plan for the next day for more practice or making mixed groups for the next formative exercise.

For the summative task(s), I got into the habit of making sure to include student self-assessment as the last part of the experience. For instance, if there was a written or spoken product, after the conclusion sentence students would also self-assess using a provided frame. If the summative was a set of prompts or questions, the last prompt or question would be self-assessment.

3. Embrace the process

Figuring out the self-assessment strategies, processes, and tools that work best in partnership with your students is an ongoing expedition that requires time, patience, and a sense of humor. You’ll take steps forward, steps sideways, and steps back. It can get messy, but that’s normal. Authentic, human-centered learning is messy!

I encourage you to try one new thing at a time, celebrate quick wins, think of “failures” as learning opportunities, and lean on your students for their help. Be reassured that applying self-assessment practices is one of the most valuable parts of the learning process. For more on the value of student self-assessment, see the discussion section in “Examining the impact of self-assessment with the use of rubrics on primary school students’ performance.”

Suggested next steps

I encourage you to continue the journey of including students as active agents in the learning process. Growing in or expanding upon the practices listed here can help you continue that journey. In case you find them helpful, here are a few discussion questions that can guide your thinking about student self-assessment. Tackle them on your own or with a colleague.

Questions for teachers

  • What’s one student self-assessment strategy that you already use?
  • What’s one student self-assessment strategy that you would like to try?
  • What support do you need to try a new self-assessment strategy?
  • What will inspire you to keep up the hard work of embedding student self-assessment in the learning journey?


The Function of Self-Assessment and Self-Evaluation in the Quality Cycle

The Self-Assessment Review (SAR) serves as a critical tool for academic institutions, providing an annual reflection on teaching standards against internal and Ofsted benchmarks. This document delves into the SAR process, emphasising its role in streamlining feedback and enhancing course management and accountability. At its core, SAR ensures that individual course assessments flow upward, from strand to department and finally to the broader teaching establishment, prior to being ratified by governing bodies. This hierarchical feedback system empowers course leads to take ownership, driving improvement while ensuring their accountability.

Introduction

A Self-Assessment Review (SAR) is a review of what the teaching establishment, department or strand has carried out over the last year, including an evaluation against the teaching establishment’s own internal criteria and Ofsted requirements. When completing a SAR, it is best to “imagine that you were sitting down explaining how and what you did to a stranger not familiar with your work, including judgements on how well you do it” (Hatton, 2016).

The SAR Process

Quality arrangements for my own organisation’s learning programme.

I have selected the BSc in Applied Computing for evaluation, as this is the learning programme on which I mostly teach. The predicted achievement is 90%, which is significantly higher than that of the only other local provider of this learning programme. The prediction proved accurate: one student suspended study for a year and did not complete the course this year, so the actual achievement was 90%. Outcomes are good overall, with the majority of students going on to work in industry or to further study in the form of a master’s degree, although this could still be improved. Closely linked to achievement is retention, which stands at 100%, as all students either completed the course or suspended until next year.

The learner voice is positive: 100% of students rated “overall, I am satisfied with the quality of the course” at 4 or above out of 5, and 75% of students gave a score of 5 out of 5 for “staff are good at explaining things”. The vast majority of the feedback is very positive; among the less positive responses, 50% of students rated “feedback on my work has been timely” as average. Student attendance on the course is poor, with an average of 72%. This is lower than the teaching establishment’s minimum requirement for attendance, although some shortfall can be expected given that the learners are adults.
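The headline figures quoted above (90% achievement, 100% retention, 72% average attendance) are straightforward to derive from basic course records. The sketch below, in Python, uses hypothetical data that mirrors the pattern described: one of ten enrolled students suspends study and the rest complete.

```python
# Sketch of how the headline SAR figures (achievement, retention, average
# attendance) can be derived from simple course records. The data below are
# hypothetical: one of ten enrolled students suspends study, the rest complete.

outcomes   = ["completed"] * 9 + ["suspended"]   # per-student end-of-year outcome
attendance = [0.81, 0.70, 0.64, 0.73, 0.68, 0.75, 0.77, 0.66, 0.72, 0.74]

completed = outcomes.count("completed")
retained  = sum(o in ("completed", "suspended") for o in outcomes)

achievement_rate = completed / len(outcomes)   # finished the course this year
retention_rate   = retained / len(outcomes)    # finished or suspended, not withdrawn
mean_attendance  = sum(attendance) / len(attendance)

print(f"Achievement: {achievement_rate:.0%}")   # 90%
print(f"Retention:   {retention_rate:.0%}")     # 100%
print(f"Attendance:  {mean_attendance:.0%}")    # 72%
```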

Areas for Improvement in my Learning Programme

The first area that requires addressing is the speed of feedback delivery.

The second area that requires addressing is attendance. This can be improved with clearer communication of attendance expectations during induction, followed up throughout the year with warnings for students who dip below the required 90 percent attendance. If attendance does not then improve, it should be explained to students that disciplinary action will follow; this will have an impact on the majority of students and should raise the overall attendance figure. Another aspect to consider is why attendance is so low in the first place: the BSc requires a large amount of independent research, which many students complete at a local university library, and the teaching establishment currently has no system in place to record when students conduct research outside the class, or even in its own library. A simple email from students, followed by a check of the library record, would likely improve the recorded attendance figures.

Bibliography

Further reading:

Understanding and Using Educational Theories by Karl Aubrey and Alison Riley.

The Use of Data in School Counseling: Hatching Results for Students, Programs, and the Profession by Trish Hatch.

Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education by Barbara E. Walvoord.

This book provides a concise and step-by-step guide to the process of assessment. It delves into the process of determining if students are learning what educators think they should learn and offers a clear way to integrate assessment into program review, accreditation, and setting strategic goals.

How to Create and Use Rubrics for Formative Assessment and Grading by Susan M. Brookhart.

EDUCAUSE Review and the Chronicle of Higher Education.

These are among the most prominent publications in the field of higher education. They cover a broad range of topics including academic reviews, program assessments, and educational innovations.


Wilson College Online Blog

Written by Wilson College • January 18, 2024


Benefits and Examples of Student Self-Assessments

Studies suggest that student self-assessments can help students perform better. A study published in the online journal Assessment & Evaluation in Higher Education found that when students evaluate themselves and receive feedback, their grades improve. Feedback literacy, the ability to understand and use feedback to improve learning, plays a central role in this process.

Proper student self-assessment goes beyond simple self-rating or guesswork. It involves students evaluating their own work against specific criteria. Then, they can reflect on and judge their performance based on objective, selected benchmarks.

The application of self-assessment can vary. Examples of student self-assessment range from using learning logs and reflective journals to rubric-based self-assessments. Teachers need to understand how student self-assessment helps students learn so they can properly guide them and advance their educational development.

What Is Student Self-Assessment?

Student self-assessment is an approach that students can apply to gauge how they’re doing in school. This can be accomplished by checking their own work, performance, and behavior, as well as by understanding what they’ve learned and what they still need to learn. This encourages students to take ownership of their learning and fosters accountability. Examples of student self-assessment activities can include establishing goals, identifying strengths and weaknesses, and setting plans for improvement.

Effective self-assessment means consistently showing students how to think about their own learning through the entire learning process. Insights gained from the self-assessment process can help improve their achievement. Teachers play a critical role in regularly making the process clear to students and removing obstacles for productive self-assessments.

Why Is Student Self-Assessment Important?

Student self-assessment is important for several reasons. It encourages independent learning, enhances learning outcomes, and can potentially reduce teachers’ workloads. It also helps students understand what they are learning and the reasons why they are learning about a particular subject. This understanding is key to educational growth and provides a clear path to follow on their ongoing learning journey.

Student self-assessment also promotes metacognition, which means understanding one’s own thought processes. This helps improve cognitive functions, including problem solving, learning, and decision making.

What Are the Benefits of Student Self-Assessment?

Student self-assessment boosts motivation and engagement by empowering students to take control of their learning. Active involvement in assessing their progress increases their investment in education. 

Numerous studies highlight the positive impact of student self-assessment. For instance, a study in the International Journal of Educational Research revealed marked enhancement in writing skills among students who used rubrics compared with those who did not.

Additional examples of student self-assessment benefits include the following.

  • Students can evaluate knowledge and learning processes.
  • Critical thinking and problem-solving skills can be improved.
  • It fosters a growth mindset in which challenges are viewed as opportunities.

Student self-assessment not only benefits students themselves but also provides valuable feedback to educators. This helps teachers gain insights into students’ understanding, strengths, and weaknesses. It also promotes a supportive and inclusive learning environment, as students feel that their perspectives and voices are valued.

7 Examples of Student Self-Assessments

For teachers, discovering effective self-assessment strategies, processes, and tools in collaboration with students is an ongoing journey that demands time and patience. Here are seven examples of student self-assessments.

1. Learning Log

A learning log serves as a personal journal for students to record their thoughts, questions, and experiences on their educational journey. It encourages deep self-awareness, helping students track their growth, identify strengths and weaknesses, and gain insights into their learning patterns. By updating their learning log regularly, students take ownership of their learning, fostering a deeper understanding of their academic development and promoting accountability.

2. Reflective Journals ¶

Reflective journals prompt students to write about their learning experiences, challenges, and achievements, including personal reflections, emotions, and opinions. Such journals encourage deep reflection and enhance self-awareness. Reflective journals differ from learning logs, however, by being more open-ended and less focused on objective data. Journaling in an online format can also be beneficial: it opens up versatile reflective tools for students in the form of documents, videos, or audio, fostering richer learning experiences and improved outcomes.

3. Goal Setting ¶

Students can set goals that are specific, measurable, achievable, relevant, and time-bound (SMART) within their learning logs or reflective journals. Examples of student self-assessment SMART goals include improving academic performance; for instance, a student might say: “I will increase my grade from a B+ to an A next marking period.” 

Another example of goal setting is working to increase class participation, for example, committing to participate in class at least three times every day. This approach provides students with clear roadmaps for their learning journey and fosters a sense of accountability.

4. Rubric Self-Assessment ¶

Rubrics can aid in the self-assessment process. This helps students identify areas for improvement and take ownership of their learning process, providing a clear framework to evaluate their own work and gain an understanding of high-quality performance. 

Commonly used rubric types in schools include holistic rubrics (an overall performance assessment) and analytic rubrics (a breakdown of performance into separate elements). Another type, the developmental rubric, measures growth along a proficiency scale from novice to expert. Assessment rubrics help students proactively self-assess their work against predefined criteria, as illustrated in the sketch below.
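To make the distinction concrete, here is a minimal sketch of how an analytic rubric could be represented as a simple data structure in Python. The criteria, level descriptors, and function names are invented for illustration and are not drawn from any particular school rubric.

```python
# Hypothetical analytic rubric: each criterion is scored separately,
# unlike a holistic rubric, which yields a single overall judgment.
ANALYTIC_RUBRIC = {
    "organization": {1: "No clear structure", 2: "Some structure", 3: "Clear, logical structure"},
    "evidence":     {1: "Claims unsupported", 2: "Some supporting evidence", 3: "Strong, relevant evidence"},
    "mechanics":    {1: "Frequent errors", 2: "Occasional errors", 3: "Virtually error-free"},
}

def self_assess(levels: dict) -> dict:
    """Given a student's self-assigned level for each criterion,
    return the matching descriptors and a simple total score."""
    return {
        "descriptors": {c: ANALYTIC_RUBRIC[c][lvl] for c, lvl in levels.items()},
        "total": sum(levels.values()),
    }

# Example: a student rates their own essay on each criterion.
print(self_assess({"organization": 3, "evidence": 2, "mechanics": 2}))
```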

5. Questionnaires or Surveys ¶

Students can regularly fill out self-assessment questionnaires or surveys to reflect on their learning experiences, as well as identify strengths, weaknesses, and improvement opportunities. Examples of student self-assessment questions can include: What did you learn today? Did you try your best on your assignment? Answers to these types of questions can help students set action plans for continuous improvement.

6. Self-Reflection Worksheets ¶

Self-reflection worksheets enable students to reflect on the breadth of their learning experiences, identify areas for improvement, and develop action plans to address those areas. Including action plans in these worksheets empowers students to take tangible steps toward enhancing their abilities, turning self-awareness into a catalyst for growth and development. 

7. Exit Tickets ¶

Exit tickets are brief assessments or reflections completed by students at the end of a lesson or class session. They help educators gauge student comprehension, identify areas that need further attention, and tailor future lessons accordingly. Exit tickets also encourage students to reflect on their learning and provide valuable feedback to the teacher.

Become A Transformative Educator ¶

Despite the noted benefits of student self-assessment, putting new strategies into practice can come with challenges. If you are looking to incorporate teaching strategies to better connect with students and help improve learning outcomes, an advanced degree in education can prepare you with essential skills to become a transformative educator.

Wilson College Online offers a Master of Education (MEd) degree, covering subjects such as differentiated instruction, best practices, and technology integration. This program hones teaching skills through both research and classroom practices. It also offers educators the flexibility to continue teaching in their current district, thanks to the program’s asynchronous, self-paced model.

Discover how the Wilson College Online Master of Education degree can empower you to further impact the lives of students.

Recommended Readings


ASCD, “How to Provide Better Feedback Through Rubrics”

Assessment & Evaluation in Higher Education, “Self-assessment is About More Than Self: The Enabling Role of Feedback Literacy”

EducationWeek, “Rubric Do’s & Don’ts”

Edutopia, “Teaching Students to Assess Their Learning”

E-Learning Heroes, “Using Learning Journals in E-Learning #344”

International Journal of Educational Research Open, “Examining the Impact of Self-assessment with the Use of Rubrics on Primary School Students’ Performance”

Kami, “How to Use SMART Goals for Your Students”

NWEA, “2 Types of Student Goal Setting that Empower Early Learners”

NWEA, “Proof That Student Self-Assessment Moves Learning Forward”

ProProfs, “Self-Assessment for Students: The Ultimate Guide”



2017 ASEE Annual Conference & Exposition

“Self-Assessment to Improve Learning and Evaluation,” presented in the session “Tips and Tricks for Assessing Student Performance.”

Self-assessment is a powerful mechanism for enhancing learning. It encourages students to reflect on how their own work meets the goals set for learning concepts and skills. It promotes metacognition about what is being learned, and effective practices for learning. It encourages students to think about how a particular assignment or course fits into the context of their education. It imparts reflective skills that will be useful on the job or in academic research.

Most other kinds of assessment place the student in a passive role: the student simply receives feedback from the instructor or TA. Self-assessment, by contrast, forces students to become autonomous learners and to think about what they should be learning. Having learned self-assessment skills, students can continue to apply them in their careers and in other contexts throughout life.

While self-assessment cannot reliably be used as a standalone grading mechanism, it can be combined with other kinds of assessment to provide richer feedback and promote more student “buy-in” for the grading process. For example, an instructor might have students self-assess their work based on a rubric, and assign a score. The instructor might agree to use these self-assigned grades when they are “close enough” to the grade the instructor would have assigned, but to use instructor-assigned grades when the self-grades are not within tolerance.
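As a concrete illustration of that “close enough” rule, here is a minimal Python sketch; the function name, the 5-point tolerance, and the 0–100 grade scale are assumptions for illustration, not details specified in the paper.

```python
def reconcile_grade(self_grade: float, instructor_grade: float,
                    tolerance: float = 5.0) -> float:
    """Use the self-assigned grade when it is within `tolerance` points of
    the instructor's grade; otherwise fall back to the instructor's grade."""
    if abs(self_grade - instructor_grade) <= tolerance:
        return self_grade
    return instructor_grade

print(reconcile_grade(88, 85))  # 88: within tolerance, the self-grade stands
print(reconcile_grade(95, 80))  # 80: outside tolerance, the instructor's grade is used
```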

Self-assessment can also be combined with peer assessment to reward students whose judgment of their own work agrees with their peers’. In Calibrated Peer Review, students are asked to rate the work of three of their peers, and then to rate their own work on the same scale. Only after they complete all of these ratings are they allowed to see others’ assessments of their own work. CPR assignments are often configured to award points to students whose self-ratings agree with peers’ ratings of their work. The Coursera MOOC platform employs a similar strategy. Recently, a “calibrated self-assessment” strategy has been proposed that uses self-assigned scores as the sole grading mechanism for most work, subject to spot-checks by the instructor. Self-assigned grades are trusted for those students whose spot-checked grades are shown to be valid; students whose self-assigned grades are incorrect are assigned a penalty based on the degree of misgrading of their work.
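The calibrated self-assessment idea can be sketched in the same spirit. In this hypothetical illustration, a random fraction of submissions is spot-checked; self-grades that survive the check are trusted, while misgraded work is replaced and penalized in proportion to the discrepancy. The check fraction, tolerance, and penalty rate are invented parameters, not values taken from CPR, Coursera, or the proposal described above.

```python
import random

def calibrated_grades(self_scores: dict, instructor_score_for, *,
                      check_fraction=0.2, tolerance=5.0,
                      penalty_per_point=0.5, seed=0) -> dict:
    """Trust self-assigned scores except for a spot-checked sample.
    `instructor_score_for(student)` returns the instructor's score for a
    spot-checked submission (a stand-in for actual instructor grading)."""
    rng = random.Random(seed)
    students = list(self_scores)
    n_checked = max(1, round(check_fraction * len(students)))
    checked = set(rng.sample(students, n_checked))
    final = {}
    for student, score in self_scores.items():
        if student in checked:
            true_score = instructor_score_for(student)
            error = abs(score - true_score)
            if error > tolerance:  # self-grade invalid: replace and penalize
                score = max(0.0, true_score - penalty_per_point * error)
        final[student] = score
    return final

# Example with made-up scores; one submission is spot-checked at random.
self_scores = {"ana": 92, "ben": 75, "cai": 88, "dee": 60, "eli": 95}
instructor_scores = {"ana": 90, "ben": 74, "cai": 70, "dee": 62, "eli": 93}
print(calibrated_grades(self_scores, instructor_scores.get))
```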

In self-assessment, as in other kinds of assessment, a good rubric is essential to a good review process. It will include detailed criteria, to draw students’ attention to important aspects of the work. The criteria should mention the goals and keywords of the assignment, so that students will focus on their goals in assessment as well as their writing.

This paper will cover the benefits of self-assessment, and then provide several examples of how it can be combined with other assessments.


Dr. Gehringer is a professor in the Departments of Computer Science, and Electrical & Computer Engineering. His research interests include data mining to improve software-engineering practice, and improving assessment through machine learning and natural language processing.



University students’ strategies and criteria during self-assessment: instructor’s feedback, rubrics, and year level effects

  • Open access
  • Published: 24 October 2022
  • Volume 38, pages 1031–1051 (2023)


  • Ernesto Panadero (1, 2)
  • Daniel García Pérez (3)
  • Javier Fernández Ruiz (4), ORCID: orcid.org/0000-0001-5419-7687
  • Juan Fraile (5)
  • Iván Sánchez-Iglesias (6)
  • Gavin T. L. Brown (7)


This study explores the effects of feedback type, feedback occasion, and year level on student self-assessments in higher education. In total, 126 university students participated in this randomized experiment under three experimental conditions (i.e., rubric feedback, instructor’s written feedback, and rubric feedback plus instructor’s written feedback). Participants, after random assignment to feedback condition, were video-recorded performing a self-assessment on a writing task both before and after receiving feedback. The quality of self-assessment strategies decreased after feedback of all kinds, but the number of strategies increased for the combined feedback condition. The number of self-assessment criteria increased for rubric and combined conditions, while feedback helped shift criteria use from basic to advanced criteria. Student year level was not systematically related to changes in self-assessment after feedback. In general, the combination of rubric and instructor’s feedback produced the best effects.


Self-assessment of learning is linked to greater self-regulation (Andrade, 2018; Yan, 2019) and achievement (Brown & Harris, 2013). Furthermore, the ability to evaluate one’s own work and processes is an important objective of higher education (Tai et al., 2017). However, our understanding of how students integrate feedback within their self-assessment processes is limited (Panadero et al., 2016), though we have considerable knowledge of how feedback concerning task, process, and self-regulatory processes improves educational outcomes (Butler & Winne, 1995; Hattie & Timperley, 2007). In one of the few studies exploring self-assessment and external feedback, Yan and Brown (2017) showed in an interview study with teacher education students that students claim to seek external feedback to form a self-assessment. Hence, it is important to understand how to support the development of realistic and sophisticated self-assessment. A successful formative assessment practice has been the introduction of rubrics or scoring guides into the classroom (Brookhart & Chen, 2015). Hence, it was expected that students would describe more complex self-assessment processes when provided feedback based on a rubric.

In a randomized experiment with university students, this study systematically extends our understanding of the role feedback plays on self-assessment by manipulating the type of feedback, its timing, and the expertise level of tertiary students. The study extends our understanding of the self-assessment “black box” by examining the strategies and criteria students used. Hence, this study provides new insights into how we can support robust self-assessment.

  • Self-assessment

Self-assessment “involves a wide variety of mechanisms and techniques through which students describe (i.e., assess) and possibly assign merit or worth to (i.e., evaluate) the qualities of their own learning processes and products” (Panadero et al., 2016, p. 804). This definition indicates that self-assessment can take different shapes, from self-grading (e.g., Falchikov & Boud, 1989) to formative approaches (e.g., Andrade, 2018). However, what exactly happens when students self-assess is still largely mysterious.

Yan and Brown (2017) interviewed 17 undergraduate students from a teacher education institute using six general learning scenarios (e.g., How good are you at learning a new physical skill?) and five questions specific to self-assessment (e.g., What criteria did you use to conduct self-assessment?). From those data, the authors built a schematic cyclical self-assessment process consisting of three subprocesses: (1) determining performance criteria, (2) self-directed feedback seeking, and (3) self-reflection. Despite being an early effort to unpack the black box, the results are limited by a small sample and a highly descriptive and interpretive analysis of interview data.

More recently, Panadero et al. ( 2020 ) analyzed the behavior of 64 secondary education students when self-assessing Spanish and mathematics tasks. Multi-method data sources (i.e., think aloud protocols, direct observation and self-report via questionnaires) described self-assessment actions as either strategies or criteria. The study showed that (1) the use of self-assessment strategies and criteria was more frequent and advanced without feedback and among girls, (2) there were different self-assessment patterns by school subject, (3) patterns of strategy and criteria use differed by school year, and (4) none of the self-assessment strategies or criteria had a statistically significant effect on self-efficacy.

Factors influencing self-assessment

Feedback in general has been shown to improve academic performance, especially when focused on specific tasks, processes, and self-regulation (Hattie & Timperley, 2007; Wisniewski et al., 2020). Butler and Winne’s (1995) feedback review showed that self-regulated learners adjust their internal feedback mechanisms in response to external feedback (e.g., scores, comments from teachers). Scholars have claimed that students need instructor feedback about their self-assessments as well as about content knowledge (Andrade, 2018; Brown & Harris, 2014; Panadero et al., 2016; Boud, 1995). Previous studies have shown little effect of external feedback on student self-assessment (Panadero et al., 2012, 2020; Raaijmakers et al., 2019). Thus, it is important to understand how external feedback, whether from an instructor or delivered via instruments (e.g., rubrics), can influence students’ self-assessment.

Among feedback factors that influence student outcomes (Lipnevich et al., 2016 ), the timing of feedback is important. In general, delayed feedback is more likely to contribute to learning transfer, whereas prompt feedback is useful for difficult tasks (Shute, 2008 ). However, linking feedback to self-assessment is relatively rare. Panadero et al. ( 2020 ) found that secondary education students self-assessed using fewer strategies and criteria after receiving feedback. This has crucial implications for instructors as to when they should deliver their feedback, if they want students to develop calibrated self-assessments.

One potentially powerful mechanism for providing feedback is a marking, scoring, or curricular rubric, which has been shown to have stronger effects on performance than other assessment tools, such as exemplars (Lipnevich et al., 2014). The use of rubrics in education and research has grown steadily in recent years (Dawson, 2017) due to their instructional value, with positive effects for students, teachers, and even programs (Halonen et al., 2003). Rubric use has been associated with positive effects on self-assessment interventions and academic performance (Brookhart & Chen, 2015). Previous research has demonstrated that a rubric alone produced better results than combining rubrics with exemplars (Lipnevich et al., 2014). Although there is previous research exploring the effects of rubrics when compared or combined with feedback (Panadero et al., 2012, 2020; Wollenschläger et al., 2016), we still need insight into the impact of rubrics, with or without feedback, on student self-assessment.

It is established in the self-assessment literature that more sophisticated and accurate self-assessments are conducted by older and more academically advanced students (Brown & Harris, 2013; Barnett & Hixon, 1997; Boud & Falchikov, 1989; Kostons et al., 2009, 2010). As Boud and Falchikov (1989) demonstrated, it was subject-specific competence that reduced the discrepancy between self-assessments and teacher evaluations. However, recent research shows that the relationship might not be so straightforward (Panadero et al., 2020; Yan, 2018). Additionally, it is unclear at what level of higher education students need to be to have sufficient expertise to self-assess appropriately. Thus, an investigation with students in consecutive years of study in the same domain might clarify the role of year level on self-assessment capacity.

Research aim and questions

The current study adds to this body of research by examining the number and type of self-assessment strategies and criteria among higher education students in a randomized experiment that manipulated three feedback conditions (rubric vs. instructor’s vs. combined); there was no control group because the university Ethics Committee did not grant permission for one. Importantly, we also examined feedback occasion (before vs. after) and year level (1st-, 2nd-, and 3rd-year undergraduates). This is a single-group, multi-method study (i.e., think aloud, observation, and self-report; only the first two are analyzed here).

We explored three research questions (RQ):

What are the self-assessment strategies and criteria that higher education students implement before and after feedback?

Hypothesis 1 (H1): Self-assessment strategies and criteria will decrease when feedback is provided, in line with Panadero et al. ( 2020 ).

What are the effects of feedback type and feedback occasion on self-assessment behaviors (i.e., number and type of strategy and criteria)?

H2: Rubric feedback will provide better self-assessment practices than other feedback types, in line with Lipnevich et al. ( 2014 ).

What is the effect of student year level on the results?

H3: Students in higher years within a discipline will use more sophisticated strategies and criteria in their self-assessments. Previous results point in different directions: no differences among primary education students but less self-assessment among more advanced secondary education students (Yan, 2018), and more similarities than expected, albeit with some differences, among secondary education students (Panadero et al., 2020). Nevertheless, as our participants are higher education students, it is expected that they will behave differently, with more advanced students showing higher self-assessment skills.

A convenience sampling method at one university site where the first author worked created a sample of 126 undergraduate psychology students (88.1% females) across first, second, and third years of study (34.9%, 31.7%, and 33.3%, respectively). Participants were randomly assigned to one of three feedback conditions: rubric only ( n  = 43), instructor’s written feedback ( n  = 43), and rubric and instructor’s written feedback combined ( n  = 40). Participants received credit in accordance with the faculty volunteering programme. In a 3 × 3 ANOVA, given a risk level of α  = 0.005, and a statistical power of 1 −  β  = 0.800, the current sample size would detect a medium effect size, f  = 0.280 (G*Power 3.1.9.2; Faul et al., 2007 ).
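For readers who want to reproduce this kind of sensitivity analysis, the sketch below shows one way to approximate it in Python with statsmodels, treating the 3 × 3 design as nine cells in a one-way layout. This is only an approximation of the G*Power calculation reported above (the result depends on how the design’s numerator degrees of freedom are specified), so the computed power need not match the reported values exactly.

```python
# Approximate power for detecting an effect of size f = 0.28 with N = 126
# at alpha = .005, treating the 3 x 3 design as 9 groups (an assumption).
from statsmodels.stats.power import FTestAnovaPower

power = FTestAnovaPower().solve_power(effect_size=0.28, nobs=126,
                                      alpha=0.005, k_groups=9)
print(f"Approximate achieved power: {power:.2f}")
```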

Data collection and instruments

Data from the video-recorded think aloud protocols was inductively coded using the categories defined in a previous study (Panadero et al., 2020 ). In addition, two structured feedback intervention tools were used (i.e., rubric and instructor’s feedback).

Coded video-recorded data

Think-aloud protocols.

Participants were asked to think aloud while conducting two self-assessments of their written essay. The first was an unguided self-assessment in which students were asked to evaluate the quality of their essay and the reasons for their evaluation. Participants were asked to express their thoughts and feelings and reminded that if they were silent, they would be prompted to think out loud. After the feedback was provided, students were asked to talk about their thoughts and feelings concerning the feedback and to repeat the think aloud process of self-assessing their essay. If the participant remained silent for more than 30 s, they were verbally reminded to think out loud. There were no time restrictions to perform the self-assessment.

A closed coding process was followed, as the codes were already defined as part of a previous study (see Panadero et al., 2020 ) with secondary education students. In such study, a deductive approach was employed to create the two general coding categories of self-assessment elements: strategies and criteria. Additionally, we created codes for those general categories. The categories were contrasted with the data using an inductive approach, to ensure that they were applicable to the new sample and procedure.

The video-recorded think-aloud content was coded to identify the strategies and criteria each student used. As in our previous study, we further organized each set of 13 categories into four levels for clarity in interpretation (0–3). Such levels classify the categories depending on their type and complexity. Details of the levels, categories, definitions, and exemplar comments are provided in Table 1 .

Intervention prompts

Rubric (Appendix 1)

The rubric was created for this study using experts’ models of writing composition. It contains three types of criteria: (1) writing process, (2) structure and coherence, and (3) sentences, vocabulary, and punctuation. There are three levels of quality: low, average, and high. The rubric is analytic, as the three criteria are scored independently. The rubric was provided to some of the students during the experimental procedure, depending on the experimental condition, but it was not explicitly used by the instructor to provide feedback on the essays.

Instructor’s feedback (Appendix 2)

The instructor provided feedback on each essay using the same categories as the rubric. For the “writing process” criterion, which was not directly observable by the instructor, he provided feedback by suggesting whether some of those strategies had been put into place (e.g., planning). Additionally, the feedback included a grade ranging from 0 to 10 points. All essays were evaluated by the second author. The first author evaluated a third of the essays, reaching total agreement on the rubric categories.

This randomized experiment is part of a larger study; this report focuses on the specific self-assessment strategies and criteria students elicited (see Fig. 1), as measured via think-aloud protocols and observations. After attending a 3-hour group seminar on academic writing, participants wrote a short essay answering the question: “Why is the psychologist profession necessary?”. This topic was directly related to the participants’ psychology programme. There was no length limitation for the essays, which were written on the participants’ computers and then submitted to the research team. The essay had no implications outside of the research experiment, but we emphasized its usefulness from the students’ academic perspective within the programme. Some days later (approx. 1 week), participants came individually to the laboratory, where they took part in the experiment face-to-face with one of the authors.

Figure 1. Experimental procedure

First, participants received the instructions for self-assessing their essay, which was handed to them in its original form, that is, with no feedback. Students were instructed to think aloud about their thoughts, emotions, and motivational reactions while self-assessing. Then, they performed the first think-aloud self-assessment of the essay they had written. Right after, participants were given feedback on their essay according to the condition they had been assigned to (rubric vs. instructor vs. combined) and asked to self-assess again. The rubric group was handed the rubric with the instruction to use it for their self-assessment. In the instructor’s feedback group, participants were told that they should use the instructor’s feedback for their self-assessment. Finally, the combined group received both instructions. After reading the feedback, each participant repeated the think-aloud self-assessment.

Data analysis

The coding of the think-aloud utterances for strategies and criteria was evaluated in three rounds of inter-judge agreement. In round one, agreement between two judges on 15 videos reached an average Krippendorff’s α = 0.78, with three categories below 0.70. After discussion and consensus building around the low-agreement categories, a second set of 15 videos was coded with an average Krippendorff’s α = 0.83. A third round, using 15 new videos, produced Krippendorff’s α = 0.87. This indicates the final coding values are dependable. The direct observation was performed in situ during data collection, and more intensively during the coding of the video data. The observation data were used to inform and confirm the think-aloud categories by characterizing the participants’ behavior, serving as supplementary data to further establish the categories.
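For illustration, inter-coder agreement of this kind can be computed in Python with the third-party krippendorff package; the two rows of codes below are invented placeholder data, not the study’s coding.

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Rows = judges, columns = coded units (e.g., utterances); np.nan = unit not coded.
reliability_data = np.array([
    [1, 2, 2, 0, 3, np.nan, 1],  # judge 1's category codes
    [1, 2, 3, 0, 3, 2,      1],  # judge 2's category codes
])
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.2f}")
```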

The categorical variables were described using multiple dichotomous frequency tables, as each participant could display more than one behavior. To study the effect of the factors (feedback occasion, condition, and year level) on self-assessment strategy and criteria frequencies, we conducted ANOVAs and chi-square tests to compare differences among the levels.
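A minimal sketch of these two kinds of tests, using SciPy with invented placeholder values rather than the study’s data, is shown below.

```python
import numpy as np
from scipy.stats import f_oneway, chi2_contingency

# One-way ANOVA: number of strategies per participant in three feedback conditions.
rubric     = np.array([2, 3, 2, 4, 3, 2])
instructor = np.array([3, 2, 3, 3, 2, 4])
combined   = np.array([2, 2, 3, 2, 2, 3])
f_stat, p_anova = f_oneway(rubric, instructor, combined)

# Chi-square test of independence: counts of strategy levels (rows) by condition (columns).
level_by_condition = np.array([[12, 15, 10],
                               [ 8,  9, 11],
                               [ 5,  4,  6]])
chi2, p_chi2, dof, expected = chi2_contingency(level_by_condition)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Chi-square: chi2 = {chi2:.2f}, df = {dof}, p = {p_chi2:.3f}")
```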

RQ1: What are the self-assessment strategies and criteria that higher education students implement before and after feedback?

Type of strategies.

Table 2 shows the multiple self-assessment strategies enacted by the participants. The most used before feedback were Read the essay, Think of different responses, and Read the instructions. After the feedback, the most used were Read the feedback or rubric received and Compare essay to feedback or rubric. These strategies are low level according to our coding, except for Think of different responses, which shows a deeper level of self-assessment elaboration. Three main results can be extracted. First, the strategies used before and after feedback are similar in nature, with five categories occurring at both moments. Second, however, once the students received the feedback, there was a general decrease in the frequency of strategies, with three out of the five strategies showing significant decreases. This is logical, as most of the strategies were basic and participants did not need to enact them again (e.g., read the essay, which they had done just minutes before). Also, two new strategies appeared that were not present before the feedback, as they are specific to the reception of feedback (i.e., Read the feedback or rubric received and Compare essay to feedback or rubric). Third, after the feedback, there was also a new category that the participants had not activated before: Compares question and response.

Type of criteria

As the students could choose more than one criterion, we described multiple dichotomous variables. In general, the most used criteria before the feedback were Sentences and punctuation marks, Negative intuition, Positive intuition, and Paragraph structure (Table 3). The most used after the feedback were Feedback received, Sentences and punctuation marks, Paragraph structure, and Writing process. When it comes to the trajectories, most of the criteria frequencies decreased significantly after receiving the feedback. However, three criteria increased after feedback (significantly for Writing process and Paragraph structure, non-significantly for Sentences and punctuation marks), all of them advanced criteria, and all increasing in the rubric and combined conditions but decreasing in the instructor’s condition. Additionally, a new criterion, Feedback received, was used, which, for obvious reasons, only occurred after feedback.

RQ2: What are the effects of feedback type and feedback occasion on number and type of strategy and criteria in self-assessment behaviors?

At time 1, before receiving feedback, the number of strategies by condition (Table 4) differed statistically and substantially (F(2, 121) = 4.22, p = 0.017, η² = 0.65), with a significant post hoc difference between the instructor condition (M = 2.78, SD = 0.183) and the combined condition (M = 2.06, SD = 0.185); the rubric condition did not differ from either (M = 2.37, SD = 0.179). When it comes to the number of criteria used, the conditions were equivalent (F(2, 121) = 0.48, p = 0.62, η² = 0.008, 1 − β = 0.127), with no differences among the three groups: instructor (M = 3.32, SD = 0.224), rubric (M = 3.51, SD = 0.219), or combined (M = 3.63, SD = 0.227). We also analyzed whether there were differences within the levels of strategies (χ²(6) = 8.38, p = 0.21) and levels of criteria (χ²(6) = 6.32, p = 0.39), but both were equivalently distributed across conditions.

At time 2, after feedback, the number of strategies by condition (Table 4) did not differ (F(2, 121) = 0.42, p = 0.66, η² = 0.007, 1 − β = 0.118): instructor (M = 2.56, SD = 0.976), rubric (M = 2.44, SD = 0.765), or combined (M = 2.40, SD = 0.671), showing that the rubric had no meaningful impact on the number of strategies. However, the number of criteria differed substantially (F(2, 121) = 25.30, p < 0.001, η² = 0.295), with significant post hoc differences for the rubric (M = 4.48, SD = 0.165) and combined conditions (M = 4.50, SD = 0.171), which outperformed the instructor condition (M = 3.02, SD = 0.169), both at p < 0.001. Similar to the number of strategies, the level of strategies was equivalently distributed across conditions (χ²(6) = 2.29, p = 0.89). However, and as expected, the level of criteria differed significantly (χ²(4) = 12.00, p = 0.02), which is likely a function of the large differences in the total number of criteria across conditions at Time 2 (i.e., 193, 134, and 180, respectively). When viewed as differences based on the percentage of responses at each level, this is not statistically significant (χ²(4) = 7.74, p = 0.10).

When we explored the condition by feedback occasion interaction, we found no significant effect on self-assessment strategies (F(2, 121) = 1.74, p = 0.180, η² = 0.028). However, we found a significant effect of condition on self-assessment criteria (F(2, 115) = 7.97, p = 0.001, η² = 0.116). The pre-post increase in the number of criteria deployed was greater (post hoc p = 0.002) in the rubric condition (M = 0.938, SE = 0.247) than in the instructor’s feedback condition (M = −0.291, SE = 0.253). The combined condition (M = 0.881, SE = 0.256) also yielded a greater increase (post hoc p = 0.004) compared to the instructor’s feedback.

RQ3: What is the effect of student year level on the results?

We calculated the differences in strategies and criteria by year level between pre- and post-feedback occasions in two-way ANOVAs with condition and year level as factors. For the use of strategies, neither the main effects (year level, F(2, 115) = 1.04, p = 0.359, η² = 0.018, 1 − β = 0.227; feedback type, F(2, 115) = 1.72, p = 0.183, η² = 0.029, 1 − β = 0.355) nor the interaction (F(2, 115) = 0.973, p = 0.425, η² = 0.033, 1 − β = 0.300) was significant, largely due to lack of power. Likewise, for the use of criteria, the same result was seen (year level, F(2, 115) = 1.68, p = 0.192, η² = 0.028, 1 − β = 0.347; feedback type, F(2, 115) = 7.57, p < 0.001, η² = 0.116, 1 − β = 0.940; interaction, F(2, 115) = 0.25, p = 0.911, η² = 0.009, 1 − β = 0.102). Therefore, our hypothesis that older students would show more advanced self-assessment actions is not supported.
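The two-way ANOVAs described here could be run in Python roughly as follows; the tiny DataFrame and its column names are illustrative stand-ins for the study’s variables, not the actual data.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Pre-post change in number of criteria per participant (invented values).
df = pd.DataFrame({
    "diff_criteria": [1, 0, 2, -1, 1, 2, 0, 1, -1, 2, 1, 0],
    "condition":  ["rubric", "instructor", "combined"] * 4,
    "year_level": ["1st"] * 4 + ["2nd"] * 4 + ["3rd"] * 4,
})

# Two-way ANOVA with condition, year level, and their interaction.
model = smf.ols("diff_criteria ~ C(condition) * C(year_level)", data=df).fit()
print(anova_lm(model, typ=2))
```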

This study explored the effects of three factors (i.e., feedback type, feedback occasion, and year level) on self-assessment strategies and criteria. This study contributes to our understanding of what happens in the “black box” of self-assessment by disentangling the frequency and type of self-assessment actions in response to different types of feedback.

Effects on self-assessment: strategy and criteria

In RQ1, we categorized self-assessment actions in a writing task in terms of strategies and criteria. Strategies were categorized by their depth or sophistication, ranging from very basic activities (e.g., Read the essay) to advanced ones (e.g., Think of different responses). Understandably, the most common strategies were relatively low level, as they are foundational to understanding the task. However, once feedback was received, most of the strategies focused on the content of the feedback received (e.g., Compare essay to feedback or rubric), making the feedback the anchor point of comparison (Nicol, 2021). In consequence, the strategies used prior to feedback were greatly reduced in number, indicating that, with feedback, self-assessment strategies were led by that information. Self-assessment criteria demonstrated similar effects. Prior to feedback, students used a wide range of criteria, from very basic (e.g., Negative intuition) to advanced (e.g., Writing process). Upon receipt of feedback, most of the criteria responded to the feedback in a less sophisticated manner, especially in the presence of rubrics.

In terms of the three different feedback conditions (RQ2), the two conditions containing rubrics outperformed the instructor’s feedback group in terms of criteria and closed the initial gap in strategies. Although the instructor’s feedback condition showed a higher number of self-assessment strategies before the intervention than the combined group, that difference vanished after feedback. Both the rubric and combined conditions had a higher number and more advanced types of criteria after feedback than the instructor’s feedback condition, by large margins. No statistically significant differences in self-assessment strategies and criteria were found across the year levels (RQ3), regardless of feedback presence or type.

Regarding the alignment of our results with previous research, first, the feedback occasion effects on self-assessment strategies are very similar to those of a study with secondary education students (Panadero et al., 2020), as these strategies decreased significantly after feedback except for the ones related to the use of the feedback. In contrast, while the secondary education students decreased the number and type of criteria used, here university students increased the number of criteria and used more advanced criteria when using rubrics, an instrument that was not implemented in Panadero et al. (2020). Wollenschläger et al. (2016) compared three conditions (rubric; rubric and individual performance feedback; rubric and individual performance-improvement feedback), finding that the last was more powerful in increasing performance than the first two. An important difference is that our study examined the impact of rubric and feedback on self-assessment, while the Wollenschläger et al. (2016) study examined the effects on academic performance. Hence, the impact of feedback appears to be contingent upon the kind of assessment being implemented.

Also, the secondary education students in the Panadero et al. (2020) study showed differences across year levels, which was not found here with university students. This lack of year level effects aligns with the primary education students in Yan (2018), where no differences were found, but it does not align with the same study’s comparison of secondary education students, where significant differences were found (i.e., older students self-reporting lower levels of self-assessment). Unlike studies that have reported clearly delineated phases of self-assessment (Yan & Brown, 2017), the think-aloud protocols in this study did not identify clear-cut phases, finding instead a naturally evolving process. While Panadero et al. (2012) reported that scripts were better than rubrics, this study found that the presence of rubrics led to more sophisticated criteria use; future research would need to determine whether script-based feedback would have any greater impact.

Three main conclusions can be reached from this study. First, there are different effects due to the form of feedback, with rubric-assisted feedback being especially promising for self-assessment. The effect of rubrics corrected the initial difference between the instructor’s feedback and the combined group so that, after receiving the feedback and/or rubric, all conditions were equal in terms of the number of self-assessment strategies. Also, and more interestingly, the rubric conditions showed bigger effects on the use of criteria even in a situation in which the participants had already self-assessed freely before. This might indicate that rubrics as a tool are indeed very useful in stimulating student reflection on their work (Brookhart, 2018), more so than instructor’s feedback, which may have been perceived as external criticism rather than as supportive of improvement. This effect could be caused by instructor’s feedback putting students in a passive position (e.g., they are being evaluated, they are recipients of feedback), while rubrics provided them with guides to explore and reflect by themselves. This also might speak to the importance of tools, such as rubrics, for supporting active self-assessment, rather than of the importance of providing corrective or evaluative feedback. This result might seem logical, as rubrics contain clear criteria and performance levels to which performance can be anchored. This may be especially pertinent to higher education students, who are used to being assessed and graded against standards (e.g., Brookhart & Chen, 2015). Therefore, one viable conclusion is that the best type of feedback among those explored here is using rubrics, followed by a combination of rubric and instructor’s feedback.

Second, the introduction of feedback does impact self-assessment practices. Feedback decreased the number of strategies and increased the level of criteria used. A feature of this study is that students had to self-assess before they received feedback and then again upon receiving it. This process shows the impact of feedback in that it changes the strategies and criteria that students used. Therefore, for educational benefit, feedback may best be presented after students are required to implement their own self-assessment based on their own strategies and criteria. It may be that performance feedback prior to self-assessment will discourage students from the constructive strategies and criteria they exhibited in the pre-feedback stage.

And third, although self-assessment strategies did not become more advanced over years of study among our participants (i.e., our year level variable), this is not likely to be because of a ceiling effect in the task itself; it is possible for students to exhibit more sophisticated strategies and criteria in such a task. It may be that, once entry to higher education is achieved, self-assessment is relatively homogeneous for this type of task. Perhaps much more demanding tasks (e.g., a research thesis) would require more sophisticated self-assessment behaviors.

Limitations and future research

First, our participants conducted a first self-assessment without any structure or teaching on how to effectively evaluate one’s own work. Future research could introduce an intervention on self-assessment prior to the introduction of feedback to better eliminate confounds between self-assessment and feedback. Second, feedback focused on the essay writing task, not on the self-assessment process; such feedback may have had an effect on the quality of subsequent self-assessments (e.g., Andrade, 2018; Panadero et al., 2016). Third, the absence of a control group with no feedback is a limitation, although our conditions may be more realistic controls than no feedback, as it is unusual to find activities without some kind of feedback in real educational settings. Additionally, internal feedback seems to be ubiquitous and automatic in any event (Butler & Winne, 1995), so even in the absence of experimenter-controlled feedback, there will be feedback. Fourth, the rubric contained an assessment criterion (i.e., writing process) that only the students could assess, as the instructor did not have access to the process. Fifth, it could be an interesting line of work to explore peer feedback and how it affects self-assessment strategies and criteria. While there has been some research in that direction (To & Panadero, 2019), it would be interesting to explore these effects using our methodology to fulfill the aim of “opening the black box of self-assessment.” Sixth, it is likely that greater insights into self-assessment could be achieved by combining this self-reported approach to self-assessment with technology, such as eye-tracking (Jarodzka et al., 2017) or physiological reaction equipment (Azevedo et al., 2018). These additional tools may allow for a more precise understanding of the underlying cognitive, emotional, and motivational processes in self-assessment and in response to feedback. And seventh, future research should also seek to determine whether there are gender or content-specific effects on self-assessment and feedback (Panadero et al., 2016).

Conclusions

In general, this study shows that rubrics have the greatest potential to improve the quality of student self-assessment behaviors. The study also indicates that feedback has a mixed effect on self-assessment strategy and criteria use. This may explain in part why reliance on feedback from peers or markers has been shown to have a negative impact on overall academic performance (Brown et al., 2016). Students who rely more on their own evaluative and self-regulatory learning strategies are more likely to discount external feedback. The provision of rubrics is likely to enable more effective and thoughtful self-assessed judgements about learning priorities. All in all, this study helps to better understand the specific strategies and criteria higher education students enact while self-assessing, something that is key to really understanding how self-assessment works.

Andrade, H. (2018). Feedback in the context of self-assessment. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 376–408). Cambridge University Press.


Azevedo, R., Taub, M., & Mudrick, N. V. (2018). Understanding and reasoning about real-time cognitive, affective, and metacognitive processes to foster self-regulation with advanced learning technologies. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (pp. 254–270). Routledge.


Barnett, J. E., & Hixon, J. E. (1997). Effects of grade level and subject on student test score predictions. The Journal of Educational Research, 90 (3), 170–174. https://doi.org/10.1080/00220671.1997.10543773


Boud, D. (1995). Assessment and learning: Contradictory or complementary. In P. Knight (Ed.), Assessment for learning in higher education (pp. 35–48). Kogan.

Boud, D., & Falchikov, N. (1989). Quantitative studies of student self-assessment in higher education: A critical analysis of findings. Higher Education, 18 (5), 529–549. https://doi.org/10.1007/BF00138746

Brookhart, S. M. (2018). Appropriate criteria: Key to effective rubrics. Frontiers in Education, 3 (22), 1–12. https://doi.org/10.3389/feduc.2018.00022

Brookhart, S. M., & Chen, F. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67 (3), 343–368. https://doi.org/10.1080/00131911.2014.929565

Brown, G. T. L., & Harris, L. R. (2013). Student self-assessment. In J. H. McMillan (Ed.), The SAGE handbook of research on classroom assessment (pp. 367–393). Sage.

Brown, G. T. L., & Harris, L. R. (2014). The future of self-assessment in classroom practice: Reframing self-assessment as a core competency. Frontline Learning Research, 3 , 22–30. https://doi.org/10.14786/flr.v2i1.24

Brown, G. T. L., Peterson, E. R., & Yao, E. S. (2016). Student conceptions of feedback: Impact on self-regulation, self-efficacy, and academic achievement. British Journal of Educational Psychology, 86 (4), 606–629.

Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65 (3), 245–281. https://doi.org/10.3102/00346543065003245

Dawson, P. (2017). Assessment rubrics: Towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education , 1–14.  https://doi.org/10.1080/02602938.2015.1111294

Falchikov, N., & Boud, D. (1989). Student self-assessment in higher education: A meta-analysis. Review of Educational Research, 59 (4), 395–430. https://doi.org/10.3102/00346543059004395

Faul, F., Erdfelder, E., Lang, A.-G., & y Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39 (2), 175–191. https://doi.org/10.3758/BF03193146

Halonen, J. S., Bosack, T., Clay, S., McCarthy, M., Dunn, D. S., Hill Iv, G. W., & Whitlock, K. (2003). A rubric for learning, teaching, and assessing scientific inquiry in psychology. Teaching of Psychology, 30 (3), 196–208. https://doi.org/10.1207/s15328023top3003_01

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81–112. https://doi.org/10.3102/003465430298487

Jarodzka, H., Holmqvist, K., & Gruber, H. (2017). Eye tracking in educational science: Theoretical frameworks and research agendas. Journal of Eye Movement Research, 10 (1), 1–18. https://doi.org/10.16910/jemr.10.1.3

Kostons, D., Van Gog, T., & Paas, F. (2009). How do I do? Investigating effects of expertise and performance-process records on self-assessment. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition, 23 (9), 1256–1265. https://doi.org/10.1002/acp.1528

Kostons, D., van Gog, T., & Paas, F. (2010). Self-assessment and task selection in learner-controlled instruction: Differences between effective and ineffective learners. Computers & Education, 54 (4), 932–940. https://doi.org/10.1016/j.compedu.2009.09.025

Lipnevich, A. A., McCallen, L. N., Miles, K. P., & Smith, J. K. (2014). Mind the gap! Students’ use of exemplars and detailed rubrics as formative assessment. Instructional Science, 42 (4), 539–559. https://doi.org/10.1007/s11251-013-9299-9

Lipnevich, A. A., Berg D. A., & Smith J. (2016). Toward a model of student response to feedback. In G. T. L. Brown & L. Harris (Eds.), Handbook of human and social conditions in assessment  (pp. 169–185). Routledge.

Nicol, D. (2021). The power of internal feedback: Exploiting natural comparison processes. Assessment & Evaluation in Higher Education, 46 (5), 756–778. https://doi.org/10.1080/02602938.2020.1823314

Panadero, E., Tapia, J. A., & Huertas, J. A. (2012). Rubrics and self-assessment scripts effects on self-regulation, learning and self-efficacy in secondary education. Learning and Individual Differences, 22 (6), 806–813. https://doi.org/10.1016/j.lindif.2012.04.007

Panadero, E., Brown, G. T., & Strijbos, J. W. (2016). The future of student self-assessment: A review of known unknowns and potential directions. Educational Psychology Review, 28 (4), 803–830. https://doi.org/10.1007/s10648-015-9350-2

Panadero, E., Fernández-Ruiz, J., & Sánchez-Iglesias, I. (2020). Secondary education students’ self-assessment: the effects of feedback, subject matter, year level, and gender. Assessment in Education: Principles, Policy & Practice , 1–28. https://doi.org/10.1080/0969594X.2020.1835823

Raaijmakers, S. F., Baars, M., Paas, F., van Merriënboer, J. J., & van Gog, T. (2019). Effects of self-assessment feedback on self-assessment and task-selection accuracy.  Metacognition and Learning , 1–22.  https://doi.org/10.1007/s11409-019-09189-5

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78 (1), 153–189. https://doi.org/10.3102/0034654307313795

Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2017). Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education . https://doi.org/10.1007/s10734-017-0220-3

To, J., & Panadero, E. (2019). Peer assessment effects on the self-assessment process of firstyear undergraduates. Assessment & Evaluation in Higher Education, 44 (6), 920–932. https://doi.org/10.1080/02602938.2018.1548559

Wisniewski, B., Zierer, K., & Hattie, J. (2020). The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in Psychology, 10 (3087). https://doi.org/10.3389/fpsyg.2019.03087

Wollenschläger, M., Hattie, J., Machts, N., Möller, J., & Harms, U. (2016). What makes rubrics effective in teacher-feedback? Transparency of learning goals is not enough. Contemporary Educational Psychology . https://doi.org/10.1016/j.cedpsych.2015.11.003

Yan, Z. (2018). Student self-assessment practices: The role of gender, school level and goal orientation. Assessment in Education: Principles, Policy & Practice, 25 (2), 183–199. https://doi.org/10.1080/0969594X.2016.1218324

Yan, Z. (2019). Self-assessment in the process of self-regulated learning and its relationship with academic achievement. Assessment & Evaluation In Higher Education , 1–15.  https://doi.org/10.1080/02602938.2019.1629390

Yan, Z., & Brown, G. T. (2017). A cyclical self-assessment process: towards a model of how students engage in self-assessment. Assessment & Evaluation in Higher Education, 42 (8), 1247–1262. https://doi.org/10.1080/02602938.2016.1260091


Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Research funded by Fundación BBVA call Investigadores y Creadores Culturales 2015 (project name Transición a la educación superior id. 122500) and by Spanish Ministry of Economy and Competitiveness (Ministerio de Economía y Competitividad) National I + D Call (Convocatoria Excelencia) project reference EDU2016-79714-P.

Author information

Authors and affiliations.

Facultad de Psicología Y Educación, Universidad de Deusto, Bilbao, Spain

Ernesto Panadero

IKERBASQUE, Basque Foundation for Science, Bilbao, Spain

Departamento de Investigación y Psicología en Educación, Universidad Complutense de Madrid, Madrid, Spain

Daniel García Pérez

Departamento de Psicología Evolutiva y de la Educación, Universidad Autónoma de Madrid, 28049, Madrid, Spain

Javier Fernández Ruiz

Universidad Francisco de Vitoria, Madrid, Spain

Juan Fraile

Departamento de Psicobiología y Metodología de las Ciencias del Comportamiento, Universidad Complutense de Madrid, Madrid, Spain

Iván Sánchez-Iglesias

Faculty of Education & Social Work, The University of Auckland, Auckland, New Zealand

Gavin T. L. Brown


Corresponding author

Correspondence to Javier Fernández Ruiz .

Ethics declarations

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Facultad de Psicología y Educación, Universidad de Deusto, Bilbao, Spain. IKERBASQUE, Basque Foundation for Science, Bilbao, Spain.

Current themes of research :

Self-regulated learning. Formative assessment (self-assessment, peer assessment, and teacher’s feedback). Rubrics. Socially shared regulated learning.

Most relevant publications in the field of Psychology of Education :

Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research.  Frontiers in psychology ,  8 , 422.

Panadero, E., & Jonsson, A. (2020). A critical review of the arguments against the use of rubrics. Educational Research Review, 30, 100329.

Panadero, E., Brown, G. T., & Strijbos, J. W. (2016). The future of student self-assessment: A review of known unknowns and potential directions.  Educational Psychology Review ,  28 (4), 803–830.

Daniel García-Pérez

Departamento de Investigación y Psicología en Educación, Universidad Complutense de Madrid, Madrid, Spain.

Educational research. Democratic education. Assessment. Learning strategies.

García-Pérez, D., Fraile, J., & Panadero, E. (2021). Learning strategies and self-regulation in context: How higher education students approach different courses, assessments, and challenges.  European Journal of Psychology of Education ,  36 (2), 533–550.

Panadero, E., Garcia-Pérez, D., & Fraile, J. (2018). Self-assessment for learning in vocational education and training.  Handbook of vocational education and training: Developments in the changing world of work , 1–12.

Javier Fernández Ruiz

Departamento de Psicología Evolutiva y de la Educación, Universidad Autónoma de Madrid, 28049 Madrid, Spain. E-mail: [email protected].

Higher education. Formative assessment. Teacher education. Assessment design.

Fernández-Ruiz, J., Panadero, E., & García-Pérez, D. (2021). Assessment from a disciplinary approach: Design and implementation in three undergraduate programmes.  Assessment in Education: Principles, Policy & Practice ,  28 (5–6), 703–723.

Fernández Ruiz, J., Panadero, E., García-Pérez, D., & Pinedo, L. (2021). Assessment design decisions in practice: Profile identification in approaches to assessment design.  Assessment & Evaluation in Higher Education , 1–16.

Juan Fraile

Universidad Francisco de Vitoria, Madrid, Spain.

Rubrics. Self-regulated learning. Assessment.

Fraile, J., Panadero, E., & Pardo, R. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students.  Studies in Educational Evaluation ,  53 , 69–76.

Panadero, E., Fraile, J., Fernández Ruiz, J., Castilla-Estévez, D., & Ruiz, M. A. (2019). Spanish university assessment practices: Examination tradition with diversity by faculty.  Assessment & Evaluation in Higher Education ,  44 (3), 379–397.

Iván Sánchez-Iglesias

Departamento de Psicobiología y Metodología de las Ciencias del Comportamiento, Universidad Complutense de Madrid, Madrid, Spain.

Statistics. Data analysis. Psychology. Psychometrics.

Panadero, E., Fernández-Ruiz, J., & Sánchez-Iglesias, I. (2020). Secondary education students’ self-assessment: The effects of feedback, subject matter, year level, and gender.  Assessment in Education: Principles, Policy & Practice ,  27 (6), 607–634.

Gavin T. L. Brown

Faculty of Education & Social Work, The University of Auckland, Auckland, New Zealand.

Educational assessment. Psychology of assessment. Psychometrics.

Brown, G. T. (2004). Teachers’ conceptions of assessment: Implications for policy and professional development.  Assessment in Education: Principles, Policy & Practice ,  11 (3), 301–318.

Brown, G. T. L., & Harris, L. R. (2013). Student self-assessment. In J. McMillan (Ed.), The SAGE handbook of research on classroom assessment (pp. 367–393). Thousand Oaks, CA: SAGE.

Appendix 1. Rubric

Category: Writing process

Low quality: I started writing the text without planning what I wanted to write. I hardly reread what I was writing and, when I finished, I did not review the text or only looked for misspellings.

Average quality (two options):

a) Before writing, I planned what I wanted to communicate. At the end, I hardly reviewed the text or only looked for misspellings.

b) I started writing without thinking much about what I wanted to tell. However, I reviewed the text several times, looking for all or some of these factors: text structure, coherence and connection between paragraphs, clarity of the message, style, and spelling.

High quality: Before writing, I thoroughly planned what I wanted to tell and how I was going to do it. I reviewed while I was writing and, at the end, I also reviewed the full text at least once. While reviewing, I looked for all or some of these factors: text structure, coherence and connection between paragraphs, clarity of the message, style, and spelling.

Category: Text components (structure and coherence/connection between paragraphs)

Low quality: There is no clear structure with an introduction, a crux, and a closing. Text connectors and/or discourse markers are missing or used incorrectly. Regarding paragraphs, one of these two happens:

a) The text has only one or two paragraphs, without clear internal and external coherence.

b) The text has many very short paragraphs, which makes it difficult to follow the line of argument.

Average quality: A structure is somehow present (introduction, crux, and closing) but could be more clearly delimited. Connectors are used appropriately most of the time. However, there may be one or more of these flaws: the same paragraph includes different, unorganized ideas; the same idea is spread across two paragraphs when it could be in one; the paragraph where the argument is developed is too long and could be divided; connectors/text markers are misused.

High quality: There is a very clear structure in the text, including an opening, an argument crux, and a closing. Ideas are connected and presented in well-organized paragraphs. Connectors and/or discourse markers are used effectively.

Category: Text components (sentences, vocabulary, and punctuation)

Low quality: Sentences are too long (over 40 words) or too short. Excessive use of insertions within sentences. Punctuation is incorrect (e.g., missing commas needed to break up the sentence). Too many colloquial expressions. Overuse of passive or impersonal constructions.

Average quality: Most sentences are of adequate length, with a few that are too long, too short, or incomplete. Punctuation is correct, although there may be a few mistakes. The vocabulary is adequate, but different terms are used to refer to the central concept of the text. Some colloquial expressions may appear.

High quality: The sentences are well constructed, usually following a simple structure, in active voice and with coherent use of verbs. Punctuation is correct. The vocabulary is adequate, and the main terms are used with precision.
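
To make the rubric's structure easier to see at a glance, the sketch below models it as a small data structure and records one student's self-assessment against it. This is purely illustrative and is not part of the study's materials; the 1–3 numeric scale and all names in the code are assumptions introduced here.

```python
# Illustrative only: a minimal model of the writing rubric above, used to
# record and tally one student's self-assessment. The 1-3 scale and every
# name below are assumptions, not part of the published study's materials.

RUBRIC_LEVELS = ["Low quality", "Average quality", "High quality"]

RUBRIC_CATEGORIES = [
    "Writing process",
    "Text components (structure and coherence/connection between paragraphs)",
    "Text components (sentences, vocabulary, and punctuation)",
]

def record_self_assessment(choices):
    """Map each rubric category to a 1-3 score (1 = low quality, 3 = high quality)."""
    scores = {}
    for category in RUBRIC_CATEGORIES:
        level = choices[category]
        if level not in RUBRIC_LEVELS:
            raise ValueError(f"Unknown level {level!r} for {category!r}")
        scores[category] = RUBRIC_LEVELS.index(level) + 1
    return scores

if __name__ == "__main__":
    example = {
        "Writing process": "Average quality",
        "Text components (structure and coherence/connection between paragraphs)": "High quality",
        "Text components (sentences, vocabulary, and punctuation)": "Average quality",
    }
    scores = record_self_assessment(example)
    print(scores)
    print("Overall:", sum(scores.values()), "out of", 3 * len(scores))
```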

Appendix 2. Instructor feedback (three samples)

Sample 1. The text structure has important flaws. It does not follow a coherent argument; on the contrary, ideas change abruptly in each paragraph. For instance, any of the first three paragraphs could be the introduction, because each of them presents different ideas as if it were the introduction. Later, in the argument crux, there are several ideas without connection. Finally, the second-to-last paragraph seems to close the text, yet there is an additional paragraph after it. Furthermore, that second-to-last paragraph includes a new idea (about the methodology) that has not been mentioned before and could have been used as an argument in favor of Psychology.

To sum up, even though a central message can be perceived (the multiple areas of application of Psychology), it is not developed or transmitted effectively. Regarding grammar, mistakes are highlighted in the text and commented on in the footnotes.

Sample 2. The text has a fairly clear structure, with an introductory paragraph, three paragraphs for the crux, and a closing paragraph. However, there are two arguments in the introduction, and one of them (the skepticism of certain people) is not developed in order to refute it. In addition, the last paragraph includes a new idea that has not been discussed before and does not recap or finish with the main message to be transmitted. In general, connectors and discourse markers are used correctly.

Regarding style and grammar, sentence construction is generally correct and the vocabulary is appropriate. Nevertheless, there are some mistakes in sentence construction and some limitations in the vocabulary, which are highlighted in the text and commented on in footnotes.

Sample 3. The text has an adequate argumentative structure, with an introductory paragraph, four paragraphs for the argument crux, and a closing paragraph. Connectors and discourse markers are used properly.

Regarding style, the text is correct in its vocabulary, use of punctuation marks, and sentence construction. There are some minor mistakes highlighted in the text and commented on in footnotes.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Panadero, E., Pérez, D.G., Ruiz, J.F. et al. University students’ strategies and criteria during self-assessment: instructor’s feedback, rubrics, and year level effects. Eur J Psychol Educ 38 , 1031–1051 (2023). https://doi.org/10.1007/s10212-022-00639-4


Received : 13 January 2022

Revised : 16 September 2022

Accepted : 27 September 2022

Published : 24 October 2022

Issue Date : September 2023

DOI : https://doi.org/10.1007/s10212-022-00639-4


  • Feedback effects
  • Higher education


Self-Assessment Report (SAR) Process

Introduction

The Self-Assessment Report (SAR) is the first step of a programme review. It is the critical self-analysis of a programme or entity based on documented evidence and completed by the programme or entity itself prior to the external peer review. Prior to this, a curriculum review exercise is usually conducted, a report of which will be included in the SAR.

What is it?​

The SAR is an approximately 40-page (plus appendices) holistic report that covers all aspects of the programme and allows the deliverers and organizers of the programme to self-appraise its accomplishments and progress.

Guidelines define what constitutes a "complete" self-assessment report, as required by the university's Academic Quality Framework. Before the SAR is forwarded to the peer reviewers, Quality Assurance Review Committee (QARC) members independently review it for completeness, which requires the following elements:

The SAR addresses all 18 cells (see IUCEA Road Map, Volume 1: p. 36), and provides a substantial array of evidence to support the findings of the Report. 

The SAR clearly defines the strengths and weaknesses of the programme, as the Self-Assessment Committee sees them, and also areas of good practice. 

The contents of the proposed Improvement Plan (or Action Plan) are closely aligned with the identified weaknesses of the programme, and the proposed actions are SMART (i.e. specific, measurable, achievable, realistic and time-bound). 

How does it work?

QAI conducts self-assessment training for SAR participants appointed by the Dean. The individuals who undergo the SAR training then carry out the SAR exercise for the programme(s) in their entity. The completed SAR is independently reviewed by QARC for its "completeness" and signed off for the next level, which is the peer/external review process.
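
As a purely illustrative aside, the QARC completeness check described above can be thought of as a short checklist applied to the finished report. The sketch below models that idea; the criterion wording is paraphrased from this page, and the function and field names are assumptions introduced here, not a description of AKU's actual process or tooling.

```python
# Illustrative only: the QARC "completeness" review modeled as a checklist.
# Criterion wording is paraphrased from the page above; nothing here
# describes AKU's real tooling.

COMPLETENESS_CRITERIA = [
    "Addresses all 18 cells of the IUCEA Road Map with supporting evidence",
    "Defines strengths, weaknesses, and areas of good practice",
    "Improvement Plan aligns with weaknesses and its actions are SMART",
]

def review_completeness(reviewer_checks):
    """reviewer_checks: one boolean per criterion from an independent reviewer."""
    missing = [crit for crit, ok in zip(COMPLETENESS_CRITERIA, reviewer_checks) if not ok]
    return {"complete": not missing, "missing": missing}

if __name__ == "__main__":
    # Example: the Improvement Plan is not yet aligned, so the SAR is sent back.
    print(review_completeness([True, True, False]))
```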

Benefits: 

Why is self-evaluation important?

Staff within an educational organization should always aim to improve and enhance its quality, rather than passively accepting that everything is fine.

The self-assessment will provide information that was not previously known to all parties and enhance the transparency and accountability of a programme/entity.

The self-assessment involves all members of an organization, including students, in the discussion on the quality of education, and takes their input into account.

A self-assessment serves as a preparation for a site visit by external experts, providing them with basic information.

Participation:

The SAR team is composed of 4–6 faculty members, at least one academic staff member, and at least one student. These individuals are nominated by the Dean and selected by the Provost. The Dean will nominate a faculty member from the group to serve as Chair and to be responsible for the production of the SAR. Normally, the Programme Director/Head of Programmes chairs the SAR group. A secretary from within the entity will be assigned to work with the group.

An effective self-assessment is time-consuming. It requires effort by staff and students. The approximate duration of a programme SAR is 4–6 months, and a cluster-programme or full entity SAR is 6–8 months.

The following list shows all AKU entities/programmes that have gone through self-assessment training and have completed or are undergoing the SAR exercise:

School of Nursing and Midwifery, Tanzania
  • Bachelor of Science (Post-RN BScN): Complete

School of Nursing and Midwifery, Kenya
  • Bachelor of Science (Post-RN BScN): Complete

School of Nursing and Midwifery, Uganda
  • Bachelor of Science (Post-RN BScN): Complete

School of Nursing and Midwifery, Pakistan
  • Bachelor of Science in Nursing (BScN): Complete
  • Bachelor of Science (Post-RN BScN): Complete
  • Bachelor of Science in Midwifery (Post-RM BScM): Complete
  • Master of Science in Nursing: Complete

Medical College, East Africa
  • Post Graduate Medical Education, specialty in Obstetrics and Gynecology, AKU Health Sciences: Complete
  • Post Graduate Medical Education, specialty in Pediatrics and Child Health: Complete

Medical College, Pakistan
  • Master of Health Professions Education (MHPE): Complete
  • Master of Science (Health Policy & Management): Complete
  • Master of Science (Epidemiology & Biostatistics): Complete
  • Associate of Science in Dental Hygiene (ASDH): Ongoing

Institute for Educational Development, East Africa
  • Master of Education: Complete
  • Master of Education: Complete
  • Master of Philosophy: Complete

PGME Pakistan
  • General Surgery: Ongoing
  • Internal Medicine: Ongoing
  • Orthopedics

QAI is available to aid the programmes throughout the SAR process through a three-tier support system.

1st tier: Training Sessions

QAI will conduct SAR training for members of the SAR committee 

If requested, QAI can provide an additional cell-by-cell training session(s)

QAI will guide the members through the process, address any challenges and answer all questions, and provide examples of other entities that have faced similar challenges

QAI will refer the SAR team to available self-assessment forums/resources both internal and external to the university

2nd tier: Email and Phone Correspondence

Any questions or concerns from the SAR team can be addressed to QAI via email or telephone and will be responded to as soon as possible

QAI will send periodic reminders to the SAR Chair requesting updates on the process and/or inquiring if they require additional aid

Contact [email protected] for any questions

3rd tier: Additional One-on-one Meetings

Individual or group consultation with SAR members and QAI members

Zoom, Skype or face-to-face

Testimonials:

“I personally started to analyze my own teaching/learning practices.”

(SONAM Pakistan, 2015)

“Initially we felt threatened and defensive towards having to participate in the cyclical review process. However, as we went through the self-assessment process, the act of identifying our own issues means we are more likely to make the needed changes for improvement. It was a very useful process!”

(Department Chair Medical College Pakistan Graduate Programmes, 2016)

“[I] have more clarity on how to identify evidences and to develop an improvement plan.”

(IED PK SAR Training, 2018)

QAI Associate Director Faisal Notta conducting SAR training for SONAM Pakistan (2015)

QTL Director Tashmin Khamis conducting SAR training for IED PK (2018)

Resources:

For the full guidance sheet, the SAR checklist of “completeness” and other helpful links, check out our Tools for Programme Review at AKU in the Resources section.


School Health Index

An online self-assessment and planning tool for schools.

The School Health Index (SHI) Self-Assessment and Planning Guide is an online self-evaluation and planning tool for schools. The SHI is built on CDC’s research-based guidelines for school health programs that identify the policies and practices most likely to be effective in reducing youth health risk behaviors. The SHI is easy to use and is completely confidential.

The SHI (and related materials) is available as an interactive, customizable online tool or downloadable, printable version. The SHI aligns with the Whole School, Whole Community, Whole Child (WSCC) model.


CDC developed the SHI in partnership with school administrators and staff, school health experts, parents, and national nongovernmental health and education agencies to:

  • Enable schools to identify strengths and weaknesses of health and safety policies and programs.
  • Enable schools to develop an action plan for improving student health that can be incorporated into the School Improvement Plan.
  • Engage teachers, parents, students, and the community in promoting health-enhancing behaviors and better health.

Two versions of the tool are available for download:

  • SHI for Elementary Schools [PDF – 2.3 MB]
  • SHI for Middle and High Schools [PDF – 3.2 MB]

Your Guide to Using the School Health Index (SHI) [PDF – 13 MB] includes information and resources for district and school staff who are familiar with the SHI and who are charged with completing the assessment. Included in this Guide are information, materials, and resources on how to implement the SHI in schools, as well as a Facilitating Groups section for conducting trainings, workshops, or presentations on the SHI.

This course introduces you to CDC's School Health Index: Self-Assessment and Planning Guide. After this training, you will be ready to conduct or participate in a self-assessment and create a plan to improve the health of students in your school or district.


Advertiser Democrat

SAD 17 sets goals to improve student assessment scores


PARIS — During Monday night’s Maine School Administrative District 17 business meeting, Curriculum Director Jill Bartash shared data from state assessment tests for reading and math conducted over the last three years. Scores for most schools are showing improved performance since the end of the pandemic, but it is clear that challenges from two years of school closings, quarantines and remote learning remain.

An additional challenge in analyzing assessment scores year over year is that three different testing formats have been used in Maine in the past six years, Bartash explained to directors. During the 2018-19 academic year, eMPowerME was implemented, and the following year no assessments were done while schools were closed during the early months of COVID-related mandates.

SAD 17 Curriculum Director Jill Bartash walked school board directors through state assessment results during Monday night's school board meeting. Nicole Carter / Advertiser Democrat

During the 2020-21 and 2021-22 school years tests were provided by Northwest Evaluation Association (NWEA), a division of Houghton Mifflin Harcourt publishing. Since 2022 the Maine Department of Education has utilized Maine Through Year, another test program developed by NWEA.

The assessments are done in public school districts statewide. SAD 17 has also used Star assessments for reading and math for several years, which has the benefit of more consistent data history.

According to Bartash, the major takeaways from the assessment data are attendance, the loss of learning between 2019 and 2022, and the need for continuing educator training to meet the demands of post-pandemic education.

“Attendance matters,” she explained in her presentation. “Students need to be in school to learn new concepts, to fill learning gaps from previous years, and to gain the benefits of social connections to the school community.

“Rebounding [from COVID] will take time. Students missed from two-thirds to an entire year of schooling. This will take years to recapture. And professional development, coaching and intervention matter.”

She said educators are teaching children who have missed out on fundamentals that prepare them for later grades, and it is important that teachers are ready to fill in gaps that have not previously been part of their curriculum.

To counter those challenges, SAD 17 will continue targeting school attendance with a team approach to reduce chronic absenteeism, prioritize professional development and training to help students recover from lost classroom experiences, and build multi-tiered supports and interventions to meet the needs of struggling students.

The most recent results were not certified, but year-to-year comparisons by school show that Oxford Hills scores continue to lag behind state averages. As statewide scores improve in the post-pandemic school years, however, local results are on similar upward trajectories.

The state averages in 2020-21 were 85.0 in reading and 81.3 in math. Over the next two years they declined to 64.6 in reading and 48.7 in math; this year reading recovered slightly to 65.3 while math slipped to 47.2.

In SAD 17, the 2020-21 averages were 80.1 in reading and 74.1 in math, dipping to 56.2 in reading and 32.2 in math last year; this year reading improved to 57.0 and math held at 32.2.
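
As a purely illustrative aside, the gap between the district and the state can be read directly off the figures quoted above; the short calculation below only restates that arithmetic, and the variable names are introduced here.

```python
# Illustrative only: restating the gaps implied by the figures quoted above
# (most recent year). Variable names are ours, not the district's.
state_avg = {"reading": 65.3, "math": 47.2}
sad17_avg = {"reading": 57.0, "math": 32.2}

for subject in state_avg:
    gap = round(state_avg[subject] - sad17_avg[subject], 1)
    print(f"{subject}: SAD 17 trails the state average by {gap} points")
# reading: SAD 17 trails the state average by 8.3 points
# math: SAD 17 trails the state average by 15.0 points
```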

In other business, the board voted unanimously to approve $2 million in tax and revenue anticipation notes. It establishes a line of credit for the district to pay bills during the school year when there are timing gaps in federal grants and reimbursements and local share payments. Renewed annually, the fund is a tool for managing cash flow of budgeted expenditures.

Security Risk Assessment Tool

The Health Insurance Portability and Accountability Act (HIPAA) Security Rule requires that covered entities and their business associates conduct a risk assessment of their healthcare organization. A risk assessment helps your organization ensure it is compliant with HIPAA's administrative, physical, and technical safeguards. A risk assessment also helps reveal areas where your organization's protected health information (PHI) could be at risk. To learn more about the assessment process and how it benefits your organization, visit the Office for Civil Rights' official guidance.

What is the Security Risk Assessment Tool (SRA Tool)?

The Office of the National Coordinator for Health Information Technology (ONC), in collaboration with the HHS Office for Civil Rights (OCR), developed a downloadable Security Risk Assessment (SRA) Tool to help guide you through the process. The tool is designed to help healthcare providers conduct a security risk assessment as required by the HIPAA Security Rule. The target audience of this tool is medium and small providers; thus, use of this tool may not be appropriate for larger organizations.

SRA Tool for Windows

The SRA Tool is a desktop application that walks users through the security risk assessment process using a simple, wizard-based approach. Users are guided through multiple-choice questions, threat and vulnerability assessments, and asset and vendor management. References and additional guidance are given along the way. Reports are available to save and print after the assessment is completed.

This application can be installed on computers running 64-bit versions of Microsoft Windows 7/8/10/11. All information entered into the tool is stored locally on the user's computer. HHS does not collect, view, store, or transmit any information entered into the SRA Tool.

Download Version 3.4 of the SRA Tool for Windows [.msi - 70.4 MB]

SRA Tool Excel Workbook

This version of the SRA Tool takes the same content from the Windows desktop application and presents it in a familiar spreadsheet format. The Excel Workbook contains conditional formatting and formulas to calculate and help identify risk in a similar fashion to the SRA Tool application. This version of the SRA Tool is intended to replace the legacy "Paper Version" and may be a good option for users who do not have access to Microsoft Windows or otherwise need more flexibility than is provided by the SRA Tool for Windows.

This workbook can be used on any computer using Microsoft Excel or another program capable of handling .xlsx files. Some features and formatting may only work in Excel.

Download Version 3.4 of the SRA Tool Excel Workbook [.xlsx - 128 KB]
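
Risk assessments of this kind commonly rank findings by combining a likelihood rating with an impact rating. The sketch below shows that general idea only; it is an assumption on our part and does not describe the SRA Tool's actual formulas, which are not published in this summary, and the findings listed are invented for the example.

```python
# Illustrative only: a generic likelihood-by-impact risk ranking, a common
# convention in security risk assessments. This is NOT the SRA Tool's
# internal scoring; it is an assumption used purely for illustration.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Return a 1-9 score; higher scores suggest the finding needs attention sooner."""
    return LEVELS[likelihood] * LEVELS[impact]

# Hypothetical findings, invented for the example.
findings = [
    ("Laptops storing PHI are not encrypted", "medium", "high"),
    ("No documented contingency plan", "low", "medium"),
]

for name, likelihood, impact in sorted(findings, key=lambda f: -risk_score(f[1], f[2])):
    print(f"{risk_score(likelihood, impact)}  {name}")
```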

SRA Tool User Guide

Download the SRA Tool User Guide for FAQs and details on how to install and use the SRA Tool application and SRA Tool Excel Workbook.

Download SRA Tool User Guide [.pdf - 3.3 MB]

What's new in Version 3.4: 

  • Remediation Report – Track response to vulnerabilities inside the tool
  • Glossary & tool tips – Hover over terms to get more information
  • HICP 2023 edition references
  • Bug fixes, usability improvements

The Security Risk Assessment Tool at HealthIT.gov is provided for informational purposes only. Use of this tool is neither required by nor guarantees compliance with federal, state or local laws. Please note that the information presented may not be applicable or appropriate for all health care providers and organizations. The Security Risk Assessment Tool is not intended to be an exhaustive or definitive source on safeguarding health information from privacy and security risks. For more information about the HIPAA Privacy and Security Rules, please visit the  HHS Office for Civil Rights Health Information Privacy website .

NOTE: The NIST Standards provided in this tool are for informational purposes only as they may reflect current best practices in information technology and are not required for compliance with the HIPAA Security Rule’s requirements for risk assessment and risk management. This tool is not intended to serve as legal advice or as recommendations based on a provider or professional’s specific circumstances. We encourage providers, and professionals to seek expert advice when evaluating the use of this tool.



Check how to register for Self Assessment

Use this tool to find out how to register for Self Assessment.

You must register for Self Assessment by 5 October 2024 if you have to send a tax return and you have not sent one before.

This service is also available  in Welsh (Cymraeg) .

If you’ve registered before

If you’ve registered for Self Assessment before but did not send a tax return last year, you must register again.

If you’re waiting for a Unique Taxpayer Reference (UTR), you can check when you can expect a reply from HMRC .

Before you start

You should check if you need to send a tax return before registering.
