
Outcomes and Objectives


Learning outcomes and objectives are the fundamental elements of most well-designed courses. Well-conceived outcomes and objectives serve as guideposts: they help instructors design a course so that students receive the guidance and structure needed to achieve meaningful outcomes, and they shape how those outcomes can be assessed accurately and appropriately.

The Basics of Learning Outcomes and Objectives

Defining terms.

While the terms “learning outcomes” and “learning objectives” are used with varied meanings in varied contexts across higher education, at Purdue we try to use them in a more precise manner. By Learning Outcomes we mean a set of three to five goals that reflect what students will be able to achieve, or the skills and attitudes they will develop, during the class. We use Learning Objective to refer to the steps that lead to a particular outcome. By approaching teaching and learning goals in this way, we can help students understand the path toward successful completion of the class. Some people also use the term Learning Goals; while we do not use that term officially, it can be useful in discussions with students about what outcomes and objectives mean, particularly if you co-construct one or more outcomes with them.

Represent the result rather than the means

Outcomes define the end results of a student’s successful engagement in a class. It is important to remember that the ends are different from the means. An outcome is not the process with which students engage to reach a goal, but the end result of achieving that goal. In some cases, the end result will be learning a process, but integrating a process into one’s cognitive and skill repertoire is different from going through that process (e.g., learning how to write a research paper is different from writing a research paper).

A note about Foundational and Embedded learning outcomes

At Purdue, many courses are designated as fulfilling foundational and/or embedded learning outcomes. These outcomes are defined at the university level and assessed regularly. They help to define what it means for a student to complete an undergraduate degree from Purdue, and they set Purdue apart for its high standards of student achievement across a range of core topics. Each outcome type is approached and handled differently. Check with your department about whether and how your class is designated as fulfilling these outcomes, and email [email protected] if you would like assistance incorporating them into your class.

Writing Meaningful Outcomes

There are numerous strategies for writing effective learning outcomes, each with advantages and disadvantages, including more or less structure. One of the most common approaches is to think of outcomes as finishing the following sentence: “Upon successfully completing this course, students will be able to…” This framing emphasizes outcomes as the forward-looking result rather than the means. It also supports transparency by prompting a discussion about what success in the course looks like.

The basics: a verb and an object

If you are just beginning to write outcomes and objectives, try aiming for three components. The following are two similar models that may be useful for thinking through this in your class (a short illustrative sketch follows the two lists):

Approach 1:

  • The  verb  generally refers to [actions associated with] the intended  cognitive process .
  • The  object  generally describes the  knowledge  students are expected to acquire or construct.
  • A statement regarding the  criterion for successful performance .

Approach 2 (from Tobin and Behling’s book, Reach Everyone, Teach Everyone; see page 181 in Chapter 7 for examples):

  • Desired  behavior , with as much specificity as possible.
  • Measurement  that explains how you will gauge a student’s mastery.
  • Level of  proficiency  a student should exhibit to have mastered the objective.
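
To make the three components concrete, here is a minimal sketch in Python; the function name, template, and example strings are hypothetical illustrations, not part of any official tool or format:

```python
# Minimal sketch: build an outcome from a verb, an object, and a
# success criterion, finishing the common sentence stem from above.
# Hypothetical helper, for illustration only.

def outcome(verb: str, obj: str, criterion: str) -> str:
    """Finish the stem 'students will be able to...' with the three parts."""
    return (
        "Upon successfully completing this course, students will be able to "
        f"{verb} {obj} {criterion}."
    )

print(outcome(
    verb="analyze",
    obj="primary-source documents",
    criterion="to produce a historically grounded argument",
))
# Upon successfully completing this course, students will be able to analyze
# primary-source documents to produce a historically grounded argument.
```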

The implications of language

We begin with the verb because research into cognitive processes reveals that the verb has profound implications for the type and complexity of the cognitive processes involved. There are countless lists of such verbs, often associated with Benjamin Bloom, the highly influential educational theorist who defined learning around mastery and, in doing so, began to categorize cognitive, affective, and psychomotor processes into hierarchies of difficulty. In the early 2000s, this work was revised and expanded by a large team of scholars, who among other changes added a separate knowledge dimension to the cognitive taxonomy. These verb lists can be misleading, as you may often see the same verb associated with multiple cognitive tasks. We encourage you to use the descriptors in your outcome to identify what students will actually be able to do and to ensure that your use of the verb aligns appropriately.

When we ask ourselves questions about the implications of our verb choices, we are often forced to reckon with overused generic terms. The most common example is “understand.” For many, this is the first verb that comes to mind when thinking about what students should be able to do at the end of a course. Consider the popular YouTube series by Wired in which an expert explains a topic at five levels of complexity: to a child, a teen, an undergraduate, a graduate student, and a peer expert. At the end of these explanations, all five have developed or demonstrated an understanding of the concept, but their understandings are vastly different. One way of working out outcomes and objectives is to start with “understand” and then add a second verb that clarifies the level (what a student at this level will be able to do). Once that second verb is in place, it is often clearer and more precise to remove the generic “understand” altogether.

Be transparent: avoid secrets and highlight challenges

Valuing and caring are legitimate outcomes.

Instructors often have what might be termed “secret” learning outcomes or objectives, which are often affective rather than cognitive in nature. For example, in some classes an instructor may want students to appreciate the importance of the subject matter. Often, this involves teaching material that students perceive as tangential to their degree program but that instructors and departments believe is essential. Common examples involve writing and communication skills, ethics, or legal knowledge in fields where practitioners use these competencies every day, while students focus on what they perceive as more quantifiable skills. In the affective learning domain, you may consider outcomes focused on valuing or caring about something (see the alternate outcome types below).

Reveal bottlenecks

Another type of secret or hidden outcome or objective involves what instructors have identified as bottlenecks in their course or discipline. These bottlenecks often reflect ideas, concepts, or skills that may seem small but, when not mastered, can pose long-lasting challenges for many students. Sometimes these seem tangential, like the values described above; other times a bottleneck is part of a process that students tend to skip (varying modes of checking for errors, for example); and sometimes a bottleneck requires that a student take a different perspective when engaging with a source or problem. Students often run into these bottlenecks because they rely on learning methods that worked for low-complexity topics but cannot handle the complex elements of your course. Some topics are counterintuitive to how we experience the world, and to avoid bottlenecks, students need to overcome their preconceptions and experiences. By highlighting these bottlenecks as explicit outcomes or objectives, making them transparent, pointing to the challenges they pose, and highlighting why it is vital to overcome them, we support students’ long-term success as they move beyond our class as well.

Consider different types of outcomes and objectives

The vast majority of learning outcomes, and of advice about writing them, focuses on discrete cognitive skills that are measurable through simple means. For example, a common outcome may read something like: “Apply the first law of thermodynamics in a closed system.” These discrete and easily measurable skills are vital in many disciplines, but you may also consider learning outcomes that focus on other aspects of one’s life and development. L. Dee Fink, the author of the book Creating Significant Learning Experiences, describes six outcome categories. The first three deal with cognitive skills; the second three address affective, interpersonal, and intrapersonal development. By including this second set of goals in our course design, we introduce opportunities to support students’ ability to engage in more meaningful ways with each other and, by extension, their feelings of belongingness, connection, and individuality in the class.

  • Foundational knowledge : understanding and remembering information and ideas
  • Application : skills, critical thinking, creative thinking, practical thinking, and managing projects (e.g., the thermodynamics example above)
  • Integration : connecting information, ideas, perspectives, people, or realms of life
  • Human dimension : learning about oneself and others
  • Caring : developing new feelings, interests, and values
  • Learning how to learn : becoming a better student, inquiring about a subject, becoming a self-directed learner

Try treating students as partners around outcomes

Co-construction.

While the broad shape of an outcome will almost always be carefully crafted ahead of time, one approach to help students feel connected to the class is to enlist them in co-constructing parts of an outcome. Most frequently, this co-construction revolves around what success will look like, and it is particularly useful for outcomes in which different students can succeed in different ways. For example, in a discussion-oriented class, one of the outcomes may focus on students developing their communication skills through class participation. But personality and other differences may mean that students have vastly different needs in developing these skills. At a basic level, some students may have greater challenges with speaking up and sharing their thoughts in front of their peers and instructor. Other students may need to better develop their skills in listening to peers and responding productively. By approaching this outcome through co-construction, each student can set, and be measured by, goals that will appropriately challenge that student and help them develop important skills.

When outcomes are fixed, focus on communicating and responding to students

In most classes, outcomes and objectives are predetermined and sometimes must adhere to standards beyond an instructor’s control, whether university requirements or those of national accreditors. Especially in cases where outcomes are fixed, it is too easy to assume that students’ goals are also fixed. Even when classes are required as part of a sequence for a major, students often have widely varying goals for their lives and careers, and differing ideas about how a particular class may fit into achieving those goals. When we start the semester, we can ask students about their goals and what they hope to get out of a class, and then use existing outcomes and objectives to highlight connections and possibilities. Remember that, because students have not yet engaged with this material, they are much less prepared to make these connections. What seems obvious to an expert instructor may seem opaque to a learner.

Ask students about the achievements related to outcomes

One common model for understanding student achievement involves asking students about their success specifically related to the course outcomes. This can be done to gauge their perception of success (“As a result of your work in this class, what gains did you make in [course outcome]?”) or to gauge the effectiveness of specific teaching practices (“How much did the following aspects of the course help you in your learning?” with examples including class and lab activities, assessments, particular learning methods, and resources). Both of these questions come from the SALG (Student Assessment of Learning Gains) survey/tool (note: the website is rather dated). Studies demonstrate that, while students tend to overestimate their competence relative to instructors’ assessments, their input is broadly informative, and when disparities emerge, they can prompt instructors to interrogate their teaching and assessment practices.

Share and reference outcomes and objectives early and often

Discuss outcomes and objectives in every class session.

One of the most common instructor complaints is that students do not pay attention to the outcomes and objectives of a class. This is often a case of mutual neglect. In addition to including class outcomes in your syllabus, highlight outcomes and their connections to objectives in each class session and in instructions for assignments. During class sessions, find opportunities to remind students of these connections. By creating a culture in which outcomes and objectives are integrated throughout the class, students are better able to follow their progression and understand how different class components and kinds of learning integrate with one another.

Build outcomes into the design of assignments

When sharing instructions or guidelines for an assessment, make sure to share and discuss how the assignment fits into the structure of learning outcomes and objectives for the class. See the Creating Inclusive Grading Structures page for more detail and structures.

Write outcomes that reflect your students’ experiences and abilities

Prepare for different academic experiences.

One challenge in planning a class is that it is easy to imagine an idealized student who will enroll: one who has completed certain other classes, possibly had certain experiences, and holds certain goals. This ideal-student assumption leads many instructors to complain that students were not properly prepared for their class. When writing outcomes, it is valuable to write them for the students actually present. In reality, students will take a variety of paths, and prerequisite classes may have been completed at other institutions or with a variety of instructors who emphasized different elements. Even if every student took the exact same class with the exact same instructor the semester prior, students’ strengths and weaknesses with particular topics and skills will vary. This does not mean you must re-teach prerequisite courses, but building in objectives that highlight particular elements of previous classes will help strengthen and clarify previous learning and help students identify existing gaps to fill.

Outcomes can reflect a multitude of expressive processes

Because outcomes, particularly their language, are intimately intertwined with assessment, think carefully about how wording choices may limit students’ ability to express their learning. If an outcome specifies writing, is learning to write in the appropriate format and for the appropriate audience central, or is writing simply one common way for students to express the more central component of the outcome? What if “write” were turned into “express,” “share,” or “present,” all of which open up greater flexibility in the modality of conveying a student’s understanding of content or mastery of skills that are not specific to the written form?

Use the  Learning Outcomes Worksheet  to practice writing at least one outcome and identifying what category you would place it in. You will find a variety of actual examples from Purdue instructors on the second page of the worksheet.


After you have developed one or more outcomes, view the Creating Inclusive Grading Structures and/or Lecturing pages to consider ways of putting your new outcome(s) into practice in your class.

Hanstedt, P. (2018).  Creating wicked students: Designing courses for a complex world . Stylus Publishing.

In this book, Hanstedt argues for creating courses to prepare students to deal with complex problems that do not have simple answers and often draw on a variety of different disciplinary skills and techniques. Chapter 2, in particular, focuses on writing goals (his term for outcomes), with numerous examples.

Fink, L. D. (2013). Creating significant learning experiences: An integrated approach to designing college courses. John Wiley & Sons.

As noted above, Fink's approach focuses on creating outcomes (also using the term goals) that fit six distinct categories. Like Hanstedt, Fink provides guidance and numerous examples of how to construct such goals.

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.

This update/revision of Bloom's cognitive domain includes numerous resources and examples and adds a knowledge dimension, recognizing that each of the six cognitive process categories can be applied to four types of knowledge: factual, conceptual, procedural, and metacognitive.

Note: Purdue Libraries only has a print version of this book.  You can find  online resources  developed by Iowa State University, including detailed information about the knowledge dimension.


Creating Learning Outcomes


A learning outcome is a concise description of what students will learn and how that learning will be assessed. Having clearly articulated learning outcomes can make designing a course, assessing student learning progress, and facilitating learning activities easier and more effective. Learning outcomes can also help students regulate their learning and develop effective study strategies.

Defining the terms

Educational research uses a number of terms for this concept, including learning goals, student learning objectives, session outcomes, and more. 

In alignment with other Stanford resources, we will use learning outcomes as a general term for what students will learn and how that learning will be assessed. This includes both goals and objectives. We will use learning goals to describe general outcomes for an entire course or program. We will use learning objectives when discussing more focused outcomes for specific lessons or activities.

For example, a learning goal might be “By the end of the course, students will be able to develop coherent literary arguments,” whereas a learning objective might be “By the end of Week 5, students will be able to write a coherent thesis statement supported by at least two pieces of evidence.”

Learning outcomes benefit instructors

Learning outcomes can help instructors in a number of ways by:

  • Providing a framework and rationale for making course design decisions about the sequence of topics and instruction, content selection, and so on.
  • Communicating to students what they must do to make progress in learning in your course.
  • Clarifying your intentions to the teaching team, course guests, and other colleagues.
  • Providing a framework for transparent and equitable assessment of student learning. 
  • Making outcomes concerning values and beliefs, such as dedication to discipline-specific values, more concrete and assessable.
  • Making inclusion and belonging explicit and integral to the course design.

Learning outcomes benefit students 

Clearly articulated learning outcomes can also help guide and support students in their own learning by:

  • Clearly communicating the range of learning students will be expected to acquire and demonstrate.
  • Helping learners concentrate on the areas that they need to develop to progress in the course.
  • Helping learners monitor their own progress, reflect on the efficacy of their study strategies, and seek out support or better strategies. (See Promoting Student Metacognition for more on this topic.)

Choosing learning outcomes

When writing learning outcomes to represent the aims and practices of a course or even a discipline, consider:

  • What is the big idea that you hope students will still retain from the course even years later?
  • What are the most important concepts, ideas, methods, theories, approaches, and perspectives of your field that students should learn?
  • What are the most important skills that students should develop and be able to apply in and after your course?
  • What would students need to have mastered earlier in the course or program in order to make progress later or in subsequent courses?
  • What skills and knowledge would students need if they were to pursue a career in this field or contribute to communities impacted by this field?
  • What values, attitudes, and habits of mind and affect would students need if they are to pursue a career in this field or contribute to communities impacted by this field?
  • How can the learning outcomes span a wide range of skills that serve students with differing levels of preparation?
  • How can learning outcomes offer a range of assessment types to serve a diverse student population?

Use learning taxonomies to inform learning outcomes

Learning taxonomies describe how a learner’s understanding develops from simple to complex when learning different subjects or tasks. They are useful here for identifying any foundational skills or knowledge needed for more complex learning, and for matching observable behaviors to different types of learning.

Bloom’s Taxonomy

Bloom’s Taxonomy is a hierarchical model and includes three domains of learning: cognitive, psychomotor, and affective. In this model, learning occurs hierarchically, as each skill builds on previous skills towards increasingly sophisticated learning. For example, in the cognitive domain, learning begins with remembering, then understanding, applying, analyzing, evaluating, and lastly creating. 

Taxonomy of Significant Learning

The Taxonomy of Significant Learning is a non-hierarchical and integral model of learning. It describes learning as a meaningful, holistic, and integral network. This model has six intersecting domains: knowledge, application, integration, human dimension, caring, and learning how to learn. 

See our resource on Learning Taxonomies and Verbs for a summary of these two learning taxonomies.

How to write learning outcomes

Writing learning outcomes can be made easier by using the ABCD approach. This strategy identifies four key elements of an effective learning outcome: audience, behavior, condition, and degree.

Consider the following example: Students (audience) will be able to label and describe (behavior), given a diagram of the eye at the end of this lesson (condition), all seven extraocular muscles and at least two of their actions (degree).

Audience 

Define who will achieve the outcome. Outcomes commonly include phrases such as “After completing this course, students will be able to...” or “After completing this activity, workshop participants will be able to...”

Keeping your audience in mind as you develop your learning outcomes helps ensure that they are relevant and centered on what learners must achieve. Make sure the learning outcome is focused on the student’s behavior, not the instructor’s. If the outcome describes an instructional activity or topic, then it is too focused on the instructor’s intentions and not the students’.

Try to understand your audience so that you can better align your learning goals or objectives to meet their needs. While every group of students is different, certain generalizations about their prior knowledge, goals, motivation, and so on might be made based on course prerequisites, their year-level, or majors. 

Behavior

Use action verbs to describe observable behavior that demonstrates mastery of the goal or objective. Depending on the skill, knowledge, or domain of the behavior, you might select a different action verb. Particularly for learning objectives, which are more specific, avoid verbs that are vague or difficult to assess, such as “understand”, “appreciate”, or “know”.

The behavior usually completes the audience phrase “students will be able to…” with a specific action verb that learners can interpret without ambiguity. We recommend beginning learning goals with a phrase that makes it clear that students are expected to actively contribute to progressing towards a learning goal. For example, “through active engagement and completion of course activities, students will be able to…”

Example action verbs

Consider the following examples of verbs from different learning domains of Bloom’s Taxonomy. Generally speaking, items listed at the top under each domain are more suitable for advanced students, and items listed at the bottom are more suitable for novice or beginning students. Using verbs and associated skills from all three domains, regardless of your discipline area, can benefit students by diversifying the learning experience.

For the cognitive domain:

  • Create, investigate, design
  • Evaluate, argue, support
  • Analyze, compare, examine
  • Solve, operate, demonstrate
  • Describe, locate, translate
  • Remember, define, duplicate, list

For the psychomotor domain:

  • Invent, create, manage
  • Articulate, construct, solve
  • Complete, calibrate, control
  • Build, perform, execute
  • Copy, repeat, follow

For the affective domain:

  • Internalize, propose, conclude
  • Organize, systematize, integrate
  • Justify, share, persuade
  • Respond, contribute, cooperate
  • Capture, pursue, consume

Often we develop broad goals first, then break them down into specific objectives. For example, if a goal is for learners to be able to compose an essay, break it down into several objectives, such as forming a clear thesis statement, coherently ordering points, following a salient argument, gathering and quoting evidence effectively, and so on.

Condition

State the conditions, if any, under which the behavior is to be performed. Consider the following conditions:

  • Equipment or tools, such as using a laboratory device or a specified software application.
  • Situation or environment, such as in a clinical setting, or during a performance.
  • Materials or format, such as written text, a slide presentation, or using specified materials.

The level of specificity for conditions within an objective may vary and should be appropriate to the broader goals. If the conditions are implicit or understood as part of the classroom or assessment situation, it may not be necessary to state them. 

When articulating the conditions in learning outcomes, ensure that they are sensorily and financially accessible to all students.

Degree 

Degree states the standard or criterion for acceptable performance. The degree should be related to real-world expectations: what standard should the learner meet to be judged proficient? For example:

  • With 90% accuracy
  • Within 10 minutes
  • Suitable for submission to an edited journal
  • Obtain a valid solution
  • In a 100-word paragraph

The specificity of the degree will vary. You might take into consideration professional standards, what a student would need to succeed in subsequent courses in a series, or what is required by you as the instructor to accurately assess learning when determining the degree. Where the degree is easy to measure (such as pass or fail) or accuracy is not required, it may be omitted.
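
To see how the four ABCD elements fit together mechanically, here is a minimal sketch in Python; the function name and template are hypothetical illustrations (not part of any campus tool), and the strings come from the worked example above:

```python
# Minimal sketch: assemble a learning outcome from its ABCD elements.
# The function name and sentence template are hypothetical.

def abcd_outcome(audience: str, behavior: str, condition: str, degree: str) -> str:
    """Combine the four ABCD elements into one outcome statement."""
    return f"{condition}, {audience} will be able to {behavior} {degree}."

print(abcd_outcome(
    audience="students",
    behavior="label and describe",
    condition="Given a diagram of the eye at the end of this lesson",
    degree="all seven extraocular muscles and at least two of their actions",
))
# Given a diagram of the eye at the end of this lesson, students will be able
# to label and describe all seven extraocular muscles and at least two of
# their actions.
```

Putting the condition first and the degree last mirrors the ordering of the worked eye-diagram example; any ordering that reads naturally works, as long as all four elements are present.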

Characteristics of effective learning outcomes

The acronym SMART is useful for remembering the characteristics of an effective learning outcome.

  • Specific : clear and distinct from others.
  • Measurable : identifies observable student action.
  • Attainable : suitably challenging for students in the course.
  • Related : connected to other objectives and student interests.
  • Time-bound : likely to be achieved and keep students on task within the given time frame.

Examples of effective learning outcomes

These examples generally follow the ABCD and SMART guidelines. 

Arts and Humanities

Learning goals.

Upon completion of this course, students will be able to apply critical terms and methodology in completing a written literary analysis of a selected literary work.

At the end of the course, students will be able to demonstrate oral competence with the French language in pronunciation, vocabulary, and fluency in a 10-minute in-person interview with a member of the teaching team.

Learning objectives

After completing lessons 1 through 5, given images of specific works of art, students will be able to identify the artist, artistic period, and describe their historical, social, and philosophical contexts in a two-page written essay.

By the end of this course, students will be able to describe the steps in planning a research study, including identifying and formulating relevant theories, generating alternative solutions and strategies, and applying them to a hypothetical case in a written research proposal.

At the end of this lesson, given a diagram of the eye, students will be able to label all of the extraocular muscles and describe at least two of their actions.

Using chemical datasets gathered at the end of the first lab unit, students will be able to create plots and trend lines of that data in Excel and make quantitative predictions about future experiments.

  • How to Write Learning Goals , Evaluation and Research, Student Affairs (2021).
  • SMART Guidelines , Center for Teaching and Learning (2020).
  • Learning Taxonomies and Verbs , Center for Teaching and Learning (2021).

Writing Student Learning Outcomes

Student learning outcomes state what students are expected to know or be able to do upon completion of a course or program. Course learning outcomes may contribute, or map, to program learning outcomes, and they are required in group instruction course syllabi.

At both the course and program level, student learning outcomes should be clear, observable and measurable, and reflect what will be included in the course or program requirements (assignments, exams, projects, etc.). Typically there are 3-7 course learning outcomes and 3-7 program learning outcomes.

When submitting learning outcomes for course or program approvals, or for assessment planning and reporting, please follow these formatting rules (illustrated in the sketch after this list):

  • Begin with a verb (exclude any introductory text and the phrase “Students will…”, as this is assumed)
  • Limit the length of each learning outcome to 400 characters
  • Exclude special characters (e.g., accents, umlauts, ampersands, etc.)
  • Exclude special formatting (e.g., bullets, dashes, numbering, etc.)
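
As a rough illustration of these formatting rules, the Python sketch below checks an outcome against them. This is a hypothetical helper for illustration only, not part of HelioCampus or any campus submission system:

```python
import re

MAX_LENGTH = 400  # character limit from the guidelines above

def submission_problems(outcome: str) -> list[str]:
    """Return a list of rule violations for one learning outcome."""
    problems = []
    # Rule: begin with a verb, not introductory text.
    if re.match(r"(?i)^\s*students\s+will", outcome):
        problems.append("Begin with a verb; drop phrases like 'Students will...'.")
    # Rule: limit length to 400 characters.
    if len(outcome) > MAX_LENGTH:
        problems.append(f"Limit the outcome to {MAX_LENGTH} characters.")
    # Rule: exclude special characters (accents, ampersands, etc.).
    if not outcome.isascii() or "&" in outcome:
        problems.append("Remove special characters (accents, umlauts, ampersands).")
    # Rule: exclude bullets, dashes, and numbering.
    if re.match(r"^\s*([-*\u2022]|\d+[.)])\s", outcome):
        problems.append("Remove bullets, dashes, or numbering.")
    return problems

# A compliant outcome produces an empty list:
print(submission_problems(
    "Apply important chemical concepts and principles "
    "to draw conclusions about chemical reactions"
))  # []
```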


Steps for Writing Outcomes

The following are recommended steps for writing clear, observable and measurable student learning outcomes. In general, use student-focused language, begin with action verbs and ensure that the learning outcomes demonstrate actionable attributes.

1. Begin with an Action Verb

Begin with an action verb that denotes the level of learning expected. Terms such as know, understand, learn, and appreciate are generally not specific enough to be measurable. Levels of learning and associated verbs may include the following:

  • Remembering and understanding: recall, identify, label, illustrate, summarize.
  • Applying and analyzing: use, differentiate, organize, integrate, apply, solve, analyze.
  • Evaluating and creating: monitor, test, judge, produce, revise, compose.

Consult Bloom’s Revised Taxonomy (below) for more details. For additional sample action verbs, consult this list from The Centre for Learning, Innovation & Simulation at The Michener Institute of Education at UHN.
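
If it helps when drafting, the levels and sample verbs above can be kept as a simple lookup table. The Python sketch below is illustrative only: the verb groupings come from the list above, while the names and structure are hypothetical:

```python
# Illustrative lookup table of sample action verbs by level of learning,
# using the groupings listed above.
LEVEL_VERBS = {
    "remembering and understanding": ["recall", "identify", "label", "illustrate", "summarize"],
    "applying and analyzing": ["use", "differentiate", "organize", "integrate", "apply", "solve", "analyze"],
    "evaluating and creating": ["monitor", "test", "judge", "produce", "revise", "compose"],
}

def suggest_verbs(level: str) -> list[str]:
    """Return sample action verbs for a level, or an empty list if unknown."""
    return LEVEL_VERBS.get(level.strip().lower(), [])

print(suggest_verbs("Evaluating and creating"))
# ['monitor', 'test', 'judge', 'produce', 'revise', 'compose']
```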

2. Follow with a Statement

Follow the action verb with a statement describing the knowledge or skill to be demonstrated, for example:

  • Identify and summarize the important features of major periods in the history of western culture
  • Apply important chemical concepts and principles to draw conclusions about chemical reactions
  • Demonstrate knowledge about the significance of current research in the field of psychology by writing a research paper

Keep the length of each learning outcome to no more than 400 characters.

Note: Any special characters (e.g., accents, umlauts, ampersands, etc.) and formatting (e.g., bullets, dashes, numbering, etc.) will need to be removed when submitting learning outcomes through HelioCampus Assessment and Credentialing (formerly AEFIS) and other digital campus systems.

Revised Bloom’s Taxonomy of Learning: The “Cognitive” Domain

[Figure: graphic depiction of Revised Bloom's Taxonomy, with a sampling of verbs that represent learning at each level.]

Text adapted from: Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook 1: Cognitive domain. New York.

Anderson, L.W. (Ed.), Krathwohl, D.R. (Ed.), Airasian, P.W., Cruikshank, K.A., Mayer, R.E., Pintrich, P.R., Raths, J., & Wittrock, M.C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s Taxonomy of Educational Objectives (Complete edition). New York: Longman.

Examples of Learning Outcomes

Academic program learning outcomes.

The following examples of academic program student learning outcomes come from a variety of academic programs across campus and are organized in four broad areas: 1) contextualization of knowledge; 2) praxis and technique; 3) critical thinking; and 4) research and communication.

Student learning outcomes for each UW-Madison undergraduate and graduate academic program can be found in Guide. Click on the program of your choosing to find its designated learning outcomes.


Contextualization of Knowledge

Students will…

  • identify, formulate and solve problems using appropriate information and approaches.
  • demonstrate their understanding of major theories, approaches, concepts, and current and classical research findings in the area of concentration.
  • apply knowledge of mathematics, chemistry, physics, and materials science and engineering principles to materials and materials systems.
  • demonstrate an understanding of the basic biology of microorganisms.

Praxis and Technique

  • utilize the techniques, skills and modern tools necessary for practice.
  • demonstrate professional and ethical responsibility.
  • appropriately apply laws, codes, regulations, architectural and interiors standards that protect the health and safety of the public.

Critical Thinking

  • recognize, describe, predict, and analyze systems behavior.
  • evaluate evidence to determine and implement best practice.
  • examine technical literature, resolve ambiguity and develop conclusions.
  • synthesize knowledge and use insight and creativity to better understand and improve systems.

Research and Communication

  • retrieve, analyze, and interpret the professional and lay literature providing information to both professionals and the public.
  • propose original research: outlining a plan, assembling the necessary protocol, and performing the original research.
  • design and conduct experiments, and analyze and interpret data.
  • write clear and concise technical reports and research articles.
  • communicate effectively through written reports, oral presentations and discussion.
  • guide, mentor and support peers to achieve excellence in practice of the discipline.
  • work in multi-disciplinary teams and provide leadership on materials-related problems that arise in multi-disciplinary work.

Course Learning Outcomes

  • identify, formulate and solve integrative chemistry problems. (Chemistry)
  • build probability models to quantify risks of an insurance system, and use data and technology to make appropriate statistical inferences. (Actuarial Science)
  • use basic vector, raster, 3D design, video and web technologies in the creation of works of art. (Art)
  • apply differential calculus to model rates of change in time of physical and biological phenomena. (Math)
  • identify characteristics of certain structures of the body and explain how structure governs function. (Human Anatomy lab)
  • calculate the magnitude and direction of magnetic fields created by moving electric charges. (Physics)

Additional Resources

  • Bloom’s Taxonomy
  • The Six Facets of Understanding – Wiggins, G. & McTighe, J. (2005). Understanding by Design (2nd ed.). ASCD
  • Taxonomy of Significant Learning – Fink, L.D. (2003). A Self-Directed Guide to Designing Courses for Significant Learning. Jossey-Bass
  • College of Agricultural & Life Sciences Undergraduate Learning Outcomes
  • College of Letters & Science Undergraduate Learning Outcomes

Guides best practices: Write learning outcomes and create an outline

Write learning outcomes.

We recommend writing a few learning outcomes to steer your guide. A learning outcome states what a participant will be able to do after instruction. A learning outcome should contain:

  • An AUDIENCE: who will be able to do it after the instruction.
  • A BEHAVIOR: what they will be able to do after the instruction. Use a verb that can be observed so that later on you can assess whether the guide was successful.
  • A CONDITION: the circumstances under which they will do it.
  • A DEGREE: how much or how well they will do it.

For example, a learning outcome for this guide is: After using this guide (condition), Stanford Libraries staff (audience) will write (behavior) one to three learning outcomes for each guide they create (degree).

Create an outline

Once you write a handful of learning outcomes (1 to 3), you can start to divide up the material into "chunks". These will become the pages on your guide.

DO create small, digestible chunks. The capacity for working memory is finite. Learning cannot take place if a user is overwhelmed by too much information presented at one time.



A Comparison of Student Learning Outcomes: Online Education vs. Traditional Classroom Instruction

Despite the prevalence of online learning today, it is often viewed as a less favorable option when compared to the traditional, in-person educational experience. Criticisms of online learning come from various sectors, including employer groups, college faculty, and the general public, and generally center on a perceived lack of quality and rigor. Additionally, some students report feelings of social isolation in online learning (Protopsaltis & Baum, 2019).

In my experience as an online student as well as an online educator, online learning has been just the opposite. I have been teaching in a fully online master’s degree program for the last three years and have found it to be a rich and rewarding experience for students and faculty alike. As an instructor, I have felt more connected to and engaged with my online students than with in-person students. I have also found that students are actively engaged with course content and demonstrate evidence of higher-order thinking through their work. Students report high levels of satisfaction with their experiences in online learning as well as with the program overall, as indicated in their Student Evaluations of Teaching (SET) at the end of every course. I believe that intelligent course design, in addition to my engagement in professional development related to teaching and learning online, has greatly influenced my experience.

In an article by Wiley Education Services, the authors identified the top six challenges facing US institutions of higher education:

  • Declining student enrollment
  • Financial difficulties
  • Fewer high school graduates
  • Decreased state funding
  • Lower world rankings
  • Declining international student enrollments

Of the strategies that institutions are exploring to remedy these issues, online learning is reported to be a key focus for many universities (“Top Challenges Facing US Higher Education”, n.d.).

[Figure: Babson Survey Research Group, 2016, PDF.]

Some of the questions I would like to explore in further research include:

  • What factors influence engagement and connection in distance education?
  • Are the learning outcomes in online education any different than the outcomes achieved in a traditional classroom setting?
  • How do course design and instructor training influence these factors?
  • In what ways might educational technology tools enhance the overall experience for students and instructors alike?

In this literature review, I have chosen to focus on a comparison of student learning outcomes in online education versus the traditional classroom setting. My hope is that this research will help answer some of the additional questions posed above and provide direction for future research.

Online Learning Defined

According to Mayadas, Miller, and Sener (2015), online courses are defined by all course activity taking place online, with no required in-person sessions or on-campus activity. It is important to note, however, that the Babson Survey Research Group, a prominent organization known for its surveys and research in online learning, defines online learning as a course in which 80-100% of the activity occurs online. While this distinction was made in an effort to provide consistency in surveys year over year, most institutions continue to define online learning as learning that occurs 100% online.

Blended or hybrid learning describes courses that mix face-to-face meetings, sessions, or activities with online work. The ratio of online to classroom activity is often reflected in the label the course is given. For example, a blended classroom course would likely include more time spent in the classroom, with the remaining work occurring outside of the classroom with the assistance of technology. On the other hand, a blended online course would contain a greater percentage of work done online, with some required in-person sessions or meetings (Mayadas, Miller, & Sener, 2015).

A classroom course (also referred to as a traditional course) refers to course activity that is anchored to a regular meeting time.

Enrollment Trends in Online Education

There has been an upward trend in the number of postsecondary students enrolled in online courses in the U.S. since 2002. A report by the Babson Survey Research Group showed that in 2016, more than six million students were enrolled in at least one online course, accounting for 31.6% of all college students (Seaman, Allen, & Seaman, 2018). Approximately one in three students is enrolled in online courses with no in-person component. Of these students, 47% take classes in a fully online program; the remaining 53% take some, but not all, courses online (Protopsaltis & Baum, 2019).

[Figure: Seaman et al., 2016, p. 11.]

Perceptions of Online Education

In a 2016 report by the Babson Survey Research Group, surveys of faculty between 2002 and 2015 showed approval ratings of the value and legitimacy of online education ranging from 28 to 34 percent. While the numbers rose and fell over the thirteen-year time frame, faculty approval stood at 29 percent in 2015, just one percentage point higher than in 2002, indicating that perceptions have remained relatively unchanged over the years (Allen, Seaman, Poulin, & Straut, 2016).

[Figure: Allen, I. E., Seaman, J., Poulin, R., & Taylor Straut, T., 2016, p. 26.]

In a separate survey of chief academic officers, perceptions of online learning appeared to align with those of faculty. In this survey, leaders were asked to rate the perceived quality of learning outcomes in online learning compared to traditional in-person settings. While the percentage of leaders rating online learning as “inferior” or “somewhat inferior” to traditional face-to-face courses dropped from 43 percent to 23 percent between 2003 and 2012, the number rose again to 29 percent in 2015 (Allen, Seaman, Poulin, & Straut, 2016).


Faculty and academic leaders in higher education are not alone in their perceptions of online education's inferiority to traditional classroom instruction. A 2013 Gallup poll assessing public perceptions showed that respondents rated online education as “worse” in five of the seven categories seen in the table below.

[Table: Saad, L., Busteed, B., & Ogisi, M., 2013, October 15.]

In general, Americans believed that online education provides lower-quality, less individualized instruction and less rigorous testing and grading than the traditional classroom setting. In addition, respondents thought that employers would perceive a degree from an online program less positively than a degree obtained through traditional classroom instruction (Saad, Busteed, & Ogisi, 2013).

Student Perceptions of Online Learning

So what do students have to say about online learning? In Online College Students 2015: Comprehensive Data on Demands and Preferences, 1,500 college students who were either enrolled or planning to enroll in a fully online undergraduate, graduate, or certificate program were surveyed. Seventy-eight percent of students believed the academic quality of their online learning experience to be better than or equal to their experiences with traditional classroom learning. Furthermore, 30 percent of online students polled said that they would likely not attend classes face to face if their program were not available online (Clinefelter & Aslanian, 2015). The following video describes some of the common reasons why students choose to attend college online.

[Video: How Online Learning Affects the Lives of Students (Pearson North America, 2018, June 25).]

In a 2015 study comparing student perceptions of online learning with face-to-face learning, researchers found that the majority of students surveyed expressed a preference for traditional face-to-face classes. A content analysis of the findings, however, brought attention to two key ideas: 1) student opinions of online learning may be based on an “old typology of distance education” (Tichavsky et al., 2015, p. 6) rather than on actual experience, and 2) a student’s inclination to choose one form over another is connected to issues of teaching presence and self-regulated learning (Tichavsky et al., 2015).

Student Learning Outcomes

Given the upward trend in postsecondary student enrollment in online courses and the persistently low perceived value of online learning among stakeholder groups, it should be no surprise that there is a large body of literature comparing student learning outcomes in online classes to those in the traditional classroom environment.

While a majority of the studies reviewed found no significant difference in learning outcomes between online and traditional courses (Cavanaugh & Jacquemin, 2015; Kemp & Grieve, 2014; Lyke & Frank, 2012; Nichols, Shaffer, & Shockey, 2003; Stack, 2015; Summers, Waigandt, & Whittaker, 2005), there were a few outliers. In a 2019 report, Protopsaltis and Baum confirmed that while learning is often found to be similar between the two mediums, students “with weak academic preparation and those from low-income and underrepresented backgrounds consistently underperform in fully-online environments” (Protopsaltis & Baum, 2019, n.p.). An important consideration, however, is that these findings are primarily based on students enrolled in online courses at the community college level, a demographic with a historically high rate of attrition compared to students attending four-year institutions (Ashby, Sadera, & McNary, 2011). Furthermore, students enrolled in online courses have been shown to have a 10-20 percent higher attrition rate than their peers enrolled in traditional classroom instruction (Angelino, Williams, & Natvig, 2007). Therefore, attrition may be a key contributor to the lack of achievement seen in this subgroup of students enrolled in online education.

In contrast, a small number of studies showed that online students tend to outperform those enrolled in traditional classroom instruction. One study, in particular, found a significant difference in test scores for students enrolled in an online undergraduate business course. The confounding variable, in this case, was age: researchers found a significant performance advantage for nontraditional-age students over their traditional-age counterparts. The authors concluded that older students may elect to take online classes for practical reasons related to outside work schedules, and this may, in turn, contribute to the learning that occurs overall (Slover & Mandernach, 2018).

In a meta-analysis and review of online learning spanning the years 1996 to 2008, authors of a US Department of Education report found that students who took all or part of their classes online showed better learning outcomes than students who took the same courses face-to-face. It is important to note, however, that there were many differences between the online and face-to-face versions of these courses, including the amount of time students spent engaged with course content. The authors concluded that the differences in learning outcomes may be attributed to learning design as opposed to the specific mode of delivery (Means, Toyama, Murphy, Bakia, & Jones, 2010).

Limitations and Opportunities

After examining the research comparing student learning outcomes in online education with those in the traditional classroom setting, many limitations came to light, creating areas of opportunity for additional research. In many of the studies referenced, it is difficult to determine the pedagogical practices used in course design and delivery. Research shows the importance of student-student and student-teacher interaction in online learning, and the positive impact of these variables on student learning (Bernard, Borokhovski, Schmid, Tamim, & Abrami, 2014). Some researchers note that while many studies comparing online and traditional classroom learning exist, methodological and design issues make it challenging to explain the results conclusively (Mollenkopf, Vu, Crow, & Black, 2017). For example, online courses may be structured in a variety of ways (e.g., self-paced or instructor-led) and may be classified as synchronous or asynchronous (Moore, Dickson-Deane, & Galyan, 2011).

Another gap in the literature is the failure to use a common language across studies to define the learning environment. This issue is explored extensively in a 2011 study by Moore, Dickson-Deane, and Galyan. Here, the authors examine the differences between e-learning, online learning, and distance learning in the literature, and how the terminology is often used interchangeably despite the variances in characteristics that define each. The authors also discuss the variability in the terms “course” versus “program”. This variability in the literature presents a challenge when attempting to compare one study of online learning to another (Moore, Dickson-Deane, & Galyan, 2011).

Finally, much of the literature in higher education focuses on undergraduate-level classes within the United States. Little research is available on outcomes in graduate-level classes as well as general information on student learning outcomes and perceptions of online learning outside of the U.S.

As we look to the future, there are additional questions to explore in the area of online learning. Overall, this research led to questions related to learning design when comparing the two modalities in higher education. Further research is needed to investigate the instructional strategies used to enhance student learning, especially for students with weaker academic preparation or from underrepresented backgrounds. Given the integral role that online learning is expected to play in the future of higher education in the United States, it may be even more critical to move beyond comparisons of online versus face to face instruction and to focus instead on sound pedagogical quality, with consideration for the mode of delivery, as a means of promoting positive learning outcomes.

Allen, I.E., Seaman, J., Poulin, R., & Straut, T. (2016). Online Report Card: Tracking Online Education in the United States [PDF file]. Babson Survey Research Group.   http://onlinelearningsurvey.com/reports/onlinereportcard.pdf

Angelino, L. M., Williams, F. K., & Natvig, D. (2007). Strategies to engage online students and reduce attrition rates.  The Journal of Educators Online , 4(2).

Ashby, J., Sadera, W.A., & McNary, S.W. (2011). Comparing student success between developmental math courses offered online, blended, and face-to-face.  Journal of Interactive Online Learning , 10(3), 128-140.

Bernard, R.M., Borokhovski, E., Schmid, R.F., Tamim, R.M., & Abrami, P.C. (2014). A meta-analysis of blended learning and technology use in higher education: From the general to the applied.  Journal of Computing in Higher Education , 26(1), 87-122.

Cavanaugh, J.K., & Jacquemin, S.J. (2015). A large sample comparison of grade based student learning outcomes in online vs. face-to-face courses.  Journal of Asynchronous Learning Network,  19(2).

Clinefelter, D. L., & Aslanian, C. B. (2015). Online college students 2015: Comprehensive data on demands and preferences.   https://www.learninghouse.com/wp-content/uploads/2017/09/OnlineCollegeStudents2015.pdf

Golubovskaya, E.A., Tikhonova, E.V., & Mekeko, N.M. (2019). Measuring learning outcome and students’ satisfaction in ELT (e-learning against conventional learning). Paper presented at the ACM International Conference Proceeding Series, 34-38. doi: 10.1145/3337682.3337704

Kemp, N., & Grieve, R. (2014). Face-to-face or face-to-screen? Undergraduates’ opinions and test performance in classroom vs. online learning.  Frontiers in Psychology , 5. doi: 10.3389/fpsyg.2014.01278

Lyke, J., & Frank, M. (2012). Comparison of student learning outcomes in online and traditional classroom environments in a psychology course. (Cover story).  Journal of Instructional Psychology , 39(3/4), 245-250.

Mayadas, F., Miller, G. & Senner, J.  Definitions of E-Learning Courses and Programs Version 2.0.  Online Learning Consortium.  https://onlinelearningconsortium.org/updated-e-learning-definitions-2/

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. US Department of Education.  https://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf

Mollenkopf, D., Vu, P., Crow, S., & Black, C. (2017). Does online learning deliver? A comparison of student teacher outcomes from candidates in face to face and online program pathways.  Online Journal of Distance Learning Administration,  20(1).

Moore, J.L., Dickson-Deane, C., & Galyan, K. (2011). E-Learning, online learning, and distance learning environments: Are they the same?  The Internet and Higher Education . 14(2), 129-135.

Nichols, J., Shaffer, B., & Shockey, K. (2003). Changing the face of instruction: Is online or in-class more effective?   College & Research Libraries , 64(5), 378–388.  https://doi-org.proxy2.library.illinois.edu/10.5860/crl.64.5.378

Parsons-Pollard, N., Lacks, T.R., & Grant, P.H. (2008). A comparative assessment of student learning outcomes in large online and traditional campus based introduction to criminal justice courses.  Criminal Justice Studies , 2, 225-239.

Pearson North America. (2018, June 25).  How Online Learning Affects the Lives of Students . YouTube.  https://www.youtube.com/watch?v=mPDMagf_oAE

Protopsaltis, S., & Baum, S. (2019). Does online education live up to its promise? A look at the evidence and implications for federal policy [PDF file].   http://mason.gmu.edu/~sprotops/OnlineEd.pdf

Saad, L., Busteed, B., & Ogisi, M. (2013, October 15). In U.S., online education rated best for value and options.  https://news.gallup.com/poll/165425/online-education-rated-best-value-options.aspx

Stack, S. (2015). Learning Outcomes in an Online vs Traditional Course.  International Journal for the Scholarship of Teaching and Learning , 9(1).

Seaman, J.E., Allen, I.E., & Seaman, J. (2018). Grade Increase: Tracking Distance Education in the United States [PDF file]. Babson Survey Research Group.  http://onlinelearningsurvey.com/reports/gradeincrease.pdf

Slover, E., & Mandernach, J. (2018). Beyond online versus face-to-face comparisons: The interaction of student age and mode of instruction on academic achievement.  Journal of Educators Online,  15(1).  https://files.eric.ed.gov/fulltext/EJ1168945.pdf

Summers, J., Waigandt, A., & Whittaker, T. (2005). A Comparison of Student Achievement and Satisfaction in an Online Versus a Traditional Face-to-Face Statistics Class.  Innovative Higher Education , 29(3), 233–250.  https://doi-org.proxy2.library.illinois.edu/10.1007/s10755-005-1938-x

Tichavsky, L.P., Hunt, A., Driscoll, A., & Jicha, K. (2015). “It’s just nice having a real teacher”: Student perceptions of online versus face-to-face instruction.  International Journal for the Scholarship of Teaching and Learning.  9(2).

Wiley Education Services. (n.d.).  Top challenges facing U.S. higher education.  https://edservices.wiley.com/top-higher-education-challenges/


Sample Learning Outcomes and Rubrics


The following are sample program learning outcomes and rubrics to provide some guidance in the development of assessment standards. These are merely examples and can be modified to fit the needs of your program. The outcomes and measurements MUST be relevant and meaningful to your program, providing information that will be useful in continuing quality improvement. Remember, when developing rubrics, consider the thresholds that will demonstrate PLOs are being met.

Examples of Program Learning Outcomes

Some learning outcomes will require a rubric with parameters for achievement, some will be measured as percentage achievement, and still others may be designed as milestones completed (with time or percentage as the unit measured). Ideally, your assessments will combine direct and indirect measures. The following are examples of some assessment ideas which are fairly typical of graduate assessment. Depending on your program, what works for you will vary, but most programs should address the following assessment themes:

Demonstrate Subject Content Knowledge (generally in written or oral form, portfolio, project completion, or other demonstration of content knowledge)

Demonstrate oral communication skills representative of their disciplinary field.

Demonstrate skills in oral and/or written communication sufficient to:

  • publish in a peer-reviewed journal
  • present work in their field
  • prepare grant proposals.

Demonstrate, through service, the value of their discipline to the academy and community at large.

Demonstrate a mastery of skills and knowledge at a level required for college and university undergraduate teaching in their discipline and assessment of student learning.

Critical Thinking

Analyze and evaluate the literature relevant to their area of study.

Critically apply theories, methodologies, and knowledge to address fundamental questions in their primary area of study.

Demonstrate knowledge progression

Develop research objectives and hypotheses

Collect, summarize and interpret research data.

Pursue research of significance in the discipline or an interdisciplinary or creative project.

Applications

Apply research theories, methodologies, and disciplinary knowledge to address fundamental questions in their primary area of study.

Produce and defend an original, significant contribution to knowledge.

Develop a professional curriculum vitae with the skills required to secure a professional position appropriate to their degree.

Demonstrate Ethical Standards

Follow the principles of ethics in their field and in academia.

Interact productively with people from diverse backgrounds as both leaders/mentors and team members, with integrity and professionalism.

Conduct scholarly activities in an ethical manner.

Demonstrate familiarity with guiding principles and strategies in the ethical conduct of research and/or teaching.

Understand ethical issues and responsibilities, especially in matters related to professionalism and (if applicable) in matters related to the laboratory setting and to writing and publishing scientific papers.

Measurement Examples

The assessment of program-level learning outcomes should be formative, providing information on students as they work toward achieving required outcomes, and summative, determining satisfactory progress toward degree completion.  

Response Threshold (short list of examples)

  • At least 80% of students will be ranked at acceptable or exceptional in subject content knowledge, written communication, and oral communication skills. (Threshold based on rubric)
  • At least 90% of students will pass their defense on their first attempt.
  • 100% of students will successfully complete the ethics training and lab safety training.
  • 90% of students will successfully complete foundation classes (those required by the department) with a grade of “B” or higher.
  • By their second year, 80% of graduate students will have participated in a poster presentation.
  • By their final year, 80% of students will have published in a peer-reviewed journal.
  • Develop a sliding scale for students at different levels within the graduate program; e.g., 80% of students score at the “mastery” level on a department rubric.
  • 80% of students will successfully complete the courses specified in their program of study by the end of a set period of time (the period will depend on the nature of the program, but time is a valuable measurement).
  • 60% of Plan A graduate students will submit a final signed thesis by the end of their fifth semester.

Note: Rubrics must not be used to assess or evaluate individual students, and should not inform the decision regarding whether a student passes a defense or course. The data should be aggregated for all students in the program over a two-year period in order to assess the success of the program in meeting its program learning outcomes.
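To make the response-threshold logic concrete, the following minimal Python sketch (hypothetical student data and a hypothetical 80% threshold; not part of the original samples) aggregates pooled rubric levels and checks them against a threshold:

```python
# Minimal sketch: check a program-level response threshold against rubric
# scores pooled over a two-year period (hypothetical data).

# Each entry: (student_id, rubric_level); levels follow the rubric scale
# poor / limited / acceptable / exceptional.
scores = [
    ("s01", "exceptional"), ("s02", "acceptable"), ("s03", "limited"),
    ("s04", "acceptable"), ("s05", "exceptional"), ("s06", "acceptable"),
]

THRESHOLD = 0.80  # e.g., "at least 80% ranked acceptable or exceptional"

meets = sum(level in ("acceptable", "exceptional") for _, level in scores)
share = meets / len(scores)

verdict = "meets" if share >= THRESHOLD else "does not meet"
print(f"{share:.0%} at acceptable or exceptional; {verdict} the {THRESHOLD:.0%} threshold")
```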

Use of Rubrics

Rubrics are a more precise means of establishing student performance. Depending on the assessment measures for your program learning outcomes, they can be invaluable in seeing trends in the attainment of student achievement. The following rubrics are from various sources, and they are certainly not the limit of your options. The basic components of a rubric are: 1) the assessment outcome (what is being assessed), and 2) levels of achievement (e.g., poor, limited, acceptable, and exceptional); four to five levels are sufficient. Levels can be descriptive (as above), numerical (1-5), or a combination of both.

Sample Rubrics (Developed by CLS):

Rubric for the Assessment of Subject Content Knowledge

 

Score levels: 1, 2*, 3, 4**, 5. Descriptions are anchored at levels 1, 3, and 5; *a score of 2 exhibits most characteristics of ‘1’ and some of ‘3’; **a score of 4 exhibits most characteristics of ‘3’ and some of ‘5’.

Knowledge base (inquiry, scope, and quality):
  • 1: Little inquiry; limited knowledge shown
  • 3: Explores topic with curiosity; adequate knowledge from a variety of sources displayed
  • 5: Knowledge base displays scope, thoroughness, and quality

Identifies and summarizes the problem/question:
  • 1: Does not identify or summarize the problem/question accurately, if at all
  • 3: The main question is identified and clearly stated
  • 5: The main question and subsidiary, embedded, or implicit aspects of the question are identified and clearly stated

Identifies and evaluates the quality of supporting data/evidence; detects connections and patterns:
  • 1: No supporting data or evidence is utilized; separates into few parts; detects few connections or patterns
  • 3: Evidence is used but not carefully examined; source(s) of evidence are not questioned for accuracy, precision, relevance, and completeness; facts and opinions are stated but not clearly distinguished from value judgments
  • 5: Evidence is identified and carefully examined for accuracy, precision, relevance, and completeness; facts and opinions are stated and clearly distinguished; combines facts and ideas to create new knowledge that is comprehensive and significant

Identifies and evaluates conclusions, implications, and consequences; develops ideas:
  • 1: Combines few facts and ideas; needs more development; conclusions, implications, and consequences are not provided
  • 3: Accurately identifies conclusions, implications, and consequences with a brief evaluative summary; uses perspectives and insights to explain relationships; states own position on the question
  • 5: Accurately identifies conclusions, implications, and consequences with a brief evaluative summary; uses perspectives and insights to explain relationships; states own position on the question

Rubric for the Assessment of Written Communication

 

Score levels (Level of Achievement): 1, 2*, 3, 4**, 5; descriptions are anchored at levels 1, 3, and 5.

Ideas, examples, reasons & evidence, point of view:
  • 1: Topic is poorly developed; support is only vague or general; ideas are trite; wording is unclear, simplistic; reflects lack of understanding of topic and audience; minimally accomplishes goals of the assignment
  • 3: Topic is evident; some supporting detail; wording is generally clear; reflects understanding of topic and audience; generally accomplishes goals of the assignment
  • 5: Thesis topic is clearly stated and well developed; details/wording are accurate, specific, and appropriate for the topic and audience with no digressions; evidence of effective, clear thinking; completely accomplishes the goals of the assignment

Focus, coherence, progression of ideas, thesis development:
  • 1: Disorganized and unfocused; serious problems with coherence and progression of ideas; weak or non-existent thesis
  • 3: Generally organized and focused, demonstrating coherence and progression of ideas; presents a thesis and suggests a plan of development that is mostly carried out
  • 5: Clearly focused and organized around a central theme; thesis presented or implied with noticeable coherence; provides specific and accurate support

Word choice & sentence variety:
  • 1: Displays frequent and fundamental errors in vocabulary; repetitive words and sentence types; sentences may be simplistic and disjointed
  • 3: Competent use of language; sometimes varies sentence structure; generally focused
  • 5: Choice of language and sentence structure is precise and purposeful, demonstrating a command of language and a variety of sentence structures

Grammar, punctuation, spelling, paragraphing, format, and (as applicable) documentation:
  • 1: Errors interfere with writer’s ability to consistently communicate purpose; pervasive mechanical errors obscure meaning; inappropriate format; in-text and ending documentation are generally inconsistent and incomplete; cited information is not incorporated into the document
  • 3: Occasional errors do not interfere with writer’s ability to communicate purpose; generally appropriate format; in-text and ending documentation are generally clear, consistent, and complete; cited information is somewhat incorporated into the document
  • 5: Control of conventions contributes to the writer’s ability to communicate purpose; free of most mechanical errors; appropriate format; in-text and ending documentation are clear, consistent, and complete; cited information is incorporated effectively into the document

Rubric for the Assessment of Oral Communication

 

Score levels (Level of Achievement): 1, 2*, 3, 4**, 5; descriptions are anchored at levels 1, 3, and 5.

Depth of content, relevant support, clear explanation:
  • 1: Provides irrelevant or no support; explanation of concepts is unclear or inaccurate
  • 3: Main points adequately substantiated with timely, relevant, and sufficient support; accurate explanation of key concepts
  • 5: Depth of content reflects thorough understanding of topic; main points well supported with timely, relevant, and sufficient support; provides precise explanation of key concepts

Main points distinct from support; transitions; coherence:
  • 1: Lack of structure; ideas are not coherent; no transitions; difficult to identify introduction, body, and conclusions
  • 3: Clear organizational pattern; main points are made clearly; smooth transitions differentiate key points
  • 5: Effective organization well suited to purpose; main points are clearly distinct from supporting details; transitions create coherent progress toward conclusion

Examples provided by Animal Science and Range Management: Rubric for Assessment of: Effectiveness in written communication of substantive content.

4 = Exceeds Standards: Student demonstrates competent performance exceeding normal standards at either the M.S. or Ph.D. level.

3 = Meets Standards: Student demonstrates appropriate performance for professionalization.

2 = Below Standards: Student does not demonstrate the skills commensurate with an M.S. or Ph.D. degree.

1 = Unacceptable: Performance is clearly inadequate. Student demonstrates an inability or unwillingness to develop appropriate skills.

Indicators of Effective Written Communication of Substantive Content (scored 1-4):

Style / Organization:
  • 1: Paper is poorly written and reveals a lack of effort suitable for a graduate student
  • 2: Paper conveys appropriate ideas, but reveals weak control over diction, syntax, and organization
  • 3: Effective command of sentence structure and diction; paper is organized in a logical scientific manner
  • 4: Excellent command of sentence structure and diction; organization is appropriate for subject matter content

Content:
  • 1: Major omissions of components necessary for a scientific paper
  • 2: Some necessary components of an effective paper missing or poorly described
  • 3: Good job presenting ideas; contains all necessary content for a scientific paper, but not as clear or succinct as it could be
  • 4: Clearly presents appropriate justification, objectives, and methods; if available, results are complete and inferences follow from the data

Grammar:
  • 1: Weak grammar and spelling
  • 2: Several grammar and spelling errors
  • 3: Few spelling and grammar errors
  • 4: No spelling or grammar mistakes

Sources:
  • 1: Poorly sourced
  • 2: Some major relevant literature not covered
  • 3: Major relevant literature discussed
  • 4: Exhaustive literature presented

Rubric for Assessment of: Effectiveness in oral communication of substantive content.

3 = Meets Standards: Student demonstrates appropriate performance for professionalization.

1 = Unacceptable: Performance is clearly inadequate. Student demonstrates an inability or unwillingness to develop appropriate skills.

Indicators of Effective Oral Communication of Substantive Content (scored 1-4):

Organization:
  • 1: Poor
  • 2: Insufficient
  • 3: Adequate
  • 4: Presentation is arranged logically

Content:
  • 1: Omission of critical information necessary for a scientific presentation
  • 2: Missing key components of an effective presentation
  • 3: Most components covered, but the talk would benefit from additional information
  • 4: Material presented was complete and appropriate; all key components covered

Clarity:
  • 1: Study justification, objectives, and methods unclear; demonstrated lack of preparation
  • 2: Slides poorly arranged or improperly formatted: font size too small, too crowded, inappropriate color scheme, overuse of acronyms and jargon
  • 3: Presentation is relatively clear; some slides too busy or lacking; visual aids are well designed, legible, with appropriate content
  • 4: Presentation is succinct and clear; avoids jargon and acronyms; visual aids are well designed, legible, with appropriate content

Knowledge & Understanding:
  • 1: Demonstrates poor knowledge of the materials presented
  • 2: Demonstrates a lack of knowledge in critical components of the study (e.g., literature, study design, analyses)
  • 3: Demonstrates solid understanding of the topic and associated literature; highlights important points where the study is strongest; delivers an effective conclusion
  • 4: Demonstrates a superb grasp of the topic and the literature related to the topic; well prepared for questions; revisits important and relevant points

Delivery:
  • 1: Obvious ill-preparedness
  • 2: Ineffective delivery; poor speech mechanics; nervous habits interfered with effective presentation
  • 3: Effective delivery; appropriate volume, few nervous habits, relatively little reliance on notes; evidence of preparation
  • 4: Outstanding delivery; engagement with audience, little reliance on notes, smooth transitions

Examples provided by Department of Chemistry and Biochemistry

Response Threshold (all programs):

  • At least 80% of students will be ranked at the level of exceptional in subject content knowledge, written communication, and oral communication.

 

Oral presentation rubric (each indicator rated as Unacceptable, Acceptable, or Exceptional):

  • Organization of the presentation
  • Clarity of the presentation
  • Effective use of slides and/or other visual aids
  • Demonstration of appropriate level of subject knowledge

 

Thesis rubric (each indicator rated as Unacceptable, Acceptable, or Exceptional):

  • Organization of the thesis: focus, coherence, and progression of ideas are appropriate
  • Clarity of the thesis: language, word choice, and grammar conventions are appropriate
  • Content: subject vocabulary, development of ideas, examples, and reference citations are at the appropriate level

 

Research rubric (rated as Unacceptable, Acceptable, or Exceptional):

Identified and articulated the problem/hypothesis of the research project:
  • Unacceptable: Unable to identify the problem on their own
  • Acceptable: Identified the problem but had some ambiguity in articulating the problem statement
  • Exceptional: Identified the problem and outlined the necessary objectives to solve the problem

Conducted research to test the hypothesis:
  • Unacceptable: Not clearly able to design an effective protocol
  • Acceptable: Designed an effective protocol including appropriate control experiments
  • Exceptional: Designed effective protocols including appropriate control experiments and independently identified follow-up experiments

Analyzed data and detected connections and patterns:
  • Unacceptable: Not able to independently analyze data
  • Acceptable: Independently analyzed data and detected some appropriate connections and patterns
  • Exceptional: Independently analyzed data and thoroughly detected connections and patterns

Drew conclusions, implications, and consequences; developed ideas:
  • Unacceptable: Combines few facts and ideas; needs more development; conclusions and consequences are not provided
  • Acceptable: Accurately identifies conclusions, implications, and consequences with a brief evaluative summary
  • Exceptional: Accurately identifies conclusions, implications, and consequences with a well-developed explanation; provides objective analysis of own assertions


Research Project (Learning Outcomes)

Subject EDUC90824 (2015)

Note: This is an archived Handbook entry from 2015.

Credit Points: 25
Level: 9 (Graduate/Postgraduate)
Dates & Locations:

This subject is not offered in 2015.

Time Commitment: Contact Hours: 72 hours
Total Time Commitment:

340 hours

Prerequisites:
Corequisites: None
Recommended Background Knowledge: None
Non Allowed Subjects: None
Core Participation Requirements:

For the purposes of considering requests for Reasonable Adjustments under the Disability Standards for Education (Cwth 2005), and the Students Experiencing Academic Disadvantage Policy, academic requirements for this subject are articulated in the Subject Description, Subject Objectives, Generic Skills and Assessment Requirements of this entry.

The University is dedicated to providing support to those with special requirements. Further details on the disability support scheme can be found at the Disability Liaison website: http://www.services.unimelb.edu.au/disability


Subject Overview:

In this subject, participants implement a research project allowing them to undertake a yearlong study of their classroom students’ progress. They are required to analyse and review student performance data and explore the impact of the various teaching and learning strategies on student achievement.

Throughout the year, participants receive ongoing supervision from a member of academic staff through campus- or school-based group workshops. The research project culminates with participants synthesizing the findings of their research in a written form such as a conference paper, journal article or report. They are also expected to report findings to their school community.

Learning Outcomes:

On completion of this subject, participants should be able to:

Assessment:

Hurdle – Oral presentation of the report to relevant members of the school community

Attendance at all classes (tutorial/seminars/practical classes/lectures/labs/online classes) is obligatory. Failure to attend 80% of classes will normally result in failure in the subject.

Prescribed Texts: None
Breadth Options:

This subject is not available as a breadth subject.

Fees Information:
Generic Skills:

On completion of this subject, participants will have the knowledge, skills and understanding to enable them to:


A step-by-step guide to causal study design using real-world data

Open access | Published: 19 June 2024

Sarah Ruth Hoffman, Nilesh Gangan, Xiaoxue Chen, Joseph L. Smith, Arlene Tave, Yiling Yang, Christopher L. Crowe, Susan dosReis & Michael Grabner

Due to the need for generalizable and rapidly delivered evidence to inform healthcare decision-making, real-world data have grown increasingly important to answer causal questions. However, causal inference using observational data poses numerous challenges, and relevant methodological literature is vast. We endeavored to identify underlying unifying themes of causal inference using real-world healthcare data and connect them into a single schema to aid in observational study design, and to demonstrate this schema using a previously published research example. A multidisciplinary team (epidemiology, biostatistics, health economics) reviewed the literature related to causal inference and observational data to identify key concepts. A visual guide to causal study design was developed to concisely and clearly illustrate how the concepts are conceptually related to one another. A case study was selected to demonstrate an application of the guide. An eight-step guide to causal study design was created, integrating essential concepts from the literature, anchored into conceptual groupings according to natural steps in the study design process. The steps include defining the causal research question and the estimand; creating a directed acyclic graph; identifying biases and design and analytic techniques to mitigate their effect, and techniques to examine the robustness of findings. The cardiovascular case study demonstrates the applicability of the steps to developing a research plan. This paper used an existing study to demonstrate the relevance of the guide. We encourage researchers to incorporate this guide at the study design stage in order to elevate the quality of future real-world evidence.


1 Introduction

Approximately 50 new drugs are approved each year in the United States (Mullard 2022 ). For all new drugs, randomized controlled trials (RCTs) are the gold-standard by which potential effectiveness (“efficacy”) and safety are established. However, RCTs cannot guarantee how a drug will perform in a less controlled context. For this reason, regulators frequently require observational, post-approval studies using “real-world” data, sometimes even as a condition of drug approval. The “real-world” data requested by regulators is often derived from insurance claims databases and/or healthcare records. Importantly, these data are recorded during routine clinical care without concern for potential use in research. Yet, in recent years, there has been increasing use of such data for causal inference and regulatory decision making, presenting a variety of methodologic challenges for researchers and stakeholders to consider (Arlett et al. 2022 ; Berger et al. 2017 ; Concato and ElZarrad 2022 ; Cox et al. 2009 ; European Medicines Agency 2023 ; Franklin and Schneeweiss 2017 ; Girman et al. 2014 ; Hernán and Robins 2016 ; International Society for Pharmacoeconomics and Outcomes Research (ISPOR) 2022 ; International Society for Pharmacoepidemiology (ISPE) 2020 ; Stuart et al. 2013 ; U.S. Food and Drug Administration 2018 ; Velentgas et al. 2013 ).

Current guidance for causal inference using observational healthcare data articulates the need for careful study design (Berger et al. 2017 ; Cox et al. 2009 ; European Medicines Agency 2023 ; Girman et al. 2014 ; Hernán and Robins 2016 ; Stuart et al. 2013 ; Velentgas et al. 2013 ). In 2009, Cox et al. described common sources of bias in observational data and recommended specific strategies to mitigate these biases (Cox et al. 2009 ). In 2013, Stuart et al. emphasized counterfactual theory and trial emulation, offered several approaches to address unmeasured confounding, and provided guidance on the use of propensity scores to balance confounding covariates (Stuart et al. 2013 ). In 2013, the Agency for Healthcare Research and Quality (AHRQ) released an extensive, 200-page guide to developing a protocol for comparative effectiveness research using observational data (Velentgas et al. 2013 ). The guide emphasized development of the research question, with additional chapters on study design, comparator selection, sensitivity analyses, and directed acyclic graphs (Velentgas et al. 2013 ). In 2014, Girman et al. provided a clear set of steps for assessing study feasibility including examination of the appropriateness of the data for the research question (i.e., ‘fit-for-purpose’), empirical equipoise, and interpretability, stating that comparative effectiveness research using observational data “should be designed with the goal of drawing a causal inference” (Girman et al. 2014 ). In 2017 , Berger et al. described aspects of “study hygiene,” focusing on procedural practices to enhance confidence in, and credibility of, real-world data studies (Berger et al. 2017 ). Currently, the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP) maintains a guide on methodological standards in pharmacoepidemiology which discusses causal inference using observational data and includes an overview of study designs, a chapter on methods to address bias and confounding, and guidance on writing statistical analysis plans (European Medicines Agency 2023 ). In addition to these resources, the “target trial framework” provides a structured approach to planning studies for causal inferences from observational databases (Hernán and Robins 2016 ; Wang et al. 2023b ). This framework, published in 2016, encourages researchers to first imagine a clinical trial for the study question of interest and then to subsequently design the observational study to reflect the hypothetical trial (Hernán and Robins 2016 ).

While the literature addresses critical issues collectively, there remains a need for a framework that puts key components, including the target trial approach, into a simple, overarching schema (Loveless 2022 ) so they can be more easily remembered, and communicated to all stakeholders including (new) researchers, peer-reviewers, and other users of the research findings (e.g., practicing providers, professional clinical societies, regulators). For this reason, we created a step-by-step guide for causal inference using administrative health data, which aims to integrate these various best practices at a high level and complements existing, more specific guidance, including those from the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the International Society for Pharmacoepidemiology (ISPE) (Berger et al. 2017 ; Cox et al. 2009 ; Girman et al. 2014 ). We demonstrate the application of this schema using a previously published paper in cardiovascular research.

This work involved a formative phase and an implementation phase to evaluate the utility of the causal guide. In the formative phase, a multidisciplinary team with research expertise in epidemiology, biostatistics, and health economics reviewed selected literature (peer-reviewed publications, including those mentioned in the introduction, as well as graduate-level textbooks) related to causal inference and observational healthcare data from the pharmacoepidemiologic and pharmacoeconomic perspectives. The potential outcomes framework served as the foundation for our conception of causal inference (Rubin 2005 ). Information was grouped into the following four concepts: (1) Defining the Research Question; (2) Defining the Estimand; (3) Identifying and Mitigating Biases; (4) Sensitivity Analysis. A step-by-step guide to causal study design was developed to distill the essential elements of each concept, organizing them into a single schema so that the concepts are clearly related to one another. References for each step of the schema are included in the Supplemental Table.

In the implementation phase we tested the application of the causal guide to previously published work (Dondo et al. 2017 ). The previously published work utilized data from the Myocardial Ischaemia National Audit Project (MINAP), the United Kingdom’s national heart attack register. The goal of the study was to assess the effect of β-blockers on all-cause mortality among patients hospitalized for acute myocardial infarction without heart failure or left ventricular systolic dysfunction. We selected this paper for the case study because of its clear descriptions of the research goal and methods, and the explicit and methodical consideration of potential biases and use of sensitivity analyses to examine the robustness of the main findings.

3.1 Overview of the eight steps

The step-by-step guide to causal inference comprises eight distinct steps (Fig.  1 ) across the four concepts. As scientific inquiry and study design are iterative processes, the various steps may be completed in a different order than shown, and steps may be revisited.

[Figure 1: A step-by-step guide for causal study design. Abbreviations: GEE: generalized estimating equations; IPC/TW: inverse probability of censoring/treatment weighting; ITR: individual treatment response; MSM: marginal structural model; TE: treatment effect. Please refer to the Supplemental Table for references providing more in-depth information.]

Figure notes:

1. Ensure that the exposure and outcome are well-defined based on literature and expert opinion.

2. More specifically, measures of association are not affected by issues such as confounding and selection bias because they do not intend to isolate and quantify a single causal pathway. However, information bias (e.g., variable misclassification) can negatively affect association estimates, and association estimates remain subject to random variability (and are hence reported with confidence intervals).

3. This list is not exhaustive; it focuses on frequently encountered biases.

4. To assess bias in a nonrandomized study following the target trial framework, use of the ROBINS-I tool is recommended (https://www.bmj.com/content/355/bmj.i4919).

5. Only a selection of the most popular approaches is presented here. Other methods exist; e.g., g-computation and g-estimation for both time-invariant and time-varying analysis; instrumental variables; and doubly-robust estimation methods. There are also program evaluation methods (e.g., difference-in-differences, regression discontinuities) that can be applied to pharmacoepidemiologic questions. Conventional outcome regression analysis is not recommended for causal estimation due to issues determining covariate balance, correct model specification, and interpretability of effect estimates.

6. Online tools include, among others, an E-value calculator for unmeasured confounding (https://www.evalue-calculator.com/) and the P95 outcome misclassification estimator (http://apps.p-95.com/ISPE/).

3.2 Defining the research question (step 1)

The process of designing a study begins with defining the research question. Research questions typically center on whether a causal relationship exists between an exposure and an outcome. This contrasts with associative questions, which, by their nature, do not require causal study design elements because they do not attempt to isolate a causal pathway from a single exposure to an outcome under study. It is important to note that the phrasing of the question itself should clarify whether an association or a causal relationship is of interest. The study question “Does statin use reduce the risk of future cardiovascular events?” is explicitly causal and requires that the study design addresses biases such as confounding. In contrast, the study question “Is statin use associated with a reduced risk of future cardiovascular events?” can be answered without control of confounding since the word “association” implies correlation. Too often, however, researchers use the word “association” to describe their findings when their methods were created to address explicitly causal questions (Hernán 2018 ). For example, a study that uses propensity score-based methods to balance risk factors between treatment groups is explicitly attempting to isolate a causal pathway by removing confounding factors. This is different from a study that intends only to measure an association. In fact, some journals may require that the word “association” be used when causal language would be more appropriate; however, this is beginning to change (Flanagin et al. 2024 ).

3.3 Defining the estimand (steps 2, 3, 4)

The estimand is the causal effect of research interest and is described in terms of required design elements: the target population for the counterfactual contrast, the kind of effect, and the effect/outcome measure.

In Step 2, the study team determines the target population of interest, which depends on the research question of interest. For example, we may want to estimate the effect of the treatment in the entire study population, i.e., the hypothetical contrast between all study patients taking the drug of interest versus all study patients taking the comparator (the average treatment effect; ATE). Other effects can be examined, including the average treatment effect in the treated or untreated (ATT or ATU). When covariate distributions are the same across the treated and untreated populations and there is no effect modification by covariates, these effects are generally the same (Wang et al. 2017). In RCTs, this occurs naturally due to randomization, but in non-randomized data, careful study design and statistical methods must be used to mitigate confounding bias.
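In standard potential-outcomes notation (our notation, not reproduced from the paper), with potential outcomes Y(1) and Y(0) and treatment indicator A, these estimands can be written as:

```latex
\mathrm{ATE} = \mathbb{E}[Y(1) - Y(0)], \qquad
\mathrm{ATT} = \mathbb{E}[Y(1) - Y(0) \mid A = 1], \qquad
\mathrm{ATU} = \mathbb{E}[Y(1) - Y(0) \mid A = 0].
```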

In Step 3, the study team decides whether to measure the intention-to-treat (ITT), per-protocol, or as-treated effect. The ITT approach is also known as “first-treatment-carried-forward” in the observational literature (Lund et al. 2015 ). In trials, the ITT measures the effect of treatment assignment rather than the treatment itself, and in observational data the ITT can be conceptualized as measuring the effect of treatment as started . To compute the ITT effect from observational data, patients are placed into the exposure group corresponding to the treatment that they initiate, and treatment switching or discontinuation are purposely ignored in the analysis. Alternatively, a per-protocol effect can be measured from observational data by classifying patients according to the treatment that they initiated but censoring them when they stop, switch, or otherwise change treatment (Danaei et al. 2013 ; Yang et al. 2014 ). Finally, “as-treated” effects are estimated from observational data by classifying patients according to their actual treatment exposure during follow-up, for example by using multiple time windows to measure exposure changes (Danaei et al. 2013 ; Yang et al. 2014 ).
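As an illustration of how these estimand choices translate into data preparation, the following Python sketch (hypothetical data and column names, not code from the paper) contrasts an ITT-style classification, which fixes the exposure group at initiation, with per-protocol-style censoring at the first switch or discontinuation:

```python
import pandas as pd

# Hypothetical cohort: treatment initiated, initiation date, and (if any)
# the date the patient stopped or switched treatment.
df = pd.DataFrame({
    "patient": ["p1", "p2", "p3"],
    "initiated": ["drug_A", "drug_B", "drug_A"],
    "start": pd.to_datetime(["2020-01-01", "2020-02-01", "2020-03-01"]),
    "switch_or_stop": pd.to_datetime(["2020-06-01", None, "2020-04-15"]),
    "end_of_data": pd.to_datetime(["2021-01-01"] * 3),
})

# ITT-style ("first treatment carried forward"): the exposure group is fixed
# at initiation and later switching/stopping is deliberately ignored.
df["itt_group"] = df["initiated"]
df["itt_end"] = df["end_of_data"]

# Per-protocol-style: same groups, but follow-up is censored at the first
# switch or discontinuation (NaT means no switch, so the data end applies).
df["pp_end"] = df[["switch_or_stop", "end_of_data"]].min(axis=1)

print(df[["patient", "itt_group", "itt_end", "pp_end"]])
```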

Step 4 is the final step in specifying the estimand in which the research team determines the effect measure of interest. Answering this question has two parts. First, the team must consider how the outcome of interest will be measured. Risks, rates, hazards, odds, and costs are common ways of measuring outcomes, but each measure may be best suited to a particular scenario. For example, risks assume patients across comparison groups have equal follow-up time, while rates allow for variable follow-up time (Rothman et al. 2008 ). Costs may be of interest in studies focused on economic outcomes, including as inputs to cost-effectiveness analyses. After deciding how the outcome will be measured, it is necessary to consider whether the resulting quantity will be compared across groups using a ratio or a difference. Ratios convey the effect of exposure in a way that is easy to understand, but they do not provide an estimate of how many patients will be affected. On the other hand, differences provide a clearer estimate of the potential public health impact of exposure; for example, by allowing the calculation of the number of patients that must be treated to cause or prevent one instance of the outcome of interest (Tripepi et al. 2007 ).
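Because the choice between ratio and difference measures matters for interpretation, a small worked example (hypothetical counts; equal follow-up across groups so that risks are appropriate) may help:

```python
# Hypothetical 2x2 summary with equal follow-up across comparison groups.
events_exposed, n_exposed = 30, 1000
events_unexposed, n_unexposed = 50, 1000

risk_exp = events_exposed / n_exposed        # 0.030
risk_unexp = events_unexposed / n_unexposed  # 0.050

risk_ratio = risk_exp / risk_unexp   # ratio measure: easy to communicate
risk_diff = risk_exp - risk_unexp    # difference measure: public-health impact
nnt = 1 / abs(risk_diff)             # patients treated per outcome prevented

print(f"RR={risk_ratio:.2f}, RD={risk_diff:+.3f}, NNT={nnt:.0f}")
# RR=0.60, RD=-0.020, NNT=50
```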

3.4 Identifying and mitigating biases (steps 5, 6, 7)

Observational, real-world studies can be subject to multiple potential sources of bias, which can be grouped into confounding, selection, measurement, and time-related biases (Prada-Ramallal et al. 2019 ).

In Step 5, as a practical first approach in developing strategies to address threats to causal inference, researchers should create a visual mapping of factors that may be related to the exposure, outcome, or both (also called a directed acyclic graph or DAG) (Pearl 1995 ). While creating a high-quality DAG can be challenging, guidance is increasingly available to facilitate the process (Ferguson et al. 2020 ; Gatto et al. 2022 ; Hernán and Robins 2020 ; Rodrigues et al. 2022 ; Sauer 2013 ). The types of inter-variable relationships depicted by DAGs include confounders, colliders, and mediators. Confounders are variables that affect both exposure and outcome, and it is necessary to control for them in order to isolate the causal pathway of interest. Colliders represent variables affected by two other variables, such as exposure and outcome (Griffith et al. 2020 ). Colliders should not be conditioned on since by doing so, the association between exposure and outcome will become distorted. Mediators are variables that are affected by the exposure and go on to affect the outcome. As such, mediators are on the causal pathway between exposure and outcome and should also not be conditioned on, otherwise a path between exposure and outcome will be closed and the total effect of the exposure on the outcome cannot be estimated. Mediation analysis is a separate type of analysis aiming to distinguish between direct and indirect (mediated) effects between exposure and outcome and may be applied in certain cases (Richiardi et al. 2013 ). Overall, the process of creating a DAG can create valuable insights about the nature of the hypothesized underlying data generating process and the biases that are likely to be encountered (Digitale et al. 2022 ). Finally, an extension to DAGs which incorporates counterfactual theory is available in the form of Single World Intervention Graphs (SWIGs) as described in a 2013 primer (Richardson and Robins 2013 ).
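As a toy illustration (hypothetical variable names, not from the paper), a DAG can be encoded and inspected programmatically, for example with the networkx library. Note that taking common ancestors of exposure and outcome is only a quick heuristic for spotting confounders; formal backdoor-criterion checks call for dedicated tools such as DAGitty.

```python
import networkx as nx

# Toy DAG: age confounds the statin-outcome relationship, LDL reduction
# mediates it, and a post-outcome screening visit acts as a collider.
dag = nx.DiGraph([
    ("age", "statin"), ("age", "cvd_event"),                          # confounder
    ("statin", "ldl_reduction"), ("ldl_reduction", "cvd_event"),      # mediator
    ("statin", "screening_visit"), ("cvd_event", "screening_visit"),  # collider
])
assert nx.is_directed_acyclic_graph(dag)

# Common causes of both exposure and outcome are candidate confounders;
# mediators and colliders (descendants of the exposure) are excluded here.
candidates = nx.ancestors(dag, "statin") & nx.ancestors(dag, "cvd_event")
print("Adjust for:", candidates)  # {'age'}
```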

In Step 6, researchers comprehensively assess the possibility of different types of bias in their study, above and beyond what the creation of the DAG reveals. Many potential biases have been identified and summarized in the literature (Berger et al. 2017 ; Cox et al. 2009 ; European Medicines Agency 2023 ; Girman et al. 2014 ; Stuart et al. 2013 ; Velentgas et al. 2013 ). Every study can be subject to one or more biases, each of which can be addressed using one or more methods. The study team should thoroughly and explicitly identify all possible biases with consideration for the specifics of the available data and the nuances of the population and health care system(s) from which the data arise. Once the potential biases are identified and listed, the team can consider potential solutions using a variety of study design and analytic techniques.

In Step 7, the study team considers solutions to the biases identified in Step 6. “Target trial” thinking serves as the basis for many of these solutions by requiring researchers to consider how observational studies can be designed to ensure comparison groups are similar and produce valid inferences by emulating RCTs (Labrecque and Swanson 2017 ; Wang et al. 2023b ). Designing studies to include only new users of a drug and an active comparator group is one way of increasing the similarity of patients across both groups, particularly in terms of treatment history. Careful consideration must be paid to the specification of the time periods and their relationship to inclusion/exclusion criteria (Suissa and Dell’Aniello 2020 ). For instance, if a drug is used intermittently, a longer wash-out period is needed to ensure adequate capture of prior use in order to avoid bias (Riis et al. 2015 ). The study team should consider how to approach confounding adjustment, and whether both time-invariant and time-varying confounding may be present. Many potential biases exist, and many methods have been developed to address them in order to improve causal estimation from observational data. Many of these methods, such as propensity score estimation, can be enhanced by machine learning (Athey and Imbens 2019 ; Belthangady et al. 2021 ; Mai et al. 2022 ; Onasanya et al. 2024 ; Schuler and Rose 2017 ; Westreich et al. 2010 ). Machine learning has many potential applications in the causal inference discipline, and like other tools, must be used with careful planning and intentionality. To aid in the assessment of potential biases, especially time-related ones, and the development of a plan to address them, the study design should be visualized (Gatto et al. 2022 ; Schneeweiss et al. 2019 ). Additionally, we note the opportunity for collaboration across research disciplines (e.g., the application of difference-in-difference methods (Zhou et al. 2016 ) to the estimation of comparative drug effectiveness and safety).
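For instance, a minimal propensity-score and IPTW sketch in Python (simulated data; scikit-learn's logistic regression as the propensity model; an illustration of the mechanics only, not a complete analysis) might look as follows:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated cohort: two baseline confounders drive treatment assignment.
n = 5000
age = rng.normal(65, 10, n)
comorbidity = rng.poisson(2, n)
X = np.column_stack([age, comorbidity])
p_treat = 1 / (1 + np.exp(-(-3 + 0.03 * age + 0.2 * comorbidity)))
treated = rng.binomial(1, p_treat)

# Estimate propensity scores and ATE-type inverse probability weights.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
weights = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

print(f"PS range: {ps.min():.3f}-{ps.max():.3f}")
print(f"Max weight: {weights.max():.1f}")  # inspect for extreme weights
```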

3.5 Quality control & sensitivity analyses (step 8)

Causal study design concludes with Step 8, which includes planning quality control and sensitivity analyses to improve the internal validity of the study. Quality control begins with reviewing study output for prima facie validity. Patient characteristics (e.g., distributions of age, sex, region) should align with expected values from the researchers’ intuition and the literature, and researchers should assess reasons for any discrepancies. Sensitivity analyses should be conducted to determine the robustness of study findings. Researchers can test the stability of study estimates using a different estimand or type of model than was used in the primary analysis. Sensitivity analysis estimates that are similar to those of the primary analysis might confirm that the primary analysis estimates are appropriate. The research team may be interested in how changes to study inclusion/exclusion criteria may affect study findings or wish to address uncertainties related to measuring the exposure or outcome in the administrative data by modifying the algorithms used to identify exposure or outcome (e.g., requiring hospitalization with a diagnosis code in a principal position rather than counting any claim with the diagnosis code in any position). As feasible, existing validation studies for the exposure and outcome should be referenced, or new validation efforts undertaken. The results of such validation studies can inform study estimates via quantitative bias analyses (Lanes and Beachler 2023 ). The study team may also consider biases arising from unmeasured confounding and plan quantitative bias analyses to explore how unmeasured confounding may impact estimates. Quantitative bias analysis can assess the directionality, magnitude, and uncertainty of errors arising from a variety of limitations (Brenner and Gefeller 1993 ; Lash et al. 2009 , 2014 ; Leahy et al. 2022 ).
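As one concrete example of a quantitative bias analysis for unmeasured confounding, the E-value of VanderWeele and Ding (2017) can be computed directly from an observed risk ratio; the small Python helper below is ours, for illustration:

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio (VanderWeele & Ding, 2017): the
    minimum strength of association, on the risk-ratio scale, that an
    unmeasured confounder would need with both exposure and outcome to
    fully explain away the observed estimate."""
    if rr < 1:          # for protective effects, work with the inverse
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

print(round(e_value(1.8), 2))  # an observed RR of 1.8 yields an E-value of 3.0
```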

3.6 Illustration using a previously published research study

In order to demonstrate how the guide can be used to plan a research study utilizing causal methods, we turn to a previously published study (Dondo et al. 2017) that assessed the causal relationship between the use of β-blockers and mortality after acute myocardial infarction in patients without heart failure or left ventricular systolic dysfunction. The investigators sought to answer a causal research question (Step 1), and so we proceed to Step 2. Use (or no use) of β-blockers was determined after discharge without taking into consideration discontinuation or future treatment changes (i.e., intention-to-treat). Considering treatment for whom (Step 3), both the ATE and the ATT were evaluated. Since survival was the primary outcome, an absolute difference in survival time was chosen as the effect measure (Step 4). While there was no explicit directed acyclic graph provided, the investigators specified a list of confounders.

Robust methodology was established by considering possible sources of bias and addressing them using viable solutions (Steps 6 and 7). Table 1 offers a list of the identified potential biases and their corresponding solutions as implemented. For example, to minimize potential biases including prevalent-user bias and selection bias, the sample was restricted to patients with no previous use of β-blockers, no contraindication for β-blockers, and no prescription of loop diuretics. To improve balance across the comparator groups in terms of baseline confounders, i.e., those that could influence both exposure (β-blocker use) and outcome (mortality), propensity score-based inverse probability of treatment weighting (IPTW) was employed. However, we noted that the baseline look-back period used to assess measured covariates was not explicitly listed in the paper.

Quality control and sensitivity analysis (Step 8) are described extensively. The overlap of propensity score distributions between comparator groups was tested and confounder balance was assessed. Since observations in the tail ends of the propensity score distribution may violate the positivity assumption (Crump et al. 2009), a sensitivity analysis was conducted including only cases within 0.1 to 0.9 of the propensity score distribution. While not mentioned by the authors, the PS tails can be influenced by unmeasured confounders (Sturmer et al. 2021), and the findings were robust with and without trimming. An assessment of extreme IPTW weights, while not included, would further help increase confidence in the robustness of the analysis. An instrumental variable approach was employed to assess potential selection bias due to unmeasured confounding, using hospital rates of guideline-indicated prescribing as the instrument. Additionally, potential bias caused by missing data was attenuated through the use of multiple imputation, and separate models were built for complete cases only and for imputed/complete cases.
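To illustrate the trimming idea (a sketch with simulated data, not the study's actual analysis), one might restrict the sample to propensity scores between 0.1 and 0.9 and re-derive the weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated propensity scores and treatment indicators (hypothetical data).
ps = rng.uniform(0.02, 0.98, 1000)
treated = (rng.uniform(size=1000) < ps).astype(int)

# Sensitivity analysis: keep only observations with PS in [0.1, 0.9],
# similar in spirit to the case study's trimmed re-analysis.
keep = (ps >= 0.1) & (ps <= 0.9)
ps_t, treated_t = ps[keep], treated[keep]
weights_t = np.where(treated_t == 1, 1 / ps_t, 1 / (1 - ps_t))

print(f"Retained {keep.mean():.0%} of patients; max weight {weights_t.max():.1f}")
```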

4 Discussion

We have described a conceptual schema for designing observational real-world studies to estimate causal effects. The application of this schema to a previously published study illuminates the methodologic structure of the study, revealing how each structural element is related to a potential bias which it is meant to address. Real-world evidence is increasingly accepted by healthcare stakeholders, including the FDA (Concato and Corrigan-Curay 2022 ; Concato and ElZarrad 2022 ), and its use for comparative effectiveness and safety assessments requires appropriate causal study design; our guide is meant to facilitate this design process and complement existing, more specific, guidance.

Existing guidance for causal inference using observational data includes components that map clearly onto the schema we have developed. For example, in 2009 Cox et al. described common sources of bias in observational data and recommended specific strategies to mitigate them, corresponding to steps 6–8 of our step-by-step guide (Cox et al. 2009). In 2013, the AHRQ emphasized development of the research question, corresponding to steps 1–4 of our guide, with additional chapters on study design, comparator selection, and sensitivity analyses (corresponding to step 7) and on directed acyclic graphs (step 5) (Velentgas et al. 2013). Much of Girman et al.'s manuscript (Girman et al. 2014) corresponds to steps 1–4 of our guide, while its treatment of equipoise and of interpretability corresponds to steps 3 and 7–8, respectively. The current ENCePP guide on methodological standards in pharmacoepidemiology contains a section on formulating a meaningful research question, corresponding to step 1, and describes strategies to mitigate specific sources of bias, corresponding to steps 6–8 (European Medicines Agency 2023). Recent works by the FDA Sentinel Innovation Center (Desai et al. 2024) and the Joint Initiative for Causal Inference (Dang et al. 2023) provide more advanced expositions of many of the steps in our guide. The target trial framework contains guidance on developing seven components of the study protocol: eligibility criteria, treatment strategies, assignment procedures, follow-up period, outcome, causal contrast of interest, and analysis plan (Hernán and Robins 2016). Our work places the target trial framework into a larger context, illustrating its relationship with other important study planning considerations, including the creation of a directed acyclic graph and the incorporation of prespecified sensitivity and quantitative bias analyses.
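
To illustrate, the seven target-trial protocol components could be recorded as a structured object during protocol drafting. The sketch below is hypothetical; the field values are illustrative stand-ins loosely based on the β-blocker example, not the published protocol.

```python
# Hypothetical sketch: capturing the seven target-trial protocol components
# (Hernán and Robins 2016) as a structured object for protocol drafting.
from dataclasses import dataclass

@dataclass
class TargetTrialProtocol:
    eligibility_criteria: str
    treatment_strategies: str
    assignment_procedures: str
    follow_up_period: str
    outcome: str
    causal_contrast: str
    analysis_plan: str

protocol = TargetTrialProtocol(
    eligibility_criteria="AMI survivors, no prior beta-blocker use (illustrative)",
    treatment_strategies="Initiate beta-blocker at discharge vs. do not initiate",
    assignment_procedures="Emulated randomization via adjustment for baseline confounders",
    follow_up_period="Discharge to death, disenrollment, or end of study",
    outcome="All-cause mortality",
    causal_contrast="Intention-to-treat effect (ATE and ATT)",
    analysis_plan="Propensity score IPTW with prespecified sensitivity analyses",
)
print(protocol)
```

Writing the components down in one place makes gaps in the emulation visible before any data are touched.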

Ultimately, the feasibility of estimating causal effects rests on the capabilities of the available data. Real-world data sources are complex, and the investigator must carefully consider whether the data on hand are sufficient to answer the research question. For example, a study that relies solely on claims data for outcome ascertainment may suffer from outcome misclassification bias (Lanes and Beachler 2023). This bias can be addressed through medical record validation for a random subset of patients, followed by quantitative bias analysis (Lanes and Beachler 2023). If, instead, the investigator wishes to apply a previously published claims-based algorithm that was validated in a different database, they must carefully consider the transportability of that algorithm to their own study population. In this way, causal inference from real-world data requires the ability to think creatively and resourcefully about how various data sources and elements can be leveraged, with consideration for the strengths and limitations of each source. The heart of causal inference is in the pairing of humility and creativity: the humility to acknowledge what the data cannot do, and the creativity to address those limitations as best as one can at the time.
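
As a concrete illustration, a simple quantitative bias analysis can correct observed risks using positive predictive values (PPVs) estimated from chart validation, in the spirit of Brenner and Gefeller (1993); the numbers in the sketch below are made up.

```python
# Illustrative quantitative bias analysis for outcome misclassification:
# correct observed risks using positive predictive values (PPVs) from a
# chart-validation subsample. All numbers are made up.
def ppv_corrected_risk(observed_cases: int, n: int, ppv: float) -> float:
    """Scale observed cases by the PPV to approximate true cases.

    Assumes near-perfect sensitivity of the claims algorithm, so that
    false negatives are negligible (a common simplifying assumption).
    """
    return (observed_cases * ppv) / n

risk_exposed = ppv_corrected_risk(observed_cases=120, n=1000, ppv=0.85)
risk_unexposed = ppv_corrected_risk(observed_cases=90, n=1000, ppv=0.92)
print(f"Bias-corrected risk ratio: {risk_exposed / risk_unexposed:.2f}")
```

Under these assumptions the corrected risk ratio is simply the observed risk ratio multiplied by the ratio of group-specific PPVs, which is why differential PPVs can move the estimate in either direction.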

4.1 Limitations

As with any attempt to synthesize a broad array of information into a single, simplified schema, our work has several limitations. Space and usability constraints necessitated simplifying the complex source material and selecting among many available methodologies, and information about the relative importance of each step is not currently included. It is also important to consider the context of our work: this step-by-step guide emphasizes analytic techniques (e.g., propensity scores) that are used most frequently within our own research environment, and it may omit less familiar study designs and analytic techniques. However, one strength of the guide is that additional designs, techniques, or concepts can easily be incorporated into the existing schema; the benefit of a schema is that new information can be added and is more readily accessed because of its association with previously sorted information (Loveless 2022). We also note that causal inference was approached as a broad, overarching concept defined by the totality of the research from start to finish, rather than as a particular analytic technique; however, we view this as a strength rather than a limitation.

Finally, the focus of this guide was on the methodologic aspects of study planning. As a result, we did not include steps for drafting the study protocol, registering it in a public database, or communicating results. We strongly encourage researchers to register their study protocols and to communicate their findings with transparency; a protocol template endorsed by ISPOR and ISPE for studies using real-world data to evaluate treatment effects is available (Wang et al. 2023a). Additionally, the steps described above are intended to illustrate an order of thinking in the study planning process and are often iterative; the guide is not intended to reflect the order of study execution. In particular, quality control procedures and sensitivity analyses should be formulated up-front, at the protocol stage.

5 Conclusion

We have outlined the steps and key conceptual issues involved in designing real-world studies to answer causal questions, and we have created a visually appealing, user-friendly resource to help researchers clearly define and navigate these issues. We hope this guide serves to enhance the quality, and thus the impact, of real-world evidence.

Data availability

No datasets were generated or analysed during the current study.

Arlett, P., Kjaer, J., Broich, K., Cooke, E.: Real-world evidence in EU medicines regulation: enabling use and establishing value. Clin. Pharmacol. Ther. 111 (1), 21–23 (2022)

Athey, S., Imbens, G.W.: Machine learning methods that economists should know about. Annu. Rev. Econ. 11, 685–725 (2019)

Belthangady, C., Stedden, W., Norgeot, B.: Minimizing bias in massive multi-arm observational studies with BCAUS: balancing covariates automatically using supervision. BMC Med. Res. Methodol. 21 (1), 190 (2021)

Berger, M.L., Sox, H., Willke, R.J., Brixner, D.L., Eichler, H.G., Goettsch, W., Madigan, D., Makady, A., Schneeweiss, S., Tarricone, R., Wang, S.V., Watkins, J., Mullins, C.D.: Good practices for real-world data studies of treatment and/or comparative effectiveness: recommendations from the joint ISPOR-ISPE Special Task Force on real-world evidence in health care decision making. Pharmacoepidemiol Drug Saf. 26 (9), 1033–1039 (2017)

Brenner, H., Gefeller, O.: Use of the positive predictive value to correct for disease misclassification in epidemiologic studies. Am. J. Epidemiol. 138 (11), 1007–1015 (1993)

Concato, J., Corrigan-Curay, J.: Real-world evidence - where are we now? N Engl. J. Med. 386 (18), 1680–1682 (2022)

Concato, J., ElZarrad, M.: FDA Issues Draft Guidances on Real-World Evidence, Prepares to Publish More in Future [accessed on 2022]. (2022). https://www.fda.gov/drugs/news-events-human-drugs/fda-issues-draft-guidances-real-world-evidence-prepares-publish-more-future

Cox, E., Martin, B.C., Van Staa, T., Garbe, E., Siebert, U., Johnson, M.L.: Good research practices for comparative effectiveness research: Approaches to mitigate bias and confounding in the design of nonrandomized studies of treatment effects using secondary data sources: The International Society for Pharmacoeconomics and Outcomes Research Good Research Practices for Retrospective Database Analysis Task Force Report–Part II. Value Health. 12 (8), 1053–1061 (2009)

Crump, R.K., Hotz, V.J., Imbens, G.W., Mitnik, O.A.: Dealing with limited overlap in estimation of average treatment effects. Biometrika. 96 (1), 187–199 (2009)


Danaei, G., Rodriguez, L.A., Cantero, O.F., Logan, R., Hernan, M.A.: Observational data for comparative effectiveness research: An emulation of randomised trials of statins and primary prevention of coronary heart disease. Stat. Methods Med. Res. 22 (1), 70–96 (2013)

Dang, L.E., Gruber, S., Lee, H., Dahabreh, I.J., Stuart, E.A., Williamson, B.D., Wyss, R., Diaz, I., Ghosh, D., Kiciman, E., Alemayehu, D., Hoffman, K.L., Vossen, C.Y., Huml, R.A., Ravn, H., Kvist, K., Pratley, R., Shih, M.C., Pennello, G., Martin, D., Waddy, S.P., Barr, C.E., Akacha, M., Buse, J.B., van der Laan, M., Petersen, M.: A causal roadmap for generating high-quality real-world evidence. J. Clin. Transl Sci. 7 (1), e212 (2023)

Desai, R.J., Wang, S.V., Sreedhara, S.K., Zabotka, L., Khosrow-Khavar, F., Nelson, J.C., Shi, X., Toh, S., Wyss, R., Patorno, E., Dutcher, S., Li, J., Lee, H., Ball, R., Dal Pan, G., Segal, J.B., Suissa, S., Rothman, K.J., Greenland, S., Hernan, M.A., Heagerty, P.J., Schneeweiss, S.: Process guide for inferential studies using healthcare data from routine clinical practice to evaluate causal effects of drugs (PRINCIPLED): Considerations from the FDA Sentinel Innovation Center. BMJ. 384 , e076460 (2024)

Digitale, J.C., Martin, J.N., Glymour, M.M.: Tutorial on directed acyclic graphs. J. Clin. Epidemiol. 142 , 264–267 (2022)

Dondo, T.B., Hall, M., West, R.M., Jernberg, T., Lindahl, B., Bueno, H., Danchin, N., Deanfield, J.E., Hemingway, H., Fox, K.A.A., Timmis, A.D., Gale, C.P.: β-blockers and mortality after acute myocardial infarction in patients without heart failure or ventricular dysfunction. J. Am. Coll. Cardiol. 69 (22), 2710–2720 (2017)

European Medicines Agency: ENCePP Guide on Methodological Standards in Pharmacoepidemiology [accessed on 2023]. (2023). https://www.encepp.eu/standards_and_guidances/methodologicalGuide.shtml

Ferguson, K.D., McCann, M., Katikireddi, S.V., Thomson, H., Green, M.J., Smith, D.J., Lewsey, J.D.: Evidence synthesis for constructing directed acyclic graphs (ESC-DAGs): A novel and systematic method for building directed acyclic graphs. Int. J. Epidemiol. 49 (1), 322–329 (2020)

Flanagin, A., Lewis, R.J., Muth, C.C., Curfman, G.: What does the proposed causal inference framework for observational studies mean for JAMA and the JAMA Network journals? JAMA (2024)

U.S. Food and Drug Administration: Framework for FDA’s Real-World Evidence Program [accessed on 2018]. (2018). https://www.fda.gov/media/120060/download

Franklin, J.M., Schneeweiss, S.: When and how can real-world data analyses substitute for randomized controlled trials? Clin. Pharmacol. Ther. 102 (6), 924–933 (2017)

Gatto, N.M., Wang, S.V., Murk, W., Mattox, P., Brookhart, M.A., Bate, A., Schneeweiss, S., Rassen, J.A.: Visualizations throughout pharmacoepidemiology study planning, implementation, and reporting. Pharmacoepidemiol Drug Saf. 31 (11), 1140–1152 (2022)

Girman, C.J., Faries, D., Ryan, P., Rotelli, M., Belger, M., Binkowitz, B., O’Neill, R., Drug Information Association CER Scientific Working Group: Pre-study feasibility and identifying sensitivity analyses for protocol pre-specification in comparative effectiveness research. J. Comp. Eff. Res. 3 (3), 259–270 (2014)

Griffith, G.J., Morris, T.T., Tudball, M.J., Herbert, A., Mancano, G., Pike, L., Sharp, G.C., Sterne, J., Palmer, T.M., Davey Smith, G., Tilling, K., Zuccolo, L., Davies, N.M., Hemani, G.: Collider bias undermines our understanding of COVID-19 disease risk and severity. Nat. Commun. 11 (1), 5749 (2020)

Hernán, M.A.: The C-word: scientific euphemisms do not improve causal inference from observational data. Am. J. Public Health. 108 (5), 616–619 (2018)

Hernán, M.A., Robins, J.M.: Using big data to emulate a target trial when a randomized trial is not available. Am. J. Epidemiol. 183 (8), 758–764 (2016)

Hernán, M.A., Robins, J.M.: Causal Inference: What If. Chapman & Hall/CRC, Boca Raton (2020)

International Society for Pharmacoeconomics and Outcomes Research (ISPOR): Strategic Initiatives: Real-World Evidence [accessed on 2022]. (2022). https://www.ispor.org/strategic-initiatives/real-world-evidence

International Society for Pharmacoepidemiology (ISPE): Position on Real-World Evidence [accessed on 2020]. (2020). https://pharmacoepi.org/pub/?id=136DECF1-C559-BA4F-92C4-CF6E3ED16BB6

Labrecque, J.A., Swanson, S.A.: Target trial emulation: Teaching epidemiology and beyond. Eur. J. Epidemiol. 32 (6), 473–475 (2017)

Lanes, S., Beachler, D.C.: Validation to correct for outcome misclassification bias. Pharmacoepidemiol Drug Saf. (2023)

Lash, T.L., Fox, M.P., Fink, A.K.: Applying Quantitative Bias Analysis to Epidemiologic Data. Springer (2009)

Lash, T.L., Fox, M.P., MacLehose, R.F., Maldonado, G., McCandless, L.C., Greenland, S.: Good practices for quantitative bias analysis. Int. J. Epidemiol. 43 (6), 1969–1985 (2014)

Leahy, T.P., Kent, S., Sammon, C., Groenwold, R.H., Grieve, R., Ramagopalan, S., Gomes, M.: Unmeasured confounding in nonrandomized studies: Quantitative bias analysis in health technology assessment. J. Comp. Eff. Res. 11 (12), 851–859 (2022)

Loveless, B.: A Complete Guide to Schema Theory and its Role in Education [accessed on 2022]. (2022). https://www.educationcorner.com/schema-theory/

Lund, J.L., Richardson, D.B., Sturmer, T.: The active comparator, new user study design in pharmacoepidemiology: Historical foundations and contemporary application. Curr. Epidemiol. Rep. 2 (4), 221–228 (2015)

Mai, X., Teng, C., Gao, Y., Governor, S., He, X., Kalloo, G., Hoffman, S., Mbiydzenyuy, D., Beachler, D.: A pragmatic comparison of logistic regression versus machine learning methods for propensity score estimation. Abstract, 38th International Conference on Pharmacoepidemiology, August 26–28, 2022, Copenhagen, Denmark. Pharmacoepidemiol Drug Saf. 31 (S2) (2022)

Mullard, A.: 2021 FDA approvals. Nat. Rev. Drug Discov. 21 (2), 83–88 (2022)

Onasanya, O., Hoffman, S., Harris, K., Dixon, R., Grabner, M.: Current applications of machine learning for causal inference in healthcare research using observational data. International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Atlanta, GA. (2024)

Pearl, J.: Causal diagrams for empirical research. Biometrika. 82 (4), 669–688 (1995)

Prada-Ramallal, G., Takkouche, B., Figueiras, A.: Bias in pharmacoepidemiologic studies using secondary health care databases: A scoping review. BMC Med. Res. Methodol. 19 (1), 53 (2019)

Richardson, T.S., Robins, J.M.: Single World Intervention Graphs: A Primer [accessed on 2013]. (2013). https://www.stats.ox.ac.uk/~evans/uai13/Richardson.pdf

Richiardi, L., Bellocco, R., Zugna, D.: Mediation analysis in epidemiology: Methods, interpretation and bias. Int. J. Epidemiol. 42 (5), 1511–1519 (2013)

Riis, A.H., Johansen, M.B., Jacobsen, J.B., Brookhart, M.A., Sturmer, T., Stovring, H.: Short look-back periods in pharmacoepidemiologic studies of new users of antibiotics and asthma medications introduce severe misclassification. Pharmacoepidemiol Drug Saf. 24 (5), 478–485 (2015)

Rodrigues, D., Kreif, N., Lawrence-Jones, A., Barahona, M., Mayer, E.: Reflection on modern methods: Constructing directed acyclic graphs (DAGs) with domain experts for health services research. Int. J. Epidemiol. 51 (4), 1339–1348 (2022)

Rothman, K.J., Greenland, S., Lash, T.L.: Modern Epidemiology. Wolters Kluwer Health/Lippincott Williams & Wilkins, Philadelphia (2008)

Rubin, D.B.: Causal inference using potential outcomes. J. Am. Stat. Assoc. 100 (469), 322–331 (2005)


Sauer, B., VanderWeele, T.J.: Use of directed acyclic graphs. In: Velentgas, P., Dreyer, N., Nourjah, P. (eds.) Developing a Protocol for Observational Comparative Effectiveness Research: A User’s Guide. Agency for Healthcare Research and Quality (US) (2013)

Schneeweiss, S., Rassen, J.A., Brown, J.S., Rothman, K.J., Happe, L., Arlett, P., Dal Pan, G., Goettsch, W., Murk, W., Wang, S.V.: Graphical depiction of longitudinal study designs in health care databases. Ann. Intern. Med. 170 (6), 398–406 (2019)

Schuler, M.S., Rose, S.: Targeted maximum likelihood estimation for causal inference in observational studies. Am. J. Epidemiol. 185 (1), 65–73 (2017)

Stuart, E.A., DuGoff, E., Abrams, M., Salkever, D., Steinwachs, D.: Estimating causal effects in observational studies using electronic health data: challenges and (some) solutions. EGEMS (Wash DC) 1 (3) (2013)

Sturmer, T., Webster-Clark, M., Lund, J.L., Wyss, R., Ellis, A.R., Lunt, M., Rothman, K.J., Glynn, R.J.: Propensity score weighting and trimming strategies for reducing variance and bias of treatment effect estimates: a simulation study. Am. J. Epidemiol. 190 (8), 1659–1670 (2021)

Suissa, S., Dell’Aniello, S.: Time-related biases in pharmacoepidemiology. Pharmacoepidemiol Drug Saf. 29 (9), 1101–1110 (2020)

Tripepi, G., Jager, K.J., Dekker, F.W., Wanner, C., Zoccali, C.: Measures of effect: Relative risks, odds ratios, risk difference, and ‘number needed to treat’. Kidney Int. 72 (7), 789–791 (2007)

Velentgas, P., Dreyer, N., Nourjah, P., Smith, S., Torchia, M.: Developing a Protocol for Observational Comparative Effectiveness Research: A User’s Guide. Agency for Healthcare Research and Quality (AHRQ) Publication 12(13). (2013)

Wang, A., Nianogo, R.A., Arah, O.A.: G-computation of average treatment effects on the treated and the untreated. BMC Med. Res. Methodol. 17 (1), 3 (2017)

Wang, S.V., Pottegard, A., Crown, W., Arlett, P., Ashcroft, D.M., Benchimol, E.I., Berger, M.L., Crane, G., Goettsch, W., Hua, W., Kabadi, S., Kern, D.M., Kurz, X., Langan, S., Nonaka, T., Orsini, L., Perez-Gutthann, S., Pinheiro, S., Pratt, N., Schneeweiss, S., Toussi, M., Williams, R.J.: HARmonized Protocol Template to enhance reproducibility of hypothesis evaluating real-world evidence studies on treatment effects: A good practices report of a joint ISPE/ISPOR task force. Pharmacoepidemiol Drug Saf. 32 (1), 44–55 (2023a)

Wang, S.V., Schneeweiss, S., RCT-DUPLICATE Initiative, Franklin, J.M., Desai, R.J., Feldman, W., Garry, E.M., Glynn, R.J., Lin, K.J., Paik, J., Patorno, E., Suissa, S., D’Andrea, E., Jawaid, D., Lee, H., Pawar, A., Sreedhara, S.K., Tesfaye, H., Bessette, L.G., Zabotka, L., Lee, S.B., Gautam, N., York, C., Zakoul, H., Concato, J., Martin, D., Paraoan, D., Quinto, K.: Emulation of randomized clinical trials with nonrandomized database analyses: results of 32 clinical trials. JAMA 329 (16), 1376–1385 (2023b)

Westreich, D., Lessler, J., Funk, M.J.: Propensity score estimation: Neural networks, support vector machines, decision trees (CART), and meta-classifiers as alternatives to logistic regression. J. Clin. Epidemiol. 63 (8), 826–833 (2010)

Yang, S., Eaton, C.B., Lu, J., Lapane, K.L.: Application of marginal structural models in pharmacoepidemiologic studies: A systematic review. Pharmacoepidemiol Drug Saf. 23 (6), 560–571 (2014)

Zhou, H., Taber, C., Arcona, S., Li, Y.: Difference-in-differences method in comparative effectiveness research: utility with unbalanced groups. Appl. Health Econ. Health Policy. 14 (4), 419–429 (2016)

Funding

The authors received no financial support for this research.

Author information

Authors and affiliations

Carelon Research, Wilmington, DE, USA

Sarah Ruth Hoffman, Nilesh Gangan, Joseph L. Smith, Arlene Tave, Yiling Yang, Christopher L. Crowe & Michael Grabner

Elevance Health, Indianapolis, IN, USA

Xiaoxue Chen

University of Maryland School of Pharmacy, Baltimore, MD, USA

Susan dosReis


Contributions

SH, NG, JS, AT, CC, MG are employees of Carelon Research, a wholly owned subsidiary of Elevance Health, which conducts health outcomes research with both internal and external funding, including a variety of private and public entities. XC was an employee of Elevance Health at the time of study conduct. YY was an employee of Carelon Research at the time of study conduct. SH, MG, and JLS are shareholders of Elevance Health. SdR receives funding from GlaxoSmithKline for a project unrelated to the content of this manuscript and conducts research that is funded by state and federal agencies.

Corresponding author

Correspondence to Sarah Ruth Hoffman.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Hoffman, S.R., Gangan, N., Chen, X. et al. A step-by-step guide to causal study design using real-world data. Health Serv Outcomes Res Method (2024). https://doi.org/10.1007/s10742-024-00333-6


Received: 07 December 2023

Revised: 31 May 2024

Accepted: 10 June 2024

Published: 19 June 2024

DOI: https://doi.org/10.1007/s10742-024-00333-6


Keywords

  •   Causal inference
  •   Real-world data
  •   Confounding
  •   Non-randomized data
  •   Bias in pharmacoepidemiology