Wednesday

Beginning to think about how to measure student engagement - quantitatively

Once our FLC got on the same page about needing to measure process or outcome measures (or both) related to student engagement, we had a good discussion about what constitutes "engagement."

The idea of "engagement" is near and dear to my heart - it was central to my dissertation research on engagement in substance abuse treatment among people with intellectual disabilities.  In that project, I operationalized engagement as the proportion of expected visits attended within a certain timeframe after beginning - or, in the parlance of health services researchers, initiating - substance abuse treatment.  In applying this thinking to engaging graduate students in required research courses, I realized that while attendance is part of the picture, it is not all of it.  As with my dissertation research, no study is perfect - but in thinking about how to measure engagement in this project, I went back to the literature on student engagement.
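As an illustration only, this kind of rate-based engagement measure can be sketched in a few lines of Python. The function name, the 12-week window, and the weekly visit schedule below are all hypothetical, not the actual operationalization from the dissertation:

```python
from datetime import date, timedelta

def engagement_rate(initiation, visits, expected_per_week, weeks=12):
    """Proportion of expected visits actually attended within a fixed
    window after initiation. The 12-week window and weekly expectation
    are illustrative assumptions, not the dissertation's parameters."""
    window_end = initiation + timedelta(weeks=weeks)
    attended = sum(1 for v in visits if initiation <= v <= window_end)
    expected = expected_per_week * weeks
    return min(attended / expected, 1.0)

# Example: one visit expected per week; 8 of 12 attended in the window.
start = date(2010, 9, 1)
visits = [start + timedelta(weeks=w) for w in range(8)]
print(round(engagement_rate(start, visits, expected_per_week=1), 2))  # 0.67
```

The same skeleton applies to course attendance: swap treatment visits for class sessions and the initiation date for the first day of the term.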

The memo (more accurately a list of links/items on engagement from others) pasted in below is something I prepared for our FLC members for discussion on this topic after going back to the literature.  We had a wide-ranging discussion of how we might use each of the approaches listed in this memo to measure our own students' engagement in our required research courses.  We also talked about some of the individual questions posed by NSSE and how we might tweak them.  My own approach, informed by reviewing all of this material, resulted in this draft survey instrument (click here, you will need to enter data in order to see the entire survey).  In designing this survey, I used the best practice of designing "table shells" ahead of time, to visualize what my data would look like once collected, in order to make sure that my measurement approaches were testable in the ways that I needed them to be (see tables here).
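For readers unfamiliar with the practice, a table shell is simply the empty table drafted before any data are collected, so you can verify that the planned analyses fit the eventual layout. A toy sketch (the row and column labels here are invented for illustration, not the shells I actually drafted):

```python
def table_shell(title, rows, cols, cell="--"):
    """Render an empty 'table shell' with placeholder cells, so the
    planned layout can be reviewed before any data collection."""
    width = max(len(r) for r in rows) + 2
    lines = [title, f"{'':<{width}}" + "".join(f"{c:>10}" for c in cols)]
    for r in rows:
        lines.append(f"{r:<{width}}" + f"{cell:>10}" * len(cols))
    return "\n".join(lines)

# Hypothetical engagement indicators and summary columns:
print(table_shell(
    "Table X. Student engagement indicators (shell)",
    ["Attended all sessions", "Contributed to discussion"],
    ["n", "%"],
))
```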

MEMORANDUM
November 17, 2010
To:          SONAR FLC Members
From:    Elspeth
Re:         Approaches to measuring student engagement
Clinical measurements of engagement: 
Attendance at X out of X sections (rate)
Measures of student engagement:
Graduate National Survey of Student Engagement (NSSE)
This survey is designed to measure the degree of graduate student engagement in college activities that correlate with student learning and personal development. It includes questions on class participation, academic rigor, the amount of effort put into class work, and overall experience.

Reports:
2007/08 Graduate National Survey of Student Engagement Summary.pdf
Table 1: In your postbaccalaureate experience at CSU Stanislaus, about how often have you done each of the following? (% agreement)
a) Asked questions in class or contributed to class discussions - 50.0%
b) Made a class presentation - 44.6%
c) Prepared two or more drafts of a paper or assignment before turning it in - 31.9%
d) Worked on a paper or project that required integrating ideas or information from various sources - 54.3%
e) Included diverse perspectives (different races, religions, genders, political beliefs, etc.) in class discussions or writing assignments - 42.4%
f) Came to class without completing readings or assignments - 6.5%
g) Worked with other students on projects during class - 33.3%
h) Worked with classmates outside of class to prepare class assignments - 25.3%
i) Put together ideas or concepts from different courses when completing assignments or during class discussions - 36.7%
j) Tutored or taught other students (paid or voluntary) - 13.2%
k) Participated in a community-based project (e.g., service learning) as part of a regular course - 6.6%
l) Used an electronic medium (listserv, chat group, Internet, instant messaging, etc.) to discuss or complete an assignment - 46.7%
m) Used e-mail to communicate with an instructor - 49.5%
n) Discussed grades or assignments with an instructor - 28.9%
o) Talked about career plans with a faculty member or advisor - 19.6%
p) Discussed ideas from your readings or classes with faculty - 23.1%
q) Received prompt written or oral feedback from faculty on your academic performance - 27.2%
r) Worked harder than you thought you could to meet an instructor's standards or expectations - 20.7%
s) Worked with faculty members on activities other than course work (committees, orientation, student life activities, etc.) - 9.8%
t) Discussed ideas from your readings or classes with others outside of class (students, family members, co-workers, etc.) - 34.8%
u) Had serious conversations with students of a different race or ethnicity than your own - 34.8%
v) Had serious conversations with students who are very different from you in terms of their religious beliefs, political opinions, or personal values - 32.6%

Table 2: How often has your coursework emphasized your engagement in the following mental activities? (% agreement)
a) Memorizing facts, ideas, or methods from your courses and readings so you can repeat them in pretty much the same form - 16.3%
b) Analyzing the basic elements of an idea, experience, or theory, such as examining a particular case or situation in depth and considering its components - 43.5%
c) Synthesizing and organizing ideas, information, or experiences into new, more complex interpretations and relationships - 39.1%
d) Making judgments about the value of information, arguments, or methods, such as examining how others gathered and interpreted data and assessing the soundness of their conclusions - 40.2%
e) Applying theories or concepts to practical problems or in new situations

ERIC Identifier:  ED482269
Publication Date: 2003-09-00
Author: Chapman, Elaine
Source: ERIC Clearinghouse on Assessment and Evaluation
Assessing Student Engagement Rates. ERIC Digest.  
Given the emphasis placed on levels of academic achievement in schools, the way in
which students acquire knowledge through the learning process has become a primary concern. Several studies have highlighted the significant role that affective factors can play in learning (e.g., Mathewson, 1994; Wigfield, 1997), placing particular emphasis on student engagement. This Digest defines student engagement and describes various methods used to measure it, both in empirical research studies and at the classroom level.
"WHAT IS STUDENT ENGAGEMENT?"
Early studies of student engagement often focused on time-on-task behaviors (e.g.,
Fisher, et al., 1980; Brophy, 1983). More recently, however, other definitions have
appeared in the literature. Student engagement has been used to depict students'
willingness to participate in routine school activities, such as attending classes,
submitting required work, and following teachers' directions in class. For example,
Natriello (1984) defined student engagement as "participating in the activities offered as part of the school program" (p.14). Negative indicators of engagement in this study included unexcused absences from classes, cheating on tests, and damaging school property.
Another definition focuses on more subtle cognitive, behavioral, and affective indicators of student engagement in specific learning tasks. This orientation is reflected well in the definition offered by Skinner & Belmont (1993):
Children who are engaged show sustained behavioral involvement in learning activities accompanied by a positive emotional tone. They select tasks at the border of their competencies, initiate action when given the opportunity, and exert intense effort and concentration in the implementation of learning tasks; they show generally positive emotions during ongoing action, including enthusiasm, optimism, curiosity, and interest.
The opposite of engagement is disaffection. Disaffected children are passive, do not try hard, and give up easily in the face of challenges; [they can] be bored, depressed, anxious, or even angry about their presence in the classroom; they can be withdrawn from learning opportunities or even rebellious towards teachers and classmates. (p. 572)
From a different perspective, Pintrich and De Groot (1990) associated engagement levels with students' use of cognitive, meta-cognitive and self-regulatory strategies to monitor and guide their learning processes. In this view, student engagement is viewed as motivated behavior apparent from the kinds of cognitive strategies students choose to use (e.g., simple or "surface" processing strategies such as rehearsal versus "deeper" processing strategies such as elaboration), and by their willingness to persist with difficult tasks by regulating their own learning behavior.
Use of cognitive and meta-cognitive strategies (e.g., "I went back over things I didn't understand" and "I tried to figure out how today's work fit with what I had learned before") may be taken to indicate active task engagement, while use of shallow strategies (e.g., "I skipped the hard parts") may be taken to indicate superficial engagement (Meece, Blumenfield, & Hoyle, 1988).
"HOW IS STUDENT ENGAGEMENT MEASURED?"
The most common way that student engagement is measured is through information
reported by the students themselves. Other methods include checklists and rating
scales completed by teachers, observations, work sample analyses, and case studies.
Each of these methods is described briefly below.
"Self-Reports." Students may be asked to complete surveys or questionnaires 
regarding their level of task engagement. Items relating to the cognitive aspects of engagement often ask students to report on factors such as their attention versus distraction during class, the mental effort they expend on these tasks (e.g., to integrate new concepts with previous knowledge), and task persistence (e.g., their reaction to perceived failure to comprehend the course material). Students can also be asked to report on their response levels during class time (e.g., making verbal responses within group discussions, looking for distractions, and engaging in non-academic social interaction) as an index of behavioral task engagement. Affective engagement questions typically ask students to rate their interest in and emotional reactions to learning tasks on indices such as choice of activities (e.g., selection of more versus less challenging tasks), the desire to know more about particular topics, and feelings of stimulation or excitement in beginning new projects.
In addition to asking the question of whether students are engaged in learning tasks,
self-report measures can provide some indication of why this is the case. Research into achievement goal orientations, for example, has indicated positive relationships
between task or mastery goals, which reflect a desire for knowledge or skill acquisition, and students' use of effective learning strategies (e.g., Covington, 2000). Studies have also demonstrated positive relationships between students' perceived learning control and adaptive learning processes (e.g., Strickland, 1989; Thompson et al., 1998).
"Checklists and Rating Scales." In addition to student self-report measures, a few studies have used summative rating scales to measure student engagement levels. For example, the teacher report scales used by Skinner & Belmont (1993) asked teachers to assess their students' willingness to participate in school tasks (i.e., effort, attention, and persistence during the initiation and execution of learning activities, such as "When faced with a difficult problem, this student doesn't try"), as well as their emotional reactions to these tasks (i.e., interest versus boredom, happiness versus sadness, anxiety and anger, such as "When in class, this student seems happy"). The Teacher Questionnaire on Student Motivation to Read developed by Sweet, Guthrie, & Ng (1996) asks teachers to report on factors relating to student engagement rates, such as activities (e.g., enjoys reading about favorite activities), autonomy (e.g., knows how to choose a book he or she would want to read), and individual factors (e.g., is easily distracted while reading).
"Direct Observations." Although self-report scales are widely used, the validity of the data yielded by these measures will vary considerably with students' abilities to accurately assess their own cognitions, behaviors, and affective responses (Assor & Connell, 1992). Direct observations are often used to confirm students' reported levels of engagement in learning tasks. A number of established protocols are available in this area (e.g., Ellett & Chauvin, 1991). Most of these observational studies have used some form of momentary time sampling system. In these methods, the observer records whether a behavior was present or absent at the moment that the time interval ends or else during a specific time period.
In classwide observations, approximately 5 minutes of observational data can generally be collected on each target student per lesson. Thus, a 30-minute observation period would allow observations of approximately 5 target students, with 6 to 7 sessions being required to observe a full class. In addition, to obtain a representative sample of students' behavior over the full course of a lesson, observations are generally rotated across students so that each student is observed continuously for only one minute at a time.
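The rotation described above can be made concrete with a minimal sketch, assuming a simple one-minute round-robin rotation across target students and a binary on-task code for each momentary sample. Both are simplifying assumptions; published protocols such as Ellett & Chauvin's differ in detail:

```python
def rotation_schedule(students, lesson_minutes=30):
    """Round-robin one-minute observation slots across target students,
    so each student's samples are spread over the whole lesson."""
    return [students[m % len(students)] for m in range(lesson_minutes)]

def observed_engagement(samples):
    """Per-student proportion of momentary samples coded as on-task.
    `samples` is a list of (student, on_task) pairs."""
    totals, on_task = {}, {}
    for student, engaged in samples:
        totals[student] = totals.get(student, 0) + 1
        on_task[student] = on_task.get(student, 0) + int(engaged)
    return {s: on_task[s] / totals[s] for s in totals}

# Five target students in a 30-minute lesson: six one-minute slots each,
# i.e. roughly 5-6 minutes of observational data per student.
schedule = rotation_schedule(["A", "B", "C", "D", "E"])
print(schedule.count("A"))  # 6
print(round(observed_engagement([("A", True), ("A", True), ("A", False)])["A"], 2))  # 0.67
```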
"Work Sample Analyses." Evidence of higher-order problem-solving and metacognitive learning strategies can be gathered from sources such as student projects, portfolios, performances, exhibitions, and learning journals or logs (e.g., Royer, Cisero, & Carlo, 1993; Wolf, et al., 1990). The efficacy of these methods hinges on the use of suitably structured tasks and scoring rubrics. For example, a rubric to assess the application of higher-order thinking skills in a student portfolio might include criteria for evidence of problem-solving, planning, and self-evaluation in the work. A number of formal and informal protocols for assessing students' self-regulated learning strategies also incorporate components that focus on metacognitive skills (e.g., Pintrich & DeGroot, 1990; Ward & Traweek, 1993). The Metacognitive Knowledge Monitoring Assessment and the Assessment of Cognitive Monitoring Effectiveness are more targeted measures suitable for use in classroom situations and with demonstrated sound psychometric properties in empirical evaluations (Osborne, 2001).
"Focused Case Studies." When the focus of an investigation is restricted to a small group of target students, it is often more useful to collect detailed descriptive accounts of engagement rates. Case studies allow researchers to address questions of student engagement inductively by recording details about students in interaction with other people and objects within classrooms. These accounts should describe both students' behaviors and the classroom contexts in which they occur. This might include, for example, the behavior of peers, direct antecedents to the target student's behaviors (e.g., teacher directions), as well as the student's response and the observed consequences of that response (e.g., reactions from teachers or peers). Case studies generally attempt to place observations of engagement within the total context of the classroom and/or school, and are concerned as much with the processes associated with engagement as with depicting engagement levels.

REFERENCES

Assor, A., & Connell, J.P. (1992). The validity of students' self-reports as measures of performance-affecting self-appraisals. In D.H. Schunk & J. Meece (Eds.), Student Perceptions in the Classroom (pp.25-46). Hillsdale, NJ: Lawrence Erlbaum.

Brophy, J. (1983). Conceptualizing student motivation. Educational Psychologist, 18, 200-215.

Covington, M. (2000). Goal theory, motivation, and school achievement: an integrative review. Annual Review of Psychology, 51, 171-200.

Ellett, C.D., & Chauvin, E. (1991). Development, validity, and reliability of a new
generation of assessments of effective teaching and learning: Future directions for the
study of learning environments. Journal of Classroom Interaction, 26(2): 25-36.

Fisher, C., Berliner, D., Filby, N., Marliave, R., Cahen, L., & Dishaw, M. (1980). Teaching behaviors, academic learning time, and student achievement: An overview. In C. Denham & A. Lieberman (Eds.), Time to Learn. Washington, D.C.: National Institute of Education.

Mathewson, G.C. (1994). Model of attitude influence upon reading and learning to read. In R.B. Ruddell & H. Singer (Eds.), Theoretical Models and Processes of Reading, 3rd. Ed. (pp. 1131-1161). Newark, DE: International Reading Association.

Meece, J.L., Blumenfield, P.C., & Hoyle, R.H. (1988). Students' goal orientations and cognitive engagement in classroom activities. Journal of Educational Psychology, 80 (4): 514-523.

Natriello, G. (1984). Problems in the evaluation of students and student disengagement from secondary schools. Journal of Research and Development in Education, 17, 14-24.

Osborne, J. (2001). Assessing metacognition in the classroom: The assessment of cognitive monitoring effectiveness. Unpublished manuscript, Department of Educational Psychology, University of Oklahoma.

Pintrich, P.R., & De Groot, E.V. (1990). Motivational and self-regulated learning
components of classroom academic performance. Journal of Educational Psychology, 82(1): 33-40.

Royer, J.M., Cisero, C.A., & Carlo, M.S. (1993). Techniques and procedures for
assessing cognitive skills. Review of Educational Research, 63(2): 201-243.

Skinner, E.A., & Belmont, M.J. (1993). Motivation in the classroom: Reciprocal effects of teacher behavior and student engagement across the school year. Journal of Educational Psychology, 85(4): 571-581.

Strickland, B.R. (1989). Internal-external control expectancies: From contingency to creativity. American Psychologist, 44(1): 1-12.

Sweet, A.P., Guthrie, J.T., & Ng, M. (1996). Teacher Perceptions and Students'
Motivation To Read (Reading Research Report No. 69). Athens, GA: National Reading Research Center.

Thompson, M., Kaslow, N.J., Weiss, B., & Nolen-Hoeksema, S. (1998). Children's Attributional Style Questionnaire revised: Psychometric examination. Psychological Assessment, 10(2): 166-170.

Ward, L., & Traweek, D. (1993). Application of a metacognitive strategy to assessment, intervention, and consultation: A think-aloud technique. Journal of School Psychology, 31, 469-485.

Wigfield, A. (1997). Reading engagement: A rationale for theory and teaching. In J.T. Guthrie and A. Wigfield (Eds.), Reading Engagement: Motivating Readers Through Integrated Instruction. Newark, DE: International Reading Association.

Wolf, D., Bixby, J., Glenn, J., & Gardner, H. (1990). To use their minds well:
Investigating new forms of student assessment. Review of Research in Education, 17, 31-74.
