Rubric for technology


by MARK WEST
Original posting: 8/24/09
Most recent update: 8/23/10

Revision: 08-23-10

Instructional Time: Engagement Duration

Referring to the final report on ActivClassroom by Haystead and Marzano (finalreportonactivclassroom.pdf), specifically Figure 22c on page 38, one is led to believe that 75%-80% is the ideal time goal. In fact, I used to use the weightings of that table to define this rubric. And yet, in the findings section (page 41), the summary states,

A teacher uses Promethean ActivClassroom extensively in the classroom but not beyond 80% of the time.

"Extensively" is a very subjective and less than quantitative term. I was using the values in figure 22c and while at a conference in July 2010 I had the opportunity to discuss this with Debra Pickering, a senior consultant with Marzano Research Labs, and I mentioned the chart. And I got a reply I didn't expect:

"We're not seeing that trend so much now. It appears that the teachers who used the boards more often simply became more proficient with the technology and that proficiency coupled with good teaching lead to the academic gains."

So, going back to the Summary of Phase I of Marzano and Haystead's research (p. 41):

When corrected for attenuation, the percentile gain associated with the use of Promethean ActivClassroom is 17 percent... Additionally, the meta-analysis of the seven types of moderator variables indicated conditions under which the technology might produce maximum results. Considered as a set, one might predict relatively large percentile gains in student achievement under the following conditions:
  • A teacher is experienced.
  • A teacher has used Promethean ActivClassroom for an extended period of time.
  • A teacher uses Promethean ActivClassroom extensively in the classroom but not beyond 80% of the time.
  • A teacher has high confidence in his or her ability to use Promethean ActivClassroom.

Building a rubric from some of these indicators would be spurious: grading on experience and time using ActivClassroom is inequitable ("when I show up tomorrow, I have another day of experience; how much does that boost my grade?"), and any attempt at evaluating another's confidence really means evaluating their acting ability. So that leaves me with the "extensively" worded time statement.

However, there was a Phase II of the study.

The summary of Phase II deals with teaching techniques and is found on page 64:

Taken at face value, the multiple correlation of .821 (see Figure 43) might suggest a strong effect on student achievement under the following conditions:
  • The teacher organizes content into small digestible bites that are designed with students’ background knowledge and needs in mind (i.e., the teacher chunks new content).
  • These chunks of new content logically lead one to the other (i.e., understanding the first chunk helps students understand the second chunk and so on).
  • While addressing chunks, the teacher continually determines whether the pace must be slowed or increased to maintain high engagement and understanding (pacing).
  • The teacher monitors the extent to which students understand the new content (monitoring).
  • When it is evident that students do not understand portions of the content, the teacher reviews the content with the class or re-teaches it.
  • During each chunk, the teacher asks questions and addresses them in such a way that all students have an opportunity to respond and answers are continually examined as to their correctness and depth of understanding.

So the method is chunking, pacing, monitoring, reteaching (if needed), and formative assessment. Furthermore, the chunking depends on having a logical flow, being appropriately paced, and using formative assessment to gauge understanding.

In fact, in a recent article (Marzano, Robert J. "Teaching with Interactive Whiteboards". Educational Leadership. November 2009: 80-82.), he noted:

"[S]ome potential pitfalls in using the technology: Using the voting devices but doing little with the findings. In many classrooms, teachers simply noted how many students obtained the correct answer instead of probing into why one answer was more appropriate than another."

Marzano's focus is clearly on formative assessment (of course, he's also written a lot on that very topic, part of his Classroom Strategies That Work series).

Table I: Chunking Rubric

  • Score 1: During instruction, some chunking (less than 75%). Some logical leading and/or pacing and/or monitoring and/or reteaching and/or evidence of formative assessment used to direct instruction.
  • Score 2: Instruction is mostly chunked (75% or more). Some logical leading and/or pacing and/or monitoring and/or reteaching and/or evidence of formative assessment used to direct instruction.
  • Score 3: During instruction, some chunking (less than 75%). When chunking occurs, there is a logical structure with pacing, monitoring, and reteaching as needed, and there is evidence of formative assessment used to direct instruction.
  • Score 4: Instruction is mostly chunked (75% or more). When chunking occurs, there is a logical structure with pacing, monitoring, and reteaching as needed, and there is evidence of formative assessment used to direct instruction.
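Read as a grid, the score boils down to two yes/no observations: is instruction mostly chunked (75% or more), and do the chunks show the full structure (logical flow, pacing, monitoring, reteaching as needed, formative assessment) rather than only some of those elements? Here's a minimal Python sketch of that reading; the function name and boolean inputs are my own framing, not part of the rubric itself:

```python
def chunking_score(mostly_chunked: bool, full_structure: bool) -> int:
    """Table I score (1-4) from two observations.

    mostly_chunked: True if 75% or more of instruction is chunked.
    full_structure: True if chunks show a logical structure with pacing,
        monitoring, reteaching as needed, and formative assessment
        directing instruction (False if only some elements are present).
    """
    # (False, False) -> 1, (True, False) -> 2,
    # (False, True) -> 3, (True, True) -> 4
    return 1 + (1 if mostly_chunked else 0) + (2 if full_structure else 0)
```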

Number of Students Present Engaged in Session

I used to look at the engagement of students. One of the downfalls of interactive whiteboards pointed out by Dr. Pickering (and numerous naysayers on the Internet) is that while one student is at the board, the other students can easily become bored. Focusing on the time students spend at the board rather than on teaching techniques may actually hamper the engagement we seek from IWBs. And if the whole class is engaged because of Learner Response Systems, the focus should be on the formative assessment power of the voting units rather than the time spent voting (going back to Marzano's point about what the instructor does with the data derived).

This indicator has been removed, as we have better indicators of success. Engagement is important, but we should be looking at the formative assessment data so that we provide more appropriate feedback for our students.

Revision: 01-20-10

Majority of Technology usage focused on this level of Bloom's Taxonomy

This is a measure of where the majority of the instructor's questions lie. I think it's wrong to demonize the use of low-level questions when they are appropriate for learning, but the goal should be higher-level thinking skills:

Majority of Technology usage focused on this level of Bloom's Taxonomy (Cognitive Domain)

  • Level 4 (Analysis), Level 5 (Synthesis) or Level 6 (Evaluation) = 4 points
  • Level 3 (Application) = 3 points
  • Level 2 (Comprehension) = 2 points
  • Level 1 (Knowledge) = 1 point

Note: I'm still using the older names; they have since been revised, and this Wiki page describes Lorin Anderson's revision of Benjamin S. Bloom's taxonomy.

Revision: 09-16-09

Calculation Revision

To calculate this, use the most frequent level of question asked if it is also the highest level asked; in all other cases, use standard mathematical averaging, rounding 0.49 and below down and 0.50 and above up. (A short code sketch of this rule follows the examples below.)

Example 1: a teacher asks 3 knowledge-level questions and 3 synthesis-level questions; where is he? Since neither level is in the majority, averaging is used: 3 knowledge (3 x 1 = 3) plus 3 synthesis (3 x 5 = 15) gives 3 + 15 = 18; divided by 6 (the number of questions asked), that is an overall level of 3.

Example 2: a teacher asks 3 knowledge-level questions and 4 synthesis-level questions; where is she? Since the majority is at the synthesis level (4 synthesis vs. 3 knowledge) and synthesis (level 5) is also the highest level asked, she is at level 5. No averaging is needed.

Example 3: a teacher asks 4 knowledge-level questions and 3 synthesis-level questions; where is she? While there is a majority, it is not at the highest level posed to the students, so averaging is used: 4 knowledge (4 x 1 = 4) plus 3 synthesis (3 x 5 = 15) gives 4 + 15 = 19. Seven questions were asked, so 19 / 7 = 2.71, which rounds up to 3.

Example 4: a teacher asks 2 knowledge-level questions, 3 analysis-level questions, and 1 synthesis-level question; where is he? Again, the majority of questions is not at the highest level posed to the students, so averaging is used: (2 x 1) + (3 x 4) + (1 x 5) = 19; 19 / 6 = 3.17, which rounds down to 3.
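Since the rule mixes a majority check with averaging, here's a minimal Python sketch of the calculation; the function name bloom_score and the list-of-levels input are illustrative, not part of the rubric. Note the rule doesn't say whether a tie (as in Example 1) counts as a majority; the sketch treats a tie as no majority, which matches that worked example:

```python
import math
from collections import Counter

# Bloom's cognitive levels as used in this rubric:
# 1 = Knowledge, 2 = Comprehension, 3 = Application,
# 4 = Analysis, 5 = Synthesis, 6 = Evaluation

def bloom_score(levels):
    """Overall Bloom's level for a list of observed question levels.

    If the most frequent level is also the highest level asked, use it
    directly; otherwise average all question levels, rounding 0.49 and
    below down and 0.50 and above up.
    """
    counts = Counter(levels)
    highest = max(levels)
    most_frequent, freq = counts.most_common(1)[0]
    # A "majority" level must be strictly more frequent than any other
    # (a tie, as in Example 1, falls through to averaging).
    has_majority = sum(1 for c in counts.values() if c == freq) == 1
    if has_majority and most_frequent == highest:
        return highest
    avg = sum(levels) / len(levels)
    return math.floor(avg + 0.5)  # round half up, per the stated rule

# The four worked examples above:
print(bloom_score([1, 1, 1, 5, 5, 5]))      # Example 1 -> 3
print(bloom_score([1, 1, 1, 5, 5, 5, 5]))   # Example 2 -> 5
print(bloom_score([1, 1, 1, 1, 5, 5, 5]))   # Example 3 -> 3
print(bloom_score([1, 1, 4, 4, 4, 5]))      # Example 4 -> 3
```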


Revision: 09-16-09

What Counts As A Question

In all cases, the context of the question overrides the actual words used. There is no time limit on context, and re-asking the same question does not count as a separate question.

Example 1: different questions, same words

A math teacher has several problems for her students. On question 1, the teacher asks Jenny, "How would you solve this problem?" Jenny gives her answer, and the teacher moves on to question 2. She asks Billy, "How would you solve this problem?"

The teacher has asked two questions using the same words, "How would you solve this problem?", because the questions focus on different math problems (the context). Each math problem has to be analyzed and solved in its own way; the wording is merely repeated, but these are still two questions.

Example 2: same question, different words

At question 2, when the teacher asks Billy, "How would you solve this problem?", Billy is called out of the room by the office just as he starts. The teacher then asks Bobby, "Can you solve this?"

The context is the same (both questions focus on problem 2). Although the wording is different, the teacher is still having the class work on the same problem; thus, this is still the same question.

Example 3: same question, different words, different times

Our math teacher is working the problem 2x + 5 = 17. The teacher turns to Shaniqua and says, "In this problem, x is a variable. Shaniqua, can you tell me what a variable is?" Seven minutes and five problems later, while working on 3x - 2 = 16, the teacher turns to Rajeesh and says, "Raj, can you remind us what a variable is?" This is the same question, worded differently and asked at a later time, but it still counts as the same question. There is no penalty for rechecking comprehension, no matter how many times the question is repeated.
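For tallying purposes, the rule amounts to counting distinct contexts rather than distinct wordings or askings. Here's a minimal sketch of the three examples above; the context labels are my own and purely illustrative:

```python
# Each observed question is (context, wording). Re-asking in the same
# context never increments the tally, regardless of wording or timing.
observed = [
    ("math problem 1", "How would you solve this problem?"),
    ("math problem 2", "How would you solve this problem?"),  # new context: new question
    ("math problem 2", "Can you solve this?"),                # same context: same question
    ("define variable", "Can you tell me what a variable is?"),
    ("define variable", "Can you remind us what a variable is?"),  # re-check: same question
]

distinct_questions = {context for context, _wording in observed}
print(len(distinct_questions))  # 3
```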

Revision: 01-20-10

NETS-T

This has been changed from an individual evaluation of each indicator to a single overall look at these indicators: ISTE's NETS-T (http://www.iste.org/Content/NavigationMenu/NETS/ForTeachers/2008Standards/NETS_for_Teachers_2008.htm)

The rubric now looks at how the educator performs on the NETS overall as one indicator, so as not to let the NETS weigh too heavily in the equation.

Table II: NETS

  • Score 1: Met any 1 of the 5 standards.
  • Score 2: Met any 2 of the 5 standards.
  • Score 3: Met any 3 of the 5 standards.
  • Score 4: Met any 4 (or more) of the 5 standards.
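As a formula, the Table II score is simply the number of standards met, capped at 4. A one-line Python sketch (the function name is mine, and the table doesn't define a score when no standard is met):

```python
def nets_score(standards_met: int) -> int:
    """Table II score from the number of NETS-T standards met (1-5)."""
    return min(standards_met, 4)  # meeting 4 or all 5 standards scores 4
```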

Revision: 08-23-10

Summary

Therefore, there are now only three indicators looked at in a single visit:

Table I: Chunking Rubric

  • Score 1: During instruction, some chunking (less than 75%). Some logical leading and/or pacing and/or monitoring and/or reteaching and/or evidence of formative assessment used to direct instruction.
  • Score 2: Instruction is mostly chunked (75% or more). Some logical leading and/or pacing and/or monitoring and/or reteaching and/or evidence of formative assessment used to direct instruction.
  • Score 3: During instruction, some chunking (less than 75%). When chunking occurs, there is a logical structure with pacing, monitoring, and reteaching as needed, and there is evidence of formative assessment used to direct instruction.
  • Score 4: Instruction is mostly chunked (75% or more). When chunking occurs, there is a logical structure with pacing, monitoring, and reteaching as needed, and there is evidence of formative assessment used to direct instruction.

Majority of Technology usage focused on this level of Bloom's Taxonomy (Cognitive Domain)

  • Level 4 (Analysis), Level 5 (Synthesis) or Level 6 (Evaluation) = 4 points
  • Level 3 (Application) = 3 points
  • Level 2 (Comprehension) = 2 points
  • Level 1 (Knowledge) = 1 point

and

Table II: NETS

  • Score 1: Met any 1 of the 5 standards.
  • Score 2: Met any 2 of the 5 standards.
  • Score 3: Met any 3 of the 5 standards.
  • Score 4: Met any 4 (or more) of the 5 standards.
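How (or whether) these three scores combine into a single grade is beyond what's specified here; as a purely illustrative sketch, a per-visit record might simply collect all three, reusing the functions sketched above:

```python
# Purely illustrative: gather the three per-visit indicator scores,
# reusing chunking_score, bloom_score, and nets_score from the sketches
# above. A rule for combining them into one number is not defined here.
def visit_scores(mostly_chunked, full_structure, question_levels, nets_met):
    return {
        "chunking": chunking_score(mostly_chunked, full_structure),  # Table I
        "blooms": bloom_score(question_levels),                      # Bloom's level
        "nets": nets_score(nets_met),                                # Table II
    }

# Example visit: mostly chunked with full structure, a mix of question
# levels, and 3 NETS-T standards met.
print(visit_scores(True, True, [1, 3, 4, 4, 5], 3))
# -> {'chunking': 4, 'blooms': 3, 'nets': 3}
```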
