by MARK WEST
Most Recent Update
Referring to the final report on ActivClassroom by Haystead and Marzano (finalreportonactivclassroom.pdf), specifically figure 22c on page 38, one is led to believe that 75%-80% is the ideal time goal. In fact, I used to use the weightings of that table to define this rubric. And yet, in the findings section (page 41), the summary states,
A teacher uses Promethean ActivClassroom extensively in the classroom but not beyond 80% of the time.
"Extensively" is a very subjective and less than quantitative term. I was using the values in figure 22c, and while at a conference in July 2010 I had the opportunity to discuss this with Debra Pickering, a senior consultant with Marzano Research Labs. When I mentioned the chart, I got a reply I didn't expect:
"We're not seeing that trend so much now. It appears that the teachers who used the boards more often simply became more proficient with the technology, and that proficiency coupled with good teaching led to the academic gains."
So, going back to the Summary of Phase I of Marzano and Haystead's research (p. 41):
When corrected for attenuation, the percentile gain associated with the use of Promethean ActivClassroom is 17 percent... Additionally, the meta-analysis of the seven types of moderator variables indicated conditions under which the technology might produce maximum results. Considered as a set, one might predict relatively large percentile gains in student achievement under the following conditions:
Building a rubric from some of these indicators would be spurious: grading on experience and time using ActivClassroom would be inequitable ("when I show up tomorrow, I have another day of experience; how much does that boost my grade?"), and any attempt at evaluating another's confidence really means evaluating their acting ability. So that leaves me with the "extensively" worded time statement.
The summary of phase II dealt with teaching techniques and is found on page 64:
Taken at face value, the multiple correlation of .821 (see Figure 43) might suggest a strong effect on student achievement under the following conditions:
So the method is chunking, pacing, monitoring, reteaching (if needed), and formative assessment. Furthermore, the chunking depends on having a logical flow, being well paced, and using formative assessment to gauge understanding.
In fact, in a recent article (Marzano, Robert J. “Teaching with Interactive Whiteboards”. Educational Leadership. November 2009: 80-82.), he wrote:
Marzano's focus is clearly on formative assessment (of course, he's also written a lot on that very topic, part of his Classroom Strategies That Work series).
I used to look at engagement of students. One of the downfalls of Interactive Whiteboards pointed out by Dr. Pickering (and numerous naysayers on the Internet) is that while one student is at the board, other students can easily become bored. So focusing on the time students spend at the board rather than on teaching techniques may actually hamper the engagement we seek from IWBs. And if the whole class is engaged because of Learner Response Systems, the focus should be on the formative assessment power of the voting units rather than the time spent voting (going back to Marzano's point about what the instructor does with the data derived).
This has been removed as we have better indicators for success than this. Engagement is important, but we should be looking at the formative assessment data so that we provide more appropriate feedback for our students.
This is a measure of where the majority of the instructor's questions lie. I think it's wrong to demonize the use of low level questions when they are appropriate for learning, but the goal should be higher level thinking skills:
Note: I'm still using the older names; these names have been revised and this Wiki page describes Lorin Anderson's revision of Benjamin S. Bloom's Taxonomy.
To calculate this, use the most frequent level of questions if that level is also the highest level asked; in all other cases, I will use standard mathematical averaging, rounding 0.49 and below down and 0.50 and above up.
Example 1: a teacher asks 3 knowledge level questions and 3 synthesis level questions, where is he? 3 knowledge (3 x 1 = 3) plus 3 synthesis (3 x 5 = 15) gives 3 + 15 = 18. Dividing by the 6 questions asked, 18 / 6 = 3, for an overall level of 3.
Example 2: a teacher asks 3 knowledge level questions and 4 synthesis level questions, where is she? Since the majority is at the synthesis level, she is at level 5. (No math is used here: there are 4 synthesis [level 5] vs. 3 knowledge [level 1] questions, and level 5 is the highest level of question used.)
Example 3: a teacher asks 4 knowledge level questions and 3 synthesis level questions, where is she? While there is a majority, it's not at the highest level of question posed to the students, so averaging is used: 4 knowledge (4 x 1 = 4) plus 3 synthesis (3 x 5 = 15) gives a total of 4 + 15 = 19. Since 7 questions were asked, 19 / 7 = 2.71, which rounds up to 3.
Example 4: a teacher asks 2 knowledge level questions, 3 analysis level questions, and 1 synthesis level question, where is he? Again, the majority of questions is not at the highest level posed to the students, so averaging is used: (2 x 1) + (3 x 4) + (1 x 5) = 19; 19 / 6 = 3.17, which rounds down to 3.
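The scoring rule and the four examples above can be sketched in code. This is a minimal sketch, not part of the rubric itself: the function name, the level weights, and the handling of ties (a tie means no single most frequent level, so averaging applies, as in Example 1) are my assumptions.

```python
import math
from collections import Counter

# Bloom's original taxonomy levels, weighted 1 (lowest) to 6 (highest).
BLOOM_LEVELS = {
    "knowledge": 1, "comprehension": 2, "application": 3,
    "analysis": 4, "synthesis": 5, "evaluation": 6,
}

def question_level(question_names):
    """Overall question level for a lesson, given each question's Bloom level name."""
    levels = [BLOOM_LEVELS[name] for name in question_names]
    counts = Counter(levels)
    (top_level, top_count), *rest = counts.most_common()
    # A single most frequent level requires a strict majority over every other level.
    has_single_most_frequent = not rest or top_count > rest[0][1]
    # Rule 1: if the most frequent level is also the highest level asked,
    # that level is the score (Example 2).
    if has_single_most_frequent and top_level == max(levels):
        return top_level
    # Rule 2: otherwise average the levels, rounding .50 and above up and
    # .49 and below down (Examples 1, 3, and 4).
    return math.floor(sum(levels) / len(levels) + 0.5)
```

Running the four examples through this sketch reproduces the hand-worked results: Examples 1, 3, and 4 come out at level 3, and Example 2 at level 5.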
In all cases, the context of the question overrides the actual words used. There is no time limit for context, and re-asking the same question does not count as a separate question.
A math teacher has several problems for her students. On question 1, the teacher asks Jenny, "How would you solve this problem?". Jenny gives her answer, and the teacher moves on to question 2. She asks Billy, "How would you solve this problem?"
The teacher has asked two questions using the same words, "How would you solve this problem?" and that's because the questions focus on different math problems (the context). Each math problem will have to be analyzed and solved in its own way; the wording of the question is merely redundant, but it's still two questions.
On question 2, when the teacher asks Billy, "How would you solve this problem?", Billy starts to answer but is called out of the room by the office. The teacher then asks Bobby, "Can you solve this?"
The context is the same (both questions focus on problem 2). Although the wording is different, the teacher is still having the class work on the same problem; thus, this is still the same question.
Our math teacher is working with a problem 2x+5=17. The teacher turns to Shaniqua and says, "In this problem, x is a variable. Shaniqua, can you tell me what a variable is?" Seven minutes and five problems later, while working on 3x-2=16, the teacher turns to Rajeesh and says, "Raj, can you remind us what a variable is?" This is the same question, worded differently, and at a later time, but it still counts as the same question. There is no penalty for rechecking comprehension, no matter how many times the question is repeated.
This has been changed from individual evaluations of each indicator to a single overall look at these indicators: ISTE's NETS-T (http://www.iste.org/Content/NavigationMenu/NETS/ForTeachers/2008Standards/NETS_for_Teachers_2008.htm)
This now looks at how the educator performs on the NETS overall as a single indicator, so that the NETS do not weigh too heavily in the equation.
Therefore, there are now only three indicators looked at in a single visit: