Students who are just
entering the university often take their introductory courses in large
lecture halls. Not only are these students receiving their first exposure
to novel topics, but they also are usually getting their first experience
with large classroom dynamics that present special challenges to both
teachers and students. Some educators have made a compelling case that large
classrooms violate many tenets of effective teaching and learning (Herreid,
2006; Trees & Jackson, 2007). Entire books are devoted to addressing the
unique challenges of this learning environment (e.g., McKeachie, Chism,
Menges, Svinicki, & Weinstein, 1994; Stanley & Porter, 2002). Perhaps the
greatest challenge mentioned is the lack of immediate personal communication
between students and teacher about what is and is not being learned (Nagle,
2002).
Roschelle, Penuel, and Abrahamson (2004) observed that effective learning occurs in classroom
environments that are learner-, knowledge-, assessment-, and
community-centered. Obviously, teachers need to produce lectures that are
informative; that is, are knowledge-centered. Beyond this, however,
teachers need to create a learner-centered environment where their teaching
methods encourage students to think actively during lectures and to engage
with material they are hearing. It is also important that instruction be
assessment-centered so that it positions students to learn more
by giving them immediate feedback about their understanding of the lecture
material. Finally, acknowledging that learning is a social event, Roschelle
and colleagues emphasized how important it is for a teacher to create a
sense of community in the classroom by pointing out in word and practice
that students are sharing a common purpose, which is to learn the material
at hand. Generally, the large classroom setting has excelled in the realm
of being knowledge-centered, but it has suffered in quality with respect to
being learner-centered, assessment-centered, and community-centered,
especially when compared to smaller classes. I contend that the appropriate
use of a student response system can help ameliorate this situation.
Students who are actively
engaged in the learning process learn more than do students not so engaged
(Benjamin, 1991; Langer, 2000; Stanley & Porter, 2002; Yoder & Hochevar, 2005).
They engage in learning when they personally identify with the material
taught, when they see that the material is relevant, or when they see that
the topic is important to other students around them. Students are more
likely to care about learning when they receive evidence that the teacher
cares about what they are learning, and teachers are more effective when
they obtain feedback about their teaching (Teven & McCroskey, 1997). In
other words, teachers in large classroom settings need feedback, too. They
need to know if their students are “with them” in the lesson or if the
material needs to be presented again in a different way.
Indeed, the large classroom
setting is a difficult place to ask and answer questions, and for effective
learning to take place, questions and answers need to be generated (Sorcinelli,
2002). Often, however, the few students who are bold enough to ask
questions in this setting are not representative of the rest of the class
(Graham, Tripp, Seawright, & Joeckel, 2007; Herreid, 2006). The most vocal
students under these conditions, for example, sometimes ask strange or
personal questions that are difficult to answer effectively and sensitively
in a large classroom setting. Acoustic constraints and other interpersonal
factors associated with the large class can prevent one from probing and
clarifying with flexibility what the student is asking. When
students ask questions in a large classroom setting that teachers cannot
always field well, an implicit understanding may develop between the
teacher and student to forgo these exchanges. Furthermore, answering
questions in this setting can be counterproductive to student learning
because the flow of a lecture is upset and the teacher cannot take into
consideration the various needs and perspectives of a large and diverse
audience efficiently or adequately.
Although not a panacea for
all the barriers inherent in the large classroom environment, when properly
introduced and used, student response systems can help teachers overcome
many of these sorts of challenges (Bruff, 2009; Graham et al., 2007; Herreid,
2006; Trees & Jackson, 2007). Although there are various student response
systems commercially available, they all contain three basic components: (1)
hand-held transmitters (hereinafter referred to as “clickers”) used by
students to answer questions posed to them, (2) a receiver that is connected
to a teacher’s computer to register clicker signals, and (3) software
installed on the teacher’s computer that is dedicated to processing and
graphically presenting students’ responses to questions. When an instructor
presents a multiple-choice item to the class on a PowerPoint slide during a
lecture, students register their answers using their
clickers, and the instructor can provide immediate visual feedback to the
whole class. By asking various types of questions, teachers can use this
technology effectively to tap into the diversity of a large classroom to
create teaching points and illustrate real world issues to which the whole
class can become attuned (Beekes, 2006; Bruff, 2009; Ferguson, 1992; Graham
et al., 2007).
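As a concrete illustration of this feedback loop, the brief sketch below (written in Python for exposition; it is not part of any commercial response system, and the function name is hypothetical) tallies a set of clicker answers to a single multiple-choice item and prints the kind of rough response distribution an instructor might project.

    from collections import Counter

    def summarize_responses(responses):
        """Tally clicker answers to one item and print a rough distribution."""
        counts = Counter(responses)
        total = len(responses)
        for option in sorted(counts):
            share = counts[option] / total
            print(f"{option}: {counts[option]:3d} ({share:6.1%}) " + "#" * round(share * 40))

    # Hypothetical answers to one multiple-choice question from a lecture section.
    summarize_responses(["A", "C", "B", "C", "C", "D", "A", "C", "B", "C"])
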
To date, a number of
researchers have evaluated clicker technology to varying extents and
purposes and have found it to be quite promising (Stowell & Nelson, 2007).
Cleary (2008) has reported on the advantages of using clickers for gathering
research data. Others, including Morling, McAuliffe, Cohen, and DiLorenzo
(2008), have reported instructional gains when using clickers to administer
in-class quizzes. Researchers have examined student attitudes toward
clickers and the context within which clickers work best (Herreid, 2006;
Trees & Jackson, 2007). The current study adds to this growing literature
by investigating the relationship between clicker use and student test
performance by analyzing the detailed information collected on each student
with clicker software. I specifically address the question of whether
clicker activities help to create a class environment that contributes to
student test performance. Using the terminology of Roschelle et al. (2004),
my particular interest is in assessing a learner-centered application of
clicker technology.
One hundred and seventy-one
students enrolled in an introductory psychology course taught at a large
R-1, land-grant university located in the southeastern United States
provided data for the primary analyses of this study. Students
participating in the study had the following demographic characteristics:
58% male, 42% female; 81% Caucasian, 7% African-American, 3% Hispanic, 4%
Asian, 4% other, and 1% not specified. Student participants ranged in age
from 17 to 31 years, with a median age of 19. The majority of the students
(58%) were first-year undergraduates as might be expected in an introductory
level psychology course. The remainder of the participants consisted of 23%
sophomores, 9% juniors, 5% seniors, and 5% undergraduate special students
(i.e., students not enrolled in a degree program in the university).
Research shows that the way
teachers incorporate a student response system into their instruction and
grading procedures likely influences the impact that it will have on
students’ general acceptance of the system and its effects on learning (Bruff,
2009; Trees & Jackson, 2007). At the beginning of the semester, students
read on the course syllabus and heard from the instructor during an
introductory lecture that he would be using the response system mainly to
encourage them to actively participate and interact with the lecture
material. Students also heard that using their clickers would earn them a
small amount of participation credit. As much as possible, the lectures
included clicker activity in a way that might promote a sense of competence
and self-determination in a student (Graham et al., 2007; Ryan & Deci,
2000).
Throughout the semester,
students responded to various kinds of clicker items presented to them via
PowerPoint presentations. Because clicker activity emphasized a
“learner-centered” rather than an “assessment-centered” application of the
clicker technology, students received fractions of a point each day for
simply responding to items. In other words, the instructor used the clicker
questions to encourage students to actively attend to and respond to lecture
information. The students knew that the clickers were not going to be used
for administering quizzes or for just taking attendance, a practice that
would be primarily in line with an assessment-centered application of clicker
technology.
To engage students during
lectures, the instructor posed various kinds of questions. Some of the
items to which students responded simply requested opinions and personal
information (e.g., “Do you know someone with Alzheimer’s disease?”; “Are you
right- or left-handed?”). Other items assessed students’ knowledge of facts
(e.g., “The retina consists of rods and ___”). These items referred to
material already covered in class to review or check student learning. To
stimulate critical thinking and to extend the application of previously
covered information, however, students sometimes responded to factual
items before hearing about the information in lecture. Finally,
lectures included other items to facilitate in-class demonstrations. For
example, the instructor converted demonstrations that formerly required
students in the class to respond by raising their hands to ones that
students could respond to by “clicking.” Thus, instead of “look around the
room” approximations that may be difficult to make in large auditorium
settings, students quickly received accurate visual displays of response
distributions on a PowerPoint slide. These visual displays literally and
symbolically incorporated the students’ involvement in the lecture
presentation. Students received a clear indication that others in class
were thinking and answering as a community of learners. Although the
instructor presented a variety of items to students during lecture, he did
not attempt to vary item type systematically.
The instructor did not
radically change the content of his lectures when using the student response
system and was able to cover the same amount of material as in previous
semesters. The main impact on the lecture was the addition of the clicker
items to the PowerPoint presentations. Although there are many student
response systems available, the instructor used TurningPoint 2008 developed
by Turning Technologies, LLC (see Graham et al., 2007, for an excellent
description of this system).
Students were not required
to have clickers, but in fact, all but five students purchased them. It is
likely that the availability of participation credit encouraged them to
purchase the clickers, even though the instructor made it clear to students
that points not obtained by participation could otherwise be acquired by
answering bonus point questions on tests. Students indicated on a course
and instructor evaluation survey conducted at midterm that they were
generally positive about using clickers: specifically, about 70% of the
students strongly agreed that the “clicker was a valuable aid to learning.”
Clicker Activity. I
used the software accompanying the student response system to record clicker
activity on a daily basis for each student. The software creates a text
file that contains accounts of how many times each of the students used his
or her clicker. In addition, for items that had clear correct and incorrect
answers, the software also tracked the percentage of clicker items that each
student answered correctly. Daily files were merged to produce one large
dataset to assess how students used their clickers over the semester. From
this dataset, I derived two indices of clicker activity: 1) clicker use
was a measure of the total number of times a student used his or her clicker
during the semester, and 2) clicker performance was a measure of the
average percentage of items with a correct answer that a student answered
correctly per day over the semester.
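To make the derivation of these two indices concrete, the following sketch (Python with pandas; the data frame and its column names are hypothetical stand-ins for the merged daily files, not the TurningPoint export format) computes clicker use and clicker performance for each student.

    import pandas as pd

    # Hypothetical merged dataset: one row per student per class day.
    daily = pd.DataFrame({
        "student_id":  [1, 1, 2, 2, 2],
        "responses":   [3, 4, 2, 5, 4],                 # clicks registered that day
        "pct_correct": [50.0, 25.0, 0.0, 60.0, 40.0],   # daily % correct on scorable items
    })

    indices = daily.groupby("student_id").agg(
        clicker_use=("responses", "sum"),               # total clicks over the semester
        clicker_performance=("pct_correct", "mean"),    # mean daily % correct
    )
    print(indices)
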
Test Performance. I
measured test performance by summing the points scored by students on three
multiple-choice format tests administered at equal intervals across the last
three quarters of the semester. Each student’s first quarter test score
served as a control variable in the data analyses described later. It is
important to note that the tests did not include clicker items used in the
lectures, although these tests certainly assessed the information contained
in these items. Consequently, any relationship that might exist between
test performance and clicker performance would not be the result of these
two measures containing common items.
Class Absence. A
teaching assistant checked class attendance using a seating chart and
recorded students as being “present” if a seat assigned to them was
occupied. This measure also served as a control variable in a manner
described more fully later.
Descriptive statistics of
and intercorrelations among variables are presented in Table 1. As would be
expected, class absence had strong negative correlations with each of the
clicker activity scores (clicker use, r(171) = -.87; clicker
performance, r(171) = -.82) and was negatively correlated with test
performance (r(171) = -.44). The two clicker activity scores were
highly correlated (r(171) = .93). Those who used their clickers
frequently also answered a larger percentage of performance items correctly
on a daily basis. In addition, clicker performance was more highly
correlated with test performance (r(171) = .55) than was clicker use
(r(171) = .42), and the difference between these two dependent correlations was
significant, t(168) = 5.649, p < .001.
Table 1. Intercorrelations between Clicker Activity Scales, Absence, and Test Scores
1) Class Absences
2) Score on Test 1
3) Test Performance
4) Clicker Use
5) Clicker Performance
Note. N = 171. All correlations are significant at the p < .01 level (2-tailed).
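The difference between the two dependent correlations reported above is consistent with Williams' t test for comparing correlations that share a variable (here, test performance); the study does not name the exact procedure, so the following sketch (Python) should be read as one plausible reconstruction rather than the study's own code.

    import math

    def williams_t(r13, r23, r12, n):
        """Williams' t for comparing two dependent correlations (r13 vs. r23)
        that share variable 3; the statistic has n - 3 degrees of freedom."""
        det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
        rbar = (r13 + r23) / 2
        denom = 2 * ((n - 1) / (n - 3)) * det + rbar**2 * (1 - r12) ** 3
        return (r13 - r23) * math.sqrt((n - 1) * (1 + r12) / denom)

    # Correlations reported in the text: clicker performance (.55) and clicker
    # use (.42) with test performance, and .93 between the two clicker indices.
    print(williams_t(r13=0.55, r23=0.42, r12=0.93, n=171))  # approximately 5.65, df = 168
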
Students used their clickers
an average of 83.8 times over 35 class meetings during the semester. In
addition, of those items that students could answer either correctly or
incorrectly, they answered an average of 39% of the items per day correctly.
Clearly, students were answering these particular clicker questions
incorrectly the majority of time. It is possible that lower performance on
the “critical thinking” items, which I used to probe students’ knowledge of
subjects that I had not yet covered in lecture, partially accounts for the
low percentage of items answered correctly.
A three-step hierarchical
regression analysis assessed the impact of clicker activity on test
performance. In the first step, test performance was regressed onto the
score students achieved on Test 1 and the number of times they were absent
from class. The score on the first test of the semester was included to
control for the effect of individual differences in student test-taking
ability and for differences in general background knowledge of psychology
among students. The total number of classes missed was included in the
regression equation to control for the effect that simply being (or not
being) in the class had on test performance. Controlling for the number of
classes missed was particularly important to do given the amount of shared
variance between absence and the two clicker activity variables. In steps 2
and 3 of the regression analysis, clicker performance and clicker use,
respectively, were entered into the regression equation to predict test
performance.
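The incremental F tests reported below follow from the R² values of the nested models. The sketch below (Python with statsmodels; the data frame, its column names, and the synthetic values are hypothetical placeholders rather than the study's data) illustrates the three-step procedure and the standard F-change formula.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def r_squared(df, outcome, predictors):
        """Fit an OLS model and return its R-squared."""
        X = sm.add_constant(df[predictors])
        return sm.OLS(df[outcome], X).fit().rsquared

    def f_change(r2_full, r2_reduced, n, k_full, added=1):
        """F statistic for the R-squared increment when 'added' predictors enter."""
        return ((r2_full - r2_reduced) / added) / ((1 - r2_full) / (n - k_full - 1))

    # Synthetic stand-in data for 171 students; column names are hypothetical.
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.normal(size=(171, 5)),
                      columns=["test_performance", "test1", "absences",
                               "clicker_performance", "clicker_use"])

    steps = [["test1", "absences"],
             ["test1", "absences", "clicker_performance"],
             ["test1", "absences", "clicker_performance", "clicker_use"]]
    r2 = [r_squared(df, "test_performance", p) for p in steps]
    print(f_change(r2[1], r2[0], n=171, k_full=3),   # step 2 increment (clicker performance)
          f_change(r2[2], r2[1], n=171, k_full=4))   # step 3 increment (clicker use)
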
Table 2 shows that a
considerable amount of test performance variance was accounted for with all
of the predictors in the equation, R² = .472, F(4,
166) = 37.05, p < .001; Test 1 scores and class absences accounted
for the bulk of this variance (R² = .381). Nevertheless,
clicker performance predicted a small but statistically significant
additional amount of test performance variance above the control variables,
ΔR² = .047, F(1, 167) = 13.62, p < .001.
Then, in step 3, clicker use also predicted a small but statistically
significant additional amount of test performance variance above that accounted for
by clicker performance, ΔR² = .044, F(1, 166) =
13.94, p < .001. The standardized regression coefficients associated
with Test 1 (.34), clicker performance (.87) and clicker use (-.69) were
significant (p < .001). The coefficient associated with clicker
performance indicated that a high level of clicking correctly (i.e.,
answering correct/incorrect clicker items correctly) was associated with
higher test performance. Interestingly, the negative coefficient associated
with clicker use (i.e., simply answering or not answering with a clicker
after controlling for clicker performance) indicated that simply clicking a
lot was associated with lower test performance. Although each clicker
activity variable predicts a uniquely significant and meaningful amount of
test performance variance (both positive and negative) in the expected
directions, researchers should interpret the regression coefficients
associated with these two highly correlated variables cautiously (Cohen &
Cohen, 2003; Johnson, 2000).
Table 2. Summary of Hierarchical Regression Analyses
for Clicker Activity Predicting Test Performance
after Controlling for Score on Test 1 and Class Absences (N = 171)
Step and Independent Variables
Step 1: Score on Test 1, Class Absences
Step 2: Score on Test 1, Class Absences, Clicker Performance
Step 3: Score on Test 1, Class Absences, Clicker Performance, Clicker Use
**p < .001.
To get a better idea of how
using clickers influenced test scores, I compared the test performance of
students who used clickers to the test performance of students in two
previous semesters who had not used clickers. Other than class standing, no
demographic data (i.e., race, age, and sex) were available for the
comparison classes. The classes were slightly different with respect to
class standing. In one of the semesters (Class 2) 93% of the students were
either freshmen or sophomores in comparison to 83% and 85% of the students
in Class 1 and the class using clickers, respectively. Because in previous
semesters I administered four 55-item tests instead of four 53-item tests, I
adjusted test score values so that they would be comparable across all
semesters. I simply adjusted the current semester scores by multiplying the
percentage correct by 220 (the number of points possible in the previous
semesters). For example, a total score of 190 out of 212 possible (89.6%) would be
adjusted to 197.12 (= .896 × 220). It is important to note that despite the
fact that the number of items on tests differed across the semesters,
analyses of performance data indicate that the tests were essentially
parallel in content.
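A minimal sketch of this rescaling (Python; the function name is mine, not part of the study's materials) is shown below.

    def rescale_score(raw_score, points_current=212, points_previous=220):
        """Convert a current-semester total (out of 212 points, i.e., four
        53-item tests) to the 220-point scale of the comparison semesters."""
        return raw_score / points_current * points_previous

    # The article rounds the proportion first (.896 * 220 = 197.12); without
    # that intermediate rounding, the same score rescales to about 197.17.
    print(round(rescale_score(190), 2))
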
Before classifying students in the current semester into low and high
clicker use groups, I conducted an ANOVA, which showed
no significant differences in total test scores across the
three semesters, F(2, 528) = .047, ns.
I then tested for
differences in means across four groups of students: current semester, high
clicker activity; current semester, low clicker activity; previous semester
class 1, no clickers (n = 178); and previous semester class 2, no
clickers (n = 182). High and low clicker activity in the current
semester was determined by taking a median split on clicker
performance and clicker use, respectively.
clicker use for each group appear in Table 3.
Table 3. Test Score across
Three Classes as a Function of Clicker Activity
Low Clicker Use
High Clicker Use
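The median split described above might be implemented as in the following sketch (Python with pandas; the example values are hypothetical).

    import pandas as pd

    # Hypothetical mean daily clicker-performance percentages for six students.
    clicker_performance = pd.Series([22.0, 35.5, 41.0, 47.5, 52.0, 58.5])
    high = clicker_performance > clicker_performance.median()
    groups = high.map({True: "high clicker performance", False: "low clicker performance"})
    print(groups.value_counts())
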
Because clicker usage was so
highly correlated with class attendance, I conducted an ANCOVA to control
for class attendance when testing for differences between means. The results
of this analysis appear in Table 4. Although the high clicker use group had
the highest test performance and the low clicker use group had the lowest
test performance, there were no significant mean differences in test scores
when high and low clicker use groups were compared to the mean test scores
obtained in other semesters, F(3, 525) = 1.44, ns. Small but
significant differences in mean test scores were found, however, when groups
formed on the basis of clicker performance were compared, F(3, 525) =
4.48, p < .01, η² = .03. Bonferroni t-tests
revealed that the test performance of the group that performed well on the
clicker questions for which a correct or incorrect response could be
assessed was significantly higher than the test performance of all the other
groups. Likewise, the group formed by low clicker performance had
significantly worse test performance than any of the other groups had.
Figure 1 shows differences in means across groups.
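For readers who wish to see the form of this analysis, the sketch below (Python with statsmodels; the data frame, column names, and synthetic values are hypothetical placeholders, not the study's data) fits test scores on group membership with class absence as the covariate, as in an ANCOVA.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical stand-in data: four groups, test scores, and absences.
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "group": rng.choice(["high_perf", "low_perf", "class1", "class2"], size=530),
        "absences": rng.integers(0, 15, size=530),
        "test_score": rng.normal(180, 15, size=530),
    })

    # ANCOVA: group differences in test scores after adjusting for absences.
    model = ols("test_score ~ C(group) + absences", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))
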
The premise of this study
was that teachers could offset some of the barriers to learning in a large
classroom setting by using a student response system. Not surprisingly,
class attendance alone significantly improves test performance, but the
evidence presented here indicates that while effect sizes were not large,
“active attendance,” as implied by higher clicker performance, contributes
to test performance over and above mere class attendance. The results of
this study indicate that test performance increases by a small but
statistically significantly margin among students who used their clickers on
a regular basis. Even after controlling for Test 1 scores (which functioned
as a proxy for ability), answering clicker questions correctly during class
predicted later test performance. This finding suggests that in-class
clicker performance could serve as an effective diagnostic tool to identify
students who are at risk for low test performance. Identifying such
students early, by including performance items
along with other types of clicker items, could enable targeted interventions
designed to improve learning and subsequent performance.
Table 4. Analysis of Covariance of Test Scores
across Different Semesters after Controlling for Absence from Class
a Number of times over the semester the clicker was used at least once during a class.
b Percentage of questions answered correctly each day, averaged over the
semester. N = 530.
*** p < .0001. ** p < .01.
Figure 1. Mean test performance scores across three classes as a function of clicker performance.
High clicker performance = Current Semester HCP (n = 84), Low clicker
performance = Current Semester LCP (n = 87).
In general, student test performance correlated positively with higher
clicker activity and particularly among students who answered more clicker
questions correctly. Regression analyses suggested that merely “clicking,”
however, does not necessarily produce desirable learning outcomes for all
students and that students perform best when they mindfully and correctly
answer the clicker questions posed to them. These analyses indicate that
after controlling for clicker performance, just clicking may actually be
negatively associated with test performance. This suggests that perhaps for
some students, clickers can be distracting or otherwise unhelpful. Because
clicker use and clicker performance are so highly correlated, one should
interpret this last conclusion cautiously because regression weights are
notoriously unstable in conditions of high multicollinearity (Cohen &
2003). Future research, therefore, should consider more directly the extent
and type of engagement that clickers engender.
Based on the current
findings, I would suggest that instructors continually remind students that
they should try to answer the clicker questions mindfully and not to “click
just for credit.” Perhaps removing the bonus credit offered for all clicker
activity, but continuing to offer it for correct answers to clicker questions
that can be scored as correct or incorrect, would reduce this
phenomenon. This, however, might also inspire equally mindless answer
sharing among students, especially in the large classroom setting, or
signal to students that clicker questions without right and wrong answers
(e.g., demonstrations and polling questions) are less worthy of their close
attention. Furthermore, students might become frustrated with the critical
thinking items. Recall that students can answer most of these items
correctly or incorrectly, but these items often appear before
students have had direct exposure to the new material they introduce.
Although the intention is to encourage students to stretch
their reasoning abilities, they may perceive the questions as being unfair,
especially if credit is involved. For now, it would appear that a teacher
should explicitly encourage students to do their best to answer questions
correctly and to engage actively in lectures. In addition, teachers should
be sure to allow enough time after posing a question for students to think
through their answers so that they do not feel pressured to click
indiscriminately. Overall, students who frequently used their clickers to
give correct answers performed better on tests. Given the current empirical
evidence and an abundance of literature arguing for the pedagogical
advantages of using student response technology, there is a sound basis for
researchers to continue to examine the interesting and promising findings
reported here.
I did not experimentally
manipulate clicker usage to observe its effects on test performance.
Rather, I collected clicker activity data within the context of ongoing
instruction. To strengthen the argument that using clickers improves test
performance, however, I compared these data with data collected in courses
taught in previous semesters in which students did not use clickers.
Although taught in different years, the content (lectures, order of
lectures, textbook, and assignments) and evaluative structure of the classes
(cut-off points for grades) across all the semesters were nearly identical.
One of the comparison semesters had a slightly larger percentage of freshmen
and sophomores enrolled. To the extent that all the classes were
practically similar, the comparison supports the conclusion that using
clickers leads to increased test performance, especially for students giving
generally correct answers.
Results obtained by Morling
and colleagues (2008), who employed a quasi-experimental design to test the
effects of clicker use on test performance, further support this conclusion.
Regrettably, as in the current study, the effect size they observed was
minimal. In addition, these researchers only used the clickers to
administer short quizzes at the beginning of class. In other words, they
did not use the clickers in an interactive manner during their
lecture. Stowell and Nelson (2007) evaluated the effectiveness of clickers
using a more controlled experimental design that compared using clickers to
other methods of soliciting student participation and drew conclusions that
were largely in favor of the effectiveness of the student response system.
Together these findings are encouraging, but to be more confident about the
causal influence clickers have on test performance, researchers should
continue to examine the student response system in more
controlled, experimental designs.
Future research should
include student grade point averages or similar assessments of general
academic ability, such as the SAT or ACT, to help control for the
effect that academic capability might have on clicker use. Statistically
controlling for academic ability would help to determine whether clicker use
contributes to higher test performance or whether it simply co-varies with
academic ability. I intended that students’ scores on the first test of the
semester would serve this purpose in the current study. Nevertheless, it
was only a very rough proxy for general academic ability and entering
knowledge of the subject.
Sorcinelli (2002) has
suggested that teachers should use formative (i.e., ungraded) quizzes in
class to increase student engagement in large classes. This technique
allows students to practice multiple-choice items and analyze their
responses. The student response system (i.e., clickers) can handle this
activity very easily and well. It would be a mistake, however, to use the
student response system simply as a means to administer quizzes. If the
system were used solely for this purpose, students might be lulled back into a “memorize and
respond” mode of participation. Furthermore, merely quizzing students using
the clickers without including additional instructional follow-up could
lower motivation and efficacy, especially among poorer-performing students. Under such
conditions, clicking could become a deflating experience and produce
conditions associated with poor academic performance. It is possible that,
for some students, answering clicker questions incorrectly could lower
confidence and motivation to do well in the course over time. Future
research should consider whether student learning styles and goal
orientations influence how different students receive and respond to the
feedback provided by the student response system. Simply stated, the way
teachers incorporate clickers into instruction needs further research.
Although the current study
has provided evidence that students who used clickers to give
generally-correct answers in class perform better on tests, future research
should also explore the relative impact of different types of clicker items
on course performance. For example, it would be useful to determine if
factual, conceptual, and application items affect engagement and performance
differently. Perhaps instructors should use an assortment of such items to
engage students in a variety of ways. Well-timed factual items might
encourage students to review material on a more regular basis. These items
allow students to think about and respond to what they have just heard in
lecture. They might also help instructors clarify themes that tie concepts
together across the semester. Conceptual and application items likely do a
better job of stimulating critical thinking. Items soliciting opinions from
students may engage students at a more personal level than is otherwise
possible in a large classroom setting. Such a form of interpersonal
interaction might contribute to a sense of connectedness and engagement in
learning otherwise rarely experienced in large class settings.
If instructors use the
student response system thoughtfully and creatively, they might be able to
tap more fully into the resources of a large class. For instance, for
controversial and politically charged issues, instructors can use the
student response system effectively to discreetly solicit and illustrate
opinions within a diverse community of learners. Students can express their
opinions with a degree of anonymity that hand raising cannot provide. Incidentally,
although teachers usually set software parameters to track the responses of
each student, they can quickly reconfigure clicker software during a lecture
to collect student responses anonymously. Furthermore, many response
systems include an assortment of question formats that allow users to poll
students quickly and spontaneously with pre-formatted true/false,
multiple-choice, and Likert-type items. When skillfully used, teachers can seize
“teachable moments” on the fly.
Conclusions and Implications
McKeachie et al. (1994) and
Stanley and Porter (2002) presented good arguments that large classes are
generally less effective than small ones, especially where higher-level
learning goals are concerned (e.g., critical thinking, application and
integration). Given the current economic environment, however, there is
increasing pressure on institutions of higher education to offer large
classes. It is imperative, therefore, that instructors of these classes
look for ways to counter the barriers inherent in a learning environment
that is likely here to stay (Benjamin, 1991). The findings of the current
study suggest that a student response system can positively influence class
performance, especially among those students who strive to give correct
answers to in-class clicker questions. When teachers use this technology
appropriately, it should counter some of the communication barriers
associated with large class environments and amplify the advantage of being
able to tap into the resources of many minds. Clickers may encourage some,
if not all, students to engage actively with lecture material as instructors
offer it. However, there is also evidence that increased clicker use among
students giving generally incorrect answers to in-class clicker questions is
negatively correlated with test performance. All students need to be
encouraged to answer all questions posed to them mindfully and to the best
of their ability. Teachers should use a variety of questions to encourage
optimal performance on all types of questions and to engage students in the
lecture material.
Thus, the student response
system is a relatively new technology that has many promising applications.
Nevertheless, as observed by those who have both extolled and condemned
PowerPoint presentations (Stoner, 2007), researchers need to carefully
evaluate new teaching technology so that it is most effectively used.
Technology that is effectively appropriated can open up new horizons, but
technology that is poorly used can be mind-numbing and pedagogically
counterproductive.
References
Beekes, W. (2006). The 'millionaire' method for encouraging participation.
Active Learning in Higher Education, 7, 25-36.
Benjamin, L. T. (1991). Personalization and active learning in the large
introductory psychology class. Teaching of Psychology, 18, 68-74.
Bruff, D. (2009). Teaching with classroom response systems: Creating
active learning environments. San Francisco: Jossey-Bass.
Cleary, A. M. (2008). Using wireless response systems to replicate
behavioral research findings in the classroom. Teaching of Psychology,
Cohen, J., & Cohen, P. (2003). Applied multiple regression/correlation
analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Ferguson, M. (1992). Is the classroom still a chilly climate for women?
College Student Journal, 26, 507-511.
Graham, C. R., Tripp, T. R., Seawright, L., & Joeckel, G. L., III. (2007).
Empowering or compelling reluctant participators using audience response
systems. Active Learning in Higher Education, 8, 233-258.
Herreid, C. F. (2006). "Clicker" cases: Introducing case study teaching
into large classrooms. Journal of College Science Teaching, 36, 43.
Johnson, J. W. (2000). A heuristic method for estimating the relative weight
of predictor variables in multiple regression. Multivariate Behavioral
Research, 35, 1-19.
Langer, E. J. (2000). Mindful learning. Current Directions in
Psychological Science, 9, 220-223.
McKeachie, W. J., Chism, N. V., Menges, R., Svinicki, M., & Weinstein, C. E.
(1994). Teaching tips: Strategies, research, and theory for college and
university teachers (9th ed.). Lexington, MA: D.C. Heath.
Morling, B., McAuliffe, M., Cohen, L., & DiLorenzo, T. M. (2008). Efficacy
of personal response systems ("clickers") in large, introductory psychology
classes. Teaching of Psychology, 35, 45-50.
Nagle, R. (2002). Transforming the horde. In C. A. Stanley & M. E. Porter
(Eds.), Engaging large classes: Strategies and techniques for college
faculty (pp. 315-323). Bolton, MA: Anker.
Roschelle, J., Penuel, W. R., & Abrahamson, L. (2004). The networked
classroom. Educational Leadership, 61, 50-54.
Ryan, R. M., & Deci, E. L. (2000). Intrinsic and extrinsic motivations:
Classic definitions and new directions. Contemporary Educational
Psychology, 25, 54-67.
Sorcinelli, M. D. (2002). Promoting civility in large classes. In C. A.
Stanley & M. E. Porter (Eds.), Engaging large classes: Strategies and
techniques for college faculty. (pp. 44-57). Bolton, MA: Anker.
Stanley, C. A., & Porter, M. E. (Eds.). (2002). Engaging large classes:
Strategies and techniques for college faculty. Bolton, MA: Anker.
Stoner, M. R. (2007). PowerPoint in a new key. Communication Education,
Stowell, J. R., & Nelson, J. M. (2007). Benefits of electronic audience
response systems on student participation, learning, and emotion.
Teaching of Psychology, 34, 253-258.
Teven, J. J., & McCroskey, J. C. (1997). The relationship of perceived
teacher caring with student learning and teacher evaluation.
Communication Education, 46, 1-9.
Trees, A. R., & Jackson, M. H. (2007). The learning environment in clicker
classrooms: Student processes of learning and involvement in large
university-level courses using student response systems. Learning, Media
& Technology, 32, 21-40.
Yoder, J. D., & Hochevar, C. M. (2005). Encouraging
active learning can improve students' performance on examinations.
Teaching of Psychology, 32, 91-95.