FEEDBACK
Feedback regarding the editorial content of this book or any of its essays should be directed to the individual
authors or the book’s editors, who are solely responsible for the substance of the text.
Feedback regarding technical matters of formatting or accessibility of this text via the online environment of the
Internet should be directed to the Internet Editor. If you have any complaints or difficulties in accessing these
materials, please provide as detailed a description of the problem as you can, including the browser and the type
of computer you are using.
COPYRIGHT AND OTHER LEGAL NOTICES
The individual essays and chapters contained within this collection are Copyright © 2018 by their respective authors.
This collection of essays and chapters as a compendium is Copyright © 2018 Society for the Teaching of Psychology.
You may print multiple copies of these materials for your own personal use, including use in your classes and/or
sharing with individual colleagues as long as the author’s name and institution, and a notice that the materials were
obtained from the website of the Society for the Teaching of Psychology (STP) appear on the copied document. For
research and archival purposes, public libraries and libraries at schools, colleges, universities and similar educational
institutions may print and store in their research or lending collections multiple copies of this compendium as a
whole without seeking further permission of STP (the editors would appreciate receiving a pro forma notice of any
such library use). No other permission is granted to you to print, copy, reproduce, or distribute additional copies of
these materials. Anyone who wishes to print, copy, reproduce, or distribute copies for other purposes must obtain
the permission of the individual copyright owners. Particular care should be taken to seek permission from the
respective copyright holder(s) for any commercial or "for profit" use of these materials. ISBN: 978-1-941804-49-0.
SUGGESTED REFERENCE FORMAT
WE SUGGEST THAT THE OVERALL TEXT BE REFERENCED IN THIS FASHION:
Harnish, R. J., Bridges, K. R., Sattler, D. N., Signorella, M. L., & Munson, M. (Eds.). (2018). The Use of Technology in
Teaching and Learning. Retrieved from the Society for the Teaching of Psychology web site:
http://teachpsych.org/ebooks/
INDIVIDUAL CHAPTERS MAY BE REFERENCED IN THIS FASHION:
Kavanaugh, M. (2018). The impact of technology on teaching and learning: Does anyone miss the chalkboard? In R.
J. Harnish, K. R. Bridges, D. N. Sattler, M. L. Signorella, & M. Munson (Eds.), The Use of Technology in
Teaching and Learning. Retrieved from the Society for the Teaching of Psychology web site:
http://teachpsych.org/ebooks/
The Use of Technology in
Teaching and Learning
Edited by:
Richard J. Harnish
K. Robert Bridges
David N. Sattler
Margaret L. Signorella
Michael Munson
THE USE OF TECHNOLOGY IN TEACHING AND LEARNING
WHAT TO EXPECT IN THIS COMPENDIUM
1. The Impact of Technology on Teaching and Learning: Does Anyone Miss the Chalkboard?
Mark H. Kavanaugh, Kennebec Valley Community College
PART 1 – HOW TECHNOLOGY HAS RESHAPED HUMAN COGNITION
2. Teaching 21st Century Brains: Activating Working Memory in the Online World
Lauren M. Littlefield & Anna R. Gjertsen, Washington College
3. A “Troublesome Effort”: Focused Attention and Deep Processing in the Digital Age of Teaching and Learning
Deborah Gagnon, Wells College
4. When More Is Less: The Risks of Overscaffolding Learning
Bethann Bierer, University of Colorado – Denver
5. F-atal Distraction: The Impact of In-Class Media Multitasking on Students’ Classroom Learning
Amanda C. Gingerich Hall & Tara T. Lineweaver, Butler University
6. Student Response Systems: A Mindful Approach
Darren Iwamoto, Chaminade University of Honolulu & Jace Hargis, University of California, San Diego
7. The Impact of Technology on How Instructors Teach and How Students Learn
Sarah Elaine Eaton, University of Calgary
PART 2 – EDUCATIONAL TECHNOLOGIES THAT INSTRUCTORS USE TO TEACH STUDENTS
8. Designing Group Work in Blended Learning Environments
Barbara Brown, University of Calgary & Norman Vaughan, Mount Royal University
9. Finding the Balance: Potential and Limits of the Hybrid-Blended Format
Darrell S. Rudmann, Shawnee State University
10. Incorporating Asynchronous Online Discussions in Blended and Online Education: Tools, Benefits, Challenges, and Best Practices
Natalie B. Milman, George Washington University
11. Enhancing Student Engagement in Asynchronous Online Courses
Michael R. Stevenson & Damien C. Michaud, University of Southern Maine
12. Leveraging the Features of Learning Management Systems (LMSs) in Psychology Courses
Jessica J. Cerniak, The Chicago School of Professional Psychology
13. Strategies for Promoting Student Engagement in Synchronous Distance Learning
Michael Munson, TrekNorth Junior and Senior High School
14. OER: A Pathway to Enhance Student Access and Motivation
Anton O. Tolman & Seth Gurell, Utah Valley University
15. Technological and Cultural Considerations in Lower-Resource Environments: An Account from the Pacific Islands
Philip Jefferies, Dalhousie University; formerly at the University of the South Pacific
16. Designing Inclusive Online Environments for Students with Disabilities
Amy Hebert Knopf, St. Cloud State University, Elise K. Knopf, State of Minnesota Vocational Rehabilitation Services, Stephen G. Anderson, Hamline University, & Walter J. Waranka, Lifetrack Resources
17. Digital Tools to Deliver Content and Allow for Interaction
Diana M. Milillo & Adam Pilipshen, Nassau Community College
18. Going Mobile in the College Classroom
Kimberly M. Christopherson, Morningside College
19. Integrating Academic Skills and Digital Literacy Training
Melissa Beers & Nicole Kraft, The Ohio State University
20. Moving Reading and Studying to the Screen: A Discussion of e-Books and Online Study Tools
Kara Sage, College of Idaho
21. Virtual Parenting: Applying Developmental Psychology
Natalie Homa, Thiel College
22. Student and Faculty Experiences with GoAnimate4Schools: A Case Study
Richard J. Harnish & K. Robert Bridges, The Pennsylvania State University, New Kensington
23. Pesky gNATs! Using Computer Games and Smartphone Apps to Teach Complex Cognitive Behavioural Therapy and Mindfulness Concepts to Children with Mental Health Difficulties
Gary O’Reilly, University College Dublin
24. Online Activities for Teaching Students about Technology, Distraction, and Learning
Michelle D. Miller & John J. Doherty, Northern Arizona University
25. Online Service Learning in Psychology: Lessons from Literature and Experience
Lindsay A. Phillips, Albright College, Jill K. Marron, Widener University, Chris Kichline, Albright College, Christina D. Fogle, Albright College, & Ellen Pillsbury, Albright College
26. Always in Style: Using Technology Tools to Help Students Master APA
Maria Zafonte, Grand Canyon University
PART 3 – HOW SOCIAL MEDIA IS USED TO ENGAGE STUDENTS IN LEARNING
27. Class, Please Take Out Your Smartphone: Using Personal Response Systems to Increase Student Engagement and Performance
Lauren Stutts, Davidson College
28. A Survey of Effective Blogging Practices
Stephen B. Blessing, University of Tampa, Bethany Fleck, Metropolitan State University of Denver, & Heather D. Hussey, North Central University
29. #worthit? Integrating Twitter into Introductory Psychology Curriculum
Scott P. King & Mark Chan, Shenandoah University
30. Using PechaKucha in the Classroom
Jennifer Ann Morrow, University of Tennessee, Lisa Shipley, University of Tennessee, & Stephanie Kelly, North Carolina A&T State University
EDITORS AND CONTRIBUTORS
EDITORS
Richard J. Harnish, The Pennsylvania State University, New Kensington Campus
K. Robert Bridges, The Pennsylvania State University, New Kensington Campus
David N. Sattler, Western Washington University
Margaret L. Signorella, The Pennsylvania State University, Brandywine Campus
Michael Munson, TrekNorth Junior and Senior High School
CONTRIBUTORS
Stephen G. Anderson, Hamline University
Melissa Beers, The Ohio State University
Bethann Bierer, University of Colorado - Denver
Stephen B. Blessing, University of Tampa
K. Robert Bridges, The Pennsylvania State University, New Kensington
Barbara Brown, University of Calgary
Jessica J. Cerniak, The Chicago School of Professional Psychology
Mark Chan, Shenandoah University
Kimberly M. Christopherson, Morningside College
John J. Doherty, Northern Arizona University
Sarah Elaine Eaton, University of Calgary
Bethany Fleck, Metropolitan State University of Denver
Christina D. Fogle, Albright College
Deborah Gagnon, Wells College
Anna R. Gjertsen, Washington College
Seth Gurell, Utah Valley University
Amanda C. Gingerich Hall, Butler University
Jace Hargis, University of California, San Diego
Richard J. Harnish, The Pennsylvania State University, New Kensington
Natalie Homa, Thiel College
Heather D. Hussey, North Central University
Darren Iwamoto, Chaminade University of Honolulu
Philip Jefferies, Dalhousie University; formerly at the University of the South Pacific
Mark H. Kavanaugh, Kennebec Valley Community College
Stephanie Kelly, North Carolina A&T State University
Chris Kichline, Albright College
Scott P. King, Shenandoah University
Amy Hebert Knopf, St. Cloud State University
Elise K. Knopf, State of Minnesota Vocational Rehabilitation Services
Nicole Kraft, The Ohio State University
Tara T. Lineweaver, Butler University
Lauren M. Littlefield, Washington College
Jill K. Marron, Widener University
Damien C. Michaud, University of Southern Maine
Diana M. Milillo, Nassau Community College
Michelle D. Miller, Northern Arizona University
Natalie B. Milman, George Washington University
Jennifer Ann Morrow, University of Tennessee
Michael Munson, TrekNorth Junior and Senior High School
Gary O’Reilly, University College Dublin
Lindsay A. Phillips, Albright College
Adam Pilipshen, Nassau Community College
Ellen Pillsbury, Albright College
Darrell S. Rudmann, Shawnee State University
Kara Sage, College of Idaho
Lisa Shipley, University of Tennessee
Michael R. Stevenson, University of Southern Maine
Lauren Stutts, Davidson College
Anton O. Tolman, Utah Valley University
Norman Vaughan, Mount Royal University
Walter J. Waranka, Lifetrack Resources
Maria Zafonte, Grand Canyon University
INTRODUCTION/PREFACE
PURPOSE OF THIS E-BOOK
Over the past 50 years, we have witnessed a revolution in how technology has affected teaching and learning.
From the use of television in the classroom in the 1970s, to video teleconferencing in the 1980s, to computers in
the classroom in the 1990s, to the social media technologies of today, advances in information technology have
shaped how students learn and how faculty teach. Indeed, recent research suggests that
information technologies may be both beneficial and harmful to how students learn. Some findings (e.g., Green &
information technologies may be both beneficial and harmful to how students learn. Some findings (e.g., Green &
Bavelier, 2012) suggest that today’s students have improved visual-spatial capabilities, reaction times, and the
capacity to identify details among clutter but show a decline in attention and critical thinking compared to
yesterday’s students. Thus, the challenge for faculty is to determine which technology to employ so that it will
facilitate learning for students. This is no small feat as each new wave of advancements in information technology
has produced an ever-increasing variety of tools from which to choose.
The idea for this text developed from conversations among the editors and from our experiences using different
technologies inside and outside the classroom to facilitate students’ learning. Our goal was to create a compendium
that presented an array of tools used by faculty that would be an accessible resource for those who are interested
in selecting the most appropriate technology that will facilitate learning for their students.
ORGANIZATION OF THE BOOK
The book consists of 30 chapters that describe a wide array of technologies that faculty have used in their teaching.
The text is divided into three sections: Part 1 explores how technology is reshaping human cognition; Part 2 discusses
a variety of educational technologies that are used to teach students; while Part 3 presents social media technologies
that can be used to engage students in learning.
We lead off with an overview of the eBook by Mark Kavanaugh. In his chapter, Kavanaugh reminisces about the
change from chalkboards to whiteboards as an analogy to the continuous changes occurring in regard to technology
and its applications to instruction. Taking a cultural viewpoint on these trends, this chapter explores the impact of
technology on classroom norms and expectations. These changes are not only driven by innovative instructors, but
also by other stakeholders such as school administrators and IT professionals. In the end, the technology must
accomplish the goals set forth by the teacher. This can only be done through a well-thought-out application of the
technology in the classroom by a teacher who understands both the strengths and limitations of the innovation. The
author introduces the reader to Kranzberg’s Six Laws of Technology, summarizes the content of the entire book, and
leaves would-be technology adopters with some hard-learned advice for exploring the wonderful, and challenging,
world of instructional technology in their own classrooms.
In the next chapter, Littlefield and Gjertsen discuss working memory and learning. Working memory is a critical
cognitive function that guides and supports learning. Littlefield and Gjertsen define working memory and discuss
five teaching techniques that stimulate and enhance working memory processing. These techniques include (1)
avoiding information overload, (2) encouraging handwritten notes, (3) intentionally organizing content while
learning, (4) teaching students how to wisely search the Internet, and (5) instructing students to practice retrieving
newly-learned information. To promote the continued quest for the most effective teaching and learning techniques,
the chapter closes with recommendations for future research. While it appears that both digitally-based and
traditional teaching methods activate working memory, further exploration is needed to better understand how
technology impacts brain structure and working memory functioning.
Deborah Gagnon, in Chapter 3, examines the problem of ubiquitous distraction that our digital devices serve up and
the propensity toward multitasking that ensues. Both detract from the focused attention and deep processing of
information that learning requires. What can we do to discourage distraction and multitasking in our students? An
alternative view of attention – Lin’s (2009) narrow- versus breadth-biased focus – is described. Pedagogical
suggestions that ensue fall under the categories of Respite, Discipline, Balance, and Mindfulness. A final list of Best
Practices that instructors can adopt for encouraging focused attention and deep processing is provided.
In Chapter 4, Bethann Bierer presents a discussion about a possible downside to the use of technology in the
classroom. Using information gleaned from several developmental and educational theorists, she builds a case that
using technology to over-scaffold learning tasks may not always be in the students’ best interests. She suggests that
not only should individual educators think carefully about the learning supports they provide students, but that
departments should develop consistent models for removing scaffolds as students move through their program. The
aim of this process should be to help students develop the personal attributes that will support their future success
in addition to helping them master important material and skills.
Mandy Hall and Tara Lineweaver discuss multi-tasking in Chapter 5. They first review the literature on the prevalence
of media multi-tasking in the classroom, and then examine the empirical evidence regarding the effects of media
multi-tasking (mobile phone use and laptop use) on student learning. They conclude their chapter by offering
suggestions for approaches instructors can use to diminish the prevalence of media multi-tasking and its negative
effects in their classrooms.
In Chapter 6, Darren Iwamoto and Jace Hargis identify technological challenges with which 21st century teachers are
faced. This chapter addresses a simple yet complex question: how do we keep students engaged in the here-and-now
of our classroom? Iwamoto and Hargis suggest that one solution is the incorporation of student response
systems (SRSs), which have been shown to engage students and teachers in real-time. This real-time interaction
between teacher and students increases learning by keeping students in the here-and-now, which is the only time
the act of learning can take place. Because students find SRSs to be engaging, they allow themselves to be present
and in the moment. This mindful act quiets mind wandering and promotes learning in a fun and engaging manner.
Sarah Elaine Eaton examines the impact of technology on teaching and learning in Chapter 7. Eaton addresses
challenges teachers face when learning how to use technology in their classrooms. These include using the
technology in ways that are pedagogically sound and meaningful for learning, addressing increased demands on
time and tackling technology barriers beyond the teacher’s control. The author then goes on to address the impact
of technology on student learning, examining active learning through content creation and preventing cognitive
overload. Eaton concludes by contending that as technologies advance it is likely that teachers will continue to be
challenged to master new learning tools, such as virtual reality, to advance student learning, asserting that teachers
will need to be responsive and adaptive as technology evolves.
In Chapter 8, Barbara Brown and Norman Vaughan use Friesen’s (2009) five principles of teaching effectiveness as a
framework for designing group work in blended learning environments. The authors draw on their experiences in
designing, teaching and conducting research in post-secondary education courses as well as assumptions about
participatory cultures to provide detailed examples and images from in-class and online learning activities. Ten
recommendations are offered to help instructors design blended learning environments that meaningfully
incorporate technologies to promote collaboration and group work.
In his chapter, Darrell Rudmann describes recent research on hybrid or “blended” course formats. Empirical
evaluations of the learning effectiveness of blended formats are mixed, and much of the available research
lacks rigor. Meta-analyses tend to find small but positive effects of using the format, in line with fully online courses,
an effect that seems present when online resources are provided and students work collaboratively. Common
student and faculty perceptions of the format include concerns over communication and an increased reliance on
instructional resources. To help those considering migrating a course to a hybrid format, the chapter closes with a
discussion of the known challenges involved in understanding when and why the format works.
In Chapter 10, Natalie Milman describes what asynchronous online discussions are and how they promote student-student, student-instructor, and student-content interactions in asynchronous online and blended education
courses in institutions of higher education. Milman shares some of the most popular technology tools for hosting
asynchronous online discussions and introduces numerous benefits that instructors and students might experience
when incorporating them. Milman also outlines various challenges stakeholders may
encounter in incorporating asynchronous online discussions, as well as several best practices to mitigate those
challenges and better structure the discussions.
In their chapter, Michael Stevenson and Damien Michaud describe strategies and techniques that can be deployed
to increase student-content, student-student, and student-instructor interaction and engagement in asynchronous,
online courses. These approaches include: Providing immediate feedback to students responding to machine-scored,
multiple-choice questions; encouraging students to attempt assessments multiple times; graded discussion threads;
graded peer feedback on course artifacts; weekly announcements; instructor commentaries; and participation
monitoring. By focusing on the elements of the dynamic, triadic relationship between the instructor, the student,
and the content, the authors identify approaches and applications of sound pedagogy and effective technologies
that increase engagement for students, even in high enrollment, asynchronous, online courses.
Jessica Cerniak provides an overview of learning management systems (LMSs) used to deliver online courses in
higher education in Chapter 12. The author explores various features and functions of current LMSs and how these
aid student learning, faculty’s teaching and management, and program development and oversight in closed
enrollment courses that are part of degree-granting programs. The chapter concludes with a discussion of some
challenges encountered by, and recommendations for, all users of LMSs.
After 30 years of teaching high school in a traditional classroom, a veteran teacher transitions to a full-time online
teaching model. In Chapter 13, Michael Munson offers insights gained from his experience in a synchronous distance
learning setting for high school students. He presents specific strategies to promote student engagement in online
learning by focusing on ways to get the most from a learning management system (LMS) so that it becomes a tool
for growing student engagement, attention, and organization talents. He also explains practical ways to incorporate
other technology systems, including Google Classroom, video conferencing, and digital whiteboards, that are
becoming increasingly common in American education.
Open Educational Resources (OER) are materials licensed in a way that allows faculty to use, modify, and adapt them
for teaching psychology and other courses. In Chapter 14, Anton Tolman and Seth Gurell note that most faculty
express concerns about the cost of commercial content to students, but relatively few are taking advantage of OER
despite growing evidence of equivalent or higher quality and significant benefits to student motivation and learning
outcomes. They review factors that influence faculty decisions whether to use OER as well as student reactions to
these materials, which are mostly positive. Tolman and Gurell also provide a personal perspective on curricular
integration and customization of OER in an Introductory Psychology course and describe the likely future of these
materials as part of a growing emphasis on open pedagogical practices.
In Chapter 15, Philip Jefferies recounts experiences in Fiji to highlight some of the challenges involved in teaching
within lower-resourced environments. Reflecting on his time teaching students across the Pacific Island nations,
where access to technology and learning resources cannot be taken for granted, he discusses alternative approaches
to supporting learning that may be beneficial in other resource-constrained regions.
In Chapter 16, Amy Hebert Knopf, Elise Knopf, Steve Anderson, and Wally Waranka provide information on making
online platforms accessible for students with disabilities. While the information presented focuses on inclusive
environments and equitable access for students with disabilities, the principles benefit all learners. The underpinnings
of Universal Design for Learning are applied to course design and curriculum pedagogy. Practical examples are
provided that span across learning management systems. The authors introduce unique considerations for some
disability categories and provide resources.
In Chapter 17, Diana Milillo and Adam Pilipshen review examples of easily accessible and affordable digital tools to
create and manage course content. These tools may be used by instructors and students to create presentations,
videos, or assessments embedded into a digital format. Further, because of the accessibility of these tools,
instructors and students have many options for interactions – either as graded or collaborative work or to facilitate
informal discussions. The authors provide a short, guided discussion on the basic functions of several digital tools
and how they might be best used for pedagogy.
Kimberly Christopherson, in Chapter 18, discusses how the use of mobile devices (e.g., smartphones, tablets) has
become ubiquitous, with many people using their mobile devices to conduct their daily business without ever
entering the physical space of that business (e.g., online banking, shopping, information seeking). With more
people having access to mobile devices, how might mobile devices and a mobile-first design be incorporated into
higher education and what are the potential benefits and drawbacks to having students use mobile devices in the
classroom? This chapter explores several strategies that faculty might use to effectively and meaningfully
incorporate mobile devices into their classrooms to help increase student engagement while minimizing the
potential for these devices to be distracting.
In Chapter 20, Kara Sage reviews the current literature on e-books, with a particular focus on postsecondary
education. This chapter opens with a discussion of current trends in e-book use and why it is an important topic for
educators to consider. Students today often have multiple screens at their fingertips, and, accordingly, e-book
availability and use are on the rise. The advantages and drawbacks of utilizing e-books are evaluated, and the
literature comparing e-books to traditional paper textbooks is reviewed. A variety of factors regarding e-book design
and adoption are discussed, to provide recommendations to instructors and publishers. Ultimately, e-books can be
a valuable resource for students and offer some unique benefits beyond print books, but their status as a viable
alternative for print books is still somewhat questioned.
Melissa Beers and Nicole Kraft discuss digital competency in Chapter 19. They argue that students bring multiple
devices into the classroom, but they do not necessarily bring the digital skills to use those tools for academic good.
Their chapter discusses how the “digital native” myth may be a barrier to skill development. Technology is an integral
part of students’ lives, one that instructors can leverage as a tool for learning by integrating training and support in
our courses. Beers and Kraft suggest strategies to help students view technology as a resource for learning, motivate
them to use it creatively, and develop skills that will benefit them long after graduation.
In Chapter 21, Natalie Homa describes the simulation program, MyVirtualChild, and highlights its usefulness in
developmental psychology courses. The program, which allows students to raise a virtual child from
birth to age 18, has the qualities of an effective experiential learning tool. A review of the existing empirical data
examining the program’s positive impact on learning is provided. Homa also describes possible challenges instructors
face such as technology difficulties, student concerns, and course implementation decisions. Finally, she provides
recommendations for the successful integration of MyVirtualChild into one’s classroom.
In their chapter, Richard Harnish and K. Robert Bridges present an overview of GoAnimate4Schools, an animated
video platform that can provide students with an opportunity to engage in course material, share ideas, solve
problems and demonstrate competency. After walking readers through each step of creating an animated video, the
authors discuss the benefits and drawbacks of GoAnimate4Schools from both student and faculty perspectives.
In his chapter, Gary O’Reilly describes how we can custom-design computer games and apps to better educate young
people about the complex concepts of effective mental health interventions such as Cognitive Behaviour Therapy
(CBT). He illustrates this with a CBT computer game and smartphone app called Pesky gNATs. Pesky gNATs is
designed to blend ideas from CBT, clinical psychology, developmental psychology and learning theory with
technology to provide an effective de-stigmatising therapy experience for young people. The Pesky gNATs computer
game is played in-session by a young person and his/her therapist to learn how to personally apply CBT concepts for
the management of anxiety and depression. The Pesky gNATs app allows young people to transfer what they learn
in therapy to their home, school, and community life.
In Chapter 24, Michelle Miller and John Doherty present an approach to addressing digital distraction using an online
instructional module that draws on video and interactive demonstrations to vividly illustrate the limitations of
attention. Psychology, as a discipline, is well positioned to address the issue of distraction, but simply covering the
relevant research as part of introductory and cognitive classes may be insufficient to produce the desired impacts
on attitudes and beliefs about attention. The Attention Matters! module developed at Northern Arizona University
can be assigned as extra credit in a wide range of courses. At this institution, over 2,000 participants have enrolled
in it, with approximately 75% completing the full module. Open-enrollment online modules emphasizing multimedia
and interactivity are a way to disseminate research findings about attention outside of the traditional psychology
curriculum, thus encouraging students to make better choices about how they manage their personal technology.
Lindsay Phillips, Jill Marron, Chris Kichline, Christina Fogle, and Ellen Pillsbury, in Chapter 25, discuss the value of
service learning in psychology and present recent literature on how technology can be used to incorporate service
learning, both in online courses and in traditional courses where more flexible service learning opportunities may be
desired. The authors discuss both face-to-face service learning in online coursework and service learning projects
that occur entirely online. They provide suggestions and resources for online service learning and conclude that
technology may increase implementation of this valuable pedagogical tool.
In the chapter “Always in Style: Using Technology Tools to Help Students Master APA,” Maria Zafonte aims to address
the frustration many instructors have with their students’ struggles to submit papers that are correctly formatted
according to the guidelines in the Publication Manual of the American Psychological Association. Zafonte examines
ways to incorporate technology tools into the classroom to aid students’ understanding of and facility with APA Style.
Ranging from conventional tools such as Microsoft Word to online sharing applications such as Padlet or Pinterest,
the chapter provides resources that will make acquiring correct documentation skills easier and more engaging for
students.
In Chapter 27, Lauren Stutts specifically addresses the use of personal response systems (i.e., clickers) in the
classroom. The effectiveness of personal response systems and why they work are discussed. The author also
describes how to use a personal response system, Socrative, in the classroom to take attendance, to engage in
formative assessment, to conduct anonymous polls, and to administer formal quizzes/surveys. Ultimately,
integrating technology in the classroom is highly valuable as our society will likely become increasingly dependent
on it for work and daily functions.
Stephen Blessing, Bethany Fleck, and Heather Hussey discuss the ways in which blogs have been used in higher
education classrooms in Chapter 28. Three types of blogs are considered: traditional blogs, micro-blogs, and
photoblogs. Traditional blogs contain paragraph-long discussions between students. Micro-blogs (e.g., Twitter) allow
for short communications to be relayed, and photoblogs let students post relevant pictures to a social media stream.
In addition to considering the pedagogical reasons for using these kinds of blogs in classrooms, the authors also
analyze the empirical findings that instructors and researchers have discovered in their courses. They close with a
short list of considerations for instructors who may want to use blogs in their classrooms.
In their chapter, Scott King and Mark Chan provide an overview of the existing literature on the use of Twitter in
class curricula, present an empirical study of the effects of Twitter on student engagement and academic
performance in Introductory Psychology, and discuss challenges and lessons learned from their experiences. The
authors performed a quasi-experiment across four semesters (two academic years) of teaching the course, with each
author teaching two sections requiring student Tweets as a course component, and two sections taught traditionally
without Twitter. While student engagement and academic performance were unaffected by incorporating Twitter,
small but notable correlations provide avenues for future research.
In Chapter 30, Jennifer Ann Morrow, Lisa Shipley, and Stephanie Kelly discuss the benefits of using the PechaKucha
presentation style in the college classroom. The authors describe how this fast-paced, interactive presentation style
can be utilized by both instructors and students. The benefits and challenges of using this presentation method are
presented, as well as suggestions for how instructors can incorporate PechaKuchas in their classrooms. Helpful tips
for what to do and what not to do are summarized, and a list of resources and examples is also provided within the
chapter.
REFERENCES
Green, C. S., & Bavelier, D. (2012). Learning, attentional control and action video games. Current Biology, 22, R197-R206. doi:10.1016/j.cub.2012.02.012
ACKNOWLEDGMENTS
We want to thank our authors for their generous and thoughtful contributions to this e-book. In each chapter, the
authors have shared their insights into how the technology was used, the challenges and rewards they experienced,
and their recommendations for adopting specific tools. Our hope is that the reader will find the text to be a
practical introduction to the technologies available to faculty and students in meeting their learning goals.
Richard J. Harnish
K. Robert Bridges
David N. Sattler
Margaret L. Signorella
Michael Munson
March 2018
WHAT TO EXPECT IN THIS COMPENDIUM
CHAPTER
1
THE IMPACT OF TECHNOLOGY ON
TEACHING AND LEARNING: DOES
ANYONE MISS THE CHALKBOARD?
MARK H. KAVANAUGH
KENNEBEC VALLEY COMMUNITY COLLEGE
WHAT HAPPENED TO THE CHALKBOARD?
One of my fondest memories of high school was my Geometry class. I was not good at all the subjects I took in
high school, but I was good at Geometry. Part of this was due to a love of drawing and structures, but in equal
measure, it was due to my teacher. Mr. Tom Kelly was an interesting person and an energetic teacher. To this day,
I strive to model my own teaching after his enthusiasm and charisma and I hope I do him justice!
Mr. Kelly’s classroom was like any typical high school classroom with small chairs/desks (those seats that had a partial
desk incorporated into the design created by some designer who apparently skipped the ergonomics classes in
design school) and a huge chalkboard spanning the entirety of one wall from the windows on one side to the
American flag and pencil sharpener on the other. This chalkboard was, to Mr. Kelly, the canvas upon which he
portrayed ideas, images, formulas, lines, angles, circles, and the occasional polyhedron in stark white chalk.
As he spoke, Mr. Kelly counted a quick staccato rhythm to his words as his chalk would CLICK against the surface and
SCRATCH new additions to his masterpiece of the day. To me, it seemed almost like an opera, with the words
(sometimes seeming to be in a different language, just like a real opera!) erupting into the room to the sharp
report of drums (CLICK) and cymbals (SCRATCH) in a musical drama titled Pythagoras by some composer of the era
of Mozart. Being one of those who often wanted to add my own harmonies to the symphony, I often spoke out of
turn, whispered to my friends, and generally goofed around in a manner which I felt added color to the music. To
this, Mr. Kelly, the conductor, would add a poignant kettle drum part. Mr. Kelly could hit me in the head with a
chalk-infused eraser from any point in the room, and he often felt the need to do so.
Jump forward a few decades and I found myself standing in front of a chalkboard in my own classroom. I had gone
on to college (for way too long) and found myself teaching Psychology at a local community college (something I
have been doing for 20 years now, possibly, also, way too long!). In an attempt to replicate Mr. Kelly’s masterful
teaching style, and the masterful styles of many other teachers and mentors I had experienced along the way, I beat
my chalk against the black surface in time with the outpouring of my ideas. I sometimes did not even write anything
of any specific value or relevance, a circle, an arrow, something that punctuated an idea that I could only release if
it was accompanied by that CLICK and SCRATCH. I never threw erasers at my students (I only wanted to), but I do
believe I may have engaged some of them as much as I was engaged by Mr. Kelly.
Soon, however, a new technology was about to take hold of the classroom. This technology was supposed to play a
part in the transformation of higher education and make learning more engaging and teaching more energized. The
chalkboard was being replaced by the whiteboard. Someone got it into their head that markers of a splendid variety
of colors applied to a shiny white surface would create a better (if not at least, less chalk dust-filled) learning
experience for all. After all, one could use different colors to emphasize differences in concepts, ideas, and thoughts.
And, if one applied the correct method of erasing and cleaning the surface, the gleaming whiteboard could be
returned to its original pristine state for the next thought, concept, or class, unlike the chalkboard that I was often
obliged to clean with soap and water after school because of my numerous attempts to add my own music to the
classes I attended. The whiteboard was new and shiny, so it was better, right?
The classroom I was teaching in was the last classroom that had not been impacted by this new technology. I loved
my chalkboard. I liked the sounds it made when I was writing and how it played a part in my thinking and processing
of ideas. It may even have played out to some of my students the same way it did to me when I was in high school;
the staccato rhythm accompanying a dynamic lecture. Then it happened. Despite my insistence that I wanted to
keep my chalkboard, it was taken off the wall and replaced by a shiny new whiteboard. I approached this soul-less
surface armed with a box of “markers” which were bright and colorful but, for some reason, seemed less alive. They
did not even smell like different fruits like the last set of markers I had owned (I checked). I mean, should not the
orange marker smell like an orange? Technology moved forward and I was dragged along. I still miss the chalkboard.
The quiet “dab” and “streak” sounds made by my scent-challenged markers have never really been a match for the
“CLICK” and “SCRATCH” of my chalk.
TECHNOLOGY, CULTURE, AND INSTRUCTIONAL PROBLEMS
According to the Merriam-Webster dictionary (www.merriam-webster.com), technology may be defined as “the
practical application of knowledge especially in a particular area.” In its broadest sense, the term technology, when
applied to education, includes the skills, practices, and procedures of instruction itself. These would be the evidence-based teaching practices that teachers use in the classroom. In fact, one might say that the classroom culture is
made up of all the practices, expectations, and technology (including non-material and material technology) within
the classroom environment.
The relationship between culture and technology is a complex one. Since technology is a part of this culture, it both
shapes and is influenced by each of the other elements in the environment. The relationship is reciprocal.
Innovators within a culture produce technological solutions to cultural problems; however, once a solution appears,
it has an active impact on the culture itself. New technology, in many cases, becomes the new “normal.” Anyone who has
witnessed the birth and growth of the mobile phone market can attest to how a technology that was once new and
relatively trivial has become a common expectation. How many people do you know who do NOT have a mobile
phone?
According to the American Sociological Association, “Culture (is understood as) the languages, customs, beliefs,
rules, arts, knowledge, and collective identities and memories developed by members of all social groups that make
their social environments meaningful” (http://www.asanet.org/topics/culture). Culture surrounds and permeates
our surroundings and our minds in ways that are so immersive that we can be quite unaware of it until we find
ourselves in the presence of an alternative culture much different from our own. Our culture comprises our own
norms, values, and expectations for how the world works. This conception of how the world works, which we carry
around with us, is identified by C. W. Mills (1959) in his book The Sociological Imagination. According to Mills, this
internal construction of reality can be challenged when we encounter imaginations of how the world works that
differ from our own.
Education is a cultural institution in which this reciprocal interaction between the culture of education and
technology has been profound and transformative. Modern institutions of education face challenges and problems.
While there has been no shortage of new teaching and learning theories, innovative pedagogical practices, and even
new knowledge from brain science, we would be remiss to think that technology, particularly computer and
communication technology, has not been at the forefront of education transformation.
My personal story about the loss of my chalkboard demonstrates how changes in technology bring about changes
in our culture and in our behavior. In this instance, it had an impact on my classroom. At times, this can come upon
us when we do not expect it, and we may not welcome it. The school, in my case, was shifting to a new technology
and I simply had to go along with it. At other times, we can be at the forefront of the wave of change, not only
welcoming technology into our practice, but also becoming leaders in our schools and professional communities.
I survived to thrive in the new era of whiteboards, but not without a transformation in myself. While it may
seem a minor issue, I truly miss the chalkboard, and I had to change how I taught classes: both to transform the good
teaching practice I had been doing into this new medium, and to take advantage of the pedagogical doors that had
been opened by the whiteboard.
Throughout history, teachers have encountered what we may call “instructional problems.” Technology, both the
physical tools and the methods we use in employing those tools, has been called upon to solve these instructional
problems. The chalkboard makes a very suitable example of this process. According to www.slate.com, the
use of cheap, erasable slates to practice reading, writing, and math has been around for centuries. However, it
was not until some innovative teachers, early in the 1800s, connected several slates together into a larger surface
that the concept of the modern chalkboard emerged in our culture.
The promise of the chalkboard was apparent. Teachers could write and draw words and diagrams to support their
oral presentations to relatively large groups of learners. The chalkboard was simple, effective, erasable, reusable,
and cheaper than paper and ink for the purposes for which it was created. Implementation of chalkboards into classrooms
was rapid. The chalkboard provided a form of textbook, a blank page, a laboratory, and probably most importantly,
a focus of student attention. The promise of today’s instructional technology builds on these same expectations.
We want our newer technology to be a form of textbook, a blank page, a laboratory, and a focal point for students’
attention.
Not only do we live in a time of rapid change in information, computer, and communication technology, we also live in a
time where the adaptation of these technologies to education is a primary strategy to address the variety of
instructional challenges and problems that we face across the entire education industry. We are living in a new age
of Positivism where we turn toward scientific methods, measurement, invention, and technological innovation as
the primary means by which we address our questions and problems. While our politics may vacillate between being
pro-science and not, the populace generally feels that our future is in the good hands of forward-thinking innovators and
scientists, and I agree. In the presence of a problem, science and technology present fantastic tools to explore and
create solutions.
ADOPTING TECHNOLOGY
Another important consideration about the impact of technology on education is the adoption and diffusion
of new innovations. According to Straub (2009), the adoption and diffusion of technology is influenced by many factors.
Despite many technological advances being made at higher administrative levels within schools, individual adoption
by teachers can be mapped over time and used to identify those who may be considered “early,” “mid,” and “late
adopters.” Many readers of this book may be categorized as “early adopters” of technology in the classroom.
Possibly, some of you are ever ready to try new things, easily bored, and in possession of a highly developed “risk-taking” personality. Many of you are attracted to the promise that technology brings to the pedagogical table. New
ways of interacting with students, redefined methods of presenting materials, the possibilities of social interactions
with individuals through advanced communication tools, and innovative ways of measuring learning may catch your
eye. Reading about these emerging technologies, you may often say, “I could use that in my class!” Consider this
example:
At the time of this writing, Apple has released its flagship iPhone X. One interesting feature of the device is
its ability to animate emoji. Emoji are tiny icons, graphics, and characters that many users of social media
insert into their messages to emphasize and compress thoughts and ideas. Emoji are like emoticons, those
little emotionally laden characters we use, such as the sideways “smiley” face:
:-) = happy
:-D = very happy
:-( = sad
Emoji, however, are artfully created and cover a vast array of symbols representing faces, animals, flags, food, jobs,
and everyday objects.
Apple has introduced the ability to animate these emoji in a feature called Animoji. With a high-definition camera
and the A11 Bionic processor at hand, the user of the iPhone X is able to animate 12 different Animoji that mimic
and animate the facial features of the user quite accurately.
Does anyone see how this could be used in a Psychology class exploring facial features and the recognition of
emotions? Could we apply this to teaching kids on the Autism Spectrum to recognize subtle changes in facial features
and their meaning? Apple only wanted to create a new spin on emoji and here we are looking for ways to incorporate
it into our classroom. (Yes, we can also use it to communicate real feelings about our students’ work!) This is what
happens! New technology emerges, and we begin to get creative about how we might enhance learning experiences
with it! It changes, we adapt.
Technology has not failed to keep its promise. There is no doubt that the interaction between the worlds of technology
innovation and education has produced new and innovative methodologies for teaching and learning. These
results, however, have not been without their challenges. Technology has both solved and created instructional
problems. However, as psychologists and teachers of psychology, it is not enough that a particular technological
innovation is popular, innovative, fun, or promising. It must work. To satisfy us, the technology that we embed into
our classrooms must meet a particular, though not often measured and researched, “return on investment,” or ROI.
By this we mean that the amount of investment of time, money, energy, support, and frustration is a cost that is
“reasonable” in relation to the pedagogical outcome. Essentially, do the benefits outweigh the costs?
It may be informative at this point to introduce the reader to Melvin Kranzberg. In 1985, Dr. Melvin Kranzberg, the
founding editor of the academic journal Technology and Culture, delivered the presidential address at the annual
meeting of the Society for the History of Technology. In this address, Kranzberg outlined a number of truisms about
the unique and impactful relationship that technology has on culture. His focus was to provide a balanced
perspective on the doctrine of technological determinism, namely that technology is the primary driving force in
“shaping our lifestyles, values, institutions, and other elements of our society” (Kranzberg, 1986). In light of how
technology is impacting education, it is useful to examine these impacts through these different lenses.
KRANZBERG’S SIX LAWS OF TECHNOLOGY
LAW #1: TECHNOLOGY IS NEITHER GOOD NOR BAD; NOR IS IT NEUTRAL.
Technology of all kinds has no mind of its own and is inert outside of the activity of the humans who use it. We
may hear this sentiment in the rhetoric regarding gun control, where “Guns don’t kill, people do” can often be heard.
However, even though it is clear that the impact of the application of technology in the classroom depends on the ways
in which teachers and students use the technology, it is, as Kranzberg asserts, not neutral. Technology can be viewed
as another member of the learning team: the teacher, the student, and the technology. Technology, in this manner,
will never be completely transparent, at least until the time comes when it is difficult to determine where people
end and machines begin.
To this end, we must always perceive technology as an additional barrier to learning, even if it also provides solutions
to barriers in learning. The mind engaged with technology has to translate the medium into comprehensible
thoughts and processes and integrate these into current cognitive structures of both the concepts being learned,
and of the technology itself.
LAW #2: INVENTION IS THE MOTHER OF NECESSITY. EVERY TECHNICAL INNOVATION SEEMS TO REQUIRE
ADDITIONAL TECHNICAL ADVANCES IN ORDER TO MAKE IT FULLY EFFECTIVE.
We can best understand the application of this law to education if we examine, once again, the broad definition of
the term “technology” within the Merriam-Webster dictionary (www.merriam-webster.com), supporting the
premise that technology includes the skills, practices, and procedures of instruction itself. These would be the
evidence-based teaching practices that teachers use in the classroom.
When we add computer or communication technology into the instructional process, according to the second law,
we will need to advance the teaching technology as a result. We cannot engage in the same teaching practices that
we have used without specific technology after we have added technology; we have to modify our methods. In fact,
we have to advance these methods and practices in order to ensure that the inclusion of technology is “fully
effective.”
LAW #3: TECHNOLOGY COMES IN BIG AND SMALL PACKAGES. THE FACT IS THAT TODAY’S COMPLEX
MECHANISMS USUALLY INVOLVE SEVERAL PROCESSES AND COMPONENTS.
Just as we understand that we will need to change our teaching practices in order to ensure that our technological
innovations are “fully effective,” we also realize that technology is rarely simple; it often implicates a number of
different processes that must be contended with in order for the initiative to work. Consider the simple addition of
using email to communicate with students regarding due dates for assignments in a course. While this may seem
like a small technical addition to classroom practice, it implies a number of processes. Teachers need to create their
own email account and devise a mechanism to isolate student communication from other emails. Students also
need to do the same with their email accounts, isolating specific messages about school from other emails.
Expectations as to when the emails would be forthcoming from the teacher need to be identified and behavioral
habits on the part of the students to check the email need to be maintained. Etiquette and expectations for the use
of this communication protocol need to be developed and consequences for infractions need to be outlined.
Recipient lists (mailing lists) need to be developed to make the effort more efficient, and mechanisms to add and remove
recipients from the email list need to be developed and maintained. Finally, if the emails are a primary means of
communication, alternative methods need to be developed in the instance of lapses in Internet service, device
damage and loss, and to further document class expectations. It is remarkable that we implement complex
technologies such as Learning Management Systems with an expectation that these additional adjustments will
simply manifest themselves.
LAW #4: ALTHOUGH TECHNOLOGY MIGHT BE A PRIME ELEMENT IN MANY PUBLIC ISSUES, NONTECHNICAL
FACTORS TAKE PRECEDENCE IN TECHNOLOGY-POLICY DECISIONS. MANY COMPLICATED SOCIOCULTURAL
FACTORS, ESPECIALLY HUMAN ELEMENTS, ARE INVOLVED, EVEN IN WHAT MIGHT SEEM TO BE PURELY
TECHNICAL DECISIONS. TECHNOLOGICALLY “SWEET” SOLUTIONS DO NOT ALWAYS TRIUMPH OVER
POLITICAL AND SOCIAL FORCES.
One interpretation of this law in regard to education may be our need to enforce it ourselves. In this sense, we should
not allow technology to be the sole driving force in educational and instructional decisions. We should not embark
on a system where the technological application drives educational practice, but strive to ensure that educational
practice drives the technological application.
There are many instances where schools have made intensive investments in technology that is then forced
upon teachers in order to justify the costs. Technological innovation in the classroom should first be piloted by
enthusiastic early-adopters. This process will ensure a proper vetting of both the benefits and challenges of specific
technologies and will help define the alternative instructional practices necessary to ensure success prior to
distributing the technology on a broader scale.
LAW #5: ALL HISTORY IS RELEVANT, BUT THE HISTORY OF TECHNOLOGY IS MOST RELEVANT. ALTHOUGH
HISTORIANS MAY WRITE LOFTILY OF THE IMPORTANCE OF HISTORICAL UNDERSTANDING BY CIVILIZED
PEOPLE AND CITIZENS, MANY OF TODAY’S STUDENTS SIMPLY DO NOT SEE THE RELEVANCE OF HISTORY
TO THE PRESENT OR TO THEIR FUTURE. I SUGGEST THAT THIS IS BECAUSE MOST HISTORY, AS IT IS
CURRENTLY TAUGHT, IGNORES THE TECHNOLOGICAL ELEMENT.
We have already established that there is an interaction between culture and technology, yet rarely do we see an
account of the day-to-day technological innovations that shaped the lives of people in history. Surely it would be
nearly impossible to write the history of modern civilization without including copious words on our devices. It
would be hard to imagine a modern study of our society failing to mention the important role smart phones and the
Internet play in our society and our world view.
While the technologies of the past may not have been as intricate as our modern-day devices, they were equal in
their impact on the daily lives of those who used them. It was not that long ago, as many readers of this book could
attest, that writing papers for class was done on a typewriter and the concepts of “white out” for correcting small
mistakes (and retyping the whole paper for large mistakes!) were part of our everyday language. These technologies
may seem quaint, but they were tremendous leaps forward in their day, and they played a major role in the
social lives of people.
LAW #6: TECHNOLOGY IS A VERY HUMAN ACTIVITY – AND SO IS THE HISTORY OF TECHNOLOGY. BEHIND
EVERY MACHINE, I SEE A FACE – INDEED MANY FACES: THE ENGINEER, THE WORKER, THE BUSINESSMAN
OR BUSINESSWOMAN, AND SOMETIMES THE GENERAL AND ADMIRAL. FURTHERMORE, THE FUNCTION
OF TECHNOLOGY IS ITS USE BY HUMAN BEINGS – AND SOMETIMES, ALAS, ITS ABUSE AND MISUSE.
One key point of this law is the recognition that human beings are responsible for the use and misuse of technology.
Behind the creation of every technology are individuals. The creators of these technologies can no more
refrain from imprinting their personalities into their invention than an artist can paint without embedding their
signature styles of stroke and color. As creations of the human mind, they are figuratively, and sometimes literally,
created in the image of their creator.
Adopters of a technology, however, are co-creators of that technology. Mindful that technology is dead when
it is not being used, users embed the technology with their own personalities through the selective use of specific
technologies in specific ways to solve specific problems. The technology is somewhat independent of, yet reflective of,
its creator, but it is in the use and application of the technology that we experience its true function and utility.
WHAT TO EXPECT IN THIS BOOK
The authors of the chapters in this book are most certainly early adopters/adaptors, skeptics, cheerleaders, and
pedagogical innovators. But above all, they are scientists and understand that “all that glitters is not gold.” As social
scientists, they are also acutely aware of the impact, both positive and negative, that technology has had on the field
of education. This book attempts to provide you with some advance knowledge as to the costs and benefits of
instructional technology so that you can select which ones you want to explore. To that end, the book has been
divided into three sections.
PART 1 - HOW TECHNOLOGY HAS RESHAPED HUMAN COGNITION
In this section, the authors explore not only how technology has had an impact on teaching practice, but also how it
has a direct impact on the ways in which we perceive, remember, and learn. These writers shed light on some of the
most compelling questions of how the human mind interacts with the machines we have built. These topics include:
• Adjusting our methods and approaches to account for the cognitive habits and skills developed in the minds of digital natives, individuals who have grown up with computer and communication technology.
• An exploration of the possibility that our technologies may be too helpful, thus inhibiting the development of effective problem-solving skills.
• The prevalence of distractions competing for our students’ attention and solutions to help them focus.
• The increasing concern regarding how students can access vast repositories of information but may be challenged in their ability to process this information on a deeper level.
• How technology, designed specifically to attract our attention, may have both a positive and negative impact on students’ ability to focus.
PART 2 - EDUCATION TECHNOLOGIES THAT INSTRUCTORS USE TO TEACH STUDENTS
In this section, we explore a variety of specific technologies and their implementation in the classroom. These essays
focus on the critical evaluation of their effectiveness and efficacy in teaching and learning. These technologies
include:
• Engaging students in blended and online courses.
• Exploring group work within a blended learning environment.
• Incorporating asynchronous online discussions in blended and online courses.
• The use of Learning Management Systems.
• Engaging students in asynchronous online courses.
• The use of Open Educational Resources to increase access to a wide range of material and to increase motivation.
• Designing inclusive environments for students with disabilities.
• Digital tools that allow students to interact with materials and with each other.
• Mobile devices.
• Electronic textbooks, eBooks, and digital study aids.
• The development of academic skills and digital literacy among student athletes.
• Applying concepts of developmental psychology in a virtual parenting environment.
• Using animation software to engage students in course material.
• Using technology to teach CBT and mindfulness concepts to children.
• Teaching students about technology, distraction, and learning.
• An exploration of how technology can be used for service learning projects.
• How technology can be used to enhance specific writing skills, such as APA formatting.
PART 3 - HOW SOCIAL MEDIA IS USED TO ENGAGE STUDENTS IN LEARNING
Social Media is ubiquitous. As with many forms of technology and communication, many teachers are eager to
explore how this engaging environment can be used in the classroom. Our students have clearly indicated how
captivating these technologies can be, particularly when they are using them non-productively in our classrooms!
Clearly, they could be powerful instructional tools, as examined in the following ways:
• Using phone apps to increase student engagement.
• Effective blogging practices.
• Integrating Twitter into the Introduction to Psychology curriculum.
• PechaKucha in the classroom.
This collection of work reflects an amazing variety of viewpoints. Ultimately, the process of incorporating or adapting
to technology in your own classroom is an individual one. Hopefully, the information in this book will help you avoid
the mistakes made by early adopters and go into the process understanding the risks and benefits. For us
clinicians, this book serves as a form of “Informed Consent.” Use it well.
A SUMMARY AND SOME BASIC ADVICE
For over 20 years, I have been using advanced communication and computer technology in the classroom
(not just chalk and markers!). Early adoption and experimentation with technology in the classroom, online teaching
and learning, and the exploration of such innovations as digital textbooks and 1:1 device programs have punctuated
my entire career. Through this experience, I offer a few tidbits of basic advice (feel free to write these out on your
whiteboard!):
1. Investigate the pedagogical utility of technology thoroughly.
2. If possible, pilot the technology, or at least embed the technology in “low-stakes” activities.
3. If you select a technology for your students to use, be prepared to have a deep understanding of WHY you are using that technology in class.
4. You should be very “savvy” with the technology yourself. Despite the best intentions of student service supports, the teacher is the primary “technical support” contact.
5. Be patient with others who are not as savvy as you are. Someone has likely been patient with you.
6. Be ready to have the technology fail. I mean this in two ways. First, the technology itself may fail to work as it is supposed to. I believe Murphy would have had a law for this. Second, the pedagogical intent of the technology may fail. We need to be ready to learn, right along with our students.
7. Maintain a healthy sense of humor. As with many things, the stress of change and innovation can often hamper the very creative problem-solving skills needed to contend with it. Having fun with these technologies and being able to play on our all-too-human struggles with something that may seem easy can go a long way in managing change.
REFERENCES
Kranzberg, M. (1986). Technology and history: “Kranzberg’s laws”. Technology and Culture, 27(3), 544-560.
Mills, C.W. (1959). The Sociological Imagination. New York, NY: Oxford University Press.
OpenStax College. (2015). Sociology. (2nd Ed.). Houston, TX: OpenStax CNX
Straub, E.T. (2009). Understanding technology adoption: Theory and future directions for informal learning. Review
of Educational Research, 79(2), 625-649.
PART 1
HOW TECHNOLOGY HAS RESHAPED HUMAN COGNITION

CHAPTER 2
TEACHING 21ST CENTURY BRAINS: ACTIVATING WORKING MEMORY IN THE ONLINE WORLD
LAUREN M. LITTLEFIELD
AND
ANNA R. GJERTSEN
WASHINGTON COLLEGE
INTRODUCTION
With advancing technology, it is becoming increasingly important to identify the types of digital media and teaching
techniques that best facilitate learning. This chapter explains the concept of working memory (WM), associates WM
with the frontal-most regions of the brain, and suggests teaching techniques that are likely to stimulate WM
processing. The chapter concludes with recommendations to guide further research of media-based teaching
techniques and their impact on learning.
Intentional learning requires intact executive functioning. This involves self-regulation of emotions and behavior
through skillful task-switching and control of physical impulses, as well as the ability to monitor and update
information in WM (Hofmann, Schmeichel, & Baddeley, 2012). All conscious thought involves WM, and therefore it
is crucial to new learning. A strong, positive correlation exists between WM capability and scores on tests of higher-level cognition, such as reading comprehension, reasoning, and standardized achievement, among others (as
outlined by Lusk, Evans, Jeffrey, Palmer, Wikstrom, & Doolittle, 2009; St Clair-Thompson & Gathercole, 2006).
Although some questions about memory functioning are not fully answered, many learning phenomena are
accounted for by WM theory (Baddeley, 2012), which has developed across half a century. In 1968, Atkinson and
Shiffrin introduced the idea that WM consists of a combination of storage and control processes. Baddeley and Hitch
(1974) examined the storage capacity limitations of WM, or the idea that there is a limit to the amount of information
that can be considered at any given time; they also coined the term “central executive” to describe the control
processes that allow for higher-level cognition. Later, Baddeley (2000) explained that the central executive allocates
attention across the full range of processing, from routine monitoring to novel information processing. In the updated theory, an
“episodic buffer” was added, which accounts for the integration of verbal and acoustic material with visual
information and allows for new thought formation through joining long-term memories with current sensory
experiences (Baddeley, 2000).
The frontal lobe of the brain orchestrates “working-with-memory” on explicit, goal-directed tasks (Moscovitch,
1994). In Moscovitch’s model, brain regions are associated with specific learning and memory functions. The job of
the frontal lobe is to control attention and command strategic processing. The foremost part of the frontal lobe is
referred to as the prefrontal cortex, or the association frontal cortex, because it is extensively interconnected with
other regions of the brain. Neuroanatomically prepared to guide human behavior, the prefrontal region plays a
critical role in modulating inputs from posterior and lower brain tissues, which store long-term memories and
contribute emotional value to memories, respectively (Stuss & Benson, 1986).
In a review of the literature concerning the neurological bases of WM, Funahashi (2017) confirmed that the
prefrontal cortex in primates and humans not only maintains information but also directs information manipulation.
For learning to occur, new information needs to be encoded and later remembered. Frontal lobe dysfunction causes
poor organization during encoding and decreased search and monitoring during retrieval (Moscovitch, 1992).
Encoding processes often depend upon structures within the inferior, prefrontal areas whereas information retrieval
is typically processed in the dorsolateral and anterior prefrontal regions (Wagner, 1999). Specifically, the dorsolateral
prefrontal cortex (DLPFC) is flexibly responsive during goal-driven learning tasks (Dubin, Maia, & Peterson, 2010).
Lesion studies affirm the role of the DLPFC in WM tasks requiring the manipulation of spatial and/or verbal
information (Barbey, Koenigs, & Grafman, 2013).
To encourage WM processing (and thus, by extension, use of the prefrontal cortices), five general teaching tactics
will be reviewed. Special attention will be given to how technology contributes to, or detracts from, the learning
process.
TECHNIQUES FOR ENLISTING WORKING MEMORY IN SUPPORT OF LEARNING
1. AVOID MEMORY OVERLOAD
Fast-paced lessons and overly broad coverage of information can adversely influence learning, particularly for
students with lower WM capabilities. Lusk et al. (2009) compared how well undergraduate participants with high
versus low WM capacity learned from a multimedia tutorial that was either nonsegmented or segmented (with
segmented meaning that the 11-min video was divided into 14 narrated segments of 45-60 s each). Higher WM
scores led to superior learning in the nonsegmented condition, but those with lower WM scores were able to
understand the video content as well as higher WM students when they viewed the segmented tutorial (Lusk et al.,
2009). In summary, the pacing became manageable for students with lower WM capacity when the video was broken
into short, digestible pieces.
Naïve learners are susceptible to WM overload. Sometimes presenting less information produces more learning,
especially with young learners or students diagnosed with learning disorders (Littlefield & Klein, 2005; Littlefield,
Klein, & Coates, 2012). When tangential information or too much information is offered during learning, more
attention needs to be allocated to decide which information is relevant. This overloads WM. An online measure
called the Association Memory Test (Klein, Littlefield, & Cross, in press) can be used to compare a simple learning
condition to a more detailed learning condition. During the simple learning condition, symbols are shown one at a
time while the word that goes with each one is heard. The detailed learning condition simulates what a teacher
might say to help a student learn by adding sentences that meaningfully associate each symbol with its paired word.
The simple learning condition typically results in better retention than when the sentences are also provided (Klein
et al., in press). Precise recall of the symbol-word pairs seems to be enhanced by keeping initial learning
uncomplicated.
St. Clair-Thompson and Gathercole (2006) recommend structuring lessons and activities to limit the amount of
information to be processed and allowing the use of external memory aids. A common technique for reducing
cognitive load in new learners is having them study worked examples, as opposed to solving the problems
themselves (Paas, Renkl, & Sweller, 2003). Paas and colleagues make the essential point that learner expertise and
instructional techniques interact, meaning that those who are more familiar with the foundational information in an
area of study will have capacity available to process more difficult material. This concept may be particularly
important for digitally-based assignments. For example, when completing online assignments, information provided
by hyperlinks can overwhelm a beginner. Providing a list of allowable websites can minimize this problem.
2. ENCOURAGE STUDENTS TO TAKE HANDWRITTEN NOTES
Taking lecture notes places high demands on the central executive component of WM, as it requires comprehension,
selection of information, and logical organization of current auditory inputs, while simultaneously writing what was
presented previously (Piolat, Olive, & Kellogg, 2005). Some research has provided evidence that handwritten note
taking can facilitate learning, whereas typing notes on a computer or tablet appears to be associated with poorer
learning outcomes. Sophomore students taking Principles of Economics at West Point were randomly placed in
conditions of technology-free classrooms versus technology-enabled classrooms (Carter, Greenberg, & Walker,
2017). Average final exam scores of those in technology-free classrooms were 18 percent of a standard deviation
higher than the exam scores of students who typed their notes. Mueller and Oppenheimer’s (2014) study utilized
15-minute TED talks. After a 30-minute delay period that included distractor tasks, they discovered that while there
was no difference in ability to respond to factual questions, handwritten note takers were much better at responding
to conceptual questions. Qualitatively, Mueller and Oppenheimer (2014) found that students who typed their notes
were more inclined to transcribe the talks whereas students who wrote out their notes tended to organize the
information from the TED talks. Their interpretation was that typing notes carries a tendency toward shallower
processing. Piolat et al. (2005) concurred that transcribing notes is less effortful than extracting ideas for notetaking.
Because notetaking is cognitively demanding, learners with lower WM capacities may meet with less notetaking
success. It is worth highlighting that findings from these notetaking studies were not attributable to multitasking,
although a related line of work discovered that shifting between note taking and online tasks unrelated to the lecture
not only leads to lower test scores for the multitasker, but also for peers who can see the screen of the multitasker
(Sana, Weston, & Cepeda, 2013).
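For readers less familiar with effect sizes, “18 percent of a standard deviation” describes a standardized mean difference (Cohen’s d). The following sketch illustrates the computation with made-up numbers — they are hypothetical, not the actual data from Carter et al. (2017):

```python
# Toy illustration of "18 percent of a standard deviation" as a standardized
# effect size (Cohen's d). All numbers below are hypothetical.
def cohens_d(mean_a, mean_b, pooled_sd):
    """Standardized mean difference between two groups."""
    return (mean_a - mean_b) / pooled_sd

# Hypothetical exam means: technology-free classrooms 81.8, laptop-note
# classrooms 80.0, with a pooled standard deviation of 10 points.
d = cohens_d(81.8, 80.0, 10.0)
print(round(d, 2))  # prints 0.18, i.e., 18% of a standard deviation
```

The point of the standardized metric is that it is comparable across exams with different grading scales.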
One may wonder if certain handwritten notetaking methods are better than others. A simple distinction can be made
between traditional, linear techniques (like recording ideas using bullet points) and non-linear methods (such as
concept maps and diagramming that graphically organize content). When adult learners with experience in non-linear notetaking techniques were compared to those without it, non-linear methods resulted in 20% higher comprehension of video clip content upon later testing (Makany, Kemp, & Dror, 2009). Refer to
Makany et al. (2009) for additional details about non-linear notetaking techniques, which appear to help the learner
organize content conceptually.
3. ORGANIZE CONTENT DURING THE LEARNING PROCESS
Information to be remembered is better recalled when it is processed for meaning and organized during the learning
process. For instance, organization of lists during encoding leads to better learning outcomes (Bellezza, Cheesman,
& Reddy, 1977; Dick & Engle, 1984). Dick and Engle (1984) tested 2nd graders on 20-item lists of pictures and found
that students demonstrated superior retention when they were directed to organize the pictures based on how they
were related to one another. Thinking of the purpose of the items pictured or their personal experience with the
items was not as fruitful.
Otten, Henson, and Rugg (2001) presented one word at a time on a screen and asked participants to make
dichotomous decisions about the words under two different conditions: whether or not the first and last letters of
the displayed word were properly alphabetically ordered (nonsemantic processing); or whether the word displayed
represented something that was living or nonliving (semantic processing). Recognition accuracy was significantly
higher for words that were processed semantically (Otten et al., 2001). This shows that even during a task when
participants did not know that they would be asked to recall the words, recollection was improved by meaningfully
processing them. This deep processing effect was documented in 1975 when Craik and Tulving determined that the
qualitative nature of judgments made about words influences their later recognition. Anderson and Reder (1979)
further asserted that increased elaboration leads to superior learning outcomes. They described this as a “breadth
of processing” effect (p. 385), where higher quantity of semantic elaboration is more important than the quality of
those elaborations. In other words, Anderson and Reder argued that the ability to recall information later
predominantly relies on the extent of information processed.
Descriptions, summarization, review, and evaluation can be valuable reflective learning tools because they increase
semantic elaboration. Reflection can be useful at any phase of learning (Pedaste et al., 2015). Journaling, with or
without guiding questions, is a widespread method for provoking reflection. Prompts for reflection can be delivered
face-to-face or online. A study of 9th graders provides an online example. Students completed an inquiry-based
science project via an online learning environment (Mäeots, Siiman, Kori, & Pedaste, 2016). Students who reflected
at a higher level throughout the project were also more successful in producing concrete learning outcomes. Another
example of semantic elaboration is adding teacher annotations to e-books. College students taking a computer
networking course were randomly assigned to paper textbook and electronic textbook groups, and were
subsequently assessed on their learning (Dennis, Abaci, Morrone, Plaskoff, & McNamara, 2016). The e-book group
was exposed to annotations from the course instructor, some of which elaborated upon the book content. Dennis
et al. (2016) found that groups performed equally well on a multiple-choice quiz, but that the e-book group
outperformed the paper textbook group on their responses to an open-ended, applied question.
4. TEACH STUDENTS HOW TO SEARCH THE INTERNET
Many university students and adults use the Internet as their primary resource when seeking information (Griffiths
& Brophy, 2005; Helsper & Eynon, 2010). Due to the perpetual accessibility of information, people are prone to
gathering information quickly without taking the time needed to critically evaluate it. Different strategies for reading
and knowledge gathering may be exercised when searching the Internet than when utilizing printed sources. Liu
(2005) recorded self-report data from 100 adults aged 30 to 45 in order to characterize their perceptions of how
their reading behavior had changed over the past decade, presumably as a result of having more Internet access.
At least 75% of Liu’s (2005) sample endorsed increases in non-linear reading (i.e., skipping around, such as through
following hyperlinks), more scanning resulting in cursory reading, and greater selectivity in choosing what to read.
How have Internet reading behaviors altered the way people think about and retain information? Are people
becoming more cognitively lazy without realizing it? In a study of university students who were divided into Internet
and encyclopedia groups and allotted 40 minutes to search for the answers to 60 questions, the encyclopedia group
was more accurate in recalling correct answers to questions (Dong & Potenza, 2015). Fisher, Goddu, and Keil (2015)
found that actively searching the Internet gives people the “illusion of knowledge,” meaning that they have high
confidence in the knowledge gained despite not fully understanding the content. This false sense of understanding
could be partly attributable to WM capacity limitations. Relatedly, Internet use may lead to quick acceptance of
ideas or solutions without serious consideration. When university students could view the analytical processing steps
followed by another person within a social network, researchers found that students tended to copy the response,
an effect they referred to as “unreflective copying bias” (Rahwan, Krasnoshtan, Shariff, & Bonnefon, 2014). Instead
of adapting the logical reasoning method for answering subsequent items, students passively accepted someone
else’s answers. Therefore, having solutions available led to poorer learning outcomes.
With the presence of irrelevant content, misleading information, and pre-packaged answers, learners need to
become wise searchers and deep thinkers. Otherwise, they may be susceptible to unreflective copying bias and the
illusion of knowledge. Adolescents, in particular, may not have the ability to assess the veracity of information
accessed (Michaud & Bélanger, 2010). Internet navigation skills can be taught for locating solid, peer reviewed
resources. Refer to Fornaciari and Loffredo Roca (1999) for a thorough review of the problems involved in Internet
searches, as well as “solution steps” for finding relevant, academic resources. Leitch (2014) offered 51 exercises for
evaluating authority. Some traditional college classroom exercises are shared, such as debate, but other suggestions
require assessing websites and analyzing Wikipedia. For younger students, assignments requiring the search for
multiple perspectives on one topic, as opposed to relying on one source, are a good start. When learners appreciate
the need to evaluate ideas, they are better poised to make inferences about the reliability of sources.
Interestingly, performing skilled Internet searches causes frontal lobe activation, which is a sign that effortful
thought is in progress. In a study of people in their mid-50s through mid-70s performing Internet searches, older
individuals with technology experience employed more brain regions while searching compared
to those with less technology experience (Small, Moody, Siddarth, & Bookheimer, 2009). Incorporating a similar
method of comparing older individuals who were new to technology to those who were experienced with it, Small
and Vorgan (2008) found no differences in activation between groups while reading, but Internet-experienced
participants evidenced increased left dorsolateral prefrontal cortical (DLPFC) activation while performing Google
searches. After a week of daily Internet search practice, activation patterns became similar between the groups
(Small & Vorgan, 2008). Perhaps the increased activation is due to the top-down nature of an effective search task.
It is speculated that the low technology experience group learned how to perform a strategic search, rather than
just read off of the screen. Successful Internet searches require executive decisions about where to click next and
which key words to use.
5. INSTRUCT STUDENTS TO RETRIEVE NEWLY-LEARNED INFORMATION
With abundant knowledge available 24/7 on the Internet, students rely on devices more and more to access
information. Through a series of experiments, Sparrow, Liu, and Wegner (2011) discovered that people typically
remember where information is stored but are less likely to retain the specific information. Recalling the source of
knowledge (or where the answer can be found) may sometimes be perceived as more important than knowing the
answer itself. On the other hand, being informed that the information will be unavailable later increases one’s
chances of recalling it (Sparrow et al., 2011). It seems that unavailability increases the need for effortful processing.
Effort expended during the encoding process allows a cue to be available for later retrieval.
Increased DLPFC activation occurs while participants retrieve information learned from an Internet search (Wang,
Lu, Luo, Zhang, & Dong, 2017). In order to arrive at novel ideas and creative solutions, learners need to retrieve old
knowledge and logically integrate it with new information. This integration is made possible through the episodic
buffer of WM. According to Baddeley (2000), the episodic buffer is a temporary storage interface that pulls together
information from various sources to make sense of, and to accumulate knowledge about, the surrounding world.
Auditory-verbal and visual-spatial experiences are integrated with long-term memories, thereby updating one’s
knowledge. In this way, the episodic buffer retrieves pertinent information from long-term memory, integrates it
with newly perceived information, and then sends updated ideas back to long-term memory for later use.
Research has shown active retrieval solidifies long-term learning in classroom settings (Agarwal, Bain, &
Chamberlain, 2012) and in laboratory settings (Karpicke, 2012; Karpicke & Roediger, 2010). Reading and then trying
to retrieve what was just read produces better long-term recall than reading alone (Karpicke & Roediger, 2010). This
highlights the fact that, together, reading and retrieving are a powerful learning combination. Karpicke (2012) points
out that students do not think that retrieval will be as valuable as other methods, but it consistently yields higher
retention scores than other forms of learning. Interestingly, retrieval practice is effective whether students produce the
response or simply think about it (Smith, Roediger, & Karpicke, 2013).
Reconstructing knowledge appears to be a robust way to enable future recall, and some forms of technology
facilitate retrieval practice. During classroom time, response systems can be used to intersperse review questions
with lecture so that students can both practice retrieval and verify understanding. Meta-analysis reveals that
classrooms which employ clicker response systems tend to have a small, but meaningful, learning advantage over
their non-response system counterparts (Hunsu, Adesope, & Bayly, 2016). The bottom line is that less practice leads
to lower mastery of content. Undergraduate marketing students were given the opportunity to review throughout
the semester by accessing online quizzes to practice course concepts; students with the lowest number of review
attempts performed the worst on the midterm and final examinations (Wellington, Hutchinson, & Faria, 2015).
Traditional retrieval methods presented online, like flashcards and quizzes, reveal scores and reaction time data for
students to compete with one another. Because these tasks are engaging and game-like, it is speculated that
students may choose to review more.
CONSIDERATIONS FOR FUTURE RESEARCH
How does technology impact human thought and brain structure? There are unanswered questions and mixed
findings concerning how WM and technology interact. In order to evaluate conclusions made in the literature, three
presumptions made by past researchers are identified and described in this section.
One presumption is that younger people have a proclivity for multiple forms of media while older people have low
technological competence. The characterization of Prensky’s (2001) “digital native” has been referred to as a
stereotypical overgeneralization because there are differences in technology use based upon country of origin,
socioeconomic status, and educational background (Koutropoulos, 2011). Helsper and Eynon (2010) asserted that
within any group of people, there will be a continuum of technology skills. Instead of dividing research groups by
age, it is better to divide them by level of preference for, and amount of experience with, technology.
Another reason not to contrast older with younger participants is that comparing young brains to older ones is
problematic given the possibility of pre-existing processing differences that may have nothing to do with
technological exposure. Brain imaging reveals that, as adults age, linear decreases are seen in DLPFC activation
during both encoding and retrieval tasks (Grady, Springer, Hongwanishkul, McIntosh, & Winocur, 2006). Younger
adults and adolescents, on average, use DLPFC regions more than older adults when posed with new learning tasks.
When feasible, cross-sequential designs are recommended to better characterize thinking and brain changes by
following different aged cohorts over time.
A second presumption is that today’s adolescent thinks in a markedly different way than yesterday’s adolescent.
Dramatic commentaries about deteriorating attention spans and poor executive functioning prevail, but controlled
studies on today’s adolescent are needed to understand the benefits and risks that technology brings. Adolescent
and young adult learners prefer to task switch (Loh & Kanai, 2015; Rosen, Carrier & Cheever, 2013), but it is not
known if yesterday’s adolescents were prone to task-switching as well. Through surveying first-year university
students, Thompson (2013) concluded that the relationships between technology and learning styles are not strong,
but they are present. Frequent use of rapid communication tools and web resources is correlated with lower self-
reported productive learning habits, such as being able to resist distractions, reflecting when reading, and seeking
conflicting views when studying a subject (Thompson, 2013). Loh and Kanai (2015) describe poorer executive control
abilities of digital natives and speculate that the changes in thinking are probably associated with changes in brain
structure. In reviewing the literature concerning the Internet and its impact on adolescent cognitive development,
Mills (2016) concluded that there has simply not been enough empirically-based research to show a causal
connection between high Internet use and changes in adolescent cognitive development.
It is fairly well-documented that the teen brain is capable of flexible reorganization and that frontal lobe
development continues into the twenties (Giedd, 2012). As has always been true, any repetitive activity can
strengthen pathways within the brain, and while the brain remains somewhat malleable throughout life, its peak
changeability occurs prior to adulthood. Yet, most brain functioning studies examine university students or adults.
If knowledge about the developing brain is sought, then more research needs to incorporate younger participants.
The third, and final, presumption is that brain imaging data collected during learning tasks is easily translated and
understood. Modern neuroimaging technologies can provide a window into the structural correlates of brain
functions, but the human brain is extremely intricate with many circuits working in parallel. Therefore, interpreting
scans involves intelligent guesswork. One complexity is that experience results in differential activity levels. Those
with experience at a particular task seem to develop honed, organized brain networks that are activated during task
performance (Amoruso et al., 2016; Haier, Siegel, MacLachlan, Soderling, Lottenberg, & Buchsbaum, 1992), rather
than the more diffuse activation seen in novices. Similarly, prefrontal activation is not always consistent with
increased performance; novices may need to enlist prefrontal cortices more than experts due to the level of effort
required. A brief description of how functional magnetic resonance imaging (fMRI) works provides additional insight
into problems with interpretation. Using a subtraction method, fMRI signals recorded during non-task trials are
subtracted from those produced during the task of interest. Pictures of the brain during task performance are
created in response to deoxygenated blood in a magnetic field. Logical suppositions are made about blood oxygen-level dependent (BOLD) changes, which are indirect measures of task performance (Chouinard, Boliek, & Cummine,
2016). According to Logothetis (2008), thinking that the BOLD response is solely caused by excitatory neuronal
activity during task performance is a flawed assumption.
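The core logic of the subtraction method can be sketched in a few lines. This is only a toy illustration of the idea described above — the values are made up, and a real fMRI analysis involves hemodynamic modeling and statistical thresholding, not a simple difference of means:

```python
# Toy sketch of fMRI subtraction logic: the average signal during rest
# (non-task) trials is subtracted, voxel by voxel, from the average signal
# during task trials. All numbers are invented for illustration.
def mean(signals):
    return sum(signals) / len(signals)

def subtraction_map(task_trials, rest_trials):
    """Per-voxel mean task signal minus mean rest signal."""
    n_voxels = len(task_trials[0])
    task_means = [mean([trial[v] for trial in task_trials]) for v in range(n_voxels)]
    rest_means = [mean([trial[v] for trial in rest_trials]) for v in range(n_voxels)]
    return [t - r for t, r in zip(task_means, rest_means)]

# Three "voxels" across two trials per condition; only the middle voxel
# shows greater signal during the task than during rest.
task = [[1.0, 2.5, 1.1], [1.0, 2.7, 0.9]]
rest = [[1.0, 1.5, 1.0], [1.0, 1.7, 1.0]]
print(subtraction_map(task, rest))  # the middle voxel stands out
```

The caveat in the text applies directly: a nonzero difference in such a map is an indirect correlate of task processing, not proof of excitatory neuronal activity.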
While functional imaging can be helpful in drawing inferences about structure-function correlates, other methods
of discovery should be taken into account, such as observations, rating scales, and neuropsychological tests. Future
research might incorporate scales of self-regulated learning, which can measure WM’s central executive. One
promising measure is the Self-Regulation of Learning Self-Report Scale, which contains subscales for planning, self-monitoring, and reflection, to name a few (Toering, Elferink-Gemser, Jonker, van Heuvelen, & Visscher, 2012).
FINAL THOUGHTS
Controlled research is needed on the use of different media-based teaching techniques and how they engage
students while benefitting learning. Regardless of teacher preferences, experience with technology, or age of the
learner, certain techniques will induce learning: those that enlist effortful WM functioning. Due to the ongoing
debate as to whether or not WM training is effective (Melby-Lervåg & Hulme, 2016; Morrison & Chein, 2011;
Shipstead, Redick, & Engle, 2012), it is best to focus on teaching techniques that support a range of WM capabilities.
Examples include teaching students to conduct wise Internet searches, incorporating presentation techniques that
increase organization of content and decrease memory load, encouraging handwritten notes, and practicing active
retrieval of newly-learned information.
Twenty-first century brains will slowly adapt to the technology-rich environment. Nonetheless, significant brain
changes due to heavy exposure to media technology appear to be overstated. Choudhury and McKinney (2013)
assert that the opposing views of “neuro-alarmism” and “neuro-enthusiasm” need to be tempered. Similarly,
Sprenger (2010) advocated for a balance of traditional and technological tools that can appeal across student
learning styles. We recommend adoption of the WM concept to facilitate the search for the most effective teaching
techniques.
REFERENCES
Agarwal, P. K., Bain, P. M., & Chamberlain, R. W. (2012). The value of applied research: retrieval practice improves
classroom learning and recommendations from a teacher, a principal, and a scientist. Educational
Psychology Review, 24, 437-448. doi:10.1007/s10648-012-9210-2
Amoruso, L., Ibáñez, A., Fonseca, B., Gadea, S., Sedeño, L., Sigman, M., . . . Fraiman, D. (2016). Variability in functional
brain networks predicts expertise during action observation. Neuroimage, 146, 690-700.
doi:10.1016/j.neuroimage.2016.09.041
Anderson, J. R., & Reder, L. M. (1979). An elaborative processing explanation of depth of processing. In L. S. Cermak
& F. I. M. Craik (Eds.). Levels of processing in human memory. Hillsdale, NJ: Lawrence Erlbaum.
Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W.
Spence (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 2, pp. 89-198). New
York, NY: Academic Press.
Baddeley, A. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4(11),
417-423, doi: 10.1016/S1364-6613(00)01538-2
Baddeley, A. (2012). Working memory: Theories, models, and controversies. Annual Review of Psychology, 63, 1-29.
doi:10.1146/annurev-psych-120710-100422
Baddeley, A. D., & Hitch, G. (1974). Working memory. In G. A. Bower (Ed.), The psychology of learning and
motivation (Vol. 8, pp. 47-89). New York, NY: Academic Press.
Barbey, A. K., Koenigs, M., & Grafman, J. (2013). Dorsolateral prefrontal contributions to human working memory.
Cortex, 49, 1195–1205. doi:10.1016/j.cortex.2012.05.022
Bellezza, F. S., Cheesman, F. L., & Reddy, B. G. (1977). Organization and semantic elaboration in free recall. Journal
of Experimental Psychology: Human Learning and Memory, 3(5), 539-550. doi:10.1037/0278-7393.3.5.539
Carter, S. P., Greenberg, K., & Walker, M. S. (2017). The impact of computer usage on academic performance:
Evidence from a randomized trial at the United States Military Academy. Economics of Education Review, 56,
118-132. doi:10.1016/j.econedurev.2016.12.005
Choudhury, S., & McKinney, K. A. (2013). Digital media, the developing brain and the interpretive plasticity of
neuroplasticity. Transcultural Psychiatry, 50(2), 192-215. doi:10.1177/1363461512474623
Chouinard, B., Boliek, C., & Cummine, J. (2016). How to interpret and critique neuroimaging research: A tutorial on
use of functional magnetic resonance imaging in clinical populations. American Journal of Speech-Language
Pathology, 25, 269-289. doi:10.1044/2012_AJSLP-15-0013
Craik, F. I. M., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of
Experimental Psychology: General, 104(3), 268-294.
Dennis, A. R., Abaci, S., Morrone, A. S., Plaskoff, J., & McNamara, K. O. (2016). Effects of e-book instructor
annotations on learner performance. Journal of Computing in Higher Education, 28(2), 221-235. doi:10.1007/s12528-016-9109-x
Dick, M. B., & Engle, R. W. (1984). The effect of instruction with relational and item-specific elaborative strategies
on young children’s organization and free recall. Journal of Experimental Child Psychology, 37, 282-302.
doi:10.1016/0022-0965(84)90006-7
Dong, G., & Potenza, M. N. (2015). Behavioral and brain responses related to Internet search and memory. European
Journal of Neuroscience, 42, 2546-2554. doi:10.1111/ejn.13039
Dubin, M. J., Maia, T. V., & Peterson, B. S. (2010). Cognitive control in the service of self-regulation. In Encyclopedia
of Behavioral Neuroscience, 1, 288-293. doi:10.1016/B978-0-08-045396-5.00218-9
Fisher, M., Goddu, M. K., & Keil, F. C. (2015). Searching for explanations: How the Internet inflates estimates of
internal knowledge. Journal of Experimental Psychology: General, 144, 674-687. doi:10.1037/xge0000070
Fornaciari, C. J., & Loffredo Roca, M. F. (1999). The age of clutter: Conducting effective research using the Internet.
Journal of Management Education, 23(6), 732-742. doi:10.1177/105256299902300610
Funahashi, S. (2017). Working memory in the prefrontal cortex. Brain Sciences, 7(5), 1-22. doi:10.3390/brainsci7050049
Giedd, J. N. (2012). The digital revolution and adolescent brain evolution. Journal of Adolescent Health, 51(2),
101-105. doi:10.1016/j.jadohealth.2012.06.002
Grady, C. L., Springer, M. V., Hongwanishkul, D., McIntosh, A. R., & Winocur, G. (2006). Age-related changes in brain
activity across the adult lifespan. Journal of Cognitive Neuroscience, 18(2), 227-241.
doi:10.1162/jocn.2006.18.227
Griffiths, J. R., & Brophy, P. (2005). Student searching behavior and the web: Use of academic resources and Google.
Library Trends, 53, 539-554. Retrieved from https://core.ac.uk/display/4812111
Haier, R. J., Siegel, B. V., MacLachlan, A., Soderling, E., Lottenberg, S., & Buchsbaum, M. S. (1992). Regional glucose
metabolic changes after learning a complex visuospatial/motor task: A positron emission tomographic
study. Brain Research, 570(1–2), 134-143. doi:10.1016/0006-8993(92)90573-R
Helsper, E., & Eynon, R. (2010). Digital natives: Where is the evidence? British Educational Research Journal, 36(3),
503-520. doi:10.1080/01411920902989227
Hofmann, W., Schmeichel, B. J., & Baddeley, A. D. (2012). Executive functions and self-regulation. Trends in Cognitive
Sciences, 16, 174-180. doi:10.1016/j.tics.2012.01.006
Hunsu, N. J., Adesope, O., & Bayly, D. J. (2016). A meta-analysis of the effects of audience response systems
(clicker-based technologies) on cognition and affect. Computers & Education, 94, 102-119.
doi:10.1016/j.compedu.2015.11.013
Karpicke, J. D. (2012). Retrieval-based learning: Active retrieval promotes meaningful learning. Current Directions in
Psychological Science, 21(3), 157-163. doi:10.1177/0963721412443552
Karpicke, J. D., & Roediger, H. L. (2010). Is expanding retrieval a superior method for learning text materials? Memory
& Cognition, 38, 116-124. doi:10.3758/MC.38.1.116
Klein, E. & Littlefield, L. with Cross, N. (in press). Association Memory (AM) Test: Inhibiting extraneous information
when learning [Edited study description and conversion of experiment to HTML-5]. American Psychological
Association’s Online Psychology Laboratory. Retrieved from opl.apa.org
Koutropoulos, A. (2011). Digital natives: Ten years after. MERLOT Journal of Online Learning and Teaching, 7(4).
Retrieved from http://jolt.merlot.org/vol7no4/koutropoulos_1211.htm
Leitch, T. (2014). Wikipedia U: Knowledge, authority, and liberal education in the digital age. Baltimore, MD: Johns
Hopkins University Press.
Littlefield, L. M., & Klein, E. (2005). Examining visual-verbal associations in children with and without Reading
Disorder. Reading Psychology: An International Journal, 26(4/5), 363-385. doi:10.1080/
02702710500285706
Littlefield, L. M., Klein, E. R., & Coates, S. (2012). An experimental evaluation of the effects of using training sentences
to aide young children's word recall. Effective Education, 4(1), 27-41. doi:10.1080/19415532.2012.712840
Liu, Z. (2005). Reading behavior in the digital environment: Changes in reading behavior over the past ten years.
Journal of Documentation, 61(6), 700-712. doi:10.1108/00220410510632040
Logothetis, N. K. (2008). What we can do and what we cannot do with fMRI. Nature, 453, 869-878.
doi:10.1038/nature06976
Loh, K. K., & Kanai, R. (2015). How has the Internet reshaped human cognition? The Neuroscientist, 22(5), 506-520.
doi:10.1177/1073858415595005
Lusk, D. L., Evans, A. D., Jeffrey, T. R., Palmer, K. R., Wikstrom, C. S., & Doolittle, P. E. (2009). Multimedia learning and
individual differences: Mediating the effects of working memory capacity with segmentation. British Journal
of Educational Technology, 40, 636-651. doi:10.1111/j.1467-8535.2008.00848.x
Mäeots, M., Siiman, L., Kori, K., & Pedaste, M. (2016, July). Relation between students’ reflection levels and their
inquiry learning outcomes. Proceedings of EDULEARN2016: 8th International Conference on Education and
New Learning Technologies, 5558-5564. Barcelona, Spain. Retrieved from https://telearn.archives-ouvertes.fr/hal-01399062/document
Makany, T., Kemp, J., & Dror, I. E. (2009). Optimising the use of note-taking as an external cognitive aid for increasing
learning. British Journal of Educational Technology, 40(4). 619-635. doi:10.1111/j.1467-8535.2008.00906.x
Melby-Lervåg, M., & Hulme, C. (2016). There is no convincing evidence that working memory training is effective: A
reply to Au et al. (2014) and Karbach and Verhaeghen (2014). Psychonomic Bulletin and Review, 23, 324-330.
doi:10.3758/s13423-015-0862-z
Michaud, P. A., & Bélanger, R. (2010). Adolescents, Internet and new technologies: A new wonderland? [Abstract,
article in French]. Revue Médicale Suisse, 6(253). Retrieved from http://europepmc.org/abstract/med/20648786
Mills, K. L. (2016). Possible effects of Internet use on cognitive development in adolescence. Media and
Communication, 4(3), 4-12. doi:10.17645/mac.v4i3.516
Morrison, A. B., & Chein, J. M. (2011). Does working memory training work? The promise and challenges of enhancing
cognition by training working memory. Psychonomic Bulletin and Review, 18, 46-60. doi:10.3758/s13423-010-0034-0
Moscovitch, M. (1992). Memory and working-with-memory: A component process model based on modules and
central systems. Journal of Cognitive Neuroscience, 4(3), 257-267.
Moscovitch, M. (1994). Memory and working-with-memory: Evaluation of a component process model and
comparisons with other models. In D. L. Schacter & E. Tulving (Eds.), Memory systems 1994 (pp. 269-310).
Cambridge, MA: MIT Press.
Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over
laptop note taking. Psychological Science, 25(6), 1159-1168. doi:10.1177/0956797614524581
Otten, L. J., Henson, R. N. A., & Rugg, M. D. (2001). Depth of processing effects on neural correlates of memory
encoding: Relationship between findings from across- and within-task comparisons. Brain, 124, 399-412.
doi:10.1093/brain/124.2.399
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments.
Educational Psychologist, 38(1), 1-4. Retrieved from http://tipsforfaculty.com/wp-content/uploads/2016/02/Cognitive_Load__ID.pdf
Pedaste, M., Mäeots, M., Siiman, L. A., de Jong, T., van Riesen, S. A. N., Kamp, E. T., . . . Tsourlidaki, E. (2015). Phases
of inquiry-based learning: Definitions and the inquiry cycle. Educational Research Review, 14, 47-61. doi:
10.1016/j.edurev.2015.02.003
Piolat, A., Olive, T., & Kellogg, R. T. (2005). Cognitive effort during notetaking. Applied Cognitive Psychology, 19,
291-312. doi:10.1002/acp.1086
Prensky, M. (2001, September/October). Digital natives, digital immigrants part 1. On the Horizon: The Strategic
Planning Resource for Education Professionals, 9(5), 1-6. doi:10.1108/10748120110424816
Rahwan, R., Krasnoshtan, D., Shariff, A., & Bonnefon, J. (2014). Analytical reasoning task reveals limits of social
learning in networks. Journal of the Royal Society Interface, 11(93). doi:10.1098/rsif.2013.1211
Rosen, L. D., Carrier, L. M., & Cheever, N. A. (2013). Facebook and texting made me do it: Media-induced
task-switching while studying. Computers in Human Behavior, 29(3), 948-958. doi:10.1016/j.chb.2012.12.001
Sana, F., Weston, T., & Cepeda, N. J. (2013). Laptop multitasking hinders classroom learning for both users and
nearby peers. Computers and Education, 62, 24-31. doi:10.1016/j.compedu.2012.10.003
Shipstead, Z., Redick, T. S., & Engle, R. W. (2012). Is working memory training effective? Psychological Bulletin, 138(4),
628-654. doi:10.1037/a0027473
Small, G. W., Moody, T. D., Siddarth, P., & Bookheimer, S. Y. (2009). Your brain on Google: Patterns of cerebral
activation during Internet searching. American Journal of Geriatric Psychiatry, 17(2), 116-126.
doi:10.1097/JGP.0b013e3181953a02
Small, G., & Vorgan, G. (2008). Meet your iBrain. Scientific American Mind, 19(5), 42-49.
doi:10.1038/scientificamericanmind1008-42
Smith, M. A., Roediger, H. L., & Karpicke, J. D. (2013). Covert retrieval practice benefits retention as much as overt
retrieval practice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 1712-1725.
doi:10.1037/a0033569
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having
information at our fingertips. Science, 333, 776-778. doi:10.1126/science.1207745
Sprenger, M. (2010). Brain-based teaching in the digital age. Alexandria, VA: ASCD.
St Clair-Thompson, H. L., & Gathercole, S. E. (2006). Executive functions and achievements in school: Shifting,
updating, inhibition, and working memory. The Quarterly Journal of Experimental Psychology, 59(4), 745-759.
doi:10.1080/17470210500162854
Stuss, D. T., & Benson, D. F. (1986). The frontal lobes. New York: Raven.
Thompson, P. (2013). Digital natives as learners: Technology use patterns and approaches to learning. Computers
and Education, 65, 12-33. doi:10.1016/j.compedu.2012.12.022
Toering, T., Elferink-Gemser, M. T., Jonker, L., van Heuvelen, M. J. G., & Visscher, C. (2012). Measuring self-regulation
in a learning context: Reliability and validity of the Self-Regulation of Learning Self-Report Scale (SRL-SRS).
International Journal of Sport and Exercise Psychology, 10(1), 24-38. doi:10.1080/1612197X.2012.645132
Wagner, A. D. (1999). Working memory contributions to human learning and remembering. Neuron, 22, 19-22.
doi:10.1016/S0896-6273(00)80674-1
Wang, Y., Wu, L., Luo, L., Zhang, Y., & Dong, G. (2017). Short-term Internet search using makes people rely on search
engines when facing unknown issues. PLoS ONE, 12. doi:10.1371/journal.pone.0176325
Wellington, W., Hutchinson, D., & Faria, A. J. (2015). The effectiveness of online quizzing as a repetitive learning tool
in a marketing class: A field study of the testing effect. Developments in Business Simulation and Experiential
Learning, 42, 42-49. Retrieved from https://journals.tdl.org/absel
CHAPTER 3
A “TROUBLESOME EFFORT”:
FOCUSED ATTENTION AND DEEP
PROCESSING IN THE DIGITAL AGE
OF TEACHING AND LEARNING
DEBORAH GAGNON
WELLS COLLEGE
Acknowledgements: The author would like to thank Dr. Steven Grant for fruitful discussions we have shared on this
topic. Some of the ideas contained in this chapter were previously presented at the State University of New York
College at Buffalo Communication Speaker Series, at the New York Library Association Academic Librarians
Conference, and in a Library Services and Technology Act Grants to States sponsored Webinar on Emerging Library
Technologies.
INTRODUCTION
“To most people nothing is more troublesome than the effort of thinking.” (Bryce, 1901)
We live in an era in which we are relentlessly presented with triggers distracting us from tasks that require focused
attention and deep processing of information. Teaching and learning are two such tasks. In truth, humankind has
always lived in an era with distractors; today’s digital distractors are merely different in kind, frequency, and
immediacy but not in effect. They present themselves in the form of auditory dings, pings, or rings, visuospatial cues,
or tactile vibrations emitted from our cell phones, tablets, and laptops to tell us that something else is clawing for
our attention at this particular moment. Usually this something else is seductive – so much less demanding of our
mental effort, so much more affectively alluring, so much more time sensitive (or so we convince ourselves). We are
conditioned through positive or negative reinforcement to allow our attention to drift to the digital triggers. This
reinforced behavior quickly becomes a habit or worse, an addiction (Berridge & Kringelbach, 2015; Lin, Zhou, Du,
Qin, Zhao, Xu, & Lei, 2012). Hardly anyone is immune: Whether we are digital natives or digital immigrants (Prensky,
2001), many of us find ourselves fighting the urge to check email and respond to texts throughout the course of the
day. Ubiquitous distraction can then lead us down a slippery slope to multitasking. Multitasking has always been
with us too, but has seemingly become much more pronounced in the digital age. What can we do to help our
students (and ourselves) resist or prevent distractions, avoid the allure of multitasking, engage in more focused
attention and deeper processes, and thus become better learners (and teachers) in the digital age?
THE DISTRACTION DILEMMA
Learning that happens in the classroom or during study – unlike classical or operant conditioning learning – requires
cognitive effort to stay focused and process information deeply (Goleman, 2013; Rosen, 2013). This is a
“troublesome effort” especially in the digital age for the many who are driven to digital distraction and compelled
into doing double (or multiple) duty with their limited attentional resources.
Consider the following excerpt from The Iliad:
As for now a black ship let us draw to the great salt sea
And therein oarsmen let us advisedly gather and thereupon a hecatomb
Let us set and upon the deck Chryseis of fair cheeks
Let us embark. And one man as captain, a man of counsel, there must be.
- Homer (quoted in Havelock, 1963, p. 81)
Reading this passage is laborious to most contemporary readers of English. Comprehending it requires allocation of
substantial attentional, working memory, and language processing resources and takes a second (or even third)
reading to interpret and fully appreciate the meaning of the words. If a text, Facebook, Twitter, Instagram, or email
notification happened to beckon while reading this passage, how likely would one be to sneak a glance to see who
or what sought attention? I can answer that, as it happened even as I was typing the passage above: Very likely. Not
only did my cell phone start vibrating with an incoming call, but so did the Fitbit I wear on my wrist. I could not help
but glance at the phone, sitting on the table next to my laptop. I determined in that momentary glance that I would
let the call go to voicemail so as to not break my train of thought. But the problem is that it was already too late. In
the second or so that it took me to glance at the phone, read the caller’s name, and make a decision not to answer,
I had already broken focus and engaged in a conflicting, resource dependent task. I then had to devote cognitive
resources in order to redirect my focus, regain my train of thought, and pick up writing where I left off. This type of
resource-intensive activity inevitably detracts from learning or whatever cognitive task we are engaged in at the
time, such as writing in my own case. The obvious solution? Turn the cell phone ringer off, leave the phone in another
room, and/or remove the Fitbit while trying to engage in an attention- and process-intensive task.
Sometimes the distractions come from beyond our own sphere of control, however. In the classroom, the research
quite clearly shows that performance suffers for students who are subjected to their classmates’ Internet surfing
during class (as do the surfers’ grades themselves) (Fried, 2008). Anecdotally, we all know that our performance as
teachers suffers when we are distracted by students’ non-instructional laptop or cell phone use in the classroom.
The objective and subjective evidence completely validates and justifies instructors’ requests to refrain from
behaviors that are distracting to instructor and students alike.
Technology can certainly take away cognitive resources and detract from both teaching and learning. But
technology is not all bad. Indeed, at the same time that the possibility exists that some form of technological trigger
could detract from comprehending, say, a passage from the Iliad, this same technology offers a tremendously
advantageous learning and teaching tool in other ways. For instance, in the passage from The Iliad, did you hesitate
at the word hecatomb? If so, did you then actually look up the word? Today’s readers are fortunate because
technology makes the process of look-up and access so much more facile, efficient, and immediate than was
previously possible. Earlier readers would have had to take a relatively pregnant pause in whatever they were doing
to perform a look-up in an actual physical dictionary. Thus, the very same technology that diminishes through
distraction also adds value through real time, immediate access to information. Other benefits of technology in
instruction are manifold. We can illustrate and communicate concepts and ideas through multimedia presentation
(e.g., YouTube videos, Webinars), develop creative, dynamic PowerPoint presentations instead of static overhead
transparencies, provide computerized experimental simulations, and ask our students to create Wikis, ePortfolios,
or Facebook pages to convey their learning (Gagnon, 2014).
These are all good uses of technology. But during real-time learning, technology that distracts or divides attention
is an overall negative. The benefit of technology – real time immediacy and the ability to create – also presents a
cost – real time distraction. We need to learn when to use technology for our betterment and when to let it go.
The question is not whether technology is good or bad but rather, when and how can it best be used to expand
and enhance human capacities, not “amputate” them (Lin, 2009)?
MULTITASKING MAYHEM
Gestalt psychologists knew more than a century ago that we cannot really hold two thoughts – or configurations –
in mind at the same time. The classic demonstration is the three-dimensional Necker Cube that can be perceived
from different orientations. The claim was that you cannot hold two opposing views in mind simultaneously, that
the nature of cognitive processing requires that you alternate – perhaps quickly, but still consecutively and not
simultaneously – between perceptions. Familiarity and practice make it easier to flip back and forth, eventually
creating the subjective impression of holding the two configurations in mind simultaneously.
Multitasking – trying to do two or more things at once – is akin to this subjective sense of holding two perceptions
at once. Anecdotally, students report that they can efficiently multitask across digital modes and media via their cell
phones, laptops, or tablets, even while they are studying. We can accept that they believe this because we observe it with
our own eyes in the classroom. We see students sneaking text messages under their desks or surreptitiously sharing
Snapchat photos on their cell phones; we see their eyes dart across laptop screens or fingers fly across keyboards,
in a way that makes us suspect that something other than the class discussion is the focus of their attention. Students
also claim that they are perfectly capable of doing this without cost to their own learning, but in this the research
suggests that they are sorely deluded.
What the research shows is that multitasking is a misnomer. Rather than performing two tasks simultaneously, we
are really toggling quickly from one task to another (Bowman, Levine, Waite, & Gendron, 2010). With practice, of
the type that digital natives have acquired through a lifetime of using digital devices, the oscillation between digital
sources can happen very quickly, but the tasks are still being processed sequentially, not simultaneously. We do see
that multitasking can be improved through training and changes are observed at the neural level as well: the speed
of prefrontal cortical processing increases with practice in multitasking (Small, Moody, Siddarth, & Bookheimer,
2009). But the sense of task efficiency is illusory. Studies show that self-described high multitaskers are actually
worse than self-described low multitaskers at task switching (Ophir, Nass, & Wagner, 2009). The same study showed
that high multitaskers were worse in terms of memory and focused attention as well. In another study, Foerde,
Knowlton, and Poldrack (2006) showed that digital multitasking makes learning less efficient and the knowledge
gained less useful later on. The lesson is clear: Those who swim in multiple digital weeds do so at their peril – they
have shorter attention spans, are worse at shifting attention (slower and lose more information), have poorer
memories, and have less generalized learning. Overall cognitive function – and learning – is better off in a one-at-a-time mode. As Rosen puts it:
For the younger generation of multitaskers, the great electronic din is an expected part of everyday life.
And given what neuroscience and anecdotal evidence have shown us, this state of constant intentional self-distraction could well be of profound detriment to individual and cultural well-being. When people do their
work only in the “interstices of their mind-wandering”, with crumbs of attention rationed out among many
competing tasks, their culture may gain in information, but it will surely weaken in wisdom. (2008, p. 110)
The lesson for teachers and learners is that focused attention is necessary for learning. Doing less (one thing at a
time instead of several at once) truly is more. And to ensure that, mechanisms to diminish distraction and deter
multitasking need to be enacted in the classroom and at study. The empirical results simply do not support students’
claims that they can do it all without cost; they measurably lose out in learning, experiences, and ultimately, wisdom.
DEEP PROCESSING
Another downside to the ease of access that technology delivers is that this very ease may result in shallower
processing of information. In the pre-digital era, we might scribble down an unfamiliar word – hecatomb – on a piece
of paper, save it until finally reaching a dictionary, and look it up at some future time. Was all of this perhaps enough
active processing to burnish it more firmly into memory than a more immediate Google search does today? The
research suggests that we are, in fact, losing our ability to retain information - offloading it - even as we are gaining
skill at accessing it, a phenomenon known as the Google effect (Sparrow, Liu, & Wegner, 2011).
There is also empirical evidence that students learn better when notes are taken in longhand versus on a laptop
(Mueller & Oppenheimer, 2014). The slower, intentional process of handwriting that cannot keep up with spoken
speech leads listeners to have to think more deeply about what they are hearing and make decisions about what to
capture in their notes. With a laptop, listeners can better keep up with speakers and tend to type verbatim what
they hear, without deeply processing to determine what it is that is important to capture. Thus, laptop note taking
turns into a shallow shadowing task. As teachers and learners, we need to be mindful and intentional of when and
where technology is used both in the classroom and in study in order to encourage deep processing of information
which in turn leads to better learning.
FOCUSED (NARROW) ATTENTION
Lin (2009) suggests the useful idea of attentional breadth and a distinction between breadth-biased (akin to wide
bandwidth) and focused (akin to narrow bandwidth) cognitive control. I will simply refer to these as broad and narrow
attentional focus, respectively. Narrow focus tasks are effortful and deplete limited capacity resources, such as
working memory. Our prehistoric ancestors engaged narrow attentional focus to, for example, hunt, track, and
gather, just as we engage it today to read, study, learn, and teach (and to hunt, track, and gather still). The
alternative type of task engages broad focus and is the sort necessary to carry out a vigilance task. This is non-specific
allocation of resources, of the type needed for scanning the environment. It is relatively non-effortful, not resource
dependent, and not depletable in the same way that narrow focus tasks are. Broad attentional focus is not focused
a priori, but is captured by salient stimuli. The Internet (or any medium) can call on either form, depending on the
task. Surfing the net may call on broad focus, while writing emails may call on narrow. Watching videos or skimming
Wikis and blogs may require either. For a non-digital example, imagine being in the library stacks looking for a
particular book, call number in hand. This task requires a narrow focus of attention. Conversely, trolling the stacks –
i.e., broadly browsing them – can result in joyfully leaving the library with many more books than we had initially
come for because of all the “gems” we serendipitously found in our search. The two types of attentional breadth
have always been in play; they are not unique to digital media.
ANTIDOTES TO DISTRACTION, MULTITASKING, AND SHALLOW PROCESSING
What can we do to help our students (and ourselves) overcome the distractions and lures toward multitasking and
shallow processing that digital media seems to bring with it? Following are some general antidotes to the
phenomena described above that can get in the way of learning.
DISCIPLINE
Students (and teachers) need to be disciplined moderators of their own attentional assets. Knowing when and
where to engage in broad versus narrow focus tasks is key. Class time, study time, and reading time (The Iliad or
otherwise) are not the right times for compulsively checking text messages or Facebook. Instructors can prohibit or limit cell phone
and laptop use in class. Students cannot allow broad digital tasks to crowd out and replace narrow focus tasks at
study either, but will have to institute discipline of their own volition. Sharing the research findings described above
with them may entice at least some to keep the digital media at bay when they study.
Instructors can encourage narrow focus in the classroom through activities that require deep processing: posing
thoughtful questions for class discussion and debate, challenging group work that requires problem solving or
innovation, providing time for in-class writing assignments, and posing problems to be solved independently.
Conversely, straight lecturing discourages narrow focus. While lecturing was de rigueur in the pre-digital era, it is
ineffective in an era where students have simply not had the amount of experience with focused attention that
digital immigrants might have had in their own educational experience. Focus will flag quickly and students will
succumb to digital lures. Instructors can encourage focused attention and deep processing during study time as well
by assigning meaningful and appropriately challenging assignments and readings. But ultimately, students will have
to learn to be protectors of their own attentional resources. They will have to learn digital discipline.
RESPITE
One could argue that our current educational system privileges the narrow, focused form of attention to the
detriment of processes that require breadth-biased cognitive focus. In doing so, do we chronically deplete our
students’ narrow focus resources? As I typed these thoughts, the familiar quote about the “forest for the trees”
came to mind. I happen to have a copy of Bartlett’s Familiar Quotations in my library so I decided I would take a
break, wander to my bookcase in another room, and look it up “old style.” As I walked across the house to get to the
shelves, it occurred to me that not only did the process of engaging with a physical artifact result in a deeper
processing of information for me, but it also offered me a noticeable respite from the narrow focus that I had been
employing the entire morning. Respite is something we ought to be doing in the classroom and encouraging our
students to adopt during study as well.
Lin’s view supports this. Lin (2009) asserted that moving into a broad mode of attention may allow restoration of
the narrow mode and may, in fact, be essential to it. Eco-psychologists likewise promote taking walks in nature – a
broad focus activity – as a way of regenerating our ability to engage in narrow focused activities (Greenway, 1995).
The resulting suggestion for encouraging focused attention and deep processing may seem quite counterintuitive,
but productive nonetheless: Encourage breadth-biased cognitive activities. In class, we can do this through hands-on activities, computerized experimental simulations, self-reflection/journaling, and multimedia presentations.
Students should be encouraged to take breaks from focused studying. Study break activities that promote broad
focus include dining, chatting with friends, walking outside or running, and yes, even Internet surfing.
BALANCE
If discipline is key for knowing when to engage in a particular type of focus – broad versus narrow – balance is key
for knowing how much of each to engage in. Perhaps the best way to help our students understand this is to be well
balanced ourselves. Live lives as scholars and teachers, but also reveal our non-scholarly interests and lives as family
members and athletes, musicians, activists, artists, or whatever our other interests may be. Demonstrate that one
can engage in activities that call on broad focus (e.g., taking hikes in the woods or strolling through a city park) in a
way that does not overwhelm our narrow focus occupations. In other words, be a role model of moderation and
balance. Applying Lin’s narrow versus broad attention to the classroom, it might behoove instructors to offer the
opportunity to engage in both, especially for longer class meeting periods. This can be achieved by interweaving
narrow focus (e.g., lecture) and broad focus (e.g., hands-on) activities in class.
MINDFULNESS
Instructors must also recognize that individual differences in experience will play a role in students’ ability to focus
narrowly versus broadly. A significant individual difference in today’s student population is relative exposure to and
use of technology, which typically correlates with generation and age (Kaiser Family Foundation, 2006). Digital
natives are more practiced at digital multitasking (broad attentional allocation) than digital immigrants, who are
more one-at-a-time thinkers (narrow attentional allocation). If digital natives lack sufficient experience with narrow
attentional focus and are enticed into a life of distraction and multitasking, then one antidote is to teach
contemplative practices. There are many contemplative traditions, but the one that is currently in vogue is
mindfulness (Kabat-Zinn, 2012). Mindfulness is the very art of paying attention with intention. It focuses the mind,
i.e., narrows attentional focus. (See Iwamoto & Hargis’ chapter for a discussion of mindfulness.)
Teaching focusing skills may not only help digital natives with learning but also with anxiety, which seems to have
become epidemic in the digital age (Rosen, 2013). It is not surprising that an era of constant stimulation, information
overload, and a surfeit of distractions should be accompanied by an increase of anxiety. The changes in our society
now come so quickly, and the demands to adapt are so constant, that it is no wonder everyone is on edge. Many
individuals suffering from anxiety have gained great respite from practicing mindfulness (Goldin & Gross, 2010).
Mindfulness may well be an antidote to distraction, multitasking, shallow processing, and anxiety.
ATTENTION HYGIENE: BEST PRACTICES FOR TEACHING AND LEARNING IN THE DIGITAL AGE
In summary, here is a list of "best practices" for encouraging focused (narrow) attention and deep processing, two elements that are necessary for student learning in any era. These practices amount to what might be termed attention hygiene: habits that allow teachers and students alike to decrease distraction, discourage multitasking, encourage deep processing, and balance broad and narrow attentional focus, all of which should lead to more effective teaching and learning. They aim to make a "troublesome effort" less troublesome.
1. Discipline and Choice: Distraction is a choice that can be decided against. Encourage students to study and work in distraction-free locations. Prohibit non-instructional technology use (cell phones, Internet surfing, social media, etc.) in the classroom, and encourage students to adopt this policy for out-of-class time that requires their focused attention/deep processing. Keep distracting technology out of reach, sight, or earshot at such times.
2. Reward: Think of broad focus activities as a reward for engaging in narrow focus activities. Intentionally schedule broad focus time for interacting with technology: check email (social media, etc.) at prescribed times of the day and only then.
3. Habit: Good attentional hygiene is a choice that becomes habitual with practice. Just as multitasking and digital distractions can become addictive through repetition, so too can healthy habits take hold. Choreographer and dancer Twyla Tharp has written a wonderful book on the development of habit that is a recommended read for anyone wanting to build more discipline and healthier habits into their life (Tharp, 2006).
4. Mindfulness/Contemplative Practice: Encourage students to learn mindfulness or another contemplative technique that encourages one-thing-at-a-time thinking.
5. Modeling: Demonstrate healthy habits through role modeling. When meeting with students, turn your own cell phone off and provide an example of the lost art of undivided attention. If your desk phone rings while you are meeting with them, let it go to voicemail. Turn your computer screen off or close your laptop so you will not be distracted by whatever may pop up on it.
6. Deep Processing: Suggest that students take notes in longhand and that they employ non-digital sources of information. Taking notes in longhand makes for better learning and memory (Mueller & Oppenheimer, 2014; Piolat, Olive, & Kellogg, 2005). Suggest or require that students access information beyond what is available online (Sparrow et al., 2011).
7. Balance and Respite: If your class meeting period allows, allocate time to both narrow focus/deep processing tasks and broad focus tasks. Strike a balance, and encourage students to do the same with their own study time. Encourage them to take respite from narrow focus/deep processing with broad focus activities.
REFERENCES
Berridge, K.C., & Kringelbach, M.L. (2015). Pleasure systems in the brain. Neuron, 86, 646-664.
Bowman, L.L., Levine, L.E., Waite, B.M., & Gendron, M. (2010). Can students really multitask? An experimental study
of instant messaging while reading. Computers & Education, 54, 927-931.
Bryce, J. (1901). Obedience. In Studies in history and jurisprudence, Vol. 1. New York: Oxford University Press.
Foerde, K., Knowlton, B.J., & Poldrack, R. A. (2006). Modulation of competing memory systems by distraction.
Proceedings of the National Academy of Sciences, 103 (31), 11778-11783.
Fried, C. B. (2008). In-class laptop use and its effect on student learning. Computers & Education, 50(3), 906-914.
Gagnon, D. A. (2014, January). Facebook friends: Telling the stories of historically under-represented figures in the
history of psychology. Presented at the National Institute on the Teaching of Psychology, St. Pete Beach,
FL.
Goldin, P. R., & Gross, J. J. (2010). Effects of mindfulness-based stress reduction (MBSR) on emotion regulation in social anxiety disorder. Emotion, 10(1), 83-91.
Goleman, D. (2013). Focus: The hidden driver of excellence. New York: Harper.
Greenway, R. (1995). The wilderness effect and ecopsychology. In T. Roszak, M. Gomes, & A. Kanner (Eds.), Ecopsychology: Restoring the earth, healing the mind. San Francisco: Sierra Club Books.
Havelock, E. A. (1963). Preface to Plato. Cambridge, MA: Belknap Press of Harvard University Press.
Kabat-Zinn, J. (2012). Mindfulness for beginners: Reclaiming the present moment and your life. Boulder, CO: Sounds
True Inc.
Kaiser Family Foundation (2006). The media family: Electronic media in the lives of infants, toddlers, preschoolers,
and their parents. Menlo Park, CA.
Lin, F., Zhou, Y., Du, Y., Qin, L., Zhao, Z., Xu, J., & Lei, H. (2012). Abnormal white matter integrity in adolescents with Internet addiction disorder: A tract-based spatial statistics study. PLoS ONE, 7(1), e30253. https://doi.org/10.1371/journal.pone.0030253
Lin, L. (2009). Breadth-biased versus focused cognitive control in media multitasking behaviors. Proceedings of the National Academy of Sciences, 106(37), 15521-15522.
Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25(6), 1159-1168.
Ophir, E., Nass, C., & Wagner, A.D. (2009). Cognitive control in media multitaskers. Proceedings of the National
Academy of Sciences, 106 (37), 15583-15587.
Piolat, A., Olive, T., & Kellogg, R. T. (2005). Cognitive effort during note taking. Applied Cognitive Psychology, 19,
291-312.
Prensky, M. (2001). Digital natives, digital immigrants: Do they really think differently? On the
Horizon, 9(5), 1-6.
Rosen, C. (2008). The myth of multitasking. The New Atlantis: A Journal of Technology and Society, 20, 105-110.
Rosen, L. D. (2013). iDisorder: Understanding our obsession with technology and overcoming its hold on us. New
York: St. Martin’s Press.
Small, G.W., Moody, T.D., Siddarth, P., & Bookheimer, S.Y. (2009). Your brain on Google: Patterns of cerebral
activation during Internet searching. American Journal of Geriatric Psychiatry, 17 (2), 116-126.
Sparrow, B., Liu, J., & Wegner, D.M. (2011). Google effects on memory: Cognitive consequences of having
information at our fingertips. Science, 333 (6043), 776-778.
Tharp, T. (2006). The creative habit: Learn it and use it for life. New York: Simon & Schuster.
CHAPTER 4
WHEN MORE IS LESS: THE RISKS OF OVERSCAFFOLDING LEARNING
BETHANN BIERER
UNIVERSITY OF COLORADO - DENVER
No one ever said teaching would be easy. The reality is that teaching is hard work. Even when it is done well, it is an
endeavor rife with frustrations, exasperations, and unmet goals. In today’s educational environment, it may be even
more so, as technology impacts the rate of change in the expectations, goals and techniques of educators. In the last
two decades alone, we have gone from chalkboards to overhead projectors to PowerPoint presentations and now
to Learning Management Systems (LMSs). Since most teachers want to do well by their students and provide an
optimal learning environment that contributes to their students’ successes, they have worked assiduously to keep
up with these changes. Yet, as is often the case, haste may preclude a careful consideration of the implications
of some of these changes. In this short chapter, I will discuss the possibility that by using all this new technology at
our disposal, we may be over-scaffolding learning for our students, and this may have some repercussions that
warrant both discussion and research.
Lev Vygotsky (1952) used the term scaffolding to refer to one of the processes whereby an expert supports a novice’s
learning by providing enough information or structure for the novice to be successful. A contemporary of Piaget,
Vygotsky believed that all learning is social and that experts should convey information to novices in specific ways.
Scaffolding requires the assessment of what a novice can already do, what a novice can do with help, and then the
development of supports so the novice can operate at a higher level, eventually without those supports or scaffolds.
At the point where performance is consistently at the higher level, the earlier supports, or scaffolding, can be removed or discontinued, and supports for moving on to a higher level can then be implemented. An example of this
is early counting behavior. A child might know the order of numbers, but not be good at one-to-one correspondence.
The expert (who already knows this skill) might help the novice point to each object while counting. Soon the
youngster is counting away, without the need for someone else to point out each object. If the next skill to be learned
is counting by "twos," the expert will devise another method to scaffold that learning, and so on. Eventually, learners
should reach a level of understanding where scaffolds are self-constructed, or they actively seek advice of experts in
constructing their own scaffolds rather than relying on aids for their learning that are constructed by others.
Thus, the mature learner, according to Vygotsky, would be an individual who understands and scaffolds
his or her own learning with only appropriate reliance on others. This self-understanding is essential because
Vygotsky assumes that the now-expert learner will be able to use scaffolding to help others who have not mastered
relevant skills. It is based on Vygotsky’s ideas that I have come to use the term over-scaffolding to describe a perilous
practice in which educators do not adequately assess or understand what their students can do, or should be able
to do without assistance before offering them help to master a task. A brief search of the educational and
psychological literature found only one other use of this term, referring to teaching second language learners (Daniel,
Martin-Beltrán, Madigan, & Silverman, 2016). Interestingly, in this study, the practice of over-scaffolding language
learning with prompts rather than insisting on active practice was found to hinder the language acquisition of the
participants.
As modern teachers, we often use our LMSs and other technologies to provide scaffolding for our students because
we not only think it will be helpful for our students to have as much of our help and guidance as possible as they
endeavor to learn the content we present, but also because it is so easy to do. Many teachers now provide their
students practice quizzes, handouts, slides, flashcards, sample papers and study guides without ascertaining first if
those scaffolds are actually necessary. In the days when such information had to be typed out, copied, and distributed by hand, teachers surely were more judicious in their use of these supports. But now, with technology, it is
possible for instructors to provide all this scaffolding to students quickly and inexpensively.
I propose that providing more scaffolding than students require may not be serving the students’ best interests. As
scientists committed to relying on evidence-based methodologies, it behooves teachers of psychology to take a close
look at the ways we are incorporating technology in our teaching. Unfortunately, this change has happened so
quickly that research has not been able to keep up. There has been no comprehensive review of the efficacy of this enhanced scaffolding our students are subjected to. Cranney (2013) presented a cogent discussion of why this research is so difficult to conduct, including ethical limitations and the impact of inflexible curricula.
While there remains a considerable lack of consensus on the impact of technology-based scaffolds on learners and
the learning process, there are a couple of active dialogues occurring among educators related to this idea. There is
a longstanding conversation about the impact of pre- and self-testing (see Brown, Roediger, and McDaniel, 2014), with a consensus emerging that such supports can be helpful. There is also a sense that many of the struggles our students have stem from the fact that today's students simply do not read as much as their predecessors did, nor are many of them reading at a level that will enable them to master college material (Hoeft, 2012).
In the absence of a solid research base, I would like to utilize some anecdotal information to inform my discussion:
It does not appear that the rise in the availability of easy scaffolding has resulted in a subsequent rise in student
achievement or sophistication of work. While this is my personal experience, and that of many of my colleagues, I
believe that if, indeed, there had been such a spike in college performance related to increased scaffolding, there
would have been significant headlines to that effect. I will also note that there have been few studies that report a
sharp decline in college performance either. Thus, for the purpose of this discussion, I propose that we do not know
what effect this increase in scaffolding has on college achievement. However, I believe that there may be some non-cognitive outcomes of teachers being too helpful that we should pay attention to.
Given this muddiness in the new research, I find myself considering the theoretical orientations and practical advice
of some of the early thinkers in this field as I consider my own teaching choices. Since understanding learning has
been foundational to psychology throughout its history, it is interesting to look back at what some of our pioneers
believed about the process of teaching and learning. Psychologists have always attempted to improve lives,
especially when it comes to the questions about what leads to good learning outcomes for the most people. I would
like to compile some of their ideas in the service of making sense about what our reliance on technology could be
doing to learning and learners.
Jean Piaget (1952) and the neo-Piagetians such as Robbie Case (Case, 1992) and Kurt Fischer (Fischer & Bidell, 1998)
proposed that humans construct their understanding of the world by directly acting upon it and that knowledge is
constructed in a stepwise, developmental process that proceeds from infancy to adulthood. Using this model, the
job of the teacher is to present the student with appropriate materials, tools and experiences to create higher and
higher levels of knowledge and more sophisticated concept formation. The end point of the process should be an
adult learner who can manipulate abstract concepts, analyze information coherently and convey that information
to others in meaningful ways.
Educators have long interpreted this theory as indicating that their job is to provide the correct environment for
students to explore as they create knowledge, and thinking skills, for themselves (see Tomlinson-Keasey, 1978). As
instructors, most of us are very careful to choose textbooks that provide our students with accurate and complete
information and to create classrooms that allow the exploration of new material through discussions, group
interactions and other activities. The Piagetian-style of education seems to be quite comfortable for many educators.
As discussed above, Vygotsky believed in the social basis of learning, to the extent that he declared that all learning
is social (1952). He was more comfortable with the notion that it is acceptable to actively teach students rather than
allow them to create their own understandings. He encouraged teachers to be very involved in both shaping and
guiding the educational experience, but also maintained that such involvement needed to be based on an intimate
understanding of both where the student was functioning presently, and what goal the student should be working
to attain.
Albert Bandura (1977) is a proponent of the importance of self-efficacy, the belief in one's own ability to master skills and perform tasks, and a large body of research shows that self-efficacy is a key to success in many endeavors. According to Bandura's theory, the best ways to increase self-efficacy are to provide experiences that help students feel capable of mastering whatever tasks are presented and to build the psychological strength to tolerate failures without losing a realistic sense of one's own competencies. It is important to note that self-efficacy
is a concept that may or may not be related to subject matter. A student may feel very confident in her ability to work
math problems, but not to complete a group project or write a paper. Other students may approach their learning
by feeling that they are capable of learning whatever they need to succeed.
Bandura’s concepts remain very current. For instance, recently, Charles Benight and his colleagues (Benight, Shoji &
Delahanty, 2017) proposed the theory that one’s belief in the ability to handle challenges is a factor involved in
whether PTSD symptoms develop following traumatic experiences. This research indicates that the relationship
between trauma and PTSD is not linear, but that it is affected by a complex web of environmental and personal
characteristics. In other words, a person seems to be able to handle traumatic events until and unless they feel the
challenges are simply more than they can manage.
In a similar vein, Erikson (1950) felt that key psychosocial developments related to the learning environment included
"Industry," "Initiative," and "Autonomy." Thus, to Erikson, a well-developed learner can function with an appropriate
level of independence, is able to be a self-starter, and genuinely understands that he or she can contribute
meaningfully to others, the environment, and society. The failure to develop these skills can result in an individual
who is dependent on others, experiences shame and doubt, and harbors feelings of inferiority. It would seem unlikely
that this individual would be able to explore the environment, as Piaget encouraged, work comfortably with others,
or develop a sense of self-efficacy, which, as Bandura’s work has demonstrated, is essential to becoming a
competent learner.
And finally, we should remember Alfred Adler’s (1964) work. Adler was one of the early challengers to
psychoanalysis’ reliance on psychosexual development. He described appropriate environments for children, and
the appropriate roles of adults in children’s lives. He especially cautioned parents about pampering their children,
as he believed those children did not grow up with the internal strengths to function well in adult roles. Dreikurs
(Dreikurs & Soltz, 1964), who continued Adler's work after Adler's untimely death, famously stated that any time an adult does something for a child that the child can do himself, the adult robs that child of the opportunity to develop self-esteem and feelings of self-competence. The similarity between Adler and Vygotsky on the importance of the environment, including parents, teachers, and peers, in learning is striking.
In my oft-frustrated, wanting-to-do-better teacher persona, I try to rely on these theorists to inform many of the
things I do in the classroom. Thus, I believe that the role of the teacher is to provide an appropriate learning
environment with materials that can be explored and manipulated in the service of internalizing knowledge and
building more sophisticated thinking skills. For a child, this might be blocks to count and rearrange to master math
problems or construction paper, glue and toothpicks to explore bridge-building. For the older learner, it might
include a well-stocked library of carefully chosen books and articles (or at least an excellent textbook). It might
include high quality videos, access to scholarly research reports and presentations relevant to the material being
mastered. In addition to these materials, it seems likely that an expert who can scaffold the student’s learning is
important. The expert can set the pace of the presentation of the material, check for comprehension, and provide
necessary feedback to improve performance.
I often think that these theorists would shudder at some of the ways we are currently educating our students. They
would look at our use of technology and have some serious questions to pose to us. Piaget might worry that rather
than needing to explore the environment to construct their own knowledge, students are flooded with information
from which they simply need to choose the data most interesting to them. This leads to skimming material and
latching onto what is most attractive and easiest to digest. This is unlikely to lead to the in-depth reading and thinking
that is essential to developing high-level analytical skills. Vygotsky would be concerned that the direct social
interactions he believed were so important have been diverted into text messages and whole-class announcements.
He might wonder if scaffolding is being carried out in a manner that fosters the optimal growth of independent
learning skills or if students are becoming overly reliant on this scaffolding and not developing higher-level self-organizational skills. Erikson might wonder how a child develops a sense of industry: young people today rarely have
the opportunity to directly serve their families or communities. Rather than acting on their own and deciding what
they want to do, they have either been kept busy with a wide variety of (adult-organized) activities, or with multiple
jobs, or allowed to entertain themselves with screens instead of interacting with others. Adler would fear that we
are stunting our students by not allowing them to struggle, or deal with unpleasantness, boredom or even social
rejection. I think these men would expect our students to be somewhat lazy and lacking in initiative, seekers of
attention who are poor communicators, self-focused, and unsure of their own skills and abilities to contribute to the
learning environment. These theorists would be concerned that, at a deep level, because of the current learning
environment, students do not feel competent to perform at the level being asked of them (even if they are) and that
this lack of self-efficacy could hinder their educational attainment.
However, while channeling Piaget and Vygotsky, I try to keep in mind that I am not just teaching content, and I
endeavor to pay attention to what students are learning about learning, and about themselves, during the process of
mastering the material I present. A growing body of literature is indicating that these non-content, or process factors
play a particularly important role in student development. For instance, Dweck (2007) has explored the value of
developing what she calls a growth mindset. Her research indicates that students who believe in their ability to learn
and grow perform better academically than students who have a more fixed belief in what they can and cannot do.
Duckworth (2016) has demonstrated that students who know they have the skills to persevere under less than
optimal conditions have better learning outcomes than those who doubt their skills (even if there is no actual difference in skill level), and Zimmerman (2000) has found that students who demonstrate a sense of self-efficacy
and a willingness to actively engage in their own learning tend to be successful students. This growing pool of
research seems to indicate that it is not just what is being taught, but also what the students are learning about
themselves in the process that is important in student achievement.
I teach upper level courses at a large state research university. At the beginning of each semester I share with my
students the bad news that I do not post lecture notes, PowerPoint slides, chapter outlines or study guides. I do not
allow them to use their laptops, tablets or cell phones in class unless there is some compelling reason that they do
so, and I require classroom attendance and participation. They are invariably chagrined, because this is a different
learning environment than what they experienced throughout high school and even their early college years. A
student will invariably ask why I will not do these things for them, and I simply answer that they do not need me to.
As upper level college students, they should be able to read, take notes, make flashcards and set up their own study
groups. I then ask them if they have heard of any research that indicates that, since it has been so easy for instructors
to do these things via our LMS, college success has skyrocketed. They tend to look rather sheepish and admit that no, graduation rates and GPAs have not dramatically increased in the past few years. These students have been victims of over-scaffolding. Their teachers have provided more information, support, guidance, and mentoring than they need to be successful, and now they feel they need that scaffolding just to succeed. To the extent that technology makes this easy to do, it has contributed to this situation.
I am a pragmatic sort, and in the absence of clear research to the contrary, I could accept all this technological
intervention as a neutral factor in my students’ educational journeys. However, I do not think the outcome is at all
neutral. I believe we are creating a sense in our students that they need us to do these things for them. I know many
students who will not even try to add a series of single-digit numbers without pulling out their calculators. I once
had a student declare in class that he never would have made an “A” on his neuroscience exam if the instructor had
not posted a study guide. My answer was, “How do you know?” It might have been easier and quicker for students
to use instructor-generated materials, but since I dropped the extreme scaffolds in my classes, the grades have not
declined at all, my classes still fill, I have not seen an increase in students dropping my classes, and many students
have risen to the challenges presented to them. If they get an “A” in one of my classes, it is because they earned
that grade. The “A” does not have an asterisk by it, indicating that it was received due to all the extra help provided
by the teacher. (There is a similar situation in sports where records are posted with an asterisk, indicating that the
record was set while the player was illegally using performance-enhancing drugs.) Additionally, the number of emails I get from my students saying that they have appreciated that I pushed them out of their comfort zone and set
the bar high in my classes has increased.
For instance, in one of my classes, I have students construct their own projects throughout the semester. I provide
what I consider enough material, but they need to decide on the constructs they will measure and the interventions
they will carry out. Just this semester, I had one student who returned to school after earning a Bachelor’s degree in
another area. She made it clear early on that she expected the course to be easy and that she should be able to sail
through it – if only I were giving her enough guidance. After completing the first large assignment, she posted the
following: “I am happy that I chose this course even though I struggled some in the beginning. The challenge is what
makes learning fun.” I would like to think that she learned something important that was not related to the material
covered in the unit. Another student in the same course stated, “I learned how important it was to plan carefully in
advance. When I did that, the entire project went better.” And from a student in another class, “While I didn’t do as
well as I would have liked, I learned how to study for tests, and am proud of myself for getting better later in the
class.” While I would like to think these students will also carry away the content we covered in class, I am certain
that they will hang onto these non-content aspects of their learning processes.
So, what is a teacher to do? The answer lies in a clear process of discernment as the teacher establishes the
parameters and expectations of the course. One way to do this would be to consider the level of the course and
think about what a successful student would look like at the end of the course. Working backwards can clarify the
appropriate level of scaffolding and minimize the tendency to over-scaffold. For instance, upper level college courses
should be considered one step before the work world or graduate school. Thus, I do not provide chapter outlines or
study guides for that material because I cannot imagine a boss or graduate advisor providing such support, and I
know clients do not show up with neat outlines of their lives so that the clinician need only make brief comments on a
form. Learning to read for meaning and importance, taking notes, preparing questions and understanding what is
being asked are all very important skills for college graduates. These expectations can be laid out clearly in the
syllabus and students can be put on notice that these are the course expectations. Similarly, I begin to build in the
understanding that to be successful, a student must go beyond the basic requirements of a course. On the rubrics I
provide, I always include a few “quality” points that are completely at my discretion. These have proven to be
interesting – I am fairly stingy in awarding them, but I rarely have students complain or question these scores. I infer
that most students know when they have not done an outstanding job, even if they have met the basic requirements.
Learning how to go above and beyond, to impress a boss or graduate advisor, and to focus on creating high quality
productions, are important skills to have, whatever the subject matter is.
Ideally, an entire department would align its scaffolding for its students. First-year courses might include study
guides, chapter outlines, sample tests, sample papers, a publisher-developed learning system, and a calendar that
clearly lays out weekly expectations. These foundational courses might build in activities geared towards students
exploring both their understanding of their own strengths and weaknesses and opportunities to explore a variety of
learning situations such as small groups, online discussions, or self-study applications provided by the textbook
publisher. Second-year classes might expect the students to outline their own chapters and create their own
flashcards and include assignments that begin to require a higher level of writing and more clarity of thought. Critical
thinking could be introduced and scaffolded with appropriate rubrics and feedback. Helping students understand
their own competencies and strengths, as well as what skills they should be working to develop would help these
second-year students understand what experiences they should be seeking for themselves.
As students move through the program, the scaffolds at the early levels should be removed and replaced with
scaffolding that students create for themselves. Thus, junior-level courses should have challenging textbooks
with a great deal of material that needs to be understood, even if it cannot be “covered” in class lectures. Senior-level
courses should contain opportunities for self-directed work, high-level analysis, and integrative thought.
Communication with their “expert” should be ongoing with the goal of developing personal and interpersonal skills
that will translate to the next stage of their life or education.
A systematic reduction in the number of scaffolds students receive, and a reduction in the tendency of teachers to
over-scaffold their students’ learning, combined with opportunities for reflection and evaluation, should result in
students developing clearer understandings of their learning competencies. They will be less likely to see themselves
as learners who need others to tell them what to do and what to learn. By not over-scaffolding course material,
students can learn what they are able to do and what they need to learn to do better. Hopefully through this process
they will gain a sense of self-efficacy as learners. They can reap the rewards of doing things for themselves (initiative),
move through life with a sense of self-direction (autonomy), know that they can help others, and contribute to the
learning environment and accomplish real tasks (industry). By surviving struggles, or even failures, in a supportive
environment they will not automatically feel inferior, or be afraid to take on new challenges.
I think the advice of Adler (1964) and Dreikurs (Dreikurs & Soltz, 1964) is particularly salient – adults should not do
for students what the students can do for themselves, for every time they do that, the student loses an opportunity
to build self-esteem and experience self-efficacy. Just because over-scaffolding makes things “easier” for students
does not mean it is the right thing to do. And here I will throw in one last founding father of our science,
Sigmund Freud (1901). Freud would remind us that our unconscious motivations, expectations and beliefs can be
very powerful. As teachers, we need to be very careful that we are not unconsciously giving our students messages
that they cannot learn by themselves, that they are not competent enough to master college-level material and that
they are dependent on their teachers’ largesse to carry them through their college experience. If our students pick
up these beliefs from us, consciously or unconsciously, we have done them a tremendous disservice.
To conclude, I would urge educators to think carefully about the unspoken messages they give their students through
the expectations held for their behavior. Every time we over-scaffold for our students, we deprive them of the
opportunity to explore what they are capable of, and we send an unspoken message that they need our help and are not
capable of doing things for themselves. We would do well to remember that we are not only teaching content, but
we are helping our students develop their mindsets about learning, their feelings of self-efficacy as a learner, and
their understanding of their own role in their personal learning process. Those outcomes may very well prove every
bit as important to their future success as the subject matter we convey.
REFERENCES
Adler, A. (1964). The individual psychology of Alfred Adler (H. L. Ansbacher & R. R. Ansbacher, Eds.). New York, NY:
Harper Torchbooks.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: Henry Holt.
Benight, C., Shoji, K., & Delahanty, D. (2017). Self-regulation shift theory: A dynamic systems approach to traumatic
stress. Journal of Traumatic Stress, 30, 333-342.
Brown, P. C., Roediger, H. L., III, & McDaniel, M. A. (2014). Make it stick: The science of successful learning.
Cambridge, MA: Belknap Press of Harvard University Press.
Case, R. (1992). The mind's staircase: Exploring the conceptual underpinnings of children's thought and knowledge.
Hillsdale, NJ: Erlbaum.
Cranney, J. (2013). Toward psychological literacy: A snapshot of evidence-based learning and teaching. Australian
Journal of Psychology, 65(1).
Daniel, S., Martin-Beltrán, M., Madigan, M., & Silverman, R. (2016). Moving beyond “yes” or “no”: Shifting from
over-scaffolding to contingent scaffolding in literacy instruction with emergent bilingual students. TESOL Journal,
7(2), 393-420.
Dreikurs, R., & Soltz, V. (1964). Children: The challenge. New York, NY: Hawthorn Books.
Duckworth, A. (2016). Grit: The power of passion and perseverance. New York, NY: Scribner.
Dweck, C. (2007). Mindset: The new psychology of success. New York, NY: Ballantine Books.
Erikson, E. (1950). Childhood and society. New York, NY: W.W. Norton.
Fischer, K. W., & Bidell, T. R. (1998). Dynamic development of psychological structures in action and thought. In R.
M. Lerner (Ed.) & W. Damon (Series Ed.), Handbook of child psychology: Vol. 1. Theoretical models of human
development (5th ed., pp. 467-561). New York, NY: Wiley.
Freud, S. (1901/2003). The psychopathology of everyday life (A. Bell, Trans.). London, UK: Penguin Books.
Piaget, J. (1952). The origins of intelligence in children. New York, NY: International Universities Press.
Tomlinson-Keasey, C. (1978). Piaget's theory and college teaching. Essays from and about the ADAPT Program, 29.
http://digitalcommons.unl.edu/adaptessays/29
Vygotsky, L. S. (1962). Thought and language. Cambridge, MA: MIT Press.
Zimmerman, B. (2002). Self-efficacy: An essential motive to learn. Contemporary Educational Psychology, 25, 82–91.
CHAPTER 5
F-ATAL DISTRACTION: THE IMPACT OF IN-CLASS MEDIA MULTITASKING ON STUDENTS’ CLASSROOM LEARNING
AMANDA C. GINGERICH HALL AND TARA T. LINEWEAVER
BUTLER UNIVERSITY
INTRODUCTION
Today’s students are attempting to multitask more than any generation of the past 70 years (Carrier, Cheever, Rosen,
Benitez, & Chang, 2009). This is problematic given that completing two cognitive tasks simultaneously is impossible—
multitasking is a myth (Rosen, 2008). Although people may believe they are dividing their attention among multiple
tasks, instead, at a cognitive level, they are rapidly shifting their attention between the tasks rather than completing
them at the same time (Ward, 2010). Every time we switch from one task to another, we lose time, and the amount
of time we lose increases with task complexity (Rubinstein, Meyer, & Evans, 2001). In addition to speed, accuracy
suffers when we rapidly switch between tasks (Rogers & Monsell, 1995). In fact, according to Bailey and Konstan
(2006), when a peripheral task interrupts the completion of primary tasks, people require up to 27% more time and
make twice the number of errors on the primary tasks compared to when the peripheral task appears between
them. Even those who frequently attempt to multitask are inefficient at doing so. People who report multitasking
regularly are more distracted by irrelevant information in the environment, are less efficient at distinguishing
relevant from irrelevant mental representations, and are less effective at fully switching to a new task without
interference from the previous one than those who report multitasking less often (Ophir, Nass, & Wagner, 2009).
Similarly, more frequent media multitasking is associated with poorer everyday executive functioning (i.e.,
self-monitoring, emotional control, planning, task monitoring, and attention), although the direction of influence in this
correlational relationship is, as yet, unknown (Magen, 2017).
Although the loss of efficiency associated with task switching could certainly incur negative consequences for college
students’ academic success, another detrimental effect of multitasking on academic performance may emerge from
the impact multitasking has on learning and memory. Multiple studies have demonstrated that multitasking results
in weaker encoding of new information into long-term memory (Bailey & Konstan, 2006; Ophir et al., 2009). Because
effective encoding of information is vital to remembering it later, it is not surprising that multitasking is also
associated with impaired retrieval (Wickens & Hollands, 2000). Learning and memory are key components of
performing well on academic assessments like quizzes and examinations. Thus, multitasking-induced cognitive
difficulties have the potential to influence students’ academic success negatively and substantially.
PREVALENCE OF MULTITASKING IN THE CLASSROOM
Given the abundant evidence that “multitasking” is detrimental to cognitive efficiency and learning, ideally, students
would refrain from media multitasking during class, but ample evidence indicates that they do not. Although they
may believe that they can multitask effectively, recent research suggests that today’s students are no better at
multitasking than members of previous generations (Carrier et al., 2009). Unfortunately, this does not deter most
students from engaging in such behavior. In one survey-based study, 64.3% of students reported using laptops in at
least one class period; those who used laptops used them during 48.7% of their classes, on average (Fried, 2008).
Out of each 75-minute class period, students media multitasked an average of 17 minutes, or over 22% of the class
time. These students reported that they checked email (81%), used instant messaging (IM: 68%), surfed the Internet
(43%), played games (25%), and did “other” activities (35%). This is consistent with the results reported by McCoy
(2016), who found that college students reported spending an average of 20.9% of their class time on digital devices
for non-class purposes.
The high level of self-reported media multitasking in class may be, in reality, an underestimate of students’ actual
time spent off-task and on-media. Gathering student self-reports and simultaneously using spyware to observe
students’ classroom laptop behavior directly, Kraushaar and Novak (2010) found that the average student generates
more than 65 new active windows per 75-minute lecture, with 62% of those windows being off-task or “distractive.”
What is more, students significantly underreport both their email and IM laptop use. Approximately 87% of the
student sample reported using email during class, but the spyware indicated that 94% were on their email accounts.
Furthermore, only 25% of students reported using IM during lecture, compared to the 61% who actually engaged in
IM.
Although much of the research on students’ media multitasking focuses on laptop use, mobile phones are also a
significant source of distraction during class time. Junco (2012), for example, found that text messaging is a more
common form of media multitasking in the classroom than using Facebook, accessing email, searching the Internet,
or instant messaging. In fact, a majority of students (i.e., 69%) reported texting during class. Of course, students’ use
of these technologies is not mutually exclusive, and, in combination, they can significantly reduce students’ ability
to focus on learning during class time. Sovern (2013) classified 62% of students as “strongly distracted,” which he
defined as using laptops or cell phones for non-class purposes for at least half of the class period. Notably, these
participants were upper-year law students who had much to gain from paying attention in class.
MULTITASKING AND CLASSROOM LEARNING
The prevalence of mobile phone and laptop use in college classrooms has inspired many investigations of the effects
these technologies have on student learning. Willingham (2010) poses the thought-provoking, but as yet
unanswered question: “Have technology and multitasking rewired how students learn?”
Although the impact of media multitasking on the internal process of classroom learning remains unknown, Lee, Lin,
and Robertson (2012) conducted an experimental study investigating how well students learn in the presence of (1)
no media distractions, (2) a background media distraction that could be ignored, and (3) a background media
distraction that required attention. They asked students to read and answer comprehension questions about a
passage that they read under one of three conditions: in silence, with a video that would not be tested playing in
the background, or with a video that would be tested playing in the background. They discovered that students
effectively ignored the non-tested video, performing similarly to the group that read in silence on the reading
comprehension test and remembering less content of the video than the group who paid attention to it. In contrast,
the students who divided their attention between the passage and the video did not comprehend the reading as
well as the other two groups.
This finding suggests that students can effectively ignore and learn in the presence of distractions, but typically
mobile phones and laptops provide distractions that demand attention (similar to the attended-to video in the Lee
et al. study). Therefore, these technologies have the potential to reduce the cognitive resources available for in-class
learning. Additionally, other researchers have demonstrated that, even when performance on a cognitive task
reaches the same level (like the similar reading comprehension scores of the students who read in silence and those
who read while ignoring the distraction), different types of learning may underlie memory performance in singletask versus dual-task situations (Foerde, Knowlton, & Poldrack, 2006). They suggest that the declarative memories
formed during focused attention afford learners more flexibility with applying the information to new contexts than
the habit-based learning associated with divided attention.
MEDIA MULTITASKING ON MOBILE PHONES AND CLASSROOM LEARNING
The literature examining mobile phones in classroom settings is not extensive, but the studies that have focused on
this issue have produced highly consistent results—mobile phones pose a significant distraction in the classroom
and negatively affect students’ ability to learn new material. Two studies to date have experimentally manipulated
texting during an actual or mock lecture before quizzing participants over the lecture content (Ellis, Daniels, &
Jauregui, 2010; Gingerich & Lineweaver, 2014). Ellis et al. (2010) randomly assigned 62 undergraduate business
students either to a texting condition in which they generated three texts to their instructor during a lecture or to a
non-texting condition in which they did not use their mobile phones during class. Students who texted scored
significantly lower than those who did not on a post-lecture examination. Similarly, in two experiments, Gingerich
and Lineweaver (2014) asked students to engage in a prescribed text conversation with another student in the room
(text group) or to refrain from using their phones (no-text group) during a lecture. Not only did the no-text group
significantly outperform the text group when answering content-focused questions on a subsequent quiz (79% vs.
60% in Experiment 1 and 83% vs. 73% in Experiment 2), but the no-text group also felt significantly more confident
in their learning. Thus, students seemed to be aware that texting generally interfered with their learning.
Interestingly, though, students in the no-text group were also much more accurate at predicting how well they would
perform on the quiz. This suggests that multitasking during the lecture may have interfered with the metacognitive
processes involved in the self-monitoring of learning for the students who divided their attention between the
lecture and texting.
The magnitude of the interruption posed by texting during class appears to depend on several factors. Rosen, Lim,
Carrier, and Cheever (2011) examined students across four different classrooms who received no text messages, four
text messages, or eight text messages from the researchers during a lecture. Students responded to these text
messages, but they were also free to receive, respond to, or initiate other text messages with outsiders during the
lecture. When examining students’ scores on a test over lecture content, they divided their students into three
groups and found that those who sent or received 0-7 texts outperformed those who sent or received 16 or more
texts; neither of these groups differed from students who sent or received 8-15 messages. These results suggest a
dose-dependent effect of texting on learning. What is more, they found that students who received or sent longer
texts were more impaired on the test relative to those with shorter text messages, as were those who sent or
received texts closer together in time (i.e., longer pauses between texts corresponded with higher test scores).
The content of text messages also affects the magnitude of learning interference associated with texting. Texting
about course topics does not appear to interfere with learning in a classroom setting, whereas texting about
unrelated topics does (Kuznekoff, Munz, & Titsworth, 2015). Kuznekoff et al. (2015) found that students in an
irrelevant text message group scored 10-17% lower on a test over the content of a video they watched while texting
than students in a relevant text message group or a no text group, who performed similarly to each other. Students
in the irrelevant text message group also took poorer notes than the other two groups and were 70% worse at
recalling video content.
Only one study to date has focused on underlying mechanisms that may explain why texting disrupts learning. Wei,
Wang, and Klausner (2012) surveyed 190 students about their self-regulation, sustained attention, cognitive
learning, and texting behavior in the classroom. Using structural equation modeling, they examined the complex
relationships among these factors. Their results indicated that students with better self-regulation skills text less
frequently in class and sustain their attention better during class time. Texting during class was also negatively
correlated with sustained attention and partially mediated the relationship between self-regulation and sustained
attention. Furthermore, classroom texting behavior was related to academic performance and perceived cognitive
learning, but this relationship was dependent on sustained attention. Together these results begin to explain how
texting during class negatively affects academic success. Not surprisingly, texting seems to draw attention away from
classroom material, ultimately making learning it more difficult.
The attention-grabbing effect of mobile phones in classrooms is not limited to texting, however. A simple ring of a
mobile phone can distract a whole classroom of students. In a well-controlled experiment, a mobile phone ringing
during a video made it less likely that students recorded the interrupted information in their notes and remembered
the corresponding information later (End, Worthman, Matthews, & Wetterau, 2010). Thus, even unused mobile
phones can inadvertently affect students’ classroom learning negatively if they remain on and ring during class time.
MEDIA MULTITASKING ON LAPTOPS AND CLASSROOM LEARNING
The evidence that mobile phones can detract from students’ learning in the classroom is strong, but laptop
computers may have an even greater potential to distract students than phones. Some researchers have argued that
the vertical orientation of laptops and the movement and lighting of the displayed text make them inherently
distracting (Bhave, 2002; Meierdiercks, 2005; Wickens & Hollands, 2000). As such, laptops may not only affect the
user, but may also draw the attention of neighboring classmates. Students report that laptop use by others in the
classroom can be disruptive (e.g., Maxwell, 2007), and recent research that has directly investigated the influence
of being able to view the laptop screen of a multitasking “peer” has confirmed this impression. Sana, Weston, and
Cepeda (2013) created a simulated classroom and placed participants in various locations relative to confederates
who multitasked on a laptop during the classroom lecture. They found that scores on a post-lecture comprehension
test that contained both factual and application questions were 17% lower for participants in view of multitasking
peers than for participants who could not see the laptop screens of other students (Sana et al., 2013; Exp. 2).
Importantly, on a post-lecture survey, participants indicated that they were only “somewhat distracted” by the
nearby laptop use and that being in view of a multitasker “barely” hindered their learning of lecture material.
Therefore, despite the detriments in performance apparent in objective test scores, students were not aware of the
extent to which the activities of their classmates had affected their own understanding and retention of lecture
material.
Regardless of whether students recognize the negative influence of their peers utilizing laptops, many claim that
their own laptop use in the classroom is beneficial to their learning. For example, Demb, Erickson, and Hawkins-Wilding
(2004) found that students think laptops support effective study habits and contribute to academic success.
However, the results of several empirical studies investigating the relationships between laptops and academic
outcomes do not support this widely held student belief. Granberg and Witte (2005), for example, found no
difference in overall course grades between class sections that used laptops and those that did not. Thus, in their
study, laptops neither improved nor hindered overall class performance. In a more direct study of the effects of
laptop multitasking on classroom learning, Hembrooke and Gay (2003) found that students who use laptops during
lectures perform more poorly on subsequent quizzes over lecture material than those who do not. Fried (2008)
documented a significant negative correlation between students’ self-reported laptop use and how much attention
they reported paying to lectures, how clear they found the lectures, and how well they felt they understood course
material. Even taking into account factors such as high school rank, ACT score, and class attendance, laptop use
correlated negatively with learning of lecture material in Fried’s study.
Using a more experimental approach, Carter, Greenberg, and Walker (2017) randomly assigned sections of an
introductory economics course at a military academy to one of three conditions. In one condition, students used
laptops and tablets as they would in a typical classroom. In the second, “modified-tablet” condition, students used
tablets (but not laptops), and the tablet had to remain flat and face-up on the desk at all times, allowing the instructor
to ensure that students only used the technology for class-related purposes. The third (control) condition prohibited
students from utilizing laptops or tablets for any purpose during class. The researchers compared scores across the
groups on an online final exam that included multiple-choice, short answer, and essay questions to determine the
effect that the various technology conditions had on students’ academic success. They found a statistically significant
reduction in final exam scores in both of the conditions that allowed students to use technology in the classroom,
regardless of whether the students experienced unrestricted or modified use. Thus, even when students applied
their technology use to classroom-related activities, evidence did not support their impression that laptops aid their
learning.
Several researchers have investigated on-task (e.g., taking notes or browsing the web for course-related information:
McCreary, 2009; Mueller & Oppenheimer, 2014) and off-task (e.g., checking email, surfing the Internet, playing
games: Fried, 2008; Ravizza, Hambrick, & Fenn, 2014) laptop use in the classroom. Not surprisingly, the extent to
which students apply their laptops to class-related versus distracting activities matters, with self-reported off-task
laptop use predicting lower semester grade point averages above and beyond other potentially confounding factors
like motivation, organization, and self-regulation (Gaudreau, Miranda, & Gareau, 2014).
One explanation for the impaired comprehension of and memory for lecture material that accompanies laptop use
is that insufficient allocation of attention to the to-be-learned information results in inadequate encoding (e.g.,
Broadbent, 1958; Craik & Lockhart, 1972; Sana et al., 2013, Exp. 1). Thus, it is not surprising that off-task laptop use
may be particularly detrimental to student success. In Kraushaar and Novak’s (2010) spyware study, students
typically opened twice as many browser windows for “distractive” activities as for “productive” activities. Students
whose ratio of distractive to productive browser windows favored distraction scored lower on homework, quizzes,
projects, and exams than their “productive” peers, resulting in worse overall course grades compared to the students
whose ratio favored productive activities. Furthermore, other researchers have shown that the length of web
browsing sessions during class correlates negatively with overall course performance (Grace-Martin & Gay, 2001)
and that students with higher intellectual ability increase their self-reported course-unrelated Internet use across
time despite this use being associated with lower test grades (Ravizza et al., 2014). Interestingly, when Hembrooke
and Gay (2003) classified their students into “browsers” (those who spent the majority of their online time on
course-unrelated content) versus “seekers” (those who spent the majority of their online time focused on course-related
content), seekers actually spent more time on class-unrelated pages than browsers. That is, when seekers went off
task, they spent a large amount of time with the unrelated content, leading these authors to suggest that “browsing
style” may be a more important factor in determining the effect of laptop use on academic performance than on-task
versus off-task laptop behaviors.
MANAGING STUDENTS’ CLASSROOM MEDIA MULTITASKING
Ample evidence points to the detrimental effects media multitasking, and particularly media multitasking that is
unrelated to course content, exerts in classroom settings. Yet, both students and faculty may promote the
integration and use of technology in classrooms. Mobile phones can support applications such as Poll Everywhere
(https://www.polleverywhere.com/) or other types of classroom-response-system platforms. Advocates of in-class
laptop use argue that notetaking may be more efficient on laptops and that students may find it easier to convert
typed notes than hand-written notes into outlines (McCreary, 2009). A related argument is that taking notes on a
laptop tends to be faster and neater than handwriting those same notes (McGaugh, 2006). Empirical investigations
comparing laptop-based notetaking to taking notes by hand, however, frequently suggest that typing notes on a
laptop may undermine student learning (Mueller & Oppenheimer, 2014). Using a laptop to take notes may
encourage students to simply transcribe a direct dictation of the instructor’s lecture rather than summarizing the
main points and consolidating the information in their own words (Chen, 2006; Maxwell, 2007; McGaugh, 2006;
Yamamoto, 2007; but see also Murray, 2011). Further, proponents of the haptics of writing claim that typed letters
and symbols are less recognizable than those that are handwritten (Longcamp, Boucard, Gilhodes, & Velay, 2006),
and many researchers argue that the physical formation of letters and the comparatively laborious process of
handwriting facilitate the development of mental representations involved in learning new material (e.g., Mangen
& Velay, 2010). Thus, the costs to active processing and learning that accompany notetaking on a laptop may
overshadow the practical efficiency gained when using this approach.
Another reason for allowing or integrating technology in the classroom centers on student satisfaction. Driver (2002)
found that students were more satisfied with a course and with group projects when instructors assigned web-based
activities on laptops. In a 2015 survey of 675 American college students from 26 states, 89% of respondents did not
think that instructors should ban technology in the classroom (McCoy, 2016). Although greater use of technology in
the classroom is not always associated with increased student satisfaction (Wurst, Smarkola, & Gaffney, 2008),
Driver (2002) is not alone in finding that students enjoy using technology for academic purposes; students believe
that technology enhances their ability to pay attention and increases their engagement with class topics, particularly
when instructors use technology strategically (Demb et al., 2004; Zhu, Kaplan, Dershimer, & Bergom, 2011).
Students’ self-perceived engagement may be genuine, and this sense of student engagement and active learning in
classrooms may be more responsible for the changes in student satisfaction that accompany the utilization of laptops
than the laptop use itself. Nonetheless, the increased engagement associated with laptop-based assignments and
activities may be possible to achieve without the utilization of laptops or other forms of technological media and,
therefore, without introducing opportunities for distraction (Price, 2011).
SUGGESTIONS FOR INSTRUCTORS
After reviewing the literature on media multitasking and learning, the question arises: “What should instructors do?
Should instructors implement technology policies in the classroom or should students be responsible for making
decisions about their learning?” Because wireless classrooms are a relatively new phenomenon, the research
literature addressing these issues is sparse. In the 2015 McCoy (2016) survey, 71% of the large cohort of college
students sampled across the United States indicated that “most” of their instructors have policies about the use of
technology in the classroom. Just over half of the surveyed students (52.8%) viewed policies limiting the use of digital
devices as helpful, but, again, 89% also said that instructors should not ban technology. At the same time, more than
92% of the sample admitted that, on a typical day, they use technology for non-classroom-related activities such as
texting, making phone calls, emailing, surfing the Internet, Tweeting or other social networking activities at least
once.
As noted, students tend to underestimate their off-task technology use in classrooms (Kraushaar & Novak, 2010)
and they view their technology use as less disruptive to their own and to others’ learning than objective measures
indicate (Demb et al., 2004; Sana et al., 2013). Consequently, there may be merit in instructors’ being thoughtful
and deliberate about technology in their classrooms, possibly even limiting or prohibiting students’ technology use
during class time. Approaches to technology can take several forms. Several researchers and educators have offered
advice for how to integrate technology wisely (Levine, 2002; Willingham, 2010; Zhu et al., 2011), proposing ways to
increase faculty-student interactions and in-class participation using laptops (e.g., Fitch, 2004; Partee, 1996;
Stephens, 2005). Price (2011) advocated establishing highly engaging classroom environments that reduce the
likelihood that students will turn to technology and fall prey to its inherent distractions. Some instructors with
concerns about the disruptive nature of technology have created laptop- or technology-free zones (Aguilar-Roca,
Williams, & O’Dowd, 2012; McCreary, 2009) or have implemented explicit policies or banned its use altogether
(Maxwell, 2007; Zhu et al., 2011).
Perhaps because the direct academic applications of mobile phones are less extensive and less obvious than those of
laptops (i.e., students may be more likely to use phones than laptops for activities not related to coursework), some
empirical research and commentary have focused on the effects of implementing mobile phone policies or systems
for phone use designed to help students make better decisions in the classroom (Berry & Westfall, 2015; Burkholder,
2017; Katz & Lambert, 2016; Lancaster & Goodboy, 2015). Katz and Lambert (2016) investigated their
implementation of an incentive system that offers students one point of extra credit for each class period they leave
their cell phones on the instructor’s desk. They found that, with this extra credit system in place, the 104 students
who participated in the study surrendered their phones, on average, 18 out of 20-23 class meetings. Additionally,
they documented a significant correlation between how often students left their phone on the instructor’s desk and
the students’ grades on five exams across the course of the semester. Burkholder (2017) implemented this same
system in his courses and asked students their impressions of the extra credit policy. He found that “nearly everyone”
(88% in one class and 99% in another) chose to abandon their phones during class even though the amount of extra
credit he offered amounted to less than 2% of the overall course grade. He also reported that, although his students
did not believe that phones are distracting, 69% indicated that the extra credit system had a positive effect on their
learning, and 80% said they would like other professors to enact a similar approach.
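The relationship Katz and Lambert (2016) documented can be illustrated with a small sketch. The data, helper function, and variable names below are hypothetical and serve only to demonstrate how a phone-surrender count might be correlated with exam performance; they are not the authors' data or analysis code.

```python
# Hypothetical illustration of correlating how often each student
# surrendered a phone with that student's exam performance.
# All numbers below are invented for demonstration only.

def pearson_r(xs, ys):
    """Plain-Python Pearson correlation coefficient."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# days each (hypothetical) student left a phone on the desk, out of 20
days_surrendered = [20, 18, 15, 19, 10, 5, 17, 12]
# that student's mean grade across five exams
exam_average = [92, 88, 80, 90, 74, 65, 85, 78]

r = pearson_r(days_surrendered, exam_average)
```

With real data one would normally reach for a statistics library rather than a hand-rolled coefficient, but the sketch makes the underlying computation visible.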
One of the current authors (TL) has modified this approach in her classes. She introduced an optional “no technology
club.” Students join the club at the beginning of the semester, signing an agreement that says they will turn off their
mobile phones at the start of each class and leave their laptops stored unless a specific class activity calls for their
use (e.g., discussing a reading posted online or working on a collaborative assignment in the room). Those who join
earn five points of extra credit at the end of the semester if they remain an active member. The instructor explains
that she may ask students who opt not to join the club to sit in a laptop zone in the classroom. This has actually
never been necessary because every student in every class to date has opted to join the "no technology club" and has maintained membership across the entire length of the course. Monitoring participation is extremely simple, given that students' signed contracts earn them the five extra credit points, and this approach eliminates
distractions due to both mobile phones and laptops. Although students have occasionally forgotten and have
accessed their phones during class, one reminder has always been sufficient to eliminate the behavior for the
remainder of the semester in all students enrolled. Additionally, in contrast to the concerns former students
sometimes voiced about the mandatory technology ban the instructor implemented prior to the club, she has
received no complaints about technology policies since introducing this option to students.
In other courses, we have simply recommended or asked students to refrain from technology use in the classroom.
In these instances, we frequently offer a brief in-class explanation of the literature that documents how distracting
technology can be in classroom settings followed by posting several readings (some primary and some secondary
sources) on our classroom management system about texting during class and the value of taking notes by hand
versus on a laptop. We have never monitored how often students read these articles or whether that influences
their technology-related classroom decision-making. As a result, we do not know whether we are offering students
too many or too few resources to influence their decisions about technology. However, Lancaster and Goodboy
(2015) conducted an experimental study of students’ attitudes toward classroom cell phone policies and found that
providing too many arguments when introducing those policies resulted in more negative student attitudes towards
them. Additionally, student attitudes towards the policies predicted their compliance with the rules; those who
viewed the policies more negatively were more likely to use their phones in classes that prohibited them. This study
raises interesting questions about the best way to present technology recommendations or policies to students,
questions that warrant further research given that this is the only study to investigate the issue to date. It certainly
suggests that it is important for instructors to not only think carefully about their policies but also to be wise in how
they present those policies to students.
Beyond providing research evidence and information to students to support classroom policies, we have also used
demonstrations to show students how disruptive dividing their attention can be. (See Gingerich & Lineweaver, 2011
or Lineweaver, Gingerich Hall, Hilycord, & Vitelli, accepted pending revisions, for a more detailed description of how
we attempt to make students aware of the decrements in attention that accompany attempting two tasks
simultaneously.) Reed and Pusateri (2007) introduced an example that consistently works well for giving students
direct experience with the perils of multitasking. We divide our class into two groups sitting on opposite sides of the
classroom. Group 1 attempts two tasks simultaneously, one visual and one auditory. Group 2 completes only the
auditory task and watches Group 1. Group 1 is consistently both slower and more error-prone on the auditory task
(the task that the two groups share in common). We then reverse the group assignments so that everyone not only
experiences the challenges associated with multitasking but also has a chance to observe the multitasking effect on
others. This demonstration takes only a few minutes and quickly shows students that they are not as efficient at
completing two tasks simultaneously as they previously believed.
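The demonstration's outcome can be summarized in a short sketch; all numbers below are invented for illustration and do not come from Reed and Pusateri (2007).

```python
# Hypothetical summary of the dual-task demonstration: the same auditory
# task attempted alone versus alongside a visual task. Values (seconds to
# finish, errors made) are invented for illustration only.

single_task = {"seconds": [31, 29, 33, 30], "errors": [1, 0, 2, 1]}
dual_task = {"seconds": [44, 47, 41, 45], "errors": [5, 4, 6, 5]}

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# positive values indicate a multitasking cost on the shared auditory task
slowdown = mean(dual_task["seconds"]) - mean(single_task["seconds"])
extra_errors = mean(dual_task["errors"]) - mean(single_task["errors"])
```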
CONCLUSION
As a relatively new issue, technological media use and the associated "multitasking" (or, more aptly, task switching) that accompanies it in college classrooms constitute an under-researched, yet highly relevant, topic for both new and
experienced instructors. As we outlined above, some limited research addresses best practices, but much work remains to be done to determine them, as applied to: 1) integrating technology into classrooms without introducing distractions; 2) building engaging classroom environments that do not involve technology; 3) implementing well-supported and well-reasoned technology policies; 4) encouraging students to accept and follow these policies; and 5) helping students reach their own good decisions about technology use. While awaiting the scientific evidence to
help guide their decisions about technology and media multitasking, we recommend that instructors think flexibly
about their goals, tailoring their approach to each particular course, to each specific group of students, or to
emerging trends in technology use across time, as all of these factors may call for unique ideas and solutions aimed
towards building the most productive and effective learning environment for our collegiate scholars.
REFERENCES
Aguilar-Roca, N., Williams, A. E., & O'Dowd, D. K. (2012). The impact of laptop-free zones on student performance and attitudes in large lectures. Computers & Education, 59, 1300-1308. doi:10.1016/j.compedu.2012.05.002
Bailey, B. P., & Konstan, J. A. (2006). On the need for attention-aware systems: Measuring effects of interruption on
task performance, error rate, and affective state. Computers in Human Behavior, 22, 685-708.
doi:10.1016/j.chb.2005.12.009
Berry, M. & Westfall, A. (2015). Dial D for distraction: The making and breaking of cell phone policies in the college
classroom. College Teaching, 63, 62-71.
Bhave, M. P. (2002). Classrooms with Wi-Fi. T H E Journal, 30, 17.
Retrieved from https://thejournal.com/articles/2002/11/01/classrooms-with-wifi.aspx.
Broadbent, D. (1958). Perception and Communication. London: Pergamon Press.
Burkholder, P. (2017). Helping students make the right call on cell phones. Faculty Focus: Higher Ed Teaching Strategies from Magna Publications, 1-4. Retrieved from https://www.facultyfocus.com/articles/effective-classroom-management/helping-students-make-right-call-cell-phones/
Carrier, L. M., Cheever, N. A., Rosen, L. D., Benitez, S., & Chang, J. (2009). Multitasking across generations:
Multitasking choices and difficulty ratings in three generations of Americans. Computers in Human
Behavior, 25, 483-489. doi:10.1016/j.chb.2008.10.012
Carter, S. P., Greenberg, K., & Walker, M. S. (2017). The impact of computer usage on academic performance:
Evidence from a randomized trial at the United States Military Academy. Economics of Education Review,
56, 118-132. doi:10.1016/j.econedurev.2016.12.005
Chen, E. (2006). Laptops nixed in some law classes: Profs split on whether the devices are ban or boon for learning.
Daily Pennsylvanian. http://www.thedp.com/article/2006/04/laptops_nixed_in_some_law_classes
Craik, F.I.M., & Lockhart, R.S. (1972). Levels of processing: A framework for memory research. Journal of Verbal
Learning and Verbal Behavior, 11, 671-684.
Demb, A., Erickson, D., & Hawkins-Wilding, S. (2004). The laptop alternative: Student reactions and strategic
implications. Computers and Education, 43, 383-401.
Driver, M. (2002). Exploring student perceptions of group interaction and class satisfaction in the web-enhanced
classroom. Internet & Higher Education, 5, 35.
Ellis, Y., Daniels, B., & Jauregui, A. (2010). The effect of multitasking on the grade performance of business students.
Research in Higher Education, 8, 1-10.
End, C. M., Worthman, S., Matthews, M. B., & Wetterau, K. (2010). Costly cell phones: The impact of cell phone rings
on academic performance. Teaching of Psychology, 37, 55-57. doi:10.1080/00986280903425912.
Fitch, J. L. (2004). Student feedback in the college classroom: A technology solution. Educational Technology
Research and Development, 52, 71-81.
Foerde, K., Knowlton, B. J., & Poldrack, R. A. (2006). Modulation of competing memory systems by distraction.
Proceedings of the National Academy of Sciences, 103, 11778-11783. doi:10.1073/pnas.0602659103.
Fried, C. B. (2008). In-class laptop use and its effects on student learning. Computers & Education, 50, 906-914.
doi:10.1016/j.compedu.2006.09.006
Gaudreau, P., Miranda, D., & Gareau, A. (2014). Canadian university students in wireless classrooms: What do they
do on their laptops and does it really matter? Computers in Education, 70, 245-255.
doi:10.1016/j.compedu.2013.08.019.
Gingerich, A. C., & Lineweaver, T. T. (2011). Study smarter, not harder: Using empirical evidence to teach students how to learn. Proceedings from the Lilly Conference on College and University Teaching and Learning, 23-25.
Gingerich, A. C., & Lineweaver, T. T. (2014). OMG! Texting in class = U Fail :( Empirical evidence that text messaging
during class disrupts comprehension. Teaching of Psychology, 41, 44-51.
Grace-Martin, M., & Gay, G. (2001). Web browsing, mobile computing and academic performance. Educational
Technology & Society, 4, 95-107.
Granberg, E., & Witte, J. (2005). Teaching with laptops for the first time: Lessons from a social science classroom.
New Directions for Teaching & Learning, 2005, 51-59.
Hembrooke, H., & Gay, G. (2003). The laptop and the lecture: The effects of multitasking in learning environments.
Journal of Computing in Higher Education, 15, 46–64. doi:10.1007/BF02940852.
Junco, R. (2012). In-class multitasking and academic performance. Computers in Human Behavior, 28, 2236-2243.
doi:10.1016/j.chb.2012.06.031
Katz, L., & Lambert, W. (2016). A happy and engaged class without cell phones? It’s easier than you think. Teaching
of Psychology, 43, 340-345. doi:10.1177/0098628316662767
Kraushaar, J. M., & Novak, D. C. (2010). Examining the affects [sic] of student multitasking with laptops during the
lecture. Journal of Information Systems Education, 21, 241-251.
Kuznekoff, J. H., Munz, S., & Titsworth, S. (2015). Mobile phones in the classroom: Examining the effects of texting,
Twitter and message content on student learning. Communication Education, 64, 344-365.
doi:10.1080/03634523.2015.1038727.
Lancaster, A. L., & Goodboy, A. K. (2015). An experimental examination of students’ attitudes toward classroom cell
phone policies. Communication Research Reports, 32, 107-111. doi:10.1080/08824096.2014.989977
Lee, J., Lin, L., & Robertson, T. (2012). The impact of media multitasking on learning. Learning, Media and Technology,
37, 94-104. doi:10.1080/17439884.2010.537664.
Levine, L. (2002). Using technology to enhance the classroom environment. T H E Journal, 29, 16-18. Retrieved from
https://thejournal.com/Articles/2002/01/01/Using-Technology-to-Enhance-the-Classroom-Environment.aspx
Lineweaver, T.T., Gingerich Hall, A., Hilycord, D. & Vitelli, S. (Accepted Pending Revisions). Introducing and evaluating
a “Study Smarter, Not Harder” study tips presentation offered to incoming students at a four-year
university. Under consideration for publication in Journal of the Scholarship of Teaching and Learning.
Longcamp, M., Boucard, C., Gilhodes, J.-C., & Velay, J.-L. (2006). Remembering the orientation of newly learned
characters depends on the associated writing knowledge: A comparison between handwriting and typing.
Human Movement Science, 25, 646-656.
Magen, H. (2017). The relations between executive functions, media multitasking and polychronicity. Computers in
Human Behavior, 67, 1-9. doi:10.1016/j.chb.2016.10.011.
Mangen, A., & Velay, J.-L. (2010). Digitizing literacy: Reflections on the haptics of writing. In M. H. Zadeh (Ed.), Advances in Haptics. Retrieved from http://www.intechopen.com/books/advances-in-haptics/digitizing-literacy-reflections-on-the-haptics-of-writing
Maxwell, N. G. (2007). From Facebook to Folsom Prison Blues: How banning laptops in the classroom made me a
better law school teacher. Richmond Journal of Law & Technology, 14, 1-43.
McCoy, B. R. (2016). Digital distractions in the classroom phase II: Student classroom use of digital devices for non-class related purposes. Journal of Media Education, 7, 5-32.
McCreary, J. R. (2009). The laptop-free zone. Valparaiso University Law Review, 43, 989-1044. Available at:
http://scholar.valpo.edu/vulr/vol43/iss3/2
McGaugh, T. L. (2006). Laptops in the classroom: Pondering the possibilities. Perspectives: Teaching Legal Research
and Writing, 14, 163-165.
Meierdiercks, K. (2005). The dark side of the laptop university. Journal of Information Ethics, 14, 9-11.
Murray, K. E. (2011). Let them use laptops: Debunking the assumptions underlying the debate over laptops in the
classroom. Oklahoma City University Law Review, 36, 185-229.
Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over
laptop note taking. Psychological Science, 25, 1159-1168. doi:10.1177/0956797614524581
Ophir, E., Nass, C., & Wagner, D. (2009). Cognitive control in media multitaskers. Proceedings of the National
Academy of Sciences, 106, 15583–15587. doi:10.1073/pnas.0903620106.
Partee, M. H. (1996). Using e-mail, Web sites & newsgroups to enhance traditional classroom instruction. T H E
Journal, 23, 79.
Price, C. (2011). Incivility, inattention, and multitasking! Oh my! Creating effective learning environments for
millennial learners. In J. Holmes, S.C. Baker, & J.R. Stowell (Eds.), Essays from e-xcellence in teaching (Vol.
10, pp. 10-14). Retrieved from http://teachpsych.org/ebooks/eit2010/index.php
Ravizza, S. M., Hambrick, D. Z., & Fenn, K. M. (2014). Non-academic Internet use in the classroom is negatively related
to classroom learning regardless of intellectual ability. Computers & Education, 78, 109-114.
doi:10.1016/j.compedu.2014.05.007
Reed, S. K. & Pusateri, T. P. (2007). Reed’s Cognition: Theory and Applications, 7E: Electronic Transparencies and
Interaction Demonstrations. Belmont, CA: Wadsworth Thompson.
Rogers, R. D., & Monsell, S. (1995). Costs of a predictable switch between simple cognitive tasks. Journal of
Experimental Psychology: General, 124, 207-231.
Rosen, C. (2008). The myth of multitasking. The New Atlantis, 20, 105-110.
Rosen, L. D., Lim, A. F., Carrier, M., & Cheever, N. A. (2011). An empirical examination of the educational impact of
text message-induced task switching in the classroom: Educational implications and strategies to enhance
learning. Psicologia Educativa, 17, 163-177. doi:10.5093/ed2011v17n2a4.
Rubinstein, J. S., Meyer, D. E., & Evans, J. E. (2001). Executive control of cognitive processes in task switching. Journal of Experimental Psychology: Human Perception and Performance, 27, 763-797. doi:10.1037/0096-1523.27.4.763
Sana, F., Weston, T., & Cepeda, N. J. (2013). Laptop multitasking hinders classroom learning for both users and
nearby peers. Computers & Education, 62, 24-31. doi:10.1016/j.compedu.2012.10.003
Sovern, J. (2013). Law student laptop use during class for non-class purposes: Temptation v. incentives. University of
Louisville Law Review, 51, 483-534.
Stephens, B. R. (2005). Laptops in psychology: Conducting flexible in-class research and writing laboratories. New
Directions for Teaching and Learning, 101, 15-26.
Ward, J. (2010). The student’s guide to cognitive neuroscience (2nd ed.). New York: Psychology Press.
Wei, F. F., Wang, Y. K., & Klausner, M. (2012). Rethinking college students' self-regulation and sustained attention: Does text messaging during class influence cognitive learning? Communication Education, 1-20. doi:10.1080/03634523.2012.672755.
Wickens, C. D., & Hollands, J. G. (2000). Engineering psychology and human performance (3rd ed.). New Jersey:
Prentice Hall.
Willingham, D. T. (2010). Have technology and multitasking rewired how students learn? American Educator, 23-28,
42.
Wurst, C., Smarkola, C., & Gaffney, M. A. (2008). Ubiquitous laptop usage in higher education: Effects on student
achievement, student satisfaction, and constructivist measures in honors and traditional classrooms.
Computers & Education, 51, 1766-1783. doi:10.1016/j.compedu.2008.05.006
Yamamoto, K. (2007). Banning laptops in the classroom: Is it worth the hassles? Journal of Legal Education, 57, 477-520. Retrieved from http://www.jstor.org/stable/42894041
Zhu, E., Kaplan, M., Dershimer, R. C., & Bergom, I. (2011). Use of laptops in the classroom: Research and best
practices. CRLT Occasional Papers, 30, 1-5.
CHAPTER 6
STUDENT RESPONSE SYSTEMS: A MINDFUL APPROACH
DARREN IWAMOTO
CHAMINADE UNIVERSITY OF HONOLULU AND
JACE HARGIS
UNIVERSITY OF CALIFORNIA, SAN DIEGO
INTRODUCTION
We have come to the realization that the more we learn, the more complex the world becomes. A perfect example
of this is our careers as educators. For the first six years of teaching in higher education, the first author mimicked
the professors that he admired the most. His lectures contained both breadth and depth and he used real-world
examples and humor to engage the students. His student evaluations of teaching were fine, but something did not
feel right. He tried enthusiastically to feed his students every ounce of knowledge he had, yet the non-verbal feedback he regularly received from them was glassy eyes and dazed looks. He was also unimpressed with their overall academic performance. "They should be doing better," he would commonly say to himself.
Finally, he asked the crucial question: Is there another way to teach that will truly engage students?
IDENTIFYING THE CHALLENGES
Two distinctly different challenges began to present themselves during the first author's search. The first challenge focused on student perception. The second addressed the concept of mind-wandering. The irony, and perhaps advantage, is that both are very much related and therefore may lead to complementary instructional approaches.
PERCEPTION
Iwamoto, Hargis, and Vuong (2016) found that students’ mindsets were an essential factor in their abilities to
perform at a high academic level. Students who adopted an active and effortful learning approach were found to
have higher levels of classroom engagement and higher examination scores than those who did not. This type of
mindset is typically a learned behavior, and there are scaffolding strategies that can be employed to capitalize on
this phenomenon. The challenge is that frequently first year college students have formed their personal theories
about learning before they graduate from secondary education systems (Simpson, Stahl, & Francis, 2004). By the
time students enter a post-secondary classroom, they most likely arrive with learning habits that have been conditioned, reinforced, and influenced since they first began formal education. Often these habits are lower-level, action/reaction processes that once produced success, as measured by grades, but may now be insufficient and unsustainable for the complexities of advanced concepts. Therefore, there is commonly a misalignment between the formal, linear method of processing information and the higher-level critical thinking and questioning that many post-secondary programs require.
Increasingly, research has addressed the critical nature of twenty-first (21st) century skills (Trilling & Fadel, 2009). Frequently, these skills comprise four major pillars: communication, collaboration, critical thinking, and creativity.
How 21st century students learn and perform academically is directly related to their personal belief systems, which serve as the filter through which they interpret their academic responsibilities in and out of the
classroom (Thomas & Rohwer, 1986; Nist & Simpson, 2000; Simpson & Nist, 2002). Iwamoto, Hargis, Bordner, and
Chandler (2017) discovered that students in higher education have high self-confidence, which reduces their level
of anxiety, but this level of self-confidence did not motivate them to academically self-regulate. Zimmerman (1989)
defined self-regulated learning strategies as actions and processes directed at acquiring information or skill that
involve agency, purpose, and perceptions by learners. Students were observed exhibiting maladaptive and
counterproductive behaviors like procrastination and disengagement in and out of the classroom (Iwamoto et al.,
2017). Two studies in particular obtained data that pointed toward the idea that 21st century college students
possess an external locus of control and although lecture-based pedagogy/andragogy is not preferred, it is what
students are familiar with and expect (Iwamoto et al., 2016; Iwamoto et al., 2017). This behavior led to the next question: How does one blend student- and teacher-focused pedagogies into a single instructional approach while at the same time attending to research-based effective practices?
MIND WANDERING IN THE PRESENCE OF STRESS, CONFUSION AND FRUSTRATION
The second challenge, mind wandering, could be classified under the broader term of distractions. Distractions are an ongoing challenge, especially among college students, given the increased number of physical and mental activities available to them. As we progress through the 21st century, distractions remain as prominent a problem as in previous generations; however, they are changing in form, frequency, and perhaps
intensity. For example, mobile devices have led professors to create technology policies in their academic syllabi,
resulting in some banning these devices during class sessions (de Vise, 2010). This was then followed by a push to reverse such bans (Lang, 2016). The reasons for banning devices vary, but the
underlying belief is that attempting to focus on more than one task at a time has a detrimental effect on other tasks,
specifically, learning (Fried, 2008; Sana, Weston, & Cepeda, 2013). Most researchers agree that the distraction is seldom the device itself; most often, it is the pedagogy that drives students toward or away from distractions. An
interesting twist is that distractions do not have to involve the external environment. Distractions can also happen
internally due to how our brain has been evolutionarily wired. This concept of distraction is known as mind
wandering.
Through fMRI (Functional Magnetic Resonance Imaging) research, it was discovered that there is an area of the brain
that is active when we are in a resting state. That area is known as the default mode network (DMN). The DMN
consists of “areas in the dorsal and ventral medial prefrontal cortices, medial and lateral parietal cortex, and parts
of the medial and lateral temporal cortices” (Sheline et al., 2009, p. 1942). When we are in a resting state, the DMN
is highly active and creates self-generated cognitive narratives about our past and plans for our future (Andrews-Hanna, 2012). These narratives are self-referential in nature and primarily focus on survival-salient perceptions of oneself and of the world (Sheline et al., 2009). This is because "when an event is flagged as negative, the hippocampus
makes sure it is stored carefully for future reference; one’s brain is like Velcro for negative experiences” (Hanson &
Mendius, 2009, p. 41). This served an evolutionary purpose for our ancestors. Running simulations (ruminating
thoughts) of past events promoted survival, as it strengthened the learning of successful behaviors by repeating
their neural firing pattern. Simulating future events also promoted survival by enabling our ancestors to compare
possible outcomes and to mentally prepare for immediate action (Hanson & Mendius, 2009). One’s DMN is an
evolutionary process developed over time to promote survival. The challenge is that one’s perceived threats in the
21st century are vastly different than the threats experienced by our distant ancestors. Yet, our brain functions in
the same way that it did centuries ago.
One’s DMN continues to be active for most of one’s waking moments and takes minimal effort to operate. We
unconsciously allow our mind to wander, which leads to thoughts about "what is not going on around you,
contemplating events that happened in the past, might happen in the future, or will never happen at all”
(Killingsworth & Gilbert, 2010, p. 932). A failure to effortfully focus will allow the DMN to activate and that may cause
internal emotional distractions that can interfere with a student’s academic performance (e.g., learning).
Killingsworth and Gilbert (2010) found that people were less happy when their minds were wandering compared to
states when wandering was minimized. This unhappiness and the ruminating thoughts of our past and future tend to increase one's level of stress. Although some degree of stress has been shown to have a positive relationship with academic performance, too much stress, confusion, and/or frustration leads to a decline in performance (Baker et al.,
2013). This phenomenon is known as the Yerkes-Dodson Law (Yerkes & Dodson, 1908). Through the Yerkes-Dodson
Law it was discovered that there is an optimal zone for learning: stress builds to the point where one becomes motivated and, to an extent, feels threatened by the stress. This focuses the mind due to the increase in adrenaline and cortisol in one's brain (Cozolino, 2016). The challenge comes when stress becomes too high and the increase in adrenaline and cortisol activates one's fight or flight system. At that point, one becomes impulsive and is
focused on removing the threat versus persevering through it (Siegel, 2010; Cozolino, 2016). The stress from
uncontrolled self-reference, as well as a student's context, can move the student out of the zone of optimal learning and into an overwhelmed state where learning becomes extremely challenging. So, it seems that the driving question to address the challenge of mind wandering would be: How do we keep our students within the zone of optimal learning?
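The inverted-U relationship described by the Yerkes-Dodson Law can be sketched numerically. The Gaussian shape and parameter values below are illustrative assumptions, not part of the original 1908 formulation, which did not specify a functional form:

```python
import math

# Hypothetical inverted-U performance curve over arousal/stress in [0, 1].
# The Gaussian form and the optimum/width values are modeling assumptions
# chosen only to illustrate the zone of optimal learning.

def performance(arousal, optimum=0.5, width=0.2):
    """Performance peaks at the optimum arousal level and falls off on
    either side (too little stress or too much)."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

low, optimal, high = performance(0.1), performance(0.5), performance(0.9)
```

The sketch captures the qualitative claim in the text: performance is highest in a middle zone of arousal and declines when stress is either absent or overwhelming.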
MINDFULNESS
Mindfulness is the practice of focusing your attention on the present moment and accepting the here and now
without judgment. The concept is a form of moment-to-moment awareness (Davis & Hayes, 2011). Mindfulness is
rooted in Buddhism, but its practice of nonjudgmental moment-to-moment awareness can be found in most
religions through some form of prayer or meditation technique (Davis & Hayes, 2011). One of the many benefits of
mindfulness is that by focusing on the here and now, many people report not dwelling on past regrets or worrying about their future. They are also better able to connect with people and are less preoccupied with their own success and self-esteem (HelpGuide, n.d.). Neuroscience explains how a pedagogical approach that includes mindfulness can
positively influence student perception as well as help maintain students' presence in the zone of optimal learning.
The DMN becomes active when our mind is in a restful state. In order to suppress and reduce the activation of the
DMN, one needs to focus on a direct experience (the here and now). By doing so, two major parts of the brain are
activated. The parts are the insula, which is the region of the brain that controls bodily sensations; and the anterior
cingulate cortex, which controls the switching of attention (Farb et al., 2007). With these two parts of our brain
activated, one can experience sensations in real time (the here and now). This mindful practice focuses our attention
on our present senses and not our past, future, ourselves, or others. Experiencing the world directly enables more
sensory information to be processed and subsequently perceived. This also allows a person to be psychologically
flexible when responding to the world because one is not imprisoned by the narratives of one's DMN (Farb et al.,
2007). Only when we are present in the here and now can we take action and learn deeply and effectively (Cozolino,
2016). Perhaps one way to guide students' attention to the present, capitalizing on their information processing (Atkinson & Shiffrin, 1971), is to provide engaging opportunities with appropriate ways to give feedback and to initiate discussion and reflection.
STUDENT RESPONSE SYSTEMS
How, therefore, does an instructor create a student-centered learning opportunity where the student experiences
and practices mindful learning? One approach is by incorporating student response systems (SRS) as one component
of creating an active, engaging learning experience.
An SRS is an umbrella term used to classify a wide variety of methods that allow students to share their ideas. Student
response systems can also be known as audience response systems, classroom response systems, personal response
systems, or a subset of hardware (e.g., small remote devices called clickers). SRS can be deployed with or without technology; synchronously or asynchronously; and anonymously or self-identified. SRS examples, from simplest to
more complex and enhanced use of educational technology could include:
• Raising Hands/Fingers, where students hold up fingers or hands representing their response; if anonymity
is desired, students can be asked to close their eyes;
• iClicker (https://www.iclicker.com/), where students respond to questions using an iClicker device;
• Four Color Quadrant (https://uminntilt.files.wordpress.com/2014/06/colored-abc_card.pdf), where students
respond to questions by holding up a piece of paper with four color quadrants representing their responses;
• Plickers (http://www.plickers.com), where students hold pieces of paper containing QR codes that the
instructor scans remotely using a mobile device;
• Polleverywhere (http://www.polleverywhere.com), free for fewer than 40 responses;
• Answer Garden (https://answergarden.ch), for larger class enrollments;
• Twitter (www.twitter.com), an open-ended micro-blog of 280 characters;
• Goformative (https://goformative.com), real-time monitoring of student responses, which can include
graphical representations;
• Slido (https://www.sli.do), which allows students to ask questions;
• Kahoot® (https://getkahoot.com), a gamified SRS in which points are awarded based on the time it takes
to produce a correct response; and
• Google Slides, which allows students to post questions to each slide, viewable only by the instructor.
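The time-based scoring used by gamified SRSs such as Kahoot® can be illustrated with a short sketch. The formula below is a hypothetical illustration of the general idea (faster correct answers earn more points), not Kahoot®’s published algorithm; the function name, point values, and decay rule are all assumptions:

```python
def score_response(correct: bool, response_time: float,
                   time_limit: float, max_points: int = 1000) -> int:
    """Award points for a correct answer, scaled by how quickly it arrived.

    Hypothetical rule: an instant correct answer earns max_points; points
    decay linearly to half of max_points as response_time approaches
    time_limit. Incorrect or late answers earn nothing.
    """
    if not correct or response_time > time_limit:
        return 0
    speed_bonus = 1.0 - (response_time / time_limit) / 2.0  # 1.0 down to 0.5
    return round(max_points * speed_bonus)

print(score_response(True, 2.0, 20.0))   # fast correct answer: high score
print(score_response(True, 18.0, 20.0))  # slow correct answer: lower score
print(score_response(False, 2.0, 20.0))  # incorrect answer: 0
```

The immediate, visible reward for speed is what creates the game-like urgency that keeps students focused on the question at hand.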
There are now many free and low-threshold ways to offer an SRS using technology (Yee & Hargis, 2011; Galal et al.,
2015; Iwamoto et al., 2017). The power of these approaches is multi-directional: instructors can view the responses
as formative assessments and make real-time remediation decisions, while students can see how well they
understand the concepts relative to their peers. Musselman (2012) reiterated that the main idea behind the
development of SRS is to create a conversation between students and their instructors by providing a clear, efficient,
and meaningful way of communicating. SRS are designed to transform teacher-focused pedagogy into an interactive,
formative assessment tool that helps students engage with and process information and allows instructors to assess,
measure, and evaluate the learning process. In addition to these pedagogical advantages, SRS meet the needs of
21st century students, especially in the areas of communication, collaboration, and critical thinking.
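The multi-directional feedback loop behind any SRS can be sketched in a few lines: the instructor sees an aggregate tally for remediation decisions, while each student can compare their answer against the class distribution. The responses and question below are hypothetical data for illustration:

```python
from collections import Counter

# Hypothetical anonymous responses to one multiple-choice question (A-D)
responses = ["A", "C", "C", "B", "C", "D", "C", "A"]
correct_answer = "C"

tally = Counter(responses)  # the instructor's aggregate view
percent_correct = 100 * tally[correct_answer] / len(responses)

# The distribution and percent correct form the basis of a real-time
# remediation decision: reteach now, or move on.
print(dict(tally))
print(f"{percent_correct:.0f}% answered correctly")
```

With half the class answering incorrectly in this example, the instructor would likely pause and revisit the concept before moving on, which is precisely the formative-assessment decision the paragraph above describes.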
Traditional, didactic methods of instruction (lecture-only classes) may have limited success with the culture of
today’s students (Musselman, 2012). Twenty-first century students are comfortable with technology; they learn on
their own and satisfy their intellectual curiosity with tools like Google, Bing, and YouTube (Brown, 2006). They
typically seek quick and succinct answers to their immediate problem. This new way of learning counters the
traditional factory model of education, in which students are herded into large buildings and supplied with large
stocks of knowledge for later use, knowledge that is typically forgotten (Newell, 2003; Brown, 2006).
To prepare our students well, along with facilitating content within a context, instructors could assist students in
“learning how to learn,” which requires the ability to be mindful, to think, consider, and reconsider (metacognitive
strategies), and to ask for and receive feedback in a timely manner. The world of work is constantly changing, and
most students are preparing themselves for jobs that may not have been invented yet. Classic behavioral research
conducted by B. F. Skinner in 1938 empirically demonstrated the importance of immediate feedback: learning
occurred more quickly and was retained more strongly when feedback was timely and aligned with the behavior.
Therein lies the strength of SRSs. Students engage in real-time dialogue and receive real-time feedback about the
content they are learning at that moment, which creates stronger scaffolding and can produce a stronger conceptual
connection. Because everything occurs in real time, students are engaged and focused on the here and now. This
suppresses the DMN, and students find themselves unconsciously and effortlessly practicing mindfulness while
actively engaging in their learning.
“Student response systems have been hypothesized to improve student learning through three broad categories,
which include improving the following: (a) student engagement; (b) student feedback; and (c) teacher feedback”
(Bartsch & Murphy, 2011, pp. 25-26). Snyder (2003) found that SRS increase student engagement because they allow
all students to engage actively during a lecture, preventing the passive learning often typical of a lecture setting.
Stowell, Oldham, and Bennett (2010) added that students who were too anxious or shy to speak up in class felt
more comfortable using an SRS to engage in class discussions because of its anonymity. Ulrich (2006) observed
students being more attentive in class because they knew a question would be asked to which they could respond.
Iwamoto et al. (2017) observed similar behavior: students would prepare before class because they knew the
Kahoot® gamified SRS would be an integral part of the learning experience.
Student engagement has been cited as having an overall positive correlation with college achievement (Bartsch &
Murphy, 2011) and has been identified as a critical component of student success (Kuh et al., 2008). Bartsch and
Murphy (2011) found that students who used an SRS in class scored significantly higher than students who did not.
Similar results were found in a 2017 study in which students who used the Kahoot® SRS scored significantly higher
on the mid-term examination than students who did not (Iwamoto et al., 2017). Hatch et al. (2005) found that 92%
of students surveyed agreed that the SRS helped them understand what they did and did not know. This provided
clear, real-time feedback, which is the goal of formative assessment.
CONCLUSION
Through personal observation and an extensive literature review, it has become evident that 21st century students
have far more access to information than previous generations did, thanks to technological advances. Students
today have many different ways to access information compared with how learning took place even one generation
ago. Real-time information is expected, and questions can be answered with a few clicks on a mobile device. The
pace of information sharing is increasing, and instructors are discovering that if they do not adapt, students find
their own ways to access and interact with conceptual frameworks.
The need for immediate gratification poses a challenge to an educational system based solely on static, one-way
lectures. A student’s past academic experiences shape his or her perceptions about learning. The many opportunities
for students to be distracted pose another level of stress that did not exist a generation ago. Neuroscience has shown
that even when the brain is in a restful state, the DMN is activated and thoughts of the future and past begin to
flood the mind.
These are the challenges that 21st century teachers face. Instructors are battling Google, Bing, YouTube, Facebook,
Instagram, and Twitter, among other distractors. How do we keep students engaged in the here-and-now of our
classroom? One solution is the incorporation of student response systems (SRSs), which have been shown to engage
students in real time. This anonymous interaction has the potential to engage even the shyest of students (Stowell
et al., 2010). Because of the anonymity, students find discussions safe, so they can express their true thoughts rather
than what they think the group would agree with. Another way SRSs improve learning is that instructors receive
real-time feedback that can be used for formative assessment. This real-time interaction between teacher and
students increases learning by keeping students in the here-and-now, which is the only time when the act of learning
can take place. Students commonly report that SRSs are enjoyable, maintain their interest, and encourage
participation (Bartsch & Murphy, 2011). Because students find SRSs engaging, they allow themselves to be present
and in the moment. This mindful act quiets mind wandering and promotes learning in a fun and engaging manner.
REFERENCES
Andrews-Hanna, J. R. (2012). The brain’s default network and its adaptive role in internal mentation. The
Neuroscientist, 18, 251-270.
Atkinson, R. C., & Shiffrin, R. M. (1971). The control of short-term memory. Scientific American, 225, 82-90.
Baker, R.S., Liu, Z., Ocumpaugh, J., & Pataranutaporn, V. (2013). Sequences of Frustration and Confusion, and
Learning. Proceedings of the 6th International Conference on Educational Data Mining, 114-120.
Bartsch, R. A., & Murphy, W. (2011). Examining the Effects of an Electronic Classroom Response System on Student
Engagement and Performance. Journal of Educational Computing Research, 44(1), 25-33.
Brown, J. (2006). New learning environments for the 21st century: Exploring the edge. Change, 38(5), 18-24.
Cozolino, L. (2016). Why therapy works: Using our minds to change our brains. New York, NY: W.W. Norton &
Company.
Davis, D. M., & Hayes, J. A. (2011). What are the benefits of mindfulness? A practice review of psychotherapy-related
research. Psychotherapy, 48(2), 198-208.
de Vise, D. (2010). Wide Web of diversions gets laptops evicted from lecture halls. Retrieved on September 28, 2017
from http://www.washingtonpost.com/wp-dyn/content/article/2010/03/08/AR2010030804915.html?referrer=emailarticle
Farb, N. A. S., Segal, Z. V., Mayberg, H., Bean, J., McKeon, D., Zainab, F., & Anderson, A. K. (2007). Attending to the
present: Mindfulness meditation reveals distinct neural modes of self-reference. SCAN, 2, 313-322.
doi:10.1093/scan/nsm030.
Fried, C. B. (2008). In-class laptop use and its effects of student learning. Computers & Education, 50, 906-914.
doi:10.1016/j.compedu.2006.09.006.
Galal, S., Mayberry, J., Chan, E., Hargis, J., & Halilovic, J. (2015). Technology vs pedagogy: Instructional effectiveness
and student perceptions of a student response system. Currents in Pharmacy Teaching and Learning, 7(5),
590-598.
Hanson, R., & Mendius, R. (2009). Buddha’s brain: The practical neuroscience of happiness, love & wisdom. Oakland,
CA: New Harbinger Publications, Inc.
Hatch, J., Jensen, M., & Moore, R. (2005). Manna from Heaven or "Clickers" from Hell: Experiences with an Electronic
Response System. Journal of College Science Teaching, 34(7), 36-39.
HelpGuide. (n.d.). Benefits of mindfulness. Retrieved from https://www.helpguide.org/harvard/benefits-of-mindfulness.htm
Iwamoto, D. H., Hargis, J., Bordner, R., & Chandler, P. (2017). Self-regulated learning as a critical attribute for
successful teaching and learning. International Journal for the Scholarship of Teaching and Learning: 11(2),
Article 7.
Iwamoto, D. H., Hargis, J., Taitano, E., & Vuong, K. (2017). Analyzing the Efficacy of the Testing Effect Using Kahoot
on Student Performance. Turkish Online Journal of Distance Education, 18(2), 80-93.
Iwamoto, D. H., Hargis, J., & Vuong, K. (2016). The effect of project-based learning on student performance: An
action research study. International Journal for Scholarship of Technology Enhanced Learning, 1(1), 24-42.
Killingsworth, M. A., & Gilbert, D. T. (2010). A wandering mind is an unhappy mind. Science, 330, 932-933.
Kuh, G. D., Cruce, T. M., Shoup, R., Kinzie, J., & Gonyea, R. M. (2008). Unmasking the effects of student engagement
on first-year college grades and persistence. Journal of Higher Education, 79, 540-563.
Lang, J. (2016). No, Banning Laptops Is Not the Answer. Retrieved on September 28, 2017 from
http://www.chronicle.com/article/No-Banning-Laptops-Is-Not-the/237752.
Musselman, M. (2012). How are Middle School Teachers Using Student Response Systems? National Teacher
Education Journal, 5(3), 21-27.
Newell, R. J. (2003). Passion for learning: How project-based learning meets the needs of 21st – century students.
Lanham, MD: Rowman & Littlefield Education.
Nist, S. L., & Simpson, M. L. (2000). College studying. In M. Kamil, P. Mosenthal, P.D. Pearson, & R. Barr (Eds.),
Handbook of reading research, (Vol. III, p. 645-666). Mahwah, NJ: Erlbaum.
Sheline, Y. I., Barch, D. M., Price, J. L., Rundle, M. M., Vaishnavi, S. N., Snyder, A. Z., Mintun, M. A., Wang, S., Coalson,
R. S., & Raichle, M. E. (2009). The default mode network and self-referential processes in depression. PNAS,
106(6), 1942-1947.
Siegel, D. (2010). Mindsight. New York, NY: Random House Publishing Group.
Simpson, M. L., & Nist, S. L. (2002). Active reading at the college level. In C. Block & M. Pressley (Eds.), Comprehension
instruction: Research-based best practices (365-379). New York, NY: Guilford Press.
Simpson, M. L., Stahl, N. A., & Francis, M. (2004). Reading and learning strategies: recommendations for the 21st
century. Journal of Developmental Education, 28(2), 2-32.
Stowell, J. R., Oldham, T., & Bennett, D. (2010). Using Student Response Systems (“Clickers”) to Combat Conformity
and Shyness. Teaching Of Psychology, 37(2), 135-140.
Thomas, J. W., & Rohwer, W. D. (1986). Academic studying: The role of learning strategies. Educational Psychologist,
21, 19-41.
Trilling, B., & Fadel, C. (2009). 21st century learning skills. San Francisco, CA: John Wiley & Sons.
Ulrich, C. (2006). Remote control devices activate learning. Human Ecology, 34(2), 20-21.
Yee, K., & Hargis, J. (2011). Google Moderator & clickers. Turkish Online Journal of Distance Education, 12(2), 9-12.
Yerkes, R. M., & Dodson J. D. (1908). The relation of strength of stimulus to rapidity of habit-formation. Journal of
Comparative Neurology and Psychology, 18, 459-482.
Zimmerman, B. J. (1989). A social cognitive view of self-regulated academic learning. Journal of Educational
Psychology, 81(3), 329-339.
CHAPTER 7
THE IMPACT OF TECHNOLOGY ON HOW INSTRUCTORS TEACH AND HOW STUDENTS LEARN
SARAH ELAINE EATON
UNIVERSITY OF CALGARY
INTRODUCTION
The relationship between technology, teaching and learning is a complex one. Just a few of the factors that
contribute to these complex relationships include: Instructors’ confidence and competence using technology for
teaching; the kinds of technology that are available in a particular learning context; institutional policies on how
money is spent on technology and what kind of technology is licensed or purchased; and whether students bring
their own devices or use technology provided by their learning institution (Becker, Gereluk, Dressler, & Eaton, 2015;
Hourigan & Murray, 2010).
The push to add computers to the classroom first emerged in the 1980s and is now common in North American
classrooms (Christensen, Horn, & Johnson, 2017). However, the use of technology is no longer limited to high-income
or high-resource countries, but also includes developing countries (Bhuasiri, Xaymoungkhoun, Zo, & Rho, 2012;
Glewwe, Hanushek, Humpage, & Ravina, 2011; Kremer, Brannen, & Glennerster, 2013). In other words, the use of
technology for teaching and learning is now a worldwide phenomenon at every level of education.
In this chapter, I will examine the impact of technology on how instructors teach, as well as its impact on how
students learn. I will discuss how technology might evolve for teaching and learning and conclude with strategies for
how instructors can approach the adoption and integration of technology into their teaching practice.
IMPACT OF TECHNOLOGY ON HOW INSTRUCTORS TEACH
It is not enough for a school district or an institution to simply acquire a new technology and expect teachers to
immediately begin implementing it (Jacobsen & Lock, 2004; Kremer et al., 2013). New technology requires teachers
to think deeply about how they will use it to enhance learning, and this kind of deep thinking requires time and the
ability to engage in long-term reflection on their practice, in order to puzzle through how technology can make the
lives of students better. Improving the lives of our students is, after all, one of the motivators for why we teach
(Jacobsen & Lock, 2004). However, in today’s digitized and connected world, it is not enough for instructors to teach,
or even think about teaching, in the ways that they themselves were taught (Jacobsen & Lock, 2004). Instead,
teachers must think ahead to a time when the students they are teaching will be using technologies far beyond what
we have today.
This kind of blue-sky thinking is juxtaposed with the frantic nature of an instructor’s day-to-day teaching schedule
where there is rarely enough time to pause and engage in deep reflection. Instructors face a number of challenges
when it comes to adopting and integrating technology effectively. These include making use of technology in a
meaningful and relevant way, addressing increased demands on time and dealing with technology barriers beyond
the teacher’s control. In the sections that follow, I will address each of these as key considerations for how
technology impacts teaching.
USING TECHNOLOGY IN A MEANINGFUL WAY
It is not enough for technology to be available or installed on school computers. It must also be relevant to one’s
learning goals and objectives, and engaging for learners to use (Becker, Gereluk, Dressler & Eaton, 2015). Various
models have been developed to help educators conceptualize how to integrate technology effectively into their
teaching practice. Among the more notable is the TPACK (Technology, Pedagogy and Content Knowledge) model
(AACTE Committee on Innovation and Technology, 2008; Mishra & Koehler, 2006, 2009; Mishra, Koehler, & Kereluik,
2009). This model challenges teachers to incorporate new tools, platforms, software (Technology) with their
knowledge of a particular subject area (Content) and their knowledge of how to teach that subject area (Pedagogy).
At the heart of it is the notion that technology does not stand apart from the content or the pedagogy, but must be
integrated in a purposeful way (Snookes & Barker, 2009).
ADDRESSING INCREASED DEMANDS ON TIME
Incorporating technology into one’s teaching practice involves a complex set of tasks that put additional demands
on a teacher’s time. These tasks include: Testing the technology to determine its appropriateness; ensuring that
privacy settings align with institutional protocols around student privacy; introducing the technology to the students
and explaining how and why it is relevant to student learning; and then working with students to show them how to
use the technology for learning. It can take longer for a teacher to prepare lessons that incorporate technology
effectively and educators need to be mindful to avoid becoming frustrated with this additional requirement of
preparation time (Becker et al., 2015; Dragon, Peacock, Norton, Steinhauer, Snart, Carbonaro, & Boechler, 2012;
Kang, 2014). After implementing the technology, the teacher might also assume the role of technical troubleshooter, helping students when the technology does not perform as expected. There is no question that integrating
technology into learning is time-consuming.
In addition to demands on preparation and classroom time, teachers may also experience an increased
administrative or managerial load with tasks such as electronic content management (e.g., keeping a log of student
sign-in information) (Hourigan & Murray, 2010; Kang, 2014). As teachers increase their use of technology for
learning, so too must they increase their management of that technology. This increase in administrative load is
likely to become the norm for teachers in coming decades, as the management of technology will become a normal
aspect of learning management for students.
DEALING WITH TECHNOLOGY BARRIERS BEYOND INSTRUCTORS’ CONTROL
Technology barriers beyond instructors’ control are noted as being one of the primary reasons teachers either do
not or cannot incorporate technology effectively into their practice (Kim, Kim, Lee, Spector, & DeMeester, 2013).
These barriers include technology that is outdated or difficult to use, along with Internet connectivity issues. Among
educators who use and understand technology for learning, there is a saying that technology is effective only when
it becomes invisible. In other words, when the technology fails, learning becomes less engaging and less effective:
the moment there are technology issues, there are learning issues.
These issues may be exacerbated for teachers in rural and remote areas, who may have more limited access both to
the technology itself and to reliable connectivity, including telecommunications infrastructure such as the copper or
fiber optic networks that deliver Internet services to a region, community, institution, or dwelling, as well as WIFI
connectivity (Becker et al., 2015; Eaton et al., 2015; Howley, Wood, & Hough, 2011). When these kinds of barriers
persist, teachers may lack the motivation or resources to invest time and energy in incorporating technology into
their own teaching practice. In such cases, the barriers may need to be addressed at the school or district level, as
there may be deeper policy factors at play.
IMPACT OF TECHNOLOGY ON HOW STUDENTS LEARN
Twenty-first century students are often referred to as digital natives, a term coined by Prensky (2001) at the turn of
the millennium. There has been an (erroneous) assumption that because students have grown up with technology,
they automatically know how to use it for learning. Research has shown that this is not always the case (Genç &
Aydin, 2010; Hourigan & Murray, 2010; Stracke, 2007). What we know today is that students need explicit and hands-on
instruction not only on how to use technology for learning, but also to understand what purpose the technology
serves in terms of their learning. In other words, students still need their teachers to guide them in order to learn
how to use new technology and how to relate it to the content they are covering in their classes.
Students excel when they are engaged in active learning (Freeman et al., 2014), and technology can offer a way for
that to happen, provided that students are supported with explicit instruction and given time to learn the
technology for themselves. One way students can engage in active learning using technology is through online
content creation (OCC), through activities such as blogging, tweeting, or sharing content via social media (Brown,
Czerniewicz, & Noakes, 2016; Hourigan & Murray, 2010). It is important to remember that students who have an
interest in the subject matter are more likely to have an interest in using technology to help them learn about that
subject (Genç & Aydin, 2010). That is, the more students are interested in learning about a topic, the more likely
they are to engage with technology.
STUDENTS AS CREATORS OF CONTENT
When students are mere consumers of content, they are understandably less engaged (Brown et al., 2016).
Technology offers students the opportunity to move away from being consumers of knowledge, to creators of their
own content (Brown et al., 2016; Hourigan & Murray, 2010). When that happens, students can demonstrate not
only what they have learned, but how they make sense of what they have learned. Technologies that allow students
to create and learn in real time create a collaborative relationship between the teacher and the learner (Aoki &
Molnar, 2010). Examples of these real-time technologies include Google Docs, Skype, VoiceThread and others.
In addition, real-time mobile technologies provide students with even more flexibility in terms of learning anywhere
at any time (Brown et al., 2016; Chinnery, 2006; Eaton, 2010; Kukulska-Hulme, 2009). The notion of learning
anywhere, anytime is not restricted to desktop computers. It extends to mobile technologies such as cell phones
and tablets. Mobile learning technologies can provide students with opportunities to incorporate social media and
other Internet applications into their learning in innovative ways, such as social networking for learning or blogging,
for example (Brown et al., 2016; Hourigan & Murray, 2010). Brown et al. (2016) cautioned, however, that such
opportunities may be limited to those of high socio-economic status who can afford data plans and other ways of
accessing these learn-anywhere opportunities. That aside, mobile technologies can offer students an opportunity to
take photos, write, create and share their knowledge outside of classroom walls in ways that engage and excite them
(Brown et al., 2016; Hourigan & Murray, 2010).
PREVENTING COGNITIVE OVERLOAD
Given the prevalence of “learn-anywhere” technologies such as smart phones or portable computers (Chu, 2014)
and the likelihood that this trend will continue, teachers must be mindful of striking a balance between the use of
technology and the cognitive load on students. If the technology itself becomes too demanding, student learning
can actually decrease (Chu, 2014; deHaan, Kuwada, & Ree, 2010). Thus, it is very important for teachers to take the
time to guide students’ learning and to engage them in ongoing dialogue about what they are learning, as well as
reflection about how they are changing and growing as a result of what they are learning. This balance might be
achieved through formative assessment that helps students to understand what they are learning at different stages
of the learning process (Chu, 2014).
It is also appropriate for teachers to engage students in reflective dialogue about how the technology is facilitating
learning. This helps students develop more meta-cognitive awareness about their own learning (Lock, Eaton, &
Kessy, 2017). Students will need guided instruction on how to use these tools, but once they know they are
supported by their teacher, their confidence levels can increase and they can become more autonomous and
self-directed. For example, students might interact with one another in a real-time virtual environment such as
Google Docs. At first, the teacher supports the students’ learning by showing them how to use Google Docs for the
purposes of the learning task and setting expectations for learning and productive interaction between students;
after the students have learned the basic functions of the technology and have become accustomed to the social
and learning norms established with the teacher present, they can move to learning more independently in Google
Docs (Lock et al., 2017).
LOOKING AHEAD TO WHAT TECHNOLOGY MEANS FOR LEARNING IN THE FUTURE
At the beginning of this chapter, I contended that educators of today are called upon to think ahead to a time when
students will be using technologies far more advanced than what is commonly available in most classrooms today.
It is quite probable, for example, that virtual reality can be, and will be, used for learning in a variety of contexts
(Garcia-Ruiz, Edwards, El-Seoud, & Aquino-Santos, 2008; O’Brien & Levy, 2008). At the moment, technologies such
as virtual reality are too expensive and complicated to incorporate into most classrooms, but given time, the costs
will come down, making it more accessible and attractive.
Educators of today need to be prepared for a reality of tomorrow where technology such as virtual reality will be
normal. What will remain important through all of that, however, is for teachers to understand, and be able to
communicate, how technology relates to learning and to specific content. The teacher will continue to be the bridge
between technology and inspiring student learning.
CONCLUSION
Twenty-first century teachers are called upon to have combined expertise: knowing what to teach, how to teach it,
and how to engage their learners in a meaningful way using a variety of ever-changing tools at their disposal. They
must also be responsive and adaptive when the technology does not work as expected or becomes unavailable.
Some of the challenges that teachers face today in incorporating technology into learning include setting aside
additional preparation time, being aware of cognitive overload, and engaging students in formative conversations
that help them develop self-regulated learning skills. Whether teachers of the future will face these same challenges
remains to be seen. What we can count on is that real-time learning technologies will become more prevalent, and
teachers will be called upon to bring their combined expertise to an ever-changing landscape of learning.
REFERENCES
AACTE Committee on Innovation and Technology (Ed.) (2008). Handbook of technological pedagogical content
knowledge (TPCK) for educators. New York: Routledge.
Aoki, K., & Molnar, P. (2010). International collaborative learning using Web 2.0: Learning of foreign language and
intercultural understanding. In Z. W. Abas, I. Jung, & J. Luca (Eds.), Proceedings from GLAP 2010: Global
Learn Asia Pacific (pp. 3782–3787). AACE.
Becker, S., Gereluk, D., Dressler, R., & Eaton, S. E. (2015). Online Bachelor of Education Programs Offered in Colleges
and Universities Throughout Canada, the United States, and Australia. Retrieved from
http://hdl.handle.net/1880/50986
Bigatel, P. M., Ragan, L. C., Kennan, S., May, J., & Redmond, B. F. (2012). The identification of competencies for online
teaching. Journal of Asynchronous Learning Networks, 16(1), 59+.
Bhuasiri, W., Xaymoungkhoun, O., Zo, H., & Rho, J. J. (2012). Critical success factors for e-learning in developing
countries: A comparative analysis between ICT experts and faculty. Computers and Education, 58(2), 843-855.
doi:10.1016/j.compedu.2011.10.010
Brown, C., Czerniewicz, L., & Noakes, T. (2016). Online content creation: looking at students’ social media practices
through a Connected Learning lens. Learning, Media and Technology, 41(1), 140-159.
doi:10.1080/17439884.2015.1107097
Chinnery, G. M. (2006). Emerging technologies: Going to the MALL: Mobile assisted language learning. Language
Learning & Technology, 10(1), 9–16. Retrieved from http://llt.msu.edu/vol10num1/emerging/default.html
Christensen, C. M., Horn, M. B., & Johnson, C. W. (2017). Disrupting class: How disruptive innovation will change the
way the world learns (Expanded 2017 edition ed.). New York: McGraw Hill.
Chu, H. C. (2014). Potential negative effects of mobile learning on students' learning achievement and cognitive
load: A format assessment perspective. Journal of Educational Technology & Society, 17(1), 332-344.
deHaan, J., Kuwada, K., & Ree, W. M. (2010). The effect of interactivity with a music video game on second language
vocabulary recall. Language, Learning & Technology, 14(2), 74+. Retrieved from
http://llt.msu.edu/vol14num2/dehaanreedkuwada.pdf
Dragon, K., Peacock, K., Norton, Y., Steinhauer, E., Snart, F., Carbonaro, M., & Boechler, P. (2012). Digital
opportunities within the aboriginal teacher education program: A study of preservice teachers' attitudes
and proficiency in technology integration. Alberta Journal of Educational Research, 58(2), 263-285.
Eaton, S. E. (2010). Global Trends in Language Learning in the Twenty-first Century. Retrieved from
http://files.eric.ed.gov/fulltext/ED510276.pdf
Eaton, S. E., Dressler, R., Gereluk, D., & Becker, S. (2015). A review of the literature on rural and remote pre-service
teacher preparation with a focus on blended and e-learning models. Retrieved from
http://hdl.handle.net/1880/50497 doi:10.13140/RG.2.1.1644.4882
Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active
learning increases student performance in science, engineering, and mathematics. Proceedings of the
National Academy of Sciences of the United States of America, 111(23), 8410–8415. Retrieved from
http://www.pnas.org/content/111/23/8410.short doi:10.1073/pnas.1319030111
Garcia-Ruiz, M. A., Edwards, A., El-Seoud, S. A., & Aquino-Santos, R. (2008). Collaborating and learning a second
language in a wireless virtual reality environment. International Journal of Mobile Learning and
Organisation, 2(4), 369–377.
Genç, G. l., & Aydin, S. (2010). Students’ motivation towards computer use in EFL learning. Retrieved from
http://www.eric.ed.gov/PDFS/ED511166.pdf
Glewwe, P. W., Hanushek, E. A., Humpage, S. D., & Ravina, R. (2011). School resources and educational outcomes in
developing countries: A review of the literature from 1990 to 2010. National Bureau of Economic Research
Working Paper 17554. Retrieved from http://www.nber.org/papers/w17554
Hourigan, T., & Murray, L. (2010). Using blogs to help language students to develop reflective learning strategies:
Towards a pedagogical framework. Australian Journal of Educational Technology, 26(2), 209-225. doi:
https://doi.org/10.14742/ajet.1091
Howley, A., Wood, L., & Hough, B. (2011). Rural elementary school teachers’ technology integration. Journal of
Research in Rural Education, 26(9). Retrieved from http://jrre.psu.edu/articles/26-9.pdf
Jacobsen, D. M., & Lock, J. V. (2004). Technology and teacher education for a knowledge era: Mentoring for student
futures, not our past. Journal of Technology and Teacher Education, 12(1), 75+.
Kang, J. J. (2014). Learning to teach a blended course in a teacher preparation program. Contemporary Issues in
Technology and Teacher Education, 14(1), 54-71. Retrieved from http://www.citejournal.org/volume14/issue-1-14/current-practice/learning-to-teach-a-blended-course-in-a-teacher-preparation-program/
Kukulska-Hulme, A. (2009). Will mobile learning change language learning? ReCALL (Selected papers from EUROCALL
2008), 21(2), 157–165.
Kim, C., Kim, M. K., Lee, C., Spector, J. M., & DeMeester, K. (2013). Teacher beliefs and technology integration.
Teaching and Teacher Education, 29, 76-85. doi:10.1016/j.tate.2012.08.005
Kremer, M., Brannen, C., & Glennerster, R. (2013). The challenge of education and learning in the developing world.
Science, 340(6130), 297-300. doi:10.1126/science.1235350
Lock, J. V., Eaton, S. E., & Kessy, E. (2017). Fostering self-regulation in online learning in K-12 education. Northwest
Journal of Teacher Education, 12(2), 1-13. doi:10.15760/nwjte.2017.12.2.2
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher
knowledge. Teachers College Record, 108(6), 1017-1054.
Mishra, P., & Koehler, M. (2009). Too cool for school? No way! Using the TPACK framework: You can have your hot
tools and teach with them, too. Learning & Leading with Technology, 36(7), 14-18. Retrieved from
http://files.eric.ed.gov/fulltext/EJ839143.pdf
P a g e | 79
Mishra, P., Koehler, M. J., & Kereluik, K. (2009). Looking back to the future of educational technology. TechTrends,
53(5), 48-53. Retrieved from http://matt-koehler.com/hybridphd/hybridphd_summer_2010/wpcontent/uploads/2010/06/mishra_koehler_techtrends_2009.pdf
O'Brien, M. G., & Levy, R. M. (2008). Exploration through virtual reality: Encounters with the target culture. Canadian
Modern Language Review/ La Revue canadienne des langues vivantes, 64(4), 663–691.
Prensky, M. (2001). Digital natives, digital immigrants (part 1). On the Horizon, 9(5), 1-6.
Snookes, P., & Barker, J. (2009). Blended learning as a means to an end: Project report, University of Worcester.
Worcester Journal of Learning & Teaching 3. Retrieved from http://www.worc.ac.uk/adpu/1124.htm
Stracke, E. (2007). A road to understanding: A qualitative study on why learners drop out of a blended language
learning (BLL) environment. ReCALL, 19(1), 57–78.
PART 2
EDUCATIONAL TECHNOLOGIES THAT INSTRUCTORS USE TO TEACH STUDENTS
CHAPTER 8
DESIGNING GROUP WORK IN BLENDED LEARNING ENVIRONMENTS
BARBARA BROWN, UNIVERSITY OF CALGARY, AND NORMAN VAUGHAN, MOUNT ROYAL UNIVERSITY
INTRODUCTION
The purpose of this chapter is to discuss a framework that can guide the development of promising learning designs
intended to promote group work and collaborative knowledge building in higher education, specifically in blended
learning environments. Today’s learners need newly designed learning experiences leveraging collaboration
technologies (Vaughan, 2014). Learners expect to work collaboratively and to have engaging learning experiences (Dunlap & Lowenthal, 2011). Some argue that professors are ill-equipped to shift from conventional styles of teaching to new technology-rich forms (Becker et al., 2017; EDUCAUSE Learning Initiative, 2017). We argue that by using the five principles of the teaching effectiveness framework (Friesen, 2009) to design blended learning environments along with collaboration technologies, instructors can provide students with opportunities to work in groups, collaborate with each other, and amplify their learning experiences. The five core principles of the teaching
effectiveness framework include: (1) Teachers are designers of learning; (2) Teachers design worthwhile work; (3)
Teachers design assessment to improve student learning and guide teaching; (4) Teachers foster a variety of
interdependent relationships; and (5) Teachers improve their practice in the company of their peers. The Teaching
Effectiveness Framework provides a lens for designing and assessing learning designs (Friesen, 2009).
We both design our post-secondary courses using these five guiding principles. The first principle
suggests effective teaching practice begins with an intentional and iterative design cycle of learning. The second
principle describes how learning experiences need to be authentic, relevant to a broad audience, and promote deep
learning. The third principle specifies assessment practices are designed to inform next steps for students while the
learning is taking place and guide next steps for the instructor in making pedagogical decisions along the way. The
fourth principle indicates teachers need to foster a variety of interdependent relationships including human
relationships and relationships with the disciplinary learning. Relationships include teacher-to-student, student-to-student, student-to-others outside of the class and school, and student-to-content. The fifth principle suggests
teachers improve their practice in the company of peers. This principle calls on teachers to shift from working in
isolation to working collaboratively to improve designs for blended learning.
Despite the perception that blended learning (also referred to as hybrid learning in the literature) is a combination
of online and face-to-face instruction, definitions of blended learning vary among authors. Allen, Seaman, and Garrett (2007) suggest blended learning environments deliver between 30% and 79% of content online, combined with face-to-face instruction. In contrast, Moskal, Dziuban, and Hartman (2013) argue there is no specific amount of
time or ratio that defines blended learning; this type of learning should be interactive and provide active learning
opportunities with online components. According to Picciano (2014) there is no clear taxonomy or agreed-upon
definition of blended learning. In some cases, instructors use online spaces to complement classroom activities
during class time or use online spaces for learning activities outside of class time.
For some time now, authors have predicted that face-to-face, on-campus courses would eventually include online
components and perhaps the term blended would no longer be required (Garrison, 2017). Garrison and Vaughan
(2008) defined blended learning as “the organic integration of thoughtfully selected and complementary face-to-face and online approaches and technologies” (p. 148). A decade later, we continue to use this definition and
consider blended learning environments as face-to-face, on-campus courses that meaningfully integrate online
components and promote participatory cultures. In this chapter, we maintain the term “blended” to emphasize how
our designs for learning include an intentional combination of in-person and online experiences. In order to design
blended learning experiences, we also draw on assumptions of participatory cultures as described by Jenkins and
colleagues (2009):
• Expertise and teaching is distributed, so the most experienced can mentor new members in the community.
• Group knowledge building is a collective responsibility and endeavor (i.e., collective intelligence, pooling knowledge).
• Learners are socially connected with one another within and beyond the classroom. A culture of inquiry supports idea creation and the sharing of creations (i.e., networking).
• Learners are provided with multiple opportunities for engagement, expression and representation (i.e., multimodality, transmedia navigation).
• Collaboration and knowledge sharing is expected, learners believe their contributions matter, and embrace diversity of views and multiple perspectives (i.e., negotiation, discernment).
We argue that combining the teaching effectiveness framework with these assumptions about participatory cultures can inform instructors designing blended, technology-enhanced learning experiences in higher education that promote collaborative knowledge building (Bereiter & Scardamalia, 2014; Sawyer, 2012; Thomas & Brown, 2011). Thus, our classes can be considered blended, technology-enhanced learning environments, defined as “complex learning environments that enable appropriate use of technological resources in order to continually enhance the conditions conducive to learning” (Brown, 2013, p. 304).
DESIGNING WITH THE 5 PRINCIPLES OF THE TEACHING EFFECTIVENESS FRAMEWORK
In this section, we discuss each of the five core principles of the teaching effectiveness framework and how we design
blended learning for our undergraduate students drawing on the qualities of participatory cultures and using
collaborative technologies. The examples shared are not an exhaustive list of the ways collaborative technologies may be used within each of the five core principles; rather, they indicate many of the technologies we currently use in blended learning environments to support group work.
PRINCIPLE 1 – TEACHERS ARE DESIGNERS OF LEARNING
Our experience suggests that many students in higher education lack a clear understanding of how to work
collaboratively in groups (Thomas & Brown, 2017) and this often results in the Pareto Principle where 20% of the
students are doing 80% of the work (Sanders, 1987). To overcome this challenge, we recommend using collaborative technologies to design a low-stakes group activity at the beginning of a semester so that students gain first-hand experience with Tuckman’s (1965) four stages of team development. These four stages consist of
Forming, Storming, Norming, and Performing. For example, students can use an online document to brainstorm
ideas, share ideas with a group and then invite the group to help prioritize ideas. Dotstorming (dotstorming.com) is
an example of an application that can be used for this type of low-stakes group activity. The Team-Based Learning
Collaborative (2017) also provides additional strategies and digital technologies for supporting this approach to team
development.
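For instructors who script their own classroom activities, the prioritization step in a dot-voting exercise like the one described above can be sketched in a few lines. This is an illustrative sketch only; the function name and sample data are hypothetical and are not part of the Dotstorming tool:

```python
from collections import Counter

def tally_dot_votes(votes):
    """Rank brainstormed ideas by the number of dots they received.

    votes: a list of (student, idea) pairs, one pair per dot placed.
    Returns (idea, dot_count) tuples, most-voted first.
    """
    counts = Counter(idea for _student, idea in votes)
    return counts.most_common()

# Each student allocates a small budget of dots to the ideas
# they want the group to prioritize.
votes = [
    ("ana", "peer blogs"), ("ana", "video recap"),
    ("ben", "peer blogs"), ("ben", "peer blogs"),
    ("cam", "video recap"), ("cam", "exit slips"),
]
print(tally_dot_votes(votes))
# → [('peer blogs', 3), ('video recap', 2), ('exit slips', 1)]
```

The same tally works whether the dots are collected on paper, in a shared document, or exported from a polling tool.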
DESIGN COMPLEMENTARY IN-CLASS AND ONLINE ACTIVITIES
For example, online discussion forums can be used to complement in-person discussions and engage undergraduate
students in dialogue with each other and with the instructor to document and develop understanding over time. We
start with developing a shared understanding of discussion norms, such as strategies for facilitating scholarly
discussion and debate (Jacobsen, 2009). A structured reading group (SRG) approach can be used to integrate online
and face-to-face discussions (Parrott & Cherry, 2011). For this activity, students are divided into groups of five to six.
Each week there is a course reading and students are assigned a specific role and task to complete online (see Figure
1). Then during class time, one student is responsible for directing the discussion and another has the task of taking
notes for the group’s face-to-face discussion.
Figure 1. Structured reading group assignment to integrate online and face-to-face discussions. Screen capture taken by Norman Vaughan (2017).
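Instructors who want to automate the weekly role assignments in a structured reading group can compute a rotation so every student cycles through every role. The sketch below is an assumption for illustration; the role names are examples, not roles prescribed by Parrott and Cherry (2011):

```python
def weekly_roles(members, roles, week):
    """Assign each group member a role for the given week (1-based),
    shifting the pairing each week so everyone cycles through all roles."""
    return {member: roles[(i + week - 1) % len(roles)]
            for i, member in enumerate(members)}

group = ["Ana", "Ben", "Cam", "Dee", "Eli"]
roles = ["discussion director", "note taker", "summarizer",
         "questioner", "connector"]

# Week 1: Ana directs the discussion; in week 2 the roles shift by one.
print(weekly_roles(group, roles, 1)["Ana"])  # → discussion director
print(weekly_roles(group, roles, 2)["Ana"])  # → note taker
```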
This type of low-stakes activity provides students with an opportunity to work and create together as a group to develop understanding of a concept before engaging in group design work. Findings from Akyol and
Garrison’s (2008) study provided evidence of increased group cohesion when using an online discussion forum in
comparison to other forms of dialogue, such as in-person conversations.
In participatory cultures, group knowledge building is a collective responsibility and endeavor (Jenkins et al., 2009).
In other words, the online discussions and in-classroom discussions provide opportunities to foster collaborative
communities of inquiry to pool knowledge, advance collaborative knowledge building and amplify collective
intelligence. The role of the instructor is important through this process to help develop the online and in-class
community (Garrison & Vaughan, 2008). For example, the instructor may select excerpts from the online posts and
responses to share with students when meeting in-class. The instructor may provide small groups with several
anonymized excerpts from the online discussions and ask the group to use the assessment rubric to analyze the
posts. Groups may discuss the excerpts and write one question that could be asked to deepen understanding or
strengthen the quality of the post. The instructor may ask the groups to share all their questions generated from
this exercise and reinforce the value in working together and engaging in collaborative knowledge building activities.
Blending discussions occurring online and in the classroom can help deepen student understanding and build a
sense of social connection and community in the class.
DISCUSS ETHICAL PRINCIPLES FOR USING MOBILE DEVICES AND OTHER LEARNING TECHNOLOGIES
Instructors may need to discuss ethical principles for using mobile devices in class if students are not accustomed to
using technology for learning in the classroom and when working in groups. For instance, the instructor may model
how to use VoiceThread to combine excerpts of visual, text and audio to demonstrate how to combine input from
group members. Figure 2 shows the instructor modelling how to use VoiceThread when engaging students in a
discussion about ethical principles for using mobile devices when working in groups. In participatory cultures,
learners are provided with multiple opportunities for engagement, expression, and representation of their learning
(Jenkins et al., 2009). As such, teachers are designers of learning and of transmedia navigation. Using a tool such as VoiceThread can provide learners with opportunities to interconnect multiple forms of expression that contribute to group knowledge building.
Figure 2. Using VoiceThread for expression and representation of learning. Screen capture and image taken by Barbara Brown (2017).
PRINCIPLE 2 – TEACHERS DESIGN WORTHWHILE WORK
Authenticity is a key dimension when designing learning to ensure students are engaged in work worthy of their
time and attention (i.e., problem-oriented, intended for a real audience and purpose). This type of work mirrors the kinds of work an expert in the field would perform and fosters deep understanding. In participatory cultures, collaboration and knowledge sharing is expected; learners believe their contributions matter, and embrace diversity of views and multiple perspectives (Jenkins et al., 2009). For example, a group of students could use a blogging application such as WordPress to critique a peer-reviewed journal article. The author(s) of the article can then post a response to the students’ critique on the blog (see Figure 3).
Figure 3. Article critique of peer reviewed journal article with author’s response. Screen capture taken by Norman Vaughan (2008).
ESTABLISH SHARED WORK SPACES TO SUPPORT COLLABORATION AND KNOWLEDGE SHARING
In the classroom, seating can be arranged in small groupings to facilitate group discussions, sharing and collaborative
design or students may have options to move to group work areas (see Figure 4).
Figure 4. Classroom group work taken by Norman Vaughan in Calgary, Alberta (2017).
In online spaces, students can meet in virtual or breakout rooms (Figure 5). Groups can determine which physical
and online spaces to use for collaborating and designing together. We have found students regularly use online
spaces to complement the work conducted when meeting with a group in-person. These online spaces assist groups
when working together in physical proximity and also when making contributions or working apart from the group.
The role of the instructor may include setting up the online work space, establishing goals for the group work sessions and arranging check points with each group. For instance, when using online rooms as shown in Figure 5, the instructor
can ask the students to record their session or submit a copy of the note pod or chat pod following the group
meeting.
Figure 5. Group work using a virtual break out room in Adobe Connect. Screen capture taken by Barbara Brown (2017).
The Google suite of applications can also be used to engage students in productive collaboration around authentic
work. For example, designing a unit plan is considered an authentic task for undergraduate education students, since
this mirrors the work of practicing teachers in schools. Online work spaces can be used to support meaningful
collaborative work. When undergraduate students in education work in groups to develop a unit plan, each student
may take the lead on different aspects of the unit similar to what teachers in the field do when designing a unit
together with their colleagues. Collaboration technologies, such as Google documents may be used to pitch ideas,
form groups and develop work plans. Google Sheets may be used for project management and Google Sites can be
used to develop unit components. Students can collaboratively draft their work in shared online spaces and also
make the planning visible to other groups.
PRINCIPLE 3 – TEACHERS DESIGN ASSESSMENT TO IMPROVE STUDENT LEARNING AND GUIDE TEACHING
In business settings, assessment is often structured as 360-degree feedback, which involves self, peer, supervisor, and customer evaluations (Mamatoglu, 2008). We can replicate this approach in higher education by creating
collaborative learning activities beyond group discussion forums that allow students to receive self, peer, teacher,
and community feedback through the use of digital technologies. Collaboration technologies can help with
documenting evidence of learning and assessing group work. A variety of assessment data can be visible online and
gathered during the evolution of the project to monitor growth. Online environments can provide students with
mechanisms to track evidence of growth and learning. This can be helpful for students when working in groups and
reflecting on their individual and collective contributions and can inform instructors when assessing group work.
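To make the 360-degree idea concrete, an instructor who collects self, peer, instructor, and community ratings (for example, from a spreadsheet export of an online form) could summarize them per source with a short script. The data shape and function below are illustrative assumptions, not a feature of any particular tool:

```python
from statistics import mean

def summarize_360(ratings):
    """Average scores per feedback source (self, peer, instructor,
    community) so a student can compare perspectives on the same
    criterion, e.g. 'contribution to the group' on a 1-5 scale."""
    by_source = {}
    for source, score in ratings:
        by_source.setdefault(source, []).append(score)
    return {source: round(mean(scores), 2)
            for source, scores in by_source.items()}

ratings = [("self", 5), ("peer", 3), ("peer", 4),
           ("instructor", 4), ("community", 4)]
print(summarize_360(ratings))
# → {'self': 5, 'peer': 3.5, 'instructor': 4, 'community': 4}
```

A gap between the self average and the peer average is itself useful formative evidence for a reflective conversation.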
USE TECHNOLOGIES TO PROVIDE COMPREHENSIVE EVIDENCE OF LEARNING
For example, students can use a blogging application, such as WordPress, to self-assess and reflect on their progress
when completing a collaborative learning task. Students can document their learning and set goals for future growth
and development with an ePortfolio (Figure 6).
Figure 6. Example of an ePortfolio page used to document growth and goals for assessing student learning. Screen capture taken by Norman
Vaughan (2017).
Instructors can use learning technologies to support self-assessment (Wiliam & Leahy, 2015). For example, students
can use their mobile devices to capture their key learning moments during classroom conversations and include
multimodal elements into their self-assessment. Thinking processes and feedback can be made visible while the
work progresses in the classroom and outside of the classroom.
MAKE INSTRUCTOR FEEDBACK VISIBLE IN SHARED ONLINE WORK SPACES
Online collaborative spaces can also be used for formative assessment (Earl, 2013; Wiliam & Leahy, 2015) strategies
when students are in the classroom and outside of classroom time. Online spaces provide documentation and help
students track and record iterative design processes and draft work. Figure 7 demonstrates how an instructor can
provide feedback to students in an online space. The feedback can be reviewed by all students in the class and
students can respond or resolve each individual comment.
Figure 7. Instructor feedback using comments in Google Slides. Screen capture taken by Barbara Brown (2017).
PEER GROUPS CAN REVIEW WORK AND PROVIDE FEEDBACK IN ONLINE SPACES
“Participatory culture shifts the focus of literacy from individual expression to community involvement” (Jenkins et
al., 2009, p. 6). In terms of peer feedback on research papers, laboratory reports or developing collaborative unit
plans, students can use the Calibrated Peer Review Tool from the University of California (2017). Figure 8
demonstrates a Google Document used for group work. Such documents can easily be shared with others to gather feedback.
Figure 8. A shared online workspace that can be shared with others for feedback. Screen Capture taken by Barbara Brown (2018).
ARRANGE FOR OUTSIDE EXPERTS TO PROVIDE FEEDBACK IN SHARED WORK SPACES
Experts in the field can be invited into online environments to provide students with feedback about their work. In our experience, experts are more willing to offer feedback in online spaces because meeting online requires neither travel to the university nor time off from work. Instead, experts can join for a short period, such as over a lunch break, and provide advice to students. Web conferencing software can be used to arrange synchronous
feedback (i.e. 15-minute Adobe Connect or Skype sessions with an expert). In Figure 9, a classroom teacher provides
undergraduate students with feedback about their unit plan during her lunch break using web conferencing
software. In this case, the students shared their work with the expert and in turn, the expert provided immediate
feedback to the group. Group members used the microphone and chat box to ask questions and seek further
clarification from the outside expert.
Figure 9. Outside expert providing advice synchronously using Adobe Connect Web Conferencing taken by Barbara Brown (2018).
Friesen and Scott (2013) discuss the value in consulting with experts when students are engaged in exploring real-world problems during the inquiry process. Experts can be invited to interact with the online artifacts of learning
and can then provide students with formative feedback and inform the development of criteria for high quality
standards. For example, students can seek feedback from community experts by creating videos of their work and
posting the results to online video sites such as YouTube. Community members can then watch these videos and
provide feedback in the accompanying online discussion forum. TodaysMeet is an example of a tool that can be used to arrange synchronous or asynchronous textual feedback from outside experts. As shown in the transcript excerpt
(Figure 10) from TodaysMeet, this is an online space that can be used to record ideas and engage in student-expert
interactions.
Figure 10. TodaysMeet can be used for student-expert interactions. Screen capture taken by Barbara Brown (2017).
Formative feedback provided by the instructor, peers and outside experts can help guide students with their next
steps and also inform the instructors’ instructional design decisions. Collaborative technologies can be used to guide
next steps for learning and for teaching.
PRINCIPLE 4 – TEACHERS FOSTER A VARIETY OF INTERDEPENDENT RELATIONSHIPS
In the world of real estate, it is all about location, location, location; in education, the focus is on relationships, relationships, relationships. Students are challenged when working in groups and need supports
for working in collaboration and forming interdependent relationships (Thomas & Brown, 2017). The beauty of a
blended learning environment is that students are able to initially find their voices by using asynchronous or
synchronous communication technologies or a combination of both. Similar to our observations, other authors also
recognize collaboration technologies are useful for supporting interdependent relationships and group work (Clark
& Blissenden, 2013; O’Donnell & Hmelo-Silver, 2013).
USE TECHNOLOGY TO SUPPORT COMMUNICATIONS
In participatory cultures, learners are socially connected with one another within and beyond the classroom (Jenkins
et al., 2009). Communication can continue in blended learning environments using asynchronous and synchronous
spaces. Students can draw on each other’s strengths, negotiate ideas and work through their challenges as a group.
The number of asynchronous technologies and synchronous technologies continues to grow, so we will not attempt
to list them. The key is to take a strengths-based approach (Rath, 2007) to communication in a blended learning
environment. Students may select collaborative technologies to communicate with their project group and manage
their work. For example, students can use Google applications such as Docs and Hangout to support online group
work. Students can collaboratively write in a Google Doc while synchronously chatting with each other in a Google
Hangout (Figure 11).
Figure 11. Students using both Google Docs and Hangout to support collaborative online group work. Screen capture taken by Norman Vaughan
(2017).
The instructor may also select technologies to support group work. Figure 12 provides an example of text messages
sent to students using the Remind app. The instructor can use this application to communicate with individuals,
small groups or the whole class by broadcasting messages or other multimedia. Similarly, the students can send
instant messages back to the instructor or use the emotion icons to provide a response.
Figure 12. Broadcast messages using the Remind app. Screen capture taken by Barbara Brown (2017).
PRINCIPLE 5 – TEACHERS IMPROVE THEIR PRACTICE IN THE COMPANY OF THEIR PEERS
Instructors improve their practice by sharing their practice and artifacts of learning with their peers. One of the
strategies we use to gather artifacts of learning is to seek input from our students. A variety of classroom assessment
techniques have been developed that utilize digital technologies to effectively and efficiently support this process
(Martin, 2012).
SEEK STUDENT INPUT AND SHARE WITH COLLEAGUES TO IMPROVE PRACTICE TOGETHER
For instance, instructors can gather input, such as feedback about the learning design, by using survey tools (i.e., online forms). Gathering input from students during the course (i.e., exit slips at the end of class, a survey for mid-course feedback) can inform redesign and improvements when working alongside other instructors teaching the same course as part of a community of practice (Wiliam & Leahy, 2015). Figure 13 provides an example of a mid-course survey used to gather feedback from students. One way to use the student input is to share the results with colleagues and discuss opportunities for growth.
Figure 13. Example of Mid-Course Survey. Screen capture taken by Barbara Brown (2017).
In participatory cultures, group knowledge building is a collective responsibility and endeavor (Jenkins et al., 2009).
A community of practice can work toward improving technology-enhanced learning environments and work
together to develop and improve designs for learning. Figure 14 provides an example of Mentimeter, a tool used to
gather input from students about the challenges with technology-enhanced learning. Colleagues may review this
type of student input together and use the results to inform future redesign ideas.
Figure 14. Mentimeter word cloud display of student responses. Screen capture taken by Barbara Brown (2017).
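The tally behind a word-cloud display such as Figure 14 is a word-frequency count over the open-ended responses. The sketch below is an illustrative assumption, not Mentimeter's actual implementation; the stopword list and sample responses are invented for the example:

```python
import re
from collections import Counter

# A tiny illustrative stopword list; a real one would be longer.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "my"}

def word_frequencies(responses):
    """Count words across open-ended student responses; in a word
    cloud, more frequent words would render larger."""
    words = []
    for response in responses:
        words += [w for w in re.findall(r"[a-z']+", response.lower())
                  if w not in STOPWORDS]
    return Counter(words)

responses = [
    "wifi drops during group work",
    "group work takes time",
    "hard to find time for group meetings",
]
print(word_frequencies(responses).most_common(3))
# → [('group', 3), ('work', 2), ('time', 2)]
```

Colleagues reviewing the raw counts together, rather than only the rendered cloud, can spot which challenges recur across sections of the same course.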
ENGAGE IN A RESEARCH-PRACTICE PARTNERSHIP TO TAKE COLLECTIVE RESPONSIBILITY FOR QUALITY
COURSE IMPROVEMENT
Instructors may work on teams to engage in action research to improve practice (Brown, Dressler, Eaton & Jacobsen,
2015). In terms of developing an action research mindset and sharing evidence-based practice with peers, the Alberta
Initiative for School Improvement (AISI) (Parsons, 2011) was a model for facilitating this process and encouraging
research-practice partnerships. The AISI framework provided support, resources, time, and recognition for teachers
from across the Province of Alberta to collaboratively work together to meaningfully inquire into their teaching
practice and share their results (Hargreaves et al., 2009).
In this chapter we provided examples of learning designs in blended learning environments using the five principles
from the Teaching Effectiveness Framework (Friesen, 2009) and qualities of a participatory culture (Jenkins et al.,
2009). The five principles from the Teaching Effectiveness Framework include: (1) Teachers are designers of learning,
(2) Teachers design worthwhile work, (3) Teachers design assessment to improve student learning and guide
teaching, (4) Teachers foster a variety of interdependent relationships, and (5) Teachers improve their practice in
the company of their peers. In participatory cultures expertise and teaching is distributed and group knowledge
building is a collective responsibility; learners are socially connected; learners are provided with multiple
opportunities for engagement, expression and representation; and collaboration and knowledge sharing is expected.
The examples discussed in this chapter can serve to inform instructors designing blended learning environments, and higher education learners can benefit from learning experiences that meaningfully incorporate technologies to promote collaboration and group work:
• Design complementary in-class and online activities.
• Discuss ethical principles for using mobile devices and other learning technologies.
• Establish shared work spaces to support collaboration and knowledge sharing.
• Use technologies to provide comprehensive evidence of learning.
• Make instructor feedback visible in shared online work spaces.
• Peer groups can review and provide feedback in online spaces.
• Arrange for outside experts to provide feedback in shared work spaces.
• Use technology to support communications.
• Seek student input and share with colleagues to improve practice together.
• Engage in a research-practice partnership to take collective responsibility for quality course improvement.
REFERENCES
Akyol, Z., & Garrison, D. R. (2008). The development of a community of inquiry over time in an online course:
Understanding the progression and integration of social, cognitive and teaching presence. Journal of
Asynchronous Learning Networks, 12(3-4), 3-22.
Allen, I. E., Seaman, J., & Garrett, R. (2007). Blending in: The extent and promise of blended education in the United
States. Retrieved from http://sloanconsortium.org/publications/survey/blended06
Becker, A. S., Cummins, M., Davis, A., Freeman, A., Hall, G. C., & Ananthanarayanan, V. (2017). NMC horizon report:
2017 higher education edition. Austin, TX: The New Media Consortium.
Bereiter, C., & Scardamalia, M. (2014). Knowledge building and knowledge creation: One concept, two hills to climb.
In S. C. Tan, H. J. So, J. Yeo (Eds.) Knowledge creation in education (pp. 35-52). Singapore: Springer.
Brown, B. (2013). Technology-enhanced learning environment. In R. C. Richey (Ed.), Encyclopedia of terminology for educational communications and technology (pp. 304-305). New York, NY: Springer.
Brown, B., Dressler, R., Eaton, S. E., & Jacobsen, M. (2015). Practicing what we teach: Using action research to learn about teaching action research. Canadian Journal of Action Research, 16(3), 61-78.
Clark, S., & Blissenden, M. (2013). Assessing student group work: Is there a right way to do it? The Law Teacher, 43(3), 368–381. https://doi.org/10.1080/03069400.2013.851340
Dunlap, J. C., & Lowenthal, P. R. (2011). Learning, unlearning, and relearning: Using Web 2.0 technologies to support
the development of lifelong learning skills. In G. D. Magoulas (Ed.), E-infrastructures and technologies for
lifelong learning: Next generation environments (pp. 46-52). Hershey, PA: IGI Global.
Earl, L. (2013). Assessment as learning: Using classroom assessment to maximize student learning (2nd ed.).
Thousand Oaks, CA: SAGE Publications Ltd.
EDUCAUSE Learning Initiative. (2017). The 2017 key issues in teaching and learning. Retrieved from
https://library.educause.edu/~/media/files/library/2017/2/eli7141.pdf
Friesen, S. (2009). What did you do in school today? Teaching effectiveness: A framework and rubric. Toronto, ON:
Canadian Education Association.
Friesen, S., & Scott, D. (2013). Inquiry-based learning: A review of the research literature. Retrieved from
http://galileo.org/focus-on-inquiry-lit-review.pdf
Garrison, D. R. (2017). E-learning in the 21st century: A community of inquiry framework for research and practice
(3rd ed.). London: Routledge/Taylor and Francis.
Garrison, D. R., & Vaughan, N. D. (2008). Blended learning in higher education. San Francisco, CA: Jossey-Bass.
Hargreaves, A., Crocker, R., Davis, B., McEwen, L., Sahlberg, P., Shirley D., & Sumara, D. (2009). The learning mosaic:
A multiple perspectives review of the Alberta Initiative for School Improvement (AISI). Edmonton, AB:
Government of Alberta.
IDEO LLC. (2012). Design thinking for educators. Retrieved from http://designthinkingforeducators.com/
Jacobsen, M. (2009). Cultivating a scholarly community of inquiry through shared leadership. Retrieved from
http://people.ucalgary.ca/~dmjacobs/eder679.12/scholarly_community.html
Jenkins, H., Purushotma, R., Weigel, M., Clinton, K., & Robison, A. J. (2009). Confronting the challenges of
participatory culture: Media education for the 21st century. Cambridge, MA: The MIT Press.
Mamatoglu, N. (2008). Effects on organizational context (culture and climate) from implementing a 360-degree
feedback system: The case of Arcelik. European Journal of Work & Organizational Psychology, 17(4), 426-449.
Martin, M. B. (2012). Classroom assessment techniques designed for technology. Retrieved from http://onlinecourse-design.pbworks.com/f/Classroom+Assessment+Techniques+Designed+Technology.pdf
Moskal, P., Dziuban, C., & Hartman, J. (2013). Blended learning: A dangerous idea? The Internet and Higher
Education, 18, pp. 15-23. doi: 10.1016/j.iheduc.2012.12.001
O’Donnell, A. M., & Hmelo-Silver, C. E. (2013). Introduction: What is collaborative learning? An overview. In C. E.
Hmelo-Silver, C. A. Chinn, C. Chan, & A. O’Donnell (Eds.), The international handbook of collaborative
learning (pp. 1–15). New York, NY: Routledge.
Parrott, H. M., & Cherry, E. (2011). Using structured groups to facilitate deep learning. Teaching Sociology, 39(4), 354-370.
Parsons, J. (2011). Eleven years of teacher action research: How AISI affects education. ATA Magazine, 91(4).
Retrieved from https://www.teachers.ab.ca/Publications/ATA%20Magazine/Volume-91/Number-4/Pages/Eleven-years-of-teacher-action.aspx
Picciano, A. G. (2014). A critical reflection of the current research in online and blended learning. Lifelong Learning in
Europe (LLinE), 4.
Rath, T. (2007). Strengths finder 2.0. New York, NY: Gallup Press.
Sanders, R. (1987). The Pareto principle: Its use and abuse. Journal of Services Marketing, 1(2), 37-40.
https://doi.org/10.1108/eb024706
Sawyer, K. R. (2012). Explaining creativity: The science of human innovation (2nd ed.). New York, NY: Oxford
University Press Inc.
Team-Based Learning Collaborative. (2017). Team-Based Learning Collaborative website. Retrieved from
http://www.teambasedlearning.org/
Thomas, C., & Brown, B. (2017). Strategies for successful group work. In A. P. Preciado Babb, L. Yeworiew, & S.
Sabbaghan (Eds.), Selected proceedings of the IDEAS 2017 Conference: Leading educational change (pp.
37-46). Calgary, Canada: Werklund School of Education, University of Calgary. Retrieved from
http://hdl.handle.net/1880/52096
Thomas, D., & Brown, J. S. (2011). A new culture of learning: Cultivating the imagination for a world of constant change.
Tuckman, B. (1965). Developmental sequence in small groups. Psychological Bulletin, 63(6), 384-399.
University of California. (2017). Calibrated peer review tool. Retrieved from http://cpr.molsci.ucla.edu/Home.aspx
Vaughan, N. (2014). Student engagement and blended learning: Making the assessment connection. Education
Sciences, 4(4), 247-264. Retrieved from: http://www.mdpi.com/2227-7102/4/4/247
Wiliam, D., & Leahy, S. (2015). Embedding formative assessment: Practical techniques for K-12 classrooms. West Palm
Beach, FL: Learning Sciences International.
APPENDIX A: SAMPLE APPLICATIONS AND LINKS SUPPORTING GROUP WORK IN BLENDED
LEARNING ENVIRONMENTS.
Five core principles of the Teaching Effectiveness Framework, with sample applications and links for each:
1. Teachers are Designers of Learning
• Design is focused on building understanding
• Design is informed by disciplinary knowledge
Sample applications and links: Forming, Storming, Norming, and Performing: Guide to Developing an Effective
Team; Team-Based Learning Collaborative; IDEO Design Thinking for Educators; Dotstorming; VoiceThread
2. Teachers Design Worthwhile Work
• Work is authentic
• Work fosters deep understanding
Sample applications and links: Galileo Educational Network - Dimensions of Inquiry; Focus on Inquiry eBook;
ADDIE Model
3. Teachers Design Assessment to Improve Student Learning and Guide Teaching
• Assessment is comprehensive
• Clear criteria are established
• Students are self-directed
Sample applications and links: self-assessment tools, including blogging applications such as WordPress and
Blogger; peer feedback applications, such as the Calibrated Peer Review Tool from UCLA; teacher assessment
tools, such as professional learning plans (ePortfolios) built with Google Sites, Wix, and Weebly
4. Teachers Foster a Variety of Interdependent Relationships
• Students’ relationship to the work
• Teachers’ relationship to the work
• Students’ relationship with each other
• Students’ relationship to their local communities
Sample applications and links: tools for community expert feedback, such as YouTube (videos and online
discussions) and TodaysMeet (student-expert conversations); asynchronous collaborative technologies, such as
VoiceThread, Instagram, Twitter, Snapchat, Facebook, Pinterest, Remind, and the Brightspace Learning
Environment; synchronous collaborative technologies, such as Skype, FaceTime, Facebook Messenger, Zoom,
Remind, and Adobe Connect
5. Teachers Improve their Practice in the Company of their Peers
• Teaching is a scholarship
Sample applications and links: Classroom Assessment Techniques (CATs) designed for technology, including
Google Forms, the Free Assessment Summary Tool (FAST), and Mentimeter
CHAPTER
9
FINDING THE BALANCE:
POTENTIAL AND LIMITS OF THE
HYBRID-BLENDED FORMAT
DARRELL S. RUDMANN
SHAWNEE STATE UNIVERSITY
INTRODUCTION
In this chapter, I summarize a representative sample of research on the strengths and limitations of the hybrid (or
“blended”) course format. I focus on what is known about the effectiveness of this format for student learning and
faculty and student perceptions of the format. In my conclusion, I provide what I hope are useful takeaways for those
considering adopting this format.
While there is a body of research literature on the hybrid course format, its quality varies significantly. For example,
Zhao and Breslow (2013) found that only about 25 studies, dating from the late 1990s onward, met a rigorous research
standard. Bernard, Borokhovski, Schmid, Tamim, and Abrami (2014) proposed a set of steps researchers can take to
perform better meta-analyses of the literature in hopes of improving the state of understanding about the
effectiveness of hybrid courses. The situation may in part be due to the novelty of the format; Drysdale, Graham,
Spring, and Halverson (2013) noted that it is an active area of research in recent education-focused dissertations.
What is meant by a “hybrid” or “blended” course? Attempts to define the term make up a sub-genre in education
research (e.g., Zhao & Breslow, 2013; Vignare et al., 2005; Doorn & Doorn, 2014). Common definitions include
viewing hybrid courses as the inclusion of online or web-based technology, a mix of pedagogical styles, a reduction
of face-to-face instruction (usually lectures), or a mix of instruction with real-world job tasks (Bernard et al., 2014).
Margulieux, Bujak, McCracken, and Majerich (2014) developed a complex taxonomy along two dimensions to
address the differences between types of hybrid courses. Complicating matters, the term hybrid can be used to
describe specific activities, courses, entire programs, or institutions (Graham, 2006). For this chapter, the term hybrid
or blended will always indicate instruction at the level of a course.
Typically, hybrid courses seem to involve mixed modes of instructional delivery, with some face-to-face or traditional
time that is enhanced with nontraditional activities to meet the learning objectives of the course. Graham (2006)
stated that the three major forms of hybrid formats are combinations of modes of instruction, combinations of
instructional methods, or a combination of online and face-to-face instruction. These nontraditional activities are
typically technology-based (online, via the Web) but can involve a variety of activities or assignments in different
modes (McGee & Reis, 2012). The nontraditional portion of the course can engage other learning processes, such as
developing a product or proposal or working on a project. These activities can include learning diaries that students
keep of their learning (Nückles, Schwonke, Berthold, & Renkl, 2004), group work, videos, or a variety of alternatives
to in-class time. The ratio of online to face-to-face time varies from one instance to another, and the optimal ratio
is not clear. Courses deemed “hybrid” could run from 87% face-to-face
down to 25% (Ranganathan, Negash, & Wilcox, 2007). Most authors tend to recommend 50% in-class and 50%
online, but that appears to be more of a heuristic for scheduling courses on a block system than an empirically
established optimal ratio.
Despite some confusion over precisely what hybrid courses involve, the approach has reached the milestone of
having a well-known handbook dedicated to it (Bonk & Graham, 2012). Many authors seem to prefer the term
“blended” (as in, blended learning) over “hybrid” as a way to make clear the goal is to create one larger learning
experience that comprises disparate activities and instructional methods, not just two mini-courses run in parallel
under one title. The aim with “blended” courses is for something more symbiotic between the in-class
and out-of-class activities to reach the learning objectives. Whether a course is called hybrid or blended, what are
the perceived advantages of the format? According to Graham (2006), instructors pick the hybrid format to (1) alter
their pedagogical approach, (2) provide increased flexibility to students, and (3) increase cost effectiveness by
distributing some of the learning activities outside of the classroom, reducing the reliance on existing facilities.
Instructors may believe that moving away from direct instruction via lectures may improve their pedagogical
approach. For example, Kenney and Newcombe (2011) attempted a hybrid course in the hope of creating a more
active learning experience for a large lecture course. Others hope to produce higher grades and improve learning
(Babb, Stewart, & Johnson, 2010; McKenzie et al., 2013). A greater sense of community can be fostered, at a level
that students report as higher than in fully online courses (Dziuban, Moskal, & Hartman, 2005).
At the same time, more time flexibility is provided to the students. Increased flexibility for students is particularly
important if an instructor is hoping to provide increased access to nontraditional students who may already be
working or have other obligations (Graham, 2006). Some authors see the combination of face-to-face instruction
with online activities as providing the flexibility afforded by an online format with the personal connections from a
traditional setting (Lamport & Hill, 2012), and giving students more chances to participate (Rovai & Jordan, 2004).
Some authors believe that a hybrid format can ameliorate the known dropout problem of fully online courses
(Brunner, 2006; Doorn & Doorn, 2014). It is possible that hybrid courses facilitate student performance, acting
as a bridge to the more autonomous online format.
In sum, hybrid/blended courses have provided interesting opportunities for faculty and students because of their
ability to provide new instructional methods to meet existing learning objectives, to provide students with more
flexibility, and to make offering courses more affordable. In the next section, I review what has been established
about the effectiveness of hybrid courses on learning.
HYBRID/BLENDED COURSES & STUDENT LEARNING
Some recent studies have found empirical evidence for improved student performance in hybrid/blended courses
over face-to-face formats. McFarlin (2008) found an improvement of 9.9%, one letter grade, for students in a hybrid
format of a class over the traditional format. McFarlin believed this was the result of the students’ ability to access
course content through a learning management system (WebCT, in this instance). McFarlin’s study employed test
bank items that were the same in both conditions, but the testing environment itself is not detailed. What is not
always made clear in hybrid-course research reports is whether the assessments being used to compare the
conditions are kept standardized across formats. In itself, this does not indicate whether the hybrid benefit is an
illusion or not: Online tests can have additional challenges beyond pencil-and-paper tests, including random selection and
an inability to return to prior items. Without a standardized format of assessment, parceling out the causal factors
of a performance benefit with hybrid-course learning is not possible. Another study found that students in a hybrid
course were more likely to use online resources (Blackboard in this instance) and performed better (DeNeui & Dodge,
2006), implying that better use of the online resources encouraged better learning. In a small study, Akyol and
Garrison (2011) found evidence for deep conceptual learning by students in a hybrid class using a mix of qualitative
and quantitative measures.
Some studies found that the move to a hybrid format from face-to-face format provides similar learning outcomes
with reduced face-to-face contact time with the instructor (Baepler, Walker, & Driessen, 2014; Ashby, Sadera, &
McNary, 2011; Bowen, Chingos, Lack, & Nygren, 2014). Baepler et al. (2014) found similar results for hybrid courses
with a reduction of two-thirds face time with an instructor; because many elements changed between conditions,
precisely what caused the change in performance is unclear. Ashby et al. (2011) described a standardized assessment
comparison of traditional, hybrid, and online formats and found better retention for hybrid and online formats;
students in the hybrid option performed worse than the other two.
Not every comparison finds hybrid formats improve or maintain student learning. Olitsky and Cosgrove (2014) found
the differences in student performance in either an online or hybrid economics course disappeared once the
students’ academic background and ability was taken into account. In one study, the traditional face-to-face class
did better on a final exam by 5%. As reported, the format for the exam may have varied by condition; the face-to-face, hybrid, and online students were given different directions about the exam, with hybrid students specifically
admonished not to work with others. In a comparison of introductory teacher education courses, final course grades
were higher for an entirely online section than either hybrid or traditional formats (Reasons, Valadares, & Slavkin,
2005). As noted earlier, much of the course content was made available online in one or more sections, diluting the
ability to make causal statements about the relative effectiveness of the formats. The exams involved the same
items, but the conditions of testing are not reported clearly. That online, un-proctored testing could produce scores
that are higher than tests provided in-class could be due to reasons other than the instructional format.
To recap, not everyone has been able to demonstrate that hybrid/blended formats improve student learning, and
empirical comparisons between hybrid/blended learning and online learning are mixed. One early literature review
that compared online with hybrid courses found the majority of the selected studies did not find a difference in
student learning between the two formats, and the remainder provided mixed results (Means, Toyama, Murphy,
Bakia, & Jones, 2009). A major challenge for comparative research in this area is the variety of conditions and
features that are present in different formats. Many features, such as online videos or discussion boards, are
provided in both hybrid and online courses within studies, making a comparison of benefits difficult.
Given the mixed results of individual studies, meta-analyses are helpful. Lamport and Hill (2012) noted that studies
vary tremendously in terms of the time spent online, content area, and student population, so generalizing from
these results could be difficult. In a meta-analysis of 45 studies, Means, Toyama, Murphy, and Baki (2013) reported
that online learning (whether fully or partially online) produced modestly better student performance than
traditional instruction, with a mean effect size of +0.20. Comparing formats, blended learning provided a mean
student performance effect of +0.35 opposed to an online mean improvement effect of +0.05. According to Lipsey
and Wilson (1993), most effect sizes in the social sciences are between -.08 and 1.08, so this small effect size is
meaningful (an effect size of +0.20 may be enough to be worth considering in educational policy research). This
mean effect size implies that 64% of the students in a hybrid course performed above the average they would have
achieved in a traditional course, an improvement of 14% above chance. Of course, the usual caveats about meta-analyses
apply: The research published about interventions tends to be research that shows positive support for the
intervention, although Means et al. (2013) tried to address the potential for a publishing bias statistically.
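The 64% figure is Cohen's U3 statistic: assuming normally distributed scores with equal variances across formats, the proportion of hybrid-course students expected to score above the traditional-course mean is the standard normal cumulative distribution function evaluated at the effect size. A minimal sketch of that arithmetic (the function name is illustrative, not from the source):

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Proportion of hybrid-course students expected to score above the mean of the
# traditional-course distribution, given a standardized mean difference d = 0.35
# (Cohen's U3, assuming normal, equal-variance score distributions).
d = 0.35
proportion_above = normal_cdf(d)
improvement_over_chance = proportion_above - 0.5

print(round(proportion_above, 3))         # 0.637, i.e., about 64%
print(round(improvement_over_chance, 3))  # 0.137, the "14% above chance"
```

The same arithmetic applied to the +0.20 overall effect gives roughly 58%, which illustrates why even "modest" effect sizes can matter at the scale of a whole course.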
Given the evidence that hybrid/blended formats can modestly improve student learning, what are the causal factors
for that improvement? Several possibilities have been proposed (Graham, 2013). First, students in hybrid/blended
courses may accept the online homework more readily than students in traditional courses accept theirs. Evidence
for this possibility is that completion rates for online work among students in a hybrid format exceed lecture
attendance rates among students in a traditional course (Riffell & Sibley, 2004). Access, use, and re-use of online formative assessments have
been found to improve scores on summative assessments (McKenzie et al., 2013). Second, collaborative learning
activities may be more effective for online work than independent online activities. From the Means et al. (2013)
meta-analysis, collaborative instruction produced a mean effect size of +0.25, compared with independent online activities
(+0.05) on student performance. Expository instruction produced a mean effect size of +0.39.
Third, many authors believe that the “blended” part of hybrid course formats may be the important distinction in
determining when hybrid courses will perform well (McGee & Reis, 2012). The instructional goals and the alignment
of the in-class and online activities may matter more than the format itself. Notably, the comparison between
hybrid course performance and traditional course performance is nearly always based on exam scores or final grades
in the literature, not how well students meet a set of predefined learning objectives.
Finally, the differences between the formats may not be entirely the result of the format itself. A particular challenge
with comparing formats that are this diverse (across institutions, content areas, program and course objectives,
student populations, and instructor) is that a combination of features may be what works in one situation or another
(Means et al., 2013). When courses use blended learning, they tend to encourage more study time, provide more
resources, and explicitly involve activities that require interaction between students.
So how does one empirically answer the question, “are hybrid courses better than online or traditional formats for
student learning?” Perhaps there is a paradox within the question: To the extent that studies deploy different activities
across different formats, results of differences by format can be attributed simply to the differences in activities. If
different formats employ similar activities (e.g., online video clips, handouts available in each format), then the
benefits of one format over another will become muted. Not all benefits of hybrid courses can be said to be solely a
benefit of the hybrid approach (Brunner, 2006). It is worth noting that the typical assessment of student learning in
comparison studies is multiple-choice exams, not a comprehensive assessment of how well students are meeting
pre-defined learning objectives. Reliance on a recognition-based memory test for validation may not be a sufficient
measure for what hybrid/blended learning proponents claim are the strengths of the hybrid approach. Additionally,
student characteristics may play a role too: The needs of students may vary depending on factors such as rank and
employment. Students might benefit from starting in traditional face-to-face courses in college before attempting
hybrid or online classes (Doorn & Doorn, 2014), particularly since there appears to be a greater need for self-regulation in those formats (Lynch & Dembo, 2004).
In sum, individual studies find that hybrid/blended learning may or may not provide a benefit for student learning.
Meta-analyses indicate that there is at least some benefit, but it is not clear why. In the next section, I examine
student perceptions of hybrid courses.
STUDENT PERCEPTIONS OF HYBRID/BLENDED LEARNING
Hybrid/blended learning may improve student learning in some situations, and does not appear to weaken it. What
is the level of “learner satisfaction” (Graham, 2013) with hybrid courses? Fortunately, many more studies report on
the results of surveys of students in hybrid courses than conduct comparisons of effectiveness. Overall, research on
student attitudes toward hybrid/blended learning is positive (Buzzetto-More & Sweat-Guy, 2006; Marques, Woodbury,
Hsu, & Charitos, 1998; Yudko, Hirokawa, & Chi, 2008; Salamonson & Lantz, 2005), although Millennials reported being
less excited than other age groups on one survey (Dziuban, Moskal, & Hartman, 2005). Using qualitative and
quantitative measures, Leh (2002) found students in hybrid courses to be generally supportive of them. Having said
that, some have found that hybrid courses can suffer from the same problems as online courses and may exhibit as
many newly created problems as those they are trying to resolve by moving away from traditional formats (Jackson
& Helms, 2008), such as issues of communication and a greater reliance on technology. Students who report being
the most computer literate have been found to prefer hybrid formats the most (Yudko et al., 2008).
Students in hybrid courses are concerned about the quality of the communication between students and faculty
(Babb et al., 2010), but most studies concluded that students liked the level of communication they experience in a
hybrid format (Mansour & Mupinga, 2007; Riffell & Sibley, 2005), including video feedback (Borup, West, Thomas,
& Graham, 2014). Students also reported talking to other students more frequently in hybrid courses (43%) than
traditional courses (29%), assuming a particular student communicated with another student at all (Riffell & Sibley,
2005).
Hybrid courses require students to rely more on the instructional resources provided to them. Eighty percent of
students in a hybrid course reported using their textbooks once or twice a week, which was twice as often as students
in a traditional course (Riffell & Sibley, 2005). This may or may not be what students want: In one study, higher
performing students expressed a preference for a traditional format (Salamonson & Lantz, 2005). Hybrid courses do
not appear to naturally improve student interest in the subject matter (Bowen et al., 2014).
Of course, perceptions about hybrid/blended courses may vary greatly by the nature of the students and their
perceived needs. Hybrid courses should provide more flexibility than traditional courses so that they can meet
a greater number of student needs (Poon, 2013), especially for non-traditional college students (Lamport & Hill, 2012)
and precollege students (Means et al., 2013). In fact, most students who use a learning management system (e.g.
Blackboard, WebCT) come to like it over time (Pan, Gunter, Sivo, & Cornell, 2005), and students appreciate the
flexibility (Vaughan, 2007). However, students can report struggling with time management and understanding
technology at the beginning of the course (Vaughan, 2007). While students report being familiar with the Internet
in general, they are not necessarily well-versed in specific technologies, such as RSS feeds, online library databases,
and file transfer technologies such as FTP and Gopher (Marques, Woodbury, Hsu, & Charitos, 1998).
Studies have found that students who are conscientious and usually get high grades are more likely to do well in a
hybrid course (Nakayama, Yamamoto, & Santiago, 2007). Another study found upperclassmen performed better
than freshmen (Riffell & Sibley, 2005), which could be a problem for underprepared students or those who are new
to college. However, a meta-analysis of hybrid and online courses did not find differences in student populations to
be a significant moderator of learning effectiveness (Means et al., 2013). This finding might oversimplify many
factors, including which instructors chose to create a hybrid course, the lack of random selection of students who
elect to take a hybrid course, or the lack of random assignment of which courses are modified for hybrid formats.
In sum, there is at least some concern that the needs of the student population have to be taken into account with
hybrid courses. Logically this should not be as important an issue as with fully online courses, but
some attention needs to be paid to the readiness of students to handle more of their own learning than in a
traditional format. What other aspects of the hybrid format should be considered when evaluating the format?
OTHER ASPECTS OF HYBRID/BLENDED LEARNING
According to Graham (2013), what makes hybrid courses unique beyond student learning and satisfaction are faculty
satisfaction, access and flexibility, and cost effectiveness. First, faculty find themselves satisfied with hybrid/blended
courses (Dziuban, Moskal, & Hartman, 2005; Lindsay, 2004; Marques, Woodbury, Hsu, & Charitos, 1998). Most
reported that student learning is improved in hybrid formats, or that there is little difference compared to face-to-face formats (Dziuban, Moskal, & Hartman, 2005). Some authors warn that the instructor needs to have a heightened
focus on open communication due to the mixed nature of the course (Heinze & Procter, 2004; McGee & Reis, 2012),
and that instructors may need to be open to continuous interaction. Miscommunication in hybrid courses can
encourage lower engagement and completion, and lower course evaluations (McGee & Reis, 2012).
Second, compared to a face-to-face format, hybrid courses certainly appear to offer more access and flexibility,
leading some to find that they can reduce seat time in lectures by two-thirds and encourage more active learning
(Baepler et al., 2014). Of course, this may hide a contradiction: If a hybrid course requires more interactivity and
group activities, is it really more flexible and adaptive (McGee & Reis, 2012)? It is possible that the requirement to
physically meet at all will still not provide the amount of flexibility that nontraditional students require, and
departments could offer courses in both online and hybrid formats, undermining the hybrid format (Mandernach,
Mason, Forrest, & Hackathorn, 2012).
Finally, the cost effectiveness of hybrid and online formats have been widely touted in the media (Lewin, 2013).
However, it is not entirely clear that a cost-benefit analysis would show this to be the case. Online and hybrid formats
require some form of learning management system (most of which are not free), Internet access on campus for
faculty and off campus for students, productivity software, supplemental online resource access via library
subscriptions or video editing facilities and support, and the cost of computer equipment on and off campus. The
institution may cut costs by moving online, but some of the cost is likely to be borne by students.
CONCLUSIONS
Taken together, the hybrid/blended format appears to be a promising option for faculty who are looking to mix up
their teaching or incorporate some new pedagogical options into their traditional face-to-face teaching. The current
state of research indicates that hybrid courses can improve student learning, may address retention better than fully
online courses, and are received positively by students and faculty alike, with some exceptions. Below, I offer some
literature that offers guidelines for creating a hybrid course, and end with some conclusions of my own based on
what I observed in my own courses and how hybrid courses may connect to modern theories of learning and
memory.
RESOURCES FOR INSTRUCTORS
Hensley (2005) presented a case study of the steps taken to create a hybrid course; Porter, Graham, Spring, and Welch
(2014) provided an example of institutional adoption of the hybrid course format. Babb et al. (2010) provided some
thoughtful insights into how to handle the distance component of a hybrid course. Most of these involve providing
clear communication about assignments, designing activities that require participation, creating and holding firm to
deadlines, and providing feedback on assignments. Of course, the recent handbook of blended learning will also be
useful (Bonk & Graham, 2012; Graham, 2006; Graham, 2013).
Among the challenges facing instructors choosing to experiment with a hybrid format is that much of what is in the
literature is anecdotal experience or normative practice, rather than based on experimental results (Means et al.,
2013). In many cases, best practices may be in conflict with the pedagogical beliefs of the instructor, which may be
the reason hybrid course design can vary so much (McGee & Reis, 2012). Instructors need to try to avoid the “course
and a half syndrome” (creating too much work for one class) and to make sure the connection between the traditional
and non-traditional portions of the course is apparent (Brunner, 2006).
CONCLUDING REMARKS
It is not clear in much of the published research on hybrid/blended course effectiveness that the dependent variables
in testing and assessment are held constant across conditions. The information is usually underspecified, and not
explicitly tied to the learning objectives of the course. When some researchers compare traditional, hybrid, and online courses, they intentionally include every format element in each condition, so the testing environment is varied by design. In some cases, the assessments used appear to be similar, but the details are very vague
(whether students across sections were allowed to choose the time and place of exams, whether notes and books were allowed, or whether the exams were timed and proctored is typically unstated). It is not necessarily true that open-book tests are easier than closed-book exams; however, I wonder about the nature of the contextual cues that are
learned when students are required to study from their resources in hand at home at their convenience and are
then assessed in the same fashion, compared to students in a traditional format who have been encouraged to listen
and record from oral presentations, read later, and then take a test in a less comfortable environment than their
own home. Encoding specificity theory (Tulving & Thomson, 1973) would predict a difference in the retrieval cues created at learning between those conditions; students taking the test at home after studying at home should enjoy a natural retrieval-cue advantage. Finally, was collusion with other students possible in some
formats and not others? One article acknowledges the issue by reporting on the use of an honor code to reduce self-reported cheating in an online section (LoSchiavo & Shatz, 2011).
Ultimately, the issue may be unresolvable. The confounding variables are impossible to avoid when comparing
online, hybrid, and traditional course formats. If the content is provided equally across sections (such as interactive
web activities, online videos), then the benefits from each format may be largely neutralized. If different content or
activities are provided, then differences in performance can be attributed to those activities rather than the format.
The success of a particular pedagogical approach may rely on the interaction of a particular instructor with a
particular student population and a specific set of learning objectives. This is a common obstacle in educational
research: Teaching is as much art as science, and trying to divine the precise causal role of any one element is
extremely difficult in the mix of interacting variables. I experience this when I incorporate a new activity in my
courses: Initially, my enthusiasm and the newness of the activity seem to drive student interest and thus
performance; after several semesters, my enthusiasm wanes, as does student interest and performance. Selecting
an activity that I am excited to share with students may be as important as the performance benefit from the activity.
Additionally, the context may be important in a postsecondary environment; it seems bizarre to suggest that there
is one pedagogical format that is so content-neutral that it will always perform better regardless of content or
learning objectives. The interaction among many factors is probably what matters.
An educator wanting to try a hybrid format can move ahead without fear. There is no persuasive evidence at this
time that hybrid/blended learning is harmful to student learning, so an instructor can cite existing research when justifying the attempt to administration. It may not be all that different from what some faculty already do. In higher
education, the level of instructor autonomy leaves a lot of room for different approaches and styles even with
predefined learning outcomes. What some instructors do in their traditional classrooms already looks a lot like a hybrid course: some faculty hold part of their courses in a computer lab once a week to accomplish online tasks, or hold occasional off-campus activities, yet the courses are scheduled and billed as traditional courses. Active learning
activities can be included in many face-to-face courses.
Ultimately, the learning that occurs from a course needs to meet at least two broad objectives: The development of
a meaningful understanding of the fundamentals of the material, and an ability to transfer that knowledge to
appropriate, novel situations for problem-solving. Long-term memory for knowledge appears to be meaning-based
(Craik & Lockhart, 1972; Rudmann, 2017), so a pressing question for a hybrid course instructor is: how well does the time spent in class and outside of class develop meaningful comprehension and retention of the material?
Additionally, the contextual cues we develop at the point of learning are important for retrieval. In what situations will students need to retrieve the pertinent information: later in the same course, in other courses, or in real-world settings? Will they need to use the information to identify critical concepts in
research articles, or to apply the knowledge in team-based project situations? Awareness of how the students will
be expected to transfer the knowledge one day can guide the instructor in deciding which forms of study and
retention can be appropriately handled in the class, and which should be migrated to an authentic, real-world activity
that may or may not be developed during formal in-class time.
REFERENCES
Akyol, Z., & Garrison, D. R. (2011). Understanding cognitive presence in an online and blended community of inquiry:
Assessing outcomes and processes for deep approaches to learning: Cognitive presence in an online and
blended community of inquiry. British Journal of Educational Technology, 42(2), 233-250.
doi:10.1111/j.1467-8535.2009.01029.x
Ashby, J., Sadera, W. A., & McNary, S. W. (2011). Comparing student success between developmental math courses
offered online, blended, and face-to-face. Journal of Interactive Online Learning. Retrieved from
http://www.ncolr.org/jiol/issues/pdf/10.3.2.pdf
Babb, S., Stewart, C., & Johnson, R. (2010). Constructing communication in blended learning environments: Students’
perceptions of good practice in hybrid courses. Journal of Online Learning and Teaching, 6(4), 735.
Baepler, P., Walker, J. D., & Driessen, M. (2014). It’s not about seat time: Blending, flipping, and efficiency in active
learning classrooms. Computers & Education, 78, 227-236. doi:10.1016/j.compedu.2014.06.006
Bernard, R. M., Borokhovski, E., Schmid, R. F., Tamim, R. M., & Abrami, P. C. (2014). A meta-analysis of blended
learning and technology use in higher education: From the general to the applied. Journal of Computing in
Higher Education, 26(1), 87-122. doi:10.1007/s12528-013-9077-3
Bonk, C. J., & Graham, C. R. (2012). The handbook of blended learning: Global perspectives, local designs. Retrieved
from https://tojde.anadolu.edu.tr/yonetim/icerik/makaleler/542-published.pdf
Borup, J., West, R. E., Thomas, R., & Graham, C. R. (2014). Examining the impact of video feedback on instructor
social presence in blended courses. The International Review of Research in Open and Distributed Learning,
15(3). doi:10.19173/irrodl.v15i3.1821
Bowen, W. G., Chingos, M. M., Lack, K. A., & Nygren, T. I. (2014). Interactive learning online at public universities:
Evidence from a six campus randomized trial. Journal of Policy Analysis and Management, 33(1), 94-111.
Brunner, D. L. (2006). The potential of the hybrid course vis-à-vis online and traditional courses. Teaching Theology
& Religion, 9(4), 229-235.
Buzzetto-More, N. A., & Sweat-Guy, R. (2006). Incorporating the hybrid learning model into minority education at a
historically black university. Journal of Information Technology Education, 5(1), 153-164.
Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal
Learning and Verbal Behavior, 11(6), 671-684. doi:10.1016/S0022-5371(72)80001-X
DeNeui, D. L., & Dodge, T. L. (2006). Asynchronous learning networks and student outcomes: The utility of online
learning components in hybrid courses. Journal of Instructional Psychology, 33(4), 256.
Doorn, J. R. V., & Doorn, J. D. V. (2014). The quest for knowledge transfer efficacy: Blended teaching, online and in-class, with consideration of learning typologies for non-traditional and traditional students. Frontiers in
Psychology, 5, 1-14.
Drysdale, J. S., Graham, C. R., Spring, K. J., & Halverson, L. R. (2013). An analysis of research trends in dissertations
and theses studying blended learning. The Internet and Higher Education, 17, 90-100.
doi:10.1016/j.iheduc.2012.11.003
Dziuban, C., Moskal, P., & Hartman, J. (2005). Higher education, blended learning, and the generations: Knowledge
is power-no more. Needham, MA: Sloan Center for Online Education. Retrieved from
http://desarrollodocente.uc.cl/images/Innovación/Flipped/Knowledge_is_power_no_more.pdf
Graham, C. R. (2006). Blended learning systems. Retrieved from
https://books.google.com/books?hl=en&lr=&id=tKdyCwAAQBAJ&oi=fnd&pg=RA1PA3&dq=psychology+hybrid+courses&ots=BgpGNzDzep&sig=lvl7cAhkQDHO9K1ev0zN90PKHy4
Graham, C. R. (2013). Emerging practice and research in blended learning. In Handbook of Distance Education.
Routledge. doi:10.4324/9780203803738.ch21
Heinze, A., & Procter, C. T. (2004). Reflections on the use of blended learning. Manchester, UK.
Hensley, G. (2005). Creating a hybrid college course: Instructional design notes and recommendations for beginners.
Journal of Online Learning and Teaching, 1(2), 66-78.
Jackson, M. J., & Helms, M. M. (2008). Student perceptions of hybrid courses: Measuring and interpreting quality.
Journal of Education for Business, 84(1), 7-12. doi:10.3200/JOEB.84.1.7-12
Kenney, J., & Newcombe, E. (2011). Adopting a blended learning approach: Challenges encountered and lessons
learned in an action research study. Journal of Asynchronous Learning Networks, 15(1), 45-57.
Lamport, M., & Hill, R. (2012). Impact of hybrid instruction on student achievement in post-secondary institutions:
A synthetic review of the literature. Journal of Instructional Research, 1, 49-58. doi:10.9743/JIR.2013.7
Leh, A. S. C. (2002). Action research on hybrid courses and their online communities. Educational Media
International, 39(1), 31-38. doi:10.1080/09523980210131204
Lewin, T. (2013). Colleges adapt online courses to ease burden. The New York Times. Retrieved from
http://www.ece.neu.edu/edsnu/mcgruer/USC/NYTCollegesAdaptOnlineCourses1304.docx
Lindsay, E. B. (2004). The best of both worlds: Teaching a hybrid course. Academic Exchange, Winter. Retrieved from
https://research.libraries.wsu.edu/xmlui/handle/2376/746
LoSchiavo, F. M., & Shatz, M. A. (2011). The impact of an honor code on cheating in online courses. Journal of Online
Learning and Teaching, 7(2).
Lynch, R., & Dembo, M. (2004). The relationship between self-regulation and online learning in a blended learning
context. The International Review of Research in Open and Distributed Learning, 5(2).
doi:10.19173/irrodl.v5i2.189
Mandernach, B. J., Mason, T., Forrest, K. D., & Hackathorn, J. (2012). Faculty views on the appropriateness of
teaching undergraduate psychology courses online. Teaching of Psychology, 39(3), 203-208.
doi:10.1177/0098628312450437
Mansour, B. E., & Mupinga, D. M. (2007). Students’ positive and negative experiences in hybrid and online classes.
College Student Journal, 41(1), 242.
Margulieux, L. E., Bujak, K. R., McCracken, W. M., & Majerich, D. M. (2014, January). Hybrid, blended, flipped, and
inverted: Defining terms in a two dimensional taxonomy. Proceedings from 12th Annual Hawaii
International Conference on Education, Honolulu, HI.
Marques, O., Woodbury, J., Hsu, S., & Charitos, S. (1998, November). Design and development of a hybrid instruction
model for a new teaching paradigm. Proceedings from Frontiers in Education Conference, Chicago.
McFarlin, B. K. (2008). Hybrid lecture-online format increases student grades in an undergraduate exercise
physiology course at a large urban university. Advances in Physiology Education, 32(1), 86-91.
doi:10.1152/advan.00066.2007
McGee, P., & Reis, A. (2012). Blended course design: A synthesis of best practices. Journal of Asynchronous Learning
Networks, 16(4), 7-22.
McKenzie, W. A., Perini, E., Rohlf, V., Toukhsati, S., Conduit, R., & Sanson, G. (2013). A blended learning lecture
delivery model for large and diverse undergraduate cohorts. Computers & Education.
Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online
learning: A meta-analysis and review of online learning studies. Washington, D.C.: U.S. Department of
Education, Office of Planning, Evaluation, and Policy Development. Retrieved from
http://www.ed.gov/about/offices/list/opepd/ppss/reports.html
Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1-47.
Nakayama, M., Yamamoto, H., & Santiago, R. (2007). The impact of learner characteristics on learning performance
in hybrid courses among Japanese students. Electronic Journal of E-Learning, 5(3), 195-206.
Nückles, M., Schwonke, R., Berthold, K., & Renkl, A. (2004). The use of public learning diaries in blended learning.
Journal of Educational Media, 29(1), 49-66.
Olitsky, N. H., & Cosgrove, S. B. (2014). The effect of blended courses on student learning: Evidence from introductory economics courses. International Review of Economics Education, 15, 17-31. doi:10.1016/j.iree.2013.10.009
Pan, C.-C., Gunter, G., Sivo, S., & Cornell, R. (2005). End-user acceptance of a learning management system in two
hybrid large-sized introductory undergraduate courses: A case study. Journal of Educational Technology
Systems, 33(4), 355-365.
Poon, J. (2013). Blended learning: An institutional approach for enhancing students’ learning experiences. Journal of
Online Learning and Teaching, 271-289.
Porter, W. W., Graham, C. R., Spring, K. A., & Welch, K. R. (2014). Blended learning in higher education: Institutional
adoption and implementation. Computers & Education, 75, 185-195.
Ranganathan, S., Negash, S., & Wilcox, M. V. (2007, March). Hybrid learning: Balancing face-to-face and online class
sessions. Proceedings from Tenth Annual Southern Association for Information Systems (SAIS) Conference,
Chicago.
Reasons, S. G., Valadares, K., & Slavkin, M. (2005). Questioning the hybrid model: Student outcomes in different
course formats. Journal of Asynchronous Learning Networks, 9(1), 83-94.
Riffell, S., & Sibley, D. (2005). Using web-based instruction to improve large undergraduate biology courses: An
evaluation of a hybrid course format. Computers & Education, 44(3), 217-235.
Riffell, S. K., & Sibley, D. F. (2004). Can hybrid course formats increase attendance in undergraduate environmental
science courses? Journal of Natural Resources and Life Sciences Education, 33, 1-5.
Rovai, A. P., & Jordan, H. (2004). Blended learning and sense of community: A comparative analysis with traditional
and fully online graduate courses. The International Review of Research in Open and Distributed Learning,
5(2). doi:10.19173/irrodl.v5i2.192
Rudmann, D. S. (2017). Learning and Memory. Thousand Oaks, CA: Sage Publications.
Salamonson, Y., & Lantz, J. (2005). Factors influencing nursing students’ preference for a hybrid format delivery in a
pathophysiology course. Nurse Education Today, 25(1), 9-16. doi:10.1016/j.nedt.2004.09.006
Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological
Review, 80(5), 352-373.
Vaughan, N. (2007). Perspectives on blended learning in higher education. International Journal on E-Learning, 6(1),
81-94.
Vignare, K., Dziuban, C., Moskal, P., Luby, R., Serra-Roldan, R., & Wood, S. (2005). Blended learning review of
research: An annotative bibliography. Proceedings from Educational Media International.
Yudko, E., Hirokawa, R., & Chi, R. (2008). Attitudes, beliefs, and attendance in a hybrid course. Computers &
Education, 50(4), 1217-1227. doi:10.1016/j.compedu.2006.11.005
Zhao, Y., & Breslow, L. (2013). Literature review on hybrid/blended learning. Teaching and Learning Laboratory (TLL),
1-22.
CHAPTER 10
INCORPORATING ASYNCHRONOUS ONLINE DISCUSSIONS IN BLENDED AND ONLINE EDUCATION: TOOLS, BENEFITS, CHALLENGES, AND BEST PRACTICES
NATALIE B. MILMAN
GEORGE WASHINGTON UNIVERSITY
INTRODUCTION
An important instructional strategy employed in most online and blended education courses in institutions of higher
education (IHE) is asynchronous online discussions (AODs), which are sometimes referred to as asynchronous online
conferences, forums, or threaded discussions. Asynchronous online discussions (AOD), a form of computermediated communication (CMC), are a popular and effective strategy for engaging students (Swan, 2002). They
provide students with the opportunity to learn the course content from and with their peers and instructor(s).
Because significant student-student and student-instructor interaction in online courses occurs in AODs, and
certainly also in blended education, it is imperative for educators, students, and other stakeholders involved in online
and blended education to comprehend the implications of their use. It is through AOD that interaction most often
occurs in asynchronous online courses. In blended courses, clearly students and instructors meet part of the time in
person, but they can also use AOD to expand students’ learning beyond their face-to-face (F2F) meetings.
Interaction, “commonly understood as actions among individuals” (Abrami et al., 2011, p. 86), is an important
characteristic of online education because students learn through student-student, student-instructor, and student-content interaction (Abrami et al., 2011; Bernard et al., 2009; Moore, 1989). Bernard et al. (2009) found that
interaction positively affects student learning. However, they cautioned that instructors should not focus on
increasing the quantity of interactions, but rather on increasing the quality of interactions. This is precisely why it is
important to understand how AOD are utilized in online and blended education, as well as the benefits, challenges,
and best practices for their success. This chapter introduces some of the most popular technology tools for hosting
AODs, as well as several benefits, challenges, and best practices for incorporating AOD in IHE.
AOD TECHNOLOGY
Various technology tools can be used for hosting AODs. Some were specifically designed for fostering interaction
and engagement in online and blended education courses, whereas others were designed for other uses, for
instance, messaging between work teams (e.g., Slack: https://slack.com/). Tools that can be used for AOD range from stand-alone technology tools such as Backchannel Chat (http://backchannelchat.com/) or Piazza (https://piazza.com/) to tools integrated within learning management systems (LMS) such as Blackboard, Canvas, Coursera, or Moodle. Most LMS allow for posting of various media formats (e.g., video, audio, photos).
Alternatively, instructors might use tools that take advantage of the multimedia capabilities built into the tool, such as Flipgrid (https://info.flipgrid.com/), Vialogues (https://vialogues.com/), or VoiceThread (https://voicethread.com/products/highered/). Both Flipgrid and Vialogues are stand-alone, video-based discussion
tools. Flipgrid allows users to create video responses of 15 to 90 seconds in the free version and of 15 seconds to 5 minutes in the paid version. Vialogues functions more like a video annotation tool, allowing users to post text-based responses visible alongside the video question or prompt. A useful feature is the ability to timestamp a text-based response so that users can go directly to the point in the video that the response refers to. On
the other hand, VoiceThread provides a forum for users to respond to a variety of media (e.g., a video recording,
graphic, slideshow) by responding via text, telephone, microphone, webcam, or a file upload. Although each tool is
different, each promotes the use of media such as video for responding to or commenting on some type of prompt.
Flipgrid and VoiceThread offer free trial versions, and Vialogues is free to anyone who opens an account.
WHAT ARE THE BENEFITS OF AOD?
There are many benefits to using AOD in online and blended education for both students and instructors. Not only
do they generally foster development of a shared learning community where students and the instructor learn from
and with one another, but they also cultivate the learning of content through reflection, debate, questioning, and
summaries (deNoyelles, Zydney, & Chen, 2014; Kayler & Weller, 2007; Swan, 2002). Additionally, because they are
asynchronous, students and instructors can participate when most convenient for them. As such, students’ access
to learning expands beyond the local community.
STUDENT BENEFITS
AOD provide students the opportunity to post questions for clarification, answer questions that help them solidify
their own learning, as well as foster their peers’ learning. Instead of being put on the spot to answer a question in front of an entire class, students have the time and intellectual space to study the content, reflect on the questions/prompts, and craft a thoughtful response. This can be especially beneficial to students who might need more time, such as shy students, nonnative speakers, and/or those with exceptional learning needs. Such students’ participation levels may even be greater than in their F2F courses because the pressure to answer questions immediately after being asked no longer serves as an impediment to their participation. Other
advantages include learning about others’ perspectives, learning through multimedia, and the flexibility to participate when most convenient without having to commute to an IHE. For students who live in
different time zones and countries, this can expand the accessibility to learning anytime, anywhere! Although not
unique to online and blended learning, this flexibility is an important benefit to most students.
INSTRUCTOR BENEFITS
AOD offer several advantages to instructors who use them. For instance, online and blended education instructors
have the opportunity to interact with their online and blended education students with depth and breadth outside
of brick-and-mortar classrooms. They can read, review, and reflect on their students’ posts, including information
they share about themselves. In a F2F course, for example, the instructor may not have the opportunity to engage
as much with her/his students because of the bounded class meeting time (e.g., a F2F course may only meet once,
twice, or three times a week), number of groups, and/or quantity of students. On the other hand, in an online or
blended education course, instructors have access to all of their students’ postings, in addition to a history of their
postings consisting of their questions, comments, and responses to their peers. This provides a “window” into
students’ thinking, as well as paints a picture of the amount of effort they seem to be dedicating to their studies. It
also offers instructors information about who their students are as learners and individuals. Rather than only having
a few hours to engage with a whole class of students F2F, in online and blended education courses, instructors can
connect with all of their students throughout a week or other set period of time in which the AOD takes place.
Instructors also can correct misconceptions, expand on concepts, ask questions, and/or review and reflect on
students’ learning based on their posts in the AOD. They might also ask students probing questions to help them
delve deeper into the content by providing examples, categorizing information, debating various sides,
substantiating their ideas, creating concept maps, and/or making connections to the real-world or other relevant
resources. All of these strategies can help promote greater learning of content.
WHAT ARE THE CHALLENGES OF AOD?
There are several challenges to incorporating AOD in online and blended instruction. Being aware of the limitations
of AOD can help instructors better mitigate them. Some of the challenges associated with AODs include:

1. AOD ARE ASYNCHRONOUS: Although this is clearly a benefit for many, it can also be a challenge. If students post when convenient for them, discussions may not feel continuous. Some students, for instance, might find that waiting for a thoughtful response to a question or comment takes a while, and they lose motivation or forget the question or comment they were hoping a peer would respond to. Moreover, some students might find they are late to a discussion and have little more to offer because of earlier responses.

2. AOD REQUIRE A LOT OF TIME/READING: AODs can require a lot of time to read, not only because of the quantity of posts, but also because of their length. Some instructors and students might be overwhelmed by having to read many posts, have trouble keeping up, or have difficulty managing their own participation and postings in the AOD. Some might also feel pressure to respond to and/or read every post so they do not miss out on learning the content.

3. AOD QUALITY VARIES: AODs typically are informal in nature; therefore, the quality of students’ posts may not be very good. Posts may be wordy or contain grammatical or spelling errors that impede not only comprehension of the post, but also learning about the topic studied.

4. AOD MIGHT RESULT IN MISUNDERSTANDING: Even though it seems that communicating in AOD might promote understanding, in some cases it can result in misunderstanding because there are no visual cues, verbal intonation, or ways to see how others react to one’s words. A student’s post might be misunderstood, and some students might even be offended when that was not the intention! Using emojis, graphics, and links with definitions, among other strategies, might help convey one’s good intentions.

5. DESIGN OF GOOD AOD QUESTIONS/PROMPTS IS CHALLENGING: It can be very challenging for instructors (or students, if they have this responsibility) to develop questions that will keep students’ interest for the full duration of a discussion, as well as foster student participation. For example, read the following two questions – which might you consider strong and which weak? Stronger questions are open-ended, often have no “right” answer, and are applicable to students’ real lives.
   a. What are the main characteristics of adult learning theory?
   b. Reflect on your own learning experiences in one of your favorite courses and how the instructor incorporated adult learning theory. Be sure to substantiate your ideas and share concrete examples.

6. SOME AOD FALL FLAT: Sometimes the challenge is keeping the discussion going. Some discussions, simply put, fall flat. This might happen because the instructor (or a student) developed a question that was answered early and does not address higher-order levels of thinking. It might also happen if some students answer questions so thoroughly that others feel they have nothing new or interesting to contribute.

7. AOD REQUIRE THE USE OF HARDWARE AND SOFTWARE: Although one might assume students will only enroll in a course if they have the requisite technology, students may not have up-to-date hardware and software, making participation in AOD difficult. Also, some AOD may be difficult or impossible to read on some devices.
WHAT ARE SOME BEST PRACTICES FOR AOD?
Thinking about how best to design, implement, assess, and engage in AOD is important to their successful
implementation in any online or blended learning course. Equally important is communicating expectations for
participation in the AOD. Following are several recommended best practices for implementing AOD in online and
blended education courses.
DESCRIBE PURPOSE
To ensure quality engagement in AOD, it is important for instructors to be clear about the purpose, expectations,
and grading criteria for the AOD. For example, is the purpose of the AOD for students to demonstrate application of
the content being learned, to prepare for a face-to-face class, or to expand their learning of the content studied?
Whatever the purpose of the AOD, instructors should explain it to students. The following is an example of a purpose description that an instructor might include with an AOD assignment:
Purpose: Research shows that students who participate fully in well-designed asynchronous online
discussions (AOD) develop a stronger sense of community among their peers and perform better
academically (Zhou, 2015). Moreover, AOD help students view course content from different perspectives,
foster a shared learning community where students learn from and with one another, and provide a place
where they can share questions, answers, ideas, examples, and resources about the content they are
learning in a focused, meaningful way.
PROVIDE STRUCTURE AND GRADING CRITERIA
Gilbert and Dabbagh (2005) conducted a study examining how the established protocols for participating in and
grading of AODs promoted meaningful learning. They found “three elements of structuring online discussions that
significantly impacted meaningful discourse … [which] were, (a) facilitator guidelines; (b) evaluation rubrics; (c)
posting protocol items” (p. 16). Facilitator guidelines provide direction for the facilitator of the discussion, particularly when the facilitator is a student. Evaluation rubrics (Wegmann & McAuley, 2014) communicate the grading criteria for
participation (see Appendix A for a sample rubric). Finally, the posting protocols refer to not only the quantity,
frequency, and timing of posts, but also appropriate protocols for posting, such as incorporating respectful behavior
(netiquette).
Similarly, Salter and Conneely (2015) discovered that structured forums were more engaging than non-structured
ones in a study of undergraduate students that compared participation in and student perceptions of structured and
unstructured AODs. Therefore, it is important for instructors to be clear about the expectations for students’ and
instructors’ own participation in AODs, as well as to describe how students will be graded. Moreover, instructors
should learn about how to craft good questions (Milman, 2009), in addition to a variety of ways to structure
discussions so they go beyond simple Q & A. For example, instructors can incorporate compare/contrast exercises, debates, cases, role-playing, and problem-based learning, to name just a few strategies.
MODEL EXPECTATIONS
Instructors should engage in the same behaviors and practices they expect from their students. For instance, if
students are expected to post three times a week, every other day, then instructors should model that same expectation by participating in the AOD on the same schedule. Also, if the AOD are offered through a particular technology tool such as an LMS or other CMC technology,
instructors should ensure students know how to make, read, and manage posts in the AOD tool. Instructors can
create a video demonstrating how to use the LMS AOD tool or locate resources that provide students step-by-step instructions. Most LMS provide job aids or videos with information for both instructors and students who wish
to use AOD. For example, the Blackboard LMS has several job aids with videos demonstrating:
1. how instructors can create AOD: https://help.blackboard.com/Learn/Instructor/Interact/Discussions/Create_Discussions
2. how students can post to AOD: https://help.blackboard.com/Learn/Student/Interact/Discussions
Clarke and Bartholomew (2014) conducted a study using an analytical tool to investigate participation in AODs by
instructors. Instructor interactions in the AODs involved postings that were categorized as “cognitive,” “teaching,”
and/or “social” (p. 6). In “cognitive posts,” instructors asked questions that required students to inquire into the
topic at a deeper level. On the other hand, “teaching posts” involved instructor elaboration and clarification about
the course content/topic studied in the AOD. In “social posts,” instructors shared their personal experiences and
provided encouragement. Yet, it is important to emphasize that quality teaching posts were found to be more
important than the quantity of teaching posts. For example, “instructors who posted less frequently but with more
purpose had a higher level of critical thinking in their discussions” (Clarke & Bartholomew, 2014, p. 18). Therefore,
instructors should think carefully and strategically about the quality and type of posts (i.e., cognitive, teaching,
or social) they make to promote AOD interaction.
MONITOR PARTICIPATION
Instructors should ensure that students are participating at the level expected, particularly if points are earned for
participating in the AOD. If students’ participation is lacking and/or if they do not participate/post at all during the
specified time period, then the instructor should contact them to find out whether they need help and whether they
understand the participation expectations and grading criteria (these can be presented as a grading rubric – for an
example, see Appendix A). In some cases, the instructor may find that something grave has happened to a student (e.g., a bad car
accident); in others, it might be the student is feeling overwhelmed or simply very busy. Whatever the reason,
reaching out to students to check on them will help instructors learn what is going on in the student’s life that may
be impeding her/his learning. Moreover, it will help students understand their instructor cares about them and
expects them to participate. However, some students may find themselves too busy to post and willing to forfeit the points.
In such cases, instructors should help students recognize what they have to gain from their participation in AOD.
Instructors should ensure that students remain on topic. It is easy for a discussion question to veer off topic. If it
does, the instructor will need to decide if the post should be deleted or redirected. If deleted, the instructor should
ensure there is a policy in place that has been communicated to students that posts that are off topic or
inappropriate may be deleted. To redirect a post, the instructor might comment on a tangential posting simply by
re-posting about the topic discussed as a response to the thread.
In many courses, sensitive or controversial topics are discussed and/or opposing views may surface. Clearly,
academic freedom, free speech, civility, and respect are important in any educational environment. In these cases,
instructors should ensure that postings follow policies and guidelines within the IHE and course. As with off-topic
posts, instructors should have a policy in place to remove any posts that are inappropriate. Yet, in some cases,
students will not realize that their comments, questions, and/or posts are offensive, or will claim that their ideas should
remain because of free speech. In such cases, instructors may use the “offensive” post as a teaching moment to
discuss with students. In others, the instructor may remove the post altogether. In either instance, contacting the
“offending” student directly to discuss the matter is important. Instructors may also wish to provide resources for
students who experience challenges with processing the “offensive posts” such as a university’s counseling center.
For topics that might be “triggers” for some students (e.g., gun violence, sexual assault), instructors should offer an
alternative assignment for those who choose not to participate in the discussion.
DESCRIBE HOW TO MANAGE AOD PARTICIPATION
It is important for instructors to describe how students can best manage not only reading AOD posts by their peers,
but also monitoring their own participation. For example, some students might feel overwhelmed reading posts by
everyone in the class. Even a course consisting of 10 students can have hundreds of posts within a week. A simple
technique students can employ is writing an informative subject line for each post. By reading subject lines, students
can then determine whether a post is worth their time/effort to read (e.g., a post with the subject “Thanks – I agree,”
or one directed to a specific classmate, may not require reading or merit a response from everyone). Also, students
can choose to concentrate on one or two questions instead of all of the questions (Note:
In some cases, instructors require their students to respond to all posts—this is why it is important to communicate
and model expectations).
Instructors should also encourage their students to read posts. Researchers Goggins and Xing (2016) found the
following:
Though the research community usually focuses on number of posts students make when examining
student learning in using online discussion tools, our research demonstrates that the number of posts
students read, the time delay for responding, and the time spent on reading posts also matters. (p. 249)
Consequently, instructors should explicitly advise their students to read as many posts as possible and allocate
sufficient time for reading them thoughtfully.
DESIGN QUALITY QUESTIONS OR PROMPTS
Online and blended education instructors must create AOD questions and/or prompts that help students learn the
content. However, this is not an easy task. Berge and Muilenburg (2002) recommended using Bloom’s Taxonomy to
develop questions that foster higher-order thinking. (Note: Bloom’s Taxonomy has since been revised; see
Anderson et al., 2001). Another framework for developing online discussion questions is the CREST+ model (Akin &
Neal, 2007). The CREST+ model “covers the cognitive nature of the question, the reading basis, any experiential
possibility, style and type of question, and…ways to structure a good question” (Akin & Neal, 2007, para. 1).
Whichever approach is used, varying the types of questions is vital, as well as determining the learning goals of the
AOD (Bradley et al., 2008).
ENCOURAGE CONVERSATIONAL STYLE
In a review of the literature on AODs, Fear and Erickson-Brown (2014) recommended using a conversational style
in the AODs, which “means writing in conversational form and style; using personal anecdotes and affective verbal
immediacy; the expression of appropriate emotion through the use of capitals, bold, italics; emoticons; and so on”
(p. 25). This recommendation is in line with Clark and Mayer’s (2016) Personalization and Embodiment Principles,
which suggest that “people learn better when e-learning environments use a conversational style of writing or
speaking … polite wording for feedback and advice, and a friendly human voice” (p. 179).
INCORPORATE STUDENT FACILITATION OF DISCUSSIONS
Several researchers have touted the benefits of student-facilitated AODs (Milman, 2009; Milman, Hillarious, &
Walker, 2012; Snyder & Dringus, 2014). Student facilitation of AODs involves students serving in the facilitator role
for the discussion. It is important to ensure that student-facilitators are informed of the expectations and grading
criteria, but also that they are set up for success by being provided content and discussion questions ahead of time.
Also, it is important for student-facilitators to know who to contact for support (even if to ask for advice about how
to respond to a peer’s post). Appendix B provides a sample assignment description for student facilitation of AOD,
as well as a reflection “debrief” assignment.
PARTICIPATE IN THE DISCUSSIONS!
Many investigators have found that instructor presence and participation in AOD can impact students’ learning and
participation (deNoyelles, Zydney, & Chen, 2014; Kayler & Weller, 2007; Swan, 2002). However, frequency of
participation is not the greatest factor influencing student participation and learning. For instance, Nandi, Hamilton,
and Harland (2012) found that “periodic feedback from instructors is always valued highly by students and keeps
the students on track” (p. 23). Therefore, instructors should be more concerned with ensuring they are posting
periodically rather than worrying about how frequently or how many posts they might make.
SUMMARY
The use of AOD in online and blended education courses in IHE shows great promise for building community,
fostering deep learning, and providing student-student, student-content, and student-instructor interaction.
However, there are many different types of tools that can be used for housing them, as well as numerous approaches
that instructors can take for structuring them. This chapter provided information about some of the tools, research,
challenges, benefits, and recommendations for using AOD in IHE. As technologies change and as new ones are
developed, so will the tools for hosting AOD. Therefore, it will be important for instructors to learn about their
affordances and challenges, as well as experiment with new tools to ensure they best meet the needs of their
students.
REFERENCES
Abrami, P. C., Bernard, R. M., Bures, E. M., Borokhovski, E., & Tamim, R. M. (2011). Interaction in distance education
and online learning: Using evidence and theory to improve practice. Journal of Computing in Higher
Education, 23, 82-103.
Akin, L., & Neal, D. (2007). CREST+ model: Writing effective online discussion questions. Journal of Online Learning
and Teaching, 3(2). Retrieved from http://jolt.merlot.org/vol3no2/akin.htm
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., & Wittrock, M. C.
(2001). A taxonomy for learning, teaching, and assessing. New York, NY: Longman.
Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009). A
meta-analysis of three types of interaction treatments in distance education. Review of Educational
Research, 79(3), 1243-1289.
Berge, Z., & Muilenburg, L. (2002). Designing discussion questions for online adult learning. In A. Rossett (Ed.), The
ASTD e-Learning Handbook: Best practices, strategies, and case studies for an emerging field (pp. 183-190).
New York, NY: McGraw Hill.
Bradley, M. E., Thom, L. R., Hayes, J., & Hay, C. (2008). Ask and you will receive: How question type influences
quantity and quality of online discussions. British Journal of Educational Technology, 39(5), 888-900.
Clark, C. C., & Mayer, R. E. (2016). E-learning and the science of instruction: Proven guidelines for consumers and
designers of multimedia learning (4th ed.). Hoboken, NJ: John Wiley & Sons.
Clarke, L. W., & Bartholomew, A. (2014). Digging beneath the surface: Analyzing the complexity of instructors'
participation in asynchronous discussion. Online Learning, 18(3), 1-22.
deNoyelles, A., Zydney, J. M., & Chen, B. (2014). Strategies for creating a community of inquiry through online
asynchronous discussions. MERLOT Journal of Online Learning and Teaching, 10(1), 153-165.
Fear, J. W., & Erickson-Brown, X. (2014). Good quality discussion is necessary but not sufficient in asynchronous tuition: A
brief narrative review of the literature. Journal of Asynchronous Learning Networks, 18, 21-28.
Gilbert, P. K., & Dabbagh, N. (2005). How to structure online discussions for meaningful discourse: A case study.
British Journal of Educational Technology, 36(1), 5-18.
Goggins, S., & Xing, W. (2016). Building models explaining student participation behavior in asynchronous online
discussion. Computers & Education, 94, 241-251.
Kayler, M., & Weller, K. (2007). Pedagogy, self-assessment, and online discussion groups. Journal of Educational
Technology & Society, 10(1), 136–147.
Milman, N. B. (2009). Crafting the “right” online discussion questions using the revised Bloom’s Taxonomy as a
framework. Distance Learning, 6(4), 61-64.
Milman, N. B., Hillarious, M., & Walker, B. (2012). An exploratory qualitative analysis of graduate student learning
and division of labor resulting from student cofacilitation of an asynchronous online discussion. Quarterly
Review of Distance Education, 13(2), 51-64.
Moore, M. G. (1989). Three types of interaction. American Journal of Distance Education, 3(2), 1-6.
Nandi, D., Hamilton, M., & Harland, J. (2012). Evaluating the quality of interaction in asynchronous discussion forums
in fully online courses. Distance Education, 33(1), 5-30.
Salter, N. P., & Conneely, M. R. (2015). Structured and unstructured discussion forums as tools for student
engagement. Computers in Human Behavior, 46, 18-25.
Snyder, M. M., & Dringus, L. P. (2014). An exploration of metacognition in asynchronous student-led discussions: A
qualitative inquiry. Journal of Asynchronous Learning Networks, 18(2), 1-19.
Swan, K. (2002). Building learning communities in online courses: The importance of interaction. Education,
Communication & Information, 2(1), 23-49.
Wegmann, S. J., & McAuley, J. K. (2014). Investigating asynchronous online communication: A connected stance
revealed. Journal of Asynchronous Learning Networks, 18, 99-113.
Zhou, H. (2015). A systematic review of empirical studies on participants’ interactions on Internet-mediated
discussion board as course component in formal higher education settings. Online Learning Journal, 19(3),
1-20.
APPENDIX A: AOD GRADING RUBRIC
Timing (when the post is made):
- Excellent (1 pt): Student posted before/on and after Saturday.
- Good (.50 pt): Student posted on and after Sunday.
- Needs Improvement (0 pt): Student posted on or after Monday.

Quality and Quantity of posts:
- Excellent (1 pt): Student posted three or more quality* discussion postings.
- Good (.50 pt): Student posted two quality* discussion postings.
- Needs Improvement (0 pt): Student posted one or no quality* discussion postings.

Total possible points each week: 2
Each week starts Wednesday 12AM EST and ends on Tuesday 11:59PM EST.
*Quality posts are those that involve:
- responding thoughtfully to a topic/question;
- posing a thought-provoking question related to a topic; and/or
- providing links and resources related to a topic along with an explanation of the link/resource(s) shared (simply posting a link or resource without an explanation does not “count” as a quality post).

Posts of the “agree,” “great job,” or “thanks” variety are not considered quality posts, nor are they “counted” toward your posting grade.
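Instructors who track weekly participation in a spreadsheet or script may find it useful to encode the rubric's arithmetic directly. The sketch below is one illustrative way to do so; the function name, its parameters, and the interpretation of the timing cutoffs (a first post by Saturday earns the full timing point) are our own assumptions, not part of the rubric itself.

```python
def weekly_aod_score(first_post_day, quality_post_count):
    """Compute a weekly AOD score (max 2 points) per the sample rubric.

    first_post_day: weekday name of the student's first post; the
        grading week runs Wednesday through Tuesday. (Assumption:
        posting by Saturday earns the full timing point.)
    quality_post_count: number of quality posts made that week.
    """
    week = ["Wednesday", "Thursday", "Friday", "Saturday",
            "Sunday", "Monday", "Tuesday"]
    day = week.index(first_post_day)
    # Timing: 1 pt by Saturday, .5 pt on Sunday, 0 pt Monday or later.
    if day <= week.index("Saturday"):
        timing = 1.0
    elif day == week.index("Sunday"):
        timing = 0.5
    else:
        timing = 0.0
    # Quality/quantity: 1 pt for three or more quality posts,
    # .5 pt for two, 0 pt for one or none.
    if quality_post_count >= 3:
        quantity = 1.0
    elif quality_post_count == 2:
        quantity = 0.5
    else:
        quantity = 0.0
    return timing + quantity
```

For example, a student whose first post lands on Friday and who makes three quality posts earns the full 2 points, whereas a first post on Sunday with two quality posts earns 1 point.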
APPENDIX B: AOD FACILITATOR ASSIGNMENT DESCRIPTION AND GRADING RUBRIC
Purpose: Research shows that online students who moderate (facilitate) online discussions develop a deeper
understanding of the content and strategies for facilitating online discussions. They also further cultivate their
leadership and collaboration skills and abilities while also contributing to a shared learning community where they
learn from and with their peers. The purpose of the debrief statement is for you to reflect on your experience co-facilitating one week's discussion, as well as to describe how responsibilities were shared among all co-facilitators.
Discussion Facilitation & Debrief Statement (4%) [Due when assigned] - You will be asked to co-facilitate one week's
discussion. To earn the full points, you must:
1. plan and prepare by (with your co-facilitators):
a. reviewing the draft lecture;
b. communicating with your co-facilitator about the distribution of responsibilities; and
c. examining the 3 open-ended questions included in the email sent with the draft lecture - these will
be the questions you co-facilitate.
2. co-facilitate the discussion by:
a. guiding the discussions by posting thought-provoking questions and interjections;
b. expanding upon the other students' viewpoints and providing help and feedback;
c. promoting politeness and courtesy;
d. being supportive and complimentary to those who provide good effort, and responding to as many
postings as possible.
3. submit a 250-350 word debrief statement that describes:
a. how responsibilities were divided/shared between you and your partner(s), and
b. what you learned about the content and co-facilitation from this experience.
Submission: Submit your debrief statement no later than the Tuesday following the end of your facilitation week. Note
that each facilitator must upload his or her own statement - this is an assignment required by each individual!
Grading Criteria: This assignment will be graded using the Co-Facilitation Rubric.
Note: You should not answer the questions for the week's discussion as would be expected of your peers who are
not co-facilitating. Also, you will not earn points for "weekly participation" in discussions. Instead, you will earn
a facilitation grade.
Co-Facilitation Grading Rubric

Facilitation of Discussion (3 pts)
- Excellent: The facilitator, meaningfully and without prompting from her/his partner, IR, or SL, successfully co-facilitated the week’s discussion by: (1) guiding the discussions by posting thought-provoking questions and interjections; (2) expanding upon the other students’ viewpoints and providing help and feedback; (3) promoting politeness and courtesy; (4) being supportive and complimentary to those who provide good effort; and (5) responding to as many postings as possible.
- Satisfactory: The facilitator required some prompting and/or guidance from her/his partner, IR, or SL, to address the requirements in the “Excellent” section.
- Unsatisfactory: The facilitator required prompting and/or guidance several times from her/his partner, IR, or SL, to address the requirements in the “Excellent” section, or was absent from the discussion.

Debrief Statement (Individual – this is not to be completed as a team) (1 pt)
- Excellent: The facilitator submitted a 250-350 word debrief statement describing: (1) how responsibilities were divided/shared between you and your partner(s), and (2) what you learned about the content and co-facilitation from this experience.
- Satisfactory: The facilitator submitted a somewhat wordy, too brief, and/or unclear debrief statement addressing the requirements in the “Excellent” section.
- Unsatisfactory: The facilitator failed to submit a debrief statement or submitted one that is very wordy, too brief, and/or unclear, and/or that failed to address the requirements in the “Excellent” section.
NOTE: Co-facilitators should not answer the questions for the week they are co-facilitating. They will not earn
“student participation points” either; however, they will earn a co-facilitation grade. You will not see any points
recorded during your co-facilitation week for student discussions.
CHAPTER 11
ENHANCING STUDENT ENGAGEMENT IN ASYNCHRONOUS ONLINE COURSES
MICHAEL R. STEVENSON AND DAMIEN C. MICHAUD
UNIVERSITY OF SOUTHERN MAINE
INTRODUCTION
Many faculty and students refuse to teach or take courses online because of their perception that the online
environment lacks opportunities for student-student and faculty-student interaction (Vivolo, 2017). However, easy
to use and cost effective technologies give instructors the ability to engage with students and enhance their
interaction with the content, while giving students ways to interact with each other, even in asynchronous online
courses. As Bowen (2012a) suggested, “Technology provides tools to motivate students for deeper critical
exploration, application, and integration of the information now available to them, and e-communication provides
strategies for building intellectual communities” (p. 130). A key factor in this quest is to structure classes so that
everyone has to participate and is rewarded for doing so (Bowen, 2012b). In this chapter, we briefly describe
strategies and techniques that can be deployed to increase student-content, student-student, and student-instructor
interaction and engagement in asynchronous, online courses using readily available tools that do not
increase costs to students. As with many pedagogically innovative practices common online (Vivolo, 2017), these
practices are consistent with cognitive science regarding how people think and learn (Miller, 2014) and can often be
adapted readily to other modalities of instruction (e.g., hybrid, synchronous online, or face-to-face).
STUDENT-CONTENT ENGAGEMENT
Perhaps it is a truism that a student’s interaction with the content is the raison d’être for a college course! Regardless
of one’s opinion on this point, in our experience advising students taking conventional face-to-face classes, many of
those who have not met their own academic expectations admit to avoiding engagement with the content (e.g., not
reading the textbook, routinely skipping in-person lectures, failing to complete required assignments). In large
lecture halls filled with students (sometimes hundreds), it can be difficult to provide the
personal attention that might benefit students. Online, there are a variety of strategies and techniques that can be
used to encourage student-content interaction.
The test function in Blackboard, like those in many other Learning Management Systems (LMSs), offers faculty the
opportunity to provide on-the-spot feedback to students responding to multiple-choice questions. When the correct
alternative is chosen, students can see “bravo” or “good choice” or some other brief and encouraging feedback
provided by the instructor. More importantly, choosing an incorrect alternative can trigger feedback that refers the
student to specific material (e.g., a particular reading or video, a specific section of a chapter). Such feedback
encourages students to re-engage with specific content and deepen their understanding (Miller, 2014). Rather than
provide the correct answer, this approach encourages students to discover the answer for themselves.
Another strategy for increasing student-content interaction is to offer students multiple attempts on assessments
(Miller, 2014). This is especially easy to accomplish for quizzes or exams that are taken online and automatically
scored. For example, in our introductory level course, students are expected to complete a weekly quiz via the test
function in Blackboard. Allowing students the opportunity to take these quizzes more than once lowers the stakes
(and hopefully the anxiety) and allows weekly quizzes to serve as a formative assessment and study guide, helping
students determine what they do (and do not) know. Each quiz covers the assigned readings, videos, and other
materials for the corresponding week. To increase engagement with the content, students are encouraged to read
the material and watch the lectures, then take the relevant quiz for the first time early in the week. Their success (or
lack thereof) on the quiz can then be used to guide their next steps. If they did not achieve the desired score, they
are encouraged to spend more time studying, especially the material related to the questions they missed. They can
take the quiz again, repeating this process before the deadline for a particular quiz. Blackboard allows the instructor
to determine the number of attempts (we usually allow two or three) and which score will count toward the final
grade (e.g., last, highest, lowest, first, average).
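The choice of which attempt counts toward the grade (last, highest, lowest, first, or average) is simply a selection rule over a list of scores. The sketch below renders those rules in a few lines of Python for illustration; it is not Blackboard code, and the function name and signature are our own.

```python
def counted_score(attempts, policy="highest"):
    """Return the attempt score that counts toward the final grade.

    attempts: list of scores in chronological order.
    policy: one of "last", "highest", "lowest", "first", "average",
        mirroring the options described in the text.
    """
    if not attempts:
        raise ValueError("no attempts recorded")
    rules = {
        "last": lambda s: s[-1],
        "highest": max,
        "lowest": min,
        "first": lambda s: s[0],
        "average": lambda s: sum(s) / len(s),
    }
    return rules[policy](attempts)
```

With attempts of 6, 8, and 7 out of 10, for instance, the "highest" policy counts the 8, while "average" counts 7.0.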
Allowing students multiple attempts at quizzes and exams requires a large pool of quiz questions. When using large
publisher-provided test banks, instructors may be tempted to allow students unlimited attempts. However, in our
experience this appears to exacerbate counterproductive behavior, like taking quizzes repeatedly in rapid succession
at the last minute, in hopes of a higher score, rather than encouraging more productive approaches to mastering
content. Although this behavior occurs in some students even when the number of attempts is limited, our
experience suggests that limiting the number of attempts to two or three increases the probability that students will
follow the provided guidance and temporally space their attempts.
A third strategy for encouraging student-content interaction is to offer extra credit for identifying errors in course
materials and assessments.¹ For example, syllabi can include language like the following:
You may find minor errors in some of the quizzes or in the other online materials you will engage during
the course, often due to technical glitches and other issues that are beyond the instructor’s control. If you
find an error that I can fix, bring it to my attention. Indicate where you spotted the problem (e.g., the
chapter quiz number and the first few words of the question). If you are among the first to tell me about it,
you will earn an extra credit point.
Similarly, course management systems like Blackboard require literally thousands of option selections and
(as you may have experienced) technology does not always interface perfectly. Please let me know if you
find an error – I will correct it (if I can) and give you an extra credit point for helping me to improve the
experience for other users. On rare occasion, the technology designates a wrong answer as the “correct”
answer on a quiz question. If you are convinced you have found an example of such an error, please email
me. Be certain to indicate the course, the quiz number, and the first few words of the question in addition
to your explanation as to why your selected answer is better than the “correct” answer. If I agree with
your assessment, you will earn an extra credit point.
This approach encourages students to become co-creators of their learning experience, at least at a minimal level.
It does, in fact, allow the instructor to correct errors that might otherwise distract or annoy other students in the
class and to do so before the impact of the errors becomes unmanageable or disruptive. Implicitly, it also gives
students permission to make (and hopefully correct) their own mistakes. Finally, it creates opportunities for faculty
to interact with individual students, a topic to which we will return.

¹ The first author is grateful to the late John Broida for suggesting this strategy.
STUDENT-STUDENT ENGAGEMENT
Some students are reluctant to take coursework online due, at least in part, to their perception that interaction with
online classmates will be limited (Vivolo, 2017). Contrary to this expectation, an LMS offers a variety of mechanisms
that can be employed to encourage student-student interaction, thereby increasing students’ social presence in the
course and their satisfaction with the medium (Gordon, 2017). These mechanisms also allow instructors to monitor, archive, and assign
points to (e.g., grade) student-student interaction in ways that are very difficult to replicate in a face-to-face
environment. For example, course requirements can include graded discussion threads and graded peer feedback
on course artifacts.
Blackboard’s discussion board function makes it easy for instructors to manage, archive, and grade these activities.
For example, on the discussion board, students can be required to post a response to a prompt provided by the
instructor by a specified deadline and then reply in a substantive fashion to a specific number of other students’
posts. In addition to text-based replies, students can also be encouraged to reply using video if the instructor wishes
to do so and the LMS allows. It is important to operationally define “substantive” in this context so students know
their posts need to include something more than “good job” or “I agree.” For more complex assignments, a grading
rubric can be shared with students in advance (Stevens & Levi, 2012). Such rubrics can also make assigning points to
(even large numbers of) discussion threads relatively painless (even if time consuming) for instructors or teaching
assistants. They can also be used by students to assess their own work (Ko & Rossen, 2017).
Similarly, we frequently require students to provide peer feedback on course artifacts. This can be done in the
Blackboard’s discussion board as well. Regardless of the length or complexity of the writing assignment, students
can be required to post their artifact on the discussion board and subsequently provide peer review or feedback to
a specified number of peers on the same assignment. We frequently ask students to provide such feedback on drafts
of complex assignments. That way students receive feedback from the instructor as well as from their peers that
they can incorporate into their final draft regardless of whether the assignment is a book review, wiki entry, letter
to the editor, lab report, etc. It may be useful to point out that students may receive contradictory feedback, in this
context, and can be empowered to consider the alternatives and make whatever editorial decision they think best
serves their audience and the purpose of the assignment.
Although graded discussion threads and required peer review encourage student-student interaction regarding
course content, some students seek less formal interactions with their peers as a way to enhance their social
networks among other non-academic goals (Lewis, Dikkers, & Whiteside, 2017; Whiteside, Dikkers, & Lewis, 2017).
These less formal interactions can be encouraged in a variety of ways. For example, we often encourage students to
post a brief introduction or bio (and to respond to some of their classmates’ posts) during the first week of class (e.g.,
Gunawardena, 2017). If given options, some students will post video whereas others prefer text. Even without
assigning a point value to such a post, in our experience, most students appear to do so with pleasure. We often
encourage students to take note of others who share their interests or background or who posted something of
particular interest so that they can use the email function in Blackboard to further their interaction. (This simple
exercise also accomplishes another important goal, namely, demonstrating students’ knowledge of how to navigate
the discussion board.)
In addition to providing students with a mechanism through which they can potentially solve a problem quickly,
perhaps especially when the instructor is not immediately available, a help wanted/help offered forum
(Gunawardena, 2017) can promote productive, but less formal, student-student interaction. The discussion board in
our classes often includes a forum with the following instructions.
If you have a question, need some assistance, or have experience or expertise to offer, post a note on the
Help wanted…Help offered forum! If you subscribe to this forum, you will be alerted to new posts via email.
Because the instructor isn't always online, especially in the evening and on weekends, it may also be a way
to get an immediate answer to a question. To subscribe, simply open the forum and click on the "subscribe"
button.
You are also welcome to post an offer of assistance. For example, if you are willing to help others make
connections to people in your network, have technological expertise to share, or are skilled in the kind of
research required for this course, you can let others know of your willingness to assist through this forum.
Students have used this forum to resolve minor technical problems, announce local events (e.g., lectures,
workshops) relevant to course content, request clarification on assignment instructions, and organize independent
study groups via social media, among other things. Because these discussion threads are visible to all students in the
class, it also allows the instructor to answer a question once rather than repeatedly to any number of individual
students.
STUDENT-INSTRUCTOR ENGAGEMENT
In some respects, student-instructor interaction is the most “expensive” type of engagement. After all, there is often
only one instructor and there are usually many students. Regardless, there are a variety of ways instructors can make
their virtual presence known in asynchronous online classes even if there are few, if any, opportunities to meet face-to-face or in real time (Whiteside, Dikkers, & Swan, 2017). These include, but are certainly not limited to, weekly
announcements, instructor commentaries, and participation monitoring.
We use weekly announcements in much the same way many faculty use the first few minutes of a face-to-face
lecture. We typically send the weekly announcement early Monday morning, especially in courses where the default
deadline for weekly assignments is 11:59 PM on Sunday. Blackboard allows instructors to deliver announcements to
students’ email inboxes. Announcements can also remain on the Blackboard announcement page for the duration
of the week (or for whatever period an instructor deems appropriate). They can also be written well in advance,
deployed on a schedule, and edited for reuse in subsequent semesters. In fact, we usually draft a weekly
announcement for each week of the semester while designing the course and edit or update each message as it is
made available to students.
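The schedule-ahead workflow described above amounts to computing one release date per week of the term, which the LMS's scheduled-release (date-restriction) settings can then enforce. A minimal sketch in Python follows; the start date and term length are hypothetical examples, not details from our courses.

```python
# Illustrative sketch: compute release dates for pre-drafted weekly
# announcements, one per week, delivered early Monday morning.
# The start date and number of weeks are hypothetical examples.
from datetime import date, timedelta

def monday_release_dates(first_monday, weeks):
    """One release date per instructional week, starting from first_monday."""
    return [first_monday + timedelta(weeks=w) for w in range(weeks)]

# e.g., a 15-week semester beginning Monday, January 8, 2018
schedule = monday_release_dates(date(2018, 1, 8), weeks=15)
```

Each date can then be entered into the LMS's scheduled-release settings so the announcement appears (and is emailed) at the start of its week.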
Although some users advocate using “text-short” emails (e.g., Boettcher & Conrad, 2016), weekly announcements
can vary in length depending upon the amount of information that needs to be delivered. Not unlike the introduction
to a lecture in a face-to-face class, these messages can include a wide range of information (e.g., closing the loop on
last week’s content, summarizing feedback on a homework assignment, briefly introducing this week’s content,
deadline reminders, reminders regarding underutilized LMS functions, or study strategies that facilitate online
learning).
Whether course material is presented in a conventional textbook, eBook, or as curated collections of readings from
a variety of sources, the instructor’s presence and engagement with students can be highlighted by providing an
instructor commentary for each unit. We think of these commentaries in much the same way as the commentaries
museum curators provide when mounting an exhibition. They vary in length depending on the context. However,
they can include highlights of the most important points, new material (especially when a reading of the appropriate
length and complexity is not available on an important topic), and points of disagreement among scholars, among
other things. This is exactly the kind of content that instructors use to make a live lecture more engaging and to put
their personal stamp on the material. In addition to enhancing student-instructor engagement, such commentaries
act as a road map and a frame of reference for students as they progress through the assigned material.
Blackboard also collects information on student activity, which gives instructors the opportunity to monitor
student participation and potentially shape student behavior early in the course. For the first few weeks, we carefully
monitor student activity in Blackboard. Students receive an email alert from the instructor when they miss an
assessment deadline or have failed to log in for three or four days. In addition to highlighting the instructor’s
presence, closely monitoring student engagement provides an opportunity to troubleshoot. This is especially
important for students who are new to online learning. Although some students quickly admit to forgetting to take
a quiz or submit other required artifacts, responses to these early alerts include a wide variety of concerns, most of
which are easily remedied with a bit of extra coaching (e.g., “I couldn’t find the quiz link,” “I posted the assignment
but forgot to click SAVE,” “I’m having trouble accessing the eBook,” “I can’t open the videos”). By monitoring student
engagement through the LMS, the instructor can help students more quickly adapt to the online environment. In
our experience, most students welcome these interventions and are grateful for the support.
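In Blackboard this monitoring relies on the system's own activity reports, but the underlying logic is simple enough to sketch. The snippet below is a hypothetical illustration, not Blackboard's API: it assumes an exported CSV with `student` and `last_login` columns and flags anyone who has been quiet past a threshold.

```python
# Hypothetical sketch: flag students for an early check-in email based on
# an exported activity report. Column names are assumptions; real LMS
# exports differ from system to system.
import csv
from datetime import date, datetime

INACTIVITY_THRESHOLD_DAYS = 4  # alert after roughly three or four quiet days

def students_to_contact(report_path, today=None):
    """Return students whose most recent login is older than the threshold."""
    today = today or date.today()
    flagged = []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            last_login = datetime.strptime(row["last_login"], "%Y-%m-%d").date()
            if (today - last_login).days >= INACTIVITY_THRESHOLD_DAYS:
                flagged.append(row["student"])
    return flagged
```

The same pattern extends to missed assessment deadlines by comparing a submissions column against due dates.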
APPROACHES FOR BRINGING IT ALL TOGETHER
As these examples should make clear, a single approach can increase student engagement across some combination
of the three (not so) discrete kinds of engagement discussed above. In this section, we discuss three approaches that
help students engage with the content, with other students, and with the instructor.
VoiceThread (http://voicethread.com; Ko & Rossen, 2017; Vickers & Shea, 2017) is an inexpensive, commercially available product that allows instructors to create narrations for the slide decks they might otherwise be using to
structure a conventional lecture. One important advantage of this product over other approaches to lecture capture
is that the creator can alter, add, or replace individual slides without the need to re-record an entire lecture. So, if a
student identifies an ambiguity in the narration or the material on a slide requires updating, the affected slides can
be revised without starting from scratch. Using VoiceThread, students also have
control over the pace at which they advance the slides.
Depending on instructor preferences, students can attach their questions and comments (using voice or text) to
individual slides. In doing so, they share their questions and comments with the instructor as well as other students
and others can add to the discussion thread. In short, VoiceThread allows students to engage actively with content
by commenting on or asking questions about specific slide content. Such comments also allow them to engage with
the instructor or with other students in ways that are not possible with other approaches to recording lectures.
Social media of various sorts can also be used to encourage students to engage with content, other students, and
the instructor (Boettcher & Conrad, 2016). As noted earlier, students can set up study groups using Facebook, independent of the course, or the instructor can create such groups and invite students to join. I (the first author)
ask students to engage further with content, with me, and with other students by inviting them to follow me on
Twitter or to search for or post course-relevant tweets using a hashtag created specifically for this purpose. We
consider this a digital equivalent of the "suggested readings" lists that sometimes
appear at the end of textbook chapters. In addition to sharing additional material with students, this is an easy way
to track material that an instructor might like to embed in a future iteration of a frequently taught course (Carrigan,
2016). Because it is independent of course enrollment, it also gives students an opportunity to remain engaged with
course-related content, if they wish to do so, long after the course ends.
Although online lab work presents unique challenges (Ko & Rossen, 2017), faculty in the physical sciences have spent
considerable energy creating online lab-like activities that can be embedded in asynchronous online courses (e.g.,
Kennepohl & Shaw, 2010). Some textbook publishers (e.g., Pearson) also are invested in this effort. The availability
of such activities in the social and behavioral sciences is much more limited, especially in introductory psychology
courses (Peterson & Sesma, 2017). However, the Society for the Teaching of Psychology (https://teachpsych.org/)
is making progress in filling this void.
CONCLUSION
As opportunities for learning online become available to more and more students (Lederman & Jaschik, 2017) in
public institutions as well as private, the importance of efforts to enhance online engagement should not be
underestimated. The federal government requires regular and substantive interaction between students and faculty
as a basis for granting financial aid (see Lieberman, 2017). National organizations, like Quality Matters
(www.qualitymatters.org), include indicators of student engagement in the standards and rubrics they use to assess
the quality of online courses and programs. Perhaps most importantly, students expect and benefit from it (e.g.,
Bowen, 2012a; Miller, 2014).
The online learning environment presents challenges for learners and instructors alike. But, it is our experience that
with a degree of forethought and intentionality in course design, it provides opportunities as well. By focusing on
the elements of the dynamic, triadic relationship between the instructor, the student, and the content, we have
identified approaches and applications of sound pedagogy and effective technologies that have increased
engagement for students, even in our high enrollment, asynchronous, online courses.
Practitioners are well aware of the constraints of the online learning environment, particularly in regard to creating
a sense of connection and being there for their students. And while no LMS is perfect, we have undertaken an
iterative process of investigating and piloting the use of many of Blackboard’s tools with the intention of enhancing
student engagement. We are confident that the recommendations presented here have accomplished just that.
REFERENCES
Boettcher, J. V., & Conrad, R. (2016). The online teaching survival guide: Simple and practical pedagogical tips. San
Francisco: Jossey-Bass.
Bowen, J. A. (2012a). Technology for engagement. In Teaching naked: How moving technology out of your college
classroom will improve student learning (pp. 129-152). San Francisco: Jossey-Bass.
Bowen, J. A. (2012b). The naked classroom. In Teaching naked: How moving technology out of your college classroom
will improve student learning (pp. 185-214). San Francisco: Jossey-Bass.
Carrigan, M. (2016). Social media for academics. Los Angeles: Sage.
Gordon (2017). Creating social cues through self-disclosures, stories, and paralanguage. In Whiteside, A. L., Dikkers,
A. G., & Swan, K. (Eds.), Social presence in online learning (pp. 99-112). Sterling, VA: Stylus.
Gunawardena, C. N. (2017). Cultural perspectives on social presence. In Whiteside, A. L., Dikkers, A. G., & Swan, K.
(Eds.), Social presence in online learning (pp. 113-129). Sterling, VA: Stylus.
Kennepohl, D., & Shaw, L. (2010). Accessible elements: Teaching science online and at a distance. Edmonton, AB: AU
Press.
Ko, S., & Rossen, S. (2017). Teaching online: A practical guide (4th ed.). New York: Routledge.
Lederman, D., & Jaschik, S. (2017, Sept 7). New directions in online education [Webcast]. Retrieved from
https://www.insidehighered.com/audio/2017/08/07/new-directions-online-education
Lewis, S., Dikkers, A. G., & Whiteside, A. L. (2017). Personalized learning to meet the needs of diverse learners. In
Whiteside, A. L., Dikkers, A. G., & Swan, K. (Eds.), Social presence in online learning (pp. 170-179). Sterling,
VA: Stylus.
Lieberman, M. (2017, Sept 27). Online education on notice. Inside Higher Ed. Retrieved from
https://www.insidehighered.com/digital-learning/article/2017/09/27/threat-rescinded-federal-funds-prompts-range-responses-online
Miller, M. D. (2014). Minds online: Teaching effectively with technology. Cambridge, MA: Harvard University Press.
Peterson, J. J., & Sesma, A., Jr. (2017). Introductory psychology: What's lab got to do with it? Teaching of Psychology,
44, 313-323. doi: 10.1177/0098628317727643
Stevens, D. D., & Levi, A. J. (2012). Introduction to rubrics: An assessment tool to save grading time, convey effective
feedback, and promote student learning (2nd ed.). Sterling, VA: Stylus.
Vickers, J. C., & Shea, P. (2017). Future directions for social presence. In Whiteside, A. L., Dikkers, A. G., & Swan, K.
(Eds.), Social presence in online learning (pp. 191-206). Sterling, VA: Stylus.
Vivolo, J. (2017). Active learning: Interaction, diversity, and evolution in online learning. In R. Ubell, Going online:
Perspectives on digital learning (pp. 18-32). New York, NY: Routledge.
Whiteside, A. L., Dikkers, A. G., & Lewis, S. (2017). Overcoming isolation online. In Whiteside, A. L., Dikkers, A. G., &
Swan, K. (Eds.), Social presence in online learning (pp. 180-187). Sterling, VA: Stylus.
Whiteside, A. L., Dikkers, A. G., & Swan, K. (Eds). (2017). Social presence in online learning. Sterling, VA: Stylus
Publishing.
CHAPTER 12
LEVERAGING THE FEATURES OF LEARNING MANAGEMENT SYSTEMS (LMSs) IN PSYCHOLOGY COURSES
JESSICA J. CERNIAK
THE CHICAGO SCHOOL OF PROFESSIONAL PSYCHOLOGY
INTRODUCTION
Extant research lacks specific statistics regarding the annual number of psychology students enrolled in any online
course(s) at any level of postsecondary education, as well as the number of psychology students enrolled in hybrid
or fully online psychology degree programs. Research on how technology is used in the teaching and learning of
specific psychology content is similarly scarce. Psychology continues to be one of the most popular
undergraduate majors (National Center for Education Statistics (NCES), 2015). More than one in four postsecondary
students are enrolled in at least one online course, with 83% of these students studying at the undergraduate level
(Allen & Seaman, 2017). More specifically, a recent survey of regionally accredited, non-profit, undergraduate
institutions indicated 26% of psychology courses were offered online, representing approximately 17% of
undergraduate psychology credits earned in the 2016-2017 academic year (Hailstorks et al., 2017). Although
enrollment rates in graduate-level, online psychology courses and programs are unclear (Newhouse & Cerniak,
2015), the Commission on Accreditation of the American Psychological Association (APA) acknowledges the role
online predoctoral coursework has in supplementing traditional, face-to-face (FTF) instruction (APA, 2017; Clay,
2012). Taken together, these statistics suggest many psychology students have or will take an online course at some
point in their education.
Web-facilitated FTF courses, specifically those utilizing web-based tools, and hybrid and fully online courses
delivered by learning management systems (LMSs) come in myriad forms, and LMSs increasingly are seen as but one
point on an evolving continuum of the modern learning ecosystem (EDUCAUSE, 2012). For the purposes of this
chapter, selected research focused on closed-enrollment online courses that are part of degree-granting programs
will be reviewed, as opposed to research pertinent to classes with open or free enrollment, such as massive open online courses
(MOOCs). To address the needs of readers who range in familiarity with and experience using LMSs at any level of
postsecondary psychology education, and who may teach in FTF, hybrid, or fully online programs, this chapter will:
provide a brief overview of the different types of LMSs; review some of the features embedded in many current
LMSs; explore how LMSs aid student learning, faculty’s management of courses, and program development and
oversight; and discuss challenges encountered by and recommendations for users of LMSs.
OVERVIEW OF LMSs
CLASS FORMAT AND LMSs
Whether the course is delivered solely in an online, distance education (DE) format or through a hybrid learning
experience (also referred to as a blended course, combining scheduled, periodic, in-person contact with online
learning), an LMS will be utilized. LMSs also are used in many FTF classes; in these web-facilitated courses, faculty
may use the LMS as an adjunctive tool to complement and support the teaching and learning occurring in FTF
settings.
Wright, Lopes, Montgomerie, Reju, and Schmoller (2014, p. 3) define an LMS as a “comprehensive, integrated
software that supports the development, delivery, assessment, and administration of courses.” An LMS should offer
functions and features aligning with institutional priorities, the range of degree programs and educational delivery
methods offered at the institution, and individual program needs (Wright, Lopes, Montgomerie, Reju, & Schmoller,
n.d.). While a discussion of the criteria used by institutions to select an LMS is outside the scope of this chapter,
Wright et al. (2014) discuss the advantages and disadvantages of each LMS type.
TYPES OF LMSs
There is considerable variability among LMSs, also referred to as course management systems (CMSs), course
delivery systems (CDS), or e-learning platforms. Among other characteristics, some LMSs may be: better suited to
synchronous or asynchronous interaction; focused on content, activity, or networking; accessible on mobile devices;
and, proprietary (e.g., BlackBoard), open-source (e.g., Moodle), or cloud-based systems. Cloud-based LMSs may be
a newer term warranting clarification of two of its common definitions. Some LMSs may require installation on an
institution’s servers, whereas others are cloud-based, hosted on the Internet, and accessed via a service provider’s
secure website (Wright et al., 2014). Some also define cloud-based LMSs as an alternative to proprietary and opensource LMSs, in that institutions can pick from an array of existing, often free, web-based “resources that might
include social bookmarking tools, document sharing applications, social networking sites, timeline tools, and media
options” (EDUCAUSE, 2010, p.1). Thus, these are referred to as LMS alternatives (EDUCAUSE, 2010). For example, a
class project may divide students into groups and require them to develop collaboratively a course Wiki page that
will provide future users with video-based material. In doing so, students may use Skype for synchronous meetings;
Dropbox to share their respective work; and YouTube to share the presentation. (For a list of the top 100 web-based
tools used in education, please see Hart (2016).)
LMS AS A ONE-STOP SHOP
There are some obvious advantages to using the functions and features embedded within a given LMS. Learning how
to use any new technology effectively takes willingness, time, and practice. If students and faculty are directed to
use a single system, their time and efforts are focused on becoming acquainted with and eventually mastering that
single system. This decreases confusion about where to go for course information or assignment details, for example,
and familiarity with some LMS features may facilitate the learning about and use of others embedded within it. Also,
users likely will have one login name and password, which is helpful when needing to access and participate in class
at different times of the day, across multiple computers, or across devices, should the LMS be compatible with
mobile technology.
Institutional information technology (IT) departments also can be alerted readily to any access difficulties users
experience, prompting institution- or program-wide announcements that the LMS is offline or under repair. The
same cannot be said about access to or use of open web-based tools. Students use different types of computers,
browsers, or other mobile devices that may be or will become incompatible with the continually evolving range of
web-based tools and applications. This may adversely impact some students’ ability to engage in class and complete
individual or collaborative assignments.
When programs, individual courses, or specific faculty require students to use a number of tools and resources
external to the LMS, students may experience discontinuity in the learning process from course to course. The locus
of control also shifts from the institution to individual faculty and students to locate, learn about, practice using, and
troubleshoot those tools or applications (EDUCAUSE, 2012). In many instances, time spent mastering different
technology is time diverted from teaching and learning psychology course content for faculty and students,
respectively. Capitalizing on existing features of, and thoughtful use of additional web-based applications within, the
LMS may facilitate course engagement and course satisfaction for students and faculty alike. Accordingly, the
remainder of this chapter will focus on leveraging the embedded features and functions common to many current
LMSs.
HOW LMSs CAN BENEFIT STUDENT LEARNING
VIRTUAL REPOSITORY
Whether used adjunctively or to deliver a course in part or in its entirety, the LMS provides faculty and students with
a single, virtual location that can serve as a repository of course documents, such as the syllabus, and programmatic
and institutional resource information, including library databases and institutional policies. Other essential
information can be housed there, too, including weekly announcements, recommended resources, and recently
published research.
While faculty teaching blended and fully online courses typically are required to post weekly announcements, FTF
faculty may want to do so as well, as this provides students with a published record of information delivered initially
in person. Similarly, students in blended and fully online courses routinely submit assignments to the LMS, and
faculty, using embedded editing tools (e.g., highlight, text comments), can provide detailed formative and
summative feedback. In addition to decreasing paper waste, centralizing all work and feedback in one virtual location
permits students and faculty to see if feedback from a prior assignment is applied to future ones, and the grade book
provides a handy, at-a-glance visual representation of how a given student has progressed over the duration of the
term.
ADDRESSING LEARNING STYLES, NEEDS AND PREFERENCES
While research pertinent to learning styles, preference for course format, and online course completion rates is
mixed (Newhouse & Cerniak, 2015), LMS-delivered courses can appeal to different learning styles. Text-based
materials likely will have an indefinite place in online learning, just as they do in FTF courses, and course developers
and faculty can capitalize on the wide range of audio and visual-based learning materials available at this time. For
example, faculty can embed links to podcasts, TED talks, and YouTube videos, among other resources, placing
students one-click away from required or recommended materials. Faculty also can provide written transcripts of
these materials, if needed, for students with a preference or need for text-based materials. Doing so helps to address
the nuanced needs of students with ability differences, such as those who are deaf/hard of hearing.
Similar to flipped classrooms, many assignments used in LMS-delivered courses require students to demonstrate
their understanding and application of the information they have learned (Clay, 2014). For example, weekly
discussion forums are a common component of hybrid and DE courses. By a designated deadline, students must
reply to question prompts in a discussion forum, and the requirements may include word counts, a specific number
of citations stemming from assigned readings, and the integration of additional scholarly research students must
locate. After completing the assigned readings, students have the opportunity to reflect on the material and conduct
additional research before demonstrating their understanding to the class (Al-Shalchi, 2009). Subsequently, students
typically are required to read and reply to some peers’ initial discussion posts. This blend of independent and
collaborative investigation and discussion also appeals to some of the learning styles and needs evinced in a modern
classroom.
ACADEMIC INTEGRITY & WRITING SUPPORT
A primary focus of postsecondary faculty’s efforts is students’ original academic writing, and ensuring the integrity
of students’ work is a prominent concern in DE (Howell, Sorenson, & Tippets, 2009). To address this, many LMSs
seamlessly integrate plagiarism detection software (e.g., TurnItIn, Plagscan); papers submitted to the LMS are
checked automatically for originality, permitting faculty to assess academic integrity and facility with APA style.
Before deadlines, students can submit written assignments, review the associated reports, and edit specific passages
before grading occurs.
This may not be the case for other assignments routinely used in FTF, blended or online courses, however. For
example, checking the originality of the content of discussion posts or PowerPoint presentations may not be possible
within a given system. Perhaps, additional academic integrity tools suitable to a wider range of assignment types will
be available in the future. To support the originality of discussion posts in the meantime, faculty may use an
embedded feature requiring students to post first to a discussion forum before seeing peers’ responses.
Some institutions also subscribe to online writing tutoring services (e.g., Smarthinking). In advance of deadlines,
students electronically submit written assignments to a trained third party to receive personalized writing feedback
primarily focused on grammar and composition. These resources can be advertised directly within the course,
programs can promote their use, and faculty may require that specific students use this or other institution-based
writing assistance programs.
DIVERSE LEARNERS
The postsecondary student body is increasingly diverse in terms of demographic variables such as ethnicity, age,
and ability level (Betts, Cohne, Veit, Alphin, Broadus, & Allen, 2014; Clinefelter & Aslanian, 2016; EDUCAUSE, 2016;
Heitner & Jennings, 2016; Kumi Yeboah & Smith, 2016), and other relevant statuses characterize these modern
learners, including: active duty military personnel and veterans (NCESb, 2015); and those working in positions or
fields with unpredictable or shifting schedules, such as nursing (Costello et al., 2014) or law enforcement. Moreover,
research consistently indicates the majority of students currently enrolled in higher education are non-traditional
students (NCESb, 2015; Pelletier, 2010), thereby making non-traditional learners the new traditional students (Smith,
2013) across all learning formats. Non-traditional students typically are defined as having membership in one or
more of the following groups: they did not obtain a traditional high school diploma; they delayed enrollment into
postsecondary education; they attend school on a part-time basis; they work full-time; they are financially
independent for the purposes of financial aid; they have dependent(s); and/or they are a single parent (NCESb,
2015). While the term “non-traditional student” continues to be used for federal data collection and record-keeping
purposes, Smith (2013) asserts the use of “post-traditional student” to be a more empowering term acknowledging
students’ diversity and wide-ranging experiences as a “value-add,” rather than aberrations within the higher
education landscape. To be consistent with the majority of research published to date on this student group,
however, the term non-traditional will be used throughout the remainder of this chapter.
Given that approximately 70 to 85% of current college students possess at least one non-traditional student characteristic
(NCESb, 2015; Smith, 2013), administrators and faculty of all program types should assume these learners populate
contemporary classrooms. These data clearly suggest many students are balancing postsecondary school attendance
with work and familial responsibilities, among other life roles, making the option of attending an FTF program, one
requiring students to be on campus on (multiple) specific day(s) at specific time(s), challenging, if not impossible. In
addition to serving non-traditional students, blended and fully online courses make available educational
opportunities to those who are distant from brick-and-mortar institutions. Fully online courses open to international
student enrollment further diversify the student body. While most online courses have established deadlines for
weekly assignments, often referred to as deliverables, the asynchronous format of many DE courses permits
students to complete course activities in line with their personal schedules and local time zones.
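The arithmetic behind deadlines that span students' time zones is a single conversion. The sketch below uses Python's standard `zoneinfo` module; the deadline, campus location, and student time zone are illustrative examples rather than details from the chapter.

```python
# Hedged example: express one fixed course deadline (11:59 PM Sunday,
# campus time) in a student's local time zone. Zones and date are examples.
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# Sunday, January 14, 2018, 11:59 PM on a hypothetical Chicago campus
deadline = datetime(2018, 1, 14, 23, 59, tzinfo=ZoneInfo("America/Chicago"))

def local_deadline(zone):
    """The same instant as seen by a student in the given time zone."""
    return deadline.astimezone(ZoneInfo(zone))

print(local_deadline("Europe/Berlin"))  # early Monday morning for this student
```

Publishing such a conversion alongside the syllabus can spare international students from misreading "Sunday night" as their own Sunday night.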
HOW LMSs SHAPE FACULTY’S APPROACH TO TEACHING
FACULTY’S ROLE
As early as 1993, scholarly works explored the difference in role and presence of DE faculty, from “sage on the stage”
in FTF courses to “guide on the side” in online courses (King, 1993). This difference in role and approach may be
more or less evident in a given course depending on several factors, such as institutional and program expectations,
guidelines from accrediting bodies, faculty members’ teaching styles, and course content. Another critical role
difference concerns hybrid and DE faculty’s course preparation and course facilitation. Many LMS-delivered courses
are prepared with all needed materials at the beginning of the term. Some may permit students to see all areas and
weeks of the course at once, or specific settings may be used to delay the presentation of material over time. Either
way, it is generally expected that the course is fully developed by the first day of class, which impacts the way faculty
prepare for class, facilitate course material, and develop their teaching style.
COURSE PREPARATION
As in FTF courses, blended and online students can see the syllabus, required and recommended readings,
assignments, rubrics, and due dates at the outset of the term. However, faculty teaching FTF courses may have
greater flexibility to modify the content of a given week’s class time to reflect or incorporate, for example, newly
published research or a current event. While DE faculty also can share this information, requiring students to engage
with the material may be limited by the parameters of existing, viewable assignments.
Similar to FTF course materials, the same blended or online course materials may be reused in subsequent terms.
For example, the syllabus, required readings and media files, among other components, may be utilized the next
time a course is offered through the LMS. Many LMSs include a function to import or transfer selected materials
used in one term to the next, saving faculty some time in course preparation.
Related to this, programs must weigh the advantages and disadvantages of students having access to past DE courses
should they need to retake them. For example, students could have access to past discussion forums, complete with
former classmates’ responses and faculty feedback, and quizzes or tests that may have been auto-graded within the
LMS. To support students’ development and to ensure the originality of work submitted when retaking a course,
programs have some options, including: hiding the prior course from selected students’ view; reordering the
presentation of quiz items; selecting a different subset of items from the test bank; providing individualized guidance
about approaching in a new way the same written assignment; or amending the directions and associated rubric for
an assignment for selected students.
FACULTY AVAILABILITY AND PRESENCE
Beyond posted class times and office hours, FTF students may not have a keen sense of when their professors are
available or working on class-related activities and materials, such as grading. Best practices in online teaching
indicate blended and online faculty should post times when they are available for online (e.g., virtual office hours)
and real-time contact, via the phone or audio or video conferencing (Keeton, 2004), and they also should clearly post
when they are offline. Because online students engage in their courses on days and at times in line with their
personal schedules, they may misperceive faculty as being available whenever they are engaged in class or studying.
Specifically stating and reminding students of days and times faculty are available and unavailable helps to: clarify
and demonstrate faculty’s presence in the course; set reasonable expectations for faculty’s availability and
responsiveness; and facilitate faculty’s work-life balance. Faculty, too, can access their classes anytime on any day,
and determining and abiding by boundaries is important for them as well.
SYNCHRONOUS INTERACTION
Much research continues to focus on satisfaction and learning outcomes associated with different course delivery
methods (EDUCAUSE, 2016; Newhouse & Cerniak, 2015), as well as how much synchronous contact students desire.
To address this, many LMSs’ features create opportunities for real-time, virtual interaction. Two examples are
commonly labeled chats and conferences. Chats tend to be used for unscheduled, informal, synchronous contact
with others who happen to be online at the same time. Many LMSs allow users to notify others when they are online,
or this is signaled automatically after logging onto the course. Interestingly, many LMSs maintain a log of chat
activity; students and faculty who log on at a later time can see when and about what others last chatted.
Conferences, on the other hand, are akin to scheduled, in-person meeting times. Faculty may use this function on a
regular basis to host virtual office hours, or to offer focused discussion and activities in specific weeks of a course.
Many conference features offer audio and video capability, screen sharing, and virtual whiteboards on which
attendees can “write,” akin to blackboards in traditional classrooms. Seamless integration of these features means
users only need to log on to the LMS to participate; there is no need for additional browser windows, usernames and
passwords, or facility with other software. Once scheduled, some LMSs automatically display these conferences
within the embedded course calendar, providing students with a visual reminder when they next log on.
ASYNCHRONOUS INTERACTION
Asynchronous courses and class activities appeal to many students for the aforementioned reasons of balancing
school attendance with work, familial, and other personal roles and responsibilities occurring in various time zones
around the world. LMSs bridge that asynchronicity by tracking the digital footprints students and faculty leave in
LMS-delivered courses. Many behaviors are viewable and time-stamped; thus, students using LMSs can see when
faculty reply to discussion boards, post announcements, or return graded assignments. This helps to demonstrate
faculty’s presence and engagement to students.
Faculty also can provide more personalized contact to students through use of other audio and video LMS features.
In addition to embedding presentations or resources with audio and video accompaniment, faculty can: record audio
or video class announcements; provide audio or video replies to discussion posts; and, provide audio or video
feedback on graded assignments. Depending on the way learning activities are framed, students also may be
required or have the option to provide audio and/or video accompaniment in their assignments.
While these tools may enhance users’ sense of interpersonal connection within a course by providing more
“face time” and opportunities to hear each other, receptivity to these tools may depend on students’ preferences
or needs. For example, some students may prefer or need text-based grading feedback that is displayed at once, is
easy to return to, and permits copying and pasting into a document for use at a later time.
COURSE ANALYTICS
One final useful LMS feature to note is how faculty can see some concrete markers of students’ engagement in class.
As mentioned earlier, both faculty and students leave digital footprints in the course. More specifically, faculty may
be able to see when students last logged on, which pages they spend time on, how many times they visit specific
pages in the course, and which pages they visit minimally, if at all. This data is tracked automatically within many
LMSs and may be available through specific reporting functions of a given system.
Reviewing these analytics provides helpful information to teaching faculty and program administration. For example,
there are multiple ways students enter and move through the LMS. Some students may go to the home screen of
the course and open a tab specific to a certain type of assignment, such as discussions. They will look for the
discussion due that week and proceed to work on the assignment. Other students may look at the entirety of a
weekly module, starting with the overview screen, which orients them to the topic and learning goals, and then they
progress methodically through each subsequent screen. Whatever path is chosen, faculty can see where individual
students go and how often they are engaging in the course. For example, do students log on frequently and
participate consistently across the week, or is their activity mostly at the last minute? Upon reviewing such data,
faculty may see relationships between specific students’ behavior in the course and their course progress (e.g.,
grades), prompting the provision of additional feedback and guidance (Clay, 2014). Course analytic data also may be
used to assess the utility, placement, or presentation of information on infrequently visited portions of the course,
or to run an analysis on quiz items most frequently answered correctly and incorrectly. Lastly, some LMSs have tools
that can look for behavioral patterns, such as students who have not logged onto the course in a certain number of
days, and a message or email can be sent automatically to prompt their return to class.
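As an illustration, an inactivity alert of the kind described above amounts to a simple scan over login timestamps. The sketch below is hypothetical: the function name, the dictionary of login records, and the seven-day threshold are all assumptions for illustration, not any particular LMS's API.

```python
from datetime import datetime, timedelta

def find_inactive_students(last_logins, now, max_days=7):
    """Return IDs of students whose last course login is older than max_days.

    last_logins: dict mapping student ID -> datetime of last login.
    The 7-day threshold is an arbitrary example; real LMSs typically
    let administrators configure it.
    """
    cutoff = now - timedelta(days=max_days)
    return sorted(sid for sid, seen in last_logins.items() if seen < cutoff)

# Example: one student active yesterday, one silent for 10 days.
now = datetime(2018, 3, 15)
logins = {
    "s001": datetime(2018, 3, 14),  # active -> not flagged
    "s002": datetime(2018, 3, 5),   # inactive -> flagged
}
flagged = find_inactive_students(logins, now)
# A real system would now queue an automatic reminder message or
# email for each flagged student; here we simply report them.
print(flagged)  # ['s002']
```

In practice the LMS supplies the login records; the instructor or administrator only chooses the threshold and the wording of the automated prompt.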
While helpful, some of this data should be interpreted cautiously. Some LMSs, for example, track the number of
minutes a student was logged onto the system. In a given week, one student may have spent 180 minutes in the
course, whereas another spent 1,200. These totals are not synonymous with meaningful engagement in the course,
P a g e | 134
however. The first student may have accessed the LMS, downloaded specific course materials, logged out, drafted
her assignments, and logged on again to post her work at a later time; the second student may have logged on
multiple times on multiple days, but was idle within the course for various reasons, such as surfing the web in another
browser window during some of this time.
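The distinction between logged-on minutes and engaged minutes can also be made concrete. One common approach, sketched hypothetically below, is to sum only the gaps between a student's time-stamped click events that fall under an idle cutoff; the 15-minute cutoff and the event data are assumptions for illustration, not a specific LMS's reporting method.

```python
from datetime import datetime, timedelta

def active_minutes(event_times, idle_cutoff_min=15):
    """Estimate engaged time from a student's time-stamped LMS events.

    Gaps longer than idle_cutoff_min minutes are treated as idle time
    (e.g., the student left the course tab open while surfing the web
    elsewhere) and are not counted. The cutoff is an assumption.
    """
    events = sorted(event_times)
    cutoff = timedelta(minutes=idle_cutoff_min)
    total = timedelta()
    for prev, cur in zip(events, events[1:]):
        gap = cur - prev
        if gap <= cutoff:  # count only plausibly active intervals
            total += gap
    return total.total_seconds() / 60

# One session: clicks at 0, 5, and 12 minutes, then a 60-minute idle
# gap, then two more clicks 3 minutes apart.
t0 = datetime(2018, 3, 15, 9, 0)
clicks = [t0, t0 + timedelta(minutes=5), t0 + timedelta(minutes=12),
          t0 + timedelta(minutes=72), t0 + timedelta(minutes=75)]
print(active_minutes(clicks))  # 15.0: the 60-minute idle gap is discarded
```

Under this estimate, the student who was "logged on" for 75 minutes is credited with only 15 minutes of likely engagement, which is closer to the cautious interpretation the preceding paragraph recommends.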
PROGRAM MANAGEMENT & DEVELOPMENT
The visibility of student and faculty presence and behavior in LMS-delivered courses is amenable to ongoing
observation and assessment by program administration and other institutional personnel (Chen, Lowenthal,
Bauer, Heaps, & Nielsen, 2017; Clay, 2014). Most programs delivering some or all of their coursework online establish
expectations for faculty teaching those classes. For example, these teaching expectations may include the frequency
with which faculty log on to the course; how many weekly, personalized contacts they have with each student; and,
the turnaround time for grading assignments, all of which are time-stamped within the course. Program
administrators overseeing the teaching of these courses may use this data as part of supportive feedback on teaching
presence and style. In the discussion forums, for example, a faculty member could encourage more interaction
among students or provide end-of-week summary posts highlighting key themes and how these relate to learning
objectives.
This data also should be considered in light of other important teaching behaviors. For example, quickly returning
graded assignments in line with programmatic expectations for turnaround is not the same as providing students
with clear, detailed feedback about strengths and areas on which to focus in future assignments. Similarly, requiring
faculty to have a high number of weekly, individualized contacts with each student does not assure the student
experiences that contact as meaningful, productive, or supportive. A balance between timely and meaningful
feedback and contact should be reflected in these program-level teaching expectations.
Lastly, many LMSs permit seamless integration of software to assess student learning. Again, embedding this within
the LMS provides a single, centralized location for all users. Students will submit their assignments for grading as
usual within the LMS, and faculty can grade them alongside completing programmatic rubrics to assess each
student’s attainment of program learning outcomes or other measurements.
CHALLENGES AND RECOMMENDATIONS FOR LMS USERS
Use of any technology requires first learning about it, and the same is true of LMSs: Faculty and students must
learn how to use an LMS before effectively teaching through and learning within it, respectively. With increased
exposure, users understand how the LMS impacts their learning or teaching processes and whether it is, in fact, a
good fit for them. What follows is a brief overview of some of the challenges students and faculty may encounter,
as well as some recommendations for all LMS users.
TECHNOLOGY REQUIREMENTS
Whether web-facilitated, hybrid, or fully online, all courses should clearly state the basic computer specifications
needed to use the LMS, and the browsers and versions thereof supported by the LMS. At minimum, both faculty and
students need: Consistent access to a reliable Internet connection, which could be problematic in some rural and
international locations and during specific times of the year for those living in locations subject to serious and
unpredictable weather, such as hurricanes; a computer with up-to-date software; and, two or more installed
browsers to troubleshoot any difficulties accessing the LMS. Users also need a backup plan should primary
computers or Internet connections fail, and they should look for and complete software updates when available.
Students and faculty also need information about how (e.g., by phone or email) and whom to contact when confronted
with IT difficulties (Quality Matters, 2014; APA, 2013; Merisotis & Harvey, 2000). In some instances, they will contact
their institutional IT department (e.g., unable to log in to institutional email), whereas other concerns should be
directed to the LMS’s helpdesk (e.g., unable to see assignment feedback). Knowing how and when to make this
distinction is essential.
CHALLENGES STUDENTS MAY ENCOUNTER
When taking FTF classes, the physical campus, fellow students, faculty, and the like provide concrete cues that class is
in session, students are in the learner role, and help is available. Courses delivered in a blended or fully online format
place significant responsibility on students’ shoulders to be: Excellent managers of their time; self-starting; and
proactive in identifying when they need help. Students should be encouraged to consider their course calendars in
light of employment and personal ones, and to identify specific blocks of time on multiple days each week to devote
to course activities and requirements (Newhouse & Cerniak, 2015). Classrooms are filled with non-traditional
students who are trying to learn while fulfilling employment, familial, and other role expectations. In many ways, it
is unsurprising that DE students become overwhelmed and have difficulty persisting or excelling in their classes
(Hart, 2012). However, Romero and Barberà (2011) found that greater time flexibility, meaning having multiple times
each day to engage in online course activities, correlated positively with successful completion of assignments.
Identifying a sufficient number of times each week may provide needed flexibility should an unexpected issue arise,
and protecting these times may help students maintain a predictable and organized approach to their DE
coursework.
As mentioned earlier, asynchronous DE classes may be populated with students located around the world. Because
students and faculty are not participating simultaneously in class, it is important for faculty to clearly state the time
zone within which they work, when they will be on and offline, and how quickly they will respond to emails or
messages from students (e.g., within 24 hours). Doing so helps to create a frame and rhythm for the course and
offers some predictability about the asynchronous contact common to many LMS courses. It is helpful to know where
students live as well, especially when synchronous contact is required in a course or when scheduling telephone or
video-conferencing appointments with students requesting such contact. Lastly, clarifying the time zone within
which assignment deadlines occur is crucial for all users.
LEARNING TO TEACH ONLINE
One common misperception about teaching online is that what “works” in FTF classrooms translates readily to the
virtual classroom (Haughton & Romero, 2009; Martin, 2009). As technology evolves, so does its place in education,
requiring psychology faculty to stay abreast of technological innovation, how it impacts student learning, and
how it informs teaching style. A significant body of research concerning best practices in online teaching (The
Hanover Research Council, 2009; Merisotis & Harvey, 2000), DE education and learning styles (Aragon et al., 2002;
Eom et al., 2006; Harrell & Bower, 2011; Perez Cereijo, 2006; Wang & Newlin, 2000), e-learning readiness (Chen,
Lambert, & Guidry, 2010; Lorenzetti, 2005; Welsh, 2007), factors contributing to students’ successful completion of
online psychology courses and programs (Newhouse & Cerniak, 2015; Waschull, 2005; Waschull, 2001), and
students’ preferences for specific learning contexts (Clinefelter & Aslanian, 2016; EDUCAUSE, 2016), among others,
is available. While much of this research is focused on undergraduate populations and is not specific to the study
and teaching of psychology, important insights can be gleaned. This is especially true as innovation within and
improvements to LMSs occur, including the integration of artificial intelligence, simulation, and virtual reality,
prompting faculty to keep an eye not only on what already exists in DE but also on what is to come.
In addition to institutions providing faculty with initial and ongoing instruction in online teaching and use of the
institution’s chosen LMS (APA, 2013), faculty interested in or required to teach online should look to this literature
for guidance at the outset and throughout their online teaching endeavors. To facilitate the development of an
engaging and effective teaching style informed by existing LMSs and other technology, Table 1 provides a list of
preliminary questions psychology faculty may ask of others more familiar or experienced with DE and the use of
LMSs.
Table 1: Questions for Psychology Faculty Developing a DE/Hybrid Teaching Style
1) In terms of teaching style, what similarities and differences have you observed when teaching
psychology in a FTF vs. hybrid vs. DE context?
2) What psychology content have you taught in a DE or hybrid setting and what was that like?
3) What psychology content seems most or least suited to DE or hybrid contexts and why?
4) What features of this institution's chosen LMS are most/least helpful and why?
5) How do you provide feedback to DE or hybrid students?
6) Are students required to have synchronous contact in your course(s)? How do you schedule and
manage that?
7) How have you addressed instances of plagiarism or fabrication in DE or hybrid settings?
8) How and when do you facilitate your DE or hybrid courses during the week?
9) Do you have any suggestions about how to interact with and/or demonstrate teaching presence to
students located around the world? Are there additional cultural issues to consider when teaching online
international students?
10) What is your late work policy and how did you determine that?
PROVIDING FEEDBACK TO STUDENTS
Courses characterized partly or entirely by asynchronous interaction warrant special consideration of
how to deliver assignment feedback to online learners (Pyke & Sherlock, 2010). In a FTF course in which a specific
student is struggling, a professor may provide extensive written feedback, complete with red ink, but discuss it in
person before sending it home with the student. Doing so may soften the receipt of constructive feedback while
affording the student an opportunity to receive additional guidance and support to improve. In online courses,
faculty need to be more deliberate in creating these opportunities. Of course, a DE professor could request the
student arrange a synchronous meeting before releasing the feedback within the LMS; or, she may post the feedback
first and suggest subsequent one-on-one contact. Either way, faculty using an LMS to deliver grades should give
thought to how constructive feedback is phrased. This is especially true for asynchronous, DE faculty-student
relationships which develop from afar: the ability to draw on verbal and nonverbal cues, such as vocal intonation
and facial expression, may facilitate students’ receptivity to constructive feedback and their willingness to seek
future assistance.
COURSE POLICIES
Progression through a course, completion of its assignments, and related academic policies are determined with
purpose by faculty and program administration, and related expectations for class participation and assignment
submissions should be clearly communicated to and understood by all students. Non-traditional students, however, may face a
wider range of possible disruptions to their studies or experience more events-turned-obstacles to their ability to
study than traditional students (e.g., transnational or international travel required by one’s employer, a dependent
child’s or older relative’s illness). Course policies regarding late work, for example, should be clear, discussed at the
outset of the term, and include some reasonable provision for extenuating circumstances impeding students’ timely
engagement in class or completion of assignments.
CONCLUSION
Technology has a clear role in 21st century postsecondary education, and its place in the teaching and learning of
psychology continues to evolve. LMSs and other web-based services provide programs and faculty with a range of features and
tools to support synchronous and asynchronous interaction, academic integrity, writing support, the provision of
feedback to students, and the development of a teaching style informed by technological innovation. To further
support psychology faculty’s learning about and adoption of technological tools, significantly more research is
needed about how faculty use technology, including LMSs, in the teaching of specific psychology content areas.
Discussions about and related research exploring which psychology content areas and related competencies are
more or less suited to hybrid and DE education are needed as well.
REFERENCES
Allen, I. E., & Seaman, J. (2017). Digital learning compass: Distance education enrollment report 2017. Retrieved
from https://onlinelearningsurvey.com/reports/digtiallearningcompassenrollment2017.pdf
Allen, I. E., & Seaman, J. (2009). Learning on demand: Online education in the United States. Retrieved from
http://files.eric.ed.gov/fulltext/ED529931.pdf
Al-Shalchi, O. N. (2009). The effectiveness and development of online discussions. Journal of Online Teaching &
Learning, 5(1), 104-108.
American Psychological Association. (2017). Implementing regulations. Section C: IRs related to the Standards of
Accreditation. Retrieved from http://www.apa.org/ed/accreditation/section-c-soa.pdf
American Psychological Association Committee of Psychology Teachers at Community Colleges (PT@CC). (2013).
Navigating the unique challenges of online teaching: Considerations and recommendations for colleges and
faculty. Retrieved from https://www.apa.org/ed/precollege/undergrad/ptacc/online-teaching.pdf
Aragon, S. R., Johnson, S. D., & Shaik, N. (2002). The influence of learning style preferences on student success in
online versus face-to-face environments. The American Journal of Distance Education, 16(4), 227-244.
Betts, K., Cohen, A. H., Veit, D. P., Alphin, H. C., Broadus, C., & Allen, D. (2013). Strategies to increase online student
success for students with disabilities. Journal of Asynchronous Learning Networks, 17(3), 49-64.
Chen, P. D., Lambert, A. D., & Guidry, K. R. (2010). Engaging online learners: The impact of Web-based learning
technology on college student engagement. Computers & Education, 54, 1222-1232.
Chen, K. Z., Lowenthal, P. R., Bauer, C., Heaps, A., & Nielsen, C. (2017). Moving beyond smile sheets: A case study on
the evaluation and iterative improvement of an online faculty development program. Online Learning,
21(1), 85-111.
Clay, R. A. (2012). What you should know about online education. Monitor on Psychology, 43(6), 42.
Clay, R. A. (2014, Winter). 2014 Education Leadership Conference: Learning in a digital world. The Educator.
Retrieved from http://www.apa.org/ed/about/educator/2014/12/issue.pdf
Clinefelter, D. L., & Aslanian, C. B. (2016). Online college students 2016: Comprehensive data on demands and
preferences. Retrieved from http://www.learninghouse.com/wp-content/uploads/2017/09/OnlineCollege
Students2015.pdf
Costello, E., Corcoran, M., Barnett, J. S., Birkmeier, M., Cohn, R., Ekmeki, O., Falk, … & Walker, B. (2014). Information
and communication technology to facilitate learning for students in the health professions: Current uses,
gaps and future directions. Online Learning: Official Journal of the Online Learning Consortium, 18(4).
Retrieved from https://olj.onlinelearningconsortium.org/index.php/olj/article/view/512
EDUCAUSE. (2010). Seven things you should know about LMS alternatives. Retrieved from
https://library.educause.edu/resources/2010/7/7-things-you-should-know-about-lms-alternatives
EDUCAUSE. (2012). Seven things you should know about navigating the new learning ecosystem. Retrieved from
https://library.educause.edu/~/media/files/library/2012/5/eli7084-pdf.pdf
EDUCAUSE. (2016). ECAR study of undergraduate students and informational technology 2016. Retrieved from
https://library.educause.edu/resources/2015/8/~/media/24ddc1aa35a5490389baf28b6ddb3693.ashx
Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction
in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education,
4(2), 215-235.
Hailstorks, R., Norcross, J. C., Pfund, P. A., Stamm, K. E., Christidis, P., Lin, L., & Hill, Y. (2017, August). Poster session
presented at the meeting of the American Psychological Association, Washington, D.C.
Harrell, I. L., & Bower, B. L. (2011). Student characteristics that predict persistence in community college online
courses. American Journal of Distance Education, 25(3), 178-191.
Hart, C. (2012). Factors associated with student persistence in an online program of study: A review of the literature.
Journal of Interactive Online Learning, 11(1), 19–41.
Hart, J. (2016). Top 100 tools for education. Retrieved from http://c4lpt.co.uk/top100tools/top100-edu/
Haughton, N. A., & Romero, L. (2009). The online educator: Instructional strategies for effective practice. Journal of
Online Teaching & Learning, 5(3), 570-576.
Heitner, K. L., & Jennings, M. (2016). Culturally responsive teaching knowledge and practices of online faculty. Online
Learning: Official Journal of the Online Learning Consortium, 20(4), 54-78.
Howell, S. L., Sorenson, D., & Tippets, H. R. (2009). The new (and old) news about cheating for distance educators.
Online Journal of Distance Learning Administration, 12(3). Retrieved from https://www.researchgate.net/
publication/268359500_The_New_and_Old_News_about_Cheating_for_Distance_Educators
Keeton, M. T. (2004). Best online instructional practices: Report of phase I of an ongoing study. Journal of
Asynchronous Learning Networks, 8(2), 75-100.
King, A. (1993). Sage on the stage to guide on the side. College Teaching, 41(1), 30-35.
Kumi Yeboah, A., & Smith, P. (2016). Relationships between minority students’ online learning experiences and
academic performance. Online Learning: Official Journal of the Online Learning Consortium, 20(4). Retrieved
from https://olj.onlinelearningconsortium.org/index.php/olj/article/view/577
Lorenzetti, J. P. (2005). Lessons learned about student issues in online learning. Distance Education Report, 9(6), 14.
Martin, J. (2009). Developing course material for online adult instruction. Journal of Online Teaching & Learning,
5(2), 364-371.
Merisotis, J., & Harvey, M. (2000). Quality on the line: Benchmarks for success in Internet-based distance education.
Retrieved from http://www.ihep.org/sites/default/files/uploads/docs/pubs/qualityontheline.pdf
Newhouse, N., & Cerniak, J. (2015). Student success factors in graduate psychology professional programs. Online
Learning Journal: Official Journal of the Online Learning Consortium, 20(1), 70-92.
Pelletier, S. G. (2010, Fall). Success for adult students. Public Purpose. Retrieved from
http://www.aascu.org/uploadedFiles/AASCU/Content/Root/MediaAndPublications/PublicPurposeMagazines/Issue/10fall_adultstudents.pdf
Perez Cereijo, M. V. (2006). Attitude as a predictor of success in online training. International Journal on E-Learning,
5(4), 623–639.
Pyke, J. G., & Sherlock, J. J. (2010). A closer look at instructor-student feedback online: A case study. Journal of Online
Teaching & Learning, 6(1), 110-121.
Quality Matters. (2014). Non-annotated standards from the QM Higher Education Rubric (5th ed.). Retrieved from
https://www.qualitymatters.org/sites/default/files/PDFs/StandardsfromtheQMHigherEducationRubric.pdf
Romero, M., & Barberà, E. (2011). Quality of e-learners’ time and learning performance beyond quantitative
time-on-task. The International Review of Research in Open and Distance Learning, 12(5), 122–135.
Smith, E. J. (2013, October 21). Post-traditional learning amid a post-traditional environment. National Association
of Student Personnel Administrators (NASPA). Retrieved from https://www.naspa.org/rpi/posts/NASPAposttraditional
The Hanover Research Council. (2009). Best practices in online teaching strategies. Retrieved from
http://www.uwec.edu/AcadAff/resources/edtech/upload/Best-Practices-in-Online-Teaching-StrategiesMembership.pdf
United States Department of Education, Institute of Educational Sciences, National Center for Statistics. (2015).
Postsecondary education. Digest of Educational Statistics. Retrieved from
https://nces.ed.gov/programs/digest/d15/ch_3.asp
United States Department of Education, Institute of Educational Sciences, National Center for Statistics (NCES).
(2015). Demographic and enrollment characteristics of nontraditional undergraduates: 2011–12. Retrieved
from https://nces.ed.gov/pubs2015/2015025.pdf
Wang, A. Y., & Newlin, M. H. (2000). Characteristics of students who enroll and succeed in psychology web-based
classes. Journal of Educational Psychology, 92(1), 137-143.
Waschull, S. B. (2001). The online delivery of psychology courses: Attrition, performance, and evaluation. Teaching
of Psychology, 28(2), 143-147.
Waschull, S. B. (2005). Predicting success in online psychology courses: Self-discipline and motivation. Teaching of
Psychology, 32(3), 190-192.
Welsh, J. B. (2007). Identifying factors that predict student success in a community college online distance learning
course. Unpublished doctoral dissertation, University of North Texas.
Wright, C., Lopes, V., Montgomerie, T., Reju, S., & Schmoller, S. (2014). Selecting a learning management system:
Advice from an academic perspective. Retrieved from http://er.educause.edu/articles/2014/4/selecting-a-learning-management-system-advice-from-an-academic-perspective
Wright, C., Lopes, V., Montgomerie, T., Reju, S., & Schmoller, S. (n.d.). Selecting a learning management system:
Questions to consider. Retrieved from http://www.educause.edu/visuals/shared/er/ero1434/
selecting_lms.pdf
CHAPTER
13
STRATEGIES FOR PROMOTING
STUDENT ENGAGEMENT IN
SYNCHRONOUS DISTANCE
LEARNING
MICHAEL MUNSON TREKNORTH JUNIOR AND SENIOR HIGH SCHOOL
INTRODUCTION: THE CHANGING CLASSROOM
In August of 1984, I walked into the first classroom that an American high school had designated as “mine.” In
addition to the standard student desks, built-in book shelves, and cabinets in the room, there were specific items
designated as mine to use for instructional purposes. These items were likely familiar to every teacher at that time:
A steel desk with a Formica surface, a manual typewriter, an overhead projector, a pull-down screen for projecting
notes, films, filmstrips, and one box each of yellow and white chalk for the wall-length blackboard. Yes, it was black.
In August 2017, I began a new school year and walked into the “classroom” currently designated as “mine” at a small
school in northern Minnesota. The equipment in this classroom includes: Two ceiling mounted projectors that
project the desktop of my computer (and the work of students I choose to share with the class for discussion
purposes), 40 individual computers on desks that form a sort of horseshoe around a centralized row of computer-less tables that represents our “home-base” for discussion and traditional lecture instruction, a mounted digital
camera and microphone that allow me to visually and conversationally interact with students from my home office
in suburban Philadelphia, two Wi-Fi hubs, and the school network.
The opportunity to teach from home in a manner that is almost exactly the way one teaches while in front of a
traditional class in a brick and mortar school was something close to unimaginable in 1984. Today, the online
education setting is not only possible but increasingly common. Powerful communication technology tools are
ubiquitous to the point of permeating our social and professional lives and are accessed on a daily basis by people
of all ages. These same tools allow teachers to deliver real-time lectures, discussions, assignments, assessments, and
the typical classroom banter that elevates engagement and enthusiasm for a class and its content. Incorporating
these tools into an online-learning environment does, however, come with some risk that requires deliberate and
persistent attention. There is an intuitive assumption that an online education scenario raises the level of student
autonomy and responsibility necessary for successful learning of content. This is true to a point, and we must
recognize that there are two sides to this assumption. First, there is an assumed requirement that students must
possess advanced skills in autonomy and self-direction before enrolling in an online course. This assumption often
leads to directing students away from online courses because they are considered to be, or labeled as, lacking
fundamental skills necessary for success in an online setting. Second is the assumed inability of teachers to design
courses and interact with students in ways that deliberately nurture an adjusted approach to learning in an
environment that essentially mirrors the prevailing ways in which our society communicates, accesses, and acquires
information.
For the past three years, I have taught high school courses online in a real-time distance learning scenario. The real-time interaction between a teacher and students who are in different physical locations is known as Synchronous Distance Learning. This chapter offers suggestions on how the strategic use of technology tools can promote the
development of skills that will allow students to successfully function with greater independent direction in an online
educational setting.
THE BASICS OF TEACHING PSYCHOLOGY ONLINE: ESSENTIAL COMPONENTS
LEARNING MANAGEMENT SYSTEMS
From a teaching perspective, the LMS provides the starting point for student entry to, and ongoing participation in,
an online course. Schools and school districts will usually adopt an LMS, so teachers typically do not need to worry
about comparison and selection. It is, however, important that teachers explore, assess, and become comfortable
with all the tools and options that their particular LMS provides. The vision of what most fully meets the needs of
the students in an effective and efficient manner is the lens through which LMS tools should be evaluated. All of us
are likely to define these terms differently, but understand that the LMS provides the constant online location for disseminating course materials, scheduling assignments, and administering assessments. It is the digital “home” for quick and
meaningful information.
There are many LMS options but they all provide a similar menu of course services and features. The following
features are the most important:
1. Home page for posting essential course information. When students log in, this is the location of all they need to know in order to understand what is happening in the course on a day-to-day basis.
2. Easily accessed and operated by teachers, students, and parents. Most learning management systems work efficiently and effectively. Most also require time for teachers to become familiar with using the LMS features most beneficial to their course goals.
3. Course calendar that clearly shows what is expected each day.
4. One-click access to course activities such as assignments, assessments, resources, forums, blogs, announcements, discussions, etc.
5. Grade book that allows for linkage between the LMS and the grading system used by the school or district. A well-organized LMS grade book makes the manual transfer of grades into the school system less time-consuming.
6. Test bank/question bank that can be built and quickly edited to allow efficient creation of assessments. Packaged test banks accompanying textbooks often need to be substantially supplemented by teacher-created questions. The LMS should feature an uncomplicated process for writing, editing, and storing questions.
The big-picture objective is for teachers to become adept at using the LMS as a deliberate vehicle for growing student engagement and adaptation to learning online. Remember, the goal is to make the daily login
attention-getting, engaging, and useful through easy accessibility to resources and clear directions.
Figure 1.1 shows the announcements forum of last year’s General Psychology course. This image shows the teacher
view of the page. Notice that this view includes the option to edit content. By clicking on the “Edit” menu for each
item, teachers can easily alter or delete their previous postings. Teachers add materials in the “Add a Resource” or
“Add an Activity” menus. Examples include announcements, webpage links, quizzes, discussion forums, wikis, chats,
and more.
Figure 1.1
Figure 1.2 shows the same General Psychology course as Figure 1.1. The difference is that this image shows the
student view of the forum at the top of the home page. The student page lacks the Edit, Add a Resource, and Add
an Activity menus. Students click on the red links to access resources.
Figure 1.2
Figure 1.3 shows one way in which a typical content unit can be designed in a Moodle course. Other learning
management systems have similar layout options. This image shows the first four days of the biopsychology unit. It
provides quick and easy access to the daily agenda and daily activities. I use Google Docs to create a table in which I
enter the day-to-day calendar of activities. I copy each unit portion of that calendar and paste it into the unit on the
LMS. Items in red are links to the specific assignment or assessment.
Figure 1.3
USING THE LMS AS A TOOL FOR STUDENT ENGAGEMENT
As the LMS is the starting point for an online course, it must be considered in the context of goals we wish to achieve.
Our student engagement and growth of skills translate to the use of the LMS in the following ways. First, it is the
digital point at which we grab the attention of our students. Second, it should stimulate interest in the course. Third,
it should conjure a sense of urgency regarding the course and completion of learning and assessment activities.
ATTENTIONAL STRATEGIES
View the LMS in the same way you view your brick and mortar classroom. Most teachers recognize that when
students physically walk into a classroom it is important that they are cued to mentally transition from the less formal
setting and expectations of the hallway to the more formal academic setting and its elevated expectations. When
students enter the online course by logging in to the LMS each day, they benefit from the same type of cue that will
signal that they should shift their focus and cognition toward engaged academic thinking. There are many ways to
provide this. For example, starter activities that appear with instructions when a student enters the class home page
on the LMS are an excellent way to stimulate their cognitive presence. Keep in mind that starter activities should last only 5 or 10 minutes; in addition to functioning as attention-getters, they should also serve as reviews of, or transitions from, previous learning, or as foundations from which to move into new content material.
The following are effective and fairly easy examples of starter activities designed to shift student attention to a mindset of more focused academic thinking.
• Visual sources paired with a specific question that requires short written responses involving student application of previous learning, or speculation regarding content that will be presented later in the class period. The types of visual stimuli that can be effectively used as starters in a psychology course are virtually limitless. I have used brain images, CAT scans, MRI and fMRI images, Bobo doll video clips, Skinner boxes, optical illusions, mazes, graphs, Little Albert video clips, etc.
• Short text excerpts (not longer than a paragraph) used as a stimulus for a question prompt to which students write and submit a short response. Quotes from famous psychologists, research abstracts, patient transcripts, etc., have been used.
• Short quizzes (10 to 12 multiple-choice questions) on previously learned material that is relevant to new material that will be covered, or to current reading assignments, also have been employed.
Whatever starter activity you create and use, design it to meet the goal of shifting the mindset of the student to an
engaged academic mode. Also, keep it shorter rather than longer. Our goal at the start of an online class is the same as in a physical classroom: to prime students for learning and conjure focus that will be sustained through further class activities.
PROVIDING ORGANIZATION AND STRUCTURE TO ENHANCE ENGAGEMENT
Student confusion about what they are expected to do and when they are expected to do it leads to lost learning
time. The LMS is the home of your course. Structure it to eliminate as much confusion as possible from the moment
a student logs into the course. Create a day-to-day calendar that includes an agenda for each day. See Figure 1.3 to
see an example of how this might appear on the course home page. For assignments, provide clear timeframes and
deadlines for every assignment and assessment.
STRATEGIES TO STIMULATE INTEREST AND URGENCY
Designing activities and presenting them with clear time expectations is an automatic signal that “this is important
and needs to be done.” Practicing this consistently will help students define the class in like terms. When we provide a structured calendar with a daily sequence of events that includes clearly labeled assessments regularly woven into the course, we certainly reduce confusion and lost time. We also signal that course content is important because
it will be regularly assessed. By infusing our home page and course activities with ubiquitous signals that “this is important,” we promote a positive shift in student perception toward a mindset of increased engaged attention.
LOCATION FOR WRITTEN WORK: GOOGLE CLASSROOM
A more recent introduction to the online learning and teaching scenario is Google Classroom. The use of Google
products in American education has grown substantially since Google Classroom was introduced in 2014. Google
Classroom is one of eleven components that make up Google’s G Suite for Education, and each of these components is a valuable addition to any virtual classroom.
For most teachers, Google educational components arrive as part of the G Suite in which the school or district has
enrolled. G Suite provides tools through which teachers and districts can organize assignments, track student
performance, and initiate communication with parents, students, and other teachers. The most apparent pieces of this
integrated organizational system are Classroom, Gmail, Google Drive and its applications, and Google Calendar. All
of these tools are immediately available to teachers when their school or district subscribes to G Suite.
For teachers, Classroom is instantly usable and productive. A clear and direct tutorial provided with the first login
makes it easy to create a course. Navigating this straight-to-the-point tutorial takes only a few minutes but enables
one to move directly into creating classes into which students enroll using an access code. Once students enroll, they
receive announcements and assignments posted by the teacher. They complete and submit these assignments
entirely online. Teachers likewise score and “return” the assignments online. While scoring student work,
teachers can directly edit the student document, add commentary, or post private remarks regarding the assignment
if they choose.
The process of assigning course work in Google Classroom is achieved in a variety of ways. One common approach
is to create a Google Docs assignment in Google Drive. In Classroom, select “Create assignment” and insert the
assignment document from Google Drive. Teachers have options regarding how the assignment is presented and
how the assignment is completed. Selecting “Make a copy for each student” means that a personal copy of the
assignment will be available for each student. This is the digital version of making a paper copy of an assignment and
distributing it to individual students in class. The student then completes the assignment individually without
collaboration. Another option is to create an assignment yourself and insert it into the Classroom page. Selecting “Students can edit” encourages collaboration because all students edit the same document.
Figures 1.4, 1.5, and 1.6 provide examples of Google Classroom assignments. Figure 1.4 shows an assignment posted
in Google Classroom.
Figure 1.4
The assignment includes a link to an online activity and a study guide that goes with it. This assignment followed an
introductory discussion on biopsychology and a description of neurons and their functioning. Notice that each
student has their own copy of this assignment. This means that they will complete it and submit it individually. Figure
1.5 is a document as it would appear to students when they open an assignment. This is also an assignment in which
each student receives their own copy to complete and submit individually.
Figure 1.5
Figure 1.6 is an assignment in the “Students can edit file” mode. This allows all students to enter comments and edits
to a single document. For this assignment, I added a column in class the next day for rating the strength of evidence (0 = No Relevant Evidence, 1 = Weak Evidence, 2 = Adequate Evidence, 3 = Strong Evidence) in support of the position
expressed by the student in the row below. After the ratings were entered and we discussed the qualities of research-based evidence, students edited their responses.
Figure 1.6
USING GOOGLE CLASSROOM AS A TOOL FOR STUDENT ENGAGEMENT
As with the LMS, we need to consider goals beyond simply transmitting course content. If we accept the idea that
the LMS is the attention-grabbing starting point that encourages students to mentally shift into learning mode, we
must also realize that the activities assigned and worked on in Google Classroom (and other platforms) must
maintain, rather than detract from, the focus and mental activity gained with the starter in the LMS. In my classes, I
follow a sequence in which I try to keep students engaged, active, and focused on content goals while also supporting
their sense of “this is important” and personal responsibility to know and understand. This sequence, even with its
variations, is reflected in the day-to-day schedule shown in Figure 1.3. It allows students to become familiar with the
pace and approach I expect and the manner in which we use our time during class. It also establishes a routine that
requires less explaining of “What are we supposed to do?” so that I can spend more time teaching and discussing
content and students can spend more time learning and applying it. Designing assignments for use in class, and also
as homework, for an online class always requires tending to the goal of maintaining enthused engagement while
also promoting content acquisition. The reality of online teaching is that students and teachers are in two different
physical locations and this distance can inhibit the sense of connection in the academic relationship between teacher
and student. One way to work against the potential disconnect is to use our technology tools to create an active
class culture by thoughtfully scheduling class time around activities that engage students both because the activities are inherently valuable and because they are scored as part of the course grade.
Table 1.1 shows the variations of the common sequence I employ in my online classes. Classes are 70 minutes long and
start with me using Google Meet to take attendance and quickly explain the starter.
Table 1.1
Starter (5-10 minutes): visual sources, text excerpts, or short quizzes (Moodle).
New content options (30-35 minutes): direct instruction via lecture/discussion (Google Meet); multiple-choice quiz (Moodle); collaborative/group assignment that requires more time (Classroom); forum discussion (Moodle or Classroom).
Content application or assessment (25-30 minutes): assignment requiring application of lecture/discussion or reading topic/s (Classroom); quiz (Moodle).
Students adapt to this routine rather quickly and become more productive, more engaged, and more interested over
time. When I started teaching online in 2014, I commonly filled class time with lectures or assignments that ran too long for students to maintain focus, and enthusiasm waned. One of the most important qualities of this
structure is that even when I am not lecturing/discussing, I can directly interact on a one-to-one basis with any
student at any time. In both Moodle and Google Classroom teachers can “pop-in” on student work and offer
suggestions and comments. For example, often when students are taking a quiz on Moodle, I will check in on their
quizzes as they answer questions. As they progress through a quiz, they are instructed to “flag” questions they find
confusing. When I check their quizzes as they work, I click the flagged questions and offer advice or clarification.
When they are working on an assignment in Google Classroom, I click on student documents to monitor progress,
make recommendations, and add comments or encouragement. Often I will ask a student if I can anonymously share
their work with the class because it represents an exemplary response. When they consent, I copy and paste the
exemplary response in a document on my computer, share my screen with the class, and explain how the sample is
a perfect way to approach the question, prompt, or assignment. This always produces responses such as: “Oh, now
I get it. Thanks.” Or “Wow, that’s great. Whose is it?”
Remember, one of our main goals is to generate and maintain engagement and enthusiasm for our course while not
being physically present. Pop-ins like these help students understand that you care about their work and are helpful
and accessible.
WEBEX AND GOOGLE MEET
Synchronous Distance Learning courses operate with real time class meetings through which students and teacher
interact through video conferencing. The teacher instructs students directly during a specified, synchronous, class
period in which students and teacher are in different physical locations. This is my experience with online teaching.
From my home office, I meet every day with students who attend a brick-and-mortar school a thousand miles away. My students are scheduled into the computer lab setting described briefly above. From my office, I
join the class via video conferencing in which students see me and any props I choose to introduce, such as brain models or eye models. The computer lab camera is positioned to allow me to view the entire class, and
the microphone allows me to clearly hear questions, comments, and responses to my direct questioning and discussion activity.
Video conferencing ability is an area of ever-evolving options that are increasingly cost-effective as prices range from
free to around $30 a month. I am currently using the Google Meet application available through G Suite for
Education. In various capacities, I have also used WebEx, Skype, and Big Blue Button. Google Meet provides the
essential features of video conferencing and there is currently no fee for its use because it is a component of the G
Suite for Education package. The synchronous format of my courses requires only one AV link for a video conference.
The configuration of the classroom allows me to directly link my computer, camera, microphone, and computer
desktop to a single computer in the classroom. That computer projects my visuals (me and my computer screen)
onto the two screens, located so that students can see what I am presenting from wherever they are sitting. My
voice is transmitted through speakers connected to that computer. I hear students through a strategically placed
microphone and see them through the camera connected to the computer.
The audio and video aspect of teaching via a video conference is only one important consideration. Equally important
is the ability to share your computer desktop because this is our visual explanation platform, just as blackboards,
whiteboards, or projectors are in traditional classes. Screen sharing means you share whatever is on the desktop of
your computer. In all of the video conferencing tools I have used, it is a one click “button” that causes your screen
to be shared with others in your conference (also referred to as meeting). When I share my desktop, my computer
screen is projected onto the screens in our classroom. Anything on your screen is shared; for teaching purposes, presentations, digital whiteboards, documents, webpages, etc. can be shared with smart boards, projectors, or
groups of computers through the video conference. When sharing information with a class, I find it essential to be
able to annotate as I present. I have used the various presentation applications for imagery and text lectures and
this works quite well in explaining content.
An option I now use daily is a digital pen and MS OneNote. The digital pen allows me to write on my screen as though
I were writing on a whiteboard and OneNote serves as the canvas on which I can write. Figure 1.7 shows an example
of how digital whiteboards can be used while screen sharing. Users can import text, images, PDF documents, and
presentations as foundations over which annotations are added. The image shows a text outline created in Google
Docs that was copied and pasted into a page in OneNote. I added a screen shot of a blank neuron from a series of
multiple choice questions. I removed letter labels from the screen shot and pasted it on the same page as the outline.
In class, rather than simply describing neurons myself, I directly call on specific students, asking for their explanation or labeling, etc. The result is a more participatory environment, which serves the goal of nurturing active student
engagement. The annotations are student responses and correctly labeled structures.
Figure 1.7
CONCLUSION
By deliberately applying strategies while using readily available technology, teachers can influence students to higher
levels of engaged learning and acceptance of personal responsibility in well-structured online educational settings.
CHAPTER 14
OER: A PATHWAY TO ENHANCE STUDENT ACCESS AND MOTIVATION
ANTON O. TOLMAN AND SETH GURELL
UTAH VALLEY UNIVERSITY
INTRODUCTION
In what is perhaps the most famous of presidential speeches to the American Psychological Association, George
Miller in September 1969 challenged the field to rethink how it could contribute to human welfare (Miller, 1969).
The core theme of his remarks was that “Psychological facts should be passed out freely to all who need and can use
them” (p. 1070). He further stated, “Our responsibility is less to assume the role of experts and try to apply
psychology ourselves than to give it away to the people who really need it – and that includes everyone” (p. 1071).
Miller’s encouragement undoubtedly resonates with many psychologists teaching today in higher education who
make sincere efforts to encourage the application of psychological principles in the lives of their students, but when
it comes to openly sharing the remarkable findings of the field, the reality is that they are often anything but free.
Allen and Seaman (2017) note in their survey of faculty that 68% required a textbook and 53% required articles or
case studies. However, only a small percentage of courses were using some form of openly licensed content material
such as those with a Creative Commons license or that come from the public domain (5% print textbooks and 11%
digital textbooks). Such materials are usually referred to as OER, Open Educational Resources. The majority of faculty
(90%) described the cost of textbooks and materials as an important or very important consideration in their courses,
a fact in direct contrast to the small numbers of those using OER (see also Green, 2016). Clearly, faculty are worried
about the cost of student textbooks, but many are not yet taking direct steps to reduce that cost. This represents an
opportunity for improvement, not to mention reduction of professional cognitive dissonance.
This need for action is becoming more urgent. For example, Jhangiani and Jhangiani (2017) noted a 129% rise in the
price of textbooks over 15 years in the United States; together with increases in college tuition, the long-standing
goal of broadening access for all students to higher education is threatened. Although not specific to psychology,
the Florida Virtual Campus (2016) conducted a survey of 22,000 students enrolled in Florida public institutions of
higher education, concluding that “The financial burden that students must bear for textbooks and course materials
– and its impact on their academic choices and success – is a mounting concern for Florida’s higher education
community” (p. 7). Among their key findings, they note that the cost of textbooks significantly reduces student access
to required materials (67% of students did not purchase a required text), influences grades, and increases time to
graduation. They also reported: Students purchased fewer textbooks than noted in a 2012 study; students tried to
lower costs by purchasing from sources other than the institution’s bookstore; almost half of students took fewer
courses; and significant percentages of students used rented textbooks, either print or digital. Also, fewer students
received financial aid and significant numbers of those who did noted that those funds did not cover textbook costs.
A disturbing result reported by the Book Industry Study Group (2013) was that 34% of their sample reported that
they or someone they knew had downloaded textbooks illegally.
Taken together, these reported trends paint a broad picture of students struggling with difficult decisions about the
cost, and therefore the accessibility, of course materials. The impact of these decisions is still being evaluated, but
given the primacy of commercial textbooks in higher education, it appears to be substantial. For many students,
decisions about whether to continue their education, what to sacrifice, and alternate routes to access are all shaped
by the cost of their textbooks and course materials. Additionally, the stress generated by these decisions, as well as delayed or absent access to materials, also affects how well they learn.
Concerns with student learning and performance lead many professors to describe quality as their primary consideration for course materials and presumably influence their expressed worries about the quality of OER materials, a concern that appears unfounded. Empirical evidence indicates that OER materials are of equal or possibly even
higher value to student performance than traditional commercial texts. For example, in a conference panel
discussion (Hendricks, Jhangiani, & Madland, 2016), Jhangiani described a study involving the random assignment
of a printed version of the OER OpenStax text, the online digital version of the same text, or a commonly used
commercial text such as the David Myers text frequently used in introductory psychology courses. He found no
significant differences on exam performance for the second and third exams between the text versions, but
surprisingly, on the first exam, students using the commercial text performed significantly worse than students using
the printed OER text. Jhangiani believes (personal communication) the students with the OER text performed better
on the first exam due primarily to access: Students were able to use the text from the first day of the semester to
study and learn because it was free, while students with the commercial text likely waited to purchase it until they
believed it was critical or had the funds to do so, a pattern in student behavior that has been reported elsewhere
(Florida Virtual Campus, 2016).
Jhangiani and Jhangiani (2017) also summarized the results of thirteen studies with more than 100,000 students,
reporting that virtually all of the students achieved the same or better outcomes (including lower withdrawal and
higher completion rates) when they used OER in their classes. They conclude that “the significant cost savings to
students … do not come at the cost of academic outcomes.” These findings support the quality of OER as a viable
alternative to traditional commercial textbooks, at least for introductory psychology courses.
Some readers may ask why a chapter on OER is included in a book about technology in teaching psychology. Although
OER texts can be ordered in print form, the reality is that most of them are downloadable as PDF files, often including
embedded links to other digital materials such as TED Talks, YouTube videos, demonstrations, and other useful
materials. Additionally, as online education becomes mainstream (Allen & Seaman, 2017) and faculty increase their
use of institutional Learning Management Systems (LMS; see Cerniak’s chapter in this volume), the opportunity to
integrate OER textbooks directly into digital resources already available to students is growing. This is certainly true
in psychology. Two major sources of OER materials in introductory psychology, NOBA (http://nobaproject.com) and
OpenStax (https://openstax.org/details/books/psychology), are digital texts accompanied by a suite of
supplementary materials including quizzes, test banks, PowerPoint slides, and other materials. By exploring OER in
this chapter, we also are encompassing the use of digital materials in psychology courses.
FACTORS INFLUENCING FACULTY DECISIONS ABOUT OER
In order to understand the factors that influence an instructor’s decisions regarding course materials, it is important
to consider systemic factors including cultural, social, technical, programmatic, and pedagogical infrastructures
within higher education. These factors can, at times, significantly influence what forms of OER might be considered
viable for adoption. For example, if department policy mandates that all sections of a specific course, such as
introductory psychology, utilize the same text and core materials or adhere to a course structure imposed by either
the institution’s LMS or by agreements with publishers, this can shape and limit the options available to those
seeking to adopt OER. On the other hand, some institutions are now beginning to create “Z degrees”, complete
degree programs built entirely on OER (Jhangiani & Jhangiani, 2017); if this trend continues, faculty teaching in these
programs would adopt OER as a routine matter. Some institutions have financial “incentives,” providing one-time
stipends to faculty to adopt and adapt OER into their courses. All of these forces can have significant effects on
faculty willingness to consider adopting OER and, over time, grow the pool of instructors who are doing so.
Research into the motivation of faculty to adopt OER is emerging as the use of OER increases. Martin et al. (2017)
surveyed faculty at Brigham Young University about open textbooks in particular. The top response, by a significant
margin, was to “save students money” with roughly three-quarters of respondents indicating it was a factor. The
second most common response was the belief that OER was of equal quality with commercial offerings. Jhangiani et al. (2016), in a survey of faculty in the British Columbia system, examined several “enabling factors” that motivated
faculty to adopt OER. The top enabling factors were whether a resource was relevant, from a reputable producer,
and easy to access. The authors also found that the perception of quality was influenced by awareness and
familiarity. Faculty who had previously adopted OER rated the quality of OER materials significantly higher than
those faculty who had not adopted OER.
Further research into factors shaping adoption suggests that pragmatic considerations also come into play. McKerlich, Ives, and McGreal (2013) surveyed faculty at Athabasca University and found that faculty were most concerned
about “academic quality” and the time needed to search for OER. This finding was supported by Allen and Seaman
(2017), who noted that finding relevant resources was the biggest barrier faculty faced in adopting OER. It seems reasonable to conclude that faculty reluctance to adopt OER is also influenced by social and institutional culture: the degree to which OER adoption is encouraged by faculty peers, department chairs, deans, and administrators; the degree to which the institution publicly links use of OER to outcomes such as access, retention, and student success; and, even better, whether OER use is seen as validation of a professor's commitment to teaching in the tenure process.
The available studies suggest that faculty are willing to adopt OER but are cautious about quality. Given the multiple demands on their time, faculty are understandably reluctant to engage in searches of OER repositories that vary widely in formats, features, and quality. This problem is particularly relevant to psychology. Apart from NOBA and OpenStax, mentioned above, which focus primarily on providing resources for introductory psychology,
identifying OER materials for use in other courses in the psychology curriculum can be difficult and time-consuming.
This is certainly an area ripe for development and improvement in the field, perhaps led by the Society for the
Teaching of Psychology.
STUDENT REACTIONS TO OER COURSE MATERIALS
Most of the existing data regarding OER adoption is based on faculty surveys and perspectives, but it is also important to hear students' perspectives on these issues; students are the ones directly using these materials, and if OER is to succeed, it must ultimately work for them. As with research on faculty perceptions,
student research is also nascent but growing. In a survey of eight community college institutions, Bliss et al. (2013)
found that students enjoyed their OER textbooks, though some of the features they enjoyed most were related to the fact that the textbook was online (not specific to OER). Hilton et al. (2013) examined student perceptions of OER across multiple math courses; overall, students believed that the materials met their needs. (See Sage's chapter in this volume for a discussion of electronic textbooks.) Cost savings, understandably, are popular with
students. Lindshield and Adhikari (2013) found that both residential and online students appreciated free textbooks.
Martin et al. (2017) found that many students self-reported reinvesting the money saved from free textbooks
primarily into housing, food, and savings. Jhangiani and Jhangiani (2017) summarized multiple studies of student
perceptions of OER and reported that a significant majority of students found OER “easy to use,” felt they were more
current than commercial texts, and felt the quality of OER was equal or superior to that of commercial texts. This finding appears to hold beyond the United States. In their own survey of Canadian students enrolled in higher
education, Jhangiani and Jhangiani (2017) found that 63% rated their OER text to be of above average or excellent
quality and only 3.5% of the sample rated it as below average or of poor quality. More than half of their respondents
disagreed with the statement that they would have preferred to have purchased a traditional textbook.
A PERSONAL PERSPECTIVE
Every semester, Anton Tolman (the first author) teaches a large online section of introductory psychology. Like other professors, Tolman has long been interested in reducing costs to students while still maintaining a high level of quality in the materials. He was also interested in customization: the ability, under a Creative Commons license, to integrate his own content, experiences, and clarifications into the existing text, something that often happens naturally in face-to-face courses. After reviewing available materials, he selected the OpenStax text. The psychology
department had already been making attempts to lower costs to students, dropping textbook costs from
approximately $200 to the $60-70 range, but Tolman was able to implement the OpenStax text in a customizable way using TopHat.com, an external software platform, for $8 per student. While this system was fairly new and worked well overall, it had several weak points, the largest being the external nature of the platform and its lack
of integration into the Canvas LMS system, a major aspect of online coursework. Many students expressed gratitude
for the lowered cost and described enjoying the text as well as the professor’s perspective and additional comments.
After further discussion and exploration, Tolman and colleagues in the psychology department launched in Fall 2017 a modified version of the OpenStax text using a new platform from Lumen Learning called Waymaker
(http://www.lumenlearning.com/what/waymaker/). This version of the OER text was incorporated into almost all
sections of the course, including online, hybrid, and face-to-face courses, with the plan for full implementation in
the following semester. Waymaker is integrated into the Canvas LMS, and although it is not as customizable as the TopHat version and somewhat more expensive ($25 total), it is easily accessible by students, with all embedded material displaying within the LMS; it also has features of formative assessment and mastery learning, including self-checks and the opportunity to take quizzes more than once. Further, the ancillary assignments were impressive in scope and in the depth of application of course concepts to students' lives. Another nice feature is that Waymaker sends automated messages to students, both recognizing success for those doing well and offering encouragement and suggestions to struggling students on how to improve their performance. It also makes it simple for instructors to track
struggling students in order to reach out to them and support them in their learning. Although this implementation
is still early, multiple students have thanked Tolman for the new system, are responsive to the messages, and
appreciate being able to access their text directly through Canvas. Discussions with other instructors in the
department, some using the OER text and platform in face-to-face courses, demonstrated that faculty enthusiasm
for the platform and the text is fairly high and that students' reactions are quite positive overall.
Although specific data is lacking, Tolman’s impression is that use of the integrated OER textbook and materials has
at least resulted in a decreased withdrawal rate from his course. The fact that both students and faculty are generally
positive in their impressions of the text and the Waymaker platform suggests that OER represents a valuable
pedagogical shift in introductory psychology courses in this department. This is especially true because, prior to this implementation, several of the instructors involved had no experience with OER and only decided to "buy in"
to the program because of the groundwork and encouragement by Tolman and Dr. Jessica Hill, the lead instructor
for introductory psychology.
Waymaker does provide customization options for the module quizzes, but at this point, customization and adaptation of the OpenStax text within the platform is not available, although professors can choose which modules to keep or hide and can delete or reorganize the text-based, self-check, and ancillary materials. Tolman has opted to
retain the customized materials he used previously in the TopHat course including notes, stories, explanations of
concepts, and links to external public domain or OER materials through an editable document that is included in
each course module.
THE FUTURE OF OER TEXTS AND OPEN EDUCATIONAL PRACTICE
As various faculty and institutions continue to adopt OER, there are increasing numbers of faculty with multiple
semesters of experience in using OER in the classroom. Although these courses meet the laudable goals of saving
money for students and giving them immediate access to the course on the first day of the semester, there remains additional untapped potential in OER. To better understand the possibilities of OER, the permissions underlying it should be considered.
OER allows not just for the free adoption and dissemination of a resource as-is, but it also allows for modification
such as that described by Tolman above. However, while some faculty may make minor modifications to a resource
before use, reports indicate most do not put significant effort into editing materials (Hilton, Lutz, & Wiley, 2012). De
Liddo (2010) suggests that one reason for this may be that instructors are not accustomed to thinking in terms of
openness. Rao, Hilton, and Harper (2017), pointing to translation efforts of well-known OER projects, suggest that
modification does occur but only if the project is sufficiently well-known and the modification is occurring on a large
scale. These authors cite MIT OpenCourseWare, which has been translated into several different languages, as an
example. Mtebe and Raisamo (2014), in their interviews with instructors in Africa, found that barriers to adoption
ranged from infrastructure to lack of interest. Cannell, Page, and Macintyre (2016) note that in the United Kingdom, as OER adoption increases and relationships are built between institutions and supportive organizations,
opportunities to remix begin to emerge. For example, the Open University, which has expertise in publishing OER,
collaborated with Scottish scholars to produce OER in the Gaelic language. Jhangiani et al. (2016) reported some
instructor concerns about support in adapting an OER. Although the details vary, the research regarding remixing of open educational resources shows common themes: lack of support (in technology, policy, or resource allocation) and low instructor confidence are among the primary barriers.
Over time, theorists and practitioners have begun to explore the concept of “open” beyond the boundaries of
textbooks or other materials; these explorations center around how to promote student learning outcomes by being
open with respect to course content and the instructor’s pedagogy. This field of research is known as Open
Educational Practice (OEP). Glennie et al. (2012) cite the International Council for Open and Distance Education
(ICDE) definition of OEP as “practices which support the production, use and reuse of high quality open educational
resources (OER) through institutional policies, which promote innovative pedagogical models, and respect and
empower learners as co-producers on their lifelong learning path" (pp. 112-113). Many OEP elements, such as self-directed learning, differentiated learning pathways, and collaboration and sharing, will already be familiar to faculty and
staff promoting advanced pedagogy. Ehlers (2011) categorizes open educational practices into “high,” “medium,”
and "low" levels of implementation, with the highest form of open educational practice including both open course objectives and methods.
Robin DeRosa teaches American Literature at Plymouth State University and wanted to reduce costs to students.
However, rather than simply selecting freely available sources and presenting them to the class, DeRosa involved
students in the process. Students were asked to find resources and modify them as necessary to become part of a planned anthology that would serve as the course textbook. As part of their work on the anthology, students posted their thoughts on social media, making the process participatory across boundaries (DeRosa & Robison, 2017).
Additionally, Tolman and Lee (2013) described approaches to integrate student decision-making and participation
in the classroom that include course design, assignment options, and course policies. DeRosa and Robison (2017) noted that the Noba Project holds a contest for student-created videos related to psychology concepts, which are posted on the site and can be accessed and used by other faculty.
Examples of OEP can extend beyond interactions with students. Strohmetz, Ciarocco, and Lewandowski (2017)
discussed the creation of TeachPsychScience.org, which is a website dedicated to freely sharing resources for
teaching statistics and research methods in psychology. Hartnett (2017) described sharing psychology resources on a blog as part of a natural evolution of her practice. In both of these instances, teachers in the field are opening up their practice and describe the process as mutually beneficial to them individually and to the field generally.
CONCLUSIONS
In 1969, George Miller challenged psychologists to “give psychology away” for the benefit of society. Given the
current costs of textbooks, one of the best ways to get the findings of this remarkable field into the minds of as many
people as possible is by making OER widely available to students in psychology courses, especially introductory
psychology. Absent OER adoption, there is evidence of a serious and growing negative impact on students' abilities to learn, remain in higher education, and graduate.
By that measure, the progress of OER in psychology is promising. The foundational work of NOBA and OpenStax, along with the personal experience of one of the authors, has demonstrated that widespread adoption of OER in a
department is achievable with persistence and support. Further effort is needed to promote the creation of open
texts in other key courses in the psychology curriculum. In particular, creation of OER texts in abnormal psychology,
development, statistics, research methods, and neuroscience would be beneficial. Laudable efforts through websites such as TeachPsychScience.org show the field beginning to respond to these needs, but continued expansion is vital. The major factors that appear to influence the faculty decision to adopt OER include a desire to reduce student costs and the growing evidence that OER texts are of equal or better quality than traditional commercial textbooks. Concerns about the time and effort needed to identify appropriate OER remain. Institutions and departments could
encourage adoption of OER through incentives, formal declarations, recognition, peer support, and inclusion of OER
as evidence of teaching commitment in the tenure process.
OER does more than reduce costs. It provides a form of technology that promotes access and innovation. Open licensing of course materials gives instructors the freedom to adapt a resource to their own classrooms. It breaks down the traditional boundaries and understandings of what a textbook can be. It allows for open and transparent dialogue about the discourses within psychology. Looking forward, as OER evolves into OEP, Miller's challenge could be exceeded by giving psychology away while fostering a nurturing and vibrant network among theorists, practitioners, and students.
REFERENCES
Allen, I. E., & Seaman, J. (2017). Digital learning compass: Distance education enrollment report 2017. Babson Survey Research Group. Retrieved from https://onlinelearningsurvey.com/reports/digtiallearningcompassenrollment2017.pdf
Bliss, T., Robinson, T.J., Hilton, J., & Wiley, D.A. (2013). An OER COUP: College teacher and student perceptions of
open educational resources. Journal of Interactive Media in Education, 2013(1), Art. 4.
http://doi.org/10.5334/2013-04
Book Industry Study Group. (2013). Student attitudes toward content in higher education. Book Industry Study
Group, 4(1). New York: Author.
Cannell, P., Page, A., & Macintyre, R. (2016). Opening Educational Practices in Scotland (OEPS). Journal of Interactive
Media in Education, 2016(1), 12. DOI: http://doi.org/10.5334/jime.412
De Liddo, A. (2010). From open content to open thinking. World Conference on Educational Multimedia, Hypermedia and Telecommunications (Ed-Media 2010), Toronto, Canada. Retrieved from http://oro.open.ac.uk/22283/1/De-Liddo-Ed-Media2010.pdf
DeRosa, R., & Robison, S. (2017). From OER to open pedagogy: Harnessing the power of open. In R. S. Jhangiani & R. Biswas-Diener (Eds.), Open: The Philosophy and Practices that are Revolutionizing Education and Science (pp. 115-124). London, United Kingdom: Ubiquity Press. doi:10.5334/bbc.i
Ehlers, U. D. (2011). Extending the territory: From open educational resources to open educational practices. Journal
of Open Flexible and Distance Learning, 15(2), 1-10.
Florida Virtual Campus. (2016, October 7). 2016 Student Textbook and Course Materials Survey: Results and Findings. Office of Distance Learning & Student Services. Retrieved from https://florida.theorangegrove.org/og/file/3a65c507-2510-42d7-814cffdefd394b6c/1/2016%20Student%20Textbook%20Survey.pdf
Glennie, J., Harley, K., Butcher, N., & van Wyk, T. (2012). Open Educational Resources and Change in Higher
Education: Reflections From Practice. Commonwealth of Learning. Retrieved from:
http://oasis.col.org/bitstream/handle/11599/80/pub_PS_OER_web.pdf?sequence=1&isAllowed=y
Green, K. (2016). Going digital: Faculty perspectives on digital and OER Course materials. Campus Computing
Project. Retrieved from: https://www.campuscomputing.net/content/2016/2/19/going-digital-2016
Hartnett, J. (2017). DIY open pedagogy: Freely sharing teaching resources in psychology. In R. S. Jhangiani & R. Biswas-Diener (Eds.), Open: The Philosophy and Practices that are Revolutionizing Education and Science (pp. 245-254). London: Ubiquity Press. doi:10.5334/bbc.t
Hendricks, C., Jhangiani, R., & Madland, C. (2016, November 2). Experiences, perceptions, and outcomes of using
open textbooks: Research from the BC OER Research Fellows. Conference presentation at the 13th Annual
Open Education Conference, Richmond, VA.
Hilton, J. L., III., Gaudet, D., Clark, P., Robinson, J., & Wiley, D. (2013). The adoption of open educational resources
by one community college math department. The International Review of Research in Open and Distributed
Learning, 14(4), 37-50.
Hilton, J., III., Lutz, N., & Wiley, D. (2012). Examining the reuse of open textbooks. The International Review of Research in Open and Distributed Learning, 13(2), 45-58. doi:10.19173/irrodl.v13i2.1137
Jhangiani, R. S., & Jhangiani, S. (2017). Investigating the perceptions, use, and impact of open textbooks: A survey of post-secondary students in British Columbia. The International Review of Research in Open and Distributed Learning. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/3012/4214
Jhangiani, R. S., Pitt, R., Hendricks, C., Key, J., & Lalonde, C. (2016). Exploring faculty use of open educational resources
at British Columbia post-secondary institutions. BCcampus Research Report. Victoria, BC: BCcampus.
Lindshield, B. L., & Adhikari, K. (2013). Online and campus college students like using an open educational resource
instead of a traditional textbook. Journal of Online Learning and Teaching, 9(1), 26.
Martin, M. T., Belikov, O. M., Hilton, J., Wiley, D., & Fischer, L. (2017). Analysis of student and faculty perceptions of textbook costs in higher education. Open Praxis, 9(1), 79.
McKerlich, R., Ives, C., & McGreal, R. (2013). Measuring use and creation of open educational resources in higher education. The International Review of Research in Open and Distributed Learning. Retrieved from www.irrodl.org/index.php/irrodl/article/download/1573/2684
Miller, G.A. (1969, December). Psychology as a means of promoting human welfare. American Psychologist, 24(12),
1063-1075.
Mtebe, J., & Raisamo, R. (2014). Investigating perceived barriers to the use of open educational resources in higher
education in Tanzania. The International Review of Research in Open and Distributed Learning, 15(2).
doi:10.19173/irrodl.v15i2.1803
Rao, A., Hilton III, J., & Harper, S. (2017). Khan Academy videos in Chinese: A case study in OER revision. The
International Review of Research in Open and Distributed Learning, 18(5). doi:10.19173/irrodl.v18i5.3086
Seaman, J. E., & Seaman, J. (2017). Opening the textbook: Educational resources in U.S. higher education 2017. Babson Survey Research Group. Retrieved from https://www.onlinelearningsurvey.com/reports/openingthetextbook2017.pdf
Strohmetz, D. B., Ciarocco, N. J., & Lewandowski, G. W., Jr. (2017). TeachPsychScience.org: Sharing to improve the teaching of research methods. In R. S. Jhangiani & R. Biswas-Diener (Eds.), Open: The Philosophy and Practices that are Revolutionizing Education and Science (pp. 237-244). London: Ubiquity Press. doi:10.5334/bbc.s
Tolman, A. O., & Lee, C. S. (2013). True collaboration: Building meaning in learning through sharing power with students. In O. Kovbasyuk & P. Blessinger (Eds.), Meaning Centered Education: International Perspectives and Explorations in Higher Education. Routledge.
CHAPTER 15
TECHNOLOGICAL AND CULTURAL CONSIDERATIONS FOR TEACHING AND LEARNING IN LOWER-RESOURCED ENVIRONMENTS: AN ACCOUNT FROM THE PACIFIC ISLANDS
PHILIP JEFFERIES, DALHOUSIE UNIVERSITY (FORMERLY AT UNIVERSITY OF THE SOUTH PACIFIC)
INTRODUCTION
Some time ago, I had the fortune and privilege to be invited to lecture in psychology for a university that serves 12
Pacific Island countries. This university, headquartered in Fiji, also involves students from its member countries in
the Cook Islands, Kiribati, the Marshall Islands, Nauru, Niue, Samoa, the Solomon Islands, Tokelau, Tonga, Tuvalu,
and Vanuatu. Most of these students remain in their respective countries and engage with courses through what had been termed an "online" mode of education. While I was present, this involved a blended approach of e-learning
and a correspondence course style of learning; in other words, students were supplied with course materials either
in print, or with access to print, but they also engaged in learning activities individually and with others via the
Internet.
Coming from a background of teaching psychology in Western universities, and having some experience of teaching via the Web, I immediately encountered technological as well as cultural challenges and considerations, and I learned that these were often interlinked. In this chapter, I want to share some of these
encounters across themes of learning materials, satellite classes, and assignments. These discussions are applicable
to the respective countries and cultures with which I worked, but I hope that they are also received such that their
transferability to other lower-resourced environments might be apparent, particularly those where there is a need
to take into account cultural differences.
LEARNING MATERIALS
At one stage, I was invited to redevelop courses for my school. While I was relatively confident with the topic content
that could substantiate each of the courses, I was initially thrown by some basic considerations related to access.
For one, the use of textbooks needed to be considered. Textbooks are a prominent feature of courses in the
university, as they remain in many others worldwide, and while some suggest a move away from these to Wikis and
other web-based content (e.g., Cragun, 2007; Lewin, 2009), not all students will have consistent access to the
Internet, and so a book is something they can take away to use wherever and whenever they need to quickly look
up and check terms and topics, as well as read further on a broad subject. However, the university receives students
from diverse backgrounds; some can acquire textbooks without issue, but others are prevented from doing so by cost or other access issues. For instance, textbooks suitable for a course, which I thought to be reasonably priced
back home, were very expensive when reviewing options for buying and shipping to locations in the Pacific. The
university also sought to address the issue of cost by only approving textbooks priced at $100 FJD ($50 USD) or less,
which presented a significant challenge when it came to identifying affordable core texts of a high quality. This meant
passing over what might be considered staple textbooks to consider more obscure texts, often in their first edition,
and potentially from publishers that do not send out reviewer copies, in order to identify cheaper alternatives. These texts ranged significantly in quality, sometimes swapping out core content expected from the leading texts in a course area for seemingly unconventional content, perhaps in an effort to differentiate themselves.
As an alternative, we discussed the potential of incorporating Open Education Resources (OERs) into courses. (See
Tolman & Gurell’s Chapter for a discussion of OER.) Such cost-free resources are ideal for lower-resourced
environments and are often appreciated by students in terms of the quality of information (Cooney, 2017), and for
those without easy access to computers or the Web, they can be printed at a cost much lower than a regular
textbook. However, suitable OERs can be hard to come by, and for the courses I was working on, at the time of
development there was not enough content across various sources to substantiate a textbook alternative. A further
possibility was to develop in-house OERs, but these naturally require a significant time commitment, and depending
on how far an author goes, significant knowledge of the subject area. For instance, a form of OER had been
developed by the university for students to use prior to the online mode of learning coming into force. However,
these were typically the minimum the student would require to learn about a topic and would be the only way this
information would be delivered. Consequently, and unlike most textbooks, they would not tend to provide extended
discussions of subjects for further reading, or the student’s own interest, nor could they be used to clarify content
that may be unclear when first presented (as a textbook might). Catering to these needs requires an OER that goes
beyond minimum learning requirements to become a form of OER textbook. While desirable, this exceeded the
resources available at the time.
Some publishers are aware of the high cost of textbooks and are starting to offer “e-rentals,” where students can
access an e-book at a cheaper rate, but for a limited period. While a good model for those only interested in books
for the duration of the course, this system may rely on access to computers and the Internet, depending on the
proprietary software systems the books would need to be viewed within. These are important considerations for
students coming from lower or intermittently resourced backgrounds, who may be unable or unwilling to access
content in this manner.
SATELLITE CLASSES
The university I worked at also sought to bring students together in what was termed “satellite classrooms.” This
meant that a classroom on the main campus would be set up in such a manner that it could be live-streamed to
classrooms in campus centres at remote locations, which were simultaneously shared back to the hub classroom.
The idea was to share the same classroom experience across multiple locations and at a relatively low cost. In other
parts of the world, live MOOCs allow students to connect with teachers and other learners in similar ways via various
devices and from various locations, but due to limited access to technology and unstable connections,2 the current setup in the Pacific Islands is to use the dedicated facilities in the campus centres.
I had experience in coordinating classes via the Internet, managing external participation through audio and video,
as well as chat window contributions. However, I soon realised that participation from those connecting remotely
was especially important in these classes, as these students came with their own particular cultural understandings
and experiences that did not always align with the predominantly Fijian students who were physically present. For
example, when discussing suicide prevention, a nationwide programme that had been run in Fiji was raised as an
example by the students present to illustrate a means of raising awareness and decreasing the incidence of suicide.
This programme would not necessarily be known in all Pacific Island countries, and may not be appropriate if run
elsewhere, given cultural and political differences between countries.3 While including examples from all backgrounds was not always possible or sensible, the example illustrates that wider participation is important to
encourage greater understanding. In life beyond Fiji, I am becoming aware of other kinds of dominant and typical
environments that may constitute a source of centricity, such as thinking in terms of urban vs. rural, affluent vs.
poorer backgrounds, or local vs. international learners. In addition to avoiding alienating learners, centricity is worth reflecting on and bringing to the attention of students simply to encourage a diversity of perspectives and experiences that may come from those beyond our immediate environment.
There were also various reasons why students could not attend live sessions. In addition to personal circumstances
that prevented their presence, power outages and connection difficulties sometimes rendered the remote centres
unavailable. Recording these sessions at least provided access to the content later, but I found it important to
encourage students to engage with the recordings as if they were present, sharing thoughts on content and
completing activities via forums and in emails directly to me. In this way, absent individuals had the next best thing
to being present in the live session, but it also meant I could share a summary of their contributions with the class
in the next session. Not only does this foster a greater sense of inclusion among the individuals contributing, but because those absent tended to be the same students each time, they represented particular demographics of the class. Affording them these opportunities to be included ensures that the rest of the class does not miss out on the contributions of these groups.
Therefore, although the structure of the series of satellite classes could be planned well in advance, it was important
to make time at a point during the session to incorporate the input of those who had missed the previous live
sessions, giving a sense of greater interactivity, inclusion, and continuity.
In Pacific Island countries, as in many parts of the world, language comprehension is an additional concern, if the
language or accent of a speaker or participants is not common to all. The courses at the university are delivered in
English, but ability in understanding and writing English varies between and within countries. I had been advised to
speak as slowly (and clearly) as possible to cater to the lowest ability, but some students from Fiji, where
encountering English and a variety of accents is fairly common, expressed frustration with teachers who conducted
classes at a particularly slow pace, which they found off-putting and which made concentrating difficult. To manage varying
abilities and preferences, I discovered that some streaming platforms (such as YouTube) allow viewers to adjust the playback speed of recordings, speeding them up or slowing them down without significant distortion. While this may not be of
much use in live sessions, where I tried to speak clearly and steadily, without drastically reducing natural speed, and
where I invited participants to request clarification whenever they needed, I was able to provide recordings in a form that students could manipulate to suit their individual needs. Interestingly, services like YouTube also provide a closed-captions service, so that subtitles can accompany recordings. This text can be downloaded separately and uploaded alongside recordings as transcripts, which serve students with varying accessibility requirements. While this technology is in its early days and the accuracy of transcripts will regularly need to be monitored, it does serve as a quick and free alternative to traditional transcription.
2. I had been advised that some students would need time to get to these classes from their home and from work; some used paddle boats to get from their villages to their local campuses.
3. Milner and de Leo (2010) cite Tonga's stance on suicide as a criminal offence as an example of the need for initiatives to be developed per area or country to take cultural differences into account.
P a g e | 163
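To make this concrete: captions downloaded from a service such as YouTube typically arrive as a timed subtitle file (often .srt), and a few lines of standard Python are enough to strip the cue numbers and timestamps into the kind of plain transcript described here. This is a minimal sketch; the sample captions are invented for illustration.

```python
import re

def srt_to_transcript(srt_text: str) -> str:
    """Convert SRT subtitle text into a plain transcript.

    Drops cue numbers and timestamp lines, keeps the caption text,
    and joins everything into one readable paragraph.
    """
    lines = []
    for line in srt_text.splitlines():
        line = line.strip()
        if not line:
            continue                      # blank separators between cues
        if line.isdigit():
            continue                      # cue sequence numbers
        if re.match(r"\d{2}:\d{2}:\d{2}[,.]\d{3}\s+-->", line):
            continue                      # timestamp lines
        lines.append(line)
    return " ".join(lines)

example = """1
00:00:01,000 --> 00:00:04,000
Welcome to the course.

2
00:00:04,500 --> 00:00:07,000
Today we discuss accessibility."""

print(srt_to_transcript(example))
```

The resulting text can then be posted alongside the recording, though as noted above, auto-generated captions should be proofread before the transcript is shared.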
COURSEWORK/ASSIGNMENTS
Setting and managing assignments also requires some additional considerations in less resourced environments, as
I discovered during my time in Fiji. For instance, in psychology we encourage students to draw on the evidence base when writing essays, but the current "gold standard" of evidence tends to be found in higher-ranking, peer-reviewed journals, typically behind publisher paywalls; if the university does not subscribe to those journals, much of the material may not be accessible. Therefore, I would encourage access to whatever sources were available, but also
encourage greater scrutiny of these resources. We often try to encourage critical thinking about knowledge and
sources, especially as journal articles themselves are often contentious, but in these situations where access may be
restricted, I would advocate giving greater space in assignments to thinking about source validity and credibility, rather
than focusing on the number of primary sources involved.
One student also described to me the challenge of finding time to visit the campus centre to work on a computer and to submit her assignment. An additional concern, therefore, is the actual time students have to access computers, the Internet, or other resources (e.g., campus libraries) to work on assignments. This may vary significantly and can contrast sharply with the typical Western student, who is able to work on assignments remotely whenever they have time to do so. For students in lower-resourced environments, it is important to allocate sufficient time for individuals to receive assignment instructions, to consider their approach, and to access sources for whatever periods they can realistically dedicate, and, if those periods are spread out, additional time to recap and reflect before redrafting or working further.
Similarly, when group work took place, I became aware of the need to share such considerations with others, so that those with greater access to resources could be more sympathetic and better able to manage time and input in groups with members who had less access. Those with greater access to sources (through larger campus libraries or greater Internet availability) could sometimes take on roles as information gatherers within their groups, while others served as evaluators. As group work allows connections to be made between individuals of varying backgrounds, I was also aware of the need to encourage listening without judgement and other ways for students to get along with each other better.
CONCLUDING REMARKS
Working in Fiji with students from across Pacific Island countries gave me insight into the challenges involved in teaching and learning within lower-resourced environments, as well as important considerations when working with individuals from diverse backgrounds. Some of these approaches and workarounds may be useful in other comparable learning environments. I found that fostering an atmosphere of openness helped to identify issues, and while resources may not always be plentiful, creative solutions can generally be found to support those who are enthusiastic about learning, wherever they are.
REFERENCES
Cooney, C. (2017). What impacts do OER have on students? Students share their experiences with a health
psychology OER at New York City College of Technology. The International Review of Research in Open and
Distributed Learning, 18(4), 1–24.
Cragun, R. (2007). The future of textbooks. Electronic Journal of Sociology, 1(1), 1–14.
Lewin, T. (2009). In a digital future, textbooks are history. New York Times, 8.
Milner, A., & de Leo, D. (2010). Suicide research and prevention in developing countries in Asia and the Pacific. Bulletin of the World Health Organization, 88, 795–796. https://doi.org/10.1590/S0042-96862010001000019
CHAPTER
16
DESIGNING INCLUSIVE ONLINE
ENVIRONMENTS FOR STUDENTS
WITH DISABILITIES
AMY HEBERT KNOPF, ST. CLOUD STATE UNIVERSITY; ELISE K. KNOPF, STATE OF MINNESOTA VOCATIONAL REHABILITATION SERVICES; STEPHEN G. ANDERSON, HAMLINE UNIVERSITY; AND WALTER J. WARANKA, LIFETRACK RESOURCES
INTRODUCTION
Creating an inclusive and equitable online environment requires that faculty have a clear understanding of the underlying principles of Universal Design for Learning (UDL). UDL postulates that barriers are not inherent to the individual with a disability but are created by society and the environment (Meyer & Rose, 2005). Thus, faculty play a critical role in ensuring courses are designed to meet the needs of all students and to remove barriers.
According to the National Center for Education Statistics (2016), approximately 11% of undergraduate students report having a disability. UDL does not benefit only students with disabilities; its principles of representation, expression, and engagement aim to meet the needs of all students (Rose & Meyer, 2002). The success of students with disabilities in online education is predicated on how faculty design their courses for access, inclusion, and engagement. Betts (2013) asserts that "Success for online students with disabilities requires an institutional commitment to accessibility." Not only is an institutional commitment necessary; a personal commitment by every faculty member is also required. While some higher education institutions have made strong commitments to supporting the needs of students with disabilities by providing training to faculty, many fail to hit the mark. Indeed, the availability of training for faculty on how to provide inclusive strategies in the online environment is very limited on many campuses.
Online teaching environments vary within institutions and across institutions in how courses are designed, how
content is delivered, and by level of engagement of the user. The student experience from one online course to
another, even within the same program, can look very different. When an instructor creates a course, the content
can be displayed in a variety of formats, and in several places in the Learning Management Systems (LMS).
Navigating the information can be frustrating and confusing for any student, but can pose an even greater challenge
for students with disabilities. Taking into consideration a few simple guidelines will not only make the online
environment more accessible, it will make the experience more positive for all students.
CONSISTENT AND STANDARDIZED COURSE SET-UP
When designing the online class environment, faculty should work together to ensure students access information
the same way across all courses. For example, below are screenshots from an online LMS highlighting three different classes within the same program. Each class has labeled folders, but the labeling schemes still present challenges for students who use assistive technology to access the course materials. In Example 1, the Table of Contents labels each folder with the week and the date range. This set-up is helpful for students using screen-reader technology when trying to find the correct folder. In Example 2, the folders are labeled inconsistently: sometimes by week, sometimes by date, and sometimes by topic. A student in this course would most likely need to refer frequently to the syllabus to confirm which week and topic were needed. In Example 3, the folders are labeled only by week number (Week 1, Week 2, and so forth). This set-up also requires students to check the syllabus to identify the appropriate "week" folder.
EXAMPLE 1
EXAMPLE 2
EXAMPLE 3
The greatest challenge with the three examples above is that students are required to navigate each course differently. Learning to navigate each course in such different ways can cause frustration and confusion, particularly for students who struggle with technology and those who need to access it in different ways. Conversations across programs and disciplines should focus on how an institution can provide students with a similar experience in the online environment. When students enroll in a class, they should feel that they are enrolled at a specific university, with the same feel across all of the courses they take. Course set-up consistency will reduce barriers, reduce frustrations, and allow for easy navigation.
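One low-tech way to enforce this kind of consistency is to generate the folder labels programmatically instead of typing them by hand. The sketch below (plain Python; the 2016 start date and the Monday-to-Friday teaching week are assumptions for illustration) produces labels in the Example 1 style, week number plus date range:

```python
from datetime import date, timedelta

def week_labels(first_monday: date, n_weeks: int) -> list[str]:
    """Generate consistent LMS folder labels: 'Week N: <Mon>-<Fri>'."""
    labels = []
    for week in range(n_weeks):
        monday = first_monday + timedelta(weeks=week)
        friday = monday + timedelta(days=4)
        if monday.month == friday.month:
            span = f"{monday.strftime('%b')} {monday.day}-{friday.day}"
        else:  # week straddles a month boundary
            span = f"{monday.strftime('%b')} {monday.day}-{friday.strftime('%b')} {friday.day}"
        labels.append(f"Week {week + 1}: {span}")
    return labels

for label in week_labels(date(2016, 8, 22), 3):
    print(label)
```

Generating the labels once and reusing them across every course in a program guarantees that a screen-reader user hears the same folder pattern everywhere.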
PROVIDE CLEAR INSTRUCTIONS FOR COURSE REQUIREMENTS AND ACTIVITIES
Most LMSs have common features such as modules for content, discussions, assignments, and announcements.
Frustration can ensue if courses are set up to require the student to check each module every time they log on. Navigation of these modules should be clearly explained in the same place each week to reduce unnecessary "running around" in each module. In the example below, instructions for Week 1 were uploaded in the Table of Contents folder labeled Week 1: Aug 22-26. The instructions give the student clear directives about what tasks are required and where to find the activities and materials. It is important to note that the names used in the instructions are the exact labels used by the LMS at the institution. Using consistent labels is critical to making sure students who access the information through a screen reader can find the appropriate modules.
ONLY USE MATERIALS THAT ARE ACCESSIBLE
The Internet provides a smorgasbord of media for faculty to choose from when creating activities and selecting materials for their courses. Adopting an inclusive philosophy in your course means that if a resource is not accessible to all students, you should either make it accessible or use something else that is.
VIDEOS
When you choose media for your course activities, use only media that includes captions and audio description. If captions are not available, check whether a transcript exists, though that is not ideal. This applies to both rented and online media. If you ask students to watch and respond to a specific movie or video, choose only accessible titles. When choosing captioned or described videos posted online, check them for accuracy, because some videos are not captioned or described correctly. Many YouTube videos are captioned automatically by voice-recognition software, which makes errors. Sometimes podcasts and webinars are not captioned but a transcript is provided; if so, upload the transcript with the media. When creating your own videos, choose a media player that is fully accessible and works across platforms. There are free online tools available that add captions and audio description to videos you or your students produce.
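The caption files such tools produce are simple text formats, and instructors comfortable with a little scripting can generate them directly. The sketch below is a minimal illustration in Python that writes WebVTT (a caption format most web players accept) from a list of (start-second, end-second, text) cues; the sample cues are invented:

```python
def to_timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT timestamp, HH:MM:SS.mmm."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

def write_vtt(cues: list[tuple[float, float, str]]) -> str:
    """Build the text of a WebVTT caption file from timed cues."""
    parts = ["WEBVTT", ""]
    for start, end, text in cues:
        parts.append(f"{to_timestamp(start)} --> {to_timestamp(end)}")
        parts.append(text)
        parts.append("")
    return "\n".join(parts)

cues = [(0.0, 2.5, "Hello, and welcome to week one."),
        (2.5, 5.0, "This week we cover research methods.")]
print(write_vtt(cues))
```

Saving that output as a .vtt file and uploading it alongside the video is usually all a web player needs to offer captions.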
GRAPHICS IN POWERPOINT AND MICROSOFT WORD
PowerPoint presentations are a great tool to supplement an online lecture. Whether a presentation stands alone, has audio embedded, or is used in a video, steps should be taken to ensure accessibility. PowerPoint includes many accessibility features, but you must use the slide templates to make sure your content is accessible. You also need to add descriptions to any graphics used in your presentation. In the example below, the PowerPoint slide highlights how to add text descriptions to your graphics. Please note that your description should go in the description box, not the title box.
Several websites provide in-depth information on how to make your Microsoft Office documents more accessible. If you add audio to your PowerPoint, you should either add captions or provide a transcript.
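Some of this checking can be partly automated when course materials end up as web pages inside the LMS. As one illustrative sketch, using only Python's standard library and invented sample HTML, the code below flags <img> tags whose alt text is missing or empty:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collect the src of every <img> with a missing or empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):          # alt absent or empty string
                self.missing.append(attrs.get("src", "<unknown>"))

page = """<p>Week 1 notes</p>
<img src="brain-diagram.png" alt="Diagram of the limbic system">
<img src="chart.png">
<img src="photo.jpg" alt="">"""

auditor = AltTextAuditor()
auditor.feed(page)
print(auditor.missing)
```

A quick pass like this over exported course pages gives an instructor a to-do list of images still needing descriptions, though it cannot judge whether existing descriptions are meaningful.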
BE A ROLE MODEL FOR INCLUSIVENESS BY REQUIRING STUDENTS TO DEMONSTRATE THE SAME
Demonstrating inclusive and accessible environments goes beyond simply accessing the online environment; it is also a critical teaching tool for showing students how to make their future work environments more inclusive. If students are required to post videos, all efforts should be made to ensure the videos are closed-captioned and that any visual information is clearly described. Requiring students to caption their own presentations is not difficult, and it reinforces the philosophy that faculty should create inclusive environments accessible to all. There are several do-it-yourself captioning tools, including YouTube's CaptionTube. Below is an example of a presentation created by a student (Wilbert, 2017) using MediaSpace's automatic captioning software, followed by a transcript of the presentation.
Students should also employ accessible techniques in their own presentations: teach them to use minimal text on each PowerPoint slide, to reserve text for critical points, and to speak clearly and read what is on each slide aloud so that students listening to the presentation have full access.
Below are some considerations for designing inclusive environments for students with specific disabilities. Although this is not a comprehensive review of all disability categories, the recommendations may be helpful in general and should start you thinking about potential universal design strategies for removing barriers.
CONSIDERATIONS FOR STUDENTS WHO ARE DEAF OR HARD OF HEARING
The number of students who are deaf or hard of hearing entering postsecondary education settings continues to
grow (National Deaf Center on Postsecondary Outcomes, 2017). One of the challenges of helping students stay in
postsecondary education until graduation is ensuring that they have an equal platform to learn and succeed with
their peers who can hear. This challenge can be especially daunting in an online learning environment.
One of the first and foremost principles of providing access to any environment is to ask the person what accommodations they need rather than assuming anything; this, of course, presumes that the person has disclosed a need for accommodations. People who are deaf or hard of hearing vary greatly in their level of hearing loss, how they communicate, what they prefer for communication, how they learn, and how they receive information. It is not one-size-fits-all, like building a ramp for someone who uses a wheelchair for mobility. However, some things can be done to ensure basic online access for a student who is deaf or hard of hearing, and some of these tips may help students who can hear as well.
Here are some general recommendations for providing accommodations to online learning environments for a
student who is deaf or hard of hearing:
• Making all information that is auditory, visual.
• Captioning all videos, or providing transcripts when captioning is not available. There are many ways to provide captioning, and free software to assist in doing so. Try to avoid captions generated by automatic voice-recognition systems, as those tend to be less accurate.
• Using PowerPoint, Keynote, or a similar tool to present information visually and, if voicing at the same time, ensuring the audio is captioned, either by purchasing live Video Remote Captioning services or through a pre-recorded captioning option.
• When hosting a synchronous class, ensuring access to ASL interpreters for students who use sign language to communicate, or CART services for students who prefer captioning for accessing communication in their classes.
• Ensuring the student is involved in the conversation by keeping the interaction at a pace that can be interpreted or captioned accurately and in a timely manner.
• Providing the ability to enlarge text or change colors or fonts as needed.
• Allowing students to control auditory information if the volume needs to be raised or an FM system used.
• Having one person speak at a time and identify himself or herself by name before speaking, which reduces confusion and shows respect for individual input.
• Using a platform that allows IM chat boxes so that all students can interact visually rather than by phone.
CONSIDERATIONS FOR STUDENTS WITH PHYSICAL DISABILITIES
When planning an online class, there are several things to take into account for students with physical disabilities. Do not assume that because your class is online, it is accessible. A student may have a speech impairment, or you may have a student with a sensory disability. Students with a physical disability affecting their motor control (e.g., cerebral palsy, muscular dystrophy, multiple sclerosis, or arthritis) may use adaptive software on their computer; however, if that software is not compatible with the content of the course, your class will not be accessible.
Depending on the particular disability, additional time may be needed for a student to complete assignments and projects. For example, if a student's disability affects fine-motor skills, he or she will likely need much longer to type. A student who uses a mouth-stick or head-stick to type will need still more time and energy, which in many cases results in considerable fatigue. A student who cannot use a keyboard at all may dictate commands to a voice-recognition program on his or her computer. With voice-recognition programs, one thing to be aware of is whether the content of your class is compatible with the student's technology. Naturally, this means working closely not only with your Disability Resource office to determine what accommodations may be appropriate, but also with your IT department to ensure that different assistive technologies are compatible with the course content. Other situations may arise regarding the accessibility of your online course; however, by working closely with your Disability Resource office, many issues can be resolved.
Here are some general recommendations for providing accommodations to online learning environments for a
student with a physical disability:
• If the student is using assistive technology to access the class, make sure you understand how that affects the student in responding to discussions, accessing course content, navigating the site, and taking online quizzes and exams.
• Provide a copy of instructions or class notes.
• Provide ample time to complete online quizzes and exams, and consider whether a timed test is necessary: are you testing knowledge, or how fast one can recall information? If you are concerned about students using course materials, consider a proctored exam; most disability services offices have testing rooms for students with disabilities.
• When designing online class activities, consider how a student with a physical disability will be able to participate equally.
• If the student has a communication disability, take time to understand his or her comments and repeat them to ensure communication is clear and accurate.
• Have one person speak at a time and identify himself or herself by name before speaking, which reduces confusion and shows respect for individual input.
• Use a platform that allows speech-to-text in IM chat boxes so that all students can interact.
CONSIDERATIONS FOR STUDENTS WITH VISION LOSS
Students with vision loss use a variety of tools and devices to access online content, depending on their level of visual impairment. Regardless of the tool a student uses, it generally takes much longer to read or listen to the material. In addition, students who use a screen reader must rely on headings and links being appropriately labeled to move from one link or folder to another. New technologies for accessing online information are being developed every day, but not every student is aware of what is available or has learned to use it. Take the case of Wally.
Wally’s Story: For 58 years, I have been living my life as a blind person. I attended college in the early 1980s.
At that time, online instruction wasn’t an option. If I had the opportunity to access my classes online like
students do today that would have been the biggest game changer for me. The introduction of the Internet
has made a significant impact on my life and access to a world of information. As a person, who is blind, the
Internet has opened up a whole new world for me. I am able to communicate with others, and access
information that I never knew was available. More important in the development of the Internet was the
creation of screen-reading software. This means literally anything I have on my screen will be read back to
me. To make it even better for me was the advancement of software that has made it possible to use touch
screens; something I thought I would never be able to do. My college experience would have been so much
better if I were afforded these opportunities when I was in school. There would have been no dragging
around books or carrying that heavy tape recorder to record and playback the audio of my classes; the issue
of getting to class would have been less stressful. Accessing all the visual information in the class, such as class notes from the professor, information written on the board, and handouts, in an online course would have been a big benefit for me (W. Waranka, personal communication, July 28, 2017).
Online courses provide a less stressful environment for students with vision loss when access to visual information is taken into consideration.
Here are some general recommendations for providing accommodations to online learning environments for a
student who has a vision loss:
• Making all information that is visual, auditory.
• Providing audio description for all video media, such as television shows, movies, and videos. Audio description narrates important on-screen information that is seen but not heard, so the student can follow the plot of a scene.
• Adding alternative text descriptions (alt tags) to any pictures or graphics used in class materials.
• Being considerate of how much reading is required, including online instructions, discussion posts, and dialogs; these can become very difficult and time-consuming to get through.
• When providing handouts, asking the student whether a specific font, size, or color is needed.
• If there are graphs or other visual charts, working with disability services to provide raised-line drawings and other tactile modes.
• Providing detailed instructions for completing and submitting assignments. For example, do not write "post in the dropbox;" instead write "submit the assignment in the folder labeled xxxx located under the Assignments tab."
• Relaying visual information that is happening in the online environment, such as announcing when the "raise hand" tab is being used.
• Offering alternative testing formats where necessary; work with the student to find out what his or her needs are.
• Having one person speak at a time and identify himself or herself by name before speaking, which reduces confusion and shows respect for individual input.
• In a synchronous environment, making sure all visual information is read aloud, including PowerPoints, information on the whiteboard, and handouts.
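The reading-load consideration above can be budgeted roughly in advance. The sketch below assumes a listening rate of about 150 words per minute; that figure is an assumption for illustration, not a standard, and actual screen-reader speeds vary widely by user:

```python
def listening_minutes(text: str, words_per_minute: int = 150) -> float:
    """Estimate the minutes needed to listen to text via a screen reader.

    Counts whitespace-separated words and divides by an assumed
    listening rate; returns the result rounded to one decimal place.
    """
    words = len(text.split())
    return round(words / words_per_minute, 1)

post = "word " * 600   # stand-in for a 600-word discussion post
print(listening_minutes(post))
```

Totaling such estimates across a week's readings, posts, and instructions gives a quick sense of whether the workload is realistic for a student who listens to, rather than skims, every word.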
CONSIDERATIONS FOR STUDENTS WITH ATTENTION DEFICIT (ADD/ADHD) AND LEARNING
DISABILITIES
According to the Learning Disabilities Association of America (2017), Attention Deficit Disorder (ADD) and Attention
Deficit Hyperactivity Disorder (ADHD) are not considered learning disabilities. However, they share some of the same
challenges in higher education. Students may have difficulty with focusing, comprehension, and listening to lectures.
Providing instruction in a multi-modal way is the best approach to meeting the needs of diverse student learners.
Here are some general recommendations for providing accommodations to online learning environments for a
student who has ADD, ADHD or a Learning Disability:
• Make sure all information that is auditory is also visual, and all visual information is also auditory.
• Provide information in multiple formats.
• Organization is key: make sure your course is highly structured and organized.
• Develop several shorter presentations that use a variety of aids to keep students' interest, instead of one long presentation.
CONCLUSION
Incorporating inclusive teaching strategies provides students with disabilities the access they need to be successful in their courses. The benefits of inclusive design reach far beyond students with disabilities, helping all students, including students from diverse backgrounds, non-traditional students, and ESL students. Additionally, students come with a range of learning styles, and providing a variety of accessible formats meets the needs of all learners. Applying the principles of UDL helps to create an online environment that is inclusive and equitable. This philosophy compels faculty to design environments that are accessible to all students regardless of their abilities. While this chapter has offered practical considerations for designing an inclusive online platform, additional and continuing training on the platforms used at your institution is highly recommended.
RESOURCES TO CONSIDER:
National Deaf Center on Postsecondary Outcomes - http://www.nationaldeafcenter.org
American Foundation for the Blind - http://www.afb.org
United Cerebral Palsy - http://www.ucp.org
Center for Applied Special Technology - http://www.cast.org
Learning Disabilities Association - https://ldaamerica.org
Disabilities, Opportunities, Internetworking and Technology - http://www.washington.edu/doit/
National Center on Universal Design for Learning - http://www.udlcenter.org
REFERENCES
Betts, K. (2013). National perspective: Q&A with National Federation of the Blind & Association of Higher Education and Disability. Journal of Asynchronous Learning Networks, 17(3), 107–114.
Meyer, A., & Rose, D. H. (2005). The future is in the margins: The role of technology and disability in educational reform. In D. H. Rose, A. Meyer, & C. Hitchcock (Eds.), The universally designed classroom: Accessible curriculum and digital technologies (pp. 13–35). Cambridge, MA: Harvard Education Press.
National Deaf Center on Postsecondary Outcomes. (2017). Postsecondary enrollment and completion for deaf students. https://www.nationaldeafcenter.org/sites/default/files/Postsecondary%20Enrollment%20and%20Completion%20for%20Deaf%20Students.pdf
Rose, D. H., & Meyer, A. (2002). Teaching every student in the Digital Age: Universal design for learning. Alexandria,
VA: Association for Supervision and Curriculum Development.
Wilbert, C. (2017). Educational group for domestic violence advocates who are deaf. Class presentation for Dr. Amy
Hebert Knopf’s CEEP 666 Group Counseling.
CHAPTER
17
DIGITAL TOOLS TO DELIVER
CONTENT AND ALLOW FOR
INTERACTION
DIANA M. MILILLO
AND
ADAM PILIPSHEN
NASSAU COMMUNITY COLLEGE
INTRODUCTION
Any introductory psychology professor can tell you it is not an easy task to teach upwards of 15 chapters of content
in a given semester. After making time for introductions and assessments, this leaves professors with little time to
both ensure students understand key concepts and allow for deeper interaction with the material and each other.
Many instructors look to digital tools (either web-based or downloadable software) to help with the demands of teaching content-heavy subjects to a tech-savvy student body in short periods of time (Perez & Hurysz, 2011).
Earlier chapters in this e-book describe how technology has revolutionized education. For digital natives, or those
who have grown up with access to the Internet, finding information online is easy, and students today no longer
need to have solely content-driven classes. The increasingly popular model of the “flipped classroom” is one in
which instructors can direct students to lecture or content-based resources outside the classroom, and then spend
their face-to-face time participating in deeper cognitive or interactive-type learning activities (LaFee, 2013). While
there are many well-made psychology videos (audio, visual, and content) or podcasts (audio and content) already
available online (e.g., Khan Academy), instructors may also choose to create their own. By making their own content
materials, instructors can: a) feel confident that the material conveyed is specific to their teaching goals, and b)
increase the sense of connection to their students. Instructors may also want a digital record of their own lectures
for students to review on their own time. As we will describe, presentation-making software has come a long way since the inception of Microsoft's PowerPoint, and today's tools allow instructors (sometimes with minimal effort) to create engaging presentations that are portable and work with many sharing platforms.
Besides using digital tools (e.g., videos, slideshows, podcasts) as outside resources for learning or reviewing content independently, many of the digital tools described in the sections below allow for more interaction between students, or let students interact with the material on a deeper cognitive level while checking for comprehension (Gupta, 2014). Yet how do psychology instructors master the ever-changing world of digital technology? The choice of digital tools that can be applied to education is overwhelming, and these tools are the subject of many education-related blogs, which can report on newer, cutting-edge tools faster than journals can publish. One need not feel overwhelmed keeping abreast of all the newest tools; instead, experiment to find one that works for your primary needs and goals, and develop some technological expertise with it.
SECTION 1: DIGITAL TOOLS TO DELIVER CONTENT
PRESENTATION-MAKING TOOLS
This first section is a short review of a few tools for creating or organizing content to be used for self-instruction outside the classroom (i.e., flipping the classroom), for storing one's class lectures in a format that allows students to revisit the material after class, or for having students generate their own content. We will review several presentation-making programs: Microsoft Mix, Emaze, Canva, and Powtoon. While this is not a thorough review of presentation-making programs, it showcases features, differences, and possible areas of use in courses. (Google Slides, another popular presentation-making program, is discussed in Section 2 because of its highly interactive capabilities.)
MICROSOFT MIX
While many presentation-making options are available today, Microsoft’s PowerPoint continues to be a favorite
among professors to use in the classroom for helping to deliver content or emphasizing key points (Clark, 2008). Its
familiarity among students perhaps plays into the perception that lecture material presented using PowerPoint
seems more clear and organized, although there are dangers of PowerPoint presentations being dry and unengaging
(Eves & Davis, 2008). A free downloadable add-on for PowerPoint 2013 and 2016, called Microsoft Mix, can help static PowerPoint presentations become more dynamic, portable, and interactive.
Microsoft Mix uses PowerPoint slides as its base to incorporate video recordings from a webcam, screen recordings,
inking (highlighting or pen notations), and quizzes. Mix can be especially helpful in transforming PowerPoints into
videos that instructors can share with students or use as a supplement in face-to-face classes. With an easy-to-use
functionality, Mix embeds itself as a tab within PowerPoint. Once you click on the Mix tab, a menu of options allows
you to insert video, audio, inking, or quizzes into each slide (see Fig. 1).
FIGURE 1. MENU OPTIONS IN MICROSOFT MIX
Mix appears as a menu tab directly in PowerPoint.
The first author often creates video recordings of herself explaining key points on each slide; these recordings can then be placed anywhere on that slide and sized as large or small as desired. One nice feature of Mix is that users can record and edit slide by slide, so making a mistake does not require re-recording an entire presentation. Using
the tab menu, one can also check for comprehension of content by inserting quiz questions (also see section below
on Adding Comprehension Checks in Presentations). Selecting this menu choice connects to Microsoft’s app store
where users can choose to embed one of their popular apps for multiple choice or polling-type questions. Finally,
Mix has the capability of allowing users to finalize their presentation and store it in a number of ways. Because of
the video files that may be embedded on each slide, the full presentation may be too large to expect students to
download onto their own devices, and hosting these presentations on a web-streaming service is recommended.
Users can save the entire presentation as an mp4 (video) file, which can then be uploaded to various video hosting
services (e.g., YouTube) or one’s learning management system (e.g., Blackboard or Canvas). Microsoft also allows
users to sign in and upload videos to their own “Mix” site, which can be shared via a web link. A benefit of hosting one's videos on a personal Mix channel is that analytic data are recorded showing how long students viewed each video and spent on each slide; similar data may also be available if the video is embedded in a learning management system.
Mix is directly downloadable for Windows computers (https://mix.office.com/en-us/Home), but not for the Mac-based version of Microsoft Office. (See Mix's downloadable help guide for teachers: Office Mix Help Guide, 2017.)
EMAZE AND CANVA
While Prezi (https://prezi.com) is arguably PowerPoint's main competitor in popularity, especially for its non-linear and dynamic design (Chou, Chang, & Lu, 2015), many other programs are still seeking recognition as
equivalents to PowerPoint. Web-based programs such as Emaze (https://www.emaze.com) and Canva
(https://www.canva.com) offer the capability of being a “one-stop shop” to create many other types of media, such
as websites, blogs, advertisements, infographics, as well as presentations. Both platforms provide beautiful and
artistic templates, including use of a large repository of high-quality photos. Students may be impressed by the artsy
and eye-catching designs of these presentations (see Fig. 2), which can be made to look more like an advertisement
than a standard slideshow.
FIGURE 2. PRESENTATION DESIGNS IN CANVA
Canva features artistic design templates for presentations.
Presentations can be designed directly online, in a way very similar to PowerPoint, and there is no need to install any additional software. Both websites offer many free utilities, though there is a nominal fee to use certain pictures or templates on Canva (typically $1 per high-quality picture) or a monthly single-user fee for creating more than five presentations or websites on Emaze (which equates to an annual charge of $54 for a Pro Edu account). Emaze, however, offers much more functionality for creating presentations. It can sync with personal YouTube or Flickr
accounts to seamlessly integrate media. All presentations are saved in real-time, with cloud-based storage. Both
sites offer the ability to download your presentations (see Fig. 3 next page), though Canva only allows users to download presentations in picture (jpg) or PDF formats, while Emaze allows downloads as PDF, HTML (web-based), or mp4 (video) files, or viewing through its own Emaze Viewer (which requires a separate install on a computer or tablet).
FIGURE 3. FILE SAVING OPTIONS IN EMAZE
Emaze offers options to save completed presentations as a static pdf file or a video file.
Professors may also opt to create a free website with Emaze and house presentations on their site, which allows Emaze to record analytic data on presentation views. Instructors may want to use Canva for student assignments –
such as creating a detailed one-slide infographic (see Fig. 4).
FIGURE 4. CREATING INFOGRAPHICS IN CANVA.
Canva can allow students to create their own infographic.
Beyond their web-based functionality, the standout feature of tools such as Emaze and Canva is design; both are sure to produce presentations of beautiful, artistic quality.
POWTOON
In a similar vein, Powtoon (https://www.powtoon.com) will dazzle an audience with clever and highly customizable animated videos. Want a hand to write out the title of a slide, or a stick-figure body with your face attached (see Fig. 5)?
FIGURE 5. ANIMATION OPTIONS IN POWTOON
Customizable options for short, animated videos in Powtoon.
The possibilities for creating crafty and engaging slides are endless – and, with that, slides can take a long time to craft. However, the end product will have students thinking you are a witty and creative professor. Powtoon is best used when a quick concept needs to be addressed in a highly engaging manner – such as a first-day introduction. With a free basic account, users can make videos up to five minutes long, with limited storage on Powtoon's site. Creating these animations is similar to other presentation-making programs: users can start with a blank slide or personalize a template (where much of the work is done for you). Because of the motion and animations, the end product works best when run as a video slideshow; however, presentations can also be downloaded and run offline as a PowerPoint file.
ADDING COMPREHENSION CHECKS IN PRESENTATIONS
EDPUZZLE
As we described above, many presentation-making programs can save one's presentations as a video file to be downloaded or shared directly on YouTube. With a handy (and free) application called EDpuzzle (https://edpuzzle.com), users can embed questions directly into YouTube videos, whether videos one uploads oneself or videos that already exist on YouTube. Within the EDpuzzle Teacher menu, users can search for a video
on YouTube, and decide to crop a portion of the video if necessary. Then, at self-selected points in the video, users
choose to insert an open-ended or multiple-choice question, indicating the correct answer choice (see Fig. 6).
FIGURE 6. EMBEDDING QUESTIONS IN EDPUZZLE.
The squares on the timeline above represent separate questions that will appear at those points in the video.
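Under the hood, an interactive video like this amounts to a mapping from playback timestamps to questions. The sketch below is purely illustrative (EDpuzzle's internal format is not public, and the data are hypothetical); it shows one way to model a question track and determine which question should pause playback at a given moment.

```python
# Illustrative model of a timestamped question track for an interactive
# video (hypothetical data; not EDpuzzle's actual format).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Question:
    at_seconds: int                # playback time at which the video pauses
    prompt: str
    choices: List[str] = field(default_factory=list)  # empty => open-ended
    answer: Optional[int] = None   # index into choices, if multiple-choice

def next_question(track: List[Question], last_t: float, now_t: float) -> Optional[Question]:
    """Return the first question whose timestamp was crossed between the
    previous playback position and the current one."""
    for q in sorted(track, key=lambda q: q.at_seconds):
        if last_t < q.at_seconds <= now_t:
            return q
    return None

track = [
    Question(30, "What is the independent variable?", ["Age", "Dosage"], 1),
    Question(95, "Summarize the main finding in one sentence."),
]
q = next_question(track, last_t=25.0, now_t=31.5)
print(q.prompt)  # -> What is the independent variable?
```

A player loop would call `next_question` on each tick, pause the video when a question is returned, and resume once the student answers — which is exactly the behavior the squares on EDpuzzle's timeline represent.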
EDpuzzle videos work seamlessly with Google Classroom, where teachers can assign a video to a class and set control features such as a due date and whether students may skip questions. Instructors without Google Classroom can still use EDpuzzle with students and capture analytic data, although the process requires sending out a link to the video and having students sign in. If an instructor is not concerned with analytic data, the video can simply be played in the classroom while students write down their answers. While this platform transforms the passive experience of watching videos into a more cognitively active one, a major drawback of EDpuzzle is that instructors must consistently monitor videos they have not themselves uploaded to YouTube: if a video is taken down, its integrated EDpuzzle quiz will no longer work.
POLLEVERYWHERE
As a free competitor to clicker-based interactive tools, PollEverywhere (https://polleverywhere.com) is a great platform for engaging audiences with many types of comprehension-checking, critical-thinking, or brainstorming questions. It is designed for live presentations with real-time audience participation, though it must be prepared and embedded strategically beforehand. There are several enticing aspects to PollEverywhere. First, audience members can participate in answering questions via an online link or a randomly generated text code. Second, PollEverywhere has greatly expanded its question types and answer display options, now offering multiple-choice, open-ended, survey, word cloud (see Fig. 7 next page), rank-ordering, emotion-scale, numeric-scale, and matrix formats. These poll questions can be shown directly from the PollEverywhere website, or each question
can be downloaded as a PowerPoint slide and saved as part of a standard PowerPoint presentation. However, there are a few logistical considerations in deciding to use PollEverywhere. Thought must be given to the time participants will need to respond to questions in a live presentation, given that participation depends on live Internet or cellular service. Further, if participants are texting in their answers, it is a two-step process: first, they must “join” the poll by texting a unique numeric code and waiting for a response; second, they text that number back with their answer to the given question.
FIGURE 7. POLLING OPTIONS IN POLLEVERYWHERE
PollEverywhere results can appear in real-time directly from the poll’s website.
That process can stall a live presentation as audience members may enter the code incorrectly, not be able to see
the code from where they are sitting, or not receive a response right away. When it works, it is impressive to see
words or charts appear and update in real-time. However, when audience members are stuck, it can slow the flow
of a presentation. Another consideration is that presenters need to know approximately how many respondents they expect during each presentation. The free higher-education account allows users to create as many polls as they like; however, each poll can accept and record only 40 responses. An instructor version that accommodates up to 400 responses costs $349 per semester.
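Conceptually, the real-time word cloud shown in Figure 7 is just a running tally of normalized free-text responses. The sketch below is illustrative only (PollEverywhere's implementation is not public, and the responses are made up); it shows the kind of counting that drives such a display.

```python
# Minimal sketch of the tallying behind a live word-cloud poll
# (illustrative only; not PollEverywhere's actual implementation).
from collections import Counter

def tally(responses):
    """Normalize free-text answers (trim whitespace, lowercase) and count
    them for display; larger counts would render as larger words."""
    counts = Counter()
    for r in responses:
        word = r.strip().lower()
        if word:                 # ignore blank submissions
            counts[word] += 1
    return counts

live = tally(["Memory", "memory ", "attention", "MEMORY", "attention"])
print(live.most_common(2))  # -> [('memory', 3), ('attention', 2)]
```

Each new text or web submission simply increments a count, which is why the chart can appear to grow word by word as the audience responds.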
SECTION 2: DIGITAL TOOLS TO ALLOW FOR INTERACTION
This section will focus on a few tools that can be used to help foster interaction with and around educational content.
Interaction is an important factor affecting educational success, especially in online learning environments (Tsui &
Ki, 1996). For instance, studies have found that students who report positive interactions with faculty are more likely to have higher GPAs and persistence levels, in addition to higher levels of satisfaction regarding their overall
academic experience (Kuh, Kinzie, Buckley, Bridges, & Hayek, 2006). As first put forth by Moore (1989), interaction is a three-dimensional construct comprising learner-to-content, learner-to-instructor, and learner-to-learner interaction. Learner-to-content interaction refers to a student's interactions with content that result in changes in
understanding, perceptions, and cognitive structures (Moore, 1989). Learner-to-teacher interaction refers to all
communications between the teacher and the student (Moore, 1989). It can also refer to the curriculum
development methods an instructor uses to guide learning. Learner-to-learner interaction refers to the
communication between learners and includes both the cognitive and social dimensions of the interaction (Moore,
1989). Moore’s framework has been widely accepted in the literature and the importance of each of these various
types of interaction on academic success has been well documented (Dennen, Darabi, & Smith, 2007; Burnett, 2007;
Zimmerman, 2012).
GOOGLE DOCS
Google Docs is a free Web-based application from Google in which teachers and students alike can import, create,
edit, update, and share documents in various styles and formats (.docx, .html, .txt, .rtf, .odt). Documents created
with Google Docs are compatible with most word processor applications and can be accessed from just about any
computer, tablet, or smart phone with an Internet connection, making it easy for anyone to create and share
educational content. The document’s author can also control who has access to the Google Doc via a sharable link
and can also determine what level of access others can have to the document. Whereas anything that takes place in
a learning management system is usually accessible to only those enrolled in the course, a Google Doc can be made
accessible to whomever the instructor wants.
The document management benefits of Google Docs alone can save an instructor many headaches. How often has
an instructor distributed a syllabus, only to discover an error or omission moments later? Google Docs allows an
instructor to revise a work at any time without having to redistribute a second or third set. Because revisions made
to a Google Doc automatically cascade to wherever its sharable link is used, instructors can make changes or updates
to a document as needed.
Google Docs also lends itself to collaborative knowledge building, as its commenting and editing features allow
students to work together synchronously or asynchronously. For example, by allowing students to comment and
ask questions directly on the syllabus (a feature controlled by the permission settings), any confusion about course
objectives or assignments can be addressed by the instructor or a knowledgeable student. Instead of getting flooded
with emails from many students asking the same questions, general concerns can be asked and answered directly
on the syllabus.
This approach to community knowledge construction can also be used to turn mundane reading assignments into
collaborative reading experiences. In short, collaborative reading is a technique that encourages students to work
together on a reading assignment to promote better comprehension (Klingner & Vaughn, 1999). While this is easily
accomplished in face-to-face settings, the nature of online learning often isolates students from the teacher as well as from each other. By sharing a required course reading via Google Docs, all three types of interaction can be
promoted. Not only can students reflect on the reading by offering their own commentary, they can also discuss
divergent understandings. The instructor can also use this discourse to pinpoint systemic misunderstandings which
can help shape the direction of the class.
GOOGLE SLIDES
Like Google Docs, Google Slides can help make course content more immersive and effective. While it appears to be
nothing more than a PowerPoint equivalent, Google Slides is much more than a tool for delivering presentations - it
is a powerful and easy-to-use tool for delivering interactive student experiences.
One of the most significant advantages of Google Slides is that it is easier to use and share than PowerPoint. Because
it is cloud-based (as are the other components of Google's Suite), it does not need to be uploaded to a platform
first, and updates/changes can be reflected immediately. This is an ideal feature for educators who may need to
provide their students with presentations on a regular basis.
Google Slides can also be used for group projects and assignments that encourage students to interact and
collaborate with their peers. For example, students can be asked to work together to create slideshows synthesizing
various course topics or they can be asked to develop presentations that explore a particular subject matter.
Students have the ability to add and edit slides simultaneously with their peers, generating a faster, more efficient
way of completing a presentation. This solves the issue of several versions of the same presentation floating among
collaborators, which can be a source of confusion and frustration. One notable aspect of the user interface across Google's Suite products is that it mimics the commands of its most widely used paid counterparts, easing the transition between Google's products and Microsoft's office software. This makes instructors who are hesitant about trying new software more likely to integrate it into their classroom settings; such hesitancy is often reported as one of the main reasons educators who are less tech-savvy than younger generations resist new technology (Demetriadis et al., 2003).
The finished presentation can then be downloaded in the Microsoft PowerPoint format for use without a network,
or can be opened directly in Google Slides if Internet access is available. Once the presentation is over, audience members who wish to receive a copy can be given a link to a non-editable version that they can access
on their own time. If the link is provided with the ability for viewers to comment (before, during, or after the
presentation), students are able to add their own thoughts and questions to the slides which can then be answered
by either the presenters or other viewers. Commenters need only right-click on the object they wish to comment on (be it image or text) and click the “comment” choice on the drop-down menu; a text box with the commenter's Google profile will pop up, at which time they may add their questions, concerns, or encouragement. This collaborative capability not only helps online learners feel that their interactions with instructors and peers are more concrete; it also gives students who have social anxiety, certain disabilities, or language barriers (and whose writing ability is stronger than their speaking ability) the comfort of knowing they are not lost in the audience.
GOOGLE FORMS
Google Forms is Google's answer to creating a direct path between the instructor and each individual student, providing an avenue for assessment as well as streamlining certain procedural paperwork. While it will not get its own section here (as this chapter focuses on tools for collaborative learning), Google Sheets (the Microsoft Excel equivalent) is the backend to which the answers and data received through Google Forms are sent. Google Forms can also email the instructor each time a form is completed, making it easier to keep track of where everyone is without having to sign into the platform itself.
With the ability to create each question in the format of short or long answer, multiple choice, checkboxes, drop-down menus, or even image responses, Google Forms offers a robust platform that can greatly enhance student assessment in the educational environment. For example, it can be used to create a survey asking about students' interests, weaknesses, strengths, and questions and concerns regarding the course they are about to take. Instructors may then be able to tailor the class in a more nuanced way if they find that a large portion of the students is either lacking in an area or very interested in a particular topic.
Google Forms can also grade any tests that instructors create, provided the questions are closed-ended. The instructor clicks the “Quizzes” tab, selects “Make This a Quiz,” and provides the answer key against which Google Forms will compare the students' responses (see Fig. 8).
FIGURE 8. SETTING THE QUIZ OPTION IN GOOGLE FORMS
Google Forms can allow any web-based survey to become a graded quiz.
These results, as mentioned before, will be placed into a Google Sheets spreadsheet for review by the instructor, but can also be shown to the students if they are interested in the curve of a particular quiz or test.
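Conceptually, this auto-grading amounts to comparing each submission against the instructor's answer key, question by question. The minimal sketch below uses hypothetical data and is only an illustration of the idea, not Google's implementation.

```python
# Illustrative auto-grading of closed-ended responses against an answer
# key, in the spirit of Google Forms' quiz mode (hypothetical data).
def grade(answer_key, submission):
    """Return (points earned, total points) for one submission; an
    unanswered question simply earns no credit."""
    earned = sum(1 for q, correct in answer_key.items()
                 if submission.get(q) == correct)
    return earned, len(answer_key)

key = {"Q1": "B", "Q2": "D", "Q3": "A"}
student = {"Q1": "B", "Q2": "C", "Q3": "A"}
print(grade(key, student))  # -> (2, 3)
```

Run over every row of responses, this per-submission comparison yields exactly the score column that appears in the linked Sheets spreadsheet.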
An even more advanced feature is Google Forms' ability to embed outside media content (such as a YouTube video) so that a student can play the video directly in the form and answer questions measuring comprehension of the material provided.
Finally, Google Forms can be used to share ideas among students: all of the students' answers are aggregated and can then be provided via a shareable link from Google Sheets. This allows students to see each other's ideas in a collaborative environment with organized answers, and allows teachers to sift through answers first in the event that a particular topic is sensitive or inappropriate answers are expected. While there is some debate about whether this application would be better in Docs, Slides, or Forms, all three are capable of allowing students to interact with their classes in a way that is easy to use, efficient, and builds a stronger sense of community in the online learning environment.
GENERAL DISCUSSION
Collectively, the digital tools described above offer flexibility to meet an instructor’s needs in the design, modality,
and comprehension-checking aspects of creating course content and interactive experiences. These digital tools are
all freely available and can be used with just a little experimentation, bringing students and professors into a more
collaborative space both in and outside the classroom.
REFERENCES
Chou, P.-N, Chang, C.-C., & Lu, P.-F. (2015). Prezi versus PowerPoint: The effects of varied digital presentation tools
on students’ learning performance. Computers & Education, 91, 73-82.
doi:10.1016/j.compedu.2015.10.020
Clark, J. (2008). PowerPoint and pedagogy: Maintaining student interest in university lectures. College Teaching,
56(1), 39-45.
Demetriadis, S., Barbas, A., Molohidis, T., Palaigeorgiou, G., Psillos, D., Vlahavas, I., Tsoukalas, I., & Pombortsis, A.
(2003). Cultures in negotiation: Teachers' acceptance/resistance attitudes considering the infusion of
technology into schools. Computers & Education, 41. doi:10.1016/S0360-1315(03)00012-5.
Dennen, V. P., Darabi, A. A., & Smith, K. J. (2007). Instructor-learner interaction in online courses: The relative
perceived importance of particular instructor actions on performance and satisfaction. Distance Education,
28, 65-79.
Eves, R. & Davis, L. E. (2008). Death by PowerPoint? Journal of College Science Teaching, 37(5), 8-9.
Gupta, S. (2014). Choosing Web 2.0 tools for instruction: An extension of task-technology fit. International Journal
of Information and Communication Technology Education, 10(2), 25-35. doi:10.4018/ijicte.2014040103
Klingner, J. K., & Vaughn, S. (1999). Promoting reading comprehension, content learning, and English acquisition
though Collaborative Strategic Reading (CSR). Reading Teacher, 52(7), 738-747.
Kuh, G.D., Kinzie, J., Buckley, J.A., Bridges, B.K., & Hayek, J.C. (2006). What matters to student success: A review of
the literature. Retrieved from https://nces.ed.gov/npec/pdf/kuh_team_report.pdf
LaFee, S. (2013, November). Flipped Learning. The Education Digest, 79(3), 13-18.
Moore, M. G. (1989). Three types of interaction. American Journal of Distance Education, 3(2), 1-7.
Office Mix (2017). Office Mix Help Guide. Retrieved from:
http://www.mixforteachers.com/uploads/5/0/2/4/50241791/office_mix_help_guide_final.pdf
Perez, J. & Hurysz, K. (2011). I didn’t know I could do that: Using web-based tools to enhance learning. In D. S. Dunn,
J. H. Freeman, & J. R. Stowell (Eds.), Best practices for technology-enhanced teaching and learning:
Connecting psychology and the social sciences (pp. 207-221). New York, NY: Oxford University Press.
Tsui, A. B. M., & Ki, W. W. (1996). An analysis of conference interactions on TeleNex–a computer network for ESL
teachers. Educational Technology Research and Development, 44(4), 23-44.
Zimmerman, T. (2012). Exploring learner to content interaction as a success factor in online courses. The
International Review of Research in Open and Distributed Learning, 13(4), 152-165.
doi:10.19173/irrodl.v13i4.1302
CHAPTER
18
GOING MOBILE IN THE COLLEGE
CLASSROOM
KIMBERLY M. CHRISTOPHERSON
MORNINGSIDE COLLEGE
INTRODUCTION
The use of mobile devices (e.g., smartphones, tablets) is becoming ubiquitous, and many people use their mobile devices to conduct their daily business without needing to enter the physical space of that business (e.g.,
online banking, shopping, information seeking). Mobile-first design is the act of creating the mobile experience for
some electronic service prior to a desktop version. With more people having access to mobile devices, how might
mobile devices and a mobile-first design be incorporated into higher education? This chapter will explore several
strategies that faculty might use to effectively and meaningfully incorporate mobile devices into their classrooms.
E-learning is any effort to deliver and/or customize learning via an electronic medium (Rajasingham, 2011). E-learning strategies can be as simple as using PowerPoint slides during class or as complex as creating an online
collaborative learning activity. With the advent of smart phones and other small yet powerful devices, a newer term
is gaining momentum: mobile learning, or m-learning. Similar to e-learning, m-learning refers to any learning that is
mediated through the use of a mobile device (Rajasingham, 2011). A mobile device is an electronic device that is
portable, fits in a pocket or purse, is carried on a regular basis by its user, and allows for the use of high-powered
technology and Internet access from nearly any location (Rajasingham, 2011). Mobile devices are often the primary
device used by people to access the Internet (Pew Research Center, 2015), prompting a “mobile-first” movement in
many sectors of business that have an online presence. The “mobile-first” design movement simply refers to the
notion that when designing an online presence, one must assume that most people will access this via their
smartphone first. Webpages are designed and developed for the smaller screen of a mobile device and mobile
applications are prompted for download when a user visits a business website from their smartphone. What might
a mobile-first design mean for the classroom? Is it possible, or desirable, to create learning experiences catered
to the use of mobile devices? What are the possible benefits and challenges of leveraging mobile technologies within
the learning environment whether this be face-to-face, blended, or online? This chapter will provide a number of
different strategies that faculty might use to leverage mobile devices within their courses.
First, let us explore the question of why we might consider taking a “mobile-first” mentality when designing courses.
Consider the reality of many of our learners today. Within our classrooms, nearly every student has a powerful, Internet-connected device readily at hand – a remarkable resource. Imagine if every classroom in a university were equipped with a one-to-one ratio of devices, each worth about $500 (or more). Even in a smaller classroom of 20 students, this is like having a $10,000 system installed; scale this up to a large lecture hall and it might be a $200,000 system. This has become a reality with the advent of smartphones and tablets. Not
only is it likely that most students will possess a mobile device, but many students today use these devices as a
primary communication device, mechanism for looking up information, and source of entertainment (Pew Research
Center, 2015). Mobile phones are ingrained into daily life and faculty members likely possess a smartphone of their
own, but some might be skeptical about allowing these devices to be used within the learning environment because
of the distractive nature of these devices (ECAR, 2017). Some may outright ban devices in their classrooms, but how might a faculty member begin to use these devices productively? What if a student becomes bored and drifts off during long stretches of lecture (Langner & Eickhoff, 2013), perhaps checking e-mail or social media on a device? Enter the “mobile-first” approach to course design. Designing learning experiences in a way that leverages the mobile device while discouraging its potential distractions is arguably the approach faculty must take if they wish to create effective learning environments in which mobile devices are used.
MOBILE-FIRST DESIGN IN THE FACE-TO-FACE CLASSROOM
One rather simple mobile-first strategy is to design each class period as a series of “mini-lectures” on the day's topics rather than one (or two) major topics. Shorter periods of information delivery (usually 10 minutes or less is recommended) on specific topics, broken up by short activities, can help students stay on task and attentive (Halpern & Hakel, 2003), thus reducing the risk that students will be tempted to escape into their smartphones. The activity between these mini-lectures can then be a place to have students use their
mobile device for learning purposes (see Stutts chapter). For example, an instructor might pose a series of questions
to students on the content and have students respond using a student response system (SRS) from their phone
rather than giving a paper-and-pencil quiz or having students raise their hands. A number of free applications allow faculty to do this, including PollEverywhere, Socrative, and Kahoot. Many faculty want in-class time to be interactive but may find it difficult to achieve with larger enrollments (Mangram, Haddix, Ochanji, & Masinglia, 2015). SRSs like these allow all students in the classroom to be active participants in the question-and-answer session and provide the instructor with immediate performance information from all students who respond.
When choosing to incorporate an SRS into a course, one must be sure to use the technology effectively. Designing effective questions takes planning, as it is difficult to ask on-the-fly questions with some of these applications; however, these systems allow faculty to easily see how many students understand the information and allow the instructor to clear up misconceptions or to adjust the lecture to spend more or less time on a particular topic. (See Iwamoto & Hargis chapter.) Several of these SRSs can also be used on a computer, so the use of a
smartphone is not necessarily needed. For those who are concerned about the opportunity for distraction from
smartphones and computers, there are systems that require the use of a separate remote control device (such as
TurningPoint) which avoids the possible distractions of smartphone use during class. However, these remote control
systems must be purchased either by the institution or by the student. Mobile applications like PollEverywhere, Socrative, and Kahoot are free to use but may give students the opportunity to use their devices for other purposes, such as checking in on social media. One strategy to help students avoid this distraction is to require that the
devices be put away when there are no questions being asked. Additionally, breaking the lecture into shorter segments should help decrease the temptation to escape into social media (Halpern & Hakel, 2003).
Some SRS systems are limited in the types of questions that can be asked (some only allow multiple-choice questions), and sometimes a paid version is needed to record individual student data. Additionally, some systems do not integrate into PowerPoint or other presentation software, requiring the use of two systems to deliver information and to host the question-and-answer sessions. Nearpod is an application that allows instructors to create both content and activity slides that are presented on students' individual devices. Nearpod is designed to work best on tablets but works on any Internet-connected device. Instructors can create PowerPoint-like slides to present content and then show an activity slide that all students must complete (e.g., a quiz, poll, drawing, or idea post). Nearpod saves all student responses, which an instructor could use to award points for completing the activities. It also allows the instructor to easily share student work anonymously with the class, providing teachable moments on the fly. Once again, though, care must be taken to design the class period
P a g e | 188
such that content is presented in short segments with interactive activity requiring student response to discourage
students from using their devices for non-classroom activities.
Another technique for incorporating mobile devices into the classroom is allowing students to use their devices
either to look up information or to contact someone they know who can answer a question that the instructor
cannot. For example, in a human biology course, a student may ask about a new treatment for a major disorder that
he or she heard about in the media. The instructor (a biologist) may not know the answer but could ask whether
someone in the class could search for this information. Allowing students to use their
devices for this specific purpose can be a powerful teachable moment and also keeps the focus of the use of the
devices on classroom purposes. To design this activity in a way that helps avoid the potential for classroom disruption
from the use of mobile phones to look up information, the instructor might call on a random student to report what
he or she found and the source of that information. The uncertainty of whether or not one might be called upon
can help encourage students to stay on task. Once some answers have been found, students can be instructed to
put their devices away.
A final strategy that can be easily incorporated into the face-to-face classroom is using mobile devices to deliver
an advance organizer. An advance organizer introduces a topic to students in a way that is simple and connected to
prior knowledge (Woolfolk, 2016). For example, before a lecture on positive and negative reinforcement, an
instructor might send an advance organizer containing an image or table of information for students to view before
coming to class. This gives students advance notice of the focus of the day’s class along with some overview
information and perhaps examples. Thus, students come into class primed with the day’s content as well as a basic
overview of the information. Many faculty are already willing to e-mail students to communicate, and an advance
organizer can be as simple as an e-mail sent before class to prepare students for that day’s session. Taking a
mobile-first mentality, the instructor should assume that students will access this message on their mobile devices
and ensure that it reads well on a small screen. A short e-mail with a bulleted list of topics or a simple diagram can
help prepare students for the class period.
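As an illustration of the mobile-first mindset described above, the following Python sketch composes such an advance organizer e-mail using only the standard library. The topic, bullet points, and address shown are hypothetical examples, and actually delivering the message would require access to an SMTP server; the sketch stops at composing a message with short lines that read well on a small screen.

```python
from email.message import EmailMessage


def compose_advance_organizer(topic, bullets, to_addr):
    """Build a short, plain-text e-mail that reads well on a phone screen."""
    msg = EmailMessage()
    msg["Subject"] = f"Today's class: {topic}"
    msg["To"] = to_addr
    # Short lines and a simple bulleted list keep the message readable on a
    # small screen; no attachments or wide tables.
    lines = [f"Today we cover {topic}.", "", "Key ideas to preview:"]
    lines += [f"- {b}" for b in bullets]
    msg.set_content("\n".join(lines))
    return msg


# Hypothetical content for a class on reinforcement (illustrative only).
organizer = compose_advance_organizer(
    "positive and negative reinforcement",
    ["Reinforcement always increases a behavior",
     "Positive = something added; negative = something removed",
     "Bring one everyday example to class"],
    "class-list@example.edu",
)
print(organizer["Subject"])  # prints: Today's class: positive and negative reinforcement
```

To send the composed message, one would hand it to an SMTP connection (e.g., `smtplib.SMTP(...).send_message(organizer)`); many faculty would instead paste the same short bulleted text into their LMS announcement tool.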
MOBILE-FIRST DESIGN CONSIDERATIONS WITH LEARNING MANAGEMENT SYSTEMS
Many faculty utilize their campus learning management system (LMS) for face-to-face or online courses, and most
LMSs have a mobile version (see the Cerniak chapter for a discussion of LMSs). Sending class messages like the
previous example of an advance organizer can be easily facilitated through one’s LMS. Because students might access their
LMS through a mobile device, when designing materials within a LMS, faculty must consider features such as ease
of navigation, reading ease, and ability to submit assignments via the mobile site. Designing a course for the mobile
version of the LMS can provide students with the ability to access and easily use course materials (e.g., readings,
videos, websites) while on the go. A student who can read or listen to the article for that day’s class, or even a
pre-recorded lecture, during a daily commute can use that time productively. Consuming information this way
requires making the materials easy to read on a small screen and, ideally, accessible to a screen reader that can
speak the text.
Making course materials easily usable in a multitude of formats (i.e., both text and audio) can make it possible for
students to complete their readings while on the go, but there are disadvantages to this as well. It is not as easy for
students to actively take notes while reading in this way. Students may also be multitasking (e.g., driving while
listening to the day’s assigned reading), which will likely result in less deep processing of the material, especially if the
P a g e | 189
reading or content is particularly difficult (Ruthruff, Johnston, & Remington, 2009). Therefore, even though it is
possible to provide these affordances to the student, doing so does not guarantee that the student has deeply
processed that information (see Littlefield and Gjertsen’s chapter on working memory).
Considering the mobile capabilities of the LMS also means addressing the activities within the course. Instructors
need to know if it is easy for students to submit or complete the course activities on a mobile device. Two of the
most common forms of activities in an LMS are drop boxes for written assignments and discussion forums. Engaging
in a discussion forum may be easier on a mobile device than completing a written assignment. Discussion forum
posts tend to be shorter pieces of writing, and typing on the device’s keyboard for this type of short activity is likely
easier than trying to complete a short paper. The discussion forum may also be video-based (such as VoiceThread),
which removes the need to type extensively on a small keyboard. Additionally, one must consider whether the
mobile device has the applications needed to complete longer written assignments. If a worksheet is in a particular
file format (e.g., .docx, .pdf), then the device must be able not only to view that document but also to alter it.
Because this may not be possible, it is unlikely that a course (even an online course) can go fully mobile. Most college
courses still require some form of lengthy written work that is simply too difficult to do well on a small mobile device.
Therefore, only some of the LMS activities within a course might be fully mobile, and students in these courses will
still need access to a “workhorse” device (i.e., a laptop or desktop computer) to complete all activities in the course.
MOBILE-FIRST DESIGN IN INSTRUCTOR-STUDENT COMMUNICATION
Arguably, one of the most important elements of any course is effective and efficient interaction between the
instructor and student both in and out of the classroom (Chickering & Gamson, 1987). Another consideration of
mobile-first design in the classroom is using alternative communication platforms to interact and communicate
electronically with students. Though most students and faculty still use e-mail as their primary mode of electronic
communication with one another, there are other communication applications that make use of some of the unique
features of mobile devices (e.g., push notifications). A push notification is an alert that appears on a device any time
a new message is available. Most people likely turn off push notifications for e-mail simply because leaving them on
would result in a new tone every few minutes. However, users often leave push notifications on for applications such
as messaging or chat programs. For certain messages sent to students, a push notification may be important. For
example, returning to the earlier discussion of advance organizers, a student who receives the organizer via e-mail
may not check e-mail before class, making the organizer less effective and defeating the purpose of the message.
But if the message is sent via text, the student may be more likely to check it before class because he or she will
receive a push notification.
There are a number of tools that could be used to change the mode of electronic notifications. Several LMS have a
newsfeed or course announcement feature that might have notifications associated with it if the student has the
mobile version of the LMS on their device. This is an easy and integrated way to send an entire class a message, but
it is generally just one-way communication (instructor to student). Other communication applications allow for
two-way communication. Texting is probably the easiest example: there are programs that allow individuals to send
bulk text messages without actually having to share phone numbers (e.g., Celly, Remind). Programs such as Slack
provide yet another avenue for communication, keep a Twitter-like feed of the conversations, and handle
attachments more easily than text messages. When considering these alternative modes of communication, the
important question to answer is how important it is for students to check and read a message quickly after receiving
it. If this is not a priority, then a messaging tool with a push notification is likely unnecessary and would add a
superfluous electronic tool to the classroom.
GOING FULLY MOBILE IN LEARNING
Up to this point, I have discussed methods of incorporating a “mobile-first” mentality in course design without
assuming that the course is delivered entirely in a mobile format. The impediments to doing certain types of
academic activities exclusively on a mobile device challenge even the most creative faculty: although elements of a
course may be used on a mobile device, the entire course is not something that could be effectively conducted there.
Additionally, faculty may not have the knowledge or skills necessary to create mobile environments and applications
themselves. Still, some highly innovative and motivated faculty may be interested in creating fully online and fully
mobile courses. A fully mobile course would likely contain materials that are inherently interactive and adaptive to
individual learners, with classroom activities that can be easily completed on one’s smartphone. Gone might be the
traditional term paper, an activity extremely difficult or nearly impossible to complete exclusively on a smartphone.
Instead, alternative activities such as the creation of videos might be included.
Some faculty may also find that existing electronic materials and services are insufficient for going completely
mobile and may have the resources and skill to develop mobile applications themselves. They may create games
specific to the course that provide not only motivation but also deep learning. They may create their own e-books
with the text, images, and interactive activities they wish. They may create interactive, dynamic websites that are
responsive to mobile devices. Some of these initiatives might be easier than we think; for example, there are e-book
building applications that are relatively easy to use (iBooks Author, as an example). The challenge for faculty is
finding the time, resources, and skills to carry out these major projects. Converting an entire course to be delivered
via a mobile device is thus a daunting task, likely to be riddled with setbacks.
CONCLUSION
This chapter focused on mobile-first design as it might apply to education. In this chapter, I have discussed what it
might mean to use a mobile-first design mentality within the educational environment. A mobile-first mentality
requires the faculty member to design learning environments, materials, and activities for use on the student’s
smartphone or tablet. A variety of strategies for taking a mobile-first approach have been offered; because
converting a course into an exclusively mobile experience is unlikely to occur without extensive faculty time and
resources, instructors can instead incorporate some elements of mobile-first design into their courses. Mobile
devices are truly changing the fabric of not only our educational environments but also how we conduct our business
and everyday communications, and these devices afford opportunities for interactivity and personalization that are
worth exploring.
REFERENCES
Chickering, A. W. & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE
Bulletin, 39(7), 3-7.
EDUCAUSE Center for Analysis and Research (ECAR, 2017). The EDUCAUSE almanac for faculty and technology
survey. Retrieved from https://library.educause.edu/resources/2017/8/the-educause-almanac-for-faculty-and-technology-survey-2017
Halpern, D.F., & Hakel, M.E. (2003). Applying the sciences of learning to the university and beyond. Change, 35(4),
36-41. Retrieved from http://www.tandfonline.com/toc/vchn20/current
Langner, R. & Eickhoff, S.B. (2016). Sustaining attention to simple tasks: A meta-analytic review of the neural
mechanisms of vigilant attention. Psychological Bulletin, 139(1), 870-900. doi:10.1037/a0030694
Mangram, J.A., Haddix, M., Ochanji, M.K., & Masinglia, J. (2015). Active learning strategies for complementing the
lecture teaching methods in large classes in higher education. Journal of Instructional Research, 4, 57-68.
Retrieved from https://cirt.gcu.edu/jir
Pew Research Center. (2015). The smartphone difference. Retrieved from
http://www.pewinternet.org/2015/04/01/us-smartphone-use-in-2015
Rajasingham, L. (2011). Will mobile learning bring a paradigm shift in higher education? Education Research
International, 2011, 1-10. doi: 10.1155/2011/528495
Ruthruff, E., Johnston, J.C., & Remington, R.W. (2009). How strategic is the central bottleneck: Can it be overcome
by trying harder? Journal of Experimental Psychology: Human Perception and Performance, 35(5), 1368-1384. doi: 10.1037/a0015784
Woolfolk, A. (2016). Educational psychology (13th ed.). New York, NY: Pearson.
CHAPTER
19
INTEGRATING ACADEMIC SKILLS
AND DIGITAL LITERACY TRAINING
MELISSA BEERS
AND
NICOLE KRAFT
THE OHIO STATE UNIVERSITY
INTRODUCTION
As each new semester approaches, college professors thoughtfully prepare courses and plan dynamic lectures,
engaging class activities, and challenging assignments. It is common practice to assign work to be done electronically
and submitted online, yet in doing so we may never question students’ proficiency with technology – an
understandable oversight when, in a typical class, nearly all students have a laptop in front of them. Have you
considered how students use technology in your courses? If you assess students’ knowledge or critical thinking skills
at the start of your course, why not also evaluate their digital literacy skills? This may be a critical gap in students’
skill sets, one that is exacerbated by misconceptions about digital literacy. In this chapter, we share a
cross-disciplinary perspective on integrating digital literacy and academic skills training and encourage instructors
to consider how digital skills can be implemented in their courses.
Millennial students have been labeled “digital natives.” Because students have grown up with technology, many
assume they have become fluent as a natural consequence of this exposure (e.g., Prensky, 2001, 2006). Digital
natives are considered eager to use new technologies and quickly able to adapt to them. They also are deemed to
be excellent “multitaskers” who can effectively manage multiple activities simultaneously, aided in this efficiency by
their technological savvy. Unfortunately, the assumption that students have gained digital
expertise by mere exposure constitutes an urban myth that is inaccurate and potentially harmful to teaching and
learning. Subscribing to this belief overestimates students’ skills and abilities, which may adversely impact what
students learn.
THE TRUTH ABOUT DIGITAL NATIVES
There is no evidence that “digital natives” easily adapt to and integrate new technologies; in fact, university students’
actual use of technology has been found to be quite limited in range (Kirschner & De Bruyckere, 2017; Kirschner &
van Merriënboer, 2013; Margaryan, Littlejohn, & Vojt, 2011). Students underestimate the detrimental effects of
multitasking (Brasel & Gips, 2011). Multitasking in class negatively impacts students’ own performance and has
detrimental effects on other students who become distracted, much like the effects of secondhand smoke on nonsmokers (Kirschner & De Bruyckere, 2017; Sana, Weston, & Cepeda, 2013; Ward, Duke, Gneezy, & Bos, 2017). In
response to the growing body of research showing technology increases distraction and impairs performance, some
instructors implement “technology bans” in their classes (Dynarski, 2017; Rosenblum, 2017).
Nevertheless, technology is deeply integrated into students’ lives and learning environments, and taking even firmer
root. As one example, Ohio State University recently announced a partnership with Apple to become a “Digital
Flagship” University (Davey, 2017). Through this partnership, all incoming students will receive iPads, keyboards and
Apple pencils, and a core set of applications. The stated goal is “integrating learning technology throughout the
university experience” (Davey, 2017). The initiative aims to integrate common tools and learning technologies in all
aspects of students’ experiences, including classrooms.
While this is a new and unique initiative, it points to the fact that educators have an opportunity to consider the role
of technology in student learning. Rather than banning technology altogether, faculty can consider digital skills as a
learning objective. Banning technology, while well-intended, is impractical, and unlikely to change students’ use of
technology outside the classroom. It also does not help students build skills in the long term. Technology bans may,
in fact, negatively impact students in other ways, such as their engagement in the class or rapport with their
instructor (Hutcheon, 2017).
THE IMPORTANCE OF FOCUSING ON SKILLS
The reality is that technology is becoming more highly integrated into all aspects of students’ lives; it is not going
away. By addressing digital skills, we have an opportunity to help students build the critical skills they need to be
successful in their college career and, ultimately, in the workplace (c.f., Strohmetz et al., 2015). The APA Guidelines
for the Undergraduate Major explicitly emphasize interpersonal skills, written and oral communication skills, and
“professional skills” – including self-efficacy and self-regulation skills, project-management skills, and teamwork skills
(American Psychological Association, 2013). Digital skills are an increasingly important part of any professional skill
set, relevant in any field. Technological literacy is critical to enhancing the professional skills employers seek in new
employees. The National Association of Colleges and Employers Job Outlook Survey (NACE, 2016) identifies the skills
employers consider most important when evaluating job applicants; the top skills mentioned by employers are listed
in Table 1.
TABLE 1. NATIONAL ASSOCIATION OF COLLEGES AND EMPLOYERS (NACE) TOP ATTRIBUTES EMPLOYERS
SEEK IN JOB CANDIDATES
Leadership                                        80.1%
Ability to work in a team                         78.9%
Communication skills (written)                    70.2%
Problem-solving skills                            70.2%
Communication skills (verbal)                     68.9%
Strong work ethic                                 68.9%
Initiative                                        65.8%
Analytical/quantitative skills                    62.7%
Flexibility/adaptability                          60.9%
Technical skills                                  59.6%
Interpersonal skills (relates well to others)     58.4%
Computer skills                                   55.3%
Detail-oriented                                   52.8%
Organizational ability                            48.4%
Friendly/outgoing personality                     35.4%
Strategic planning skills                         26.7%
Creativity                                        23.6%
Tactfulness                                       20.5%
Entrepreneurial skills/risk-taker                 18.6%
Digital skills are inherently important to students’ future jobs. The same students who might be underutilizing (or
misusing) technology in our classrooms today will soon be in internships, meetings, job trainings, or press
conferences where the stakes will be higher than their score on an exam.
An approach that integrates digital skill development is not only consistent with a skills-based perspective;
technology is also most attractive to students when it is meaningfully integrated into a curriculum (Lengel & Lengel,
2006). For example, Kraft and Seely (2015) studied the impact of integrating mobile technology (iPads) into a
journalism course. They provided students with iPads, applications, and training to use the device for assignments
in the course – doing research, interviewing, taking notes and writing news stories, shooting photos, recording video
and audio, and using social media. Because students were working in the same platform using the same tools, the
instructors were better able to embed training into assignments. Training on the technology became an integral part
of the course experience; students who received training produced higher quality work.
Beyond individual classes, many institutions have developed initiatives to integrate technology into students’
experiences. An increasing number of college athletics programs have mobile technology programs to support the
unique needs of college athletes. Penn State University’s Mobile Technology Program, for example, provides
student-athletes iPads along with training and support, as do the University of Cincinnati, Texas Tech, and Ohio
State. Student-athletes can use this technology to manage their academic as well as their athletic commitments, and
programs can deploy training and resources to all student athletes fairly and efficiently. Yet, we wondered, how do
they actually use their devices?
EXPLORING STUDENTS’ TECHNOLOGY USE: ATHLETES AND IPADS
The population of student-athletes at Ohio State afforded us an opportunity to explore how students were using the
technology provided to them, and how effectively they were leveraging it for academic purposes. All
student-athletes received the same technology (the iPad itself, four applications for academics, four relevant to their
sports, and the university app), but no training was provided on how to use the device. We considered this an ideal
opportunity to investigate how student-athletes spontaneously used their devices for academics. We conducted an
anonymous survey of Ohio State student-athletes in January of 2016. All student athletes (N = 969) received a copy
of the electronic survey and 178 responded (18% response rate). This pilot study was a first attempt to explore how
student-athletes were leveraging their technology for academic purposes and what, if any, barriers they experienced
in doing so. Highlights from the survey results include:
•	Most of the respondents (81.6%) had not used an iPad for academic purposes before arriving at Ohio State.
•	When asked how much time they used their iPads during a typical week, 71% reported using their iPads four hours or less.
•	When asked how they used their iPads, student-athletes’ most frequently mentioned activities were email (82%), homework (72%), reading (67%), watching movies (65%), watching online course videos/lectures (55%), social media (55%), taking notes in class (51%), listening to music (50%), taking photos (21%), and writing assignments (18%).
•	Student-athletes were most likely to report using their iPads to “take notes in class,” with 24% identifying this activity as the one way they were most likely to use their device, followed by “homework” (16%) and watching movies (14%).
•	When asked to identify the top three apps they use most often, the applications most frequently mentioned were Safari (22%), email (20%), and Netflix (13%).
•	When using their device for note-taking, students indicated they were using the basic Notes app (8%).
•	When doing academic work, 67% of students indicated that, given a preference, they would rather work on a laptop, with only 14% preferring to work on an iPad or tablet.
Although preliminary and not generalizable to all college students, these results suggested to us that
student-athletes were underutilizing their technology for academic purposes: the majority of respondents used their
devices only a few hours each week, and that use included entertainment. Use of applications to organize work or
collaborate (two important areas for skill development) was limited in this sample, and most students who used the
iPad were using the most basic applications (e.g., Safari, email) in a limited
way. In fact, follow-up meetings with athletes revealed that when they did use the device for note-taking, they
typically used the basic Notes app and transcribed lecture notes. Most had not explored other applications that offer
more enhanced features for note taking – for example, applications that allow users to integrate notes with pictures,
video, or other media, such as Notability or Evernote. Even with the limitations of these data, we found the
implications provocative.
As teachers who value skill development, we saw this as an opportunity to enhance student athletes’ digital literacy
skills for their immediate academic benefit as well as for their long term professional development. Working in
partnership with Ohio State’s Student Athlete Academic Support Services (SASSO), we collaborated on a series of
trainings for student-athletes that embedded activities to learn how to use various applications relevant to academic
skills, and integrated information to raise awareness about the science of learning and effective study skills and
strategies. Initially, workshops employed an iTunes U course – free to set up and free for students to enroll in. After
our campus conversion to a new learning management system (LMS), the training was integrated into a Canvas
course shell, and we also practiced how to use the iPad in Canvas to submit assignments. Building on lessons learned
from Kraft and Seely (2015), we collaborated with Ohio State Athletics so that student athletes received keyboards
and protective cases for their devices, and also provided a set of key applications that focused on specific digital
literacy skills. Students were given a tutorial and an opportunity to practice using the following applications. Modules
for the class included:
•	An introduction to the iPad, framing the device as an “academic partner” uniquely useful for academic tasks such as organization, communication, and integrating class information.
•	Information about the “digital native” myth and why students benefit from learning technology like any other subject.
•	How to effectively use the iPad in class, including information on the science of learning: the “learning style” myth, how multitasking can be harmful, and how dual coding (e.g., combining images or pictures with words) can enhance retrieval. Students practiced taking notes in ways that integrated video and images, not just transcribing text.
•	How to use the iPad at home to organize documents by scanning them via applications like CamScanner, how to use cloud storage, and how to use various word processing applications, including how to submit documents via a course LMS.
•	How to use the iPad on the road, including applications that allow students to collaborate using tools such as Google Docs, as well as tools to meet or connect remotely with peers or professors (e.g., FaceTime, Skype, or Zoom) when travelling for their sport.
•	How to work with faculty and tutors, including applications like Explain Everything, an interactive whiteboard that allows students to record a tutoring session or a visit to an instructor’s office hours for review while studying or travelling for their sport.
A pilot group of 13 student-athletes participated in workshops in March and April 2016. Following the training,
students were unanimous that the training was relevant and useful to their academic life, that they had a better
understanding of how to use the iPad academically, and that they were more confident using the iPad academically
than before the workshops. Although student-athletes expressed a preference for doing academic work using a
laptop (57%), 29% of the pilot group indicated they now preferred working on their iPad. More compelling were the
comments from student-athletes who participated in the workshops, who hoped for more information and support
from their faculty on options for using technology effectively, both in class and for assignments.
LEVELING UP: INTEGRATING DIGITAL SKILLS INTO YOUR COURSES
Based on our experience, we suggest that instructors consider the technology students may be using in their courses
and how to support high-quality work by enhancing students’ digital skills. Start by reviewing your course
assignments. What are the opportunities to integrate technology and enhance students’ skills? In courses where the
assignments are primarily exams, essays, or research papers that only the instructor or TA will read, it may be
difficult to see how technology plays a role. However, instead of requiring such “disposable” assignments, consider
opportunities for coursework that builds skills and holds more meaning for students (Jhangiani, 2016). Some
possibilities may be:
•	Hold a digital poster session by having students create and comment on one another’s work using the online discussion board.
•	Have students submit video assignments, which they edit and caption to share with their classmates (cf. King, 2018; Bell, 2018).
•	Have students annotate research articles using Explain Everything, or create animations or gifs to explain key findings to share with other students or members of a group project – see one example shared on Twitter (Sievers, 2017).
•	Assign students to create a podcast – ask them to conduct an interview or write, edit, and record a script (Neufeld & Landrum, 2018).
These assignments help students apply what they are learning and differentiate themselves when applying for jobs
or graduate programs. Students will benefit most from these tasks with training and support.
Reflecting on our experiences in working with students and integrating technology in our courses, we recommend
you consider one or more of the following:
CHECK YOUR ASSUMPTIONS ABOUT STUDENTS
Having a device does not equate to having skills. As one example, one of us knows of a student taking an online
course who was asked to make a chart and submit the assignment in a PDF format. The student wrote out the
answers to the assignment by hand on paper, took a picture of it, and then saved the picture (.jpg) as a PDF. Although
the student technically met the requirements of the assignment, the student was clearly underskilled and could
have found a more efficient way to complete the assignment. Consider assessing students’ digital skills at the start
of your course to ensure they know how to use the tools you expect them to use, and offer assistance to those who
need it.
TALK WITH YOUR STUDENTS ABOUT THE RESEARCH ON ATTENTION AND MULTITASKING, AND ABOUT
THE RESEARCH ON HOW STUDENTS LEARN
Knowing how to use technology effectively – and how to avoid behaviors that can potentially negatively impact
learning – will benefit students long after they leave our classes. Share research on effective learning strategies with
students (see for example http://www.learningscientists.org/), as well as the research that shows the detrimental
effects of technology on learning (e.g., Sana, Weston, & Cepeda, 2013; Mueller & Oppenheimer, 2014; Ward et al.,
2017). You might even review this research with students when considering your own course policies regarding
technology use – or better yet, let students contribute to course policies around the use of technology in the
classroom once they are aware of the research. A potential assignment might be for students to create a video
for their peers recommending effective strategies for using technology.
P a g e | 197
SET ASIDE TIME IN YOUR COURSE TO WORK ON SKILL DEVELOPMENT
Consider what skills students have the potential to gain in your course – not just the knowledge and facts they will
learn – and be sure that you are carving out time in your class to work on building those skills. It is not difficult to
imagine that you might need to set aside time in a statistics class to review how to use a particular data analysis
software. Consider setting aside time in a class to review how student groups can effectively use Google Docs or
other tools to collaborate. This may entail making an online module available to students that provides support if
they are underskilled in this area. Demonstrate how to use video conferencing tools so students can collaborate
remotely. If you need to “skill up” yourself, seek out collaborations with other faculty members at your institution
who use such tools, and ask them to share their strategies for training and supporting students. Alternatively, seek
out your campus educational technology specialists to assist you.
ASSESS STUDENTS’ DIGITAL SKILLS AS PART OF WHAT STUDENTS SHOULD KNOW AND BE ABLE TO DO
AT THE END OF YOUR COURSE
Students value what is assessed in a course. If a particular skill set is important, put it on your syllabus. Include a
“grade of execution” criterion on your assignment rubrics to give students feedback on how they executed the
assignment and used technology. With the exception of software that is unique to a particular course (for example,
SPSS, R, or other data analysis software, or online software required with an eBook), instructors rarely take time to
review technology with students – we may assume that it is something students should come “pre-loaded” with, or
that it is someone else’s responsibility to teach them the tools. But where precisely in a student’s academic career
will this happen? In introductory level courses in particular, instructors are often on the front lines in terms of helping
students build literacy with their campus technology, but in more advanced courses, showing students tools that
will benefit them in their professional lives is a worthwhile investment of time. Review tools in class when going over
the details of an assignment, or make and post videos for students that highlight key features or stumbling blocks. Better
yet, engage students to make videos for future students.
GIVE STUDENTS AN OPPORTUNITY TO EXPLORE TECHNOLOGY THEMSELVES AND SHARE WITH YOU HOW
THEY USE IT
Consider creating an assignment that asks students to explore a new technology for collaboration (Google Docs,
Office 365, Box, Huddle), communication (Zoom, Slack, Explain Everything), presentations (Prezi, SlideDog, Google
Slides), data analysis (Excel, R, JASP), or organization (Evernote, Trello, Wunderlist, Todoist). Have students practice
using the tools and report back on strengths and weaknesses. They get the benefit of trying new tools, and you can
learn about them too!
Technology in education is here to stay; let us use this opportunity to prepare our students for their futures. Let us work
with students on more than just the content of lectures and help them to be productive and effective, as well as
knowledgeable. The idea of the “digital native” is a myth, but the opportunity we have is very real.
REFERENCES
American Psychological Association. (2013). APA Guidelines for the Undergraduate Psychology Major: Version 2.0.
Retrieved from http://www.apa.org/ed/precollege/undergrad/index.aspx.
Bell, K.M. (2018, January). Defining abnormality: A movie-clip analysis. Teaching demonstration at the 40th annual
National Institute on the Teaching of Psychology (NITOP), St. Pete Beach, Florida.
Brasel, S. A., & Gips, J. (2011). Media multitasking behavior: Concurrent television and computer use.
Cyberpsychology, Behavior, and Social Networking, 14, 527-534.
Davey, C. (2017, October 04). Ohio State collaborates with Apple to launch digital learning initiative. Retrieved
October 15, 2017, from https://news.osu.edu/news/2017/10/04/digital-flagship/
Dynarski, S. (2017). Laptops are great. But not during a lecture or a meeting. The New York Times. Retrieved from
https://www.nytimes.com/2017/11/22/business/laptops-not-during-lecture-or-meeting.html.
Hutcheon, T. (2017). Technology bans and student experience in the college classroom. STP E-Xcellence in Teaching
Blog. Retrieved from https://teachpsych.org/E-xcellence-in-Teaching-Blog/5068179.
Jhangiani, R. (2016, December 7). Ditching the “Disposable assignment” in favor of open pedagogy. Retrieved from
http://osf.io/g4kfx
King, S.P. (2018, January). Pop-song personality: A project to teach scale development and construct validity.
Teaching demonstration at the 40th annual National Institute on the Teaching of Psychology (NITOP), St.
Pete Beach, Florida.
Kirschner, P. A., & De Bruyckere, P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher
Education, 67, 135-142.
Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education.
Educational Psychologist. 48, 169-183. doi:10.1080/00461520.2013.804395
Kraft, N. & Seely, N. (2015). Making mojos: How iPads are enhancing mobile journalism education. Journalism &
Mass Communication Educator, 70, 220-234.
Lengel, J. G., & Lengel, K. M. (2006). Integrating technology: A practical guide. Boston, MA: Pearson Allyn & Bacon.
Margaryan, A., Littlejohn, A., & Vojt, G. (2011). Are digital natives a myth or reality? University students’ use of digital
technologies. Computers and Education, 56, 429-440. doi:10.1016/j.compedu.2010.09.004
Mueller, P., & Oppenheimer, D. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop
note taking. Psychological Science, 25, 1-10.
National Association of Colleges and Employers. (2016). Job Outlook Survey. Retrieved from
http://www.naceweb.org/career-development/trends-and-predictions/job-outlook-2016-attributes-employers-want-to-see-on-new-college-graduates-resumes/
Neufeld, G., & Landrum, R. E. (2018, January). Podcasts: A versatile pedagogical vehicle. Teaching demonstration at
the 40th annual National Institute on the Teaching of Psychology (NITOP), St. Pete Beach, Florida.
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9, 1-6.
Prensky, M. (2006). Listen to the natives. Educational Leadership, 63, 8-13.
Rosenblum, D. (2017, January 2). Leave your laptops at the door to my classroom. The New York Times. Retrieved
from
https://www.nytimes.com/2017/01/02/opinion/leave-your-laptops-at-the-door-to-my-classroom.html
Sana, F., Weston, T., & Cepeda, N. J. (2013). Laptop multitasking hinders classroom learning for both users and nearby
peers. Computers & Education, 62, 24-31.
Sievers, B. [beausievers]. (2017, Dec 28). An animated GIF version of work with @ThaliaWheatley and @parkinsoncm
showing the Bouba-Kiki effect extends to non-… https://t.co/cq8aLfooqf [Tweet]. Retrieved from
https://twitter.com/beausievers/status/946182861115228160.
Strohmetz, D. B., Dolinsky, B., Jhangiani, R. S., Posey, D. C., Hardin, E., Shyu, V., & Klein, E. (2015). The skillful major:
Psychology curricula in the 21st century. Scholarship of Teaching and Learning in Psychology, 1, 200-207.
doi:10.1037/stl0000037
Ward, A.F., Duke, K., Gneezy, A., & Bos, M.W. (2017). Brain drain: The mere presence of one’s own smartphone
reduces available cognitive capacity. Journal of the Association for Consumer Research, 2, 140-154.
CHAPTER 20
MOVING READING AND STUDYING TO THE SCREEN: A DISCUSSION OF E-BOOKS AND ONLINE STUDY TOOLS
KARA SAGE
THE COLLEGE OF IDAHO
Acknowledgements: The author would like to thank Kaelah Bakner and Hannah Shand for providing helpful
comments on a prior version of this chapter. Gratitude as well to Chase Downey and Hailee Kiser for relevant
discussions regarding this material.
INTRODUCTION
In this digital age, students can access and read more textbooks online than ever before. E-books are an emerging
technology, and come in many forms. For instance, some e-books are downloadable while others are only accessible
over the Internet. They may be free or only purchasable through publishers’ digital libraries. In addition, app stores
provide a practically endless supply of online study tools, such as flashcards and quiz programs, which are sometimes
paired as useful resources with specific e-books.
Though students have all of these resources at their fingertips, traditional paper course materials remain
advantageous in some ways for learners, such as offering an appealing tactile experience and not being prone to
technological issues like unreliable Wi-Fi access or slow download speeds. The digital world has benefits, such as its
inherent convenience, but there remains something special about paper for learners. That said, e-books are quickly
becoming more available on the market and typically offer a cheaper alternative for students. Thus, educators must
become more familiar with their use.
The following chapter will provide a discussion of the advantages and disadvantages of e-books, in addition to a
comparison of e-books to print books. Key considerations to make when designing and adopting e-books will also
be highlighted.
DEFINING E-BOOKS AND CURRENT TRENDS
The definition of an e-book is somewhat variable. Most typically, e-books are defined as “texts that are digital and
accessed via electronic screens” (Rockinson-Szapkiw, Courduff, Carter, & Bennett, 2013, p. 260). A definition is not
as simple as saying an e-book is “an electronic version of a printed book” (as defined by the Oxford dictionary), as
many e-books are not available in print. E-books are designed to be read on a multitude of electronic devices, such
as laptops and e-readers like Kindles. These devices can be used in stationary or mobile locations.
In recent years, students’ use of e-books and technology has increased. In the 2014 ECAR study of undergraduate
students and information technology, 213 institutions of higher education and ~75,000 students shared information
on their use of technology in education (Dahlstrom & Bichsel, 2014). Approximately 60% of students reported using
an e-book in at least one course. Students also had a vast array of technology at their fingertips. More than 80% of
students owned laptops and smartphones, a little less than 50% of students owned tablets, and just below 25% of
students owned e-readers. All of these percentages were on the rise from prior years’ data and were expected to
continue to rise in subsequent years. In a later article, deNoyelles, Raible, and Seilhamer (2015) reported that the
proportion of e-book users had increased by ~18% over the prior two years.
E-books are progressively becoming more familiar and accessible to college students. Additionally, some
universities even require students to use e-books, with no print books available (Osborne, 2012). Given that primary
and secondary schools are more often incorporating tablets or similar devices into their learning activities and
finding positive outcomes (see Haßler, Major, & Hennessy, 2016, for a review), students will increasingly already be
familiar with using such devices for educational purposes when they enter college. Thus, a thorough discussion of
the pros and cons of this emerging technology for college students is merited.
WHAT ARE ADVANTAGES TO USING E-BOOKS FOR LEARNING?
E-books come in many flavors. Some are interactive and provide unique traits that go beyond what a print book can
offer. Smyth and Carlin (2012) surveyed university students, finding that advantages of e-books were convenience
(30%), remote access (20%), flexibility (20%), searchability (14%), cost (4%), choice (3%), and environmental
incentives (2%). Lim and Hew (2014) pointed out that e-books can also provide students with interactive experiences,
such as sharing content with others, and can thus be more engaging than print textbooks.
In a similar vein, Nie, Armellini, Witthaus and Barklamb (2011) provided students with e-book readers preloaded
with course materials. Students’ learning was enhanced, with potential reasons being the more adaptable
presentation of material, flexible timing, use of new reading strategies, and lower cost. Ji, Michaels, and Waterman
(2014) also reported that students viewed cost as an overwhelming advantage of electronic resources.
E-readers can also be easier to use than print books. For instance, Kiriakova, Okamoto, Zubarev, and Gross (2010)
reported that students found that e-book pages could be bookmarked and turned easily. Students also stated that
the search-and-find feature was a helpful tool. They viewed e-readers as portable and convenient, and mentioned a
“coolness factor” to e-books (p.22). Similarly, Jang, Yi, and Shin (2016) reported that digital textbooks may enhance
student motivation, as they are the more exciting option for learners, perhaps because of their novelty and unique
features.
According to Hernon, Hopper, Leach, Saunders, and Zhang (2007, as cited in Lim & Hew, 2014), some key perks of e-books are (once again) cost and general convenience, such as searching for key terms and copy/pasting out portions
of text. Lam, Lam, Lam, and McNaught (2009) further suggested that e-books are inherently portable and can
optimize reading time since students can read on the go, such as during a break between classes. Given that paper
books can also be heavy and cause orthopedic damage (Talbott, Bhattacharya, Davis, Shukla, & Levin, 2009), the
lightweight nature of e-books may be another incentive.
An interesting piece of research also reported that students who embrace new technologies like e-books may
perceive themselves as more capable learners (Johnson, 2016). Similarly, Rockinson-Szapkiw and colleagues (2013)
found higher levels of affective and psychomotor learning in students opting for e-books for their education courses
over print books. Perhaps the readiness and confidence to engage with e-books reflects something inherent about
the learner’s personality or work ethic.
WHAT ARE DISADVANTAGES TO USING E-BOOKS FOR LEARNING?
Though e-books have some potential benefits for learners, there are also possible downsides. Smyth and Carlin
(2012) surveyed students regarding what they viewed as primary disadvantages of e-books. Students reported a
general preference for paper (38%), reading online text as challenging (38%), issues with access (13%), distractions
like social media (9%), difficult navigation (8%), lack of choice (7%), and cost (4%). Regarding distraction, students
may find it difficult to reserve their computer time for the sole purpose of studying; other forms of computer
entertainment often interfere. Lim and Hew (2014) also commented that students may feel unfamiliar with the
technology (reporting that the digital interface was overwhelming and/or did not match their learning style).
Students may also have a low patience threshold for technology issues like slow download speeds and unreliable
Internet access, given that these problems may or may not resolve in a timely manner, whereas paper books sidestep
such issues entirely.
Reading from e-books can also strain one’s eyes. Polonen, Jarvenpaa, and Hakkinen (2012) investigated adults’
reading from e-books versus print books. When using near-to-eye displays, eyestrain and sickness symptoms were
relatively frequent. Reading from print copies was simply more comfortable for users, perhaps especially over the
long-term. Supian (2011, as cited in Lim & Hew, 2014) agreed, writing that “smart phones are good for bite-sized
reads, but painful for long, sustained periods of reading” (p.1). Possible reasons for why the e-books may have
induced such symptoms include low screen resolution, brightness/glare, and text layout.
There may also be a decrement to students’ learning, or at least their perception of their own learning. In Ji et al.’s
(2014) study, students reported reading and learning more content from print readings. Perhaps as a direct result,
many students opted to print out at least some of their readings that were only available in electronic form.
Consistent with this, Chang and Ley (2006) found that ~2/3 of their participants printed out 75-100% of their electronic
materials for class, and 76% of students often or almost always used hard copy materials for studying. Similarly, Ji et
al. (2014) found that ~65% of students in their research had done the same. Thus, though the initial cost of e-books
might be lower, students may be spending a significant amount of additional money on printing fees. Further, there
may be an opportunity cost to consider; time that could have been spent reading is now spent on printing the
reading.
The ability to cut/paste text from e-books may also contribute to plagiarism. Smyth and Carlin (2012) did note that
some respondents reported this potential activity as a disadvantage of e-books. Thus, teachers might consider giving
additional reminders about plagiarism and schools might be inclined to purchase software to specifically check
student papers for such occurrences.
COMPARING E-BOOKS TO PRINT BOOKS
Despite being in the digital age, students often have a distinct preference for paper books over e-books. One
Canadian study showed that ~83% of college students preferred print over digital materials (Kaznowska, Rogers, &
Usher, 2011). In their research with graduate students in the U.S., Chang and Ley (2006) also reported that ~2/3 of
students preferred printed materials, 27% had no preference, and only 9% preferred material on the screen. These
students were distance and hybrid learners (taking classes all or partially online), so it is possible that they engaged
with printed materials as one way to reduce the cognitive load felt from their already frequent use of the screen.
Students may regulate themselves less well when using a digital medium relative to a printed book (Ackerman &
Goldsmith, 2011). As mentioned previously, distractions can mount when on an electronic platform. Students have
reported preferring print books for the pleasure and ease of reading (Smyth & Carlin, 2012).
Similarly, Spencer (2006) surveyed distance learners, finding that they preferred printed materials for portability,
flexibility, dependability, and less eye fatigue. Students also reported generally enjoying the feel and permanence of
paper. Precel, Eshet-Alkalai and Alberton (2009) have reported that students believe that print materials contribute
more to their learning than electronic materials. Though more than half of their students preferred using both the
print and digital textbook to some degree, ~37% preferred the print version only while a small ~5% preferred the
digital textbook only. Students liked the convenience and accessibility of the print book in addition to the simplicity
of finding information. Top reasons influencing their responses included difficulty in reading long passages of text
on the screen, familiarity with printed books, searchability, and being unable to directly write on digital copies.
Whether or not a student opts for an e-book may depend on the type of reading. For instance, Lamothe (2010)
reported that students preferred online resources for reference material but paper versions for complete books.
However, other researchers have suggested that students may use paper books more effectively for finding
information than e-books, perhaps due to students’ familiarity and confidence with print books (Berg, Hoffman, &
Dawson, 2010). In this research, students also seemed to use print books more linearly, reading them straight
through, while e-books were used more non-linearly, such as browsing or skipping around the text. Students’ tactile
sensations can also aid in locating information (e.g., marking pages with fingertips, flipping through pages). The
feeling of the weight of pages can be a cue to progress (Wilson, Landoni, & Gibbs, 2003). Advancing through pages
and judging progress may simply be more burdensome in an e-book.
Though there seems to be a general preference for print, numerous researchers have reported no difference in
achievement or learning between e-book and print readers (e.g., Annand, 2008; Connell, Bayliss, & Farmer, 2012;
Murray & Perez, 2011; Rockinson-Szapkiw et al., 2013; Young, 2014). Other researchers have reported some
differences on related measures. For instance, though Daniel and Woody (2013) reported no difference in learning,
students took longer to read text on the screen than when in print (perhaps due to multi-tasking), with a stronger
effect noted when reading at home. Other researchers have confirmed longer reading times on electronic devices
relative to paper, though the difference tends to be minimal (Connell et al., 2012; Kang, Wang, & Lin, 2009). If
students believe reading from the screen is slower and less accurate, they may develop a belief that print books lead
to better comprehension (e.g., Garland & Noyes, 2004). Consistent with this, Ji et al. (2014) found that students reported
studying and learning more from print materials.
For the small percentage of students who do prefer e-books over paper textbooks, the unique characteristics of e-books tend to be attractive. For instance, Precel et al. (2009) reported that students appreciated speedy access to
online examples, embedded links, and the straightforward navigation system.
Interestingly, cost has been reported as both a pro and con of e-books. Perhaps this dichotomy relates to the fact
that used print books are also available, in addition to options for renting books. Print books can also often be sold
back to the bookstore or to another student, which recoups some of the initial cost. Additionally, if students need
to buy the technology platform on which to read the book, such as an e-reader, this purchase can be quite expensive
(though it is, hopefully, a one-time cost). That said, rising costs in college are of paramount concern to most students,
so, if e-books can reduce the cost of textbooks overall, they deserve our attention.
It seems, then, that students may not currently view e-books as suitable substitutes for print books. But this status
may be changing. Ismail and Zainab (2005) reported that users’ previous experience with e-books lowers their
preference for print books, though the effect was small. It can also be noted that, at this point in time,
students likely used mostly paper books in their primary and secondary schools. But, as technology in the younger
classrooms becomes more commonplace, perhaps e-books will also become more familiar to students. Students
also may not be aware of the unique features of e-books, or, if they are aware, they may not use them (Ackerman
& Goldsmith, 2011; Chen, 2012). Thus, as familiarity and popularity of e-books increases, perhaps e-books will also
be more accepted by students as viable learning platforms. Moving forward, key goals include making e-books more
approachable for students and highlighting the distinctive benefits of e-books.
FACTORS TO CONSIDER WHEN DESIGNING AND ADOPTING E-BOOKS
One factor to consider when a teacher recommends an e-book or a student opts to purchase an e-book is screen
size. Recent literature has investigated the effects of smaller screens in education, such as tablets and smartphones.
Kim and Kim (2010) reported that smaller screen sizes can present difficulties for vocabulary learners; they may
induce extraneous cognitive load due to features like font size or added glare. Similarly, Reeves, Lang, Kim, and Tatar
(1999) found that humans’ attention is enhanced by media messages on larger screens. That said, smaller screens
do have promise, perhaps especially given that each new model of smartphone or tablet seems to have enhanced
resolution and traits such as reduced glare.
Students are using many different devices for e-reading. Smyth and Carlin (2012) reported that 68% of students use
a laptop or PC for e-books, followed by 21% smartphone, 9% e-book reader, and 1% tablet computer. Connell et al.
(2012) reported that participants found iPads more usable than e-book readers, perhaps given their enhanced
functionality. Zeng, Bai, Xu, and He (2016) compared four different devices for e-reading – laptop, tablet (iPad), e-reader (Kindle), and smartphone (iPhone). They also had students read a chapter in static .pdf format or more
dynamic .epub format. Students read the .epub chapter file more quickly than the .pdf file, and the three handheld
devices resulted in enhanced reading comprehension for the .epub files. One possible reason is that .epub files
automatically adjust content to fit the screen size. The .pdf content may have been quite small on those same
screens. Thus, for students using e-readers and smaller screens like phones, fluid format files like .epub files may be
optimal for displaying content.
Characteristics of the user may also affect students’ choice to use, and opinions of, e-books. Age may be one factor.
Chang and Ley (2006) reported that older learners (36+ years) were more likely to use print materials than younger
learners, with a fairly stark linear trend in print use across the three age groups: under 26 (the traditional college
age), 26-35, and 36 and older. Availability of technology may also be
an issue. Annand (2008) reported that some students found it difficult to gain access to a shared computer for
enough time to complete their reading or reported that their computer was in a distraction-filled location. The
special needs of the learner may also come into play. Though many publishers include aspects like text resizing that
are helpful for learners with special needs, others do not (Mune & Agee, 2016). McNaught and Alexandra (2014, as
cited in Mune & Agee, 2016) provided a list of key features for accessibility in e-books, including screen reader
compatibility, screen magnifiers, text-to-speech programs, customizable backgrounds, and alternative text for
visuals.
A well-structured design is of utmost importance for an e-book. Based on student reports, Wilson et al. (2003)
recommend incorporating search features, embedded links, and interactive opportunities into e-books. They also
promote using a sensible line length (similar to the length of a printed page), reducing the need for scrolling,
including some marker of reader progress, utilizing visually appealing images to capture attention, recruiting
comfortably-sized fonts, offering short chunks of text to improve scanning, and providing students with
bookmarking/highlighting tools. Overall, appearance and functionality of the e-book are critical (Crestani, Landoni,
& Melucci, 2006). Other students have also reported desiring more versatility in their e-books (e.g., a less rigid
presentation of material), integrating more resources (e.g., dictionaries or YouTube links), adding features to reduce
eye strain (e.g., a text-to-voice feature), and including a live chat feature (Lim & Hew, 2014).
Many of these features echo what students enjoy about paper books. Though the screen cannot mimic the exact
feel of a print book, perhaps it can mimic other features. Increased attention to such details may enhance the unique
assets of e-books. Given all the social media platforms out there, one could also consider how the bar has been set
high for the expected nature of digital platforms. Students want interactive and engaging experiences that are user-friendly. One suggestion for publishers might be to enforce some sort of standardization for e-books that includes
the aforementioned features. This standardization might act simultaneously to ensure a certain quality for e-books
in addition to making the e-books more familiar to students. Currently, the functionality and offerings of e-books
can vary widely, perhaps enhancing the feeling of unfamiliarity. E-books are not all identical. Eliminating some of
these differences might make it so that once a student has used one e-book, the next e-book is easier to use. And if
some of the required features mimic the perks of print books, perhaps e-books as a whole would feel even more
familiar to students.
Lastly, to increase the approachability of e-books for students, deNoyelles et al. (2015) suggest campus-wide e-textbook initiatives (for instance, availability and marketing of e-books in conjunction with the university’s
bookstore) in addition to professional development for faculty (for instance, discussions of how to integrate e-books
successfully and adjustment of course policies to allow for technology use). If there is more recognition of e-books
on campus, and faculty are in tune with how these e-books function and work, students may also be more likely to
purchase these e-books and use them in an effective manner. Faculty members should educate themselves on
features available in their e-books, so that they are familiar with them and can provide helpful advice to students.
Faculty members could even incorporate some of the features unique to e-books into their classroom, such as asking
students questions about embedded videos or opening up supplementary resources, like comprehension quizzes, in
class, if offered with that e-book. Perhaps publishers could include a tutorial with each e-book purchase – a thorough
but brief demonstration of the organization of the e-book and the additional resources it has to offer. Knowledge is
power; if faculty and students are informed on how to use e-books and do not have to hunt and peck to figure it out,
they might reap more advantages than drawbacks in their e-book use. And if the e-book content is clear and
engaging, and supported by their teacher, it might subsequently encourage students to also regulate their time more
effectively.
SPOTLIGHT ON ONLINE STUDY TOOLS
Some e-books may encourage their readers to engage with complementary online resources, and this feature is
certainly one that students have reported wanting to see more of. There are countless online study tools available,
such as flashcard generators and quizzing programs. E-books can often embed links to videos, such as those available
on YouTube, to engage the learner. Linking in specific study tools might also encourage students to practice the
material once they have completed reading it. This feature is something that e-books can offer over-and-above print
books.
There is a fair amount of research on digital flashcard programs. Green and Bailey (2010) stated that students
frequently use online tools to locate ready-made flashcards to save them the time it would take to make flashcards
on their own. They also mentioned that these tools can include additional functionality, such as linking to social
media or providing the ability to share cards with others. These tools would thus help provide that interactive
component that students are seeking in e-books. However, students did not often use these additional
functions. Similar to e-books then, students might not be aware that such features exist, and thus education on how
to use these tools might be important. Nakata (2011) also suggested that digital flashcard programs can assist in
vocabulary learning by including unique program features like audio pronunciation. Several research studies have
suggested that, at minimum, digital flashcards are as effective as paper flashcards (Sage, Rausch, Quirk, & Halladay,
2016) and that students may find mobile versions (such as on a tablet) more useful for their studying than reviewing
flashcards on a stationary computer (Sage, Krebs, & Grove, in press). Additionally, some research has suggested that
the accessibility and portability of mobile phones and similar devices might lead to large gains from mobile flashcard
apps relative to paper versions (Basoglu & Akdemir, 2010).
There is also a growing body of literature on Quizlet. Quizlet includes not only flashcards, but also tests and study
games to help enrich one’s learning experience. Barr (2016) reported that learners using gap-fill flashcards on Quizlet
scored higher on vocabulary tests than non-users. Similarly, Dizon (2016) found that Japanese EFL students felt that Quizlet was a useful and easy-to-use tool and made significant gains on their vocabulary tests. Though the
research on this resource appears relatively limited to vocabulary learning, this tool offers the potential for deeper
learning and should be explored further.
Overall, by incorporating links to study aids like those mentioned above, e-books could perhaps simultaneously boost
engagement for learners and deepen learners’ knowledge of the material. This amalgam could be a unique asset of
e-books relative to print books.
CONCLUDING REMARKS
Though e-books are becoming increasingly popular on college campuses, they do not yet seem poised to take over
the market. E-books offer unique benefits for learners, such as search/find tools and embedded links, but also have
notable drawbacks, such as potentially longer read time and distracting elements. Students have similar learning
outcomes from both book types, but their experiences can be quite different. When faculty and publishers work
together to create an e-book, organization, appearance, and functionality must be considered. Some standardization
might be helpful in increasing the approachability of e-books for students. When an instructor reviews books for their classes, they might also consider reviewing the e-version to see what it can offer above and beyond the print version. They can then encourage their students to engage with those offerings, such as viewing
embedded links, to strengthen their learning of the material. Students may enjoy the reduced price of e-books, and
may be pleased to discover that e-books can offer something special. That said, being plugged in can bring a myriad
of distractions and may be more frustrating than helpful for some learners. Thus, it is an individual decision, and
both teacher and student should be aware of the (good and bad) differences in experience that these book types
may entail. Ultimately, e-books will likely continue to increase in availability and popularity given the digital age that
we live in. Thus, educators should be familiar with what exactly e-books can offer students and how we can harness
the positive nature of e-books to boost students’ learning.
REFERENCES
Ackerman, R. & Goldsmith, M. (2011). Metacognitive regulation of text learning: On screen versus paper. Journal of
Experimental Psychology: Applied, 17(1), 18-32. doi:10.1037/a0022086
Annand, D. (2008). Learning efficacy and cost-effectiveness of print versus e-book instructional material in an
introductory financial accounting course. Journal of Interactive Online Learning, 7(2), 152-164.
Barr, B. (2016). Checking the effectiveness of Quizlet as a tool for vocabulary learning. The Center for ELF Journal,
2(1), 36-48.
Basoglu, E. & Akdemir, O. (2010). A comparison of undergraduate students’ English vocabulary learning: Using
mobile phones and flash cards. The Turkish Online Journal of Educational Technology, 9(3), 1-7.
Berg, S., Hoffmann, K., & Dawson, D. (2010). Not on the same page: Undergraduates’ information retrieval in
electronic and print books. The Journal of Academic Librarianship, 36(6), 518-525.
doi:10.1016/j.acalib.2010.08.008
Chang, S. & Ley, K. (2006). A learning strategy to compensate for cognitive overload in online learning: Learner use
of printed online materials. Journal of Interactive Online Learning, 5(1), 104-117.
Connell, C., Bayliss, L., & Farmer, W. (2012). Effects of ebook readers and tablet computers on reading
comprehension. International Journal of Instructional Media, 39(2), 131-140.
Crestani, F., Landoni, M., & Melucci, M. (2006). Appearance and functionality of electronic books. International
Journal on Digital Libraries, 6(2), 192-209. doi:10.1007/s00799-004-0113-9
Dahlstrom, E. & Bichsel, J. (2014). ECAR study of undergraduate students and information technology. Research
report. EDUCAUSE. Louisville, CO: ECAR.
Daniel, D. & Woody, W. (2013). E-textbooks at what cost? Performance and use of electronic v. print texts.
Computers & Education, 62, 18-23. doi:10.1016/j.compedu.2012.10.016
deNoyelles, A., Raible, J., & Seilhamer, R. (2015). Exploring students’ e-textbook practices in higher education.
EDUCAUSE review.
Dizon, G. (2016). Quizlet in the EFL classroom: Enhancing academic vocabulary acquisition of Japanese university
students. Teaching English with Technology, 16(2), 40-56.
Garland, K. & Noyes, J. (2004). CRT monitors: Do they interfere with learning? Behaviour & Information Technology,
23(1), 43-53. doi:10.1080/01449290310001638504
Green, T. & Bailey, B. (2010). Digital flashcard tools. TechTrends, 54(4), 16-17. doi:10.1007/s11528-010-0415-2
Haßler, B., Major, L. & Hennessy, S. (2016). Tablet use in schools: A critical review of the evidence for learning
outcomes. Journal of Computer Assisted Learning, 32(2), 139-156. doi: 10.1111/jcal.12123
Ismail, R. & Zainab, A. (2005). The pattern of e-book use amongst undergraduates in Malaysia: A case of to know is
to use. Malaysian Journal of Library & Information Sciences, 10(2), 1-23.
Jang, D., Yi, P., & Shin, I. (2016). Examining the effectiveness of digital textbook use on students’ learning outcomes
in South Korea: A meta-analysis. The Asia-Pacific Education Researcher, 25(1), 57-68. doi:10.1007/s40299-015-0232-7
Ji, S., Michaels, S., & Waterman, D. (2014). Print vs. electronic readings in college courses: Cost-efficiency and
perceived learning. Internet and Higher Education, 21, 17-24. doi:10.1016/j.iheduc.2013.10.004
Johnson, G. (2016). The influence of student learning characteristics on purchase of paper book and ebook for
university study and personal interest. Educational Psychology, 36(9), 1544-1559. doi:
10.1080/01443410.2014.1002831
Kang, Y., Wang, M., & Lin, R. (2009). Usability evaluation of e-books. Displays, 30, 49-52. doi:10.1016/j.displa.2008.12.002
Kaznowska, E., Rogers, J. & Usher, A. (2011). The state of e-learning in Canadian universities, 2011: If students are
digital natives, why don’t they like e-learning? Toronto: Higher Education Strategy Associates.
Kim, D. & Kim, D. (2010). Effect of screen size on multimedia vocabulary learning. British Journal of Educational
Technology, 43(1), 62-70. doi:10.1111/j.1467-8535.2010.01145.x
Kiriakova, M., Okamoto, K., Zubarev, M., & Gross, G. (2010). Target: Pilot testing ebook readers in an urban academic
library. Computers in Libraries, 20-24.
Lam, P., Lam, S., Lam, J. & McNaught, C. (2009). Usability and usefulness of eBooks on PPCs: How students’ opinions
vary over time. Australasian Journal of Educational Technology, 25(1), 30-44.
Lamothe, A. (2010). Electronic book usage patterns as observed at an academic library: Searches and viewings. The Canadian Journal of Library and Information Practice and Research, 5(1). doi:10.21083/partnership.v5i1.1071
Lim, E. & Hew, K. (2014). Students’ perceptions of the usefulness of an e-book with annotative and sharing
capabilities as a tool for learning: A case study. Innovations in Education and Teaching International, 51(1),
34-45. doi: 10.1080/14703297.2013.771969
Mune, C. & Agee, A. (2016). Are e-books for everyone? An evaluation of academic e-book platforms’ accessibility features. Journal of Electronic Resources Librarianship, 28(3), 172-182. doi:10.1080/1941126X.2016.1200927
Murray, M. & Perez, J. (2011). E-textbooks are coming: Are we ready? Issues in Informing Science and Information
Technology, 8.
Nakata, T. (2011). Computer-assisted second language vocabulary learning in a paired associate paradigm: A critical
investigation of flashcard software. Computer Assisted Language Learning, 24(1), 17-38.
doi:10.1080/09588221.2010.520675
Nie, M., Armellini, A., Witthaus, G., & Barklamb, K. (2011). How do e-book readers enhance learning opportunities
for distance work-based learners? Research in Learning Technology, 19(1), 19-38.
doi:10.3402/rlt.v19i1.17104
Osborne, N. (2012). The best of both worlds: Indiana University pioneers e-textbook model. EdTech.
Polonen, M., Jarvenpaa, T., & Hakkinen, J. (2012). Reading e-books on a near-to-eye display: Comparison between a small-sized multimedia display and a hard copy. Displays, 33, 157-167. doi:10.1016/j.displa.2012.06.002
Precel, K., Eshet-Alkalai, Y., & Alberton, Y. (2009). Pedagogical and design aspects of a blended learning course. The International Review of Research in Open and Distributed Learning, 10(2). doi:10.19173/irrodl.v10i2.618
Reeves, B., Lang, A., Kim, E., & Tatar, D. (1999). The effects of screen size and message content on attention and
arousal. Media Psychology, 1(1), 49-67. doi: 10.1207/s1532785xmep0101_4
Rockinson-Szapkiw, A., Courduff, J., Carter, K., & Bennett, D. (2013). Electronic versus traditional print textbooks: A
comparison study on the influence of university students’ learning. Computers & Education, 63, 259-266.
doi:10.1016/j.compedu.2012.11.022
Sage, K., Krebs, B., & Grove, R. (in press). Flip, slide, or swipe: Learning outcomes from paper, computer, and tablet
flashcards. Technology, Knowledge, and Learning.
Sage, K., Rausch, J., Quirk, A. & Halladay, L. (2016). Pacing, pixels and paper: Flexibility in learning words from
flashcards. Journal of Information Technology Education: Research, 15, 431-456.
Smyth, S. & Carlin, A. (2012). Use and perception of ebooks in the University of Ulster: A case study. New Review of
Academic Librarianship, 18(2), 176-205. doi:10.1080/13614533.2012.719851
Spencer, C. (2006). Research on learners’ preferences for reading from a printed text or from a computer screen.
Journal of Distance Education, 21(1), 33-50.
Talbott, N., Bhattacharya, A., Davis, K., Shukla, R., & Levin, L. (2009). School backpacks: It’s more than just a weight
problem. Work, 34, 481-494.
Wilson, R., Landoni, M., & Gibb, F. (2003). The WEB book experiments in electronic textbook design. Journal of
Documentation, 59(4). doi: 10.1108/00220410310485721
Young, J. (2014). A study of print and computer-based reading to measure and compare rates of comprehension and
retention. New Library World, 115(7/8), 376-393. doi:10.1108/nlw-05-2014-0051
Zeng, Y., Bai, X., Xu, J., & He, C. (2016). The influence of e-book format and reading device on users’ reading
experience: A case study of graduate students. Publishing Research Quarterly, 32, 319-330.
doi:10.1007/s12109-016-9472-5
CHAPTER 21
VIRTUAL PARENTING: APPLYING DEVELOPMENTAL PSYCHOLOGY
NATALIE HOMA
THIEL COLLEGE
INTRODUCTION
Experiential learning is an important component of successful higher education (Gosen & Washbush, 2004; Kolb,
1984; Kolb, Boyatzis, & Mainemelis, 2011), especially within psychology programs (Stoloff, Curtis, Rodgers, Brewster,
& McCarthy, 2012; Svinicki & McKeachie, 2011). New technologies have provided unique opportunities to
incorporate experiential learning in a psychology classroom, such as virtual case studies, a visual brain program
(Pearson, 2017c), and interactive lab experiments (De Jong, 2015; Francis & Neath, 2015). One area of psychology
that could benefit from such virtual technologies is developmental psychology. It can be very challenging for some
instructors to provide students, for example in a child development course, with hands-on experience working with
or observing children. In addition, the area of developmental psychology is interdisciplinary, focusing on physical,
social, emotional, and cognitive development. Therefore, it can be challenging to highlight the complex, interactive
nature of development in these areas without hands-on experience. Developmental psychology instructors may
utilize MyVirtuals (Pearson, 2017a) to address some of these issues.
MyVirtuals (Pearson, 2017a) are simulation programs published by Pearson to allow students to raise a virtual child
as well as make decisions about their future older selves. There are three programs instructors may choose from:
MyVirtualChild (Manis, 2014a), MyVirtualTeen (Manis, 2014b), and MyVirtualLife (Manis & Buckner, 2014). Briefly,
MyVirtualChild simulates raising a child from birth to 18 years of age, MyVirtualTeen also simulates raising a child
from birth to 18 with more focused questions and responses throughout adolescence, and MyVirtualLife provides
simulation of being a virtual parent raising a child from birth to 18 as well as being an adult from emerging adulthood
through older adulthood. Therefore, these programs may be appropriate for use in a Child Development, Adolescent
Development, Adult Development, or Lifespan Development course. Access to MyVirtuals can be
purchased as a standalone product (approximately $50); however, they can be bundled with
MyPsychLab/MyDevelopmentLab through a Pearson textbook (e.g., Arnett & Maynard, 2017; Berk, 2013). A more
in-depth review of MyVirtualChild’s capabilities specifically, along with identification of challenges and recommendations, will be provided.
MYVIRTUALCHILD
MyVirtualChild (MVC; Manis, 2014a) presents students with various scenarios about their child and life, such as breastfeeding, stranger anxiety, or discipline, each requiring the student to make a decision from three or four options. In addition to scenarios, students are also presented with life event items that describe various
events, milestones, or problems in the child’s life (e.g., achieved object permanence, started walking, you got a
divorce, you received a job promotion). The number of scenarios and life events varies by age, with on average eight
scenarios and six life events each year. From birth to 18, students are provided with approximately 250 scenario
choices and life event items (Manis, 2015). At approximately 12 time points throughout the program, students are also presented with more in-depth reports about their child (e.g., pediatrician report, school report, developmental
assessment, psychologist’s report) and may also respond to open-ended reflective questions. The latter can be
controlled by the instructor, which will be discussed in more detail later. Students see an image of their child
throughout the program that changes periodically with age. The program does not allow the student to go back and
change any responses; however, all scenarios, life events, reports, and reflective questions remain accessible to the
student. The students (and instructors) can go back at any time to recall, for instance, when their child began walking
or how the child performed academically in 5th grade.
Some unique features of the program include the inclusion of genetic-environmental influences on child outcome
based on questions students respond to during initial setup of MVC as well as preset variables (Manis, 2015).
Students are asked personality (e.g., emotionality, extroversion/introversion, aggression, self-control, and activity
level) and ability (e.g., verbal, logical-mathematical, spatial, musical, and bodily-kinesthetic) questions that directly
influence some of the child’s outcomes. In addition, the program is also set up to reflect various cultural trends. For
instance, 50% of the class will experience marital conflict with 25% of those resulting in divorce. About 60% of the
class will require some type of childcare as both parents work outside of the home. Various health variables are also
preset to relate to premature birth, breastfeeding, and other events. In addition, 10% of the students will have a
virtual child that shows symptoms of dyslexia and 10% will have a virtual child experiencing symptoms of ADHD.
Students also get to choose options for the appearance of the child: ethnicity/race (African-American, Asian
American/Pacific Islander, European-American, Hispanic, and Middle Eastern), skin tone, hair color, and eye color.
However, these options do not impact the behavior or outcome of the child.
In addition to incorporating relevant examples of genetic-environmental interactions, the program draws on several developmental theories: those of Piaget, Vygotsky, Bronfenbrenner, and Kohlberg, as well as social learning theory, attachment theory, developmental systems theory, and family systems theory (Manis, 2015). This
allows instructors many opportunities to make direct links to course concepts throughout the experience. For
instance, I have had students comment that their child was being “too shy” around others or crying too frequently when they left the child. However, once we learn about attachment theory in class, the students reflect back
on their experience and recognize these behaviors as typical and healthy. (This experience also allows for great
conversation about cultural differences.)
If instructors are hesitant about incorporating MVC or are new to the program, Manis (2015) provides a helpful and
comprehensive instructor’s manual. The manual provides tips for classroom discussion, guidance on integrating MVC into lecture, examples of activities from other MVC instructors, and tips for using MVC in a variety of classes. Manis also has a
series of YouTube videos that walk one through a variety of topics (e.g., “Are you being a good parent?” “How do I
assign MyVirtualChild?”).
LEARNING OUTCOMES
One important question for instructors using digital tools is whether there is evidence that they positively affect
learning. Oftentimes, there is little evidence of the effectiveness of new digital tools beyond anecdotal reports. This
was the case for MyVirtualChild when first developed. However, a case study is now available for prospective users.
This study (Babcock, n.d.) is specific to MyVirtualLife and is more heavily focused on outcomes related to use of
MyPsychLab. However, results from a sample of 146 undergraduate students taken in 2013 revealed a significant
positive correlation (r = .49) between performance on MyVirtualLife assignments and in-class exams.
Additional research is available beyond Pearson-sponsored research. One recent study examined qualitative data
focusing on students’ opinions of the program’s effectiveness and their psychological engagement with the program
(Symons & Smith, 2014). Analysis of 113 undergraduate student teaching evaluations showed that 88% of the students rated the helpfulness of MVC and its ability to encourage critical thinking as good or very good.
In addition, students reported feeling a bond with their virtual child (85%) as well as feeling happiness (44%) and
pride (33%). Some students (51%) were self-reflective, reporting that they learned something about themselves as a
future parent or about their own parents. A third of the students mentioned course content when evaluating MVC
and 17% specifically stated the assignment helped them learn course content. The authors asserted that this
feedback is reflective of psychological engagement, which is related to teaching and learning outcomes. Symons and
Smith (2014) did not have a control group nor did they explicitly explore the effects of MVC on learning; however,
Zimmerman (2013) did compare exam scores of 100 undergraduate students in either a control group (textbook
only) or an MVC-only group. Results revealed significantly better exam grades for the MVC group compared to the control
group on exams with content most relevant to MVC. Qualitative responses from survey questions revealed 61%
from the MVC group believed the program helped them learn class material, 45% stated that compared to a textbook
MVC was more interactive, and 52% stated the MVC program made it easier to apply class concepts.
I also conducted a quasi-experiment in four sections of my own Child and Adolescent Development course taught at
a small liberal arts college in the Midwest. My study compared outcomes of two experiential learning projects:
students using MVC and students completing an interview project (Homa, 2014). The interview project required
students to interview three different parents throughout the semester about their child’s development and their
experience as a parent. Participants included 91 undergraduate students (71.4% female; 90% Caucasian; mean age = 19.97 years) in a Child and Adolescent Development course in the fall 2013 (interview n = 18; MVC n = 20) and spring 2014 (interview n = 24; MVC n = 29) semesters. The majority of students were freshmen (20%) and sophomores (44%) as well as non-psychology majors (77%).
On the first and last day of class, students completed the following to assess their knowledge of child development:
the Developmental Psychology Concepts questionnaire (DPC; 46 items; adapted from a variety of sources) and the
Knowledge of Child Development Inventory (KCDI; 55 items; Larsen & Juhasz, 1986). In addition, throughout the
semester students were required to respond to specific reflective questions via MVC or use related questions in their
interviews with parents. All students were also required to complete three short papers, each of which asked them to find and summarize two empirical journal articles related to their experience as a virtual parent or to their interview experience. In addition, students participated in three corresponding discussion days in class.
Finally, students completed a final paper that asked them to identify some major changes or patterns that emerged
throughout their MVC experiences as well as reflect upon their experience as a virtual parent. The specific
assignments can be provided upon request.
Results revealed no significant outcome differences between the MVC and interview groups. This included final grade (t(89) = .71, p = .48), DPC (t(87) = -.06, p = .95), and KCDI total score (t(87) = -.85, p = .40). However, pretest-to-posttest differences revealed both projects to be effective in increasing knowledge of developmental psychology. Pretest and posttest results for the MVC group in particular revealed significant results for KCDI total score (t(44) = -3.26, p < .01) and DPC total score (t(45) = -9.94, p < .001).
Figure 1. Results of pre and posttest analysis for MVC group. DPC = Developmental Psychology Concepts. KCDI = Knowledge of Child Development Inventory. **p < .01. ***p < .001.
Repeated measures ANOVA revealed significant pre to posttest changes within the four subscale scores, F(4, 82) = 15.42, p < .001. The emotion development (F(1, 85) = 15.85, p < .001), social development (F(1, 85) = 30.59, p <
.001), and cognitive development (F(1, 85) = 15.65, p < .001) subscale scores of the KCDI revealed significant
improvement from pre to posttest; however, no changes were seen in knowledge for physical development (F(1, 85)
= .16, p = .69). See Figures 1 and 2 for mean scores from the MVC group.
Figure 2. Results of pre and posttest analysis on KCDI subscales for MVC group. KCDI: Knowledge of Child Development Inventory. ***p < .001.
In addition to the quantitative research, I have also examined qualitative results from two sections of the Child and
Adolescent Development course during spring 2017. I focused the analysis on the last reflective question of the MVC program, which states: “Congratulations!! You’ve raised your virtual child! Reflect on the process of raising your virtual
child from birth through adolescence. What have you learned from this experience? As you think about your 18 year
old son or daughter today, what do you think were the key events or experiences that shaped his or her
developmental outcomes?” Four research assistants and I reviewed responses from 59 students, discussed the themes that emerged, coded the responses, and resolved any disagreements (see Table 1 below for examples). Each response could contain multiple themes. The most common theme was the discussion of
parental decision-making. Most of the responses reflected either the idea that there are many difficult decisions to
make as a parent or how influential each decision is to the development of their child. This is a great lesson to learn;
however, I do worry that the latter in particular is a bit exaggerated. Yes, decisions do matter, but MVC reflects single decisions about single moments in time. It is unclear from these responses whether or not students were able to
synthesize their experience. Could they see patterns in their decision-making that would be more impactful than a
single decision?
Table 1
Content Themes from Student Reflective Responses (N = 59)

Parental decision making (49% reported):
“I’ve learned that babies are hard to raise and all your choices go into how well your kid turns out.”
“From this experience I have learned that making decisions as a parent are difficult but they seemed to get easier the older that Hunter got. I also learned that I used my mom as a reference a lot of the time.”
“What I’ve learned from this experience is that every choice you make as a parent is crucial to how your child will turn out.”

Application to future self (25% reported):
“While answering questions, I thought about what I was actually going to choose to do when I had my own.”
“I thought it was really fun to raise a virtual child and I always chose options that reflected how I would raise a child in real life.”
“I have learned many things throughout this experience of raising Emma. … This experience gave me insight on how I would raise my child as well as the things that I would do different when it comes to raising a real child.”

Engagement (20% reported):
“I really enjoyed raising my virtual child. I actually got pretty attached to him. … I kind of feel at a loss without having to make any more decisions for him or trying to help him be a good man when he grows up.”
“I really enjoyed this simulation and it was a great and fun experience. It made you really think about what would be the best decision for your child. I took on a lot of the scenarios as if I was a real parent in that situation and what I would do in that situation.”
“I have learned from this Virtual Child experience that becoming a parent is scary because you have so many responsibilities that you have never had to deal with before.”

Difficulty of parenting (19% reported):
“I have learned that parenting is hard. There is no easy way to parent and you try to do what is best for your child but you never know what the outcome will be.”
“What I have learned from this experience is that parenting is harder than it looks. I understand my parents a little more after raising my virtual child because I understand how they had to make hard choices when it came to how to raise me.”
“From raising my virtual child something that I learned is that raising a child is going to be difficult and even though my child turned out pretty great virtually, I’m going to be in for a journey when I do eventually have my own real child.”

Application to class (8% reported):
“I thought that raising a virtual child was a great experience. I could relate the MyVirtualChild questions and choices to class.”
“This was a very eye opening assignment and project that allowed me to put in to practice what we were learning in class.”
“I was also able to observe the different theories of development as well as apply a lot of what we were reading about in class to raising my child.”
From these responses, I also identified course content that students used to support their answers (see Table 2).
The most frequently reported topics related to school and intelligence, which may have been primed by their child’s
most recent life events (i.e., graduating from high school and making decisions about tertiary education). Regardless,
the top topics discussed do reflect the cognitive and socioemotional development knowledge that seemed impacted
by the MVC experience as revealed in the quantitative data (Homa, 2014). There were only a handful of discussions
about physical development including fine motor skills, brain plasticity, and puberty.
Table 2
Course Topic Coding from Student Reflective Responses (N = 59)

School/intellectual ability (day care, school choice, success, disability): 54%
Parenting style: 40%
Divorce/marital conflict: 20%
Language development: 20%
Social development (peers, social skills, attachment): 20%
Adolescence (puberty, risk taking): 19%
Socioeconomic status: 12%
Sibling relationships: 10%
Overall, both quantitative and qualitative research suggest MVC is effective at engaging students, encouraging critical thinking (specifically application), and supporting learning of course content. Additional research is needed to
further explore the effectiveness of MVC on learning, especially long-term effects.
CHALLENGES AND RECOMMENDATIONS
Upon completing a semester using MVC, my feelings about the program tend to be positive. Students’ final
papers were enjoyable to read. It is clear they were engaged in the experience and were able to apply content
thoughtfully from class to their experience. However, the week-to-week experiences with MVC are not always problem-free. Most of the challenges I encounter fall under technological difficulties, student concerns, and
implementation.
TECHNOLOGICAL DIFFICULTIES
GLITCHES
The only glitches I came across, as the instructor, were from user error. Somehow, I had a previous class linked to my
new class, so students were logging into the wrong MVC account. However, once the students and I noticed the
error, my students were able to change the course ID easily and I was able to get the students back on track. In
addition, I did not realize that once students have accessed a reflective question section of the program, any changes to those questions by the instructor will not appear for those students. Therefore, be sure all reflective
questions are chosen and edited before students obtain access.
Most students’ problems have been with saving responses to the program. Some students typed the response right
into the textbox but the program did not save the entire response or it saved nothing at all. Unfortunately, Pearson
has no way to retrieve this work or confirm the student’s attempt. I have become aware of two possible
explanations: 1) the program times out on the student without providing any warning; or 2) the student used an ampersand or apostrophe in the response. The latter is obviously a more difficult problem to solve, as any proper citation
would include an ampersand and apostrophes are very common. My solution has been to warn students of this glitch and encourage them to type their responses in a word processing document first, so that their work is preserved regardless of whether MVC saves it.
GRADING
To my knowledge, MVC does not communicate with any learning platform such as Blackboard or Moodle. In addition,
instructors may read question responses and see students’ progress within the program, but there is no place to record grades within MVC nor to provide feedback to students. However, you can export responses into an Excel sheet
to record feedback and aid in grading.
STUDENT CONCERNS
COST
Each semester I do hear from a handful of students with genuine concerns about the cost of the program (especially
when bundled with MyPsychLab). Bundling MVC with MyPsychLab does provide the student with the eText, which
can be more affordable than a print version of the textbook; however, the one-time access code for MVC is not helpful financially. Students do not have the option to rent MVC, nor can they sell it back to the publisher. Therefore, the
use of MVC (especially when bundled with MyPsychLab) may present a financial problem for some students.
MyPsychLab does provide a 14-day trial period so students who may need more time to acquire funds for the
materials will not get behind in the class.
PROCRASTINATION
I was most frustrated with some of my students’ procrastination. I meant for this to be a semester-long experience that required thoughtful reflection and connection to class. In reality, however, some students would raise their child through several years of development the night before the due date. The program does provide a timestamp for when students have viewed life events or reports, as well as when students have submitted answers to scenarios and reflective questions. While I have not gathered evidence to support this claim, I do believe those who raised their child the night before were not benefiting from the experience. They would rush their responses to scenarios and likely skim the life events and reports. In addition, because they spent only a snapshot of time with their virtual child, they were unlikely to encode much of the experience to reflect on later in the child’s development or to successfully connect the experience to class material.
I would recommend setting multiple deadlines to encourage paced completion of the assignments. I have three to
four separate deadlines throughout the semester that correspond with course exams; therefore, class discussion of
MVC also acts as an exam review. However, I have considered making a total of six to eight deadlines to ensure some
work has been completed before the “final” deadline. For instance, if students were to raise their child from birth to
30 months for a discussion day in class, then I would break that down to ensure students raised their child from birth
to 18 months by X deadline and 18 to 30 months by Y deadline.
REACTIONS
As noted above, most reactions to the overall experience seemed positive and enlightening. However, some aspects of the program surprised the students. First, I often heard that they wished the program had more options because the response they would have given as a parent in a scenario was not one of the three or four options. In addition, some students reacted defensively to some life events. For instance, as in the United States, about half of the students in the class will experience marital conflict in the simulation, with many such conflicts leading to divorce. Many of my students asserted they would never get divorced and were appalled that their virtual marriage ended in divorce. The one challenge
this presented was to encourage the student to understand conditions that might lead to divorce despite one’s convictions to stay married. In addition, the student must be challenged to identify evidence-based risk factors for negative outcomes of divorce, to discourage blaming all struggles with the child on the divorce. Additionally, some students expressed disappointment that they shared similar experiences with classmates, such as the pet goldfish dying, the child getting a concussion, or the teenager getting a tattoo. The program asserts that “...depending on the child’s and the student parent’s personality characteristics at birth, decisions the student makes about child rearing, and random events in the environment, each child will develop along a unique path” (Manis, 2015, p. 5). In practice, however, not every student will have a fully distinct experience. I actually found discussion about these shared life events to be more engaging: students could compare their thoughts about the event or their different responses to the scenario.
IMPLEMENTATION
TOPICAL VS. CHRONOLOGICAL
The first time I incorporated MVC into my classes I was using a topical approach to teaching child and adolescent development. Teaching topically was my preference and I had been using Berk (2013) for a few years; therefore, I added MVC bundled with MyPsychLab. Because instructors can edit their own reflective questions into the program, it was not difficult to ensure course content could be linked to the experience. For instance, we did not cover language development in class until the students had raised their virtual child to six years of age. They responded to the following: “Describe your child's language development since birth. Has he/she developed typically? Struggled in areas? Continues to struggle? Give specific examples. Also, describe how any struggles were dealt with or are being dealt with.” In theory, this is a strong question that requires students to revisit information in the program and integrate it for a complete evaluation of language development from birth to six years. In practice, however, students often did not thoroughly analyze this development. The manual (Manis, 2015) does provide helpful tips for using MVC while teaching topically; even so, it is more challenging than teaching chronologically.
The last two times using MVC I taught the course chronologically, and the students seemed to make connections to the class far more easily; however, I have no data to back up this claim. Their child experienced the same developmental concepts we discussed in class (e.g., first word, separation anxiety, risky behaviors). However, using Arnett and Maynard (2017) presented me with the challenge of continuing to emphasize a cultural approach throughout the course despite very little to no discussion of culture throughout MVC. Manis (2015) explained that “A generic American middle-class environment is assumed to be in place” (p. 20) throughout the program. He noted that, because of the variation in cultural values and experiences of different racial, ethnic, and religious groups in America, it is too difficult for the program to simulate them; he feared the attempt might be stereotypical or offensive. Therefore, it is important to include discussion of cultural differences as part of the reflective questions and class discussion. It is especially important to identify cultural differences not only within America but also within other developed and developing nations.
LEARNING OBJECTIVES
When integrating any technology into the classroom, one should do so only if it meets the learning objectives of the class. Therefore, I believe it is important that instructors spend time editing or creating their own reflective questions for the program to achieve their learning objectives for the course. Relying on the default questions (i.e., three questions will automatically appear to students in all 12 reflective question sections) may not be as effective as
tailoring the questions to your class goals. Luckily, editing these questions or adding new questions is easily done. However, one must finalize all edits and decide which questions to include, and how many, before students virtually raise their child to that age.
CONCLUSION
MyVirtuals can be an effective tool for students to engage with developmental psychology, critically evaluate the application of key concepts, and retain information long term. MVC encompasses many of the key components of experiential learning: it simulates real-world situations and problems; the situations involve ill-defined problems with multiple response options; students lead the problem-solving; and students spend time reflecting on and receiving feedback about their responses (Svinicki & McKeachie, 2011). However, to be effective as an experiential learning tool, the simulation program must be properly implemented and fully integrated into the course. If it is simply used as a fun activity or “busy work,” it is unlikely to aid in learning.
REFERENCES
Arnett, J. J., & Maynard, A. E. (2017). Child development: A cultural approach (2nd ed.). New York, NY: Pearson.
Babcock, R. L. (n.d.). Case study: Central Michigan University. Retrieved from
https://www.pearsonmylabandmastering.com/northamerica/results/files/MyPsychLab_Central_Michigan
_University_Babcock_CS.pdf?_ga=2.88532632.2088836767.1507903275-533664596.1503507443
Berk, L. E. (2013). Child development (9th ed.). New York, NY: Pearson.
De Jong, T. (2015). ZAPS: The Norton psychology labs, version 2.0. New York, NY: W.W. Norton & Company, Inc.
Francis, G., & Neath, I. (2015). CogLab online with access code, version 5.0. Australia: Wadsworth Cengage Learning.
Gosen, J. & Washbush, J. (2004). A review of scholarship on assessing experiential learning effectiveness.
Simulation and Gaming, 35(2), 270-293. doi:10.1177/1046878104263544
Homa, N. (2014, October). Comparison of semester-long experiential learning projects for child and adolescent
development courses. Poster presented at the meeting of the APA Division 2: Society for the Teaching of
Psychology, Atlanta, GA.
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. New Jersey:
Prentice-Hall.
Kolb, D. A., Boyatzis, R. E., & Mainemelis, C. (2011). Experiential learning theory: Previous research and new
directions. In R. J. Sternberg & L. Zhang (Eds.), Perspectives on thinking, learning, and cognitive styles (pp.
227-247). New York, NY: Routledge.
Larsen, J. J., & Juhasz, A. M. (1986). The Knowledge of Child Development Inventory. Adolescence, 21(81), 39-54.
Manis, F. R. (2014a). MyVirtualChild. New York: Pearson. Retrieved from http://www.myvirtualchild.com/
Manis, F. R. (2014b). MyVirtualTeen. New York: Pearson. Retrieved from
https://mvt.prod.mvx.pearsoncmg.com/landing
Manis, F. R. (2015). My virtual child: An instructor’s manual. Retrieved from
http://media.pearsoncmg.com/ab/MyVirtualChild/Instructor_Manual_MyvirtualChild.pdf
Manis, F. R., & Buckner, J. P. (2014). MyVirtualLife. New York: Pearson. Retrieved from
https://www.myvirtuallife.com/
Pearson. (2017a). MyVirtuals. Retrieved from https://www.pearson.com/us/higher-education/products-services-teaching/learning-engagement-tools/myvirtuals.html
Pearson. (2017b). Results: How the MyVirtuals can help. Retrieved from https://www.pearson.com/us/higher-education/products-services-teaching/learning-engagement-tools/myvirtuals/results.html
Pearson. (2017c). Tour the features of MyLab Psychology. Retrieved from
https://www.pearsonmylabandmastering.com/northamerica/mypsychlab/educators/features/index.html
Stoloff, M. L., Curtis, N. A., Rodgers, M., Brewster, J., & McCarthy, M. A. (2012). Characteristics of successful
undergraduate psychology programs. Teaching of Psychology, 39(2), 91-99. doi:
10.1177/0098628312437721
Svinicki, M., & McKeachie, W. J. (2011). Experiential learning: Case-based, problem-based, and reality-based. In M. Svinicki & W. J. McKeachie (Eds.), McKeachie’s teaching tips: Strategies, research, and theory for college and university teachers (13th ed., pp. 202-212). Belmont, CA: Wadsworth.
Symons, D. K., & Smith, K. H. (2014). Evidence of psychological engagement when raising a virtual child. Psychology
Learning & Teaching, 13(1), 52-57. doi: 10.2304/plat.2014.13.1.52
Zimmerman, L. K. (2013). Using a virtual simulation program to teach child development. College Teaching, 61(4),
138-142. doi: 10.1080/87567555.2013.817377
CHAPTER
22
STUDENT AND FACULTY
EXPERIENCES WITH
GOANIMATE4SCHOOLS:
A CASE STUDY
RICHARD J. HARNISH AND K. ROBERT BRIDGES THE PENNSYLVANIA STATE UNIVERSITY,
NEW KENSINGTON
INTRODUCTION
It comes as no surprise that students of today learn differently than those of yesterday largely because of how
technology has affected how we design and deliver learning experiences to our students. Indeed, the rapid change
of technological advances we witness in the classroom continuously challenges us to experiment and reflect on how
technology can be leveraged to our students’ benefit. As seen in this eBook, a variety of technological innovations are being used in the classroom, ranging from specific applications like MyVirtuals (see the Homa chapter) or Pesky gNATs (see the O’Reilly chapter) to learning management systems like Canvas or Blackboard (see the Cerniak chapter).
One technology that our group of authors has not discussed is the animated text-to-video web platform, which can provide students with an opportunity to engage with course material, share ideas, solve problems, and demonstrate competency. There are a number of text-to-video technologies available to instructors, including but not limited to Nawmal for Schools (https://school.nawmal.com), Plotagon Education (https://plotagoneducation.com), Moviestorm (http://www.moviestorm.co.uk), and GoAnimate4Schools (https://goanimate4schools.com). Because GoAnimate4Schools has become the leading software in this niche market (Stratton & Julien, 2013) and because it has been used by the authors, our chapter will provide an overview of this technology.
WHAT IS GOANIMATE4SCHOOLS?
GoAnimate4Schools is a web platform that allows for the creation of customized animated videos.
GoAnimate4Schools’ design is intuitive, especially for those who are familiar with navigation buttons and pull-down
menus. (See Figure 1.)
Figure 1. GoAnimate4Schools’ Interface
Perhaps the most challenging task for students is to write the monologue or dialogue for their animated video (more on this later). Once this task is completed, students are led through eight steps that result in a finished animated video. We will now detail each of these steps.
STEPS IN CREATING A GOANIMATE4SCHOOLS VIDEO
STEP 1: SELECT A THEME
GoAnimate4Schools has a number of themes from which to choose; the chosen theme sets the tone for the animated video. (See Figure 2.)
Figure 2. Example Themes in GoAnimate4Schools
STEP 2: SELECT A SCENE
Based upon the theme selected, a number of scenes are available on which to base an event or interaction. (See Figure 3.)
Figure 3. Example scenes in GoAnimate4Schools
STEP 3: SELECT CHARACTER(S)
Depending on the theme selected, a variety of existing characters is available. Each character can be modified if desired (e.g., clothing), or, if the student is adventurous, a unique character can be generated by selecting from existing character templates (e.g., gender, hair color, hair style). (See Figure 4.)
Figure 4. Example characters in GoAnimate4Schools
STEP 4: SELECT PROPS FOR THE SCENE
Additional items can be added to a scene. The props are specific to a scene, and although plentiful, they are limited
in the sense that props cannot be created by the student. (See Figure 5.)
Figure 5. Example props in GoAnimate4Schools
STEP 5: ADD A CALLOUT BOX IF DESIRED
Callout boxes may be added to emphasize an important point being made in the animated video. The callout box
and font styles can be selected by the student. Students cannot create callout boxes or font styles. (See Figure 6.)
Figure 6. Example callout boxes and fonts in GoAnimate4Schools
STEP 6: ADD BACKGROUND SOUNDS IF DESIRED
GoAnimate4Schools has pre-loaded music clips and sound-effect files that may be used in students’ animated videos. Using the music clips and sound effects is not required (and in our experience, few students use them), but a well-thought-out and well-placed piece of music or sound effect can focus viewers’ attention on important information being presented. (See Figure 7.)
Figure 7. Example music files in GoAnimate4Schools
STEP 7: ADD DIALOGUE
Once Steps 1-6 have been completed, students are ready to add dialogue to their animated video. There are a
number of options available to students including adding an MP4 file (recording the dialogue outside of
GoAnimate4Schools), recording the dialogue using the built-in recording option in GoAnimate4Schools (a
microphone is needed on the device being used), or having GoAnimate4Schools generate the dialogue from
students’ text input (a keyboard is needed on the device being used). (See Figure 8.)
Figure 8. Example dialogue entry options in GoAnimate4Schools
STEP 8: ADD MOVEMENT
Movement is what differentiates an animated video from a still video (i.e., video imagery that is not intended to convey the appearance of movement). This is easily done by choosing a character and then selecting a pre-loaded movement. Custom movements can also be created, if desired. (See Figure 9.)
Figure 9. Example movements in GoAnimate4Schools
The preceding is all that is necessary to create an animated video in GoAnimate4Schools. Interested faculty should visit the GoAnimate4Schools website (https://goanimate4schools.com) and sign up for a free 14-day trial (https://goanimate4schools.com/public_signup/order/trial) to try the web platform. Faculty should know that although the web platform is used at the post-secondary level, its primary audience is K-12. Thus, students cannot export their work; only faculty can. Relatedly, faculty have the option of moderating student videos before they become visible to other students (which in today’s hypersensitive environment may be useful even at the collegiate level). Faculty who prefer not to moderate should be aware that student videos nonetheless remain private until faculty approve the work. As potential privacy concerns may give adopters pause, GoAnimate4Schools promises never to collect email addresses and contact information from its users.
PERSPECTIVES
In this section, we discuss the benefits and drawbacks of GoAnimate4Schools from both student and faculty
perspectives.
STUDENT PERSPECTIVES: THE BENEFITS
As noted previously, the authors have used GoAnimate4Schools in their courses. To assess students’ perspectives on using the software, we surveyed students at the close of the course, asking them for their reactions. Below are the themes that emerged from the feedback:
• Creativity. Students noted that GoAnimate4Schools allows for more creativity than writing a traditional term paper. For example, students often have psychologists of disparate theoretical orientations debate each other in unusual circumstances with an unexpected storyline twist. One memorable animated video created by a student featured B.F. Skinner and Sigmund Freud, who had a chance encounter at a local bar. After each had a few drinks (and snide comments were made about each other’s theories), they had a heated argument over how the unconscious influences behavior. When fisticuffs were imminent, Abraham Maslow, who was also in the bar, attempted to defuse the situation. He did so by stating that a truly self-actualized person would not need to win such a silly argument. Embarrassed, Skinner and Freud began to tease Maslow about his ideas on free will. They left the bar as friends, agreeing that all behavior is determined.
• Engagement. Students reported that they were more engaged in the assignment than in traditional term papers. They reported that creating the video was fun (i.e., “It didn’t seem like school work but something that I would do in my free time”). Students also reported that because the assignment was engaging, they were more motivated to produce a higher-quality product. Motivation is the most important factor affecting learning (Pintrich & Schunk, 2002; Pintrich, Wolters, & Baxter, 2000), and student motivation is determined by the student, the teacher, the content, the method/process, and the environment (Williams & Williams, 2011). In the case of GoAnimate4Schools, the method/process seemed to play the most prominent role in students’ motivation to produce a quality product.
• Familiarity. Although writing a term paper may not be familiar to all students, the technology used in GoAnimate4Schools is. Students reported very few problems with the interface and web platform. Much of the extant research (e.g., Courts & Tucker, 2012; McHaney, 2011) on Millennials and technology suggests
that they are comfortable and proficient with technology. As a result, GoAnimate4Schools seems to be one
way to capture students’ attention and empower them to take control of their education.
• Diversity. Finally, students noted that creating an animated video added to the diversity of their academic and professional experiences. This is important because multiple approaches to learning, including those that feature technology, enhance the critical thinking that serves students well in their next stages of life (Behar-Horenstein & Niu, 2011).
STUDENT PERSPECTIVES: THE DRAWBACKS
• Monologue/Dialogue. The primary drawback to GoAnimate4Schools from the student perspective was that writing the monologue or dialogue was less interesting than creating the video. As a result, some students spent more time on the look and feel of the video than on its content.
• Time Management. Although not unique to the use of GoAnimate4Schools, several students underestimated the amount of time needed to complete the assignment. The lack of proper planning resulted in a product that either lacked quality content or was executed (i.e., animated) poorly.
• Internet Speed. Because GoAnimate4Schools is a web-based program, students need high-speed Internet access. Although students could use the widely available computers on campus, many opted to use their computers and/or tablets at home. A few noted that because they did not have high-speed Internet access at home, their animated videos took longer to create than those of classmates who had high-speed access.
• Computer Access. As smartphones have evolved into portable microcomputers, students often forgo the purchase of a desktop or laptop computer. For such students, having to use a campus desktop computer was reported to be a hassle, and these students questioned why they could not create their animated video on their smartphones. (GoAnimate4Schools will not work on smartphone operating systems.)
FACULTY PERSPECTIVE: THE BENEFITS
In reflecting on our use of GoAnimate4Schools and students’ feedback, we drew the following observations
concerning its use:
• Creativity. We were impressed by some of our students’ creativity when using GoAnimate4Schools; however, not all animated videos scored high in creativity. For those with an artistic flair, this was an option they were more likely to choose over writing a traditional term paper. We recommend providing the choice – create an animated video or write a traditional term paper – so that students can select the option best suited to their talents.
• Engagement. We observed that students seemed to enjoy creating their animated videos because it was something different (and perhaps something they might do in their free time). However, not all students were successful in creating their animated video, because of poor time management or spending too much time on non-content-related issues.
• Familiarity. Students did not have difficulty working with the technology. This is always a concern when using technology in the classroom because it can pose problems for completing the assignment (i.e., students can become frustrated and give up).
• Peer Review. Because the videos tend to capture students’ interest, students want to see what their peers have created. This creates the opportunity for peer review, in which students provide feedback on others’ work and receive feedback on their own. This is important because research has demonstrated that producing and receiving reviews may enhance students’ learning (Nicol, Thomson, & Breslin, 2014). Indeed, feedback from instructors does not seem to improve student learning (Crisp, 2007; Bailey & Garner, 2010; Wingate, 2010).
• Plagiarism May Be Held in Check. Inasmuch as students must write their monologue/dialogue prior to animation, the monologue/dialogue can be checked for plagiarism through Turnitin. Additionally, with the use of storyboarding, the potential for simply downloading an animated video and reproducing it may be reduced. Access to social media such as YouTube is blocked inside GoAnimate4Schools to prevent plagiarizing an existing animated video.
FACULTY PERSPECTIVE: THE DRAWBACKS
• Monologue/Dialogue. As noted by students, a potential drawback is that some students will spend more time on the look and feel of their animated video than on its content. Faculty interested in using GoAnimate4Schools should require multiple drafts of the monologue or dialogue before having students use the software.
• Time Management. Students often underestimate the amount of time needed to create their animated video. Faculty should provide students with a timeline that includes hard deadlines for components (e.g., a draft of the monologue/dialogue is due in week 5).
• Accessibility for the Visually Impaired. GoAnimate4Schools can be a challenge for visually impaired students. Although a screen reader may be able to read textual elements aloud, a visually impaired student will have difficulty creating an animated video. Accommodations will need to be made, such as having the visually impaired student work with a sighted helper. (See the Knopf, Knopf, Anderson, and Waranka chapter for a more detailed discussion.)
• Cost. Another potential drawback is the cost. Although minimal compared to software such as SPSS, which we use in our research methods and statistics courses, a one-year license for GoAnimate4Schools is $668 for a five-teacher, 200-student plan. Faculty interested in using the software will need to add a line item to their department’s ever-dwindling budget.
GRADING
A different kind of grading is required if students create an animated video. A common method of assessing performance is a rubric. A rubric provides students (as well as the grader) with descriptions of levels of quality, usually on a point scale (Sadler & Good, 2006); the goal of the rubric is to provide definitions of quality for both the student and the instructor (Shepherd & Mullane, 2008). Additionally, rubrics can assist students
in developing a sense of ownership of their work because rubrics focus students on the elements assessed (Shepherd & Mullane, 2008).
To help our students focus on the content of their animated video, we provide them with a grading matrix that emphasizes content. One of the drawbacks recognized by our students (and the authors as well) was that they often spent less time on the content of their videos; thus, we weighted content much more heavily than the other components of the assignment. Our grading matrix is presented below. (See Table 1.)
Table 1. GoAnimate4Schools Grading Matrix

Knowledge of content (50%)
    Poor (1): Content is vague or inaccurate.
    Fair (2): Facts are presented.
    Good (3): Thorough understanding of content is evident.
    Excellent (4): Content presented shows synthesis of information in a concise and understandable manner.

Organization of content (20%)
    Poor (1): Information is presented in a random fashion.
    Fair (2): Information is presented in an orderly manner but may ramble.
    Good (3): Information is presented in a logical and concise manner.
    Excellent (4): Information is presented in a way that helps the listener remember content (e.g., callout boxes, summaries).

Alignment with learning objective (10%)
    Poor (1): Little connection with the intent of student learning.
    Fair (2): Information presented supports the learning outcome.
    Good (3): Learning outcome is completely addressed in the video.
    Excellent (4): Learning outcome is addressed and connections are made to related topics.

Animation (5%)
    Poor (1): Lacks congruence between speaking and actions of figure(s).
    Fair (2): Movements correspond to speech.
    Good (3): Movements of actors enhance content.
    Excellent (4): Uses sound and motion effects to enhance the knowledge presented.

Personality (5%)
    Poor (1): Presentation is dry.
    Fair (2): Presentation is interesting.
    Good (3): Uses humor to emphasize points.
    Excellent (4): Very creative and clever.

Audience specific (5%)
    Poor (1): Uses language the audience will not be likely to understand.
    Fair (2): Language is appropriate for the audience.
    Good (3): When difficult or new terms are used, they are explained.
    Excellent (4): Personalizes to the audience; clip is created specifically for the listener and not a general audience.

Grammar (5%)
    Poor (1): Contains errors.
    Fair (2): Almost grammatically correct.
    Good (3): Grammatically correct.
    Excellent (4): Words used are perfect; changes would not enhance the presentation.
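Table 1 does not state how the four-point scores and percentage weights combine into a grade. On the assumption that each criterion is scored 1-4 and the percentages act as weights, a final mark can be computed as a weighted average (a sketch of one reasonable reading, not a formula the chapter prescribes):

```python
# Weights from Table 1; each criterion is scored 1 (Poor) to 4 (Excellent).
WEIGHTS = {
    "knowledge": 0.50, "organization": 0.20, "alignment": 0.10,
    "animation": 0.05, "personality": 0.05, "audience": 0.05, "grammar": 0.05,
}

def rubric_percentage(scores):
    """Weighted average of 1-4 scores, rescaled to a 0-100 mark."""
    weighted = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    return round(weighted / 4 * 100, 1)

example = {"knowledge": 4, "organization": 3, "alignment": 3,
           "animation": 2, "personality": 3, "audience": 3, "grammar": 4}
# Content dominates: a weak Animation score barely moves the mark.
```

With the example scores above the mark is 87.5; a perfect 4 on every criterion yields 100, reflecting how heavily content is weighted relative to presentation.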
CONCLUSIONS
The use of GoAnimate4Schools can motivate students to produce a higher-quality product because it allows them to be creative while using a technology with which they are familiar. Students also may be more engaged in the assignment because it is different from a typical term paper. Additionally, creating an animated video adds to the diversity of students’ academic experiences, which may aid in the development of lifelong critical thinking skills.
REFERENCES
Bailey, R., & Garner, M. (2010). Is the feedback in higher education assessment worth the paper it is written on?
Teachers’ reflections on their practices. Teaching in Higher Education, 15, 187-198.
Behar-Horenstein, L., & Niu, L. (2011). Teaching critical thinking skills in higher education: A review of the literature.
Journal of College Teaching and Learning, 8, 25-41.
Crisp, B. (2007). Is it worth the effort? How feedback influences students’ subsequent submission of assessable work.
Assessment & Evaluation in Higher Education, 32, 571-581. doi:10.1080/02602930601116912
Courts, B., & Tucker, J. (2012). Using technology to create a dynamic classroom experience. Journal of College
Teaching and Learning, 9, 121-128.
McHaney, R. (2011). The new digital shoreline: How Web 2.0 and millennials are revolutionizing higher education.
Sterling, VA: Stylus.
Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review
perspective. Assessment & Evaluation in Higher Education, 39, 102-122.
doi:10.1080/02602938.2013.795518
Olson, G. (1997). Motivation, motivation, motivation - Secondary school educators. Retrieved from
http://7-12educators.about...-12educators/library/weekly/aa071897.htm
Pintrich, P. R., & Schunk, D. H. (2002). Motivation in education: Theory, research, and applications (2nd ed.). Upper
Saddle River, NJ: Merrill Prentice-Hall.
Pintrich, P. R., Wolters, C., & Baxter, G. (2000). Assessing metacognition and self-regulated learning. In G. Schraw &
J. Impara (Eds.), Issues in the measurement of metacognition (pp. 43–97). Lincoln, NE: Buros Institute of
Mental Measurements. Retrieved from http://digitalcommons.unl.edu/burosmetacognition/3/
Sadler, P. H., & Good, E. (2006). The impact of self- and peer-grading on student learning. Educational Assessment,
11, 1-31. doi:10.1207/s15326977ea1101_1
Shepherd, C. M., & Mullane, A. M. (2008). Rubrics: The key to fairness in performance based assessments. Journal
of College Teaching & Learning, 5. Retrieved from
https://www.cluteinstitute.com/ojs/index.php/TLC/article/view/1231
Williams, K. C., & Williams, C. C. (2011). Five key ingredients for improving student motivation. Research in Higher Education Journal, 12(August). Retrieved from http://www.aabri.com/manuscripts/11834.pdf
Wingate, U. (2010). The impact of formative feedback on the development of academic writing. Assessment &
Evaluation in Higher Education, 35, 519-533. doi:10.1080/02602930903512909
CHAPTER
23
PESKY GNATS! USING COMPUTER
GAMES AND SMARTPHONE APPS
TO TEACH COMPLEX COGNITIVE
BEHAVIOURAL THERAPY AND
MINDFULNESS CONCEPTS TO
CHILDREN WITH MENTAL HEALTH
DIFFICULTIES
GARY O’REILLY
UNIVERSITY COLLEGE DUBLIN
INTRODUCTION
The aim of this chapter is to describe how technology can be used as a very effective tool in the delivery of evidence-based mental health interventions. It will focus on the role of technology in the psychoeducation of children about Cognitive Behaviour Therapy (CBT).
The application of technology to mental health is intuitively appealing to many people. Its most common use is to
address the widespread problem of the lack of availability of mental health services in the face of large and unmet
public need (Bower & Gilbody, 2005; Clarke, 2011). In essence, this is a problem of availability tackled through the
reach of Internet technology. An ever-increasing number of high quality evidence-based mental health
programmes are available online, attaining small to moderate improvements on a range of psychological difficulties
such as anxiety and depression for those who use them (Twomey & O’ Reilly, 2013; Twomey & O’ Reilly, 2016;
Twomey, O’ Reilly, & Meyer, 2017). However, the focus of this chapter concerns a different use of technology to solve
another problem faced in mental health intervention for young people. That is, how we can use technology during
mental health interventions to help young people better understand and apply the often-complex concepts of
evidence-based models of psychological functioning such as CBT. This is a problem of complex concept education
tackled through the medium of technologies such as specially designed computer games and smartphone apps.
This chapter begins by outlining the underlying theoretical model of CBT, the mental health intervention with
arguably the strongest evidence base (Dozois & Beck, 2011; Hofmann, Asnaani, Vonk, Sawyer, & Fang, 2012).
Although there are many successful applications of this model for children, they all face the obstacle of making the
complex, and at times abstract, ideas of CBT easily accessible to children whose cognitive development may not
readily lend itself to abstract thinking. This chapter will then describe the design of Pesky gNATs, a not-for-profit
computer game and app for children aged seven and older with clinically significant anxiety or low mood. It is
designed for them to play in session with a mental health professional, and uses technology to make CBT concepts
more accessible and applicable.
THE COGNITIVE BEHAVIOUR THERAPY MODEL
The description of the CBT theoretical model presented in this section is based on Beck (2011), Beck and Haigh (2014),
and Dozois and Beck (2011). The CBT model can seem like a relatively simple representation of human functioning
proposing that there is an important relationship between our thinking, our feelings, and our behaviours. However,
in reality, the model is more sophisticated than this and its individual application requires the development of
complex self-monitoring, openness to flexible evaluative thinking, emotion regulation and behavioural change. The
left side of Figure 1 and the description below presents the traditional generic CBT model as typically applied to
clinical anxiety or depression where psychological processes are divided into three levels.
Level One refers to "Surface Level Cognition." In response to any given situation in the world, "automatic thoughts"
continuously flow through our mind, priming our feelings and behaviours. Our automatic thoughts are usually
involuntary, often unnoticed and typically accepted without reflection or deliberation. They can be a brief verbal
thought or a quickly passing image. Automatic thoughts can be positive, negative or neutral. Our automatic thinking
and its relationship with our feelings and behaviour usually remains outside or below our conscious awareness.
When experiencing a mental health difficulty, our automatic thoughts are predominantly, though not necessarily
exclusively, negative (i.e., either simply unhelpful or more negative than warranted compared to objective deliberate
thinking).
CBT for anxiety or depression usually begins by orienting people to the CBT model with the aim of helping them
bring the usually unnoticed process of the relationship between automatic thoughts, feelings and behaviours into
awareness, with special attention directed to the identification of potential negative automatic thoughts. Negative
automatic thoughts can be “distorted” or unhelpful in a number of ways. On examination, the distortion of some of
our thoughts will be clear (e.g., “no one ever appreciates a single thing I do…”). Others may appear reasonable but
their implication belies a distortion (e.g., on seeing someone I forgot to do something for, I am reminded of my lapse
and have the simple, accurate thought “I forgot to do that.” If this implies to me that I am a no-good, unreliable,
incompetent fool, then understanding the implication of this brief thought reveals its associated negative feeling and
behaviour). Other negative thoughts may be accurate, but in a given situation unhelpful (e.g., I find myself at an
airport thinking “Aeroplanes crash…what if this plane crashes?” as I board. It is accurate that aeroplanes do indeed
crash but considering the personal safety implications of this may be better done at a time other than pre-boarding).
As we become more aware of our personal moment-to-moment automatic thoughts, feelings and behaviours, they
become available for reflection and evaluation. This allows us to begin the next step in managing anxiety or low
mood which is to objectively judge the helpfulness (utility) and accuracy (validity) of our automatic thinking, and its
feeling and behavioural trajectory, particularly at times of distress or difficulty. The early phases of CBT are oriented
to level one (or surface level cognitions) in Beck’s model and are concerned with developing cognitive monitoring
and cognitive appraisal and modification skills.
Figure 1: The integrated CBT and mindfulness model in the cCBT computer game Pesky gNATs
Awareness of the relationship between our thinking, feeling and behaviour opens up the possibility of understanding
level two of the generic Cognitive Model. Beck describes these as "Intermediate Cognitive Processes" including our
memory systems, our attentional processes, our beliefs or attitudes, and how we interpret neutral or ambiguous
events or information when we encounter them. The operation of these cognitive processes can maintain mental
health difficulties (for example if our memory system has a biased recall of negative past events relative to neutral
or positive ones). Insight into how these intermediate cognitive processes work can be gleaned from observing
many examples of the relationship between our thoughts, feelings and behaviours. These observations allow us to
judge our recall: Do we tend to recall negative events, or negative aspects of events, or is there a balanced positive,
neutral and negative aspect to our memory? Does our attention become narrowed, focusing on the negative
or on solutions that previously proved problematic, or do we operate with an open, flexible attention? What
intrinsic attitudes and beliefs are revealed from our thinking, feeling and behaviour? Do they support good mental
health or maintain a difficulty? Do we recognise ambiguous or neutral events for what they are, or do we tend to
put aside their ambiguity and interpret them in an unhelpful negative manner? CBT usually proceeds by helping us
to use many observations of our thought-feeling-behaviour in our everyday life over time to begin to build insight
into our secondary cognitive processes, how they contribute to or maintain a mental health difficulty, and how they
can be re-directed for self-benefit.
Level Three is described by Beck as a deeper organising cognitive structure he refers to as "Schemas," the specific
content of which he refers to as "Core Beliefs" (Beck, 1964). A schema is the cognitive organising structure of
information distilled from our life experience which can be summarised as a Core Belief. It is both the outcome of
our past automatised processing of information, and the current and future organising direction of our automatic
processing of information. Core Beliefs are such an embedded natural part of our life experience we often simply
feel they are true. As such, they have both a cognitive and an emotional component. In terms of our mental health,
Beck points to the importance of schemas about our view of self, our view of other people and our view of the world.
Healthy core beliefs are usually balanced, recognising both positive and negative qualities. Core beliefs that support
mental health difficulty have usually lost that balance and are unrealistically and predominantly negative. Consider,
for example, an anxious person whose self-schema distils and organises their mind around the idea, “I simply cannot
cope,” or a depressed person with the self-schema, “I am just a bad person.” Beck argues CBT allows us insight into
key core beliefs that support or maintain mental health difficulties. This insight emerges from making many
observations of automatic thinking, feeling and behaviours over the course of therapy. A client and therapist may
consider the meaning of many observed thought-feeling-behaviour interactions, and what their commonality reveals
of the underlying organisation of information about the self, others and the world. Beck (1977) described three
categories of core beliefs associated with anxiety and depression: 1. Helpless Core Beliefs; 2. Unlovable Core Beliefs;
and, 3. Worthless Core Beliefs. Examples of each are provided in Table 1. Once identified, these Core Beliefs can
also be carefully considered for their validity and utility and when appropriate can be re-crafted into more balanced,
more helpful representations of the self, others and the world leading to lasting mental health benefits.
Table 1: Common core beliefs about the self associated with anxiety and depression
Helpless Core Beliefs
I am helpless
I am powerless
I am out of control
I am weak
I am vulnerable
I am needy
I am trapped
I am inadequate
I am ineffective
I am incompetent
I am a failure
I am disrespected
I am defective compared to others
I am not good enough (in terms of
my achievements)
Unlovable Core Beliefs
I am unlovable
I am unlikable
I am undesirable
I am unattractive
I am unwanted
I am uncared for
I am bad
I am unworthy
I am different
I am defective so others will not
love me
I am not good enough to be loved
by others
I am bound to be rejected
I am bound to be abandoned
I am bound to be alone
Worthless Core Beliefs
I am worthless
I am unacceptable
I am bad
I am a waste
I am immoral
I am dangerous
I am toxic
I am evil
I do not deserve to live
This process of engaging in CBT is represented in Figure 1 by the boxed arrow on the left side of the diagram which
characterises it as a method allowing people increased awareness and skill in understanding and shaping how their
mind automatically processes and responds to information that may result in, or maintain, mental health difficulty,
or may be used for recovery. As such, CBT promotes and requires meta-cognition, or thinking about thinking, which
may lead to behavioural and emotional change. Working with a client involves linked phases of psycho-education
about the CBT model, using the model to co-develop an ever-evolving CBT formulation of the client’s difficulties, and
the active application of the model by the client to his/her life to manage his/her current difficulties. The client
usually moves through related phases of monitoring his/her thinking and its relationship to his/her feelings and
behaviour, appraising the validity and utility of the thoughts observed, challenging those thoughts through the
generation of alternative thoughts and experimenting with the impact this has on behaviours and emotions.
Together clients and therapists also use their many observations of thought-feeling-behaviour examples to generate
an understanding of Level Two cognitive processes. Ultimately, the client uses his/her increased understanding of
all of the above to identify unhelpful core beliefs and how they support clinical anxiety or depression. With help
from his/her therapist, the client challenges dysfunctional core beliefs, replacing them with more evidenced and
balanced ones.
Relaxation skills, behavioural activation and more recently mindfulness skills are also routinely an effective part of
CBT interventions (Bandelow, Reitt, Röver, Michaelis, Görlich & Wedekind, 2015; Kabat-Zinn, 1982; Richards et al.,
2016; Segal, Williams, & Teasdale, 2002). My view is that, across all stress reduction or mindfulness based mental
health interventions, the common active mechanism through which mindfulness skills offer a helpful addition to CBT
is that they operate through cognitive processing mechanisms that can be understood within Beck’s Three Levels of
Cognitive Processing Framework.
Figure 1 illustrates this along the right-hand side of the diagram. Rather than an automatic thought-feeling-behaviour
processing of information, an alternative “mindful” response to stimuli is possible. Along this “mindful” route of
information processing, Level One represents surface level cognition and is concerned with attention. In this mode,
when responding mindfully to stimuli, we purposely pay attention to them. This
is difficult to do unless practiced as our attention tends to switch to other stimuli. In the diagram this switching is
represented as “distraction” which returns us to the left side of the model where information processing transfers
from a deliberate present moment focus of our awareness to automatic processing and its associated priming of
feelings and behaviours.
Consequently, mindfulness exercises frequently aim to help us notice this switching feature of our attention and
train or encourage us to return our attention to a current (non-automatic) mindful focus. In the mindful mode of
processing we consciously aim to combine our mindful awareness with a deliberately chosen attitude to produce a
desired outcome (Shapiro, Carlson, Astin, & Freedman, 2006). In Figure 1 it is represented as “+ consciously selected
attitude” to the right of mindful awareness in surface level processing.
We represent the specific intentions themselves as Level Two cognitive processes. During commonly used
Mindfulness Based Interventions we intentionally combine attitudes of (1) openness, (2) acceptance, (3) curiosity,
(4) detachment, (5) compassion, and (6) receptiveness to the observed content of our mind to produce an
intended cognitive outcome. These cognitive outcomes are (1) less rumination, (2) reperceiving, (3) de-automatisation,
and (4) de-centration. These are defined in Table 2. Different mindfulness skills can be used to
promote different attitudes and related benefits. For example, "Mindfulness of Objects" (using multiple senses to
pay attention in detail to familiar everyday objects) may be used to promote attitudes such as openness and
curiosity, while a "Leaves on a Stream" exercise (noticing the emergence and flow of thoughts that pass through
our mind while imagining them as leaves floating on a stream) may be helpful to promote the cognitive process of
de-automatisation.
Level Three represents our deeper cognitive organisation of information through underlying schema or Core Beliefs.
These are similarly open to conscious consideration and re-organisation as part of a Mindfulness Based Intervention.
Alternatively, our underlying schemas may re-organise as our mindful awareness and skills develop. The role of
Mindfulness Based Interventions is represented in Figure 1 by the boxed arrow on the right side of the diagram
which characterises it as a method allowing people increased awareness and skill in understanding and shaping how
their mind automatically processes and responds to information that may result in, or maintain, mental health
difficulty, or may be used for recovery. Mindfulness based interventions promote meta-cognition.
Table 2: Core concepts in understanding the potential mechanisms of change in mindfulness based interventions.

Rumination: Automatic recurring attention on thoughts and feelings of distress and their possible causes, circumstances or consequences without resolution.

Reperceiving: Deliberately focussed attention producing awareness of the present moment, intentionally combined with attitudes of curiosity, compassion, and open-heartedness, even in response to negative stimuli, that leads to positive outcomes through self-regulation, clarification of values, and cognitive, emotional, and behavioural flexibility.

De-automatisation: Deliberately focused attention producing awareness of the present moment, intentionally combined with acceptance to promote less habitual reaction, increased cognitive control, greater meta-cognitive insight, and fewer tendencies towards thought suppression and distortion.

De-centration: Deliberately focussed attention producing awareness of the present moment, intentionally combined with an open receptive attitude, resulting in increased awareness that currently perceived events are in essence temporary, alterable and subject to inevitable passing.
It is hoped that this outline of the theory and practice of CBT provides clarity about why it is so effective, but also
reveals the significant difficulties in making this model accessible to children. The primary developmental difficulty
for CBT with children is that in essence it is a metacognitive process. It requires us to think about our thinking; to
think about how our thinking affects our feelings and behaviour; to think about whether our thinking has a previously
unnoticed unhelpful negative bias and how this affects our feelings and behaviours; to think about alternative, more
evidenced or helpful ways of thinking and how these affect our feelings and behaviour; and ultimately to identify some
super-thoughts or Core Beliefs that are an organising centre for our difficulties, which in turn we need to appraise
and challenge. As such, CBT is developmentally challenging for children whose cognitive development may not
include a ready capacity for metacognition.
Our team is involved in an on-going research project to design computer games and apps that make CBT concepts
more accessible to those who may struggle to understand them, including children and adults with an intellectual
disability. In doing so we draw on ideas from clinical psychology, developmental psychology, and learning theory.
Mental health professionals can access online training and our computer games on a not-for-profit basis through
our website www.PeskyGnats.com. One of our main aims in combining technology with CBT is to use it to address
the developmental unsuitability of CBT and make the model more accessible and understandable to children. In the
next section of this chapter I will describe how we have used ideas from developmental psychology and social
learning theory to translate the complex meta-cognitively oriented ideas of CBT into something much more
accessible to children in a computer game called Pesky gNATs!
INTEGRATING DEVELOPMENTAL PSYCHOLOGY AND LEARNING THEORY INTO THE DELIVERY OF CBT
FOR YOUNG PEOPLE
Piaget's theory of cognitive development gives us guidance on the age we would expect children to begin to engage
in the kind of abstract meta-cognitive thinking associated with the successful application of CBT (Flavell, 1963; Piaget,
1970; Piaget & Inhelder, 1969). Piaget outlined a theory of child cognitive development where the concept of an
organising schema is central. Piaget describes children as engaged in a process where they assimilate knowledge
into existing schemas and accommodate to new knowledge by altering schemas as they progress through
qualitatively different stages of thinking. Piaget argues that children's cognitive development begins in a sensorimotor stage (0-2 years) where they learn about the physical properties of the world by interacting with it. This is
followed by a pre-operational stage (2-7 years) where children begin to develop an internal mental representation
of the world. This stage is associated with the development of language, symbolic representation, imaginative play,
and theory of mind. Although children at this stage develop an understanding of others' minds, they struggle to see
the world from others’ perspectives, are magical in their thinking, and find it hard to focus on more than one
dimension of a problem at a time. The third level in Piaget's model is the concrete operational stage (7-12 years).
Here children develop the use of logic to solve concrete problems and better grasp the perspectives of others.
Finally, Piaget describes a formal operational stage (12 years onwards) where children's thinking becomes more
abstract and hypothetical, although not all children achieve this (Bridges & Harnish, 2015).
If we were to apply Piaget's model to guide us on the correct time to offer CBT to children we might conclude that
it is during the formal operational period which for most young people is from roughly 12 years onwards. The
abstract CBT tasks of noticing thoughts, considering their evidence, generating alternative attributions,
experimenting with new ways of thinking, and identifying and challenging core beliefs fit well with this phase of the
Piagetian model.
Vygotsky (1978) suggests we should usefully distinguish between the cognitive accomplishments of children when
they think independently from when they think with assistance from adults or more advanced peers. He argues that
we can assist children to be more cognitively accomplished when we provide them with educational and teaching
support that takes account of their current level of skill and proximity to more advanced types of thinking. He refers
to this as scaffolding the child's educational experience within their Zone of Proximal Development (Daniels, 2017).
(See Bierer chapter for a more detailed discussion of scaffolding.) Applying Vygotsky's theory opens the possibility
that we could consider whether children younger than 12 might benefit from a CBT approach if we appropriately
scaffold it within their zone of proximal development. This could be achieved through a concrete description of CBT
concepts in a psycho-educational context where the child is supported in thinking through the application of CBT to
their everyday life and current difficulties by a cognitively more advanced adult, such as a suitably qualified mental
health professional.
The work of developmental psychologists such as Donaldson (1978) and Dunn (1988) argues that Piaget's model was
broadly correct but under-recognises the important role of social context in supporting more sophisticated thinking
among children. That is, when children are presented with cognitive problems in a meaningful social context, they are
more sophisticated in their thinking than Piaget's model allows. They support this view through observation and
experimental work (Donaldson, 1978; Dunn, 1988). This suggests that a further part of the scaffolding that makes
the concepts of CBT more accessible to younger children is to introduce them through a medium that presents them
in a naturalistic social context.
Bandura (1971; 1977) also offers us an understanding of how children learn in a social context. He theorises that
while many new patterns of behaviour are learnt from operant conditioning, they can also be learnt through
observing the behaviour of others. Learning through observation is mediated in Bandura's theory by cognitive
processes. That is, we observe the behaviour of others and its consequences and we think about these observations
and use our thinking to guide our behaviour. As Bandura puts it, "Man's cognitive skills thus provide him with the
capability for both insightful and foresightful behavior" (1971, p.3). Bandura describes four requirements of
observational learning: 1. Attention to essential features of the model's behaviour; 2. Retention in memory of the
model's behaviour so it can be replicated in the absence of the model; 3. Motoric reproduction of the behaviour
symbolically represented in memory; and, 4. Reinforcement and motivation for the behaviour’s retention.
Our approach in the development of Pesky gNATs was to take the CBT model developed by Beck and blend it with these
ideas from Piaget, Vygotsky, Donaldson, Dunn and Bandura within a computer game to make it accessible to children
7 years and older. We have used the technology of computer games and apps to fully integrate these ideas with
each other and make them accessible to young people and their families. The next section describes the format and
content of Pesky gNATs.
Figure 2: The integration of CBT with developmental psychology and learning theory in the Pesky gNATs computer game and app
USING TECHNOLOGY TO TEACH CBT TO CHILDREN
Pesky gNATs is a 3-D computer game that teaches young people CBT concepts and skills (see Figure 3; O' Reilly &
Coyle, 2015a). The gNATs of the game’s title is a play on the CBT concept of NATs (Negative Automatic Thoughts).
The game is played by young people with clinically significant anxiety or low mood during traditional therapy sessions
with a suitably qualified mental health professional. The game is supplemented by a Pesky gNATs smart-phone app
that supports a young person’s transfer of what they learn in therapy to his/her home, school and community life
(see Figure 3; O' Reilly & Coyle, 2015b). Our aim in the development of Pesky gNATs was to combine the highest
quality evidence-based psychological content for mental health difficulties experienced by young people, with the
highest quality purposely designed gaming technology, delivered on a sustainable not-for-profit basis. Pesky gNATs
and seven hours of online training are available to mental health professionals on a not-for-profit basis through our
website www.PeskygNATs.com. The training videos provide professionals with a description of the concepts of the
programme and a walk-through of the full content of the game and app, illustrating how they are typically used
during therapy. Our intention with Pesky gNATs is to assist mental health professionals in their delivery of a
genuinely “cognitive” CBT intervention filtered through the ideas of developmental psychology and learning theory,
packaged within a computer game played within the supportive context of a therapeutic relationship.
Figure 3: The Pesky gNATs computer game and Smartphone App
Drawing on Piaget's theory of childhood cognitive development, a key aim in our design was to make CBT ideas more
concrete. We attempted to do this in two ways. Firstly, we use concrete ratings of the child’s current
psychological functioning throughout the game. Pesky gNATs begins with the completion of a standardised rating
scale of anxiety and low mood (The Revised Children’s Anxiety and Depression Scale; Chorpita, Yim, Moffitt,
Umemoto, & Francis, 2000). Once completed it is scored, profiled and displayed instantly to the young person. This
allows the young client and his/her therapist to see how his/her anxiety or low mood compares to other similarly
aged young people. This information is then used to plan intervention goals. Each level of the game also allows
young people to complete a four-item rating scale of their everyday functioning in the areas of personal, at home,
at school and overall (The Children’s Outcome Rating Scale; Miller, Duncan, Brown, Sparks, & Claud, 2003). Week-by-week, this allows young clients and their therapists to monitor how their difficulties impact these important
aspects of life, monitor progress and plan intervention goals for each session. Every level concludes with a similar
four-item rating scale that allows the young person to give feedback on each session in terms of the quality of the
therapeutic relationship, goals, approach and overall rating of the session (Children’s Session Rating Scale, Miller &
Duncan, 2000). This informs the therapist about the young person’s engagement with the intervention and allows
both to judge together if they are on-track or need to adjust their work to better engage the young person in the
programme.
Secondly, we use the seven levels of Pesky gNATs to unfold a concrete metaphor that makes the ideas of CBT more
accessible (see Table 3). The game begins by introducing children in level one to the idea that thoughts, feelings
and behaviours go together. They learn that some thoughts are unhelpfully negative. The NATs (Negative Automatic
Thoughts) of CBT are equated to gNATs, or little flies that can sting us and affect our feelings and behaviour without us
noticing. Game levels two and three proceed by teaching young people CBT skills embedded in this unfolding
concrete metaphor. "Cognitive Monitoring" in the game becomes learning how to set a gNAT trap. Different types
of negative thinking are explained as different species of gNAT, which the player explores through the gNAT gallery. In
game level four, "Cognitive Restructuring" becomes learning how to swat gNATs using four gNAT swatting questions.
In level five, identifying underlying core beliefs is pursued by hunting all of the trapped gNATs back to their Hive.
The young person learns that "Unhelpful Hives" have a common idea expressed in different ways by the gNATs who
come from there. They review all of the thoughts recorded up to that point in the programme and return them to
the Unhelpful Hive they come from. Young people are presented with examples of typical Unhelpful Hives (core
beliefs) to assist them. In game level six, challenging unhelpful Core Beliefs is presented as a "Hive Splatting" exercise
where the young person with help from his/her therapist carefully considers the evidence for and against the
identified core beliefs. Developing a more balanced understanding of self, others and the world is achieved by
building a more balanced and realistic "Bee Yourself Hive." For example, “I worry more than I need to because I care
about people. I have many skills and supports to help me. I can cope very well when I need to.” The final level of
the game helps young people develop relapse prevention skills by building a six-part Healthy Life Plan (See figure 4).
Table 3: CBT Content of Pesky gNATs (O’ Reilly & Coyle, 2015a).
Game Level 1: Thoughts Feelings & Behaviours
Session concept: Thinking-Feeling-Behaviour (TFB) go together. Young person with assistance from therapist creates a positive and a negative TFB from the last 24 hours. Between session task to record one positive and one negative TFB.

Game Level 2: Cognitive Monitoring
Session concept: Some thoughts are unhelpfully negative and we usually do not notice them. Negative Automatic Thoughts (NATs) animated as gNATs or little flies; young person reviews 6 different types (species) of gNATs and learns how to record them using a gNAT trap. Young person reviews their negative TFB to see if it contains a gNAT. Between session task to trap some gNATs focusing on times of anxiety or low mood.

Game Level 3: Cognitive Monitoring
Session concept: Young person is introduced to 5 more common species of gNAT. They review their gNAT trapping to-date. Young person responds to their gNATs with some PATs (positive automatic thoughts). Between session task to trap some more gNATs focusing on times of anxiety or low mood.

Game Level 4: Cognitive Restructuring
Session concept: Introduction of cognitive restructuring presented as gNAT swatting. Young person applies four gNAT swatting questions to (i) consider evidence for and against their thoughts, (ii) consider alternative ways of looking at things, (iii) brainstorm alternative plans for similar situations and (iv) pick and test a new plan. Between session task to trap and swat gNATs focusing on times of anxiety or low mood.

Game Level 5: Negative Core Belief Identification
Session concept: Introduction of negative Core Beliefs and review of common examples. Young person gathers up all of the gNATs they trapped so far and hunts them back to a suitable Hive (Core Belief). Between session task to hunt gNATs back to a Hive.

Game Level 6: Negative Core Belief Re-appraisal
Session concept: Socratic questioning of identified core belief. Young person considers evidence for and against their core belief in the areas of self, family, school, friendships and other. Young person decides if Core Belief is true or not (Hive splatting) and builds a positive belief called a Bee Urself Hive. Continued as a between session task.

Game Level 7: Relapse Prevention
Session concept: Relapse prevention. The young person identifies signs of relapse and plans their response. The young person also develops a healthy life plan setting positive goals in the areas of 1. Fun; 2. Personal Goals; 3. Having a Purpose; 4. Emotional and Physical Health; 5. Being Me; 6. Having People in My Life.
Figure 4: An example of a Healthy Life Plan developed in level 7 of Pesky gNATs.
In developing Pesky gNATs we also wished to give therapists and young people the option of incorporating
behavioural activation, relaxation and mindfulness skills into the intervention. We did this by introducing a game
character, called Ben the Beach Dude, whom young people can choose to visit at the end of each level of the game.
This character gives young people the option to choose an activity scheduling, relaxation or mindfulness skill to learn
and practice. These skills are presented as an open-format choice, with all options available in every level, so the young person
and their therapist can select a skill appropriate to the young person's needs at that moment in time and
stage of the intervention. The content of the Beach Dude character is described in Table 4.
Table 4: Relaxation, Mindfulness & Activity Scheduling Content of Pesky gNATs (O’ Reilly & Coyle, 2015a, 2015b).
Awareness of your body
1. Paced breathing: A self-regulatory breathing exercise. The player tries different paces for breathing in, holding their breath, and breathing out. They reflect on their physical experience of pacing and changing their breathing. Aims to increase awareness of breathing and physical self-regulation.
2. Relaxation: A progressive muscle relaxation exercise. The player works their way around their body, tensing and releasing muscles and noticing their experience as they do so. They also engage in some relaxation visualization. The aim of this skill is also self-regulation and anxiety/stress relief.
3. Body Scan: A mindfulness exercise. The player deliberately and progressively moves their attention from the tips of their toes to the top of their head. The aim is to increase physical awareness.
Awareness of your mind
4. Leaves on a stream: A mindfulness exercise. The player imagines they are sitting on the bank of a stream watching leaves flow past. They begin to notice their own thoughts, imagine placing each thought on a leaf, allow it to flow on, and wait to observe the next thought that comes along. The aim is to notice and refocus shifting attention, and to increase awareness of thoughts and awareness that they are passing events.
5. What's on your mind: A mindfulness exercise. This skill builds on the awareness of thoughts and their passing nature introduced in Leaves on a stream. The player watches a visual display that lists thoughts, imaginings, judgments, memories, feelings, and other things. As they notice one of these things on their mind they acknowledge their awareness of it and allow it to pass. The aim is to notice and refocus shifting attention, and to increase awareness of the content of the mind and its transitory nature.
Awareness of your world
6. Mindfulness of sounds: A mindfulness exercise. The player listens to the sounds they can hear for one minute. They are invited to notice the difference between the sound they hear and the label/judgment the mind makes of that sound: for example, noticing that they heard the sound "tweet-tweet" rather than reporting the label the mind offers ("birdsong"). The aim is to increase awareness of the world, to notice and refocus shifting attention, and to observe non-judgmentally.
7. Visual illusion: A cognitive perspective exercise. The player views a single image that some people see as the head of a Native American warrior while others see it as the full body of a person who lives in the Arctic. The aim of the exercise is to become aware that how we see something depends on how we observe it, and that different people can observe different things while looking at exactly the same picture.
8. Mindfulness of an object: A mindfulness exercise. The player uses all of their physical senses to observe something everyday and familiar. The aim is to pause and observe in detail something we usually do not take time to pay attention to.
Activity Scheduling
9. Activity scheduling: An activity scheduling exercise. The player plans activities for the week ahead and reviews their completion by rating each activity for its sense of mastery and pleasure.
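The mastery-and-pleasure rating scheme in the activity scheduling skill can be sketched in code. The class, field names, and 0-10 rating scale below are illustrative assumptions, not part of the actual Pesky gNATs App:

```python
# Illustrative sketch (names and the 0-10 scale are assumptions): plan
# activities for the week ahead, then rate each completed activity for
# its sense of mastery and pleasure.

class Activity:
    def __init__(self, name, day):
        self.name = name
        self.day = day
        self.mastery = None   # rated after completion, 0-10
        self.pleasure = None  # rated after completion, 0-10

    def review(self, mastery, pleasure):
        """Record post-completion ratings, rejecting out-of-range values."""
        if not (0 <= mastery <= 10 and 0 <= pleasure <= 10):
            raise ValueError("ratings must be between 0 and 10")
        self.mastery, self.pleasure = mastery, pleasure

week = [Activity("Football practice", "Monday"),
        Activity("Call a friend", "Wednesday")]
week[0].review(mastery=7, pleasure=8)
completed = [a for a in week if a.mastery is not None]
print(len(completed))  # 1
```

Reviewing the week's ratings with a therapist is then a matter of reading back the completed records.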
The design of our programme is also influenced by the post-Piagetian research that emphasises the supportive
nature of social contexts for children's thinking. Computer games afford a wonderful opportunity to support the
understanding of CBT ideas within a social story. Pesky gNATs uses the following social narrative. The player selects
a friendly, young person avatar to represent them in the 3-D gameworld where they travel to gNATs Island, a distant
tropical world where gNATs come from. There they meet a world-famous explorer called David gNATtenborough who
thinks gNATs are so extraordinary that he set up the world's first gNAT lab in order to study them. He invites the
player to help him and his team of researchers. The young person meets the different members of David
gNATtenborough’s team across the various game levels of the programme as they learn how to trap gNATs, identify
different species of gNAT, swat gNATs, hunt them back to their Hives, and splat Unhelpful Hives. In this way, the
computer game world and its story provide a supportive narrative backdrop for the young person’s grasp of complex
CBT concepts.
We used Vygotsky’s concept of a scaffolded Zone of Proximal Development to further help young people. For this
reason, we have designed a game that is not played independently or online. Instead, the game is played by a young
person with clinically significant anxiety or depression with a suitably qualified mental health professional during
regular therapy sessions. Each level of the game is designed to take about 50 minutes to play, which is the typical
length of therapy sessions. Young people play the game with their therapist to allow them to benefit from their
therapist’s understanding of CBT concepts and their application. The therapist discusses and further explains the
ideas presented in the intervention and helps the client think through and apply them to his/her current difficulties.
For children aged 7-9, we recommend the game is played by a therapist with a young person and a parent or caregiver. For those aged 9 and older, it is increasingly suitable for the therapist to play with only the young person. This
blending of a game with discussion between a young person (and parent) and a therapist combines integrity of
programme delivery with appropriate individual flexibility. That is, the game characters are programmed to deliver
the concepts of the programme as we intended, while the discussion of these concepts by young people with their
therapist allows their modification and application with the personal nuance that every individual’s life will require.
Figure 5: The social learning theory structure repeated in each Pesky gNATs game level
We used Bandura’s (1977) model to inform the design of each game level. The same structure is used in every game
level to offer a social learning theory informed presentation of content (see Figure 5). In every level of the game a
character introduces a single CBT concept that is the main idea of that level. The concept is then modelled for the
young person with reference to the experience of a fictitious previous player who illustrates how the idea applied
to him/her. The game characters then ask the young person to consider how this idea applies to his/her own
difficulties. The young person must discuss this with the therapist he/she is working with and together they figure
out its application for the young person. The young person then explains to the game characters how it applies to
him/her. The game characters reply by inviting the young person to put the idea into practice until their next
appointment using the Pesky gNATs smartphone app or hard copy workbook. The smartphone app is described
below and is intended to aid Bandura’s conditions for social learning. That is, it helps young people to attend to the
key features of the behaviour to be modelled, it aids recall of the content, it assists in converting his/her
understanding of the model into behavioural action, and it rewards the use of the model by unlocking fun games
within the app when the young person completes between session therapy tasks.
The Pesky gNATs App is available for free in iOS and Android formats and is downloaded to a smartphone or tablet
device by a young person (O' Reilly & Coyle, 2015b). The content requires an unlock code, which is provided
to the young person by the therapist. It is designed to support young people to complete between session tasks for
every level of the game. The young person uses it to (1) notice his/her thinking-feeling-and-behaviour going
together; (2) to trap gNATs and identify the species of gNAT recorded; (3) to swat gNATs; (4) to hunt gNATs back to
their Hives; (5) to splat Hives; (6) to build Bee Yourself Hives; and (7) to develop and apply a healthy life plan. These
activities are the between session therapy tasks for young people as they complete the programme. They allow
young people to apply and practice the CBT skills they are learning in their own lives and to record the outcomes, which they
can share with their therapist during sessions. The full content of the Pesky gNATs App is outlined in Figure 6.
Figure 6: The content of the Pesky gNATs App designed to support transfer of CBT skills and knowledge to home, school and community life.
EVIDENCE
We are engaged in the on-going evaluation of the technology based programmes we develop. This began with pilot
studies that demonstrate that children with clinically significant anxiety or low mood attending mental health
services, or working in educational settings with school psychologists, benefit from our programmes (Chapman,
Loades, O' Reilly, Coyle, Patterson, & Salkovskis, 2016; O'Dwyer-O'Brien, 2012; Ryan, 2013). We also conducted
experimental studies to establish that children learn mindfulness and relaxation skills by using our technology (O'
Reilly, Coyle, & Tunney, 2016; Tunney, Cooney, Coyle, & O' Reilly, 2017). Most recently we reported a randomised
controlled trial of a version of Pesky gNATs for adults with an Intellectual Disability who have clinically significant
anxiety, demonstrating post-intervention clinical improvement that was maintained and increased at three-month follow-up
compared to psychiatric treatment as usual (Cooney, Jackman, Coyle, & O' Reilly, 2017).
CONCLUSIONS
This chapter outlines our approach to using the educational potential of technology to teach young people about
their mental health and to make the approach of CBT more accessible to them. We have described how CBT is
effective in the treatment of the most common mental health problems such as anxiety and depression. We
identified that the successful application of CBT requires metacognition. This makes it developmentally challenging
for children. We described the ideas from developmental and clinical psychology that we feel help young people
bridge their position from earlier more concrete stages of cognitive development to the more advanced abstract
thinking required for the successful application of CBT. We have used the technology of computer games and
smartphone apps to integrate CBT with these ideas and make them available to young people and mental health
professionals on a not-for-profit basis. We plan to continue to develop this integration of the instructional potential
of computer games and apps with evidence-based mental health interventions for a range of disorders and
therapeutic approaches.
REFERENCES
Bandelow, B., Reitt, M., Röver, C., Michaelis, S., Görlich, Y., & Wedekind, D. (2015). Efficacy of treatments for anxiety
disorders: A meta-analysis. International Clinical Psychopharmacology, 30 (4), 183-192.
Bandura, A., (1971). Social Learning Theory. New York: General Learning Press.
Bandura, A. (1977). Social Learning Theory. Oxford: Prentice-Hall.
Beck, A. (1964). Thinking and depression II: Theory and therapy. Archives of General Psychiatry, 10, 561-571.
Beck, A. T., & Haigh, E. A. (2014). Advances in cognitive theory and therapy: The generic cognitive model. Annual
Review of Clinical Psychology, 10, 1-24.
Beck, J. (2011). Cognitive Therapy: Basics and Beyond; 2nd Edition. New York: Guilford Press.
Bower, P. & Gilbody, S. (2005). Stepped care in psychological therapies: Access, effectiveness and efficiency.
Narrative literature review. British Journal of Psychiatry, 186, 11–17.
Bridges, K. R., & Harnish, R. J. (2015). Gender differences in formal thinking: Their impact on right-wing
authoritarianism and religious fundamentalism. Psychology, 6, 1676-1684.
Chapman, R., Loades, M., O' Reilly, G., Coyle, D., Patterson, M., & Salkovskis, P. (2016). Pesky gNATs: investigating
the feasibility of a novel computerized CBT intervention for adolescents with anxiety and/or depression in
a Tier 3 CAMHS setting. The Cognitive Behaviour Therapist, 9, 1-22.
Chorpita, B.F., Yim L., Moffitt, C., Umemoto, L.A., Francis, S.E., (2000). Assessment of symptoms of DSM-IV anxiety
and depression in children: a revised child anxiety and depression scale. Behaviour Research and Therapy
38, 835–855.
Clark, D.M. (2011). Implementing NICE guidelines for the psychological treatment of depression and anxiety: the
IAPT experience. International Review of Psychiatry, 23, 375-84.
Cooney, P., Jackman, C., Coyle, D., & O' Reilly, G. (2017). Computerised cognitive behavioural therapy for adults with
intellectual disability: randomised controlled trial. British Journal of Psychiatry, 211 (2), 95-102.
Daniels, H. (2017). Introduction to the Third Edition. In H. Daniels (Ed.) Introduction to Vygotsky (3rd Edition).
London, Routledge.
Donaldson, M. (1978). Children’s Minds. London: Fontana/Croom Helm.
Dozois, D. & Beck, A. (2011) Cognitive therapy. In J. D. Herbert, & E. M. Forman (Eds.). Acceptance and Mindfulness
in Cognitive Behaviour Therapy. Understanding and Applying the New Therapies. New York: Wiley.
Dunn, Judy (1988). The Beginnings of Social Understanding. Cambridge, MA: Harvard University Press.
Flavell, J.H. (1963). The Developmental Psychology of Jean Piaget. New York: Van Nostrand.
Gilbert, P. (2010). Compassion Focused Therapy. London: Routledge.
Gilbert, P. (2017) Compassion. Concepts, Research and Applications. London: Routledge
Hayes, S. C., Strosahl, K. D., & Wilson, K. G. (1999). Acceptance and Commitment Therapy. New York: Guilford Press.
Hofmann, S.G., Asnaani, A., Vonk, I.J.J., Sawyer, A.T., & Fang, A., (2012). The efficacy of cognitive behavioral therapy:
A review of meta- analyses. Cognitive Therapy Research, 36 (5): 427–440.
Kabat-Zinn, J. (1982). An outpatient program in behavioral medicine for chronic pain patients based on the practice
of mindfulness meditation: Theoretical considerations and preliminary results. General Hospital Psychiatry,
4 (1), 33-47.
Miller, S.D., Duncan, B.L., (2000). The Outcome and Session Rating Scales: Administration and Scoring Manual.
Chicago: Institute for the Study of Therapeutic Change.
Miller, S.D., Duncan, B.L., Brown, J., Sparks, J.A., Claud, D.A., (2003). The outcome rating scale: a preliminary study
of the reliability, validity, and feasibility of a brief visual analog measure. Journal of Brief Therapy 2, 91–100.
O'Dwyer-O'Brien, A (2012) Computer Assisted CBT for Children. Unpublished Thesis. Dublin: University College
Dublin.
O' Reilly G. & Coyle, D. (2015a) Pesky gNATs: A Cognitive Behaviour Therapy Computer Game for Young People with
Anxiety or Low Mood. Bristol: Handaxe CIC.
O' Reilly G. & Coyle, D. (2015b) The Pesky gNATs App: A Smartphone App to Aid Young People with Anxiety or Low
Mood in the Completion of between Session CBT Tasks. Bristol: Handaxe CIC.
O' Reilly, G. & Coyle, D. (2015c) The Mindful Gnats App: A Mindfulness and Relaxation Skills App for Young People.
Bristol: Handaxe CIC.
O'Reilly, G., Coyle, D., & Tunney, C (2016). Even Buddhist Monks Use a Gong: A Mindfulness Skills Programme for
Young People Delivered through the Mindful Gnats Computer Game and App. International Journal of
Game-Based Learning, 6 (4), 38-50
O' Reilly G., Tunney, C., & Coyle, D. (2016) Mindful Gnats: A Mindfulness and Relaxation Skills Training Computer
Game for Young People. Bristol: Handaxe CIC.
Piaget, J. (1970). Piaget's theory. In P.H. Mussen (Ed), Carmichael's Manual of Child Psychology Volume 1. New York:
Wiley.
Piaget, J., & Inhelder, B., (1969). The Psychology of the Child. New York: Basic Books.
Richards, D. A., Ekers, D., McMillan, D., Taylor, R. S., et al., (2016). Cost and outcome of behavioural activation versus
cognitive behavioural therapy for depression (COBRA): A randomised, controlled, non-inferiority trial. The
Lancet, 388 (10047), 871-880.
Ryan, A., (2013). A Comparison Between the Effectiveness of a Computer Assisted CBT Programme and Treatment
As Usual for Irish Adolescents with Internalising Difficulties. Unpublished Thesis. Dublin: University College
Dublin.
Segal, Z. V., Williams, J. M. G., & Teasdale, J. D. (2002). Mindfulness-Based Cognitive Therapy for Depression: A New
Approach to Preventing Relapse. New York: Guilford.
Shapiro, S. L., Carlson, L. E., Astin, J. A., & Freedman, B. (2006). Mechanisms of mindfulness. Journal of Clinical
Psychology, 62 (3), 373-386.
Stallard, P (2002). Think Good Feel Good: A Cognitive Behavioural Therapy Workbook for Children and Young People.
New York: Wiley
Tunney, C., Cooney, P., Coyle, D. & O' Reilly, G. (2017). Comparing Young People's Experience of Technology-Delivered versus Face-to-Face Mindfulness and Relaxation. British Journal of Psychiatry, 210 (4), 284-289.
Twomey, C & O' Reilly, G (2017). Effectiveness of a freely available computerised CBT programme (MoodGYM) for
depression: A meta-analysis. Australian and New Zealand Journal of Psychiatry, 51 (3):260-269.
Twomey, C., O' Reilly, G., & Byrne, M. (2013). Computerised cognitive behavioural therapy: helping Ireland log
on. Irish Journal of Psychological Medicine, 30 (1), 29-56.
Twomey, C., O' Reilly, G., & Meyer, B. (2017) Effectiveness of an individually-tailored computerised CBT programme
(Deprexis) for depression: a meta-analysis. Psychiatry Research, 256, 371-377.
Vygotsky, L.S., (1978). Mind in Society: The development of Higher Psychological Processes. Edited and Translated
by M. Cole, v. John-Steiner, S. Scribner and E. Souberman. Cambridge, MA: Harvard University Press.
CHAPTER
24
ONLINE ACTIVITIES FOR TEACHING
STUDENTS ABOUT TECHNOLOGY,
DISTRACTION, AND LEARNING
MICHELLE D. MILLER
AND
JOHN J. DOHERTY
NORTHERN ARIZONA UNIVERSITY
INTRODUCTION
Both inside and outside of academia, multitasking with technology has been a concern among faculty, students, and
the general public (Carr, 2010). Divided attention has become a common feature of contemporary life, with all kinds
of tempting media just a click away even as we attempt to carry out our everyday activities—such as working,
reading, driving, and learning. The issue is especially concerning for college teaching, with numerous faculty raising
concerns about chronically distracted students and their inability to concentrate, which is a prerequisite for learning
at the college level (see, e.g., Lang, 2017; Miller, 2014, 2015; Straumshein, 2015, and the chapter by Hall and Lineweaver in this
eBook). Faculty are right to be concerned, given the array of studies reporting negative impacts on academic
performance associated with using devices for non-educational purposes in class (Conard & Marsh, 2014; Ellis,
Daniels, & Jauregui, 2010; Junco, 2012; Junco & Cotten, 2012; Wood et al., 2012).
Digital distraction among college students is a problem that psychology is well positioned to address. We psychology
teachers do, in fact, address it; typically, attentional limitations are explicitly covered in the psychology curriculum,
including the introductory course as well as specialized courses in cognition. However, even these well-placed efforts
may not reach enough students, or reach them soon enough. Specialized cognitive courses, the ones with the most
content directly relevant to attention, attract only a narrow segment of the student population and are generally
geared to students who are already a year or more into their college careers, after the crucial first semester or two
when the academic experience matters most for persistence and success (Tinto, 2012). Introductory courses do engage
a much broader swath of students, many of whom are first-year students, but the content on attention and
distraction is usually folded into a more general chapter on consciousness, which is itself only one chapter among
12 or more covered within the course (see, e.g., Myers & Dewall, 2016). Regardless of the level of the course,
another concern is that the research-oriented approach that is appropriate for a college psychology course may not
be engaging to a broader audience of college students, and might emphasize theory or research findings more than
the practical goals of changing attitudes and behaviors relevant to everyday life.
Taking such a practical approach is important, because simply disseminating information, even very convincing and
compelling information, is insufficient for producing behavior change. It is a near-universal truth that we can know
all the reasons for making a given choice and still find ourselves struggling to make it. The literature on intentional
behavior change (DiClemente, 1993; Norcross, 2012; Prochaska, DiClemente, & Norcross, 1992) illustrates this gap
between knowledge and behavior, showing how behavior change is a complex process involving multiple stages,
from pre-contemplation to preparation to long-term maintenance, and one in which relapses are likely even among
motivated individuals. In short, producing substantive change in a problematic behavior such as using distracting
technology in class takes much more than simply informing people about a body of research.
Another reason why reading or hearing a lecture about distraction may not be effective is that it may not be
memorable. By contrast, multimedia and interactive demonstrations can illustrate psychological principles in a vivid,
compelling, and thus lasting way. Later in this chapter, we will offer specific examples from the present project, but
in general, by interactive and multimedia demonstrations we mean materials that are designed to induce and
thereby illustrate psychological phenomena, often by incorporating a task or challenge or eliciting responses to
perceptual stimuli. For example, there are a number of freely available online demonstrations of the McGurk effect,
an illusion in which seeing lip movements alters perception of ambiguous speech sounds; one can be found here:
https://www.youtube.com/watch?v=PWGeUztTkRA (Mitton & Aviner, 2011). The McGurk effect, which is important
for understanding how multiple sources of information combine during speech perception, is difficult to explain with
text alone, but easy to illustrate with the aid of a compelling multimedia demonstration in which the viewer/listener
actually experiences the effect. Other demonstrations take the approach of having the participant input responses,
again to give a subjective sense of what a phenomenon or experimental paradigm is like and to build conceptual
understanding. Examples include this online demonstration of the lexical decision paradigm,
http://www.psytoolkit.org/experiment-library/experiment_ldt.html, or online versions of the implicit association
test (IAT) available through Harvard University’s Project Implicit,
https://implicit.harvard.edu/implicit/selectatest.html.
Engaging with these kinds of materials can be broadly categorized as a form of active learning. Taking this kind of
active-learning approach is likely to be more memorable, and thus more effective, than simply informing students
about facts drawn from psychology research (see Benjamin, 1991, for contrasts between active learning and
traditional lecture approaches in psychology courses). Fortunately, there are many multimedia demonstrations such
as those described above readily available online. However, these online resources – especially those produced
primarily for entertainment – have to be chosen cautiously, as they may get the underlying science wrong in ways
that are not immediately apparent to the non-expert viewer. One YouTube video, for example, presents viewers
with a perception challenge involving ambiguous figures and photographs, claiming that “the faster you can see the
images and switch between them, the quicker your brain works,” which is not supported by evidence
(https://www.youtube.com/watch?v=CYD8zRDaE1I; Mind Oddities, 2017). Another purports to type viewers’
personalities according to behaviors such as how they carry a handbag or whether they leave dirty dishes overnight
(https://www.youtube.com/watch?v=wv3X5uHGKKs; BRIGHT SIDE, 2017). Similarly, numerous online versions of
the Myers-Briggs personality test have proliferated, despite concerns about the unscientific nature of the Myers-Briggs framework (Grant, 2013; McCrae & Costa, 1989). In sum, freely available online multimedia has great potential
to promote understanding of psychological phenomena, but given the substantial variability in quality, it works best
when selected and contextualized by an expert.
In the present project, our goal was to promote understanding of the relationship between attention, memory and
learning, as well as elicit reflection and planning about how students can manage digital distraction, in a fully online
format emphasizing interactive multimedia demonstrations and active engagement. To this end, we developed the
Attention Matters! module, which is now in use at our institution, Northern Arizona University, as an educational
resource that our students can complete free of charge. In designing Attention Matters!, we strove to incorporate
memorable, engaging demonstrations of psychological phenomena, coupled with brief, informal content
presentation, used to springboard discussion and reflection.
DESIGN FEATURES OF ATTENTION MATTERS!
As the module came together it was clear that we needed a high degree of control over students’ interaction
with the content. Therefore, a major design feature was to selectively release content to participants based on what
they had completed. This led to the decision to create three units for students to complete, with progression
through the module requiring the completion of all activities in each unit.
The first task in the module is a pre-assessment (discussed later) that, once completed, allows access to the first
unit, What Do You Know about Attention? The unit introduces students to core concepts in attention and distraction
through a series of activities that draw on freely available YouTube videos and web sites that we selected. (Appendix
A gives a list of links to these resources and to the Attention Matters! faculty guide.) The unit progressively releases
content based on students’ self-identifying that they have watched the video or visited the web site and completed
the activity. We discuss some of these resources later. As students complete the activities, original explanatory
content is also released that discusses some of the science behind what they have done. This in turn releases
a reflective asynchronous discussion moderated by a cognitive psychologist (the first author). They then complete
the unit by taking a brief quiz with built-in feedback. Completion of the quiz opens the next unit, What Happens
When We Overload Attention?, which follows the same format. The last unit, What’s Your Plan?, is somewhat
different as it focuses on asking students to develop a plan for managing attention. Students are allowed multiple
attempts to complete the end of unit quizzes. Completion of the entire module earns a badge and a certificate of
completion.
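The gated-progression design described above can be sketched as a small state machine. The unit names below are taken from the chapter, but the class and its methods are purely illustrative assumptions, not the actual Attention Matters! implementation:

```python
# Illustrative sketch of selective content release: each unit unlocks only
# after every preceding unit's activities are complete, and finishing the
# whole module earns the badge.

UNITS = [
    "Pre-assessment",
    "What Do You Know about Attention?",
    "What Happens When We Overload Attention?",
    "What's Your Plan?",
]

class ModuleProgress:
    def __init__(self):
        # Track which units the student has completed.
        self.completed = set()

    def unlocked(self, unit):
        """A unit is unlocked once every earlier unit is complete."""
        idx = UNITS.index(unit)
        return all(UNITS[i] in self.completed for i in range(idx))

    def complete(self, unit):
        if not self.unlocked(unit):
            raise ValueError(f"{unit!r} is still locked")
        self.completed.add(unit)

    def earned_badge(self):
        # Completing the entire module earns the badge and certificate.
        return len(self.completed) == len(UNITS)

progress = ModuleProgress()
progress.complete("Pre-assessment")            # opens the first unit
print(progress.unlocked("What's Your Plan?"))  # False: earlier units incomplete
```

In a learning management system the same gating is typically configured with conditional-release rules rather than custom code; the sketch simply makes the logic of the design explicit.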
HOW ATTENTION MATTERS! TEACHES CONCEPTS IN ATTENTION, MEMORY AND LEARNING
We chose several phenomena having to do with attention, memory, and learning to use as demonstrations. In
choosing these, we sought effects that were robust and striking, and that would work well in a fully online
environment. Most of these will be very familiar to psychologists, and thus we have described in detail only one – the visual cocktail party effect – which is less common.
VISUAL COCKTAIL PARTY EFFECT
This demonstration taps into the classic research finding that unattended auditory information will, under certain
circumstances, enter conscious awareness. In particular, one’s name will be consciously perceived even on an
unattended channel (Moray, 1959).
The classic procedure, which involves a binaural shadowing paradigm, is not practical to carry out within an
unsupervised online module. However, the first author has adapted for class use a visual version of the phenomenon,
invented by a former student for a class project (Drummond, 2002; see also Wolford & Morrison, 1980). In it,
alternating lines of text are presented, one line in bold and one in regular typeface. Participants are instructed to
read only the bold text, ignoring the rest. The bold text contains innocuous wording about how attention works, and
thanks the participant for reading the text. The light-faced type, i.e., the unattended verbiage, contains a variety of
potentially attention-grabbing words such as sex, murder, 911 and mentions a number of names which we put
together from a list of first names that are currently the most frequent among traditional-aged college students.
The intended result is that participants will read the attended, bold-face text, but the more emotionally charged
words and, if it is present, one’s own name will intrude into conscious awareness. This illustrates how
we process a great deal of environmental stimuli but do so outside of conscious awareness, and that we are not
always in control of where our attention is directed from moment to moment.
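The construction of the alternating-lines stimulus can be sketched in code. The attended wording, the name list, and the pairing scheme below are illustrative assumptions, not the module's actual stimuli:

```python
# Hypothetical sketch of how the alternating-lines stimulus might be built:
# attended lines (rendered in bold) carry the innocuous message, while
# unattended (light-faced) lines carry charged words and common first names.

ATTENDED = [
    "Attention lets us select some",
    "information for processing while",
    "other information is ignored.",
    "Thank you for reading this text.",
]
CHARGED = ["sex", "murder", "911"]
NAMES = ["Emily", "Jacob", "Ashley", "Michael"]

def build_stimulus(attended, charged, names):
    """Interleave bold (attended) lines with light-faced (unattended) lines."""
    unattended_words = charged + names
    lines = []
    for i, text in enumerate(attended):
        lines.append(("bold", text))
        # Cycle through the unattended channel so every bold line is
        # followed by a light-faced line of attention-grabbing words.
        start = (i * 2) % len(unattended_words)
        filler = " ".join(unattended_words[start:start + 2]
                          or unattended_words[:2])
        lines.append(("light", filler))
    return lines

for face, text in build_stimulus(ATTENDED, CHARGED, NAMES):
    print(f"[{face}] {text}")
```

A renderer would then set the "bold" lines in bold type and the "light" lines in a regular face, as the demonstration describes.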
CHANGE BLINDNESS
We incorporated change blindness into the first unit of Attention Matters! with an embedded video called The
Amazing Color Changing Card Trick (Wiseman, 2012). Created by psychologist Richard Wiseman, the video misdirects
viewers’ attention toward an underwhelming card trick involving red and blue cards, while in the meantime,
unattended parts of the scene undergo major changes. The video is then replayed with the changes highlighted.
This phenomenon involves attempting to detect changes in a visual scene. Frequently, when attention is disrupted
or attracted elsewhere – for example, by the introduction of a flicker between changes, or patterned “mudsplashes”
that flash over the scene – these changes escape detection, even when they occupy large swaths of the visual field
(O’Regan, Rensink, & Clark, 1999; Rensink, O’Regan, & Clark, 1996, 2000; Rensink, 2002).
STROOP EFFECT
We incorporated the Stroop effect (Stroop, 1935) in the second unit by directing students to click on a link to an
online demonstration of this effect (https://faculty.washington.edu/chudler/java/ready.html). Notably, this
particular Stroop effect demonstration includes a version of the task that is accessible to colorblind individuals,
modeled on a picture-word interference task; in this alternative version, participants read animal names
superimposed on conflicting pictures (e.g., the word “cow” superimposed on a picture of a chicken).
MEMORY FOR A COMMON OBJECT
We incorporated this phenomenon by directing participants to draw a penny on scratch paper, then to describe any
inaccuracies in the associated discussion forum after comparing the drawing to the actual object (or, to a referent
picture we provided within the unit). Optionally, they could upload a photograph of their drawings to the discussion
forum as well.
This demonstration draws on Nickerson and Adams’s now-classic article in which they asked participants from the
United States to demonstrate recall for a common object – a U.S. one-cent coin – by sketching the object. Typically,
these drawings from memory contain glaring inaccuracies, such as reversing the direction in which the portrait of
Lincoln is facing (Nickerson & Adams, 1979).
DRIVING AND TEXTING VIDEO
Texting while driving is not directly related to learning, but given its importance as a safety issue (Strayer, Watson, &
Drews, 2011) and its value as a dramatic illustration of how distraction affects performance, we chose to include it.
This video (RYDBELGIUM, 2012) depicts a sophisticated prank in which student drivers in Belgium are
informed by a person posing as a driving examiner that as part of their driving test they will need to demonstrate
proficiency in texting while driving. The ensuing video montage features students frantically insisting that this is a
dangerous activity, as they run over traffic cones and otherwise flub the test.
CONTENT
Each set of activities and media is followed by a brief (approximately one web page) written explanation titled “What
was that about?” These explanations did not emphasize the technical aspects of the various phenomena, but rather
focused on the relationship between attention, memory, learning, and other practical considerations.
The third unit, What’s Your Plan?, was structured somewhat differently given that the objective for this unit was to
introduce students to techniques of intentional behavior change and invite them to apply those concepts to
distraction and related issues in their own study habits. Another objective was to follow up information about
distraction with practical tools for acting on that information, such as links to researcher John Norcross’ Changeology
book and website (Norcross, 2012; http://www.changeologybook.com).
The idea here was to give students strategies for making behavior changes involving technology use. For example,
in the case of a student who wanted to break a habit of texting during class, relevant strategies could include
planning for how to cope with the temptation to pull out the phone when class became boring or otherwise aversive,
and establishing a new habit such as leaving it in a zipped backpack. Based on the intentional behavior change
literature, students should also be aware that most changes like this take time, failures along the way are likely, and
trying again even in the face of failure is worthwhile. As in any other form of substantive behavior change, altering
technology use patterns is effortful, so expecting difficulty and setting an intention to persist after backsliding are
important if the behavior change plan is to succeed.
ACTIVITIES AND ASSESSMENTS
PRE-ASSESSMENTS
At the beginning of Attention Matters!, students completed two pre-assessments. These served a dual purpose: first,
to encourage students to reflect on what they believe to be true about retention, as well as how they handle
multitasking in classes, work situations, and social settings; second, to allow for assessment of impacts and data
gathering related to the project. These two assessments, each involving twenty closed-ended questions, are
described in more detail elsewhere (Miller, Doherty, Butler, & Coull, 2017). The first, which we call the
Counterproductive Beliefs Survey, probed to what extent students accepted ideas such as the belief that they could
learn by osmosis, or that they personally had an exceptional ability to multitask. The second, which we call the
Multitasking Behaviors Inventory, was a self-report survey probing how often participants engage in behaviors such
as doing non-class-related email during classes, texting while at work, or gaming non-socially in a social setting.
DISCUSSIONS
Each unit culminated in a discussion forum in which students were asked to comment and reflect on the activities in
the unit, with particular attention to whether they were surprised by their performance. For the third unit, students
were asked to discuss actions they would take to help themselves resist distractions during classes and study
sessions, or other related goals such as avoiding texting while driving. Our intent was to take advantage of the
hypocrisy effect (Festinger, 1957; Stone, Aronson, Crain, Winslow, & Fried, 1994), encouraging students to commit
in a public way to what they would do to better manage attention in the future.
The discussions were also intended to encourage students to share ideas with peers and offer encouragement for
putting plans into practice. Lastly, we specifically invited students to consider the case in which they were being
distracted by other people (e.g., classmates who are watching videos on easily visible laptop screens), with the goal
of uncovering ideas about how they might navigate such a potentially awkward social situation.
QUIZZES
Each unit also included a brief, ten-question multiple-choice quiz. Scores were displayed immediately after the quiz,
with feedback about wrong answers.
CLOSING REFLECTION
At the end of Attention Matters!, students completed a brief open-ended self-reflection about how they would apply
the material going forward.
IMPACTS
This resource has been well utilized: as of this writing, 2,969 participants have enrolled in Attention Matters! at our
home institution. Of those enrolling, roughly three-quarters completed each end-of-unit assessment: 79% for unit
1, 76% for unit 2, and 73% for unit 3. While these numbers fall well short of 100%, they are
relatively high compared to those often found for other types of open online modules such as MOOCs (massive open
online courses; Pursel, Zhang, Jablokow, Choi, & Velegol, 2016). This may reflect the short time commitment and
the incentives provided by instructors assigning the module for extra credit.
The Attention Matters! module has become a standard for the design of other self-paced modules and tutorials that
are included in our institution’s options for student, faculty and staff training. Modules include topics such as
academic integrity, supporting first generation college students, and student as well as faculty introductions to
Blackboard Learn. The format has also become the de facto standard for any online, self-paced workshops that are
being designed by the institution’s eLearning Center.
We do not presently know whether students completing the module engage in fewer problematic multitasking
behaviors during learning activities. However, there is some evidence that beliefs relating to multitasking may be
affected in the desired direction. During the 2016–2017 academic year, we randomly assigned some students to
complete the Counterproductive Beliefs Survey before working through the Attention Matters! module and some
to complete it after. Analyses indicate that counterproductive beliefs were significantly lower among students who
completed the survey after the module than among those who completed it beforehand, suggesting positive impacts
on awareness and knowledge about how attention and memory work (Coull, Miller, Butler, & Doherty, 2017; Miller,
Doherty, Butler, & Coull, 2017).
Comments students make within the discussion forums have also been a source of insight. Students commonly
express surprise about the demonstrations, and encouragingly, sometimes spontaneously express that they find
them fun or interesting. The closing discussion in which they talk about in-class distraction has been eye-opening for
us as well. Students express a range of preferred approaches for managing not only their own distractions, but also
the difficult situation in which other students’ in-class behaviors are impeding their learning.
Many are quite self-aware regarding the temptations posed by having devices such as smartphones at their fingertips
during long and challenging classes, and are committed to trying strategies such as making sure that phones are in
airplane mode, tucked out of sight in backpacks and so forth. Some endorse, in particular, the Pocket Points app
(www.pocketpoints.com), which we were unaware of prior to starting this project. This app allows students to earn
coupon points redeemable at local businesses in return for keeping their phones off during class time. As for handling
distracting behaviors from other students, preferred strategies fall into two categories: the non-confrontational
type, which involves finding a better place to sit during class, and the more direct type, which involves speaking to
the professor or to the offending classmates themselves.
CHALLENGES
Attention Matters! does require ongoing monitoring, primarily to ensure that the discussions are functioning
smoothly, to interject occasional responses within them, and to handle intermittent maintenance issues. However,
this is considerably less than what would be involved in, for example, a typical online or face-to-face course. In this
sense Attention Matters! resembles a MOOC, with most of the labor front-loaded in the construction and
dissemination of the module, and relatively little intensive interaction, feedback, or personalized grading thereafter.
The module is designed to be self-paced, and faculty who assign their students to complete it need only ensure that
the Certificate of Completion is submitted if required. Otherwise there is minimal need on their part to monitor
their students’ participation.
And, much like a MOOC, we the designers exert minimal control over who enrolls in the course. We could, but have
opted not to, question participants extensively about what brought them to the module or set entry criteria,
preferring to leave it open to anyone at our institution while minimizing the time participants spend on
administrative aspects of the course that are not germane to the material. This does mean that we know relatively
little about the course load, academic specialization, motivations or background of the individuals completing the
module.
But for psychologists, the most vexing challenge of offering this kind of experience is perhaps a philosophical one. In
Attention Matters!, we use a number of memorable, well-known effects to try to make a point to our participants
about how limited their attention is, and to encourage them to reflect on how multitasking impacts performance in
a variety of everyday tasks and settings. We offer explanations for these phenomena that we believe to be generally
accurate, but that may not reflect the full level of detail found within theoretical accounts favored by academic
experts. For example, change blindness probably involves a complex interplay among multiple factors, including the
ability to focus attention on specific locations and the relationship among objects in a display (Schankin, Bergmann,
Schubert, & Hagemann, 2016). Thus, although it is generally accurate to say that change blindness occurs as a
function of failed attention, this is a fairly substantial simplification. Our challenge, as we saw it, was to find the
balance between fidelity to the science and utility for reaching our particular goal and target audience: to build
better understanding and behaviors relating to distraction among a general population of college students.
Does our simplified presentation of these cognitive phenomena mean that students who complete Attention
Matters! will be at a disadvantage if they later take cognitive psychology courses that feature detailed theoretical
explanations? We proceed with the belief that they will not. However, we wish to draw attention to this issue as an
important one for other psychologists seeking to adapt this approach to other topics, especially if they plan to create
materials that are engaging for general audiences in a fast-paced, interactive format.
A final consideration is that of accessibility. Our two videos were captioned and thus usable for hearing-impaired
individuals, as were all other materials, and as described earlier, the Stroop effect demonstration includes a version
that is usable for colorblind individuals. However, there are inherent limitations in accessibility to low-vision
individuals for our demonstrations involving visual attention. In the instructions to students, we acknowledge this
limitation and encourage students with accessibility concerns to contact us or work directly with their professors to
determine an alternative. Our faculty guide (see Appendix A) also contains a section, titled Accessibility, that lists
which of the materials are accessible to students with sensory limitations and which are not, so that faculty can
determine appropriate alternatives.
CLOSING SUMMARY
The Attention Matters! project represents a novel approach to giving psychology away (Miller, 1969), one that we
have found to hold strong appeal for faculty, as evidenced by their willingness to incorporate the module into their
courses. Because of this, it has engaged numerous students across the curriculum. It has also
allowed us to learn more about the ways in which college students view the role of technology in their own lives and
how they manage distractions, both through the assessment instruments and through the qualitative comments
offered in the discussion forums.
Institutions wishing to create similar resources should start by establishing a collaboration between an instructional
designer or design team, and a subject matter expert. Within the frame of an online module, instructional design
and subject matter experts can collaborate to identify short, compelling multimedia, video and online activities that
actively engage students in the principles the module is intended to address. Short auto-graded assessments can
then be coupled with reflective discussion to further engage students in questions raised by the activities.
Lastly, student audiences can be targeted by reaching out to program leaders, such as, in our case, the director of
our institution’s First Year Learning Initiative (who is the first author of this publication;
https://nau.edu/provost/vptlda/fyli/). These program leaders can then promote the module among their affiliated
faculty. It has been our
consistent experience that when presented with a module that is easy to use, well grounded in evidence, and
relevant to their own teaching concerns, faculty across disciplines are highly receptive to the idea of using it in their
own courses.
REFERENCES
BRIGHT SIDE. (2017, September 25). This surprising test will reveal the truth about you [Video file]. Retrieved from
https://www.youtube.com/watch?v=wv3X5uHGKKs.
Carr, N. (2010). How the Internet is making us stupid. Telegraph.co.uk. Retrieved from
http://www.telegraph.co.uk/technology/internet/7967894/How-the-Internet-is-making-us-stupid.html
Conard, M. A., & Marsh, R. F. (2014). Interest level improves learning but does not moderate the effects of
interruptions: An experiment using simultaneous multitasking. Learning and Individual Differences, 30,
112–117. doi: 10.1016/j.lindif.2013.11.004
Coull, W., Miller, M. D., Butler, N., & Doherty, J. (2017, April). The Attention Matters! project: Addressing college
students’ beliefs about attention. Poster abstract submitted to the annual meeting of the Rocky Mountain
Psychological Association, Salt Lake City, UT.
David Strayer, PhD - Cognitive neuroscientist. (2014). Retrieved July 28, 2017, from
http://www.apa.org/action/careers/improve-lives/david-strayer.aspx
DiClemente, C. C. (1993). Changing addictive behaviors: A process perspective. Current Directions in Psychological
Science, 2(4), 101–106. doi:10.1111/1467-8721.ep10772571
Drummond, J. (2002). Cognitive demonstration project submitted for PSY 651: Human Cognition, Northern Arizona
University.
Ellis, Y., Daniels, B., & Jauregui, A. (2010). The effect of multitasking on the grade performance of business students.
Research in Higher Education Journal, 8, 1–10.
Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
Grant, A. (2013, September 18). Goodbye to MBTI, the fad that won’t die. Retrieved from
https://www.psychologytoday.com/blog/give-and-take/201309/goodbye-mbti-the-fad-won-t-die
Junco, R. (2012). In-class multitasking and academic performance. Computers in Human Behavior, 28(6), 2236–2243.
doi:10.1016/j.chb.2012.06.031
Junco, R., & Cotten, S. R. (2012). No A 4 U: The relationship between multitasking and academic performance.
Computers & Education, 59(2), 505–514. doi:10.1016/j.compedu.2011.12.023
Lang, J. M. (2017, March). The distracted classroom. The Chronicle of Higher Education. Retrieved from
http://www.chronicle.com/article/The-DistractedClassroom/239446
Rosen, L. D., Lim, A. F., Carrier, L. M., & Cheever, N. A. (2011). An empirical examination of the educational impact
of text message-induced task switching in the classroom: Educational implications and strategies to enhance
learning. Psicología Educativa, 17(2), 163–177. doi:10.5093/ed2011v17n2a4
Miller, G. A. (1969). Psychology as a means of promoting human welfare. American Psychologist, 24(12), 1063–1075.
Miller, M. (2014, December). Tweet and you’ll miss it. Inside Higher Ed. Retrieved from
https://www.insidehighered.com/views/2014/12/02/essay-calls-professors-start-teaching-students-about-distraction-and-attention
Miller, M. D. (2015, June 26). Can millennials pay attention to classwork while texting, tweeting and being on
Facebook? The Conversation.
Miller, M. D., Doherty, J. D., Butler, N., & Coull, W. (2017). Changing counterproductive beliefs about attention,
memory and multitasking: Impacts of the Attention Matters! open online module. Manuscript in
preparation.
Mitton, M., [Mark_Mitton] & Aviner, J. (2011, November 6). McGurk effect [Video file]. Retrieved from
https://www.youtube.com/watch?v=PWGeUztTkRA
Mind Oddities. (2017, June 8). 12 illusions that will test your brain [Video file]. Retrieved from
https://www.youtube.com/watch?v=CYD8zRDaE1I
Moray, N. (1959). Attention in dichotic listening: Affective cues and the influence of instructions. Quarterly Journal
of Experimental Psychology, 11(1), 56–60. doi:10.1080/17470215908416289
Myers, D., & DeWall, C. N. (2016). Exploring psychology (10th ed.). New York: Worth Publishers.
Nickerson, R. S., & Adams, M. J. (1979). Long-term memory for a common object. Cognitive Psychology, 11(3), 287–
307. doi:10.1016/0010-0285(79)90013-6
Norcross, J. C. (2012). Changeology: 5 steps to realizing your goals and resolutions. Retrieved from
http://ovidsp.ovid.com/ovidweb.cgi?T=JS&PAGE=reference&D=psyc9&NEWS=N&AN=2012-32899-000
O’Regan, J. K., Rensink, R. A., & Clark, J. J. (1999). Change-blindness as a result of “mudsplashes.” Nature, 398(6722),
34. doi:10.1038/17953
Prochaska, J. O., DiClemente, C. C., & Norcross, J. C. (1992). In search of how people change: Applications to addictive
behaviors. American Psychologist, 47(9), 1102–1114. doi:10.1037/0003-066X.47.9.1102
Pursel, B. K., Zhang, L., Jablokow, K. W., Choi, G. W., & Velegol, D. (2016). Understanding MOOC students:
Motivations and behaviours indicative of MOOC completion. Journal of Computer Assisted Learning, 32(3),
202–217. doi:10.1111/jcal.12131
Rensink, R. A. (2002). Change detection. Annual Review of Psychology, 53, 245–277.
doi:10.1146/annurev.psych.53.100901.135125
Rensink, R. A., O’Regan, J. K., & Clark, J. J. (2000). On the failure to detect changes in scenes across brief
interruptions. Visual Cognition, 7(1–3), 127–145. doi:10.1080/135062800394720
Rensink, R. A., O’Regan, J. K., & Clark, J. J. (1996). To see or not to see: The need for attention to perceive changes in
scenes. Investigative Ophthalmology and Visual Science, 37(3).
RYDBELGIUM. (2012, April 27). The impossible texting & driving test. [Video file]. Retrieved from
https://www.youtube.com/watch?v=HbjSWDwJILs
Schankin, A., Bergmann, K., Schubert, A.-L., & Hagemann, D. (2016). The allocation of attention in change detection
and change blindness. Journal of Psychophysiology, 1–13. doi:10.1027/0269-8803/a000172
Stone, J., Aronson, E., Crain, A. L., Winslow, M. P., & Fried, C. B. (1994). Inducing hypocrisy as a means of encouraging
young adults to use condoms. Personality and Social Psychology Bulletin, 20(1), 116–128.
doi:10.1177/0146167294201012
Straumsheim, C. (2015, June). Take note. Inside Higher Ed.
Strayer, D. L., Watson, J. M., & Drews, F. A. (2011). Cognitive distraction while multitasking in the automobile. In B.
Ross (Ed.), The Psychology of Learning and Motivation (Vol. 54, pp. 29–58). Burlington: Academic Press.
doi:10.1016/B978-0-12-385527-5.00002-4
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18(6), 643–
662. doi:10.1037/h0054651
Tinto, V. (2012). Completing college: Rethinking institutional action. Chicago: University of Chicago Press.
Wiseman, R. [Quirkology]. (2012, November 21). Colour changing card trick. [Video file]. Retrieved from
https://www.youtube.com/watch?v=v3iPrBrGSJM
Wolford, G., & Morrison, F. (1980). Processing of unattended visual information. Memory & Cognition, 8(6), 521–
527. doi:10.3758/BF03213771
Wood, E., Zivcakova, L., Gentile, P., Archer, K., De Pasquale, D., & Nosko, A. (2012). Examining the impact of off-task
multi-tasking with technology on real-time classroom learning. Computers & Education, 58(1), 365–374.
doi:10.1016/j.compedu.2011.08.029
APPENDIX A - ONLINE RESOURCES
Unit One: What Do You Know About Attention? (in order of presentation in the unit)
- The Colour Changing Card Trick: https://youtu.be/v3iPrBrGSJM
- Sports Focus Test: https://youtu.be/vJG698U2Mvo (we changed the name of this test to avoid spoilers)
- Mental Multitasking: https://faculty.washington.edu/chudler/java/ready.html
- ALTERNATIVE: Mental Multitasking: https://faculty.washington.edu/chudler/java/ready.html (this second link is
provided for participants who may be color blind)
Unit Two: What Happens When We Overload Attention?
- The Impossible Texting and Driving Test: http://www.youtube.com/watch?v=HbjSWDwJILs
Unit Three: What’s Your Plan?
- Changeology: http://www.youtube.com/watch?v=HbjSWDwJILs
Attention Matters! Faculty Guide:
http://bit.ly/2BGTNV5 or
https://drive.google.com/open?id=1b5D_8mUXcSvkoeu9Bho_zmmNNHmCo8mV
The Attention Matters! faculty guide includes instructions for Northern Arizona University faculty on how to include
the module in their own courses. It also provides some background and lists the unit level learning outcomes for the
module.
CHAPTER 25
ONLINE SERVICE LEARNING IN PSYCHOLOGY: LESSONS FROM LITERATURE AND EXPERIENCE
LINDSAY A. PHILLIPS, ALBRIGHT COLLEGE; JILL K. MARRON, WIDENER UNIVERSITY; CHRIS KICHLINE, ALBRIGHT
COLLEGE; CHRISTINA D. FOGLE, ALBRIGHT COLLEGE; AND ELLEN PILLSBURY, ALBRIGHT COLLEGE
INTRODUCTION
Service learning is a beneficial pedagogical tool (as reviewed by Bringle, Reeb, Brown, & Ruiz, 2016a; Bringle, Ruiz,
Brown, & Reeb, 2016b; Chapdelaine, Ruiz, Warchal, & Wells, 2005; Heckert, 2010) that can be defined as “a
teaching and learning strategy that integrates meaningful community service with instruction and reflection to
enrich the learning experience, teach civic responsibility, and strengthen communities” (National Service Learning
Clearinghouse, 2012, para. 1). Because this chapter is written for instructors of psychology, it is important to note
that service learning pedagogy is related to Goal Three (Ethical and Social Responsibility in a Diverse World) of the
American Psychological Association’s (APA) Guidelines for the Undergraduate Psychology Major
(www.apa.org/ed/precollege/about/psymajor-guidelines.pdf), as well as APA’s Ethical Principles of Psychologists
and Code of Conduct (http://www.apa.org/ethics/code/). The importance of integrating service learning into
psychology education is well-documented (Bringle et al., 2016a; Bringle et al., 2016b).
With an increase in online and hybrid education, as well as practical barriers to implementing service learning,
psychology instructors might find it difficult to incorporate this valuable pedagogy. This chapter addresses how we
can increase flexibility when using service learning by creating opportunities for face-to-face service learning in
online courses, and by creating online service learning opportunities (a technological advancement for instructors)
that can be used in traditional, online, and hybrid courses at all levels of education. Waldner, McGorry, and Widener
(2012) defined “e-service-learning” as, “when the instructional component, the service component, or both are
conducted online” (p. 125). A related term for this endeavor might be virtual service learning. For clarity, we will
discuss two distinct options for delivering service learning: Face-to-face service learning in online coursework and
service learning projects that occur entirely online (and can be implemented in online, hybrid, or face-to-face,
traditional courses). We begin our chapter with a review of literature and conclude with resources and suggestions
for psychology instructors who wish to utilize online service learning.
RATIONALE: SERVICE LEARNING IN PSYCHOLOGY AND THE NEED TO OVERCOME BARRIERS
Because service learning pedagogy is highly relevant to the field of psychology, it is important for psychology
instructors to stay abreast of how technology has changed the ways service learning can be incorporated. Bringle et
al. (2016a) argued
for “the need in education in psychology to be concerned with the public good and to be sensitive to pressing social
issues such as justice, peace, the alleviation of suffering, and the promotion of human flourishing to complement
the prestige-oriented culture in academic psychology that is heavily weighted toward basic research” (p. 23). Service
learning is related to Goal Three (Ethical and Social Responsibility in a Diverse World) of the APA’s Guidelines for the
Undergraduate Psychology Major (http://www.apa.org/ed/precollege/about/psymajor-guidelines.pdf). Service
learning is also consistent with aspirational statements in the preamble of APA’s Ethical Principles of Psychologists
and Code of Conduct (http://www.apa.org/ethics/code/), specifically Principle B (Fidelity and Responsibility) and
Principle D (Justice), which call on psychologists to contribute time even without personal benefit and to ensure that
all individuals have access to and can benefit from the contributions of our field. An issue that emerges, however, is
that while most would agree upon the importance of service learning, “not all educators, including instructors of
psychology courses, are interested in civic learning for their students, neither in each course in the curriculum nor
as an area of work for themselves and their teaching, research, or professional service” (Bringle et al., 2016a, p. 26).
Barriers to service learning have been well-addressed in the literature, in relation to both psychology and other
disciplines. As previously reviewed by Phillips, Marron, Baltzer, Kichline, Filoon, and Whitley (2017), barriers can
include making time to integrate service learning, ethical dilemmas and risks associated with projects, finding
projects that are appropriate for instructors’ student population (such as the working adult) or teaching modality
(such as online or hybrid courses), and instructors needing professional development (i.e., limited knowledge about
service learning). Technology can help to alleviate the barriers of limited time, finding projects that are appropriate
for diverse student populations (like working adults who might have scheduling difficulties when projects occur
outside of class time), and teaching modalities (by taking service learning projects online for students in online and
hybrid courses).
FACE-TO-FACE SERVICE LEARNING IN ONLINE COURSES
Although the bulk of this chapter will focus on online service learning opportunities (online projects that allow
students to assist a community partner), we will first discuss how face-to-face service learning opportunities can still
be available to students in online courses by reviewing articles by Guthrie and McCracken (2010) and Bossaller
(2016).
In their case study of two undergraduate courses offered at the University of Illinois at Springfield, Guthrie and
McCracken (2010) conducted interviews with students, reviewed course documents, examined course technology,
and observed students engaged in two service learning courses. In both of the examined courses, students were
required to locate a non-profit or other community-based organization to which they would provide 60 hours of
service. The online courses were structured such that course content (which included public policy, leaders and
activists, social change, and a history of service) served to contextualize the service learning experiences, and course
assignments were designed to allow reflection on those experiences (e.g., journals and a culminating reflective
project). In this way, instructors were able to make the learning process both concrete and meaningful; as described
by Guthrie and McCracken (2010), communities became “laboratories.” The tools available in the online
environment facilitated live chats, asynchronous discussion board conversations (i.e., online conversations that can
occur whenever students are available to read and post), and collaborative projects. Because of the online delivery
method, the University of Illinois at Springfield courses included students from diverse communities across the
United States and around the world. This diversity in student enrollments contributed to the sharing of unique
perspectives and rich discussions among course participants. Another key finding was that students reported being
engaged in community service after they had completed their online service learning course.
Bossaller (2016) explored a graduate-level online service learning course in a library and information science
program at the University of Missouri in a qualitative study. The goal of the course was to develop students’
awareness of the needs of the community outside of the library, and students were responsible for implementing
and working on projects that connected their local libraries to other organizations. Requirements for the project
were flexible; for example, the type of service organization was not specified, and students’ purpose in their
placements was not clearly defined.
Bossaller (2016) interviewed eight students who completed the online service learning course, five of whom were
considered nontraditional students and three of whom were traditional students, in two group-interview settings.
Two benefits of service learning that emerged from the interviews included students’ increased empathy for
marginalized groups and a new awareness of the importance of listening to the needs of the community.
Several challenges related to service learning were reported as well, though the three traditional students did not
report any major challenges with implementing or completing their service learning (Bossaller, 2016). Challenges
related to service learning included finding a placement, explaining the purpose of service learning to volunteer
coordinators (i.e., meaningful work that leverages students’ expertise and developing knowledge, as opposed to
menial tasks), and scheduling the service learning to fit with students’ other commitments. To overcome the
challenge of finding a placement, Bossaller (2016) suggested that instructors across institutions develop a network
of service organizations that can be used to facilitate students’ service learning placements.
These two studies by Guthrie and McCracken (2010) and Bossaller (2016) indicate that there are options for face-to-face service learning even when instructing an online class. We now turn our focus to how instructors can find
projects that serve a community partner and can be completed entirely online.
ONLINE SERVICE LEARNING OPPORTUNITIES
While the previous two studies examined service learning projects that took place in face-to-face settings, other
courses incorporate online service learning that takes place entirely in a virtual setting. We are not making the
argument that online service learning projects have more value than face-to-face opportunities. However, we do
believe that service learning can be incorporated into online courses, and we believe that designing online service
learning projects may help to overcome the barriers that time and scheduling pose in all course formats (online,
hybrid, and traditional). Furthermore, online opportunities may permit this important pedagogy to be
implemented in a variety of psychology courses, including those that are taken online and those that are taught to
groups that tend to have scheduling limitations (such as working adult students; see Phillips, 2013).
Bringle et al. (2016a and 2016b) discussed four types of service learning. The first is direct service learning, in which
students work face-to-face with service recipients (e.g., individuals experiencing homelessness, students at a local
school, older adults in a gerontology course, etc.). Although this type of service learning could not occur online, the
other three types of service learning projects could occur in an online implementation. The remaining three types of
service learning include indirect service learning (“working behind the scenes to improve, expand, or coordinate
resources for a community agency or neighborhood association,” Bringle et al., 2016b, p. 301), research service
learning (for which students could conduct or contribute to an online research project), and advocacy service
learning (such as creating online presentations to raise awareness of a concern, contacting local politicians to
advocate for specific groups, etc.).
An example of indirect service learning is provided by Waldner et al. (2012), who used the example that “students
in an online grant-writing class might help write grant proposals for a nonprofit community partner” (p. 125) to
illustrate how students in both traditional and online courses could complete a service learning project that requires
only online work.
Waldner et al. (2012) noted that online service learning projects may be complicated by difficulties coordinating technology with community partners, such as synchronous and asynchronous tools. It is imperative that community partners be amenable to using the technology the project requires, or be willing to learn it when applicable (Waldner et al., 2012). Location could
be a barrier as well, particularly if the community partner is in a different region or country, which could affect their
availability or impact timelines and deadlines. To overcome difficulties, a clear and thoughtful course design is
important (Waldner et al., 2012). Course design can include aligning the online activities to learning objectives,
making sure there is ample time to accomplish the service learning project, and monitoring students’ experiences.
Unfortunately, there is limited literature on the use of online service learning opportunities. Becnel and Moeller
(2017) described a project in which graduate students in an online library science program assisted their local
libraries to refine book purchase lists. Although implementing this project came with challenges (such as needing to
have very clear assignments and guidelines since instructors could not provide direct supervision in this online
course), the authors concluded the project was very beneficial, and an analysis of themes in students’ responses
(completed by researchers who reviewed journals and reflection papers of 22 students) indicated that the students
found the project and course discussions to be valuable. We believe the project described by Becnel and Moeller
(2017) could occur as face-to-face service learning in online coursework (since students assisted their local libraries)
or as a service learning project that occurs entirely online (depending on the type of assistance provided to the
library).
In addition to the Becnel and Moeller (2017) example, Heckert (2010) provided two examples of service learning
projects that might alleviate the barrier of instructors and students having limited time. Although these projects
were not discussed as being online, these examples could be implemented online.
As a first example, Heckert’s (2010) suggestion is to seek project partners within one’s own campus community, such
as a campus office or organization. Heckert (2010) refers to these partners as “internal clients” (p. 33). This method
reduces faculty time spent coordinating with a community partner and avoids potential transportation difficulties
associated with off-campus partners, but does not provide outside connections that could be beneficial to students,
as many other service learning opportunities could provide. Psychology instructors could implement Heckert’s
(2010) example by completing a project for their own campus online, for example, surveying undergraduates’
perspectives on the mental health services provided in the college counseling center.
The second example of reducing the barrier of time that Heckert (2010) identified is “end product collaboration”
(p. 33), in which students work together to produce a document or product that will assist an identified partner
either in the campus community or in the external community. Examples of products Heckert (2010) described
include students working together to design a workshop for a local agency or develop curriculum for a local school;
we believe both of these projects could be completed via virtual collaboration.
A case example of an online service learning project is described by Phillips (2014), who maintains an ongoing project
for psychology students to find resources for individuals who reenter society following a prison or jail sentence. This
project allows students to develop resource guides for this population that are then shared on a public website.
Developing a resource guide (whether for a website or for a community agency to distribute) is an additional example
of a service learning project that could occur entirely online.
IMPLEMENTING AND ASSESSING ONLINE SERVICE LEARNING PROJECTS
Helms, Rutti, Hervani, LaBonte, and Sarkarat (2015) provided recommendations on how to implement and assess
service learning projects in online courses. Recommendations for implementation include allocating time to
thoroughly describe expectations for the project, including milestones and due dates, and identifying community
partners for the projects. This identification can occur either through the instructor making the connections with
community partners who could benefit from student assistance, or through the students independently making
connections with organizations in their own communities.
Helms et al. (2015) also suggested that the instructor utilize groups for the service learning projects, rather than
having students work individually. Groups can be assigned by the instructor, giving consideration to factors such as
student interests, demographics, and location. Groups should then develop their project outline and action plan.
The former should include a problem statement, the goal of the project, and how the project connects to course
content, while the latter should include the specific steps that will be undertaken to complete the project, the
responsibilities of each group member, and any data that will be collected as a part of the project. In addition, the
action plan portion of the project should involve students in learning more about the problem they are trying to
solve via online research and communication with related organizations.
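For instructors who track these components digitally, the outline and action plan that Helms et al. (2015) describe can be captured in a simple structured template. The sketch below is purely illustrative (the field names are our own, not the authors'); it records both documents and flags any required component a group has left blank:

```python
# Illustrative template for the Helms et al. (2015) project outline and
# action plan; all field names here are our own inventions.
REQUIRED_OUTLINE = {"problem_statement", "project_goal", "course_connection"}
REQUIRED_PLAN = {"steps", "member_responsibilities", "data_to_collect"}

def missing_components(outline, plan):
    """Return any required outline/action-plan fields a group left blank."""
    missing = [k for k in REQUIRED_OUTLINE if not outline.get(k)]
    missing += [k for k in REQUIRED_PLAN if not plan.get(k)]
    return missing

example_outline = {
    "problem_statement": "Local food bank lacks volunteer recruitment materials.",
    "project_goal": "Produce an online recruitment toolkit.",
    "course_connection": "",  # left blank, so it is flagged below
}
example_plan = {
    "steps": ["research the problem", "draft materials", "gather feedback"],
    "member_responsibilities": {"student A": "research", "student B": "drafting"},
    "data_to_collect": ["volunteer sign-up counts"],
}
print(missing_components(example_outline, example_plan))
```

A checklist like this could just as easily live in a shared document; the point is that each group submission can be verified against the same required components before work begins.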
The authors’ recommendations for project evaluation include having student groups submit both a written project
and an oral presentation, and having instructors create midterm and final assessments that are administered online
(Helms et al., 2015). A 5-point Likert scale is the suggested format for the rubrics used to assess student submissions.
The instructor should also involve other entities in the evaluation process, such as community members, other
faculty members or students, or the community partners. The involvement of multiple sources in the evaluation
process can allow for richer feedback and a more authentic project assessment. Finally, because students are
working in groups, an online survey that allows students to provide feedback on their other group members should
be included.
Because students in online courses are likely to be geographically dispersed, the authors make recommendations
for staying connected via course technology (Helms et al., 2015). For example, the instructor may post a video of themselves explaining the project to the course. Online discussion boards should be used to reflect on the projects and
to receive feedback from the instructor and from peers. Webinar technology, such as Skype, can be used for group
meetings, as could conference calls. Because students may not have easy access to the university librarian, the
librarian could be enrolled in the course in order to provide research assistance. Web links related to research and
course content could be posted in the online course, as well.
SUGGESTIONS FOR PSYCHOLOGY INSTRUCTORS
In review, Helms et al. (2015) provided general tips for implementing and assessing online service learning. As we
conclude this chapter, we intend to provide resources and tips that will help psychology instructors, in addition to
those tips suggested by Helms et al. (2015). As past literature demonstrates, students in online courses can
participate in face-to-face service learning (Bossaller, 2016; Guthrie & McCracken, 2010). Additionally, an online
service project (i.e., a project in which the students’ service occurs entirely online) can be implemented in online
courses, hybrid courses, and even in traditional courses when the flexibility of an online project could make service
learning more accessible to students. We first share resources to assist psychology instructors in this endeavor.
RESOURCES FOR SERVICE LEARNING IN GENERAL.
With respect to service learning in general, professional journals for service learning include: Journal for Civic
Commitment, Journal for Civic Engagement, Journal of Community Engagement and Higher Education, Journal of
Community Engagement and Scholarship, Journal of Higher Education Outreach and Engagement, Michigan Journal
of Community Service Learning, and Partnerships: A Journal of Service-Learning and Civic Engagement.
Another significant service learning resource is the Campus Compact coalition (www.compact.org). Campus
Compact comprises almost 1,100 colleges and universities across the country, with the purpose of supporting its member institutions in improving their local communities and educating their students to be socially
responsible citizens. One of the priorities of Campus Compact is to “establish meaningful, reciprocal community
partnerships” (Campus Compact, 2015, Strategic Plan, para. 5); to support this priority, Campus Compact offers a
number of resources and services for its members, including providing professional development to empower
instructors to integrate service into their courses and research, connecting institutions of higher education with
important issues in their local communities, and maintaining an online database of resources for community-based
teaching.
Resources available on the Campus Compact website include books and publications for purchase, blogs, news
articles, links to other web resources, videos and presentations (including slides addressing topics such as engaged
scholarship and community-based participatory research), and a searchable database of syllabi for service-oriented
courses offered by other colleges and universities. Additional resources specifically related to service learning
include a guide for incorporating structured reflection into service learning courses and a service learning toolkit for
administrators that includes information on planning for and measuring community engagement.
RESOURCES SPECIFIC TO PSYCHOLOGY.
Campus Compact contains both general information and discipline-specific information for psychology. Another
excellent resource for service learning in psychology is the book Service Learning in Psychology: Enhancing
Undergraduate Education for the Public Good (Bringle et al., 2016a). The Society for the Teaching of Psychology also
maintains a listserv and public Facebook discussion group where instructors of psychology could explore online
service learning opportunities and seek feedback and collaboration from other instructors at various institutions.
Finally, examples of service learning projects for various psychology courses are discussed by Bringle et al. (2016a
and 2016b) and Schmidt and Zaremba (2015).
RESOURCES FOR ONLINE SERVICE LEARNING PROJECTS.
There are also some excellent websites that provide ideas, examples of online service learning projects, and
resources. These websites are organized by several institutions of higher learning, including Boise State University
(https://servicelearning.boisestate.edu/about/sl-online-education/) and University of Central Arkansas
(http://uca.edu/servicelearning/virtual-service-opportunities/), as well as the Center for Digital Civic Engagement,
developed by Minnesota Campus Compact (https://cdce.wordpress.com/service-learning-in-online-courses/).
CONCLUSIONS: LESSONS LEARNED
In closing, service learning is a valuable pedagogical tool that benefits students (as reviewed by Bringle, Reeb, Brown,
& Ruiz, 2016a; Bringle, Ruiz, Brown, & Reeb, 2016b; Chapdelaine, Ruiz, Warchal, & Wells, 2005; Heckert, 2010) and
is especially important in the teaching of psychology (Bringle et al., 2016a; Bringle et al., 2016b). Learning how to
incorporate service learning in online courses, as well as organizing online service learning projects for all types of
courses may help psychology instructors to utilize this pedagogy in a wide variety of course settings and with a
diverse group of students.
Instructors who wish to develop online service learning projects and/or implement face-to-face projects in online
courses can benefit from utilizing the resources we provided in the previous section. Instructors will also want to
consider any possible technological challenges (Helms et al., 2015; Waldner et al., 2012), any challenges related to
overseeing such projects, any possible ethical considerations (for more, see Chapdelaine et al., 2005), and how they
will assess their project prior to introducing service learning projects to students (for more, see Helms et al., 2015,
and Schmidt & Zaremba, 2015). Bossaller (2016) pointed out that some of these projects might allow for flexibility,
which could cause frustration if project descriptions are vague, but could also be beneficial for students in that
flexibility allows students to choose projects that are tailored to their interests and career goals (which may be
especially important for some groups of students, such as adult learners; see Reed, Rosing, Rosenberg, & Statham,
2015).
We suggest that when implementing service learning projects, the assignment guidelines and grading rubrics be as
specific as possible, and that instructors be available for support and to answer questions. When implementing
service learning in psychology courses specifically, the objectives of the service learning project should be explicitly
linked to the course objectives, and perhaps to overarching goals in the field of psychology (e.g., Goal Three in
Guidelines for the Undergraduate Psychology Major; www.apa.org/ed/precollege/about/psymajor-guidelines.pdf).
A good reference for writing learning objectives for service learning that are linked to APA goals is provided by Bringle
et al. (2016b).
Service learning projects should also include some type of reflection on the experience, which instructors can
incorporate into online group discussions and individual writing. For example, Schmidt and Zaremba (2015) discuss
how an ongoing written exploration (such as a journal, which could be completed online) allows students and
instructors to continuously explore and assess the service learning experience, as well as to process emotions and
understand more about the group of individuals they are assisting (which might be especially important in online
service learning projects since services are not face-to-face with a population in need).
Finally, instructors should consider assessment in advance, including ways to assess whether or not course objectives
were met, and ways to assess student satisfaction with the service learning experience (for more suggestions, see
Helms et al., 2015, and Schmidt & Zaremba, 2015). For example, we have found it beneficial to utilize an online
student satisfaction survey (in addition to the standard course evaluation) where students can provide feedback
specific to the service learning component of the course.
Service learning can be incorporated into online courses, and designing online service learning projects for all course
types can be a beneficial endeavor for students, instructors, and communities. We also believe that the resources
and options discussed in this chapter can benefit instructors of all levels of education in psychology, from high school
to graduate school. It is important to note that we are not arguing that online projects are more beneficial than face-to-face projects in which individuals have the opportunity to directly assist people in need; however, online projects
can still offer excellent opportunities for students that also assist agencies and populations in need. Developing
online service learning opportunities allows this important pedagogy to be implemented in a variety of psychology
courses to benefit our diverse groups of students.
REFERENCES
American Psychological Association (2012). Guidelines for the undergraduate psychology major, version 2.0. Retrieved from: http://www.apa.org/ed/precollege/about/psymajor-guidelines.pdf
American Psychological Association (2017). Ethical principles of psychologists and code of conduct. Retrieved from: http://www.apa.org/ethics/code/
Bossaller, J. S. (2016). Service learning as innovative pedagogy in online learning. Education for Information, 32, 35-53. doi:10.3233/EFI-150962
Bringle, R. G., Reeb, R. N., Brown, M. A., & Ruiz, A. I. (2016a). Service learning in psychology: Enhancing
undergraduate education for the public good. Washington, DC: American Psychological Association.
Bringle, R. G., Ruiz, A. I., Brown, M. A., & Reeb, R. N. (2016b). Enhancing the psychology curriculum through service
learning. Psychology Learning and Teaching, 15(3), 294-309. doi:10.1177/1475725716659966
Campus Compact. (2015). Retrieved from: www.compact.org
Chapdelaine, A., Ruiz, A., Warchal, J., & Wells, C. (2005). Service-learning code of ethics. Bolton, MA: Anker Publishing
Company.
Guthrie, K. L., & McCracken, H. (2010). Making a difference online: Facilitating service-learning through distance
education. The Internet and Higher Education, 13(3), 153-157. doi:10.1016/j.iheduc.2010.02.006
Heckert, T. M. (2010). Alternative service learning approaches: Two techniques that accommodate faculty schedules.
Teaching of Psychology, 37, 32-35. doi:10.1080/00986280903175681
Helms, M. M., Rutti, R. M., Hervani, A. A., LaBonte, J., & Sarkarat, S. (2015). Implementing and evaluating online service learning projects. Journal of Education for Business, 90, 369-378. doi:10.1080/08832323.2015.1074150
National Service Learning Clearinghouse. (2012). Retrieved from: www.servicelearning.org
Phillips, L. A. (2013). Working adult undergraduates’ interest and motivation in service learning and volunteering.
The Journal of Continuing Higher Education, 61(2), 68-73. doi:10.1080/7377363.2013.796239
Phillips, L. A. (2014). Spotlight: Building a website for returning citizens. Register Report from National Register of
Health Service Psychologists. Retrieved from: https://www.findapsychologist.org/psychologist-spotlight-building-a-website-for-returning-citizens-by-dr-lindsay-phillips/
Phillips, L. A., Marron, J. K., Baltzer, C., Kichline, C., Filoon, L., & Whitley, C. (2017). Overcoming barriers to service learning in higher education. Service learning: Perspectives, goals and outcomes. Nova Science Publishers, Inc.
Reed, S. C., Rosing, H., Rosenberg, H., & Statham, A. (2015). "Let us pick the organization": Understanding adult
student perceptions of service-learning practice. Journal of Community Engagement & Scholarship, 8(2),
74-85.
Schmidt, M. E., & Zaremba, S. B. (2015). Service learning and psychology. In D. Dunn (Ed.), The Oxford handbook of undergraduate psychology education. New York: Oxford University Press.
Waldner, L. S., McGorry, S. Y., & Widener, M. C. (2012). E-service-learning: The evolution of service-learning to
engage a growing online student population. Journal of Higher Education Outreach and Engagement, 16(2),
123-150.
CHAPTER
26
ALWAYS IN STYLE: USING
TECHNOLOGY TOOLS TO HELP
STUDENTS MASTER APA STYLE
MARIA ZAFONTE
GRAND CANYON UNIVERSITY
INTRODUCTION
The perfect reference page seems to be elusive and always just out of reach. Even in upper-division psychology
courses, students struggle with demonstrating competency in APA Style. The Publication Manual of the American
Psychological Association itself states that it was conceived to provide “sound and rigorous standards for scientific communication” and that its codification was adapted for scholarly endeavors in the social and behavioral sciences (American Psychological Association, 2010, p. xiii). As students enter the discipline, it is a marker of their
progress and immersion in the field when they can accurately employ correct APA Style in their work. Yet, again and
again, college instructors report that student skills in this area are inadequate and a great source of frustration
(Mandernach, Zafonte, & Taylor, 2016).
Exacerbating the long-reported faculty frustration with student documentation skills, the web content revolution
has made what was always a tedious and detailed process one that is increasingly murky and confusing. Back in
2001, Davis and Cohen noted that the increase in web sources was resulting in even less accurate citations. If that
was the case then, surely the explosion of websites and online content in the ensuing years has only made this
problem worse. In the sixth edition of the Publication Manual, the editors note that electronic sources have gone
from “the exception to the rule,” rendering prior models for citing out of date (American Psychological Association,
2010, p. 187). Yet the expectation is that students will be proficient at correctly documenting their sources and that
they will be graded accordingly.
Though grading for correctness in APA Style may be the driving factor for both students and faculty, at heart correct
APA Style can help students avoid plagiarism. As Sheehan (2014) argues, instructors are better off being proactive
and educating students about plagiarism at the outset of an assignment, rather than addressing the issue with
awkward conversations and ethical conduct committees after the fact. Between the ready availability of other students’ papers, free or for a fee, and the ease with which content can be cut and pasted into papers, the web has made plagiarism easier, more tempting, and nearly omnipresent. Raising the stakes further, students report being
confused about what exactly constitutes plagiarism (Childers & Bruton, 2016). In this academic culture, providing
students with the reasons and the means to cite correctly is more important than ever.
While technology and the omnipresence of electronic sources have made citing and referencing in correct APA Style
that much more challenging, it is the premise of this chapter that technology tools can also help provide a solution.
As Kirkwood and Price (2013) caution, it is important that the teacher is still the “agent” of learning, actively using
technology to create or enhance meaningful educational activities, as opposed to technology being the agent or sole
means of instruction (p. 333). Technology itself will not solve the problem; however, different online resources can
help to engage students and allow them to work with APA Style in different modalities. A few caveats: One tool in
and of itself may not be the complete solution but may instead be one tool among many. Arming students with
various resources that they can easily access, combined with technology-aided feedback and engaging instruction
can be a magical combination. Slezak and Faas (2017) identify this integrated and multi-pronged approach to
learning as “interteaching,” while others might simply call it good pedagogy.
Consider also the modality. While online classes might have little choice but to provide an electronic review of APA
Style, instructors need not be limited to a static text or even a video. Although face-to-face instruction appears to
be more effective than simply assigning online tutorials (Zafonte & Parks-Stamm, 2016), it is clear that the web offers
endless opportunities for innovation beyond, or in conjunction with, APA Style tutorials.
The remainder of this chapter highlights various online resources that help students to gain mastery of APA Style.
Some are intuitive and obvious, though no less helpful (e.g., the blog of the APA), while others, like Pinterest, may be
a bit outside the conventional box. The goal is to provide a bevy of resources for students that will be readily available
in the electronic spaces in which they already live and which bring some level of engagement and interaction to what
is often a dreary and desultory topic.
START WITH THE SOURCE
While the bound copy of the sixth edition of the Publication Manual of the American Psychological Association likely
sits dog-eared on the professor’s shelf, it is likely that students, particularly those who are not psychology majors,
are going to instead reach for the ubiquitous and seemingly helpful world of online resources when they need help
with APA Style. As such, instructors can direct them to the online companion of the trusty manual: blog.apastyle.org. It might be worth the time to preface an introduction to this great resource by clarifying “blog” for students, reminding them that while a blog is not typically considered a reliable source in college papers, this one is an exception. This is the actual blog of the APA Style experts, and so it is an authoritative source. It is dubbed the
“official companion” to the Manual and “run by a group of experts” in APA Style (American Psychological Association,
2009). It is here that students can find either direct or roundabout answers to some very specific and one-off APA
Style questions, such as citing a TED Talk or a tweet. This resource is particularly useful for more obscure citing needs
or electronic resources that are not included in the official manual. The site provides a good overview of APA Style,
in addition to their weekly posts focused on various citation quandaries. They are also active on social media but the
indispensable part of the website, and why it should be heartily recommended to students, is the search function
that it includes. Where one of the challenges of the hard copy of the Publication Manual is finding the information you need when you need it (thank heavens for an index!), the search box at the APA Style blog does a very good job of getting students the information they need with a minimal amount of frustration.
One of the other useful aspects to share with students is the blog’s advice on citing web sources. While the
Publication Manual does a fusty yet serviceable job of providing this information, the blog has a great post and visual that help students identify the key elements of a website citation when none of the examples match up. This, of course, underscores that citations are only as good as the information provided by the user; by clearly explaining and identifying those key elements of a website (website versus web page, for example), the post is a handy tool for students to refer to when their citation need is murky, as increasingly happens in our online world.
ONLINE CITATION GENERATORS
Though Kessler and Van Ullen (2005) reported citation generators to be mostly accurate and advantageous to
undergraduates who need to cite correctly, classroom experience does not always bear that out. In this author’s
quick exploration of three popular, free online APA Style citation generators, using the same recent journal article, each generator located the article being cited, so no information had to be input manually. Even so, errors came back from all three: BibMe, Citation Machine (those two are essentially the same, and both are Chegg products), and Citethisforme. All of them lost the italics when cut and pasted into Word and erroneously capitalized each word in the article title. In addition, Citethisforme did not include the DOI, left off the author’s middle initial, and included “pp.” before the page numbers. Similar errors appear when students pull citations from electronic library databases, and they make citation generators live up to the dubious reputation that instructors and researchers have given them (Stevens, 2016).
In cases where the information cannot be directly called up by the generator and instead must be input by the
student, say in the case of an organization’s web page, the citations are only as good as the information the user
inputs. So if students cannot differentiate a volume number from an issue or even a newspaper article from a journal
article, the output of the generators will likely be faulty as well, and citations will be inaccurate. Stevens (2016) found
that the most frequent student error in citations was the omission of required, often key, information and, as a
result, citation generators would be useless in this regard. There are so many nuances to explain to undergraduates before they can correctly input information (for example, when citing a newspaper article it is not sufficient to include just the year, since some 365 issues of the New York Times were published in that time and more specificity is needed) that citation generators are really a last resort, and some form of APA Style instruction is necessary to reap any benefit.
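Because the generator faults described above (lost italics, wrongly capitalized titles, a missing DOI, a stray “pp.”) stem from mechanical formatting rules, they are easy to make concrete in code. The sketch below is our own simplified illustration, not any generator's actual implementation; it sentence-cases a title naively (proper nouns would still need manual correction) and appends a DOI only when one is supplied:

```python
def sentence_case(title):
    """Naively sentence-case an article title: capitalize the first word
    and the word after a colon, lowercase the rest. Proper nouns would
    need manual correction afterward."""
    words = title.split()
    out, cap_next = [], True
    for w in words:
        out.append(w.capitalize() if cap_next else w.lower())
        cap_next = w.endswith(":")
    return " ".join(out)

def apa_journal_reference(authors, year, title, journal, volume, pages, doi=None):
    """Assemble an APA (6th ed.) journal-article reference as plain text.
    Italics are marked with *asterisks*; pages get no "pp." prefix; the
    DOI is appended whenever one is supplied."""
    ref = (f"{authors} ({year}). {sentence_case(title)}. "
           f"*{journal}, {volume}*, {pages}.")
    if doi:
        ref += f" doi:{doi}"
    return ref

example = apa_journal_reference(
    "Heckert, T. M.", 2010,
    "Alternative Service Learning Approaches: Two Techniques That Accommodate Faculty Schedules",
    "Teaching of Psychology", 37, "32-35", doi="10.1080/00986280903175681")
print(example)
```

Even this toy version depends entirely on the user supplying complete, correctly labeled fields, which is exactly the weakness Stevens (2016) identified in student use of generators.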
MICROSOFT WORD
It is possible that the most used tool is the one that we overlook as playing a large and important role in teaching
students APA Style. While instructors certainly spend copious amounts of time in Word, whether writing or grading,
there are many tools within the program that can be harnessed to help students with their APA Style accuracy.
Prewritten comments for the most common APA Style errors can be input into Word using the AutoText feature.
While general comments such as “This reference page does not meet APA style requirements” may be a good start,
instructors can go even further to highlight some of the more nuanced issues: “Be sure your references are aligned
left and then insert a hanging indent. For step-by-step instructions on inserting a hanging indent, please visit
https://support.office.com/en-us/article/Create-a-hanging-indent-7bdfb86a-c714-41a8-ac7a-3782a91ccad5.”
By typing that sentence and link once and then setting it up as AutoText and assigning
it a simple code created for that particular comment, perhaps “HI1,” the instructor can insert that whole block of
text in a comment box for students to correct on future papers. Harnessing the power of this feature in Word not
only frees up precious grading time for instructors, but allows students to learn where they are erring and how to
correct it in the future. For more information on using the AutoText feature, visit the Microsoft support page:
https://support.office.com/en-us/article/Automatically-insert-text-0bc40cab-f49c-4e06-bcb2-cd43c1674d1b.
It is often assumed that the current generation of college students is extraordinarily computer savvy because they
grew up with smartphones and laptops. While that is sometimes the case (A+ at navigating Snapchat!), instructors
are often surprised at how little proficiency students have with Microsoft Word. Specific instructions and comments
with embedded resources would be a great help in getting them up to speed on mastering all the little indents, tabs,
and headers that the exacting APA Style demands of them. In addition to using AutoText in grading, instructors can
create screenshots of the features that help students set up their papers correctly prior to submission. For example,
compiling and posting a document that includes screenshots and instructions for accessing the “Special” dropdown
box in the Paragraph tab that allows users to create a hanging indent would be of great help. While the initial setup
of such a document might be time-consuming, it could be used for several semesters and will save APA Style
grading frustration in the long run.
CREATING A TEMPLATE IN WORD
Because setting up headers correctly can be a confusing and frustrating process even for those who have done it
before (I have a four-page screenshot PDF that I follow when I create a document in APA Style from scratch), and
because the focus is to get APA Style right rather than to offer a course in the nuances of Microsoft Word,
instructors might consider providing students with a template into which they can directly type their information.
This is a great tool, particularly for underclass students with no prior exposure to setting up a paper in APA Style.
Explaining the template prior to first use (“Yes, I know it sounds strange, but you keep the words ‘running head’ at
the top”) can also help students as they acclimate to the formatting requirements. Again, creating the template
requires an investment on the part of the instructor but will pay off when every student submission has a correctly
capitalized header title.
SCREEN RECORDING SOFTWARE
Screen recording software (for example, Screencast-o-matic, Snag-it, Loom, or Jing) can serve instructors well both
to work pre-emptively and post-assessment. One initial use might be to record setting up a paper in APA Style or
walking students through the components of a title page. I often have students download a template and yet some
still get it wrong because they do not realize that the words “running head” stay at the top or that the capitalization
is intentional. Pointing these things out on a video can be helpful to students who are unfamiliar with what belongs
in a paper in APA Style. Madson (2017) argues that the use of screencasts in conjunction with written comments
might provide the best of both worlds for many students. This combination of visual and auditory feedback, with the
instructor’s voice highlighting APA Style errors and correcting them, would likely serve as a personal and effective
means of having students tune in to their APA Style errors and fix them with intention. This technique would be
particularly useful in online instruction, where a recording of the instructor’s voice
can help students to feel more connected and engaged with the class.
There are several different resources available, some free, others with a yearly subscription fee, that can
capture and record your screen along with a voice-over. Most companies offer free trials, so an instructor can get
some hands-on experience with the tool prior to committing, although yearly subscriptions are generally quite
reasonably priced. Some of the free software limits recording time to five minutes, which may be a consideration
depending on how extensive the feedback is that an instructor provides. Additionally, some require software installation
and others are web-based. Some require students to download and open a file while others simply produce a link
that students can click to view their feedback online. Budget, student needs, and hardware are considerations
that instructors must weigh prior to using screen-recording software, but the benefits for students are
manifold.
ONLINE TUTORIALS
Many universities have created APA Style tutorials to help their students, with varying degrees of interactivity and
production values. Often these are for internal use or housed within an LMS or another system the student must log
into. Hood (2014) describes the creation of a self-paced online APA Style course for her university that faculty
members can place within their courses and modify as needed in preparing their students. There are also some
open-use, outward-facing tutorials that can be helpful. I have found the Harvard School of Business’s online tutorial APA
Exposed (Mages, 2009) particularly helpful in providing a great overview, smaller modules for students to focus
on their needs, and interactive quizzes at the end that allow students to test what they have learned. There is also
evidence that these online tutorials are most effective when used in conjunction with additional instruction (Zafonte
& Parks-Stamm, 2016).
REFERENCE ORGANIZERS
For graduate students in psychology, along with some ambitious undergraduates, citation and reference
organization software offers advantages. Common software such as EndNote, Mendeley, or RefWorks can be a great
resource for storing and organizing articles, and it can also help students easily transfer correctly formatted citations
from the software into their word processing program. Like other citation generators, though, the accuracy of the
citations is only as good as the information input. For newer articles the fields populate with greater accuracy than
for older articles, but there is still a fairly large number of errors in these uploads. Again, the citations have the
capacity to be accurately formatted, but only after they have been checked and input with some care. These resources
come with the obvious added bonus of their actual intended use, which is to provide a housing mechanism for research
articles; upper-division students who may be working on a thesis or have an interest in a particular subject area
would benefit not just from the citation function but from the organization of and ease of access to their research as well.
SOCIAL MEDIA TOOLS
PINTEREST
Pinterest calls itself the “world’s catalog of ideas” that helps “people discover the things they love” (Press, n.d.). For
those unfamiliar with the concept, it essentially works like a modern-day bulletin board or vision board where users
create different themed boards and then digitally “pin” ideas, images, or articles to their board. It is the high-tech
equivalent of ripping out a magazine article to save, but in this case it never goes missing when it is most needed
because it is safe and organized online. Boards can be private or public to be shared with others. Saying “I’m going
to spend 5 minutes on Pinterest” is an oxymoron, as the site is a time-sucking rabbit hole of wonder and joy.
Though this endorsement might not make Pinterest seem an intuitive classroom technology tool, and despite the fact that
male students can sometimes be put off by it and that all users may need to create an account to view any content,
Pinterest can be used for classroom purposes with great success because it allows instructors to take something
students think is kind of fun and that they are already using and add an educational element to it (Terry, Valenti,
& Farmer, 2017).
Because this tool is so popular and attractive, it is a natural place to cultivate a space for all things APA that can then
be shared with students. In a first-year composition class, rather than handing out a list of links to students, posting in
the LMS a URL to a Pinterest page pinned with strong, reliable APA Style sources (e.g., the APA Style blog, your
school’s library home page, the OWL at Purdue) can make following up with those sources more intuitive and fun for
students. It puts all those sources online in an engaging way so that the good information is literally at their
fingertips. Alternatively, you can flip the exercise and make students collect and pin their own resources that they
have sleuthed out to help them with their papers. Ultimately, teachers may find Pinterest to be a great place to
organize their ideas and resources for their own work. Here is a link to the board created for my freshman composition
class: https://www.pinterest.com/mariazafonte/apa-style/
PADLET
Padlet is online software that allows users to easily “create and collaborate” (What is Padlet, n.d.) on pages that they
create and then invite users to view or edit. While Pinterest allows users to find and save content they come across
online, Padlet is focused instead on providing a space for the creation of original content, although images and
articles can be pasted into a Padlet as well. Padlet can be used in both synchronous and asynchronous environments.
One possible way it can be harnessed is to create an incorrect reference page that students must collaboratively
edit, noting the changes they make. By giving users the URL and full edit privileges, students can go in, change the
references, and correct them. In addition to having students work on this page individually or in groups, the Padlet
page can simultaneously be projected on a screen so the whole class and instructor can view the edits in real time.
Personal experience with this tool has demonstrated that it works best in relatively small classrooms, with maximum
effectiveness topping out at 35-40 students. Although it certainly can accommodate greater numbers, too many users
can create a situation in which students hide behind inappropriate or distracting anonymous posts, which could
ultimately take away from the learning experience. In total, this small
drawback is vastly outweighed by the flexibility and engagement provided by this online collaborative space. An
example of a Padlet reference page in need of student correction can be found at the following URL:
https://padlet.com/maria_zafonte/konzbxvj1w22
CONCLUSION
This chapter provided a sampling of the ever-expanding set of technology tools that can be used to help students
understand and master APA Style. These tools are just that: tools. There is no panacea for correcting all student
citation errors. However, judicious use of technology tools, in combination with one another and with grading and
feedback, can give students engaging and dynamic ways to gain proficiency in a rote and confusing procedural necessity.
REFERENCES
American Psychological Association. (2009, June 30). About us/APA Style Experts. Retrieved from
http://blog.apastyle.org/apastyle/apa-style-experts.html
American Psychological Association. (2010). Publication manual of the American Psychological Association.
Washington, DC: American Psychological Association.
Childers, D., & Bruton, S. (2016). “Should it be considered plagiarism?”: Student perceptions of complex citation
issues. Journal of Academic Ethics, 14(1), 1-17.
Davis, P. M., & Cohen, S. A. (2001). The effect of the Web on undergraduate citation behavior 1996-1999. Journal of the
American Society for Information Science and Technology, 52(4), 309-314.
Hood, C. L. (2014). LOOMing possibilities: Learning APA style the self-paced way. Journal of Teaching and Learning
with Technology, 3(2), 87-90.
Kessler, J., & Van Ullen, M. (2005). Citation generators: Generating bibliographies for the next generation. Journal of
Academic Librarianship, 31(4), 310-316.
Kirkwood, A., & Price, L. (2013). Missing: Evidence of a scholarly approach to teaching and learning with technology
in higher education. Teaching in Higher Education, 18(3), 327-337.
Madson, M. (2017). Showing and telling! Screencasts for enhanced feedback on student writing. Nurse Educator,
42(5), 222-223.
Mages, W. K. (2009, October). APA exposed: Everything you always wanted to know about APA format but were
afraid to ask. Retrieved from http://isites.harvard.edu/icb/icb.do?keyword=apa_exposed
Mandernach, B. J., Zafonte, M., & Taylor, C. (2016). Instructional strategies to improve college students’ APA style
writing. International Journal of Teaching and Learning in Higher Education, 27(3).
Press. (n.d.). Pinterest. Retrieved from https://about.pinterest.com/en/press/press
Sheehan, E. A. (2014). That’s what she said: Educating students about plagiarism. In W. S. Altman, L. Stein, & J. R.
Stowell (Eds.), Essays from E-xcellence in Teaching (Vol. XIII). Society for the Teaching of Psychology.
Retrieved from http://teachpsych.org/Resources/Documents/ebooks/eit2013.pdf#page=46
Slezak, J. M., & Faas, C. (2017). Effects of an interteaching probe on learning and generalization of American
Psychological Association (APA) Style. Teaching of Psychology, 44(2), 150-154.
Stevens, C. R. (2016). Citation generators, OWL, and the persistence of error-ridden references: An assessment for
learning approach to citation errors. The Journal of Academic Librarianship, 42(6), 712-718.
Terry, L., Valenti, E., & Farmer, J. (2017, April 5). Using technology to enhance group collaboration in blended
learning. Presented at Online Learning Consortium Innovate, New Orleans, LA.
What is Padlet? (n.d.). Padlet. Retrieved from https://padlet.com/support/whatispadlet
Zafonte, M., & Parks-Stamm, E. J. (2016). Effective instruction in APA Style in blended and face-to-face classrooms.
Scholarship of Teaching and Learning in Psychology, 2(3), 208.
PART 3
HOW SOCIAL MEDIA IS USED TO ENGAGE
STUDENTS IN LEARNING
CHAPTER 27
CLASS, PLEASE TAKE OUT YOUR
SMARTPHONE: USING PERSONAL
RESPONSE SYSTEMS TO INCREASE
STUDENT ENGAGEMENT AND
PERFORMANCE
LAUREN STUTTS
DAVIDSON COLLEGE
INTRODUCTION
Advances in technology have created a number of innovative ways to use it in the classroom.
Some individuals may argue that technology should not be integrated into the classroom because of its potential for
distraction; however, because technology is such an integral part of students’ lives today, it is worthwhile to
capitalize on it rather than avoid it. This chapter will specifically address the use of personal response systems (i.e.,
clickers) in the classroom. The effectiveness of personal response systems and why they work will be discussed.
Subsequently, I will describe how I use one personal response system, Socrative, in the classroom.
WHAT ARE PERSONAL RESPONSE SYSTEMS?
Personal response systems, or “clickers,” have commonly been used in the past decade as a method of interacting
electronically with instructors during class. These devices are typically small and simple to operate and have keys
for multiple-choice answers. Mobile phones can now easily be used as personal response systems instead of
stand-alone clickers. The instructor typically processes the responses from the clickers/mobile phones on an
individual computer using either special software or a specific website to manage the data.
Dallaire (2011) found that instructors most commonly use clickers in class to collect attendance, foster improved
discussion, administer quizzes, and assess student mastery of the material. In addition, clickers can help students
work through challenging questions. For example, students can provide their initial response to a challenging
question, then discuss in a small group which answer is correct and why, and then respond again via the clicker.
Another advantage of using personal response systems is the ability for students to respond anonymously, which
can provide a less pressured interaction with the instructor.
DO PERSONAL RESPONSE SYSTEMS WORK?
The effectiveness of clickers has been studied experimentally. For example, Morling, McAuliffe, Cohen, and
DiLorenzo (2008) conducted a randomized, controlled experiment in which they assigned two introductory
psychology classes to using clickers and two classes to not using clickers. They found that the classes that used
clickers had slightly higher grades than classes that did not use clickers. However, they did not find differences in
engagement in class due to clickers. That said, they explained that one of the limitations was that the clickers were
used minimally in class (i.e., at the beginning of class five questions were posed via clickers), so perhaps the
engagement outcome would be different if clickers had been used throughout the class. In another experimental
study, Stowell and Nelson (2007) assigned psychology classes to one of three conditions: a clicker class, a response
card polling class (i.e., students raised paper cards with their answers), and a hand-raising class (i.e., students simply
raised their hands for polling purposes). They found that individuals in the clicker group participated the most,
reported greater positive emotion, and were more honest in responses to in-class questions than the other two
groups.
In addition, Shapiro and Gordon (2012) found that in-class clicker questions improved performance on an exam by
10 to 13% in a psychology class. Dallaire (2011) indicated, however, that the performance benefit of clickers depends
on the amount of clicker use. Shapiro and Gordon (2012) found that too high or too low clicker use was
associated with lower grades, while moderate use (about four reported uses in class) was associated with higher
grades. Furthermore, Dallaire (2011) concluded that clickers were helpful for some students but not others. For
example, seniors reported fewer benefits of using clickers than non-seniors, and students who were
biology/neuroscience majors perceived fewer benefits than students who were psychology majors.
There are a number of disadvantages to using clickers. For example, Dallaire (2011) found three main problems with
using clickers in the classroom: 1) students would sometimes forget to bring the clicker to class, 2) the clicker would
malfunction at times, and 3) the students did not like having to pay for a clicker. Therefore, using students’ individual
mobile devices, such as smartphones and iPads, instead of clickers is a more recent way to incorporate response
systems in the classroom without the potential drawbacks Dallaire (2011) identified. This shift makes sense given
the increase in prevalence and accessibility of smart devices. In fact, polls indicate that as of 2016, 92% of 18- to
29-year-olds in the United States own a smartphone (Smith, 2017). Phones have advantages over personal
clickers in that instructors do not have to pass them out, students know how to use them, and they typically do
not cost students additional money.
Using mobile devices in the classroom has also been examined. For example, Voelkel and Bennett (2014) used the
Poll Everywhere system (http://www.polleverywhere.com) in introductory biology classes. This entailed having the
instructor set up poll questions in PowerPoint and then students would use their mobile phones to text their
responses. The class responses would then appear in real time in the PowerPoint. The instructors reported that it
was easier and faster to use this system compared to using traditional clickers. In addition, the students reported
that the polls made the class content more engaging and that they learned more. However, the students who did
not participate said that it was too expensive (e.g., some students had a phone plan that charged them for individual
text messages), that they preferred to just think about the question, or that they did not have their phone in class. The researchers
compared student performance on exams in these classes to the performance of students in these classes in the
previous two years and found that the students using the poll software had significantly higher exam grades than
the previous years (Voelkel & Bennett, 2014).
While the effectiveness of personal response systems has been demonstrated in single studies, Hunsu, Adesope,
and Bayly (2016) conducted a meta-analysis on the use of personal response systems in the classroom to determine
their overall effectiveness across multiple studies. They found that using personal response systems had a small,
positive effect on cognitive learning outcomes (e.g., grades, recall) in the classroom and a small to medium effect
on non-cognitive learning outcomes (e.g., engagement, self-efficacy).
Research has demonstrated that clickers and mobile phones are effective, but which is more effective? Stowell
(2015) examined the difference in using clickers versus using mobile phones in two psychology classes. He let the
students choose which version to use for the semester, and 58.1% of students chose to use a mobile phone.
However, grades in the courses did not differ based on whether the student chose to use a clicker or a mobile phone;
a higher percentage of correct responses to questions on either polling device was associated with higher exam
grades. Students also perceived both devices positively. The main problems cited by the students about the mobile
phone were difficulty connecting to the Internet at times and being distracted by their device. Overall, though,
Stowell (2015) concluded that clickers and mobile phones are likely similarly effective.
WHY DO PERSONAL RESPONSE SYSTEMS WORK?
Clickers have been shown to improve student engagement (Stowell & Nelson, 2007). Actively participating and
seeing the results in real-time allows students to be more present in the classroom and have a more unique
interaction with their peers and the instructor. In addition, Shapiro and Gordon (2012) found that clickers likely work
by enhancing students’ memory through increasing exposure to the material and helping students practice
answering questions.
Son and Rivas (2016) indicated that using analogical reasoning questions via personal response systems allows
students to transfer and apply knowledge from an existing domain to a new domain. They conducted a
quasi-experimental study in which one class was assigned to a “testing” group and another was assigned to a “notes”
group. Both groups used the clickers, but the testing group received analogical reasoning questions and the notes
group did not. They found that the testing group performed better on the final exam than the notes group, suggesting
that the analogical reasoning questions were helpful in generalizing knowledge.
INTRODUCTION TO A CLASSROOM MOBILE PHONE APPLICATION: SOCRATIVE
While there are a number of mobile phone applications to use in the classroom, this chapter will focus specifically
on Socrative. Socrative is an application that was designed to be used as a personal response system in the
classroom. The free version allows instructors to have one public room and to use the system with up to 50 students
per session. It is available on most computer platforms (e.g., Microsoft Windows, MacOS, Linux) and on most mobile
devices (iOS, Android, Kindle). The creators recommend using the most recent versions of Chrome, Safari, or Firefox
browsers to launch Socrative.
SETTING UP SOCRATIVE
Instructors set up accounts through the Socrative website (www.socrative.com) and can use the website or the
“Socrative Teacher” phone application to launch it. The instructor creates a “room number” that students use to log
in. Students only have to download the free “Socrative Student” phone application or go to the website to log in as
a student. They log in via the designated room number and then wait for the instructor to load a question. Instructors
can easily set up “quizzes” where they can add multiple choice questions, true/false questions, or short answer
questions ahead of time. They can also designate correct/incorrect answers. If instructors already have questions
set up in an Excel document, they can be imported into Socrative.
LAUNCHING PRE-MADE QUIZZES
When launching the quiz, instructors choose from allowing the students to receive instant feedback on whether or
not their answers were correct, allowing students to have open navigation where they can change answers and go
back and forth between questions, or allowing instructors to take control of when each question appears on the
screen. An example of what the student’s screen looks like when he or she answers questions incorrectly or correctly
is shown below.
In addition, instructors decide if they want to require names for the quizzes and whether they want the
questions/answers shuffled across students. Once the quiz begins, instructors receive feedback in real-time about
students’ responses. Usually instructors do not show this screen to students so as not to embarrass students who are
not performing well. Instructors can subsequently download the answers and scores into an Excel document and
use them as part of a class grade. An example is shown below.
Instructors can also disseminate quizzes through a “space race” where students are automatically randomized to a
team color, and if anyone in their team color gets a correct answer, a spaceship in that designated color moves
forward on the screen. This option can help increase engagement and inspire motivation in students to complete
questions correctly.
LAUNCHING QUICK QUESTIONS
Many times instructors will have a question that is not pre-made. They can use the “quick question” option
in such cases. When launching a quick question, instructors pick from multiple choice, true/false, or short answer.
On the students’ screens, they will see A through E options for multiple choice (see figure below), T/F, or a blank
window in which they can type an answer.
The instructor needs to identify for students what the question is and what the options are (typically done verbally,
on a whiteboard, or on a PowerPoint). The answers appear in real-time and are anonymous. Therefore, these
answers are typically shown to the students (See figure below).
LAUNCHING EXIT TICKETS
Instructors may want to launch an “exit ticket” at the end of the class period to assess how the overall class went.
There are three automatic questions that are asked when the exit ticket is launched: 1) How well did you understand
today’s material? (i.e., totally get it, pretty well, not very well, not at all), 2) What did you learn in today’s class?
(open-ended), and 3) Please answer the instructor’s question (open-ended). For the last question, the instructor
would designate a specific question based on the class material for the students to answer.
SOCRATIVE PRO
Socrative came out with a “Socrative PRO” option, which requires paying for a license. It allows instructors to have
up to 10 private or public rooms and allows for up to 150 students per session. In addition, it has more customizable
options as well as the ability to import class rosters into the system, among other perks.
USING SOCRATIVE IN A PSYCHOLOGY CLASSROOM
Socrative works well in introductory and intermediate courses that are too large for frequent individual participation
by every student. I use it in my courses that have 32 students but not in my smaller, discussion-based classes that
have only 12 students. It could easily work for any type of psychology course topic. In this section, I first describe
how I introduce Socrative to students and then describe the four main ways I use it in my classes.
INTRODUCING SOCRATIVE TO STUDENTS
On the first day of class, I make it clear to students that they will be using their smartphones in class as one method
of communicating with me. I also include information about it on my syllabus: “Technology etiquette: You are
encouraged to bring a Smart device with you for classroom activities. We will be using an application in which you
will submit attendance and answers to questions. However, your devices should be set to silent, and you are not
allowed to use your devices for activities unrelated to class.” I also ask students to let me know during the first week
of class if they do not own a smart device, so I can provide them with one. However, I have never had a student who did
not have access to any form of smart device (smartphone, iPad, and laptop devices all work). The first week of class,
I have students download the “Socrative Student” application for free on their smartphones or iPads. If they are
using a computer, then they go to this website: www.socrative.com. Subsequently, they log in using my last name as
the “room number.”
ATTENDANCE AND LATENESS
The first way I regularly use Socrative is to take attendance. As soon as students arrive in class, they are responsible
for logging into Socrative and entering their name. I also include a question asking whether or not they were late to class.
This information is stored in an Excel file, which I download after class to note which students were absent and which
students were late.
FORMATIVE ASSESSMENT
The second way I use Socrative is through formative assessment by using the quick question option. The first
question I ask at the beginning of every class is a review question from the previous class. I encourage students to
review their notes as they wait for class to begin in preparation for that question. In addition, I include formative
questions throughout the class to assess if they are learning the material. If a majority of the class misses a question,
then I spend more time reviewing that topic. I also review why certain answers are incorrect.
ANONYMOUS POLLS
The third way I use Socrative is by taking anonymous polls about personal topics through the quick question option.
For example, when I teach a class on child maltreatment, I poll the class to see how many students experienced
spanking as a punishment as a child. These results show up as aggregate data and do not identify any student.
Another example would be to ask an opinion question such as “Should adolescents be tried as adults for capital
offenses?” After seeing the distribution of responses, a good discussion often emerges from the students with
differing responses.
FORMAL SURVEYS/QUIZZES
Lastly, at several points throughout the semester, I administer pre-made surveys via Socrative. For example, the first
week of class, I have students answer questions about their demographics, major, need for accommodations, and
personal preferences in order to get to know them better. I also use surveys to collect feedback about their class
participation and group members’ participation. In some classes, I use weekly low-stakes quizzes where I prepare
five questions ahead of time about the material. The students receive feedback about whether or not they answered
each question correctly and are given a score at the end. I receive their total score as well as their responses to each
question in real time and can download the results into an Excel file.
ADDRESSING CHALLENGES
Technology can malfunction at times. For example, if the Internet or the Socrative server is down, then Socrative will
not launch. However, that has only occurred once in my four years of using this software. Sometimes a student’s
individual device is not working. In that case, or if a student forgets their device, I have one or two extra devices
on hand for them to use.
Furthermore, sometimes there are student problems with the devices. For students who become easily distracted
by their devices, I ask them to stay logged in to Socrative for the entire class and to put their phone upside down
when they are not responding to a poll question; this helps reduce the likelihood that they will be distracted by any
incoming messages. Students could technically also share answers if they showed their phones to their peers;
however, I have not noticed this to be a problem, especially since I use Socrative in a low-stakes capacity.
CONCLUSION
Overall, using technology in the classroom can be advantageous. As technology evolves, instructors will need to be
flexible in adapting to the changing technology and software. Future research should explore the optimal frequency
and length of time used with technology in the classroom and what activities and specific applications are most
expedient to promote student learning. In addition, finding out how to make technology work for all students is
another area that should be investigated. Ultimately, integrating technology in the classroom is highly valuable, as our
society will likely become increasingly dependent on it for work and daily functions.
REFERENCES
Dallaire, D. H. (2011). Effective use of personal response 'clicker' systems in psychology courses. Teaching of
Psychology, 38(3), 199-204. doi:10.1177/0098628311411898
Hunsu, N. J., Adesope, O., & Bayly, D. J. (2016). A meta-analysis of the effects of audience response systems (clicker-based technologies) on cognition and affect. Computers & Education, 94, 102-119. doi:10.1016/j.compedu.2015.11.013
Morling, B., McAuliffe, M., Cohen, L., & DiLorenzo, T. M. (2008). Efficacy of personal response systems ('clickers') in large, introductory psychology classes. Teaching of Psychology, 35(1), 45-50. doi:10.1080/00986280701818516
Shapiro, A. M., & Gordon, L. T. (2012). A controlled study of clicker-assisted memory enhancement in college
classrooms. Applied Cognitive Psychology, 26(4), 635-643. doi:10.1002/acp.2843
Smith, A. (2017). Record shares of Americans now own a smartphone, have home broadband. Retrieved from:
http://www.pewresearch.org/fact-tank/2017/01/12/evolution-of-technology/
Son, J. Y., & Rivas, M. J. (2016). Designing clicker questions to stimulate transfer. Scholarship of Teaching and
Learning in Psychology, 2(3), 193-207. doi:10.1037/stl0000065
Stowell, J. R. (2015). Use of clickers vs. mobile devices for classroom polling. Computers & Education, 82, 329-334.
doi:10.1016/j.compedu.2014.12.008
Stowell, J. R., & Nelson, J. M. (2007). Benefits of electronic audience response systems on student participation,
learning, and emotion. Teaching of Psychology, 34(4), 253-258. doi:10.1080/00986280701700391
Voelkel, S., & Bennett, D. (2014). New uses for a familiar technology: Introducing mobile phone polling in large classes. Innovations in Education and Teaching International, 51(1), 46-58. doi:10.1080/14703297.2013.770267
CHAPTER 28
A SURVEY OF EFFECTIVE BLOGGING PRACTICES
STEPHEN B. BLESSING, BETHANY FLECK, AND HEATHER D. HUSSEY
UNIVERSITY OF TAMPA, METROPOLITAN STATE UNIVERSITY OF DENVER, AND NORTH CENTRAL UNIVERSITY
INTRODUCTION
This chapter explores the use of web logs, more commonly known as blogs, in higher education. Blogs allow non-technical users to quickly and easily publish material on the World Wide Web (Ifinedo, 2017). Most blogs allow
readers to write comments, so that a discussion of the original post may ensue. Many course management systems
such as Blackboard, Canvas, and Moodle have a blogging module ready for use, and some online services offer
blogging services for free. They can be an excellent way to allow students to communicate with one another, their
instructor, and the larger community outside of the classroom. Blogging often consists of writing a short analysis on
a course concept or commenting on another student’s ideas. Blogging can also overlap social media functions, such
as sharing a link, dashing off a quick one-sentence thought, or posting a photo. These can help students understand
and absorb course material in a way that enhances what happens inside the classroom. We will consider the different
blog types, the theoretical basis for their use, and research examining their effectiveness in various courses that
inform best practices of blogging.
WHY USE A BLOG?
Gill (2006) lists four qualities that make blogs and other online discussion forums effective: efficiency, openness,
a sense of safety, and a spirit of collaboration. Efficiency, or ease of use, refers to how quickly and simply posts can be written.
Second, openness means that once students post their thoughts, the posts are immediately readable and open to comment by the instructor, other students,
and community members. Third, students have a sense of safety: a student might voice an idea or opinion
they would not in a classroom environment, because they feel the blog space is their own (Hew & Cheung, 2013;
Kim, 2008). Lastly, blogs promote a spirit of collaboration by allowing blog users (students, instructors, and others)
to build on everyone’s ideas to create a shared idea space. Deng and Yuen (2011) stressed the use of blogs as both
reflective and interactive discourse. They allow students to take the material they learned in the course, reflect on
it, and produce their own material.
Instructors and researchers can use blogs to foster a constructivist learning environment, where students are actively
engaged in constructing knowledge instead of merely receiving it from the instructor (e.g., Kuo, Belland, & Kuo,
2017; Lee & Bonk, 2016; Miyazoe & Anderson, 2010; Osman & Koh, 2013). These technologies can promote Socratic
questioning, peer review, and self-reflection. Other researchers have noted that blogs encourage critical thinking
(e.g., Ellison & Wu, 2008; Oravec, 2002). Oravec (2002) discussed how traditional blog entries usually require
students to transform the content into their own words versus merely repeating course concepts. In short, these
positive features of blogs provide a space for student meaning-making on their own terms using these constructivist
principles.
TRADITIONAL BLOGS
In a traditional blog, posts can be several paragraphs long, with entries usually arranged in reverse chronological
order (Halic, Lee, Paulus, & Spence, 2010). Previous posts are commonly retained. Most blogging software lets
readers comment on an entry, allowing a threaded conversation to follow the initial post. Within the
space of traditional blogs, two main types exist: group and individual. In a group blog, all students are responsible
for providing and keeping up the content of the blog. Conversely, with the use of individual blogs, each student
maintains their own blog space.
Although research examining specific learning outcomes from student blog writing is lacking (Hew & Cheung, 2013;
Sim & Hew, 2010), there is research and scholarly literature focused on student attitudes towards blogs. Research
has found that students like and appreciate the use of blogs in a course, though as with any intervention there are
pluses and minuses (e.g., Halic et al., 2010; Kuo et al., 2017; Sullivan & Longnecker, 2014; Top, 2012). In a typical
finding, Top (2012) noted that undergraduate students held significantly positive attitudes about the collaborative
and perceived learning potential of the blog assignment. These students used blogs to assist in a larger group project
they had been assigned. Sullivan and Longnecker (2014) had students work on a group blog, and found students felt
they communicated more with their fellow students on an intellectual level, and believed they were motivated to
write better because their peers would see their work. In this particular study, students wrote blog posts throughout
the semester on scholarly articles they had read for class, and commented on a certain number of other students’
blog posts. However, some researchers have found students are less interested in maintaining shared blogs in
comparison to individual blogs, perhaps due to the issue of ownership (Kim, 2008). Not surprisingly, there is a
diffusion of responsibility with shared blogs, and students feel less responsible for maintaining their content. Walatka
(2012) used a hybrid model, where students started work on a class blog (the “hub”) and then as the semester
progressed moved toward writing individual blogs (the “spokes”). He found that this model had a significant and
positive impact on how both students and the instructor approached the class, because students were better
prepared for class, allowing for deeper discussions to happen both inside and outside of class time.
It is also worth noting that the scholarly literature suggests that students believe individual and group blogging
assignments are useful, but do not like some aspects of them (e.g., Deng & Yuen, 2013; Duarte, 2015; Ellison & Wu,
2008; Halic et al., 2010; Samuel-Azran & Ravid, 2016). This finding appears to hold for both group blogs (Duarte,
2015) and individual blogs (Deng & Yuen, 2013). For example, Deng and Yuen (2013) found that students liked to
read other students’ blog posts, but thought less of commenting on other students’ blog posts and receiving
comments back on their own posts. Blogging enabled a feeling of increased social connectivity, allowing students to connect with each other through their blog writing, but did not enable social interactivity to the same extent. Deng and
Yuen concluded that blogs might fill a role between formal written assignments and casual written communication.
Indeed, in many studies students enjoyed the less formal nature of writing typical of blogs and felt that it improved their writing (e.g., Sullivan & Longnecker, 2014).
Related to social connectivity, researchers have examined how using blogs over the course of a semester changed
who communicated with whom within the course. Lee and Bonk (2016) developed network graphs of how students
knew each other at the beginning and the end of the semester. The number of both incoming and outgoing
connections for each student increased during that time frame, showing that indeed students increased in their
connectivity. Students reported they felt closer to their peers at the end of the course.
Researchers have also examined the content of the blog posts and found that students put effort into the posts,
fulfilling the constructivist goal of a meaning-making exercise where students internalize course content. For
example, Dos and Demir (2013) found students wrote in their own words and constructed their own knowledge
versus merely parroting back course content. Miyazoe and Anderson (2010) discovered that vocabulary use among
their English as a second language students became richer in blog posts than it was in forum posts, indicating that
students realized the difference in intent between these two types of writing and adjusted appropriately. Lastly,
Osman and Koh (2013) performed a content analysis of their students’ blog posts. They noted high levels of critical
thinking, including the ability to link course concepts to real-life examples.
MICRO-BLOGS
In contrast to a traditional blog, a micro-blog is very short in length. Users post small pieces of digital content in the
form of text, photographs, short videos, or links (Educause, 2009). Other users can add comments below the original
post, but those comments are also limited in length. The best-known micro-blog platform is Twitter, with
approximately 68 million users (Fiegerman, 2017). It allows users 280 characters of text per message. Messages
appear on users’ profiles and in the Twitter feeds of their followers. Other micro-blogs include Posterous, FriendFeed,
Tumblr, Dailybooth, 12 seconds, Plurk, and Jaiku (Educause, 2009). While less popular for educational purposes
compared to Twitter, the platforms mentioned above offer interesting variations in user experience. For example,
on Tumblr users can post text as well as other media, and Plurk supports threaded conversations where extended discussions
can occur.
Should student preferences affect what social media, particularly micro-blogs, are used in classrooms? Although
student preferences do not always correlate with learning outcomes, they can still be useful in thinking intentionally
about the teaching methods we choose to employ. This is especially true when we embark on new techniques like
micro-blogging that hold the possibility of crossing boundaries into personal space in the way social media
technology does (Gikas & Grant, 2013; Roblyer, McDaniel, Webb, Herman, & Witty, 2010). What we notice is that
student views are always changing, most likely because the technology itself is changing, as is people’s comfort in
using it. Within our own research (Fleck & Hussey, 2017; Fleck, Richmond, & Hussey, 2013) we have observed this
change in student preferences. In these studies, students were asked about the appropriateness and
implementation of YouTube, Facebook, and Twitter in the classroom. Overall, the percent of students who had been
asked to use social media in the classroom significantly decreased from 56% in 2012 to 42% in 2017. In 2012, when
asked to use social media in class, 43.7% indicated they were excited, yet when asked the same question in 2017,
only 29.8% responded that they were excited. Focusing on the micro-blog platform Twitter, perceptions of its appropriateness for
classes increased slightly from 29.2% in 2012 to 30.4% in 2017. Of course, this means that in 2017, 69.9% still viewed
Twitter as not appropriate. In 2012, 7% of students had been asked to use Twitter for a class, and in 2017 this held
steady at 8%.
From the survey numbers reported above, it seems that positive perceptions and use of micro-blogs are declining
(Fleck & Hussey, 2017; Fleck, Richmond, & Hussey, 2013). However, others have not found this to be true. Neier and
Zayer (2015) found that students were open to using social media in education, though they did not examine blogging specifically.
Students reported being motivated by its possible uses for interaction and presenting information. Communication and collaboration
were themes present as potential uses in Fleck and Hussey’s (2017) data as well as in the work of others (see also Johnson, 2011;
Lowe & Laffey, 2011; Wright, 2010).
Micro-blogs have been used in different ways in education. Twitter can be used as a backchannel forum where
students ask questions during a live class (Educause, 2009). A backchannel is a private online space where
conversations can occur while another activity is going on (Byrne, n.d.). For example, students could tweet questions
or thoughts live during a class period. However, this requires a teaching assistant to monitor the Twitter
feed, especially because inappropriate messages or other objectionable material would need to be deleted right away.
More often, micro-blogs are used in an asynchronous way, such as sending out reminders of course due dates,
posting notifications of class cancellations, or fostering collaboration among students (Educause, 2009). Tarantino, McDonough, and
Hua (2013) presented a strong case that social media platforms, such as blogs, help foster collaborative and social
learning as well as create a virtual community of learners that helps students feel more connected with peers
(Jackson, 2011; Tomai et al., 2010). Tarantino and colleagues argued that the virtually created community
contributes to more discussion and, when coupled with critical analysis of course content, seems to increase
perceived learning.
Teams of students can also use micro-blogs. Students can share multiple points of view, send virtual, motivating
messages to each other (Educause, 2009), or can link each other to relevant research from the web. In this way,
micro-blogging platforms are used to help students co-create and discuss information (Tarantino et al., 2013). Junco,
Elavsky, and Heiberger (2013) mapped Twitter assignments onto Chickering and Gamson’s (1989) principles of good
practice in the education of undergraduates. They incorporated Twitter into their course by using it for low-stress
question asking, book discussions, campus event reminders, class reminders, organizing service learning projects,
organizing study groups, academic and personal support, optional assignments, and facilitating class discussions.
After studying their approach, the authors found that faculty participation and having a strong theoretical
foundation for use were important components for improved outcomes.
The growing empirical literature related to the impacts of micro-blogging on student learning is promising. Junco,
Heiberger, and Loken (2011) found that students who used Twitter increased their engagement, from the start of
the class to the end, significantly more than the control group. Engagement is noteworthy because it is a predictor
of learning (Kuh, 2002, 2009). Overall GPA was also higher for the Twitter students, controlling for preexisting
academic ability. Blessing, Blessing, and Fleck (2012) also examined the effect of Twitter use on learning outcomes. All students
used Twitter, though only half received tweets relating directly to course content. The other half received an equal
number of humorous tweets as a control. The group receiving content-focused tweets performed better on test
items relating to those tweets. The authors concluded that the tweets allowed students to reflect on class material
for a few additional moments outside of class, creating a basic cued recall task that appeared to be beneficial.
PHOTOBLOGS
Depending on pedagogical purposes, instructors might consider the use of photoblogs to achieve positive student
learning outcomes. Photoblogs are like blogs, but the main form of communication is through photos or image-based media and minimal text (Cohen, 2005). There are several online tools, including popular photography websites
and portfolios (e.g., Photoshelter, Zenfolio, FolioHD) as well as social media sites that focus on image-based
communication (e.g., Pinterest). Such platforms offer a new medium for instructors to create visual presentations
of course content as well as for students to apply content knowledge to their surroundings and life’s moments.
Further, the use of social media can be used to engage students outside the classroom and build course community
(Al-Bahrani & Patel, 2015; Phua, Jin, & Kim, 2017). However, it will be important for the teacher to set clear guidelines
on how the tool is to be used, monitor appropriate usage, and have an action plan in place should the guidelines be
violated.
Three popular photoblogs are Snapchat, Instagram, and Flickr. Snapchat is an online instant messaging application
that allows users to send videos or pictures, called “snaps” to other users. Tools exist within the app allowing users
to include text, emojis (small images used to express emotion, such as a happy face), and stickers on their photos.
Snaps can be shared with selected individuals or to one’s own “story,” but are deleted from the Snapchat interface
after a certain length of time, varying from seconds to hours depending on the type of snap. Users can record snaps
using other tools (e.g., taking a screenshot of a snap), resulting in a potentially permanent and sharable record
(Bayer, Ellison, Schoenebeck, & Falk, 2016). Instagram and Twitter are very similar in terms of operational structure
and functionality, with the main difference being the focus of image sharing on Instagram (Al-Bahrani & Patel, 2015).
Like other image-sharing platforms, Instagram provides users with several tools to enhance or alter their photos
prior to posting (Al-Ali, 2014). Lastly, Flickr focuses on image management and organization, allowing users to go
beyond albums and create their own image groups based on digital metadata. In addition, this application is owned
by Yahoo! and has access to some additional features that some image-sharing platforms do not, such as geographic
tagging with Yahoo! Maps (Campbell, 2007).
Although photoblogging has quickly risen in popularity among social media users, the empirical research around its
use is still growing (Al-Ali, 2014; Grieve, 2017). The descriptive scholarly literature appears to be in consensus that
users of image-based sharing platforms tend to use these applications as a form of communication through pictures
(Bayer et al., 2016; Grieve, 2017). Campbell (2007) highlights how these visual communication applications help
break down language barriers for students with English as a second language. Further, educators highlight their
importance in better meeting the learning needs of younger generations of college students, who are more visual and want
to be involved in the creation of knowledge (Bussert, Brown, & Armstrong, 2008).
Like other social media used for communication outside of class, image-based communication platforms offer
students who might not be comfortable speaking in class an opportunity to share their thoughts and ideas with their peers
(Punyanunt-Carter, De La Cruz, & Wrench, 2017). Pittman and Reich (2016) found this use of image-based communication was related to decreased loneliness in students, in comparison to text-only communication.
Other researchers have found Snapchat interactions were related to more positive moods versus other forms of
communication (Bayer et al., 2016).
Bussert and colleagues (2008) had students create a Flickr photo stream as a project demonstrating information
literacy concepts. Although no direct learning outcomes were analyzed, students reported that the use of Flickr helped
them build community and get to know their peers, and that it was a fun way to learn course content. Others have
found Instagram use increased teacher perceptions of student learning and production of quality work (Al-Ali, 2014).
In addition, image-based communication tends to be related to decreased misinterpretation of messages
(Punyanunt-Carter et al., 2017). Students have also reported that photoblogs have helped them learn how others
perceive the world (Fleischmann, 2014; Guertin & Young, 2009).
Image-based applications might also increase comprehension, recall, and accessibility of certain curriculum
components (Paige et al., 2017). Although the use of photoblogs did not increase content knowledge, Castañeda
(2011) found they aided students in the recognition of key concepts in the course. However, it should be noted that
this treatment group also included the use of videoblogs. Additional research is warranted to determine the isolated
impact of incorporating photoblogs as a learning tool.
In terms of choosing a photoblogging platform, instructors might let students choose the image-sharing platform
that best suits their learning and mobile application preferences. For example, Castañeda (2011) let students self-select whether they would be in a blogging or wiki project group based on whether they preferred to work
individually or in teams. Al-Ali (2014) had students share a photoblog using either Twitter, Tumblr, or Instagram,
with all students choosing Instagram. Campbell (2007) preferred the use of Flickr, providing several helpful
suggestions for teaching and learning opportunities using it. Regardless of the image-sharing platform, photoblogs
appear to be a useful tool for engaging students in course content, getting them involved in knowledge creation,
and breaking down communication barriers (Campbell, 2007; Paige et al., 2017; Punyanunt-Carter et al., 2017).
INCORPORATING BLOGS INTO YOUR COURSE
What should you consider if you decide to utilize some type of blog in your classroom? The first point we want to
emphasize is to simply apply your already existing teaching practices to blogging. For example, we know that great
teachers give clear feedback and do so often and early (Chickering & Ehrmann, 1996). As a very practical
consideration, when giving feedback on a blog, make sure you are not violating Family Educational Rights and Privacy
Act (FERPA) regulations (Educause, 2017; Rafferty, 2017). Consider whether the feedback should exist on
the blog at all or should be delivered via a private email or on the official learning management system. Another
example is that great teachers provide clear directions for assignments with well-defined expectations for success
and that they provide many opportunities for students to practice what they have learned (Byrnes, 2008). Apply
these principles to your blogging assignments.
Our next suggestion is to think about the authenticity of blogging. Is using this technology true to who you are as a
teacher, scholar, and social person? Although blogging is fun and interesting, it might not be for everyone. If you are
not genuinely interested in it as a learning tool or method, you should not use it. Students can tell when something
is unfamiliar to you or when you do not buy into it yourself, and blogging will then become an unneeded stress for everyone.
Great teachers have a plan and purpose for what they do (Whitaker, 2004). We recommend that you only adopt
technology into your teaching that resonates with you. This will allow your students a real and genuine experience.
If it does resonate with you, then implement it in a deliberate and intentional way.
What are the considerations that are unique to blogging? Blogging gives students hands-on learning experiences
through creating pages, posts, or videos (Shatto & Erwin, 2016). Furthermore, it encourages collaboration (Tarantino et
al., 2013) and promotes inclusivity (Shatto & Erwin, 2016). Make sure your blogging assignments reflect these facets
by including group work or peer commenting, as well as guidelines that foster a safe environment for discussion.
The group work could even focus on varying viewpoints to reinforce this (Shatto & Erwin, 2016).
Another consideration is to be mindful of your institution’s official policy on digital media. If there is not a policy,
you should work to create one with your department. The policy should be linked in both your syllabus and the
blogging assignment itself so that students are fully informed (Rafferty, 2017). It is also very important that your
assignments follow the policy and that the policy itself is compliant with all Family Educational Rights and Privacy
Act (FERPA) regulations (Educause, 2017). Privacy can be an issue when using blogging platforms. Ways to
address this include using a work specific account rather than your personal account as well as using aliases or
pseudonyms whenever possible to protect identities (Educause, 2017; Rafferty, 2017). You can also use your blog
assignment to teach students how to be digitally responsible (Rafferty, 2017). Not all students will know what is
and is not appropriate to post, or understand issues regarding plagiarism. Digital media responsibilities include not posting
material without proper publishing rights and understanding the consequences that can arise when you publish
something not representative of yourself (Educause, 2017). Instructors should consider a clear statement in their
syllabus outlining their expectations for use of technology and the consequences of misuse. However, keep in mind
that these could be teachable moments, and coming down too hard on students might not be appropriate. Such
lessons will serve students well in their future careers and personal lives. However, students have not been the only
ones to inappropriately use social media. More and more universities and departments are adopting social media
policies to govern faculty, staff, and student appropriateness of posts (Pomerantz, Hank, & Sugimoto, 2015).
Therefore, it is important that faculty ensure they are following all department and institutional policies when
implementing blogs in their courses.
Junco and colleagues (2013) recommend that instructors require students to use blogs rather than offering them as an
optional assignment; their recommendations stem specifically from their use of Twitter. Students who were required to use the platform experienced
greater engagement and academic benefits than students who were not. Second, they recommend that blogging
platforms only be used in educationally relevant ways. They found having a theoretical framework maximized its
usefulness. Lastly, in their use of Twitter, they found strong instructor engagement on the platform positively
impacted student outcomes.
Finally, we ask you to consider the amount of time you are requiring students to spend blogging. Average screen
time for traditional-age female college students is 10 hours a day, while males average almost 8 hours (Roberts, Petnji
Yaya, & Manolis, 2014). Should we meet students where they already are (i.e., in the digital world on their phones)
or should we encourage other academic venues for gathering, interpreting, and reporting on information? Some
scholars suggest that mobile technology should be used whenever possible in the implementation of social media
platforms such as blogging (Shatto & Erwin, 2016). This includes reading on tablets or smart phones or reinforcing
concepts with videos on YouTube (Shatto & Erwin, 2016). Others have found that students are reluctant to use their
devices for educational purposes. Gikas and Grant (2013) studied cell phone use in class for social media purposes
such as Twitter. While the students considered mobile devices helpful, they were frustrated by device challenges,
professors who were anti-technology, and the problem that devices can be a major distraction. There are some tools
that can help with the latter. Focus Booster, Rescue Time, and Cold Turkey are applications that can help people stay
focused on tasks by monitoring and tracking online use and even blocking out distractions. Each has its own
functionality usually involving downloading an application or signing up on their website to activate their specific
services.
CONCLUSION
Blogging can provide a rich educational experience for students, but instructors need to be mindful of these different
considerations. As illustrated above, blogging can take many different styles, from longer pieces of discourse, to
short tweets, and even to photographs. We encourage instructors to consider how best to authentically incorporate
these kinds of experiences into their courses. As each of these uses can promote constructivist learning principles,
blogs can be a good pedagogical tool to deploy under the right circumstances in any class.
REFERENCES
Al-Ali, S. (2014). Embracing the selfie craze: Exploring the possible use of Instagram as a language mlearning tool.
Issues and Trends in Educational Technology, 2(2), 1-16. doi:10.2458/azu_itet_v2i2_ai-ali
Al-Bahrani, A., & Patel, D. (2015). Incorporating Twitter, Instagram, and Facebook in economics classrooms. The
Journal of Economic Education, 46(1), 56-67. doi:10.1080/00220485.2014.978922
Bayer, J. B., Ellison, N. B., Schoenebeck, S. Y., & Falk, E. B. (2016). Sharing the small moments: Ephemeral social interaction on Snapchat. Information, Communication & Society, 19(7), 956-977. doi:10.1080/1369118X.2015.1084349
Blessing, S. B., Blessing, J. S., & Fleck, B. K. B. (2012). Using Twitter to reinforce classroom concepts. Teaching of
Psychology, 39, 268-271. doi:10.1177/0098628312461484
Bussert, K., Brown, N. E., & Armstrong, A. H. (2008). IL 2.0 at the American University in Cairo. Internet Reference
Services Quarterly, 13(1), 1-13. doi:10.1300/J136v13n01_01
Byrne, J. (n.d.). Backchannels. Retrieved from: https://sites.google.com/site/richardbyrnepdsite/backchannels
Byrnes, J. P. (2008). Cognitive development and learning in instructional contexts (3rd ed.). Boston, MA: Allyn & Bacon.
Campbell, A. (2007). Motivating language learners with Flickr. The Electronic Journal for English as a Second
Language, 11(2), 1-17.
Castañeda, D. A. (2011). The effects of instruction enhanced by video/photo blogs and wikis on learning the
distinctions of the Spanish preterite and imperfect. Foreign Language Annals, 44(4), 692-711.
doi:10.1111/j.1944-9720.2011.01157.x
Chickering, A. W., & Ehrmann, S. C. (1996). Implementing the seven principles: Technology as lever. AAHE Bulletin,
49(2), 3–6.
Chickering, A. W., & Gamson, Z. F. (1989). Seven principles for good practice in undergraduate education.
Biochemical Education, 17(3), 140–141. doi:10.1016/0307-4412(89)90094-0
Cohen, K. R. (2005). What does the photoblog want? Media, Culture & Society, 27(6), 883-901.
doi:10.1177/0163443705057675
Deng, L., & Yuen, A. H. (2011). Towards a framework for educational affordances of blogs. Computers & Education,
56(2), 441-451. doi:10.1016/j.compedu.2010.09.005
Dos, B., & Demir, S. (2013). The analysis of the blogs created in a blended course through the reflective thinking
perspective. Educational Sciences: Theory and Practice, 13(2), 1335-1344.
Duarte, P. (2015). The use of a group blog to actively support learning activities. Active Learning in Higher Education,
16(2), 103-117. doi:10.1177/1469787415574051
Educause Learning Initiative. (2009). 7 things you should know about micro-blogging. Retrieved from:
https://library.educause.edu/resources/2009/7/7-things-you-should-know-about-microblogging
Educause Learning Initiative. (2017). Is your use of social media FERPA compliant? Retrieved from:
http://www.educause.edu/ero/article/your-use-social-media-ferpa-compliant
Ellison, N. B., & Wu, Y. (2008). Blogging in the classroom: A preliminary exploration of student attitudes and impact
on comprehension. Journal of Educational Multimedia and Hypermedia, 17(1), 99.
Fiegerman, S. (2017, July 27). Twitter is now losing users in the U.S. Retrieved from:
http://money.cnn.com/2017/07/27/technology/business/twitter-earnings/index.html
Fleck, B., & Hussey, H. (2017, May). What’s trending? Social media in the college classroom, 2017. Invited Talk at the
Association for Psychological Science 29th Annual Convention (APS), Boston, MA.
Fleck, B. K. B., Richmond, A. S., & Hussey, H. D. (2013). Using social media to enhance instruction in higher education.
In S. Keengwe (Ed.), Research perspectives and best practices in educational technology integration.
(pp.217-241). Hershey, PA: IGI Global publication.
Fleischmann, K. (2014) Collaboration through Flickr & Skype: Can Web 2.0 technology substitute the traditional
design studio in higher design education? Contemporary Educational Technology, 5(1), 39-52.
Gikas, J., & Grant, M. M. (2013). Mobile computing devices in higher education: Student perspectives on learning
with cellphone, smartphones & social media. Internet and Higher Education, 19, 18-26.
doi:10.1016/j.iheduc.2013.06.002
Gill, G. (2006). Asynchronous discussion groups: A use-based taxonomy with examples. Journal of Information
Systems Education, 17(4), 373-383.
Grieve, R. (2017). Unpacking the characteristics of Snapchat users: A preliminary investigation and an agenda for
future research. Computers in Human Behavior, 74, 130-138. doi:10.1016/j.chb.2017.04.032
Guertin, L. A., & Young, C. L. (2009). Using Flickr to connect a multi-campus honors community. Journal of The
National Collegiate Honors Council, 10(2), 57-60.
Halic, O., Lee, D., Paulus, T., & Spence, M. (2010). To blog or not to blog: Student perceptions of blog effectiveness
for learning in a college-level course. The Internet and higher education, 13(4), 206-213.
doi:10.1016/j.iheduc.2010.04.001
Hew, K. F., & Cheung, W. S. (2013). Use of Web 2.0 technologies in K-12 and higher education: The search for
evidence-based practice. Educational Research Review, 9, 47-64. doi:10.1016/j.edurev.2012.08.001
Ifinedo, P. (2017). Examining students' intention to continue using blogs for learning: Perspectives from technology
acceptance, motivational, and social-cognitive frameworks. Computers in Human Behavior, 72, 189-199.
doi:10.1016/j.chb.2016.12.049
Jackson, C. (2011). Your students love social media... and so can you. Teaching Tolerance, 39, 38-41. Retrieved from:
http://www.tolerance.org/magazine/number-39-spring-2011/your-students-love-social-media-and-socan-you
Johnson, K. A. (2011). The effect of Twitter posts on students’ perceptions of instructor credibility. Learning, Media,
and Technology, 36(1), 21-38. doi:10.1080/17439884.2010.534798
Junco, R., Elavsky, M. C. & Heiberger, G. (2013). Putting Twitter to the test: Assessing outcomes for student
collaboration, engagement and success. British Journal of Educational Technology, 44(2), 273-287.
doi:10.1111/j.1467-8535.2012.01284.x
Junco, R., Heibergert, G., & Loken, E. (2011). The effect of Twitter on college student engagement and grades. Journal
of Computer Assisted Learning, 27, 119-132. doi:10.1111/j.1365-2729.2010.00387.x
Kim, H. N. (2008). The phenomenon of blogs and theoretical model of blog use in educational contexts. Computers
& Education, 51(3), 1342-1352. doi:10.1016/j.compedu.2007.12.005
Kuh, G. D. (2002). The national survey of student engagement: conceptual framework and overview of psychometric
properties. Bloomington: Center for Postsecondary Research, Indiana University. Retrieved from
http://nsse.iub.edu/pdf/psychometric_framework_2002.pdf
Kuh, G. D. (2009). What student affairs professionals need to know about student engagement. Journal of College
Student Development, 50(6), 683–706. doi:10.1353/csd.0.0099
Kuo, Y.-C., Belland, B. R., & Kuo, Y.-T. (2017). Learning through blogging: Students’ perspectives in collaborative
blog-enhanced learning communities. Educational Technology & Society, 20(2), 37–50.
Lee, J., & Bonk, C. J. (2016). Social network analysis of peer relationships and online interactions in a blended class
using blogs. The Internet and Higher Education, 28, 35-44. doi:10.1016/j.iheduc.2015.09.001
Lowe, B., & Laffey, D. (2011). Is Twitter for the birds? Using Twitter to enhance student learning in a marketing
course. Journal of Marketing Education, 33(2), 183-192. doi:10.1177/0273475311410851
Miyazoe, T., & Anderson, T. (2010). Learning outcomes and students' perceptions of online writing: Simultaneous
implementation of a forum, blog, and wiki in an EFL blended learning setting. System, 38(2), 185-199.
doi:10.1016/j.system.2010.03.006
Neier, S., & Tuncay Zayer, L. (2015). Students’ perceptions and experiences of social media in higher education.
Journal of Marketing Education, 37(3), 133-143. doi:10.1177/0273475315583748
Oravec, J. A. (2002). Bookmarking the world: Weblog applications in education. Journal of Adolescent & Adult
Literacy, 45, 616-621.
Osman, G., & Koh, J. H. L. (2013). Understanding management students' reflective practice through blogging. The
Internet and Higher Education, 16, 23-31. doi:10.1016/j.iheduc.2012.07.001
Paige, S. R., Stellefson, M., Chaney, B. H., Chaney, D. J., Alber, J. M., Chappell, C., & Barry, A. E. (2017). Examining the
relationship between online social capital and ehealth literacy: Implications for Instagram use for chronic
disease prevention among college students. American Journal of Health Education, 48(4), 264-277.
doi:10.1080/19325037.2017.1316693
Phua, J., Jin, S. V., & Kim, J. J. (2017). Uses and gratifications of social networking sites for bridging and bonding social
capital: A comparison of Facebook, Twitter, Instagram, and Snapchat. Computers in Human Behavior, 72,
115-122. doi:10.1016/j.chb.2017.02.041
Pittman, M., & Reich, B. (2016). Social media and loneliness: Why an Instagram picture may be worth a more than a
thousand Twitter words. Computers in Human Behavior, 62, 155-167. doi:10.1016/j.chb.2016.03.084
Pomerantz, J., Hank, C., & Sugimoto, C. R. (2015). The state of social media policies in higher education. PLoS ONE,
10(5): e0127485. doi: 10.1371/journal.pone.0127485
Punyanunt-Carter, N. M., De La Cruz, J. J., & Wrench, J. S. (2017). Investigating the relationships among college
students' satisfaction, addiction, needs, communication apprehension, motives, and uses & gratifications
with Snapchat. Computers in Human Behavior, 75, 870-875. doi:10.1016/j.chb.2017.06.034
Rafferty, J. (2017, April 10). 8 things you should know before using social media in your course. Online Learning
Consortium Blog. Retrieved from: https://onlinelearningconsortium.org/8-things-you-should-knowbefore-using-social-media-in-your-course
Roberts, J. A., Petnji Yaya, L. H., & Manolis, C. (2014). The invisible addiction: Cell-phone activities and addiction
among male and female college students. Journal of Behavioral Addictions, 3(4), 254-265.
doi:10.1556/JBA.3.2014.015
Roblyer, M. D., McDaniel, M., Webb, M., Herman, J., & Witty, J. V. (2010). Findings on Facebook in higher education:
A comparison of college faculty and student uses and perceptions of social networking sites. Internet and
Higher Education, 13, 134-140. http://dx.doi.org/10.1016/j.iheduc.2010.03.002
Samuel-Azran, T., & Ravid, G. (2016). Can blogging increase extroverts’ satisfaction in the classroom? Lessons from
multiple case studies. Interactive Learning Environments, 24(6), 1097-1108. doi:10.1080/10494820.2014.961483
Shatto, B., & Erwin, K. (2016). Teaching millennials and generation z: Bridging the generational divide. Creative
Nursing, 23(1), 24-28. doi:10.1891/1078-4535.23.1.24
Sim, J. W. S., & Hew, K. F. (2010). The use of weblogs in higher education settings: A review of empirical research.
Educational Research Review, 5(2), 151-163. doi:10.1016/j.edurev.2010.01.001
Sullivan, M., & Longnecker, N. (2014). Class blogs as a teaching tool to promote writing and student interaction.
Australasian Journal of Educational Technology, 30(4). doi:10.14742/ajet.322
Tarantino, K., McDonough, J., & Hua, M. (2013). Effects of student engagement with social media on student
learning: A review of literature. The Journal of Technology in Student Affairs. 42. Retrieved from
http://studentaffairs.com/ejournal/Summer_2013/EffectsOfStudentEngagementWithSocialMedia.html
Tomai, M., Rosa, V., Mebane, M. E., D’Acunti, A., Benedetti, M., & Francescato, D. (2010). Virtual communities in
schools as tools to promote social capital with high school students. Computers & Education, 54, 265-274.
doi:10.1016/j.compedu.2009.08.009
Top, E. (2012). Blogging as a social medium in undergraduate courses: Sense of community best predictor of
perceived learning. The Internet and Higher Education, 15(1), 24-28. doi:10.1016/j.iheduc.2011.02.001
Walatka, T. (2012). Hub-and-spoke student blogging and advantages for classroom discussion. Teaching Theology &
Religion, 15(4), 372-383. doi:10.1111/j.1467-9647.2012.00830.x
Whitaker, T. (2004). What great teachers do differently: 14 things that matter most. New York, NY: Routledge.
Wright, N. (2010). Twittering in teacher education: Reflecting on practicum experiences. Open Learning, 25(3),
259-265. doi:10.1080/02680513.2010.512102
CHAPTER 29
#WORTHIT? INTEGRATING TWITTER INTO INTRODUCTORY PSYCHOLOGY CURRICULUM
SCOTT P. KING AND MARK CHAN
SHENANDOAH UNIVERSITY
INTRODUCTION
Founded in 2006, the social networking service (SNS) known as Twitter is now ubiquitous in the developed world.
The site allows registered users to post short (280-character maximum) messages (“tweets”), which are then
available for other users (depending on privacy settings) to read, reply to, or retweet. Users can direct their tweets
toward other users by putting an “@” with their target’s username, or allow their tweets to be thematically archived
by adding a “#” to the topic about which they are tweeting. As of the writing of this chapter, there were 328 million
monthly active users of Twitter worldwide, 82 percent of whom used it via mobile phones (Twitter, 2017). Its cultural
impact is widespread, as social commentators credit Twitter for energizing social movements ranging from the Arab
Spring and Black Lives Matter to the 2016 election of Donald Trump, who has continued as U.S. President to use
Twitter as his primary mode of communication (Luckerson, 2017).
Although not as popular as Facebook or Instagram among university students (Pew Research Center, 2016), Twitter,
as a combination of a microblogging site with a social networking service (Junco, Heiberger, & Loken, 2011), is
better suited for use in higher education than other popular SNS sites because of its ease of use, topical
searchability, and self-archiving features. The first author of this chapter witnessed both Twitter’s popularity among students and its
self-archiving features. The first author of this chapter witnessed both Twitter’s popularity among students and its
potential to be harnessed as a teaching tool during one of his classes in the early 2010s. A handful of students in his
Social Psychology course used Twitter to communicate with each other during class, violating course policies about
accessing social media, but at the same time providing insight into their opinions about course content and the
instructor’s teaching methods. Upon discovering the students’ in-class tweets about the course, the instructor gave
them the option of writing a short review of research on the use of Twitter in the classroom instead of being
penalized class participation points. The paucity of experimental studies on the effects of Twitter on student
engagement in higher education at that time is what led to the study described in this chapter.
Fortunately, since that chance encounter with students Tweeting during class, experimental research on the use of
Twitter in the classroom has grown, although it still lags behind studies about Facebook in educational settings (Fleck,
Richmond, & Hussey, 2013; Tang & Hew, 2017). In this chapter, we will review such research, and introduce our own
experiment performed in an introductory psychology course. While not an exhaustive or meta-analytical review, we
hope to provide the reader with a context justifying our attempt to add to extant literature, and a rationale for our
experiment’s design.
In quasi-experimental comparisons of integrating Twitter with coursework and other methods (including traditional
teaching methods or other SNSs), results have been mixed. Junco and colleagues (Junco et al., 2011; Junco, Elavsky,
& Heiberger, 2013) found positive results when comparing first-year health professions seminar class sections
requiring Twitter with sections without a Twitter requirement, showing more student engagement and higher GPA
for the Twitter sections. Welch and Bonnan-White (2012), however, found no effect on student engagement when
comparing sociology and anthropology courses requiring and not requiring Twitter, although they did find a positive
relationship between self-reported Twitter enjoyment and perceived engagement.
Blessing, Blessing, and Fleck (2012) compared psychology class sections where instructors tweeted either daily
course-related material or daily jokes, and found that students receiving the course-related tweets remembered
more concepts than those receiving the joke tweets. Kassens-Noor (2012), in a small, exploratory, mostly qualitative
study, allowed students in an urban planning course to choose to either tweet or keep diaries relating to
environmentally unsustainable practices they witnessed, and found that students in the Twitter group identified
more unsustainable practices, but recalled fewer practices and generated fewer remedies for those practices,
compared to the diary group. Kim et al. (2015) used in-class pop quizzes requiring students to tweet a response as
quickly as possible (similar to typical clicker practices) in large engineering lecture courses, finding that a section
utilizing in-class pop quiz tweets performed better on exams than a section taught without in-class tweets. It is
notable, however, that in these three studies, tweets were either generated by the instructors (Blessing et al., 2012)
or in direct response to instructor prompts (Kassens-Noor, 2012; Kim et al., 2015), as opposed to being
independently student-generated.
Munoz, Pelligrini-Lafont, and Cramer (2014) allowed for independent student-generated tweets in their study
comparing an online teacher education course section using Twitter, an online section using a Blackboard-based
(Learning Management System) microblog, and a face-to-face section using Twitter. In this study, students in the
Blackboard-only condition felt more engaged and connected with other students than students in either of the
Twitter conditions, but the study’s sample only included 46 total students and the authors only provided descriptive
statistics, thus limiting both the generalizability and the internal validity of this claim.
Kuznekoff, Munz, and Titsworth (2015) were able to simulate a classroom setting via a true experiment in which
participants were randomly assigned to one of several conditions including whether they were texting while taking
notes, tweeting while taking notes, or simply note-taking during a lecture they would later be quizzed about.
Participants performed better in the note-taking-only condition than in the tweeting condition, even when they were
asked to tweet about lecture content, suggesting that tweeting while in class can be more of a distraction than a
help.
The lack of consistent findings from the aforementioned studies points to the need for more research on the use of
Twitter in education, at least as long as Twitter remains ubiquitous in popular culture. Large-scale reviews have been
relatively scarce, as well, but are presented here to conclude this introduction.
Gao, Luo, and Zhang (2012) performed an early critical analysis of studies about microblogging in education
published from 2008 to 2011, reviewing 21 studies. They highlighted the need for more systematically designed
research on using Twitter in education, noting that many studies relied on case studies or descriptive data. Alias et
al. (2013) reviewed studies involving Twitter in seven journals from 2007 to 2012, but were only able to analyze the
content of four articles, none of which utilized an experimental design or explored measurable outcomes of
integrating Twitter into a course. Buettner (2013) reviewed “substantial” (p. 246) studies from 2006 through 2013
and found that 16 of 17 relevant studies showed positive student outcomes when using Twitter, but did not examine
closely those outcomes besides a discussion of improved collaboration and teamwork through Twitter.
Fleck and colleagues (2013), in a chapter that would not be out of place in this current volume, reviewed research
on SNSs and reported results from a then-unpublished survey of college students about their preferences in SNSs in
the classroom. The authors provided guidance about incorporating social media in the classroom in general, with a
focus on both Facebook and Twitter. Grounding their recommendations in relevant learning theories, they
concluded with useful suggestions for instructors wishing to implement SNS use into their classrooms, stressing the
instructor’s need for familiarity with the SNS, clarity in expectations provided to students, consistency in
self-disclosure, awareness of privacy options, versatility in communication modes, cognizance of SNS post orders, and
knowledge of current research about SNS in the classroom.
Tang and Hew (2017), in a comprehensive review of studies about Twitter from 2006 to 2015, identified and analyzed
51 studies that met their criteria of empirical, peer-reviewed articles dealing with Twitter’s effect on teaching and
learning in an educational setting. Of these 51 studies, only six involved experimental comparisons of Twitter to
non-Twitter methods and utilized quantitative outcomes. This lack of rigorous comparisons led the researchers to posit
that evidence of Twitter’s causal effectiveness as a teaching tool remains to be seen, but it does have promise as a
mode of communication (e.g., a “push” technology to send students course-related information, or for collaboration
between students) and assessment (e.g., as an in-class response system).
THE CURRENT STUDY
With this shortage of controlled research on the effects of Twitter in a teaching and learning context in mind, we
designed a quasi-experimental study to compare Twitter to traditional teaching methods in an introductory
psychology course. True experiments, with each participant randomly assigned to a condition, are difficult to achieve
in a real educational setting given that students in higher education choose their course sections based on a variety
of both practical (e.g., time of day) and preference-based (e.g., reputation of instructor) reasons. It would also be
logistically difficult for one instructor to structure a single course section with students in both Twitter and control
groups. Thus, in the current study we employed a quasi-experimental design, with entire sections assigned to either
use Twitter or not.
Based on previous research (e.g., Junco et al., 2011) showing positive effects of Twitter on student engagement, we
hypothesized that students in courses utilizing Twitter would show more improvement in self-reported student
engagement over the course of a semester than students in courses utilizing traditional teaching methods. We also
explored the effect of Twitter quantitatively through examining potential differences in academic performance, as
measured by course grades, semester grade point average (GPA), and overall GPA.
METHOD
PARTICIPANTS
We employed a quasi-experimental design with order counterbalanced between semesters and across sections.
Eight lecture-based sections of introductory psychology participated in the study over four 15-week academic
semesters. In Semester 1, the first author’s introductory psychology section served as the treatment group, while
the second author’s introductory psychology section served as the control group. The assignment to treatment or
control group switched each semester, following an ABBA pattern of counterbalancing for each instructor (see Table
1).
Table 1. Number of participants in each group each semester.

             Semester 1         Semester 2         Semester 3         Semester 4
SK section   Twitter (n = 23)   Trad. (n = 25)     Trad. (n = 23)     Twitter (n = 25)
MC section   Trad. (n = 24)     Twitter (n = 20)   Twitter (n = 22)   Trad. (n = 23)
Each section enrolled between 20 and 25 students, resulting in 90 students (55% female; 60% Freshman, 22%
Sophomore, 13% Junior, 4% Senior) in Twitter sections, and 95 students (55% female; 59% Freshman, 26%
Sophomore, 8% Junior, 7% Senior) in traditional sections.
MATERIALS
Students in treatment groups created Twitter accounts and shared their Twitter handle with their instructors.
We utilized the TAGS plug-in for Google Sheets (version 5.0; Hawksey, 2013), which uses the Twitter API to track
user-defined hashtags. The plug-in searched Twitter for tweets and retweets containing the #SUPSY101 hashtag.
INSTRUMENTS
Students completed an online questionnaire through Qualtrics at the beginning and end of each semester. This
questionnaire included demographic items and instruments measuring student engagement and social media use
(discussed in more detail below). Instructors calculated academic performance after completion of the course.
STUDENT ENGAGEMENT
Students completed a modified version of the Student Engagement Instrument (SEI; Appleton, Christenson, Kim, &
Reschly, 2006) at the beginning and end of each semester. Appleton et al. (2006) tested the original SEI with middle
and high school students; we modified it to be appropriate for college students by removing items referring to
“family/guardian” (e.g., “I’ll learn, but only if my family/guardian(s) give me a reward”) and changing occurrences of
the word “teachers” to “professors” (e.g., “I enjoy talking to the professors here”). Our modified SEI contained 29
items with 5-point Likert response options from 1 (strongly agree) to 5 (strongly disagree), and included subscales
measuring Student-Teacher Relationships (8 items, Cronbach’s alpha = .86), Peer Support for Learning (6 items,
Cronbach’s alpha = .84), Future Aspirations and Goals (5 items, Cronbach’s alpha = .73), Control and Relevance (9
items, Cronbach’s alpha = .78), and Extrinsic Motivation (1 item).
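Cronbach’s alpha can be computed directly from an item-response matrix. The sketch below is illustrative only: the function and the simulated 8-item, 5-point responses are ours, not the study’s actual SEI data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of subscale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated data: 100 students answering an 8-item, 5-point subscale whose
# items share a common "engagement" signal (so alpha should come out high).
rng = np.random.default_rng(0)
latent = rng.normal(3, 0.8, size=(100, 1))
responses = np.clip(np.round(latent + rng.normal(0, 0.7, size=(100, 8))), 1, 5)
print(round(cronbach_alpha(responses), 2))
```

Because all eight simulated items load on one shared signal, the resulting alpha lands in the same high range as the subscale reliabilities reported above.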
SOCIAL MEDIA USE
We asked students to answer questions regarding how much time per typical school day (in minutes) they spend:
(1) using social media in general, (2) using Twitter, (3) using Twitter in relation to academics, and (4) using Twitter in
relation to Introductory Psychology.
ACADEMIC PERFORMANCE
Following completion of each semester, we calculated students’ grade in the course (up to 100, after Twitter grade
removed, if applicable), Twitter grades (up to 100, if applicable), semester GPA in all courses, and overall GPA in all
courses.
PROCEDURE
Over the course of two two-semester academic years, in each semester one of us taught introductory psychology using
traditional teaching methods (control group), and one of us taught with Twitter as a required component of the
course curriculum (treatment group; see Table 1 for teaching order). For the sections comprising the treatment
group, we integrated Twitter into curriculum as described below.
In the Twitter sections, we provided students with brief tutorials on how to use Twitter, and made clear our
expectations for how it would be used during the semester. Each week, students had to compose at least one original
tweet regarding something they learned about psychology during that week. These tweets could be informative,
humorous, reflective, observational, or inquisitive, and had to follow the standards of relevance and respect that we set for
typical classroom discussion. Tweets not meeting these standards would not earn credit. Because we wanted
students to generate their own tweets, we tended to not provide prompts. However, on occasion, we would tweet
questions to the class that they could respond to, thus serving as a prompt to facilitate communication, or aid
students who were “stuck.” These prompts were sporadic and not mandatory to reply to. A “tweet of the week”
prize was used to enhance participation, with winners receiving token rewards. The only account requirements were
that student accounts would be viewable by instructors, and that students used the appropriate hashtag
(#SUPSY101) when tweeting content for the course.
In addition to the token rewards, five percent of each student’s overall course grade was dedicated to the Twitter
assignment in each section. Assigning a course grade to tweeting incentivizes the use of Twitter (Kassens, 2014). In
our courses, tweeting at least one relevant message meeting the aforementioned standards per week would earn
credit on an “all or nothing” basis for each week. Students in control sections did not use Twitter, thus the five
percent was evenly distributed across other course related activities, such as homework assignments or quizzes.
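The all-or-nothing weekly credit rule described above can be sketched in code. Apart from the #SUPSY101 hashtag and the 15-week semester, everything here (the data layout, function name, and approval flag) is a hypothetical illustration, not part of the study’s actual grading workflow.

```python
COURSE_TAG = "#SUPSY101"   # hashtag required by the course
WEEKS = 15                 # semester length

def weekly_twitter_grade(tweets, weeks=WEEKS):
    """Percent credit (0-100): a week counts if the student posted at least
    one approved, on-topic tweet carrying the course hashtag that week.

    `tweets` is an iterable of (week_number, text, approved) tuples, where
    `approved` stands in for the instructor's relevance/respect check.
    """
    credited = set()
    for week, text, approved in tweets:
        if approved and COURSE_TAG.lower() in text.lower():
            credited.add(week)
    return 100 * len(credited & set(range(1, weeks + 1))) / weeks

student = [
    (1, "Neurons that fire together wire together #SUPSY101", True),
    (1, "Lunch was great today", True),                # no hashtag: no credit
    (2, "Is conformity always bad? #supsy101", True),  # case-insensitive match
    (3, "off-topic rant #SUPSY101", False),            # failed relevance check
]
print(weekly_twitter_grade(student))  # 2 of 15 weeks credited
```

Note that extra tweets within a week earn no additional credit, mirroring the all-or-nothing weekly basis described above.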
DATA COLLECTION
During the first and last weeks of each semester, students completed a confidential online questionnaire assessing
student engagement and social media usage, thus providing both pre- and post-treatment data. Students had the
option of declining to provide data, even if their section was one utilizing Twitter, without any detrimental impact
to their course grade. No students actively declined to participate, although some students were absent on data
collection days. Pairwise deletion of missing data was used in analyses.
We measured performance through recording students’ course grades (measured as a percentage of points earned
after removing points earned from Twitter). We also measured overall academic performance by recording student
GPA for the semester they took the course and their overall GPA for their time at the university.
RESULTS
COMPARISONS OF TWITTER AND TRADITIONAL CLASS TYPES
We conducted 2 (Twitter, Traditional) x 2 (Time 1, Time 2) mixed factorial ANOVAs on each measure of engagement.
Independent samples t-tests (Twitter, Traditional) were conducted on all academic measures.
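As an illustration of this analytic approach, the sketch below runs an independent samples t-test and computes Cohen’s d on simulated grades. The simulated parameters only loosely echo the near-identical group means in Table 5; they are not the study’s data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical course grades for the two class types (illustrative only).
twitter_grades = rng.normal(83.4, 9.4, size=90)
trad_grades = rng.normal(83.9, 9.9, size=95)

# Pooled-variance independent samples t-test
t, p = stats.ttest_ind(twitter_grades, trad_grades)

# Cohen's d using the pooled standard deviation
n1, n2 = len(twitter_grades), len(trad_grades)
pooled_sd = np.sqrt(((n1 - 1) * twitter_grades.var(ddof=1) +
                     (n2 - 1) * trad_grades.var(ddof=1)) / (n1 + n2 - 2))
d = (twitter_grades.mean() - trad_grades.mean()) / pooled_sd
print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```

With nearly identical group means and large standard deviations, d stays small, the same pattern the academic measures show in Table 4.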
Performing the 2 x 2 ANOVAs for each engagement subscale revealed only one main effect of time
(comparing Time 1 scores to Time 2 scores): Student Future Aspiration scores declined at the end of the semester
compared to the beginning, across both Twitter and Traditional groups, but this effect was no longer statistically
significant after making Bonferroni corrections. We found no main effects for the type of class, and most importantly,
no significant interactions between time and type of class. The lack of significant interactions indicates that changes
in student engagement over time were not dependent on whether students were in a Twitter or Traditional class
type (see Tables 2 and 3).
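To make the Bonferroni correction concrete: assuming the family of tests is the five subscale ANOVAs (our assumption; the chapter does not state the family size), the Future Aspirations time effect clears the conventional .05 threshold but not the corrected one.

```python
alpha = 0.05
m = 5                          # one mixed ANOVA per SEI subscale (assumed family)
bonferroni_alpha = alpha / m   # each test now needs p < .01

p_future_aspirations_time = 0.036   # reported time effect for Future Aspirations
print(p_future_aspirations_time < alpha)             # True: significant uncorrected
print(p_future_aspirations_time < bonferroni_alpha)  # False: not after correction
```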
Table 2
2 x 2 mixed-ANOVA results for SEI subscale scores.

                                      Time (T)                     Twitter Group (TG)              T x TG
                              df      F      p      η²p      df      F      p      η²p      df      F      p      η²p
Future Aspirations            1, 166  4.46   .036*  0.026    1, 166  1.11   .295   0.007    1, 166  0.004  .948   0.000
Teacher-Student Relationship  1, 163  0.001  .973   0.000    1, 163  1.96   .164   0.012    1, 163  0.081  .776   0.000
Extrinsic Motivation          1, 164  2.25   .136   0.014    1, 164  1.67   .198   0.010    1, 164  1.04   .310   0.006
Peer Support                  1, 165  0.17   .685   0.001    1, 165  0.04   .851   0.000    1, 165  0.002  .961   0.000
SEI Total                     1, 159  2.84   .094   0.018    1, 159  1.17   .282   0.007    1, 159  0.35   .555   0.002

Note. * p < .05 (uncorrected).
Table 3
Descriptive statistics for SEI subscale scores (standard deviations in parentheses).

             Future Aspirations    Teacher-Student Relationship    Extrinsic Motivation    Peer Support        SEI Total
             Pre       Post        Pre       Post                  Pre       Post          Pre       Post      Pre        Post
Twitter      17.83     17.46       33.69     33.59                 36.32     36.15         24.84     24.94     116.82     115.85
             (1.93)    (2.63)      (3.57)    (4.86)                (4.13)    (5.48)        (2.89)    (3.81)    (10.69)    (15.21)
Traditional  18.13     17.78       32.86     32.94                 35.86     35.00         24.76     24.84     115.64     113.64
             (2.07)    (2.11)      (3.45)    (3.89)                (3.98)    (4.57)        (3.11)    (3.14)    (9.23)     (9.98)
Analyses of the academic measures did not reveal any statistically significant differences between class types (see
Tables 4 and 5).
Table 4
Independent samples t-tests for academic measures between Twitter and traditional courses.

                    df     t       p      Cohen’s d
Semester GPA        192   -0.29   .773    0.05
Overall GPA         192    0.07   .942    0.01
Grade for PSY101    183   -0.34   .733    0.05
Table 5
Descriptive statistics for academic measures between Twitter and traditional courses (standard deviations in
parentheses).

                       Twitter         Traditional
Semester GPA           2.83 (0.78)     2.87 (0.77)
Overall GPA            2.94 (0.70)     2.94 (0.69)
Grade for PSY101 (%)   83.39 (9.42)    83.87 (9.87)
CORRELATIONAL ANALYSES
For Time 1 and Time 2 separately, we performed Pearson’s correlational analyses to determine relationships between
academic performance, social media usage, and student engagement variables across all students.
TIME 1
Extrinsic Motivation scores correlated negatively with students’ grades in the course, r = -.163, n = 177, p = .030.
Future Aspirations scores correlated positively with students’ grades in the course, r = .171, n = 178, p = .022. In
other words, prior to any instruction using Twitter or traditional methods, students who were less extrinsically
motivated but more driven toward future goals were more likely to earn better grades in the course. We found no
other significant correlations outside of those that would be expected based on scale content (e.g., student
engagement subscales correlating with each other).
TIME 2
Extrinsic Motivation scores correlated negatively with both students’ grades in the course, r = -.210, n = 175, p =
.005, and their overall semester GPA, r = -.190, n = 175, p = .012, but correlated positively with time spent using
Twitter in general, r = .202, n = 174, p = .007, and in relation to academics, r = .162, n = 173, p = .033. Across all
students, those who were more extrinsically motivated tended to have worse grades, but spent more time using
Twitter. Among students in the Twitter sections, there were large positive correlations between students’ Twitter
grades and non-Twitter grades in the course, r = .561, n = 90, p < .001, and overall semester GPA in all courses, r =
.609, n = 90, p < .001. These strong relationships indicate that students who performed better academically in the
course and in general tended to perform better in the Twitter-based portion of the course as well.
Diving deeper into the connection between student extrinsic motivation and time spent using academic-related
Twitter, we found that among students in the Twitter courses, this relationship was positive, r = .221, n = 84, p =
.044, but among students in the traditional courses, the relationship was not significant, r = .129, n = 89, p = .227.
Converting the correlations to r² effect sizes shows that the relationship between extrinsic motivation and academic-related time on Twitter was approximately three times larger for students in the Twitter sections than for students in the traditional sections (r² = .049 vs. r² = .017).
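The r-to-r² conversion is simple arithmetic and can be checked directly from the correlations reported above:

```python
# Squaring a correlation gives the proportion of shared variance (r²)
r_twitter, r_traditional = 0.221, 0.129   # correlations reported above
r2_twitter = r_twitter ** 2               # ≈ .049
r2_traditional = r_traditional ** 2       # ≈ .017
ratio = r2_twitter / r2_traditional       # ≈ 2.9, i.e., roughly three times larger
```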
CHANGE IN ENGAGEMENT
When we computed difference scores (Time 2 - Time 1) for the SEI, we found that for students who were in Twitter
sections, there was not a significant relationship between SEI changes over time and their grades in the course, r = .176, n = 76, p = .128. However, for students in traditional courses, this relationship was positive, r = .222, n = 85, p
= .041. These two correlations together indicate that the Twitter requirement may have moderated the relationship
between engagement and course grades, such that improvements in engagement were unrelated to course grades
in Twitter sections, but in traditional sections, the more students increased their engagement over the semester,
the better grades they earned in the course.
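The moderation interpretation rests on comparing two independent correlations. A standard way to test such a difference formally, which the analyses above do not report, is Fisher's r-to-z transformation; this sketch shows the computation with placeholder correlations and sample sizes, not a reanalysis of the study data.

```python
from math import atanh, sqrt

def fisher_z_difference(r1, n1, r2, n2):
    """z statistic for the difference between two independent Pearson
    correlations, via Fisher's r-to-z transformation."""
    z1, z2 = atanh(r1), atanh(r2)
    se = sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

# Placeholder values for two groups; |z| > 1.96 would indicate the two
# correlations differ at the .05 level
z = fisher_z_difference(0.25, 85, -0.10, 76)
```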
DISCUSSION
INTERPRETATION & IMPLICATIONS
In this study, integrating Twitter into course requirements did not result in improvements in student engagement or
performance. This lack of engagement gains contrasts with the findings of Junco and colleagues (2011, 2013), who found significant improvements in overall student engagement when Twitter was included as part of course requirements.
The use of Twitter appears to function better as a community building tool where there is little to no face-to-face
contact between individuals (Carpenter & Krutka, 2014). In Junco and colleagues’ studies, students’ courses were
first-year seminars meeting only one hour/week. We taught all sections of the courses involved in this study in
person, meeting either twice or three times per week, for a total of three weekly contact hours. Our course,
Introductory Psychology, enrolls mostly first-year students (approximately 60% in our sample) but also includes
upperclassmen, and occurred in both Fall and Spring semesters.
Junco and colleagues’ (2011, 2013) experiments utilized seven sections of a course and took place over one academic
semester, but it is unclear how instructors were matched to Twitter sections. Welch and Bonnan-White’s (2012)
study included two sections of each of two courses taught by two instructors, and occurred over one semester.
Blessing et al. (2012) utilized two sections of a single course taught by two instructors during a single semester.
Although our pre-tests and post-tests occurred at the beginning and end of each semester, by expanding our entire
study to encompass four semesters, we could include more participants and account for environmental differences
that might have varied systematically with requiring Twitter, had we confined it to a single semester, such as time
of year (Fall vs. Spring), time of day the courses were offered (early vs. late morning), meetings per week (twice or
thrice), or instructor variables (e.g., comfort level with Twitter, ability to establish rapport).
The correlational analyses we performed, while providing insight into relationships of course grades, engagement,
and social media usage, were exploratory in nature and tended to provide more questions than answers. Some
findings, such as those showing that students with higher future aspirations and who were less extrinsically
motivated would earn better grades in the course, shed little light on the impact of Twitter as a teaching tool. The
finding that students with better performance in the Twitter components of their courses were more likely to have
better non-Twitter grades in the course and overall was not surprising.
Whether a student’s section utilized Twitter or not moderated some small, yet notable relationships between other
variables. In Twitter sections, students who were more extrinsically motivated at the beginning of the course spent
more time tweeting about academics, but in traditional sections, this relationship was absent. Not all students have Twitter accounts; thus, by incentivizing tweets in the Twitter sections with points and tokens, we provided extrinsic reasons for students to spend more time tweeting. For intrinsically motivated students, however, the incentives may have mattered less.
The relationship between student engagement and academic performance was less strong and more nuanced than
we expected, and merits further research. The only areas showing significant associations with academic
performance were future aspirations (positive) and extrinsic motivation (negative), but student-teacher
relationships, control and relevance, and peer support were not related to performance.
CHALLENGES & LESSONS LEARNED
When Twitter is used as a consistent part of the academic experience, the additional workload may result in “log-in”
fatigue. This is reflected in negative sentiments shared by some students: “[…] not a huge fan of Twitter, it felt more
like busy work to me […]” or “[…] I am not sure that it really helps with anything. I don’t really use Twitter that often”.
Moreover, it is likely that only students who enjoy using Twitter report increased engagement (Welch & Bonnan-White, 2012).
Similarly, not all instructors are active Twitter users. Having to actively monitor the class Twitter feed, learn Twitter etiquette (140 characters at the time of the study), respond to student tweets, and know when to use Twitter-specific labels (e.g., @ for mentions and # for hashtags) took some initial adjustment. When tweeting and responding to others' tweets via mentions (@), instructors and students must remember to use the class hashtag. Not using the hashtag will cause the API to miss the conversation and potentially result in students being penalized incorrectly.
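Because grading depends on every tweet carrying the class hashtag, a collection script can make the failure mode explicit by separating tagged from untagged tweets. The following is a toy sketch, not the #TAGS collector cited in the references; the hashtag and tweet texts are invented.

```python
CLASS_HASHTAG = "#psy101"  # hypothetical class hashtag

def split_by_hashtag(tweets):
    """Separate tweet texts that carry the class hashtag from those that do not."""
    tagged, untagged = [], []
    for text in tweets:
        (tagged if CLASS_HASHTAG in text.lower() else untagged).append(text)
    return tagged, untagged

tweets = [
    "Saw operant conditioning at the dog park today! #PSY101",
    "@classmate great point about working memory",  # missing hashtag: would be missed
]
tagged, untagged = split_by_hashtag(tweets)
```

Surfacing the `untagged` list lets an instructor follow up before penalizing anyone.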
From a research perspective, teachers attempting to evaluate the effectiveness of integrating Twitter into their
curricula should pay keen attention to their outcome variable(s), how they are operationalized, and whether
outcome measures are sensitive enough to detect changes attributed to Twitter. Academic performance should be
an objective and relatively straightforward construct, but student engagement may not be. In our case, the SEI and
its subscales could have been measuring a broader sense of student engagement than what we were able to affect
with the Twitter requirement. SEI items were phrased to refer to school in general, and not a specific course, e.g.,
“My professors are there when I need them” (Student-Teacher Relationship subscale) or “Other students at school
care about me” (Peer Support subscale). If our courses were the only ones in students’ course-loads utilizing Twitter,
its positive effects may have been washed out by dynamics of other courses.
RECOMMENDATIONS
For instructors who actively use Twitter, tweeting about academic content is likely not a time-consuming task. In
fact, Twitter provides an additional avenue for professional development (see Krutka, & Carpenter, 2016). However,
when tweeting is coupled to student grades, there is an additional time demand of ensuring that students are
tweeting and that these tweets are graded in a timely manner. Therefore, an effective workflow should be
established to reduce workload, e.g., reminding students to use the appropriate hashtag, setting a specific time to
score tweets, utilizing automation to collate student tweets.
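The collation step in such a workflow can be as simple as grouping collected tweets by student handle before scoring. This sketch uses invented handles and tweet texts; the function name is our own.

```python
from collections import defaultdict

def collate_tweets(records):
    """Group (handle, tweet_text) pairs by student handle for batch scoring."""
    by_student = defaultdict(list)
    for handle, text in records:
        by_student[handle].append(text)
    return dict(by_student)

records = [
    ("@student_a", "Classical conditioning is everywhere #psy101"),
    ("@student_b", "Great lecture on memory today #psy101"),
    ("@student_a", "Replying to @student_b: agreed! #psy101"),
]
collated = collate_tweets(records)
```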
If Twitter functions as a means to engage students, then tweeting must be transactional. Students should be
encouraged to respond to classmates’ tweets on top of their weekly tweets. Instructors should feel free to respond
and possibly even “like” a student’s tweet where appropriate. In effect, this transforms the activity of tweeting into
a conversation. By doing so, it makes tweeting more meaningful and reduces the sense that once a tweet is done
and graded, it becomes lost to the Twittersphere. This transactional process provides a sense of community, and receiving likes and garnering retweets are strong extrinsic motivators (Kassens, 2014).
The results of our study mirror the mixed findings in extant literature (Tang & Hew, 2017). At this point, we might
even question if using Twitter is a worthwhile approach to engage our students. As educators, we are encouraged
to engage the digital natives (students) in our classroom with tools that resonate with them. If we look beyond the
null and mixed results, we can take pride in tweets that show learning transferring beyond the classroom, and also
words of appreciation from students who understand our attempts to reach out to them.
REFERENCES
Alias, N., Sabdan, M. S., Aziz, K. A., Mohammed, M., Hamidon, I. S., & Jomhari, N. (2013). Research trends and issues
in the studies of Twitter: A content analysis of publications in selected journals (2007-2012). Procedia-Social
and Behavioral Sciences, 103, 773-780.
Appleton, J. A., Christenson, S. L., Kim, D., & Reschly, A. (2006). Measuring cognitive and psychological engagement:
Validation of the Student Engagement Instrument. Journal of School Psychology, 44, 427-445.
doi:10.1016/j.jsp.2006.04.002
Blessing, S. B., Blessing, J. S., & Fleck, B. K. B. (2012). Using Twitter to reinforce classroom concepts. Teaching of
Psychology, 39(4), 268-271.
Buettner, R. (2013). The utilization of Twitter in lectures. GI-Jahrestagung, 244-254.
Carpenter, J. P., & Krutka, D. G. (2014). How and why educators use Twitter: A survey of the field. Journal of Research
on Technology in Education, 46, 414-434.
Fleck, B. K. B., Richmond, A. S., & Hussey, H. D. (2013). Using social media to enhance instruction in higher education.
In S. Keengwe (Ed.), Research perspectives and best practices in educational technology integration (pp.
217-241). Hershey, PA: IGI Global.
Gao, F., Luo, T., & Zhang, K. (2012). Tweeting for learning: A critical analysis of research on microblogging in education
published in 2008-2011. British Journal of Educational Technology, 43, 783-801. doi:10.1111/j.1467-8535.2012.01357.x
Hawksey, M. (2013). #TAGS [Computer software]. Retrieved from https://tags.hawksey.info/
Junco, R., Elavsky, C. M., & Heiberger, G. (2013). Putting Twitter to the test: Assessing outcomes for student
collaboration, engagement and success: Twitter collaboration & engagement. British Journal of Educational
Technology, 44(2), 273-287.
Junco, R., Heiberger, G., & Loken, E. (2011). The effect of Twitter on college student engagement and grades: Twitter
and student engagement. Journal of Computer Assisted Learning, 27(2), 119-132.
Kassens, A. L. (2014). Tweeting your way to improved #writing, #reflection, and #community. The Journal of
Economic Education, 45(2), 101-109.
Kassens-Noor, E. (2012). Twitter as a teaching practice to enhance active and informal learning in higher education:
The case of sustainable tweets. Active Learning in Higher Education, 13(1), 9-21.
Kim, Y., Jeong, S., Ji, Y., Lee, S., Kwon, K. H., & Jeon, J. W. (2015). Smartphone response system using Twitter to
enable effective interaction and improve engagement in large classrooms. IEEE Transactions on Education,
58(2), 98-103.
Krutka, D. G., & Carpenter, J. P. (2016). Participatory learning through social media: How and why social studies
educators use Twitter. Contemporary Issues in Technology and Teacher Education, 16(1), 38-59.
Kuznekoff, J. H., Munz, S., & Titsworth, S. (2015). Mobile phones in the classroom: Examining the effects of texting,
Twitter, and message content on student learning. Communication Education, 64, 344-365.
Luckerson, V. (2017, July 27). Twitter finds meaning (and madness) under Donald Trump. The Ringer. Retrieved from
https://theringer.com/Twitter-social-media-politics-donald-trump-6fe4b60f91f9.
Munoz, L. R., Pellegrini-Lafont, C., & Cramer, E. (2014). Using social media in teacher preparation programs: Twitter
as a means to create social presence. Perspectives on Urban Education, 11(2), 57-69.
Pew Research Center. (2016, November 11). Social media update 2016. Retrieved from
http://assets.pewresearch.org/wp-content/uploads/sites/14/2016/11/10132827/PI_2016.11.11_SocialMedia-Update_FINAL.pdf
Tang, Y., & Hew, K. F. (2017). Using Twitter for education: Beneficial or simply a waste of time? Computers &
Education, 106, 97-118.
Twitter. (2017). About Twitter. Retrieved from https://about.twitter.com/company
Welch, B. K., & Bonnan-White, J. (2012). Twittering to increase student engagement in the university classroom.
Knowledge Management & E-Learning: An International Journal (KM&EL), 4(3), 325-3
CHAPTER 30
USING PECHAKUCHA IN THE CLASSROOM
JENNIFER ANN MORROW, UNIVERSITY OF TENNESSEE; LISA SHIPLEY, UNIVERSITY OF TENNESSEE; AND STEPHANIE KELLY, NORTH CAROLINA A&T STATE UNIVERSITY
INTRODUCTION
As instructors, we want to present our class content in a way that is authentic and engaging and that enables our students to remember our message. Also, as educators, it is increasingly important that we not only impart knowledge to our students but also prepare them for life outside of the classroom (Rainie & Anderson, 2017). We strive to assist them
in developing oral communication skills to be able to successfully navigate a job interview and effectively
communicate with colleagues in their profession. How can we use technology effectively to guarantee this? We need
to use a method of presenting that is engaging and prepares students to make professional presentations.
PowerPoint is commonplace in the classroom and at times misused by both instructors and students. Whereas the goal of a presentation is to engage your audience and communicate your point, using PowerPoint ineffectively can bore, confuse, and annoy attendees, causing them to ignore your message. Too much information on the slides, a lack of visuals, overuse of bulleted lists, overdone animations, fonts that are too small, odd color choices, awkward transitions, and excessive typos are just some of the practices that make PowerPoint an ineffective communication tool in the classroom (Satterfield, 2007; Tufte, 2003). Presenters who spend more time reading their slides than connecting with their audience, attempt to cover too much material in the timeframe, go over their time limit, or fail to address the diversity of learners (e.g., visual, auditory) within a classroom are not using PowerPoint effectively (Garber, 2001; Paradi, 2017; Tufte, 2003).
Communication experts warn presenters about these misuses of PowerPoint and their impact on the audience's understanding of the message, at times stating that "PowerPoint is evil" (Tufte, 2003) and reminding us of "Death by PowerPoint" (Garber, 2001).
PechaKucha is a method of using PowerPoint that has been shown to be more engaging, better received by audience members, and rated as higher quality than traditional PowerPoint presentations (Beyer, 2011; Byrne, 2016; Oliver &
Kowalczyk, 2013; Soto-Caban, Selvi, & Avila-Medina, 2011; Widyaningrum, 2016). As instructors we can use this
presentation method not only to present course content but also to train our students on how to prepare an
effective and engaging professional presentation. In the sections that follow, we describe what PechaKucha style
presentations consist of, what research supports their use in the classroom, challenges one may encounter when
developing and utilizing PechaKucha presentations, and suggestions for how to incorporate this presentation
method within your classroom.
PECHAKUCHA AS A PRESENTATION STYLE
PechaKucha (sometimes spelled Pecha Kucha) is a fast-paced, interactive, visual-based presentation style that is
transforming how many conduct oral presentations. This creative presentation format was developed in Japan in
2003 by two architects, Astrid Klein and Mark Dytham, as a way to attract prospective patrons to their new event
space (pechakucha.org). PechaKucha is Japanese for chit-chat or chatter (jisho.org) and is also referred to as the
20x20 style of presenting (Bang-Jensen, 2010; Freeman, 2016). This timed presentation technique consists of
displaying 20 slides (mostly images) for only 20 seconds each, thereby making the entire presentation only 6 minutes
and 40 seconds. PechaKucha presentations should be primarily graphical; slides should be predominately images
with little to no text. Also, to keep the fast-paced momentum going throughout the presentation, slides should be
set up to automatically advance every 20 seconds.
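The 20x20 arithmetic is easy to verify:

```python
# 20 slides shown for 20 seconds each
SLIDES = 20
SECONDS_PER_SLIDE = 20
total_seconds = SLIDES * SECONDS_PER_SLIDE   # 400 seconds
minutes, seconds = divmod(total_seconds, 60) # 6 minutes, 40 seconds
```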
PechaKucha style presentations are an effective way to persuade others and communicate your message as they are
longer than an elevator pitch, yet shorter than the 10/20/30 rule espoused by many presentation experts (see
https://guykawasaki.com/the_102030_rule/ for a description). Research has shown that this method of using
PowerPoint is more engaging and audience members have similar levels of recall compared to traditional methods
of using PowerPoint (Beyer, Gaze, & Lazicki, 2012). Instructors can use this method of presenting as a way to introduce a new content topic at the beginning of a class discussion or activity, to provide a real-world example of a concept discussed in class, or to recap the information learned at the end of a discussion or activity. Students can
use this presentation style as an individual or group presentation for a class assignment or as their final presentation
for a research/writing project. This gives students experience speaking before an audience in a concise and
professional manner. PechaKuchas have been used by instructors and students within the classroom (both K-12 and
Postsecondary) (Byrne, 2016; Klentzin, Bounds, Paladino, & Johnston, 2010), as a presentation format at academic
conferences (such as the American College Personnel Association, American Marketing Association, International
Association for Social Science Information Services and Technology, and the European Educational Research
Association), and at multi-speaker educational and community events, sometimes called “PechaKucha Nights” (for
example: https://utworks.tennessee.edu/micnite/Pages/main).
PechaKucha is appealing in that it requires the audience to be much more engaged in the presentation (Allen, 2017;
Anderson & Williams, 2012). It may also reduce mental fatigue by allowing the attendees to focus on the spoken
word rather than jumping back and forth between the presenter and text intensive PowerPoint slides. Since the
presenter has less than seven minutes to get their point across, they have to focus and create a more structured
presentation that captures an audience’s attention and gives them more time for questions. Preparing students to
make professional presentations (Byrne, 2016) is just one of the many benefits of utilizing this type of presentation
methodology.
BENEFITS OF USING PECHAKUCHA: ADVANTAGES FOR INSTRUCTORS AND STUDENTS
In the last fourteen years, PechaKucha presentations have gained significant popularity within the classroom.
Research has shown that this style of presentation has been used successfully in educational settings in a variety of
disciplines such as communications, nursing, marketing, psychology, and statistics by both instructors and students
for presenting information on both simple and complex topics (Beyer, 2011; Byrne, 2016; Fewell, 2015; Levin &
Peterson, 2013; Lucas & Rawlins, 2015; McDonald & Derby, 2015; Morrow, Kelly, & Shipley, 2017). One major benefit
of PechaKucha presentations over traditional PowerPoint is how they are perceived by your audience. There is a
plethora of research to support that PechaKucha style presentations are rated as more engaging and innovative than
traditional PowerPoint presentations (Beyer et al., 2012; Klentzin, Paladino, Johnston, & Devine, 2010; Lehtonen, 2011;
Tomsett & Shaw, 2014). Byrne (2016) utilized PechaKucha presentations as an assignment with graduate nursing
students. Students’ comments were overwhelmingly positive, stating “the PechaKucha format helped me learn to
condense a large amount of information into small chunks” and “I learned how to narrow down only main points
and not to be too wordy” (Byrne, 2016, p. 22). PechaKucha has also been shown to be just as effective as traditional
PowerPoint presentations are for learning new information (Klentzin et al., 2010). Another benefit of this
presentation method is that it can be implemented in both traditional face-to-face as well as online classrooms.
Within a traditional face-to-face classroom, teachers can hold a PechaKucha presentation event as the culminating
student project. In a distance education classroom both students and faculty can create a digital recording of their
PechaKucha presentation and deliver it either asynchronously or synchronously.
Other major benefits of PechaKucha have to do with its ease of use in the classroom. Researchers found that
students took a similar amount of time to prepare using PechaKucha as their presentation style as they did using a
traditional PowerPoint method of presenting (Beyer, 2011). Since this method of presentation is mostly visuals,
students know that they cannot simply read their content off the slides, which forces a level of involvement and material knowledge with the assignment that might otherwise not be present; students must understand
the essence of the concepts in order to present with visuals rather than just reading off text. The learning curve in
order to successfully implement PechaKucha is small as it is easily utilized by those with a basic understanding of
PowerPoint. Also, PechaKucha does not cost any extra to use; one just needs to have access to PowerPoint or
another type of presentation software (e.g., Prezi). Since the presentation time is so short (under 7 minutes) it gives
instructors the ability to schedule numerous student presentations within one class period if they are using this
method of presenting as part of the students’ assignment. For instructors it enables them to condense new course
content into smaller, more manageable information chunks throughout their class period. When using PechaKucha
as a presentation style one has to focus on the topic and understand the content in order to be able to present it
under seven minutes. While there are many benefits to this method of using technology in the classroom, there can
be challenges for both instructors and students in implementing it.
CHALLENGES IN USING THE PECHAKUCHA PRESENTATION STYLE
While there are many benefits to using PechaKucha as a presentation method in the classroom there are challenges
that instructors and students need to consider when creating and implementing this type of presentation. In regard
to creating PechaKucha presentations, instructors and students need to think about choosing the most appropriate
visuals for their presentation. An instructor does not want to choose a visual that is irrelevant and/or distracting to
the audience. Students will need guidance on what types of visuals are appropriate and where they can find them.
Also, when creating your PechaKucha the presenter must be focused on the message they are trying to convey.
There is a limited amount of time to present concepts and there may not be enough time to present more complex
concepts (Byrne, 2016). The instructor or student has to be sure that they fully understand the concept that they are
trying to present on and be able to summarize the main points in under seven minutes. Another challenge is deciding
whether or not to automate slides for the presentation. While many suggest that one should automate slides
(Eriksen, Tollestrup, & Ovensen, 2011; Lucas & Rawlins, 2015), others (Bajaj, 2012; Edwards, 2010; Oliver &
Kowalczyk, 2013) recommend against it as the transitions can increase the anxiety of the presenter and they can
easily mishandle their timing (Swathipatnaik & Davidson, 2016). Those with little or no experience presenting may
find it difficult to adapt to presenting with automated slides without a lot of preparation. Lastly, when creating a
PechaKucha presentation, the presenter must practice, though not become so over-rehearsed that the presentation sounds canned and uninspiring. Suggesting that students practice their presentation in front of a test audience and/or record their practice session is a great way for them to assess their readiness to present their PechaKucha in front of the class.
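One low-tech rehearsal aid is a cue sheet listing when each auto-advancing slide should appear, so presenters can check their pacing against a stopwatch. This sketch is our own illustration; the function name and output format are not from the chapter.

```python
def cue_sheet(n_slides=20, seconds_per_slide=20):
    """Return (slide_number, "m:ss" start time) for each auto-advancing slide."""
    cues = []
    for i in range(n_slides):
        start = i * seconds_per_slide
        cues.append((i + 1, f"{start // 60}:{start % 60:02d}"))
    return cues

# Slide 1 starts at 0:00; the final (20th) slide starts at 6:20
```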
Other challenges have to do with the actual presenting of the PechaKucha presentation. It might be difficult for the
presenter to respond to non-verbal audience cues during their presentation. Since the PechaKucha style
presentation is so fast-paced, and in many cases rigidly timed, the presenter may not be able to adjust their message
based on the engagement and understanding (or lack thereof) of the audience. Also, because the presenter is so focused on delivering the presentation in the PechaKucha style, they may have little attention left for managing public speaking anxiety symptoms. This is especially true for students, who often report anxiety about delivering this type of presentation (Swathipatnaik & Davidson, 2016). An instructor can provide guidance and resources to students to help alleviate these challenges and ensure a successful PechaKucha presentation (see Table 1 for suggested resources).
SUGGESTIONS FOR INCORPORATING PECHAKUCHAS WITHIN THE COLLEGE CLASSROOM
How does an instructor get started using PechaKuchas in the classroom? How do instructors prepare PechaKucha
and train students in using them? There are many researchers and presentation experts offering advice on what to
do and what not to do (Brown, 2014; De Wet, 2006; Edwards, 2010; Fewell, 2015; Genzuk, n.d.; Jones, 2009; Jung,
n.d.; Kandybovich, 2015; Lortie, 2016) when utilizing PechaKucha, either as a method presenting course material or
for student presentations. Below is a summary of tips and strategies for making the most of PechaKucha
presentations within your classroom. A list of PechaKucha resources and examples are also contained within the
table (Table 1).
TABLE 1: PECHAKUCHA PRESENTATIONS TIPS FOR INSTRUCTORS AND STUDENTS
What NOT to do:
1. Use a lot of text on your slides
2. Bullet lists of information on your slides
3. Read the information on your slides to the audience
4. Use a PowerPoint template that is distracting/busy
5. Include low quality or watermarked images
6. Incorporate more than two font types
7. Use red/green font colors (i.e., difficult for those that are colorblind)
8. Feature animation in your slides
9. Include irrelevant clipart and/or sounds
10. Have reference slides at the end of your presentation
What you SHOULD DO (General Suggestions for both Instructors and Students):
1. KISS…Keep it simple scholar; your presentation should be clear, concise, and delivered well
2. Use only sans serif fonts (e.g., Calibri, Gotham, Verdana) for any text that is on your slides
3. Text should be at least 28 point
4. Limit each slide to one idea or concept
5. Include mostly visuals on each slide; visuals should all be high quality
6. Limit the number of visuals on each slide to 4 or fewer
7. Include ample white space on the slides
8. When using text, use a dark background and light font; this is easier for your audience to read
9. Create an outline or storyboard of your presentation prior to creating your PechaKucha
10. Include minimal text and only text that conveys key concepts
11. Make your first slide capture your audience's attention
12. Be consistent with slide transitions (i.e., choose the same timing for all slides)
13. Practice but do not memorize your presentation
14. Use notes if necessary, just do not read everything verbatim
15. Tell your audience a story; your slides should all flow together
16. View the slides in Presenter View so you can see the current and the upcoming slide
17. Include a discussion and answer session after the PechaKucha presentation
18. Keep delivering the presentation even if you finish your points before the next slide advances; do not pause and wait for the slide to transition
19. Give your audience access to your presentation or a handout based on the presentation after you present so they can refer to the material at a later time
What you SHOULD DO (Suggestions for Instructors to Give to Students Using PechaKucha):
1. Provide students with detailed instructions on what you want to see included within their PechaKucha presentation (e.g., having a detailed assignment rubric)
2. Give students numerous resources on how to create PechaKuchas and how to find visuals to use in their presentations
3. Use one of your own PechaKucha presentations (recorded or demonstrated live in the classroom) as an example for students to use as a template when creating their own
4. Assign students to do a PechaKucha either alone or with a partner; both have been shown to be effective!
5. Experiment with modifications to the PechaKucha style to fit your classroom needs (e.g., less time, more time, video-based, team-presented)
6. Collect formative assessment (i.e., students' perceptions of the activity and their suggestions for improvement) on the likeability and usefulness of PechaKucha presentations in your classroom
Helpful PechaKucha Resources:
1. https://www.youtube.com/watch?v=l9zxNTpNMLo
2. https://www.youtube.com/watch?v=BSQlsqZWtV0
3. https://www.youtube.com/watch?v=32WEzM3LFhw
4. https://paulgordonbrown.com/2014/12/13/your-ultimate-guide-to-giving-pechakucha-presentations/
5. http://cmmr.usc.edu//Pecha_Kucha_TipsResourcesExamples.pdf

Examples of PechaKucha Presentations:
1. http://www.pechakucha.org/
2. https://utworks.tennessee.edu/micnite/Pages/main.aspx
3. https://www.youtube.com/playlist?list=PLxh6rM5tNqYij5Pzn4w0DrHqMHcDsu3So (playlist of 19 videos; student examples)
4. https://www.youtube.com/playlist?list=PLOOhndFHTtGOhgV1eWJb75XSkswFk0HF0 (playlist of 110 videos)
CONCLUSIONS
Visual aids can be effective tools for enhancing a presentation. At least 50 percent of the human brain is dedicated
to directly or indirectly processing visual information (Smiciklas, 2012). Because of this, visual representations of
information can make complex information much more easily digestible to an audience (Kelly, 2015). However, this
distinction that visual aids should be built for an audience is sometimes lost on presenters.
Whether using PechaKucha or a traditional PowerPoint format, presenters, both instructors and students, must
remember the purpose of a visual aid so that they can build and use one effectively: It is to be an assistant for the
audience. Bokeno et al. (2006) break down this role of visually assisting the audience even further, specifying that a
visual aid should:
• Clarify meaning
• Emphasize important ideas
• Make concepts more memorable
• Give your words concrete meaning
While a visual aid can certainly help keep a speaker on track by providing a concrete structure for a presentation, it
should be designed to assist the audience, not the speaker.
Because the PechaKucha style removes the option of transcribing a presentation onto the slides (i.e., filling slides
with text), using this format can help presenters remember to make their slides audience-centric.
Presenters using PechaKucha are tasked with finding or creating images that clarify and accentuate their words. The
audience then benefits from these assisting images in that their visually-wired brains will be more easily able to
create meaning from the content, which will make the concepts more memorable (Smiciklas, 2012). As with
traditional PowerPoint presentations, the presenter can easily ensure that the final PechaKucha presentation is
understandable for all audience members by checking its accessibility
(see https://webaim.org/techniques/powerpoint/ for guidance). Instructors and students can create a more
engaging and appealing presentation using the PechaKucha format.
REFERENCES
Allen, E. (2017). Our stories-our journey: An empowerment group approach for justice-involved women using PechaKucha. Social Work with Groups, doi: 10.1080/01609513.2017.1358129.
Anderson, J.S., & Williams, S.K. (2012). Pecha Kucha for lean and sticky presentations in business classes. Working
Paper Series, 12.1, 1-9.
Bajaj, G. (2012). Ten tips for Pecha Kucha. Retrieved on October 1, 2017 from
http://blog.indezine.com/2012/05/10-tips-to-create-and-present-pecha.html
Bang-Jensen, V. (2010). Pecha Kucha: A quick and compelling format for student PowerPoint presentations. The
Teaching Professor, 24(5), 5-5.
Beyer, A.M. (2011). Improving student presentations: Pecha Kucha and just plain PowerPoint. Teaching of
Psychology, 38(2), 122-126.
Beyer, A.M., Gaze, C., & Lazicki, J. (2012). Comparing students’ evaluations and recall for student Pecha Kucha and
PowerPoint presentations. Journal of Teaching and Learning with Technology, 1(2), 26-42.
Bokeno, M. R., Brewer, E. C., Cole, W. D., Coleman, C. R. C., Cox, S. A., Duffy, C. P., Lidzy, S. D., Malinauskas, B. K., &
Tillson, L. D. (2006). A manner of speaking: Successful presentations for work and life (2nd ed.). Dubuque, IA:
Kendall Hunt.
Brown, P.G. (2014). Your ultimate guide to giving PechaKucha presentations. Retrieved on October 12, 2017 from:
https://paulgordonbrown.com/2014/12/13/your-ultimate-guide-to-giving-pechakucha-presentations/
Byrne, M. (2016). Presentation innovations: Using Pecha Kucha in nursing education. Teaching and Learning in
Nursing, 11(1), 20-22. doi: 10.1016/j.teln.2015.10.002
De Wet, C.F. (2006). Beyond presentations: Using PowerPoint as an effective instructional tool. Gifted Child Today,
29(4), 29-39.
Edwards, R.L. (2010). Pecha Kucha in the classroom: Tips and strategies for better presentations. Remixing the
Humanities. Retrieved on October 1, 2017 from
http://remixhumanities.wordpress.com/2010/11/03/pecha-kucha-in-the-classroom-tips-and-strategies-for-better-presentations/
Eriksen, K., Tollestrup, C., & Ovesen, N. (2011, September). Catchy presentations: Design students using Pecha Kucha.
Paper presented at the International Conference on Engineering and Product Design Education, London,
UK.
Fewell, N. (2015). Utilizing the PechaKucha format for presentation activities. OTB Forum, 7(1), 67-69.
Freeman, S. (2016). Comparing Pecha Kucha and traditional methods in occupational safety training. Greenville, NC:
East Carolina University.
Garber, A.R. (2001). Death by PowerPoint. Small Business Computing. Retrieved on October 2, 2017 from
http://www.smallbusinesscomputing.com/biztools/article.php/684871/Death-By-Powerpoint.htm
Genzuk, M. (n.d.). Pecha Kucha: Tips, resources, and examples. Retrieved on October 1, 2017 from:
http://cmmr.usc.edu/Pecha_Kucha_TipsResourcesExamples.pdf
Jones, J.B. (2009). Challenging the presentation paradigm (in 6 Minutes, 40 Seconds): Pecha Kucha. ProfHacker.
The Chronicle of Higher Education. Retrieved on October 1, 2017 from
http://chronicle.com/blogs/profhacker/challenging-the-presentation-paradigm-in-6-minutes-40-seconds-pecha-kucha/22807
Jung, F. (n.d.). Guide to making a Pecha Kucha presentation. Retrieved on October 12, 2017 from:
https://avoision.com/pechakucha
Kandybovich, S. (2015). 20 ideas for PechaKucha in the classroom. Retrieved on October 1, 2017 from:
https://eltcation.wordpress.com/2015/03/09/20-ideas-for-pechakucha-in-the-classroom/
Kelly, S. (2015). Teaching infographics: Visually communicating data for the business world. Business Education
Forum, 69, 35-37.
Klentzin, J.C., Bounds, E., Paladino, B., & Johnston, C.D. (2010). Pecha Kucha: Using “lightning talk” in university
instruction. Reference Services Review, 38(1), 158-167. doi: 10.1108/00907321011020798
Lehtonen, M. (2011). Communicating competence through PechaKucha presentations. The Journal of Business
Communication, 48, 464-481. doi: 10.1177/0021943611414542
Levin, M.A., & Peterson, L. (2013). Use of Pecha Kucha in marketing students’ presentations. Marketing Education
Review, 23(1), 59-64. doi: 10.2753/MER1052-8008230110.
Lortie, C.J. (2016). Ten simple rules for lightning and PechaKucha presentations. PeerJ Preprints. Retrieved on
October 12, 2017 from https://peerj.com/preprints/2326v1/
Lucas, K., & Rawlins, J.D. (2015). PechaKucha presentations: Teaching storytelling, visual design, and conciseness.
Communication Teacher, 29, 102-107. doi: 10.1080/17404622.2014.1001419.
McDonald, R.E., & Derby, J.M. (2015). Active learning to improve presentation skills: The use of Pecha Kucha in
undergraduate sales management classes. Marketing Education Review, 25(1), 21-25. doi:
10.1080/10528008.2015.999593.
Morrow, J.A., Kelly, S., & Shipley, L. (2017). Using PechaKucha to enhance communication students' understanding of
statistics. Unpublished manuscript (article under review)
Oliver, J., & Kowalczyk, C. (2013). Improving student group marketing presentations: A modified Pecha Kucha
approach. Marketing Education Review, 23(1), 55-58.
Paradi, D. (2017). Results of the 2017 Annoying PowerPoint survey. Retrieved on October 12, 2017 from:
http://www.thinkoutsidetheslide.com/wp-content/uploads/2017/10/Results-of-the-2017-Annoying-PowerPoint-survey.pdf
Rainie, L., & Anderson, J. (2017). The future of jobs and jobs training. Pew Research Center. Retrieved on October
12, 2017 from http://www.pewinternet.org/2017/05/03/the-future-of-jobs-and-jobs-training/
Smiciklas, M. (2012). The power of infographics: Using pictures to communicate and connect with your audiences.
Indianapolis, IN: Que Biz-Tech.
Soto-Caban, S., Selvi, E., & Avila-Medina, F. (2011). Improving communication skills: Using PechaKucha style in
engineering courses. Paper presented at the ASEE Annual Conference and Exposition, Vancouver, BC.
Retrieved on October 1, 2017 from https://www.asee.org/public/conferences/1/papers/2000/download
Swathipatnaik, D., & Davidson, L.M. (2016). Pecha Kucha: An innovative task for engineering students. Research
Journal of English Language and Literature, 4(4), 49-54.
Tomsett, P. M., & Shaw, M. R. (2014). Creative classroom experience using Pecha Kucha to encourage ESL use in
undergraduate business courses: A pilot study. International Multilingual Journal of Contemporary
Research, 2(2), 89-108.
Tufte, E. (2003). PowerPoint is evil: Power corrupts. PowerPoint corrupts absolutely. Wired. Retrieved on October 1,
2017 from https://www.wired.com/2003/09/ppt2/
Widyaningrum, L. (2016). Pecha Kucha: A way to develop presentation skill. Vision: Journal for Language and Foreign
Language Learning, 5(1), 57-73.