Analysis: How multimedia can improve learning
New research sheds light on students’ ability to process multiple modes of learning
By Meris Stansbury, Assistant Editor, eSchool News
An analysis of existing research supports a notion that already has begun to transform instruction in schools from coast to coast: that multimodal learning–using many modes and strategies that cater to individual learners’ needs and capacities–is more effective than traditional, unimodal learning, which uses a single mode or strategy.
According to a new report commissioned by Cisco Systems, adding visuals to verbal (textual and/or auditory) instruction, when applied appropriately, can result in significant gains in basic or higher-order learning. Students using a well-designed combination of visuals and text learn more than students who use only text, the report says.
It also provides insights into when interactivity strengthens the multimodal learning of moderate to complex topics, and when it’s advantageous for students to work individually.
“There is a lot of misinformation circulating about the effectiveness of multimodal learning,” said Charles Fadel, Cisco’s global education leader. “As curriculum designers embrace multimedia and technology wholeheartedly, we considered it important to set the record straight, in the interest of the most effective teaching and learning.”
The report, titled Multimodal Learning through Media: What the Research Says, was prepared by the Metiri Group, which serves the education community through a broad range of consulting services. It is the third in a series of metastudies that address “what the research says” about various topics in education; prior reports tackled technology in schools and education and economic growth.
Information was gathered for the report using meta-analysis, a technique that combines the results of several studies addressing a set of related research hypotheses. Only studies published after 1997 and addressing the use of multimedia in education were considered.
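The report does not spell out its statistical pooling procedure, but a common fixed-effect approach is to weight each study’s effect size by the inverse of its variance. The sketch below illustrates that idea with made-up numbers; the effect sizes and standard errors are hypothetical, not figures from the Metiri analysis.

```python
# Minimal sketch of how a fixed-effect meta-analysis pools results across
# studies using inverse-variance weighting. All numbers are hypothetical.
import math

# Hypothetical (effect size d, standard error) pairs for three studies.
studies = [(0.45, 0.12), (0.60, 0.20), (0.30, 0.15)]

weights = [1 / se**2 for _, se in studies]                      # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))                         # standard error of the pooled estimate

print(f"pooled effect size d = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```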
“The real challenge before educators today is to establish learning environments, teaching practices, curricula, and resources that leverage what we know about the limitations of human physiology and the capacity explained by the cognitive sciences to augment deep learning in students,” says the study.
How students learn
New information about how we acquire knowledge is available through functional magnetic resonance imaging (fMRI) of the human brain at work, as well as rapid sampling techniques that reveal patterns of brain activity over time as people read, listen, talk, observe, think, multitask, and perform other mental tasks.
In its introduction, the Metiri Group report indicates that the brain has three types of memory: sensory memory, working memory, and long-term memory.
Working memory is where thinking gets done. It is dual-coded, with one buffer for storing verbal or text elements and a second buffer for visual or spatial elements. This short-term memory is thought to be limited to approximately four objects that can be held simultaneously in visual or spatial memory and about seven objects that can be held simultaneously in verbal memory.
Within working memory, verbal/text memory and visual/spatial memory work together, without interference, to strengthen understanding. However, overfilling either buffer can result in cognitive overload and weaken learning.
Recent studies also suggest that convergence, or sensory input combined with new information at the same time, has positive effects on memory retrieval. It creates linked memories, so that the triggering of any aspect of the experience will bring to consciousness the entire memory.
Sensory memory is caused by experiencing anything through the five senses (sight, sound, taste, smell, and touch) and is involuntarily stored in long-term memory as episodic knowledge. However, these sensory memories degrade very quickly, and it’s only when the person pays attention to elements of sensory memory that these experiences get introduced into working memory. Once an experience is in a student’s working memory, the learner then can consciously hold that experience in his or her memory and can think about it in context.
Long-term memory is nearly unlimited, and it’s estimated that a person can store up to 10 to the 20th power bits of information over a lifetime–equivalent to 50,000 times the text in the U.S. Library of Congress (30 million cataloged books; 58 million manuscripts).
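As a rough sanity check on that comparison, the report’s figures can be turned into a back-of-envelope calculation. The sketch below simply works out what those numbers imply about the average amount of text per Library of Congress item; the per-item estimate is an inference from the figures above, not data from the report.

```python
# Back-of-envelope check of the capacity comparison above. The multiple
# depends entirely on how much text one assumes per Library of Congress
# item, so this only shows what the report's figures imply.
capacity_bits = 10**20                      # estimated lifetime storage, per the report
loc_items = 30e6 + 58e6                     # cataloged books + manuscripts cited above

implied_loc_bits = capacity_bits / 50_000   # report: capacity = 50,000 x LoC text
implied_loc_bytes = implied_loc_bits / 8
per_item_bytes = implied_loc_bytes / loc_items

print(f"implied LoC text size: {implied_loc_bytes / 1e12:.0f} TB")
print(f"implied text per item: {per_item_bytes / 1e6:.1f} MB")
```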
The brain has two types of long-term memory: episodic and semantic. Episodic comes from sensory input and is involuntary. Semantic memory stores memory traces from working memory, including ideas, thoughts, schema (chunks of multiple individual units of memory that are linked into a system of understanding), and processes that result from the thinking accomplished in working memory.
To show how a student might use all types of memory when learning, the report cites a female student in a science lab, working as part of a team on the development of an architectural design. All sensory activity is involuntarily encoded in her memory through dual sensory channels: The student’s verbal/text memory might include side conversations, noise from other teams, and so on, while the visual/spatial memory might include the current architectural drawings on the screen or paper, facial expressions, and physical movements by others.
Only if she consciously considers each sensory input, or reflects further on a particular side conversation (such as one about the traffic patterns associated with various building designs), will the memory move from short-term to long-term. As the student considers this side conversation about traffic patterns, she will be able to cue up memories from her personal experiences stored in long-term memory that have enriched her thinking. It’s important to understand that while this learning is occurring, the student could become distracted by something (such as an office announcement) and might experience “attention blink,” losing sight of everything else around her owing to cognitive overload.
This cognitive overload won’t prevent the student from continuing to register input involuntarily, but these experiences will be stored only in short-term memory and so won’t last long. Furthermore, as the student consciously considers each sensory input, her cognitive control function limits attention to serial consideration of ideas and concepts–meaning that her cognitive processing is slowed and multitasking is inefficient. Thinking, decision-making, and the cueing of long-term memories all require the central cognitive processor, which works only serially. Therefore, the more sensory input there is, the greater the risk of overload–and the greater the risk that information never reaches long-term memory.
The report concludes its overview of cognitive science by citing the principles listed by the National Academy of Sciences in its 2001 publication, How People Learn:
- Student preconceptions of curriculum must be engaged in the learning process. Only when the student has the opportunity to correct misconceptions, build on prior knowledge, and create schemas for understanding a topic will learning be optimized.
- Expertise is developed through deep understanding. Students learn more when the concepts are personally meaningful to them. This translates into a need for authentic learning in classrooms: learning that is deep, relevant beyond the classroom, and involves students in using the key ideas to produce something.
- Learning is optimized when students develop “metacognitive” strategies. To be metacognitive is to be constantly “thinking about one’s own thinking.” Students who are metacognitive are students who approach problems by automatically trying to predict outcomes, explaining ideas to themselves, noting and learning from failures, and activating prior knowledge.
Multimedia and learning
So, what does the science of learning tell us about the use of multimedia during instruction? As the report suggests, multimedia is one modality of learning that can help students learn more efficiently when applied properly, because convergence–or sensory input simultaneously combined with new information–has positive effects on memory retrieval.
But too much sensory input can lead to cognitive overload, the report cautions, so educators must be careful to use multimedia appropriately.
Based on the work of Richard Mayer, Roxanne Moreno, and other researchers, the Metiri Group report synthesized a list of learning principles for multimedia:
- Multimedia Principle: Retention is improved through words and pictures rather than through words alone.
- Spatial Contiguity Principle: Students learn better when corresponding words and pictures are presented near each other, rather than far from each other on the page or screen.
- Temporal Contiguity Principle: Students learn better when corresponding words and pictures are presented simultaneously rather than successively.
- Coherence Principle: Students learn better when extraneous words, pictures, and sounds are excluded rather than included.
- Modality Principle: Students learn better from animation and narration than from animation and on-screen text.
- Individual Differences Principle: Design effects are higher for low-knowledge learners than for high-knowledge learners. Also, design effects are higher for high-spatial learners than for low-spatial learners.
- Direct Manipulation Principle: As the complexity of the materials increases, the impact of direct manipulation (animation, pacing) of the learning materials on the transfer of knowledge also increases.
Therefore, students engaged in learning that incorporates multimodal designs, on average, outperform students who learn using traditional approaches with single modes, the report says.
For example, based on the meta-analysis, the average student’s scores on basic skills assessments increase by 21 percentile points when the student is engaged in non-interactive multimodal learning (which includes pairing text with visuals, pairing text with audio, and watching and listening to animations or lectures that use visuals effectively), compared with traditional, single-mode learning.
When students shift from non-interactive to interactive multimodal learning (such as engagement in simulations, modeling, and real-world experiences, most often in collaborative teams or groups), the gains on basic skills are smaller, averaging 9 percentile points.
However, when the average student engages in higher-order thinking using multimedia in interactive situations, that student’s percentile ranking on higher-order or transfer skills increases by 32 percentile points over what he or she would have achieved with traditional learning.
When the context shifts from interactive to non-interactive multimodal learning, the gain is 20 percentile points over traditional means.
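Percentile gains like these are conventionally derived from standardized effect sizes (Cohen’s d): assuming roughly normal score distributions, an effect size of d moves the average student from the 50th percentile of the comparison group to the percentile given by the standard normal cumulative distribution at d. The sketch below applies that convention to the gains quoted above to show the effect sizes they imply; it illustrates the convention, not the report’s own computation.

```python
# Convert between percentile-point gains and standardized effect sizes
# under the usual normal-distribution convention.
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, standard deviation 1

def percentile_gain(d: float) -> float:
    """Percentile-point gain over the 50th percentile for effect size d."""
    return 100 * std_normal.cdf(d) - 50

def effect_size(gain_points: float) -> float:
    """Effect size implied by a reported percentile-point gain."""
    return std_normal.inv_cdf((50 + gain_points) / 100)

# Approximate effect sizes implied by the gains reported above.
for label, gain in [("basic skills, non-interactive multimodal", 21),
                    ("basic skills, interactive multimodal", 9),
                    ("higher-order skills, interactive multimodal", 32),
                    ("higher-order skills, non-interactive multimodal", 20)]:
    print(f"{label}: +{gain} points  ->  d ~ {effect_size(gain):.2f}")
```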
Based on these principles and statistics, the report lists a few tips for educators on how to teach students using multimedia:
- Know the importance of the attention and motivation of the learner. “Scaffolding”–the act of providing learners with assistance or support to perform a task beyond their own reach–has proven effective when it reduces extraneous diversions and focuses the learner’s attention on elements aligned to the topic.
- Know the importance of separating the media from the instructional approach. A recent meta-analysis of more than 650 empirical studies comparing media-enabled distance learning with conventional learning found pedagogy to be more strongly correlated with achievement than media. Media and pedagogy must be defined separately.
Where to go from here
While the report’s analysis provides a clear rationale for using multimedia in learning, the research in this field is evolving, its authors say. Recent articles suggest that efficacy, motivation, and volition of learners, as well as the type of learning task and the level of instructional scaffolding, can weigh heavily on learning outcomes from the use of multimedia.
Future research will focus on the social affordances that multimedia representations provide, the scaffolding required to prepare students to effectively use multimedia representations, and the learning designs necessary to minimize cognitive overload, the report suggests.
The report concludes by noting: “The convergence of the cognitive sciences and neurosciences provides insights into the field of multimodal learning through Web 2.0 tools. The combination will yield important guideposts in the research and development of eLearning using emergent, high-tech environments.”
Links:
Multimodal Learning through Media: What the Research Says
Cisco Systems
The Metiri Group
eSchoolNews