Text vs Visual Learning Tools. Which is better? How do we know?

This is part 1 of a series of posts reviewing the origins of scientific claims that compare visuals vs text.

I’m always up for learning new facts and findings about the power of visual communication. From personal experience, I know that showing drawings is much easier than explaining concepts and facts in spoken or written words. Lately I’d been wishing I had access to scientific findings from controlled experiments on the effectiveness of text vs. visuals.

In my recent quest to research and write more about visual communication, I kept running into the same research results. Here are a few, quoted on various websites and by specialist companies as proof that using visuals is better than using text alone.

  • Psychologist Albert Mehrabian demonstrated that 93% of communication is nonverbal.
  • Research at 3M Corporation concluded that we process visuals 60,000 times faster than text.
  • In 1986, a 3M-sponsored study at the University of Minnesota School of Management found that presenters who use visual aids are 43% more effective in persuading audience members to take a desired course of action than presenters who don’t use visuals.
  • The University of Wisconsin found that visuals improved learning by 200%. Independent research discovered that using imagery took 40% less time to explain complex ideas. Harvard University concluded that visual communication improved retention by 38%.

How very interesting! All this research proving that illustrations are more useful than words alone! All right! I even remember reading the factoid in the third bullet, about the Minnesota study, in a Japanese book about communication.

Hold on, there’s something funny about some of these claims….

My work depends on scientific and medical accuracy. I constantly read journal articles and attend scientific talks. I am very familiar with the general setup of a research paper. I work daily with dedicated and hard-working researchers and surgeons.

So naturally, I wanted to read the original texts so I could properly cite them and further my research on this matter. Since I was most familiar with the 3M/Minnesota study, I decided to get my hands on that paper first. I started by looking for citations on the websites where I had found the original claims.

Umm…ok? No citations on the first website. The second website had a citation section, which I followed. After some fiddling with the search box, I found an article with no author’s name and no citations of its own. Drat. My initial search ended here.


During my expedition on Google, I came across some nice investigative bloggers at cogdogblog and enveritasblog covering the second bullet point, the claim that the brain processes images 60,000x faster than text. From their findings, no real source material for this claim has been uncovered so far. By this point, I’m getting pretty skeptical about some of these research findings. Are these numbers true? Can we trust anything anymore?

Finally, I found a PDF from a “working paper series” titled

“Persuasion and the Role of Visual Presentation Support: The UM/3M Study” prepared by Douglas R. Vogel, Gary W. Dickson, and John A. Lehman.

Yes! So let’s read this. As the “working paper series” label implies, the work was not published in a peer-reviewed journal. As far as I can tell, it has remained a work in progress since its release in 1986.

Abstract:

Check. It sounds a little casual and very broad, but it’s ok. Cool.

Introduction:

First lines. In bold. Center-aligned.

Presentations using visual aids were found to be
43% MORE PERSUASIVE
than unaided presentations.

I’ve never seen an introduction that begins with a conclusion and a persuasive phrase. The introduction is where the research question is posed and relevant information from previous work is shared to orient the reader. Using bold caps and different formatting in the body text is also very strange. Most people might stop reading here: hey, we’ve got an official research paper showing that visuals help presentations! But as any researcher would, I kept going.

The paper goes on to mention an empirical study conducted in 1981 at the Wharton School of the University of Pennsylvania. The researchers’ objective was to go beyond U Penn’s findings. What did the U Penn researchers find? It’s not mentioned. There isn’t a reference section in this paper, so I had to go looking for this study from 1981.

It turns out that the empirical study is in a book titled “Studying Visual Communication” by Sol Worth and Larry Gross. (Link to download the 226-page book in PDF.) From scanning and searching the book, I found some experiments involving showing films to people who are unfamiliar with a certain kind of culture or technology. I’ll come back to this book later.

I was somewhat concerned to find the phrase “Overall, the presentations using visual support were 43% more persuasive.” again at the bottom of the first page of the introduction. Typically, a paper poses the main question it wants to answer at the end of the introduction. It sounds like they already know the answer. I wonder where the 43% figure gets explained.

Materials and Methods

The paper continues with the methodology of the study. Three weeks prior to the presentation/experiment, a group of undergraduate students was asked to take a pre-presentation questionnaire about their interest in spending time and money to attend a series of sessions (a time management course, to be exact).

The presentation offered 6-hour sessions (two 3-hour blocks) for $15. The students could take up to 10 sessions, but there was no minimum number of sessions required.

It would’ve been nice to have the questions in an appendix for repeatability, but no. Were they just yes/no questions, multiple choice, or a 1–5 scale?

The students were divided into 9 groups of about 35 each, and each group saw the same 10-minute video recording of the presentation. However, one group got no visuals, while the remaining 8 groups got different presentation support materials projected during the presentation.

The variables for the groups were as follows:

  • color projections vs black/white projections,
  • plain text vs text+clip art/graphs,
  • 35mm slides vs overhead transparencies.

That’s 3 binary variables, so 2 × 2 × 2 = 8 combinations, which accounts for the 8 groups with visuals. Also note that one of the variable levels is “plain text.” So if we are comparing text vs. visuals, there’s a problem in this research already.
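The three binary variables above form a full factorial design. A quick sketch, using hypothetical labels for the variable names and levels (the paper describes the factors but not these exact strings), enumerating how 3 two-level factors yield the 8 visual-support groups:

```python
from itertools import product

# The three binary presentation variables described in the paper,
# with hypothetical names/labels for illustration only.
variables = {
    "color": ["color", "black/white"],
    "content": ["plain text", "text + clip art/graphs"],
    "medium": ["35mm slides", "overhead transparencies"],
}

# Every combination of the three two-level factors: 2 x 2 x 2 = 8,
# matching the 8 visual-support groups; the 9th group is the
# no-visuals control and falls outside this design.
combinations = list(product(*variables.values()))
for i, combo in enumerate(combinations, start=1):
    print(f"Group {i}: {combo}")
print(f"Total visual-support groups: {len(combinations)}")
```

This is just design arithmetic, but it shows why 3 variables can cover 8 experimental groups without contradiction.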

The authors stressed that the presentation was given by an “average” speaker. I didn’t understand at first why this was important. I assume it was so that a wonderful speaker wouldn’t steal the show or skew the results with his or her presenting skills.

Immediately after the presentation, the students were asked to take a post-presentation questionnaire indicating their interest in spending time and money to attend the course described in the presentation. They were also asked questions about the presentation itself, to measure how well they understood the material presented in the video, and about the legibility of the visual aids.

Finally, ten days after the experiment, the students were asked to take one more questionnaire, this time measuring how well they had retained the information from the presentation.

So in short, the 9 groups of students (8 groups with visuals, 1 group without) answered a total of three questionnaires. The sheer difference in group sizes (about 35 students without visuals vs. about 280 with) seemed sort of odd.

End of part 1. Next up! Results. So how did the researchers get that magical number?

Part 2: Results, part 1
Part 3: Results, part 2
Part 4: Discussion
Part 5: New experiment begins and wrap-up