“Why silent types get the girl”: Misadventures in Science Journalism

Progress on my case studies was much slower than I anticipated. Over the past six weeks I spent about three to four hours on each of the twelve cases. My method was to read and take notes on each study until I understood the research thoroughly, then run several Google searches for related articles and videos, varying my search terms each time. I took notes on roughly 30 articles per case.

It quickly became frustrating.

After reading advice books for science journalists, I expected to find well-researched, interesting pieces. Instead, I felt like I was reading the same article over and over again. Many of the most popular sites apparently do not have strong science departments. I did find some talented journalists who took the time to restate research methods and findings in interesting ways and who got new, thoughtful quotes from the researchers. However, a good portion of the articles I was taking copious notes on sounded suspiciously similar to the researchers’ press releases.

One of the more interesting cases I worked on involves research by Molly Babel at the University of British Columbia. The research itself is fascinating, but I want to tell you about this case because it illustrates the problem of copying from press releases so well. Babel and her colleagues had a group of men and women from California listen to prerecorded words (“boot”, “hoop”, “deed”, “key”, “cot”, “sock”, etc.). They had previously recorded these words with the help of participants from California and other places west of the Mississippi River. The listeners rated the “attractiveness” of the voices on a scale from 1 to 9. The researchers found, among other things, that women preferred male voices that pronounced the words with shorter duration.

The press release published by the University of British Columbia stated that women preferred “men who spoke with a shorter average word length.” Phrasing it this way seems harmless once you understand the study’s methods: participants were judging a fixed list of short, single words. But for someone who hasn’t read the study, you can see how this could be misleading.

The Telegraph wrote a brief piece about the research that I judged to be based on the press release, because the quotations from Molly Babel it used were identical to those in the press release. Titled “Why silent types get the girl”, the article declares that researchers found that “men who use verbose language are deemed less attractive.” Yikes. This bold, mistaken conclusion is clearly a misunderstanding of “shorter average word length.” Reading the original study is time-consuming, and The Telegraph doesn’t specialize in science stories. But I think this case highlights the importance of going beyond press releases if you want to accurately inform the public about research findings.

To finish my project, I’m going to create network analysis maps showing how news about each piece of research traveled. I hope that a visual representation of these relationships will let me pinpoint some of the weak or incorrect reporting and show how it got that way. I will also write a short report on my findings, discussing what kinds of research get diluted, who makes these errors, and when in the process they tend to happen.
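For anyone curious how I plan to build these maps, here is a minimal sketch of how one case might be encoded, using Python’s networkx library (my tooling here is still tentative). The outlet names other than The Telegraph and the edge labels are placeholders for illustration, not my actual data.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Directed graph for one case: edges point from a source to the coverage that
# drew on it, and each edge label records how the downstream piece used it.
G = nx.DiGraph()
G.add_edge("Babel et al. study", "UBC press release", relation="summarized by")
G.add_edge("UBC press release", "The Telegraph", relation="quotes copied verbatim")
G.add_edge("UBC press release", "Outlet A (placeholder)", relation="paraphrased")
G.add_edge("Babel et al. study", "Outlet B (placeholder)", relation="independent interview")

# Draw and save the map; a fixed seed keeps the layout reproducible.
pos = nx.spring_layout(G, seed=1)
nx.draw_networkx_nodes(G, pos, node_color="lightgray", node_size=2500)
nx.draw_networkx_labels(G, pos, font_size=8)
nx.draw_networkx_edges(G, pos, arrows=True)
nx.draw_networkx_edge_labels(
    G, pos, edge_labels=nx.get_edge_attributes(G, "relation"), font_size=7
)
plt.axis("off")
plt.savefig("babel_case_coverage_map.png", dpi=150, bbox_inches="tight")
```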

Comments

  1. bmdeschaine says:

    Yikes! You highlight an interesting story of miscommunication in science reporting. I think it is notable that the error traces back to the researchers’ press release, because aren’t press releases supposed to honestly inform the public about scientific research? I took a brief look at the links you kindly provided, and the press release really does mislead the reader! I think most people would read it and reach the same mistaken conclusion as the Telegraph reporters.

    You seem to suggest that reporters should be reading the original papers when reporting on scientific research, but I personally find that expectation to be highly unrealistic. I don’t think laypeople and reporters – even “science reporters” – can be expected to understand papers that are often written for highly specialized publications and audiences.

    I am more concerned that a press release could be so misleading, because I previously would have considered a press release to be a reliable primary source for science news!

  2. It seems as though the University did its job, though: the press release never actually identified people who speak less as being more attractive. This looks like lazy science reporting; after all, few press releases go to the trouble of practically writing the story for reporters.