Recently, I started reading a book called “Be Excellent at Anything.” It’s written by the leadership of a top management consulting company, and its premise is that renewing our relationship to four “core needs” - physical, mental, emotional, and spiritual - can make us higher performers at work. The authors support this philosophy, and their business, with various scientific studies and quotes.
I’m generally enjoying the book. It’s well-written and thought-provoking. The ideas are worthwhile. But the way science is used in it grates on me. It feels incredibly obvious that whoever researched this book went out and cherry-picked studies to support the ideas, rather than the ideas actually being based on science.
And I think it’s time we admit that when we say something that is not science is “scientifically proven,” we are actually saying that we have quoted enough studies to make it feel that way.
What got me thinking about this was how, early in the book (page 5, to be precise), the authors hang their hat on one piece of scientific evidence in particular. It’s a 1993 study by Anders Ericsson “designed to explore the power of deliberate practice in violinists.”
"Over the years, numerous writers, including Malcolm Gladwell in his best-selling Outliers, have cited Ericsson’s study for its evidence that intrinsic talent may be overvalued.”
That sentence says to me that one of the authors read Outliers, and then thought, “That also supports my point! Put it in my book!” They use the study to support not an idea about talent, which is what it was designed to test, but their own idea that 90 minutes is the ideal amount of time for focused work - and that employees should therefore take short breaks, mental refreshers, throughout the workday. Their brief description of the study immediately raised some red flags for me.
1. It’s a small study. Only 30 violinists participated, and they were divided into three groups of ten. That’s not a lot of data to draw vast conclusions about the way all people work.
2. The groups were not equal. The first group was selected from violinists whose professors thought they were destined to be soloists; the second from those thought good enough to win a place in a professional orchestra, but not to be showcased; and the third from the music education department, students who would likely never play professionally. The groups were then compared by how they practiced.
Obviously, the third group did not practice as much as the first two groups, because they did not have the goal of being professional musicians. Their focus was to become educators. So why should they be judged on the same playing field as the first two groups? The author of the study may have been sensitive to this distinction, but the authors of the book characterized them as low performers. This seems unfair. I might be especially sensitive to the unfairness as I’m married to a music education grad student.
3. The data on practice times is self-reported, starting from the time the musicians were eight years old. How many of us remember accurately how long we did something when we were eight? Back in those days, five minutes could feel like a half hour. I remember that my mom told me to practice my oboe for half an hour every day, but did I scrape by with 15 minutes? Obviously I’m not a great musician today with that kind of attitude, but my point is that self-reported data is not the most reliable.
As I read on, it annoyed me that the authors were using such a flimsy study, now over 20 years old and never replicated, to support major ideas. Over dinner last night, I brought it up to my husband, who had already heard of the study from a music psychology class. He found it on Google Scholar, which showed it had been cited in almost 4,500 other publications. That probably doesn’t count the number of times it has been referenced in popular books. My husband is reading a non-fiction book called Quiet, about the power of introverts, and stumbled over the study in his reading just after our conversation. That book quoted Ericsson from an interview with the author of yet another book.
It’s almost as if this study became an input into some pop-psychology writer hive mind, where any one author could draw it out from whatever keywords it matched and find it ready to pin to their own ideas. And this is repeated over and over again, patching together a convincing-enough base of “scientific support” from scraps of studies and popular quotes.
Is this a problem? I would argue that it is. People are used to seeing science used in this way, to bolster preconceived ideas (hello, Malcolm Gladwell!). It undermines scientific literacy - that is, our ability to truly understand how science works. We are comfortable with reading scientific conclusions, ready-made to agree with whatever we want to think. And science doesn’t always readily agree. In a world of ready-made conclusions, we prefer not to know about the scientific process or its uncertainty, or to admit that what we nod our heads along to now may be utterly disproven in a few years. That comes back to bite us when we need to deal with socially complex scientific questions (hello, climate change!).
Be Excellent at Anything has good intentions - it wants you to work happier and better - but it ends up falling into the common trap of popular psychology. The trouble is that most of its readers don’t realize they’ve fallen into that trap with it.