      What Black Mirror Teaches Us About the State of Scientific Research

    11/11/2016

    This month Netflix began airing season 3 of Black Mirror, an acclaimed, darkly satirical British TV anthology that explores the dystopian, sometimes horrifying effects that modern technology can have on our lives. Think of it as a modern-day “Twilight Zone” that is well worth a binge-watch. The first episode, entitled “Nosedive”, took a stab at Facebook, Uber, Yelp and social media’s ratings economy. In this world, people rate each other out of five stars after every interaction, depending on how well they feel it went (rather like the real-life Peeple app that was publicly rebuked). The higher your star rating, the more entitlements you garner and the more choice you have in attaining a privileged life. The protagonist of the episode, Lacie Pound, wants to buy an upscale house, but she needs to improve her metrics by interacting with “quality people” in order to qualify for a discount. Lacie’s pursuit of this ambition leads her to reconnect with a highly rated but superficial, abrasive friend and, ultimately, onto a doomed quest to attend that friend’s wedding. Her desperate attempts to impress people and chase a higher score lead to her downfall. Seeking approval from friends and strangers to build social capital is fundamental to human nature - this drive is probably what pushed our ancestors to crawl out of caves and invent the wheel instead of swinging aimlessly from trees. But a crippling dependence of one’s life and ambitions on constant, quantified social validation highlights a sinister aspect of our society that we should all want to avoid.

    A scene from "Nosedive":

    How is this relevant to scientific research? Quite a lot, actually. As scientists we are constantly seeking approval and second-guessing what reviewers will say. Peer review is a long-established tradition that is considered a hallmark of scientific integrity. The principles of refereeing date back to 18th-century England, when manuscript referees were first proposed at the Royal Society - ironically, as a way to advocate science to the public. To publish a scientific paper today, you must submit your findings to a journal editor, who sends the manuscript off to two or three reviewers - who may or may not be your competitors in the same research discipline. Whether the paper is accepted, sent back for revisions or rejected depends entirely on the opinions of those reviewers. If a reviewer is on friendly terms with you, your manuscript may have a better chance of passing through the gauntlet. If, on the other hand, a reviewer has a dim view of you or your previous work, they may be biased against your subsequent manuscripts. A similar process plays out when you seek grant funding. In the case of NIH grants, your proposal is assigned to a scientific review officer, whom you have to talk to and try to impress. The officer then assigns your proposal to a study section of 20 or more reviewers. The more people you know and are friendly with on that study section, the higher the score you are likely to get. And once you have been awarded a high score, you may be inclined to score those reviewers’ grants highly in return. It’s all very quid pro quo.

    Since space in high-profile journals such as Nature, Cell and Science comes at a premium, peer reviewers and editors tend to give higher scores to authors they know, or to established laboratories that are already well funded. Such well-funded laboratories then earn better scores in subsequent NIH funding cycles and go on to produce even more work in high-tier journals. Big laboratories are, of course, often reputable because they previously made major breakthroughs. Increasingly, though, these labs gain funding and publish papers regardless of whether the research is genuinely accurate or novel. In a recent Nature survey, more than 70% of researchers reported having tried and failed to reproduce another scientist’s experiments. Much of this is due to flawed experimental design and sloppy treatment of statistics; sometimes, though, it is due to fraud and people gaming the system. The top percentile of scientists gain the most recognition and success, often at the expense of the less well-resourced majority. Subpar work that is not only left unretracted but celebrated in prestigious journals is causing a lot of consternation - rather like how global inequality (or opinions about it) has given rise to the likes of Donald Trump.

    At a fundamental level, promotions and jobs in academia depend on your metrics of publication success, which in turn can depend on who you know and how well you interact with your peers. It is accepted dogma that young investigators have to attend meetings and become chummy with the journal editors who might review their papers and grants. A scientist, just like Lacie Pound in “Nosedive”, must go on a quest to raise their social standing with well-established people at fancy social functions, such as a Gordon conference or a Cold Spring Harbor retreat, in order to move upward in academia. Failure to establish a good reputation can mean the drying up of further funding or publications. The problem is that scientific research is supposed to be objective, a pursuit in which exploration and creativity are championed over social climbing and elbow-rubbing. Medical advances against diseases like cancer, heart disease and Alzheimer’s depend upon objective judgment of verifiable data. When scientists are instead forced to chase bibliometric scores for financial stability and recognition, actual scientific discovery stagnates.

    A survey of how many scientists have experienced failure to reproduce results from other labs:

    Even more pernicious to science is the phenomenon in which large numbers of people appear to make independent judgments on complex questions but in fact follow one another, until everyone assumes the consensus must be correct - a corruption of the “wisdom of crowds” popularized by James Surowiecki. In the “Nosedive” episode, everyone’s score is displayed in real time beside their face. When Lacie talks to an office friend who offers her a cookie, she wonders why his star rating has slipped into the “3” range. She is quickly swayed by a colleague into giving him a lower rating herself, simply because public opinion at the office has turned against him. Both good and bad information spreads quickly, especially on social media. In science, public advocacy can play a similar role. There are obviously good sides to science advocacy when it is not commandeered by a political goal, but too often this is not the case. When Andrew Wakefield published his misleading studies linking the MMR vaccine to autism, public perception of vaccination turned, resulting in millions of families refusing to accept this essential protection. That war against vaccination continues to rage today, years after the papers were retracted. When David Goulson and Jonathan Lundgren claimed that neonicotinoid pesticides caused ailments in bees and a drop in bee populations, the idea spread like wildfire. However, Goulson and others were later reported to be part of a pesticide action and conservation group, an activist network devoted to eliminating pesticides by publishing dubious claims in high-impact journals.

    The other side of the problem is that the public can be turned into a weapon against individual scientists whenever the social media consensus sees fit. Strangely enough, the final episode of Black Mirror this year featured a story about killer bees. In a not-so-distant future, natural bees have died out and a government-sponsored company has engineered millions of drone bees to replace them. A computer hacker decides to play a deadly game of online bullying by hacking one drone bee each day to target and kill the most hated and vilified person on Twitter - before finally turning the entire drone swarm on the netizens who sent out the hate-tweets themselves, causing an apocalyptic killing spree. While the science behind designing a drone bee population that can pollinate and make honey is very aspirational, the act of using Twitter as hate mail and destroying lives is not. Knee-jerk, instant public shaming is now a real problem in society and in science. Just last week a Princeton professor named Susan Fiske wrote about the "uncurated, unfiltered denigration" of individuals via "new media" (Twitter and Facebook), which is destroying people's careers. Anonymous trolling and online firestorms can now haunt any researcher who has had a retraction featured on Retraction Watch, regardless of whether the violation was big or small. One of the most famous victims of mass Twitter hysteria was Tim Hunt, the Nobel Laureate at University College London who was shamed into resigning his honorary post after making coarse remarks about women in the lab. After building a decades-long career in biochemistry and winning the Nobel Prize, his 39-word off-the-cuff remark, broadcast on Twitter, undid him in just a few days. It is true that women are under-represented in science, and it is true that Hunt should have chosen his words more carefully. Even I remember scolding him and sharing the story on my then-active Facebook account. But the vitriol thrown at scientists and academics who do not conform to today's views of political correctness, and the speed with which they are hurled out the door, is entirely new to our digital age. Unless the public learns to process what scientists say more slowly and carefully, we could end up actually killing off smart research minds in the future.

    Tim Hunt: Nobel Laureate in Physiology or Medicine who was disgraced by a single tweet:

    What can be done about this dysfunction?

    We need to rebuild a certain level of trust among laypeople, scientists and communicators of science. A couple of ideas have emerged recently, following a backlash of complaints about gaming of the publication system. One is that we can evaluate scientists not solely on their bibliographic metrics but on how much value their work contributes to society. Writing in a Comment in Nature, Rinze Benedictus and Frank Miedema of Utrecht University suggest a new way of evaluating faculty candidates, which they have recently instituted in their departments. Candidates are asked to write an essay describing which of their publications they think contribute the most to society. They are also judged on multiple elements, only one of which is bibliographic metrics. New faculty are effectively judged on their citizenship values on top of their publication record. Here is a table of some of the elements, which I think are very good ideas:

    A second idea is that graduate students can give feedback on how well faculty perform. At Utrecht University, graduate students are invited every year to award a “supervisor of the year” prize to high-level faculty. They are also encouraged to discuss ways to improve the curriculum with faculty. This kind of feedback, however, relies on the individual experiences of students and their relationships with their mentors.

    Many of these suggested evaluations are, of course, subjective themselves. Furthermore, as much as I have lauded citizen science and the public advocacy of one’s own research, there are pitfalls to presenting data too early. So we cannot dismiss the importance of a good publication record and grant-award history altogether. But by instilling the values of citizenship and societal contribution, an institution can foster more integrity among its staff and students. Only by meeting higher, modernized standards of scientific integrity can we shield ourselves against Twitter hate campaigns and career-wrecking online firestorms. As humans we are always seeking approval from others, and to some extent this has benefited us throughout our evolution. But it is important not to be so blinded by quantifiable metrics or public hysteria that we stop providing opportunities for talented people to rise through the ranks of the workforce.

     

    References

    Peer review, troubled from the start
    http://www.nature.com/news/peer-review-troubled-from-the-start-1.19763

    Fewer Numbers, Better Science
    http://www.nature.com/news/fewer-numbers-better-science-1.20858

    Advocacy research discredits science
    http://www.forbes.com/sites/henrymiller/2016/10/05/advocacy-research-discredits-science-aids-unprincipled-activism/#1ad7c4e168d3

    Nature survey on reproducibility: 1,500 scientists lift the lid on reproducibility
    http://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970

    Too much public shaming
    https://www.statnews.com/2016/11/04/public-shaming-science/

    Susan Fiske essay
    http://www.psychologicalscience.org/observer/a-call-to-change-sciences-culture-of-shaming#.WCZ5uuErIUE

    Tim Hunt incident
    http://www.spiked-online.com/newsite/article/tim-hunt-how-public-shaming-harms-academia/17061#.WCXwPKIrJE4
    https://www.theguardian.com/science/2015/jun/13/tim-hunt-hung-out-to-dry-interview-mary-collins