The Problem of Bad Research!


John Ioannidis published an essay titled “Why Most Published Research Findings Are False”, in which he argued that the results of many medical studies cannot be reproduced by other researchers. This is obviously a problem! A survey by the journal Nature found that 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half admit to having failed to reproduce one of their own.

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 “landmark” studies, from reputable labs, for his team to reproduce. He sought to double-check the findings before trying to build on them for drug development. Only six of the 53 could be replicated, causing huge problems for those trying to build on the rest.

So, what might be causing this problem? Part way through his project to reproduce these landmark cancer studies, Begley met with the lead scientist of one of the studies. He told the scientist that he had gone through the paper carefully, re-did the experiment 50 times, and never got the published result. The scientist replied that they’d done the experiment six times, got the result once, and put that one in the paper. Such selective publication is just one reason the scientific literature is peppered with incorrect results.

Many blame the hypercompetitive academic environment and diminishing funding. The surest ticket to getting a grant or a good job is getting published in a high-profile journal, and this can lead researchers to cut corners. Obviously, this is most concerning in the world of medicine, but the same problem shows up in other fields, and some incredibly influential and commonly accepted findings have failed to replicate.

In 2011, Joseph Simmons, a psychologist at the University of Pennsylvania, published a paper in the journal Psychological Science, where he showed that people who listened to the Beatles song “When I’m Sixty-Four” grew younger, by nearly 18 months. The result was obviously ridiculous, but the point the paper made was serious. It showed how standard scientific methods, when abused, could generate statistically significant support for an absurd conclusion. Scientists have been shocked to discover that what they used to consider reasonable research practices were flawed and likely to generate false positives, a phenomenon labeled the “replication crisis” by the press.


Campbell Harvey, a professor of finance at Duke University, argues that at least half of the 400 supposedly market-beating strategies identified in top financial journals over the years are false. He says the first step in dealing with the replication crisis in finance is to accept that there is a crisis, and right now many in the field don’t. Harvey is the former editor of the Journal of Finance, a former president of the American Finance Association, and an adviser to investment firms. He has written more than 150 papers on finance, several of which have won prizes. This is not like a child saying the emperor has no clothes; his criticism of the rigor of academic research comes from the top of the field.

Obviously, the stakes of the replication crisis are much higher in medicine, where people’s health can be at risk, than in the world of finance, but flawed financial research is often pitched to the public, either through the press or by fund management companies looking to raise assets. Bad research can end up in people’s portfolios and can affect their retirements.

While Ioannidis’s 2005 paper has been criticized over time for its use of dramatic and exaggerated language, most academics do agree with its conclusions and its recommendations. So, let’s look at where the problems come from. In statistics, we don’t try to prove that something is definitely true; instead we show how unlikely it is that we would have found our test results if the underlying process was random. This approach is based on the principle of falsification: we can never prove that something is definitely true, we can only prove that something is false. Statistical hypothesis tests thus never prove a model is correct; they instead show how unlikely it is that we would have gotten our test results if the idea being tested was incorrect.

The p-value in hypothesis testing measures the evidence against a null hypothesis: the smaller the p-value, the stronger the evidence that our results are not just a product of chance. In medicine we might test whether a given drug is actually helpful, or in finance whether cheap stocks outperform over time. P-values less than .05 are generally considered statistically significant.
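The logic above can be sketched with a toy calculation. The numbers here are hypothetical, not from the video: we ask how often pure chance would produce a result at least as extreme as the one observed.

```python
from math import comb

def binomial_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    """One-sided p-value: the probability of at least `successes` wins
    in `trials` independent tries if the true win rate is only `p`."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical example: a fund manager beats the market in 16 of 20 months.
# How often would pure luck (a 50/50 coin) look at least that good?
p_val = binomial_p_value(16, 20)
print(f"p-value = {p_val:.4f}")  # ~0.006, well below the usual .05 threshold
```

A small p-value does not prove the strategy works; it only says that luck alone would rarely look this good.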


Roughly speaking, a p-value below .05 tells us that if our results were due to pure randomness, we would see something at least this extreme less than 5% of the time. This 5% threshold was picked by Ronald Fisher, an important statistician, in a book he published in 1925.

The term p-hacking describes the deliberate or accidental manipulation of data in a study until it produces a sufficiently small p-value. It is the misuse of data analysis to find patterns in data that can be presented as statistically significant, dramatically increasing the risk of false positives while understating it. If you took random data and tested enough hypotheses on it, you would eventually come up with a study that appears to show something significant.
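A quick simulation makes this concrete. This is a sketch with made-up strategy and period counts: if you test enough purely random “strategies”, roughly 5% of them will clear the p < .05 bar by luck alone.

```python
import random
from math import comb

random.seed(1)

def p_value_heads(heads: int, flips: int) -> float:
    """One-sided p-value: chance of at least `heads` heads in `flips`
    flips of a fair coin (the null hypothesis: pure luck)."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips

# 1,000 "trading strategies" that are really just coin flips over 100 periods.
n_strategies, n_periods = 1000, 100
lucky = 0
for _ in range(n_strategies):
    wins = sum(random.random() < 0.5 for _ in range(n_periods))
    if p_value_heads(wins, n_periods) < 0.05:
        lucky += 1

# Around 4-5% of the purely random strategies still look "significant".
print(f"{lucky} of {n_strategies} random strategies passed p < .05")
```

Publish only the winners and hide the rest, and the literature fills up with “significant” noise.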

Campbell Harvey (the former editor of the Journal of Finance who we mentioned earlier) attributes the scourge of p-hacking to incentives in academia. A novel finding published in a prestigious journal can earn an ambitious young professor the ultimate academic prize: tenure. Wasting months of work on a theory that does not hold up to scrutiny would frustrate anyone. It is therefore tempting to torture the data until it yields something interesting, even if other researchers are later unable to replicate the results. Academics need to publish papers; in fact, their careers depend on it. “There is no cost to getting things wrong,” as one researcher put it; “the cost is not getting published.”

But isn’t science supposed to self-correct by having other scientists replicate the findings of an initial discovery? Replication is a lot less glamorous than discovery; scientists want to find their own breakthrough. Additionally, many journals don’t publish replication studies. So, if you’re a scientist, the successful strategy is clear: don’t waste your time on replication studies; do the kind of work that will get you published, and if you can find a result that is surprising and unusual, maybe it will get picked up in the popular press too.

Now, I don’t want this to be seen as a negative piece on science or the scientific method, because people are more aware of this problem today than in the past, and things have started to improve.


Journals have begun to acknowledge the problems I’ve outlined, and there are more large-scale replication studies going on. There’s a site, Retraction Watch, that publicizes research that has been withdrawn, and there are online communities where researchers scrutinize published results. There has also been a move in many fields towards preregistration of studies, where researchers commit in advance to their hypotheses and the methods they will use. A journal then decides whether to accept the study in principle. Once the results are in, reviewers check that the researchers stuck to their own recipe; if so, the paper is published, regardless of what the data show. This eliminates publication bias, promotes higher-powered studies, and lessens the incentive for p-hacking.

What worries me most about the replication crisis in academia is not the prevalence of incorrect information in published scientific journals; after all, getting to the truth is hard, and mathematically not everything that is published can be correct. If we can use the scientific method and still make this many mistakes, how many mistakes are we making when we’re not using the scientific method? As flawed as our research methods may be, they are still the best tools we have.

Amusingly, around nine years after Ioannidis published “Why Most Published Research Findings Are False”, a team of biostatisticians, Jager and Leek, attempted to replicate his findings and calculated that the false positive rate in biomedical studies was around 14%, not the 50% that Ioannidis had asserted. So, things are possibly not quite as bad as people thought 16 years ago, and science has moved in a positive direction, with researchers more aware of the mistakes that can be made.

Today’s video is based on my book Statistics for the Trading Floor, where I conclude with a discussion of the most common mistakes in data analysis and how to avoid them. There is a link in the description below. If you enjoyed this video, you should watch one of my other videos next.

Transcribed from video
The Problem of Bad Research! By Patrick Boyle
