Entries in scientific method (27)

Friday, Feb 16, 2024

A looming threat to scientific publication

You can't argue with Professor John Tregoning of Imperial College: these graphics are "objectively funny".

But beyond the snickering, there is a reason why the biomedical science community is in uproar over this paper.

It is a failure of peer review that this article was ever published in a scientific journal. Scientific articles are meant to be peer reviewed precisely to catch garbage articles like this. No system is ever 100% perfect, and science is a rapidly-moving, self-correcting ecosystem, but this is just so... prominent... a mistake. How did it happen?

To understand, it is important to recognise that scientists have been aware of the shortcomings of peer review (and there are many!) for years. The scientific publishing system is flawed: it is hard to find anyone who would argue against that. Unfortunately, many of the "solutions" have made the problem worse.

"Open access publishing" opened up science to the world. Rather than "pay to read", scientists "pay to publish". On the plus side, the public can access scientific articles cost-free. On the down-side, it provided a market for pseudo-scientific journals, "predatory journals" to open up and accept any "scientific" article that someone is willing to pay to publish. One of the leading journals in the "open access" movement was "Frontiers". They genuinely transformed the style of peer review, making it rapid, interactive and very, very scaleable. Unfortunately the utopian vision of the journal clashed with the perverse economic incentives of an infinitely scaleable journal that makes thousands of dollars for every article it accepted. I was an early editor at the journal, and soon clashed with the publishing staff, who made it next-to-impossible to reject junk articles. I resigned from the journal 10 years ago, because the path they were taking was a journey to publishing nonsense for cash.

Fast-forward 10 years, and Frontiers publishes more articles than all society journals put together. Frontiers in Immunology publishes ~10,000 articles a year; for reference, reputable society journals such as Immunology & Cell Biology publish ~100 papers a year. Considering Frontiers earns ~$3000 per article, it is a massive profit-making machine. The vision of transforming science publishing is gone, replaced with growth at all costs. Add onto this a huge incentive to publish papers, even ones that no one reads, and you get a perverse economy, with "paper mills" being paid to write fake papers and predatory journals being paid to publish them, all to fill up a CV.
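
To put a rough figure on "massive profit-making machine" - a back-of-the-envelope sketch using only the approximate numbers quoted above, not actual accounts:

```python
# Back-of-the-envelope estimate using only the approximate figures quoted above.
articles_per_year = 10_000   # ~articles per year in Frontiers in Immunology
fee_per_article = 3_000      # ~USD earned per article

revenue = articles_per_year * fee_per_article
print(f"${revenue:,} per year")   # ~$30,000,000 per year from a single journal
```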

Generative AI turbo-charges this mess. With some basic competency at using generative AI, scamming scientists can rapidly fake a paper. This is where #ratdckgate comes in. The paper is obviously faked, both text and figures. Yet it got published. There were a lot of failures in the system here, in particular perverse incentives to cheat, the creation of an efficient marketplace for cheating, and a journal that over-rode the peer reviewers because it wanted the publication fees.

As the Telegraph reports, this is "a cock up on a massive scale".

No one really cares about this article, one way or another. Frontiers has withdrawn the article, and even congratulated itself on its rapid action for the one fake paper that went viral, without dealing with the ecosystem it has created. The reason why the scientific community cares is that this paper is just the tip of the iceberg. The scientific publishing system was designed to catch good-faith mistakes. It wasn't designed to catch fraud, and isn't really suitable for that purpose. Yes, reviewers and editors look out for fraud, but as generative AI advances it will be harder and harder to catch it, even at decent journals. It is an arms race that we can't win, and many in the scientific publishing world are struggling to see a solution.

There are many lessons to be learned here:

  • The scientific career pathway provides perverse incentives to cheat. That is human nature, but we need to change research culture to minimise it.
  • Even good intentions can create toxic outcomes, such as open access creating the pay-to-publish marketplace. We need to redesign scientific publishing fully aware of the ways it may be gamed, and pre-empt toxic outcomes.
  • Peer review isn't perfect, and isn't even particularly good at catching deliberate fraud. We probably need to separate peer review from fraud detection, and take a different approach to each.
  • Scientific journals range radically in the quality of their peer review. We need a rigorous accreditation system to provide the stick to punish journals that harm science.
  • Generative AI has huge potential for harm, and we need to actively design systems to mitigate those harms.

Improving scientific publishing is a challenge for all of us. In a world where science is undermined by politics, we cannot afford to provide the ammunition to vaccine deniers, climate change deniers, science skeptics and others who want to discredit science for their own agenda. So we need to get our house in order.

Friday, Mar 12, 2021

How to keep your virtual lab meeting from being just another Zoom call

Terrific write-up at Cell Mentor on virtual lab meetings. I had the pleasure of discussing our Zoom retreat with Claudia Willmes for the article, and could describe all of the fantastic activities designed by our lab social team to keep everyone engaged. The whole article is worth reading, but a highlight is the call-out to Ruben's excellent escape room:

Escape games were already a popular activity for lab outings before 2020, but they have been taken to another level during the pandemic. There are several escape room companies that offer virtual adventures. Liston's group took this a notch further and went for a custom-made game designed by lab member Ruben Vangestel.

"The custom escape Zoom was based in the lab cold room, with a series of puzzles needed to escape," Liston says. "It had the typical ‘communal problem solving' aspect of an escape room, but by using pictures of our lab as the setting and cameos from lab members giving clues, it really created a warm feeling of togetherness. A reminder of the space we used to share, and the common experiences that unify us."

Friday, Aug 7, 2020

Unpopular opinion: the scientific publication system is not the problem

Scientific publishing is undergoing radical change. Nothing surprising there: scientific publishing has been constantly evolving and constantly improving. Innovation and change are needed to improve, although not all innovations end up being useful. I'm on record as saying that the DORA approach, for example, is ideologically well-meaning, but so little consideration has been given to the practicalities that the implementation is damaging. Open access is another example: an excellent ambition, but the pay-to-publish model used to implement it turbo-charged the fake-journal industry.

I am glad that we have advocates pushing on various reforms to publishing: pre-prints, open access, retractions, innovations in accreditation, pre-registration, replication journals, trials of blind reviewing, publishing reviews, etc. The advocates do seem, to me, to have far too much belief that their particular reform is critical, and often turn a blind eye to the potential downsides. That is also okay: the system needs both passionate advocates and dubious skeptics to push changes, throw out the ones that don't work and tweak the ones that do, to get the best cost/benefit ratio of implementation.

Fundamentally, though, the publication system is not broken. Oh, it is certainly flawed and improvements are needed and welcomed. But even if every flaw was fixed (which is probably impossible: some ambitions in publishing are at heart mutually contradictory) I don't think it will have the huge benefits that many advocates assume. Because at the heart of it, the problem is not the publication system, but the other systems that publishing flows into.

Let's take two examples:

  • Careers. Probably the main reason why flaws in the publishing system drive so much angst is that scientific publication is the main criterion used in awarding positions and grants. So issues with prestige journals, impact factors and so forth have real implications that damage people's lives and destroy careers. DORA is the ambition not to do that, without the solution of an alternative. Perhaps one day we will find a better system (I happen to believe it lies in improving metrics, and valuing a basket of different metrics for different roles, not in pretending metrics don't exist). But even a perfect system (again, probably impossible) won't fix the issue of career anxiety. Because in the end the issue is that the scientific career structure is broken: it is under-funded, built on short-term perspectives, and operates on the pressure-cooker approach of milking productivity out of people until they break. From a broader perspective, the scientific career structure is not operating in a vacuum - it is part of a capitalist economy, which again fuels these anxieties. Why are people so worried about losing their place in the academic pipeline? Because in our economy changing careers is really, really scary. Fixing publishing doesn't actually fix any of those downstream issues.
  • Translation. The other issue frequently raised by advocates for publication change comes from people involved in translation, usually commercialisation or medical implementation. Let's take the example of drug discovery. You don't need to go far to find people yelling about the "reproducibility crisis" (although the little data they rely on is, ironically enough, not especially reproducible) or mouse-to-human translation issues. It would be great if every published study was 100% reproducible and translatable, although I'm rather sanguine about errors in the literature. There is always a trade-off between speed and reproducibility, and I am okay with speed and novelty being prioritised at the start of the scientific pipeline as long as reproducibility is prioritised at the end. Initiatives to improve what is published are welcome, but flawed publications on drug discovery are only a problem because they feed into a flawed drug development system. Big pharma uses a system where investments are huge and the decision process is rushed, with decision-making authority vested in a handful of people. The structure of our intellectual property system rewards decisions made early on incomplete information: snap judgements need to be made too early in the development process. This system will create errors and waste money. More importantly, perhaps, it will also miss opportunities. A medicine slowly developed in the public domain by collaborating experts may be entirely unviable commercially and never reach patients.

So I agree that scientific publishing is flawed, and improvements can and should be made. Unlike some, however, I don't see journals and editors as the enemy - I see them actively engaged in improvements. Like science itself, scientific publishing will improve slowly but steadily, with a few false leads and some backtracking needed. I am perhaps just too cynical to believe that "fixing" publishing will change science the way some advocates claim: the problems have a deeper root cause.

Wednesday, Jan 1, 2020

Unpopular grant review opinions

Unpopular grant review opinion 1. Sections on ethics, equality, open publishing, budgets, etc. make grants almost unreadable, and should not be sent to external reviewers.

I am not saying that these things are unimportant - far from it - just that a data dump of 100-page-long applications with 10 pages of actual science is not a useful way to do things. Issues such as open publishing and equality could be better dealt with at the institute level. The institute should be required to show it has appropriate policies in place before anyone from that institute can apply. These are not individual researcher issues. Issues such as budgets are best dealt with by financial administrators. Do I know the appropriate budget for a post-doc in Sweden? No. So don't send me 20 pages of financial material. This could, and should, be checked internally and not sent out for external review. Guess what, I also don't read Greek. So why are there 15 pages of internal Greek administrative material in the 68-page document sent to me to review? It just makes my life difficult, and makes it more likely I will miss important bits. I'm also not a fan of letters of collaboration. If you say that you work with someone, I'm going to believe you. It is a weird thing to make up. If I can't trust you on that, why trust you on anything you've written?

Too many funding agencies seem to want boxes X, Y and Z ticked, which is good. Unfortunately, rather than actually checking these internally, they just pass a data dump on to reviewers - reviewers who are selected for their familiarity with the science, not with the administrative sections. This approach makes it look like the boxes are ticked, but it is not actually a good way of effecting change. It sometimes feels more like protection for the funder, so that they can say it was checked by external reviewers.

What do I want as a reviewer? First, a simple log-in that doesn't require me to fill in all my details. Then a small application with just the science. I want an easy-to-navigate website, with just two open text boxes (project and applicant) to fill in. I want practical guidelines on what the scores given mean (e.g., funding chance at each score, solid examples of each score). And that's it. Anything more just makes my life harder. 

Unpopular grant review opinion 2. Reviewing grants is an inherently wasteful way to distribute resources.

Yes, grant review filters out some bad ideas and in theory saves money. But science has to fund ideas that won't work. There is no other way to push back the frontiers.

The main alternative is just bulk funding. Block funding every researcher equally is not ideal either. If there are no penalties for failure and no rewards for success, the system can become stagnant. This is why block funding systems were gradually phased out and replaced with grant review. But are systems of 100% grant review the most efficient way to allocate resources? An enormous amount of work goes into writing and reviewing good ideas that are never funded. Would it not be preferable to have some of that time spent on science?

I would prefer it if institutes were required to provide minimum core funding of 2 junior staff or students to each group leader, with appropriate consumables. Yes, this would take up perhaps 50% of research funding. Yes, limits on group leader hiring would be needed. But under this system, the cycle of insecurity and short-termism would be broken. Small labs could work on hard problems over the long term. Effort would be spent on research, not on writing unsuccessful grants.

The pot of funding for research grants would be halved in size, but the number of applications would go way down. I suspect that the actual success rate for grants might even rise under this system (see the sketch below for the simple arithmetic). A lot of scientists would be okay with a small team, and might even prefer it. At the moment, a lot of applications are made from a place of desperation, for the survival of the lab. Group leaders are constantly trying to grow, because often growth or death are the only options. Those "survival" grants would no longer be needed. Grant applications would be reserved for either a) those who have proven their ability to efficiently lead a larger team, or b) small labs with a special idea that needs an extra boost in resources.
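
A minimal sketch of that arithmetic. The numbers here are purely hypothetical, chosen only to illustrate how the success rate can rise even when the pot is halved, provided applications fall faster:

```python
# Purely hypothetical numbers, for illustration only: the success rate rises when
# applications fall faster than the number of available grants.
current_grants, current_applications = 200, 1000   # hypothetical status quo
hybrid_grants, hybrid_applications = 100, 400      # half the pot, far fewer "survival" applications

print(f"current success rate: {current_grants / current_applications:.0%}")  # 20%
print(f"hybrid success rate:  {hybrid_grants / hybrid_applications:.0%}")    # 25%
```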

I suspect that this hybrid system would be more efficient than either 100% block funding or 100% grant review funding. Any funders willing to rise to the challenge?

Unpopular grant review opinion 3. Aspirations to remove the use of metrics, such as DORA, are well-meaning, but ultimately cause more problems than they solve.

DORA seeks to remove the influence of journal impact factors. For good reason, since impact factors are problematic, and an imperfect measure of the quality of the articles in those journals. But do you know what else is imperfect? Every other system.

I am reviewing 12 grants for the same funder. The applicants have an average of 70 papers each. Let's say that a proper deep review of a research paper's quality takes 3 hours. Just the CV assessment would require 2,520 hours of deep review. That is nearly 4 months of reading around the clock, or well over a year of full-time work. No one actually does that. Even if we had the time, it would be a repeat of effort already done by the peer reviewers at the journal.
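
A quick sanity check on that arithmetic (a minimal sketch; the grant, paper and hour counts are the illustrative figures from the paragraph above, not real data):

```python
# Back-of-the-envelope arithmetic for the CV-review burden described above.
grants = 12            # grants on the panel
papers_per_cv = 70     # average papers per applicant
hours_per_paper = 3    # time for a "proper deep review" of one paper

total_hours = grants * papers_per_cv * hours_per_paper
print(total_hours)                 # 2520 hours
print(total_hours / 24)            # ~105 days of around-the-clock reading
print(total_hours / (40 * 48))     # ~1.3 years of full-time (40 h/week) work
```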

We also need to acknowledge that metrics have strengths. First, they are less amenable to bias than just having one person say the paper is good or bad. Second, they are better at comparing large numbers of applicants - which is the entire point of grant panels.

DORA principles have their place, in particular in the faculty selection process. But trying to use these principles on grant review panels does not acknowledge the reality of the job that panel members are being asked to do. I would suggest that grant agencies embrace metrics, but do so wisely and cautiously. Develop a useful set of metrics that are provided for each applicant. Some off-the-cuff ideas (with a rough sketch of how they could be computed after the list):

  • average number of last author papers per lab member per year
  • average impact factor of last author papers over the last five years
  • average citation number of last author papers from more than five years ago
  • average amount of grant funding per impact factor point of last author papers
  • number of collaborative papers compared to lab size
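
To show how lightweight such metrics could be to calculate, here is a minimal sketch. The record format and numbers are hypothetical, invented purely for illustration; a real system would draw on a bibliographic database.

```python
from dataclasses import dataclass

# Hypothetical CV record, invented purely to illustrate the metrics above.
@dataclass
class Paper:
    year: int
    last_author: bool
    impact_factor: float
    citations: int

def mean_if_last_author_recent(papers, current_year=2020, window=5):
    """Average impact factor of last-author papers over the last `window` years."""
    recent = [p for p in papers if p.last_author and p.year > current_year - window]
    return sum(p.impact_factor for p in recent) / len(recent) if recent else 0.0

# Toy applicant: three recent last-author papers and one older non-last-author paper
cv = [Paper(2019, True, 8.2, 12), Paper(2017, True, 5.1, 30),
      Paper(2016, True, 4.3, 45), Paper(2013, False, 6.0, 80)]
print(round(mean_if_last_author_recent(cv), 1))   # 5.9
```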

I'm not devoted to any of these metrics, but having them would make CV comparison easier and, arguably, fairer. An enormous amount of research should be put into the correct selection of metrics, so that we select for the type of qualities that we want. What you measure is what you get. But the advantages of using metrics are real. We could identify the strengths of the applicant. "This applicant doesn't publish much, but look at the output compared to their funding!" or "Every post-doc who joins this lab ends up with a good paper". Different grant formats could emphasize different metrics; for example, applications for an infrastructure grant should be given a bonus if the applicant has a record of multiple collaborative papers. It just makes sense - they've proven they work with multiple groups. Likewise, post-doc fellowships could be influenced by a metric on the supervisor's success rate with post-docs - I'd rather send a fellow into a lab where most post-docs succeed than into a lab where 90% disappear into the ether.

There would also need to be a text entry that allows an applicant to argue that the metrics are not appropriate in their particular case. I am happy to look beyond metrics if the applicant can convince me there is a reason to. But that should be a case for the applicant to make, rather than throwing out all of the quantifiable metadata. Blindly using one metric is bad, but intelligently using multiple metrics, tailored to the purpose of the grant, just makes sense.

Conclusion. We could be doing grant review much better. Right now, I am not even sure that we are moving in the right direction. I'd like to see more involvement from grant agencies, and a more thoughtful assessment of the burden of peer review on both applicant and reviewer. Scientists should just be reviewing the science, and we should be given useful tools to do so. Administrative issues should be audited independently, and often at the level of the institute rather than the grant. These are complex issues, and on another day I might even argue the opposite case for each opinion above, but the important thing is that we should be having a fearless and data-led discussion on the topic. 

 

Wednesday, Dec 11, 2019

Lafferty debate, ASI

One of my favourite events of the scientific year: the humorous Lafferty debate at ASI 2019.

This year, the topic was the value of mouse research. Fantastic points were put forward on the incredible value that mouse research has given to immunology. Right now it is popular to diss mouse work, but it is worth remembering that almost the entire basis of immunology - the cell types, the immune responses, the origin of immune diseases - was worked out in the mouse. Yes, there are some differences between mouse and human, but the incredible advances in treating immunological diseases have come from work that was developed in the mouse. Parallel mouse-human research is the only method that has actually cured disease, so let's not throw out the winning formula.

Saturday, Dec 23, 2017

An interview with Stephanie Humblet-Baron

An interview between Dr Liesbeth Aerts and Dr Stephanie Humblet-Baron on her recent paper in JACI:

 

Can you summarize the significance of your findings in a few sentences for people outside your field?

Working in the field of primary immunodeficiency disorders, we described a new mouse model of severe combined immunodeficiency (SCID) that recapitulates the key clinical features of SCID patients suffering from both immunodeficiency and autoimmunity (leaky SCID). Importantly, our model proposed a novel, efficient therapeutic approach for this disease.

What made the paper particularly outstanding?

Because of the pre-clinical evidence that a drug is effective in treating a rare disease, patient clinical trials can be directly proposed. This treatment is already approved for human use in arthritis, so it could rapidly be repurposed for leaky SCID patients. In addition, our model is available for further pre-clinical assays, including gene therapy.

When did you realize you were on to something interesting?

When I started to work with this model I already knew which gene was mutated (Artemis). However, when I saw the mice for the first time I could tell that they were developing the exact same symptoms that we see in the clinic. I knew that other mouse models targeting this gene had never shown leaky SCID symptoms, so I knew we needed to explore the model in depth. The other key moment was after treating our mice with the drug (CTLA4-Ig) – it completely blocked disease, making this a very valuable project with new therapeutic opportunities for patients.

Did the technology available at the department make a difference?

The FACS core provided the major technique used to investigate this project.

A huge amount of work and energy must have gone into the paper. How did you cope with stress and doubts?

Liesbeth, this is a joker question!

The project actually went quite smoothly; the hard part during this project was rather adjusting to motherhood and life in science at the same time.

What are you personally most proud of?

This work can be seen as translational medicine, with direct therapeutic benefit for patients. Being able to better understand the mechanism of the disease was also valuable to me.

Can you share some advice for others?

Always envision your project as a story to write and tell. When you find a new result, ask what the next question would be and continue to explore it further.

Friday, Dec 8, 2017

Commercial staining kits

Foxp3 staining is notoriously difficult. The original protocols did not work, and it was a major breakthrough when BD and eBio released fix/perm kits that allowed good Foxp3 staining. The companies keep the formula secret, so that you have to buy from them, but it turns out 0.1% dishwashing liquid works just as well...

Thursday, Dec 7, 2017

Pay-to-publish

Open access publication is great in theory, but in practice it has led to hundreds of fake scientific journals springing up in the hope that pseudo-scientists will pay to publish their junk in a real-sounding scientific journal. Today's example... I mean, really?


Thursday, Sep 7, 2017

The importance of stupidity in scientific research

Friday, Aug 18, 2017

Uncorking the muse: Alcohol intoxication facilitates creative problem solving

Uncorking the muse: Alcohol intoxication facilitates creative problem solving

Andrew F. Jarosz, Gregory J. H. Colflesh and Jennifer Wiley

Consciousness and Cognition, 21(1), 2012, pages 487-493
 
That alcohol provides a benefit to creative processes has long been assumed by popular culture, but to date has not been tested. The current experiment tested the effects of moderate alcohol intoxication on a common creative problem solving task, the Remote Associates Test (RAT). Individuals were brought to a blood alcohol content of approximately .075, and, after reaching peak intoxication, completed a battery of RAT items. Intoxicated individuals solved more RAT items, in less time, and were more likely to perceive their solutions as the result of a sudden insight. Results are interpreted from an attentional control perspective.