
Entries in scientific method (27)

Saturday, Mar 26, 2011

An alternative model for peer review

There is no doubt that the current model of peer review is an effective but inefficient system. The high quality of publications that complete peer review is a testament to the effectiveness of the system, as poor papers rarely get accepted in well-reviewed journals. However, the efficiency of the review system is very low.

Consider that the highest-ranked journals have acceptance rates of around 10%, and even the middle-ranked journals have acceptance rates of less than 50%. Most papers get published sooner or later, but with the career reward of publishing in high impact factor journals, it is not unusual for a publication to be rejected four or five times as the authors work their way down the journal ranking list. Considering that each round of review generally involves three reviewers, a single paper that had a tough time could consume the (unpaid) time of fifteen reviewers before it is finally accepted. This is an enormous burden on the scientific community, and a largely wasted one - after all, each journal editor only gets to see three of those fifteen reviews when deciding to accept or decline an article. It also considerably slows down the dissemination of information, as it is not unusual for the entire review process to consume a year or more.
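To make the arithmetic concrete, here is a minimal sketch (in Python; the numbers are the illustrative ones used above, three reviewers per round and four rejections before acceptance) of how the reviewer burden accumulates under sequential resubmission:

```python
# Reviewer burden under the current sequential-resubmission model.
# Illustrative numbers only: 3 reviewers per round, and a paper that is
# rejected at 4 journals before being accepted at the 5th.
reviewers_per_round = 3
rejections_before_acceptance = 4

rounds = rejections_before_acceptance + 1             # every submission gets reviewed
total_reviews = rounds * reviewers_per_round           # 15 reviews for this one paper
reviews_seen_by_final_editor = reviewers_per_round     # each editor sees only their own 3

print(f"Total reviews written: {total_reviews}")
print(f"Reviews visible to the accepting editor: {reviews_seen_by_final_editor}")
print(f"Reviews effectively wasted: {total_reviews - reviews_seen_by_final_editor}")
```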

So let's consider an alternative model for peer review, one which keeps the critical aspects that provide effectiveness, but which changes the policies that produce inefficiency. Consider now a consortium of four or five publishers, which between them might include 20 journals that publish papers on immunology. Rather than authors submitting to the individual journals, they would submit to a centralised editorial staff, paid for by the publishers but independent of each journal. An immediate advantage would be the ability to have many more specialised editors available, allowing for better decisions in choosing and assessing the reviews.

Each paper would then be sent out to five or six reviewers, and the reviews would be made available to each of the journals. The editorial staff at each journal would assess the paper and put forward an offer to accept, conditionally accept or decline it. This information would be transmitted back to the consortium and provided to the authors, who would then be able to choose which offer to accept. In effect, each journal would be making a blind offer to publish the paper, with full knowledge of the reviews but without knowing whether the other journals had also put in a bid.
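As a sketch only (the journal names, rankings and offers below are hypothetical, invented for illustration), the proposed data flow could look like this:

```python
# Sketch of the proposed consortium model: one shared round of review,
# blind offers from member journals, and the authors choosing among the offers.
# Journal names, ranks and offers are hypothetical illustrations.

reviews = [f"review_{i}" for i in range(1, 6)]   # 5 reviews collected once by the consortium

# Each journal sees all of the reviews and independently returns a blind offer.
offers = [
    {"journal": "Journal A", "rank": 1, "offer": "decline"},
    {"journal": "Journal B", "rank": 2, "offer": "conditional accept"},
    {"journal": "Journal C", "rank": 3, "offer": "accept"},
]

# Journals never see each other's bids; the authors see all offers and
# pick the highest-ranked journal willing to publish the paper.
acceptable = [o for o in offers if o["offer"] != "decline"]
choice = min(acceptable, key=lambda o: o["rank"])

print(f"Reviews written in total: {len(reviews)}")
print(f"Authors publish in: {choice['journal']} ({choice['offer']})")
```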

Consider the benefits of this alternative model to each player:

1. The journal gets to judge on more complete information, with double the number of reviews available for each paper, selected by more specialised editorial staff.

2. The reviewing community will more than halve the number of reviews required, while actually providing more information to the journals.

3. The authors will no longer have to make strategic decisions in choosing where to submit; they will simply submit to the consortium and have the option to publish in the top-ranked journal that is interested in the paper.

4. The scientific community will have access to cutting-edge research months or even years earlier than under the current system.

Thursday, Jan 6, 2011

The verdict on Andrew Wakefield: Fraud

In 1998 Andrew Wakefield published a paper which has severely damaged public health over the last ten years. Based on his observations of only twelve children, nine of whom he claimed had autism, and without a control group, he concluded that the measles/mumps/rubella vaccine caused autism. As a hypothesis, this was fine: unlikely, but not impossible. He saw nine children with autism, reported that their parents linked the onset with the MMR vaccine, and put it in the literature. Why on earth an underpowered observation like this made it into the Lancet is beyond me, but there is nothing wrong with even outlandish hypotheses being published in the scientific literature. Was it a real observation, or just an effect of a small sample size? Was it a causative link, or just a coincidence in timing?

As with any controversial hypothesis, after this one was published a large number of good scientists went out and tested it. It was tested over and over and over again, and the results are conclusive - there is no link between the MMR vaccine and autism.

In itself, this was of no shame to Andrew Wakefield. Every creative scientist comes up with multiple hypotheses that end up being wrong. People publish hypotheses all the time, then disprove them themselves or have them disproven by others. If you can't admit being wrong, you can't do science, and it is in fact the mark of a good scientist to be able to generate hypotheses that others seek to knock down. Ten of the thirteen authors on the study were able to see the new data and renounce the hypothesis.

The shame to Andrew Wakefield is not that his hypothesis was wrong. No, the shame he has brought upon himself was by being unscientific, unscrupulous and unethical:

  1. Firstly, Wakefield did not present his paper as a hypothesis generator, to be tested by independent scientists. Instead he went straight to the media and made the outrageous claim that his paper was evidence that the MMR vaccine should be stopped. This is not the way science or medicine works, and it was a conclusion unsupported by the data. Worst of all, it was a conclusion that many parents without scientific training were tricked into believing. Vaccination rates for MMR went down (autism rates have remained unchanged) and children started dying again of easily preventable childhood diseases. The direct equivalent of Wakefield's actions would be a doctor who sees half a dozen children develop leukemia after joining a football team and then holds a press conference telling parents that playing sport causes cancer in children.
  2. Secondly, it has now been conclusively demonstrated that his original data was fraudulent. Interviews with the parents of the original nine children with autism show that he faked much of the data on the time of onset, taking cases where autism started before the MMR vaccine and reversing the dates to suggest that the vaccine triggered the autism. Analysis of the medical records of these children shows that, as well as the timing being incorrect, many of the symptoms were simply faked and never existed. The evidence on this charge alone makes Wakefield guilty of professional misconduct and criminal fraud.
  3. Thirdly, unknown to the coauthors of the study and the parents of the children, Wakefield had a financial conflict of interest. Before the study had begun, Wakefield had been paid £435,643 to find a link between vaccines and disease as part of a lawsuit. Every scientist must disclose their financial interests in publications so that possible conflicts are known - Wakefield did not. Had he disclosed this at his press conferences, the media may have been slightly more skeptical about his outlandish claims.

These last two issues, scientific misconduct and financial conflict of interest, are the reason why the paper was formally retracted by the Lancet. Studies that are wrong don't get retracted; they just get swamped by correct data and gradually forgotten. Instead, the retraction indicates that the Wakefield paper was fraudulent and should never have been published in the first place. Likewise, the British General Medical Council investigated the matter and found that Wakefield "failed in his duties as a responsible consultant" and acted "dishonestly and irresponsibly", and thus struck him off the medical register.

The worst part about this sorry affair is that it is still depressing vaccination rates. Literally hundreds of studies, with a combined cohort size of a million children, have found no link between the MMR vaccine and autism, yet one fraudulent and retracted study of nine children is still talked about by parents. Some parents are withholding this lifesaving medical treatment from their children, and their good intentions do nothing to mitigate the fact that cases of measles and mumps are now more than 10 times more likely than they were in 1998, and confirmed deaths have resulted. And Andrew Wakefield, the discredited and struck-off doctor who started it all? Making big money in the US by selling fear to worried parents, and deadly disease to children who have no say in it at all.



Wednesday, Oct 13, 2010

The historic quandary of antibody production

The mechanism by which antibodies were formed was once one of the oldest and most perplexing mysteries of immunology. The properties of antibody generation, with the capacity of the immune system to generate specific antibodies against any foreign challenge – even artificial compounds which had never previously existed – defied the known laws of genetics.

Three major models of antibody production were proposed before the correct model was derived. The first was the “side-chain” hypothesis put forward by Ehrlich in 1900, in which antibodies were essentially a side-product of a normal cellular process (Ehrlich 1900). Rather than a specific class of proteins, antibodies were just normal cell-surface proteins that bound their antigen merely by chance, and the elevated production in the serum after immunisation was simply due to the bound proteins being released by the cell so that a functional, non-bound, protein could take its place. In this model antibodies “represent nothing more than the side-chains reproduced in excess during regeneration and are therefore pushed off from the protoplasm”.

 

Figure 1. The “side-chain” hypothesis of antibody formation. Under the side-chain hypothesis, antibodies were normal cell-surface molecules that by chance bound antigens (step 1). The binding of antigen disrupted the normal function of the protein so the antigen-antibody complex was shed (step 2), and the cell responded by replacing the absent protein (step 3). Notably, this model explained the large generation of specific antibodies after immunisation, as surface proteins without specificity would stay bound to the cell surface and not require additional production. The model also allowed a single cell to generate antibodies of multiple specificities.

 

The “side-chain” model was replaced by the “direct template” hypothesis, put forward by Breinl and Haurowitz in 1930. Under this alternative scenario, antibodies were a distinct class of proteins but with no fixed structure. The antibody-forming cell would take in antigen and use it as a mould on which to cast the structure of the antibody (Breinl and Haurowitz 1930). The resulting fixed-structure protein would then be secreted as an antigen-specific antibody, and the antigen reused to create more antibody. Compared to the “side-chain” hypothesis, the “direct template” hypothesis better explained the enormous potential range of antibody specificities and the biochemical similarities between antibodies, but it lacked any mechanism to explain immunological tolerance.

 

Figure 2. The “direct-template” hypothesis of antibody formation. The direct-template hypothesis postulated that antibodies were a specific class of proteins with highly malleable structure. Antibody-forming cells would take in circulating antigen (step 1) and use this antigen as a mould to modify the structure of antibody (step 2). Upon antibody “setting”, the fixed structure antibody was released into circulation and the antigen cast was reused (step 3). In this model specificity is cast by the antigen, and a single antibody-producing cell can generate multiple different specificities of antibody. 

 

A third alternative model was put forward by Jerne in 1955 (Jerne 1955). The “natural selection” hypothesis is, in retrospect, quite similar to the “clonal selection” hypothesis, but uses the antibody, rather than the cell, as the unit of selection. In this model healthy serum contains minute amounts of all possible antibodies. After exposure to antigen, those antibodies which bind the antigen are taken up by phagocytes, and the bound antibodies are then used as templates to produce more antibodies (the reverse of the “direct template” model). As with the “direct template” model, this hypothesis was useful in explaining many aspects of the immune response, but strikingly failed to explain immunological tolerance.

 

Figure 3. The “natural selection” hypothesis of antibody formation. The theoretical basis of the natural selection hypothesis is the presence in the serum, at undetectable levels, of all possible antibodies, each with a fixed specificity. When antigen is introduced it binds only those antibodies with the correct specificity (step 1), which are then internalised by phagocytes (step 2). These antibodies then act as a template for the production of identical antibodies (step 3), which are secreted (step 4). As with the clonal selection theory, this model postulated fixed specificity antibodies, however it allowed single cells to amplify antibodies of multiple specificities.

 

When Talmage proposed a revision with more capacity to explain allergy and autoimmunity in 1957 (Talmage 1957), Burnet immediately saw the potential to create an alternative cohesive model, the “clonal selection model” (Burnet 1957). The elegance of the 1957 Burnet model was that by maintaining the basic premise of the Jerne model (that antibody specificity exists prior to antigen exposure) and restricting the production of antibody to at most a few specificities per cell, the unit of selection becomes the cell. Critically, each cell will have “available on its surface representative reactive sites equivalent to those of the globulin they produce” (Burnet 1957). This would then allow only those cells selected by specific antigen exposure to become activated and produce secreted antibody. The advantage of moving from the antibody to the cell as the unit of selection was that concepts of natural selection could then be applied to cells, both allowing immunological tolerance (deletion of particular cells) and specific responsiveness (proliferation of particular cells). As Burnet wrote in his seminal paper, “This is simply a recognition that the expendable cells of the body can be regarded as belonging to clones which have arisen as a result of somatic mutation or conceivably other inheritable change. Each such clone will have some individual characteristic and in a special sense will be subject to an evolutionary process of selective survival within the internal environment of the cell.” (Burnet 1957)

 

Figure 4. The “clonal selection” hypothesis of antibody formation. Unlike the other models described, the clonal selection model limits each antibody-forming cell to a single antibody specificity, which presents the antibody on the cell surface. Under this scenario, antibody-forming cells that never encounter antigen are simply maintained in the circulation and do not produce secreted antibody (fate 1). By contrast, those cells (or “clones”) which encounter their specific antigen are expanded and start to secrete large amounts of antibody (fate 2). Critically, the clonal selection theory provides a mechanism for immunological tolerance, based on the principle that antibody-producing cells which encounter specific antigen during ontogeny would be eliminated (fate 3).

 

It is important to note that while the clonal selection theory rapidly gained support as explaining the key features of antibody production, for decades it remained a working model rather than a proven theory. Key support for the model had been generated in 1958, when Nossal and Lederberg demonstrated that each antibody-producing cell has a single specificity (Nossal and Lederberg 1958); however, a central premise of the model remained pure speculation – the manner by which sufficient diversity in specificity could be generated such that each precursor cell would be unique. “One aspect, however, should be mentioned. The theory requires at some stage in early embryonic development a genetic process for which there is no available precedent. In some way we have to picture a “randomization” of the coding responsible for part of the specification of gamma globulin molecules” (Burnet 1957). Describing the different theories of antibody formation in 1968, ten years after the original hypothesis was put forward, Nossal was careful to add a postscript after his support of the clonal selection hypothesis: “Knowledge in this general area, particularly insights gained from structural analysis, are advancing so rapidly that any statement of view is bound to be out-of-date by the time this book is printed. As this knowledge accumulates, it will favour some theories, but also show up their rough edges. No doubt our idea will seem as primitive to twenty-first century immunologists as Ehrlich’s and Landsteiner’s do today.” (Nossal 1969)

It was not until the research of Tonegawa, Hood and Leder that the genetic principles of antibody gene rearrangement were discovered (Barstad et al. 1974; Hozumi and Tonegawa 1976; Seidman et al. 1979), rewriting the rule of genetics that one gene encodes one protein and providing a mechanism for the most fragile of Burnet’s original axioms. The Burnet hypothesis, more than 50 years old and still the central tenet of the adaptive immune system, remains one of the best examples in immunology of the power of a good hypothesis to drive innovative experiments.

 

References

Barstad et al. (1974). "Mouse immunoglobulin heavy chains are coded by multiple germ line variable region genes." Proc Natl Acad Sci U S A 71(10): 4096-100.

Breinl and Haurowitz (1930). "Chemische Untersuchung des Präzipitates aus Hämoglobin und Anti-Hämoglobin-Serum und Bemerkungen über die Natur der Antikörper." Z Physiol Chem 192: 45-55.

Burnet (1957). "A modification of Jerne's theory of antibody production using the concept of clonal selection." Australian Journal of Science 20: 67-69.

Ehrlich (1900). "On immunity with special reference to cell life." Proc R Soc Lond 66: 424-448.

Hozumi and Tonegawa (1976). "Evidence for somatic rearrangement of immunoglobulin genes coding for variable and constant regions." Proc Natl Acad Sci U S A 73(10): 3628-32.

Jerne (1955). "The Natural-Selection Theory of Antibody Formation." Proc Natl Acad Sci U S A 41(11): 849-57.

Nossal and Lederberg (1958). "Antibody production by single cells." Nature 181(4620): 1419-20.

Nossal (1969). Antibodies and immunity.

Seidman et al. (1979). "A kappa-immunoglobulin gene is formed by site-specific recombination without further somatic mutation." Nature 280(5721): 370-5.

Talmage (1957). "Allergy and immunology." Annu Rev Med 8: 239-56.

Friday, Aug 13, 2010

2010's worst failure in peer review

Even though it is only August, I think I can safely call 2010's worst failure in the peer review process. Just as a sampler, here is the abstract:

Influenza or not influenza: Analysis of a case of high fever that happened 2000 years ago in Biblical time

Kam LE Hon, Pak C Ng and Ting F Leung

The Bible describes the case of a woman with high fever cured by our Lord Jesus Christ. Based on the information provided by the gospels of Mark, Matthew and Luke, the diagnosis and the possible etiology of the febrile illness is discussed. Infectious diseases continue to be a threat to humanity, and influenza has been with us since the dawn of human history. If the postulation is indeed correct, the woman with fever in the Bible is among one of the very early description of human influenza disease.

If you read the rest of the paper, it is riddled with flaws at every possible level. My main problems with this article are:

1. You can't build a hypothesis on top of an unproven hypothesis. From the first sentence it is clear that the authors believe in the literal truth of the Bible and want to draw conclusions from the Bible without bringing in any natural evidence. What they believe is their own business, but if they don't have any actual evidence to bring to the table they can't dine with scientists.

2. The discussion of the "case" is completely nonsensical. The authors rule out any symptom that wasn't specifically mentioned in the Bible ("it was probably not an autoimmune disease such as systemic lupus erythematousus with multiple organ system involvement, as the Bible does not mention any skin rash or other organ system involvement") because medical observation was so advanced 2000 years ago. They even felt the need to rule out demonic influence on the basis that exorcising a demon would be expected to cause "convulsion or residual symptomatology".

This really makes me so mad. The basis for getting published in science is really very simple - use the scientific method. The answer doesn't have to fit dogma or please anyone, but the question has to be asked in a scientific manner. How on earth did these authors manage to get a Bible pamphlet past what is meant to be rigorous peer review? Virology Journal is hardly Nature, but with an impact factor of 2.44 it is at least a credible journal (or was, until this catastrophe). At least the journal has apologised and promised to retract the paper:

As Editor-in-Chief of Virology Journal I wish to apologize for the publication of the article entitled ''Influenza or not influenza: Analysis of a case of high fever that happened 2000 years ago in Biblical time", which clearly does not provide the type of robust supporting data required for a case report and does not meet the high standards expected of a peer-reviewed scientific journal.

Okay, Nature has also made some colossally stupid mistakes in letting industry-funded pseudo-science into its pages, but in the 21st century you would hope that scientific journals would be able to tell the difference between evidence-based science and faith-based pseudo-science.

Tuesday, Sep 22, 2009

Nature attacks peer review

In the latest issue of Nature, the journal has published a rather unfair attack on peer review. Peer review is the process that most journals use to assess the merit of individual papers - submissions are judged by editorial staff, then sent to scientists working in the field for peer review, and the reports by these scientific peers are then judged by the editorial staff to determine whether they warrant publication. While it is the standard today, there was a lot of resistance to peer review in the past, as the editorial staff of journals preferred to exercise their own power of selection. Notably, Nature, founded in 1869, only moved towards peer review 100 years later, under the direction of John Maddox. Other journals, such as PNAS, are only now scrapping peer review bypasses.

There are certainly problems with the journal submission process, but typically these involve too little peer review, rather than too much. A journal such as Nature typically rejects the majority of papers without review, and for those papers that are reviewed there are only two to three reviewers per paper. Scientists put a lot of effort into reviewing, but as it is an unpaid and unrequited favour, it is not their highest priority. Even after review, the editorial staff have enormous power to accept or decline the advice of peer review - Nature once famously published a paper falsely purporting to show effects of homeopathy. This editorial decision tends to be a combination of ranking the news-splash effect (Nature and Science compete for citations in the big newspapers), the "boys club" effect (no longer all male, but certainly the big names have an easier pathway to acceptance) and editorial "gut feeling".

To justify this editorial over-ride, defects in peer review are commonly cited. In this latest editorial piece, Nature presents the results of an unpublished study presented at a conference, reporting that the results show a bias of peer review towards positive results. This may be so, but does the cited study actually show that? What the study did was submit two papers, one with positive results and one with negative results, to two journals, and analyse the peer review results. The results showed that reviewers at one journal (Journal of Bone and Joint Surgery) ranked the negative-results paper slightly lower, while the second journal (Clinical Orthopaedics and Related Research) showed no significant difference. Hardly a damning indictment of peer review.

What are the methodological flaws that could account for the minor differences observed at one out of two journals?

* Different reviewers. Even picking 100 reviewers for each paper does not cancel out this effect unless reviewers were carefully stratified to ensure random distribution.

* The quality of the two papers may have been different. The author of the study tried to make them as identical as possible, but different results need to be presented differently. As the study is unpublished we only have the author's opinion that the two studies were of equal quality.

* Positive and negative results can have very different "impacts". Most journals explicitly request a review which takes into account both scientific validity and scientific impact. Negative results generally have lower impact and hence would get lower review scores, as explicitly requested by the journals. To remove this effect the papers should have been submitted to a journal such as PLOS One, which requests a review only on scientific quality.

* Positive and negative results require different statistical standards. A positive result uses simple statistics to show that the two groups were different. A negative result requires more complex statistics and can only state that the two results were not different above a certain level. A negative result can never exclude that a positive result exists with a smaller effect than would be picked up by the study design, as the sketch below illustrates.
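To make that asymmetry concrete, here is a minimal sketch (in Python, using a standard two-sample t-test from scipy; the effect size, group size and random seed are arbitrary choices for illustration) of a study that is too small to detect a modest true difference, so its "negative" result cannot exclude that effect:

```python
# Sketch: a 'negative' (non-significant) result does not exclude a small true effect.
# Effect size, sample size and seed are arbitrary choices for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20                                              # small groups, as in many clinical studies
group_a = rng.normal(loc=0.0, scale=1.0, size=n)
group_b = rng.normal(loc=0.3, scale=1.0, size=n)    # a true difference of 0.3 SD exists

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"p = {p_value:.2f}")                         # usually > 0.05 at this sample size

# A rough 95% confidence interval on the difference shows the study cannot
# distinguish 'no effect' from a modest effect in either direction.
diff = group_b.mean() - group_a.mean()
se = np.sqrt(group_a.var(ddof=1) / n + group_b.var(ddof=1) / n)
print(f"difference = {diff:.2f}, 95% CI roughly ({diff - 1.96*se:.2f}, {diff + 1.96*se:.2f})")
```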

Certainly the most obvious sign of "positive bias" evidenced by this article is the decision by Nature to write an editorial and broadcast a podcast on a minor unpublished study that denigrates peer reviewers and hence elevates editorial staff. Would they have written a similar editorial on an unpublished presentation showing no sign of bias by peer reviewers? The minor impact observed in one out of the two journals tested (with all the caveats above) did not warrant Nature filling its editorial with phrases such as "dirty", "biased", "more negative and critical" and "biased, subjective people". The worst bias of all is the accusation that peer reviewers in the second study only showed no statistical bias because "these reviewers guessed they were part of an experiment". Surely Nature should have been able to spot that subjective reporting, dismissing negative results and elevating positive results are the very definition of positive-result bias!

Monday, Sep 14, 2009

Faith, post-modernism, science and the approximation of truth

Faith, post-modernism and science all have a different approach to truth.

With faith, the underlying premise (whether articulated or not) is that an Absolute Truth exists and, what is more, that the believer has an insight into this Truth. Since the believer already knows the Truth, evidence contrary to this Truth must be false and can therefore be ignored. End of debate.

Post-modernism is either the opposite of faith, or just a subset of faith. Under post-modernist thought, there is no objective Truth or Reality, merely individual truths or realities that each person constructs for themselves. Every belief or truth then becomes equally valid: it is just as true to describe the sun as a galactic turnip as it is to talk about hydrogen fusion. Ironically enough, post-modernism does have unquestioning faith in one Truth, the Absolute Truth that there are no absolute truths. The irony is generally ignored.

Science has a third, and fundamentally different, way of conceptualising truth. Interestingly, science uses aspects of both the faith and post-modernistic concepts of truth. Science agrees with faith on the claim that there is an objective truth, or rather an objective reality, that exists independent of any observer. However science also agrees with post-modernism on the claim that an individual cannot grasp objective truth, only subjective truth. The unique contribution of science to the concept of truth is the approach of approximation.

Science does not claim to know Truth the way faith does, nor does it give up on the entire venture as a human abstraction the way post-modernism does. Instead, science acknowledges that objective truth exists and attempts to reach the closest possible approximation of it. Science starts with a model of reality. Scientists then attempt to disprove this model in every conceivable way. Inevitably, every model shows a flaw, an experiment which does not act in quite the predicted manner. The scientific model of objective truth / reality is then forced to change to explain the discordant data. Sometimes an entire model is discarded and a new model is picked up, but far more commonly the original model can continue to stand with a few modifications. Scientists then attack this modified model of the truth with renewed vigour. Cycle upon cycle, incremental improvements are made to the model, making it harder and harder to find flaws. Science will never be able to reach absolute truth, but it is extraordinarily adept at producing an ever more accurate approximation of truth. The technology we take for granted today is just one display of how accurate scientific approximations of truth are – the scientific model of the atom does not claim perfection, but our daily use of electron flow (electricity) indicates that the scientific approximation is more functionally useful than any other statement of atomic Truth.

Thursday, Sep 3, 2009

A Self-correcting System

The ability of science as a method to understand reality is demonstrated by the countless successes science has had in developing technology. Antibiotics, vaccination, flight, agriculture: all of these advances clearly work. Why is this? People came up with many ideas to prevent smallpox in the past, but they consistently failed. The development of a smallpox vaccine which actually worked does not demonstrate that scientists have any unique intelligence; rather, it is testimony to the power of a self-correcting system.

Hypotheses are worthless if they are not tested and then discarded when they fail testing. The process of science is not just coming up with an idea of how to cure smallpox; many people clung to their ideas of what would cure smallpox even as they died of it. Rather, science is testing that idea by looking at the evidence. Uniquely, science discards ideas that just don't work. The simple process of keeping ideas that work and discarding ideas that don't has built an amazing edifice of knowledge.

The real beauty of the scientific method is that it does not depend on any single person being right or wrong, being ethical or unethical. There will always be scientists who lie or cheat, falsify data or hide experiments that disprove their pet theory. But the hypotheses that these people put forward will always be discarded, because they will fail tests by other scientists.

Best of all, scientists have a vested interest in knocking down incorrect theories. Often you will hear from anti-science campaigners that scientists are hiding data showing that the theory of [evolution] / [global warming] / [insert hated theory here] is incorrect. They believe in a vast conspiracy of scientists each trying to hold up a false theory for some unexplained nefarious purpose, assuming that scientists don't want to prove a theory incorrect. They fundamentally do not understand the system of science. Personal glory does not come to the scientist who proves yet again that the theory of gravity works; personal glory comes to the scientist who finds an exception, who proves a theory incomplete, who can unravel the fatal flaw in a centuries-old dogma! Einstein, Newton, Copernicus, Darwin - these are all scientists who destroyed the prevailing theories of their age. Every scientist today would love to join their glorious ranks.

A scientist who could prove today that the theory of relativity, evolution or global warming was wrong would publish in the highest journals, win the Nobel Prize, earn household recognition and become rich. There are only two ways a theory such as evolution could still stand today:

1) Every scientist working in the field is deliberately concealing data that disproves evolution, despite knowing that breaking the nefarious conspiracy would earn them recognition as a leader of science, a place in the history books and a lot of personal glory;
or
2) There are no experiments that reveal a fatal flaw.

That is the beauty of science: individuals have huge power to make advances but very little ability to cause delays, since theories are judged by experimental results. To reject science you have to reject human nature and believe in an alternative reality where everyone acts uniformly against their personal interests. Trust in science is not trust in individual scientists; it is trust in a system that for thousands of years has produced results, a system that is self-correcting, a system that acts as an 'invisible hand' to select only the models of reality that actually work, regardless of whether the individuals involved were motivated by a selfless search for truth or a greedy struggle for personal glory. The scientific method is an emergent phenomenon which self-corrects the activities of individual scientists to develop only the most robust theories, those that have so far resisted every attempt to knock them down.
