Discerning “Good” Research

This Just In: Blog Post by AIM Chief Medical Officer Michelle Fiscus, MD

Once upon a time, we could trust that a study that was published in a professional journal was peer reviewed and largely trustworthy. Finding the flaws in a published article was an exercise for academics and journal clubs, but you could at least trust that an article that survived peer review wasn’t trying to actively mislead its readers.

Not so anymore.

Unfortunately, papers that are cited in reports or quoted in hearings or debates can no longer be taken at face value; they require us to find and vet the original source. And, unfortunately, many of us don’t have the time to do that. While we can likely still place more weight on publications from high-profile journals such as the Journal of the American Medical Association (JAMA), the American Academy of Pediatrics’ (AAP) Pediatrics, the British Medical Journal (BMJ), and the New England Journal of Medicine (NEJM), they are not infallible. Personally, I still scrutinize everything published by The Lancet, since it published Wakefield’s paper that launched the modern anti-vaccine movement in 1998. But there are some telltale signs that a publication may warrant closer examination.

Here are a few questions you should ask yourself to judge whether the paper you are reading is likely to give accurate information:

  • The Publisher: Is the publisher credible? If the paper is in one of the journals mentioned above, it probably gets a point in this category. If you’ve never heard of the journal, or if the paper was published by someone’s “Institute,” you may want to raise a flag. You can also look at the journal’s “impact factor,” which measures the journal’s influence. While the impact factor doesn’t speak directly to quality, a high impact factor suggests that the journal’s publications are widely cited in the literature, which lends some support for credibility. JAMA’s main journal, for example, has an impact factor of 55.0, while JAMA’s open-access platform, JAMA Network Open, has an impact factor of 10.5. The impact factors for BMJ and NEJM are 43.0 and 78.5, respectively. One caveat is that specialty journals, such as Pediatrics, serve a much smaller audience than JAMA or NEJM, so their impact factors are much smaller; Pediatrics’ impact factor of 6.4 is nonetheless among the highest of all pediatric journals. The takeaway here is that a high impact factor is somewhat reassuring, but a low score doesn’t necessarily mean the journal isn’t credible.
  • Publication Status: Is it a “pre-print” or has it been accepted and published by a reputable journal? Lately, pre-prints have become more widely circulated. These are papers that have not been published and have not been through peer review. Because the path to publication can be very long (sometimes a year or more), authors, especially those writing on rapidly evolving topics, may want to circulate their papers as “pre-prints” to get important information out more quickly. But, in other cases, pre-prints are released because the authors KNOW the paper won’t pass peer review, and they want to get it into circulation anyway. Be wary, for example, of “medRxiv” (pronounced “med-archive”), a preprint distribution service that was founded by Cold Spring Harbor Laboratory (CSHL), Yale University, and BMJ. While that backing may give the platform some legitimacy, the papers it distributes only undergo “a basic screening process for offensive and/or non-scientific content” and for plagiarism.
  • Authors: Are the authors affiliated with credible institutions (medical centers, academic centers, large agencies with a track record for quality research like the National Institutes of Health (NIH), etc.) and likely to be free of bias? Unless the article is a commentary or editorial, there should always be more than one author and, preferably, they should be from more than one institution. While not all affiliated authors are without bias, the chances are better than if an author is listed without affiliation or if they are affiliated with their own “institute.”

Let’s take, for example, an article circulated February 17, 2026, titled “Deaths Following MMR and MMRV Vaccination in the United States.”

As a critical reader, you could ask the following questions:

  • Publisher: Is Zenodo a credible journal? 
    • Zenodo is an open repository, operated by the European research organization CERN, for research papers and data that have not been through peer review (similar to medRxiv mentioned above). While backed by a credible research organization, information from papers that have not been through peer review should be interpreted with caution. Publication through Zenodo should raise a flag.
    • Takeaway: Red flag based on the publisher alone as this paper was published by a pre-print repository.
  • Publication Status: Is it a pre-print?
    • Yes, this is a pre-print and has not been through peer review.
    • Takeaway: Big red flag based upon publication status.
  • Authors: Are the authors affiliated with credible institutions and likely to be free of bias? Is there more than one author from more than one institution?
    • There are several listed authors. (That’s good.)
    • Seven of the nine listed authors are affiliated with one “Foundation.” One is an “independent researcher” without affiliation. One is affiliated with a biotech company. (That’s not so good.)
    • The foundation’s website carries the banner “Fighting Medical Tyranny with Science & Truth” and goes on to say, “Your gift exposes corruption, defends the injured, and saves lives.”
    • The biotech company’s website states it specializes “in the development of personalized peptide designs for patients with active diseases… for patients seeking to prevent disease and enhance longevity.”
    • Takeaway: While there are several listed authors, the majority are affiliated with the same “Foundation” rather than an academic center or agency with a track record of quality research and, according to the information posted to the foundation’s website, concerns could be raised regarding bias on the part of the authors. This is another red flag.

Overall Assessment: This paper raises several concerns:

  • it was published as a pre-print
  • its authors are largely affiliated with the same foundation
  • the webpages of the foundation and the biotech company raise concerns about bias

At the very least, your assessment of this article should prompt you to dig a little deeper before accepting its conclusions at face value. Upon reading the paper, you find the authors attempt to draw a causal link between death and MMR or MMRV vaccination using data from the Vaccine Adverse Events Reporting System (VAERS), which is a known misuse of VAERS data. The VAERS FAQ resource published by the United States Department of Health and Human Services (HHS) states, “One of the main limitations of VAERS data is that it cannot determine if the vaccine caused the reported adverse event.” The document goes on to state:

  • There have been instances where people have misinterpreted reports of deaths following vaccination as deaths caused by the vaccines; that is not accurate. VAERS accepts all reports of adverse health events following vaccinations without judging whether the vaccine caused the adverse health event. Some reports to VAERS represent true vaccine reactions and others are coincidental adverse health events and not related to vaccination. Overall, a causal relationship cannot be established using information from VAERS reports alone.

With the changes that come to this work every day, it’s difficult to find the time to vet published research. If a headline grabs you and seems questionable, take just a moment to walk through the steps of looking at the publisher, publication status, and author(s) for clues to the reliability of the study and its findings. Then, if a response is needed before you can read the paper, you can at least make a statement such as, “A brief review of the publisher/publication status/author(s) raises concerns about the reliability of the paper’s findings and conclusions. I look forward to reviewing the paper in its entirety and reporting back.”
