Promoting Research Integrity: Historical Background & Current Trends




Promoting Research Integrity Historical Background & Current Trends

  • Workshop on Best Practices for Ensuring Scientific Integrity & Preventing Misconduct

  • 22-23 February, 2007

  • Mita Conference Hall, Tokyo, Japan


Historical Perspective

  • Research misconduct is not new

    • Galileo fabricated & falsified data
    • Piltdown hoax and other scandals
    • Some things do not change over time
  • Major changes between 1600 and 2000



Major growth since 1950

  • Financial support

    • 1930s: under 1% of GDP
    • 1950: ca. 1% of GDP
    • Currently: 2-3% of GDP


Some doubts along with the growth

  • Science linked with unpopular events and problems:

    • Cold War links science with “military-industrial complex”
      • Doomsday Clock, Bulletin of the Atomic Scientists (1947)
      • War in Vietnam (1960s)
      • Environmental impact of nuclear power
    • Concerns about human & animal experimentation
      • US, Tuskegee experiments
      • Declaration of Helsinki (1964)
    • Global energy crisis (1970s)
    • New worldviews compete with scientific worldview
  • By 1980 (the year of the first major US misconduct cases), the public was taking a closer look at the way research is conducted



Response to research misconduct, 1980 ff.

  • Events have been driven by major cases & media

    • Story breaks in the news
    • Local institution responds
    • Pressure for an official/government response
  • Official response

    • Gather information ~ committees, hearings, reports
    • Try to resolve the immediate problem ~ the major cases
    • Develop policies and procedures to avoid similar problems in the future
  • Timing

    • US begins in early 1980s, policy development still in process
    • Europe, early 1990s, policy development in process
    • Asia, late 1990s, policy development in process


The misconduct-centered universe

  • First priority ~ major cases

    • Define misconduct
    • Assign authority
    • Develop procedures for investigation
  • Definitions focus on deliberately reporting false data & information

    • Careful to not confuse with scientific disagreement
    • Not the same as waste and sloppiness
  • Policies protect researchers from improper charges

  • Working assumption: pursuing individual cases of misconduct is the best way to protect the integrity of publicly supported research



Misconduct-based universe rested on 5 assumptions

  • Serious misconduct in research is rare

  • Self-regulation keeps improper behavior in check

  • Research misconduct is difficult to detect

  • Research misconduct cannot be prevented

  • Apart from misconduct, standards for integrity in research are high

  • Assumptions were based on common perceptions, not empirical evidence

  • All five can be questioned!



Research misconduct is rare?

  • Martinson, Nature (June 2005)

    • Goal: factors that influence research behavior
    • Method:
      • Developed peer-based list of major offenses
      • Survey to 6,000+ researchers (3,000+ response)
      • Major question: “have you done … in last three years?”
  • Results

    • Major offenses, ca. 0.3%
    • Questionable Research Practices (QRP) ca. 5-15% or higher


Data from other recent studies

  • JM Ranstam, Control Clin Trials (2000)

    • Survey, 442 biostatisticians, 37% response
    • 51% knew about fraud in medical research
      • 26% involved FF (fabrication/falsification)
      • 31% directly involved in projects with misconduct
    • Estimates of rate: 0.69%-0.80% (0.25% standard)
  • Geggie, J Med Ethics (2001)

    • Survey, 305 new medical consultants, 64% response
      • 55.7% observed misconduct (FF lower)
      • 5.7% committed misconduct in the past
      • 18% would commit in future
      • 17% had research ethics training


Studies continued

  • Gardner, Contemporary Clinical Trials (2005)

    • Authors of pharmaceutical clinical trials (64% response)
    • 1% reported target article misrepresented the research
    • 5% reported fabrication in a study they had participated in over the last 10 years
    • 17% knew personally of fabrication in a study over the last 10 years
  • Rossner, Journal of Cell Biology

    • 11 of 1,100 papers had seriously improper digital image manipulation


Realistic estimates:

  • Rough approximation (see the sketch after this list):

    • Evidence ~ 1/1,000+
    • Assume ~ 1/10,000
  • Cases predicted

    • US ~ 1,500
    • EU ~ 1,000
    • Japan ~ 600
    • Other OECD ~ 400
  • Cases seen

    • US ~ 20/year
    • EU ~ 10/year
  • Lesson #1 ~ policy makers have always underestimated the amount of misconduct in research
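
The arithmetic behind this gap can be sketched in a few lines. The following is a minimal illustration, not slide data: the workforce figure is a hypothetical placeholder, and only the assumed rate range (1/10,000 to 1/1,000) and the ~20 confirmed US cases per year echo the numbers above.

```python
# Back-of-envelope sketch: an assumed misconduct rate, multiplied by the
# size of the research workforce, predicts far more cases per year than
# the handful that are officially confirmed.
# The workforce figure is a hypothetical placeholder, not slide data.

def predicted_cases(workforce: int, annual_rate: float) -> float:
    """Expected number of misconduct cases per year under a given rate."""
    return workforce * annual_rate

workforce = 1_000_000                        # hypothetical: active researchers
rate_low, rate_high = 1 / 10_000, 1 / 1_000  # assumed range of misconduct rates
confirmed_per_year = 20                      # order of magnitude confirmed in the US

low = predicted_cases(workforce, rate_low)
high = predicted_cases(workforce, rate_high)
print(f"Predicted cases/year: {low:.0f} to {high:.0f}")
print(f"Officially confirmed: ~{confirmed_per_year}/year")
```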



Other assumptions about misconduct?

  • Self-regulation keeps improper behavior in check?

    • Researchers do not report suspected misconduct (20-40%)
    • Journals often do not report misconduct
  • Research misconduct is difficult to detect?

    • Hwang could not have completed work in the time reported
    • Sudbø, trial not started, patients did not exist, data repeated
  • Research misconduct cannot be prevented?

    • Schön’s co-author/mentor did not check experiments or data
    • Poehlman’s MD co-author did not oversee clinical results
  • Lesson #2: Policy makers did not understand the strengths & weaknesses of self-regulation in research

    • If they did understand, they did not report it honestly and accurately


Integrity in research otherwise high?

  • Martinson study, self-reported misbehaviors:

    • 15.5% Changing the design, methodology or results of a study in response to pressure from a funding source
    • 12.5% Overlooking others' use of flawed data or questionable interpretation
    • 7.6% Circumventing certain minor aspects of human-subject requirements
    • 6.0% Failing to present data that contradict one’s own previous research
    • 1.7% Unauthorized use of confidential information
    • 1.4% Using another’s ideas without obtaining permission or giving due credit
    • 1.4% Relationships with students, research subjects or clients that may be interpreted as questionable
    • 0.3% Not properly disclosing involvement in firms whose products are based on one’s own research
    • 0.3% Ignoring major aspects of human-subject requirements
    • 0.3% Falsifying or ‘cooking’ research data


Al-Marzouki, Contemp Clin Trials 26 (2005)

  • Practices felt likely to occur and adversely impact research (a simulation sketch follows this list)

    • 83% Over-interpretation of “significant” findings in small trials
    • 80% Selective reporting based on p-values
    • 76% Selective reporting of outcomes in the abstract
    • 75% Subgroup analyses done without interaction tests
    • 68% Negative or detrimental studies not published
    • 68% Putting undue stress on results from subgroup analysis
    • 64% Inappropriate subgroup analyses
    • 64% Selective reporting of (i) subgroups (ii) outcomes (iii) time points
    • 60% Selective reporting of positive results/omission of adverse events data
    • 60% Failure to report results or long delay in reporting
    • 59% Post-hoc analysis not admitted
    • 56% Giving incomplete information about analyses with non-significant results
    • 54% Analysis conducted by the sponsor of the trial
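
The statistical mechanism behind several of these items (uncorrected subgroup analyses, selective reporting of “significant” results) is easy to demonstrate. The simulation below is an illustrative sketch, not part of the survey: it generates trials with no real treatment effect, runs 20 subgroup comparisons per trial, and counts how often at least one “significant” subgroup appears by chance alone (roughly two trials out of three at p < 0.05).

```python
# Simulation sketch: selective reporting of subgroup analyses on pure noise.
# With many uncorrected subgroup tests, a "significant" p-value turns up by
# chance alone, so reporting only that subgroup misleads readers.
import math
import random
import statistics

random.seed(0)

def two_sample_p(a, b):
    """Approximate two-sided p-value from a z-test on the difference in means."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

n_trials, n_subgroups, n_per_arm = 1_000, 20, 30
trials_with_a_hit = 0
for _ in range(n_trials):
    p_values = []
    for _ in range(n_subgroups):              # 20 subgroup analyses per trial
        treatment = [random.gauss(0, 1) for _ in range(n_per_arm)]
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        p_values.append(two_sample_p(treatment, control))
    if min(p_values) < 0.05:                  # at least one "significant" subgroup
        trials_with_a_hit += 1

print(f"Null trials with a reportable subgroup: {100 * trials_with_a_hit / n_trials:.0f}%")
```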


How do researchers behave?

  • Lesson #3. Significant gap between ideal (high standards) and actual standards for integrity in research



How should researchers behave?



Slow change to an integrity-centered universe

  • More emphasis on prevention and improving integrity

  • First major change ~ improve education/training

  • US, efforts to require training/education

    • 1989 Institute of Medicine Report called for training
    • 1990/92, National Institutes of Health required for trainees
    • 1997, National Science Foundation required for trainees
    • 2000, Public Health Service proposed general requirement
      • Strong objections raised by research community
      • Requirement has been suspended
  • Global initiatives

    • Finland, national requirement for graduate students
    • Elsewhere, growing number of courses and resources


Training covers more than misconduct

  • Areas developed over time:

    • 1994 Training Grant Requirement
      • Conflict of Interest
      • Responsible authorship
      • Policies for handling misconduct
      • Data management
      • Human & animal subjects
    • 2000 HHS RCR requirement
      • 9 areas
  • Other areas could be added



Current status of “RCR” training

  • Abundant resources for teaching

    • Textbooks
    • Web pages
    • Train-the-trainer programs
  • No standards for content or approach

    • Coverage depends on the instructor
    • Training not integrated with research
    • Minimal testing or follow-up
    • Little assessment of effectiveness
  • Major challenge:

    • Research is global; research teams and laboratories are international
    • RCR training is local, inconsistent, and for the most part inadequate


Second change ~ Better understanding of behavior

  • Misconduct-based universe ~ cause of misconduct?

    • Too few cases/too much variation to draw “scientific” conclusions
    • Suggested some areas for further attention:
      • Quality of mentoring, supervision, peer review
  • Integrity-centered universe ~ many issues to study

    • Social processes
      • Authorship - who is listed and why?
      • Peer review - weaknesses and how to correct?
      • Data - how do researchers collect and record data?
    • Institutional role and influences
      • Policies - how institutions develop and promote policies
      • Good management - how administrators and committees work
      • Conflict of interest - how institutions manage their own conflicts
  • Much better prepared to implement effective policies



Four crucial challenges

  • “Misconduct” policies can have many objectives:

    • Establish procedures for responding to misconduct in research
    • Detect and eliminate/correct fraudulent information
    • Protect public from consequences of flawed research findings
    • Maintain/restore public confidence in research
  • US Policy (OSTP, 2000) has three general objectives

    • Protect “reliability of the research record”
    • Maintain public “confidence in the research record”
    • Achieve policy unity
  • Reason for objectives

    • Clear understanding of the goals to be achieved
    • Standard for measuring success and improving policy


Fair, effective procedures for responding

  • Establishing national policies is a starting point

    • Comprehensive definitions that cover all serious misbehavior
    • Assure reporting and accountability
    • Protect informants and insulate from bias and conflicts of interest
  • Global harmonization and communication are next step

    • Research is no longer local or national
    • Laws of nature and scientific methods are not local
    • Standards for reporting, investigating, and judging misconduct should be global
      • Some accommodation for differences in law and government as long as good research practices are not compromised
  • The globalization of research requires the globalization of policies and best research practices



Take steps to make policies more effective

  • Detect and eliminate/correct fraudulent information?

    • Investigating 1 of every 100 cases has little impact on the reliability of the research record
    • Some FFP (fabrication, falsification, plagiarism) is trivial
    • Current policies have minimal impact on the research record
  • Steps that would improve detection and correction under the current system

    • Improved education, emphasizing professional responsibility
    • Clearer rules for data management, mentoring, and peer review
    • Random audits of publications and supporting data
    • Institutional climate surveys to assess reliability of self-regulation
    • Extend responsibility for reporting to journals
  • Investigating major cases is essential but has little impact on the overall reliability of the research record



Take a serious look at QRPs

  • Viewed from a public perspective, questionable practices are more significant than research misconduct

    • Occur more frequently ~ 10x or more
    • Have serious impacts
      • Poor literature reviews have led to harm to research subjects
      • Biased reporting/duplicate publication impact health-care decisions
      • Ineffective public decisions due to improper statistical analysis
  • How would you explain the following to the public?

    • 25% of researchers reported recording results in loose-leaf notebooks
    • 40% of abstracts misrepresent findings reported in the article
    • Funding makes researchers 3-5 times more likely to report results favorable to the funding source
  • Protecting the integrity of the research record requires more than simply responding to cases of misconduct





