There will be 6 parallel Breakout Sessions.
1. Good citation practice versus plagiarism – common ground and open questions
Prof. Dr. Ansgar Ohly, Faculty of Law, LMU Munich
One of the essential rules of good academic practice is the correct citation of authors. But even beyond obvious cases of plagiarism this rule is too often neglected, partly due to a false sense of academic hierarchy, partly because the rules are unclear. Hence, as Robert K. Merton notes, controversies about authorship "have long been frequent, harsh and ugly". In this session we will explore why correct citation is so important, look into the rules of authorship, and discuss the consequences of plagiarism and wrong attribution.
2. Secret impacts on animal experiments
Dr. Eckhart Thein, Coordinator for Laboratory Animal Welfare, LMU Munich
Dr. Thomas Brill, BMC Core Facility Animal Models, LMU Munich
Animals, and thus experiments conducted on animals, are influenced by a variety of extrinsic and intrinsic factors that are not necessarily obvious to the experimenter. Different genotypes may, for example, cause higher or lower susceptibility to vectors/viruses. Insufficient analgesia can lead to shock and release of anti-inflammatory hormones and/or ischemia of organs. Markers of clinical chemistry may vary depending on the means used for euthanasia. Nutrition, light regimens and housing systems, as well as noise, odours and infections, are known to have an impact on the animals and consequently on the results of experiments performed on them. Most of these factors are not visible to the experimenter. Therefore, knowledge of these parameters and their impact on experiments is of the highest relevance. Corresponding data will be presented and discussed in detail.
3. How open science can solve (parts of) the replication crisis
Dr. Felix Schönbrodt, Faculty of Psychology and Educational Sciences, LMU Munich
The replication crisis has hit several scientific fields. The most systematic investigation has been done in psychology, which revealed a replication rate of less than 40% (Open Science Collaboration, 2015). The same problem has been well documented in other disciplines, for example preclinical cancer research and economics.
In the first part, this breakout session introduces recent statistical techniques for judging the evidential value of a set of studies. These include the detection of publication bias and p-hacking, and attempts to recover the true effect size by applying corrections to the (biased) published literature. In the second part, I argue that Open Science and research transparency are essential building blocks for overcoming the current credibility crisis in science, with consequences for research practices, journal policies, funding decisions, and hiring and tenure committees.
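The effect-size distortion this session addresses can be illustrated with a minimal simulation (my own sketch, not material from the session, with all parameters chosen arbitrarily): if only studies that cross a significance threshold get published, the published literature systematically overestimates the true effect.

```python
import random
import statistics

# Hypothetical simulation: many small two-group studies measure a true
# effect of 0.2 standard deviations, but only "significant" results
# (observed effect exceeding ~1.96 standard errors) get published.
random.seed(42)
TRUE_EFFECT = 0.2
N_PER_GROUP = 30
SE = (2 / N_PER_GROUP) ** 0.5  # SE of a two-group mean difference (SD = 1)

all_estimates, published = [], []
for _ in range(10_000):
    observed = random.gauss(TRUE_EFFECT, SE)  # one study's effect estimate
    all_estimates.append(observed)
    if observed / SE > 1.96:                  # publication filter
        published.append(observed)

print(f"mean of all studies:       {statistics.mean(all_estimates):.2f}")
print(f"mean of published studies: {statistics.mean(published):.2f}")
```

The mean over all simulated studies recovers the true effect (about 0.2), while the mean over "published" studies is roughly three times larger, which is why bias-correction techniques of the kind mentioned above are needed.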
4. (Ir)reproducible research. The role of experimentation, statistics and result interpretation
Dr. Tobias Straub, BMC Bioinformatics Core Facility, LMU Munich
Concerns are rising that many published results in basic life science research are not reproducible. Among the many factors contributing to this shortcoming, low standards of statistical treatment and inappropriate experimental design are frequently pointed out. In addition, there are common misconceptions among scientists as to how the results of statistical tests should be interpreted. The numerical significance of effects - the p-value - has gained unreasonable importance, while the magnitude and biological relevance of effects are increasingly neglected. We also tend to believe that a low p-value predicts reproducibility, while reality proves us wrong.
Competence in experimental design and statistical thinking is crucial for exiting a vicious circle that is starting to undermine our own as well as society's confidence in research results. In this breakout session we will try to de-mystify p-values and find better parameters for assessing the reproducibility of research findings. Based on real-world examples we will discuss good and bad ways of generating and reporting results. This should facilitate the adjustment of our research practices with the specific aim of ensuring sustainable research.
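Why a low p-value is a poor predictor of reproducibility can be seen in a short simulation (an illustrative sketch of my own; the effect size and sample size are assumed values, not figures from the session): with a modest true effect and a typical sample size, an exact replication of a significant result succeeds only about as often as the design's statistical power.

```python
import random

# Hypothetical design: true standardized effect d = 0.4, n = 30 per group.
random.seed(1)
TRUE_D, N = 0.4, 30
SE = (2 / N) ** 0.5   # SE of the standardized mean difference (SD = 1)
CRIT = 1.96 * SE      # two-sided .05 significance threshold

def one_study():
    """Simulate one two-group study; return True if it reaches p < .05."""
    return abs(random.gauss(TRUE_D, SE)) > CRIT

# The power of the design is the long-run fraction of significant studies.
power = sum(one_study() for _ in range(20_000)) / 20_000

# An exact replication is an independent draw from the same design, so the
# expected replication rate of a significant original is that same power -
# regardless of how small the original p-value looked.
rep_rate = sum(one_study() for _ in range(20_000)) / 20_000

print(f"power: {power:.2f}, expected replication rate: {rep_rate:.2f}")
```

Both numbers come out near one third, uncomfortably close to the replication rates reported empirically, even though every "original" here crossed the conventional significance threshold.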
5. Reproducibility in clinical trials
Prof. Dr. Marcus Munafò, School of Experimental Psychology, University of Bristol
Open science - the process of making study materials, data and publications freely available - offers a number of potential benefits. These include acting as a quality control measure at every stage of the research pipeline. This session will discuss various process-based approaches to ensuring that research is conducted to a high standard, so that the resulting findings are more robust. These include pre-registration of study protocols, making data and study materials open, and fostering a culture of openness and constructive critique.
6. Safeguarding scientific image integrity
– balancing on a fine line between proper editing and data alteration –
Dr. Jan Brocher, Biovoxxel
Scientific image integrity is a topic that has received more attention in recent years as integrity breaches are detected more frequently. This breakout session will discuss real-life examples of image editing and processing techniques that may compromise image integrity. Selected tools that can be applied to identify manipulations will be highlighted, including their advantages and limitations.
In addition, the potential reasons for breaches of scientific image integrity and their impact on research will be discussed. Since a lack of knowledge about proper image editing techniques and of clear guidelines on figure preparation plays a major role in unintended image manipulation, specific guidance and proper image editing methods will be presented.
Apart from preventing image manipulation through education, it is important to follow an ethical framework for monitoring and reporting scientific misconduct. For instance, broad application of image screening during the manuscript review process would be key to reducing the need for retractions of published studies and their negative impact on research, as well as to avoiding public allegations that result in “witch hunts” and illegitimately damage scientists’ reputations.
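One simple signal such screening looks for is exact duplication of pixel regions, which can indicate copy-pasted image content. The following toy sketch (my own illustration, not a tool presented in the session) treats a grayscale image as a list of rows and flags identical 2x2 blocks occurring at more than one position.

```python
from collections import defaultdict

def find_duplicate_blocks(image, size=2):
    """Map each size x size pixel block to the positions where it occurs,
    keeping only blocks found at more than one location."""
    seen = defaultdict(list)
    rows, cols = len(image), len(image[0])
    for r in range(rows - size + 1):
        for c in range(cols - size + 1):
            block = tuple(tuple(image[r + i][c:c + size]) for i in range(size))
            seen[block].append((r, c))
    return {b: pos for b, pos in seen.items() if len(pos) > 1}

# A tiny 4x6 test image where the block at (0, 0) was "cloned" to (2, 4).
img = [
    [9, 9, 1, 2, 3, 4],
    [9, 9, 5, 6, 7, 8],
    [10, 11, 12, 13, 9, 9],
    [14, 15, 16, 17, 9, 9],
]
dupes = find_duplicate_blocks(img)
print(dupes)  # the cloned 2x2 block of 9s, found at (0, 0) and (2, 4)
```

Real screening tools are far more sophisticated (they must cope with rotation, rescaling and compression noise), but the principle of comparing image regions for implausible similarity is the same.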
The session will close with questions and an open discussion of the presented issues.